\section{Introduction}\label{sec:intro}
The recent work~\cite{Wa13a} showed how the equations of
ideal, compressible magnetohydrodynamics may be elegantly formulated
in terms of Lie derivatives, building on the work of Helmholtz, Walen
and Arnold. For example,
the equation of magnetic induction in a compressible flow may
be formulated in terms of
a Lie derivative of a vector by introducing the field~$\tilde{\bf B}$
defined as the magnetic field~${\bf B}$ divided by the mass density,
\begin{equation}\label{eq:induc}
\frac{\partial\tilde{\bf B}}{\partial t}=\mathcal{L}_{\bf u}(\tilde{\bf B})
\end{equation}
where $\mathcal{L}_{\bf u}$ is the Lie derivative with respect to the
flow field~${\bf u}$, $\tilde{\bf B}={\bf B}/\rho$ and $\rho$ is mass density.
The dynamical, potential vorticity equation
may also be put into the Lie derivative form~\cite{Wa13a}
\begin{equation}\label{eq:fullpv}
\frac{\partial \tilde{\boldsymbol{\omega}}}{\partial t}=\mathcal{R}+{\mathcal{L}}_{\bf u}(\tilde{\boldsymbol{\omega}})-{\mathcal{L}}_{\tilde{\bf B}}(\tilde{\bf J})
\end{equation}
where the potential vorticity $\tilde{\boldsymbol{\omega}}=\nabla\times{\bf u}/\rho$
and the potential current $\tilde{\bf J}=\nabla\times{\bf B}/\rho$. The
term~$\mathcal{R}$ vanishes either upon making the barotropic assumption
that the pressure is a function of density alone, $p=p(\rho)$, or sometimes in the isentropic approximation. The
system of equations is completed by the mass conservation relation
\begin{equation}\label{eq:mcons}
\frac{\partial\rho}{\partial t}+\nabla\cdot (\rho {\bf u})=0
\end{equation}
This work expands on and extends the results of ref~\cite{Wa13a}.
Much of it concerns further applications of those results
that rely on the peculiar properties of the Lie derivative and so
are mostly geometrical in nature. After \Sec{math} devoted to the underlying
mathematics, there are two sections on applications. The first~\Sec{appl1}
discusses the relationship between coordinate bases and steady solutions of the ideal MHD
equations. The second section of applications~\Sec{appl2} uses
the coordinate-invariant property of the Lie derivative to investigate
how new solutions may be generated by use of coordinate mappings.
\Sec{addphys} considers how additional physical processes
such as diffusion may be included in the MHD model and examines their effects
in analogous fashion.
\Sec{tdep} explores time-dependent solutions, and \Sec{summary} provides a brief
summary and discussion of the value of the more abstract mathematical approach
to MHD problems.
Although the main emphasis is on results for compressible MHD, there is
inevitably some overlap with other work on incompressible MHD. Most
of the previously published work on the subject is more directly constructive, concerned
typically with calculating a steady equilibrium corresponding to a specified
pressure distribution. See the books~\cite{dhaeseleer,schindler} which
are biased towards applications in
laboratory plasmas and space plasmas respectively. Of the more abstract,
geometrical and topological analysis of MHD, the
book by Arnold and Khesin~\cite{arnoldkhesin} cites a comprehensive selection
of works prior to its publication, although
the paper of Woolley~\cite{Wo90math} is a notable exception, cf.\ the work of \Sec{appl1}.
More recently Bogoyavlenskij~\cite{Bo01Infi,Bo02Symm} and Cheviakov~\cite{Ch05Cons}, see also the textbook~\cite[\S\,5.3.6]{blumancheviakovsanco},
have studied the capabilities of transformations to generate new equilibria
from `old', cf.\ the work of \Sec{appl2}.
\section{Mathematics}\label{sec:math}
\subsection{Coordinate Bases}\label{sec:coord}
A set of three vectors~${\bf e}_i,\,\,i=1,2,3$, forms a basis in 3-D provided the vectors
are linearly independent at each point. The vectors are said to form a
coordinate basis if each may be parameterised by~$x^i$ such that the
$x^i$ may be used as a set of coordinates. When this is not the case,
the~${\bf e}_i$ are referred to as a `frame'~\cite[\S\,4.5]{fecko}.
A set of coordinate vectors may be produced by
a mapping~${\bf{x}}(\bar{x}^1,\bar{x}^2,\bar{x}^3)$ to
the usual Cartesian coordinate system~${\bf x}=(x^1,x^2,x^3)=(x,y,z)$
from curvilinear coordinates~${\bf \bar{x}}=(\bar{x}^1,\bar{x}^2,\bar{x}^3)$, viz.
\begin{equation}\label{eq:coordv}
{\bf e}_i = \partial {\bf x} / \partial \bar{x}^i
\end{equation}
In the context of numerical grid generation, the~$\bar{x}^i$ are the
equivalent of the reference or computational coordinates, as for example they might be
the nodes of a regular cuboidal mesh, whereas the corresponding~${\bf{x}}$
might be arranged to sample the interior of an irregularly shaped cavity as uniformly as possible.
Normally such a mapping is required to be non-degenerate, except possibly
at a small number of isolated singular points such as exemplified by the
origin in polar coordinates. Hence, almost everywhere,
it constitutes a diffeomorphism, ie.\ the function ${\bf{x}}(\bar{x}^1,\bar{x}^2,\bar{x}^3)$
is differentiable as a function of its arguments, and invertible, ie.\ to
each~${\bf{x}}$ there corresponds a unique~${\bf \bar{x}}$.
The generation of such maps is a standard procedure in numerical grid
generation~\cite{gridgenhbook}.
Diffeomorphisms may also be conceived of as generated by
the flow-field of a smooth, non-vanishing vector~${\bf u}({\bf x},t)$.
Now in any reasonable 3-D coordinate system, as explained in the next \Sec{Lie},
there is the
remarkable result that the Lie derivative of a vector may be written
\begin{equation}\label{eq:lieder}
{\mathcal{L}}_{\bf v}({\bf w})^i= w^j\frac{\partial v^i}{\partial x^j}-v^j\frac{\partial w^i}{\partial x^j}
\end{equation}
where here and throughout, the Einstein summation convention will be used. Further,
superfixes will always indicate vector components, not exponents.
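The formula~\Eq{lieder} is readily explored symbolically. The following Python (sympy)
sketch, which forms no part of the derivation and whose function name~\texttt{lie} is
purely illustrative, evaluates \Eq{lieder} for a rigid rotation and a dilation, two
flows which commute:
\begin{verbatim}
# Symbolic check of Eq. (lieder); sketch only, the name 'lie' is illustrative.
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)

def lie(v, w):
    # paper's sign convention: L_v(w)^i = w^j d_j v^i - v^j d_j w^i
    return tuple(sum(w[j]*sp.diff(v[i], X[j]) - v[j]*sp.diff(w[i], X[j])
                     for j in range(3)) for i in range(3))

v = (y, -x, sp.Integer(0))   # rigid rotation about the z-axis
w = (x, y, z)                # radial dilation
print(lie(v, w))             # (0, 0, 0): the two flows commute
\end{verbatim}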
It is therefore helpful to introduce the Lie bracket notation
for the Lie derivative
\begin{equation}
\mathcal{L}_{\bf v}({\bf w})=[{\bf v},{\bf w}]
\end{equation}
so that the two vectors~${\bf v}$, ${\bf w}$ appear on an equal
footing. Adopting this notation~\cite[\S\,4.5]{fecko}, the condition that
the~${\bf e}_i$ form a coordinate basis may be expressed as
\begin{equation}\label{eq:coordb}
[{\bf e}_i,{\bf e}_j]={\bf 0},\;\;\; \forall i,\;j
\end{equation}
Conditionally, the converse also applies, ie.\ if \Eq{coordb} is satisfied
then the vectors~${\bf e}_i$ form a coordinate basis.
The conditions are that the~${\bf e}_i$ are everywhere non-vanishing and
linearly independent vectors and the result also requires the applicability
of the Poincare lemma~\cite[\S\,9.2]{fecko}. The lemma determines when an
irrotational vector field may be represented as the gradient of a scalar.
The lemma requires the domain of interest (or manifold) to be topologically
simple, and notably excludes a toroidal geometry with the conventional
assignment of poloidal and toroidal angles to the range~$[0,2\pi]$.
An illustration from magnetic confinement
physics in a torus involves an electric field of constant amplitude
applied in the toroidal direction which, according to Faraday's Law,
becomes irrotational in the limit where
the magnetic flux change vanishes. Now if the torus is
imagined to be of very large major radius, so that it is effectively straight,
directed in the $z$-direction say,
then the electric potential $\Phi_E \propto z$. However the corresponding expression
in the torus would have to be $\Phi_E \propto \phi$, where $\phi$ is the toroidal angle about
the major axis,
hence if a multi-valued potential is to be avoided by confining $\phi$~to~$[0,2\pi]$, special treatment is required at
$\phi=0=2\pi$. Operationally, as in this example, the Poincare lemma can
often be `forced to apply' by suitable use of boundary conditions, but it must be
remembered that there is an underlying topological constraint.
Here, the inapplicability of the Poincare lemma is unfortunate since it conflicts
with the requirement that the vectors
be non-zero everywhere in order to form part of a basis. In this context,
the Poincare-Hopf theorem is relevant. The theorem
relates the number of zeroes of a vector
field to the topology of the compact manifold on which it is
defined (and gives the `hairy-ball' theorem in the case of spherical surfaces).
It follows that the only compact coordinate systems
are to be found in a toroidal geometry. The only way therefore to
produce a coordinate basis for a compact geometry is to let the angular coordinates
range freely over a torus.
\subsection{Lie and Other Derivatives}\label{sec:Lie}
Since the derivation of the coordinate invariance of the Lie derivative~\Eq{lieder}
helps understanding of the scope of the mapping approach, in particular what is
meant by `reasonable' in the previous \Sec{coord}, it will be given here.
First, suppose that an arbitrary vector~${\bf v}$ has Cartesian components $v^j$
and curvilinear components~$\bar{v}^j$. If ${\bf i}_j$ are the vectors of the Cartesian
orthonormal basis, often written~$\{{\bf \hat{x}},{\bf \hat{y}},{\bf \hat{z}}\}$, then since a vector
is the same regardless of the coordinate system employed
\begin{equation}\label{eq:vector}
{\bf v}=v^j {\bf i}_j=\bar{v}^j {\bf e}_j
\end{equation}
or, on taking Cartesian components
\begin{equation}\label{eq:vecdef}
v^j = \frac {\partial x^j} {\partial \bar{x}^k} \bar{v}^k
\end{equation}
\Eq{vecdef} is often used to define a vector~\cite{lovelockrund}, viz.\ as a set of
quantities which transforms between coordinate systems following the above rule. Note that, unlike
some texts, ref~\cite{lovelockrund} has no constraint that the mapping be orthogonal, so that it does not
have to be a rigid-body rotation or translation. Since it is not obvious that \Eq{lieder}
defines a vector in a general curvilinear system, an important role of the following
derivation is to establish
that ${\mathcal{L}}_{\bf v}({\bf w})^i$ transforms as \Eq{vecdef}.
Using \Eq{vecdef} to express ${\bf v}$ and~${\bf w}$ in component form, \Eq{lieder} yields
\begin{equation}\label{eq:lieder2}
{\mathcal{L}}_{\bf v}({\bf w})^i=
\frac{\partial x^j}{\partial \bar{x}^k}\bar{w}^k\frac {\partial} {\partial x^j} \left(\frac{\partial x^i}{\partial \bar{x}^l} \bar{v}^l \right)-
\frac{\partial x^j}{\partial \bar{x}^k}\bar{v}^k\frac {\partial} {\partial x^j} \left(\frac{\partial x^i}{\partial \bar{x}^l} \bar{w}^l \right)
\end{equation}
which using
\begin{equation}\label{eq:partials}
\frac{\partial }{\partial x^j}=\frac{\partial \bar{x}^m}{\partial x^j}\frac{\partial }{\partial \bar{x}^m}
\end{equation}
and expanding the vector derivatives, gives for the first term on the right-hand side
\begin{equation}\label{eq:lieder3}
\bar{w}^k \frac{\partial \bar{v}^l}{\partial \bar{x}^m}
\left(\frac{\partial x^j}{\partial \bar{x}^k}\frac{\partial x^i}{\partial \bar{x}^l} \frac{\partial \bar{x}^m}{\partial x^j}\right)
+\bar{w}^k \bar{v}^l
\left(\frac{\partial x^j}{\partial \bar{x}^k} \frac{\partial \bar{x}^m}{\partial x^j} \frac{\partial^2 x^i}{\partial \bar{x}^m \partial \bar{x}^l} \right)
\end{equation}
Now the rules of partial differentiation imply that
\begin{equation}\label{eq:kronecker}
\delta^k_m = \frac{\partial x^j}{\partial \bar{x}^m} \frac{\partial \bar{x}^k}{\partial x^j}
\end{equation}
where the Kronecker delta symbol~$\delta^k_m=1$ if $k=m$ and is zero otherwise.
Hence \Eq{lieder3} simplifies dramatically, and noting that the second term on the right-hand side is the
same apart from interchange of the indices~$k$ and~$l$, it follows that
\begin{equation}\label{eq:lieder4}
{\mathcal{L}}_{\bf v}({\bf w})^i=
\bar{w}^k \frac{\partial \bar{v}^l}{\partial \bar{x}^k} \cdot \frac{\partial x^i}{\partial \bar{x}^l}
+\bar{w}^k \bar{v}^l \frac{\partial^2 x^i}{\partial \bar{x}^k \partial \bar{x}^l}
-\bar{v}^k \frac{\partial \bar{w}^l}{\partial \bar{x}^k} \cdot \frac{\partial x^i}{\partial \bar{x}^l}
-\bar{v}^k \bar{w}^l \frac{\partial^2 x^i}{\partial \bar{x}^k \partial \bar{x}^l}
\end{equation}
The terms in the second partial derivatives cancel, so that
\begin{equation}\label{eq:lieder5}
{\mathcal{L}}_{\bf v}({\bf w})^i=
\left(\bar{w}^k \frac{\partial \bar{v}^l}{\partial \bar{x}^k}
-\bar{v}^k \frac{\partial \bar{w}^l}{\partial \bar{x}^k} \right) \cdot \frac{\partial x^i}{\partial \bar{x}^l}
\end{equation}
which establishes both that the Lie derivative is a vector under coordinate transformation and
that it has a coordinate invariant expression.
To underline just how remarkable a result this is, consider the expression for the divergence of~${\bf v}$.
Differentiating \Eq{vecdef} with respect to~$x^i$,
\begin{equation}\label{eq:divij1}
\frac{\partial v^j}{\partial x^i}=\frac {\partial\bar{v}^k} {\partial x^i} \frac{\partial x^j}{\partial \bar{x}^k} +
\frac {\partial} {\partial x^i} \left(\frac{\partial x^j}{\partial \bar{x}^k}\right)\bar{v}^k
\end{equation}
which using \Eq{partials} gives
\begin{equation}\label{eq:divij2}
\frac{\partial v^j}{\partial x^i}=
\frac{\partial \bar{v}^k}{\partial \bar{x}^l} \frac{\partial x^j}{\partial \bar{x}^k}\frac{\partial \bar{x}^l}{\partial x^i}+
\frac{\partial \bar{x}^l}{\partial x^i} \frac{\partial^2 x^j}{\partial \bar{x}^k \partial \bar{x}^l} \bar{v}^k
\end{equation}
Setting $i=j$ and summing (contracting indices $i$ and~$j$), and using \Eq{kronecker}, gives
\begin{equation}\label{eq:div}
\nabla\cdot{\bf v}=
\frac{\partial v^j}{\partial x^j}=\frac {\partial\bar{v}^k} {\partial \bar{x}^k}+
\frac{\partial \bar{x}^l}{\partial x^j} \frac{\partial^2 x^j}{\partial \bar{x}^k \partial \bar{x}^l} \bar{v}^k
\end{equation}
Introducing the Jacobian of the transformation between the two coordinate systems as~$\sqrt{g}$,
defined as the determinant
\begin{equation}\label{eq:jac}
\sqrt{g}= \frac{\partial(x^1,x^2,x^3)}
{\partial(\bar{x}^1,\bar{x}^2,\bar{x}^3)}
={\bf e}_1.{\bf e}_2\times{\bf e}_3
\end{equation}
it may be shown~\cite[\S\,4.1]{lovelockrund}, introducing cofactors and
using elementary calculus, that
\begin{equation}\label{eq:jacd}
\frac{\partial \sqrt{g}}{\partial \bar{x}^i}= \sqrt{g}
\frac{\partial \bar{x}^l}{\partial x^j} \frac{\partial^2 x^j}{\partial \bar{x}^i \partial \bar{x}^l}
\end{equation}
Hence \Eq{div} may be written
\begin{equation}\label{eq:div2}
\frac{\partial v^j}{\partial x^j}=\frac {\partial\bar{v}^k} {\partial \bar{x}^k}+
\frac{1}{\sqrt{g}} \frac{\partial \sqrt{g}}{\partial \bar{x}^k} \bar{v}^k
\end{equation}
often rewritten as
\begin{equation}\label{eq:divg}
\frac{\partial v^j}{\partial x^j}=
\frac{1}{\sqrt{g}} \frac{\partial (\sqrt{g} \bar{v}^k) }{\partial \bar{x}^k}
\end{equation}
It is worth noting that the above formulae for Lie derivative and divergence actually
apply in any number of dimensions.
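As an independent check of \Eq{divg}, the following sympy sketch (illustrative only;
the sample mapping and vector field are arbitrary) compares the Cartesian divergence
with the curvilinear form:
\begin{verbatim}
# Check Eq. (divg): Cartesian divergence = (1/sqrt(g)) d_k (sqrt(g) vbar^k).
# sympy sketch; the mapping and the vector field below are arbitrary samples.
import sympy as sp

xb, yb, zb = sp.symbols('xbar ybar zbar')
Xb = sp.Matrix([xb, yb, zb])
Xc = sp.Matrix([xb, yb + xb**2, zb*sp.exp(xb)])     # sample mapping x(xbar)

J = Xc.jacobian(Xb)                  # columns are the basis vectors e_i
Jinv = J.inv()
sqrtg = J.det()

v = sp.Matrix([Xc[0]*Xc[1], Xc[2], Xc[0] + Xc[2]])  # sample Cartesian field
vbar = Jinv*v                        # contravariant barred components

div_cart = sum(Jinv[m, j]*sp.diff(v[j], Xb[m])
               for j in range(3) for m in range(3))
div_curv = sum(sp.diff(sqrtg*vbar[k], Xb[k]) for k in range(3))/sqrtg
print(sp.simplify(div_cart - div_curv))             # 0
\end{verbatim}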
Other analysis establishes the formula for the curl
operator in curvilinear geometry.
\begin{equation}\label{eq:curlv}
\{\nabla \times {\bf v}\}^i=\frac{1}{\sqrt{g}} e^{ijk} \frac{\partial({g_{kl}\bar{v}^l})}{\partial \bar{x}^j}
\end{equation}
where the metric tensor~$g_{ij}$ is described in the next \Sec{mettensor} and $e^{ijk}=e_{ijk}$
is the alternating symbol, taking values $1$, $-1$ or~$0$,
depending on whether $(ijk)$ is an even, odd or non-permutation of $(123)$.
The vanishing of the Lie bracket of two basis vectors is almost immediate from their definition,
for suppose that a vector function~$x^j(\bar{x}^i)$ is used to generate two vector fields
\begin{equation}\label{eq:vecf12}
{\bf u} = \partial {\bf x} / \partial \bar{x}^i,\;\;\;{\bf v} = \partial {\bf x} / \partial \bar{x}^j
\end{equation}
then by \Eq{kronecker}, ${\bf u}.\nabla=\partial/\partial \bar{x}^i$, ${\bf v}.\nabla=\partial/\partial \bar{x}^j$.
Upon substituting in \Eq{lieder}, it becomes linear in second partial derivatives
and vanishes because the order in which partial derivatives are taken does not matter.
Lastly, the following useful results concerning the Lie bracket are noted, viz.
\begin{equation}\label{eq:liebrid1}
[ \mu {\bf u}, \lambda {\bf v}]= \mu [{\bf u}, \lambda {\bf v}]+ {\bf u} (\lambda {\bf v}.\nabla \mu)
=\mu \lambda [{\bf u}, {\bf v}] -{\bf v} (\mu {\bf u}.\nabla \lambda)
+{\bf u} (\lambda {\bf v}.\nabla \mu)
\end{equation}
and in particular, setting ${\bf u}={\bf e}_i$, ${\bf v}={\bf e}_j$, then
\begin{equation}\label{eq:liebrid2}
[ \mu {\bf e}_i, \lambda {\bf e}_j]=
{\bf e}_i (\lambda \mu_{,j}) -{\bf e}_j (\mu \lambda_{,i})
\end{equation}
where suffix~$,i$ is used to denote differentiation with respect to~$\bar{x}^i$.
From \Eq{liebrid2}, it follows that if $\lambda$ is a function of~$\bar{x}^j$ only and
$\mu$ is a function of~$\bar{x}^i$ only, then the Lie bracket vanishes, ie.\ if each~${\bf e}_i$
is scaled by an arbitrary function~$\lambda_i$ of~$\bar{x}^i$ only, then the modified~${\bf e}_i$
also constitute a basis wherever the~$\lambda_i$ are non-zero.
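This corollary may also be verified symbolically; the sketch below (illustrative only)
uses the cylindrical polar basis of the later \Sec{cylpolars}, scaling~${\bf e}_1$ by a
function of radius and~${\bf e}_2$ by a function of angle:
\begin{verbatim}
# Check the corollary of Eq. (liebrid2): e_1 scaled by mu(xbar^1) and
# e_2 scaled by lambda(xbar^2) still commute.  sympy sketch in cylindrical
# polars (see Sec. (cylpolars)), where xbar^1 = r and xbar^2 = theta.
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
X = (x, y, z)
r, theta = sp.sqrt(x**2 + y**2), sp.atan2(y, x)

def lie(v, w):   # Eq. (lieder)
    return tuple(sp.simplify(sum(w[j]*sp.diff(v[i], X[j])
                                 - v[j]*sp.diff(w[i], X[j])
                                 for j in range(3))) for i in range(3))

e1 = (x/r, y/r, sp.Integer(0))           # d x / d r
e2 = (-y, x, sp.Integer(0))              # d x / d theta
mu, lam = r**2, sp.sin(theta)            # mu(xbar^1), lambda(xbar^2)
print(lie(tuple(mu*c for c in e1), tuple(lam*c for c in e2)))   # (0, 0, 0)
\end{verbatim}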
\subsection{Metric Tensors}\label{sec:mettensor}
The metric tensor~$g_{ij}$ is introduced so that the elementary distance~$ds$
measured in the coordinate system~${\bf \bar{x}}$ is
\begin{equation}\label{eq:mettensor}
ds^2=g_{ij}d\bar{x}^i d\bar{x}^j
\end{equation}
(using the Einstein summation convention)
when it follows that
\begin{equation}\label{eq:gequal}
g_{ij}= {\bf e}_i\cdot{\bf e}_j
\end{equation}
The quantity~$g$ is defined by $g=\det(g_{ij})$ and it may be shown, consistent with \Eq{jac},
that $g$ equals the square of the Jacobian of the transformation between the coordinate systems.
Elementary theory of determinants leads to the result that, in 3-D,
\begin{equation}\label{eq:2grad}
{\bf e}_i=\sqrt{g}(\nabla \bar{x}^j \times \nabla \bar{x}^k)
\end{equation}
where $(ijk)$ is a permutation of~$(123)$, whence
\begin{equation}\label{eq:diveg}
\nabla\cdot \left(\frac{{\bf e}_i}{\sqrt{g}}\right)= 0
\end{equation}
Specialising temporarily to 2-D coordinate systems, conformal mappings
between complex variables~$z=x+iy$ and $\bar{z}=\bar{x}+i\bar{y}$ may be defined
by $z= f(\bar{z})$ where $f(\bar{z})=u(\bar{x},\bar{y})+iv(\bar{x},\bar{y})$ is an analytic
complex function. (Note that overbar does \emph{not} denote complex conjugate.)
Elementary complex analysis then shows that
\begin{equation}\label{eq:cpartials}
\partial{u}/\partial{\bar{x}}=\partial{v}/\partial{\bar{y}},\;\;\;
\partial{u}/\partial{\bar{y}}=-\partial{v}/\partial{\bar{x}}
\end{equation}
Introducing the shorthand $u_y=\partial{u}/\partial{\bar{y}}$ and obvious
variants, the metric tensor may be written
\begin{equation}\label{eq:cpg}
g_{ij}=\left(\begin{matrix}
u_x^2+v_x^2 & u_x.u_y+v_x.v_y\\
u_x.u_y+v_x.v_y & u_y^2+v_y^2\\
\end{matrix}\right)
=\left(\begin{matrix}
u_x^2+u_y^2 & 0\\
0 & u_y^2+u_x^2\\
\end{matrix}\right)
\end{equation}
and since from the result immediately after \Eq{gequal}, $g=(u_x^2+u_y^2)^2$, it follows that
for conformal mappings, $g_{ij}=\sqrt{g}\delta_{ij}$ where
$\delta_{ij}$ is the Kronecker delta.
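This reduction may be confirmed for any particular analytic function; a sympy sketch
(illustrative only, taking $f(\bar{z})=\bar{z}^2$) follows:
\begin{verbatim}
# Conformal 2-D map: check g_ij = sqrt(g) delta_ij (text after Eq. (gequal)).
# sympy sketch; f(zbar) = zbar**2 is an arbitrary analytic example.
import sympy as sp

xb, yb = sp.symbols('xbar ybar', real=True)
f = (xb + sp.I*yb)**2
Xc = sp.Matrix([sp.re(f), sp.im(f)])
J = Xc.jacobian(sp.Matrix([xb, yb]))
g = J.T*J                                   # metric, Eq. (gequal)
print(sp.simplify(g - J.det()*sp.eye(2)))   # zero matrix
\end{verbatim}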
No such reduction is possible in 3-D however. For suppose there
is an orthogonal mapping such that~$ds^2=h_{(i)}^2 \delta_{ij} d\bar{x}^i d\bar{x}^j$
(the $h_i$ are known as Lam\'{e} coefficients),
then $g=h_1^2 h_2^2 h_3^2$ and
\begin{equation}\label{eq:mat3}
G_{ij}=\frac{g_{ij}}{\sqrt{g}}=\left(\begin{matrix}
\frac{h_1}{h_2 h_3} & 0 & 0 \\
0 & \frac{h_2}{h_1 h_3} & 0 \\
0 & 0 & \frac{h_3}{h_1 h_2} \\
\end{matrix}\right)
\end{equation}
leading to the system of equations
\begin{equation}\label{eq:gsys}
h_1=h_2 h_3,\;\;\;
h_2=h_1 h_3,\;\;\;
h_3=h_1 h_2
\end{equation}
which may easily be shown to have no solutions except for $h_1^2=h_2^2=h_3^2=1$.
\subsection{Example of Cylindrical Polars}\label{sec:cylpolars}
Further to illustrate the mathematical machinery of coordinate transformations
just introduced, consider the mapping to cylindrical polar
coordinates~$(R,\theta,Z)$ given by
\begin{equation}\label{eq:cpdef}
x=R \cos \theta,\;\;
y=R \sin \theta,\;\;z=Z
\end{equation}
The basis vectors~${\bf e}_i$ follow by differentiation as
\begin{equation}\label{eq:cpbasis}
{\bf e}_1 =\frac{\partial {\bf x}}{\partial R}= (\cos \theta, \sin \theta,0),\;\;\
{\bf e}_2 =\frac{\partial {\bf x}}{\partial \theta}= (-R\sin \theta, R \cos \theta,0),\;\;\
{\bf e}_3 = \frac{\partial {\bf x}}{\partial Z}={\bf i}_3
\end{equation}
where the important point is that while the basis is indeed orthogonal, it is
not orthonormal. Further, to work with the~${\bf e}_i$ as vectors
in Cartesian space, it is best to express them as functions of~$(x,y,z)$, viz.
\begin{equation}\label{eq:cpbcart}
{\bf e}_1 =\left(\frac{x}{r}, \frac{y}{r},0\right),\;\;\mbox{ where } r=\sqrt{x^2+y^2};\;\;\
{\bf e}_2 = (-y, x,0);\;\;\
{\bf e}_3 = (0,0,1)
\end{equation}
It can be seen by direct computation from \Eq{cpbcart}, using \Eq{gequal}, that
the metric tensor
\begin{equation}\label{eq:mats}
g_{ij}=\left(\begin{matrix}
1 & 0 & 0\\
0 & r^2 & 0\\
0 & 0 & 1\\
\end{matrix}\right)
\end{equation}
and hence $\sqrt{g}=r$. The expressions \Eq{cpbcart} may then be
used to verify, for
example, that $\nabla.({\bf e}_2/\sqrt{g})=0$, because
\begin{equation}\label{eq:cpdiv}
\nabla.\left(\frac{{\bf e}_2}{\sqrt{g}}\right)=
\frac{\partial}{\partial x} \left(\frac{-y}{r}\right)+
\frac{\partial}{\partial y} \left(\frac{x}{r}\right)
\end{equation}
and it is easy to show by differentiating $r^2=x^2+y^2$ that
\begin{equation}\label{eq:cprdiff}
\frac{\partial r}{\partial x} =\frac{x}{r},\;\;
\frac{\partial r}{\partial y} =\frac{y}{r},\;\;
\end{equation}
so that the two terms on the right-hand side of \Eq{cpdiv} cancel.
The curls~$\nabla\times ({\bf e}_i/\sqrt{g})$ may be calculated in a similar
manner to the divergence. There is no simple general identity and the results have
no particular pattern, viz.
\begin{equation}\label{eq:cpcurl}
\nabla\times\left(\frac{{\bf e}_1}{\sqrt{g}}\right)=(0,0,0),\;\;
\nabla\times\left(\frac{{\bf e}_2}{\sqrt{g}}\right)=(0,0,\frac{1}{r}),\;\;
\nabla\times\left(\frac{{\bf e}_3}{\sqrt{g}}\right)=\left(-\frac{y}{r^3},\frac{x}{r^3},0\right)
\end{equation}
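These expressions may be reproduced mechanically; the following sympy sketch
(illustrative only) verifies \Eq{diveg} and the nonzero curls in \Eq{cpcurl}:
\begin{verbatim}
# Verify Eq. (diveg) and Eq. (cpcurl) for the cylindrical polar basis,
# working in Cartesian components as in Eq. (cpbcart).  sympy sketch.
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
X = (x, y, z)
r = sp.sqrt(x**2 + y**2)

def div(v):
    return sp.simplify(sum(sp.diff(v[i], X[i]) for i in range(3)))

def curl(v):
    return sp.Matrix([sp.diff(v[2], y) - sp.diff(v[1], z),
                      sp.diff(v[0], z) - sp.diff(v[2], x),
                      sp.diff(v[1], x) - sp.diff(v[0], y)]).applyfunc(sp.simplify)

e1, e2, e3 = (x/r, y/r, 0), (-y, x, 0), (0, 0, 1)
for e in (e1, e2, e3):
    print(div(tuple(c/r for c in e)))    # 0 in each case: Eq. (diveg)
print(curl(sp.Matrix([-y/r, x/r, 0])))   # (0, 0, 1/r)
print(curl(sp.Matrix([0, 0, 1/r])))      # (-y/r**3, x/r**3, 0)
\end{verbatim}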
\section{Applications Exploiting the Basis Property}\label{sec:appl1}
\subsection{Steady Kinematics}\label{sec:kinematics}
Steady kinematics of the magnetic field is easily discussed in the present context,
for it requires the vanishing of the 3-D Lie bracket
$[{\bf u},\tilde{\bf B}]={\bf 0}$.
(This problem is referred to as kinematic MHD because~${\bf u}({\bf x},t)$ is an
arbitrarily specified field, ie.\ not necessarily dynamically consistent.)
For steady flow, solutions where ${\bf B} \propto$ mass flux~${\bf F}=\rho {\bf u}$
are of course
well-known, but the geometrical results of the present paper
also indicate that if ${\bf u}$ and $\tilde{\bf B}$ are different members of a
coordinate basis, then $\tilde{\bf B}$ will also be a steady solution.
The divergence constraint is easily
met (for steady flow)
because \Eq{2grad} implies $\nabla\cdot ({\bf e}_i/\sqrt{g})=0$.
Hence, if ${\bf e}_1$ and ${\bf e}_2$ are two members of a coordinate basis,
then $\tilde{\bf B}={\bf e}_2/(\rho \sqrt{g})$
is a steady solution of the induction equation in the steady flow
${\bf u}={\bf e}_1/(\rho \sqrt{g})$ with density~$\rho({\bf x})$,
and vice versa, provided
\begin{equation}\label{eq:rhog}
\sqrt{g} \rho =\mbox{const.}
\end{equation}
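A sympy sketch of this construction in cylindrical polars (illustrative only), showing
that the Lie bracket vanishes when $\rho\sqrt{g}$ is constant and not otherwise, is:
\begin{verbatim}
# Steady kinematics of Sec. (kinematics): with the cylindrical basis of
# Sec. (cylpolars), [u, Btilde] = 0 when rho*sqrt(g) = const.  sympy sketch;
# rho = 1/r makes rho*sqrt(g) = 1, while rho = 1/r**2 violates Eq. (rhog).
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
X = (x, y, z)
r = sp.sqrt(x**2 + y**2)

def lie(v, w):   # paper's convention, Eq. (lieder)
    return tuple(sp.simplify(sum(w[j]*sp.diff(v[i], X[j])
                                 - v[j]*sp.diff(w[i], X[j])
                                 for j in range(3))) for i in range(3))

e1, e2 = (x/r, y/r, 0), (-y, x, 0)
sqrtg = r
for rho in (1/r, 1/r**2):
    u      = tuple(c/(rho*sqrtg) for c in e1)
    Btilde = tuple(c/(rho*sqrtg) for c in e2)
    print(lie(u, Btilde))   # (0,0,0) for rho = 1/r; nonzero for rho = 1/r**2
\end{verbatim}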
The representation of fields~${\bf B}$ and~${\bf F}$
as basis vectors is equivalent to the use of the Clebsch
representation~\cite[\S\,2.5]{roberts}, \cite[\S\,5.2]{dhaeseleer}
for solenoidal 3-D vector fields.
This is in general only a local result, and there are significant
restrictions, notably in toroidal geometry, on its application
globally. The non-applicability of the Poincare lemma means that a
twisted magnetic field in a torus may not be identified in general
with a coordinate basis vector defined in terms of poloidal and toroidal angles,
unless their restriction to $[0,2\pi]$~is dropped, as discussed in \Sec{coord}.
Assuming the validity of the above representation, it is
not strictly necessary, here and in the next section, for
${\bf e}_1$ and ${\bf e}_2$ to be linearly independent vectors,
since the requirement is only that they commute, ie.\ their Lie bracket vanishes.
Hence ${\bf e}_1 = {\bf e}_2$ is allowed here and later.
However, if they are linearly independent, then $\tilde{\bf B}$ and
${\bf u}$ form part of a coordinate basis.
This then implies that in steady-state there is
everywhere a non-vanishing electric field with potential~$\Phi$, so that
\begin{equation}
{\bf u}\times{\bf B}= \nabla\Phi
\end{equation}
\subsection{Equilibria 1}\label{sec:steady1}
It is also possible to treat magnetic equilibria satisfying
$[\tilde{\bf J},\tilde{\bf B}]={\bf 0}$ in a similar vein to
the previous section. The vanishing Lie bracket is equivalent to
the MHD equilibrium relation
\begin{equation}\label{eq:equil}
\nabla p = {\bf J}\times{\bf B}
\end{equation}
when the density is constant, which because of \Eq{rhog} implies
also that $\sqrt{g} =\mbox{const}$. \Eq{diveg} then implies
that the basis vectors are solenoidal.
There is a result from ref~\cite[\S\,II.1B]{arnoldkhesin},
to the effect that in a toroidal geometry $\nabla\times{\bf B}$
and ${\bf B}$ must represent a coordinate basis in each surface of constant
pressure, provided $\nabla p$ does not vanish.
However, supposing that ${\bf J}\propto{\bf e}_3$, there is
an additional constraint on the $\{{\bf e}_i\}$, namely that
\begin{equation}\label{eq:curlcon0}
{\bf e}_3=\nabla\times {\bf e}_2
\end{equation}
Although the constraint~\Eq{curlcon0} above looks relatively simple,
substituting with \Eq{coordv} produces a complicated nonlinear
constraint on the mapping.
However, in the related problem
where ${\bf J} \propto {\bf e}_2$, taking the curl of the relation ${\bf e}_2=\nabla\times {\bf e}_2$
leads to a type of vector Helmholtz equation. Ref~\cite{gridgenhbook} indicates that
equations of similar complexity may be successfully solved numerically
to generate coordinate mappings.
However, it is further necessary that $\sqrt{g}$ be constant, and this
is a demanding extra constraint which makes it hard to prove even that
mappings exist which also satisfy~\Eq{curlcon0}, although see ref~\cite[\S\,3.6]{hazeltinemeiss}.
This motivates looking at using the extra freedom allowed by a spatially
varying~$\sqrt{g}$.
\subsection{Equilibria 2}\label{sec:steady2}
Following the discussion of the previous \Sec{steady1}, it is natural to
seek equilibrium fields of the form
\begin{equation}\label{eq:bjeqm}
{\bf B}=\frac{b^2(\bar{x}^1,\bar{x}^3){\bf e}_2}{\sqrt{g}},\;\;\;
{\bf J}=\frac{j^3(\bar{x}^1,\bar{x}^2){\bf e}_3}{\sqrt{g}}
\end{equation}
where it will be seen that the forms chosen are the most general of `coordinate basis'
type that guarantee that the two fields are both solenoidal. (This representation
appears easier to work with than that suggested in ref~\cite[\S\,II.1A]{arnoldkhesin}.) To ensure
that \Eq{bjeqm} represents an equilibrium, requires from \Eq{liebrid2} that
\begin{equation}\label{eq:cindep}
\frac{\partial (b^2/\sqrt{g})}{\partial \bar{x}^3}=
\frac{\partial (j^3/\sqrt{g})}{\partial \bar{x}^2}=0
\end{equation}
These relations are satisfied if
\begin{equation}\label{eq:gindep}
\sqrt{g}=b^2 j^3 F_1(\bar{x}^1)
\end{equation}
where substituting directly in \Eq{equil}, $F_1(\bar{x}^1)=(\partial p/\partial \bar{x}^1)^{-1}$.
The remaining constraint is that ${\bf J}=\nabla\times{\bf B}$,
which in curvilinear coordinates becomes
\begin{equation}\label{eq:genjdef}
\frac{j^3 \delta_3^i}{\sqrt{g}}
=\frac{1}{\sqrt{g}}
e^{ijk} \frac{\partial}{\partial \bar{x}^j} \left(\frac{g_{k2}{b}^2}{\sqrt{g}}\right)
\end{equation}
or using \Eq{gindep}
\begin{equation}\label{eq:genj}
j^3 \delta_3^i =
e^{ijk} \frac{\partial}{\partial \bar{x}^j} \left(g_{k2}\frac{1}{j^3}
\frac{d p}{d \bar{x}^1}
\right)
\end{equation}
It follows that
the metric tensor of the curvilinear coordinate system must satisfy \Eqs{genj}{gindep},
where the functions $b^2(\bar{x}^1,\bar{x}^3)$, $j^3(\bar{x}^1,\bar{x}^2)$
and $p'(\bar{x}^1)$ are arbitrary (prime denotes differentiation with respect to function
argument) except only that $\sqrt{g}$ and~$g_{22}$ cannot change sign. Since the system has four equations and the $g_{ij}$
represent six unknowns, it is plausible that solutions exist.
Indeed, it will be seen from \Sec{cylpolars},
\Eq{cpcurl}, that for $i=3$ and $k=2$
\begin{equation}\label{eq:curlcon}
{\bf e}_i/\sqrt{g}=\nabla\times ({\bf e}_k/\sqrt{g})
\end{equation}
so that the corresponding two cylindrical polar basis vectors represent an
equilibrium of the form~\Eq{bjeqm} with $b^2=j^3=1$. This solution might be used
to initialise calculations aimed at solving the equation pair
\begin{eqnarray}\label{eq:curlcon2}
j^3 {\bf e}_3&=&\nabla\times (b^2{\bf e}_2) + b^2{\bf e}_2 \times \nabla\ln(\sqrt{g})\nonumber\\
\sqrt{g}&=&\frac{b^2 j^3}{d p/d \bar{x}^1}
\end{eqnarray}
by substituting for the~${\bf e}_i$ using \Eq{coordv}.
As mentioned in \Sec{steady1}, this substitution
leads to a system that resembles those solved successfully as grid generation problems.
Another way to proceed
might be to introduce a pseudo-displacement current term, cf.\ \Eq{maxj} below.
In either context, \Eq{genj} is probably best regarded as a consistency check
to test coordinate mappings obtained by other means.
Note, as will also emerge in the next \Sec{appl2},
that the Lie derivative formalism means nonlinear force balance is
trivially satisfied, whereas the linear relation~${\bf J}=\nabla\times {\bf B}$
causes difficulties. All other approaches to the problem of
calculating fully 3-D magnetic equilibria described in ref~\cite[\S\,8.1]{dhaeseleer}
work the other way around, ie.\ they substitute for~${\bf J}$ in
the nonlinear equation which they then solve typically by a
variational technique.
\subsection{Example Equilibrium Converse}\label{sec:egeqm}
Now, consider the converse of the equilibrium problem examined in the previous
\Sec{steady1} and \Sec{steady2}, namely suppose that an equilibrium has been found
and a corresponding coordinate basis is required. Such a basis may be helpful
because the computation of Lie derivatives therein should generally be much easier.
Hence, consider the simplest non-trivial equilibrium relevant to magnetic confinement
physics~\cite[\S\,II.1B]{arnoldkhesin}, viz. a sheared magnetic field with
\begin{equation}\label{eq:simpeqm}
{\bf B}=B(x){\bf \hat{z}},\;\;\; {\bf J}=-B'(x){\bf \hat{y}}
\end{equation}
(It is here noted that this is obviously related to the equilibrium described
in the preceding \Sec{steady2}, and further that coordinate transformations
which establish the correspondence will be described in \Sec{appl2}.)
There is a brute force approach to discovering the mapping which underlies
an equilibrium field, namely to assume ${\bf B}$ and~${\bf J}$ are
proportional to two different basis vectors, and seek a third via the vanishing Lie bracket
property of a basis. Alternatively, the mapping may be guessed, as here, as
\begin{equation}\label{eq:simpeqmap}
x=\bar{x},\;\;\;y=-\frac{\bar{y}}{B(\bar{x})},
\;\;\;z=\frac{\bar{z}}{B'(\bar{x})},
\end{equation}
with inverse
\begin{equation}\label{eq:simpeqinvmap}
\bar{x}=x,\;\;\;\bar{y}=-B(x)y,
\;\;\;\bar{z}=B'(x) z
\end{equation}
when from \Eq{coordb}, the resulting basis vectors in Cartesian coordinates are
\begin{eqnarray}\label{eq:seqmbasis}
{\bf e}_1 &=& \frac{\partial(x,y,z)}{\partial \bar{x}} = \left(1, \frac{\bar{y}B'}{B^2}, -\frac{\bar{z}B''}{B'^2} \right)
= \left(1, -\frac{yB'}{B}, -\frac{zB''}{B'} \right) \\
{\bf e}_2 &=& \frac{\partial(x,y,z)}{\partial \bar{y}} = \left(0, -\frac{1}{B},0 \right) \\
{\bf e}_3 &=& \frac{\partial(x,y,z)}{\partial \bar{z}} = \left(0, 0, \frac{1}{B'} \right)
\end{eqnarray}
From \Eq{jac}, $\sqrt{g}=1/(BB')$, hence
\begin{eqnarray}\label{eq:seqmfld}
\frac{{\bf e}_1}{\sqrt{g}} &=& \left(BB', -y B'^2, -zBB'' \right) \\
\frac{{\bf e}_2}{\sqrt{g}} &=& \left(0, -B', 0 \right) = {\bf J}\\
\frac{{\bf e}_3}{\sqrt{g}} &=& \left(0, 0, B \right) = {\bf B}
\end{eqnarray}
and it may be verified that all three~${\bf e}_i/\sqrt{g}$ are solenoidal.
It turns out that the density, which must satisfy $\rho\propto 1/\sqrt{g}$ so that the above
formulae for ${\bf B}$ and~${\bf J}$ are consistent
with~$[\tilde{\bf B},\tilde{\bf J}]={\bf 0}$, is
given by $\rho\propto BB'$, hence is proportional to the equilibrium pressure gradient.
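The example may be checked end-to-end for a concrete profile; the sympy sketch below
(illustrative only, taking $B(x)=\sin x$; note that the Jacobian determinant is negative,
reflecting orientation, so only its square is compared) verifies \Eq{seqmbasis}--\Eq{seqmfld}:
\begin{verbatim}
# Verify the sheared-field example of Eqs. (simpeqmap)-(seqmfld) for the
# concrete choice B(x) = sin(x), B'(x) = cos(x).  sympy sketch; the
# Jacobian determinant is -1/(BB') (orientation), so its square is compared.
import sympy as sp

xb, yb, zb = sp.symbols('xbar ybar zbar')
B, Bp = sp.sin(xb), sp.cos(xb)
Xc = sp.Matrix([xb, -yb/B, zb/Bp])               # Eq. (simpeqmap)
J = Xc.jacobian(sp.Matrix([xb, yb, zb]))

print(sp.simplify(J.det()**2 - 1/(B*Bp)**2))     # 0: g = 1/(BB')^2, Eq. (jac)
sqrtg = 1/(B*Bp)                                 # sign as chosen in the text
print(sp.simplify(J[:, 1]/sqrtg).T)              # (0, -B', 0) = J, Eq. (seqmfld)
print(sp.simplify(J[:, 2]/sqrtg).T)              # (0, 0,  B) = B, Eq. (seqmfld)
\end{verbatim}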
\subsection{Equilibrium with Flow}\label{sec:eqmflow}
There is the possibility that all three Lie brackets in the ideal MHD equations vanish.
For each field,
the divergence constraint may be easily met by scaling the basis by~$\sqrt{g}$.
Starting with the induction equation, suppose
\begin{equation}
{\bf B}={\bf e}_2/(\sqrt{g}),\;\;\; {\bf u}={\bf e}_1/(\rho \sqrt{g})
\end{equation}
then a steady-state with $\mbox{\boldmath $\omega$} \propto {\bf J} \propto {\bf e}_3$ is possible
provided \Eq{curlcon} is satisfied with $i=3$ and $k=2$,
not forgetting the requirement that $\rho \sqrt{g}=\mbox{const.}$ As in \Sec{steady1}
and \Sec{steady2},
${\bf e}_1$, ${\bf e}_2$ and ${\bf e}_3$ need not be linearly independent vectors,
and indeed two or more of them could be equal, but if they are independent, then
$\tilde{\bf B}$, $\tilde{\bf J}$ and ${\bf u}$ form a coordinate basis.
\section{Application to Complex Geometry}\label{sec:appl2}
\subsection{Invariance Properties}\label{sec:invar}
This section pursues the question as to what can be learnt about
MHD in complex geometries which are diffeomorphic to a Cartesian
geometry, from solutions in the Cartesian geometry. It provides a certain
amount of justification for the `rule of thumb' that such solutions
will be physically similar to those in the simpler geometry.
To focus ideas, suppose that the compressible magnetic induction
equations~\Eqs{induc}{mcons} have been solved
in Cartesian geometry and solutions for ${\bf B}$,
$\rho$ are available as functions of Cartesian coordinates~${\bf x}$
and possibly time~$t$ also.
Consider first the expression from \Eq{divg} for the divergence of the
magnetic field vector in general geometry
\begin{equation}\label{eq:divb}
\nabla\cdot {\bf B} = \frac{1}{\sqrt{g}}\frac{\partial (\sqrt{g} \bar{B}^i)}{\partial \bar{x}^i}
\end{equation}
Since ${\bf B}$ must be solenoidal in the new coordinate system, it
is evident that if the components of~${\bf x}=(x,y,z)$ are to be identified
with~$(\bar{x}^1,\bar{x}^2,\bar{x}^3)$, then those of~${\bf B}=B^i=(B_x,B_y,B_z)$
must be identified with~$\sqrt{g} \bar{B}^i,\;\;i=1,2,3$.
Now, for the Lie derivative term in the induction equation~\Eq{induc}
to be identified in the way just
described requires similar separate identifications for
$\tilde{\bf B}$ and ${\bf u}$.
For this to be possible whilst
satisfying the divergence-free constraint on the field
requires $\rho \propto 1/\sqrt{g}$, cf.\ \Sec{appl1}. It will be seen
that this relation is, fortunately, consistent with mass conservation \Eq{mcons},
provided that $u^i$ is identified with~$\bar{u}^i$.
To spell out the preceding, the mass conservation
equation may serve as an example for the others. In general
coordinates, \Eq{mcons} is
\begin{equation}\label{eq:massconsg}
\frac{\partial \bar{\rho} }{ \partial t} = -\frac{1}{\sqrt{g}} \frac{\partial (\sqrt{g} \bar{\rho} \bar{u}^i)}
{\partial \bar{x}^i}
\end{equation}
compared to the equation in Cartesians
\begin{equation}\label{eq:masscons}
\frac{\partial \rho }{ \partial t} = -\frac{\partial (\rho u^i)}{\partial x^i}
\end{equation}
Hence when a solution~$\rho$ to \Eq{masscons} is found (as a function of Cartesian
coordinates), a solution in curvilinear coordinates~$\bar{\rho}$ follows in a flow
with contravariant velocity components~$\bar{u}^i$ set equal to
Cartesian components and position vectors $\bar{x}^i$ set
equal to Cartesian positions, namely that given
by~$\bar{\rho}=\rho/\sqrt{g}$.
Note that this is \emph{not} simply the original solution expressed in the
new coordinates because this would see functions such as $\rho({\bf x})$
written
as functions of $\bar{x}^i$ through the mapping~${\bf x}(\bar{x}^i)$,
as well as the obvious difference of the $\sqrt{g}$ factor in the
case of the density.
To complete the precise
identification for (`compressible') conformal invariance,
the following formulae are required
\begin{equation}\label{eq:ident}
\bar{x}^i=(\bar{x}^1,\bar{x}^2,\bar{x}^3) \leftrightarrow x^i=(x,y,z),\;\;
\bar{u}^i=(\bar{u}^1,\bar{u}^2,\bar{u}^3) \leftrightarrow u^i=(u_x,u_y,u_z),\;\;
\bar{\rho}(\bar{x}^i,t)\leftrightarrow \rho(x^i,t)/\sqrt{g(x^i)}
\end{equation}
Writing $B^i=\bar{B}^i\sqrt{g}$, the Lie bracket
involves vectors exemplified by~$\tilde{B}^i=B^i/\rho$, which are
invariant $\tilde{B}^i \leftrightarrow \bar{\tilde{B}}^i$ since
$\bar{B}^i/\bar{\rho}=B^i/\rho$. It is also possible to talk of
`incompressible' conformal invariance in steady-state, wherein
the density is not scaled, but~${\bf u}$ is, like all the other vector
fields, scaled by a factor~$\sqrt{g}$.
\subsection{Conformal Invariance}\label{sec:conform}
The above invariance properties of the compressible magnetic induction equations may
be useful, but it would clearly be far more significant if something
similar could be found for the dynamic problem. At first sight, the Lie bracket
formulation of the Lorentz force term seems to make this possible.
Extending the argument concerning~${\bf B}$ of the previous section, since ${\bf J}$ must remain
divergence-free under transformation, $\tilde{J}^i \leftrightarrow \bar{\tilde{J}}^i$
is invariant, so the Lorentz bracket is invariant.
The remaining difficulty is the need to satisfy
\begin{equation}\label{eq:curlb}
\bar{J}^i=\frac{1}{\sqrt{g}} e^{ijk} \frac{\partial({g_{kl}\bar{B}^l})}{\partial \bar{x}^j}
\end{equation}
After substituting for $\bar{B}^i$ and~$\bar{J}^i$ in
terms of~$B^i$ and~$J^i$ (to ensure solenoidal fields), it
appears that the simplest way to make the identification for
current, assuming that all three
components of~${\bf B}$ are non-vanishing, is for
\begin{equation}\label{eq:gid}
G_{ij}=\frac{g_{ij}}{\sqrt{g}} =g_0\delta_{ij}
\end{equation}
for some constant~$g_0$.
\Eq{gid} implies from \Sec{mettensor} that the map between coordinate
systems must be a conformal mapping for solutions to be equivalent in the
sense described in the previous \Sec{invar}. The property is known as homothetic conformal
invariance, and is possessed by for example Maxwell's equations~\cite[\S\,16.4]{fecko}.
Although \Eq{gid} represents a strong constraint because it will usually restrict
maps to two-dimensions by demanding~$g_{33}=0$, the theory of conformal mappings
is well developed, and they may be used to generate an infinite number of solutions
from one in Cartesian geometry. In fact, conformal invariance is a group property,
so a first calculation in any conformally related coordinate system may be used to
generate all the others.
\subsection{Restrictions on Conformal Invariance}\label{sec:restrict}
There are sadly a number of significant restrictions on the invariance property
of the preceding \Sec{conform}.
\subsubsection{Boundary Conditions}\label{sec:bcs}
Firstly, boundary conditions cannot be neglected in the
point mapping process. Obviously since all the
fields acquire a factor~$\sqrt{g}$, boundary conditions will change quantitatively
in many cases. However, common conditions such as vanishing or
periodic field may still be satisfied if the mapping is chosen
appropriately near the boundary. For example, vanishing normal
component, or vanishing normal derivative may be preserved if the
new coordinates are arranged to be normal to the boundary.
\subsubsection{Failure with Potential Vorticity}\label{sec:potvort}
Secondly, although the advective Lie bracket in \Eq{fullpv} may be seen
to be invariant if ${\bf u}$ is identified as in the induction equation case,
the fact that a~$\sqrt{g}$ multiplier is not needed means that the
definition of potential vorticity is \emph{not} conformally invariant.
Thus conformal invariance only applies to magnetic equilibria defined by
the vanishing of the Lorentz Lie bracket, although these could contain flow
provided the flow was either dynamically negligible or balanced by pressure
forces.
\subsubsection{The Subtle Restriction}\label{sec:subtle}
Even with the above two restrictions, there is a further and highly
important restriction which arises so subtly
it is best illustrated by example.
Consider the simplest M\"{o}bius mapping of the complex plane,
viz. $z=1/\bar{z}$, which implies that
\begin{equation}\label{eq:mobmap}
x +i y = \frac{\bar{x} - i \bar{y}}{\bar{x}^2+\bar{y}^2}
\end{equation}
The point of selecting this mapping is that straight lines are sent into circles,
so that the simple sheared-field equilibrium~\Eq{simpeqm} will then occupy a finite
region of the plane and so might be realised at least approximately in a laboratory.
It is convenient first to relabel coordinates in \Eq{simpeqm} so that
\begin{equation}\label{eq:simpeqm2}
{\bf B}=B(x){\bf \hat{y}},\;\;\; {\bf J}=B'(x){\bf \hat{z}}
\end{equation}
Turning to the M\"{o}bius mapping, consider the line in the $(x,y)$-plane given by~$x=1/(2 x_r)$,
then the real part of \Eq{mobmap} shows that in the mapped plane
\begin{equation}\label{eq:mobeqn}
\bar{x}^2-2x_r \bar{x} +\bar{y}^2=0,\;\;\mbox{ or }\;\; (\bar{x} - x_r)^2+ \bar{y}^2= x_r^2
\end{equation}
which is the equation of a circle centered at~$(x_r,0)$ with radius~$x_r$.
Hence the infinite region $x>1/(2 x_r)$ is sent to the interior of this circle.
The basis vectors (restricted now to the 2-D~$(x,y)$-plane),
may be calculated as in earlier examples as
\begin{equation}\label{eq:mobbasis}
{\bf e}_1 =\frac{\partial (x,y)}{\partial \bar{x}}=
\left(\frac{\bar{y}^2-\bar{x}^2}{\bar{r}^4}, \frac{2 \bar{x} \bar{y}}{\bar{r}^4}\right),\;\;
{\bf e}_2 =\frac{\partial (x,y)}{\partial \bar{y}}=
\left(-\frac{2 \bar{x} \bar{y}}{\bar{r}^4}, \frac{\bar{y}^2-\bar{x}^2}{\bar{r}^4}\right)
\end{equation}
where $\bar{r}^2=\bar{x}^2+\bar{y}^2$. It follows that
\begin{equation}\label{eq:matmob}
g_{ij}=\left(\begin{matrix}
\bar{r}^{-4} & 0 \\
0 & \bar{r}^{-4} \\
\end{matrix}\right)
\end{equation}
and hence $\sqrt{g}=1/\bar{r}^4$.
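The metric \Eq{matmob} may be confirmed directly; a sympy sketch (illustrative only) is:
\begin{verbatim}
# Verify Eq. (matmob) and sqrt(g) = 1/rbar^4 for the Moebius map z = 1/zbar.
# sympy sketch.
import sympy as sp

xb, yb = sp.symbols('xbar ybar', real=True, positive=True)
f = 1/(xb + sp.I*yb)
Xc = sp.Matrix([sp.re(f), sp.im(f)])             # Eq. (mobmap)
J = Xc.jacobian(sp.Matrix([xb, yb]))
rbar2 = xb**2 + yb**2
print(sp.simplify(J.T*J - sp.eye(2)/rbar2**2))   # zero matrix
\end{verbatim}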
The identification of $B^i$ gives $\sqrt{g}\bar{B}^2(\bar{x})= B(x)$, so
\begin{equation}\label{eq:mobb}
\bar{B}^2(\bar{x},\bar{y})= \bar{r}^4 B(\bar{x}),\;\;\; \bar{B}^1(\bar{x},\bar{y})=\bar{B}^3(\bar{x},\bar{y})=0
\end{equation}
and similarly
\begin{equation}\label{eq:mobj}
\bar{J}^3(\bar{x},\bar{y})= \bar{r}^4 B'(\bar{x}),\;\;\; \bar{J}^1(\bar{x},\bar{y})=\bar{J}^2(\bar{x},\bar{y})=0
\end{equation}
The relation that $\bar{\bf J}=\nabla\times \bar{\bf B}$ may be
verified directly from the definition of the curl operator, as a
consequence of the fact that $g_{11}=g_{22}=\sqrt{g}$ in this example.
\emph{But} the Lorentz force may be computed directly in the curvilinear system
using the formula
\begin{equation}\label{eq:Lforc}
{F_L}_i= \sqrt{g} e_{ijk} \bar{J}^j \bar{B}^k
\end{equation}
and it will be seen immediately that this is not expressible as the gradient
of a scalar~$p(\bar{x}^1)$ unless $\sqrt{g}$ is a function of~$\bar{x}^1$
only, ie.\ nearly all
orthogonal mappings are disallowed. The further restriction results from the
fact that ${\mathcal{L}}_{\tilde{\bf B}}(\tilde{\bf J})={\bf 0}$
does not, from \Eq{liebrid1}, enforce the equilibrium relation~${\mathcal{L}}_{{\bf B}}({\bf J})={\bf 0}$
unless ${\bf B}.\nabla \rho= {\bf J}.\nabla \rho=0$. It will be seen that this
further restriction is however at least consistent with the assumption that~$p=p(\rho)$.
It has to be concluded that this `compressible' conformal
invariance is little more general than the `incompressible' variety, wherein
the resulting further constraint is
similar to that above, namely
${\bf B}.\nabla \sqrt{g}= {\bf J}.\nabla \sqrt{g}=0$. Further, if also
${\bf u}.\nabla \sqrt{g}= \boldsymbol{\omega}.\nabla \sqrt{g}=0$, the entire MHD system
including pressure is conformally `incompressibly' invariant in steady-state.
A further cautionary note (applicable to either variety of invariant)
is provided by the resistive term which is often
added to the induction equation, given by
\begin{equation}\label{eq:resind}
{\bf R}=\frac{1}{\rho}\nabla \times \frac{{\bf J}}{\sigma_E}
=\frac{1}{\rho}\nabla \times \frac{1}{\sigma_E} \left(\nabla \times \frac{1}{\mu}{\bf B}\right)
\end{equation}
where $\sigma_E$ is the electrical conductivity and $\mu$~is
the plasma permeability~(normally $\mu=\mu_0$, the free-space value).
It seems on first inspection that this term is conformally invariant, but
unfortunately this fact is
not generally useful because when trying to complete the identification,
it is found that the repeated curls in \Eq{resind} normally
force two different diagonal components of~$g_{ij}$ to be zero.
The use of conformal mappings above is to be contrasted with Goedbloed's
conformal mapping approach~\cite[\S\,16.3.3]{GKP}, which is
used directly to generate equilibria. In the present work, it is necessary for
a solution to have been generated first, then it can be mapped.
In this respect the present work more closely resembles ref~\cite{Ma12Fini},
but the treatment herein does allow for equilibria with flow.
\section{Additional Physics}\label{sec:addphys}
This section investigates in more detail than was permitted in ref~\cite{Wa13a},
how additional physics terms may be introduced into the simplest ideal
system. Electrical resistance or equivalently magnetic diffusivity
was already discussed in the previous section. Repeated use of \Eq{curlv} in
\Eq{resind} gives the additional term in the required component form.
\subsection{Pressure Forcing}\label{sec:pterm}
It is particularly interesting to see, since the barotropic
assumption may well be overly restrictive, what
is required in order to include the forcing term
\begin{equation}\label{eq:force}
\mathcal{R}=\frac{1}{\rho}\nabla\times\left(\frac{\nabla p}{\rho}\right)
=\frac{1}{2}\left(\nabla\frac{1}{\rho^2}\times\nabla p\right)
\end{equation}
In Cartesian components, this may be written
\begin{equation}\label{eq:forcec}
\mathcal{R}_{x^i}
=\frac{1}{2}\left(\frac{\partial(\rho^{-2}, p, x^i)}
{\partial(x,y,z)}\right)
\end{equation}
This representation is convenient because Jacobians transform very simply
into general geometry. Writing $\bar{\rho}$ and
$\bar{p}$ for $\rho$ and~$p$ evaluated as functions of~$\bf \bar{x}$, \Eq{forcec}
becomes
\begin{equation}\label{eq:forceg}
\mathcal{R}^i
=\frac{1}{2\sqrt{g}}\left(\frac{\partial(\bar{\rho}^{-2}, \bar{p}, \bar{x}^i)}
{\partial(\bar{x}^1,\bar{x}^2,\bar{x}^3)}\right)
\end{equation}
It should be evident that \Eq{forceg} is simply a convenient way of summarising the
three different 2-D Jacobians which have to be added to the corresponding
components of \Eq{fullpv}.
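The equality of \Eq{force} and \Eq{forcec} is easily checked for sample fields; the
following sympy sketch is illustrative only, with arbitrarily chosen $\rho$ and~$p$:
\begin{verbatim}
# Check that the curl form Eq. (force) and the Jacobian form Eq. (forcec)
# of the forcing term R agree.  sympy sketch; rho and p are arbitrary samples.
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
X = (x, y, z)
rho = 1 + x**2 + y*z
p = x*y + z**3

def curl(v):
    return sp.Matrix([sp.diff(v[2], y) - sp.diff(v[1], z),
                      sp.diff(v[0], z) - sp.diff(v[2], x),
                      sp.diff(v[1], x) - sp.diff(v[0], y)])

gradp = sp.Matrix([sp.diff(p, c) for c in X])
R_curl = curl(gradp/rho)/rho                       # Eq. (force)
R_jac = sp.Matrix([sp.Matrix([[sp.diff(rho**-2, c) for c in X],
                              [sp.diff(p, c) for c in X],
                              [sp.diff(X[i], c) for c in X]]).det()/2
                   for i in range(3)])             # Eq. (forcec)
print(sp.simplify(R_curl - R_jac))                 # zero vector
\end{verbatim}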
It may be further deduced from \Eq{forceg}, as might be expected,
that this term is not conformally invariant in the `compressible' sense. This
property ultimately results from the presence of the gradient operator
in the momentum equation.
\subsection{Anisotropy}\label{sec:aniso}
The point is that if code is written to treat general geometry
then it is easily extended to treat
plasmas having an anisotropic tensor permeability~$\mu_{ij}$.
Recall that for a general medium, Maxwell's equations give
\begin{equation}\label{eq:maxj}
{\bf J}=\nabla \times {\bf H}+ \frac{\partial {\bf D}}{\partial t},
\;\;\; \mbox{ where } \;\;\; {\bf B} = \mu {\bf H}
\end{equation}
Normally the displacement current term (containing~${\bf D}$)
in \Eq{maxj} is neglected.
Now from the general geometry expression for the curl, it is evident that
a $g_{ij}$ is required exactly where the reciprocal of~$\mu_{ij}$ should appear in \Eq{maxj}.
Further, in the resistive case, see \Eq{resind}, an additional $g_{ij}$~factor
is required exactly where the reciprocal of the electrical conductivity
tensor~$\sigma_{Eij}$ appears. Thus, since all tensors are symmetric and the products
and inverses of
symmetric tensors are symmetric, it is necessary only to allow for two
different `$g_{ij}$' to appear in the aforementioned places, to allow
for plasma anisotropy.
\subsubsection{Invariance Properties}\label{sec:invprop}
The result of the preceding paragraph also has an interesting
converse implication, namely that a solution obtained in Cartesian
geometry with anisotropic plasma properties satisfying
\begin{equation}
g_{ij} =\frac{1}{\mu_{ij}}=\frac{1}{\sigma_{Eij}}
\end{equation}
will furnish an equivalent solution
in general geometry, even in the resistive case.
This result holds provided that inertia is negligible
and subject to the possible redefinition of boundary conditions
much as discussed in \Sec{bcs}.
In relation to other work on invariance properties,
contrast ref~\cite{Ch05Cons} where instead the pressure tensor
is taken to be anisotropic, and ref~\cite{Ma12Fini} where an extraneous
force term is introduced.
If inertia is significant, then the situation
becomes that described in ref~\cite{Wa13a} where metric tensors have
to be explicitly introduced throughout.
\section{Time Dependent Behaviour}\label{sec:tdep}
\subsection{Elementary Field Kinematics}\label{sec:ekinematics}
Given that steady fields ${\bf u}_0$, $\rho_0$ and~$\tilde{\bf B}_0$ have been found
which satisfy the kinematic MHD equations, it is interesting to see whether
these may be used to construct time-dependent solutions.
The simplest form to try is
\begin{equation}\label{eq:simpsoln}
\tilde{\bf B}= \lambda(\bar{\bf x},t) \tilde{\bf B}_0
\end{equation}
Evidently
\begin{equation}\label{eq:inducs1}
\frac{\partial (\lambda \tilde{\bf B}_0)}{\partial t}=
\tilde{\bf B}_0 \frac{\partial \lambda}{\partial t}
\end{equation}
and from \Eq{liebrid1}
\begin{equation}\label{eq:inducs2}
[{\bf u}_0, \lambda \tilde{\bf B}_0] = -\tilde{\bf B}_0\, {\bf u}_0 .\nabla \lambda
\end{equation}
Hence
\begin{equation}\label{eq:inducs3}
\frac{\partial (\lambda \tilde{\bf B}_0)}{\partial t}-
[{\bf u}_0, \lambda \tilde{\bf B}_0] =
\tilde{\bf B}_0 \left(
\frac{\partial \lambda}{\partial t}
+{\bf u}_0 .\nabla \lambda
\right)
\end{equation}
so that \Eq{simpsoln} is indeed a solution of \Eq{induc} provided
$\partial \lambda/\partial t = -{\bf u}_0. \nabla \lambda$.
However, ${\bf B}$ must remain solenoidal, implying $\tilde{\bf B}_0 .\nabla \lambda=0$
which is an awkward condition to satisfy in general. Supposing, however, that
$\tilde{\bf B}_0 ={\bf e}_2$, then the magnetic field, which will initially
be divergence-free provided $\rho \sqrt{g}=\mbox{const.}$, remains divergence-free provided
only that $\lambda$ does not depend on~$\bar{x}^2$. An explicit solution may now
be realised by supposing that ${\bf u}_0={\bf e}_1$, for then
\begin{equation}\label{eq:inducs4}
\lambda = F (\bar{x}^1-t, \bar{x}^3)
\end{equation}
where $F$ is an arbitrary function. It may be shown similarly that a consistent time dependent
density~$\rho= \xi(\bar{\bf x},t) \rho_0$
may be found provided that also $\xi= F_2(\bar{x}^1-t, \bar{x}^3)$.
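The advected amplitude may be checked symbolically; the sketch below (illustrative only,
with an arbitrary function~$F$) confirms that \Eq{inducs4} satisfies the required
advection equation:
\begin{verbatim}
# Check Eq. (inducs4): lambda = F(xbar^1 - t, xbar^3) satisfies
# d(lambda)/dt + u0.grad(lambda) = 0 when u0.grad = d/d(xbar^1).
# sympy sketch with an arbitrary function F.
import sympy as sp

x1, x3, t = sp.symbols('xbar1 xbar3 t')
F = sp.Function('F')
lam = F(x1 - t, x3)
print(sp.simplify(sp.diff(lam, t) + sp.diff(lam, x1)))   # 0
\end{verbatim}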
\subsection{Time Dependent Kinematics}\label{sec:tkinematics}
This section illustrates the application of the Lie
derivative approach to the magnetic induction equation.
To treat the time dependent case, it is helpful to move to a 4-dimensional
space~$({\bf x},t)$ and introduce 4-vectors~$\tilde{\mbox{\boldmath $\mathcal{B}$}}=(\tilde{\bf B},0)$,
$\mbox{\boldmath $\mathcal{U}$}=({\bf u},1)$, so that the evolution equation for~$\tilde{\bf B}$
may be written as
\begin{equation}\label{eq:induclie}
{\mathcal{L}}^{-}_{\mbox{\boldmath $\mathcal{U}$}}(\tilde{\mbox{\boldmath $\mathcal{B}$}})=0,
\end{equation}
(The minus superscript denotes that the opposite sign convention to \Eq{lieder}
is being used in the definition of the Lie derivative.)
Introduce Greek indices~$\nu$, $\iota$, $\kappa$, $\lambda$ to index the 4-vectors, so
that for example~$\nu$ takes values~$1,2,3,4$ and $x^4=t$, then by definition
\begin{equation}\label{eq:inducexp}
{\mathcal{L}}^{-}_{\mbox{\boldmath $\mathcal{U}$}}(\tilde{\mbox{\boldmath $\mathcal{B}$}})=
\mathcal{U}^\lambda\frac{\partial \tilde{\mathcal{B}}^\nu}{\partial {x}^\lambda}-
\tilde{\mathcal{B}}^\lambda\frac{\partial \mathcal{U}^\nu}{\partial {x}^\lambda}
\end{equation}
Analogous to the 3-D result presented in \Sec{Lie}, substituting
\begin{equation}\label{eq:inducliesoln}
\tilde{\mathcal{B}}^\nu=\partial x^\nu / \partial \bar{x}^\iota,\;\;\;\;
\mathcal{U}^\nu=\partial x^\nu / \partial \bar{x}^\kappa
\end{equation}
in \Eq{inducexp} leads to a vanishing Lie bracket.
Taking $\kappa=4$ and $\bar{x}^4=t$ and letting~$\iota=j=1,2,3$,
application of the results in \Sec{Lie} shows that a solution of the induction
equation in the flow~${\bf \mathcal{U}}=(\partial{\bf x}/\partial t,1)$ is
\begin{equation}\label{eq:Btwsoln}
\tilde{\mathcal{B}}^\nu=\Sigma_{j=1}^{3} \tilde{b}^j\frac{\partial x^\nu}{\partial \bar{x}^j}
\end{equation}
provided $\partial \tilde{b}^j/\partial t=0$.
Clearly \Eq{Btwsoln} constitutes a completely general representation of a 3-vector,
hence \Eq{Btwsoln} is a completely general solution to a vanishing Lie bracket.
The kinematic MHD equations are not however solved until an expression has been
produced for the density, and this is easily provided if the
mapping~$x^\nu=({\bf x}(\bar{x}^1,\bar{x}^2,\bar{x}^3,t),t)$ is regarded as
being produced by a flow, meaning that it reduces to the identity at $t=0$
and thereafter changes smoothly with time. For such a map, the well-known
Lagrangian solution to the continuity equation gives
\begin{equation}\label{eq:rhoL}
\rho({\bf x},t) =\frac{1}{\sqrt{g}} \rho(\bar{\bf x},0)
\end{equation}
where $\sqrt{g}$ is the Jacobian of the map,
\begin{equation}
\sqrt{g}=
\frac{\partial(x^1,x^2,x^3,x^4)}
{\partial(\bar{x}^1,\bar{x}^2,\bar{x}^3,\bar{x}^4)}
=\frac{\partial(x^1,x^2,x^3)}
{\partial(\bar{x}^1,\bar{x}^2,\bar{x}^3)}
\end{equation}
the second equality following because $x^4=\bar{x}^4$. (Contrast \Eq{rhoL}
with \Eq{ident} where
because $\rho$ appears as a reciprocal in the vector fields, the $\sqrt{g}$~factor
is inverted relative to \Eq{rhoL}.)
Provided mass conservation is satisfied, an initially divergence free~${\bf B}$
satisfying~\Eq{inducexp} will remain solenoidal, hence the magnetic field solution of the induction
equation may be written
\begin{equation}\label{eq:Bsoln}
{\bf B}({\bf x},t)=\frac{1}{\sqrt{g}} \Sigma_{j=1}^{3} b^j(\bar{\bf x})\frac{\partial {\bf x}}{\partial \bar{x}^j}
\end{equation}
The above result re-expresses the well-known
Lagrangian solutions to the ideal induction equation~\cite[\S\,2.3]{roberts}.
Consider lastly the flow described in $\bar{\bf x}$ coordinates by
\begin{equation}\label{eq:3flow}
\frac{d\bar{x}^1}{dt}=\frac{d\bar{x}^2}{dt}=0,\;\;\frac{d\bar{x}^3}{dt}=1
\end{equation}
Since
\begin{equation}\label{eq:3flow2}
{\bf e}_3= \frac{\partial {\bf x}}{\partial \bar{x}^3}=\frac{\partial{\bf x}}{\partial t}
\end{equation}
\Eq{3flow} corresponds
to a motion in physical space
in the direction of the ${\bf e}_3$ coordinate, cf.\ ref~\cite{Wa13b}.
\subsection{Quasi-2-D Dynamics}\label{sec:2dtdepdyn}
In ref~\cite{Wa13a}, a time dependent compressible MHD solution was
found given by $\tilde{B}^1=\tilde{B}^2=0,\;\;\tilde{B}^3=1$ (so that magnetic field
and density evolution are tightly coupled),
subject to the restriction that all field quantities and the metric tensor depend only on
the other two coordinates~$\bar{x}^1,\bar{x}^2$.
\begin{figure}
\centerline{\rotatebox{0}{\includegraphics[width=10.0cm]{torus2}}}
\caption{Cutaway diagram of a torus, showing three different flux
surfaces~$\varrho(\psi,s,w)$ at constant~$\psi$, centred on $R=R_0$.
(The diagram in fact shows the situation where $\psi$ is independent of $s,w$.) As
explained in the text there is a correspondence between the coordinates~$\psi$,~$s$,~$w$
and $r$,~$\theta$,~$\phi$ respectively.
In a vertical plane, distance from the minor axis $R=R_0$ is usually denoted by~$r$
and the poloidal angle, usually denoted by~$\theta$, is measured around the minor axis.
The toroidal angle, usually denoted~$\phi$ is measured around the major axis~$R=0$.
\label{fig:torus2}}
\end{figure}
Further to explore the implications of this, introduce
generalised toroidal coordinates~$(\psi, s, w)$ (cf.\ $(r,\theta,\phi)$
as commonly employed in plasma physics~\cite{dhaeseleer}) so that
\begin{equation}\label{eq:torc}
{\bf x}=(x,y,z)=\left(R_{+}\cos w, R_{+}\sin w, \varrho\sin s\right),
\end{equation}
where
\begin{equation}\label{eq:rtorc}
R_{+}=R_0+\varrho(\psi,s,w) \cos s
\end{equation}
It will be seen from \Fig{torus2} that $\varrho(\psi,s,w)$ at constant~$\psi$
form a set of nested
toroidal surfaces each having major radius~$R_0$. Introduce helical
coordinates~$(u,v)$ on each surface, so that $s=u-v/q(\psi)$,
$w=v+u/q(\psi)$, and write $\tilde{\varrho}(\psi,u,v)=\varrho(\psi,s,w)$.
Suppose that $\psi$ is rotationally symmetric about the $z$-axis
and satisfies the Grad-(Schl\"{u}ter)-Shafranov equation~\cite[\S\,8]{dhaeseleer},
ie.\ $\psi$~is a flux function for an equilibrium magnetic field, then
the curves of \Eq{torc} as $v$ varies at constant $u$~and~$\psi$ are equivalent to lines
of the equilibrium field with helical pitch~$q(\psi)$. (Note that
$s$ and $w$, hence $u$ and $v$
need only be suitably periodic functions of the regular toroidal
angles~$\theta$ and~$\phi$. To define an equilibrium fully requires
defining these functions, but this is inessential for what follows.)
Since the metric tensor in a coordinate system $(\bar{x}^1,\bar{x}^2,\bar{x}^3)$
may be written
\begin{equation}\label{eq:gikd}
g_{ik}=\frac{\partial{\bf x}}{\partial \bar{x}^i}\cdot\frac{\partial{\bf x}}{\partial \bar{x}^k}
\end{equation}
taking $(\bar{x}^1,\bar{x}^2,\bar{x}^3)=(\psi, u,v)$ and using suffix~$i$ to denote
differentiation with respect to~$\bar{x}^i$, the components of~$g_{ik}$
are straightforwardly calculated as
\begin{equation}\label{eq:gpuv}
\left(
\input{gpuv}
\right)
\end{equation}
where the `dotted' entries may be deduced from the symmetry of~$g_{ik}$.
The usual rules for partial differentiation give
\begin{eqnarray}
\tilde{\varrho}_\psi &=& \varrho_\psi+\varrho_s s_\psi+\varrho_w w_\psi \nonumber \\
\tilde{\varrho}_u &=& \varrho_s s_u+\varrho_w w_u \label{eq:dsub}\\
\tilde{\varrho}_v &=& \varrho_s s_v+\varrho_w w_v \nonumber
\end{eqnarray}
The assumption of axisymmetric equilibrium is equivalent to $\varrho_w=0$; substituting
\Eq{dsub} in \Eq{gpuv} and simplifying with the Reduce algebra system~\cite{He05Redu}
then gives
\begin{equation}\label{eq:gik}
\left(
\input{gpuvsub110}
\right)
\end{equation}
where $\varrho_{+}^2=\varrho^2+\varrho_s^2$ and $\iota=1/q$
has been introduced so that $s_{i}=(s_\psi,1,-\iota)$, $w_{i}=(w_\psi,\iota,1)$.
This $\bar{x}^k$~coordinate system has been chosen so that the equilibrium
field expected
in the tokamak confinement device may be expressed as $\tilde{B}^3=1$ ($\tilde{B}^1=\tilde{B}^2=0$),
but it will be seen that from the definition of $R_{+}$ in \Eq{rtorc}, the metric tensor does depend
on~$\bar{x}^3=v$ through $s=u-v\iota$.
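For readers wishing to reproduce \Eq{gpuv}--\Eq{gik}, the algebra is equally
straightforward in other computer-algebra systems. The following SymPy sketch
(SymPy here standing in for Reduce) treats the simplest case of constant~$q$,
so that $s_\psi=w_\psi=0$, with $\varrho$ an unspecified axisymmetric
function, $\varrho_w=0$:
\begin{verbatim}
import sympy as sp

psi, u, v, R0, q = sp.symbols('psi u v R_0 q', positive=True)
s = u - v / q                        # helical angle s
w = v + u / q                        # helical angle w
rho = sp.Function('varrho')(psi, s)  # axisymmetric: rho_w = 0

Rp = R0 + rho * sp.cos(s)
X = sp.Matrix([Rp * sp.cos(w), Rp * sp.sin(w), rho * sp.sin(s)])

coords = (psi, u, v)
g = sp.Matrix(3, 3, lambda i, k:
              sp.simplify(X.diff(coords[i]).dot(X.diff(coords[k]))))
print(g)
\end{verbatim}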
By inspection, however, in the limit when $q$ is large and constant
or equivalently $\iota$ is negligible and constant (so $s_\psi=w_\psi=0$),
the metric tensor becomes
\begin{equation}\label{eq:gikc}
\left(
\input{gpuvsub110constq}
\right)
\end{equation}
thus $g_{ik}$ depends only on $\bar{x}^1$~and~$\bar{x}^2$. Hence a purely toroidal field,
ie.\ one tangent to circles about the major axis~$x=y=0$ of the torus,
allows for flux-compression solutions. Further, when $\varrho/R_0$
and $\varrho_s$ are both small,
the $s$-dependence of $g_{ik}$ is weak, so a helical field in a
circular torus with relatively large major radius is also in this category.
The preceding limits illustrate two of the Killing vector solution
symmetries~\cite[\S\,5.2.4]{schindler}, whereas the third is simply invariance
in a Cartesian coordinate.
Lastly, if the coordinate~$s$ is redefined so that $s=u$, then the
metric tensor becomes
\begin{equation}\label{eq:giku}
\left(
\input{gpuvsub010}
\right)
\end{equation}
and $g_{ik}$ depends only on $\bar{x}^1$~and~$\bar{x}^2$ regardless
of the presence of magnetic shear. In this
coordinate system, however, lack of orthogonality makes it impossible in general to
arrange simultaneously that $\tilde{B}^1=\tilde{B}^2=0$ and $\tilde{B}^3=1$.
\section{Summary}\label{sec:summary}
\Sec{math} has collated relevant results from differential geometry,
especially those concerning Lie derivatives, from a widespread
mathematical literature. \Sec{appl1} and \Sec{appl2} then draw on these
results to derive solutions and solution methods for the ideal MHD equations
which are hopefully complementary to those presented in ref~\cite{dhaeseleer},
rather than simply `express old results in new language'.
The novelty of the Lie derivative approach is that coordinate basis
vector fields automatically cause the nonlinear terms in MHD to
disappear. The main bar to finding solutions is then the need to satisfy the
curl or `constitutive' relations ${\bf J}=\nabla\times{\bf B}$
and $\mbox{\boldmath $\omega$}=\nabla\times{\bf u}$.
In kinematic, compressible MHD, where
these constraints are not present, complete solution is possible
(see \Sec{tkinematics}), although this result was already known from
the Lagrangian approach. Suggestions are made for numerical solution
of the curl equations in \Sec{appl1} and \Sec{appl2}, but it seems that
for determining MHD equilibria,
the Grad-Shafranov equation
still has the advantage in most practical cases.
One possible practical improvement may result from the use
of coordinate bases, which simplify the calculation of Lie
derivatives, albeit in the example of \Sec{egeqm} at the expense
of introducing non-orthogonality. Linear stability analysis to investigate
this contention is outside the scope of the present work.
The reformulated equations at first sight appear to have significant
coordinate invariance properties, although \Sec{subtle} shows that
these are heavily qualified. The remaining parts of the present work
extend ref~\cite{Wa13a} by (i) considering additional physics effects
and how they might be modelled in the context of
differential geometry (\Sec{addphys}), and (ii) presenting
an application of the flux-compression solution found in~\cite{Wa13a} (\Sec{2dtdepdyn}).
The current study has sought to apply results from 21st~Century
work on differential geometry to MHD. The mathematical notations and
concepts employed in the differential geometry literature have developed significantly
over the years. They may now seem to many excessively abstract, probably
because the subject has been strongly driven by Quantum Mechanics and
General Relativity applications, which require four or more
dimensions, and by other partial differential equation applications which require infinite
dimension. In 3-D, the coordinate-free approach can in fact be
unhelpful. The two curl equations exemplify this, for in the newer
language $\mbox{\boldmath $\omega$}=\nabla\times{\bf u}$ defines a 2-form in terms of a differential of a 1-form
or vector and is written $\omega=du$. However in ${\bf J}=\nabla\times{\bf B}$,
${\bf B}$ is already a 2-form, and so its dual has to be taken before
the `d' operator is applied, hence is written~$J=*d*B$.
There is, moreover, often a degree of uncertainty as to whether the operator
on~$B$ should be thought of as acting in 3-D space or in the 4-D Minkowski
space-time where Maxwell's equations are properly defined. Since the notation
gives no indication that the two curl equations in fact have the same component
form, many conclude that the more abstract approach just makes
a hard subject even harder.
It must however be pointed out that aspects of the modern, coordinate-free approach
are very valuable: by producing a more fundamental understanding of the
subject, they indicate in what circumstances analysis might lead to
physically useful results. The concept of the pull-back mapping
(even if it is misleadingly named, and although not used herein)
is also very useful for indicating,
in the presence of coordinate mappings, precisely which functions of
which set of coordinates are being used. Without these positives,
the current paper would have been much weaker and more confused.
\section*{Acknowledgement}\label{sec:ackn}
I am grateful to Anthony J.~Webster for criticising the accessibility
of an early version of
the current work, and to John M.~Stuart for valuable advice on
the abstract mathematical background.
\input{ackn}
\section*{Introduction}
Shortly after the launch of the {\it Rossi} X-ray Timing Explorer
(RXTE) in late 1995, single kilohertz brightness oscillations were discovered
in RXTE countrate time series data from thermonuclear X-ray bursts in
several neutron-star low-mass X-ray binaries. These oscillations are
remarkably coherent and their frequencies are very stable from burst to
burst in a given source \cite{SSZ98}.
These oscillations are therefore thought to be at
the stellar spin frequency or its first overtone. This suggests that
the oscillations are caused by rotational modulation of a hot spot
produced by non-uniform nuclear burning and propagation. Analysis of
these oscillations can therefore constrain the mass and radius of the
star and yield valuable information about the speed and type of thermonuclear
propagation. In turn, this has implications for strong gravity and dense
matter, and for astrophysical thermonuclear propagation in other contexts,
such as classical novae and Type Ia supernovae.
A comparison of theoretical waveforms with the observations is required
to extract this fundamental information. Here we exhibit waveform
calculations that we have produced using general relativistic ray-tracing
codes. We survey the effects of parameters such as the spot size, the
stellar compactness, and the stellar rotational velocity,
and demonstrate that our computations
fit well the phase lag data from SAX~J1808--3658.
\section*{Computational Method}
To compute observed light curves, we do general relativistic ray tracing
from points on the surface to the observer at infinity in a way similar
to, but more general than, \cite{PFC83} and \cite{ML98}. For simplicity,
we assume that the exterior spacetime is Schwarzschild, that the surface
is dark except for the hot spot or spots, and that there is no background
emission. The amplitudes would be reduced by a constant factor
if there were background emission. The angular dependence of the specific
intensity at the surface depends on both radiation transfer effects and
Doppler boosting (see \cite{WML99}).
For each phase of rotation we compute the projected area of many small
elements of a spot of given finite size. We then build up the light curve of
the entire spot by superposing the light curves of all the small elements.
The grid resolution of the spot is chosen so that the finite number of
small elements alters the computed
oscillation amplitudes by a fraction no larger than $\sim 10^{-4}$.
After computing the oscillation waveform
using the above approach, we Fourier-analyze the resulting light curve to
determine the oscillation amplitudes and phases as a function of photon
energy at different harmonics.
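The following Python fragment is a schematic of this procedure only: light
bending, gravitational redshift and Doppler factors are omitted, the
projected-area factor is a simple cosine, and all numerical values are
illustrative. It shows the element-superposition and Fourier-analysis steps.
\begin{verbatim}
import numpy as np

nphase, nelem = 256, 400
phases = np.linspace(0.0, 1.0, nphase, endpoint=False)
alpha = np.radians(20.0)                      # spot angular radius (assumed)
dth = alpha * (np.random.rand(nelem) - 0.5)   # element offsets within the spot
dph = alpha * (np.random.rand(nelem) - 0.5)

flux = np.zeros(nphase)
for k in range(nelem):
    # cosine of angle between surface normal and line of sight
    mu = np.cos(dth[k]) * np.cos(2 * np.pi * phases + dph[k])
    flux += np.clip(mu, 0.0, None)            # element dark when over the limb

coeffs = np.fft.rfft(flux) / nphase
for n in (1, 2):                              # first and second harmonics
    rms = np.sqrt(2.0) * abs(coeffs[n]) / coeffs[0].real
    print(f"harmonic {n}: rms amplitude = {rms:.3f}")
\end{verbatim}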
\section*{Results}
\begin{figure}[ht]
\epsfig{file=Fig2.eps,height=3.0truein, width=6.0truein}
\vskip 0.2truein
\caption[]{\label{fig1}
(a)~Bolometric rms amplitude vs.\ the angular radius $\alpha$ of the spot
at the first
harmonic (upper curves) and the second harmonic (lower curves)
from a single emitting spot centered on the rotational equator as
seen by a distant observer in the rotational
plane. Numbers denote values of $R/M$, where we use geometrized
units in which $G=c\equiv 1$. (b)~Rms amplitude vs.
$\alpha$ at the second harmonic from two antipodal emitting
spots, with both the spots and the distant observer in the rotational
plane. Note the change in vertical scale in panel (b).
In both cases we assume a nonrotating star and an isotropic specific
intensity as measured by a local comoving observer.}
\end{figure}
\begin{figure}[ht]
\begin{minipage}[t]{2.8truein}
\mbox{}\\
\epsfig{file=Fig4.eps, height=3.0truein, width=3.0truein}
\end{minipage}
\begin{minipage}[t]{2.8truein}
\mbox{}\\
\vskip 0.3truein
\epsfig{file=1808lagv3.ps, height=2.7truein, width=2.7truein}
\end{minipage}
\vskip 0.2truein
\caption[]{\label{fig23}
(left panel) Rms amplitude versus spot angular radius $\alpha$
at the first harmonic (upper curves) and the second
harmonic (lower curves) from a single emitting spot with $v=0.1c$
(\textit{solid lines}) and $v=0$ (\textit{dashed lines}), $R/M = 5.0$, and
a spot and a distant observer both in the rotational equator. (right panel)
Phase lags versus photon energy for the millisecond X-ray
pulsar SAX~J1808--3658. The data (crosses) are from \cite{CMT98},
where the reference energy is the 2--3~keV band. The solid
line shows the phase lags in a Doppler shift model, assuming a
gravitational mass of $1.8\,M_\odot$ and a rotational velocity
of 0.1~c as measured at infinity, which would be appropriate for
the observed 401~Hz spin frequency and $R=11$~km.
The angular and spectral emission at the surface are that of a
grey atmosphere with an effective temperature of 0.6~keV as measured
at infinity. The excellent
fit apparent in this figure supports the Doppler shift
explanation for the soft lags in this source.}
\end{figure}
Panel (a) of Figure~1 shows the fractional rms amplitudes at the
first two harmonics as a function of spot size and stellar compactness
for one emitting spot
centered on the rotational equator, as seen by a distant observer in
the rotational plane. These curves demonstrate that a
common result of the hot-spot model is large-amplitude brightness oscillations
with a high contrast in strength between the dominant harmonic and weaker
harmonics, as is observed in several sources.
The curves for the first harmonic illustrate the
shape typical of first-harmonic amplitude curves. Initially, the amplitude
depends only weakly on spot size. However, once the spot grows to an angular
radius of $\sim
40^{\circ}$ there is a steep decline in the oscillation amplitude which
flattens out only near the tail of the expansion. This expected behavior
appears to be in conflict with the decline in amplitude observed by
Strohmayer, Zhang, \& Swank (1997) from 4U~1728--34, in which the initial
decline is steep. The cause of this could be that the initial velocity
of propagation is large, or that the observed amplitude is diminished
significantly by isotropization of the beam due to scattering (Weinberg,
Miller, \& Lamb 1999).
Panel (b) of Figure~1 shows the
fractional rms amplitude at the second harmonic under the same assumptions but
for two identical, antipodal emitting spots. The range in spot size here is
$0^{\circ}-90^{\circ}$ since two antipodal spots of $90^{\circ}$ radii cover
the entire stellar surface. Note that in this situation, there is no first
harmonic.
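The absence of the first harmonic follows directly from symmetry: with two
identical antipodal spots and an observer in the rotational plane, rotating
the star by half a period reproduces the same configuration, so the light
curve satisfies $F(\phi+1/2)=F(\phi)$ in rotational phase~$\phi$ and only
even multiples of the spin frequency survive in its Fourier expansion.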
These figures show that when there is only one emitting spot,
the fundamental is always much stronger than higher harmonics. Thus, a source
such as 4U~1636--536 with a stronger first overtone than fundamental
\cite{M99} must have
two nearly antipodal emitting spots. As described in detail in \cite{WML99},
we confirm the results of \cite{PFC83} and \cite{ML98}
that the rms amplitude decreases with increasing compactness
until $R/M\approx 4$, then increases due to the formation of caustics.
We also find that a finite surface rotational velocity increases the
amplitude at the second harmonic substantially, while having a relatively
small effect on the first harmonic (left panel of Figure~2).
As an application to data, in the right panel of Figure~2 we use our models
to fit phase lag data from the millisecond accreting X-ray pulsar
SAX~J1808--3658. The waveforms from the accreting spot are expected to
be similar to the waveforms from burst brightness oscillations, and the
signal to noise for this source greatly exceeds that from burst sources
such as Aql~X-1 \cite{F99}. As is apparent from the
figure, the fit is excellent. Further data, especially from a high-area
timing mission, could be used to constrain the stellar mass or radius
from phase lag data.
\section{Introduction}
The infinitesimal generator of a $d$-dimensional rotationally invariant L\'evy process is
a (generally non-local) operator of the form $\sL = b \Delta + {\mathcal A}$ where $b\ge 0$ and
$$
{\mathcal A} f(x)=\int_{{\mathbb R}^d}\left(f(x+y)-f(x)-\nabla f(x) \cdot y {\mathbf 1}_{\{|y|\le 1\}}\right)\, \nu(dy)
=\lim_{\epsilon\to 0} \int_{\{|y|>\epsilon \}}\left(f(x+y)-f(x)\right)\, \nu(dy)\, .
$$
The measure $\nu$ on ${\mathbb R}^d\setminus \{0\}$ is invariant under rotations around origin
and satisfies $\int_{{\mathbb R}^d}(1\wedge |y|^2)\, \nu(dy) <\infty$. When $\nu=0$, the
operator $\sL$ is proportional to the Laplacian, hence a local operator, while
when $b=0$, the operator $\sL$ is a purely non-local integro-differential operator.
In particular, if $b=0$ and $\nu(dx)=c|x|^{-d-\alpha}dx$, $\alpha \in (0, 2)$,
then ${\mathcal A}$ is proportional to
the fractional Laplacian
$\Delta^{\alpha/2}:=-(-\Delta)^{\alpha/2}$.
L\'evy processes are of intrinsic importance in probability theory, while integro-differential
operators
are important in the theory of partial differential equations.
Most of the research in the potential theory of L\'evy processes in
the last fifteen years concentrates on purely discontinuous L\'evy processes,
such as rotationally invariant stable processes,
or equivalently, on purely non-local operators of the type ${\mathcal A}} \def\sB {{\mathcal B}} \def\sC {{\mathcal C}$.
For summary of some recent results from a probabilistic point of view one can consult
\cite{BBKRSV, C0, KSV2, KSV3}
and references therein. We refer the readers to \cite{CSS, CaS, CV} for a
sample of recent progress in the PDE literature, mostly for the case of a fractional Laplacian
$\Delta^{\alpha/2}$, $\alpha \in (0, 2)$.
In many situations one would like to study operators that have both local and non-local parts.
From a probabilistic point of view, this corresponds to processes with both a Gaussian component
and a jump component. The fact that such a process $X$ has both
Gaussian and jump components is the source of many difficulties in investigating the
potential theory of $X$. The main
difficulty in studying $X$ stems from the fact that it runs on two different scales:
on the small scale the diffusion corresponding to the Gaussian part dominates,
while on the large scale the jumps take over. Another difficulty is encountered when
looking at the exit of $X$ from an open set: for
diffusions, the exit is through the boundary, while for the pure jump processes,
typically the exit happens by jumping out from the
open set. For the process $X$, both cases will occur which makes the process
$X$ much more difficult to study.
Despite the difficulties mentioned above, in the last few years significant progress has been made in
understanding the potential theory of such
processes. Green function estimates (for the whole space) and the Harnack inequality
for a class of processes with both diffusion and
jump components were established in \cite{RSV, SV05}. The parabolic Harnack inequality
and heat kernel estimates were studied in
\cite{SV07} for L\'evy processes in $\bR^d$ that are independent sums of Brownian
motions and symmetric stable processes, and in \cite{CK08} for much more general
symmetric diffusions with jumps. Moreover, an a priori H\"older estimate was established in \cite{CK08}
for bounded parabolic functions. For earlier results on second order integro-differential
operators, one can see \cite{GM} and the references therein.
Important progress has been made in two recent papers \cite{CKSV1, CKSV2}
which consider operators of the type $\Delta + a^{\alpha}\Delta^{\alpha/2}$ for
$a\in [0, M]$. The process corresponding to such an operator is an independent sum of
a Brownian motion and a rotationally invariant $\alpha$-stable process with weight $a$.
In \cite{CKSV1} the authors established a (uniform in $a$) boundary Harnack principle
(BHP) with explicit boundary decay rate for non-negative harmonic functions with respect to
$\Delta + a^{\alpha}\Delta^{\alpha/2}$ in $C^{1,1}$ open sets. By using the BHP, the second
paper \cite{CKSV2} established sharp Green function estimates in bounded $C^{1,1}$ open
sets $D$, and identified the Martin boundary of $D$ for the operator $\Delta +
a^{\alpha}\Delta^{\alpha/2}$ with its Euclidean boundary.
The purpose of the current paper is to extend the results in \cite{CKSV1, CKSV2} to
more general operators than $\Delta + a^{\alpha}\Delta^{\alpha/2}$. Analytically,
the operators that we consider are certain functions of the Laplacian.
To be more precise,
we consider a Bernstein function $\phi:(0,\infty)\to (0,\infty)$ with $\phi(0+)=0$, i.e.,
$\phi$ is of the form
\begin{equation}\label{e:bernstein-function}
\phi(\lambda)=b \lambda +\int_{(0,\infty)}(1-e^{-\lambda t})\, \mu(dt)\, ,\quad \lambda >0\, ,
\end{equation}
where $b\ge 0$ and $\mu$ is a measure on $(0,\infty)$ satisfying
$\int_{(0,\infty)}(1\wedge t)\, \mu(dt)<\infty$. $\mu$ is called the L\'evy measure of $\phi$.
By Bochner's functional calculus one can define the operator
$\phi(\Delta):=-\phi(-\Delta)$ which on
$C_b^2({\mathbb R}^d)$, the collection of bounded $C^2$ functions in ${\mathbb R}^d$ with bounded derivatives,
turns out to be an integro-differential operator of the type
$$
b \Delta f(x)+\int_{{\mathbb R}^d}\left(f(x+y)-f(x)-\nabla f(x) \cdot y {\mathbf 1}_{\{|y|\le 1\}}\right)\, \nu(dy)\, ,
$$
where the measure $\nu$ has the form $\nu(dy)=j(|y|)\, dy $ with $j:(0,\infty)\to (0,\infty)$ given by
$$
j(r)=\int_0^{\infty} (4\pi t)^{-d/2} e^{-r^2/(4t)}\, \mu(dt)\, .
$$
In order for the operator to have both local and non-local parts we will assume that $b>0$
and $\mu\neq 0$. In fact, without loss of generality, throughout the paper we always
suppose that $b=1$. Note that by taking $\phi(\lambda)=\lambda + a^{\alpha}
\lambda^{\alpha/2}$ we are back to the operator $\Delta + a^{\alpha}\Delta^{\alpha/2}$.
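For this example the correspondence can be made explicit. Using the
elementary identity
$$
\lambda^{\alpha/2}=\frac{\alpha/2}{\Gamma(1-\alpha/2)}\int_0^\infty
(1-e^{-\lambda t})\, t^{-1-\alpha/2}\, dt\, ,\qquad \alpha\in(0,2)\, ,
$$
the choice $\phi(\lambda)=\lambda+a^{\alpha}\lambda^{\alpha/2}$ corresponds to
$b=1$ and $\mu(t)= a^{\alpha}\frac{\alpha/2}{\Gamma(1-\alpha/2)}\,
t^{-1-\alpha/2}$, and a standard computation with the formula for $j$ above
then gives
$$
j(r)=a^{\alpha}\,\frac{\alpha\, 2^{\alpha-1}\,\Gamma((d+\alpha)/2)}
{\pi^{d/2}\,\Gamma(1-\alpha/2)}\, r^{-d-\alpha}\, ,
$$
the L\'evy density of the rotationally invariant $\alpha$-stable process.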
The operator $\phi(\Delta)$ is the infinitesimal generator of the L\'evy process
$X$ that can be constructed as follows: Recall that a one-dimensional L\'evy
process $S=(S_t: t\ge 0)$ is called a subordinator if
it is non-negative and $S_0=0$.
A subordinator $S$ can be characterized by its Laplace exponent $\phi$ through the equality
$$
{\mathbb E}[e^{-\lambda S_t}]=e^{-t \phi(\lambda)}, \quad t>0, \lambda>0\, .
$$
The Laplace exponent $\phi$ can be written in the form \eqref{e:bernstein-function}.
We will assume that $b=1$.
Suppose that $W=(W_t: t\ge 0)$ is a $d$-dimensional
Brownian motion and $S=(S_t: t\ge 0)$ is a subordinator, independent of $W$, with Laplace exponent $\phi$.
The process $X=(X_t:\, t\ge0)$ defined by $X_t=W_{S_t}$ is
called a subordinate Brownian motion and its infinitesimal generator is
$\phi(\Delta)$. It is a sum of a Brownian motion and an independent purely
discontinuous (rotationally invariant) L\'evy process.
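A sample path of $X$ is straightforward to simulate for a concrete $\phi$.
The Python sketch below takes $\phi(\lambda)=\lambda+a^{\alpha}\lambda^{\alpha/2}$,
so that $S_t=t+a^2 T_t$ with $T$ an $(\alpha/2)$-stable subordinator; the
stable increments are drawn via Kanter's representation. Parametrisation
conventions for stable laws vary, so this is an illustrative sketch only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def stable_increments(beta, dt, n):
    # Kanter sampler: positive beta-stable increments with
    # E[exp(-lam * T_dt)] = exp(-dt * lam**beta), 0 < beta < 1.
    U = np.pi * rng.random(n)
    E = rng.exponential(size=n)
    a = (np.sin((1 - beta) * U)
         * np.sin(beta * U) ** (beta / (1 - beta))
         / np.sin(U) ** (1 / (1 - beta)))
    return dt ** (1 / beta) * (a / E) ** ((1 - beta) / beta)

def subordinate_bm_path(T=1.0, n=2000, alpha=1.2, a=1.0, d=2):
    # phi(lam) = lam + a**alpha * lam**(alpha/2):  S_t = t + a^2 T_t.
    dt = T / n
    dS = dt + a**2 * stable_increments(alpha / 2, dt, n)
    # W has E[exp(i th.W_t)] = exp(-t|th|^2): variance 2t per coordinate.
    dX = np.sqrt(2.0 * dS)[:, None] * rng.standard_normal((n, d))
    return np.cumsum(dX, axis=0)

print(subordinate_bm_path()[-1])
\end{verbatim}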
Potential theory of one-dimensional subordinate Brownian motions in this setting
was studied in \cite{KSV1}. In the current paper we look at the case when
$d\ge 2$. In order for our methods to work we need additional assumptions on the
Bernstein function $\phi$. We will assume that $\phi$ is a complete Bernstein
function, namely that the L\'evy measure $\mu$ has a completely monotone density.
By a slight abuse of notation we will denote the density by $\mu(t)$. For the
L\'evy density $\mu$ we assume a growth condition near zero: For any $K>0$, there exists $c=c(K) >1$ such that
\begin{equation}\label{H:1a}
\mu(r)\le c\, \mu(2r), \qquad \forall r\in (0, K)\, .
\end{equation}
We will later explain the role of these additional assumptions.
To state our main result, we first recall that an open set $D$ in $\bR^d$
(when $d\ge 2$) is said to be $C^{1,1}$ if
there exist a localization radius $R>0$ and a constant $\Lambda >0$ such that
for every $Q\in \partial D$, there exist a
$C^{1,1}$-function $\varphi=\varphi_Q: \bR^{d-1}\to \bR$ satisfying $\varphi (0)=0$,
$ \nabla\varphi (0)=(0, \dots 0)$, $\| \nabla
\varphi \|_\infty \leq \Lambda$, $| \nabla \varphi (x)-\nabla \varphi (y)| \leq
\Lambda |x-y|$, and an orthonormal coordinate system $CS_Q$: $y=(y_1, \cdots,
y_{d-1}, y_d)=:(\widetilde y, \, y_d)$ with its origin at $Q$ such that
$$
B(Q, R)\cap D=\{ y=(\widetilde y, y_d)\in B(0, R) \mbox{ in } CS_Q: y_d >
\varphi (\widetilde y) \}.
$$
The pair $(R, \Lambda)$ is called the characteristics of the $C^{1,1}$ open set $D$.
Note that a $C^{1,1}$ open set can be unbounded and disconnected.
For any $x\in D$, $\delta_D(x)$ denotes the Euclidean distance
between $x$ and $D^c$. For any $x\notin D$, $\delta_{\partial D}(x)$
denotes the Euclidean distance between $x$ and $\partial D$.
It is well known that, if $D$ is a $C^{1, 1}$ open set $D$ with
characteristics $(R, \Lambda)$, there exists $\widetilde R \le R$ such
that $D$ satisfies both the {\it uniform interior ball condition} and the
{\it uniform exterior ball condition} with radius $\widetilde R$:
for every $x\in D$ with $\delta_{D}(x)<\widetilde R$ and $y\in \bR^d
\setminus \overline D$ with $\delta_{\partial D}(y)<\widetilde R$, there are $z_x,
z_y\in \partial D$ so that $|x-z_x|=\delta_{ D}(x)$,
$|y-z_y|=\delta_{\partial D}(y)$ and that $B(x_0,\widetilde R)\subset D$ and $B(y_0,
\widetilde R)\subset \bR^d \setminus \overline D$ where $x_0=z_x+\widetilde
R(x-z_x)/|x-z_x|$ and $y_0=z_y+\widetilde R(y-z_y)/|y-z_y|$. Without loss
of generality, throughout this paper, we assume that the
characteristics $(R, \Lambda)$ of a $C^{1, 1}$ open set satisfies
$R=\widetilde R\le 1$ and $\Lambda\ge 1$.
For any open set $D\subset \bR^d$, $\tau_D:=\inf\{t>0: \,
X_t\notin D\}$ denotes the first exit time from $D$ by $X$.
\begin{defn}\label{D:1.1}
A function $f: {\mathbb R}^d\mapsto [0, \infty)$ is said to be
\begin{description}
\item{(1)} harmonic in an open set $D\subset {\mathbb R}^d$ with respect to
$X$ if for every open set $B$ whose closure is a compact subset of $D$,
\begin{equation}\label{e:har}
f(x)= {\mathbb E}_x \left[ f(X_{\tau_{B}})\right] \qquad
\mbox{for every } x\in B;
\end{equation}
\item{(2)} regular harmonic in $D$ for $X$ if
for each $x \in D$,
$f(x)= {\mathbb E}_x\left[f(X_{\tau_{D}})\right]$.
\end{description}
\end{defn}
We note that, by the strong Markov property of $X$,
every regular harmonic function is automatically harmonic.
Let $Q\in \partial D$. We will say that a function $f:\bR^d\to {\mathbb R}$
vanishes continuously on $ D^c \cap B(Q, r)$ if $f=0$ on $ D^c \cap
B(Q, r)$ and $f$ is continuous at every point of $\partial D\cap
B(Q,r)$. The following is the main result of this paper.
\begin{thm}\label{t:main}
Suppose that the Laplace exponent $\phi$ of
the subordinator $S$, independent of the Brownian motion $W$, is a complete Bernstein
function and that the L\'evy density of $S$ satisfies
\eqref{H:1a}. Let $X=(X_t:\, t\ge0 )$ be the subordinate Brownian motion defined by $X_t=W(S_t)$. For any $C^{1, 1}$ open set $D$ in $\bR^d$ with characteristics
$(R, \Lambda)$, there exists a positive constant
$C=C(d, \Lambda, R)$ such that for $r \in (0, R]$, $Q\in \partial D$ and any
nonnegative function $f$ in ${\mathbb R}^d$ which is
harmonic in $D \cap B(Q, r)$ with respect to $X$
and vanishes
continuously on $ D^c \cap B(Q, r)$, we have
\begin{equation}\label{e:bhp_m}
\frac{f(x)}{\delta_D(x)}\,\le C\,\frac{f(y)}{\delta_D(y)} \qquad
\hbox{for every } x, y\in D \cap B(Q, r/2).
\end{equation}
\end{thm}
We note that \eqref{e:bhp_m} is a strengthened version of the usual boundary
Harnack principle stated for the ratio of two non-negative functions,
$f$ and $g$, harmonic in $D \cap B(Q, r)$ with respect to $X$, and which says that
$$
\frac{f(x)}{g(x)}\,\le C\,\frac{f(y)}{g(y)} \qquad
\hbox{for every } x, y\in D \cap B(Q, r/2).
$$
Indeed, the above inequality is a consequence of \eqref{e:bhp_m}.
We note that \eqref{e:bhp_m} gives the precise boundary decay of non-negative harmonic functions and
that the function $x\mapsto \delta_D(x)$ is not harmonic in $D \cap B(Q, r)$ with respect to $X$.
\begin{remark}\label{r:counterexample}{\rm
The same type of boundary Harnack principle in $C^{1,1}$ domains is
valid also for Brownian motions, namely the boundary decay rate is of
the order $\delta_D(x)$. Since on the small scale the diffusion part of
$X$ dominates, one would expect that
harmonic functions of $X$ and of Brownian
motion
have the same decay rate
at the boundary.
For this reason, some people might expect that some kind of perturbation methods
can be used to prove the BHP for $X$.
We note that it is unlikely that any perturbation
method would work because of the following: Suppose that instead of $X$ we consider a process
$X^a$ with the infinitesimal generator
$$
{\sL}^af(x)= \Delta f(x) +\int_{{\mathbb R}^d}\left(f(x+y)-f(x)-\nabla f(x) \cdot y
{\mathbf 1}_{\{|y|\le 1\}}\right)\, {\nu}
^a(dy)\, ,
$$
where ${\nu}^a(dy)={\mathbf 1}_{\{|y|\le
a\}}\, \nu(dy)$ with $0<a<\infty$.
Thus $X^a$ is the process $X$ with jumps of size larger than
$a$ suppressed.
In Section \ref{counterexample} we present an example of a
(bounded) $C^{1,1}$ domain $D$ on which the boundary Harnack principle for
$X^a$ fails, even for regular harmonic functions vanishing on $D^c$. Note that
if we think of $X$ as a perturbation of Brownian motion, then
$X^a$ is an even smaller perturbation of the same Brownian motion. The counterexample
in Section \ref{counterexample} shows that, despite the (seemingly)
local nature of the BHP, one needs some information of the structure of large jumps of $X$.
}
\end{remark}
For any open set $D\subset {\mathbb R}^d$, we will use $X^D$ to denote the process
defined by $X^D_t(\omega)=
X_t(\omega)$ if $t<\tau_D(\omega)$ and $X^D_t(\omega)=\partial$ if $t\ge
\tau_D(\omega)$, where $\partial$ is a cemetery point.
The Green function of $X^D$ will be denoted by $G_D(x, y)$. For the precise
definition of $G_D$, see Section 2.
To state our result on Green function estimates, we introduce a function $g_D$ first.
For $d\ge 2$, we define for $x, y \in D$,
$$
g_D(x, y)=
\begin{cases}
\frac{1} {|x-y|^{d-2}} \left(1\wedge \frac{ \delta_D(x) \delta_D(y)}{ |x-y|^{2}}\right) & \text{ when } d \ge3,\\
\log\left(1+\frac{ \delta_D(x) \delta_D(y)}{ |x-y|^{2}}\right) & \text{ when } d=2.
\end{cases}
$$
\begin{thm}\label{t-main-green}
Suppose that the Laplace exponent $\phi$ of $S$ is a complete
Bernstein function and that the L\'evy density of $S$ satisfies
\eqref{H:1a}.
For any bounded $C^{1,1}$ open set $D\subset {\mathbb R}^d$, there exists
$C=C(D)>1$ such that for
all $x, y \in D$
\begin{equation}\label{e:1.3}
C^{-1} \, g_D(x, y) \leq G_D(x, y) \leq C\, g_D(x, y) .
\end{equation}
\end{thm}
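For orientation, the comparison function $g_D$ is elementary to evaluate;
a minimal Python sketch (the distances $\delta_D(x)$, $\delta_D(y)$ must be
supplied by the caller) is:
\begin{verbatim}
import numpy as np

def g_D(x, y, delta_x, delta_y, d):
    # Two-sided Green function estimate g_D of the theorem above;
    # delta_x = dist(x, D^c), delta_y = dist(y, D^c).
    r2 = float(np.sum((np.asarray(x) - np.asarray(y)) ** 2))
    s = delta_x * delta_y / r2
    if d >= 3:
        return min(1.0, s) * r2 ** (-(d - 2) / 2.0)
    return np.log1p(s)  # d == 2

print(g_D([0.1, 0.0, 0.0], [0.4, 0.0, 0.0], 0.1, 0.4, d=3))
\end{verbatim}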
Finally, we state the result about the Martin boundary of a bounded $C^{1,1}$ open set $D$ with respect to $X^D$. Fix $x_0\in D$ and define
$$
M_D(x, y):=\frac{G_D(x, y)}{G_D(x_0, y)}, \qquad x, y\in D,~y\neq x_0.
$$
A function $f$ is called a harmonic function for $X^D$ if it is harmonic for $X$ in $D$ and
vanishes outside $D$.
A positive harmonic function $f$ for $X^{D}$ is minimal if, whenever
$g$ is a positive harmonic function for $X^{D}$ with $g\le f$ on $D$,
one must have $f=cg$ for some constant $c$.
\begin{thm}\label{t-main-martin}
Suppose that $D$ is a bounded $C^{1,1}$ open set in ${\mathbb R}^d$. For every $z\in \partial D$, there exists $M_D(x,z):=\lim_{y\to z}M_D(x,y)$.
Further, for every $z \in \partial D$, $M_D(\cdot, z)$ is a minimal harmonic function for $X^{D}$ and $M_D(\cdot, z_1)\not=M_D(\cdot, z_2)$ if $z_1\not=z_2$.
Thus the minimal Martin boundary of $D$ can be identified with the Euclidean boundary.
\end{thm}
Thus, by the general theory of Martin representation in \cite{KW} and
Theorem \ref{t-main-martin} we conclude that, for every harmonic
function $u\geq 0$ with respect to $X^{D}$, there is a unique
finite measure $\nu$ on $\partial
D$ such that
$
u(x) =\int_{\partial D} M_D(x,z) \nu (dz).
$
Let us now describe the main ingredients of the proof of Theorem \ref{t:main},
the boundary Harnack principle. We follow the general strategy for proving the
boundary Harnack principle in different settings which requires the Carleson estimate, and upper and lower
estimates on exit probabilities and exit times from certain sets usually called
``boxes'' (see \cite{BBC, BB, CKSV1, G, KS}).
In Theorem \ref{carleson} we prove the Carleson estimate for a Lipschitz
open set by modifying the proof in \cite{CKSV1}. In order to obtain the upper exit
probability and exit times estimates, we follow the approach from \cite{CKSV1},
the so-called ``test function'' method (which was modeled after some earlier ideas,
see \cite{BBC, G}), but have to make major modifications. In \cite{CKSV1}, the test
functions are power functions of the form $x\mapsto (x_d)^p$ which are either
superharmonic or subharmonic for the corresponding process, and the values of
the generator on these test functions are computed in detail. In our setting,
the power functions are neither superharmonic nor subharmonic, and explicit
calculations cannot be carried out because of the lack of explicit form of the
L\'evy measure. Instead we use the approach developed in \cite{KSV3} for the
case of certain pure-jump subordinate Brownian motions, which seems to be
quite versatile to cover various other cases.
One of the main ingredients
in \cite{KSV3} comes from the fluctuation theory of one-dimensional
L\'evy processes. Its purpose is to identify a correct boundary decay rate by finding an
appropriate harmonic function. Let $Z=(Z_t:\, t\ge 0)$ be the one-dimensional subordinate
Brownian motion defined by $Z_t:=W^d_{S_t}$, and let $V$ be the renewal function of the
ladder height process of $Z$. The function $V$ is harmonic for the process $Z$ killed upon
exiting $(0,\infty)$, and the function $w(x):= V(x_d){\bf 1}_{\{x_d >0\}}$, $x\in {\mathbb R}^d$, is
harmonic for the process $X$ killed upon exiting the half space ${\mathbb R}^d_+:=\{ x=(x_1, \dots,
x_{d-1}, x_d) \in \bR^d: x_d > 0 \}$ (Theorem \ref{t:Sil}). Therefore, $w$ gives the correct
rate of decay of harmonic functions near the boundary of ${\mathbb R}^d_+$. We will use the function
$w$ as our test function. Note that the assumption that $\phi$ is a complete Bernstein
function implies that $w$ is smooth. Using smoothness and harmonicity of $w$ together
with the characterization of harmonic functions recently established in \cite{C}, we
show that $(\Delta + \mathcal A) w \equiv 0$ on the half space (Theorem \ref{c:Aw=0}).
Consequently we prove the following fact in Lemma \ref{L:Main}, which is the key to
proving upper estimates: If $D$ is a $C^{1,1}$ open set with characteristics $(R, \Lambda)$,
$Q\in \partial D$ and $h(y)=V(\delta_D(y)) {\bf 1}_{D\cap B(Q,R)}$, then $(\Delta+\mathcal A) h(y)$
is a.e.~well defined and bounded for $y\in D$ close enough to the boundary point $Q$. Using
this lemma, we give necessary exit distribution estimates in Lemma \ref{L:2}. Here we modify
the test function $h$ by adding a quadratic type function (in one variable) -- this is
necessary due to the presence of the Laplacian. The desired exit distribution estimates
are directly derived by applying Dynkin's formula to the new test function.
The reader will note that our proof is even shorter than the one in \cite{CKSV1},
partly because, in \cite{CKSV1}, the uniformity of the boundary Harnack principle for $\Delta +
a^{\alpha}\Delta^{\alpha/2}$ in the weight $a \in (0, M]$ is established.
In order to prove the lower bound for the exit probabilities we compare the
process $X$ killed upon exiting a certain box $\widehat{D}$ with the so-called
subordinate killed Brownian motion obtained by first killing Brownian motion
upon exiting the box $\widehat{D}$, and then by subordinating the obtained process.
If the latter process is denoted by $Y^{\widehat{D}}$, then its infinitesimal generator is equal to
$-\phi(-\Delta|_{\widehat{D}})$.
Here $\Delta|_{\widehat{D}}$ is the Dirichlet Laplacian and
$-\phi(-\Delta|_{\widehat{D}})$ is constructed by Bochner's subordination. The
advantage of this approach is that the exit probabilities of $X^{\widehat{D}}$
dominate from the above those of the process $Y^{\widehat{D}}$, while the
latter can be rather easily computed, see \cite{SV08}. This idea is
carried out in Lemma \ref{L:200} (as well as for some other lower bounds
throughout the paper).
Once the boundary Harnack principle has been established, proofs of Theorems
\ref{t-main-green} and \ref{t-main-martin} are similar to the corresponding
proofs in \cite{CKSV2} for the operator $\Delta+ a^{\alpha}
\Delta^{\alpha/2}$.
Therefore we do not give complete proofs of these two theorems in this paper,
only indicate the necessary changes to the proofs in \cite{CKSV2}.
The rest of the paper is organized as follows.
In the next section we precisely describe
the settings and recall necessary preliminary results. Section 3 is
devoted to the analysis of the process and harmonic functions in the
half-space.
Section 4, on the analysis in $C^{1,1}$ open sets, is central to the paper;
this is where most of the new ideas appear.
In this rather technical section we establish the upper and lower
bounds on the exit probabilities and exit times. In Section 5 we
first prove the Carleson estimate for Lipschitz open sets and then
prove the main Theorem \ref{t:main}. In Section 6 we provide the
counterexample already mentioned in Remark \ref{r:counterexample}.
Finally, in Section 7 we explain the differences between the proofs of
Theorems \ref{t-main-green} and \ref{t-main-martin} and their
counterparts from \cite{CKSV2}.
Throughout this paper, the constants $C_1$, $C_2$, $R$, $R_1$, $R_2$, $R_3$ will be fixed.
The lowercase constants $c_1, c_2, \cdots$ will denote generic constants whose exact values
are not important and can change from one appearance to another.
The dependence of the lower case constants on the dimension $d$
may not be mentioned explicitly. We will
use ``$:=$" to denote a definition, which is read as ``is defined to
be". For $a, b\in \bR$, $a\wedge b:=\min \{a, b\}$ and $a\vee
b:=\max\{a, b\}$. For every function $f$, let $f^+:=f \vee 0$.
For every
function $f$, we extend its definition to the cemetery point $\partial$ by setting
$f(\partial )=0$.
We will use $dx$ to denote the
Lebesgue measure in $\bR^d$ and, for a Borel set $A\subset \bR^d$, we
also use $|A|$ to denote its Lebesgue measure.
\section{Setting and Preliminary Results}
A $C^{\infty}$ function $\phi:(0,\infty)\to [0,\infty)$ is called a
Bernstein function if $(-1)^n D^n \phi\le 0$ for every
$n=1, 2, \dots$.
Every Bernstein function has a representation $
\phi(\lambda)=a+b\lambda +\int_{(0,\infty)}(1-e^{-\lambda t})\,
\mu(dt) $ where $a,b\ge 0$ and $\mu$ is a measure on $(0,\infty)$
satisfying $\int_{(0,\infty)}(1\wedge t)\, \mu(dt)<\infty$; $a$ is
called the killing coefficient, $b$ the drift and $\mu$ the L\'evy
measure of the Bernstein function. A Bernstein function $\phi$ is
called a complete Bernstein function if the L\'evy measure $\mu$ has
a completely monotone density $\mu(t)$, i.e., $(-1)^n D^n \mu(t)\ge 0$
for every non-negative integer $n$ and all $t>0$. Here and below, by abuse of
notation we denote the L\'evy density by $\mu(t)$. For more on
Bernstein and complete Bernstein functions we refer the readers to
\cite{SSV}.
A Bernstein function $\phi$ on $(0, \infty)$ is the Laplace exponent of a
subordinator if and only if $\phi(0+)=0$. Suppose that $S$ is a
subordinator with Laplace exponent $\phi$. $S$ is called a complete
subordinator if $\phi$ is a complete Bernstein function.
The potential measure $U$ of $S$ is defined by
\begin{equation}\label{potential measure}
U(A)={\mathbb E} \int_0^{\infty}
{\bf 1}_{\{S_t\in A\}}
\, dt, \quad A\subset [0,
\infty).
\end{equation}
Note that $U(A)$ is the expected time the subordinator $S$ spends in
the set $A$.
Throughout the remainder of this paper, we assume that $S=(S_t:\ t\ge 0)$
is a complete subordinator with a positive drift and, without loss
of generality, we shall assume that the drift of $S$ is equal to 1.
Thus the Laplace exponent of $S$ can be written as
$$
\phi(\lambda):=\lambda+\psi(\lambda)
\qquad
\text{ where }
~~
\psi(\lambda):=\int_{(0,\infty)}(1-e^{-\lambda t})\, \mu(dt).
$$
We will exclude the trivial case of $S_t=t$, that is
the case of $\psi\equiv 0$.
Since the drift of $S$ is equal to 1, the potential measure $U$ of $S$ has a
completely monotone density $u$
(cf. \cite[Corollary 5.4 and Corollary 5.5]{BBKRSV}).
Suppose that $W=(W_t: t\ge 0)$ is a Brownian motion in ${\mathbb R}^d$ independent
of $S$ and with
$$
{\mathbb E}_x[e^{i\theta\cdot (W_t-W_0)}]=e^{-t|\theta|^2}, \quad \mbox{ for all }
x, \theta\in {\mathbb R}^d.
$$
The process $X=(X_t: t\ge 0)$ defined by $X_t=W_{S_t}$ is called
a subordinate Brownian motion. It follows from \cite[Chapter 5]{BBKRSV} that
$X$ is a L\'evy process with L\'evy exponent $\phi(|\theta|^2)=|\theta|^2
+\psi(|\theta|^2)$:
$$
{\mathbb E}_x[e^{i\theta\cdot (X_t-X_0)}]=e^{-t\phi(|\theta|^2)}, \quad \mbox{ for all }
x, \theta\in {\mathbb R}^d.
$$
The L\'evy measure of the process $X$ has a density $J$, called
the L\'evy density, given by
$J(x)=j(|x|)$ where
\begin{equation}\label{e:representation-j}
j(r)
:=\int^{\infty}_0(4\pi t)^{-d/2}e^{-r^2/(4t)}\mu(t)\, dt, \qquad r>0.
\end{equation}
Note that the function $r\mapsto j(r)$ is continuous and decreasing
on $(0, \infty)$. We will sometimes use the notation $J(x,y)$ for
$J(x-y)$.
The function $J(x,y)$ is the L\'evy intensity of $X$. It determines
a L\'evy system for $X$, which describes the jumps of the process
$X$: For any non-negative measurable function $f$ on $\bR_+ \times
\bR^d\times \bR^d$ with $f(s, y, y)=0$ for all $y\in \bR^d$, any
stopping time $T$ (with respect to the filtration of $X$) and any
$x\in \bR^d$,
\begin{equation}\label{e:levy}
{\mathbb E}_x \left[\sum_{s\le T} f(s,X_{s-}, X_s) \right]= {\mathbb E}_x \left[
\int_0^T \left( \int_{\bR^d} f(s,X_s, y) J(X_s,y) dy \right)
ds \right].
\end{equation}
(See, for example, \cite[Proof of Lemma 4.7]{CK1} and \cite[Appendix
A]{CK2}.)
Recall that for any open set $U\subset \bR^d$,
$\tau_U=\inf\{t>0: \, X_t\notin U\}$ is the first exit time
from $U$ by $X$.
The following simple result will be used in Section 5.
\begin{lemma}\label{L:2.00}
For every $\varrho>0$, there exists $c=c(\varrho)>0$ such that for
every $x_0 \in \bR^d$ and $r \in (0, \varrho]$,
\begin{equation} \label{e:ext}
c^{-1} r^2 \, \le \, {\mathbb E}_{x_0}\left[\tau_{B(x_0,r)}\right]\, \le\, c\,r^2.
\end{equation}
\end{lemma}
{\medskip\noindent {\bf Proof. }}
In the case $d\ge 3$, this lemma has been proved in \cite{RSV}.
Moreover, one can easily adapt the proofs of \cite[Lemmas 2.1--2.2]{SV05} to arrive at
the desired lower bound for all dimensions.
Here
we provide a proof of the desired upper bound that works for all dimensions.
Let
$C^2_0(\bR^d)$ be the collection of $C^2$ functions in ${\mathbb R}^d$ vanishing at
infinity.
For any $f\in C^2_0({\mathbb R}^d)$,
we define
$$
{\mathcal A} f(y)=\int_{{\mathbb R}^d}(f(z+y)-f(y)-z\cdot \nabla f(y)1_{\{|z|<1\}})J(z)dz, \quad y\in {\mathbb R}^d.
$$
Let $N\ge 5$ be such that
$$
\int_{\{|z|<1\}}|z|^2J(z)dz<\frac12N^2.
$$
Let $g$ be a radial $C^2$ function taking values in $[0, 2]$ such that
$$
g(y)=\begin{cases}|y|^2, &|y|<1\\
2, &2\le |y|\le 3\\
0, &|y|>N-1.
\end{cases}
$$
For any $r>0$, put $f(y)=g(y/r)$. Then for $y\in B(0, r)$, $\Delta f(y)=2dr^{-2}$. For any
$y\in B(0, r)$, we have
\begin{eqnarray*}
|{\mathcal A} f(y)|
&=&|\int_{{\mathbb R}^d}(f(z+y)-f(y)-z\cdot \nabla f(y)1_{\{|z|<Nr\}})J(z)dz|\\
&\le&|\int_{\{|z|<Nr\}}(f(z+y)-f(y)-z\cdot \nabla f(y))J(z)dz|
\ + \ \int_{\{Nr<|z|\}}f(y)J(z)dz\\
&\le& c_1r^{-2}\int_{\{|z|<Nr\}}|z|^2J(z)dz + r^{-2}|y|^2\int_{\{Nr<|z|\}}J(z)dz\\
&\le& c_1r^{-2}\int_{\{|z|<Nr\}}|z|^2J(z)dz +N^{-2}r^{-2}\int_{\{Nr<|z|<1\}}|z|^2J(z)dz+\int_{\{|z|>1\}}J(z)dz.
\end{eqnarray*}
Thus we know that there exist $r_0\in (0, 1)$ and $c_2>0$ such that for any $r\in (0, r_0)$,
$$
\Delta f(y)+{\mathcal A} f(y)\ge c_2 r^{-2}, \quad y\in B(0, r).
$$
Using this and the fact that $\Delta+{\mathcal A}$ is the infinitesimal generator of
the process $X$, by Dynkin's formula,
we have that for $r\in (0, r_0)$,
\begin{eqnarray*}
{\mathbb E}_{x_0}[\tau_{B(x_0, r)}]&=&{\mathbb E}_{0}[\tau_{B(0, r)}]=\lim_{t\uparrow\infty}{\mathbb E}_{0}[\tau_{B(0, r)}\wedge t]\\
&\le& c_2^{-1} r^2\lim_{t\uparrow\infty}{\mathbb E}_{0}\int^{\tau_{B(0, r)}\wedge t}_0 (\Delta + {\mathcal A})f(X_s)ds\\
&=&c_2^{-1} r^2\lim_{t\uparrow\infty}{\mathbb E}_{0}[f(X_{\tau_{B(0, r)}\wedge t})]\le 2c_2^{-1} r^2\,.
\end{eqnarray*}
Now the desired upper bound follows easily.
{\hfill $\Box$ \bigskip}
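The two-sided bound \eqref{e:ext} can also be checked numerically for a
concrete $\phi$. The following crude Monte Carlo sketch (Euler time stepping,
Kanter sampling of the stable part, illustrative parameters; the estimate is
biased by the time discretisation) tabulates ${\mathbb E}_0[\tau_{B(0,r)}]/r^2$,
which should be roughly constant in $r$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def stable_inc(beta, dt, size):
    # Kanter sampler for positive beta-stable increments (sketch;
    # parametrisation conventions vary).
    U = np.pi * rng.random(size)
    E = rng.exponential(size=size)
    a = (np.sin((1 - beta) * U) * np.sin(beta * U) ** (beta / (1 - beta))
         / np.sin(U) ** (1 / (1 - beta)))
    return dt ** (1 / beta) * (a / E) ** ((1 - beta) / beta)

def mean_exit_time(r, d=2, alpha=1.2, dt=1e-4, npaths=200):
    # Euler estimate of E_0[tau_{B(0,r)}] for X_t = W_{S_t},
    # with S_t = t + (alpha/2)-stable part.
    taus = np.empty(npaths)
    for i in range(npaths):
        x = np.zeros(d); t = 0.0
        while np.dot(x, x) < r * r:
            dS = dt + stable_inc(alpha / 2, dt, 1)[0]
            x = x + np.sqrt(2 * dS) * rng.standard_normal(d)
            t += dt
        taus[i] = t
    return taus.mean()

for r in (0.1, 0.2, 0.4):
    print(r, mean_exit_time(r) / r**2)
\end{verbatim}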
In the remainder of this paper, we will need some control on the behavior of $j$ near the origin.
For this, we will assume that for any $K>0$, there exists $c=c(K) >1$ such that
\begin{equation}\label{H:1b}
\mu(r)\le c\, \mu(2r), \qquad \forall r\in (0, K).
\end{equation}
On the other hand, since $\phi$ is a complete Bernstein function, it follows from
\cite[Lemma 2.1]{KSV3} that there exists $c>1$ such that $\mu(t)\le c
\mu(t+1)$ for every $t>1$. Thus by repeating the proof of
\cite[Lemma 4.2]{RSV} (see also \cite[Proposition 1.3.5]{KSV2}), we can show that
for any $K>0$,
there exists $c=c(K)>1$ such that
\begin{equation}\label{H:1}
j(r)\le c\, j(2r), \qquad \forall r\in (0, K),
\end{equation}
and, there exists $c>1$ such that
\begin{equation}\label{H:2}
j(r)\le c\, j(r+1), \qquad \forall r>1.
\end{equation}
Note that, as a consequence of \eqref{H:1}, we have
that, for any $K>0$,
\begin{equation}\label{H:1n}
j(ar)\le c\, a^{-\nu} j(r), \qquad \forall r\in (0, K)
\quad\text{and}\quad a \in (0, 1)
\end{equation}
where $c=c(K)$ is the constant in \eqref{H:1} and $\nu=\nu(K):=\log_2 c$.
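One way to see \eqref{H:1n}: for $a\in(0,1)$ choose the integer $n\ge1$ with
$2^{-n}\le a<2^{-n+1}$; since $j$ is decreasing, iterating \eqref{H:1} $n$
times gives
$$
j(ar)\,\le\, j(2^{-n}r)\,\le\, c^n j(r)\,=\,(2^n)^{\log_2 c} j(r)
\,\le\, (2/a)^{\nu} j(r)\,=\, c\, a^{-\nu} j(r)\, .
$$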
The following Harnack inequality
will be used to prove the main result of this paper.
\begin{prop}
[Harnack inequality]\label{uhp}
There exists a constant $c>0$ such that for any
$r\in (0,1]$ and $x_0\in {\mathbb R}^d$ and any function $f$ which is nonnegative in
${\mathbb R}^d$ and harmonic in $B(x_0, r)$ with respect to $X$ we have
$$
f(x)\le c f(y) \qquad \mbox{ for all } x, y\in B(x_0, r/2).
$$
\end{prop}
{\medskip\noindent {\bf Proof. }}
We first deal with the case $d\ge 3$.
When $f$ is bounded, this proposition is just \cite[Theorem 4.5]{RSV}.
Using the same argument as in the proof of
\cite[Corollary 4.7]{RSV}, one can easily see that \cite[Theorem
4.5]{RSV} can be extended to any nonnegative harmonic function.
The assertions of the proposition in the cases of $d=2$ and $d=1$
follow easily from the assertion in the case $d\ge 3$. Since the arguments
are similar, we will only spell out the details in the case $d=2$.
For any $x\in {\mathbb R}^3$, $x=(\widetilde{x},x^3)$, where $\widetilde{x}\in {\mathbb R}^2$.
Analogous notation will also be used for other objects in ${\mathbb R}^3$.
Let $X=(X_t, {\mathbb P}_x)$ be the subordinate Brownian motion in ${\mathbb R}^3$ and write $X=(\widetilde{X}, X^3)$.
Note that $\widetilde{X}$ has the same distribution under
${\mathbb P}_{(\widetilde{x},0)}$ and ${\mathbb P}_{(\widetilde{x},x^3)}$ for any $x^3\in {\mathbb R}$.
Hence we can define ${\mathbb P}_{\widetilde{x}}:={\mathbb P}_{(\widetilde{x},0)}$. The process
$(\widetilde{X}, {\mathbb P}_{\widetilde{x}})$ is a subordinate Brownian motion in ${\mathbb R}^2$ via
the same subordinator as the one used to define $X$.
For any given
$\widetilde{f}:{\mathbb R}^{2}\to [0,\infty)$, we extend it to ${\mathbb R}^3$ by defining
$f(x)=f((\widetilde{x},x^3)):=\widetilde{f}(\widetilde{x})$. Then
\begin{description}
\item{(1)} If $\widetilde{f}$ is regular harmonic (with respect to $\widetilde{X}$)
in an open set $\widetilde{D}\subset {\mathbb R}^{2}$, then $f$ is regular harmonic
(with respect to $X$) in the cylinder $D:=\widetilde{D}\times {\mathbb R}$.
Indeed, let $\widetilde{\tau}_{\widetilde{D}}:=\inf\{t>0:\, \widetilde{X}_t\notin \widetilde{D}\}$
be the exit time of $\widetilde{X}$ from $\widetilde{D}$, and
$\tau_D:=\inf\{t>0:\, X_t\notin D\}$. Then clearly $\widetilde{\tau}_{\widetilde{D}}=\tau_D$. Thus, for any $x=(\widetilde{x},x^3)\in D$,
$$
{\mathbb E}_x[f(X_{\tau_D})]={\mathbb E}_{\widetilde{x}}[\widetilde{f}(\widetilde{X}_{\widetilde{\tau}_{\widetilde{D}}})]=\widetilde{f}(\widetilde{x})=f(x)\, .
$$
\item{(2)} If $\widetilde{f}$ is harmonic (with respect to $\widetilde{X}$) in an open set
$\widetilde{D}\subset {\mathbb R}^{2}$, then $f$ is harmonic (with respect to $X$) in the
cylinder $D:=\widetilde{D}\times {\mathbb R}$. Indeed, let $B\subset D$ be open and
relatively compact. Then there exists a cylinder $C=\widetilde{C}\times {\mathbb R}$
such that $B\subset C$ and $\widetilde{C}\subset \widetilde{D}$ is open and relatively
compact (in $\widetilde{D}$). Since $\widetilde{f}$ is harmonic (with respect to
$\widetilde{X}$) in $\widetilde{D}$, it is regular harmonic in $\widetilde{C}$. By (1),
$f$ is regular harmonic (with respect to $X$) in $C$, and therefore
also harmonic in $C$. Since $B$ is compactly contained in $C$, we see that
$$
f(x)={\mathbb E}_x[f(X_{\tau_B})]\, , \qquad \textrm{for all } x\in B\, .
$$
\end{description}
Let $r\in (0,1]$, $\widetilde{x}_0\in
{\mathbb R}^2$ and define $x_0:=(\widetilde{x}_0,0)$.
Assume that $\widetilde{f}:
{\mathbb R}^2\to [0,\infty)$ is harmonic
(with respect to $\widetilde{X}$) in $B(\widetilde{x}_0, r)$. Then $f$
defined by $f(x)=\widetilde{f}(\widetilde{x})$ is harmonic in $B(\widetilde{x}_0, r)\times {\mathbb R}$.
In particular, $f$ is harmonic in $B(x_0,r)$. By the assertion in the case $d=3$,
$$
f(x)\le c f(y)\, ,\qquad \textrm{for all }x,y\in B(x_0,r/2)\, .
$$
Let $\widetilde{x}, \widetilde{y}\in B(\widetilde{x}_0,r/2)$, and define $x:=(\widetilde{x},0)$, $y:=(\widetilde{y},0)$. Then
$$
\widetilde{f}(\widetilde{x})=f(x)\le cf(y)=c\widetilde{f}(\widetilde{y})\, .
$$
{\hfill $\Box$ \bigskip}
It follows from \cite[Chapter 5]{BBKRSV} that the process $X$ has a transition
density $p(t, x, y)$, which is jointly continuous. Using this and the strong Markov property, one can easily check that
$$
p_D(t, x, y):=p(t, x, y)-
{\mathbb E}_x[ p(t-\tau_D, X_{\tau_D}, y); t>\tau_D], \quad x, y \in D
$$
is continuous and is the transition density of $X^D$.
For any bounded open set $D\subset {\mathbb R}^d$, we
will use $G_D$ to denote the Green function of $X^D$, i.e.,
$$
G_D(x, y):=\int^\infty_0p_D(t, x, y)dt, \quad x, y\in D.
$$
Note that $G_D(x,y)$ is continuous on $\{(x,y) \in D \times D: x\not=y\}$.
\section{Analysis on half-space}
Recall that $X=(X_t:\, t\ge 0)$ is the $d$-dimensional subordinate
Brownian motion defined by $X_t=W_{S_t}$, where $W=(W^1,\dots, W^d)$
is a $d$-dimensional Brownian motion and $S=(S_t:\, t\ge 0)$ an
independent complete subordinator whose drift is equal to 1 and
whose L\'evy density satisfies \eqref{H:1a}.
Let $Z=(Z_t:\, t\ge 0)$ be the one-dimensional subordinate Brownian
motion defined as $Z_t:=W^d_{S_t}$. Let $\overline{Z}_t:=\sup\{0\vee
Z_s:0\le s\le t\}$ be the supremum process of $Z$ and let $L=(L_t:\,
t\ge 0)$ be a local time of $\overline{Z}-Z$ at $0$. $L$ is also
called a local time of the process $Z$ reflected at the supremum.
The right continuous inverse $L^{-1}_t$ of $L$ is a subordinator and
is called the ladder time process of $Z$. The process
$H_t=\overline{Z}_{L^{-1}_t}$ is also a subordinator and is called
the ladder height process of $Z$. (For the basic properties of the
ladder time and ladder height processes, we refer our readers to
\cite[Chapter 6]{Ber}.) The ladder height process $H$ has a drift
(\cite[Lemma 2.1]{KSV1}). The potential measure
of the subordinator $H$ will be denoted by $V$.
Let $V(t):=V((0, t))$ be the renewal function of $H$.
By \cite[Theorem 5, page 79]{Ber} and \cite[Lemma 2.1]{KSV1}, $V$ is
absolutely continuous and has a continuous and strictly positive
density $v$ such that $v(0+)=1$.
The functions $V$ and $v$ enjoy the following estimates near the origin.
\begin{lemma}
{\rm(\cite[Lemma 2.2]{KSV1})} \label{l:estimate-for-V} Let $R>0$.
There exists a constant $c=c(R) >1$ such that for all $x\in (0,R]$,
we have $ c^{-1} \le v(x) \le c $ and $ c^{-1} x\le V(x) \le c x\, .
$
\end{lemma}
By \cite[Proposition 2.4]{KSV3}, the Laplace exponent $\chi$ of the
ladder height process $H$ of $Z_t$ is also a complete Bernstein
function. Using this and the fact that $\chi$ has a drift,
we see from \cite[Corollary 2.3]{KSV2} that $v$ is completely
monotone. In particular, $v$ and the renewal function $V$ are
$C^{\infty}$ functions.
We will use $\bR^d_+$ to denote the half-space $\{ x=(x_1, \dots,
x_{d-1}, x_d):=(\tilde{x}, x_d) \in \bR^d: x_d > 0 \}$.
Define $w(x):=V((x_d)^+)$.
\begin{thm}\label{t:Sil}
The function $w$ is harmonic in ${\mathbb R}^d_+$ with respect to $X$ and,
for any $r>0$, regular harmonic in ${\mathbb R}^{d-1}\times (0, r)$ with
respect to $X$.
\end{thm}
{\medskip\noindent {\bf Proof. }} Since $Z_t:=W^d_{S_t}$ has a transition density, it satisfies
the condition ACC in \cite{Sil}, namely the resolvent kernels are
absolutely continuous. The assumption in \cite{Sil} that $0$ is
regular for $(0,\infty)$ is also satisfied since $X$ is of unbounded
variation. Further, by symmetry of $Z$, the notions of
coharmonic and harmonic functions coincide.
Now the theorem follows by the same argument as in \cite[Theorem 4.1]{KSV3}.
{\hfill $\Box$ \bigskip}
Unlike \cite[Proposition 4.2]{KSV3}, we prove the next result
without using the boundary Harnack principle.
\begin{prop}\label{c:cforI}
For all positive constants $r_0$ and $L$, we have
$$
\sup_{x \in {\mathbb R}^d:\, 0<x_d <L} \int_{B(x, r_0)^c \cap \bR^d_+}
w(y) j(|x-y|)\, dy < \infty\, .
$$
\end{prop}
{\medskip\noindent {\bf Proof. }} Without loss of generality, we assume $\widetilde x=0$.
We consider two separate cases.
\noindent (a) Suppose $L>x_d \ge r_0/4$. By
\eqref{e:levy} and Theorem \ref{t:Sil},
for every $x \in {\mathbb R}^d_+$,
\begin{eqnarray}
w(x)
&\ge& {\mathbb E}_x\left[w\big(X_{\tau_{ B(x, r_0/2)\cap \bR^d_+}}\big):
X_{\tau_{ B(x, r_0/2)\cap \bR^d_+}} \in {B(x, r_0)}^c \cap
\bR^d_+ \right]\nonumber\\
&=& {\mathbb E}_x \left[\int_0^{\tau_{B(x, r_0/2)\cap \bR^d_+}} \int_{{B(x,
r_0)}^c \cap \bR^d_+} j(|X_t-y|)w(y)\, dydt \right]\, .
\label{e:fdfsaff}
\end{eqnarray}
Since $|z-y| \le |x-z|+|x-y| \le r_0 +|x-y| \le 2|x-y|$ for $(z,y)
\in B(x, r_0/2) \times B(x, r_0)^c$, using \eqref{H:1} and
\eqref{H:2}, we have $j(|z-y|) \ge c_1 j(|x-y|)$.
Thus, combining this with \eqref{e:fdfsaff}, we obtain that
$$
\int_{B(x, r_0)^c \cap \bR^d_+} w(y) j(|x-y|) dy \le
c_1^{-1} \frac{w(x)}{{\mathbb E}_{x}[\tau_{B(x, r_0/2)\cap \bR^d_+}]} \le c_1^{-1} \frac{V(L)}{{\mathbb E}_{0}[\tau_{B(0, r_0/4)}]}.
$$
\noindent (b) Suppose $x_d < r_0/4$. Note that if $|y-x|>r_0$, then
$|y|\ge |y-x|-|x| >3r_0/4$ and $|y| \le |y-x| +|x| \le |y-x|+ r_0/4
\le |y-x|+ |y-x|/4$. Thus, using \eqref{H:1} and \eqref{H:2}, we
have $j(|y|) \ge c_2 j(|x-y|)$ and
\begin{eqnarray}
\sup_{x \in {\mathbb R}^d:\, 0<x_d < r_0/4} \int_{B(x, r_0)^c \cap \bR^d_+}
w(y) j(|x-y|) dy \le c_3 \int_{B(0, r_0/2)^c \cap \bR^d_+} w(y)
j(|y|) dy . \label{e:fdfsaff0}
\end{eqnarray}
Let $x_1:=(\widetilde 0, r_0/8)$.
By Theorem \ref{t:Sil} and \eqref{e:levy},
\begin{eqnarray}
\infty >w(x_1)
&\ge& {\mathbb E}_{x_1}\left[w(X_{\tau_{ B(0, r_0/4)\cap \bR^d_+}}):
X_{\tau_{ B(0, r_0/4)\cap \bR^d_+}} \in {B(0, r_0/2)}^c \cap
\bR^d_+ \right]\nonumber\\
&=&
{\mathbb E}_{x_1} \left[\int_0^{\tau_{B(0, r_0/4)\cap \bR^d_+}}
\int_{{B(0, r_0/2)}^c \cap \bR^d_+} j(|X_t-y|)w(y)\, dy\,dt \right] .
\label{e:fdfsaff2}
\end{eqnarray}
Since $|z-y| \le |z|+|y| \le (r_0/4) +|y| \le 2|y|$ for $(z,y)
\in B(0, r_0/4) \times B(0, r_0/2)^c$, using \eqref{H:1} and
\eqref{H:2}, we have $j(|z-y|) \ge c_3 j(|y|)$.
Thus, combining this with \eqref{e:fdfsaff2}, we obtain that
\begin{eqnarray}
\infty >w(x_1) &>& c_3 {\mathbb E}_{x_1} \left[\int_0^{\tau_{B(0, r_0/4)
\cap \bR^d_+}} \int_{{B(0, r_0/2)}^c \cap \bR^d_+} j(|y|)w(y)\,
dy\,dt \right]\nonumber\\
&=&
c_3 {\mathbb E}_{x_1}[\tau_{B(0, r_0/4)\cap \bR^d_+}] \int_{B(0, r_0/2)^c
\cap \bR^d_+} j(|y|)w(y) dy.
\label{e:fdfsaff3}
\end{eqnarray}
Combining \eqref{e:fdfsaff0} and \eqref{e:fdfsaff3}, we conclude
that
$$
\sup_{x \in {\mathbb R}^d:\, 0<x_d < r_0/4}
\int_{B(x, r_0)^c \cap \bR^d_+} w(y) j(|x-y|) dy \le
c_4\frac{V(r_0/8)}{{\mathbb E}_{0}[\tau_{B(0, r_0/8)}] } < \infty.
$$
{\hfill $\Box$ \bigskip}
We now define an operator ($\Delta+ {\mathcal A}$, $\mathfrak{D}
(\Delta+{\mathcal A})$) as follows:
\begin{align}
{\mathcal A} f(x)
:=&\lim_{\varepsilon \downarrow 0}
\int_{B(x, \varepsilon)^c}
\left(f(y)-f(x)\right)j(|y-x|)\, dy, \nonumber\\
\mathfrak{D}(\Delta+{\mathcal A}):=&\left\{f \in C^2({\mathbb R}^d): \lim_{\varepsilon \downarrow 0}
\int_{B(x, \varepsilon)^c}
\left(f(y)-f(x)\right)j(|y-x|)\, dy \text{ exists and is finite for every } x\in {\mathbb R}^d \right\}.
\label{generator}
\end{align}
Recall that $C^2_0(\bR^d)$ is the collection of $C^2$ functions in ${\mathbb R}^d$ vanishing at
infinity.
It is well known that
$C^2_0 (\bR^d)\subset \mathfrak{D}(\Delta+{\mathcal A})$ and that, by the rotational symmetry of $X$, $\Delta+{\mathcal A}$
restricted to $C^2_0(\bR^d)$ coincides with the infinitesimal generator of
the process $X$ (e.g. \cite[Theorem 31.5]{Sa}).
The proof of the next result is similar to that of \cite[Theorem
4.3]{KSV3}. We give the proof here for completeness.
\begin{thm}\label{c:Aw=0}
$(\Delta+{\mathcal A}) w(x)$ is well defined and $(\Delta+{\mathcal A}) w(x)=0$ for
all $x \in \bR^d_+$.
\end{thm}
{\medskip\noindent {\bf Proof. }} It follows from Proposition \ref{c:cforI} and the fact that $j$
is a L\'evy density that for
any $L>0$ and $\varepsilon \in (0,1/2)$
\begin{eqnarray}
\lefteqn{\sup_{x \in {\mathbb R}^d:\ 0<x_d <L}
\left|\int_{B(x, \varepsilon)^c}
(w(y)-w(x)) j(|y-x|)dy \right|\nonumber}\\
&\le&
\sup_{x \in {\mathbb R}^d:\ 0<x_d <L}
\int_{B(x, \varepsilon)^c}
w(y)j(|y-x|)\, dy
+V(L)\int_{B(0, \varepsilon)^c}
j(|y|)dy
<\infty\, . \label{e:dedwer}
\end{eqnarray}
Hence, for every $\varepsilon \in (0,1/2)$,
$
{\mathcal A}_\varepsilon w(x):=\int_{B(x, \varepsilon)^c}(w(y)-w(x))j(|y-x|)dy
$
is well defined.
Using the smoothness of $w$ in ${\mathbb R}^d_+$ and following the same argument as in
\cite[Theorem 4.3]{KSV3}, we can show that
${\mathcal A} w$ is well defined in ${\mathbb R}^d_+$, that
${\mathcal A}_\varepsilon w(x)$ converges to
$$
{\mathcal A} w(x)=\int_{{\mathbb R}^d}\left(w(y)-w(x)-{\bf 1}_ {\{|y-x|<1\}} (y-x)
\cdot\nabla w(x)\right)j(|y-x|) dy
$$
locally uniformly in ${{\mathbb R}^d_+}$ as $\varepsilon\to 0$, and that the
function ${\mathcal A} w(x)$ is continuous in ${{\mathbb R}^d_+}$.
Suppose that $U_1$ and $U_2$ are relatively compact open subsets of
${\mathbb R}^d_+$ such that $\overline{U_1} \subset U_2 \subset
\overline{U_2} \subset {\mathbb R}^d_+$.
It follows again from the same argument as in \cite[Theorem 4.3]{KSV3}
that the conditions \cite[(2.4),
(2.6)]{C} hold.
Thus, by \cite[Lemma 2.3, Theorem 2.11(ii)]{C},
we have that for any $f\in C^2_c({\mathbb R}^d_+)$, \begin{equation}\label{e:C2.11}
0=\int_{{\mathbb R}^d} \nabla w(x) \cdot \nabla f(x) \, dx +
\frac12\int_{{\mathbb R}^d}\int_{{\mathbb R}^d}(w(y)-w(x))(f(y)-f(x))j(|y-x|)\, dx\,
dy. \end{equation} For $f\in C^2_c({\mathbb R}^d_+)$ with supp$(f) \subset
\overline{U_1} \subset U_2 \subset \overline{U_2} \subset {\mathbb R}^d_+$,
\begin{align*}
&\int_{{\mathbb R}^d}\int_{{\mathbb R}^d}|w(y)-w(x)||f(y)-f(x)|j(|y-x|) dxdy\\
=&\int_{U_2}\int_{U_2}|w(y)-w(x)||f(y)-f(x)|j(|y-x|) dxdy+
2\int_{U_1}\int_{U_2^c}|w(y)-w(x)||f(x)|j(|y-x|) dxdy\\
\le&c_1\int_{U_2 \times U_2}
|y-x|^2 j(|y-x|) dxdy+2\|f\|_\infty|U_1| \left(\sup_{x \in U_1}
w(x)\right) \sup_{x \in U_1}\int_{U_2^c}j(|y-x|) dy\\
&+2\|f\|_\infty\int_{U_1} \int_{U_2^c} w(y) j(|x-y|)dydx
\end{align*}
is finite by
Proposition \ref{c:cforI} and the fact that $j(|x|)dx$ is a L\'evy measure.
Thus by \eqref{e:C2.11}, Fubini's theorem and the dominated
convergence theorem, we have for any $f\in C^2_c({\mathbb R}^d_+)$,
\begin{align*}
&0=\int_{{\mathbb R}^d} \nabla w(x) \cdot \nabla f(x) \, dx + \frac12
\lim_{\varepsilon\downarrow0}\int_{\{(x, y)\in {{\mathbb R}^d}\times {{\mathbb R}^d},\
|y-x|>\varepsilon\}}(w(y)-w(x))(f(y)-f(x))j(|y-x|)\, dx\, dy\\
&=-\int_{{\mathbb R}^d} \Delta w(x) f(x) \, dx
-\lim_{\varepsilon\downarrow0}\int_{{\mathbb R}^d_+} f(x)
\left(\int_{B(x, \varepsilon)^c}(w(y)-w(x))
j(|y-x|) dy \right) dx\\
&=- \int_{{\mathbb R}^d} \Delta w(x) f(x) \, dx -\,\int_{{\mathbb R}^d_+} f(x){\mathcal A}
w(x)\, dx\, =\,- \int_{{\mathbb R}^d}( \Delta+{\mathcal A}) w(x) f(x) \, dx
\end{align*}
where we have used the fact that ${\mathcal A}_\varepsilon w$ converges to ${\mathcal A} w$
uniformly on the support of $f$. Hence, by the continuity of $(
\Delta+{\mathcal A}) w$, we have $( \Delta+{\mathcal A}) w(x)=0$ in ${{\mathbb R}^d_+}$. {\hfill $\Box$ \bigskip}
\section{Analysis on $C^{1,1}$ open set}
Recall that $\Lambda \ge 1$, that $D$ is a $C^{1,1}$ open set
with characteristics $(R, \Lambda)$, and that $D$ satisfies the uniform
interior ball condition and the uniform exterior ball condition with
radius $R \le 1$.
The proof of the next lemma is motivated by that of \cite[Lemma 4.4]{KSV3}.
\begin{lemma}\label{L:Main}
Fix $Q \in \partial D$ and define
$$
h(y):=V\left(\delta_D (y)\right){\bf 1}_{D\cap B(Q, R)}(y).
$$
There exists $C_1=C_1(
\Lambda, R)>0$ independent of
$Q$ such that $( \Delta+{\mathcal A}) h$ is well
defined in $D\cap B(Q, R/4)$ a.e. and
\begin{equation}\label{e:h3}
|(\Delta+{\mathcal A}) h(x)|\le C_1 \quad \text{for a.e. } x \in D\cap B(Q, R/4)\, .
\end{equation}
\end{lemma}
{\medskip\noindent {\bf Proof. }} In this proof, we fix $x \in D\cap B(Q, R/4)$ and
$x_0\in\partial D$ satisfying $\delta_D(x)=|x-x_0|$. We also fix
the $C^{1, 1}$ function $\varphi$ and the coordinate system
$CS=CS_{x_0}$ in the definition of $C^{1, 1}$ open set so that $x=(
0, x_d)$ with $0<x_d <R/4$ and $B(x_0, R)\cap D=\{ y=(\widetilde y, \, y_d)
\in B(0, R) \mbox{ in } CS : y_d > \varphi (\widetilde y) \}.$ Let
$$
\varphi_1(\widetilde y):=R -\sqrt{ R^2-|\widetilde y|^2}
\quad \text{and}\quad \varphi_2(\widetilde y):=-R +\sqrt{ R^2-|\widetilde
y|^2}.
$$
Due to the uniform interior ball condition and the uniform
exterior ball condition with radius $R$, we have
\begin{equation}\label{e:phi012}
\varphi_2(\widetilde y) \le \varphi (\widetilde y) \le \varphi_1(\widetilde y) \quad \text{for
every } y \in D\cap B(x, R/4).
\end{equation}
Define $H^+:= \left\{y=(\widetilde y, \, y_d) \in CS:y_d>0 \right\}$ and
let
$$
A:=\{y=(\widetilde{y},y_d) \in (D \cup H^+)\cap B(x, R/4):
\varphi_2(\widetilde y) \le y_d \le \varphi_1(\widetilde y)\},
$$
$$
E:=\{y=(\widetilde{y},y_d) \in B(x, R/4): y_d > \varphi_1(\widetilde
y)\}.
$$
Note that, since $|y-Q| \le |y-x| +|x-Q| \le R/2$ for $y
\in B(x, R/4)$, we have $B(x, R/4) \cap D \subset
B(Q, R/2)\cap D.$
Let
$$
h_{x}(y):=V\left(\delta_{_{H^+}}(y)\right).
$$
Note that $h_{x}(x)=h(x)$. Moreover, since
$\delta_{_{H^+}}(y)=(y_d)^+$ in $CS$,
it follows from Theorem \ref{c:Aw=0} that ${\mathcal A} h_x$ is well defined in $H^+$ and
\begin{equation}\label{e:hz}
(\Delta+{\mathcal A}) h_{x}(y)=0, \quad \forall y\in H^+.
\end{equation}
We show now that ${\mathcal A} (h-h_x)(x)$ is well defined.
For each $\varepsilon >0$ we have that
\begin{align*}
&\bigg|\int_{\{y \in D \cup H^+: \, |y-x|>\varepsilon\}}
{(h(y)-h_{x}(y))}j(|y-x|)\ dy\bigg|
\nonumber\\
\leq & \int_{B(x,R/4)^c}
(h(y)+h_{x}(y))
j(|y-x|)dy +\int_{A} (h(y)+h_{x}(y))j(|y-x|)\ dy
\\&\quad +\int_{E}
{|h(y)-h_{x}(y)|}j(|y-x|) dy
\,=:\,I_1\,+\,I_2\,+\,I_3.
\end{align*}
By the fact that $h(y)=0$ for $y \in B(Q, R)^c$,
\begin{align}
I_1 \le &\sup_{z \in {\mathbb R}^d:\ 0<z_d <R} \int_{B(z, R/4)^c \cap H^+}
V(y_d) j(|z-y|) dy+c_1\int_{B(0, R/4)^c}j (|y|)dy =:K_1+K_2.
\nonumber
\end{align}
$K_2$ is clearly finite since $J$ is the L\'evy density of $X$, while
$K_1$ is finite by Proposition \ref{c:cforI}.
For $y \in A$, since $V$ is increasing and $(R - \sqrt{R^2-|\widetilde
y|^2}) \le R^{-1}|\widetilde y|^2$, we see that
\begin{align}
h_{x}(y)+h(y) \le 2V(\varphi_1 ( \widetilde y) -\varphi_2 ( \widetilde y)) \le2
V(2 R^{-1}|\widetilde y|^2) \le 2V(2 R^{-1}|y-x|^2). \label{e:dfe2}
\end{align}
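Here we used that, for $y \in A$, both $\delta_D(y)$ and $\delta_{H^+}(y)=(y_d)^+$
are at most $\varphi_1(\widetilde y)-\varphi_2(\widetilde y)=2(R-\sqrt{R^2-|\widetilde y|^2})$,
together with the elementary bound $R-\sqrt{R^2-t^2}=t^2/(R+\sqrt{R^2-t^2})
\le R^{-1}t^2$ and the fact that $|\widetilde y|=|\widetilde y-\widetilde x| \le |y-x|$.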
Using \eqref{e:dfe2} and Lemma \ref{l:estimate-for-V}, we have
\begin{align}
I_2\leq&c_2 \int_{ A} |y-x|^2 j(|y-x| )dy \le c_2 \int_{ B(0,
R/4)} |z|^2 j(|z| )dz < \infty . \label{e:fgy6} \end{align}
For $I_3$, we consider two cases separately: If $0
<y_d=\delta_{_{H^+}}({y}) \le \delta_D({y})$, since $v$ is
decreasing,
\begin{align}
h(y)-h_{x}(y) \le V(y_d+R^{-1}|\widetilde y|^2) -V(y_d) =
\int_{y_d}^{y_d+R^{-1}|\widetilde y|^2} v(z)dz \le R^{-1}|\widetilde y|^2
v(y_d). \label{e:KG}
\end{align}
If $y_d=\delta_{_{H^+}}({y}) > \delta_D({y})$ and $y \in E$, using
the fact that $\delta_D({y})$ is greater than or equal to the
distance between $y$ and the graph of $\varphi_1$ and
\begin{align*}
y_d-R+\sqrt{ |\widetilde y|^2+(R-y_d)^2} &= \frac{|\widetilde y|^2} {\sqrt{ |\widetilde
y|^2+(R-y_d)^2} + (R-y_d)}\,\le\, \frac{ |y-x|^2} {2 (R-y_d)} \le
\frac{ |y-x|^2} {R},
\end{align*}
we have
\begin{align}
h_{x}(y)-h(y) \le\int^{y_d}_{R-\sqrt{ |\widetilde y|^2+(R-y_d)^2}}
v(z)dz\le R^{-1} |y-x|^2 \,v(R-\sqrt{ |\widetilde y|^2+(R-y_d)^2}).
\label{e:KG2}
\end{align}
Thus, by \eqref{e:KG}-\eqref{e:KG2} and Lemma \ref{l:estimate-for-V},
\begin{align*}
I_3 \,\le\, c_3\int_{E} | y-x|^2 j(|y-x|) dy \le c_3 \int_{ B(0,
R/4)} |z|^2 j(|z| )dz < \infty .
\end{align*}
We have proved
\begin{equation}\label{e:I1234}
|{\mathcal A}(h-h_x)(x)|\le I_1+I_2+I_3 \leq c_4
\end{equation}
for some constant $c_4=c_4(R, \Lambda)>0$.
The estimate \eqref{e:I1234} shows in particular that the limit
$$
\lim_{\varepsilon\downarrow 0}\int_{\{y \in D \cup H^+:
|y-x|>\varepsilon\}}{(h(y)-h_{x}(y))}j(|y-x|)\, dy
$$
exists and hence ${\mathcal A}(h-h_x)(x)$ is well defined.
We now consider $\Delta(h-h_x)$.
Note that for a.e.~$x \in D\cap B(Q, R/4)$, the second order
partial derivatives of the function $y \to \delta_D (y)$ exist at
$x$. Without loss of generality we assume that $x$ has been chosen
so that the second order partial derivatives of the function $y \to
\delta_D (y)$ exist at $x$.
Since $h_{x}(y)=V\left((y_d)^+\right)$ in $CS$, we have $\Delta h_x(x)= v'(x_d)$.
Moreover, since $\delta_D(y)=y_d$ for $y=(\widetilde 0, x_d+\varepsilon)$
when $|\varepsilon|$ is small, $\partial^2_{x_d} h(x)= v'(x_d)$.
Thus
\begin{align}\label{e:dsnew1}
&\Delta(h-h_x)(x) = \sum_{i=1}^{d-1}
\frac{\partial^2 V\left(\delta_D (y)\right)}{\partial y_i^2}|_{y=x}
=\sum_{i=1}^{d-1}
\frac{\partial}{\partial y_i} \left(v(\delta_D (y))
\frac{\partial \delta_D (y)}{\partial y_i}\right)|_{y=x} \nonumber \\
&= \sum_{i=1}^{d-1} v'(\delta_D (x))\left(\frac{\partial \delta_D (y)}{\partial y_i}|_{y=x} \right)^2
+ v(\delta_D (x))\frac{\partial^2 \delta_D (y)}{\partial y_i^2}|_{y=x}\ .
\end{align}
In the coordinate system $CS$,
\begin{equation}\label{e:dsnew2}
\frac{\partial \delta_D(y)}{\partial y_i}\Big|_{y=x}=0 \quad \text{ and } \quad
\left|\frac{\partial^2\delta_D(y)}{\partial y_i^2}\Big|_{y=x}
\right|\le \frac{4}{3R}, \quad i=1, \dots, d-1.
\end{equation}
Indeed, let $\epsilon \in \bR$ with $|\epsilon|$ small,
and $x_{\epsilon,i}:=(0, \dots, \epsilon, \dots 0, x_d)$, $i=1, \dots , d-1$.
Due to the uniform interior ball condition and the uniform
exterior ball condition with radius $R$, we have
$$
R-\sqrt{\epsilon^2+(R-x_d)^2}-x_d \le \delta_D(x_{\epsilon,i})-\delta_D(x) \le \sqrt{\epsilon^2+(R+x_d)^2}-R-x_d\, ,
$$
so
\begin{align*}
\frac{1}{\epsilon}| \delta_D(x_{\epsilon,i})-\delta_D(x)|
\le& \frac{1}{\epsilon} \left( \sqrt{\epsilon^2+(R-x_d)^2}-(R-x_d) \right) \vee \frac{1}{\epsilon}\left( \sqrt{\epsilon^2+(R+x_d)^2}-(R+x_d) \right)\\
=&\left(\frac{\epsilon}{\sqrt{\epsilon^2+(R-x_d)^2}+(R-x_d) } \right) \vee \left( \frac{\epsilon}{\sqrt{\epsilon^2+(R+x_d)^2}+(R+x_d)} \right),
\end{align*}
which goes to zero as $\epsilon \to 0$. The bound involving the second partial derivatives can be proved in
a similar way using the elementary fact that $\frac{\partial^2\delta_D(y)}{\partial y_i^2}\Big|_{y=x}
= \lim_{\epsilon \to 0} \frac{1}{\epsilon^2}\left( \delta_D(x_{\epsilon,i})+\delta_D(x_{-\epsilon,i})-2\delta_D(x)\right).$
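More precisely, summing the two-sided estimates above for $\epsilon$ and
$-\epsilon$ gives
$$
-\frac{2\epsilon^2}{\sqrt{\epsilon^2+(R-x_d)^2}+(R-x_d)} \,\le\,
\delta_D(x_{\epsilon,i})+\delta_D(x_{-\epsilon,i})-2\delta_D(x) \,\le\,
\frac{2\epsilon^2}{\sqrt{\epsilon^2+(R+x_d)^2}+(R+x_d)}\, ,
$$
so that, after dividing by $\epsilon^2$ and letting $\epsilon \to 0$,
$-(R-x_d)^{-1} \le \frac{\partial^2\delta_D(y)}{\partial y_i^2}\big|_{y=x}
\le (R+x_d)^{-1}$; since $0<x_d<R/4$, both bounds are at most $\frac{4}{3R}$
in absolute value.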
Therefore, combining \eqref{e:dsnew1}, \eqref{e:dsnew2} and Lemma \ref{l:estimate-for-V}, we have
$$
|\Delta(h-h_x)(x)| \le c_5\sum_{i=1}^{d-1}
\left|\frac{\partial^2\delta_D(y)}{\partial y_i^2}\Big|_{y=x}
\right| \le c_6.
$$
Using this, \eqref{e:hz}, \eqref{e:I1234}, and linearity we get
that $(\Delta+{\mathcal A}) h(x)$ is well defined and $|(\Delta+{\mathcal A})
h(x)|\le c_7$. {\hfill $\Box$ \bigskip}
We use $C^{\infty}_c({\mathbb R}^d)$ to denote the space of infinitely
differentiable functions with compact support.
Using the fact that $\Delta+{\mathcal A}$ restricted to
$C^{\infty}_c({\mathbb R}^d)$ coincides with the
infinitesimal generator of the process $X$, we see that the
following Dynkin's formula is true: for $f \in C_c^{\infty}({\mathbb R}^d)$ and
any bounded open subset $U$ of ${\mathbb R}^d$,
\begin{equation} \label{e:*334}
{\mathbb E}_x\int_0^{\tau_U} {(\Delta+{\mathcal A})} f(X_t) dt
={\mathbb E}_x[f(X_{\tau_U})]- f(x).
\end{equation}
\begin{lemma}\label{l2.1}
For every $r_1> 0$ and every $a \in (0,1)$, there exists a
positive constant $c=c(r_1, d, a)$ such that for any $r\in
(0, r_1]$ and any open sets $U$ and $D$ with $B(0, a r ) \cap D
\subset U \subset D$, we have
$$
{\mathbb P}_x\left(X_{\tau_U} \in D\right) \,\le\, c\,r^{-2}\, {\mathbb E}_x[\tau_U],
\qquad x \in D\cap B(0, a r/2).
$$
\end{lemma}
{\medskip\noindent {\bf Proof. }} For fixed $a \in (0,1)$, take a sequence of radial functions
$\phi_m$ in $C^{\infty}_c({\mathbb R}^d)$ such that $0\le \phi_m\le 1$,
$$
\phi_m(y)=\left\{
\begin{array}{lll}
0, & |y|< a/2\\
1, & a\le |y|\le m+1\\
0, & |y|>m+2,
\end{array}
\right.
$$
and that $\|\sum_{i, j}|\frac{\partial^2}{\partial y_i\partial y_j}
\phi_m|\|_\infty < c_1=c_1(a)< \infty$.
Define $\phi_{m, r}(y)=\phi_m(\frac{y}{r})$ so that
$0\le \phi_{m, r}\le 1$,
\begin{equation}\label{e:2.11}\phi_{m, r}(y)=
\begin{cases}
0, & |y|<ar/2\\
1, & ar\le |y|\le r(m+1)\\
0, & |y|>r(m+2),
\end{cases}
\quad \text{and} \quad \sup_{y\in {\mathbb R}^d} \sum_{i,
j}\left|\frac{\partial^2}{\partial y_i\partial y_j} \phi_{m,
r}(y)\right| \,<\, c_1\, r^{-2}.
\end{equation}
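Indeed, by the chain rule, $\frac{\partial^2}{\partial y_i\partial y_j}\,
\phi_{m, r}(y)=r^{-2}\left(\frac{\partial^2}{\partial y_i\partial y_j}
\phi_m\right)(y/r)$, so the last bound in \eqref{e:2.11} follows from the
corresponding bound for $\phi_m$.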
Using \eqref{e:2.11}, we see that
\begin{eqnarray}
&& \left|\int_{{\mathbb R}^d} (\phi_{m,r}(x+y)-\phi_{m,r}(x)-(\nabla
\phi_{m,r}(x) \cdot y)1_{B(0, 1)}(y))J(y) dy \right|\nonumber\\
&&\le \left|\int_{\{|y|\le 1\}}
(\phi_{m,r}(x+y)-\phi_{m,r}(x)-(\nabla \phi_{m,r}(x) \cdot y)1_{B(0, 1)}(y))J(y) dy\right|+
2\int_{\{|y|>1\}}J(y) dy \nonumber\\
&&\le \frac{c_2}{r^2}\int_{\{|y|\le 1 \}} |y|^2 J(y)dy + 2
\int_{\{|y|>1\}}J(y) dy \le \frac{c_3}{r^2}\, . \label{e2.100}
\end{eqnarray}
Now, by combining \eqref{e:*334}, (\ref{e:2.11}) and (\ref{e2.100}),
we get that for any $x\in D\cap B(0, ar/2)$,
\begin{align*}
&{\mathbb P}_x\left(X_{\tau_U} \in \{ y \in D: ar \le |y| <(m+1)r \}\right)
={\mathbb E}_x\left[\phi_{m, r} \left(X_{\tau_U}\right): X_{\tau_U} \in \{ y
\in D: ar \le |y| <(m+1)r
\}\right]\\
&\le {\mathbb E}_x\left[\phi_{m, r} \left(X_{\tau_U}\right)\right]=
{\mathbb E}_x\left[ \int_0^{\tau_U} {(\Delta+{\mathcal A})} \phi_{m, r}(X_t)dt \right] \le
\|{(\Delta+{\mathcal A})}\phi_{m, r}\|_\infty\, {\mathbb E}_x[\tau_U]
\le c_{4} r^{-2}{\mathbb E}_x[\tau_U].
\end{align*}
Therefore, since $B(0, a r ) \cap D \subset U$,
\begin{align*}
{\mathbb P}_x\left(X_{\tau_U} \in D\right)&={\mathbb P}_x\left(X_{\tau_U} \in
\{ y \in D: ar \le |y| \}\right) \\
&= \lim_{m\to
\infty}{\mathbb P}_x\left(X_{\tau_U} \in \{ y \in D: a r \le |y| <(m+1)r
\}\right) \,\le\,
c_5\,r^{-2}{\mathbb E}_x[\tau_U].
\end{align*}
{\hfill $\Box$ \bigskip}
Define $\rho_Q (x) := x_d - \varphi_Q (\widetilde x),$ where $(\widetilde x,
x_d)$ are the coordinates of $x$ in $CS_Q$. Note that for every $Q
\in \partial D$ and $ x \in B(Q, R)\cap D$ we have
\begin{equation}\label{e:d_com}
(1+\Lambda^2)^{-1/2} \,\rho_Q (x) \,\le\, \delta_D(x) \,\le\,
\rho_Q(x).
\end{equation}
We define for $r_1, r_2>0$
$$
D_Q( r_1, r_2) :=\left\{ y\in D: r_1 >\rho_Q(y) >0,\, |\widetilde y | < r_2
\right\}.
$$
Let $R_1 :=R/(4\sqrt{1+(1+ \Lambda)^2})$.
By Lemma \ref{l:estimate-for-V}, $V(\delta_D(x))$ on the right-hand sides of
\eqref{e:L:2}--\eqref{e:L:3} can be replaced by $\delta_D(x)$. The reason
we prefer the forms below is that the function $V$ will be used in the proof.
\begin{lemma}\label{L:2}
There are constants $\lambda_0 >2 R_1^{-1} $, $\kappa_0 \in (0,1)$ and $c=c(R, \Lambda
)>0$ such that for every $\lambda \ge \lambda_0$, $Q \in
\partial D$ and $x \in D_Q (2^{-1}(1+\Lambda)^{-1}\kappa_0 \lambda^{-1} ,
\kappa_0\lambda^{-1} )$ with $\widetilde x=0$,
\begin{equation}\label{e:L:2}
{\mathbb P}_{x}\left(X_{ \tau_{ D_Q ( \kappa_0 \lambda^{-1} , \lambda^{-1} )}} \in
D\right) \,\le\, c \, \lambda \, V( \delta_D (x))
\end{equation}
and
\begin{equation}\label{e:L:3}
{\mathbb E}_x\left[ \tau_{ D_Q ( \kappa_0 \lambda^{-1} , \lambda^{-1} )}
\right]\,\le\, c\, \lambda^{-1}\, V(\delta_D (x)).
\end{equation}
\end{lemma}
{\medskip\noindent {\bf Proof. }} Without loss of generality, we assume $Q=0$. Let
$\varphi=\varphi_0:\bR^{d-1}\to \bR$ be the $C^{1,1}$ function and
$CS_0$ be the coordinate system in the definition of $C^{1, 1}$ open
set so that $B(0, R)\cap D= \big\{(\widetilde y, \, y_d) \in B(0,
R)\textrm{ in } CS_0: y_d > \varphi (\widetilde y) \big\}.$ Let $\rho (y)
:= y_d - \varphi (\widetilde y)$ and $D ( a, b):=D_0 ( a, b)$.
Note that
\begin{equation}\label{e:lsd}
|y|^2 = |\widetilde y|^2+ |y_d|^2 <r^2 +(|y_d- \varphi(\widetilde y)|+ |\varphi(\widetilde
y)|)^2 < (1+(1+ \Lambda)^2) r^2 \quad \text{for every } y \in
D(r,r)\, .
\end{equation}
By this and the definition of $R_1$, we
have $D ( r, s) \subset D(R_1, R_1)\subset B(0, R/4) \cap D \subset
B(0, R)\cap D$ for every $ r,s \le R_1 . $
Using Lemma \ref{l:estimate-for-V}, we can and will
choose $\delta_0 \in (0, R_1)$ small such that
$$
2 r^2 \le V( (1+ \Lambda^2)^{-1/2} r) \quad \text{ for all }
r \le 4\delta_0.
$$
Then, by \eqref{e:d_com}, the subadditivity and monotonicity of $V$,
for every $\lambda \ge 1$ and every $ y \in B(0,R) \cap D$ with
$\rho (y) \le 4\lambda^{-1}\delta_0$, we have
\begin{equation}\label{e:ssp1}
2 \lambda^2 \rho (y)^2 \le V(\lambda \delta_D(y)) \le (\lambda+1)
V( \delta_D(y)) \le 2\lambda V( \delta_D(y)).
\end{equation}
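Indeed, applying the choice of $\delta_0$ with $r=\lambda \rho(y) \le 4\delta_0$
and using \eqref{e:d_com} together with the monotonicity of $V$ gives
$2\lambda^2 \rho(y)^2 \le V((1+\Lambda^2)^{-1/2}\lambda \rho(y)) \le
V(\lambda \delta_D(y))$, while the subadditivity of $V$ yields
$V(\lambda t) \le \lceil \lambda \rceil V(t) \le (\lambda+1)V(t)$.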
Since $\Delta \varphi(\tilde{y})$ is well defined for a.e.~$\tilde{y}$
with respect to the $(d-1)$-dimensional Lebesgue measure, it follows that
$\Delta \rho(y)$ exists for a.e.~$y$ with respect to the
$d$-dimensional Lebesgue measure.
Using the fact that the derivative
of a Lipschitz function is essentially bounded by its Lipschitz
constant, we have
for a.e.~$y\in B(0,R)\cap D$ that
\begin{eqnarray*}
\Delta \rho (y)^2 = \Delta (y_d - \varphi (\widetilde y))^2 =2(1+ |\nabla
\varphi (\widetilde y)|^2) - 2\, \rho (y) \Delta \varphi(\widetilde y) \ge 2(1-
\rho (y) \| \Delta \varphi\|_\infty)\, .
\end{eqnarray*}
Choosing $\delta_0 \in (0, R_1)$ smaller if necessary we can get that
\begin{equation}\label{e:ssp2}
\Delta \rho (y)^2 \ge 1 \quad \text{ for a.e. } y \in B(0,R) \cap D
\text{ with }\rho (y) \le 2\delta_0.
\end{equation}
Let $g(y)=g(\widetilde y, y_d)$ be a smooth function on $\bR^d$
with $0 \le g(\widetilde y, y_d) \le 2$, $g(\widetilde y, y_d) \le y_d^2$,
\begin{equation}\label{e:ssp101}
\sum_{i, j=1}^d|\frac{\partial^2 g}{\partial y_i\partial y_j} |+\sum_{i=1}^d
|\frac{\partial g}{\partial y_i} | \le c_1,
\end{equation}
and
$$
g(y)=
\begin{cases}
0, & \text{ if } -\infty < y_d <0, \text{ or } y_d \ge 4 \text{ or } |\widetilde y|>2 \\
y_d^2, & \text{ if } 0\le y_d<1 \text{ and } |\widetilde y|<1 \\
-(y_d-2)^2+2, & \text{ if } 1 \le y_d \le 3 \text{ and } |\widetilde y|<1\\
(y_d-4)^2, & \text{ if } 3 \le y_d \le 4 \text{ and } |\widetilde y|<1.
\end{cases}
$$
Thus $\mathrm{supp}(g) \subset \{ |\widetilde y| \le 2, 0 \le y_d \le 4\}$.
For $\lambda >1$, let
$
g_\lambda(y):=g_\lambda(\widetilde y, y_d):=g( \lambda \delta_0^{-1}
\widetilde y, \lambda \delta_0^{-1} \rho(y))
$
so that
\begin{equation}\label{e:new001}
\mathrm{supp}
(g_\lambda) \subset \{ |\widetilde y| \le 2\lambda^{-1} \delta_0, \ \, 0 \le \rho(y) \le 4\lambda^{-1} \delta_0 \}.
\end{equation}
Then, since $\sum_{i, j=1}^d|\frac{\partial^2}{\partial y_i\partial y_j} g(y)|$
is essentially bounded, using \eqref{e:ssp101}, we have
\begin{equation}\label{e:ssp3}
\sum_{i, j=1}^d|\frac{\partial^2}{\partial y_i\partial y_j}\, g_\lambda
(y)| \le c_2 \lambda^2 \quad \text{a.e. } y.
\end{equation}
Note that, by the definition of $g$, $g_\lambda(y)=\lambda^2
\delta_0^{-2} \rho(y)^2$ on $D(\lambda^{-1} \delta_0, \lambda^{-1}
\delta_0)$. Thus, from \eqref{e:ssp2} we get
\begin{equation}\label{e:ssp4}
\Delta g_\lambda(y) \ge \lambda^2 \delta_0^{-2} \quad \text{ for
a.e. } y \in D(\lambda^{-1} \delta_0, \lambda^{-1} \delta_0).
\end{equation}
On the other hand, by \eqref{e:ssp3} we have
\begin{eqnarray*}
&& \left|\int_{{\mathbb R}^d} (g_{\lambda}(y+z)-g_{\lambda}(y)-(\nabla
g_{\lambda}(y) \cdot z)1_{B(0, \lambda^{-1} )}(z))J(z)\, dz \right| \nonumber\\
&&\le \left|\int_{\{|z|\le \lambda^{-1} \}}
(g_{\lambda}(y+z)-g_{\lambda}(y)-(\nabla g_{\lambda}(y) \cdot
z)1_{B(0, \lambda^{-1} )}(z))J(z)\, dz \right| \nonumber \\
&&\quad+\int_{\{\lambda^{-1} <|z| \le 1\}}J(z) g_{\lambda}(y+z)dz+
\left(\int_{\{\lambda^{-1} <|z| \le 1\}}J(z) dz\right)g_{\lambda}(y)
+2\int_{\{1 <|z|\}}J(z)\, dz
\nonumber\\
&&\le c_3\lambda^2\int_{\{|z|\le \lambda^{-1} \}}
|z|^2 J(z)\, dz+
2\int_{\{1 <|z|\}}J(z)\, dz
\nonumber
\\
&&\quad+
\int_{\{\lambda^{-1} <|z| \le 1\}}J(z) g_{\lambda}(y+z)dz
+\left(\int_{\{\lambda^{-1} <|z| \le 1\}}J(z) dz\right)g_{\lambda}(y).
\end{eqnarray*}
Thus
\begin{eqnarray}\label{e:ssp51}
&&\lambda^{-2}
\left|\int_{{\mathbb R}^d} (g_{\lambda}(y+z)-g_{\lambda}(y)-(\nabla
g_{\lambda}(y) \cdot z)1_{B(0,\lambda^{-1})}(z))J(z)
dz \right| \nonumber \\
&\le& c_3\int_{\{|z|\le \lambda^{-1} \}}
|z|^2 J(z)\, dz+
2\lambda^{-2}\int_{\{1 <|z|\}}J(z)\, dz
\nonumber
\\
&&\quad+\lambda^{-2}
\int_{\{\lambda^{-1} <|z| \le 1\}}J(z) g_{\lambda}(y+z)dz
+\lambda^{-2}\left(\int_{\{\lambda^{-1} <|z| \le 1\}}J(z) dz\right)g_{\lambda}(y)\nonumber\\
&\le& c_3\int_{\{|z|\le \lambda^{-1} \}}
|z|^2 J(z)\, dz+
2\lambda^{-2}\int_{\{1 <|z|\}}J(z)\, dz
\nonumber
\\
&&\quad+
\int_{\{\lambda^{-1} <|z| \le 1\}}J(z)|z|^2 g_{\lambda}(y+z)dz
+\left(\int_{\{0 <|z| \le 1\}}|z|^2J(z) dz\right)g_{\lambda}(y).
\end{eqnarray}
We claim that for every $\lambda >1$ and $y \in D(\lambda^{-1} \delta_0, \lambda^{-1} \delta_0)$, the function $z \to g_{\lambda}(y+z)$ is supported in
$B(0,
3 \lambda^{-1}\delta_0 \sqrt{(4\Lambda)^2+1})$.
Fix $\lambda >1$ and $y \in D(\lambda^{-1} \delta_0, \lambda^{-1} \delta_0)$ and
suppose that $z \in B(0, 3 \lambda^{-1}\delta_0 \sqrt{(4\Lambda)^2+1})^c$. Then
either $|\widetilde z| \ge 3 \lambda^{-1}\delta_0$, or $|\widetilde z| < 3 \lambda^{-1}\delta_0$ and $| z_d| \ge 12 \lambda^{-1}\delta_0 \Lambda$.
If $|\widetilde z| \ge 3 \lambda^{-1}\delta_0$, then clearly
$|\widetilde y + \widetilde z| \ge |\widetilde z|-|\widetilde y| \ge 3 \lambda^{-1}\delta_0- \lambda^{-1}\delta_0 = 2 \lambda^{-1}\delta_0.$ Thus by \eqref{e:new001},
$g_{\lambda}(y+z)=0$.
Now assume $|\widetilde z| < 3 \lambda^{-1}\delta_0$ and $|z_d| \ge 12 \lambda^{-1}\delta_0 \Lambda$.
If $z_d \le -12 \lambda^{-1}\delta_0 \Lambda$, then $g_{\lambda}(y+z)=0$.
If $z_d \ge 12 \lambda^{-1}\delta_0 \Lambda$, we have
$$
\rho(y+z) \ge \varphi(\widetilde y)+ z_d -\varphi(\widetilde y+ \widetilde z) \ge 12 \lambda^{-1}\delta_0 \Lambda - \Lambda ( |\widetilde z|+2|\widetilde y|)
\ge \lambda^{-1} \Lambda (12 \delta_0 - 3\delta_0- 2\delta_0) = 7 \lambda^{-1} \Lambda \delta_0.
$$ Thus by \eqref{e:new001},
$g_{\lambda}(y+z)=0$.
The claim is proved.
Using the above claim and the fact that $g_\lambda(y)=\lambda^2
\delta_0^{-2} \rho(y)^2$ on $D(\lambda^{-1} \delta_0, \lambda^{-1}
\delta_0)$,
we have from \eqref{e:ssp51} that for $y \in D(\lambda^{-1} \delta_0, \lambda^{-1} \delta_0)$
\begin{align}\label{e:ssp52}
&\lambda^{-2}
\left|\int_{{\mathbb R}^d} (g_{\lambda}(y+z)-g_{\lambda}(y)-(\nabla
g_{\lambda}(y) \cdot z)1_{B(0,\lambda^{-1})}(z))J(z)
dz \right| \nonumber \\
\le& c_3\int_{\{|z|\le \lambda^{-1} \}}
|z|^2 J(z)\, dz+
2\lambda^{-2}\int_{\{1 <|z|\}}J(z)\, dz
\nonumber
\\
&\quad+
\int_{\{\lambda^{-1} <|z| \le 1 \wedge
3 \lambda^{-1}\delta_0 \sqrt{(4\Lambda)^2+1}\}}J(z)|z|^2 dz
+c_4\lambda^2
\delta_0^{-2} \rho(y)^2\nonumber \\
\le& (c_3+1) \int_{\{|z|\le
3 \lambda^{-1}\delta_0 \sqrt{(4\Lambda)^2+1} \}}
(1 \wedge |z|^2) J(z)\, dz+
2\lambda^{-2}\int_{\{1 <|z|\}}J(z)\, dz
+c_4\lambda^2
\delta_0^{-2} \rho(y)^2,
\end{align}
where $c_4:=
2^{-1} \vee \int_{\{0 <|z| \le 1\}}|z|^2J(z) dz$.
Define
$$
h(y) := V(\delta_D(y)) {\bf 1}_{B(0, R) \cap D}(y) \quad \text{ and}
\quad h_\lambda(y) := \lambda h(y) - g_{\lambda} (y).
$$
Choose $\lambda_* \ge 2
$ large such that for every $\lambda \ge
\lambda_*$,
$$
(c_3+1) \int_{\{|z|\le 3 \lambda^{-1}\delta_0 \sqrt{(4\Lambda)^2+1} \}}
(1 \wedge |z|^2) J(z)\, dz+
2\lambda^{-2}\int_{\{1 <|z|\}}J(z)\, dz \le 4^{-1} \delta_0^{-2}\,\quad
\text{ and } \quad \frac{1}{4} \lambda \delta_0^{-2} \ge C_1,
$$
where $C_1$ is the constant from Lemma \ref{L:Main}.
Then by
\eqref{e:ssp4} and \eqref{e:ssp52}, for every
$\lambda \ge
\lambda_*$ and a.e. $y \in D(\lambda^{-1} 2^{-1} c_4^{-1/2}\delta_0, \lambda^{-1} \delta_0)$,
\begin{equation}\label{e2.1}
(\Delta+ {\mathcal A}) g_{\lambda} (y) \ge \Delta g_{\lambda} (y)- | {\mathcal A}
g_{\lambda} (y)| \ge \lambda^2 \delta_0^{-2}-4^{-1} \lambda^2 \delta_0^{-2}-c_4
\lambda^4
\delta_0^{-2} \rho(y)^2
\ge \frac{1}{2} \lambda^2 \delta_0^{-2}
\end{equation}
and
\begin{equation}\label{e:ssp6}
(\Delta+ {\mathcal A}) h_\lambda(y) \le \lambda|(\Delta+ {\mathcal A}) h(y)| -
(\Delta+ {\mathcal A})g_{\lambda} (y) \le \lambda (C_1 -\frac{1}{2} \lambda
\delta_0^{-2}) \le - \frac{1}{4} \lambda^2 \delta_0^{-2}\, .
\end{equation}
Let $\delta_*:=2^{-1} c_4^{-1/2}\delta_0$ and
$f$ be a non-negative smooth radial function with compact
support such that $f(x)=0$ for $|x|>1$ and $\int_{{\mathbb R}^d} f (x) dx=1$.
For $k\geq 1$, define $f_k(x)=2^{kd} f (2^k x)$ and
$$
h_\lambda^{(k)}(z):= ( f_k*h_\lambda)(z) :=\int_{{\mathbb R}^d}f_k (y)
h_\lambda(z-y)dy\, .
$$
Let
$$
B_k^\lambda:=\left\{y \in D(\lambda^{-1} \delta_*, \lambda^{-1}
\delta_0): \delta_{D(\lambda^{-1} \delta_*, \lambda^{-1} \delta_0)
}(y) \ge 2^{-k}\right\}
$$
and consider large $k$'s such that $B_k^\lambda$'s are non-empty
open sets. Since $h_\lambda^{(k)}$ is in $C_c^{\infty}$, ${\mathcal A}
h_\lambda^{(k)}$ is well defined everywhere. We claim that for every
$\lambda \ge \lambda_*$ and $k$ large enough,
\begin{equation} \label{e:*333}
(\Delta+{\mathcal A}) h_\lambda^{(k)} \leq -\frac{1}{4} \lambda^2
\delta_0^{-2} \quad \text{ on } B_k^\lambda\, .
\end{equation}
Indeed, for any $x\in B_k^\lambda$ and $z\in B(0,2^{-k})$, when $k$
is large enough, it holds that $x-z\in D(\lambda^{-1} \delta_*,
\lambda^{-1} \delta_0)$. By
the proof of Lemma \ref{L:Main} the following limit exists:
\begin{eqnarray*}
\lefteqn{\lim_{\varepsilon\to 0} \int_{B(x,\varepsilon)^c}
\left(h_\lambda(y-z)-h_\lambda(x-z)\right)\, j(|x-y|)\, dy}\\
&=&\lim_{\varepsilon\to 0} \int_{B(x-z, \varepsilon)^c}
\left(h_\lambda(y')-h_\lambda(x-z)\right)\, j(|(x-z)-y'|)\, dy'
\,=\,{\mathcal A} h_\lambda(x-z)\, .
\end{eqnarray*}
Moreover, by \eqref{e:ssp6} it holds that for every $\lambda \ge
\lambda_*$,
$(\Delta+ {\mathcal A}) h_\lambda \le -\frac{1}{4} \lambda^2 \delta_0^{-2}$
a.e.
on $D(\lambda^{-1} \delta_*, \lambda^{-1} \delta_0)$. Next,
\begin{eqnarray*}
\lefteqn{\int_{B(x, \varepsilon)^c} (h_\lambda^{(k)}(y)-
h_\lambda^{(k)}(x))\, j(|x-y|)\, dy}\\
&=&\int_{|x-y|>\varepsilon} \left(\int_{{\mathbb R}^d}f_k(z)
(h_\lambda(y-z)-h_\lambda(x-z))\, dz\right)\, j(|x-y|)\, dy\\
&=&\int_{B(0, 2^{-k})}f_k(z) \left(\int_{B(x, \varepsilon)^c}
\left(h_\lambda(y-z)-h_\lambda(x-z)\right)\, j(|x-y|)\, dy\right)\,
dz.
\end{eqnarray*}
By letting $\varepsilon \to 0$ and using the dominated convergence
theorem, we get that for every
$\lambda \ge \lambda_*$ and $k$ large enough,
$$
(\Delta + {\mathcal A}) h_\lambda^{(k)}(x)=\int_{|z|<2^{-k}}
f_k(z)(\Delta+{\mathcal A}) h_\lambda(x-z)\, dz\le -\frac{1}{4} \lambda^2
\delta_0^{-2}\int_{|z|<2^{-k}} f_k(z) \, dz =-\frac{1}{4} \lambda^2
\delta_0^{-2}\, .
$$
By using Dynkin's formula \eqref{e:*334}, the estimates \eqref{e:*333} and the fact
that $h_\lambda^{(k)}$ are in $C^\infty_c({\mathbb R}^d)$, and by letting
$k\to \infty$ we get for every $\lambda \ge
\lambda_*$ and $x \in
D(\lambda^{-1} \delta_*, \lambda^{-1} \delta_0)$ with $\widetilde x=0$,
\begin{eqnarray}\label{e:ssp7}
{\mathbb E}_x[h_\lambda (X_{\tau_{D(\lambda^{-1} \delta_*, \lambda^{-1}
\delta_0)}})]-\lambda V( \delta_D(x)) &\le & {\mathbb E}_x[h_\lambda
(X_{\tau_{D(\lambda^{-1} \delta_*, \lambda^{-1}
\delta_0)}})]-h_\lambda(x)\nonumber \\
& \le & - \frac{1}{4} \lambda^2 \delta_0^{-2}
{\mathbb E}_x[\tau_{D(\lambda^{-1} \delta_*, \lambda^{-1} \delta_0)}]\, .
\end{eqnarray}
It is easy to see that $h_\lambda \ge 0$. In fact, if $y \in (B(0,
R) \cap D)^c$, then both $h(y)$ and $g_\lambda(y)$ are zero. If
$y \in B(0, R) \cap D$ and $\rho(y ) \ge 4 \lambda^{-1} \delta_0$,
then $g_\lambda(y)=0$. Finally, if $y \in B(0, R) \cap D$ and
$\rho(y ) \le 4 \lambda^{-1} \delta_0$, then, since $g(y) \le y_d^2$
by \eqref{e:ssp101}, we have from \eqref{e:ssp1},
$$
h_\lambda(y) = \lambda V(\delta_D(y)) - g( \lambda \delta_0^{-1} \widetilde
y, \lambda \delta_0^{-1} \rho(y)) \ge \lambda V(\delta_D(y)) -
\lambda^2 \rho(y)^2 \ge 0.
$$
Therefore, from \eqref{e:ssp7},
\begin{equation}\label{e:ssp8}
V( \delta_D(x)) \,\ge\, \frac{1}{4}\, \lambda \,\delta_0^{-2}
\,{\mathbb E}_x[\tau_{D(\lambda^{-1} \delta_*, \lambda^{-1} \delta_0)}].
\end{equation}
Since $B(0, (1+\Lambda)^{-1} \delta_* \lambda^{-1}) \cap D \subset
D(\lambda^{-1} \delta_*, \lambda^{-1} \delta_0)$, using Lemma
\ref{l2.1} and \eqref{e:ssp8}, we have that for every $\lambda \ge
\lambda_*$ and $x \in B(0, 2^{-1}(1+\Lambda)^{-1} \delta_*
\lambda^{-1})$ with $
\widetilde x=0$,
$$
{\mathbb P}_{x}\left(X_{ \tau_{ D_Q ( \lambda^{-1} \delta_*, \lambda^{-1}
\delta_0)}} \in D\right) \,\le\, c_7\, \lambda^2
\,{\mathbb E}_x[\tau_{D(\lambda^{-1} \delta_*, \lambda^{-1} \delta_0)}] \,\le
\,c_8 \,\lambda\, V( \delta_D(x)).
$$
We have proved the lemma with $\lambda_0:=\lambda_*\delta_0^{-1}$ and $\kappa_0:=\delta_*/\delta_0=2^{-1}c_4^{-1/2}$.
{\hfill $\Box$ \bigskip}
\begin{lemma}\label{L:200}
There is a constant $c=c(R, \Lambda )>0$ such that for every
$\lambda \ge \lambda_0$, $\kappa \in (0, 1]$, $Q \in
\partial D$ and $x \in D_Q ( \kappa\lambda^{-1} , \lambda^{-1} )$ with $\widetilde x=0$,
\begin{equation}\label{e:L:1}
{\mathbb P}_{x}\left(X_{ \tau_{ D_Q ( \kappa \lambda^{-1} , \lambda^{-1} )}} \in
D_Q ( 2 \kappa\lambda^{-1} , \lambda^{-1} )\right) \ge c \lambda V(
\delta_D (x)).
\end{equation}
\end{lemma}
{\medskip\noindent {\bf Proof. }} Fix $\lambda \ge \lambda_0$ and $\kappa \in (0, 1]$.
For simplicity we denote $D_Q(\kappa\lambda^{-1} , \lambda^{-1})$ by $\widehat{D}$.
Further, let
$$
B=\left\{y\in D: \rho_Q(y)=\kappa\lambda^{-1}
\mbox{ and } |\widetilde{y}|<
\lambda^{-1}\right\}
$$
be the upper boundary of $\widehat{D}$.
Let $\tau^W_{\widehat{D}}$ be the first time the Brownian motion $W$ exits $\widehat{D}$
and $W^{\widehat{D}}$
be the killed Brownian motion in $\widehat{D}$.
Let $Y=(Y_t:\, t\ge 0)$ be the subordinate killed Brownian motion defined by
$Y_t=W^{\widehat{D}}_{S_t}$.
Let $\zeta$ denote the
lifetime of $Y$.
Recall that $u$ is the potential density of the subordinator $S$. It
follows from \cite[Corollary 4.4]{SV08} that
$$
{\mathbb P}_x(X_{\tau_{\widehat{D}}}\in B)\,
\ge\, {\mathbb P}_x(Y_{\zeta-}\in B)\,=\,{\mathbb E}_x\left[u(\tau^W_{\widehat{D}});
W_{\tau^W_{\widehat{D}}}\in B\right].
$$
Thus, since $u$ is decreasing, for any $t>0$,
\begin{align*}
&{\mathbb P}_x(X_{\tau_{\widehat{D}}}\in B)\,\ge \,{\mathbb E}_x\left[u(\tau^W_{\widehat{D}});
W_{\tau^W_{\widehat{D}}}\in B, \tau^W_{\widehat{D}}\le t \right] \,\ge\,
u(t){\mathbb P}_x\big(W_{\tau^W_{\widehat{D}}}\in B, \tau^W_{\widehat{D}}\le t \big)\\
&=\, u(t)\left[{\mathbb P}_x\big(W_{\tau^W_{\widehat{D}}}\in B\big)-{\mathbb P}_x\big(\tau^W_{\widehat{D}}> t\big)\right] \,\ge \,
u(t)\left[{\mathbb P}_x\big(W_{\tau^W_{\widehat{D}}}\in B\big) -t^{-1}
{\mathbb E}_x\big[\tau^W_{\widehat{D}}\big]\right]\, .
\end{align*}
Now we use the following two estimates which
are valid for the Brownian motion (for example, see \cite[Lemma
3.4]{CKSV1} with $a=0$). There exist constants $c_1>0$ and
$c_2>0$ (independent of $\lambda \ge \lambda_0 $) such that
$
{\mathbb P}_x\big(W_{\tau^W_{\widehat{D}}}\in B \big) \ge c_1 \lambda
\delta_D(x)$ and $ {\mathbb E}_x\big[\tau^W_{\widehat{D}}
\big]\le c_2 \lambda^{-1} \delta_D(x)\, .
$
Then, by choosing $t_0>0$ so that $c_1-t^{-1}_0 c_2 \lambda^{-2} \ge
c_1-t^{-1}_0 c_2 \lambda_0^{-2} \ge c_1/2=:c_3$, we get
$$
{\mathbb P}_x\big(X_{\tau_{\widehat{D}}}\in B \big)\ge u(t_0)(c_1 -c_2
t_0^{-1}\lambda^{-2})\lambda \delta_D(x)\ge c_3 u(t_0) \lambda
\delta_D(x)\, .
$${\hfill $\Box$ \bigskip}
\section{Carleson estimate and Boundary Harnack principle}
In this section, we give the proof of the boundary Harnack principle
for $X$. We first prove the Carleson estimate for $X$ on Lipschitz
open sets.
We recall that an open set $D$ in $\bR^d$ is said to be a Lipschitz
open set if there exist a localization radius $R_2>0$ and a constant
$\Lambda_1 >0$ such that for every $Q\in
\partial D$, there exist a Lipschitz function $\psi=\psi_Q:
\bR^{d-1}\to \bR$ satisfying $\psi (0)= 0$, $| \psi (x)- \psi (y)|
\leq \Lambda_1 |x-y|$, and an orthonormal coordinate system $CS_Q$:
$y=(y_1, \dots, y_{d-1}, y_d)=:(\widetilde y, \, y_d)$ with its origin at
$Q$ such that
$$
B(Q, R_2)\cap D=\{ y=(\widetilde y, y_d)\in B(0, R_2) \mbox{ in } CS_Q: y_d
> \psi (\widetilde y) \}.
$$
The pair $(R_2, \Lambda_1)$ is called the characteristics of the
Lipschitz open set $D$.
Without loss of generality, we will assume throughout this section that
$R_2<1$.
Note that a Lipschitz open set can be
unbounded and disconnected. For a Lipschitz open set $D$ and every
$Q\in \partial D$ and $ x \in B(Q, R_2)\cap D$, we define
$$
\rho_Q (x) := x_d - \psi_Q (\tilde x)\, ,
$$
where $(\tilde x, x_d)$ are the coordinates of $x$ in $CS_Q$.
The proof of the next lemma is similar to that of \cite[Lemma 4.1]{CKSV1}.
\begin{lemma}\label{lower bound} Let $D\subset {\mathbb R}^d$ be a Lipschitz
open set with characteristics $(R_2, \Lambda_1)$. There exists
a constant $\delta=\delta(R_2, \Lambda_1)>0$ such that for all
$Q \in \partial D$ and $x\in D$ with $\rho_Q(x) < R_2/2$,
$$
{\mathbb P}_x(X_{\tau(x)}\in D^c)\ge \delta\, ,
$$
where $\tau(x):=\tau_{D\cap B(x,2\rho_Q(x))}=\inf\{t>0:\,
X_t\notin D\cap B(x,2\rho_Q(x))\}$.
\end{lemma}
{\medskip\noindent {\bf Proof. }}
Let $D_x:=D \cap B(x,2\rho_Q(x))$ and $W^{D_x}$ be the killed Brownian
motion in $D_x$. Here $W$ denotes the Brownian motion in ${\mathbb R}^d$. As in the proof of
Lemma \ref{L:200}, we define the subordinate killed Brownian motion
$Y=(Y_t:\, t\ge 0)$ in $D_x$ by $Y_t:=W^{D_x}(S_t)$.
We will use $\zeta$ to denote the
lifetime of $Y$ and let $C_x:=\partial D\cap
B(x,2\rho_Q(x))$
and $\tau^W _{U} :=\inf\{t>0:\, W_t\notin {U}\}$.
Since, by \cite{SV08},
$
{\mathbb P}_x\left(X_{\tau(x)}\in C_x\right)\ge
{\mathbb P}_x\left(Y_{\zeta-}\in C_x\right)={\mathbb E}_x\left[u(\tau^W _{D_x});\, W_{\tau^W _{D_x}}\in
C_x\right],
$
we have
\begin{eqnarray}\label{ineq1}
&&{\mathbb P}_x\left(X_{\tau(x)}\in D^c\right)\, \ge\, {\mathbb P}_x\left(X_{\tau(x)}\in
C_x\right) \ge
{\mathbb E}_x\left[u(\tau^W _{D_x}); \, W_{\tau^W _{D_x}}\in C_x,
\tau^W _{D_x} \le t\right]\nonumber \\
&&\ge u(t){\mathbb P}_x\left[W_{\tau^W _{D_x}}\in C_x, \tau^W _{D_x}
\le t\right] \ge u(t)\left({\mathbb P}_x(W_{\tau^W _{D_x}}\in
C_x)-{\mathbb P}_x(\tau^W _{D_x}>t)\right), \quad t>0.
\end{eqnarray}
By the fact that $D$ is a Lipschitz open set, there exists
$c_1=c_1(R_2, \Lambda_1)>0$ such that
\begin{equation}\label{ineq2}
{\mathbb P}_x(W_{\tau^W _{D_x}}\in C_x)\ge c_1\, .
\end{equation}
(See the proof of \cite[Lemma 4.1]{CKSV1}.)
Since
$$
{\mathbb P}_x(\tau^W _{D_x} >t)\le \frac{{\mathbb E}_x[\tau^W _{D_x}]}{t} \le
\frac{{\mathbb E}_x[\tau^W _{B(x, 2\rho_Q(x))}]}{t} \le c_2
\frac{(\rho_Q(x))^2}{t}\le c_2\frac{R_2^2}{t},$$
by using (\ref{ineq2}) and (\ref{ineq1}), we obtain that
$$
{\mathbb P}_x\left(X_{\tau(x)}\in D^c\right) \ge u(t_0)\left({\mathbb P}_x(W_{\tau^W _{D_x}}\in
C_x)-{\mathbb P}_x(\tau^W _{D_x}>t_0)\right)
\ge u(t_0) \left(c_1-c_2\frac{R_2^2}{t_0}\right) \ge c_1 u(t_0)/2 >0,
$$
where $t_0=t_0(R_2, \Lambda_1)>0$ is chosen so that $c_1-c_2
R_2^2/t_0\ge c_1/2$. The lemma is thus proved. {\hfill $\Box$ \bigskip}
Suppose that $D$ is an open set and that $U$ and $V$ are bounded
open sets with $V \subset \overline{V} \subset U$ and $ D \cap V
\not= \emptyset$. If $f$ vanishes continuously on $D^c\cap U$, then
by a finite covering argument, it is easy to see that $f$ is bounded
in an open neighborhood of $\partial D\cap V$. The proof of the next
result is the same as that of \cite[Lemma 4.2]{CKSV1}. So we omit
the proof.
\begin{lemma}\label{l:regularity}
Let $D$ be an open set and $U$ and $V$ be bounded open sets with $V
\subset \overline{V} \subset U$ and $ D \cap V \not= \emptyset$.
Suppose $f$ is a nonnegative function in ${\mathbb R}^d$ that is harmonic in
$D\cap U$ with respect to $X$ and vanishes continuously on $D^c\cap
U$. Then $f$ is regular harmonic in $D\cap V$ with respect to $X$,
i.e.,
\begin{equation}\label{e:regularity}
f(x)={\mathbb E}_x\left[ f(X_{\tau_{D\cap V}})\right] \qquad \hbox{
for all }x\in D\cap V\, .
\end{equation}
\end{lemma}
\begin{thm}[Carleson estimate]\label{carleson}
Let $D\subset {\mathbb R}^d$ be a Lipschitz open set with the characteristics
$(R_2, \Lambda_1)$. Then there exists a positive constant
$A=A(R_2, \Lambda_1)$ such that for every $Q\in
\partial D$, $0<r<R_2/2$, and any nonnegative function
$f$ in ${\mathbb R}^d$ that is harmonic in $D \cap B(Q, r)$ with respect to
$X$ and vanishes continuously on $ D^c \cap B(Q, r)$, we have
\begin{equation}\label{e:carleson}
f(x)\le A f(x_0) \qquad \hbox{for } x\in D\cap B(Q,r/2),
\end{equation}
where $x_0\in D
\cap B(Q,r)$ with $\rho_Q(x_0)=r/2$.
\end{thm}
{\medskip\noindent {\bf Proof. }} Since $D$ is Lipschitz and $r<R_2/2$, by
Proposition \ref{uhp}
and a standard chain argument, it suffices to prove
(\ref{e:carleson}) for $x\in D\cap B(Q,r/12)$ and $\widetilde x_0 = \widetilde Q$.
Without loss of generality, we may assume that $f(x_0)=1$. In this
proof, the constants $\delta, \beta, \eta$ and $c_i$'s are always
independent of $r$.
Let $\nu=\nu(3) \vee 2 $ where $\nu(3)$ is the constant in \eqref{H:1n} with $K=3$,
choose $0<\gamma < (\nu^{-1} \wedge (1-\nu^{-1}))$ and let
$$
B_0(x)=D\cap B(x,2\rho_Q(x))\, ,\qquad B_1(x)=B(x,r^{1-\gamma}
\rho_Q(x)^{\gamma})\,
$$
and
$$
B_2=B(x_0,\rho_Q(x_0)/3)\, ,\qquad B_3=B(x_0, 2\rho_Q(x_0)/3).
$$
By Lemma \ref{lower bound}, there exists $\delta=\delta(R_2,
\Lambda_1)>0$ such that
\begin{equation}\label{e:c:1}
{\mathbb P}_x(X_{\tau_{B_0(x)}}\in D^c)\ge \delta\, ,\quad x\in B(Q,r/4)\, .
\end{equation}
By the Harnack inequality and a chain argument, there exists
$\beta>0$ such that
\begin{equation}\label{e:c:2}
f(x)<(\rho_Q(x)/r)^{-\beta} f(x_0)\, ,\quad x\in D\cap B(Q,r/4)\, .
\end{equation}
In view of Lemma \ref{l:regularity}, $f$ is regular harmonic in
$B_0(x)$ with respect to $X$. So
\begin{equation}\label{e:c:3}
f(x)={\mathbb E}_x\big[f\big(X_{\tau_{B_0(x)}}\big); X_{\tau_{B_0(x)}}\in
B_1(x)\big]+ {\mathbb E}_x\big[f\big(X_{\tau_{B_0(x)}}\big);
X_{\tau_{B_0(x)}}\notin B_1(x)\big] \qquad \hbox{for } x\in B(Q,
r/4) .
\end{equation}
We first show that there exists $\eta>0$ such that
\begin{equation}\label{e:c:4}
{\mathbb E}_x\big[f\big(X_{\tau_{B_0(x)}}\big); X_{\tau_{B_0(x)}}\notin
B_1(x)\big]\le f(x_0) \quad \hbox{if } x\in D \cap B(Q,r/12) \hbox{
with } \rho_Q(x) < \eta r\, .
\end{equation}
Let $\eta_0 :=2^{-2 \nu }$.
Then, since $\gamma < 1-\nu^{-1}$,
for $\rho_Q(x)< \eta_0 r$ we have
$$
2\rho_Q(x) \le r^{1-\gamma} \rho_Q(x)^{\gamma} - 2\rho_Q(x).
$$
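Indeed, the displayed inequality is equivalent to $\rho_Q(x)^{1-\gamma} \le
4^{-1} r^{1-\gamma}$, which holds because $\rho_Q(x) < \eta_0 r = 4^{-\nu} r
\le 4^{-1/(1-\gamma)} r$, the last inequality being a consequence of
$1-\gamma > \nu^{-1}$.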
Thus if $x\in D \cap B(Q,r/12)$ with $\rho_Q(x) < \eta_0r$, then
$|x-y|\le 2|z-y|$ for $z\in B_0(x)$, $y\notin B_1(x)$. Moreover, by
the triangle inequality, $|x-y|\le |x-z|+|z-y|\le 1+|z-y|$. Thus we
have by \eqref{H:1}, \eqref{H:2}, \eqref{e:levy} and Lemma
\ref{L:2.00}
\begin{align}\label{e:c:5}
&{\mathbb E}_x\big[f\big(X_{\tau_{B_0(x)}}\big); X_{\tau_{B_0(x)}} \notin
B_1(x)\big]\nonumber\\
=&{\mathbb E}_x \int_0^{\tau_{B_0(x)}} \int_{2>|y-x|>r^{1-\gamma}
\rho_Q(x)^{\gamma}}j(|X_t-y|)f(y)\, dy\, dt
\nonumber +{\mathbb E}_x \int_0^{\tau_{B_0(x)}}\int_{|y-x|>2}j(|X_t-y|)f(y)\, dy\, dt
\nonumber \\
\le &c_1 {\mathbb E}_x [\tau_{B_0(x)}]\left(\int_{2>|y-x|>r^{1-\gamma}
\rho_Q(x)^{\gamma}}j(|x-y|)f(y)\, dy+\int_{|y-x|>2}j(|x-y|)f(y)\, dy\right)
\nonumber \\
\le &c_1 c_2 \rho_Q(x)^2
\left(\int_{|y-x|>r^{1-\gamma}\rho_Q(x)^{\gamma}, |y-x_0|>2
\rho_Q(x_0)/3}
j(|x-y|)f(y)\, dy \right.\nonumber \\
&\quad \quad \quad \quad \quad +\left.\int_{|y-x_0|\le
2\rho_Q(x_0)/3}j(|x-y|)f(y)\, dy\right)\,=:\, c_3 \rho_Q(x)^2
(I_1+I_2)\, .
\end{align}
On the other hand, for $z\in B_2$ and $y\notin B_3$, we have
$|z-y|\le |z-x_0|+|x_0-y|\le \rho_Q(x_0)/3+|x_0-y|\le 2|x_0-y|$ and
$|z-y|\le |z-x_0|+|x_0-y|\le 1+|x_0-y|$. We have again by
\eqref{e:levy}, \eqref{H:1}, \eqref{H:2} and Lemma \ref{L:2.00}
\begin{align}\label{e:c:6}
&f(x_0)\,\ge\, {\mathbb E}_{x_0}\left[f(X_{\tau_{B_2}}), X_{\tau_{B_2}}\notin B_3\right]\nonumber \\
&\ge {\mathbb E}_{x_0} \int_0^{\tau_{B_2}} \left(\int_{2>|y-x_0|>2\rho_Q(x_0)/3} j(|X_t-y|)f(y)\, dy
+ \int_{|y-x_0|\ge 2} j(|X_t-y|)f(y)\, dy\right) dt\nonumber \\
&\ge c_4{\mathbb E}_{x_0} [\tau_{B_2}] \left(\int_{2>|y-x_0|>2\rho_Q(x_0)/3} j(|x_0-y|)f(y)\, dy
+ \int_{|y-x_0|\ge 2} j(|x_0-y|)f(y)\, dy\right)\nonumber \\
&\ge c_5 \rho_Q(x_0)^2 \int_{|y-x_0|>2\rho_Q(x_0)/3} j(|x_0-y|)f(y)\, dy\, .
\end{align}
Suppose now that $|y-x|\ge r^{1-\gamma}\rho_Q(x)^{\gamma}$ and $x\in
B(Q,r/4)$. Then
$$
|y-x_0|\le |y-x|+r\le
|y-x|+r^{\gamma}\rho_Q(x)^{-\gamma}|y-x|\le 2r^{\gamma}\rho_Q(x)^{-\gamma}|y-x|.
$$
Thus, using \eqref{H:1n}, we
get for $|x-y| \le 2$,
\begin{eqnarray}\label{e:gf1} j(|y-x| )
\le c_7 (\rho_Q(x)/r)^{-\nu \gamma} j(|y-x_0| ).
\end{eqnarray}
Now, using \eqref{H:1}, \eqref{H:2} and \eqref{e:gf1},
\begin{eqnarray}\label{e:c:7}
I_1&\le &c_7 \int_{R_0/2>|y-x|>r^{1-\gamma}\rho_Q(x)^{\gamma},
|y-x_0|>2\rho_Q(x_0)/3}(\rho_Q(x)/r)^{-
\nu \gamma}
j(|y-x_0| ) f(y)\, dy\nonumber\\
& &+c_8\int_{|y-x|\ge R_0/2, |y-x_0|>2\rho_Q(x_0)/3}j(|x_0-y|)f(y)\,
dy\nonumber\\
&\le & c_9 \left( (\rho_Q(x)/r)^{-
\nu \gamma}+1\right)
\int_{ |y-x_0|>2\rho_Q(x_0)/3}j(|x_0-y|)\, f(y)\, dy\nonumber\\
&\le &c_5^{-1} c_9\rho_Q(x_0)^{-2}\left( (\rho_Q(x)/r)^{-
\nu \gamma}+1\right)f(x_0)\nonumber\\
&\le& 2c_5^{-1} c_9 (\rho_Q(x)/r)^{-
\nu \gamma}
\rho_Q(x_0)^{-2} f(x_0) \, ,
\end{eqnarray}
where the second to last inequality is due to \eqref{e:c:6}.
If $|y-x_0|<2\rho_Q(x_0)/3$, then $|y-x|\ge |x_0-Q|-|x-Q|-|y-x_0|
>\rho_Q(x_0)/6$. This together with the Harnack inequality implies that
\begin{eqnarray}\label{e:c:8}
I_2 &\le& c_{10} \int_{|y-x_0|\le 2\rho_Q(x_0)/3}j(|x-y|) f(x_0)\, dy
\le c_{10} f(x_0)\int_{|y-x|>\rho_Q(x_0)/6} j(|x-y|)\, dy\nonumber\\
&=& c_{10} f(x_0)\left(\int_{R_0>|z|>\rho_Q(x_0)/6} j(|z|)\, dz +
\int_{R_0 \le |z|} j(|z|)\, dz \right)\nonumber\\
&\le & c_{10} f(x_0)\left(\int_{R_0>|z|>\rho_Q(x_0)/6} j(|z|)\, dz +
c_{11} \right).
\end{eqnarray}
Combining \eqref{e:c:5}, \eqref{e:c:7} and \eqref{e:c:8} we obtain
\begin{eqnarray}\label{e:c:9}
&&{\mathbb E}_x[f(X_{\tau_{B_0(x)}});\, X_{\tau_{B_0(x)}}\notin B_1(x)]\nonumber\\
&\le &c_{12} f(x_0)\Big(\rho_Q(x)^2(\rho_Q(x)/r)^{-
\gamma
\nu}\rho_Q(x_0)^{-2}
\nonumber \\
&&\quad
+\,(\rho_Q(x)/r)^2(\rho_Q(x_0)/6)^2 \int_{R_0>|z|>\rho_Q(x_0)/6} j(|z|)\, dz +(\rho_Q(x)/r)^2 r^2 \Big)\nonumber \\
&\le &c_{13} f(x_0)\left((\rho_Q(x)/r)^{2-
\gamma
\nu}
+ (\rho_Q(x)/r)^2 \Big(\int_{R_0>|z|>\rho_Q(x_0)/6} |z|^2j(|z|)\, dz +1\Big) \right)\nonumber \\
&\le &c_{14} f(x_0)\left((\rho_Q(x)/r)^{2-
\gamma
\nu}
+ (\rho_Q(x)/r)^2 \right),
\end{eqnarray}
where we used the fact that
$\rho_Q(x_0)=r/2$. Since
$2-\gamma
\nu>0$, choose now $\eta\in (0, \eta_0)$ so that
$$
c_{14}\,\left(\eta^{2-\gamma
\nu} +\eta^2
\right)\,\le\, 1\, .
$$
Then for $x\in D \cap B(Q,r/12)$ with $\rho_Q(x) < \eta r$, we
have by \eqref{e:c:9},
\begin{eqnarray*}
{\mathbb E}_x\left[f(X_{\tau_{B_0(x)}});\, X_{\tau_{B_0(x)}}\notin
B_1(x)\right] &\le & c_{14}\,
f(x_0)\left(\eta^{2-\gamma
\nu}+\eta^2 \right) \le
f(x_0)\, .
\end{eqnarray*}
We now prove the Carleson estimate \eqref{e:carleson} for $x\in
D\cap B(Q, r/12)$ by a method of contradiction. Recall that
$f(x_0)=1$. Suppose that there exists $x_1\in D\cap B(Q,r/12)$ such
that $f(x_1)\ge K>\eta^{-\beta}\vee (1+\delta^{-1})$, where $K$ is a
constant to be specified later. By \eqref{e:c:2} and the assumption
$f(x_1)\ge K>\eta^{-\beta}$, we have
$(\rho_Q(x_1)/r)^{-\beta}>f(x_1)\ge K> \eta^{-\beta}$, and hence
$\rho_Q(x_1)<\eta r$.
By (\ref{e:c:3}) and (\ref{e:c:4}),
$$
K\le f(x_1)\le {\mathbb E}_{x_1}\left[f(X_{\tau_{B_0(x_1)}});
X_{\tau_{B_0(x_1)}} \in B_1(x_1) \right]+1\, ,
$$
and hence
$$
{\mathbb E}_{x_1}\left[f(X_{\tau_{B_0(x_1)}}); X_{\tau_{B_0(x_1)}} \in
B_1(x_1)\right] \ge f(x_1)-1 > \frac{1}{1+\delta}\, f(x_1)\, .
$$
In the last inequality of the display above we used the assumption
that $f(x_1)\ge K>1+\delta^{-1}$. If $K \ge 2^{\beta/\gamma}$, then
$D^c\cap B_1(x_1)\subset D^c \cap B(Q,r)$. By using the assumption
that $f=0$ on $D^c\cap B(Q, r)$, we get from \eqref{e:c:1}
\begin{eqnarray*}
{\mathbb E}_{x_1}[f(X_{\tau_{B_0(x_1)}}), X_{\tau_{B_0(x_1)}}\in B_1(x_1)]
&=&{\mathbb E}_{x_1}[
f(X_{\tau_{B_0(x_1)}}), X_{\tau_{B_0(x_1)}}\in B_1(x_1)\cap D]\\
& \le& {\mathbb P}_{x_1}(X_{\tau_{B_0(x_1)}}\in
D) \, \sup_{B_1(x_1)}f \le (1-\delta) \, \sup_{B_1(x_1)}f \, .
\end{eqnarray*}
Therefore, $\sup_{B_1(x_1)}f> f(x_1)/((1+\delta)(1-\delta))$, i.e.,
there exists a point $x_2\in D$ such that
$$
|x_1-x_2|\le r^{1-\gamma}\rho_Q(x_1)^{\gamma} \quad \hbox{ and }
\quad
f(x_2)>\frac{1}{1-\delta^2}\, f(x_1)\ge \frac{1}{1-\delta^2}\, K\, .
$$
By induction, if $x_k\in D\cap B(Q, r/12)$ with $f(x_k)\geq
K/(1-\delta^2)^{k-1}$ for $k\ge 2$, then there exists $x_{k+1}\in D$
such that
\begin{equation}\label{e:c:10}
|x_k-x_{k+1}|\le r^{1-\gamma}\rho_Q(x_k)^{\gamma} \quad \hbox{ and
} \quad f(x_{k+1}) > \frac{1}{1-\delta^2}\, f(x_k)>
\frac{1}{(1-\delta^2)^k}\, K\, .
\end{equation}
{}From (\ref{e:c:2}) and (\ref{e:c:10}) it follows that
$\rho_Q(x_{k})/r \le (1-\delta^2)^{(k-1)/\beta}K^{-1/\beta}$, for
every $k\ge 1$. Therefore,
\begin{align*}
&|x_k-Q|\,\le\,|x_1-Q|
+\sum_{j=1}^{k-1}|x_{j+1}-x_j|\,\le\, \frac{r}{12} +
\sum_{j=1}^{\infty} r^{1-\gamma}\rho_Q(x_j)^{\gamma}\\
&\le \frac{r}{12}+r^{1-\gamma}\sum_{j=1}^{\infty}(1-\delta^2)^{(j-1)
\gamma/\beta}K^{-\gamma/\beta}r^{\gamma}\,=\,\frac{r}{12}+
r^{1-\gamma}r^{\gamma}K^{-\gamma/\beta}
\sum_{j=0}^{\infty}(1-\delta^2)^{j\gamma/\beta}\\
&=\frac{r}{12}+ r K^{-\gamma/\beta}\,
\frac{1}{1-(1-\delta^2)^{\gamma/\beta}}.
\end{align*}
Choose
$$
K=\eta^{-\beta}\vee (1+\delta^{-1})\vee 12^{\beta/\gamma}(1-
(1-\delta^2)^{\gamma/\beta})^{-\beta/\gamma}.
$$
Then $K^{-\gamma/\beta}\, (1-(1-\delta^2)^{\gamma/\beta})^{-1}\le
1/12$, and hence $x_k\in D\cap B(Q,r/6)$ for every $k\ge 1$. Since
$\lim_{k\to \infty}f(x_k)=+\infty$, this contradicts the fact that
$f$ is bounded on $B(Q,r/2)$. This contradiction shows that $f(x)<
K$ for every $x\in D\cap B(Q, r/12)$. This completes the proof of
the theorem.
{\hfill $\Box$ \bigskip}
\noindent {\bf Proof of Theorem \ref{t:main} }.
We recall that $R_1=R/(4\sqrt{1 + (1+\Lambda)^2})$ and $\lambda_0
>2 R_1^{-1}$ and $\kappa_0 \in (0,1)$ are the constants in the statement of
Lemma \ref{L:2}.
Since $D$ is a $C^{1,1}$ open set and $r<R$, by the Harnack
inequality and a standard chain argument, it
suffices to prove \eqref{e:bhp_m} for $x,y \in D \cap B(Q,2^{-1}
r\kappa_0\lambda_0^{-1})$. In this proof, the constants $\eta$ and $c_i$'s
are always independent of $r$.
For any $r\in (0, R]$ and
$x\in D\cap B(Q, 2^{-1} r\kappa_0\lambda_0^{-1})$, let $Q_x$ be
the point $Q_x \in \partial D$ so that $|x-Q_x|=\delta_{D}(x)$ and
let $x_0:=Q_x+\frac{r}{8}(x-Q_x)/|x-Q_x|$. We choose a
$C^{1,1}$-function $\varphi: \bR^{d-1}\to \bR$ satisfying $\varphi
(0)= 0$, $\nabla\varphi (0)=(0, \dots, 0)$, $\| \nabla \varphi
\|_\infty \leq \Lambda$, $| \nabla \varphi (y)-\nabla \varphi (z)|
\leq \Lambda |y-z|$, and an orthonormal coordinate system $CS$ with
its origin at $Q_x$ such that
$$
B(Q_x, R)\cap D=\{ y=(\widetilde y, y_d) \in B(0, R) \mbox{ in } CS: y_d >
\varphi (\widetilde y) \}.
$$
In the coordinate system $CS$ we have $\widetilde x = \widetilde 0$ and $x_0=(\widetilde
0, r/8)$. For any $b_1, b_2>0$, we define
$$
D(b_1, b_2):=\left\{ y=(\widetilde y, y_d) \mbox{ in } CS: 0<y_d-\varphi(\widetilde
y)<b_1r\kappa_0\lambda_0^{-1}, \ |\widetilde y| < b_2 r\lambda_0^{-1} \right\}.
$$
It is easy to see that
$D(2, 2)\subset D\cap B(Q, r/2)$.
In fact, since $\Lambda \ge 1$ and $R \le 1$,
for every $z \in D(2, 2)$,
\begin{align*}
&|z-Q| \le |Q-x|+|x-Q_x|+|Q_x-z| \le |Q-x|+|x-Q_x|+ |z_d- \varphi(\widetilde
z)|+ |\varphi(\widetilde z)|\\
&< r \lambda_0^{-1} ((1+\Lambda)+ 4)
<2^{-1} r R((1+\Lambda)+ 4)/(4\sqrt{1 + (1+\Lambda)^2})
\le \frac{r}{2}.
\end{align*}
Thus if $f$ is a nonnegative function on ${\mathbb R}^d$ that is harmonic in
$D\cap B(Q, r)$ with respect to $X$ and vanishes continuously in
$D^c\cap B(Q, r)$, then, by Lemma \ref{l:regularity}, $f$ is regular
harmonic in $D\cap B(Q,r/2)$ with respect to $X$, hence also in
$D(2, 2)$. Thus by the Harnack
inequality, we have
\begin{eqnarray}
f(x) &= & {\mathbb E}_x\left[f\big(X_{\tau_{ D(1,1)}}\big)\right] \ge
{\mathbb E}_x\left[f\big(X_{\tau_{ D(1,1)}}\big); X_{
\tau_{ D(1,1)}} \in D(2,1)\right]\label{e:BHP2}\\
&\ge& c_1 f(x_0) {\mathbb P}_x\Big( X_{\tau_{ D(1,1)}} \in D (2,1)\Big)
\ge c_2 f(x_0) \delta_D(x) /r.\nonumber
\end{eqnarray}
In the last inequality above we have used \eqref{e:L:1}.
Let $w=(\widetilde 0, r\lambda_0^{-1}\kappa_0/4)$. Then it is easy to see that there
exists a constant $\eta=\eta(\Lambda, \delta_0)\in (0, 1/4)$ such
that $B(w, \eta r\lambda_0^{-1}\kappa_0)\subset D(1, 1)$. By \eqref{H:1},
\eqref{H:2}, \eqref{e:levy} and Lemma \ref{L:2.00},
\begin{align*}
&f(w) \,\ge\, {\mathbb E}_{w}\left[f\big(X_{\tau_{ D(1,1)}}\big);
X_{\tau_{ D(1,1)}} \notin D(2,2)\right]\,=\,
{\mathbb E}_{w}
\int_0^{\tau_{ D(1,1)}}
\int_{{\mathbb R}^d\setminus D(2,2)} f(y) j(|X_t-y|)dydt\\
&\ge\, c_3\,{\mathbb E}_{w}\big[\tau_{B(w, \eta r\lambda_0^{-1}\kappa_0 )}\big]
\int_{{\mathbb R}^d\setminus D(2,2)} f(y) j(|w-y|) dy\,\ge\, c_4\, r^2\,
\int_{{\mathbb R}^d\setminus D(2,2)} f(y) j(|w-y|) dy.
\end{align*}
Hence by \eqref{H:1}, \eqref{H:2}, \eqref{e:L:3},
\begin{align*}
&{\mathbb E}_{x}\left[f \left(X_{\tau_{ D(1,1)}}\right); \,
X_{\tau_{ D(1,1)}}
\notin D(2,2)\right]\,=\, {\mathbb E}_{x}
\int_0^{\tau_{ D(1,1)}}
\int_{{\mathbb R}^d\setminus
D(2,2)} f(y) j(|X_t-y|)dydt\\
&\le\, c_5\, {\mathbb E}_x[\tau_{ D(1,1)}] \int_{{\mathbb R}^d\setminus
D(2,2)} f(y) j(|w-y|)dy\\
&\le\, c_6\, \delta_D(x) r \int_{{\mathbb R}^d\setminus D(2,2)}
f(y) j(|w-y|) dy \,\leq \, \frac{c_6 \,
\delta_D(x)}{c_4 \, r} f(w).
\end{align*}
On the other hand, by the Harnack inequality and the Carleson estimate, we have
$$
{\mathbb E}_x\left[f\left(X_{\tau_{ D(1,1)}}\right);\, X_{\tau_{
D(1,1)}} \in D(2,2)\right] \,\le\, c_7 \, f(x_0) {\mathbb P}_x\left(
X_{\tau_{D (1,1)}} \in D(2,2)\right)\,\le\, c_8 \, f(x_0)
\delta_D(x) /r.
$$
In the last inequality above we have used \eqref{e:L:2}.
Combining the two inequalities above, we get
\begin{align}
& f(x) \,= \, {\mathbb E}_x\left[f \left(X_{\tau_{ D(1,1)}}\right); \,
X_{\tau_{ D(1,1)}} \in D(2,2)\right]\,+\,{\mathbb E}_x\left[ f
\left(X_{\tau_{ D(1,1)}}\right); \,
X_{\tau_{ D(1,1)}} \notin D(2,2)\right] \label{e:BHP1} \\
&\le \, \frac{c_8}{r} \delta_D(x) f(x_0) + \frac{c_6 \,
\delta_D(x)}{c_4\, r} f(w) \,\le \, \frac{c_{9}}{r}\, \delta_D(x)
(f(x_0) + f(w))\,\le \, \frac{c_{10}}{r}\, \delta_D(x) f(x_0) .\nonumber
\end{align}
In the last inequality above we have used the Harnack inequality.
From \eqref{e:BHP2}--\eqref{e:BHP1}, we have that for every $x, y\in
D \cap B(Q, 2^{-1} r\kappa_0\lambda_0^{-1})$,
$$
\frac{f(x)}{f(y)}\,\le \,
\frac{c_{10}}{c_2}\,\frac{\delta_D(x)}{\delta_D(y)},
$$
which proves the theorem. {\hfill $\Box$ \bigskip}
\section{Counterexample}\label{counterexample}
In this section,
we present an example of a (bounded) $C^{1,1}$ domain (open and connected) $D$
on which the boundary Harnack principle for the independent sum of a Brownian motion and
a finite range rotationally invariant L\'evy process fails,
even for regular harmonic functions vanishing on $D^c$.
A similar example appears in \cite[Section 6]{KS} for the case of truncated stable process.
Suppose that $Z$ is a rotationally invariant L\'evy process whose L\'evy measure
has a density $J(x)=j(|x|)$ with $j(r)=0$ for all $r\ge 1$. Suppose that $Z$ is independent
of the Brownian motion $W$. We will consider the process $Y=W+Z$.
For any Borel sets $U$ and $V$ in ${\mathbb R}^d$ with $V \subset \overline{U}^c$, we have
\begin{equation}
\label{Poisson}
{\mathbb P}_x(Y_{\tau^Y_U} \in V)={\mathbb E}_x \int_0^{\tau_U^Y}
\int_{V } j(|Y_t-z|){\bf 1}_{\{|Y_t-z| <1\}} \, dz\, dt,
\quad x \in U,
\end{equation}
where $\tau_U^Y:=\inf\{t>0: Y_t \notin U\}$.
We consider the bounded domain in ${\mathbb R}^d$
$$
D:=(-100, 100)^{d} \setminus \left( (-100, 49]^{d-1} \times[-1/2, 0]\right).
$$
Suppose that the (not necessarily scale invariant) boundary Harnack principle
is true for $Y$ on $D$ at the origin for regular harmonic functions vanishing on $D^c$,
i.e., there exist constants $R_1 >0 $ and $M_1 >1$ such that for any $r < R_1$ and
any nonnegative functions $u, v$ on ${\mathbb R}^d$ which are regular harmonic with
respect to $Y$ in $D \cap B(
0, M_1 r)$ and vanish in $D^c$, we have
\begin{equation}\label{ce0}
\frac{ u(x)}{ v(x)}
\, \le \,c_r\, \frac{ u(y)}{ v(y)} \quad \mbox{ for any }
x,y \in
D \cap B(
0, r),
\end{equation}
where $c_r=c_r(D)>0$ is independent of the harmonic functions $u$ and $v$.
Choose an $r_1 < R_1$ with $M_1 r_1 <1/2$ and let $A:=( \widetilde 0,
\frac12 r_1)$. We define a function $v$ by
$$
v(x):={\mathbb P}_x \left(Y_{ \tau^Y_{D \cap B(
0,M_1r_1)}} \in \{y \in D;
y_d
>0\}\right).
$$
By definition $v$ is regular harmonic in $D \cap B(
0,M_1r_1)$ with
respect to $Y$ and vanishes in $D^c$. Applying the function $v$ above to
(\ref{ce0}), we get a Carleson type estimate at $0$, i.e., for any nonnegative
function $u$ which is regular harmonic with respect to $Y$ in $D
\cap B(
0, M_1 r_1)$ and vanishes in $D^c$ we have
\begin{equation}\label{ce1}
u(A) \,\ge \, c^{-1}_{r_1} \frac{v(A)}{v(x)} u(x) \,\ge \, c^{-1}_{r_1} v(A)
u(x) \,=\,c_1\,u(x), \quad x \in D \cap B(0, r_1),
\end{equation}
where $c_1=c^{-1}_{r_1} v(A) >0$.
We will construct a bounded positive function $u$ on ${\mathbb R}^d$ which is regular harmonic with respect to $Y$
in $D \cap B(0, M_1 r_1)$ and vanishes
in $D^c$, for which (\ref{ce1}) fails.
For $n \ge 1$, we put
\begin{eqnarray*}
C_n&:=&\left\{ (
\widetilde x,
x_d) \in D: \quad |
\widetilde x| \le 2^{-n-3} r_1,
\quad x_d
\le -1+2^{-n}r_1^2 \right\},\\
D_n&:=&\left\{ (
\widetilde y,
y_d) \in D: \quad
y_d>0, \quad
|x-y| <1 \quad \mbox{ for some } x \in C_n\right\}.
\end{eqnarray*}
It is easy to see that
\begin{equation}\label{ce2}
\overline{D_n} \subset \{(
\widetilde y,
y_d): |
\widetilde y| \le ( 2^{-n-3}+ 2^{-(n-1)/2}) r_1, 0 \le
y_d \le 2^{-n}r_1^2\} \subset B(
0,r_1) \cap D, \quad \mbox{ for } n \ge 2.
\end{equation}
Indeed, for any $y\in \overline{D_n}$, we have $
y_d\in [0, 2^{-n}r_1^2]$ and
$|y-x |\le 1$ for some $x \in C_n$.
If $|
\widetilde y| > ( 2^{-n-3}+ 2^{-(n-1)/2}) r_1$, $
y_d \ge 0$ and $x \in C_n$, then
$$
|x-y|^2 \ge
x_d^2 + (|
\widetilde y|-|
\widetilde x|)^2 \ge (1-2^{-n}r_1^2)^2+ 2^{-(n-1)}r_1^2 >1.$$
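The last inequality holds since, with $a:=2^{-n}r_1^2$, one has
$(1-a)^2+2a=1+a^2>1$.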
Thus, in this case $y \notin \overline{D_n}$.
For any $n$, let $T^Y_{D_n}$ be the first hitting time of $D_n$ by the
process $Y$. By \eqref{ce2},
$$
{\mathbb P}_A\left(\tau^Y_{D \cap B(0,M_1r_1)} > T^Y_{D_n}\right) \,\to\,
{\mathbb P}_A\left(\tau^Y_{D \cap B(0,M_1r_1)} > T^Y_{\{0\}}\right)
=0, \quad \mbox{ as } n \to \infty,
$$
since the single point $\{0\}$ is polar for $Y$.
Fix $n_0\ge 2$ large so that
\begin{equation}\label{ce3}
{\mathbb P}_A\left(\tau^Y_{D \cap B(0,M_1r_1)} > T^Y_{D_{n_0}}\right) \, < \, \frac{c_1}{2}
\end{equation}
and define
$$
u(x)\,:={\mathbb P}_x \left( Y_{ \tau^Y_{D \cap B(0,M_1r_1)}} \in C_{n_0}\right).
$$
Then $u$ is a nonnegative bounded function which is
regular harmonic in $D \cap B(0,M_1r_1)$ with respect to
$Y$ and vanishes in $D^c$. It also vanishes
continuously on $\partial D \cap B(0,M_1r_1)$.
Note that by \eqref{Poisson},
$${\mathbb P}_{A} \left( Y_{ \tau^Y_{D \cap B(0,M_1r_1)}} \in C_{n_0},
\, \tau^Y_{D \cap B(0,M_1r_1)} \le T^Y_{D_{n_0}} \right)
\, =\,
{\mathbb P}_{A} \left( Y_{ \tau^Y_{D \cap B(0,M_1r_1) \setminus D_{n_0}}} \in C_{n_0} \right)=0.
$$
Thus by the strong Markov property,
\begin{eqnarray*}
u(A) &=&{\mathbb P}_A \left( Y_{ \tau^Y_{D \cap B(0,M_1r_1)}} \in C_{n_0},\,
\tau^Y_{D \cap B(0,M_1r_1)} > T^Y_{D_{n_0}} \right)\\
&=&{\mathbb E}_A \left[{\mathbb P}_{Y_{T^Y_{D_{n_0}}}}
\left( Y_{ \tau^Y_{D \cap B(0,M_1r_1)}} \in C_{n_0} \right);\,
\tau^Y_{D \cap B(0,M_1r_1)} > T^Y_{D_{n_0}} \right]\\
&\le&{\mathbb P}_A\left(\tau^Y_{D \cap B(0,M_1r_1)} > T^Y_{D_{n_0}}\right) \left(
\sup_{x \in D_{n_0}} u(x)\right) \, < \, \frac{c_1}2 \left(\sup_{x \in
D \cap B(0,r_1) } u(x)\right).
\end{eqnarray*}
In the last inequality above, we have used (\ref{ce2})--(\ref{ce3}).
But by (\ref{ce1}), $u(A) \,\ge \,c_1 \, \sup_{x \in D \cap B(0,r_1) } u(x)$,
which gives a contradiction. Thus the boundary
Harnack principle is not true for $D$ at the origin.
By smoothing off the corners of $D$, we can easily construct a bounded
$C^{1, 1}$ domain on which the boundary Harnack principle for $Y$ fails at 0.
\bigskip
\section{Proofs of Theorems \ref{t-main-green} and \ref{t-main-martin}}
As already said in the introduction, once the boundary Harnack principle has been established,
the proofs of Theorems \ref{t-main-green} and \ref{t-main-martin} are similar to the
corresponding proofs in \cite{CKSV2} for the operator $\Delta+ a^{\alpha} \Delta^{\alpha/2}$.
In fact, the proofs here are even simpler, because \cite{CKSV2} additionally strives for uniformity in the weight $a$.
The proof of Theorem \ref{t-main-green} in the case $d\geq 3$ is by now quite standard.
Once the interior estimates are
established, the full estimates in connected $C^{1,1}$ open sets follow from
the boundary Harnack principle by the method developed
by Bogdan \cite{Bo1} and Hansen \cite{H}. For the operator $\Delta+ a \Delta^{\alpha/2}$
this is accomplished in \cite[Section 3]{CKSV2}. In the present setting the proof from
\cite{CKSV2} carries over almost verbatim. In several places in \cite{CKSV2} one refers
to the form of the L\'evy density, but in fact,
the form of the L\'evy density is only used
to establish uniformity in the weight $a$.
When $d=2$, the above method ceases to work due to the nature of the logarithmic
potential associated with the Laplacian. The proof in \cite[Section 4]{CKSV2} for
the operator $\Delta+ a \Delta^{\alpha/2}$ uses a capacitary argument to derive
the interior upper bound estimate for the Green function. By a scaling consideration
and applying the boundary Harnack principle, one gets sharp Green function upper bound
estimates. For the lower bound estimates, \cite{CKSV2} compares the process with the
subordinate killed Brownian motion when
$D$ is connected, and then extends it to general bounded $C^{1,1}$ open sets by using the
jumping structure of the process. In the present setting, the proof of the lower
bound is exactly the same as in \cite{CKSV2} (see proofs of Theorems 4.2 and 4.4).
The proof of the upper bound is essentially the same as the one in \cite{CKSV2},
except that one has to make several minor modifications. Lemma 4.5 in \cite{CKSV2}
should be replaced by the following statement: There exists $c>0$ such that for any $L>0$,
$$
\mathrm{Cap}^0_{B(0,L)}(\overline{B(0,r)}) \ge
\frac{c}{\log(L/r)} \quad \text{for every } r \in (0, 3L/4).
$$
This is proved in the same way as \cite[Lemma 4.5]{CKSV2} by using the explicit
formula for the Green function of the ball $B(0,L)\subset {\mathbb R}^2$:
$$
G^0_{B(0,L)}(x,y)=\frac{1}{2\pi}\log\left(1+\frac{
(L^2-|x|^2)(L^2-|y|^2)}{L^2|x-y|^2}\right)\, .
$$
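In particular, setting $y=0$ gives the exact identity
$$
G^0_{B(0,L)}(x,0)\,=\,\frac{1}{2\pi}\log\frac{L^2}{|x|^2}\,=\,\frac{1}{\pi}\log\left(L/|x|\right),
$$
which makes the logarithmic dependence in the capacity bound above and in the modified statement of Corollary 4.7 below transparent.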
The statement of Lemma 4.6 in \cite{CKSV2} should be changed to: There exists
$c>0$ such that for any $L>0$, any
bounded open set $D$ in ${\mathbb R}^2$ containing $B(0, L)$,
and any $x \in \overline{B(0, \frac{3L}4)}$,
$$
G_{D}(x,0)\,\le\, \frac{c}{\mbox{\rm Cap}^0_{D}
\big(\overline{B(0, |x|/2)} \big)} \, {\mathbb P}_x\left( \sigma_{\overline{B(0, |x|/2 )}} < \tau_{D}\right) ,
$$
(we refer to \cite{CKSV2} for all unexplained notation).
Next, Corollary 4.7 in \cite{CKSV2} should be replaced by the statement:
There exists $c>0$ such that for any $L>0$ and any $x \in \overline{B(0, 3L/4)}$
$$
G_{B(0,L)}(x,0)\,\le\, c\,\log \left(L/|x| \right).
$$
Finally, the last change is in the proof of Lemma 4.8 in \cite{CKSV2} which
uses a scaling argument. This in our setting can be circumvented by using the
modified statement of \cite[Lemma 4.6]{CKSV2}. The rest of the proof remains
exactly the same.
The proof of Theorem \ref{t-main-martin} is also quite
standard (see \cite{Bo, CKSV2, KS2, KSV}). In the
current setting we follow step-by-step the proof of the corresponding
result in \cite[Section 6]{CKSV2}.
The main difference is that
\cite{CKSV2} uses the explicit form of the L\'evy density $j^a$ for
the operator $\Delta+ a\Delta^{\alpha/2}$ which is $c(\alpha,d,a)r^{-d-\alpha}$.
This L\'evy density is now replaced by $j$, and it suffices to use properties
\eqref{H:1} and \eqref{H:2} to carry over all arguments. The reader can also compare with
\cite[Section 6]{KSV} where the Martin boundary was identified with the
Euclidean boundary for purely discontinuous processes whose jumping kernel satisfies
\eqref{H:1} and \eqref{H:2}.
\section{Introduction}
While the minimal supersymmetric standard model (MSSM) provides the most
promising extension of the successful standard model, it does not yet
encompass important ideas that would be expected of the complete low
energy effective theory. These include neutrino masses, baryogenesis,
inflation and the strong CP problem. Recently there have been a number of
interesting proposals which partly address these shortcomings. In particular
several papers by Murayama, Yanagida and collaborators have made significant
contributions \cite{msyy}; see also ref. \cite{shafi}. However a more
complete phenomenological model is lacking at present.
In this work we construct a phenomenological extension of the MSSM which
can successfully incorporate inflation, neutrino masses, baryogenesis and
axions. We build on the work of Murayama et al \cite{msyy,msy}, modifying
their approach in a way specified below. The resulting Lagrangian can
describe all of the usual supersymmetry phenomenology, including cold dark
matter, LEP data and BR($b\rightarrow s\gamma$). In addition it includes
neutrino masses via a see-saw mechanism (which provide the hot dark matter),
induces inflation with a sneutrino inflaton, incorporates baryogenesis and
accommodates the axion. Interestingly the parameters relevant to each of
these ideas are closely interrelated in our model. Furthermore, the
fine-tuning in the inflationary potential is no worse than that of the
electron Yukawa coupling in the MSSM. We will postpone giving any detailed
calculations here, and instead outline how such an encompassing Lagrangian
can be constructed.
In order to incorporate chaotic inflation one needs a scalar field in the
theory to have an initial value much greater than $M_{Planck}$ in the early
universe. An unnatural fine-tuning required for gauge non-singlet fields
along D-term flat directions restricts the inflaton to be a gauge singlet
field such as a sneutrino. This idea of identifying the right-handed
sneutrino as the inflaton is due to Murayama et al \cite{msyy}, where it was
noted that the addition of the superpotential term $W=\textstyle{1\over 2} M {\hat N_i^c}
{\hat N_i^c}$ with a common Majorana mass $M\simeq 10^{13}$GeV coincides
with a successful implementation of chaotic inflation using a quadratic
scalar potential. However to solve the strong CP problem one expects the
Lagrangian in the early universe to be Peccei-Quinn (PQ) invariant. A
PQ-invariant Majorana term for the right-handed neutrinos can be written by
introducing a singlet superfield $\hat P$ with superpotential $W=\textstyle{1\over 2} h_i
{\hat N_i^c} {\hat N_i^c} {\hat P}$. This means that in the early universe
the inflationary potential will be quartic with a coupling $h_1^2$.
Anisotropic temperature fluctuations, $\delta T/T \simeq 10^{-5}$ in the
present universe then determine the Majorana Yukawa coupling $h_1\simeq
10^{-7}$ \cite{salopek}.
Neutrino masses via a see-saw mechanism will be generated when the scalar
component ${\tilde P}$ of the superfield $\hat P$ receives a vacuum
expectation value at an intermediate scale. Intermediate scale breaking in
the supersymmetric standard model was previously considered by Murayama,
Suzuki and Yanagida \cite{msy}. Radiative corrections from right-handed
neutrino loops break U$(1)_{PQ}$ by driving the squared mass of ${\tilde P}$
negative at an intermediate scale. This is similar to the normal radiative
electroweak symmetry breaking induced in the MSSM by the large top Yukawa
coupling. It turns out that a second singlet superfield ${\hat P}^\prime$ is
also needed to ensure an invisible axion. The PQ symmetry can only be broken
after inflation ends because during the inflationary epoch the inflaton
induces an effective ${\tilde P}$ mass, which dominates the radiative
corrections from neutrino loops. As the inflaton undergoes coherent
oscillations about its minimum, the oscillation amplitude falls off as
$R^{-1}$ ($R$ is the scale factor of the universe) for a quartic potential.
However as the universe is reheated, finite temperature corrections induce
a local minimum at $\langle{\tilde P}\rangle=0$ and the field ${\tilde P}$
can remain trapped there until $T\lesssim 10^3$GeV.
If a second period of inflation were to commence when the sneutrino is
oscillating to zero, the universe would then be supercooled below
$T\simeq10^3$GeV. The potential barrier at $\langle{\tilde P}\rangle =0$
would disappear and $\tilde P$ would drop to the true vacuum at $\langle{
\tilde P}\rangle\simeq 10^{12}$GeV. This second inflationary epoch can be
caused by a scalar field with amplitude ${\cal O}(M_{Pl})$. Typically these
scalar fields only have non-renormalisable inflaton couplings and are
associated with a flat direction of the supersymmetric theory. Their
amplitudes can be driven to values ${\cal O}(M_{Pl})$ during the first
inflationary epoch \cite{drt}. Thus it is likely that after the first
inflation period is over there exists some flat-direction field, $\eta$ with
an amplitude ${\cal O}(M_{Pl})$ which starts the second inflationary epoch.
In contrast to the initial period of chaotic inflation where $V(\phi)
\lesssim M_{Pl}^4$, the potential along the flat direction is $V\simeq
m_W^2 M_{Pl}^2 \simeq (10^{11} {\rm GeV})^4$, where $m_W\simeq
{\cal O}$(TeV). This has been referred to as `intermediate scale inflation'
\cite{banks} and conveniently coincides with the U$(1)_{PQ}$ symmetry
breaking.
When the second inflationary epoch ends, the universe is reheated to a
temperature $T_{RH}\simeq 10^6$GeV which is low enough to prevent restoring
PQ symmetry (at the local minimum $\langle{\tilde P}\rangle=0$). This
reheat temperature is high enough for baryogenesis to occur via the
out-of-equilibrium decay of the light electron Majorana neutrino ($N_1$).
The initial chaotic inflationary epoch with the right-handed electron
sneutrino inflaton solves the flatness and horizon problems and generates
the required density perturbations. In order not to wipe out the density
perturbations from the original inflationary epoch, we require that the
number of e-foldings, $N$ during the second period of inflation satisfy
$N\lesssim 30$ \cite{rt}. Note, however that the axion strings resulting
from the spontaneous symmetry breakdown will not be completely diluted
during the second inflationary epoch.
The main points of our model which differ from previous attempts are as
follows. Initially chaotic inflation occurs with a quartic potential
associated with the right-handed electron sneutrino. COBE data on the
temperature anisotropy then determine the electron Majorana Yukawa coupling
to be $h_1\simeq 10^{-7}$ which is no less fine-tuned than the electron
Dirac Yukawa coupling. When the universe is reheated, finite temperature
corrections induce a local minimum at $\langle{\tilde P}\rangle=0$ which
persists until $T\simeq 10^3$GeV. If instead a second inflationary epoch
occurs at an intermediate scale (via a flat-direction field $\eta$), the
universe will be supercooled below $T\simeq 10^3$GeV as $\langle
{\tilde N}_1^c\rangle\rightarrow 0$. Soft breaking terms then dominate and
radiatively generate an intermediate mass scale $\simeq 10^{12}$GeV. The
mass of the electron Majorana neutrino will typically be $M_{N_1}\simeq
10^5$GeV (rather than the more common value $M_{N_1}\simeq 10^{11}$GeV).
When the flat-direction field decays it can reheat the universe to a
temperature $T_{RH}\simeq 10^6$ GeV. All supersymmetry breaking effects are
parameterised by soft terms in the scalar potential and we do not consider
any effects that might arise from supergravity or string theory. The details
of our scenario are presented below.
\section{Chaotic inflation in the supersymmetric standard model}
Consider a PQ invariant extension of the MSSM which provides the framework
for our model of inflation. This extension was first considered by Murayama,
Suzuki and Yanagida \cite{msy}. If a right-handed neutrino field
${\hat N}^c$ is introduced into the MSSM, the possible Yukawa couplings in
the superpotential are
\begin{equation}
W[\Phi]= h_U^{ij} {\hat u}^c_i {\hat Q}_j {\hat H}_u
+ h_D^{ij} {\hat d}^c_i {\hat Q}_j {\hat H}_d
+ h_E^{ij} {\hat e}^c_i {\hat L}_j {\hat H}_d
+ h_N^{ij} {\hat N}^c_i {\hat L}_j {\hat H}_u
\label{sp}
\end{equation}
where $\hat Q$, $\hat L$ and ${\hat H}_{u,d}$ are SU(2) doublet chiral
superfields and ${\hat u}^c$,${\hat d}^c$,${\hat e}^c$ and ${\hat N}^c$ are
SU(2) singlet chiral superfields. The labels $i,j$ are generation indices
and all group indices have been suppressed. Notice that to generate a
Majorana mass term for the right-handed neutrino only requires coupling
${\hat N}^c$ to a singlet superfield $\hat P$. However, with just the
superfield $\hat P$, the PQ symmetry is broken at the electroweak scale and
gives rise to a standard visible axion which has been ruled out
experimentally. This problem is avoided by introducing a second singlet
field, ${\hat P}^\prime$ which causes the PQ symmetry to be broken at an
intermediate scale and leads to an invisible axion \cite{msy}. Thus the most
general PQ invariant superpotential with an intermediate breaking scale is
given by
\begin{equation}
W^\prime[\Phi]={\textstyle{1\over 2}} h_M^{ij} {\hat N}^c_i {\hat N}^c_j {\hat P}
+{f\over M_{Pl}} {\hat P}^3 {\hat P}^\prime
+{g\over M_{Pl}} {\hat P} {\hat P}^\prime {\hat H}_u {\hat H}_d ,
\label{pqsp}
\end{equation}
where $M_{Pl}$ is the Planck mass and the PQ charge assignments are $+1/2$
for $\hat Q$, $\hat L$, ${\hat u}^c$, ${\hat d}^c$, ${\hat e}^c$,
${\hat N}^c$, $-1$ for $\hat P$, ${\hat H}_{u,d}$ and $+3$ for
${\hat P}^\prime$. The total superpotential of our phenomenological model is
$W+W^\prime$. Notice that by introducing ${\hat P}^\prime$ one naturally
generates a coupling to the Higgs superfields, which ultimately becomes the
$\mu$-term of the MSSM.
For chaotic inflation to occur one needs an inflationary potential $V(\phi)
\lesssim M_{Pl}^4$ and a scalar field with an initial value $\phi(0) \gg
M_{Pl}$ \cite{linde}. The scalar potential resulting from $W+W^\prime$
restricts the amplitude of any gauge non-singlet scalar fields to be
${\cal O}(M_{Pl})$ because of unnatural fine-tunings along D-term flat
directions \cite{msyy}.\footnote{Also a flat inflationary potential is much
more difficult to achieve with gauge couplings.}
This leaves only the scalar components ${\tilde N}_i^c$, ${\tilde P}$ and
${\tilde P}^\prime$ of the singlet superfields ${\hat N}_i^c$,$\hat P$ and
${\hat P}^\prime$ as possible candidates for the inflaton. The scalar
potential arising from the superpotential $W^\prime$ is given by
\begin{eqnarray}
V(\phi)&=&\left|{\textstyle{1\over 2}} h_i {\tilde N}_i^c{\tilde N}_i^c+3{f\over
M_{Pl}}{\tilde P}^2{\tilde P}^\prime+{g\over M_{Pl}} H_uH_d
{\tilde P}^\prime\right|^2+\left|{\tilde P}\right|^2 \left|{f\over
M_{Pl}} {\tilde P}^2+{g\over M_{Pl}}H_u H_d\right|^2 \nonumber \\
&+& h_i^2\left|{\tilde N}_i^c\right|^2 \left|{\tilde P}\right|^2
+{g^2\over M_{Pl}^2}\left|{\tilde P}\right|^2 \left|{\tilde P}^
\prime\right|^2 (H_u^\dagger H_u+H_d^\dagger H_d)
\label{vphi}
\end{eqnarray}
where we have assumed for simplicity that the Majorana Yukawa couplings
are real and diagonal, $h_M^{ij}=h_i \delta^{ij}$ (the soft breaking
terms are not important in this initial inflationary stage and will be
considered later). If the couplings $f,g \sim 0.01$
\footnote{We will show later that intermediate scale breaking requires
$f\gtrsim 0.01$.}
then the condition $V(\phi)\lesssim M_{Pl}^4$ restricts the amplitudes of
${\tilde P}$ and ${\tilde P}^\prime$ to be ${\cal O}(M_{Pl})$ which is not
enough to solve the flatness and horizon problems. This leaves the
right-handed sneutrino as the only candidate for the inflaton. Clearly the
lightest sneutrino will end up being the inflaton because it is assumed to
have the flattest potential (or smallest Majorana Yukawa coupling). The
heavier generations will roll to their minimum fairly quickly because their
potential is steeper. In addition the ${\tilde P}$ and ${\tilde P}^\prime$
scalar fields receive induced masses of ${\cal O}(M_{Pl})$ and are also
driven to their minima early on. Thus if we suppose that the right-handed
electron sneutrino acts as the inflaton with ${\tilde N}_1^c(0)\gg M_{Pl}$
then during inflation the potential (\ref{vphi}) effectively becomes
\begin{equation}
V(\phi)={1\over 4} h_1^2 \left|{\tilde N}_1^c\right|^4,
\label{infpot}
\end{equation}
with $\langle{{\tilde N}_{2,3}^c}\rangle,\langle{\tilde P}\rangle,\langle{
\tilde P}^\prime\rangle \ll M_{Pl}$. Note that the Higgs ($H_u$) and slepton
scalar fields receive induced masses from the inflaton,
$\langle{\tilde N}_1^c(t)\rangle$, which are bigger than the Hubble constant
$H$. Consequently the ${\cal O}(M_{Pl})$ amplitudes of these fields will be
damped away exponentially during the inflationary period.
The inflationary potential (\ref{infpot}) is known to generate the required
density perturbations for large scale structure of the universe, provided
that the quartic coupling ($h_1^2$) is approximately $10^{-14}$
\cite{lindebook}. This means that the Majorana Yukawa coupling for the
right-handed electron neutrino must be $h_1 \simeq 10^{-7}$. This is similar
in magnitude to the electron Yukawa coupling in the standard model, $h_e
\simeq 10^{-6}$ and suggests that the reason why the inflationary potential
is so flat is related to the (as yet) unknown reason why the electron Yukawa
coupling is very small. Given an intermediate scale breaking $\langle
{\tilde P}\rangle\simeq 10^{12}$ GeV, (see the next section) the mass scale
of the electron Majorana neutrino would be $M_{N_1}\simeq h_1\langle
{\tilde P}\rangle \simeq 10^5$ GeV. The two heavier Majorana neutrino
generations are not determined by inflation. If one assumes a hierarchy in
the Majorana Yukawa couplings similar to that of the quark and lepton mass
spectrum, an interesting light neutrino spectrum can result, with $\Delta
m_{\mu e}^2 \simeq 10^{-5} {\rm eV}^2$ and in certain cases $m_{\nu_e} >
m_{\nu_\mu}$. In principle one can also obtain an estimate for the ratio of
hot to cold dark matter.
During the initial period of chaotic inflation quantum de-Sitter
fluctuations can affect the classical motion of the inflaton. The amplitude
of the inflaton decreases exponentially during the de-Sitter phase
\cite{lindebook}
\begin{equation}
\label{iamp}
{\tilde N}_1^c(t)={\tilde N}_1^c(0) {\rm exp}\left[-{h_1\over
\sqrt{6\pi}} M_{Pl} t \right].
\end{equation}
After a time $\Delta t=H^{-1}$ the amplitude of the inflaton decreases by an
amount $\Delta {\tilde N}_1^c=M_{Pl}^2/(2\pi{\tilde N}_1^c)$, whereas the
average amplitude of the quantum fluctuations grows by $\left|\delta
{\tilde N}_1^c\right|=H/(2\pi)$. In order for the quantum fluctuations to
have negligible influence on the classical evolution ${\tilde N}_1^c(t)$ we
need ${\tilde N}_1^c(0)\ll h_1^{-1/3} M_{Pl}\simeq 10^2 M_{Pl}$. In addition
the universe must expand greater than 65 e-folds to solve the flatness and
horizon problems. This restricts the initial value of the inflaton field to
lie in the range $5 M_{Pl} \lesssim{\tilde N}_1^c(0)\lesssim 10^2 M_{Pl}$.
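For orientation, in the standard slow-roll convention the number of e-folds generated
from an initial amplitude $\phi_0={\tilde N}_1^c(0)$ in the quartic potential
(\ref{infpot}) is $N\simeq \pi\phi_0^2/M_{Pl}^2$, so that $N\ge 65$ requires
${\tilde N}_1^c(0)\gtrsim \sqrt{65/\pi}\,M_{Pl}\simeq 4.5\,M_{Pl}$, consistent
with the lower end of the range quoted above.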
\section{PQ symmetry breaking}
The intermediate scale breaking of PQ symmetry occurs when the singlet
scalar field, ${\tilde P}$ receives a vacuum expectation value $\langle{
\tilde P}\rangle \simeq 10^{12}$ GeV. Murayama, Suzuki and Yanagida
\cite{msy} showed that this breaking can be induced by radiative corrections
from right-handed neutrino loops, which drives the mass squared parameter of
${\tilde P}$ negative. The soft supersymmetric breaking terms in the scalar
potential involving ${\tilde N}_i^c$,${\tilde P}$ and ${\tilde P}^\prime$
are given by
\begin{eqnarray}
\label{Vsoft}
V_{soft}&=&m_{\tilde P}^2 \left|{\tilde P}\right|^2+m_{{\tilde P}^
\prime}^2 \left|{\tilde P}^\prime\right|^2 + m_{{\tilde N}_i^c}^2
\left|{\tilde N}_i^c\right|^2+(A_N^{(ij)} h_N^{ij}{\tilde N}_i^c
{\tilde L}_j H_u+h.c.)\nonumber \\
&+& ({\textstyle{1\over 2}} h_i A_i{\tilde N}_i^c {\tilde N}_i^c {\tilde P}+
{f\over M_{Pl}}A_f{\tilde P}^3{\tilde P}^\prime+{g\over M_{Pl}}
A_g H_u H_d {\tilde P}{\tilde P}^\prime+h.c.).
\end{eqnarray}
The soft scalar masses and trilinear couplings are all a priori unknown mass
parameters, but a study of constrained minimal supersymmetry requires them
to be ${\cal O}$(1 TeV)\cite{kkrw}. When $m_{\tilde P}^2\simeq -m_W^2$,
where $m_W\simeq {\cal O}$(TeV) is the electroweak scale, the minimum of the
scalar potential
\begin{equation}
\label{phiPpot}
V({\tilde P})=-m_W^2 \left|{\tilde P}\right|^2 + {f^2\over M_{Pl}^2}
\left|{\tilde P}\right|^6 + V_0
\end{equation}
occurs, upon solving $\partial V/\partial |{\tilde P}| = -2m_W^2|{\tilde P}|
+6(f^2/M_{Pl}^2)|{\tilde P}|^5=0$, at
\begin{equation}
\label{pqvev}
\langle{\tilde P}\rangle=
\sqrt{{m_W M_{Pl}\over \sqrt{3} f}}\simeq 10^{12} {\rm GeV}
\end{equation}
where $f\sim 0.01$ and $V_0$ is the vacuum energy associated with
the phase transition. A significantly smaller value of the coupling $f$
would increase the intermediate mass scale and conflict with cosmological
axion mass bounds \cite{twl}. In addition one finds that to stabilise the
scalar potential we need $m_{{\tilde P}^\prime}^2 > 0$ and $\langle
{\tilde P}^\prime\rangle\simeq\langle{\tilde P}\rangle$ \cite{msy}. When the
quantum corrections to the soft scalar masses in (\ref{Vsoft}) are included
via the one-loop renormalisation group equations, boundary conditions at
$M_{Planck}$ determine whether the tree-level result (\ref{pqvev}) remains
valid. In particular for $m_{\tilde P}^2$ and $m_{{\tilde N}_i^c}^2$ we have
\begin{eqnarray}
\label{rgeP}
{d m_{\tilde P}^2 \over d t}&=&{1\over 16\pi^2}\sum_i \left|h_i
\right|^2 (m_{\tilde P}^2+2m_{{\tilde N}_i^c}^2+\left|A_i\right|^2)
\\
\label{rgeN}
{d m_{{\tilde N}_i^c}^2 \over d t}&=&{1\over 16\pi^2}2\left|h_i
\right|^2 (m_{\tilde P}^2+2m_{{\tilde N}_i^c}^2+\left|A_i\right|^2).
\end{eqnarray}
Note that in (\ref{rgeN}) we have not written slepton and Higgs soft mass
terms. A complete analysis of all the renormalisation group equations in the
MSSM which includes the neutrino masses can have interesting implications.
As we noted in the previous section $h_1\simeq {\cal O}(10^{-7})$ and
consequently its effect on the renormalisation group running (\ref{rgeP})
and (\ref{rgeN}) is negligible when $h_2,h_3 \gg h_1$. This means that the
running of $m_{{\tilde N}_2^c}^2$ and $m_{{\tilde N}_3^c}^2$ will be
identical to $m_{\tilde P}^2$. To ensure that only $m_{\tilde P}^2$ goes
negative we have to impose the boundary condition $m_{{\tilde N}_{2,3}^c}^2
\gtrsim 3 m_{\tilde P}^2$ at $M_{Planck}$. Numerical integration of the
renormalisation group equations (\ref{rgeP}) and (\ref{rgeN}) with these
boundary conditions leads to radiative PQ-symmetry breaking at an
intermediate scale, $\langle{\tilde P}\rangle\simeq 10^{12}$GeV.
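The qualitative behavior of this running is easy to reproduce. The short Python
sketch below integrates the dominant $h_3$ contribution to (\ref{rgeP}) and
(\ref{rgeN}) from $M_{Planck}$ down to $10^{12}$GeV; the boundary values (chosen
consistently with $m_{{\tilde N}_{2,3}^c}^2\gtrsim 3 m_{\tilde P}^2$) and the
neglect of the running of $A_3$ are illustrative assumptions, not outputs of
the model.
\begin{verbatim}
import numpy as np

# Toy one-loop running of the soft masses, eqs. (rgeP)-(rgeN), keeping
# only the dominant third-generation Majorana Yukawa coupling h_3 ~ 1.
# Boundary values at M_Planck (in GeV^2) are illustrative assumptions.
h3, A3sq = 1.0, 4.0e6        # |A_3|^2 assumed
mP2, mN2 = 1.0e6, 5.0e6      # m_P^2 and m_N3^2 at M_Planck

tau, tau_end, dt = 0.0, np.log(1.2e19/1.0e12), 1e-3   # tau = ln(M_Pl/mu)
while tau < tau_end:
    beta = h3**2*(mP2 + 2.0*mN2 + A3sq)/(16.0*np.pi**2)
    mP2 -= dt*beta           # soft masses decrease towards the IR
    mN2 -= dt*2.0*beta
    tau += dt

print(f"m_P^2(10^12 GeV)  = {mP2:.2e} GeV^2")   # driven negative
print(f"m_N3^2(10^12 GeV) = {mN2:.2e} GeV^2")   # remains positive
\end{verbatim}
With these inputs $m_{\tilde P}^2$ indeed changes sign by the time the
intermediate scale is reached, while $m_{{\tilde N}_3^c}^2$ stays positive.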
The radiative corrections indicated by the renormalisation group equations
(\ref{rgeP}) and (\ref{rgeN}) are evaluated at a temperature $T=0$ and the
quantum fields are assumed to be at their minima. However we need to include
corrections arising from the inflationary period and consider possible
thermal effects. During the inflationary epoch the inflaton field sits far
from its minimum with a value ${\tilde N}_1^c(0)\gtrsim M_{Pl}$. As noted
earlier the inflaton can induce masses to any other scalar fields that it
couples to in the scalar potential (\ref{vphi}). While the Higgs and slepton
fields receive an effective mass $\gtrsim H$, the coupling $h_1^2\left|{
\tilde N}_1^c\right|^2\left|{\tilde P}\right|^2$ induces an effective mass
$h_1 \langle{\tilde N}_1^c(t)\rangle$ for ${\tilde P}$. This mass will
dominate any radiative corrections until the inflaton field
${\tilde N}_1^c(t)$ settles to its minimum after inflation ends.
The finite temperature corrections associated with the potential
(\ref{phiPpot}) have been previously discussed in the context of
intermediate scale breaking in superstring models \cite{ky}. The finite
temperature potential for $m_W \ll T\ll M_I$ and excluding the region
$T\sim\left|{\tilde P}\right|$ is given by
\begin{eqnarray}
\label{phiPtemp}
V(\left|{\tilde P}\right|,T)&\simeq&-m_W^2\left|{\tilde P}\right|^2
+{\pi^2\over 90}T^4 \qquad\qquad (T\ll \left|{\tilde P}\right| < M_I)
\\
&\simeq&{h_3^2\over 24}T^2\left|{\tilde P}\right|^2 \qquad\qquad
\qquad\qquad (T\gg\left|{\tilde P}\right|)
\end{eqnarray}
where $M_I$ is the intermediate breaking scale. Since the third generation
Majorana neutrino Yukawa coupling $h_3 \simeq 1$, the finite temperature
potential has a local minimum at $\langle{\tilde P}\rangle =0$ which
disappears when $T\lesssim m_W \simeq 10^3$GeV. The problem is that when
the universe is supercooled at the end of inflation, the inflaton induced
${\tilde P}$ mass ($h_1 \langle{\tilde N}_1^c(t)\rangle$) still dominates
the radiative corrections ($\langle{\tilde N}_1^c\rangle\simeq{\textstyle{1
\over 3}}M_{Pl}$) and so $\langle{\tilde P}\rangle \simeq 0$. When the
universe reheats to a temperature $T_{RH}\simeq 10^4$GeV the scalar field
${\tilde P}$ is still trapped at the origin with a barrier of height
$\sim T^4$. Eventually when $T\simeq m_W$ the barrier disappears and then
$\langle{\tilde P}\rangle \simeq M_I \simeq 10^{12}$GeV. Electroweak
baryogenesis will then be the only possibility for generating a baryon
asymmetry.
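Quantitatively, the $T\lesssim m_W$ criterion used above follows from comparing
the two limiting forms in (\ref{phiPtemp}) near the origin: ${\tilde P}=0$
remains a local minimum as long as the thermal mass term dominates the soft one,
$(h_3^2/24)\,T^2 > m_W^2$, i.e.\ down to $T\simeq\sqrt{24}\,m_W/h_3
={\cal O}(m_W)$ for $h_3\simeq 1$.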
However, in general there are many flat directions in supersymmetric theories
and it is very likely that a flat-direction field, $\eta$ has an amplitude
${\cal O}(M_{Pl})$. This can occur via quantum de-Sitter fluctuations along
the F and D-flat directions or it can be driven to an ${\cal O}(M_{Pl})$
local minimum by non-renormalisable Kahler potential couplings during the
initial inflationary epoch (see Dine et al \cite{drt}). If we assume this is
the case then as the right-handed electron sneutrino continues to roll
towards its minimum, there will be a point where the flat-direction field
$\eta$ dominates the potential energy density with $\eta(0)\simeq M_{Pl}$
and $V(\eta)={\textstyle{1\over 2}} m_W^2 \eta^2$. A second period of chaotic inflation will
then commence, which accelerates the damping of the ${\tilde N}_1^c$
oscillations and supercools the universe again. Eventually the $\tilde P$
soft term will dominate the inflaton induced ${\tilde P}$ mass (temperature
effects are negligible) and generate an intermediate scale ($\langle{
\tilde P}\rangle\simeq M_I$). We can neglect the quantum de-Sitter
fluctuations during this second period of inflation because $\sqrt{\langle
\chi^2\rangle}\lesssim m_W$. The pseudo-Nambu-Goldstone boson resulting
from the spontaneous symmetry breakdown will be the invisible axion. The
right-handed electron sneutrino and electron Majorana neutrino will then
become massive with $M_{N_1}\simeq 10^5$ GeV. In addition the MSSM Higgs
mass term $(\mu{\hat H_u}{\hat H_d})$ is generated with $\mu\simeq{\cal O}
(m_W)$.
The number of e-foldings $N$, produced during the intermediate scale
inflation depends on the initial value of the flat-direction field and is
given by $N\simeq 2\pi\eta (0)^2/M_{Pl}^2$. Typically we expect $\eta(0)
\lesssim 2 M_{Pl}$ to avoid fine-tuning problems and so $N\lesssim 25$.
This amount of inflation is not enough to expand different $\theta_i$ axion
domains beyond our present day horizon, so cosmic axion strings will be
present (although the strings will be diluted). However, even if axion
strings occur and lead to domain walls at the QCD transition temperature it
is not clear that this leads to any cosmological problems \cite{ggr}.
The adiabatic density perturbations produced by this second period of
inflation will be negligible because $\delta\rho/\rho \simeq m_W/M_{Pl}
\simeq 10^{-16}$. In order that they become irrelevant for galaxy formation
and not destroy the density perturbations produced by ${\tilde N}_1^c$, one
requires that the density perturbations re-enter the horizon for time
scales irrelevant to the growth of large scale cosmological density
perturbations. This requires that the number of e-foldings $N \leq 30$
for $T_{RH}\simeq 10^6$GeV \cite{rt} which is satisfied for $\eta(0)\simeq
2 M_{Pl}$. Note also that quantum fluctuations of the axion field can
produce isothermal density perturbations, but these will be negligible
because the Hubble constant during this second inflationary epoch is
$H\simeq m_W$ \cite{twl}.
When the second period of inflation ends the universe will be reheated
to a temperature $T_{RH}\simeq g_{\star}^{-1/4}\sqrt{\Gamma_\eta M_{Pl}}
\simeq 10^6$GeV, where $g_\star \simeq 280$ and $\Gamma_\eta\simeq h_Y^2
m_W/(4\pi)$ for an inflaton with a Yukawa-type coupling $h_Y\simeq 10^{-4}$.
Since ${\tilde P}$ is sitting (up to quantum fluctuations $\lesssim{\cal O}
(m_W)$) at the global minimum $\langle{\tilde P}\rangle \simeq M_I$ with a
potential depth $m_W^2 M_I^2\simeq (10^7 {\rm GeV})^4$, finite temperature
corrections do not destroy this minimum, even though a local minimum exists
at the origin. Note that the reheat temperature is sufficiently low to avoid
the gravitino problem \cite{gravitino}. In addition the reheat temperature
is sufficiently high that right-handed electron neutrinos ($N_1$) are
regenerated because $M_{N_1}\simeq 10^5$ GeV. When $T\simeq M_{N_1}$ a
lepton asymmetry will be generated by out of equilibrium CP-violating decays
of $N_1$, provided that the neutrino Dirac Yukawa couplings are complex and
$\left| h_N^{1j}\right|\simeq 10^{-6}$. This lepton asymmetry will be
reprocessed into a baryon asymmetry by the usual electroweak anomaly.
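The quoted reheat temperature follows directly from these inputs; as a quick
numerical check (with $M_{Pl}=1.2\times 10^{19}$GeV an assumed illustrative value):
\begin{verbatim}
import math

# Reheat temperature T_RH ~ g*^(-1/4) * sqrt(Gamma_eta * M_Pl) with the
# inputs quoted in the text; M_Pl = 1.2e19 GeV is an assumed value.
g_star, h_Y, m_W, M_Pl = 280.0, 1.0e-4, 1.0e3, 1.2e19
Gamma_eta = h_Y**2*m_W/(4.0*math.pi)             # ~ 8e-7 GeV
T_RH = g_star**(-0.25)*math.sqrt(Gamma_eta*M_Pl)
print(f"T_RH = {T_RH:.1e} GeV")                  # ~ 8e5 GeV, i.e. ~ 10^6 GeV
\end{verbatim}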
\section{Conclusion}
We have described how one can obtain chaotic inflation with a radiatively
generated intermediate mass scale in a simple phenomenological extension of
the MSSM. We build on but significantly modify the approach of Murayama et al
\cite{msyy}. An initial period of inflation, driven by a quartic potential
associated with the right-handed electron sneutrino solves the usual horizon
and flatness problems of the universe. Density perturbations, $\delta\rho/
\rho\simeq 10^{-5}$ are generated when the electron Majorana neutrino Yukawa
coupling is ${\cal O}(10^{-7})$. While technically natural, this coupling is
similar in magnitude to the electron Dirac Yukawa coupling in the MSSM.
Radiative corrections from right-handed neutrino loops will break
U$(1)_{PQ}$ at an intermediate scale ($10^{12}$GeV) when the universe cools
to a temperature $T\lesssim 10^3$GeV. This implies an electron Majorana
neutrino mass $M_{N_1}\simeq 10^5$GeV and suggests that there is a hierarchy
in the Majorana neutrino spectrum related to the quark and lepton spectrum.
A baryon asymmetry can only be generated by invoking the non-perturbative
processes of electroweak baryogenesis.
However it is possible that a second stage of inflation can occur at an
intermediate scale with some flat-direction field, $\eta$. This second
inflationary epoch accelerates the damping of the electron sneutrino
amplitude and exponentially cools the universe again, causing the radiative
corrections to dominate and spontaneously break U$(1)_{PQ}$. When inflation
ends the universe can be reheated to a temperature $T_{RH}\simeq 10^6$GeV
which is sufficiently low to prevent restoring PQ symmetry. In addition the
right-handed electron neutrinos are regenerated and eventually decay when
$T\simeq M_{N_1}$. The resulting lepton asymmetry is reprocessed by the
electroweak anomaly into a baryon asymmetry.
The model we have constructed is an attempt to amalgamate current
cosmological ideas with the well tested phenomenology of the MSSM.
Ultimately one would like motivation from a more fundamental theory, but we
hope that the effective Lagrangian we have considered can shed some light
in this direction.
\section*{Acknowledgments}
We would like to thank M.~Einhorn, H.~Feldman, K.~Freese, C.~Kolda,
S.~Martin, H.~Murayama and R.~Watkins for discussions and comments. This
work was supported in part by the Department of Energy.
\newpage
\chapter*{Flavor effects in leptogenesis}
\author[]{P.~S.~B.~Dev${}^{\ast}$, P.~Di Bari${}^{\dagger}$, B.~Garbrecht${}^{\ddagger}$, S.~Lavignac${}^{\S}$,\\ P.~Millington${}^{\P}$\footnote{Corresponding Author.}, D.~Teresi${}^{\parallel}$}
\address{${}^{\ast}$ Department of Physics and McDonnell Center for the Space Sciences,\\ Washington University, St.~Louis, MO 63130, USA\\[3pt]
${}^{\dagger}$ Physics and Astronomy, University of Southampton,\\ Southampton, SO17 1BJ, UK\\[3pt]
${}^{\ddagger}$ Physik Department T70, Technische Universit\"{a}t M\"{u}nchen,\\
James-Franck-Stra\ss e, 85748 Garching, Germany\\[3pt]
${}^{\S}$ Institut de Physique Th\'eorique,
Universit\'e Paris Saclay,\\ CNRS, CEA,
F-91191 Gif-sur-Yvette, France\\[3pt]
${}^{\P}$ School of Physics and Astronomy, University of Nottingham,\\ Nottingham NG7 2RD, UK\\[3pt]
${}^{\parallel}$ Service de Physique Th\'{e}orique, Universit\'{e} Libre de Bruxelles,\\[-2pt] Boulevard du Triomphe, CP225, 1050 Brussels, Belgium
\\[3pt]
${}^{1}$ [email protected]}
\begin{abstract}
\textbf{Abstract}: Flavor effects can have a significant impact on the final estimate of the lepton (and therefore baryon) asymmetry in scenarios of leptogenesis. It is therefore necessary to account fully for this flavor dynamics in the relevant transport equations that describe the production (and washout) of the asymmetry. Doing so can both open up and restrict viable regions of parameter space relative to the predictions of more approximate calculations. In this review, we identify the regimes in which flavor effects can be relevant and illustrate their impact in a number of phenomenological models. These include type I and type II seesaw embeddings, and low-scale resonant scenarios. In addition, we provide an overview of the semi-classical and field-theoretic methods that have been developed to capture flavor effects in a consistent way.
\end{abstract}
\newpage
\body
\tableofcontents
\section{Introduction}
The realization of the importance of flavor effects~\cite{Abada:2006fw, Nardi:2006fx, Abada:2006ea, Blanchet:2006be, Pascoli:2006ie, DeSimone:2006nrs} represents one of the most significant developments in leptogenesis since its original proposal~\cite{Fukugita:1986hr} as a viable mechanism for generating the observed baryon asymmetry of the Universe. The flavor effects to which we refer can be associated with either of the following:
\begin{itemize}
\item [(i)] Non-vanishing off-diagonal elements in the charged-lepton Yukawa couplings and their couplings to the mediator of the relevant $L$-violating Weinberg operator.
\item [(ii)] Non-vanishing coherences in the off-diagonal elements of the particle number densities of species carrying flavor quantum numbers.
\end{itemize}
The former are a property of the renormalized Lagrangian of the model and arise from misalignment of the flavor and mass eigenbases; the latter are a property of the primordial plasma and arise from the quantum statistical mechanics of a system with particle mixing. Throughout this review, we will refer to flavor effects arising from the contribution of additional heavy, right-handed (RH) neutrino species as {\em heavy-neutrino flavor effects} and to those related to charged-lepton flavors as {\em charged-lepton flavor effects}, and we will see that a general description must take both into account. For earlier reviews that discuss the issue of flavor effects in leptogenesis, see, e.g., Refs.~\cite{Pilaftsis:1998pd, Davidson:2008bu, Blanchet:2012bk, Fong:2013wr}.
Coherences amongst the charged-lepton flavors play an important role in the dynamics of the washout of the asymmetry, and this is of particular importance for high-scale scenarios such as thermal leptogenesis. On the other hand, coherences amongst the heavy-neutrino flavors have an important effect on the source of CP asymmetry due to oscillations. Whilst oscillations are suppressed for hierarchical heavy-neutrino mass spectra, they become important when the heavy-neutrino masses become quasi-degenerate, and this has significant implications for scenarios of resonant leptogenesis, discussed further in Chapter~\cite{leptogenesis:A03} of this review. Successful leptogenesis can, in fact, be driven entirely by oscillations through the ARS mechanism~\cite{Akhmedov:1998qx}, and these scenarios are discussed in detail in Chapter~\cite{leptogenesis:A02} of this review. In certain regimes, accounting systematically for all relevant flavor effects can both enhance and suppress the final asymmetry compared to treatments in which they are only partially captured. Moreover, aside from their impacts upon the generated asymmetry, flavor effects can be key to realising scenarios of leptogenesis that are directly testable at current and near-future experiments both at the energy and intensity frontiers.
There have been significant efforts in the literature to develop theoretical frameworks and calculational techniques that allow flavor effects to be captured in a systematic way. These efforts span both first-principles field-theoretic and more phenomenologically-inspired semi-classical approaches. The former are based on the Kadanoff-Baym formalism~\cite{Baym:1961zz, KadanoffBaym}, itself embedded within the Schwinger-Keldysh~\cite{Schwinger:1960qe, Keldysh:1964ud} closed-time-path approach of non-equilibrium field theory. The latter --- often referred to as the density matrix formalism~\cite{Dolgov:1980cq, Stodolsky:1986dx, Raffelt:1992uj, Sigl:1992fn} --- can be derived at the operator level by means of the Liouville-von Neumann and Heisenberg equations. A more comprehensive overview of recent developments in field-theoretic approaches is provided in the companion Chapter~\cite{leptogenesis:A03}.
The outline of this review is as follows. In~\sref{sec:methods}, we discuss the regimes in which flavor effects are relevant. We then provide a brief overview of calculational methods that can account for these effects in the relevant transport equations that describe the production of the asymmetry. Having summarized the necessary theoretical tools, we proceed to illustrate the importance of flavor effects in the context of a number of phenomenological models. In~\sref{sec:typeI}, we consider thermal leptogenesis in the type I seesaw scenario; in~\sref{sec:RL}, we move on to low-scale scenarios of resonant leptogenesis; and finally, in~\sref{sec:typeII}, we discuss type II seesaw models. We briefly outline the relevance of flavor effects in other models in~\sref{sec:other}, and our conclusions are presented in~\sref{sec:conclusions}.
\section{Flavor effects and calculational methods}
\label{sec:methods}
In this section, and before proceeding to discuss the role of flavor effects in particular scenarios of leptogenesis, we first review the regimes in which flavor effects are important. We will also briefly outline the frameworks that allow these flavor effects to be captured fully in the Boltzmann-like equations that describe the generation of the asymmetry. We will discuss two in particular: semi-classical methods based on the so-called density matrix formalism~\cite{Sigl:1992fn} and field-theoretic approaches based on the Kadanoff-Baym formalism~\cite{Baym:1961zz, KadanoffBaym}.
\subsection{Flavored regimes}
\label{sec_regimes}
The Lagrangian
\begin{align}
\label{eq:1_lagrangian_general}
\mathcal{L} \ &=\ \mathcal{L}_{{\rm SM},h_\beta=0} \: +\:i\overline{N_{\!Rk}}\slashed{\partial}N_{Rk}\nonumber\\&\qquad - \: \left( h_\beta \,\overline{\ell}_{\beta}\,\phi\, e_{R\beta} \:
+ \: \lambda_{\alpha k}\,\overline{\ell}_{\alpha}\,\phi^cN_{Rk}\:
+ \:\frac{1}{2}\,\overline{N_{\!Rk}^{c}}M_kN_{Rk}\:
+ \:\text{h.c.}\right)
\end{align}
selects the mass eigenstates of the charged leptons as a preferred basis. However, in order to understand flavor effects in leptogenesis and how they can be neglected at very high temperatures, we would like to use the freedom of basis transformations among the lepton doublets $\ell$. Therefore, we promote the Standard Model (SM) Yukawa couplings to a matrix, viz.~$h_\beta \,\overline{\ell}_{\beta}\,\phi\, e_{R\beta} \to h_{\alpha\beta} \,\overline{\ell}_{\alpha}\,\phi\, e_{R\beta}$, where $h_{\alpha\beta}$ is diagonal in the flavor basis. Whilst we use the same symbol for the flavor-covariant matrix and the vector in the fixed flavor basis, it will be clear from the context which object is being referred to. In addition, we have explicitly identified the chirality of the right-handed singlets $N_{Rk}$ in order to distinguish them from the physical Majorana fields $N=N^c$, discussed later (see \sref{sec:typeI}).
Flavor-sensitive rates in the early Universe should scale as $|h_{\alpha\alpha}|^2 T$, where $T$ is the temperature. These are suppressed by a phase-space factor also involving gauge couplings~\cite{Garbrecht:2013urw} because the leading processes at high temperature are two-by-two scatterings involving gauge-boson radiation, cf.~\eref{Gamma:fl} and~\eref{gammafl}. These rates are to be compared with the Hubble rate $H$, which scales as $H\sim T^2/M_{\rm Pl}$, where $M_{\rm Pl}$ is the Planck mass. Doing so implies that flavor-sensitive processes are out of equilibrium above and in equilibrium below a certain temperature. The equilibration temperatures for various SM processes, relevant for flavor and spectator effects in leptogenesis, as well as in other cosmological scenarios, are shown in~\fref{fig:regions}. It should be noted, however, that the ranges are only indicative because loopholes can easily be found. For example, and as discussed in~\sref{sec3}, a scenario with largely hierarchical RH-neutrino Yukawa couplings can be constructed where the partial decoherence of correlations involving the $\tau$ flavor is important even when leptogenesis occurs at a low temperature due to comparably light RH neutrinos and a resonantly-enhanced CP asymmetry.
\begin{figure}[t!]
\centering
\includegraphics[width=1.\textwidth]{regions1}
\caption{Ranges of equilibration temperature for various SM processes, i.e.~for the strong and weak sphalerons (green), as well as quark (red) and lepton (blue) Yukawa interactions. The bands range from $T_X$ to $20\,T_X$, with $T_X$ denoting the equilibration temperature, at which the particular rate coincides with the Hubble rate. Figure taken from Ref.~\cite{Garbrecht:2014kda}.\label{fig:regions}}
\end{figure}
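To make the parametrics concrete, one can estimate the equilibration temperature $T_\alpha$ of a charged-lepton Yukawa interaction by equating a rate of the form $\Gamma_\alpha=\kappa\,|h_{\alpha\alpha}|^2\,T$ with the Hubble rate $H=1.66\sqrt{g_*}\,T^2/M_{\rm Pl}$. The Python sketch below does exactly this; the prefactor $\kappa\sim 10^{-2}$, standing in for the phase-space and gauge-coupling suppression mentioned above, is an assumed illustrative value, not a precise result.
\begin{verbatim}
import math

# Rough equilibration temperatures T_alpha from kappa*h^2*T = H(T).
# kappa ~ 1e-2 is an assumed stand-in for the phase-space and gauge
# suppression of the flavor-sensitive rates; g* = 106.75 (SM value).
kappa, g_star, M_Pl = 1.0e-2, 106.75, 1.2e19      # GeV units
v = 174.0                                         # Higgs vev in GeV
for name, m in [("tau", 1.777), ("mu", 0.1057), ("e", 5.11e-4)]:
    h = m/v                                       # charged-lepton Yukawa
    T = kappa*h**2*M_Pl/(1.66*math.sqrt(g_star))  # solve Gamma = H for T
    print(f"T_{name} ~ {T:.1e} GeV")
\end{verbatim}
With this choice one recovers the familiar ballpark values $T_\tau\sim 10^{12}\,{\rm GeV}$ and $T_\mu\sim 10^{9}\,{\rm GeV}$ underlying the regimes discussed below.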
However, barring extra symmetries or tuning in the type I seesaw model, the standard picture of flavored regimes is as follows: Suppose first that leptogenesis occurs at temperatures below $10^9\,{\rm GeV}$ from the decay of the lightest RH neutrino $N_1$ (see~\sref{subsec:N1dominated} for more details). In general, the decay creates a coherent superposition of all three lepton-doublet flavors $e$, $\mu$ and $\tau$. These superpositions can be described by off-diagonal elements that appear either in a description based on two-point correlation functions in the Schwinger-Keldysh formalism or within a matrix of number densities based on an operator formalism. Nevertheless, the flavor-sensitive rates will lead to a rapid decay of these off-diagonal correlations such that they can be ignored. It is therefore most suitable to simply remain in the mass eigenbasis where the Yukawa couplings of the charged leptons are diagonal.
Next, consider the opposite regime, where leptogenesis occurs at temperatures above $10^{14}\,{\rm GeV}$. If we remain in the mass eigenbasis, we can no longer ignore the flavor correlations, which amounts to a calculational inconvenience. The latter can, however, be removed by a flavor transformation of the doublet leptons, such that $N_1$ only couples to one of the doublet leptons in the new basis:
\begin{align}
\left(
\begin{array}{c}
u_{\perp 1}\\
u_{\perp 2}\\
u_\parallel
\end{array}
\right)
\left(
\begin{array}{ccc}
\lambda_{e 1} & \lambda_{e 2} &\lambda_{e 3}\\
\lambda_{\mu 1} & \lambda_{\mu 2} &\lambda_{\mu 3}\\
\lambda_{\tau 1} & \lambda_{\tau 2} &\lambda_{\tau 3}\\
\end{array}
\right)\
= \
\left(
\begin{array}{ccc}
0 & \times &\times\\
0 & \times &\times\\
\times & \times &\times\\
\end{array}
\right)
\;,
\end{align}
where $\times$ denotes a non-vanishing entry,
\begin{align}
u_\parallel\ =\ \frac{\left(
\begin{array}{ccc}\lambda_{e 1}, & \lambda_{\mu 1}, & \lambda_{\tau 1}
\end{array}
\right)}{\sqrt{\sum|\lambda_{\alpha 1}|^2}}
\end{align}
and $u_{\perp 1,2}$ are unit vectors perpendicular to $u_\parallel$, as well as to one another. In this description, we only need to consider the flavor aligned with $u_\parallel$ and can ignore the $\perp$ flavors altogether because no asymmetry is generated within these in the first place.
Finally, consider the narrow regime between $\tau$ and $\mu$ equilibration (around $10^{11}\,{\rm GeV}$), where we suitably transform
\begin{align}
\left(
\begin{array}{c}
u_{\perp}\\
u_{\parallel}\\
(\begin{array}{ccc}0&0&1\end{array})
\end{array}
\right)
\left(
\begin{array}{ccc}
\lambda_{e 1} & \lambda_{e 2} &\lambda_{e 3}\\
\lambda_{\mu 1} & \lambda_{\mu 2} &\lambda_{\mu 3}\\
\lambda_{\tau 1} & \lambda_{\tau 2} &\lambda_{\tau 3}
\end{array}
\right)\
=\
\left(
\begin{array}{ccc}
0 & \times &\times\\
\times & \times &\times\\
\lambda_{\tau 1} & \lambda_{\tau 2} &\lambda_{\tau 3}\\
\end{array}
\right)
\;,
\end{align}
in which
\begin{align}
u_{\parallel}\ =\ \frac{(
\begin{array}{ccc}
\lambda_{e 1}, & \lambda_{\mu 1}, & 0
\end{array})}
{\sqrt{|\lambda_{e 1}|^2+|\lambda_{\mu 1}|^2}}\;,\quad
u_{\perp}\ =\ \frac{(
\begin{array}{ccc}
\lambda_{\mu 1}, & -\,\lambda_{e 1}, & 0
\end{array})}
{\sqrt{|\lambda_{e 1}|^2+|\lambda_{\mu 1}|^2}}\;.
\end{align}
In this setup, asymmetries are produced within the $\tau$ flavor and the flavor aligned with $u_\parallel$, and there are no correlations amongst these because any such correlations are destroyed by interactions mediated by $h_\tau$. No asymmetries are generated in the flavor aligned with $u_\perp$, which can therefore be ignored.
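For concreteness, constructing the first of the rotated bases above is a short linear-algebra exercise. The Python sketch below (with a random real matrix standing in, as an assumption, for the physical Yukawa matrix $\lambda$) builds $u_\parallel$ from the first column of $\lambda$, completes it to an orthonormal basis, and exhibits the zero pattern displayed above:
\begin{verbatim}
import numpy as np

# Sketch of the doublet-lepton basis rotation that isolates the flavor
# direction coupling to N_1. The random real matrix is an illustrative
# stand-in for the physical Yukawa matrix lambda_{alpha k}.
rng = np.random.default_rng(0)
lam = rng.normal(size=(3, 3))

u_par = lam[:, 0]/np.linalg.norm(lam[:, 0])
# complete u_par to an orthonormal basis via a QR decomposition
Q, _ = np.linalg.qr(np.column_stack([u_par, np.eye(3)[:, :2]]))
U = np.vstack([Q[:, 1], Q[:, 2], Q[:, 0]])  # rows: u_perp1, u_perp2, u_par

lam_new = U @ lam
print(np.round(lam_new, 3))  # first column vanishes in the u_perp rows
\end{verbatim}
For complex couplings, the same construction applies with the Hermitian inner product, i.e.\ with $u_\parallel$ built from the complex conjugates of the entries of the first column, so that the transformation is unitary.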
This leaves open the questions of how to deal with intermediate regimes and whether the above procedures can be obtained as limiting cases of a more general approach that allows flavor effects to be treated throughout the entire temperature range. This will be addressed in~\sref{subsec:methods}.
\subsection{Calculational methods}
\label{subsec:methods}
In order to calculate the final lepton asymmetry, we need to describe the evolution of integrated particle number densities, $n\equiv n(t)$, in the expanding Universe~\cite{Kolb:1979qa,Luty:1992un}. This evolution is described semi-classically by coupled systems of Boltzmann equations, which take the general form
\begin{equation}
\label{3_generalBoltzmann}
\dot{n}_{A}\:+\:3 H n_{A}\ =\ \mathcal{C}_{A}[\{f\}]\;,
\end{equation}
where $\dot{}$ indicates a derivative with respect to cosmic time $t$ and $H$ is the Hubble rate. The subscript $A$ is a multi-index, which labels all species and their quantum numbers, i.e.~flavor, spin/helicity, isospin and so on. For our present discussions, the most important of these will be flavor. The terms on the left-hand side of~\eref{3_generalBoltzmann} are the so-called \emph{drift terms}, which include the effect of the cosmological expansion, and the \smash{$\mathcal{C}_A[\{f\}]$} on the right-hand side of~\eref{3_generalBoltzmann} are the \emph{collision terms}. The latter depend, in general, on the phase-space distribution functions $f_A$, which we define below. The remainder of this section will be concerned with the derivation of these collision terms in the flavored regime, where we must carefully treat the quantum-mechanical effects of particle mixing. Further discussion of the treatment of these effects in the context of resonant leptogenesis can be found in Chapter~\cite{leptogenesis:A03} of this review.
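Before turning to the flavored case, it is instructive to keep the generic structure of \eref{3_generalBoltzmann} in mind. A minimal single-species toy example, with an assumed relaxation-type collision term $\mathcal{C}=-\Gamma\,(n-n_{\rm eq})$ and purely illustrative constants, reads:
\begin{verbatim}
import numpy as np

# Minimal single-species toy version of eq. (3_generalBoltzmann) with an
# assumed relaxation-type collision term C = -Gamma*(n - n_eq).
H, Gamma = 0.1, 1.0                 # illustrative constant rates
n, n_eq, dt = 1.0, 0.3, 1e-3
for _ in np.arange(0.0, 20.0, dt):  # simple Euler integration to t = 20
    n += dt*(-3.0*H*n - Gamma*(n - n_eq))
print(f"n(t=20) = {n:.3f}")         # settles below n_eq due to dilution
\end{verbatim}
The stationary value, $n^{*}=\Gamma\,n_{\rm eq}/(3H+\Gamma)\approx 0.231$ here, illustrates the competition between Hubble dilution and the collision term; the flavored generalization below promotes $n$ to a matrix.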
The first step in obtaining the requisite systems of Boltzmann-like equations is to determine what it is that we aim to count. These are the distribution functions $f_A\equiv f_A(\mathbf{p},\mathbf{X},t)$: the densities of particles in phase space. Throughout what follows, we assume spatial homogeneity, such that the distribution functions depend only on time $t$ and three-momentum $\mathbf{p}$. Given a single scalar degree of freedom, the distribution function is straightforwardly related to the number operator, itself built out of the canonical creation and annihilation operators $\hat{a}^{\dag}(\mathbf{p})$ and $\hat{a}(\mathbf{p})$. Working in the interaction picture, we have
\begin{equation}
f(t,\mathbf{p})\ \equiv\ \big<\hat{n}(\mathbf{p})\big>_t\ \equiv\ \frac{1}{V}\mathrm{tr}\,\hat{\rho}(t)\hat{a}^{\dag}(\mathbf{p})\hat{a}(\mathbf{p})\;,
\end{equation}
where $\hat{\rho}(t)$ is the density operator ($\mathrm{tr}\,\rho(t)=1$) and $V=(2\pi)^3\delta^{(3)}(\mathbf{0})$ is the three-volume of the system. In the presence of multiple flavors, we might be tempted to add to these distribution functions a flavor index, $i$ say, such that
\begin{equation}
f_i(t,\mathbf{p})\ \equiv\ \big<\hat{n}_i(\mathbf{p})\big>_t\ \equiv\ \frac{1}{V}\mathrm{tr}\,\hat{\rho}(t)\hat{a}_i^{\dag}(\mathbf{p})\hat{a}_i(\mathbf{p})\;.
\end{equation}
However, in the presence of particle mixing, such an extension is incomplete, and we must introduce matrices of distribution functions that count both the diagonal densities of individual flavors but also the coherences between those different flavors:
\begin{equation}
f_{ij}(t,\mathbf{p})\ \equiv\ \big<\hat{n}_{ij}(\mathbf{p})\big>_t\ \equiv\ \frac{1}{V}\mathrm{tr}\,\hat{\rho}(t)\hat{a}_j^{\dag}(\mathbf{p})\hat{a}_i(\mathbf{p})\;.
\end{equation}
More generally, it may be necessary to count other individual quantum numbers, for example, helicity, as well as the corresponding coherences. The integrated number densities are of the form
\begin{equation}
\label{eq:numbdenmatrix}
n_{Xij}(t)\ =\ \sum_{q}\int\!\frac{\mathrm{d}^3\mathbf{p}}{(2\pi)^3}\;f_{Xij,q}(\mathbf{p},t)\;,
\end{equation}
where $X$ labels the particle species and the sum over $q$ includes all additional quantum numbers that we do not wish to track explicitly.
It is now clear what the relevant multi-index $A$ in~\eref{3_generalBoltzmann} comprises: it runs over the particle species of interest and their corresponding flavor structure. Hence, the coupled Boltzmann equations for the fermionic species are
\begin{subequations}
\begin{gather}
\label{3_generalBoltzmann2}
\dot{n}_{Nij}\:+\:3 H n_{Nij}\ =\ \mathcal{C}_{ij}[\{f,\bar{f}\}]\;,\\
\dot{n}_{\ell\alpha\beta}\:+\:3 H n_{\ell\alpha\beta}\ =\ \mathcal{C}_{\alpha\beta}[\{f,\bar{f}\}]\;,
\end{gather}
\end{subequations}
plus the CP-conjugate expressions, describing the evolution of the conjugate densities $\bar{n}_N$ and $\bar{n}_{\ell}$. We turn our attention now to the collision terms.
We may proceed in one of two ways: semi-classically via the Liouville-von Neumann and Heisenberg equations, or field-theoretically via the so-called Kadanoff-Baym formalism. Whilst the former approach is less technically involved, the latter has the advantage that all quantum effects are, in principle, incorporated systematically without external prescription.
\subsubsection{Semi-classical approach}
The aim of semi-classical approaches is to find consistent means for supplementing systems of Boltzmann equations with ingredients that involve some level of resummation. In this way, one intends to capture the pertinent quantum effects, whilst avoiding the technicalities of first-principles field-theoretic treatments. An introduction to semi-classical approaches for the simplest scenario of thermal leptogenesis is provided in Chapter~\cite{leptogenesis:A04} of this review.
We outline here the basics of the so-called density matrix formalism~\cite{Dolgov:1980cq, Stodolsky:1986dx, Raffelt:1992uj, Sigl:1992fn}, which yields rate equations for the integrated matrices of number densities in~\eref{eq:numbdenmatrix}. The derivation that follows is based on Ref.~\cite{Dev:2014laa}, and we will work in the interaction picture. Therein, we recall that the creation and annihilation operators evolve subject to the free part of the Hamiltonian $\hat{H}^0$ via the (interaction-picture form of the) Heisenberg equation of motion and that the density operator evolves subject to the interaction part of the Hamiltonian $\hat{H}^{\rm int}$ via the Liouville-von Neumann equation.
Introducing the matrix of number operators $\hat{n}_{ij}(t,\mathbf{p})$ corresponding to \eref{eq:numbdenmatrix}, the time-derivatives of the respective densities can be written
\begin{align}
\label{eq:densmatstart}
\frac{{\rm d}\,n_{ij}(t,\mathbf{p})}{{\rm d}t}\ =\ \frac{\rm d}{{\rm d}t}\,\mathrm{tr}\,\Big\{\hat{\rho}(t)\,\hat{n}_{ij}(t,\mathbf{p})\Big\}\ &=\ \mathrm{tr}\,\bigg\{\hat{\rho}(t)\,\frac{{\rm d}\,\hat{n}_{ij}(t,\mathbf{p})}{{\rm d}t}\:+\:\frac{{\rm d}\,\hat{\rho}(t)}{{\rm d}t}\,\hat{n}_{ij}(t,\mathbf{p})\bigg\}\;.
\end{align}
By means of the Heisenberg equation of motion, the first term on the right-hand side of \eref{eq:densmatstart} can be written
\begin{equation}
\mathrm{tr}\,\bigg\{\hat{\rho}(t)\,\frac{{\rm d}\,\hat{n}_{ij}(t,\mathbf{p})}{{\rm d}t}\bigg\}\ =\ i\langle[\hat{H}^0,\hat{n}_{ij}(t,\mathbf{p})]\rangle_t\;,
\end{equation}
and it describes flavor oscillations. For the second term on the right-hand side of \eref{eq:densmatstart}, we first recast the usual form of the Liouville-von Neumann equation
\begin{equation}
\frac{{\rm d}\,\hat{\rho}(t)}{{\rm d}t}\ =\ -\,i[\hat{H}^{\rm int}(t),\hat{\rho}(t)]
\end{equation}
as a Volterra integral equation of the second kind, i.e.
\begin{equation}
\hat{\rho}(t)\ =\ \hat{\rho}(0)\:-\:i\int_0^t{\rm d}t'\;[\hat{H}^{\rm int}(t'),\hat{\rho}(t')]\;.
\end{equation}
Proceeding by successive substitution to second order in the interaction Hamiltonian and subsequently differentiating with respect to time, we obtain
\begin{equation}
\label{eq:LvNsecondorder}
\frac{{\rm d}\,\hat{\rho}(t)}{{\rm d}t}\ =\ -\,i[\hat{H}^{\rm int}(t),\hat{\rho}(0)]\:-\:\int_0^t{\rm d}t'\;[\hat{H}^{\rm int}(t),[\hat{H}^{\rm int}(t'),\hat{\rho}(t')]]\;.
\end{equation}
For the models and particle species of interest to us, the first term on the right-hand side of \eref{eq:LvNsecondorder} is zero. The second term gives rise to the leading collision terms, and, by putting everything together, we obtain the exact evolution equation
\begin{equation}
\label{eq:fulldensmateq}
\frac{{\rm d}\,n_{ij}(t,\mathbf{p})}{{\rm d}t}\ =\ i\langle[\hat{H}^0,\hat{n}_{ij}(t,\mathbf{p})]\rangle_t\:-\:\int_0^t{\rm d}t'\;\langle[\hat{H}^{\rm int}(t'),[\hat{H}^{\rm int}(t),\hat{n}_{ij}(t,\mathbf{p})]]\rangle_{t'}\;.
\end{equation}
At this point, we emphasise the presence of the non-Markovian memory integral over $\rho(t')$, which depends on the complete history of the evolution.
By assuming (i) that the time-scales for the microscopic QFT processes and statistical evolution are well separated, and (ii) that momentum correlations built up by a collision are lost before the next collision (molecular chaos), we can make a Markovian (or Wigner-Weisskopf~\cite{Weisskopf:1930au}) approximation of \eref{eq:fulldensmateq} (see, e.g., Ref.~\cite{Dev:2014laa}). Doing so yields the Markovian master equation
\begin{equation}
\label{eq:Markovianmaster}
\frac{{\rm d}\,n_{ij}(t,\mathbf{p})}{{\rm d}t}\ =\ i\langle[\hat{H}^0,\hat{n}_{ij}(t,\mathbf{p})]\rangle_t\:-\:\frac{1}{2}\int_{-\infty}^{+\infty}{\rm d}t'\;\langle[\hat{H}^{\rm int}(t'),[\hat{H}^{\rm int}(t),\hat{n}_{ij}(t,\mathbf{p})]]\rangle_{t}\;.
\end{equation}
Notice that the Markovian approximation has led to the extension of the limits of time-integration and the change of time argument $t'\to t$ in the density operator, thereby neglecting memory effects.
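To make the structure of \eref{eq:Markovianmaster} concrete, the following minimal sketch evolves a toy $2\times 2$ matrix of number densities under a commutator (oscillation) term and a Lindblad-like anticommutator (damping) term. All parameters are hypothetical, and the damping structure is our own simplification of the collision term; the sketch only illustrates how flavor coherences oscillate and decay while the diagonal entries relax to equilibrium.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

E = np.diag([1.0, 1.2])        # toy effective energies (drive oscillations)
G = np.diag([0.05, 0.08])      # toy flavor-dependent damping rates
n_eq = 0.5 * np.eye(2)         # toy equilibrium densities

def rhs(t, y):
    # unpack the complex 2x2 matrix from a real vector
    n = y[:4].reshape(2, 2) + 1j * y[4:].reshape(2, 2)
    dn = -1j * (E @ n - n @ E) - 0.5 * (G @ (n - n_eq) + (n - n_eq) @ G)
    return np.concatenate([dn.real.ravel(), dn.imag.ravel()])

n0 = np.array([[0.9, 0.3], [0.3, 0.1]], dtype=complex)  # initial coherence
y0 = np.concatenate([n0.real.ravel(), n0.imag.ravel()])
sol = solve_ivp(rhs, (0.0, 200.0), y0, rtol=1e-8)
nT = sol.y[:4, -1].reshape(2, 2) + 1j * sol.y[4:, -1].reshape(2, 2)
print(np.round(nT, 4))  # diagonals -> 0.5; off-diagonals decay to zero
\end{verbatim}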
Whilst it is now a matter of course to find the explicit form of the oscillation and collision terms for a given Hamiltonian, it is clear that the right-hand side of \eref{eq:Markovianmaster} is truncated at second order in the interaction Hamiltonian. Moreover, in making the Markovian approximation, we have also neglected dispersive self-energy corrections. Hence, in order to capture any relevant non-perturbative effects in the resulting rate equations, we need to supplement the finite-order calculation with resummed quantities by some effective means. This process may be motivated by considering scattering matrix elements (in the case of the collision terms) or from finite-temperature field theory calculations (in the case of the thermal-mass corrections).
However, as is the case for any effective description, it is necessary to ensure that important field-theoretic properties are preserved, e.g.~unitarity, CPT invariance, gauge invariance and so on, and significant effort has been devoted to this in the literature (see, e.g., Ref.~\cite{Pilaftsis:1998pd}). For instance, in resonant scenarios (see~\sref{sec3} and Chapter~\cite{leptogenesis:A03}), it is necessary to resum the self-energies of the heavy neutrinos in order to regulate the resonant enhancement of the CP asymmetry. In this case, we need systematic methods for dealing with the resummation of transition amplitudes involving intermediate unstable states. Moreover, these unstable states will likely be subject to particle mixing. Lastly, we must avoid the double counting of processes contributing to the statistical evolution~\cite{Kolb:1979qa}. For example, if we include decays, inverse decays and two-to-two scatterings in the collision terms, we must be careful to deal with what happens when the scattering is mediated by an on-resonance $s$-channel exchange of the unstable particle. This problem can be evaded by employing so-called Real Intermediate State (RIS) subtraction~\cite{Kolb:1979qa} (see also Chapter~\cite{leptogenesis:A04}).
Rate equations can also be derived from first principles using the field-theoretic approaches that we will describe in the next subsection. Whilst this technology supersedes density matrix formalisms, semi-classical approaches remain of significant utility, and it is worth noting that many of the results reviewed in~\sref{sec:typeI}, \sref{sec3} and~\sref{sec:typeII} have been derived by these means.
\subsubsection{Field-theoretic approach}
\label{sec:methods_fieldtheory}
The program of \emph{field-theoretic} approaches is to derive the fluid equations that are used in phenomenological studies of leptogenesis from first principles of quantum field theory. As a starting point, we may choose the Schwinger-Dyson equations on the Schwinger-Keldysh closed time path (CTP)~\cite{Schwinger:1960qe, Keldysh:1964ud}, which contain the full content of the theory. Specifically, no truncations in the interactions or the quantum statistical state need to be made in their formulation. As a particular consequence, the evolution of the system is reversible prior to further truncations. The Schwinger-Dyson equations are formulated in terms of $n$-point functions and make no reference to an operator-based formalism. In fact, within statistical quantum field theory, they are most often derived in the functional formalism for the $n$-particle irreducible effective action~\cite{Calzetta:1986cq}. Nonetheless, it is important to keep in mind that within a perturbative expansion, the tree-level two-point functions can be straightforwardly constructed in the operator formalism via the density matrix, cf.~Refs.~\cite{Lee:2004we, Millington:2012pf, Millington:2013isa}, which may be useful in order to see how semi-classical and field-theoretic methods can be related. Further discussions of this point can be found in Chapter~\cite{leptogenesis:A03} of this review.
For the problem of leptogenesis, the following controlled approximations can be applied in order to reduce the Schwinger-Dyson equations to a system of quantum Boltzmann equations suitable for phenomenological studies:
\begin{itemize}
\item
Due to the smallness of the RH-neutrino Yukawa couplings $\lambda$, a perturbative truncation of the Schwinger-Dyson equations is appropriate for leptogenesis. Even more robust is an expansion based on the two-particle-irreducible effective action that readily resums one-loop corrections to the Green's functions that otherwise exhibit unphysical divergences, as occurs, for instance, for the fully mass-degenerate limit of resonant leptogenesis, as well as for the $t$-channel contribution to the production of relativistic RH neutrinos.
\medskip
\item
Another important truncation lies in neglecting the full higher-order quantum correlations, i.e.~those present within $n$-point functions for $n>2$, as well as among different species of particles. In principle, all higher-order correlations can be reconstructed from the two-particle-irreducible two-point Green's functions, but, in practice, the \emph{full} information is lost because the backreaction of the RH neutrinos on the lepton and Higgs doublets is neglected, up to an effective description through kinetic equilibrium distributions with chemical potentials.
\end{itemize}
In addition, the two-point functions will, in general, contain correlations between particles that share the same conserved quantum numbers, i.e.~members of a flavor multiplet. This is of relevance for leptogenesis in that it can affect the RH neutrinos, as well as the charged leptons. Flavor correlations of RH neutrinos lead to a contribution to the CP-violating source for leptogenesis (see the detailed discussions in the chapters on resonant leptogenesis~\cite{leptogenesis:A03} and ARS leptogenesis~\cite{leptogenesis:A02}), while correlations among the doublet leptons are at the core of the \emph{flavor} effects and their importance for the washout of the lepton asymmetries, which are the main focus of the present chapter. Therefore, in this section, we account for flavor correlations in the charged leptons only.\footnote{Correlations in the RH neutrinos are then still generated through wave-function corrections at one-loop order. For RH-neutrino correlations, particular care must be taken in order to avoid over-counting issues (see~\sref{subsec:rateequations}).}
An overview of the Schwinger-Keldysh CTP formalism is given in Sec.~3 of the accompanying Chapter~\cite{leptogenesis:A03} on resonant leptogenesis. Its application to leptogenesis is discussed in detail in Refs.~\cite{Buchmuller:2000nd, DeSimone:2007gkc, Garny:2009rv, Garny:2009qn, Beneke:2010wd, Anisimov:2010aq, Anisimov:2010dk}, and the present section relies particularly on Ref.~\cite{Beneke:2010dz}. Our starting point is the Schwinger-Dyson equation for the flavored left-handed (LH) lepton propagator:
\begin{align}
i\slashed{\partial}_x S_{\ell \alpha\beta}^{fg}(x,y)
\ =\ f \delta^{fg}\delta_{\alpha\beta} \delta^{(4)}(x-y) P_{\rm R}\:
+\:\sum\limits_h\int\!{\rm d}^4 w\;\slashed{\Sigma}^{fh}_{\ell \alpha\gamma}(x,w) S_{\ell \gamma\beta}^{hg}(w,y)\;.
\end{align}
The lower Greek indices denote active lepton flavor, while the upper Latin indices indicate the CTP branches $\pm$, and $P_{{\rm L},{\rm R}}$ are the left- and right-chiral projectors.
Switching to Wigner space, truncating at leading order in gradients and taking appropriate linear combinations, one obtains
\begin{subequations}
\begin{align}
\label{polemass:gradexp}
\left(\slashed{k}\:-\:\slashed{\Sigma}_\ell^{\mathcal{H}}\:\mp\:\slashed{\Sigma}_{\ell}^{\cal A}\right)
S_\ell^{A,R}\ &=\ P_{\rm R}\;,\\
\label{KB:gradexp}
\frac{i}{2}\,\slashed{\partial} S_{\ell}^{<,>}
\:+\:(\slashed{k}\:-\:\slashed{\Sigma}_{\ell}^{\mathcal{H}})S_\ell^{<,>}\:
-\:\slashed{\Sigma}^{<,>}_{\ell} S_{\ell}^{\mathcal{H}}
\ &=\
\frac12
\left(
\slashed{\Sigma}^{>}_{\ell}
S_{\ell}^<\:
-\:
\slashed{\Sigma}^{<}_{\ell}
S_{\ell}^>
\right)
\;,
\end{align}
\end{subequations}
where the superscripts $R$ and $A$ indicate retarded and advanced boundary conditions, respectively. We have also defined the linear combinations $\slashed{\Sigma}_{\ell}^{\mathcal A}\equiv(\slashed{\Sigma}_{\ell}^{A}-\slashed{\Sigma}_{\ell}^{R})/(2i)$, $\slashed{\Sigma}_{\ell}^{\mathcal{H}}\equiv(\slashed{\Sigma}_{\ell}^{A}+\slashed{\Sigma}_{\ell}^{R})/2$ with analogous definitions for the propagators $S_\ell$. The Wigner-space two-point functions (here, the propagators $S_\ell$ and self-energies $\slashed{\Sigma}_{\ell}$) are understood to be functions of the four-momentum $k$ and the average coordinate $X= (x+y)/2$ upon which the partial derivative is acting. In order to understand the physical content of these equations, it is useful to note that $S_\ell(k,X)$ describes particle properties for $k^0>0$ and anti-particle properties for $k^0<0$. We refer to the accompanying Chapter~\cite{leptogenesis:A03}, where more aspects of the Wigner transformation and the gradient expansion are reviewed. Note that when comparing with that reference, the definitions for the various two-point functions on the closed time path made here may differ by factors of $i$ and $2$.
It is of conceptual interest and an important consistency check to understand the solutions to this system of equations. It turns out that we may represent the tree-level propagators as
\begin{subequations}
\label{Slessgreater:elldoublets}
\begin{align}
\label{Seq:less}
iS^<_{\ell \alpha\beta}\ &=\ -\,2 S^{\cal A}_{\ell}
\left[
\theta(k^0)f_{\ell \alpha\beta}(\mathbf k)\:
-\:\theta(-k^0)({1\!\!1}_{\alpha\beta}-\bar{f}_{\ell \alpha\beta}(-\mathbf k))
\right]\;,
\\
\label{Seq:greater}
iS^>_{\ell \alpha\beta}\ &=\ -\,2 S^{\cal A}_{\ell}
\left[
-\,\theta(k^0)({1\!\!1}_{\alpha\beta}-f_{\ell \alpha\beta}(\mathbf k))
\:+\:\theta(-k^0) \bar{f}_{\ell \alpha\beta}(-\mathbf k)
\right]\;,
\end{align}
\end{subequations}
where
\begin{align}
\label{S^A:singular}
S_\ell^{\cal A}\ =\
\pi
P_{\rm L} \slashed{k} P_{\rm R}
\delta \!\left(k^2\right)\;,
\end{align}
and $f_{\ell \alpha\beta}$ and $\bar{f}_{\ell \alpha\beta}$ are the elements of the matrices of distribution functions for the charged leptons (unbarred) and anti-leptons (barred). At this point, one may wonder how finite-width effects from absorptive corrections, as well as the dispersive shifts to the various pole masses in the flavor-mixing system at finite temperature, come into play. In principle, in order to recover these effects, one has to resum the gradients to all orders~\cite{Garbrecht:2011xw, Fidler:2011yq}. Fortunately, since the lepton doublets are weakly coupled, this only amounts to perturbatively-suppressed kinematic corrections for the individual reactions.
Assuming spatial homogeneity and taking $i$ times the Hermitian part of the Kadanoff-Baym equation,~\eref{KB:gradexp}, we find that the remaining relevant information
can be isolated in the kinetic equation
\begin{align}
\label{eq:kinetic}
&i\partial_\eta i\gamma^0 S^{<,>}_\ell
\:-\:\left[
\mathbf k\cdot{\bm \gamma}\gamma^0
\:+\:{\Sigma}^{\mathcal{H}}_\ell\gamma^0,i\gamma^0 S^{<,>}_\ell
\right]\nonumber\\&\qquad\qquad
-\:\left[i{\Sigma}^{<,>}_\ell\gamma^0, \gamma^0 S^{\mathcal{H}}_\ell\right]\
=\ -\:\frac12\left(i{\cal C}_\ell\:+\:i{\cal C}_\ell^\dagger\right)\;,
\end{align}
with the collision term
\begin{align}
\label{collision:term}
{\cal C}_\ell \ & =\ i{\Sigma}^>_{\ell}
iS^<_\ell\:-\: i{\Sigma}^<_\ell iS^>_\ell \;.
\end{align}
For brevity, we have used a fixed flavor basis where the charged leptons are mass diagonal in the electroweak symmetry-broken phase. The flavor-covariant generalization can be found in Ref.~\cite{Beneke:2010dz}. Moreover, we assume here spatial homogeneity, such that there is no dependence on $X^i$ for $i=1,2,3$. In addition, to account for the expansion of the Universe, we use a parametrization where $X^0=\eta$ is the conformal time.
It turns out that in the parametric regime relevant for leptogenesis, oscillations among the charged-lepton flavors are effectively frozen in. In order to explain this effect, we decompose the fluid equations into particle and anti-particle distributions, as well as number densities
\begin{subequations}
\label{relate:nu:S}
\begin{align}
n_{\ell \alpha\beta}\ &=\
\int\!\frac{{\rm d}^3 \mathbf{k}}{(2\pi)^3}\;
f_{\ell \alpha\beta}(\mathbf k)
\ =\ -
\int\!\frac{{\rm d}^3 \mathbf{k}}{(2\pi)^3}
\int_{0}^{\infty}
\frac{{\rm d} k^0}{2\pi}\;
{\rm tr}\left[
i\gamma^0 S_{\ell \alpha\beta}^{<}
\right] \;,
\\
\bar{n}_{\ell \alpha\beta} \ &=\
\int\!\frac{{\rm d}^3 \mathbf{k}}{(2\pi)^3}\;\bar{f}_{\ell \alpha\beta}(\mathbf k)\
=\
\int\!\frac{{\rm d}^3 \mathbf{k}}{(2\pi)^3}
\int_{-\infty}^{0}
\frac{{\rm d} k^0}{2\pi}
{\rm tr}\left[
i\gamma^0 S_{\ell \alpha\beta}^{>}
\right] \;.
\end{align}
\end{subequations}
Note that in view of including flavor effects, $n_\ell$ counts the charge density within one component of the ${\rm SU}(2)_{\rm L}$ doublet of SM leptons only (in contrast to, e.g., the quantity $n_L$ used in the accompanying Chapters~\cite{leptogenesis:A03} and~\cite{leptogenesis:A04}). This way, compensating factors that would appear in the equations describing the reactions with the right-handed charged leptons of the SM can be avoided.
Integrating over the four-momentum of the lepton doublets brings us from a kinetic to a fluid description. Avoiding the technical details, we simply present the resulting fluid equations:
\begin{subequations}
\label{kin:eq_nu}
\begin{align}
\frac{\partial \delta n_{\ell \alpha\beta}}{\partial \eta}\
&=\
-\,i\Delta\omega^{\rm eff}_{\ell \alpha\beta} \delta n_{\ell \alpha\beta}
\:-\: \sum\limits_{\gamma}[W_{\alpha\gamma}\delta n_{\ell \gamma\beta}\:+\:\delta n^*_{\ell \gamma\alpha}W_{\beta\gamma}^*]
\nonumber\\&+\: S_{\alpha\beta}
\:-\:\Gamma^{\rm bl}(\delta n_{\ell \alpha\beta}\:+\:\delta \bar{n}_{\ell \alpha\beta})
\:-\:\Gamma_{\ell \alpha\beta}^{\rm fl}
\;,\\
\frac{\partial \delta \bar{n}_{\ell \alpha\beta}}{\partial \eta}
\ &=\
+\,i\Delta\omega^{\rm eff}_{\ell \alpha\beta} \delta \bar{n}_{\ell \alpha\beta}
\:-\:\sum\limits_{\gamma}[W_{\alpha\gamma}\delta \bar{n}_{\ell \gamma\beta}
\:+\:\delta \bar{n}^*_{\ell \gamma\alpha}W_{\beta\gamma}^*]\nonumber\\&
-\: S_{\alpha\beta}
\:-\:\Gamma^{\rm bl}(\delta n_{\ell \alpha\beta}\:+\:\delta \bar{n}_{\ell \alpha\beta})
\:-\:\overline{\Gamma}_{\ell \alpha\beta}^{\rm fl}\;,
\end{align}
\end{subequations}
and discuss their physical content and relation to~\eref{eq:kinetic}. The details of the evaluation of the particular terms can be found in Ref.~\cite{Beneke:2010dz}.
First, we discuss the kinetic aspects. Notice that we have expressed this equation in terms of the deviations of the lepton and anti-lepton number densities ($\delta n_{\ell}$ and $\delta \bar{n}_{\ell}$) from their
equilibrium values. One can show that for these quantities, the commutator term involving $S^{\mathcal{H}}_\ell$ in~\eref{eq:kinetic} (which is essentially an inhomogeneous term) drops out~\cite{Garbrecht:2011xw}. The remaining commutator term involving $\Sigma_\ell^{\mathcal{H}}$ potentially gives rise to flavor oscillations due to the thermal masses of the charged leptons. Only flavor-sensitive terms are relevant here. (Specifically, there are no direct oscillation effects due to the flavor-blind gauge interactions, which give rise to a contribution to the effective mass that is proportional to the identity matrix in flavor space.) Upon momentum averaging, the oscillation effects are therefore described by
\begin{align}
\Delta\omega_{\ell \alpha\beta}^{\rm eff}(\eta)
\ &=\ \int\!\frac{{\rm d}^3 \mathbf{k}}{(2\pi)^3}\;
\frac{12\, e^{|\mathbf{k}|/T}}{T^3\,(e^{|\mathbf{k}|/T}+1)^2}\,
\left(\frac{h_\alpha h_\beta^*\, T^2}{16\,|\mathbf{k}|}\right)\;.
\end{align}
Next, we turn to the collisional contributions, where we can identify the washout rate
\begin{align}
W_{\alpha\beta}\
&= \ \lambda_{\alpha 1} \lambda_{\beta 1}^*
\int\!
\frac{{\rm d}^3 \mathbf{k}}{(2\pi)^3 2|\mathbf k|}\,
\frac{{\rm d}^3 \mathbf{p}}{(2\pi)^3 2\sqrt{\mathbf{p}^2+(a(\eta)M_1)^2}}\,
\frac{{\rm d}^3 \mathbf{q}}{(2\pi)^3 2|\mathbf{q}|}\;
\notag\\
&\times(2\pi)^4 \delta^{(4)}(p-k-q)
k\cdot p
\big[
f_{N1}(\mathbf{p})+f_{\phi}(\mathbf{q})
\big]\,
\frac{12\,e^{|\mathbf k|/T}}{T^3(e^{|\mathbf k|/T}+1)^2}\;.
\end{align}
Here, the integration variables are understood to be conformal momenta, such that the physical momenta are, e.g.,~given by $\mathbf{k}/a(\eta)$, where $a(\eta)$ is the scale factor of the Friedmann-Lema\^{i}tre-Robertson-Walker metric. Similarly, $T$ is a conformal temperature, and the physical temperature is $T/a(\eta)$.
The CP-violating source term consists of a vertex and a wave-function contribution:
\begin{align}
S_{\alpha\beta}\ =\ S^{({\rm v})}_{\alpha\beta}\:+\:S^{({\rm wf})}_{\alpha\beta}\;,
\end{align}
where
\begin{align}
S^{({\rm v})}_{\alpha\beta}\ &=\ -\:i\sum\limits_{j\,\neq\,1}(\lambda_{\alpha 1}\lambda_{\gamma 1}\lambda_{\gamma j}^*\lambda_{\beta j}^*-\lambda_{\alpha j}\lambda_{\gamma j} \lambda^*_{\gamma 1} \lambda^*_{\beta 1})\notag\\
&\times\
\int\!\frac{{\rm d}^3\mathbf{k}}{(2\pi)^32|\mathbf{k}|}\,\frac{{\rm d}^3\mathbf{p}}{(2\pi)^32\sqrt{\mathbf{p}^2+M_1^2}}\,\frac{{\rm d}^3\mathbf{q}}{(2\pi)^32|\mathbf{q}|}\;(2\pi)^4\delta^{(4)}(p-k-q) \notag\\& \times\ k^\mu \frac{M_1}{16\pi M_j}K_{\mu j}(p,q)
\big[1-f_\ell(k)+f_\phi(q)\big]\;,
\label{S:vertex}
\end{align}
and
\begin{align}
S^{({\rm wf})}_{\alpha\beta}\ &=\ 8\,i\sum\limits_{j\,\neq\,1}\big[\big(\lambda_{\alpha 1}\lambda_{\gamma 1}\lambda_{\gamma j}^*\lambda_{\beta j}^*-\lambda_{\alpha j}\lambda_{\gamma j} \lambda^*_{\gamma 1} \lambda^*_{\beta 1}\big)\notag\\
&+\ \big(\lambda_{\alpha 1}\lambda^*_{\gamma 1}\lambda_{\gamma j}\lambda_{\beta j}^*-\lambda_{\alpha j}\lambda^*_{\gamma j} \lambda_{\gamma 1} \lambda^*_{\beta 1}\big)\big]\int\!\frac{{\rm d}^3\mathbf{p}}{(2\pi)^32\sqrt{\mathbf{p}^2+M_1^2}}\;\hat\Sigma_{N\mu}(p)
\hat\Sigma_N^{\mu}(p)\;.
\label{S:wavefunction}
\end{align}
Here, duplicate indices other than $j$ are summed over according to the Einstein convention. We have chosen to present these contributions in integral form in order to highlight the structure of the thermal cuts and the pertinent quantum statistical effects, as well as to facilitate comparison with the companion Chapters~\cite{leptogenesis:A02, leptogenesis:A03, leptogenesis:A04}. The expression for the vertex function $K_{\mu j}(p,q)$ can be found in
Chapter~\cite{leptogenesis:A04}, and
\begin{align}
\hat\Sigma_N^\mu(p)\ =\ \frac12\int\!\frac{{\rm d}^3\mathbf{k}}{(2\pi)^3 2|\mathbf k|}
\frac{{\rm d}^3\mathbf{q}}{(2\pi)^3 2|\mathbf q|}
(2\pi)^4 \delta^{(4)}(p-k-q)\,p^\mu
\big[
1-f_\ell^{\rm eq}(\mathbf k)+f_\phi^{\rm eq}(\mathbf q)
\big]\;,
\end{align}
which relates to the expression from Chapter~\cite{leptogenesis:A04} as $\hat\Sigma_{N\mu}(p)=L_\mu(p)/2$. We choose this different normalization in order to highlight the symmetry of the internal (cut) and external phase space
of the CTP Feynman diagrams, as well as to make connection with the discussion on ARS leptogenesis in the accompanying Chapter~\cite{leptogenesis:A02}.
It is of interest to comment on the CP-odd combinations of Yukawa couplings that appear in~\eref{S:vertex} and~\eref{S:wavefunction}. The combination in~\eref{S:vertex} and in the first term in round brackets in~\eref{S:wavefunction} arises due to lepton number violating contributions mediated by the Majorana mass $M$. In contrast, the second term in round brackets in~\eref{S:wavefunction} is lepton number conserving but lepton flavor violating, where the total lepton number conservation can be easily seen when taking the trace over the flavor indices $\alpha$ and $\beta$ of the charged leptons. Yet, lepton flavor violation in the type I seesaw model is only mediated by the RH neutrinos. Therefore, the different washout rates for the particular active lepton flavors (provided the latter are distinguishable from rates that are mediated by SM Yukawa couplings) can lead to a net lepton asymmetry even when starting only from the lepton number conserving contribution to the source. This has important consequences: Firstly, if lepton number violation is suppressed for some reason, flavor effects can still lead to a sizable or even enhanced lepton asymmetry, as occurs for ARS leptogenesis, cf.~the accompanying Chapter~\cite{leptogenesis:A02} on this topic. Secondly, since all the active lepton flavors are summed over, the trace of the lepton number violating source is manifestly independent of the weak basis transformation implied by the PMNS matrix. Therefore, unflavored leptogenesis is independent of the Dirac and Majorana phases in the PMNS matrix. In turn, once flavor effects are important, the outcome of leptogenesis depends, in general, on the PMNS phases, but we should be aware that extra ``high-energy'' phases will contribute~\cite{Nardi:2006fx, Abada:2006ea, Abada:2006fw}. For a decomposition of lepton number conserving versus lepton number violating sources in terms of effective decay asymmetries, see~\eref{veial} of the present chapter.
Finally, we turn to the last two terms in~\eref{kin:eq_nu}, which may be categorized as lepton number conserving dissipative effects. Flavor-blind contributions are mediated by gauge interactions and are described by $\Gamma^{\rm bl}\sim g^4 T$, where $g$ stands collectively for the weak and weak-hypercharge couplings. The relative signs are discussed carefully in Ref.~\cite{Beneke:2010dz}. The physical content is, however, that loss terms in, say, leptons and their flavor correlations tend to be compensated by gain terms from anti-leptons. This has an important consequence for the frustration of flavor oscillations, which we discuss below. The leading flavor-sensitive term is evaluated to be~\cite{Beneke:2010dz}
\begin{subequations}
\label{Gamma:fl}
\begin{align}
\Gamma^{\rm fl}_{\ell \alpha\beta}\ &=\
+\:\frac12\,
{\rm tr}\int\limits_{0}^{\infty}\frac{{\rm d} k^0}{2\pi}\int\!\frac{{\rm d}^3\mathbf{k}}{(2\pi)^3}
\left(
{\cal C}_{\ell \alpha\beta}^{\rm fl}(k)
\:+\:{\cal C}_{\ell \alpha\beta}^{{\rm fl}\dagger}(k)
\right)
\notag\\
&=\
\gamma^{\rm fl}
\left(
h_\alpha h^*_\gamma \delta n_{\ell \gamma\beta}\:+\:
\delta n_{\ell \alpha\gamma}^{\dagger}h_\gamma h^*_\beta
\:-\:h_\alpha \delta n_{{\rm R}\alpha} h^*_{\alpha}\delta_{\alpha\beta}
\:-\:h_{\alpha} \delta n_{{\rm R}\alpha}^{\dagger} h^*_{\alpha} \delta_{\alpha\beta}
\right)\;,\\
\overline{\Gamma}^{\rm fl}_{\ell \alpha\beta}\ &=\
-\:\frac12
{\rm tr}\,\int\limits_{-\infty}^{0}\frac{{\rm d} k^0}{2\pi}\int\!\frac{{\rm d}^3\mathbf{k}}{(2\pi)^3}
\left(
{\cal C}_{\ell \alpha\beta}^{\rm fl}(k)
\:+\:{\cal C}_{\ell \alpha\beta}^{{\rm fl}\dagger}(k)
\right)
\notag\\
& =\
\gamma^{\rm fl}
\left(
h_\alpha h^*_\gamma \delta \bar{n}_{\ell \gamma\beta}\:+\:
\delta \bar{n}_{\ell \alpha\gamma}^{\dagger}h_\gamma h^*_\beta
\:-\:h_\alpha \delta \bar{n}_{{\rm R}\alpha} h^*_{\alpha}\delta_{\alpha\beta}
\:-\:h_{\alpha} \delta \bar{n}_{{\rm R}\alpha}^{\dagger} h^*_{\alpha} \delta_{\alpha\beta}
\right)\;,
\end{align}
\end{subequations}
where no summation over $\alpha$ and $\beta$ is performed.\footnote{Here, we have taken the right-handed charged leptons to live in their flavor basis, which we can always do without the need to rotate other couplings. This is, of course, different for the doublet leptons, which have SM Yukawa couplings, as well as couplings to RH neutrinos, that cannot be simultaneously diagonalized. A flavor-covariant description of the right-handed charged leptons is presented in Ref.~\cite{Beneke:2010dz}.} This rate describes the direct damping of the off-diagonal correlations because these appear in the loss terms while the gain terms are diagonal in the flavor basis. Note that, in order to conserve baryon-minus-lepton number in the SM sector, we have to supplement our network of equations with one for the right-handed charged leptons, which can be considered as a spectator process that we omit here for brevity. The relevant fluid equations for the right-handed charged leptons are presented in Ref.~\cite{Beneke:2010dz}. The scattering processes leading to flavor decoherence are dominated by thermal effects because tree-level $1\leftrightarrow2$ reactions among massless particles mediated by the SM Yukawa couplings are kinematically suppressed. A logarithmic enhancement occurs due to $t$-channel divergences from fermion exchange that are regulated by Landau damping and Debye screening. From these considerations, one can compute the rate~\cite{Garbrecht:2013urw}
\begin{align}
\label{gammafl}
\gamma^{\rm fl} \ & =\
\gamma^{{\rm fl}(\phi)\delta\ell}\:+\:\gamma^{{\rm fl}(\ell)\delta\ell}\:+\:
\gamma^{{\rm fl}({\rm R})\delta\ell}\:+\:\gamma^{{\rm fl}}_{{\rm vertex}}
\notag\\
&=\ 1.32\times 10^{-3} \times h_t^2 T\:+\:3.72\times 10^{-3} \times G T\:+\: 8.31\times 10^{-4} \times G (\log G^{-1}) T \notag\\
& \qquad+\:4.74 \times 10^{-3} \times g_1^2 T
\:+\: 1.67\times 10^{-3} \times g_1^2 (\log g_1^{-2}) T
\:+\: 1.7\times 10^{-3} \times G T \;,
\end{align}
where $G=\frac{1}{2} (3 g_2^2 + g_1^2) $. In the SM, one may take $\gamma^{\rm fl}=5\times 10^{-3} T$, where a mild dependence of the numerical factor on the temperature scale due to the running couplings may be neglected in view of other uncertainties. Note that this value for $\gamma^{\rm fl}$ coincides with what had been used in the literature before a detailed calculation was available~\cite{Cline:1993bd, Abada:2006fw}.
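As a quick numerical cross-check of \eref{gammafl}, one may insert representative values of the couplings; the values below are assumptions for illustration only, roughly appropriate for a high renormalization scale:
\begin{verbatim}
import numpy as np

ht, g1, g2 = 0.85, 0.4, 0.55          # assumed high-scale couplings
G = 0.5 * (3 * g2**2 + g1**2)

gamma_fl_over_T = (1.32e-3 * ht**2
                   + 3.72e-3 * G
                   + 8.31e-4 * G * np.log(1.0 / G)
                   + 4.74e-3 * g1**2
                   + 1.67e-3 * g1**2 * np.log(1.0 / g1**2)
                   + 1.7e-3 * G)
print(f"gamma_fl ~ {gamma_fl_over_T:.1e} T")   # of order 5e-3 T
\end{verbatim}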
We now turn our attention to the frustration of flavor oscillations. Close to equilibrium, the term with $\Gamma^{\rm bl}=O(g^4 T)$ imposes the constraint
\begin{align}
\label{charge_constraint}
\delta n_{\ell\alpha\beta}\ =\ -\:\delta \bar{n}_{\ell\alpha\beta}\;.
\end{align}
This means that gauge interactions force opposite chemical potentials, and this condition generalizes to a matrix form in the presence of flavor coherences. Now, due to the opposite sign for particles and anti-particles
in the oscillation term of the kinetic equations,~\eref{kin:eq_nu}, it turns out that a large $\Gamma^{\rm bl}$ effectively frustrates flavor oscillations. To explain this, we consider the system of equations
\begin{subequations}
\label{g_matrix_eq}
\begin{align}
\label{g_matrix_eq1}
\frac{{\rm d}}{{\rm d}t}\,\delta g(t)\ &=\ -\:i\Delta\omega\,\delta g(t)\:-\:
\Gamma\big[\delta g(t)\:+\:\delta \bar{g}(t)\big]\;,
\\
\label{g_matrix_eq2}
\frac{{\rm d}}{{\rm d}t}\,\delta \bar{g}(t)\ &=\ +\:i\Delta\omega\,\delta \bar{g}(t)\:-\:
\Gamma\big[\delta \bar{g}(t)\:+\:\delta g(t)\big]\;.
\end{align}
\end{subequations}
For flavored leptogenesis, the orders of magnitude of the parameters are as follows:
\begin{align}
\Gamma\ =\ \Gamma^{\rm bl}\ \sim\ g^4 T\;,\qquad \Delta\omega\ \sim\ h_{\tau,\mu}^2 T
\ll \Gamma\;,
\end{align}
where we should take the $\tau$ or $\mu$ Yukawa coupling depending on which of these dominates the mass splitting of the flavors under consideration. Since $g^4 \gg h_{\tau,\mu}^2$, there are eigenmodes with short decay times
$\tau_{\rm s} = 1/(\Gamma+\sqrt{\Gamma^2-\Delta\omega^2}) \approx 1/(2 \Gamma)$ and long decay times $\tau_{\rm l} = 1/(\Gamma-\sqrt{\Gamma^2-\Delta\omega^2}) \approx 2 \Gamma / \Delta\omega^2$. The corresponding eigenvectors are
\begin{align}
\delta g_{\rm s,l} \ =\ \delta g\:+\:\frac{-\,i
\Delta\omega\: \pm \:\sqrt{\Gamma^2-\Delta\omega^2}}{\Gamma} \delta \bar{g}
\ \approx\ \delta g \:\pm\: \left(1 \mp i
\frac{\Delta\omega}{\Gamma} \right) \delta \bar{g}\;,
\end{align}
with
\begin{align}
\delta g_{\rm s,l}(t)\ =\ \delta g_{\rm s,l}(0) \,{\rm e}^{-t/\tau_{\rm s,l}}\;.
\end{align}
The short mode $\delta g_{\rm s} \approx \delta g + \delta \bar{g}$ thus rapidly approaches zero due to pair annihilations, leading to an effective constraint
\begin{align}
\label{eq:delg}
\delta g\ \sim\ - \left(1 - i \frac{\Delta\omega}{\Gamma}
\right) \delta \bar{g}\;.
\end{align}
The opposite signs in front of the $\Delta \omega$ term in~\eref{g_matrix_eq} are crucial because they imply that the source of the oscillations in
\begin{equation}
\frac{{\rm d}}{{\rm d}t}\big[\delta g(t)\:-\:\delta \bar{g}(t)\big]
\ =\ -\,i \Delta\omega\big[\delta g(t)\:+\:\delta \bar{g}(t)\big]\;,
\end{equation}
is damped due to the flavor-blind gauge interactions.
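The short and long decay times quoted above follow from diagonalizing the linear system \eref{g_matrix_eq}. A quick numerical verification, for hypothetical values with $\Delta\omega \ll \Gamma$, reads:
\begin{verbatim}
import numpy as np

dw, Gam = 0.01, 1.0     # hypothetical Delta_omega << Gamma (arbitrary units)

# d/dt (dg, dgbar)^T = M (dg, dgbar)^T for the system (g_matrix_eq)
M = np.array([[-1j * dw - Gam, -Gam],
              [-Gam, +1j * dw - Gam]])
rates = np.sort(-np.linalg.eigvals(M).real)
print(rates)                        # approx [dw^2/(2 Gam), 2 Gam]
print(dw**2 / (2 * Gam), 2 * Gam)   # analytic long/short estimates
\end{verbatim}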
The interplay of the flavor-blind interactions with the flavored oscillation term leads to the slow decay of the long mode. The rate for this effect is, however, much smaller than the direct damping rate from flavor-dependent scatterings,
\smash{${\Delta\omega^{\rm eff}}^2/\Gamma^{\rm bl} \sim h_{\tau,\mu}^4 g_2^{-4} T \ll \Gamma^{\rm fl}\sim g_2^2h_\tau^2 T$}, since \smash{$h_{\tau,\mu} \ll g_2^3$}. Therefore, it is a suitable approximation to neglect the oscillations and the damping due to flavor-blind interactions altogether, accounting only for the direct damping from flavor-dependent scatterings. While, for leptogenesis, we are in the parametric regime where $\Delta\omega\ll \Gamma$ and flavor oscillations are overdamped and frustrated, this is not expected to be true for general systems of flavor mixing at finite temperature, where flavor oscillations and damping due to the interplay with flavor-blind scatterings mediated by gauge interactions may be quantitatively important.
In conclusion, we have shown that the CTP framework leads to a fluid description in the form of~\eref{kin:eq_nu}, where the terms involving $\Delta \omega^{\rm eff}$ and $\Gamma^{\rm bl}$ can be neglected. In this approximation, we can then perform the obvious simplification of taking the difference between the equations for $\delta n_\ell$ and $\delta \bar{n}_\ell$, such that we obtain a single equation for $n_{\Delta \ell}=\delta n_\ell-\delta \bar{n}_\ell$. At that stage, we have then obtained fluid equations for the LH charged leptons that can be applied in the fully flavored and unflavored regimes, as well as in the intermediate ones. The flavor damping $\Gamma^{\rm fl}$ leads to the decay of off-diagonal correlations. Provided the damping is large, we obtain the commonly used fully-flavored description by simply deleting the off-diagonal components of the fluid equation.
\section[Flavor phenomenology of leptogenesis in the type I seesaw mechanism]{Flavor phenomenology of leptogenesis in the\\ type I seesaw mechanism}
\label{sec:typeI}
In this section, we discuss the importance of flavor effects in minimal scenarios of leptogenesis embedded within the type I seesaw scenario, wherein the SM Lagrangian is extended by introducing ${\cal N}_N$ RH Majorana neutrinos that are assumed to be produced thermally in the early Universe. Moreover, we highlight how leptogenesis can play an important role in testing high-energy seesaw models especially when flavor effects are taken into account.
Assuming a hierarchical RH neutrino spectrum, if one neglects completely the flavor composition of leptons produced by the decays of heavy RH neutrinos (unflavored assumption), the dominant contribution to the final asymmetry comes from the lightest RH neutrinos ($N_1$-dominated scenario), barring a special region of parameter space where the contribution of the next-to-lightest RH neutrinos dominates ($N_2$-dominated scenario). On the other hand, when charged-lepton flavor effects are taken into account, this $N_2$-dominated region of parameter space becomes significantly larger. In some cases, the heaviest of the RH neutrinos, usually $N_3$, might also give a non-negligible contribution, as long as their CP asymmetries are not suppressed by an overly strong mass hierarchy.
The RH-neutrino Yukawa couplings $\lambda$ and Majorana mass term $M$ are such that, after spontaneous symmetry breaking, we can write the neutrino mass terms in a basis where both charged-lepton and Majorana mass matrices are diagonal (the flavor basis):
\begin{equation}
-\,{\cal L}^{\nu}_{\rm m}\ =\ \overline{\nu_{L \alpha}} \, m_{D \alpha i} \, N_{R i}\:+\:
\frac{1}{2} \overline{N_{R i}^c} \, M_i \, N_{R i} \:+\:{\rm h.c.}\;,
\end{equation}
where $\alpha\in\{e, \mu, \tau\}$, $i\in\{1,\dots, {\cal N}_N\}$ and $m_D = v\, \lambda/\sqrt{2}$ is the neutrino Dirac mass matrix generated by the Higgs vev $v$. In the seesaw limit, $M \gg m_D$, the mass spectrum splits into a set of heavy (Majorana, almost RH) neutrinos $N_i = N_{R i} + N_{R i}^c+(m_D/M)(\nu_{Li}+\nu_{Li}^c)$ with masses (almost) coinciding with the eigenvalues $M_i$ of the Majorana mass matrix and into a set of light (Majorana, almost LH) neutrinos $\nu_{i} = \nu_{L i} + \nu_{L i}^c - (m_D/M)(N_{Ri}+N_{Ri}^c)$ with masses given by the seesaw formula
\begin{equation}
\label{seesaw}
D_m\ =\ U_{\nu}^{\dagger} \, m_D \, M^{-1} \, m_D^{\mathsf{T}} \, U^*_{\nu} \;,
\end{equation}
where the diagonalizing matrix $U_{\nu}$ is the leptonic mixing (PMNS) matrix and we have defined $D_m \equiv {\rm diag}(m_1, m_2, m_3)$.
Neutrino mixing experiments measure two mass-squared differences. For the atmospheric neutrino mass scale, global analyses find~\cite{Capozzi:2017ipn} $m_{\rm atm}\equiv \sqrt{m^{\, 2}_3 - m_1^{\, 2}} = (50.5\pm 0.04)\,{\rm meV}$, and for the solar neutrino mass scale $m_{\rm sol} = (8.6\pm 0.1)\,{\rm meV}$, defined as $m_{\rm sol} \equiv \sqrt{m^{\, 2}_2 - m_1^{\, 2}}$ for normally-ordered neutrino masses (NO) and as $m_{\rm sol} \equiv \sqrt{m^{\, 2}_3 - m_2^{\, 2}}$ for inverse-ordered neutrino masses (IO), where we are adopting the convention $m_1 \leq m_2 \leq m_3$. See the accompanying Chapter~\cite{leptogenesis:A06} for a review of the current status of the data on neutrino masses and lepton mixing.
For NO, the leptonic mixing matrix can be parametrized in the usual way in terms of three mixing angles $\theta_{12}, \theta_{23}$ and $\theta_{13}$, one CP-violating Dirac phase $\delta$, and two CP-violating Majorana phases $\alpha$ and $\beta$:
\begin{equation}
\label{eq:PMNS}
U_{\nu}\ =\ \begin{pmatrix}
c_{12}\,c_{13} & s_{12}\,c_{13} & s_{13}\,e^{-i\delta} \\
-s_{12}\,c_{23}-c_{12}\,s_{23}\,s_{13}\,e^{i\delta} &
c_{12}\,c_{23}-s_{12}\,s_{23}\,s_{13}\,e^{i\delta} & s_{23}\,c_{13} \\
s_{12}\,s_{23}-c_{12}\,c_{23}\,s_{13}\,e^{i\delta}
& -c_{12}\,s_{23}-s_{12}\,c_{23}\,s_{13}\,e^{i\delta} &
c_{23}\,c_{13}
\end{pmatrix}
{\rm diag}\big(1, e^{i\alpha}, e^{i\beta}
\big)\;.
\end{equation}
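For numerical studies, it is convenient to implement \eref{eq:PMNS} directly. A minimal sketch (the function name and defaults are ours; the unconstrained Majorana phases are set to zero) is:
\begin{verbatim}
import numpy as np

def pmns(th12, th23, th13, delta, alpha=0.0, beta=0.0):
    """PMNS matrix of (eq:PMNS); angles and phases in radians."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s23, c23 = np.sin(th23), np.cos(th23)
    s13, c13 = np.sin(th13), np.cos(th13)
    ed = np.exp(1j * delta)
    U = np.array([
        [c12 * c13, s12 * c13, s13 / ed],
        [-s12 * c23 - c12 * s23 * s13 * ed,
          c12 * c23 - s12 * s23 * s13 * ed, s23 * c13],
        [ s12 * s23 - c12 * c23 * s13 * ed,
         -c12 * s23 - s12 * c23 * s13 * ed, c23 * c13]])
    return U @ np.diag([1.0, np.exp(1j * alpha), np.exp(1j * beta)])

# best-fit NO values of (expranges)
U = pmns(*np.deg2rad([33.0, 41.0, 8.45]), delta=-0.62 * np.pi)
print(np.round(np.abs(U), 3))
print(np.allclose(U @ U.conj().T, np.eye(3)))   # unitarity check -> True
\end{verbatim}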
In order to account for different orderings, it is convenient to relabel the neutrino masses in a way that $m_1' < m_2' < m_3'$, with $1'=1$, $2'=2$ and $3'=3$ for NO, and $1'=3$, $2'=1$ and $3'=2$ for IO. In this primed basis, the leptonic mixing matrix for IO changes as
\begin{equation}
U_{\nu}^{\rm (IO)} \ =\
U_{\nu}^{\rm (NO)} \,
\left(\begin{array}{ccc}
0 & 1 & 0\\
0 & 0 & 1\\
1 & 0 & 0
\end{array}\right) \;.
\end{equation}
However, in order to simplify the notation, we will omit the primed indices. Global analyses of results from the neutrino oscillation experiments find the following best fit values ($1\sigma$ errors and $3\sigma$ intervals) for the mixing angles and the leptonic Dirac phase $\delta$ in the case of NO~\cite{Capozzi:2017ipn}:
\begin{subequations}
\label{expranges}
\begin{align}
\theta_{13} \ & = \ 8.45^{\circ}\pm 0.15^{\circ} \, \;\; [8.0^{\circ}, 9.0^{\circ}] \; , \\
\theta_{12} \ & = \ 33^{\circ}\pm 1^{\circ} \, \;\; [30^{\circ}, 36^{\circ}] \; , \\
\theta_{23} \ & = \ {41^{\circ}} \pm {1^{\circ}} \, \;\; [38^{\circ}, 51.65^{\circ}] \; , \\
\delta \ & = \ {-\,0.62 \pi \pm 0.2\pi} \, \;\; [-\,1.24\pi, 0.17\pi] \; .
\end{align}
\end{subequations}
It is interesting that the interval $\delta \in [0.17\,\pi, 0.76\,\pi]$ is already excluded at $3\,\sigma$ and that $\sin\delta \geq 0$ is excluded at $2\sigma$, favoring $\sin\delta < 0$ (in Ref.~\cite{Esteban:2016qun}, a lower statistical significance is found). A confirmation of the exclusion of $\sin \delta =0$ would imply the discovery of CP violation in neutrino oscillations, a very interesting (and favorable) result for leptogenesis; we will come back to this point. There are no experimental constraints on the Majorana phases $\alpha$ and $\beta$.
No signal has so far been observed in neutrinoless double beta ($0\nu\beta\beta$) decay experiments, which places an upper bound on the effective $0\nu\beta\beta$ neutrino mass $m_{ee} \equiv |m_{\nu ee}|$. Currently, the most stringent reported upper bound comes from the KamLAND-Zen collaboration, finding $m_{ee} \leq (61 \mbox{--} 165)\,{\rm meV}$ at $90\%\,{\rm C.L.}$~\cite{KamLAND-Zen:2016pfg} (for other recent results, see Refs.~\cite{Agostini:2017dxu,Albert:2017owj,Aalseth:2017btx}), where the range accounts for nuclear matrix element uncertainties (see the discussion in Chapter~\cite{leptogenesis:A06}).
Cosmological observations place an upper bound on the sum of the neutrino masses. The \emph{Planck Collaboration} obtains a robust stringent upper bound $\sum_i m_i \lesssim 170\,{\rm meV}$ at $95\% {\rm C.L.}$~\cite{Aghanim:2016yuo} that, taking into account the experimental determination of the solar and atmospheric neutrino mass scales from neutrino-oscillation experiments, translates into an upper bound on the lightest neutrino mass $m_1 \lesssim 50\,(42)\,{\rm meV}$ for NO (IO).
\subsection{Vanilla leptogenesis}
\label{sec:vanilla}
We will be particularly interested in phenomenological scenarios where the asymmetry is produced in the so-called strong washout regime. This occurs when the RH-neutrino inverse decays are in equilibrium during a certain
interval of temperatures $[T_{\rm in}, T_{\rm out}]$, centered approximately around $T \sim M_i$, efficiently washing out any asymmetry produced while $T \gtrsim T_{\rm out}$~\cite{Buchmuller:2004nz}. Moreover, if one assumes a hierarchical RH-neutrino spectrum or is, in any case, not in the resonant regime, and if flavor effects are neglected, one obtains an $N_1$-dominated scenario for most of the parameter space. In this case, the asymmetry can be described to a reasonable approximation by a very simple set of Boltzmann rate (i.e.~momentum-integrated) equations (see the accompanying Chapter~\cite{leptogenesis:A04} for more details):
\begin{subequations}
\begin{align}
\frac{{\rm d}Y_{N_1}}{{\rm d}z} \ & = \ -\:D_1\,(Y_{N_1}-Y_{N_1}^{\rm eq}) \;,
\label{dlg1} \\
\frac{{\rm d}Y_{B-L}}{{\rm d}z} \ & = \ -\:\epsilon_1\,D_1\,(Y_{N_1}-Y_{N_1}^{\rm eq})\:-\:
[\Delta W(z) +W_1^{\rm ID}(z)] \, Y_{B-L} \; ,
\label{dlg2}
\end{align}
\end{subequations}
written here in terms of the yields\footnote{An alternative and simplifying option to variables $Y_X$ is to normalize the abundance of any quantity $X$ to the number of RH neutrinos in ultra-relativistic equilibrium, defining $N_X \equiv n_X/n_{N}^{\rm eq}(z \ll 1)$. The two definitions are related by
\begin{equation*}
N_X (z) \ =\ \frac{g_*}{g_{N_1}} \, \frac{8\,\pi^4}{135 \zeta(3)} \, Y_X (z)\ =\
\frac{Y_X (z)}{Y_{N_1}^{\rm eq}(z=0)}\;.
\end{equation*}}
\begin{equation}
Y_{N_1}\ \equiv\ \frac{n_{N_1}}{s}\qquad \text{and}\qquad Y_{B-L}\ =\ \sum_{\alpha}Y_{\Delta_{\alpha}}\ \;,
\end{equation}
where
\begin{equation}
Y_{\Delta_{\alpha}}\ \equiv\ Y_{\Delta B/3-L_{\alpha}}\ =\ \frac 1 3\, Y_{\Delta B} - Y_{\Delta \ell_{\alpha}}-Y_{\Delta e_{R\alpha}}\;,
\end{equation}
$s=2\pi^2g_*T^3/45$ is the entropy density of the $g_*$ effective degrees of freedom and we have defined $z\equiv M_1/T$. The $N_1$ total CP asymmetry $\epsilon_1$ is defined as
\begin{equation}
\epsilon_1\ \equiv\ \frac{\Gamma_1-\bar{\Gamma}_1}{\Gamma_1+\bar{\Gamma}_1} \;,
\end{equation}
where $\Gamma_1\equiv \sum_{\alpha} \, \Gamma_{1\alpha}$ is the $N_1$ decay rate into leptons and $\bar{\Gamma}_1\equiv\sum_{\alpha}\, \bar{\Gamma}_{1\alpha}$ is the $N_1$ decay rate into anti-leptons and we have defined $\Gamma_{1\alpha} \equiv \Gamma(N_1\to\ell_{\alpha}\phi)$ and $\bar{\Gamma}_{1\alpha} \equiv \Gamma(N_1\to\bar{\ell}_{\alpha}\bar{\phi})$. A perturbative calculation from the interference of tree-level with one-loop self-energy and vertex diagrams gives~\cite{Covi:1996wh}
\begin{equation}
\label{CPas}
\epsilon_1 \ =\ \frac{1}{8\pi}\, \sum_{j\,\neq\, 1}\,\frac{{\rm
Im}\big[(\lambda^{\dagger}\lambda)^2_{1j}\big]}{(\lambda^{\dagger}\lambda)_{11}}\,\xi\bigg(1,\frac{M_j^2}{M_1^2}\bigg)\; ,
\end{equation}
where
\begin{equation}
\label{xi}
\xi(b,x)\ =\ \sqrt{x}\,
\bigg[1+\frac{b}{1-x}-(1+x)\,\ln\left(\frac{1+x}{x}\right)\bigg] \; .
\end{equation}
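The loop function is straightforward to implement; the following sketch also verifies the familiar hierarchical limit $\xi(1,x) \to -3/(2\sqrt{x})$ for $x \gg 1$:
\begin{verbatim}
import numpy as np

def xi(b, x):
    """Loop function of (xi); here x = M_j^2 / M_1^2."""
    return np.sqrt(x) * (1.0 + b / (1.0 - x)
                         - (1.0 + x) * np.log((1.0 + x) / x))

for x in [10.0, 100.0, 1e4]:
    print(x, xi(1.0, x), -1.5 / np.sqrt(x))   # approaches -3/(2 sqrt(x))
\end{verbatim}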
The (dimensionless) decay term $D_1$ and the washout term from inverse decays $W_1^{\rm ID}$ are given respectively by
\begin{equation}
\label{DW}
D_1(z)\ \equiv\ \frac{\Gamma_1+\bar{\Gamma}_1}{H\,z} \ =\ K_1\,z\,
\left\langle \frac{1}{\gamma_1} \right\rangle
\end{equation}
and
\begin{equation}
W_1^{\rm ID}(z)\ \equiv\ \frac{1}{2}\,\frac{\Gamma_1^{\rm ID}+\bar{\Gamma}_1^{\rm ID}}{H\,z} \ =\ \frac{1}{4}\,K_1 \,{\cal K}_1(z)\,z^3 \; ,
\end{equation}
where $K_1$ is the total decay parameter defined as
\begin{equation}
K_1 \ \equiv\ \frac{(\Gamma_1+\bar{\Gamma}_1)_{T\,=\,0}}{H_{T\,=\,M_1}} \; ,
\end{equation}
with $H$ being the expansion rate of the Universe. Finally, the averaged dilution factor, in terms of the modified Bessel functions of the second kind, is given by $\left\langle {1/\gamma_1} \right\rangle = {{\cal K}_{1}(z) / {\cal K}_{2}(z)}$.
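In terms of standard special functions, these kinetic coefficients of \eref{DW} are one-liners; a minimal sketch using SciPy's modified Bessel functions of the second kind reads:
\begin{verbatim}
import numpy as np
from scipy.special import kn

def D1(z, K1):
    """Decay term of (DW): D1 = K1 z K_1(z)/K_2(z)."""
    return K1 * z * kn(1, z) / kn(2, z)

def W1_ID(z, K1):
    """Inverse-decay washout term: W1 = (1/4) K1 K_1(z) z^3."""
    return 0.25 * K1 * kn(1, z) * z**3

z = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
print(D1(z, K1=10.0))
print(W1_ID(z, K1=10.0))   # peaks at z of a few, then Boltzmann-suppressed
\end{verbatim}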
The final $B-L$ asymmetry is simply given by
\begin{equation}
Y^{\infty}_{B-L}\ =\ -\,Y_{N_1}^{\rm eq}(0)\,\epsilon_1\,\kappa^{\infty}(K_1,m_1)\; ,
\end{equation}
where $\kappa^{\infty}(K_1, m_1)$ is the total final efficiency factor that can be calculated in the case of an initial thermal $N_1$ abundance as
\begin{equation}
\label{efunf}
\kappa^{\infty}(K_1, m_1) \ \simeq\ \kappa(K_1, m_1) \ \equiv \
\kappa(K_1) \, \exp\left[-\,\frac{\omega}{z_B}\,\frac{M_1}{10^{10}\,{\rm GeV}}\,
\frac{\sum_i m_i^2}{{\rm eV}^2} \right] \; ,
\end{equation}
with
\begin{equation}
\kappa(K_1) \ \equiv\ \frac{2}{K_1 \, z_{\rm B}(K_1)}\left[1-{\rm exp}\left(-\,\frac{1}{2}\,K_1 \, z_{\rm B}(K_1)\right)\right]
\end{equation}
and $\omega \simeq 0.186$. The exponential term is an effect of the $\Delta L =2$ washout term $\Delta W$. In the case of an initially vanishing $N_1$ abundance, the expression is more complicated and is the sum of a negative and a positive contribution. In any case, in the strong washout regime, realized for $K_1 \gtrsim 3$, there is no dependence on the initial $N_1$ abundance. This is because the asymmetry is generated within quite a narrow interval of temperatures centered at $T_{\rm lep}\equiv M_1/z_{B1}$, where $z_{B1}\equiv z_{B}(K_1) ={\cal O}(10)$, when the RH neutrinos are fully non-relativistic. All the asymmetry generated at higher temperatures, in the relativistic regime, which depends on the initial $N_1$ abundance, is efficiently washed out~\cite{Blanchet:2006ch}. This strongly reduces the theoretical uncertainties, since, in the relativistic regime, many different effects, most of which are not well under control, have to be taken into account in the calculation of the asymmetry.
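These features can be verified explicitly by integrating \eref{dlg1} and \eref{dlg2} numerically. The sketch below (hypothetical $K_1$ and $\epsilon_1$, thermal initial abundance, $\Delta W$ neglected, and abundances normalized such that $N_{N_1}^{\rm eq}(z\ll 1)=1$, as in the footnote above) extracts the final efficiency factor:
\begin{verbatim}
import numpy as np
from scipy.special import kn
from scipy.integrate import solve_ivp

K1, eps1 = 10.0, 1e-6          # hypothetical decay parameter and CP asymmetry

def Neq(z):                    # equilibrium N1 abundance, Neq(z << 1) = 1
    return 0.5 * z**2 * kn(2, z)

def rhs(z, y):
    NN, NBL = y
    D = K1 * z * kn(1, z) / kn(2, z)
    W = 0.25 * K1 * kn(1, z) * z**3
    src = D * (NN - Neq(z))
    return [-src, -eps1 * src - W * NBL]

sol = solve_ivp(rhs, (0.1, 50.0), [Neq(0.1), 0.0], rtol=1e-8, atol=1e-13)
kappa = -sol.y[1, -1] / eps1   # final efficiency factor
print(f"kappa(K1=10) ~ {kappa:.3f}")
\end{verbatim}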
Finally, the baryon-to-photon number ratio can be calculated in a very simple way from the final $B-L$ asymmetry:
\begin{equation}
\eta_B \ =\ a_{\rm sph}\,\frac{n_{B-L}^{\infty}}{n_{\gamma}^{\rm rec}}\ \simeq\ 0.01\,\frac{Y_{B-L}^{\infty}}{Y_{N_1}^{\rm eq}(0)}\; ,
\end{equation}
where $a_{\rm sph} \simeq 1/3$ is the fraction of $B-L$ asymmetry that goes into a baryon asymmetry when sphaleron processes~\cite{Kuzmin:1985mm} are in equilibrium (occurring approximately in the temperature range
$10^{12} \,{\rm GeV} \gtrsim T \gtrsim 100\,{\rm GeV}$). For successful leptogenesis, the result obtained for $\eta_B$ must reproduce the experimental value extracted from CMB temperature anisotropies. The \emph{Planck Collaboration} has recently found~\cite{Ade:2015xua}
\begin{equation}
\label{etaBPlanck}
\eta_{B}^{\rm CMB} \ =\ (6.10 \pm 0.04)\: \times\: 10^{-10} \; .
\end{equation}
An interesting feature of this simple picture is that both the RH-neutrino abundance and the washout of the asymmetry are described just by the efficiency factor. This depends only on the decay parameter $K_1$ and, quite interestingly, on the neutrino masses, which can be parametrized entirely in terms of $m_1$, when the measured values of the mass-squared differences are combined. The total decay parameter can then be re-expressed in terms of the Dirac mass matrix as
\begin{equation}
K_1 \ =\ \frac{(m^{\dagger}_D\,m_D)_{11}}{M_1 \, m_{\star}}\ =\ \frac{\widetilde{m}_1}{m_{\star}} \;,
\end{equation}
where $\widetilde{m}_1 \equiv (m^{\dagger}_D\,m_D)_{11}/M_1$ is the {\em effective neutrino mass} and
\begin{equation}
m_{\star}\ \equiv \
\frac{16\, \pi^{5/2}\,\sqrt{g_*}}{3\,\sqrt{5}}\,
\frac{v^2}{M_{\rm Pl}}
\simeq 1.08 \, {\rm meV}
\end{equation}
is the {\em equilibrium neutrino mass}. For most of the seesaw parameter space and barring fine-tuned cancellations in the seesaw formula, one has $\widetilde{m}_1 \simeq m_{\rm sol}$ -- $m_{\rm atm}$ corresponding to $K_1 \sim 10$ -- $50$. For these values of $K_1$, most of the produced asymmetry is washed out, since one has $\kappa(K_1) \sim 1/K_1^{1.2} \sim 10^{-3}$ -- $10^{-2}$. However, successful leptogenesis can still be attained for $|\epsilon_1| \sim 10^{-6}$ -- $10^{-5}$. At the same time, for these large values of $K_1$, the value of $\kappa(K_1)$ is independent of the initial $N_1$ abundance. They also imply a washout of a pre-existing asymmetry $Y_{B-L}^{{\rm pre},0}$ as large as $\sim 1$, since its relic final value is given by
\begin{equation}
Y_{B-L}^{{\rm pre},\infty} \ =\ e^{- \frac{3\pi}{8}\, K_1} \, Y_{B-L}^{{\rm pre}, 0} \; ,
\end{equation}
which is therefore exponentially suppressed. This result is due to the interesting experimental finding $m_{\rm sol},m_{\rm atm} \sim 10\,m_{\star}$, a coincidence that might be regarded as a phenomenological indication of {\em strong thermal leptogenesis}, wherein the final asymmetry is independent of the initial conditions. Notice that any asymmetry generated by the heavier RH neutrinos, and in particular by the $N_2$'s, will be exponentially washed out and can be neglected.
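Two quick numerical illustrations of the statements above, assuming $g_* = 106.75$ and the convention $v \simeq 174\,{\rm GeV}$ in the expression for $m_\star$ (for which the quoted value is reproduced):
\begin{verbatim}
import numpy as np

g_star, v, M_Pl = 106.75, 174.0, 1.221e19     # GeV units (assumed inputs)
m_star = 16 * np.pi**2.5 * np.sqrt(g_star) / (3 * np.sqrt(5)) * v**2 / M_Pl
print(f"m_star ~ {m_star * 1e12:.2f} meV")    # close to the quoted 1.08 meV

# relic fraction of a pre-existing asymmetry, exp(-3 pi K1 / 8)
for K1 in [3.0, 10.0, 50.0]:
    print(K1, np.exp(-3 * np.pi * K1 / 8))
\end{verbatim}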
Barring fine-tuned cancellations in the seesaw formula, one obtains the upper bound~\cite{Davidson:2002qv}
\begin{equation}
\label{upperbound}
|\epsilon_1|\ \lesssim\ 10^{-6}\,\frac{M_1}{10^{10}\,{\rm GeV}}\,\frac{m_{\rm atm}}{m_1 + m_3} \; .
\end{equation}
This upper bound on the CP asymmetry implies an upper bound on the final asymmetry, and the condition of successful leptogenesis yields a lower bound $M_1 \gtrsim 10^9\,{\rm GeV}$~\cite{Davidson:2002qv, Buchmuller:2002rq}.
A more precise value depends on the assumed initial $N_1$ abundance. In the case of strong washout, for $K_1 \gtrsim 3$, there is no such dependence, and one finds $M_1 \gtrsim 3 \times 10^9\,{\rm GeV}$. The lower bound on $M_1$ implies a lower bound on the reheat temperature of the Universe $T_{\rm reh} \gtrsim 1 \times 10^9 \,{\rm GeV}$. Within gravity-mediated supersymmetric models, this lower bound might be incompatible with the upper bound from avoidance of gravitino over-production~\cite{Khlopov:1984pf, Ellis:1984eq, Kawasaki:2008qe}. However, the latest constraints on supersymmetric models from the LHC strongly relieve the tension, since they favor large values of the gravitino mass above a TeV, making the upper bound more relaxed, $T_{\rm reh} \lesssim 10^{10} \,{\rm GeV}$, and reconcilable with thermal leptogenesis. Allowing for very strong fine-tuning in the seesaw relation, the lower bound can be relaxed if $M_2 \neq M_3$ due to an extra term in the total CP asymmetry that does not respect the upper bound~\eref{upperbound} and that is suppressed by a factor $(M_1/M_2)^2$~\cite{Hambye:2003rt, Blanchet:2008pw}.
\subsection[Flavor effects in the N1-dominated scenario]{Flavor effects in the $N_1$-dominated scenario}
\label{subsec:N1dominated}
The vanilla leptogenesis scenario and the rate equations in~\eref{dlg1} and~\eref{dlg2} rely on the implicit assumption that leptons produced from the decays of the RH neutrinos do not lose their coherence in flavor space prior to inverse decays that would otherwise fully wash out the asymmetry produced by the decays. If one depicts the asymmetry produced in the decays in the flavor space of the three charged leptons, this is equivalent to saying that decays and inverse decays all occur along one definite flavor direction, and flavor effects are, therefore, absent in practice. This is the unflavored approximation.\footnote{This is sometimes called the one-flavored approximation. However, this can be misleading, especially when heavy-neutrino flavors are introduced, and we prefer to refer to it as the unflavored approximation. Also notice that in the limit of no washout, corresponding to the case when inverse decays are never in equilibrium, there is no real difference between an unflavored description and a flavored one.}
However, this picture is highly over-simplified, and a proper account of flavor effects can strongly affect the final value of the asymmetry. Within the $N_1$-dominated scenario, the source of flavor effects is given by the interactions of the charged leptons~\cite{Barbieri:1999ma, Abada:2006fw}, described by $-\,{\cal L}^{\ell}_Y = h\,\bar{\ell}\,\phi\, e_R$. It results from the fact that the charged-lepton and neutrino Yukawa coupling matrices, respectively $h$ and $\lambda$, are, in general, not diagonal in the same basis. Therefore, charged-lepton interactions occurring between decays and inverse decays will tend to break the coherent propagation of the leptons produced in $N_1$ decays
before their inverse decays~\cite{Blanchet:2006ch}. Charged-lepton interactions are, of course, strongly flavor-dependent, since the eigenvalues of $h$ are very hierarchical: $h_{\tau} \gg h_{\mu} \gg h_e$. This implies that tau interactions, with rate $\Gamma_{\tau} \simeq 8 \times 10^{-3}\,h^2_{\tau} \, T$, are the strongest ones and are effective when $\Gamma_{\tau} \gtrsim\Gamma^{\rm ID}$, i.e.~for $M_1 \lesssim 5 \times 10^{11}\,{\rm GeV}$. On the other hand, muon interactions are effective for $\Gamma_{\mu} \simeq 10^{-3}\,h^2_{\mu} \, T \gtrsim \Gamma^{\rm ID}$, implying $M_1 \lesssim 5 \times 10^8 \,{\rm GeV}$. In this way, we have three important flavor regimes, determined by the mass of the lightest RH neutrino $M_1$, as follows.
\subsubsection{Unflavored regime: $M_1 \gg 5\times 10^{11}\,{\rm GeV}$}
As discussed earlier, all charged-lepton interactions can be neglected. One then recovers the {\em unflavored regime}, where charged-lepton effects have negligible impact.
\subsubsection{Two-flavor regime: $5 \times 10^{8}\,{\rm GeV} \ll M_1 \ll 5 \times 10^{11} \,{\rm GeV}$}
Leptons of type ${\ell}_1$, produced by the $N_1$ decays, can be described in their inverse decay as an incoherent mixture of a $\tau$ component and an $e+\mu$ component, which we indicate by $\tau_1^{\bot}$. The flavor composition is then determined by the probabilities \smash{$P_{1\alpha} \equiv |\langle \ell_1 | \alpha \rangle |^2$}, with \smash{$\alpha=\tau, \tau_1^\bot$} and such that \smash{$P_{1\tau} + P_{1\tau_1^\bot} =1$}. One can do the same for the anti-leptons, introducing probabilities $\bar{P}_{1\alpha}$. At tree level, the $\ell_1$ and $\bar{\ell}_1$ quantum states are CP-conjugates of each other. However, when loop effects are considered, one has $P_{1\alpha} \neq \bar{P}_{1\alpha}$.
The yields for the asymmetry in the two flavors $\tau$ and $\tau_1^{\bot}$, respectively \smash{$Y_{\Delta_\tau}$} and \smash{$Y_{\Delta_{\tau_1^{\bot}}}$}, have to be tracked separately, and we enter a so-called {\em two-flavor regime}. If we indicate the tree-level probabilities with $P^0_{1\alpha}$, their inverse-decay washout term is then reduced, compared to $W_1^{\rm ID}$, by a factor $P^0_{1\alpha} = (\Gamma_{1\alpha}+\bar{\Gamma}_{1\alpha})/(\Gamma_1 +\bar{\Gamma}_1)$. The kinetic equation for the total asymmetry in the unflavored regime,~\eref{dlg2}, is now replaced by two equations: one for \smash{$Y_{\Delta_\tau}$} and one for \smash{$Y_{\Delta_{\tau_1^{\bot}}}$}. The RH-neutrino kinetic equation remains unchanged, and the relevant set of Boltzmann equations is
\begin{subequations}
\label{flke}
\begin{align}
\frac{{\rm d} Y_{N_1}}{{\rm d}z}\ & = \ -\:D_1\,(Y_{N_1}-Y_{N_1}^{\rm eq}) \; , \\
\frac{{\rm d}Y_{\Delta_{\tau}}}{{\rm d}z} \ & = \ -\:
\epsilon_{1\tau}\,D_1\,(Y_{N_1}-Y_{N_1}^{\rm eq}) \:-\:P_{1\tau}^{0}\,W_1\,Y_{\Delta_{\tau}} \; , \\
\frac{{\rm d}Y_{\Delta_{\tau_1^{\bot}}}}{{\rm d}z} \ & = \ -\:
\epsilon_{1{\tau}^{\bot}_1}\,D_1\,(Y_{N_1}-Y_{N_1}^{\rm eq})\:-\:P_{1{\tau}^{\bot}_1}^{0}\,W_1\,Y_{\Delta_{\tau_1^{\bot}}} \; ,
\end{align}
\end{subequations}
where we have introduced the flavored CP asymmetries ($\alpha=e,\mu,\tau$), given by~\cite{Covi:1996wh}
\begin{align}
\label{veial}
\epsilon_{1\alpha} \ & = \
\frac{1}{8\pi (\lambda^{\dagger}\lambda)_{11}} \, \sum_{j\,\neq\, 1} \bigg\{ {\rm Im}\,
\left[ \lambda_{\alpha 1}^{*}\lambda_{\alpha j} (\lambda^{\dagger}\lambda)_{1j}\right] \xi\bigg(1,\frac{M_j^2}{M_1^2}\bigg)
\nonumber\\
& + \ \frac{M_1^2}{M_1^2-M_j^2} {\rm Im}\,\Big[\lambda_{\alpha 1}^*\lambda_{\alpha j} (\lambda^{\dagger}\lambda)_{j1}
\Big] \bigg\} \; ,
\end{align}
and defined $\epsilon_{1\tau_1^{\bot}} \equiv \epsilon_{1e} + \epsilon_{1\mu}$ and $P^0_{1\tau_{1}^\bot} \equiv P^0_{1e} + P^0_{1\mu}$.\footnote{Since $\epsilon_1 = \sum_\alpha \epsilon_{1\alpha}$, one can indeed verify that the expression for $\epsilon_1$ in~\eref{CPas} is recovered after summing over $\alpha$ in~\eref{veial}.} The loop function $\xi(b,x)$ is defined in~\eref{xi} (see also Chapter~\cite{leptogenesis:A04}). If the $\ell_1$ and $\bar{\ell}_1$ quantum states were simply CP-conjugates of each other, the flavored CP asymmetries would just be given by $\epsilon_{1\alpha} = P^0_{1\alpha}\,\epsilon_1$. As mentioned above, this holds at tree level, but loop contributions\footnote{They must necessarily be considered, since the CP asymmetries are generated by the interference of tree-level and one-loop graphs. One would, of course, have $\epsilon_1=\epsilon_{1\alpha}=0$ at tree level.} generate a mismatch~\cite{Nardi:2006fx} $\Delta P_{1\alpha} \equiv P_{1\alpha} -\bar{P}_{1\alpha}$, so that the flavored CP asymmetries get additional contributions. We then have
\begin{equation}
\epsilon_{1\alpha}\ =\ \frac{P_{1\alpha} + \bar{P}_{1\alpha}}{2}\,\epsilon_1\:+\:\frac{\Delta P_{1\alpha}}{2}
\end{equation}
and note that $\Delta P_{1\tau} + \Delta P_{1\tau_1^\bot} =0$.
The solution for the final asymmetry is a quite trivial generalization of the result obtained in the unflavored case (see \sref{sec:vanilla}). One has
\begin{equation}
Y_{B-L}^{\infty}\ =\ Y_{\Delta_{\tau}}^{\infty}\: +\: Y_{\Delta_{\tau_1^{\bot}}}^{\infty} \; ,
\end{equation}
with $Y^{\infty}_{\Delta_{\tau}}/Y_{N_1}^{\rm eq}(0) \simeq -\,\epsilon_{1\tau}\,\kappa(K_{1\tau})$ and $Y^{\infty}_{\Delta_{\tau_1^{\bot}}}/Y_{N_1}^{\rm eq}(0) \simeq -\,\epsilon_{1\tau_1^\bot}\,\kappa(K_{1\tau_1^\bot})$. Barring fine-tuning in the seesaw formula, the total final asymmetry can then be written as~\cite{Blanchet:2008pw}
\begin{equation}
\label{finalas}
Y_{B-L}^{\infty}/Y_{N_1}^{\rm eq}(0)\ \simeq \ -\,N_{\rm fl} \, \epsilon_1 \, \kappa(K_1)\: +\: \frac{\Delta P_{1 \tau}}{2} \,
\left[\kappa(K_{1\tau_1^\bot})-\kappa(K_{1\tau}) \right]\; ,
\end{equation}
where $N_{\rm fl}$ is an effective number of flavors, taking values between 1, when there is no washout at all ($K_1 \ll 1$) and the unflavored result is recovered, and 2, the number of flavors in this regime. This expression shows that large deviations from the unflavored case can arise only in the presence of washout, if $\Delta P_{1\alpha} \neq 0$ and $\kappa(K_{1\tau_1^\bot})-\kappa(K_{1\tau}) \neq 0$. For this reason, the lower bounds on $M_1$ and on $T_{\rm reh}$ in the limit of no washout are not changed by flavor effects. It should also be said that, allowing for some fine-tuning in the seesaw formula, the flavored CP asymmetries can be enhanced by unbounded extra terms that are suppressed
by $M_1/M_2$. With some mild fine-tuning, and without a too strongly hierarchical spectrum, one can relax the lower bounds on $M_1$ and on $T_{\rm reh}$ to $\sim 10^8\,{\rm GeV}$~\cite{Blanchet:2008pw}.
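The impact of the flavor projection can be seen by integrating \eref{flke} directly. The sketch below uses purely hypothetical flavored parameters, with $K_1 = K_{1\tau} + K_{1\tau_1^\bot}$, $P^0_{1\alpha} = K_{1\alpha}/K_1$ and the same normalization as the earlier sketch:
\begin{verbatim}
import numpy as np
from scipy.special import kn
from scipy.integrate import solve_ivp

K1t, K1o = 15.0, 5.0        # hypothetical K_{1 tau}, K_{1 tau-perp}
e1t, e1o = 6e-7, -2e-7      # hypothetical flavored CP asymmetries
K1 = K1t + K1o

def Neq(z):
    return 0.5 * z**2 * kn(2, z)

def rhs(z, y):
    NN, Nt, No = y
    D = K1 * z * kn(1, z) / kn(2, z)
    W = 0.25 * K1 * kn(1, z) * z**3
    src = D * (NN - Neq(z))
    return [-src,
            -e1t * src - (K1t / K1) * W * Nt,   # P0_{1 tau}      = K1t/K1
            -e1o * src - (K1o / K1) * W * No]   # P0_{1 tau-perp} = K1o/K1

sol = solve_ivp(rhs, (0.1, 50.0), [Neq(0.1), 0.0, 0.0],
                rtol=1e-8, atol=1e-14)
print("flavored N_B-L:", sol.y[1, -1] + sol.y[2, -1])
# compare with the unflavored estimate -(e1t + e1o) * kappa(K1)
\end{verbatim}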
The most extreme case of deviation from the unflavored case is realized when $\epsilon_1 =0$, implying conservation of total lepton number~\cite{Nardi:2006fx}. Even in this case, if the second term is large enough, one can attain successful leptogenesis~\cite{Blanchet:2006be,Pascoli:2006ie,Pascoli:2006ci,Anisimov:2007mw,Molinaro:2007uv,Molinaro:2008rg,Molinaro:2008cw}. The CP violation then stems uniquely from low-energy phases, although certain conditions on the high-energy parameters still have to be verified. Therefore, the measurement of CP-violating values of low-energy phases is neither a sufficient nor a necessary condition for successful leptogenesis. However, the discovery of CP violation at low energies, in particular of a CP-violating value of the Dirac phase, as now supported by the data, would, of course, be a very important conceptual result, not least of all because CP violation at low energies is, in general, accompanied by CP violation at high energies.\footnote{Imposing a discrete flavor symmetry, this would not be true: one could have CP-violating values of the low-energy phases with no CP violation at high energies. However, a flavor symmetry has to be broken, and even a very small breaking would be sufficient to generate enough CP violation at high energies to produce the correct asymmetry. Implications of flavor and CP symmetries in leptogenesis are discussed in detail in Chapter~\cite{leptogenesis:A06}.}
\subsubsection{Three-flavor regime: $M_1 \ll 5 \times 10^{8} \, {\rm GeV}$}
In this case, the muon interaction rate is large enough during the asymmetry production that the leptonic quantum states $\tau_1^\bot$ produced by the $N_1$ decays also decohere before they inverse decay. One therefore has to calculate the electron asymmetry $Y_{\Delta_e}$ and the muon asymmetry $Y_{\Delta_\mu}$ separately, in addition to the tau asymmetry $Y_{\Delta_\tau}$, thereby realising a {\em three-flavor regime}.
The set of kinetic equations is easily generalized and comprises three equations: one for each flavor asymmetry $Y_{\Delta_\alpha}$. However, in this case, the asymmetries are, barring a quasi-degenerate RH-neutrino spectrum or fine-tuning in the seesaw formula, too small for successful leptogenesis. For this reason, the two-flavor regime is, in general, more significant.\footnote{In a supersymmetric case, the transition between the two- and the three-flavor regimes occurs at $M_1 \simeq 5 \times 10^{8}\,(1 +\tan^2 \beta)\,{\rm GeV}$~\cite{Abada:2006fw}. One can then have successful leptogenesis even in the three-flavor regime.}
\subsection{Density matrix equation}
The unflavored regime and the two-(or three-)flavor regimes are asymptotic limits of a more general physical picture where, at the inverse decay, the leptonic quantum states $| \ell_1 \rangle $ are neither all a coherent superposition nor all an incoherent admixture, but a coexistence of both. In this intermediate regime, a useful statistical description is provided by a {\em density matrix equation}~\cite{Barbieri:1999ma, Abada:2006fw, DeSimone:2006nrs, Beneke:2010dz, Blanchet:2011xq}. In this more general approach, all abundances are replaced by matrices in (charged-lepton) flavor space. The density matrix equation is then flavor invariant upon rotations in (charged-lepton) flavor space (see the discussions in~\sref{sec:3flavourcovariance}). In the limit where one interaction dominates over all others in flavor space, the density matrix equation asymptotically reproduces the Boltzmann equations that we discussed above, while, in the intermediate regime, it describes the transition between two different flavor regimes. As an example, we can consider the important transition between the unflavored and the two-flavor regimes, in which case the only charged-lepton interactions that need to be included are the tau interactions.
When gauge interactions are taken into account, they force the matrix for the sum of leptons and anti-leptons to be given approximately by $Y^{\ell +\bar{\ell}}_{\alpha\beta} = 2 \, Y_{\ell}^{\rm eq}\,\delta_{\alpha\beta}$. This leads to
the following (closed) equation for the $B-L$ density matrix~\cite{Barbieri:1999ma, Blanchet:2011xq}
\begin{align}
\label{fullyflavoured}
\frac{{\rm d}[Y_{B-L}]_{\alpha\beta}}{{\rm d}z} \ =\ &-\:
\epsilon^{(1)}_{\alpha\beta}\,D_1\,(Y_{N_1}-Y_{N_1}^{\rm eq})\:-\:\frac{1}{2}\,W_1\,\left\{{\cal P}^{0(1)}, Y_{B-L}\right\}_{\alpha\beta}
\nonumber\\&-\: \frac{\Gamma_{\tau}}{H\, z} \, [\sigma_{1}]_{\alpha\beta}\,[Y_{B-L}]_{\alpha\beta} \; ,
\end{align}
specialized in the (two) charged-lepton flavor basis \smash{$\tau-\tau_1^{\bot}$}. In this equation, \smash{$\epsilon^{(1)}_{\alpha\beta}$} is the CP asymmetry matrix for $N_1$ decays that feeds the source term, \smash{${\cal P}^{0(1)}_{\alpha\beta}$} is the tree-level flavor projector along the $\ell_1$ direction and $\sigma_1$ is the first Pauli matrix, whose entries are non-zero only for $\alpha \neq \beta$, so that the last term damps the off-diagonal correlations.
As expected, the two-flavor regime is recovered in the limit $\Gamma_{\tau}/(Hz) \gg W_1$ (or, equivalently, $\Gamma_{\tau} \gg \Gamma_1^{\rm ID} + \bar{\Gamma}_1^{\rm ID}$), when all leptons ${\ell}_1$ experience a tau interaction before inverse decaying. In this limit, the third term on the right-hand side of~\eref{fullyflavoured} efficiently damps the off-diagonal terms, and one immediately recovers the kinetic equations,~\eref{flke}.
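This transition can be made concrete by integrating~\eref{fullyflavoured} directly. Below is a minimal numerical sketch (not the full calculation), assuming the standard analytic forms $Y_{N_1}^{\rm eq} \propto z^2\,\mathcal{K}_2(z)$, $D_1 = K_1 z\, \mathcal{K}_1(z)/\mathcal{K}_2(z)$ and $W_1 = \tfrac{1}{4}\,K_1 z^3\, \mathcal{K}_1(z)$ for decays and inverse decays, a constant value for $\Gamma_\tau/(Hz)$ (a reasonable approximation, since $\Gamma_\tau \propto T$ while $H \propto T^2$), and a purely illustrative (hypothetical) Hermitian CP-asymmetry matrix:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn  # modified Bessel functions K_n

Neq = lambda z: 0.5 * z**2 * kn(2, z)           # normalized so Neq(0) = 1
D   = lambda z, K: K * z * kn(1, z) / kn(2, z)  # decay term
W   = lambda z, K: 0.25 * K * z**3 * kn(1, z)   # inverse-decay washout

K1, eps1, P0tau, dtau = 50.0, 1e-6, 0.3, 10.0   # hypothetical; dtau ~ Gamma_tau/(H z)
c  = np.array([np.sqrt(P0tau), np.sqrt(1.0 - P0tau)])
P0 = np.outer(c, c)                              # projector along ell_1
epsM = eps1 * np.array([[0.4, 0.2], [0.2, 0.6]]) # toy Hermitian matrix, trace = eps1
s1 = np.array([[0.0, 1.0], [1.0, 0.0]])          # first Pauli matrix

def rhs(z, y):
    NN = y[0]
    Y = np.array([[y[1], y[3] + 1j * y[4]],
                  [y[3] - 1j * y[4], y[2]]])     # Hermitian Y_{B-L}
    dNN = -D(z, K1) * (NN - Neq(z))
    dY = (-epsM * D(z, K1) * (NN - Neq(z))
          - 0.5 * W(z, K1) * (P0 @ Y + Y @ P0)
          - dtau * s1 * Y)                       # element-wise off-diagonal damping
    return [dNN, dY[0, 0].real, dY[1, 1].real, dY[0, 1].real, dY[0, 1].imag]

sol = solve_ivp(rhs, (0.01, 50.0), [0.0, 0.0, 0.0, 0.0, 0.0],
                method='LSODA', rtol=1e-8, atol=1e-13)
print('Y_{B-L} =', sol.y[1, -1] + sol.y[2, -1])  # trace of the matrix
\end{verbatim}
Increasing the damping strength \texttt{dtau} drives the off-diagonal entries of $Y_{B-L}$ to zero, recovering the two-flavor kinetic equations, while setting it to zero and taking the trace reproduces the unflavored limit discussed next.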
The unflavored limit is more tricky, and there is even an interesting twist. First of all, one can neglect tau lepton interactions. This is equivalent to neglecting the term $\propto \Gamma_{\tau}$ in~\eref{fullyflavoured}. The density matrix equation in the unflavored limit then becomes
\begin{equation}
\label{dmatrixunfl}
\frac{{\rm d}[Y_{B-L}]_{\alpha\beta}}{{\rm d}z} \ = \
-\,\epsilon^{(1)}_{\alpha\beta}\,D_1\,(Y_{N_1}-Y_{N_1}^{\rm eq})\:-\:\frac{1}{2}\,W_1\,\left\{{\cal P}^{0(1)}, Y_{B-L}\right\}_{\alpha\beta} \; .
\end{equation}
Taking the trace of this equation, one immediately finds the usual equation for $Y_{B-L}$ in the unflavored regime,~\eref{dlg2}.
At the same time, after some easy steps, one can also find an equation for the difference
\begin{align}
\frac{{\rm d}\big(Y_{\Delta_{\tau\tau}} - Y_{\Delta_{\tau_1^{\bot}\tau_1^{\bot}}}\big)}{{\rm d}z} \ = \
&-\:\Delta P_{1\tau}\, D_1 \, (Y_{N_1}-Y_{N_1}^{\rm eq}) \nonumber\\&-\:
\frac{1}{2}\,W_1 \Big(Y_{\Delta_{\tau\tau}} - Y_{\Delta_{\tau_1^{\bot}\tau_1^{\bot}}}\Big) \; ,
\end{align}
with the asymptotic ($z \to \infty$) solution
\begin{equation}
Y_{\Delta_{\tau\tau}} - Y_{\Delta_{\tau_1^{\bot}\tau_1^{\bot}}} \ =\ -\,Y_{N_1}^{\rm eq}(0)\,\Delta P_{1\tau} \, \kappa(K_1/2) \; ,
\end{equation}
so that, for the leptonic asymmetries, one has
\begin{subequations}
\label{NBmLttTB2}
\begin{align}
Y^{\infty}_{\Delta_{\tau\tau}}\ &\simeq\ P^0_{1\tau}\,Y_{B-L}^{\infty}
\:-\: \frac{1}{2}\,Y_{N_1}^{\rm eq}(0)\,\Delta P_{1\tau} \, \kappa(K_1/2) \; , \\
Y^{\infty}_{\Delta_{\tau_1^{\bot}\tau_1^{\bot}}} \ &\simeq\ P^0_{1\tau_1^{\bot}}\,Y_{B-L}^{\infty} \:+\:\frac{1}{2}\,
Y_{N_1}^{\rm eq}(0)\,\Delta P_{1\tau}\, \kappa(K_1/2) \; .
\end{align}
\end{subequations}
The second terms on the right-hand sides of the two expressions are the so-called {\em phantom terms}. In the $N_1$-dominated scenario, with no further dynamical stage after the $N_1$ production, they cannot leave any detectable trace since they cancel out in the final $Y_{B-L}$ and, therefore, in $\eta_B$. However, as we will discuss in the next subsection, when heavy-neutrino flavor effects are also taken into account, their exact cancellation at the production can be removed afterwards. In this case, they would give a contribution to the final expression for the baryon asymmetry.
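The origin of the argument $K_1/2$ in the phantom terms can be verified numerically: the factor $1/2$ in front of the washout term for the difference acts as a halved effective decay parameter. A minimal sketch, using the same analytic rate parametrizations and efficiency-factor fit as in the previous sketches, with hypothetical values of $K_1$ and $\Delta P_{1\tau}$ and a thermal initial $N_1$ abundance:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn

Neq = lambda z: 0.5 * z**2 * kn(2, z)
D   = lambda z, K: K * z * kn(1, z) / kn(2, z)
W   = lambda z, K: 0.25 * K * z**3 * kn(1, z)
zB  = lambda K: 2.0 + 4.0 * K**0.13 * np.exp(-2.5 / K)
kappa = lambda K: 2.0 / (K * zB(K)) * (1.0 - np.exp(-0.5 * K * zB(K)))

K1, dPtau = 20.0, 5e-7   # hypothetical

def rhs(z, y):
    NN, X = y            # X = Y_{Delta_tau tau} - Y_{Delta_perp perp}
    dNN = -D(z, K1) * (NN - Neq(z))
    dX = -dPtau * D(z, K1) * (NN - Neq(z)) - 0.5 * W(z, K1) * X
    return [dNN, dX]

sol = solve_ivp(rhs, (0.01, 50.0), [Neq(0.01), 0.0],
                method='LSODA', rtol=1e-8, atol=1e-14)
print(sol.y[1, -1], -dPtau * kappa(K1 / 2))   # close in strong washout
\end{verbatim}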
\subsection{Flavor coupling}
In the Boltzmann equations for the flavored regimes in~\sref{subsec:N1dominated}, the flavored asymmetries evolve independently of one another. For example, in the case of the $N_1$-dominated and two fully-flavored regime, the equations for $Y_{\Delta_{\tau}}$ and $Y_{\Delta_{\tau_1^{\bot}}}$ are decoupled (see~\eref{flke}). The dynamics of the two asymmetries are then independent, and one can say that the two flavors are thermally uncoupled.
There are, however, different effects (spectator processes) that are able to couple the dynamics of the two flavors~\cite{Barbieri:1999ma, Buchmuller:2001sr, Nardi:2005hs, Blanchet:2008pw}. The most important one is the {\em Higgs asymmetry}. Since the Higgs doublet carries hypercharge, the $\phi$'s couple to leptons and the $\bar{\phi}$'s couple to anti-leptons. On the other hand, the Higgs asymmetry is clearly unflavored.
Suppose, for example, that the asymmetry is entirely produced in the tau flavor and not in the $\tau_1^{\bot}$ flavor. The asymmetry created in the former will necessarily be accompanied by an opposite Higgs asymmetry. This, however, through inverse decays, will then necessarily induce an asymmetry also in the $\tau_1^{\bot}$ flavor, even though we have assumed that there is no source term in this flavor. Therefore, the Higgs asymmetry couples the dynamics of the two flavors, thereby realising a kind of thermal contact between them such that the asymmetry in one flavor induces an asymmetry in the other flavor. In addition to the Higgs asymmetry, one has also to consider that sphaleron processes are able to transfer the asymmetry initially injected into lepton doublets and Higgs bosons to all other particles, including quarks (indeed creating a baryon asymmetry). A lepton asymmetry created in a specific flavor can then induce asymmetries in the other flavors through baryon asymmetries, analogously to what we have seen for the Higgs asymmetry.
It should be noticed that, in this case, the inverse decays, which have so far played only the role of washout processes, can actually generate an asymmetry in one flavor, although this is possible only if there is a source term injecting an asymmetry in another flavor from the start. The Boltzmann equations in the two-flavor regime,~\eref{flke}, then get modified in the following way:
\begin{subequations}
\label{flcke}
\begin{align}
\frac{{\rm d}Y_{N_1}}{{\rm d}z} \ & = \ -\,D_1\,(Y_{N_1}-Y_{N_1}^{\rm eq}) \;,\\
\frac{{\rm d}Y_{\Delta_{\tau_1^{\bot}}}}{{\rm d}z}\ & = \
-\:\epsilon_{1\tau_1^{\bot}}\,D_1\,(Y_{N_1}-Y_{N_1}^{\rm eq})\:-\:
P_{1\tau_1^{\bot}}^{0}\,W_1\,\sum_{\alpha\,=\,\tau_1^\bot,\tau}\,C_{\tau_1^\bot\alpha}^{(2)}\,Y_{\Delta_\alpha} \;,\\
\frac{{\rm d}Y_{\Delta_{\tau}}}{{\rm d}z}\ & = \
-\:\epsilon_{1\tau}\,D_1\,(Y_{N_1}-Y_{N_1}^{\rm eq})\:-\:
P_{1\tau}^{0}\,W_1\,\sum_{\alpha\,=\,\tau_1^\bot,\tau}\,C_{\tau\alpha}^{(2)}\,Y_{\Delta_\alpha} \; .
\end{align}
\end{subequations}
The flavor coupling matrix $C^{(2)}$ is given by the sum of two contributions,
\begin{equation}
C_{\alpha\beta}\ =\ C^{\ell}_{\alpha\beta}\:+\:C^{\phi}_{\alpha\beta} \;,
\end{equation}
the first connecting the washout to the asymmetry stored in the lepton doublets and the second to the asymmetry stored in the Higgs bosons. The flavor coupling matrix relates the asymmetries stored in the lepton doublets and in the Higgs bosons to the $Y_{\Delta_\alpha}$'s, and it acts in such a way that the asymmetry in a flavor $\beta\neq \alpha$ influences the asymmetry in the flavor $\alpha$ through the washout terms. Imposing chemical equilibrium conditions among the different asymmetries, one finds
\begin{equation}
C^{\ell(2)}\ =\ \left(\begin{array}{cc}
417/589 & -\,120/589 \\ -\,30/589 & 390/589 \end{array}\right) \hspace{4mm} \mbox{\rm and}
\hspace{4mm}
C^{\phi(2)}\ =\ \left(\begin{array}{cc}
164/589 & 224/589 \\
164/589 & 224/589
\end{array}\right) \;,
\end{equation}
whose sum yields
\begin{equation}
C^{(2)} \ \equiv\
\left(\begin{array}{cc}
C^{(2)}_{\tau_1^\bot\tau_1^\bot} & C^{(2)}_{\tau_1^\bot\tau} \\ C^{(2)}_{\tau\tau_1^\bot} & C^{(2)}_{\tau\tau}
\end{array}\right) \ =\
\left(\begin{array}{cc}
581/589 & 104/589 \\ 134/589 & 614/589 \end{array}\right) \; .
\end{equation}
In the {\em three-flavor regime}, the Boltzmann equations for each flavored asymmetry, taking into account the flavor coupling matrix, become
\begin{equation}
\label{flkewA}
\frac{{\rm d}Y_{\Delta_\alpha}}{{\rm d}z} \ =\ -\:
\epsilon_{1\alpha}\,D_1\,(Y_{N_1}-Y_{N_1}^{\rm eq})
\:-\:P_{1\alpha}^{0}\,\sum_{\beta\,=\,e,\mu,\tau}\,C^{(3)}_{\alpha\beta}\,W_1^{\rm ID}\,Y_{\Delta_\beta}\;.
\end{equation}
The flavor coupling matrices in the three-flavor regime are given by
\begin{equation}
C^{\ell(3)}\ =\ \left(\begin{array}{ccc}
151/179 & -\,20/179 & -\,20/179 \\ -\,25/358 & 344/537 & -\,14/537 \\ -\,25/358 & -\,14/537 & 344/537
\end{array}\right)
\end{equation}
and
\begin{equation}
C^{\phi(3)}\ =\ \left(\begin{array}{ccc}
37/179 & 52/179 & 52/179 \\
37/179 & 52/179 & 52/179 \\
37/179 & 52/179 & 52/179
\end{array}\right) \; ,
\end{equation}
whose sum yields
\begin{equation}
C^{(3)} \ \equiv\
\left(\begin{array}{ccc}
C_{ee}^{(3)} & C_{e\mu}^{(3)} & C_{e\tau}^{(3)} \\
C_{\mu e}^{(3)} & C_{\mu\mu}^{(3)} & C_{\mu \tau}^{(3)} \\
C_{\tau e}^{(3)} & C_{\tau\mu}^{(3)} & C_{\tau\tau}^{(3)}
\end{array}\right) \ =\
\left(\begin{array}{ccc}
188/179 & 32/179 & 32/179 \\ 49/358 & 500/537 & 142/537 \\ 49/358 & 142/537 & 500/537
\end{array}\right) \;.
\end{equation}
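Since the flavor coupling matrices follow from exact chemical-equilibrium conditions, their entries are rational numbers, and the sums above can be checked with exact arithmetic, e.g.~using only the Python standard library:
\begin{verbatim}
from fractions import Fraction as F

C_l2   = [[F(417, 589), F(-120, 589)], [F(-30, 589), F(390, 589)]]
C_phi2 = [[F(164, 589), F(224, 589)],  [F(164, 589), F(224, 589)]]
C2 = [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(C_l2, C_phi2)]
# C2 -> [[581/589, 104/589], [134/589, 614/589]]

C_l3   = [[F(151, 179), F(-20, 179), F(-20, 179)],
          [F(-25, 358), F(344, 537), F(-14, 537)],
          [F(-25, 358), F(-14, 537), F(344, 537)]]
C_phi3 = [[F(37, 179), F(52, 179), F(52, 179)] for _ in range(3)]
C3 = [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(C_l3, C_phi3)]
# C3 -> [[188/179, 32/179, 32/179],
#        [ 49/358, 500/537, 142/537],
#        [ 49/358, 142/537, 500/537]]
\end{verbatim}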
In an $N_1$-dominated scenario, the correction to the final asymmetry from accounting for flavor coupling is at most $40\%$~\cite{JosseMichaux:2007zj}. We will see, however, that the modification introduced by flavor coupling can be much larger in an $N_2$-dominated scenario, and it can even make completely new regions of parameter space accessible.
\subsection{Heavy-neutrino flavors}
The impact of charged-lepton flavor effects on the $N_1$-dominated scenario is quite important but, in many cases, it only provides a correction, as we discussed following~\eref{finalas}. For example, the lower bounds on $M_1$ and $T_{\rm reh}$ in the $N_1$-dominated scenario do not change. The reason is that they are saturated in the limit of no washout, when flavor effects are irrelevant. However, when heavy-neutrino flavor effects are also considered, their interplay opens up many new opportunities for leptogenesis scenarios, some of which can be realised within certain categories of models embedding the type I seesaw mechanism.
The first clear consequence of heavy-neutrino flavor effects is that the final asymmetry receives a contribution from the decays of the different RH neutrino species. If we consider for definiteness the case of three RH neutrino species, one can simply write \smash{$\eta_B = \sum_{i=1,2,3} \, \eta_B^{(i)}$}. The first thing to notice is that each contribution is non-vanishing only if the mass \smash{$M_i \lesssim z_B^{(i)}\, T_{\rm reh}$}, where \smash{$z_B^{(i)}$} is the particular value of \smash{$z_i = M_i/T$} about which the asymmetry is generated. From this point of view, a straightforward condition that can be imposed for the validity of the $N_1$-dominated scenario is to have \smash{$M_2 \gtrsim T_{\rm reh}$}. However, in general, the next-to-lightest RH-neutrino mass $M_2$ is below the reheat temperature and, in this case, the $N_2$'s are also produced in the thermal bath
and can potentially contribute to the final asymmetry.
As we said, if charged-lepton flavor effects are neglected, the $N_2$ contribution would be exponentially suppressed by the $N_1$ washout as $\exp(-3\pi \, K_1 /8)$ and since, given the measured values of $m_{\rm sol}$ and $m_{\rm atm}$, one typically has $K_1 \gg 1$, the possibility to have an $N_2$-dominated scenario is relegated to a special region of parameters in which $K_1 \lesssim 1$~\cite{DiBari:2005st}. There is, however, an important caveat to this result. If the $N_1$ washout occurs at temperatures $T \sim M_1 \lesssim T_{\rm sph}^{\rm out}$, where $T_{\rm sph}^{\rm out}$ is the temperature at which sphaleron processes go out of equilibrium, it has no effect, since it will wash out the lepton asymmetry but not the baryon asymmetry~\cite{DiBari:2015svd}. This is a possibility to be taken into account. However, even in the case $M_1 \gtrsim T_{\rm sph}^{\rm out}$, when charged-lepton flavor effects are considered, the washout from the lightest RH neutrino does not necessarily act along the flavor where the asymmetry is produced, and some part might survive and contribute to the observed asymmetry (or even explain it).
First, suppose that $M_1 \ll 5 \times 10^{8}\,{\rm GeV}$. In this case, the $N_1$ washout acts along the three (orthogonal) charged-lepton flavor directions. One has then to consider separately the asymmetry produced in the three
charged-lepton flavors, obtaining~\cite{Vives:2009zz}
\begin{equation}
Y_{B-L}^{\infty}\ =\ \sum_{\alpha} Y_{\Delta_\alpha}({T \gtrsim M_1}) \, e^{-\frac{3\pi}{8}\,K_{1\alpha}}\;,
\end{equation}
where $Y_{\Delta_\alpha}({T \gtrsim M_1})$ are the flavored asymmetries produced prior to the $N_1$ washout. One can see that the exponential suppression of the three terms is controlled by the flavored decay parameters, which can much more easily be $\lesssim 1$ than the total decay parameter $K_1 = \sum_{\alpha} K_{1\alpha}$. In this way, the asymmetry produced before the lightest RH-neutrino washout can more easily survive in a particular flavor.
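The numerical difference between flavored and unflavored exponential suppression can be dramatic, as the following short sketch with hypothetical flavored decay parameters illustrates:
\begin{verbatim}
import numpy as np

K1a = {'e': 0.5, 'mu': 5.0, 'tau': 30.0}   # hypothetical K_{1 alpha}
surv = {a: np.exp(-3.0 * np.pi / 8.0 * K) for a, K in K1a.items()}
# surv ~ {'e': 0.55, 'mu': 2.8e-3, 'tau': 4.5e-16}: the electron-flavor
# asymmetry largely survives, whereas the total K_1 = 35.5 would give an
# unflavored suppression exp(-3*pi*35.5/8) ~ 7e-19
\end{verbatim}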
If both the production of the asymmetry and the $N_1$ washout occur in the same flavor regime and above $5 \times 10^{8}\,{\rm GeV}$, i.e.~either in the unflavored or in the two-flavor regimes, then there is another effect to be considered that reduces the effectiveness of the $N_1$ washout: the {\em projection effect}~\cite{Barbieri:1999ma, Engelhard:2006yg}. This will only act along the flavor component that is parallel either to ${\ell}_1$,
in the unflavored regime, or to ${\ell}_{\tau^{\bot}_1}$, in the two-flavor regime. The asymmetry along the flavor direction orthogonal to ${\ell}_1$ or ${\ell}_{\tau^{\bot}_1}$ cannot be washed out. Both effects then have to be taken into account.
Within a density matrix formalism, accounting for heavy-neutrino flavor effects basically corresponds to having interactions acting on additional flavor directions. The density matrix equation,~\eref{fullyflavoured}, then generalizes to~\cite{Blanchet:2011xq}
\begin{align}
\label{denmaeqfinal}
\frac{{\rm d}[Y_{B-L}]_{\alpha\beta}}{{\rm d}z} \ & = \ -\:\epsilon^{(1)}_{\alpha\beta}\,D_1\,(Y_{N_1}-Y_{N_1}^{\rm eq})\:-\:\frac{1}{2}\,W_1\,\left\{{\cal P}^{0(1)}, Y_{B-L}\right\}_{\alpha\beta}\nonumber \\
& - \ \epsilon^{(2)}_{\alpha\beta}\,D_2\,(Y_{N_2}-Y_{N_2}^{\rm eq})\:-\:\frac{1}{2}\,W_2\,\left\{{\cal P}^{0(2)}, Y_{B-L}\right\}_{\alpha\beta}\nonumber \\
& - \ \epsilon^{(3)}_{\alpha\beta}\,D_3\,(Y_{N_3}-Y_{N_3}^{\rm eq})\:-\:\frac{1}{2}\,W_3\,\left\{{\cal P}^{0(3)}, Y_{B-L}\right\}_{\alpha\beta}\nonumber \\
& - \ \Gamma_{\tau} \, \left[\left(\begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{array}\right),\left[\left(\begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{array}\right),Y_{B-L} \right]\right]_{\alpha\beta}\nonumber \\
& - \
\Gamma_{\mu}\,\left[\left(\begin{array}{ccc}
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0
\end{array}\right),\left[\left(\begin{array}{ccc}
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0
\end{array}\right),Y_{B-L} \right]\right]_{\alpha\beta}
\;,
\end{align}
where we have extended the definitions of all quantities introduced for $N_1$ to the two heavier RH neutrinos $N_2$ and $N_3$. Clearly, in a general case, all terms on the right-hand side compete with each other in making lepton quantum states collapse along a particular direction in flavor space and its orthogonal one. However, assuming a hierarchical RH-neutrino spectrum, the different stages of asymmetry production and washout from each RH neutrino species occur sequentially, proceeding from the heaviest to the lightest one.
In this case, the equation now has different possible limits described by different sets of Boltzmann equations. Each limit is realised differently, depending on how the set of values $\{M_1,M_2,M_3\}$ is arranged in the three different flavor regimes (unflavored, two-flavor and three-flavor):
\begin{itemize}
\item[(a)] There are three different cases for both $N_1$ and $N_2$ in the three-flavor regime ($M_2, M_1 \ll 5 \times 10^8 \, {\rm GeV}$).
\item[(b)] One has three more cases for only $N_1$ in the three-flavor regime ($M_1 \ll 5 \times 10^8\,{\rm GeV}$). This is the {\em $N_2$-dominated scenario} to which we will give special consideration in the next subsection.
\item[(c)] There are three cases for the lightest RH neutrino in the two-flavor regime with $5 \times 10^{11}\,{\rm GeV} \gg M_1 \gg 5 \times 10^8\,{\rm GeV}$.
\item[(d)] Finally, there is the case when all three RH neutrinos are in the unflavored regime, with $M_i \gg 5 \times 10^{11}\,{\rm GeV}$.
\end{itemize}
\subsubsection[N2-dominated scenario and strong thermal leptogenesis]{$N_2$-dominated scenario and strong thermal leptogenesis}
Out of all these 10 possible mass patterns, the three in (b) are of special interest. The asymmetry produced from $N_1$ is insufficient to reproduce the observed value and, therefore, the observed asymmetry has to be produced by the next-to-lightest RH neutrinos. The two scenarios with $N_2$ in the two-flavor regime are the only ones that can realise strong thermal leptogenesis, where the final asymmetry is independent of the initial conditions. Within the unflavored assumption, as we discussed, the only condition one has to impose is simply $K_1 \gg 1$, and this is strongly supported by the neutrino mixing data, since $m_{\rm sol}, m_{\rm atm} \sim 10\,m_{\star}$. However, when flavor effects are considered, a possible large pre-existing asymmetry can now more easily avoid the washout from the RH neutrinos. An easy way to wash out a large pre-existing asymmetry in all three flavors is to have $N_1$ in the three-flavor regime and all three $K_{1\alpha} \gg 1$~\cite{Engelhard:2006yg}. However, in this way, one cannot attain successful leptogenesis, since the lightest RH-neutrino production is insufficient and the asymmetry from the two heavier RH neutrinos is washed out together with the pre-existing one.
The only possibility to achieve successful strong thermal leptogenesis is within a tau $N_2$-dominated scenario~\cite{Bertuzzo:2010et}. In this case, a pre-existing tau asymmetry is washed out by $N_2$ inverse decays already in the two-flavor regime (requiring $K_{2\tau} \gg 1$), when the tau flavor is already detected. At the end of the $N_2$-washout stage, the $N_2$ out-of-equilibrium decays produce a tau asymmetry, which is the one that must reproduce the observed asymmetry. Finally, at the $N_1$ washout, the pre-existing electron and muon asymmetries are also washed out (requiring $K_{1\mu}, K_{1e} \gg 1$), while the tau asymmetry produced by the $N_2$-decays survives (requiring $K_{1\tau} \lesssim 1$) and explains the observed baryon asymmetry.
As we will see, this seemingly special set of conditions for successful strong thermal leptogenesis can be realised within a well-motivated class of models. Moreover, it is interesting that it implies a lower bound on the lightest neutrino mass $m_1 \gtrsim 10\,{\rm meV}$~\cite{DiBari:2014eqa}, with the precise value depending logarithmically on the initial value of the pre-existing asymmetry.
Within the $N_2$-dominated scenario, with $5 \times 10^{11}\,{\rm GeV} \gg M_2 \gg 5 \times 10^8 \,{\rm GeV} \gg M_1$, if one neglects flavor coupling, the final asymmetry can be calculated using
\begin{equation}
Y_{B-L}^{\infty}\ =\ \sum_{\alpha\,=\,e,\mu,\tau}Y_{\Delta_{\alpha}}^{\infty} \; ,
\end{equation}
with
\begin{subequations}
\label{twofl}
\begin{align}
Y_{\Delta_e}^{\infty}\ & \simeq \ -\:Y_{N_1}^{\rm eq}(0)
\Bigg[\frac{K_{2e}}{K_{2\tau_2^{\bot}}}\,\epsilon_{2 \tau_2^{\bot}}\kappa(K_{2 \tau_2^{\bot}})\nonumber\\&\qquad \qquad
+\: \Bigg(\epsilon_{2e} - \frac{K_{2e}}{K_{2\tau_2^{\bot}}}\, \epsilon_{2 \tau_2^{\bot}} \Bigg)\,\kappa(K_{2 \tau_2^{\bot}}/2)\Bigg]\,
e^{-\frac{3\pi}{8}\,K_{1 e}} \; , \\ \nonumber
Y_{\Delta_{\mu}}^{\infty}\ & \simeq \ -\:Y_{N_1}^{\rm eq}(0)\Bigg[\frac{K_{2\mu}}{K_{2 \tau_2^{\bot}}}\,
\epsilon_{2 \tau_2^{\bot}}\,\kappa(K_{2 \tau_2^{\bot}})\nonumber\\&\qquad \qquad +\:
\Bigg(\epsilon_{2\mu} - \frac{K_{2\mu}}{K_{2\tau_2^{\bot}}}\, \epsilon_{2 \tau_2^{\bot}} \Bigg)\,
\kappa(K_{2 \tau_2^{\bot}}/2) \Bigg]
e^{-\frac{3\pi}{8}\,K_{1 \mu}}\; , \\
Y_{\Delta_{\tau}}^{\infty}\ & \simeq \ -\:Y_{N_1}^{\rm eq}(0)\epsilon_{2 \tau}\,\kappa(K_{2 \tau})\,e^{-\frac{3\pi}{8}\,K_{1 \tau}} \; .
\end{align}
\end{subequations}
This expression takes into account phantom terms but neglects flavor coupling. Including flavor coupling, two additional terms should also be taken into account, and these can become dominant in certain cases~\cite{Antusch:2010ms}. These terms contribute to an $\alpha$ flavor asymmetry despite being proportional to the $\beta \neq \alpha$ flavored CP asymmetry. Although these terms are proportional to small off-diagonal numerical coefficients in the flavor coupling matrix, they can in some models open up new regions of parameter space. Therefore, whilst flavor coupling is a correction within the $N_1$-dominated scenario, it can become crucial within the $N_2$-dominated scenario.
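For concreteness, the following is a minimal implementation of~\eref{twofl} (phantom terms included, flavor coupling neglected), reusing the analytic fit for the efficiency factor $\kappa$ quoted earlier; all inputs are hypothetical, chosen only to illustrate a tau-dominated pattern with weak $K_{1\tau}$ washout:
\begin{verbatim}
import numpy as np

def z_B(K):
    return 2.0 + 4.0 * K**0.13 * np.exp(-2.5 / K)

def kappa(K):
    return 2.0 / (K * z_B(K)) * (1.0 - np.exp(-0.5 * K * z_B(K)))

def YBmL_N2(eps2, K2, K1, Yeq0=1.0):
    # eps2, K2, K1: dicts over the flavors e, mu, tau;
    # tau2-perp quantities are built as e + mu sums, as in the text
    K2p = K2['e'] + K2['mu']
    e2p = eps2['e'] + eps2['mu']
    Y = {}
    for a in ('e', 'mu'):
        Y[a] = -Yeq0 * (K2[a] / K2p * e2p * kappa(K2p)
                        + (eps2[a] - K2[a] / K2p * e2p) * kappa(K2p / 2.0)) \
               * np.exp(-3.0 * np.pi / 8.0 * K1[a])
    Y['tau'] = -Yeq0 * eps2['tau'] * kappa(K2['tau']) \
               * np.exp(-3.0 * np.pi / 8.0 * K1['tau'])
    return sum(Y.values())

# hypothetical inputs realising a tau N2-dominated pattern
eps2 = {'e': 1.0e-7, 'mu': 1.0e-7, 'tau': 2.0e-6}
K2   = {'e': 2.0, 'mu': 3.0, 'tau': 15.0}
K1   = {'e': 8.0, 'mu': 10.0, 'tau': 0.5}
print(YBmL_N2(eps2, K2, K1))
\end{verbatim}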
\subsection{Low-energy neutrino parameters}
Imposing successful leptogenesis is equivalent to constraining the seesaw parameter space and, very interestingly, it involves those heavy-neutrino parameters that we cannot test in low-energy neutrino experiments. If the masses $M_i$ are well above the TeV scale then they also evade all collider constraints. Therefore, leptogenesis provides a unique way to place constraints on these parameters and ideally one would like to over-constrain the seesaw parameter space by combining leptogenesis with low-energy neutrino experimental data. In this way, leptogenesis can be regarded as a very high energy ``experiment'' able to give us information on the physics at very high energies
embedding the seesaw mechanism.
This ambitious strategy encounters, however, a clear difficulty, since the number of seesaw parameters to be tested is much larger than the number of experimental constraints. The seesaw parameter space contains 18 additional parameters: the 3 RH-neutrino masses and 15 parameters in the Dirac mass matrix. A convenient way to parameterize the Dirac mass matrix in the seesaw limit is the orthogonal parameterization~\cite{Casas:2001sr}
\begin{equation}
m_D \ =\ U_{\nu}\,D_m^{1/2}\,\Omega\,D_M^{1/2} \; ,
\end{equation}
following from the seesaw formula,~\eref{seesaw}. In this way, the 15 parameters in the Dirac mass matrix are re-expressed through the 9 low-energy neutrino parameters (3 light neutrino masses and 6 parameters in $U_{\nu}$), the 3 $M_i$ and 6 parameters in the orthogonal matrix $\Omega$.\footnote{The fact that on the right-hand side one has 18 parameters and on the left-hand side 15 parameters of course means that 3 parameters on the right-hand side, e.g.~the three RH-neutrino masses $M_i$, have to be regarded as independent of the 15 parameters in $m_D$.} This parametrization is model independent, meaning that it works for any model embedding the type I seesaw mechanism, and it allows one to take into account automatically the low-energy neutrino experimental information.
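As a quick consistency check of this parameterization, one can verify numerically that any complex orthogonal $\Omega$ reproduces the input light-neutrino mass matrix through the seesaw formula. A minimal sketch (units schematic, sign conventions aside; the PMNS matrix is set to the identity and all values are hypothetical):
\begin{verbatim}
import numpy as np

Dm = np.diag([1.0e-3, 8.7e-3, 5.0e-2])      # light masses (hypothetical m_1)
DM = np.diag([1e10, 1e11, 1e12])            # RH-neutrino masses (hypothetical)
th = 0.3 + 0.2j                             # complex 1-2 rotation angle
Om = np.array([[np.cos(th), np.sin(th), 0],
               [-np.sin(th), np.cos(th), 0],
               [0, 0, 1]])                  # complex orthogonal: Om @ Om.T = 1
U  = np.eye(3)                              # PMNS, identity for brevity

mD  = U @ np.sqrt(Dm) @ Om @ np.sqrt(DM)    # orthogonal parameterization
mnu = mD @ np.linalg.inv(DM) @ mD.T         # seesaw combination
assert np.allclose(mnu, U @ Dm @ U.T)       # light mass matrix is recovered
\end{verbatim}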
The orthogonal matrix $\Omega$ encodes information on the 3 lifetimes and the 3 total CP asymmetries of the RH neutrinos. Low-energy neutrino experiments alone cannot test the seesaw mechanism. The baryon-to-photon number ratio calculated from leptogenesis, $\eta_B^{\rm lep}$, depends, in general, on all 18 seesaw parameters. Model independently, leptogenesis is then clearly insufficient to over-constrain the seesaw parameter space and, in general, it does not produce testable model-independent predictions. However, a few things might help in reducing the number of independent parameters:
\begin{itemize}
\item Successful leptogenesis might be satisfied only about {\em peaks}, i.e.~only for very special regions in parameter space that can correspond to testable constraints on some low-energy neutrino parameters.
\item Some of the parameters might cancel out in the calculation of $\eta_B^{\rm lep}$.
\item One might impose some cosmologically-motivated condition to be respected, such as the {\em strong thermal leptogenesis} (independence of the initial conditions) or, even stronger, that one of the heavy RH neutrino
species is the dark matter candidate.
\item We might add phenomenological constraints from particle physics, such as collider signatures, charged LFV, EDM's, etc.
\item The seesaw might be embedded within a model that implies conditions on $m_D$ and $M_i$.
\end{itemize}
\subsubsection{Upper bound on neutrino masses in the unflavored regime}
In~\eref{efunf} for the efficiency factor in the unflavored regime, the exponential factor is an effect of $\Delta L =2$ washout processes. If this is combined with the upper bound in~\eref{upperbound} on the total CP asymmetry from the successful leptogenesis condition, one finds an upper bound $m_1 \lesssim 0.1 \,{\rm eV}$~\cite{Buchmuller:2002jk, Buchmuller:2004nz} in addition to the lower bound on $M_1$. Interestingly, this is now confirmed by the current cosmological upper bound placed by the \emph{Planck Collaboration}~\cite{Aghanim:2016yuo}. This upper bound is also very interesting, since it provides an example of how, despite our starting from 18 parameters, the successful leptogenesis condition, which constrains only one combination of them, can indeed produce testable constraints. The reason is that the final asymmetry in the unflavored approximation depends neither on the 6 parameters in $U$, since these cancel out in $\epsilon_1$, nor on the 6 parameters associated with the two heavier RH neutrinos. There are only 6 parameters left ($m_1$, $m_{\rm atm}$, $m_{\rm sol}$, $M_1$ and the complex $\Omega^2_{11}$), out of which two are measured, thereby leaving only 4 free parameters. The peak value of the asymmetry, however, is strongly suppressed as $m_1$ grows, due mainly to the exponential suppression from $\Delta L=2$ washout processes in~\eref{efunf}. The latter is the origin of the upper bound on $m_1$.
Notice that the upper bound is saturated at values $M_1 \sim 10^{13} \,{\rm GeV}$ and, therefore, it still holds when flavor effects are included in the unflavored regime. In the two-flavor regime, due to the fact that the flavored CP asymmetries respect a more relaxed upper bound than the total, and since the washout can be reduced, the upper bound on $m_1$ is relaxed. However, within the validity of the two-flavor regime, it is still $m_1 \lesssim {\cal O}(0.1\,{\rm eV})$. A calculation based on a density matrix formalism should merge the upper bounds calculated within the flavored regimes where Boltzmann equations hold. One expects some relaxation but not much above $0.1\,{\rm eV}$~\cite{Blanchet:2008pw}. In the $N_2$-dominated scenario, the upper bound on $m_1$ is much looser, and one can have solutions for $m_1$ as large as $1\,{\rm eV}$.
\subsection[SO(10)-inspired leptogenesis]{$SO(10)$-inspired leptogenesis}
In the unflavored case, imposing so-called $SO(10)$-inspired conditions, which essentially corresponds to assuming that the neutrino Dirac mass matrix does not differ too much from the up-quark mass matrix, prevents successful leptogenesis, since $M_1 \ll 10^9 \,{\rm GeV}$ and, at the same time, an $N_2$ contribution is efficiently washed out. However, when flavor effects are considered, the $N_2$ asymmetry can escape the $N_1$ washout for a set of solutions that yield successful leptogenesis. Typically, the final asymmetry is in the tau flavor~\cite{DiBari:2008mp}. Interestingly, this set of solutions requires certain constraints on the low-energy neutrino parameters~\cite{DiBari:2010ux}. For example, the lightest neutrino mass cannot be below $\simeq 1\,{\rm meV}$, i.e.~one expects some deviation from the hierarchical limit, although we do not know any experimental way to test this lower bound fully at present. It should be added that $SO(10)$-inspired leptogenesis also strongly favors normally-ordered neutrino masses and that, for $m_1 \simeq m_{\rm sol} \simeq 10\,{\rm meV}$, it is allowed only for $\theta_{23}$ in the first octant. Recently, it has been noticed~\cite{DiBari:2017uka} that, for the currently favored values of $\delta \sim -\,\pi/2$, the effective Majorana mass $m_{ee}$ of $0\nu\beta\beta$ decay cannot be too much lower than $\sim 10\,{\rm meV}$. Scatter plots of the solutions in the plane of $m_{ee}$ versus $\delta$ are shown in the left panel of \fref{fig:so10}. Yellow points indicate the dominant tau solutions (the orange points are obtained in the approximation $V_L =I$), and green points indicate some marginal muon solutions, which are now almost entirely excluded by the cosmological upper bound on $m_1$. If such values of $\delta$ are confirmed then $0\nu\beta\beta$ experiments will be able to test $SO(10)$-inspired leptogenesis fully in the coming years.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=60mm]{mee_vs_d.pdf} \hspace*{1mm}
\includegraphics[width=60mm]{d_vs_t23.pdf}
\end{center}
\caption{\label{fig:so10} Scatter plots of the solutions projected on the shown planes: $m_{ee}$ versus $\delta$ (left) and $\delta$ versus $\theta_{23}$ (right). The yellow points are obtained imposing just successful $SO(10)$-inspired leptogenesis, while the blue points are a subset imposing in addition the strong thermal leptogenesis condition (orange and light blue points are for $V_L=I$). Figure taken from Ref.~\cite{DiBari:2017uka}.}
\end{figure}
It is possible to find very accurate expressions for all important quantities necessary to calculate the asymmetry in $SO(10)$-inspired leptogenesis. We refer the reader to Refs.~\cite{DiBari:2014eya, DiBari:2017uka} for a detailed discussion. Here, we just give some basic hints and results. The first step is that the Dirac mass matrix can be diagonalized by means of two unitary transformations $V_L$ and $U_R$, acting respectively on the left-handed and right-handed neutrino fields:
\begin{equation}
m_D \ =\ V_L^{\dagger} \, D_{m_D} \, U_R \; ,
\end{equation}
where we have defined $D_{m_D} \equiv {\rm diag}(m_{D1},m_{D2},m_{D3})$. If one plugs this expression into the seesaw formula,~\eref{seesaw}, one finds $M^{-1} = U_R \, D_M^{-1} \, U_R^{\mathsf{T}}$, where $M^{-1} \equiv D_{m_D}^{-1}\, V_L \, U_{\nu} \, D_m \, U_{\nu}^{\mathsf{T}} \, V_L^{\mathsf{T}} \, D_{m_D}^{-1}$ is the inverse of the Majorana mass matrix in the Yukawa basis (where $m_D$ is diagonal). Assuming $m_{D3} \gg m_{D2} \gg m_{D1}$, one can find accurate analytic expressions both for the RH-neutrino mixing matrix $U_R$ and for the RH-neutrino masses $M_i$. For example, for the RH-neutrino masses, one finds
\begin{equation}
M_1 \ \simeq \ \alpha_1^2 \, \frac{m^2_{\rm up}}{|(\widetilde{m}_\nu)_{11}|} \;, \;\;
M_2 \ \simeq \ \alpha_2^2 \, \frac{m^2_{\rm charm}}{m_1 \, m_2 \, m_3 } \, \frac{|(\widetilde{m}_{\nu})_{11}|}{|(\widetilde{m}_{\nu}^{-1})_{33}| } \;, \;\;
M_3 \ \simeq \ \alpha_3^2\, {m^2_{\rm top}}\,|(\widetilde{m}_{\nu}^{-1})_{33}| \;,
\end{equation}
where we have defined $(\alpha_1,\alpha_2,\alpha_3) \equiv (m_{D1}/m_{\rm up}, m_{D2}/m_{\rm charm}, m_{D3}/m_{\rm top})$ and $\widetilde{m}_{\nu} \equiv V_L\,m_{\nu}\,V_L^{\mathsf{T}}$. In this way, one arrives at a full analytic expression $\eta_B(m_{\nu};\alpha_i,V_L)$, allowing an analytic understanding of all constraints on low-energy neutrino parameters.
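These expressions are straightforward to evaluate numerically. A minimal sketch, with indicative running quark masses (whose precise values depend on the renormalization scale), $V_L = I$ and a hypothetical light-neutrino mass matrix:
\begin{verbatim}
import numpy as np

def so10_masses(mnu, VL, alpha=(1.0, 1.0, 1.0),
                m_up=1.0e-3, m_charm=0.6, m_top=170.0):  # GeV, indicative
    mt  = VL @ mnu @ VL.T                  # tilde m_nu in the Yukawa basis
    mti = np.linalg.inv(mt)
    m123 = abs(np.linalg.det(mnu))         # = m_1 m_2 m_3
    M1 = alpha[0]**2 * m_up**2 / abs(mt[0, 0])
    M2 = alpha[1]**2 * m_charm**2 / m123 * abs(mt[0, 0]) / abs(mti[2, 2])
    M3 = alpha[2]**2 * m_top**2 * abs(mti[2, 2])
    return M1, M2, M3

# hypothetical, roughly hierarchical light-neutrino mass matrix (in GeV)
mnu = np.diag([2e-12, 9e-12, 5e-11]) + 1e-12 * np.ones((3, 3))
print(so10_masses(mnu, np.eye(3)))   # strongly hierarchical M_1 << M_2 << M_3
\end{verbatim}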
\subsubsection[Strong thermal SO(10)-inspired solution]{Strong thermal $SO(10)$-inspired solution}
As we discussed, when flavor effects are taken into account, there is only one scenario of (successful) leptogenesis allowing for independence of the initial conditions: the tau $N_2$-dominated scenario, where the asymmetry is produced by the $N_2$ decays in the tau flavor~\cite{Bertuzzo:2010et}. As we have seen, the conditions are quite special, since it is required that a large pre-existing asymmetry be washed out by the lightest RH neutrino in the electron and muon flavors. The next-to-lightest RH neutrinos both wash out a large pre-existing tau asymmetry and also produce the observed asymmetry in the same tau flavor, escaping the lightest RH-neutrino washout.
It is then highly non-trivial that this quite special set of conditions can be realised by a subset of the $SO(10)$-inspired solutions satisfying successful leptogenesis~\cite{DiBari:2013qja}. For this subset, the constraints are quite stringent and they pin down a well-defined solution: the strong thermal $SO(10)$-inspired solution. This is characterized by a non-vanishing reactor mixing angle, normally-ordered neutrino masses, an atmospheric mixing angle in the first octant and $\delta$ in the fourth quadrant ($\sin\delta <0$ and $\cos\delta >0$). In addition, the lightest neutrino mass has to be within a fairly narrow range of values about $m_1 \simeq 20\,{\rm meV}$, corresponding
to a sum of neutrino masses --- the quantity tested by cosmological observations --- of $\sum_i m_i \simeq 95\,{\rm meV}$, implying a deviation from the normal-hierarchy prediction $\sum_i m_i \simeq 60\,{\rm meV}$ that is detectable during the coming years. At the same time, the solution also predicts a $0\nu\beta\beta$ signal with $m_{ee} \simeq 0.8\, m_1 \simeq 16\,{\rm meV}$. In light of the latest experimental results discussed earlier, this solution is quite intriguing. It relies on the same moderately strong washout as vanilla leptogenesis, thanks to the fact that both the solar and atmospheric scales are $\sim 10\,{\rm meV}$ --- the leptogenesis conspiracy~\cite{Blanchet:2008pw} --- and, in addition, it has correctly predicted a non-vanishing reactor mixing angle and is currently in very good agreement with the best-fit parameters from neutrino-mixing experiments. (To our knowledge, it is the only model that has truly predicted $\sin \delta < 0$.) Notice that a large pre-existing asymmetry prior to the onset of leptogenesis is quite plausible at the large reheat temperatures required, so that the assumption of strong thermal leptogenesis should be regarded as a reasonable setup. (In particular, one could have a traditional GUT baryogenesis followed by leptogenesis.)
It is also possible to consider a supersymmetric framework for $SO(10)$-inspired leptogenesis~\cite{DiBari:2015svd}. In this case, the most important modification to be taken into account is that the critical values for $M_1$, which set the transition from one flavor regime to another, are enhanced by a factor $1+\tan^2\beta$ and, for sufficiently large values of $\tan\beta$, the production might occur in a three-flavor regime rather than in a two-flavor regime. This typically goes in the direction of enhancing the final asymmetry, since the washout at the production is reduced.
\subsubsection{Realistic models}
A first example of realistic models satisfying $SO(10)$-inspired conditions and able to fit all lepton and quark mass and mixing parameters is provided, as one might expect, by $SO(10)$ models. A specific example is given by renormalizable $SO(10)$ models for which the Higgs fields belong to 10-, 120- and 126-dimensional representations, yielding specific mass relations among the various fermion mass matrices. Recently, reasonable fits have been obtained that typically point to a compact RH-neutrino spectrum, with all RH-neutrino masses falling in the two-flavor regime. This compact-spectrum solution implies, however, huge fine-tuned cancellations in the seesaw formula. Even so, fits realising the $N_2$-dominated scenario have been obtained~\cite{Dueck:2013gca, Babu:2016bmy}, and, in this case, there is no fine-tuning in the seesaw formula. Note that $SO(10)$-inspired conditions can also be realised beyond $SO(10)$ models.
For example, a Pati-Salam model combined with $A_4$ and $Z_5$ discrete symmetries has recently been proposed, satisfying $SO(10)$-inspired conditions and also successful $SO(10)$-inspired leptogenesis~\cite{DiBari:2015oca}. On the other hand, a realistic model realising strong thermal $SO(10)$-inspired leptogenesis has not yet been found.
\section{\label{sec:RL}Flavor and low-scale resonant leptogenesis}
\label{sec3}
\noindent When the mass splitting of two of the heavy neutrinos is small compared to their widths, self-energy effects on the CP asymmetry can dominate and the CP violation can be resonantly enhanced~\cite{Liu:1993tg,Flanz:1994yx,Flanz:1996fb,Covi:1996fm,Covi:1996wh,Pilaftsis:1997jf,Pilaftsis:1997dr,Buchmuller:1997yu} (see also~\cite{Kuzmin:1985mm}). This allows for the scale of successful leptogenesis to be lowered to energies in the $\mathrm{TeV}$ range~\cite{Pilaftsis:2005rv}, making \emph{resonant leptogenesis} (RL)~\cite{Pilaftsis:2003gt} directly testable at current and near-future experiments. A comprehensive discussion of RL is provided in the accompanying Chapter~\cite{leptogenesis:A03}, and we focus here only on the importance of flavor effects in these low-scale models.
The rate equations in the preceding section are covariant under flavor transformations of the SM lepton doublets. However, they are specifically written in the RH-neutrino mass eigenbasis. Therefore, it is natural to ask: is it possible to write rate equations that are \emph{fully} flavor-covariant, also maintaining flavor-covariance at each stage of the calculation?
This question, in addition to being of conceptual interest, has practical consequences for RL. As we will see below, amongst other things, flavor covariance requires us to take into account quantum coherences between different flavors; in the resonant regime, the RH neutrinos are quasi-degenerate and thus one can expect that their quantum coherences may play a significant role. Resonant leptogenesis allows the successful construction of low-scale models of leptogenesis and, at such low scales, one would naively expect to be in the fully-flavored regime discussed in \sref{sec_regimes} for the charged leptons, where their flavor decoherence has already taken place. However, when studying low-scale models of leptogenesis, one is particularly interested in their \emph{testability}, i.e.~in their observable effects at current and near-future experiments. As will be clear from the example discussed below, in low-scale models with observable signatures, at least some of the Yukawa couplings are sufficiently large that their effect will partially recreate coherences in the charged-lepton sector~\cite{Dev:2014laa, Dev:2015wpa}. Hence, a \emph{fully flavor-covariant} treatment~\cite{Dev:2014laa, Dev:2014wsa, Dev:2014tpa, Dev:2015dka, Dev:2015wpa}, which will describe coherences in both the charged-lepton and RH-neutrino sectors, is of particular and quantitative importance in \emph{low-scale testable models} of resonant leptogenesis.
\subsection{Flavor covariance}
\label{sec:3flavourcovariance}
The lepton-doublet and RH-neutrino field operators $\ell_{\alpha}$ and $N_{Rk}$ transform under flavor rotations $U(\mathcal{N}_{\ell})\otimes U(\mathcal{N}_{N})$ as follows:\footnote{So as to avoid confusion, we do not suppress the $\dagger$ on Hermitian-conjugate fields as in Ref.~\cite{Dev:2014laa}.}
\begin{subequations}
\begin{gather}
\ell_{\alpha} \ \to \ \ell'_{\alpha} \ = \ V_{\alpha}^{\phantom{\alpha}\beta}\ell_{\beta}\;,\qquad
\ell^{\dag\alpha} \ \to \ \ell^{\dag\prime\alpha} \ = \ V^{\alpha}_{\phantom{\alpha}\beta}\ell^{\dag \beta}\;,
\\
N_{Rk} \ \to \ N'_{Rk} \ = \ U_{k}^{\phantom{k}l}N_{Rl},\qquad
N^{\dag k}_R \ \to \
N^{\dag\prime k}_R \ = \ U^{k}_{\phantom{k}l}N_R^{\dag l}\; ,
\end{gather}
\end{subequations}
where $V_{\alpha}^{\phantom{\alpha}\beta} \in U(\mathcal{N}_{\ell})$ and $U_{k}^{\phantom{k}l} \in U(\mathcal{N}_{N})$. Here and in the following, we adopt a flavor-covariant notation in which lower (upper) indices denote covariant (contravariant) transformation properties. In this notation, the relevant part of the Lagrangian in~\eref{eq:1_lagrangian_general} can be written as
\begin{equation}
-\mathcal{L}_N \ = \ \lambda_{\alpha}^{\phantom{\alpha}k} \overline{\ell}^{\alpha}
\phi^c N_{Rk}
+ \frac{1}{2} \overline{N_{Rk}^{c}} [M_N]^{k l}
N_{Rl} + {\rm h.c.}\;,
\label{3_Lagrangian}
\end{equation}
which is invariant under flavor transformations if the Yukawa couplings and Majorana mass matrix transform as spurions:
\begin{subequations}
\begin{gather}
\lambda_{\alpha}^{\phantom{\alpha}k} \ \rightarrow \ \lambda_{\alpha}^{\prime\!\phantom{\alpha}k} \ = \ V_{\alpha}^{\phantom{\alpha}\beta}
\;
U^k_{\phantom{k} l} \; \lambda_{\beta}^{\phantom{\beta}l} \;, \\
[M_N]^{kl} \ \rightarrow \
[M'_N]^{kl} \ = \ U^k_{\phantom{k} m} \;
U^l_{\phantom{l} n} \; [M_N]^{mn} \;.
\end{gather}
\end{subequations}
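These transformation rules can be checked numerically: rotating $\lambda$ and $M_N$ as spurions leaves physical quantities, such as the singular values of the seesaw combination $\lambda\,M_N^{-1}\lambda^{\mathsf{T}}$, unchanged. A schematic sketch with random inputs (normalizations are immaterial for the check):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(7)

def random_unitary(n):
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

lam = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # Yukawas
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
MN = A + A.T                                # complex symmetric mass matrix

V, U = random_unitary(3), random_unitary(3)
lam_p = V @ lam @ U.conj().T                # spurion rule for lambda
MN_p  = U.conj() @ MN @ U.conj().T          # spurion rule for M_N

mnu   = lam @ np.linalg.inv(MN) @ lam.T     # seesaw combination (schematic)
mnu_p = lam_p @ np.linalg.inv(MN_p) @ lam_p.T
assert np.allclose(np.linalg.svd(mnu, compute_uv=False),
                   np.linalg.svd(mnu_p, compute_uv=False))
\end{verbatim}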
In order to maintain flavor covariance at all stages, the plane-wave decompositions of the field operators are written in a manifestly flavor-covariant way~\cite{Dev:2014laa}, e.g.
\begin{align}
\ell_{\alpha}(x) \ & = \ \sum_{s\,=\,+,-} \int_{\mathbf{p}} \Big[\big(2E_{\ell}(\mathbf{p})\big)^{-1/2}\Big]_{\alpha}^{\phantom{\alpha}\beta}
\notag \\ &\qquad \times \Big( \big[e^{-ip\cdot x}\big]_{\beta}^{\phantom{\beta}\gamma}\, [u(\mathbf{p},s)]_{\gamma}^{\phantom{\gamma}\delta}\,
b_{\delta}(\mathbf{p},s) \: +\: \big[e^{ip\cdot x}\big]_{\beta}^{\phantom{\beta}\gamma}\, [v(\mathbf{p},s)]_{\gamma}^{\phantom{\gamma}\delta}\,
d_{\delta}^{\dagger}(\mathbf{p},s)
\Big)\;,
\label{3_leptonfieldoperator}
\end{align}
where $[E_{\ell}^2(\mathbf{p})]_{\alpha}^{\phantom{\alpha}\beta}=\mathbf{p}^2\delta_{\alpha}^{\phantom{\alpha}\beta}+[M_{\ell}^{\dag}M_{\ell}]_{\alpha}^{\phantom{\alpha}\beta}$, with $M_{\ell}$ being the charged-lepton mass matrix, here generically taken as non-vanishing. We see that flavor covariance requires the Dirac four-spinors $ [u(\mathbf{p},s)]_{\gamma}^{\phantom{\gamma}\delta}$ and $[v(\mathbf{p},s)]_{\gamma}^{\phantom{\gamma}\delta}$ to transform as rank-$2$ tensors in flavor space, since they are solutions of the Dirac equation, which is matrix-valued in flavor space.
Equation~\eqref{3_leptonfieldoperator} shows that the creation and annihilation operators for particles ($b^{\dag \alpha}$, $b_{\alpha}$), and anti-particles ($d^{\dag}_{\alpha}$, $d^{\alpha}$) need to transform in conjugate representations, in order to have flavor covariance. Therefore, relations such as the ordinary charge conjugation $C$ and the Majorana condition for the RH neutrinos, which relate particle and anti-particle operators, cannot be valid in an arbitrary flavor basis. Instead, one is forced to consider generalized $\rm C$ transformations, denoted $\widetilde{\rm C}$, which involve a unitary matrix $\mathcal{G}^{\alpha\beta}\equiv [V^*V^\dag]^{\alpha\beta}$, describing the rotations to and from the basis in which the ``standard'' $\rm C$-transformations are defined:\footnote{We emphasise that the $C$-transformations are defined only up to an arbitrary complex phase.}
\begin{equation}
b_{\alpha}(\mathbf{p},s)^{\tilde{c}} \ \equiv \ \mathcal{G}^{\alpha\beta} \,
b_{\beta}(\mathbf{p},s)^{c} \ = \ \mathcal{G}^{\alpha\beta} \,\mathcal{G}_{\beta\gamma}\,
d^{\gamma}(\mathbf{p},s)
= \ d^{\alpha}(\mathbf{p},s) \;.
\end{equation}
Analogously, the Majorana condition for the RH neutrinos involves a matrix $G^{k l}$, which can be taken equal to the identity in the mass eigenbasis. Notice also the order of flavor indices, dictated by flavor covariance, in the definition of the number densities:
\begin{equation}
[n_{\ell}]_{\alpha}^{\phantom{\alpha}\beta} \ \sim \ \langle b^{\dag \beta} \,
b_{\alpha} \rangle \;, \qquad [\bar{n}_{\ell}]_{\alpha}^{\phantom{\alpha}\beta} \ \sim \ \langle d^\dag_{\alpha} \,
d^{\beta}\rangle \;,
\end{equation}
which implies that $n_{\ell}$ and $\bar{n}_{\ell}$ are $\widetilde{C}$-conjugate quantities: $n_{\ell}^{\tilde{c}} = \bar{n}_{\ell}^{\mathsf{T}}$, where $\mathsf{T}$ denotes the matrix transpose. Analogously, the RH-neutrino number densities are defined as
\begin{equation}
[n_N]_{k}^{\phantom{k}l} \ \sim \ \langle a^{\dag l} \,
a_k \rangle \;, \qquad [\bar{n}_N]_{k}^{\phantom{k}l} \ \sim \ G_{km} [n_N]_{n}^{\phantom{n}m} G^{nl}\;,
\end{equation}
and $n_N^{\tilde{c}} = \bar{n}_N^{\mathsf{T}}$. Thus, we can define number densities with definite $\widetilde{\rm C}{\rm P}$-transformation properties:
\begin{equation}
\underline{n}_N\ = \ \frac{1}{2}\Big(n_N\:+\:\bar{n}_N\Big)\;,\qquad n_{\Delta N}\ =\ n_N\:-\:\bar{n}_N\;, \qquad n_{\Delta \ell}\ = \ n_{\ell}\:-\:\bar{n}_{\ell}\;.
\end{equation}
Notice that the $\widetilde{\rm C}{\rm P}$-odd $n_{\Delta N}$ is purely imaginary and off-diagonal in the RH-neutrino mass eigenbasis, i.e.~it encodes the CP-violating coherences present in the RH-neutrino sector. Instead, the $\widetilde{\rm C}{\rm P}$-even $\underline{n}_N$ describes the RH neutrino populations and $\widetilde{\rm C}{\rm P}$-even coherences, and $n_{\Delta \ell}$ is nothing other than the matrix of asymmetries in the LH charged leptons.
\subsection{Rate equations}
\label{subsec:rateequations}
The requirement of flavor covariance and the definite $\widetilde{\rm C}{\rm P}$-properties of the number densities introduced in~\sref{sec:3flavourcovariance} fix the form of the flavor-covariant generalization of the rate equations (cf.~Chapter~\cite{leptogenesis:A04}). For the moment, let us extract the $\widetilde{\rm C}{\rm P}$-even and -odd parts of the various rates as
\begin{equation}
\gamma^X_Y \ \equiv \ \gamma(X \to Y) + \gamma(\bar{X} \to \bar{Y})\;, \qquad \delta\gamma^X_Y \ \equiv \ \gamma(X \to Y) - \gamma(\bar{X} \to \bar{Y})\;.
\end{equation}
We will discuss the physical issues related to $\widetilde{\rm C}{\rm P}$ violation in the rates later on. The Majorana nature of the RH neutrinos causes generalized real and imaginary parts of the rates to appear in their rate equations; these need to be defined in a covariant manner~\cite{Dev:2014laa}, and we denote them here by a tilde.
With these considerations, the general form of the rate equations describing RH-neutrino oscillations, decays, inverse decays, $\Delta L=2$ scatterings and charged-lepton decoherence processes is~\cite{Dev:2014laa}:\\[0.25em]
\begin{subequations}
\begin{align}
\frac{H_{N} \, s}{z}\,
\frac{\mathrm{d}[\underline{Y}_N]_{k}^{\phantom{k}l}}{\mathrm{d}z} \ &= \ - \, i \, \frac{s}{2} \,
\Big[\mathcal{E}_N,\, Y_{\Delta N}\Big]_k^{\phantom{k}l}
+ \, \Tdu{\big[\widetilde{\rm Re}
(\gamma^{N}_{\ell \phi})\big]}{}{}{k}{l} \nonumber\\&\quad\;\;
- \, \frac{1}{2 \, Y_N^{\rm eq}} \,
\Big\{\underline{Y}_N, \, \widetilde{\rm Re}(\gamma^{N}_{\ell \phi})
\Big\}_{k}^{\phantom{k}l} \;,
\label{3_etanrateeq}\\[6pt]
\frac{H_{N} \, s}{z}\,
\frac{\mathrm{d}[Y_{\Delta N}]_{k}^{\phantom{k}l}}{\mathrm{d}z} \
&= \ - \, 2 \, i \, s \,
\Big[\mathcal{E}_N,\, \underline{Y}_N\Big]_k^{\phantom{k}l} \, + \, 2\, i\, \Tdu{\big[\widetilde{\rm Im}
(\delta \gamma^{N}_{\ell \phi})\big]}{}{}{k}{l} \notag\\
&\quad\;\; - \,
\frac{i}{Y_N^{\rm eq}} \, \Big\{\underline{Y}_N, \,
\widetilde{\rm Im}
(\delta\gamma^{N}_{\ell \phi}) \Big\}_{k}^{\phantom{k}l} - \, \frac{1}{2 \, Y_N^{\rm eq}} \,
\Big\{Y_{\Delta N}, \, \widetilde{\rm Re}(\gamma^{N}_{\ell \phi})
\Big\}_{k}^{\phantom{k}l}
\label{3_deltaetanrateeq}\;, \\[6pt]
\frac{H_{N} \, s}{z}\,
\frac{\mathrm{d}[Y_{\Delta \ell}]_{\alpha}^{\phantom{\alpha}\beta}}
{\mathrm{d}z} \
&= \ - \, \Tdu{[\delta \gamma^{N}_{\ell\phi}]}{\alpha}{\beta}{}{} \,
+\, \frac{[\underline{Y}_N]_{l}^{\phantom{l}k}}
{Y_N^{\rm eq}} \,
\Tdu{[\delta \gamma^{N}_{\ell\phi}]}{\alpha}{\beta}{k}{l}
+ \, \frac{[Y_{\Delta N}]_{l}^{\phantom{l} k}}{2\,Y_N^{\rm eq}} \,
\Tdu{[\gamma^{N}_{\ell \phi}]}{\alpha}{\beta}{k}{l} \notag\\
&\quad\;\; - \frac{1}{3} \,
\Big\{ Y_{\Delta \ell} , \,
{\gamma}^{\ell\phi}_{\ell^{\tilde{c}} \phi^{\tilde{c}}}
+ {\gamma}^{\ell\phi}_{\ell \phi}\Big\}_{\alpha}^{\phantom{\alpha} \beta}
\, - \, \frac{2}{3} \, \Tdu{[Y_{\Delta \ell}]}{\delta}{\epsilon}{}{} \,
\Tdu{[{\gamma}^{\ell\phi}_{\ell^{\tilde{c}} \phi^{\tilde{c}}} - {\gamma}^{\ell\phi}_{\ell \phi}]}{\epsilon}{\delta}{\alpha}{\beta}
\notag\\[3pt]
& \quad\;\; - \frac{2}{3} \,
\Big\{Y_{\Delta \ell}, \,
\gamma_{\rm dec } \Big\}_{\alpha}^{\phantom{\alpha}\beta} \,
+\, [\delta \gamma_{\rm dec}^{\rm back}]_{\alpha}^{\phantom{\alpha}\beta}\;,
\end{align}
\end{subequations}
\\
\noindent where $z$ is defined in terms of the temperature $T$ and heavy-neutrino mass scale $M$ as $z\equiv M/T$ (see Chapter~\cite{leptogenesis:A04}). The generalized real and imaginary parts of an Hermitian matrix $A$ are defined via
\begin{subequations}
\begin{align}
[\widetilde{\mathrm{Re}}(A)]_{\alpha}^{\phantom{\alpha}\beta}\ &\equiv\ \frac{1}{2}\Big(A_{\alpha}^{\phantom{\alpha}\beta}\:+\:G_{\alpha\lambda}A_{\mu}^{\phantom{\mu}\lambda}G^{\mu\beta}\Big)\;,\\
[\widetilde{\mathrm{Im}}(A)]_{\alpha}^{\phantom{\alpha}\beta}\ &\equiv\ \frac{1}{2i}\Big(A_{\alpha}^{\phantom{\alpha}\beta}\:-\:G_{\alpha\lambda}A_{\mu}^{\phantom{\mu}\lambda}G^{\mu\beta}\Big)\;.
\end{align}
\end{subequations}
These rate equations have been written in terms of the yields (see Chapter~\cite{leptogenesis:A04})
\begin{equation}
\underline{Y}_N(z)\ \equiv\ \frac{\underline{n}_N(z)}{s(z)}\;,\qquad Y_{\Delta N}(z)\ \equiv\ \frac{n_{\Delta N}(z)}{s(z)}\;,\qquad Y_{\Delta \ell}(z)\ \equiv\ \frac{n_{\Delta \ell}(z)}{s(z)}\;.
\end{equation}
While the form of the rate equations is essentially dictated by flavor covariance, it can be obtained explicitly by a semiclassical analysis~\cite{Dev:2014laa} and a field-theoretic Kadanoff-Baym treatment~\cite{Dev:2014wsa}.
\begin{figure}
\centering
\subfigure[][The in-medium inverse heavy-neutrino decay: $n_{\phi}\protect{[}n_{\ell}\protect{]}_{\beta}^{\protect{\phantom{\beta}}\alpha}\protect{[}\gamma(\ell\phi\to N)\protect{]}_{\alpha\protect{\phantom{\beta}}k}^{\protect{\phantom{\alpha}}\beta\protect{\phantom{k}}l}$.]{\parbox{\textwidth}{\centering \includegraphics[scale=0.7]{LPHItoNself.pdf}\vspace{0.5em}\\$\bigg\downarrow$\\\includegraphics[scale=0.7]{LPHItoNamp.pdf}}}
\subfigure[][The in-medium inverse heavy-neutrino decay: $\bar{n}_{\phi}\protect{[}\bar{n}_{\ell}\protect{]}_{\beta}^{\protect{\phantom{\beta}}\alpha}\protect{[}\gamma(\ell^{\tilde{c}}\phi^{\tilde{c}}\to N)\protect{]}_{\alpha\protect{\phantom{\beta}}k}^{\protect{\phantom{\alpha}}\beta\protect{\phantom{k}}l}$.]{\parbox{\textwidth}{\centering \includegraphics[scale=0.7]{LPHItoNbself.pdf}\vspace{0.5em}\\$\bigg\downarrow$\\\includegraphics[scale=0.7]{LPHItoNbamp.pdf}}}
\caption{\label{3_fig_cuts} Diagrammatic representation of the $2\to 1$ processes, illustrating the origin of the four-index rates from the unitarity cuts of the thermal heavy-neutrino self-energies~\cite{Dev:2014laa}. Notice that the shaded region of the cut appears to the left. Diagrams adapted from Ref.~\cite{Dev:2014laa}.}
\end{figure}
The necessary appearance of rates that carry high-rank structure in flavor space, e.g.~\smash{{\footnotesize $\Tdu{[\gamma^{N}_{\ell \phi}]}{\alpha}{\beta}{k}{l}$}}, can be understood in terms of partial cuts of the ``thermal'' self-energies (cf.~\sref{sec:methods_fieldtheory}) by means of a generalization of the optical theorem~\cite{Dev:2014laa}, where the cut is weighted by the matrix number density. For example, the inverse decay terms can be obtained directly from the cuts shown in~\fref{3_fig_cuts}, allowing us to extract the thermally-averaged rates (cf.~Chapter~\cite{leptogenesis:A04})
\begin{subequations}
\begin{gather}
\Tdu{[\gamma(N\to\ell\phi)]}{\alpha}{\beta}{k}{l} = \Tdu{[\gamma(\ell^{\tilde{c}}\phi^{\tilde{c}}\to N)]}{\alpha}{\beta}{k}{l} = \int_{N\ell\phi}g_{\ell}g_{\phi}(2p_N\cdot p_{\ell})\lambda^{\dag\beta}_{\phantom{\dag\beta}k}\lambda_{\alpha}^{\phantom{\alpha}l}\;,\\
\Tdu{[\gamma(N\to\ell^{\tilde{c}}\phi^{\tilde{c}})]}{\alpha}{\beta}{k}{l} = \Tdu{[\gamma(\ell\phi\to N)]}{\alpha}{\beta}{k}{l} = \int_{N\ell\phi}g_{\ell}g_{\phi}(2p_N\cdot p_{\ell})[\lambda^{\tilde{c}}]^{\dag\beta}_{\phantom{\dag\beta}k}[\lambda^{\tilde{c}}]_{\alpha}^{\phantom{\alpha}l}\;,
\end{gather}
\end{subequations}
where the left-hand equalities follow from CPT, $g_{\ell}$ and $g_{\phi}$ are respectively the degeneracy factors of the internal degrees of freedom of the charged-lepton and Higgs doublets, and we employ the short-hand notation
\begin{align}
\int_{N\ell\phi}\ &\equiv\ \int\!\frac{{\rm d}^3\mathbf{p}_N}{(2\pi)^32E_N(\mathbf{p}_N)}\,\frac{{\rm d}^3\mathbf{p}_{\ell}}{(2\pi)^32E_{\ell}(\mathbf{p}_{\ell})}\,\frac{{\rm d}^3\mathbf{p}_{\phi}}{(2\pi)^32E_{\phi}(\mathbf{p}_{\phi})}\nonumber\\&\qquad\times\:(2\pi)^4\delta^{(4)}(p_N-p_{\ell}-p_{\phi})e^{-p^0_N/T}
\end{align}
for the thermally-averaged phase-space integrals. In this way, we obtain
\begin{equation}
\Tdu{[\gamma^{N}_{\ell \phi}]}{\alpha}{\beta}{k}{l}\ =\ \frac{M^4}{\pi^2z}\,\frac{\mathcal{K}_1(z)}{16\pi}\,\Big(\lambda^{\dag\beta}_{\phantom{\dag\beta}k}\lambda_{\alpha}^{\phantom{\alpha}l}\:+\:[\lambda^{\tilde{c}}]^{\dag\beta}_{\phantom{\dag\beta}k}[\lambda^{\tilde{c}}]_{\alpha}^{\phantom{\alpha}l}\Big)\;,
\end{equation}
where $\mathcal{K}_1(z)$ is the first-order modified Bessel function of the second kind.
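In numerical implementations, such four-index rates are conveniently built as tensor contractions. A minimal sketch of the expression above (the generalized-conjugate Yukawa matrix $\lambda^{\tilde{c}}$ is supplied as an input; schematically, it reduces to $\lambda^*$ in the basis where the generalized-$\rm C$ matrices are trivial):
\begin{verbatim}
import numpy as np
from scipy.special import kn

def gamma_N_lphi(lam, lamc, M, z):
    # four-index rate [gamma^N_{l phi}]_alpha^beta_k^l
    pref = M**4 / (np.pi**2 * z) * kn(1, z) / (16.0 * np.pi)
    t = np.einsum('bk,al->abkl', lam.conj(), lam) \
      + np.einsum('bk,al->abkl', lamc.conj(), lamc)
    return pref * t

# hypothetical Yukawa matrix, purely illustrative
lam = 1e-2 * (np.ones((3, 3)) + 0.1j * np.eye(3))
g4 = gamma_N_lphi(lam, lam.conj(), M=1e4, z=1.0)
g_scalar = np.einsum('aakk', g4)   # flavor trace recovers the scalar rate
\end{verbatim}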
In order to identify the physical origin of each of the terms in these rate equations, it is helpful to consider their flavor structure and, specifically, whether they arise from commutators or anti-commutators in flavor space.
The first term on each of the right-hand sides of~\eref{3_etanrateeq} and~\eref{3_deltaetanrateeq} originates from a commutator in flavor space. Working, for instance, in the mass eigenbasis, it is clear that these terms source CP asymmetry only when non-zero flavor correlations are encoded in the off-diagonal elements of the matrix number densities. Since these terms are non-zero only in the presence of such a misalignment, they predominantly capture the \emph{coherent oscillations} between heavy-neutrino flavors. These terms are of \emph{statistical} origin, and we emphasise that they would be absent in a flavor-diagonal treatment.
The remaining terms instead arise from anti-commutators in flavor space and persist in the flavor-diagonal limit. (The terms that do not explicitly carry braces began as anti-commutators involving the equilibrium number densities, which are taken to be diagonal in flavor space.) With the exception of the decoherence term, which will be described shortly, the anti-commutator structure predominantly captures the effect of \emph{mixing} between the heavy-neutrino flavors. The terms involving \smash{$\gamma^{N}_{\ell\phi}$} and \smash{$\delta\gamma^N_{\ell\phi}$} together describe decays and inverse decays. The terms involving \smash{$\gamma^{\ell\phi}_{\ell\phi}$} and $\gamma^{\ell\phi}_{\ell^{\tilde{c}}\phi^{\tilde{c}}}$ describe $\Delta L=2$ scatterings. In order to avoid double counting, the procedure of RIS subtraction~\cite{Kolb:1979qa} has been applied to these rate equations, as discussed in the accompanying Chapter~\cite{leptogenesis:A04}, with the necessary inclusion of thermal corrections~\cite{Dev:2014laa}. Finally, the decoherence term \smash{$[\delta \gamma_{\rm dec}^{\rm back}]_{\alpha}^{\phantom{\alpha}\beta}$}~\cite{Dev:2014laa} (cf.~Ref.~\cite{Abada:2006fw}) accounts for processes mediated by the charged-lepton Yukawa couplings, which act in competition with the processes mediated by the heavy-neutrino Yukawa couplings. The former tend to decohere the charged leptons into their mass eigenbasis, whereas the latter tend to regenerate charged-lepton coherences.
The physically distinct sources of CP asymmetry from \emph{oscillations} and \emph{mixing} can also be isolated by considering the sequence of heavy-neutrino production, propagation and subsequent decay. The contribution from \emph{mixing} is associated with the heavy-neutrino production and decay processes, and the contribution from \emph{oscillations} is associated with the \emph{in-medium} propagation of the heavy neutrinos. The former is generated predominantly by the interference of the ($T=0$) one-loop and tree-level processes, capturing the usual $\varepsilon$- and $\varepsilon'$-type CP violation. The latter is contained in the thermal part of the intermediate heavy-neutrino propagator and is captured at leading order in the semi-classical rate equations by the presence of the commutator terms.
In the hierarchical limit, the source of CP asymmetry is dominated by \emph{mixing}. A semi-classical analysis of flavor-diagonal Boltzmann equations is then sufficient, and the source of asymmetry can be treated by means of effective or resummed Yukawa couplings (see~Ref.~\cite{Pilaftsis:2003gt}). In the quasi-degenerate limit, \emph{oscillations} become important, and we need also to keep track of the evolution of the off-diagonal flavor correlations, resulting in a non-vanishing contribution from the commutator term. Whilst it is clear that both \emph{mixing} and \emph{oscillations} contribute to the asymmetry in the quasi-degenerate regime, it remains an open question as to how to account consistently for both sources without under- or over-counting the final asymmetry. In semi-classical approaches, it has been claimed~\cite{Dev:2014laa} that both the commutator term and resummed Yukawa couplings should be included. This has also been argued in a field-theoretic approach~\cite{Dev:2014wsa} based on the interaction picture~\cite{Millington:2012pf,Millington:2013isa} (see also the discussion in Chapter~\cite{leptogenesis:A03}). Conversely, in other field-theoretic approaches, it has been claimed~\cite{Garbrecht:2011aw} that both sources are captured by the average mass shell approximation for the flavor-off-diagonal heavy-neutrino Wigner functions. The material difference amounts to a possible factor of 2 in the final asymmetry~\cite{Dev:2014wsa}. The main obstacle to resolving this debate is the technical difficulty of making direct comparisons between different approaches in the strong washout regime and in the presence of cosmological expansion.
A direct comparison was made in the weak washout regime and on a static and stationary background in Ref.~\cite{Kartavtsev:2015vto} (see also the discussion in Chapter~\cite{leptogenesis:A03} of this review). In this idealized setting, the sources of CP violation were studied in a field-theoretic approach, based on the Kadanoff-Baym formalism (in both interaction- and Heisenberg-picture descriptions), by analysing the effective shell structure of the would-be non-equilibrium heavy-neutrino propagators of a toy scalar model. Whilst both mixing and oscillation contributions can be identified --- the former living on the quasi-particle mass shells and the latter living on an intermediate average mass shell --- one also finds additional terms that can be interpreted as the \emph{destructive} interference between these contributions. In the hierarchical limit, the oscillation and interference terms are suppressed, such that the quasi-particle mass shells dominate and a flavor-diagonal semi-classical analysis with resummed Yukawa couplings is appropriate. In the fully degenerate limit, the destructive interference is complete (see also Ref.~\cite{Hohenegger:2014cpa}), and one finds zero asymmetry, as expected. In the problematic, quasi-degenerate limit, the degree of cancellation was shown~\cite{Kartavtsev:2015vto} to depend strongly on the distribution of particle number between the different flavors and is therefore model- and washout-dependent (i.e.~dependent upon the choice of initial conditions in the weak washout regime). If the asymmetry is distributed evenly between the different flavors (corresponding to symmetric initial conditions in the weak washout regime), the impact of the destructive interference is more severe, and there is a significant suppression of the \emph{mixing} source. If this result is extrapolated to the strong washout regime, the form of the CP source then agrees with the average mass shell approximation employed in Ref.~\cite{Garbrecht:2011aw}. Instead, if a particular diagonal element of the number density dominates (corresponding to asymmetric initial conditions in the weak washout regime), the interference does not significantly impact the magnitude of the mixing term. One then finds that both the mixing and oscillation sources contribute additively to the final asymmetry up to a maximum factor of 2 enhancement when compared with taking only one source into account.
We should, however, be careful in extrapolating the latter observations to the strong washout regime and an expanding background. Whilst it is the case that one diagonal element of the heavy-neutrino number densities dominates in the attractor limit of the scenario considered in Ref.~\cite{Dev:2014laa}, the behavior of the aforementioned destructive interference in the strong washout regime, and the degree to which it is correctly captured, remain subjects of active discussion. In semi-classical approaches, the destructive interference is, at least in part, captured by ensuring that the regulator of the final asymmetry (obtained through consistent resummation of the effective Yukawa couplings) vanishes appropriately in the CP-conserving limit.
\subsection{Phenomenological aspects}
As already mentioned in the introduction, the flavor effects captured in the fully flavor-covariant treatment are of both qualitative and quantitative importance in testable leptogenesis models. In this section, we illustrate this with a minimal model of low-scale resonant $\tau$-genesis (RL$_\tau$) in which the lepton asymmetry is generated from and protected in a single lepton flavor $\ell=\tau$~\cite{Pilaftsis:2004xx, Deppisch:2010fr}. The Dirac Yukawa couplings involving electron and muon flavors in~\eref{3_Lagrangian} remain sizable, thus giving rise to potentially observable predictions for lepton number and flavor violation at both energy and intensity frontiers~\cite{Deppisch:2010fr, Dev:2014laa, Dev:2015wpa}.
Within the minimal RL$_\ell$ setup, the heavy-neutrino sector possesses an $O({\cal N}_N)$ symmetry at some high energy scale $\mu_X$, i.e.~$M_N(\mu_X)=M\mathbb{I}$. The small mass splitting, as required for successful RL, can then be generated naturally at the phenomenologically relevant low-energy scale by renormalization group (RG) running effects induced by the Yukawa couplings $\lambda_\alpha^{\phantom{\alpha}k}$, i.e.~$M_N(M)=M\mathbb{I}+\Delta M_N^{\rm RG}$, where~\cite{Deppisch:2010fr}
\begin{align}
\Delta M_N^{\rm RG} \ \simeq \ -\,\frac{M}{8\pi^2} \ln\left(\frac{\mu_X}{M}\right){\rm Re}[\lambda^\dag(\mu_X)\cdot\lambda(\mu_X)] \; .
\end{align}
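For orientation, this splitting is straightforward to evaluate numerically; the following minimal sketch (our own, with a hypothetical Yukawa texture) implements the formula above.
\begin{verbatim}
# Minimal sketch (hypothetical inputs): RG-induced heavy-neutrino mass
# splitting  Delta M_N^RG = -(M/8 pi^2) ln(mu_X/M) Re[lam^dag lam].
import numpy as np

def delta_MN_RG(lam, M, mu_X):
    return -(M / (8.0 * np.pi**2)) * np.log(mu_X / M) \
           * np.real(lam.conj().T @ lam)

rng = np.random.default_rng(0)
lam = 1e-3 * (rng.standard_normal((3, 3))
              + 1j * rng.standard_normal((3, 3)))  # illustrative texture
print(delta_MN_RG(lam, M=400.0, mu_X=1e16))        # GeV; O(|lam|^2) * M
\end{verbatim}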
However, it turns out that this minimal scenario is not viable due to a no-go theorem~\cite{Dev:2015wpa}, which ensures that the leptonic asymmetry vanishes identically at ${\cal O}(\lambda^4)$. To avoid this, we include a new source of flavor breaking $\Delta M_N$, which is not aligned with $\Delta M_N^{\rm RG}$. Thus, the relevant heavy-neutrino mass matrix for our case is given by
\begin{align}
M_N \ = \ M\mathbb{I}+\Delta M_N^{\rm RG}+\Delta M_N \; ,
\end{align}
which goes into the type I seesaw formula for the light neutrino mass matrix~\cite{Minkowski:1977sc, Mohapatra:1979ia, Yanagida:1979as, GellMann:1980vs, Glashow:1979nm}
\begin{align}
M_\nu \ \simeq \ -\,\frac{v^2}{2}\,\lambda\cdot M_N^{-1}\cdot \lambda^{\sf T} \; .
\label{3_lightmassmatrix}
\end{align}
For the purpose of our illustration, we consider three RH neutrinos (i.e.~${\cal N}_N=3$) and the following diagonal form for $\Delta M_N$:
\begin{align}
\Delta M_N \ = \ {\rm diag}(\Delta M_1, \Delta M_2/2, -\,\Delta M_2/2) \; ,
\end{align}
where $\Delta M_2\neq \Delta M_1$ is needed to make the light neutrino mass matrix $M_\nu$ in~\eref{3_lightmassmatrix} rank-2, thus allowing a successful fit to the low-energy neutrino oscillation data.
As for the Yukawa coupling matrix $\lambda$, we consider an $\mathrm{RL}_\tau$ model that
possesses a leptonic $U(1)_{\ell}$ symmetry, which protects the smallness of the LH neutrino masses. In this scenario, the Yukawa
couplings $\lambda_{\alpha}^{\phantom{\alpha}k}$ have the following
structure~\cite{Pilaftsis:2004xx, Pilaftsis:2005rv}:
\begin{equation}
\lambda \ = \begin{pmatrix} 0 & a \,e^{-i\pi/4} & a\,e^{i\pi/4}\cr
0 & b\,e^{-i\pi/4} & b\,e^{i\pi/4}\cr
0 & c\,e^{-i\pi/4} & c\,e^{i\pi/4}\end{pmatrix}
\: + \: \delta \lambda \; .
\label{3_Yukawastructure}
\end{equation}
In order to protect the $\tau$ asymmetry from excessive washout and simultaneously allow for large couplings in the electron and muon sectors so as to have experimentally observable effects, we take $|c| \ll |a|,|b| \approx
10^{-3}-10^{-2}$. The leptonic flavor-symmetry-breaking matrix is taken to be
\begin{equation}
\delta \lambda \ = \ \begin{pmatrix}
\varsigma_e & 0 & 0\cr
\varsigma_\mu & 0 & 0\cr
\varsigma_\tau & 0 & 0 \end{pmatrix}
\; .
\end{equation}
To leading order in the symmetry-breaking parameters of $\Delta M_N$ and $\delta \lambda$, the tree-level light-neutrino mass matrix, given by~\eref{3_lightmassmatrix}, becomes
\begin{equation}
M_\nu \
\simeq \ \frac{v^2}{2M}\begin{pmatrix}
\frac{\Delta M}{M} a^2 - \varsigma_e^2 &
\frac{\Delta M}{M} ab - \varsigma_e\varsigma_\mu &
- \varsigma_e\varsigma_\tau\cr
\frac{\Delta M}{M} ab - \varsigma_e\varsigma_\mu &
\frac{\Delta M}{M} b^2 - \varsigma_\mu^2 &
- \varsigma_\mu\varsigma_\tau\cr
- \varsigma_e\varsigma_\tau &
- \varsigma_\mu\varsigma_\tau &
- \varsigma_\tau^2 \end{pmatrix}\;,
\end{equation}
where $\Delta M = -i\Delta M_2$ and we have neglected subdominant terms $\frac{\Delta M}{M} \,c \times (a,b,c)$. Inverting this expression, we determine the following model parameters appearing in the Yukawa coupling matrix~\eqref{3_Yukawastructure}:
\begin{align}
a^2 \ &= \ \frac{2M}{v^2}
\left(M_{\nu,{11}}-\frac{M^2_{\nu,{13}}}{M_{\nu,{33}}}\right) \frac{M}{\Delta M}\; ,
\qquad
b^2 \ = \ \frac{2M}{v^2}
\left(M_{\nu,{22}}-\frac{M^2_{\nu,{23}}}{M_{\nu,{33}}}\right)\frac{M}{\Delta M}\; ,
\nonumber \\[6pt]
\varsigma_e^2 \ &= \ -\frac{2M}{v^2}\frac{M^2_{\nu,{13}}}{M_{\nu,{33}}} \; ,
\qquad
\varsigma_\mu^2 \ = \ -\frac{2M}{v^2}\frac{M^2_{\nu,{23}}}{M_{\nu,{33}}}\; ,
\qquad
\varsigma_\tau^2 \ = \ -\frac{2M}{v^2}M_{\nu,{33}}\; .
\label{3_modelparameters}
\end{align}
Therefore, the Yukawa coupling matrix in the RL$_\tau$ model can be fixed completely in terms of the heavy-neutrino mass scale $M$ and the input parameters $c$ and $\Delta M_{2}$, apart from the light-neutrino oscillation parameters, which determine the elements of $M_\nu$ from the diagonalization equation $M_\nu = U_{\nu}{\rm diag}(m_{\nu_1},m_{\nu_2},m_{\nu_3})U_{\nu}^{\sf T}$, where $U_{\nu}$ is the usual PMNS mixing matrix (see Eq.~\eqref{eq:PMNS}).
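The inversion leading to~\eref{3_modelparameters} is easily automated. The following sketch (our own illustration; the branches of the complex square roots are a convention that must be chosen consistently with the Yukawa texture) returns the model parameters for a given light-neutrino mass matrix.
\begin{verbatim}
# Sketch (our own): fixing the RL_tau parameters of the equations above
# from a light-neutrino mass matrix Mnu (GeV), the heavy scale M (GeV)
# and Delta M = -i Delta M_2. Square-root branches are a convention.
import numpy as np

def model_parameters(Mnu, M, DeltaM2, v=246.0):
    DeltaM = -1j * DeltaM2
    pref = 2.0 * M / v**2
    a2  = pref * (Mnu[0, 0] - Mnu[0, 2]**2 / Mnu[2, 2]) * M / DeltaM
    b2  = pref * (Mnu[1, 1] - Mnu[1, 2]**2 / Mnu[2, 2]) * M / DeltaM
    se2 = -pref * Mnu[0, 2]**2 / Mnu[2, 2]
    sm2 = -pref * Mnu[1, 2]**2 / Mnu[2, 2]
    st2 = -pref * Mnu[2, 2]
    return {name: np.sqrt(complex(val)) for name, val in
            [('a', a2), ('b', b2), ('s_e', se2),
             ('s_mu', sm2), ('s_tau', st2)]}
\end{verbatim}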
\begin{table}[t!]
\tbl{The numerical values of the free parameters for three chosen benchmark points in our RL model. The parameters $a,b,\varsigma_{e,\mu,\tau}$ have been derived using~\eref{3_modelparameters}.}
{\begin{tabular}{c c c c}\hline
Input Parameter & BP1 & BP2 & BP3\\ \hline
$M$ & 400 GeV & 2000 GeV & 400 GeV \\
$\Delta M_1/M$ & $-5\times 10^{-5}$ & $-5\times 10^{-5}$ & $-5\times 10^{-5}$ \\
$\Delta M_2/M$ & $1.1\times 10^{-9}$ & $5\times 10^{-9}$ & $10^{-8}$ \\
$c$ & $2\times 10^{-7}$ & $2\times 10^{-7}$ & $2\times 10^{-7}$ \\
\hline
\end{tabular}\label{3_tab_benchmarks}}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.35]{benchmarks.pdf}
\caption{Total lepton asymmetry $Y_{\Delta \ell}$ as a function of the inverse temperature, obtained using the fully flavor-covariant formalism (solid curves) for three benchmark points. For comparison, we also show the corresponding predictions as obtained using the Boltzmann equations diagonal in charged-lepton flavors (dashed curves), which overestimate the final asymmetry in all three cases. The vertical line shows the critical temperature $T_C$ beyond which the lepton asymmetry is frozen out due to the exponential suppression of the electroweak sphaleron transition rate.\label{3_fig_benchmarks}}
\end{figure}
For numerical purposes, we choose a normal hierarchy of light neutrino masses, with the lightest mass $m_{\nu_1}=0$, and use the best-fit values of the oscillation parameters (mass-squared differences and mixing angles) from a recent global fit~\cite{Capozzi:2013csa}. For illustration, we choose $\delta=0$ and $\phi_1=\pi,~\phi_2=\pi$ for the Dirac and Majorana phases, respectively. To demonstrate the flavor dynamics of our RL$_\tau$ model, we select three benchmark points, as listed in~\tref{3_tab_benchmarks}. The results for the total lepton asymmetry in each case are shown in \fref{3_fig_benchmarks}. The ``bump'' in each case is due to an interplay between the heavy-neutrino coherence and charged-lepton decoherence effects~\cite{Dev:2014laa}. We find that the final lepton asymmetry obtained using the fully flavor-covariant treatment is smaller than that obtained from the solution of the Boltzmann equations diagonal in the charged-lepton flavor by up to a factor of 5. This clearly demonstrates the quantitative importance of the flavor effects captured by the flavor-covariant formalism.
\begin{figure}[t!]
\centering
\includegraphics[scale=0.35]{example.pdf}
\caption{Total lepton asymmetry $Y_{\Delta \ell}$ as a function of the inverse temperature, obtained using the fully flavor-covariant formalism (black solid curve) versus that obtained using the Boltzmann equations diagonal in charged-lepton flavor (green dot-dashed), heavy-neutrino flavor (red dashed) and both (blue dotted). The yellow and grey solid curves show the total asymmetry in the flavor-covariant treatment for different initial conditions. The horizontal line corresponds to the lepton asymmetry that reproduces the observed baryon asymmetry. The vertical line shows the critical temperature $T_C$ beyond which the lepton asymmetry is frozen out due to the exponential suppression of the electroweak sphaleron transition rate.\label{3_fig_example}}
\end{figure}
\begin{table}[t!]
\tbl{The low-energy predictions for three chosen benchmark points in the RL model.}
{\begin{tabular}{c c c c c} \hline
Observable & BP1 & BP2 & BP3 & Current Upper Limit (90\% CL) \\ \hline
BR$(\mu\to e\gamma)$ & $3.9\times 10^{-13}$ & $1.2\times 10^{-15}$ & $4.7\times 10^{-15}$ & $4.2\times 10^{-13}$ [MEG]~\cite{TheMEG:2016wtm} \\
BR$(\tau\to \mu\gamma)$ & $3.2\times 10^{-23}$ & $1.7\times 10^{-25}$ & $7.0\times 10^{-24}$ & $4.4\times 10^{-8}$ [PDG]~\cite{Olive:2016xmw}\\
BR$(\tau\to e\gamma)$ & $1.2\times 10^{-23}$ & $6.5\times 10^{-26}$ & $2.6\times 10^{-24}$ & $3.3\times 10^{-8}$ [PDG]~\cite{Olive:2016xmw}\\ \hline
BR$(\mu\to 3e)$ & $1.9\times 10^{-14}$ & $1.5\times 10^{-16}$ & $2.3\times 10^{-16}$ & $1.0\times 10^{-12}$ [PDG]~\cite{Olive:2016xmw}\\ \hline
$R_{\mu-e}^{\rm Ti}$ & $5.9\times 10^{-13}$ & $1.9\times 10^{-16}$& $7.1\times 10^{-15}$ & $6.1\times 10^{-13}$ [SINDRUM II]~\cite{Kaulard:1998rb} \\
$R_{\mu-e}^{\rm Au}$ & $6.4\times 10^{-13}$ & $2.8\times 10^{-17}$ & $7.1\times 10^{-15}$ & $7.0\times 10^{-13}$ [SINDRUM II]~\cite{Bertl:2006up} \\
$R_{\mu-e}^{\rm Pb}$ & $4.5\times 10^{-13}$ & $1.2\times 10^{-17}$ & $7.1\times 10^{-15}$ & $4.6\times 10^{-11}$ [SINDRUM II]~\cite{Honecker:1996zf} \\ \hline
$\langle m_{\beta\beta}\rangle$ (meV) & $3.8\times 10^{-9}$ & $3.8\times 10^{-9}$ & $3.8\times 10^{-9}$ & $61-165$ [KamLAND-Zen]~\cite{KamLAND-Zen:2016pfg} \\
\hline
\end{tabular} \label{3_tab_predictions}}
\end{table}
The impact of flavor effects is further illustrated in~\fref{3_fig_example}. The solid curves show the total lepton asymmetry obtained from the fully flavor-covariant Boltzmann equations for {\it very} different initial conditions. It is reassuring to see that the final asymmetry is independent of any pre-existing initial abundance --- a hallmark of RL models~\cite{Pilaftsis:2005rv}. The dotted (blue), dashed (red) and dot-dashed (green) curves show the corresponding predictions from the solution of Boltzmann equations diagonal in both heavy-neutrino and charged-lepton flavors, only in the heavy-neutrino flavor, and only in the charged-lepton flavor, respectively. It is clear that none of the fully or partially diagonal rate equations are capable of capturing all flavor effects in a consistent manner, which necessitates the use of the flavor-covariant treatment. For this particular example, we have chosen $\delta=-\pi/2$, as mildly favored by the recent T2K data~\cite{Escudero:2016odp}, and $\phi_1=\pi,\phi_2=0$ for the PMNS CP phases in order to reproduce the observed baryon asymmetry in the flavor-covariant treatment. The other input parameters in this example are $M=250$ GeV, $\Delta M_1/M=-\,5\times 10^{-5}$, $\Delta M_2/M=1.5\times 10^{-9}$ and $c=2.8\times 10^{-7}$.
As mentioned earlier, apart from explaining the matter-anti-matter asymmetry puzzle, the low-scale RL models offer the attractive possibility of being tested in various laboratory experiments at both energy and intensity frontiers. The benchmark scenarios shown in~\tref{3_tab_benchmarks}, having TeV-scale heavy neutrinos, can be probed at the LHC via multilepton final states~\cite{Deppisch:2015qwa}. Note that, due to the small mass splitting between the three heavy neutrinos, the same-sign dilepton signal at the LHC will be suppressed. However, the opposite-sign dilepton or trilepton signals can be useful in probing these scenarios. As for the low-energy probes at the intensity frontier, the model predictions for various low-energy observables are given in~\tref{3_tab_predictions}, along with the current experimental limits at 90\% C.L. For details of the theoretical calculations, see, e.g.,~Ref.~\cite{Dev:2014laa}. The $0\nu\beta\beta$ rate is suppressed in this case for the same reason as the suppression of the lepton number violating LHC signals, i.e.~due to the quasi-degeneracy of the heavy neutrinos. Even so, the $\mu\to e\gamma$ and $\mu-e$ conversion predictions are close to the current experimental bounds and could be tested in the near future by upcoming experiments, such as Mu2e~\cite{Bartoszek:2014mya} and PRISM/PRIME~\cite{Kuno:2005mm}. This is a characteristic feature of the RL$_\tau$ models being considered here, which have relatively large Yukawa couplings in the electron and muon sectors, thus giving rise to observable lepton flavor violating (LFV) effects. On the other hand, the Yukawa couplings in the tau sector are smaller, which suppresses the corresponding LFV effects. It is difficult to have any observable LFV effects in most of the other low-scale RL models~\cite{Heurtier:2016iac}, and this puts the RL$_\tau$ models discussed here on a unique footing.
\section{\label{sec:typeII}Type II seesaw/scalar triplet leptogenesis}
Leptogenesis has mainly been studied in the framework of the type I seesaw mechanism, in which the source of the lepton asymmetry is the CP-violating decays of heavy Majorana neutrinos. Scalar triplet leptogenesis~\cite{Hambye:2003ka, Antusch:2004xy, Hambye:2005tk, Chun:2006sp, Hallgren:2007nq, Frigerio:2008ai, Felipe:2013kk, Sierra:2014tqa, Lavignac:2015gpa}, based on the type II seesaw mechanism~\cite{Magg:1980ut, Schechter:1980gr, Lazarides:1980nt, Mohapatra:1980yp}, has received much less attention in comparison. In particular, lepton flavor effects were included only recently in this scenario~\cite{Felipe:2013kk, Sierra:2014tqa, Lavignac:2015gpa}.
\subsection{The framework}
\label{4_sec_framework}
In spite of its simplicity, the type II seesaw mechanism is much less popular than its type I cousin, presumably because it is less easily implemented in GUTs. All it requires is the addition to the SM of a massive electroweak scalar triplet, which couples to the LH leptons and to the Higgs doublet in the following way:
\begin{equation}
\mathcal{L}_\Delta\ =\ -\,\frac{1}{2} \left( y_{\alpha\beta}\, \ell_\alpha^{\mathsf{T}} C i \sigma^2 \Delta \ell_\beta
+\mu\, \phi^{\mathsf{T}} i \sigma^2 \Delta^\dagger \phi + \mbox{h.c.} \right) - M_\Delta^2\, \mbox{tr} (\Delta^\dagger \Delta)\; ,
\label{4_Lagrangian}
\end{equation}
where $C$ is the charge conjugation matrix defined by $C \gamma^{\mathsf{T}}_\mu C^{-1} = -\,\gamma_\mu$, and
\begin{equation}
\Delta\ =\
\left( \begin{array}{cc} \Delta^+ / \sqrt{2} & \Delta^{++} \\
\Delta^0 & - \Delta^+ / \sqrt{2}
\end{array} \right) \;, \qquad
\Delta^\dagger\ =\
\left( \begin{array}{cc} \Delta^- / \sqrt{2} & \Delta^{0*} \\
\Delta^{--} & - \Delta^- / \sqrt{2}
\end{array} \right) \;.
\end{equation}
In~\eref{4_Lagrangian}, $\alpha$ and $\beta$ are lepton flavor indices, $y_{\alpha\beta}$ is a symmetric $3 \times 3$ matrix of complex dimensionless couplings, and $\mu$ is a complex mass parameter. Heavy scalar triplet exchange generates the neutrino mass matrix
\begin{equation}
(M^\Delta_{\nu})_{\alpha\beta}\ =\ \frac{1}{4}\, \mu y_{\alpha\beta}\frac{v^2}{M_\Delta^2}\; ,
\label{4_mnu_Delta}
\end{equation}
where $v = \sqrt{2}\,\langle \phi^0 \rangle = 246\, \mbox{GeV}$ is the Higgs boson vacuum expectation value, providing the desired suppression of neutrino masses.
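For orientation (our own numerical illustration), taking $\mu \sim M_\Delta = 10^{12}\,\mbox{GeV}$ and $y_{\alpha\beta} \sim 10^{-2}$ in~\eref{4_mnu_Delta} gives $(M^\Delta_{\nu})_{\alpha\beta} \sim y\, (\mu/M_\Delta)\, v^2/(4 M_\Delta) \approx 0.15\,\mbox{eV}$, of the right order of magnitude for the light-neutrino mass scale.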
The Lagrangian in~\eref{4_Lagrangian} allows the scalar triplet to decay into a pair of anti-leptons or a pair of Higgs bosons, with respective tree-level decay rates and branching ratios
\begin{equation}
\Gamma(\Delta \rightarrow \bar \ell \bar \ell)\ =\ \frac{\lambda^2_\ell}{32\pi}\, M_\Delta\, , \qquad
\Gamma(\Delta \rightarrow \phi \phi)\ =\ \frac{\lambda^2_{\phi}}{32\pi}\, M_\Delta\; ,
\end{equation}
\begin{equation}
B_\ell\ =\ \lambda_\ell^2 / (\lambda_\ell^2 + \lambda_{\phi}^2)\;, \qquad \qquad
B_{\phi}\ =\ \lambda_{\phi}^2 / (\lambda_\ell^2 + \lambda_{\phi}^2)\; ,
\end{equation}
where we have introduced the notations
\begin{equation}
\lambda_\ell\, \equiv\, \sqrt{\mathrm{tr}(yy^\dagger)}\;, \qquad \qquad \lambda_{\phi}\ \equiv\ |\mu| / M_\Delta\;.
\end{equation}
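These tree-level relations translate directly into code; the following helper (our own sketch, assuming GeV units throughout) returns the partial widths and branching ratios from the couplings.
\begin{verbatim}
# Sketch (tree-level formulas above; GeV units): triplet partial widths
# and branching ratios from the couplings y (symmetric 3x3) and mu.
import numpy as np

def triplet_decays(y, mu, M_Delta):
    lam_l   = np.sqrt(np.trace(y @ y.conj().T).real)
    lam_phi = abs(mu) / M_Delta
    Gamma_ll   = lam_l**2   * M_Delta / (32.0 * np.pi)
    Gamma_phph = lam_phi**2 * M_Delta / (32.0 * np.pi)
    norm = lam_l**2 + lam_phi**2
    return Gamma_ll, Gamma_phph, lam_l**2 / norm, lam_phi**2 / norm
\end{verbatim}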
This minimal setup is, however, not enough for leptogenesis: to generate an asymmetry between triplet and anti-triplet decays, another heavy state must be added to the model that couples to the lepton and Higgs doublets. Examples of such states are additional scalar triplets, which induce a CP asymmetry in $\Delta / \bar \Delta$ decays through self-energy corrections, or right-handed neutrinos, which give rise to vertex corrections. If the additional particles are significantly heavier than the scalar triplet, they are not present in the thermal bath at the time of leptogenesis, and one can parametrize their effects~\cite{Hambye:2005tk} by the effective dimension-5 operators\footnote{In full generality, one should also consider the effective dimension-6 operators
\begin{equation*}
-\frac{1}{4}\, \frac{\eta_{\alpha\beta\gamma\delta}}{\Lambda^2}
\left( \ell_\alpha^{\mathsf{T}} C i \sigma^2 \vec{\sigma} \ell_\beta \right)\! \cdot\!
\left( \bar{\ell}_\gamma \vec{\sigma} i \sigma^2 C \bar{\ell}_\delta^{\mathsf{T}} \right)\;,
\label{4_4lepton_operator}
\end{equation*}
which arise at tree level if the heavier particles are scalar triplets and at the one-loop level if they are right-handed neutrinos. These operators, which contribute to the flavor-dependent CP asymmetries $\epsilon_{\alpha\beta}$
but not to the total CP asymmetry $\epsilon_\Delta \equiv \sum_{\alpha, \beta} \epsilon_{\alpha \beta}$, play a crucial role in the scenario of ``purely flavored leptogenesis,'' discussed in Refs.~\cite{Felipe:2013kk, Sierra:2014tqa}.
Given that they are suppressed by an additional power of $\Lambda$ and possibly also by a loop factor, their effects are typically subdominant in less specific scenarios, and we will omit them in the following.}
\begin{equation}
\frac{1}{4}\, \frac{\kappa_{\alpha\beta}}{\Lambda}\,
(\ell^{\mathsf{T}}_\alpha i \sigma^2 \phi)\, C\, (\phi^{\mathsf{T}} i\sigma^2 \ell_\beta)\: +\: \mbox{h.c.}\; ,
\label{4_Weinberg_operator}
\end{equation}
which are suppressed by $\Lambda \gg M_\Delta$. These operators induce a new contribution to neutrino masses proportional to $\kappa_{\alpha\beta} / \Lambda$, so that the total neutrino mass matrix can be written
\begin{equation}
M_\nu\ =\ M_{\nu}^{\Delta} + M_{\nu}^H\; , \quad
(M_{\nu}^\Delta)_{\alpha\beta} \ =\ \frac{\lambda_{\phi} y_{\alpha\beta}}{4 M_\Delta}\, v^2\; , \quad
(M_{\nu}^H)_{\alpha\beta} \ =\ \frac{\kappa_{\alpha\beta}}{4 \Lambda}\, v^2\; .
\label{4_mnu}
\end{equation}
The CP asymmetries between triplet and anti-triplet decays arise from the interference between a tree-level diagram and a one-loop diagram with insertion of the operators in~\eref{4_Weinberg_operator}. They are given
by~\cite{Hambye:2005tk, Lavignac:2015gpa}
\begin{equation}
\epsilon_{\phi}\ \equiv\ 2\ \frac{\Gamma(\Delta \rightarrow \phi\phi)-\Gamma(\bar \Delta \rightarrow \bar{\phi}\bar{\phi})}
{\Gamma_\Delta+\Gamma_{\bar \Delta}}\ =\
\frac{1}{2\pi}\frac{M_\Delta}{v^2}\sqrt{B_\ell B_{\phi}}\
\frac{\mbox{Im} \left[ \mbox{tr} (M^{\Delta\dagger}_\nu M^H_{\nu}) \right]}{\bar{M}_\nu^\Delta}\; ,
\label{4_epsilon_Delta}
\end{equation}
\begin{align}
\epsilon_{\alpha\beta}\ & \equiv\
\frac{\Gamma(\bar \Delta\rightarrow \ell_\alpha \ell_\beta)-\Gamma(\Delta\rightarrow \bar \ell_\alpha \bar \ell_\beta)}
{\Gamma_\Delta+\Gamma_{\bar \Delta}}\ \left( 1 + \delta_{\alpha \beta} \right) \nonumber \\
& =\, \frac{1}{2\pi}\frac{M_\Delta}{v^2}\sqrt{B_\ell B_{\phi}}\
\frac{\mbox{Im}\left[(M^{\Delta*}_\nu)_{\alpha\beta} (M^H_\nu)_{\alpha\beta}\right]}{\bar{M}_\nu^\Delta}\; ,
\label{4_epsilon_alphabeta}
\end{align}
where $(M^\Delta_\nu)_{\alpha\beta}$ and $(M^H_\nu)_{\alpha\beta}$ are defined in~\eref{4_mnu}, $\Gamma_\Delta = \Gamma_{\bar \Delta}$ is the total triplet decay rate, and
\begin{equation}
\bar{M}_\nu^\Delta\ \equiv\ \sqrt{\mathrm{tr}(M_\nu^{\Delta\dagger} M_{\nu}^\Delta)}\; .
\end{equation}
Unitarity and CPT invariance ensure that the CP asymmetry in decays into Higgs bosons $\epsilon_{\phi}$ is equal to the total CP asymmetry in leptonic decays $\sum_{\alpha, \beta} \epsilon_{\alpha \beta}$.
The first quantitative study of scalar triplet leptogenesis, in which flavor effects were omitted, was performed in Ref.~\cite{Hambye:2005tk}. Flavor effects were discussed in a flavor non-covariant approach in Refs.~\cite{Felipe:2013kk, Sierra:2014tqa}, and spectator processes were included in Ref.~\cite{Sierra:2014tqa}. Flavor-covariant Boltzmann equations were first presented in Ref.~\cite{Lavignac:2015gpa}.
\subsection{Flavor-covariant Boltzmann equations}
\label{4_sec_covariant_BEs}
In order to describe flavor effects in a covariant way, we introduce, as was done for the type I seesaw case in Ref.~\cite{Barbieri:1999ma}, a $3 \times 3$ matrix in lepton flavor space~\cite{Dolgov:1980cq, Stodolsky:1986dx, Raffelt:1992uj, Sigl:1992fn} --- the matrix of flavor asymmetries $[Y_{\Delta \ell}]_{\alpha\beta}$. The diagonal entries of this matrix are the asymmetries $Y_{\Delta \ell_\alpha} \equiv (n_{\ell_\alpha} - \bar{n}_{\ell_{\alpha}})/s$ stored
in the lepton doublets $\ell_\alpha$, while its off-diagonal entries encode the quantum correlations between the different flavor asymmetries. Explicitly, one first defines the phase-space distribution functions $f_{\ell\alpha\beta}(\mathbf{p})$ and $\bar{f}_{\ell\alpha\beta}(\mathbf{p})$ as matrices in flavor space by~\cite{Sigl:1992fn}
\begin{subequations}
\begin{align}
\langle b_\alpha^\dagger(\mathbf{p}) b_\beta(\mathbf{p}') \rangle\ =\
(2\pi)^3 \delta^{(3)}(\mathbf{p}-\mathbf{p}') f_{\ell\alpha\beta}(\mathbf{p})\; ,
\label{4_rho} \\
\langle d_\beta^\dagger(\mathbf{p}) d_\alpha(\mathbf{p}') \rangle\ =\
(2\pi)^3 \delta^{(3)}(\mathbf{p}-\mathbf{p}') \bar{f}_{\ell\alpha\beta}(\mathbf{p})\; ,
\label{4_rhobar}
\end{align}
\end{subequations}
where $b_\alpha^\dagger$ (resp. $d_\alpha^\dagger$) is the operator that creates a lepton (anti-lepton) doublet of flavor $\alpha$ (the opposite order of the flavor indices $\alpha$ and $\beta$
in~\eref{4_rho} and~\eref{4_rhobar} is required by flavor covariance). The matrix of flavor asymmetries is then given by
\begin{equation}
[Y_{\Delta \ell}]_{\alpha\beta}\ \equiv\ \frac{n_{\ell\alpha\beta} - \bar n_{\ell\alpha\beta}}{s}\;,
\label{4_Deltal_def}
\end{equation}
where the (matrix) number densities $n_{\ell\alpha\beta}$ and $\bar n_{\ell\alpha\beta}$ are obtained by integrating $f_{\ell\alpha\beta}(\mathbf{p})$ and $\bar{f}_{\ell\alpha\beta}(\mathbf{p})$ over phase space (with a factor $g_\ell = 2$ due to the $SU(2)_L$ degeneracy):
\begin{equation}
n_{\ell\alpha\beta}\
=\ 2 \int\! \frac{{\rm d}^3\mathbf{p}}{(2\pi)^3}\;f_{\ell\alpha\beta}(\mathbf{p})\; , \qquad
\bar n_{\ell\alpha\beta}\
=\ 2 \int\! \frac{{\rm d}^3\mathbf{p}}{(2\pi)^3}\;\bar{f}_{\ell\alpha\beta}(\mathbf{p})\; .
\end{equation}
With this definition, the matrix of flavor asymmetries transforms as $Y_{\Delta \ell} \to U^*Y_{\Delta \ell} U^{\mathsf{T}}$ under flavor rotations $\ell \to U \ell$, where $U$ is a $3 \times 3$ unitary matrix. We also need to define asymmetries for the Higgs doublet and scalar triplet:
\begin{equation}
Y_{\Delta\chi}\ \equiv\ \frac{n_\chi - \bar{n}_{\chi}}{s}\;, \qquad \qquad \chi\ =\ \phi, \Delta\; ,
\label{4_Deltaphi_def}
\end{equation}
where $n_\chi$ and $\bar{n}_{\chi}$ are the number densities of the scalars $\chi$ and of their anti-particles:
\begin{equation}
n_\chi\
=\ g_\chi \int\! \frac{{\rm d}^3\mathbf{p}}{(2\pi)^3}\; f_\chi (\mathbf{p})\; , \qquad
\bar{n}_{\chi}\
=\ g_\chi \int\! \frac{{\rm d}^3\mathbf{p}}{(2\pi)^3}\; f_{\bar \chi} (\mathbf{p})\; ,
\end{equation}
with $g_\chi = 2$ for Higgs doublets and $g_\chi = 3$ for scalar triplets.
The time evolution of the matrix of flavor asymmetries is governed by a flavor-covariant Boltzmann equation of the form
\begin{equation}
sHz\,\frac{{\rm d} [Y_{\Delta \ell}]_{\alpha\beta}}{{\rm d}z}\ =\
\left(\frac{Y_{\Delta}+\bar{Y}_{\Delta}}{Y_{\Delta}^{\rm eq}+\bar{Y}_{\Delta}^{\rm eq}}-1\right)\! \gamma_D\, \mathcal{E}_{\alpha\beta}
- \mathcal{W}_{\alpha\beta}\;,
\label{4_covariant_BE}
\end{equation}
where the first term on the right-hand side is the source term proportional to the CP-asymmetry matrix $\mathcal{E}_{\alpha\beta}$, and the second term is the washout term. Inside the parentheses, $Y_\Delta \equiv n_\Delta / s$ and $\bar Y_\Delta \equiv \bar n_\Delta / s$ are the triplet and anti-triplet yields, respectively, and \smash{$Y^\text{eq}_\Delta$} and \smash{$\bar Y^\text{eq}_\Delta$} are their equilibrium values. Flavor covariance requires that, under rotations $\ell \to U \ell$, the matrices $\mathcal{E}$ and $\mathcal{W}$ transform in the same way as $Y_{\Delta \ell}$, namely as $\mathcal{E} \to U^* \mathcal{E} U^{\mathsf{T}}$ and $\mathcal{W} \to U^* \mathcal{W} U^{\mathsf{T}}$.
The Boltzmann equation,~\eref{4_covariant_BE}, can be derived using the CTP formalism~\cite{Schwinger:1960qe, Keldysh:1964ud, Bakshi:1962dv, Bakshi:1963bn} (see also \sref{sec:methods_fieldtheory}), in a similar way to the flavored quantum Boltzmann equations of type I seesaw leptogenesis~\cite{Buchmuller:2000nd, DeSimone:2007gkc, Garny:2009rv, Garny:2009qn, Cirigliano:2009yt, Beneke:2010wd, Beneke:2010dz, Anisimov:2010dk, Dev:2014wsa}. In the CTP approach, particle densities are replaced by Green's functions defined on a closed path in the complex time plane going from an initial instant $t=0$ to $t=+\infty$ and back. Starting from the Schwinger-Dyson equations satisfied by the lepton-doublet Green's functions, one arrives, after some manipulations, at the quantum Boltzmann equation (see Ref.~\cite{Lavignac:2015gpa} for details)
\begin{align}
sHz\, \frac{{\rm d}[Y_{\Delta \ell}]_{\alpha\beta}}{{\rm d}z}\, =\, & - \int\! {\rm d}^3\mathbf{w} \int_0^t {\rm d}t_w\;
\mathrm{tr} \left[ \Sigma^>_{\ell\beta\gamma} (x,w) S^<_{\ell\gamma\alpha} (w,x)
- \Sigma^<_{\ell\beta\gamma} (x,w) S^>_{\ell\gamma\alpha} (w,x) \right. \nonumber \\
& \left. -\, S^>_{\ell\beta\gamma} (x,w) \Sigma^<_{\ell\gamma\alpha} (w,x)
+ S^<_{\ell\beta\gamma} (x,w) \Sigma^>_{\ell\gamma\alpha} (w,x) \right]\;,
\label{4_QBE_Deltal}
\end{align}
where $S^<_{\ell\alpha\beta} (x,y)$ and $S^>_{\ell\alpha\beta} (x,y)$ are lepton-doublet Green's functions path-ordered along the closed time contour, and $\Sigma^<_{\ell\alpha\beta} (x,y)$ and $\Sigma^>_{\ell\alpha\beta} (x,y)$ are self-energies. The expansion of the Universe has been taken into account by making the replacement $\frac{{\rm d}}{{\rm d}t} \to sHz \frac{{\rm d}}{{\rm d}z}$ on the left-hand side of~\eref{4_QBE_Deltal}. Since we are not interested in quantum effects, we take the classical limit of~\eref{4_QBE_Deltal} by extending the time integral to infinity, which amounts to keeping only the contribution of on-shell intermediate states in the self-energy functions.
In this way, we obtain the semi-classical, flavor-covariant Boltzmann equation
\begin{equation}
sHz\,\frac{{\rm d} [Y_{\Delta \ell}]_{\alpha\beta}}{{\rm d}z}\ =\
\left(\frac{Y_{\Delta}+\bar{Y}_{\Delta}}{Y_{\Delta}^{\rm eq}+\bar{Y}^{\rm eq}_{\Delta}}-1\right)\! \gamma_D\, \mathcal{E}_{\alpha\beta}
- \mathcal{W}^D_{\alpha\beta} - \mathcal{W}^{\ell \phi}_{\alpha\beta}
- \mathcal{W}^{4\ell}_{\alpha\beta} - \mathcal{W}^{\ell\Delta}_{\alpha\beta}\;,
\label{4_BE_Delta_l}
\end{equation}
in which the terms $\mathcal{W}^D_{\alpha\beta}$, $\mathcal{W}^{\ell \phi}_{\alpha\beta}$, $\mathcal{W}^{4\ell}_{\alpha\beta}$ and $\mathcal{W}^{\ell\Delta}_{\alpha\beta}$ correspond to different washout processes, to be specified below.
The source term of~\eref{4_BE_Delta_l} arises from the two-loop self-energy diagrams of~\fref{4_fig_self-energy}, which provide the flavor-covariant CP-asymmetry matrix
\begin{equation}
\mathcal{E}_{\alpha\beta}\, =\, \frac{1}{4\pi i}\frac{M_\Delta}{v^2}\sqrt{B_\ell B_\phi}\
\frac{(M^H_\nu M_\nu^{\Delta\dagger} - M_\nu^\Delta M^{H\dagger}_\nu)_{\alpha\beta}}{\bar{M}_\nu^\Delta}\;.
\label{4_E_alphabeta}
\end{equation}
It is straightforward to check that the trace of this matrix is equal to the total CP asymmetry between triplet and anti-triplet decays: $\mathrm{tr}\, \mathcal{E} = \sum_{\alpha, \beta} \epsilon_{\alpha \beta} = \epsilon_\Delta$.
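The structure of~\eref{4_E_alphabeta} is easily implemented and checked numerically; in the sketch below (our own), note that the matrix combination in the numerator is anti-Hermitian, so that division by $i$ renders $\mathcal{E}$ Hermitian and $\mathrm{tr}\,\mathcal{E}$ automatically real.
\begin{verbatim}
# Sketch of the CP-asymmetry matrix above (our own illustration).
import numpy as np

def cp_asymmetry_matrix(Mnu_D, Mnu_H, M_Delta, B_l, B_phi, v=246.0):
    Mbar = np.sqrt(np.trace(Mnu_D.conj().T @ Mnu_D).real)
    comm = Mnu_H @ Mnu_D.conj().T - Mnu_D @ Mnu_H.conj().T  # anti-Hermitian
    return (M_Delta / (4.0j * np.pi * v**2)) \
           * np.sqrt(B_l * B_phi) * comm / Mbar
# tr(E) reproduces the total CP asymmetry epsilon_Delta and is real,
# since E is Hermitian by construction.
\end{verbatim}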
\begin{figure}[t!]
\centerline{\includegraphics[scale=0.1]{1loop_self_energy.pdf}\qquad\quad
\includegraphics[scale=0.1]{2loop_self_energy_a.pdf}\quad
\includegraphics[scale=0.1]{2loop_self_energy_b.pdf}}
\hskip 1.8cm $(a)$ \hskip 6.1cm $(b)$
\vskip .1cm
\caption{$(a)$ One-loop contribution to the lepton doublet self-energy $\Sigma_{\ell\beta\alpha}$. $(b)$ Two-loop contributions to the lepton doublet self-energy
giving rise to the CP asymmetry $\mathcal{E}_{\alpha\beta}$.}
\label{4_fig_self-energy}
\end{figure}
The washout term $\mathcal{W}^D_{\alpha\beta}$ is associated with triplet and anti-triplet inverse decays. It arises from the one-loop contribution to the lepton doublet self-energy, shown in~\fref{4_fig_self-energy}, and is given by
\begin{align}
\mathcal{W}^D_{\alpha\beta}\ & =\ \frac{2 B_\ell}{\mathrm{tr}(yy^\dagger)}
\bigg[ (yy^\dagger)_{\alpha\beta}\, \frac{Y_{\Delta_{\Delta}}}{Y_{\Delta}^{\rm eq}+\bar{Y}_{\Delta}^{\rm eq}}\nonumber\\&\qquad +\:\frac{1}{4 Y_\ell^{\text{eq}}}
\left( 2 y [Y_{\Delta \ell}]^{\mathsf{T}} y^\dagger + yy^\dagger Y_{\Delta \ell} + Y_{\Delta \ell} yy^\dagger \right)_{\alpha\beta}\bigg]\! \gamma_D\;.
\label{4_W_D}
\end{align}
In~\eref{4_W_D}, $Y_{\Delta_{\Delta}} \equiv (n_\Delta - \bar{n}_{\Delta}) / s$ is the triplet asymmetry, $Y_\ell^{\text{eq}} \equiv n_\ell^{\text{eq}} / s$ and $\gamma_D$ is the total, thermally-averaged decay rate
of triplets and anti-triplets:
\begin{align}
\gamma_D\, =\, & \int\! \frac{{\rm d}^3\mathbf{p}}{(2\pi)^32 \omega_{\mathbf{p}}} \int\! \frac{{\rm d}^3\mathbf{k}}{(2\pi)^32 \omega_{\mathbf{k}}}
\int\! \frac{{\rm d}^3\mathbf{q}}{(2\pi)^32 \omega_{\mathbf{q}}}\; 3 \left( \lambda_\ell^2 + \lambda_\phi^2 \right) (k\cdot q) \nonumber \\
& \times (2\pi)^4 \delta^{(4)} (p-k-q) \left\lbrace f_\Delta^{\mathrm{eq}} (\mathbf{p})
+ \bar{f}_\Delta^{\mathrm{eq}} (\mathbf{p}) \right\rbrace\;.
\end{align}
The other washout terms are associated with $2 \to 2$ scattering processes and originate from two-loop contributions to the lepton doublet self-energy. $\mathcal{W}^{ \ell \phi}_{\alpha\beta}$ accounts for the washout of the flavor
asymmetries by the $\Delta L =2$ scatterings $\ell_\gamma \ell_\delta \leftrightarrow \bar \phi \bar \phi$ and $\ell_\gamma \phi \leftrightarrow \bar \ell_\delta \bar \phi$, and is given by
\begin{align}
\mathcal{W}^{ \ell \phi}_{\alpha\beta}\, =\ & 2 \left\lbrace \frac{1}{\mathrm{tr}(yy^\dagger)}
\left[ \frac{ \left( 2y [Y_{\Delta\ell}]^{\mathsf{T}} y^\dagger + yy^\dagger Y_{\Delta\ell}
+ Y_{\Delta\ell} yy^\dagger \right)_{\alpha\beta} }{4Y_\ell^\text{eq}}\,
+ \frac{Y_{\Delta \phi}}{Y_\phi^{\text{eq}}}\, (yy^\dagger)_{\alpha\beta} \right] \right.\!
\gamma_{\ell \phi}^\Delta \nonumber \\
& + \frac{1}{\mbox{Re} \left[ \mathrm{tr}(y\kappa^\dagger) \right]}
\left[ \frac{ \left( 2 y [Y_{\Delta\ell}]^{\mathsf{T}} \kappa^\dagger + y \kappa^\dagger Y_{\Delta\ell}
+ Y_{\Delta\ell} y \kappa^\dagger \right)_{\alpha\beta} }{4Y_\ell^\text{eq}}\,
+ \frac{Y_{\Delta \phi}}{Y_\phi^{\text{eq}}}\, (y \kappa^\dagger)_{\alpha\beta} \right]\!
\gamma_{\ell \phi}^\mathcal{I} \nonumber \\
& + \frac{1}{\mbox{Re} \left[ \mathrm{tr}(y\kappa^\dagger) \right]}
\left[ \frac{ \left( 2 \kappa [Y_{\Delta\ell}]^{\mathsf{T}} y^\dagger + \kappa y^\dagger Y_{\Delta\ell}
+ Y_{\Delta\ell} \kappa y^\dagger \right)_{\alpha\beta}}{4Y_\ell^\text{eq}}\,
+ \frac{Y_{\Delta \phi}}{Y_\phi^{\text{eq}}}\, (\kappa y^\dagger)_{\alpha\beta} \right]\!
\gamma_{\ell \phi}^\mathcal{I} \nonumber \\
& \left. + \frac{1}{\mathrm{tr}(\kappa\kappa^\dagger)}
\left[ \frac{ \left( 2 \kappa [Y_{\Delta\ell}]^{\mathsf{T}} \kappa^\dagger + \kappa \kappa^\dagger Y_{\Delta\ell}
+ Y_{\Delta\ell} \kappa \kappa^\dagger \right)_{\alpha\beta}}{4Y_\ell^\text{eq}}\,
+ \frac{Y_{\Delta \phi}}{Y_\phi^{\text{eq}}}\, (\kappa\kappa^\dagger)_{\alpha\beta} \right]\!
\gamma_{\ell \phi}^H \right\rbrace\;,
\label{4_W_lh}
\end{align}
in which $\gamma_{\ell \phi}^\Delta$ and $\gamma_{\ell \phi}^H$ are respectively the contributions of scalar-triplet exchange and of the $d=5$ operators in~\eref{4_Weinberg_operator} to the rate of $\Delta L =2$ scatterings $\gamma_{\ell \phi}$, and $\gamma_{\ell \phi}^\mathcal{I}$ is the interference term (more precisely, $\gamma_{\ell \phi}=\gamma_{\ell \phi}^\Delta+2\gamma_{\ell \phi}^\mathcal{I}+\gamma_{\ell \phi}^H$). The remaining washout terms $\mathcal{W}^{4\ell}_{\alpha\beta}$ and $\mathcal{W}^{\ell\Delta}_{\alpha\beta}$ are associated with $\Delta L = 0$ scatterings. Even though they do not violate lepton number, they modify the dynamics of leptogenesis
by redistributing the lepton asymmetry among the different flavors, thus affecting the value of the final $B-L$ asymmetry. For the washout term due to the lepton-lepton scatterings $\ell_\gamma\ell_\delta\leftrightarrow\ell_\rho\ell_\sigma$ and $\ell_\gamma\bar \ell_\rho\leftrightarrow\bar \ell_\delta\ell_\sigma$, one obtains
\begin{equation}
\mathcal{W}^{4\ell}_{\alpha\beta}\, =\, \frac{2}{\lambda^4_\ell}
\left[ \lambda^2_\ell\, \frac{ \left( 2 y [Y_{\Delta\ell}]^{\mathsf{T}} y^\dagger + yy^\dagger Y_{\Delta\ell}
+ Y_{\Delta\ell} yy^\dagger \right)_{\alpha\beta}}{4Y_\ell^\text{eq}}\,
- \frac{\mathrm{tr}(Y_{\Delta\ell} yy^\dagger)}{Y_\ell^{\text{eq}}}\, (yy^\dagger)_{\alpha\beta} \right]\! \gamma_{4\ell}\;,
\label{4_W_4l}
\end{equation}
while for the lepton-triplet scatterings $\ell_\gamma\Delta\leftrightarrow\ell_\delta\Delta$, $\ell_\gamma\bar \Delta\leftrightarrow\ell_\delta\bar \Delta$ and $\ell_\gamma\bar \ell_\delta\leftrightarrow\Delta\bar \Delta$:
\begin{equation}
\mathcal{W}^{\ell\Delta}_{\alpha\beta}\, =\, \frac{1}{\mathrm{tr}(yy^\dagger yy^\dagger)\, 2 Y_\ell^\text{eq}}\,
\left( yy^\dagger yy^\dagger Y_{\Delta\ell} - 2 yy^\dagger Y_{\Delta\ell} yy^\dagger
+ Y_{\Delta\ell} yy^\dagger yy^\dagger \right)_{\alpha\beta} \gamma_{\ell\Delta}\;.
\label{4_W_ellDelta}
\end{equation}
The scattering rates $\gamma_{4\ell}$, $\gamma_{\ell\Delta}$ and the contributions $\gamma_{\ell \phi}^\Delta$, $\gamma_{\ell \phi}^\mathcal{I}$ and $\gamma_{\ell \phi}^H$ to $\gamma_{\ell \phi}$ are computed with the appropriate subtraction of on-shell intermediate states when necessary (see the discussion in Chapter~\cite{leptogenesis:A04}). Their expressions can be found in Ref.~\cite{Lavignac:2015gpa}.
Since the couplings $y_{\alpha \beta}$ and $\kappa_{\alpha \beta}$ transform as $(y, \kappa) \to U^* (y, \kappa)\, U^\dagger$ under flavor rotations $\ell \to U \ell$, one immediately sees from~\eref{4_E_alphabeta}, \eref{4_W_D}, \eref{4_W_lh}, \eref{4_W_4l} and~\eref{4_W_ellDelta} that the CP-asymmetry matrix $\mathcal{E}$ and the various washout terms transform as $(\mathcal{E}, \mathcal{W}) \to U^* (\mathcal{E}, \mathcal{W})\, U^{\mathsf{T}}$, as required by flavor covariance.
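This transformation property can be verified numerically. The following sketch (our own; all inputs are random illustrative matrices) checks the covariance of the washout term in~\eref{4_W_4l}.
\begin{verbatim}
# Numerical covariance check (our own sketch): under l -> U l one has
# y -> U* y U^dag and Y -> U* Y U^T, and W^{4l} must transform as
# W -> U* W U^T.
import numpy as np
from scipy.stats import unitary_group

def W4l(y, Y, Yleq=1.0, gamma=1.0):
    lam2 = np.trace(y @ y.conj().T).real
    yyd  = y @ y.conj().T
    brkt = (2 * y @ Y.T @ y.conj().T + yyd @ Y + Y @ yyd) / (4 * Yleq)
    return (2.0 / lam2**2) * (lam2 * brkt
             - np.trace(Y @ yyd) / Yleq * yyd) * gamma

rng = np.random.default_rng(1)
y = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
y = (y + y.T) / 2                    # triplet couplings are symmetric
Y = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Y = (Y + Y.conj().T) / 2             # asymmetry matrix is Hermitian
U = unitary_group.rvs(3, random_state=2)
lhs = W4l(U.conj() @ y @ U.conj().T, U.conj() @ Y @ U.T)
rhs = U.conj() @ W4l(y, Y) @ U.T
assert np.allclose(lhs, rhs)
\end{verbatim}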
In order to have a closed set of Boltzmann equations, one must supplement \eref{4_BE_Delta_l} with equations for $Y_{\Delta}+\bar{Y}_{\Delta}$ and $Y_{\Delta_{\Delta}}$ (an equation for $Y_{\Delta \phi}$ is not needed, as $Y_{\Delta \phi}$ can be expressed as a function\footnote{For instance, in the limit where all spectator processes (electroweak and QCD sphalerons, Standard Model Yukawa couplings) are neglected, which has been implicitly considered so far, one has $Y_{\Delta \phi} \ =\ \mathrm{tr} Y_{\Delta\ell} \:-\: 2 Y_{\Delta_{\Delta}}$ from hypercharge and baryon number conservation.} of $Y_{\Delta_{\Delta}}$ and $[Y_{\Delta\ell}]_{\alpha\beta}$):
\begin{subequations}
\begin{align}
sHz\,\frac{{\rm d}\big(Y_{\Delta}+\bar{Y}_{\Delta}\big)}{{\rm d}z}\ & =\ -\:\bigg[\frac{Y_{\Delta}+\bar{Y}_{\Delta}}{Y_{\Delta}^\text{eq}+\bar{Y}^{\rm eq}_{\Delta}}\:-\:1\bigg]\gamma_D\:
-\:2\bigg[ \bigg(\frac{Y_{\Delta}+\bar{Y}_{\Delta}}{Y_{\Delta}^\text{eq}+\bar{Y}^{\rm eq}_{\Delta}} \bigg)^{\! 2}\: -\: 1\bigg] \gamma_A\; ,
\label{4_BE_Sigma_Delta} \\
sHz\,\frac{{\rm d}Y_{\Delta_{\Delta}}}{{\rm d}z}\ & =\ -\:\frac{1}{2}\big[\mathrm{tr}(\mathcal{W}^D)-W^D_\phi\big]\;,
\label{4_BE_Delta_Delta}
\end{align}
\end{subequations}
where the first and second terms in~\eref{4_BE_Sigma_Delta} are due to triplet/anti-triplet decays and to triplet-anti-triplet annihilations, respectively, and the term $W^D_\phi$ in~\eref{4_BE_Delta_Delta} is associated with the decays $\Delta \to \phi \phi$, $\bar \Delta \to \bar \phi \bar \phi$ and with their inverse decays:
\begin{equation}
W^D_\phi\, =\, 2B_\phi\left(\frac{Y_{\Delta \phi}}{Y_\phi^\text{eq}}-\frac{Y_{\Delta_{\Delta}}}{Y_{\Delta}^\text{eq}+\bar{Y}_{\Delta}^\text{eq}}\right)\! \gamma_D\;.
\label{4_W^D_phi}
\end{equation}
Using~\eref{4_W_D} and~\eref{4_W^D_phi}, the Boltzmann equation for $Y_{\Delta_{\Delta}}$ can be rewritten as
\begin{equation}
sHz\,\frac{{\rm d}Y_{\Delta_{\Delta}}}{{\rm d}z}\ =\ -\left(\frac{Y_{\Delta_{\Delta}}}{Y_{\Delta}^\text{eq}+\bar{Y}_{\Delta}^\text{eq}}
+B_\ell\frac{\mathrm{tr}(yy^\dagger Y_{\Delta \ell})}{\lambda_\ell^2 Y_\ell^\text{eq}}
-B_\phi\frac{Y_{\Delta \phi}}{Y_\phi^\text{eq}}\right)\! \gamma_D\;.
\label{4_BE_Delta_Delta_bis}
\end{equation}
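In practice, this closed set of equations is integrated numerically. A convenient pattern (our own schematic sketch, with toy placeholder source and washout functions rather than the full rates derived above) is to pack the complex matrix $[Y_{\Delta\ell}]_{\alpha\beta}$ into a real state vector for a standard stiff ODE solver.
\begin{verbatim}
# Schematic integration pattern (our own sketch; source/washout are toy
# placeholders, not the full rates above): evolving a matrix-valued
# Boltzmann equation by packing the complex 3x3 matrix Y into reals.
import numpy as np
from scipy.integrate import solve_ivp

N = 3
def pack(Y):   return np.concatenate([Y.real.ravel(), Y.imag.ravel()])
def unpack(v): return v[:N*N].reshape(N, N) + 1j * v[N*N:].reshape(N, N)

S = np.diag([1.0, 0.5, 0.1]).astype(complex)   # toy Hermitian source
def source(z):     return S * np.exp(-z)
def washout(z, Y): return 0.3 * z * Y          # toy relaxation term

def rhs(z, v):
    Y = unpack(v)
    # toy units with s H = 1, so that  dY/dz = (source - washout)/z
    return pack((source(z) - washout(z, Y)) / z)

sol = solve_ivp(rhs, (0.1, 50.0), pack(np.zeros((N, N), complex)),
                method="LSODA", rtol=1e-8, atol=1e-12)
Y_final = unpack(sol.y[:, -1])
\end{verbatim}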
\subsection{Flavor regimes and spectator processes}
\label{4_sec_regimes}
In deriving the flavor-covariant Boltzmann equation,~\eref{4_BE_Delta_l}, we assumed that the quantum correlations between the different lepton flavors are not affected by charged-lepton Yukawa interactions, which, strictly speaking, is true only above $T = 10^{12}\, \mbox{GeV}$ (see~\sref{sec_regimes}). At lower temperatures, the scatterings induced by charged-lepton Yukawa couplings can no longer be neglected, and their effects must be taken into account by appropriate terms on the right-hand side of~\eref{4_BE_Delta_l}. Alternatively, one can neglect the quantum correlations between lepton flavors that these processes, when they are sufficiently fast, tend to destroy.
For instance, below $T = 10^{12}\, \mbox{GeV}$, the tau Yukawa coupling is in equilibrium and drives the $(e,\tau)$ and $(\mu,\tau)$ entries of $Y_{\Delta\ell}$ to zero. The relevant dynamical variables in the temperature range
$10^{9}\, \mbox{GeV} < T < 10^{12}\, \mbox{GeV}$ are therefore $Y_{\Delta\ell_{\tau}}$ (the asymmetry stored in the tau lepton doublet) and the $2 \times 2$ matrix $[Y_{\Delta \ell}^0]_{\alpha \beta}$ (the flavor asymmetries stored in $\ell_e$ and $\ell_\mu$ and their quantum correlations). Accordingly, \eref{4_BE_Delta_l} must be replaced by two separate Boltzmann equations for $Y_{\Delta \ell_{\tau}}$ and $[Y_{\Delta \ell}^0]_{\alpha \beta}$, the second one being covariant with respect to rotations in the ($\ell_e$, $\ell_\mu$) flavor space. Below $T = 10^9\, \mbox{GeV}$, the muon Yukawa coupling also enters equilibrium and destroys the correlations between the $e$ and $\mu$ flavors. The Boltzmann equation,~\eref{4_BE_Delta_l}, then reduces to three equations for the flavor asymmetries $Y_{\Delta \ell_{\alpha}}$ ($\alpha = e, \mu, \tau$).
Finally, the effect of spectator processes~\cite{Buchmuller:2001sr, Nardi:2005hs}, which affect the dynamics of leptogenesis even though they do not violate lepton number, must be taken into account~\cite{Sierra:2014tqa}.
Working in the usual approximation that, in a given temperature range, each of these reactions is either negligible or in equilibrium, one obtains relations among the various particle asymmetries in the plasma. Using these relations, one can write the Boltzmann equations solely in terms of asymmetries that are conserved by all spectator processes relevant in the temperature range considered. These asymmetries are $Y_{\Delta_{\Delta}}$, the $3 \times 3$ and $2 \times 2$ flavor-covariant matrices
\begin{equation}
Y_{\Delta_{\alpha\beta}}\ \equiv\ \frac 1 3 \,Y_{\Delta B}\, \delta_{\alpha\beta} - [Y_{\Delta\ell}]_{\alpha\beta}
\quad\ \text{and}\ \quad
Y^0_{\Delta_{\alpha\beta}}\, \equiv\, \frac 1 3 Y_{\Delta B}\, \delta_{\alpha\beta} - [Y^0_{\Delta \ell}]_{\alpha\beta}
\end{equation}
(relevant in the temperature regimes $T > 10^{12}\, \mbox{GeV}$ and $10^9\, \mbox{GeV} < T < 10^{12}\, \mbox{GeV}$, respectively), which are conserved by all spectator processes except charged-lepton Yukawa interactions,
and
\begin{equation}
Y_{\Delta_\alpha}\ \equiv\ Y_{\Delta B/3-L_{\alpha}}\ =\ \frac 1 3\,Y_{\Delta B} - Y_{\Delta\ell_{\alpha}}-Y_{\Delta e_{R\alpha}}\;,
\end{equation}
which are preserved by all SM interactions. In addition to $Y_{\Delta}+\bar{Y}_{\Delta}$ and $Y_{\Delta_{\Delta}}$, the dynamical variables appearing in the Boltzmann equations (after making use of the equilibrium relations)
are $Y_{\Delta_{\alpha\beta}}$ above $T = 10^{12}\, \mbox{GeV}$, $(Y^0_{\Delta_{\alpha\beta}}, Y_{\Delta_\tau})$ between $T = 10^9\, \mbox{GeV}$ and $T = 10^{12}\, \mbox{GeV}$, and $(Y_{\Delta_e}, Y_{\Delta_\mu}, Y_{\Delta_\tau})$ below $T = 10^9\, \mbox{GeV}$.
The expressions for the Boltzmann equations valid in each temperature regime, with proper inclusion of the spectator processes, can be found in Ref.~\cite{Lavignac:2015gpa}.
\subsection{The relevance of flavor effects}
\label{4_sec_covariance}
A remarkable property of scalar triplet leptogenesis, as opposed to leptogenesis in the type I seesaw framework, is that lepton flavor effects are relevant in all temperature regimes. In particular, there is no well-defined single-flavor approximation in scalar triplet leptogenesis. The basic reason for this is that the scalar triplet couples to a pair of leptons rather than to a specific combination of lepton flavors. By contrast, in the leptogenesis scenario with right-handed neutrinos, the couplings of the lightest singlet neutrino $N_1$ can be written as
\begin{equation}
- \sum_\alpha \lambda_{\alpha 1} \bar \ell_\alpha \phi^c N_1 \:+\: \mbox{h.c.}\ =\
-\:\lambda_{N_1} \bar \ell_{N_1\,}\! \phi^c N_1 \:+\: \mbox{h.c.}\; ,
\end{equation}
where $\ell_{N_1} \equiv \sum_\alpha \lambda^*_{\alpha 1} \ell_\alpha / \lambda_{N_1}$ and $\lambda_{N_1} \equiv \sqrt{\sum_\alpha |\lambda_{\alpha 1}|^2}\, $. Assuming hierarchical right-handed neutrinos, so that the heavier singlet neutrinos $N_2$ and $N_3$ are not present in the plasma when $N_1$ starts to decay (and neglecting the $\Delta L=2$ scattering processes mediated by $N_2$ and $N_3$), the coherence of $\ell_{N_1}$ is preserved
as long as the scatterings induced by the charged-lepton Yukawa couplings remain out of equilibrium, i.e.~in the temperature regime $T > 10^{12}\, \mbox{GeV}$. Leptogenesis can then be described in terms of a single lepton flavor\footnote{An exception to this statement is when the lepton asymmetries generated in $N_2$ and $N_3$ decays have not been completely washed out before the out-of-equilibrium decays of $N_1$ start to occur.} --- hence the name {\it single-flavor approximation}. This can be understood in more technical terms by going to the flavor basis $(\ell_{N_1}, \ell_{\perp 1}, \ell_{\perp 2})$, where $\ell_{\perp 1}$ and $\ell_{\perp 2}$ are two directions perpendicular to $\ell_{N_1}$ in flavor space. When the charged-lepton Yukawa couplings and the washout terms mediated by $N_2$ and $N_3$ are switched off, the Boltzmann equation for $[Y_{\Delta\ell}]_{11} \equiv Y_{\Delta\ell_{N_1}}$ becomes independent of the other entries of the matrix $[Y_{\Delta\ell}]_{\alpha\beta}$, and the source terms for $Y_{\Delta\ell_{\perp1}}$ and $Y_{\Delta\ell_{\perp 2}}$ vanish. Analogously, in the temperature regime $10^9\, \mbox{GeV} < T < 10^{12}\, \mbox{GeV}$, where the tau Yukawa coupling is in equilibrium but the muon and electron ones are not, leptogenesis can be described in terms of the flavor asymmetries $Y_{\Delta\ell_{\tau}}$ and $Y_{\Delta\ell_{0}}$ (where $\ell_0 \propto \lambda^*_{e 1} \ell_e + \lambda^*_{\mu 1}\ell_\mu$), provided that $N_2$ and $N_3$ play a negligible role in the generation and washout of the lepton asymmetry.
In scalar triplet leptogenesis, one may formally define a single-flavor approximation by making the substitutions $[Y_{\Delta\ell}]_{\alpha\beta} \to Y_{\Delta\ell}$, $y_{\alpha\beta} \to \lambda_\ell$, $\kappa_{\alpha\beta} \to \lambda_\kappa \equiv \sqrt{\mbox{tr}(\kappa \kappa^\dagger)}$ and $\mathcal{E}_{\alpha\beta} \to \epsilon_\Delta$ in~\eref{4_BE_Delta_l} and~\eref{4_BE_Delta_Delta_bis}, but the resulting Boltzmann equations\footnote{These equations are the ones that were derived and used in the first quantitative study of scalar triplet leptogenesis~\cite{Hambye:2005tk}, which did not include flavor effects.} cannot be obtained as limits of the flavor-covariant ones. As a consequence, neglecting flavor effects in scalar triplet leptogenesis does not, in general, provide a good approximation to the flavor-covariant computation, even above $T = 10^{12}\, \mbox{GeV}$.
This is a clear difference from the standard leptogenesis scenario with hierarchical right-handed neutrinos. The analogue of the single-flavor approximation of the type I seesaw case is in fact a ``three-flavor approximation'' in which flavor effects still play a prominent role. Namely, in the basis where the triplet couplings to leptons are flavor diagonal, the Boltzmann equations for the diagonal entries of the matrix $[Y_{\Delta\ell}]_{\alpha\beta}$ become independent
of the off-diagonal ones when the contribution of the dimension-5 operators in~\eref{4_Weinberg_operator} to the $\Delta L = 2$ scatterings in~\eref{4_W_lh} vanishes. Equation~\eqref{4_BE_Delta_l} may then be replaced by three Boltzmann equations for the flavor asymmetries $Y_{\Delta\ell_1}$, $Y_{\Delta\ell_2}$ and $Y_{\Delta\ell_3}$, where the $\ell_i$ define the basis of flavor space in which the couplings $y_{\alpha \beta}$ are diagonal. It should be clear that the three-flavor approximation is valid only in this particular basis; in any other basis, the diagonal and off-diagonal entries of $[Y_{\Delta\ell}]_{\alpha\beta}$ are coupled. Furthermore, the flavor-covariant Boltzmann equations must be used as soon as the contribution of the operators in~\eref{4_Weinberg_operator} to $\Delta L = 2$ scatterings is sizable. Finally, between $T = 10^9\, \mbox{GeV}$ and $T = 10^{12}\, \mbox{GeV}$, there is no flavor basis in which~\eref{4_BE_Delta_l} can be replaced by Boltzmann equations for ``diagonal'' flavor asymmetries, even when $\Delta L = 2$ scatterings are negligible. The use of the flavor-covariant formalism involving the $2 \times 2$ matrix $[Y^0_{\Delta\ell}]_{\alpha\beta}$ is therefore unavoidable in this regime.
\subsection{Quantitative impact of flavor effects}
\label{4_sec_pheno}
Let us now illustrate the relevance of flavor effects by means of some numerical examples. Given the large number of parameters involved, we shall concentrate on two suitably chosen Ans\"atze. We can take as independent parameters the scalar triplet mass $M_\Delta$ and its couplings to Higgs doublets ($\lambda_\phi$) and to lepton doublets ($y_{\alpha \beta}$). Once values for these parameters and for the neutrino parameters are chosen (including the as-yet-unknown mass ordering, the lightest neutrino mass and the phases of the PMNS matrix), the coefficients $\kappa_{\alpha \beta} / \Lambda$ of the effective dimension-5 operators in~\eref{4_Weinberg_operator} are completely fixed by the neutrino mass formula in~\eref{4_mnu}. For definiteness, we work in the charged-lepton mass eigenbasis, in which the neutrino mass matrix takes the form $M_\nu = U_{\nu}^*\, \mbox{diag}(m_1, m_2, m_3)\, U_{\nu}^\dagger$, where the $m_i$ ($i = 1, 2, 3$) are the neutrino masses and $U_\nu$ is the PMNS matrix. For the mixing angles and squared mass differences, we take values within $1 \sigma$ of the best fit to global neutrino data of Ref.~\cite{Gonzalez-Garcia:2014bfa}. Finally, we set all phases of the PMNS matrix to zero, assume a normal mass ordering and take the lightest neutrino mass to be $m_1 = 10^{-3}\, \mbox{eV}$ at the triplet mass scale.
For the triplet parameters, we choose the following Ans\"atze, defined in terms of the triplet contribution to the neutrino mass matrix $M_\nu^\Delta$:
\begin{itemize}
\item {\bf Ansatz 1:} $M_\nu^\Delta = i M_\nu$ \
\item {\bf Ansatz 2:} $M_\nu^\Delta = i \bar M_\nu\, U_\nu^* \left( \begin{array}{ccc} 0.949 & 0 & 0 \\
0 & 0.048 & 0 \\
0 & 0 & 0.312
\end{array} \right) U_\nu^\dagger\, $, \\ \\
where $\bar M_\nu \equiv \sqrt{\mathrm{tr} (M^\dagger_\nu M_\nu)} = \sqrt{\sum_i m^2_i}$.
\end{itemize}
Both Ans\"atze are characterized by $\bar M_\nu^\Delta = \bar M_\nu$. Since $[M_\nu^\Delta]_{\alpha \beta} = \lambda_\phi y_{\alpha \beta}\, v^2 / (4 M_\Delta)$, the hierarchical structure of the triplet couplings to leptons $y_{\alpha \beta}$ is completely determined in each case, while two parameters, which can be chosen to be $\lambda_\ell$ and $M_\Delta$, remain free. In Ansatz~1, the triplet couplings to leptons are proportional to the entries of the neutrino mass matrix, while, in Ansatz 2, the hierarchical structures of $y_{\alpha \beta}$ and $[M_\nu]_{\alpha \beta}$ are very different. Ansatz~1 also has the property of maximizing the total CP asymmetry $\epsilon_\Delta$.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.45]{Comp510p12align.pdf} \qquad
\includegraphics[scale=0.45]{Comp510p12xy.pdf}
\caption{Baryon-to-photon ratio $n_B/n_\gamma$ as a function of $\lambda_\ell$ for $M_\Delta=5\times10^{12}\, \mbox{GeV}$, assuming Ansatz~1 (left panel) or Ansatz~2 (right panel). The red lines show the result of the flavor-covariant computation involving the $3 \times 3$ matrix $[Y_{\Delta \ell}]_{\alpha\beta}$, with (solid red line) or without (dashed-dotted red line) spectator processes taken into account, while the blue lines correspond to the result of the single-flavor approximation, including spectator processes (blue dashed line) or not (blue dotted line). The branching ratios $B_\ell$ and $B_\phi$ are equal for $\lambda_\ell \simeq 0.15$. Figure taken from Ref.~\cite{Lavignac:2015gpa}.}
\label{4_fig_comp1}
\end{figure}
Figure~\ref{4_fig_comp1} shows the impact of lepton flavor effects and spectator processes on the generated baryon-to-photon ratio for Ansatz~1 (left panel) and Ansatz~2 (right panel). The triplet mass has been chosen to be $M_\Delta = 5 \times 10^{12}\, \mbox{GeV}$, so that most of the $B-L$ asymmetry is produced at $T > 10^{12}\, \mbox{GeV}$. The flavor-covariant computation involving the $3 \times 3$ matrix of flavor asymmetries
$[Y_{\Delta \ell}]_{\alpha \beta}$ is compared with the single-flavor approximation, with and without spectator processes. Flavor effects are sizable for practically all parameter values and typically lead to an enhancement of the generated baryon asymmetry by a factor of order one (up to an order of magnitude for Ansatz~2 with $\lambda_\ell \sim 0.03$). However, for small values of $\lambda_\ell$ (corresponding to $B_\ell \ll B_\phi$), the difference between
the flavor-covariant computation and the single-flavor approximation is much less significant. This can easily be understood by noting that, in this limit, the washout of the flavored lepton asymmetries, which is mainly due to the inverse decays $\ell_\alpha \ell_\beta \to \bar \Delta$ and $\bar \ell_\alpha \bar \ell_\beta \to \Delta$, becomes less important. Neglecting all washout terms in the Boltzmann equation~\eref{4_BE_Delta_l} and taking the trace over lepton flavors, one obtains
\begin{align}
&sHz\,\frac{{\rm d} [Y_{\Delta\ell}]_{\alpha\beta}}{{\rm d}z}\, =
\bigg(\frac{Y_{\Delta}+\bar{Y}_{\Delta}}{Y_\Delta^\text{eq}+\bar{Y}^{\rm eq}_{\Delta}}\:-\:1\bigg) \gamma_D\, \mathcal{E}_{\alpha\beta}\;,
\nonumber\\&\qquad \qquad \Longrightarrow \qquad
sHz\,\frac{{\rm d} Y_{\Delta\ell}}{{\rm d}z}\, =
\bigg(\frac{Y_{\Delta}+\bar{Y}_{\Delta}}{Y_\Delta^\text{eq}+\bar{Y}^{\rm eq}_{\Delta}}\:-\:1\bigg) \gamma_D \epsilon_\Delta\;,
\end{align}
which is the equation of the single-flavor approximation in the same limit. Flavor effects also tend to become relatively less important in the opposite limit $\lambda_\ell \gg 1$ (corresponding to $B_\ell \gg B_\phi$), because
the lepton flavor asymmetries are more efficiently washed out than for smaller values of $\lambda_\ell$, and the asymmetry generated in the Higgs sector becomes the dominant source of the final baryon-to-photon ratio~\cite{Hambye:2005tk, Lavignac:2015gpa}.
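To make the washout-free limit above concrete, the traced equation can be integrated numerically. The sketch below is purely schematic: the functions standing in for the triplet departure from equilibrium and for $\gamma_D/(sHz)$ are invented placeholders chosen only for their qualitative shape, not the reaction densities of the full computation.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

EPS_DELTA = 1e-6  # placeholder total CP asymmetry epsilon_Delta

def rhs(z, Y):
    # Toy shapes (assumptions, not real reaction densities): the triplet
    # departure from equilibrium peaks near z ~ 1, and the normalized
    # decay rate gamma_D / (s H z) is Boltzmann-suppressed at large z.
    departure = np.exp(-np.log(z)**2)
    gamma_over_sHz = z**2 * np.exp(-z)
    return [departure * gamma_over_sHz * EPS_DELTA]

sol = solve_ivp(rhs, (0.1, 50.0), [0.0], rtol=1e-8)
print("asymptotic Y_Delta_ell ~", sol.y[0, -1])
\end{verbatim}
In this limit the final asymmetry is simply proportional to $\epsilon_\Delta$, which is why the flavor-covariant and single-flavor results converge for $B_\ell \ll B_\phi$.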
\begin{figure}[!t]
\centering
\includegraphics[scale=0.45]{BAUalign.pdf} \qquad
\includegraphics[scale=0.45]{BAUxy.pdf}
\caption{Isocurves of the baryon-to-photon ratio $n_B/n_\gamma$ in the $(\lambda_\ell, M_\Delta)$ plane, obtained by performing the flavor-covariant computation including spectator processes, assuming Ansatz~1 (left panel) or Ansatz~2 (right panel). The shaded, colored areas correspond to the regions of the parameter space where the observed baryon asymmetry can be reproduced in the flavor-covariant computation (light red shading) or in the single-flavor approximation neglecting spectator processes (dark blue shading). The solid black line corresponds to $B_\ell = B_\phi$. Also shown are the regions where $\lambda_\phi$ is greater than $1$ or $4\pi$. Figure taken from Ref.~\cite{Lavignac:2015gpa}.}
\label{4_fig_M_Delta_dependence}
\end{figure}
Figure~\ref{4_fig_M_Delta_dependence} shows the dependence of the generated baryon asymmetry on $\lambda_\ell$ and $M_\Delta$ for Ansatz~1 (left panel) and Ansatz~2 (right panel). The isocurves of the baryon-to-photon ratio correspond to the flavor-covariant computation including spectator processes. The comparison of the two shaded, colored areas shows that the inclusion of flavor effects significantly enlarges the region of parameter space where successful scalar triplet leptogenesis is possible. For the Ans\"atze considered, the observed baryon-to-photon ratio can be reproduced for triplet masses as low as $4.4 \times 10^{10}\, \mbox{GeV}$, to be compared with $1.2\times10^{11}\, \mbox{GeV}$ in the approximation where flavor effects and spectator processes are neglected. These values are not absolute lower bounds, as different assumptions about the triplet parameters can lead to successful leptogenesis for lower triplet masses (for instance, Ref.~\cite{Hambye:2005tk} found a lower bound $M_\Delta > 2.8 \times 10^{10}\, \mbox{GeV}$ for $\bar M_\nu^\Delta = 0.001\, \mbox{eV} \ll \bar M_\nu$ in the single-flavor approximation).
\section{Importance of flavor in other models}
\label{sec:other}
Before concluding this chapter, we remark on the importance of flavor effects in other models of leptogenesis. We focus, in particular, on those models detailed in the other chapters of this review, and cross references are included where appropriate.
\paragraph{ARS mechanism.} If the sterile-neutrino Yukawa couplings are sufficiently small, successful leptogenesis can be achieved within type I seesaw scenarios at scales as low as $M \sim 1\,$--$\,100\ {\rm GeV}$, whilst at the same time satisfying the observational and experimental constraints on the SM neutrino masses. The smallness of these Yukawa couplings delays the thermalization of the sterile states, such that at least one of them can still be out of equilibrium at the onset of the electroweak phase transition. Their CP-violating oscillations are then able to distribute lepton asymmetry unevenly amongst the different flavors. These individual asymmetries can then be communicated to the charged leptons by any of the sterile neutrinos that are in equilibrium and reprocessed into baryon asymmetry by sphaleron processes. The resulting baryon asymmetry is protected from the eventual equilibration of the sterile states, since this occurs after the sphaleron processes have switched off. This scenario of baryogenesis via leptogenesis is known as the ARS mechanism, after Akhmedov, Rubakov and Smirnov~\cite{Akhmedov:1998qx} (see also Ref.~\cite{Asaka:2005pn}). In contrast to the scenarios described in the rest of this chapter, the ARS mechanism does not rely on the Majorana nature of the sterile neutrinos, and it therefore allows for successful leptogenesis also for Dirac-type neutrinos. Even if Majorana masses are present, the lepton number violating processes that they mediate are suppressed in the regime $T\gg M$ relevant to the ARS mechanism. With the exception of contributions to the asymmetry from thermally-induced $L$- and CP-violating decays of the Higgs doublet~\cite{Hambye:2016sby, Hambye:2017elz}, ARS leptogenesis is therefore a purely flavored scenario, and further discussions can be found in the dedicated Chapter~\cite{leptogenesis:A02} along with an overview of its experimental signatures in Chapter~\cite{leptogenesis:A05}.
\paragraph{Extended low-scale type II and type III leptogenesis.} The resonant enhancement of CP violation in type II (scalar triplet) and type III (fermion triplet) seesaw scenarios can be implemented through the addition of new scalars and fermions. Further discussion and references can be found in Sec. 4.2 of Chapter~\cite{leptogenesis:A05}.
\paragraph{Left-right symmetric models.} Further discussions of the embeddings of low-scale resonant scenarios in left-right-symmetric~\cite{Mohapatra:1974hk, Mohapatra:1974gc, Senjanovic:1975rk} extensions of the SM gauge groups ($SU(2)_L\times SU(2)_R\times U(1)_{B-L}$) can be found in Sec. 5.2 of Chapter~\cite{leptogenesis:A05}.
\paragraph{Type I soft leptogenesis.} Soft SUSY breaking terms can give rise to additional sources of CP violation, allowing leptogenesis to be realised in supersymmetric type I seesaw scenarios at temperatures $T\lesssim 10^9\ {\rm GeV}$ lower than the bound from gravitino over-production. Further details of type I soft leptogenesis and the importance of lepton flavor effects are discussed in Sec. 6.1 of Chapter~\cite{leptogenesis:A05}.
\paragraph{Flavor symmetries.} In order to predict the mixing angles and phases of the PMNS matrix, one can assume that the three generations of SM leptons form a triplet of a flavor symmetry group $G_f$, which may be taken together with a CP symmetry that acts non-trivially in flavor space. A comprehensive discussion of flavor symmetries and their implications for leptogenesis can be found in Chapter~\cite{leptogenesis:A06}.
\section{Conclusions}
\label{sec:conclusions}
In this chapter, we have highlighted the potential importance of accounting fully for flavor effects in order to obtain accurate estimates of the final lepton (and therefore baryon) asymmetry in scenarios of leptogenesis. Flavor correlations in the heavy-neutrino sector contribute to the source of the CP asymmetry, and flavor correlations in the charged-lepton sector are important for determining the washout of the lepton asymmetry. The effect on the latter can even allow for successful leptogenesis when total lepton number is conserved (or the violation of total lepton number is suppressed).
In the case of thermal leptogenesis based on the type I seesaw scenario, we have seen that the region of parameter space where the next-to-lightest RH neutrino dominates the production of the asymmetry is enhanced when charged-lepton flavor effects are taken into account. Moreover, once these effects are accounted for, only one scenario of thermal leptogenesis can successfully generate the observed asymmetry whilst remaining independent of the initial conditions: the tau $N_2$-dominated scenario, wherein the asymmetry is mostly produced by decays of the next-to-lightest heavy neutrino via the tau channel. In these flavored regimes, the evolution of the individual flavor asymmetries can be coupled by spectator effects, and this can expand and open up viable regions of parameter space for $N_2$-dominated scenarios.
In resonant leptogenesis, we have seen that coherences in the charged-lepton and heavy-neutrino sectors play significant and opposing roles in determining the final asymmetry. This is because, for the quasi-degenerate heavy-neutrino mass spectra relevant to these scenarios, flavor oscillations also contribute to the source of the CP asymmetry in addition to the flavor mixing that dominates for hierarchical mass spectra. Treating only coherences in the heavy-neutrino flavors but neglecting coherences amongst the charged-lepton flavors can overestimate the asymmetry by as much as a factor of 5. Doing the opposite, i.e.~treating only coherences in the charged-lepton flavors but neglecting coherences amongst the heavy-neutrino flavors, can instead underestimate the asymmetry by as much as a factor of 2. This motivates the use of fully flavor-covariant approaches that are able to yield rate equations for the matrices of charged-lepton and heavy-neutrino number densities. Such approaches can be realised both in semi-classical and field-theoretic descriptions of leptogenesis, and we have briefly reviewed these complementary methodologies.
Furthermore, for models of leptogenesis embedded in the type II seesaw scenario, we have seen that charged-lepton flavor effects are relevant in all temperature regimes, since the scalar triplet couples to a pair of lepton doublets. A flavor-covariant treatment then shows that accounting fully for these effects typically leads to an order-one enhancement of the asymmetry compared to a single-flavor approximation, where the latter may be justified for small triplet-lepton couplings.
Aside from having an important impact on the final asymmetry, flavor effects are also relevant to the testability of leptogenesis. Specifically, when flavor effects cannot be neglected, leptogenesis becomes sensitive to the phases of the PMNS matrix. Moreover, in low-scale resonant scenarios, some of the Yukawa couplings remain sizable, allowing such models to be directly testable in current and near-future experiments, including the LHC, as well as low-energy experiments looking for lepton flavor and lepton number violation.
\newpage
\section*{Acknowledgments}
We thank Emiliano Molinaro and Serguey Petcov for helpful comments. PDB acknowledges financial support from the STFC Consolidated Grant ST/L000296/1. The work of SL has been supported in part by the European Union Horizon 2020 Research and Innovation Programme under the Marie Sk\l odowska-Curie Grant Agreements No.~690575 and No.~674896. The work of PM is supported by STFC Grant No. ST/L000393/1 and a Leverhulme Trust Research Leadership Award. The work of DT is supported by a ULB postdoctoral fellowship and the Belgian Federal Science Policy (IAP P7/37). We gratefully acknowledge the hospitality of the Munich Institute for Astro- and Particle Physics (MIAPP) of the DFG cluster of excellence ``Origin and Structure of the Universe'', where this work has been initiated.
\section{Introduction}
Vector space models
have been the main driving force behind
progress in NLP. Most work in this area, either in the form of static or contextualised embeddings, has been based on co-occurrence statistics and largely driven by the distributional hypothesis \cite{Harris1954DistributionalS,Firth1957ASO}. This has also resulted in these representations seemingly capturing certain relational knowledge, such as word analogies \cite{Mikolov2013LinguisticRI,gittens2017skip}. In this context, \citet{chiang-etal-2020-understanding} found that the ability of word embeddings to evaluate analogies was not greatly impaired by removing
co-occurrences related to relational pairs. This suggests there are limits to
how the distributional hypothesis impacts the encoding of relational knowledge.
We extend this line of work by focusing on the relational knowledge of concepts and traits. We also move beyond English by translating the concepts and traits used in one of our datasets into Spanish.
\parados{Contributions:} \textbf{(1)} We show that removing co-occurrences of concepts and traits has no impact on the ability of semantic spaces to predict whether a pair of embeddings corresponds to a concept-trait pair, or to predict which traits a given concept has.
\textbf{(2)} We develop a freely available dataset that can be used for further trait-based relational knowledge analyses for English and Spanish.\footnote{\url{https://github.com/cardiffnlp/trait-concept-datasets}}
\section{Related work}
\parados{What models learn} Evaluation of \textit{neural} semantic spaces has focused on what knowledge they capture, with a slew of work showing that some knowledge of analogies can be seen by applying simple transformations \cite{Mikolov2013LinguisticRI,levy-goldberg-2014-linguistic,Arora2016ALV,Paperno2016WhenTW,gittens2017skip,ethayarajh-etal-2019-towards}. Others have investigated what syntactic information neural semantic spaces seem to capture, with most showing that they do capture something deeper than surface patterns \cite{linzen2016assessing,gulordava-etal-2018-colorless,giulianelli}. However, they fail to exhaustively capture syntactic phenomena and specifically have been shown to struggle with polarity \cite{futrell2018rnns,jumelet2018do} and certain \textit{filler-gap} dependencies \cite{wilcox2018what,chowdhury2018rnn}. Pretrained language models (PLMs) have been found to capture varying degrees of syntactic information \cite{peters2018dissecting,tenney2019bert,goldberg2019assessing,clark2019does}, however, they have also been shown to struggle to predict the grammaticality of sentences \cite{marvin2018targeted,warstadt2019neural} and seem to depend on fragile heuristics rather than anything deeper \cite{mccoy2019right}.
\parados{Relational knowledge} More specifically with respect to relational knowledge and semantic spaces, for some time now work has shown that semantic spaces could encode certain relational knowledge, e.g. knowledge of the relative positioning of geographical locations \cite{Louwerse2009LanguageEG}. Similarly, \citet{gupta-etal-2015-distributional} found that embeddings capture something of relational knowledge associated with countries and cities, e.g. how countries related to one another with respect to GDP. \citet{rubinstein-etal-2015-well} found that word embeddings captured some taxonomic relational knowledge but fared less well with respect to trait-based relational knowledge. Often analogy completion tasks are used to investigate what sort of relational knowledge a semantic space has captured with early work showing that simple linear transformations were enough to highlight analogies \cite{Mikolov2013EfficientEO,vylomova-etal-2016-take}. This method has drawn some criticism and has been challenged as a robust means of evaluating what relational knowledge models capture \cite{drozd-etal-2016-word,DBLP:conf/naacl/GladkovaDM16,schluter2018word,bouraoui-etal-2018-relation}. Attempts to evaluate what PLMs capture of relational knowledge have also been made, highlighting that these larger, more data-hungry models capture some but not all relational knowledge \cite{Forbes2019DoNL,bouraoui2020inducing}.
\parados{Patterns in data} However, all the work cited above focuses on \textit{what} models learn about relational knowledge and not \textit{how}, or rather what the salient signals are in the data used by these techniques that manifest in relational knowledge. Some work has been done in this direction, with \citet{Pardos2020AUM} showing co-occurrences are not necessary in their distributional model of courses to predict similar or related courses. \citet{chiang-etal-2020-understanding} evaluated this finding in neural semantic spaces, finding that the ability of a semantic space to complete analogies isn't impacted when removing co-occurrences.
It is important to understand what aspects of the data result in what models learn because without this semblance of interpretability, problematic biases can creep in, e.g. gender biases in Word2Vec \cite{bolukbasi2016man} or in BERT \cite{Bhardwaj2021InvestigatingGB}. Attempts have been made to mitigate certain biases in contextualised word embeddings \cite{Kaneko2021DebiasingPC}, but in order to do so, the biases have to be known. Also, \citet{shwartz-choi-2020-neural} discuss the issue of reporting bias in the data typically used in NLP, where rarer occurrences are more likely to be explicitly mentioned than common ones, which results in models that can generalise about under-reported phenomena but not temper the over-reported information. Therefore it is necessary to understand the nature of the data and how it impacts what models capture and how.
In this work, we aim to expand on the work of \citet{chiang-etal-2020-understanding} in two main ways. First, we do not use analogies and analogy completion to evaluate the impact concept-trait co-occurrences have on relational knowledge developed in neural semantic spaces, but instead use a dataset of different trait-based relations (e.g. \texttt{is-colour}, \texttt{has-component}) derived from the {\mcrae} and {\textsc{Norms}} feature datasets. This allows us to more directly evaluate the ability of models to predict relational knowledge by casting the evaluation as a simple classification task (both in a multi-class and a binary setting). And second, we extend the analysis by looking at Spanish data as well to evaluate whether the results extend beyond English.
\section{Methodology}
The methodology follows five sequential steps: the development of datasets that include concepts and their traits (Section \ref{secdatasets}); the selection and processing of large general-domain corpora (Section \ref{seccorpora}); the transformation of the selected corpora based on the concept-trait datasets to test our hypothesis (Section \ref{secremoving}); training of word embeddings on the original and adapted corpora (Section \ref{secembeddings}); and finally the evaluation of the embeddings based on the trait-based datasets (Section \ref{secclassifiers}).
\begin{table*}[tbph!]
\centering
\footnotesize
\tabcolsep=.055cm
\begin{tabular}{llr>{\raggedleft\arraybackslash}p{2.1em}cr}
\toprule
& trait type & \multicolumn{1}{c}{N$_C$} && N$_T$ &\multicolumn{1}{c}{Traits} \\
\midrule
\parbox[t]{2.5mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{{\mcrae}-EN}}}} & colour & 148 && 7 & green (32), brown (32), black (24), white (21), red (16), yellow (13), orange (10) \\
& components & 110 && 6 & handle (39), legs (19), wheels (14), leaves (14), seeds (13), doors (11) \\
& materials & 144 && 4 & metal (79), wood (43), cotton (11), leather (11)\\
& size \& shape & 234 && 4 & small (83), large (70), long (44), round (37)\\
& tactile & 117 && 7 & heavy (21), soft (19), furry (18), sharp (17), hard (16), juicy (16), slimy (10)\\
\midrule
\parbox[t]{2.5mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{{\textsc{Norms}}}}}} & colour & 133 & (78) & 5 & green (35), brown (32), white (30), black (22), yellow (14) \\
& components & 35 & (26) & 2 & handle (25), sugar (10)\\
& materials & 94 & (62) & 5 & metal (46), wood (16), water (11), paper (11), bones (10)\\
& size \& shape & 242 & (138) & 4 & small (109), large (73), long (31), round (29)\\
& tactile & 106 & (70) & 6 & heavy (28), sharp (26), liquid (14), light (13), juicy (13), soft (12)\\
\midrule
\parbox[t]{2.5mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{{\mcrae}-ES}}}} & colour & 140 && 7 & verde (31), marrón (31), blanco (21), negro (20), rojo (16), amarillo (12), naranja (9)\\
& components & 100 && 6 & mango (33), piernas (18), ruedas (14), hojas (14), semillas (11), puertas (10) \\
& materials & 131 && 4 & métal (72), madera (38), algodón (11), cuero (10)\\
& size \& shape & 216 && 4 & pequeño (75), grande (66), largo (41), redondo (34)\\
& tactile & 101 && 6 & pesado (19), suave (19), peludo (17), duro (16), afilado (16), jugoso (14)\\
\bottomrule
\end{tabular}
\caption{Dataset statistics: N$_C$ is the number of concepts, N$_T$ is the number of unique features, {\textsc{Norms}} N$_C$ includes unique count in parenthesis, and the number in parenthesis for traits is the number of concepts with that trait.}
\label{tab:dateset}
\end{table*}
\subsection{Datasets}\label{secdatasets}
The datasets were based on the {\mcrae} features dataset \cite{McRae2005SemanticFP}. This is a collection of semantic features associated with a large set of concepts (541) generated from features given by human participants. A secondary trait-based dataset was also collated for English based on the {\textsc{Norms}} dataset \cite{Devereux2014TheCF}. It was developed in the same way as {\mcrae} and is partially an extension of that dataset with 638 concepts. We wanted to avoid value judgements (such as \texttt{is-feminine}) and to collate more trait-based relations, that is, pairs of words related by an inherent attribute of a concept.
\parados{\mcrae-EN} The first step in developing the datasets used in this work was to collate certain features into subsets of similar traits. This was done in a partially manual way by splitting data into 5 subsets. Each feature in {\mcrae} has the number of participants who specified that feature for that concept, so initially a frequency cut of 10 was applied to the features. From this set, we observed a number of similar traits that broadly fit into trait categories. A series of simple heuristics were then applied to extract all potential concept-feature pairs for each subset. For some trait types this was trivial with the {\mcrae} dataset, e.g. colour relations could be found using the feature classification in {\mcrae} of \texttt{visual-colour}. The full details of the heuristics can be seen in Appendix \ref{sec:heuristics}.
This process resulted in 5 trait-based subsets: \textbf{colours}, \textbf{components}, \textbf{materials}, \textbf{size \& shape}, and \textbf{tactile}. From each subset, we removed duplicates (e.g. ambulance has the features \texttt{is-white}, \texttt{is-red}, and \texttt{is-orange} in the colour subset).\footnote{A multi-label version of these subsets is included at \url{https://github.com/cardiffnlp/trait-concept-datasets} for {\mcrae}-EN and {\textsc{Norms}}-EN.} And from the remaining concept-feature pairs, we kept only traits with at least 10 concepts to ensure a suitable number of instances per target in our evaluation. The resulting statistics associated with this dataset can be seen in the top section of Table \ref{tab:dateset}.
\parados{\mcrae-ES} The set of concepts and trait words that occur across all 5 subsets were manually translated. The translators consisted of one native English speaker with some knowledge of Spanish and one native Spanish speaker who is fluent in English.
As might be expected, issues occurred when undertaking the translation that required judgements to be made. When there was a one-to-many translation, we used the translation that was \textit{Iberian} if multiple translations were due to regional variants. Otherwise we chose the most common or most canonical. However, we also chose single-word alternatives to avoid multiword concepts when this wouldn't have resulted in using an obscure word. We also made some choices to avoid having duplicate/competing concepts, i.e.\ \textit{boat} was translated as \textit{barca} and \textit{ship} as \textit{barco}. Further, we tried to match the intended use in English, i.e. translated \textit{sledgehammer} to \textit{almádena} rather than the more generic term in Spanish, \textit{mazo}, as the heavy metal version is more standard in English. Otherwise we tried to use more generic options. A variety of resources were used to aid this, including bilingual dictionaries, Wikipedia, and RAE (Real Academia Española). Despite our best efforts to maintain as many concept-trait pairs as possible, certain concepts just don't work in Spanish, typically many-to-one translations, e.g. \textit{dove} translates to \textit{paloma}, which also denotes the common pigeon. A more common issue was the tendency to use multi-word expressions in Spanish for certain concepts, such as \textit{goldfish} (\textit{pez dorado}) and \textit{escalator} (\textit{escalera mecánica}) with no single-word alternatives. The statistics of the trait subsets for {\mcrae}-ES are shown in the bottom section of Table \ref{tab:dateset}.
\parados{\textsc{Norms}-EN} To make our experiments more robust, we also used the {\textsc{Norms}} dataset. In order to use this dataset, we manually classified features in this dataset based on the subsets from our {\mcrae} trait dataset. First, we cut the features in {\textsc{Norms}} that occurred less than 10 times and then took the set of remaining features and classified them as one of the five subsets and then automatically cast each concept-trait pair into their respective subset. We manually checked to see if any features not used had been erroneously omitted due to annotation issues and folded those features into the respective subsets. This entailed adding \texttt{is-liquid} and \texttt{is-furry} to the tactile subset after some consideration (with \texttt{is-furry} subsequently being removed due to the minimum frequency cut after removing duplicates). The resulting subsets had duplicate concepts removed and then a minimum frequency cut on the remaining features of 10. The statistics of the resulting subsets can be seen in the middle section of Table \ref{tab:dateset} with the number of new unique concepts added to each subset shown in parenthesis in the concept count (N$_C$) column.
\subsection{Corpora}\label{seccorpora}
\begin{table}[t!]
\centering
\small
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{p{1.6em}lcc}
\toprule
& Corpus & Sentences & Tokens \\
\midrule
\parbox[t]{5mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{\textbf{English}}}} &UMBC & 135M & 3.4B \\
&Wiki & 114M & 2.5B \\
&Wee-Wiki & 71M & 1.6B \\
\midrule
\parbox[t]{5mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{\textbf{Spanish}}}}&ES1B & 62M & 1.4B \\
&Wiki & 28M & 0.6B \\
&Wee-Wiki & 19M & 0.4B \\
\bottomrule
\end{tabular}
\caption{Basic statistics of corpora used.}
\label{tab:corpus-stats}
\end{table}
For the statistics of the corpora used see Table \ref{tab:corpus-stats}.
\parados{UMBC} The University of Maryland, Baltimore County (UMBC) webbase corpus is the resulting collection of paragraphs from a webcrawl in 2007 over millions of webpages \cite{han-etal-2013-umbc}.
\parados{ES1B} The Spanish Billion Words Corpus (ES1B) is a collection of unannotated sentences taken from the web which span different sources, from Europarl to books. It also includes data from a Wikipedia dump from 2015, so it has some crossover with the Spanish Wikipedia corpus \cite{cardellinoSBWCE}.
\parados{Wiki}
We used the English Wikipedia dump from 1st October 2021 and the Spanish Wikipedia dump from 1st January 2022. They were extracted and cleaned using the WikiExtractor tool from \citet{Wikiextractor2015}. This left document ID HTML tags in the data, which we removed with a simple heuristic.
\parados{Wee-Wiki}
Similar to the standard pre-processing of the Wikipedia data, but we also cut articles with very few views, as these tend to be stub articles and automatically generated articles. The idea behind this is to cultivate a \textit{cleaner} and more natural version of the data. We used Wikipedia's official viewing statistics for 1st December 2021.\footnote{\url{https://dumps.wikimedia.org/other/pageviews/2021/2021-12/}} Articles with fewer than 10 views were removed.
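A minimal sketch of this filtering step is given below; the whitespace-separated \texttt{project title views ...} layout and the field positions are assumptions about the pageview dump format.
\begin{verbatim}
from collections import defaultdict

MIN_VIEWS = 10

def load_view_counts(path, lang="en"):
    # Aggregate per-article view counts; desktop and mobile projects
    # (e.g. "en" and "en.m") are summed per title.
    views = defaultdict(int)
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) < 3 or not parts[2].isdigit():
                continue
            project, title, count = parts[0], parts[1], int(parts[2])
            if project.split(".")[0] == lang:
                views[title] += count
    return views

def keep_article(title, views):
    return views.get(title, 0) >= MIN_VIEWS
\end{verbatim}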
\begin{table*}
\centering
\small
\tabcolsep=.1cm
\begin{tabular}{cl rrr}
\toprule
& & \multicolumn{3}{c}{UMBC} \\
& & \multicolumn{3}{c}{instances removed}\\
& trait type & \multicolumn{1}{c}{sentence} & \multicolumn{1}{c}{window} & \multicolumn{1}{c}{syntactic} \\
\midrule
\parbox[t]{2.5mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{{\mcrae}}}}} & colour & 76,800 & 70,159 & 8,974 \\
& components & 33,284 & 23,347 & 9,745 \\
& material & 28,061 & 19,171 & 6,030 \\
& size \& shape & 104,478 & 68,697 & 18,213\\
& tactile & 18,881 & 13,845 & 4,632\\
\midrule
\parbox[t]{2.5mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{{\textsc{Norms}}}}}} & colour & 25,106 & 18,737 & 7,422\\
& components & 5,270 & 3,793 & 1,291\\
& material & 51,898 & 34,484 & 12,150\\
& size \& shape & 105,895 & 68,162 & 18,372\\
& tactile & 17,965 & 13,040 & 4,264\\
\bottomrule
\end{tabular}
\hfill
\begin{tabular}{rrr}
\toprule
\multicolumn{3}{c}{Wiki} \\
\multicolumn{3}{c}{instances removed}\\
\multicolumn{1}{c}{sentence} & \multicolumn{1}{c}{window} & \multicolumn{1}{c}{syntactic}\\
\midrule
105,614 & 97,728 & 13,397\\
22,307 & 15,500 & 5,915\\
29,695 & 20,477 & 5,771\\
131,165 & 88,453 & 26,612\\
14,437 & 10,658 & 3,657\\
\midrule
26,378 & 19,824 & 8,360 \\
4,463 & 3,110 & 1,005\\
30,916 & 20,441 & 7,338\\
117,210 & 79,041 & 22,812\\
13,683 & 10,156 & 3,533\\
\bottomrule
\end{tabular}
\hfill
\begin{tabular}{rrr}
\toprule
\multicolumn{3}{c}{Wee-Wiki} \\
\multicolumn{3}{c}{instances removed}\\
\multicolumn{1}{c}{sentence} & \multicolumn{1}{c}{window} & \multicolumn{1}{c}{syntactic}\\
\midrule
70,194 & 64,594 & 9,083\\
15,553 & 10,987 & 4,544\\
21,239 & 14,669 & 4,431\\
90,280 & 60,516 & 17,452\\
11,413 & 8,529 & 2,981\\
\midrule
19,561 & 14,777 & 6,581\\
3,637 & 2,483 & 766\\
21,051 & 13,823 & 4,694\\
83,329 & 55,933 & 15,814\\
11,048 & 8,307 & 2,929\\
\bottomrule
\end{tabular}
\caption{Total instances removed and replaced for English Corpora (UMBC, Wiki, Wee-Wiki) for each dataset ({\mcrae} and {\textsc{Norms}}) by trait type and removal method (sentence, window, and syntactic as described in \S\ref{secremoving}).}
\label{tab:removal_stats_en}
\end{table*}
\begin{table*}
\centering
\small
\tabcolsep=.1cm
\begin{tabular}{cl rrr}
\toprule
& & \multicolumn{3}{c}{ES1B} \\
& & \multicolumn{3}{c}{instances removed}\\
& trait type &\multicolumn{1}{c}{sentence} & \multicolumn{1}{c}{window} & \multicolumn{1}{c}{syntactic}\\
\midrule
\parbox[t]{2.5mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{{\mcrae}}}}} & colour & 31,267 & 25,121 & 208\\
& components & 11,855 & 7,680 & 1,551\\
& material & 8,473 & 6,087 & 1,344 \\
& size \& shape & 34,416 & 19,276 & 248\\
& tactile & 3,508 & 2,404 & 185\\
\bottomrule
\end{tabular}
\hfill
\begin{tabular}{rrr}
\toprule
\multicolumn{3}{c}{Wiki} \\
\multicolumn{3}{c}{instances removed}\\
\multicolumn{1}{c}{sentence} & \multicolumn{1}{c}{window} & \multicolumn{1}{c}{syntactic}\\
\midrule
19,424 & 15,804 & 2,729\\
6,628 & 4,048 & 1,873\\
6,704 & 4,698 & 2,200\\
23,224 & 13,513 & 4,001\\
2,459 & 1,743 & 782\\
\bottomrule
\end{tabular}
\hfill
\begin{tabular}{rrr}
\toprule
\multicolumn{3}{c}{Wee-Wiki} \\
\multicolumn{3}{c}{instances removed}\\
\multicolumn{1}{c}{sentence} & \multicolumn{1}{c}{window} & \multicolumn{1}{c}{syntactic}\\
\midrule
12,473 & 10,129 & 1,836\\
4,318 & 2,716 & 1,317\\
4,353 & 3,091 & 1,501\\
15,584 & 9,157 & 2,798\\
1,787 & 1,291 & 584\\
\bottomrule
\end{tabular}
\caption{Total instances removed and replaced for each Spanish Corpora (ES1B, Wiki, Wee-Wiki) for the {\mcrae} dataset broken down by trait type and removal method (sentence, window, and syntactic as described in \S\ref{secremoving}).}
\label{tab:removal_stats_es}
\end{table*}
\begin{figure}[t!]
\centering
\begin{dependency}[edge style={black!80, thick},label style={fill=black!5},edge slant=7]
\begin{deptext}[column sep=1.20em,ampersand replacement=\^,font=\footnotesize]
Cerró \^ la \^ \textbf{\textcolor{depblue}{puerta}} \^ de \^ el \^ \textbf{\textcolor{depred}{granero}} \\
\end{deptext}
\depedge{3}{2}{\textsc{det}}
\depedge{1}{3}{\textsc{obj}}
\depedge{6}{4}{\textsc{case}}
\depedge{6}{5}{\textsc{det}}
\depedge[edge style={depblue!90, thick},label style={fill=depblue!25}]{3}{6}{\textsc{nmod}}
\deproot[edge unit distance=2.5ex]{1}{\textsc{root}}
\end{dependency}
\raggedright\small{Original text: \textit{Cerró la puerta del granero}}\\
\raggedright\small{English: \textit{She/he closed the barn door}}
\caption{\textit{granero} (highlighted in red) is a concept in {\mcrae}-ES with a component trait of \textit{puerta} (highlighted in blue). In the example here they are linked by an \textit{nmod} edge (highlighted in blue). For the syntactic removal method this sentence would be removed.}
\label{fig:syntax}
\end{figure}
\subsection{Removing co-occurrences}
\label{secremoving}
We used 3 methods with different levels of granularity to find and remove co-occurrences. The first step in the process was to segment the corpora by sentence and to lemmatise the tokens. This was done using the spaCy library and the corresponding pre-trained models for English and Spanish \cite{spacy}. We used lemmas to handle gender of adjectives and nouns in Spanish and for plural forms in both languages. The segmented version of each corpus was then split into two separate corpora, with 80\% of the sentences in the first, which were used as the standard corpora in our experiments, and 20\% in the second, which were used as reserves for replacing sentences with co-occurrences when creating input data without co-occurrences. When an instance was removed based on the criteria specified below, a random sentence was selected from the reserves, so as to balance the total number of sentences in each set.\footnote{\citet{chiang-etal-2020-understanding} observed only a small difference when using this methodology and when using one where instances were replaced with sentences containing the relevant concepts (and as is shown in \S\ref{sec:results} this holds for our work).} The resulting number of instances removed is shown in Table \ref{tab:removal_stats_en} (English) and in Table \ref{tab:removal_stats_es} (Spanish).
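A compact sketch of the lemmatization-and-replacement step is shown below; the spaCy model name is an assumption, as the text does not specify which pre-trained pipeline was used.
\begin{verbatim}
import random
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed model; Spanish analogous

def build_corpus(sentences, reserves, has_cooccurrence):
    # Replace every sentence flagged by `has_cooccurrence` (a predicate
    # over the sentence's lemmas) with a random reserve sentence, so
    # both corpus variants keep the same number of sentences.
    out = []
    for doc in nlp.pipe(sentences):
        lemmas = [tok.lemma_.lower() for tok in doc]
        out.append(random.choice(reserves)
                   if has_cooccurrence(lemmas) else doc.text)
    return out
\end{verbatim}
The predicate passed in is what distinguishes the three removal criteria described next.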
\parados{Sentence}
The simplest method used was to merely remove any sentence where a concept and its corresponding trait were observed. The lemmatised version of the data was used to search for co-occurrences to be more thorough, especially with respect to the Spanish data. This entails using the lemmatised version of the concepts and traits to match them in the lemmatised instances in the data. This was done independently for each trait type.
\parados{Window}
The second method removed instances in which the concept and its corresponding trait occurred within a given window, again using lemmatised forms. The window size used was 10 to match the size used during the training of the embeddings.
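Under the lemma-based setup above, the sentence criterion amounts to a membership test, while the window criterion additionally constrains the distance between the two lemmas, as in this sketch:
\begin{verbatim}
def in_sentence(lemmas, concept, trait):
    # Sentence criterion: both lemmas occur anywhere in the sentence.
    return concept in lemmas and trait in lemmas

def within_window(lemmas, concept, trait, window=10):
    # Window criterion: the lemmas occur within `window` tokens of each
    # other, matching the window used to train the embeddings.
    c_idx = [i for i, tok in enumerate(lemmas) if tok == concept]
    t_idx = [i for i, tok in enumerate(lemmas) if tok == trait]
    return any(abs(i - j) <= window for i in c_idx for j in t_idx)
\end{verbatim}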
\parados{Syntactic}
Finally, we used the Stanza library and the corresponding pre-trained models available for English and Spanish to parse the instances where a concept and its corresponding trait occurred \cite{qi2020stanza}. If an edge between the concept and the trait was predicted after finding a co-occurrence using the lemmas, the instance was removed; otherwise it was left. This method tests whether co-occurrences which are syntactically related are more impactful than haphazard co-occurrences. An example is shown in Figure \ref{fig:syntax}.
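A sketch of the syntactic criterion is given below; the processor list is an assumption, since the text only states that Stanza's pre-trained English and Spanish models were used.
\begin{verbatim}
import stanza

nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")

def syntactically_linked(sentence, concept, trait):
    # True if a dependency edge directly connects a token lemmatized as
    # `concept` to one lemmatized as `trait`, in either direction.
    doc = nlp(sentence)
    for sent in doc.sentences:
        lemma_of = {w.id: (w.lemma or w.text).lower() for w in sent.words}
        for w in sent.words:
            head = lemma_of.get(w.head)  # w.head == 0 denotes the root
            if head and {lemma_of[w.id], head} == {concept, trait}:
                return True
    return False
\end{verbatim}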
\subsection{Word embeddings}\label{secembeddings}
The models used to evaluate the impact of co-occurrences were trained using the Gensim library \cite{rehurek_lrec}. We used CBOW Word2Vec embedding models \cite{Mikolov2013EfficientEO} as they are quicker to train than skip-gram models, which was paramount considering the number of models that were required. Further, \citet{chiang-etal-2020-understanding} found no significant differences between CBOW and skip-gram models with respect to the differences observed in analogy completion between models trained with and without co-occurrences. We used the default hyperparameters in Gensim except for embedding size, which was set to 300, and window size, which was set to 10, i.e. the same settings from \citet{chiang-etal-2020-understanding}. For each trait-type and for each corpus a model was trained on the data containing co-occurrences (\textbf{with} or \textbf{w/} in tables) and the data not containing co-occurrences (\textbf{without} or \textbf{w/o} in tables). We trained multiple models for the data including co-occurrences --- once per trait type --- giving us a robust measurement of those models' performance. This means that, for a given trait type, the \textbf{with} results across the three extraction methods come from models trained on the same data, and they are reported to show the variation seen when training models on the same data.\footnote{Variation could also be due to slightly different datasets if \textbf{without} data doesn't contain any occurrences of a concept.}
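With the hyperparameters stated above, the training step reduces to a single Gensim call, sketched here with a hypothetical corpus filename:
\begin{verbatim}
from gensim.models import Word2Vec

def train_embeddings(corpus_path):
    # One pre-segmented, whitespace-tokenized sentence per line; CBOW
    # (sg=0), 300-dimensional vectors, window 10, other settings default.
    with open(corpus_path, encoding="utf-8") as f:
        sentences = [line.split() for line in f]
    return Word2Vec(sentences=sentences, vector_size=300, window=10, sg=0)

model = train_embeddings("umbc_colour_without.txt")  # hypothetical file
\end{verbatim}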
\subsection{Classifiers}\label{secclassifiers}
Trait-based relational knowledge was evaluated by casting it as a classification problem.
\begin{table*}
\centering
\small
\tabcolsep=.055cm
\begin{tabular}{llccp{0.5em}ccp{0.5em}cc}
\toprule
& & \multicolumn{8}{c}{UMBC} \\
\cmidrule{3-10}
& & \multicolumn{2}{c}{sentence} && \multicolumn{2}{c}{window} && \multicolumn{2}{c}{syntactic} \\
& trait type & w/ & w/o && w/ & w/o && w/ & w/o \\
\midrule
\parbox[t]{2.5mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{{\mcrae}}}}} & colour & 0.35 & 0.35 & & 0.34 & 0.34 & & \textbf{0.36} & 0.35 \\
& components & \textbf{0.82} & 0.81 & & \textbf{0.81} & 0.80 & & \textbf{0.82} & 0.81 \\
& materials & 0.65 & \textbf{0.69} & & \textbf{0.67} & 0.65 & & 0.67 & \textbf{0.68} \\
& size \& shape & \textbf{0.57} & 0.53 & & 0.55 & \textbf{0.58} & & 0.54 & \textbf{0.58} \\
& tactile & 0.61 & \textbf{0.62} & & \textbf{0.64} & 0.60 & & \textbf{0.65} & 0.64 \\
\midrule
\parbox[t]{2.5mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{{\textsc{Norms}}}}}} & colour & \textbf{0.40} & 0.38 & & 0.41 & 0.41 & & 0.38 & \textbf{0.40} \\
& components & 0.89 & 0.89 & & 0.89 & 0.89 & & 0.89 & 0.89 \\
& materials & \textbf{0.88} & 0.87 & & \textbf{0.87} & 0.85 & & 0.87 & \textbf{0.88} \\
& size \& shape & \textbf{0.59} & 0.57 & & 0.58 & \textbf{0.60} & & \textbf{0.61} & 0.58 \\
& tactile & 0.69 & \textbf{0.72} & & \textbf{0.68} & 0.66 & & \textbf{0.70} & 0.66 \\
\bottomrule
\end{tabular}
\hfill
\begin{tabular}{ccp{0.5em}ccp{0.5em}cc}
\toprule
\multicolumn{8}{c}{Wiki} \\
\cmidrule{1-8}
\multicolumn{2}{c}{sentence} && \multicolumn{2}{c}{window} && \multicolumn{2}{c}{syntactic} \\
w/ & w/o && w/ & w/o && w/ & w/o \\
\midrule
\textbf{0.38} & 0.36 & & \textbf{0.38} & 0.30 & & \textbf{0.41} & 0.36 \\
0.78 & \textbf{0.80} & & 0.75 & \textbf{0.77} & & \textbf{0.79} & 0.76 \\
\textbf{0.68} & 0.67 & & 0.65 & \textbf{0.69} & & 0.65 & \textbf{0.67} \\
\textbf{0.60} & 0.58 & & \textbf{0.58} & 0.56 & & 0.56 & \textbf{0.61} \\
0.54 & \textbf{0.55} & & 0.56 & \textbf{0.59} & & \textbf{0.58} & 0.55 \\
\midrule
0.39 & 0.39 & & \textbf{0.41} & 0.39 & & \textbf{0.44} & 0.40 \\
0.91 & 0.91 & & 0.89 & \textbf{0.91} & & 0.91 & 0.91 \\
\textbf{0.86} & 0.84 & & 0.85 & 0.85 & & 0.86 & 0.86 \\
0.59 & 0.59 & & \textbf{0.62} & 0.57 & & 0.59 & \textbf{0.61} \\
0.61 & \textbf{0.65} & & 0.63 & 0.63 & & 0.65 & \textbf{0.67} \\
\bottomrule
\end{tabular}
\hfill
\begin{tabular}{ccp{0.5em}ccp{0.5em}cc}
\toprule
\multicolumn{8}{c}{Wee-Wiki} \\
\cmidrule{1-8}
\multicolumn{2}{c}{sentence} && \multicolumn{2}{c}{window} && \multicolumn{2}{c}{syntactic} \\
w/ & w/o && w/ & w/o && w/ & w/o \\
\midrule
\textbf{0.39} & 0.38 & & \textbf{0.39} & 0.38 & & \textbf{0.41} & 0.35 \\
\textbf{0.75} & 0.74 & & 0.75 & \textbf{0.79} & & 0.77 & 0.77 \\
\textbf{0.71} & 0.65 & & \textbf{0.67} & 0.66 & & 0.67 & \textbf{0.68} \\
\textbf{0.58} & 0.56 & & \textbf{0.59} & 0.56 & & \textbf{0.57} & 0.56 \\
0.50 & \textbf{0.51} & & 0.50 & \textbf{0.54} & & 0.51 & \textbf{0.55} \\
\midrule
\textbf{0.43} & 0.39 & & 0.37 & \textbf{0.39} & & 0.37 & \textbf{0.41} \\
0.89 & \textbf{0.91} & & 0.89 & \textbf{0.91} & & 0.94 & 0.94 \\
0.84 & 0.84 & & \textbf{0.83} & 0.82 & & \textbf{0.86} & 0.82 \\
\textbf{0.62} & 0.59 & & \textbf{0.58} & 0.55 & & \textbf{0.62} & 0.57 \\
0.60 & \textbf{0.61} & & 0.61 & 0.61 & & 0.63 & 0.63 \\
\bottomrule
\end{tabular}
\caption{Multi-class SVM results for English corpora and datasets by trait type and extraction method for models trained on data with (\textbf{w/}) and without (\textbf{w/o}) co-occurrences. Average accuracy across 3-fold cross validation is reported with best performing model between paired \textbf{w/} and \textbf{w/o} models highlighted in bold.}
\label{tab:en-multi}
\end{table*}
\begin{table*}
\centering
\small
\tabcolsep=.055cm
\begin{tabular}{llccp{0.5em}ccp{0.5em}cc}
\toprule
& & \multicolumn{8}{c}{ES1B} \\
& & \multicolumn{2}{c}{sentence} && \multicolumn{2}{c}{window} && \multicolumn{2}{c}{syntactic} \\
& trait type & w/ & w/o && w/ & w/o && w/ & w/o \\
\midrule
\parbox[t]{2.5mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{{\mcrae}}}}} & colour & 0.29 & \textbf{0.30} & & \textbf{0.33} & 0.31 & & \textbf{0.33} & 0.30 \\%\textbf{0.34} & 0.33 & & 0.32 & \textbf{0.34} & & 0.35 & \textbf{0.36} \\
& components & \textbf{0.77} & 0.71 & & \textbf{0.81} & 0.77 & & \textbf{0.74} & 0.73 \\
& materials & 0.63 & \textbf{0.67} & & \textbf{0.67} & 0.65 & & 0.66 & \textbf{0.67} \\
& size \& shape & 0.50 & \textbf{0.52} & & 0.48 & \textbf{0.53} & & \textbf{0.46} & 0.45 \\
& tactile & 0.54 & \textbf{0.58} & & \textbf{0.55} & 0.53 & & \textbf{0.55} & 0.53 \\
\bottomrule
\end{tabular}
\hfill
\begin{tabular}{ccp{0.5em}ccp{0.5em}cc}
\toprule
\multicolumn{8}{c}{Wiki} \\
\multicolumn{2}{c}{sentence} && \multicolumn{2}{c}{window} && \multicolumn{2}{c}{syntactic} \\
w/ & w/o && w/ & w/o && w/ & w/o \\
\midrule
\textbf{0.32} & 0.29 & & \textbf{0.34} & 0.32 & & \textbf{0.35} & 0.33 \\
0.71 & \textbf{0.75} & & 0.70 & \textbf{0.75} & & 0.72 & \textbf{0.74} \\
0.63 & \textbf{0.64} & & \textbf{0.70} & 0.63 & & 0.61 & \textbf{0.66} \\
\textbf{0.52} & 0.49 & & 0.49 & 0.49 & & 0.49 & \textbf{0.50} \\
\textbf{0.60} & 0.58 & & 0.60 & \textbf{0.62} & & \textbf{0.60} & 0.59 \\
\bottomrule
\end{tabular}
\hfill
\begin{tabular}{ccp{0.5em}ccp{0.5em}cc}
\toprule
\multicolumn{8}{c}{Wee-Wiki} \\
\multicolumn{2}{c}{sentence} && \multicolumn{2}{c}{window} && \multicolumn{2}{c}{syntactic} \\
w/ & w/o && w/ & w/o && w/ & w/o \\
\midrule
0.31 & \textbf{0.32} & & 0.30 & \textbf{0.31} & & \textbf{0.31} & 0.29 \\
\textbf{0.73} & 0.72 & & \textbf{0.71} & 0.66 & & 0.71 & 0.71 \\
0.59 & 0.59 & & 0.59 & \textbf{0.61} & & \textbf{0.63} & 0.59 \\
0.47 & \textbf{0.48} & & \textbf{0.49} & 0.48 & & 0.46 & \textbf{0.53} \\
\textbf{0.51} & 0.50 & & \textbf{0.52} & 0.51 & & \textbf{0.49} & 0.48 \\
\bottomrule
\end{tabular}
\caption{Multi-class SVM results for Spanish corpora and datasets by trait type and extraction method for models trained on data with (\textbf{w/}) and without (\textbf{w/o}) co-occurrences. Average accuracy across 3-fold cross validation is reported with best performing model between paired \textbf{w/} and \textbf{w/o} models highlighted in bold.}
\label{tab:es-multi}
\end{table*}
\parados{Multi-class}
First we used a multi-class evaluation. Using the datasets described in Section \ref{secdatasets}, given a concept (e.g. \textit{banana}), the task consisted of selecting the most appropriate trait for a given trait type (e.g. \textit{yellow} in the colour dataset). We used a support vector machine (SVM) as our classifier from the Scikit-learn library \cite{scikit-learn} with the word embeddings learned in the previous step as the only input. For each model we used 3-fold cross-validation and report the mean score across the splits.\footnote{The full results for each model can be found at \url{https://github.com/cardiffnlp/trait-relations-and-co-occurrences}, including the number of concepts and features used for each model's evaluation and the standard deviations which are all very small.} For each pair of models (i.e. with and without co-occurrences for a given trait-type and for a given corpus), we checked to see if concepts appeared in both semantic spaces. When a concept was missing in one or both, it was removed from the dataset for both, such that the comparison of results is robust between the two models we are interested in comparing; however, this was not common. This check brought up an issue with \textit{orange} and \textit{naranja}, namely that each occurs both as a concept and as a trait, so that the sentence- and window-based extraction methods always remove their co-occurrences from the corpora; these concepts were therefore removed from the evaluation datasets.
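The multi-class evaluation then reduces to a few scikit-learn calls, as in the sketch below; using \texttt{SVC} with default settings is an assumption, since the text does not specify the kernel or regularization.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def multiclass_accuracy(model, pairs):
    # Mean 3-fold CV accuracy of an SVM predicting a concept's trait
    # from its embedding alone; `pairs` holds (concept, trait) strings
    # already filtered to the shared vocabulary of the model pair.
    X = np.array([model.wv[c] for c, _ in pairs])
    y = np.array([t for _, t in pairs])
    return cross_val_score(SVC(), X, y, cv=3).mean()
\end{verbatim}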
\parados{Binary}
We also use binary classification by exploiting the earlier findings suggesting that differences between embeddings can be used as a proxy to capture semantic relations \cite{Mikolov2013LinguisticRI,vylomova-etal-2016-take}. Again, we used SVM models, but this time the input features were the differences between concepts and their respective traits (i.e. $e_c - e_t$, where $e_c$ is the concept embedding and $e_t$ is the trait embedding) and the model predicted whether a pair was related or not. This required developing negative samples. This was done by randomly selecting words from the vocab space of the union of vocabs between each pair of models (i.e. with and without co-occurrences for a given trait type and a given corpus). These words then underwent a modicum of a control check by using lexical databases:
WordNet \cite{Fellbaum2000WordNetA} for English and the Multilingual Central Repository version 3.0 for Spanish \cite{Gonzalez-Agirre:Laparra:Rigau:2012} via the Natural Language Toolkit \cite{bird2009natural}. Once a word was randomly selected from the vocab space (excluding the concepts in the given dataset), the respective lexical database was checked to see if it contained the word and, if so, whether the synonyms associated with it were at least sometimes nouns (that is, the set of noun synonyms contained at least one item). This was so that the selected word could in theory be something akin to a concept and not just gobbledygook. This procedure was done so the number of concepts in the negative sample set matched the number in the positive sample set (which had instances removed that didn't appear in one or both of the paired models, similar to the multi-class setup). Then each randomly extracted negative \textit{concept} was ascribed a trait from the given trait space. Similar to the multi-class SVM setup, 3-fold cross-validation was used and the mean score across the splits is reported.\footnote{Full results for the binary classifier can be found at \url{https://github.com/cardiffnlp/trait-relations-and-co-occurrences}, including the number of instances for each model and the standard deviations.}
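The binary setup can be sketched as follows; the WordNet noun check is a simplified stand-in for the control described above, and NLTK's \texttt{wordnet} corpus must be downloaded beforehand.
\begin{verbatim}
import random
import numpy as np
from nltk.corpus import wordnet as wn  # needs nltk.download("wordnet")
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def noun_like(word):
    # Keep a sampled word only if WordNet lists a noun sense for it.
    return any(s.pos() == "n" for s in wn.synsets(word))

def binary_accuracy(model, positives, traits, vocab):
    # 3-fold CV on difference vectors e_c - e_t: positives are real
    # concept-trait pairs; negatives pair a noun-like sampled word
    # with a random trait from the same trait space.
    X = [model.wv[c] - model.wv[t] for c, t in positives]
    y = [1] * len(positives)
    candidates = [w for w in vocab if noun_like(w)]
    for _ in range(len(positives)):
        c, t = random.choice(candidates), random.choice(traits)
        X.append(model.wv[c] - model.wv[t])
        y.append(0)
    return cross_val_score(SVC(), np.array(X), np.array(y), cv=3).mean()
\end{verbatim}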
\begin{table*}
\centering
\small
\tabcolsep=.055cm
\begin{tabular}{llccp{0.5em}ccp{0.5em}cc}
\toprule
& & \multicolumn{8}{c}{UMBC} \\
\cmidrule{3-10}
& & \multicolumn{2}{c}{sentence} && \multicolumn{2}{c}{window} && \multicolumn{2}{c}{syntactic} \\
& trait type & w/ & w/o && w/ & w/o && w/ & w/o \\
\midrule
\parbox[t]{2.5mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{{\mcrae}}}}} & colour & \textbf{0.90} & 0.88 & & \textbf{0.88} & 0.86 & & 0.86 & 0.86 \\
& components & \textbf{0.90} & 0.88 & & 0.90 & 0.90 & & 0.90 & 0.90 \\
& materials & \textbf{0.93} & 0.92 & & 0.92 & 0.92 & & 0.90 & 0.90 \\
& size \& shape & 0.88 & 0.88 & & 0.85 & 0.85 & & 0.85 & 0.85 \\
& tactile & 0.89 & \textbf{0.90} & & 0.88 & 0.88 & & 0.86 & \textbf{0.88} \\
\midrule
\parbox[t]{2.5mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{{\textsc{Norms}}}}}} & colour & 0.86 & 0.86 & & \textbf{0.84} & 0.83 & & 0.83 & 0.83 \\
& components & 0.80 & \textbf{0.82} & & \textbf{0.87} & 0.77 & & 0.80 & 0.80 \\
& materials & 0.84 & \textbf{0.85} & & \textbf{0.88} & 0.86 & & 0.88 & \textbf{0.91} \\
& size \& shape & 0.84 & \textbf{0.87} & & 0.88 & \textbf{0.89} & & \textbf{0.87} & 0.86 \\
& tactile & \textbf{0.87} & 0.84 & & \textbf{0.86} & 0.84 & & 0.84 & \textbf{0.86} \\
\bottomrule
\end{tabular}
\hfill
\begin{tabular}{ccp{0.5em}ccp{0.5em}cc}
\toprule
\multicolumn{8}{c}{Wiki} \\
\cmidrule{1-8}
\multicolumn{2}{c}{sentence} && \multicolumn{2}{c}{window} && \multicolumn{2}{c}{syntactic} \\
w/ & w/o && w/ & w/o && w/ & w/o \\
\midrule
0.84 & \textbf{0.85} & & \textbf{0.88} & 0.86 & & 0.88 & 0.88 \\
\textbf{0.88} & 0.87 & & 0.87 & \textbf{0.88} & & 0.89 & \textbf{0.90} \\
\textbf{0.90} & 0.88 & & 0.88 & \textbf{0.89} & & 0.89 & 0.89 \\
0.86 & 0.86 & & 0.83 & 0.83 & & 0.84 & 0.84 \\
0.88 & 0.88 & & 0.82 & 0.82 & & 0.87 & \textbf{0.88} \\
\midrule
0.86 & 0.86 & & \textbf{0.85} & 0.83 & & \textbf{0.84} & 0.83 \\
0.90 & 0.90 & & \textbf{0.87} & 0.78 & & \textbf{0.86} & 0.84 \\
\textbf{0.89} & 0.87 & & 0.86 & \textbf{0.89} & & 0.85 & \textbf{0.87} \\
\textbf{0.88} & 0.86 & & 0.84 & 0.84 & & 0.85 & 0.85 \\
\textbf{0.83} & 0.82 & & 0.82 & \textbf{0.84} & & \textbf{0.86} & 0.84 \\
\bottomrule
\end{tabular}
\hfill
\begin{tabular}{ccp{0.5em}ccp{0.5em}cc}
\toprule
\multicolumn{8}{c}{Wee-Wiki} \\
\cmidrule{1-8}
\multicolumn{2}{c}{sentence} && \multicolumn{2}{c}{window} && \multicolumn{2}{c}{syntactic} \\
w/ & w/o && w/ & w/o && w/ & w/o \\
\midrule
0.89 & \textbf{0.90} & & 0.87 & 0.87 & & 0.87 & 0.87 \\
0.86 & 0.86 & & 0.92 & 0.92 & & 0.89 & 0.89 \\
\textbf{0.86} & 0.85 & & 0.88 & \textbf{0.89} & & \textbf{0.90} & 0.89 \\
\textbf{0.88} & 0.87 & & 0.87 & \textbf{0.88} & & 0.86 & 0.86 \\
\textbf{0.84} & 0.82 & & \textbf{0.84} & 0.81 & & \textbf{0.82} & 0.81 \\
\midrule
0.84 & 0.84 & & \textbf{0.87} & 0.86 & & \textbf{0.86} & 0.85 \\
0.86 & \textbf{0.87} & & \textbf{0.93} & 0.90 & & \textbf{0.84} & 0.83 \\
0.85 & \textbf{0.88} & & 0.85 & \textbf{0.88} & & 0.90 & \textbf{0.91} \\
0.88 & 0.88 & & \textbf{0.88} & 0.87 & & \textbf{0.87} & 0.85 \\
\textbf{0.78} & 0.77 & & 0.80 & \textbf{0.81} & & 0.86 & 0.86 \\
\bottomrule
\end{tabular}
\caption{Binary SVM results for English corpora and datasets by trait type and extraction method for models trained on data with (\textbf{w/}) and without (\textbf{w/o}) co-occurrences. Average accuracy across 3-fold cross validation is reported with best performing model between paired \textbf{w/} and \textbf{w/o} models highlighted in bold.}
\label{tab:en-binary}
\end{table*}
\begin{table*}
\centering
\small
\tabcolsep=.055cm
\begin{tabular}{llccp{0.5em}ccp{0.5em}cc}
\toprule
& & \multicolumn{8}{c}{ES1B} \\
& & \multicolumn{2}{c}{sentence} && \multicolumn{2}{c}{window} && \multicolumn{2}{c}{syntactic} \\
& trait type & w/ & w/o && w/ & w/o && w/ & w/o \\
\midrule
\parbox[t]{2.5mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{{\mcrae}}}}} & colour & 0.81 & \textbf{0.82} & & \textbf{0.85} & 0.83 & & \textbf{0.81} & 0.80 \\
& components & \textbf{0.88} & 0.87 & & \textbf{0.83} & 0.80 & & \textbf{0.81} & 0.80 \\
& materials & 0.81 & \textbf{0.82} & & 0.86 & 0.86 & & 0.84 & 0.84 \\
& size \& shape & \textbf{0.82} & 0.81 & & \textbf{0.75} & 0.73 & & \textbf{0.76} & 0.74 \\
& tactile & \textbf{0.72} & 0.71 & & 0.75 & \textbf{0.79} & & \textbf{0.81} & 0.78 \\
\bottomrule
\end{tabular}
\hfill
\begin{tabular}{ccp{0.5em}ccp{0.5em}cc}
\toprule
\multicolumn{8}{c}{Wiki} \\
\multicolumn{2}{c}{sentence} && \multicolumn{2}{c}{window} && \multicolumn{2}{c}{syntactic} \\
w/ & w/o && w/ & w/o && w/ & w/o \\
\midrule
\textbf{0.84} & 0.81 & & \textbf{0.81} & 0.78 & & \textbf{0.87} & 0.84 \\
0.86 & \textbf{0.89} & & 0.78 & 0.78 & & 0.79 & \textbf{0.82} \\
\textbf{0.78} & 0.76 & & 0.75 & 0.75 & & \textbf{0.70} & 0.67 \\
0.79 & \textbf{0.80} & & \textbf{0.76} & 0.75 & & \textbf{0.79} & 0.78 \\
\textbf{0.74} & 0.73 & & 0.71 & \textbf{0.72} & & 0.74 & \textbf{0.75} \\
\bottomrule
\end{tabular}
\hfill
\begin{tabular}{ccp{0.5em}ccp{0.5em}cc}
\toprule
\multicolumn{8}{c}{Wee-Wiki} \\
\multicolumn{2}{c}{sentence} && \multicolumn{2}{c}{window} && \multicolumn{2}{c}{syntactic} \\
w/ & w/o && w/ & w/o && w/ & w/o \\
\midrule
\textbf{0.83} & 0.81 & & \textbf{0.83} & 0.80 & & 0.79 & \textbf{0.82} \\
0.77 & \textbf{0.78} & & 0.82 & \textbf{0.84} & & \textbf{0.76} & 0.74 \\
0.75 & \textbf{0.80} & & \textbf{0.75} & 0.74 & & 0.74 & 0.74 \\
0.79 & 0.79 & & 0.75 & \textbf{0.78} & & 0.82 & \textbf{0.83} \\
\textbf{0.78} & 0.72 & & \textbf{0.77} & 0.75 & & 0.78 & \textbf{0.80} \\
\bottomrule
\end{tabular}
\caption{Binary SVM results for Spanish corpora and datasets by trait type and extraction method for models trained on data with (\textbf{w/}) and without (\textbf{w/o}) co-occurrences. Average accuracy across 3-fold cross validation is reported with best performing model between paired \textbf{w/} and \textbf{w/o} models highlighted in bold.}
\label{tab:es-binary}
\end{table*}
\section{Results}\label{sec:results}
\parados{Multi-class results} The results for the multi-class experiments can be seen in Table \ref{tab:en-multi} for the English corpora and in Table \ref{tab:es-multi} for the Spanish corpora. The highest-performing model of each pair, i.e. with (\textbf{w/}) and without (\textbf{w/o}) co-occurrences, is highlighted in bold for clarity. Across the board, it is clear that there is no consistent pattern as to whether a model trained with co-occurrences outperforms a model trained without them or vice versa. This holds for all three co-occurrence extraction techniques, for all trait types, for all datasets, and for all corpora across both languages. This is similar to the findings of \citet{chiang-etal-2020-understanding}, where little effect was observed on analogy completion whether co-occurrences were included or not, although a small but systematic decrease was observed in that context. While there are some differences between individual models, our experimental setup is not robust enough to establish whether a difference of 0.01--0.02 is significant, and the differences that would be required to claim that one model is superior to another are much larger than what is observed here. A visualisation of the differences between each corresponding with and without model for {\mcrae}-EN by trait type can be seen in Figure \ref{fig:mcrae_en} (equivalent visualisations for {\textsc{Norms}}-EN and {\mcrae}-ES are shown in Figures \ref{fig:norms_en} and \ref{fig:mcrae_es}, respectively, in Appendix \ref{sec:vis}). Figure \ref{fig:mcrae_en} does highlight a slight difference with respect to colour traits, where a modest increase in performance is seen on average when training the models with co-occurrences; however, this isn't consistent across corpora and datasets, as the increase is not observed in Figures \ref{fig:norms_en} and \ref{fig:mcrae_es} in Appendix \ref{sec:vis}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\linewidth]{img/deltas-mcrae-en.pdf}
\caption{Distributions of delta accuracy ($\Delta$Acc) for corresponding pairs for each trait type in {\mcrae}-EN.}
\label{fig:mcrae_en}
\end{figure}
\parados{Binary results} The results from the binary classification experiments substantiate these findings. They can be seen in Table \ref{tab:en-binary} for English and in Table \ref{tab:es-binary} for Spanish. Again, no pattern emerges across the different experimental dimensions that would suggest the removal of co-occurrences has impacted a model's ability to predict whether a pair is related or not. The overall high performance on the binary classification experiment for both English and Spanish suggests these models manage to encode meaningful information about these trait relations. But how this emerges is not clear. The simplest explanation is that suitably accurate representations are learnt due to the amount of data, but it could be for any number of other reasons not investigated here.
\begin{figure}[b!]
\centering
\includegraphics[width=0.8\linewidth]{img/deltas-mcrae-en-EM.pdf}
\caption{Distributions of delta accuracy ($\Delta$Acc) for pairs for each extraction method in {\mcrae}-EN.}
\label{fig:mcrae_en_EM}
\end{figure}
\section{Discussion}
The results highlight some tentatively interesting patterns with respect to trait types. In both English and Spanish, models perform consistently well on component traits, although for {\textsc{Norms}} this turned out to cover only 2 traits, effectively casting it as a binary classification. Materials is the next consistently highest-performing trait type across corpora and languages, with size \& shape and tactile not far behind for English but with a bigger gap in Spanish. The performance on colour traits is low across all settings and languages. This doesn't appear to be based on the size of the trait subset, e.g. the component subset is one of the smaller sets yet yields high performance, and the performance of the other trait types doesn't vary with respect to the number of instances and unique features.
The number of removed sentences, as shown in Tables \ref{tab:removal_stats_en} and \ref{tab:removal_stats_es}, gives a rough indication of how often the concepts and their traits occur in the data: colour sentence removals are the second highest for {\mcrae}-EN across all three English corpora, the third highest for {\textsc{Norms}}-EN, and the highest for {\mcrae}-ES across all Spanish corpora. These rankings are consistent across extraction methods. It is therefore unlikely that the embeddings for the colours and the corresponding concepts (often concepts that occur in the other datasets) are somehow low quality due to low occurrences of these words. More likely, the colour relation is simply harder than the other trait types, which tend to be more tangible and more specific. This reasoning does not fully extend to size \& shape traits, specifically sizes, which tend to be relative: e.g. in {\mcrae} a \textit{plane} can be \textit{large} (which it is, relative to most things) but so too can a \textit{bathtub} (which it is, relative to a mouse or other such timorous beasties, but not relative to a house). And indeed, size \& shape is consistently one of the trait types models perform worst on, especially for {\textsc{Norms}}-EN and {\mcrae}-ES.
As a final note, the different extraction methods yield no appreciable differences when compared to one another.
This can be observed clearly in Figure \ref{fig:mcrae_en_EM} in the main text and Figure \ref{fig:norms_en_EM} in Appendix \ref{sec:vis}. While the number of instances extracted using syntactically related co-occurrences is very low, making it difficult to draw any major conclusions there, the numbers of sentence-based and window-based instances removed are quite high and similar in magnitude. From this, we can deduce that the proximity of the words also does not have a major impact on the ability of a semantic space to encode relational knowledge. It could still be the case that if the data used to train models contained more syntactically related concept-trait pairs, they would encode \textit{more} relational knowledge, but it is clear that their absence does not result in the models losing what relational knowledge they can capture. Many questions remain on how these distributional models encode relational knowledge. We have merely presented results which \textit{do not} support the hypothesis that direct co-occurrences are the major signal for this process as it relates to trait-based relational knowledge.
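To make concrete what the extraction methods test, the following minimal Python sketch illustrates a window-based co-occurrence check of the kind used to flag sentences for removal. The function name, window size and whitespace tokenisation are our own illustrative assumptions, not the exact pipeline used in our experiments; the sentence-based variant corresponds to dropping the window condition entirely, and the syntactic variant would replace the distance test with a dependency-path check.
\begin{verbatim}
import itertools

def window_cooccurs(tokens, concept, trait, window=5):
    # Hypothetical re-implementation: flag a sentence if `trait`
    # occurs within `window` tokens of `concept`.
    c_idx = [i for i, t in enumerate(tokens) if t == concept]
    t_idx = [i for i, t in enumerate(tokens) if t == trait]
    return any(abs(i - j) <= window
               for i, j in itertools.product(c_idx, t_idx))

sentence = "the small plane was painted bright red".split()
print(window_cooccurs(sentence, "plane", "red"))            # True
print(window_cooccurs(sentence, "plane", "red", window=2))  # False
\end{verbatim}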
\paragraph{Language models and wider impact of findings.}
Whether the results observed here for static embeddings would hold for PLMs is not a given. While they are still based on the same distributional hypothesis and adopt statistical methods to encode salient features of language, they could potentially be more sensitive to the loss of co-occurrences in the training data. This is an open research question that requires specific experimentation, which has its own difficulties: prompting language models often involves lexical \textit{clues} which cloud our ability to say with any great certainty whether they have captured some phenomenon or not (see \citet{kassner-schutze-2020-negated} on the sensitivity of PLMs to mispriming).
The results do suggest that merely increasing the amount of training data is unlikely to result in any major improvements in the ability of models to encode relational knowledge, or commonsense knowledge more generally, which is attested to by recent work in \citet{Li2021DoLM}. We may instead need to look to more complex methods of augmenting NLP systems with commonsense knowledge, for example multimodal systems such as language models trained with visual cues, as was done in \citet{paik-etal-2021-world} to offset reporting bias with respect to colours. Alternatively, we can focus on the linguistic input and consider how to add stronger signals to the data used to train NLP systems.
\section{Conclusion}
We have contributed to the emerging interest in how neural semantic models encode linguistic information, focusing on trait-based relational knowledge. We have extended findings which showed that co-occurrences of relational pairs have little impact on a model's ability to encode knowledge of analogies, complementing that analysis with an evaluation of trait-based relational knowledge. We further extended the analysis to include different extraction methods, to evaluate whether a more fine-grained approach would highlight any differences in performance, and found that this is not the case. The work presented here also expands beyond English and includes results in Spanish which follow the same trend. Finally, we have curated a set of datasets for different trait types in both English and Spanish (based on {\mcrae} and {\textsc{Norms}}) which are available at \url{https://github.com/cardiffnlp/trait-concept-datasets}.
\section*{Acknowledgements}
Mark and Jose are supported by a UKRI Future Leaders Fellowship (MR/T042001/1).
\section{Introduction}
Crystallographic phase instability is an important factor for enhancing superconductivity \cite{matthias1973criteria}. Some anomalies in the properties of compositions FeSe$ _ {1-x}$Te$ _ {x}$ with a low tellurium content, including the observation of phase separation in these compositions, indicate that phase instability may exist in these compounds.
The tetragonal plane of iron atoms, surrounded by pnictogen or chalcogen atoms, is the main motif of the crystal structure of iron-based superconductors (IBS). Therefore, the properties of this plane are of key importance for superconductivity in these compounds. In particular, the superconducting transition temperature depends on the structural parameters of this plane, such as, for example, the degree of deformation of the tetragonal environment of iron \cite{lee2008effect} or the distance from the pnictogen to the iron plane \cite{kuroki2009pnictogen}.
The properties of the tetragonal iron plane, surrounded by chalcogens, can be thoroughly studied using the almost ideal compositions of the 11 series of IBS \cite{hsu2008superconductivity,2017_Coldea, Bohmer2018}. The structure of these compounds is close to stoichiometric, and they have no intercalating elements that could distort the electronic properties of the iron plane. Nevertheless, to date, the properties of the 11 series of IBS have been studied only partially. There remains a whole range of compositions whose properties have not yet been sufficiently studied. In particular, the synthesis of Fe(Se,Te) compounds with low tellurium content usually leads to phase separation in both crystals and films \cite{zhuang2014}. The nature of this phase separation has not yet been understood.
We studied the synthesis and properties of FeSe$_{1-x}$Te$_{x}$ crystals in the range $x<0.15$. We find that for the studied compositions the phase separation manifests itself in the transport properties as a distortion of the anomaly on $dR/dT$ curves near the structural transition points. For as-synthesized samples with different iron contents, the anomaly on $dR/dT$ splits into two distinct anomalies, indicating the formation of two phases. Heat treatment of the studied compositions can suppress the phase separation, which leads to the disappearance of the anomaly in the $dR/dT$ curve located at the lower temperature. We also found suppression of the phase separation in compositions after long-term storage, which suggests that, during annealing, partial oxidation and removal of the excess iron may have a major impact. Thus, the results obtained indicate that the phase separation in the studied compositions is caused by deviations in the iron stoichiometry. In turn, this may mean that the phase separation in the studied compositions is a consequence of the structural instability of the iron plane at certain lattice parameters.
One of the possibilities for describing the dependence of the properties of compounds with transition elements on their structure is to study the properties of individual orbitals, or orbital-selective effects \cite{streltsov2017orbital}. The properties of an individual orbital state have a relatively universal dependence on the interatomic distance \cite{slater1930cohesion}, which for the case of direct $d$ interaction can be expressed by the Bethe-Slater curve. In FeSe, similar to pure metallic iron, the distance between the iron atoms is close to the value at which the sign of the direct magnetic exchange changes. Recent studies of the cubic lattices of the elemental $3d$ metals have shown that the exchange interaction for Fe has different signs for different groups of orbitals \cite{cardias2017bethe}. Thus, the change in the sign of the interaction or, in other words, the degeneracy of the singlet and triplet states can occur independently for different groups of the iron orbitals.
The electronic instability in series 11 for compositions close to FeSe has many experimental demonstrations \cite{PhysRevB.96.100504, PhysRevB.95.224507}. For these compositions, small changes in the lattice parameters can produce significant changes in the electronic properties. In particular, a significant change in the carrier mobility occurs in FeSe$_{1-x}$Te$_{x}$ with a low tellurium content \cite{ovchenkov2019nematic, PhysRevB.100.224516}, which corresponds to a crossover from bad to good metal. This crossover accompanies the majority-carrier inversion in series 11. Formally, from neutrality, a change in the sign of the mobile charges means an inversion of the charge of the ionic core. Thus, the inversion of carriers under an isovalent substitution \cite{Ovchenkov_2019} can be considered as evidence of a change in the polarity of the bond in these compounds, which in turn can be caused by the transformation of the $d$ orbitals with a change in the iron-iron distance in the plane.
The assumed structural instability of the iron/chalcogen plane near the charge neutrality point deserves further investigation as a possibly important ingredient of superconductivity in IBS. Moreover, this instability may be the reason for the splitting of some structural transitions in other series of iron-based superconductors.
\section{Experiment}
The studied crystals of FeSe${}_{1-x}$Te${}_{x}$ were prepared using the AlCl${}_{3}$/KCl/NaCl eutectic mixture in evacuated quartz ampoules held in a permanent temperature gradient \cite{CrystEngComm12.1989, Char_FeSeS_CEC}. The quartz ampoules with the Fe(Te,Se) charge and the maximum quantity of the AlCl${}_{3}$/KCl/NaCl eutectic mixture were placed in a furnace so as to maintain their hot end at a temperature of 500~$^{\circ}$C and the cold end at a temperature of 433~$^{\circ}$C for $x = 0.15$ and $x=0.11$, and the hot end at a temperature of 453~$^{\circ}$C and the cold end at a temperature of 400~$^{\circ}$C in the case of $x = 0.055$. The chalcogenide charge gradually dissolves at the hot end of the ampoule and precipitates in the form of single crystals at the cold end. After 8 weeks in the furnace, iron monochalcogenide platelike crystals were found at the cold ends of the ampoules.
To study the effect of heat treatment on the properties of the crystals, two evacuated quartz ampoules with the $x=0.055$ crystals were heat-treated at a temperature of 445~$^{\circ}$C for one week. Next, one of the ampoules was quenched in water, while the other was cooled with the oven. All other thermal treatments of the crystals were also carried out in evacuated ampoules.
The chemical composition of the crystals was determined using a Tescan Vega II XMU scanning electron microscope equipped with an INCA Energy 450 energy-dispersive spectrometer; the accelerating voltage was 20 kV.
Magnetization was measured using a Quantum Design MPMS SQUID in a field of 10 Oe under zero-field-cooled conditions and in a field of 10 kOe for the $\chi(T)$ dependence in the temperature range 10--300~K.
Electrical measurements were done on cleaved samples with contacts made by sputtering of Au/Ti layers.
\section{Results}
The results of studying the transport properties of the as-prepared crystals are shown in Fig.\ref{fgr:fig1}. The temperature dependence of the resistivity $\rho_{xx}$ shows a similar metallic behavior for all compositions (Fig.\ref{fgr:fig1}, a). The temperatures of the superconducting transitions decrease insignificantly with increasing tellurium content (Fig.\ref{fgr:fig1}, b). The temperature dependence of the Hall constant also shows a slight change (Fig.\ref{fgr:fig1}, c). In particular, only one inversion point remains on $R_{H}(T)$ for $x=0.11$ and $x=0.15$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.25]{Fig_new_1.eps}
\caption{ The properties of as-prepared crystals of FeSe${}_{1-x}$Te${}_{x}$ with $x$=0.055, 0.11, and 0.15. (a)~Temperature dependence of the resistivity $\rho_{xx}$. (b)~Temperature dependence of the resistivity $\rho_{xx}$ at low temperatures. (c)~Temperature dependence of the Hall constant $R_{H}$. (d)~Temperature dependencies of the magnetic susceptibility $\chi$ in an applied field of 10 kOe. (e)~ZFC and FC magnetic susceptibility $\chi$ in an applied field of 30 Oe. (f)~Magnetic field dependence of the Hall resistivity $\rho_{xy}$. (g)~Magnetoresistance $MR$=($\rho_{xx}$(B)-$\rho_{xx}$(0))/$\rho_{xx}$(0) versus $B^{2}$ at 15~K. (h)~Temperature dependence of the derivative of the resistivity $d\rho_{xx}/dT$. (i)~Temperature dependence of the derivative of the resistivity $d\rho_{xx}/dT$ between 40~K and 120~K.}
\label{fgr:fig1}
\end{figure}
The temperature dependencies of the magnetic susceptibility for compositions $x=0.11$ and $x=0.15$ have humps in the temperature range 100-200~K (Fig.\ref{fgr:fig1}, d), which may indicate the presence of a hexagonal phase. This is most likely because the synthesis temperature of these compositions was higher than the stability limit of the hexagonal phase for FeSe. Nevertheless, based on the values of the susceptibility of the hexagonal phase, the amplitudes of the humps indicate that the content of the hexagonal phase is negligible \cite{ovchenkov2019nematic}. Thus, we can assume that the susceptibility data indicate a high quality of the synthesized crystals.
The ZFC-FC curves for the samples, composed of several plate crystals oriented parallel to the magnetic field (Fig.\ref{fgr:fig1}, e), show a full Meissner effect, although the systematic difference between the transition temperatures determined from magnetic and transport measurements may indicate a complex microstructure of the crystals.
Similar to FeSe, the field dependencies of the Hall component of resistivity $\rho_{xy}$ have a nonlinear form at low temperatures, which indicates the presence of mobile carriers \cite{SUST-30-3-035017}. As can be seen from Fig.\ref{fgr:fig1} f), the nonlinearity is noticeably suppressed with increasing tellurium content, which agrees with a significant decrease in the carrier mobility in these compositions.
The field dependencies of the magnetoresistance $MR$ (Fig.\ref{fgr:fig1} g) confirm a significant change in the carrier mobility. The slope of the $MR(B^{2})$ dependencies for the compounds under study is, with fairly good accuracy, equal to the square of the carrier mobility \cite{2017ovchenkovMISM}. Thus, a change in the magnitude of the magnetoresistance $MR$ in a field of 7 T from 33\% to 5\% corresponds to a change in the mobility by a factor of about two and a half. This means that the studied compositions are close to the crossover from bad to good metal, which occurs in the 11 and some other series of IBS \cite{PhysRevB.88.094501}.
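To make this estimate explicit: the quadratic $MR(B^{2})$ dependencies correspond to the low-field single-band form $MR \approx (\mu B)^{2}$, so that $\mu \approx \sqrt{MR}/B$ and, assuming the quadratic law holds up to 7~T,
\[
\frac{\mu_{\max}}{\mu_{\min}} \approx \sqrt{\frac{MR_{\max}}{MR_{\min}}} = \sqrt{\frac{0.33}{0.05}} \approx 2.6 .
\]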
The resistivity of the samples is also sensitive to the transition from the high-temperature tetragonal phase to the low-temperature orthorhombic phase. The temperature dependence of the derivative of the resistivity $d\rho_{xx}/dT$ exhibits anomalies in the region of the structural transition (Fig.\ref{fgr:fig1} h and \ref{fgr:fig1} i). The appearance of these anomalies makes it possible, in particular, to monitor the phase separation in the samples. It is known that the phase separation in FeSe${}_{1-x}$Te${}_{x}$ with a low tellurium content occurs into two tetragonal phases with different lattice parameters \cite{zhuang2014}. As seen from Fig.\ref{fgr:fig1} i), the anomaly for the studied compositions has a split form. This shows the presence of two distinct phases with different temperatures of the structural transition. It can be noted that while the position of the higher-temperature anomaly differs between the samples, the position of the low-temperature anomaly is the same for all samples.
The crystals of FeSe${}_{1-x}$Te${}_{x}$ are often non-uniform at a microscopic level \cite{PROKES2015}. The synthesis temperature is usually reduced at low tellurium content because of the shift of the stability boundary of the hexagonal phase. With a decrease in the synthesis temperature, a deterioration in the homogeneity of the tellurium distribution can be expected. We initially assumed that the phase separation at low $x$ could be due to a non-uniform distribution of tellurium and the formation of clusters with high tellurium content. We aimed to find optimal heat treatments that would allow a more uniform distribution of tellurium to be achieved. We expected to see a difference introduced by quenching, and we expected that a long heat treatment at an intermediate temperature would enhance the phase separation. However, the results obtained do not confirm the original assumptions.
We found that heat treatment at a temperature close to the synthesis temperature already suppresses the second phase quite effectively. Fig.\ref{fgr:fig2} a) shows the $d\rho_{xx}/dT$ curves for the as-prepared crystal and two crystals heat-treated at 445~$^{\circ}$C. Neither heat-treated sample shows the low-temperature anomaly on $d\rho_{xx}/dT$, although quenching changes the shape of $\rho_{xx}(T)$ near the phase transitions, as can be seen from Fig.\ref{fgr:fig2} a) and Fig.\ref{fgr:fig2} c). In general, the heat treatment had a rather slight effect on $\rho_{xx}(T)$ (Fig.\ref{fgr:fig2} c). At the same time, the decrease in the low-temperature mobility by about 20-25\% (see the values of MR plotted in Fig.\ref{fgr:fig2} d) indicates a possible degradation of the microstructure of the sample in both heat treatments.
\begin{figure}[h]
\centering
\includegraphics[scale=0.25]{Fig_new_2.eps}
\caption{The properties of as-prepared and heat-treated crystals of FeSe${}_{0.945}$Te${}_{0.055}$. (a)~Temperature dependence of the derivative of the resistivity $d\rho_{xx}/dT$ between 40~K and 120~K. (b)~Temperature dependence of the resistivity $\rho_{xx}$ at low temperatures. (c)~Temperature dependence of the resistivity $\rho_{xx}$ normalized at 300~K. (d)~Magnetoresistance $MR$=($\rho_{xx}$(B)-$\rho_{xx}$(0))/$\rho_{xx}$(0) versus $B^{2}$ at 15~K. }
\label{fgr:fig2}
\end{figure}
After heat treatment at 445~$^{\circ}$C, we carried out heat treatment at 150~$^{\circ}$C for two weeks. Our measurements did not reveal any changes in transport properties after the second heat treatment. However, we found that the phase composition can change during long-term storage at room temperature. For example, Fig.\ref{fgr:fig3} shows the $d\rho_{xx}/dT$ for the sample FeSe${}_{1-x}$Te${}_{x}$ with $x = 0.11$ prepared immediately after synthesis and two samples studied after long-term storage in an evacuated quartz ampoule.
Thus, the results obtained indicate that the main effect on the phase separation could have been partial oxidation of the sample, which, as is well established \cite{Sun_2019}, effectively removes excess iron. On the other hand, variations in the iron stoichiometry are possible in all compositions of the 11 series. Therefore, there must be some additional factor causing phase separation in the investigated range of compositions. Below, we discuss one of the possible reasons for the structural instability of the iron plane in the FeSe${}_{1-x}$Te${}_{x}$ compositions with low tellurium content.
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{Fig_new_3.eps}
\caption{Temperature dependence of the derivative of the resistivity $d\rho_{xx}/dT$ between 40~K and 120~K for FeSe${}_{0.89}$Te${}_{0.11}$ sample prepared immediately after synthesis and two samples prepared after one year of storage in an evacuated quartz ampoule.}
\label{fgr:fig3}
\end{figure}
\section{Discussion}
For the iron-based superconductors, several types of electronic instability have been identified that may be related to superconductivity in these compounds \cite{PhysRevLett.101.057003}. Structural instabilities can also be expected at certain values of the lattice parameters, which should be investigated in coordinate space. An effective way to consider the properties of compounds in real space is the molecular orbital (MO) method.
In the iron plane of IBS, there is a direct $d$-$d$ exchange between neighboring iron atoms, which can be considered separately from iron-chalcogen or iron-pnictogen bonds. The degree of participation of the direct $d$-$d$ exchange in the total energy of the chemical bond is measured by the dispersion of the corresponding bands, which is usually 3-4 eV for the IBS.
A simple illustrative model of the direct $d$-$d$ exchange is the Bethe-Slater curve, which describes the change in the sign of the exchange between neighboring $d$ ions depending on the distance between them. In body-centered cubic Fe the exchange has different signs for the $e_{g}$ and $t_{2g}$ orbitals \cite{PhysRevLett.116.217202} and is strongly negative for the latter. In FeSe${}_{1-x}$Te${}_{x}$ the Fe-Fe distance is in the range 2.6-2.7~{\AA} \cite{ivanov2016local}, which is slightly larger than the value of 2.5~{\AA} for body-centered cubic Fe. Thus, the plane of iron atoms in FeSe${}_{1-x}$Te${}_{x}$ can be close to the condition for a change in the sign of the exchange for some orbitals.
A change in the sign of the exchange implies some equilibrium between tensile and compressive strain for the $d$ orbitals in the plane of iron atoms. At this point, the triplet and singlet $d$ orbitals are degenerate, which should lead to a significant rearrangement of the electronic structure in its vicinity. This must be a point of instability for the electronic structure. We propose that a change in the sign of the exchange can account for the quantum criticality observed in FeSe \cite{PhysRevB.97.201102}, the anomalous behavior of FeSe in ultrasonic experiments \cite{epl_0295-5075-101-5-56005, fil2013piezomagnetism}, and the large elastoresistance effect \cite{Chu710}.
The low polarity of the iron-chalcogen bond may be an important component of the electronic instability in series 11. Low values of the Coulomb contribution to the crystal-field energy, as well as close-to-zero values of the Madelung energy, facilitate the charge redistribution which should occur during changes in the electronic structure. In our opinion, the inversion of the majority carriers is a consequence of these changes in the electronic structure near the inversion point of the direct $d$-$d$ exchange.
Crystals of FeSe$_{1-x}$Te$_{x}$ with a high tellurium content usually grow with an excess of iron, while crystals of FeSe$_{1-x}$S$_{x}$ usually have an iron deficiency. Thus, we can formally assume that the energy corresponding to the addition of one iron atom to the structure changes its sign along the 11 series. Then there must be an equilibrium point at which adding iron to the structure costs zero energy. In our opinion, near this point, phases with different values of the iron stoichiometry can exist simultaneously.
\section{Conclusion}
Materials with electronic instability can be of interest for various fields of application, for example, for piezoresistive and thermoelectric applications. It is interesting to consider ways to enhance the orbital degeneracy effect and the electronic instability in FeSe. The substitution of elements can have limited success because the introduced disorder removes the degeneracy and increases the crystal field. Optimal doping can probably be achieved by the addition of intercalating layers, which can also suppress the structural instability.
\section{Acknowledgments}
This work has been supported by the Russian Foundation for Basic Research through Grant 20-02-00561A. OSV acknowledges financial support by the Russian Science Foundation through project 19-42-02010. ESK and ANV acknowledge financial support by the Megagrant 2020-220-08-6358 of the Government Council on Grants of the Russian Federation.
\section*{References}
\bibliographystyle{unsrt}
\section{Introduction}
\label{secIntro}
Hard spheres (and hard disks in two dimensions) interacting only through an infinite hard-core potential form the simplest interacting models for a classical fluid. Such athermal models have received a lot of attention in the last decades, both because they allow us to investigate the purely entropic effects on the thermodynamics of fluids and because of their appealing role in the modeling of granular systems \cite{Applications}. In the monodisperse case, a first-order fluid-solid transition has been observed in numerical simulations of hard spheres \cite{Wood,Alder,Hoover}. For binary mixtures of particles with dissimilar sizes, several solid phases may arise [see, e.g, \cite{Eldridge} and refs. therein]. Moreover, beyond the freezing transition, the existence of solid-solid and fluid-fluid demixing transitions is also expected. For nonadditive mixtures of hard particles a demixing is expected because particles fill the space more effectively when separated into pure phases \cite{Widom,Melnyk,Santos,Louis}. Numerical simulations of binary nonadditive mixtures of hard spheres have indeed confirmed this \cite{Louis,Dijkstra}. In the case of additive mixtures the scenario is a bit more controversial, with the possibility of fluid-fluid phase separation being demonstrated in some recent analytical treatments, provided that the particles' sizes are dissimilar enough \cite{BH91,Lekkerkerker932}, in contrast with older analytical approaches ruling out the demixing \cite{LR64}. However, in general, fluid-solid or solid-solid coexistence can preempt the fluid-fluid transition [see, e.g, \cite{LafuenteCuesta,LafuenteCuesta2} and refs. therein].
When defined on a lattice, hard spheres (and disks) are approximated by $k$NN particles, i.e, particles which forbid the occupation of up to their $k$th nearest neighbor (NN) sites. The $0$NN case consists of point particles which do not interact and, so, do not undergo any transition. On the other hand, pure $k$NN models with $k\geqslant 1$ are known to present fluid-solid transitions. For instance, the 1NN case on the triangular lattice is the famous hard-hexagon model exactly solved by Baxter \cite{baxterHH,baxterBook}, whose continuous fluid-solid transition belongs to a universality class different from the Ising one found for this model on the square lattice \cite{GuoBlote}. On the cubic lattice, 3D Ising exponents have been found for the 1NN model \cite{Yamagata,HB}. While recent works indicate that the transition in the 2NN model on the square lattice is continuous, its universality class is still a subject of debate (see e.g. \cite{Heitor,Blote2NN} and references therein). On the cubic lattice, a discontinuous fluid-solid transition has been found for this model both in numerical simulations \cite{Panagiotopoulos} and in mean-field approximations \cite{Nathann19}. For larger $k$'s, discussions on the behavior can be found in Refs. \cite{Heitor,Rajesh} for the square and \cite{Panagiotopoulos} for the cubic lattice.
As an aside, let us notice that entropy-driven transitions have also been investigated for other particle shapes, such as cubes \cite{Rajeshcubes}, dimers \cite{dimers}, rectangles \cite{rectangles}, rods \cite{rods}, triangles \cite{Nienhuistri}, Y-shaped particles \cite{RajeshY}, etc., on different lattices. Some binary mixtures of hard lattice gases with isotropic \cite{Dijkstra,Schmidt,Brader,frenkel0nn1nn,frenkelcubic1,frenkelcubic2,Dickman95} and anisotropic particles \cite{Roiji,Wensink,Dubois,Varga,Mederos,Schmidt02,Heras,Jurgenrods} have also been analyzed, which in several cases exhibit fluid-fluid or solid-solid demixing transitions.
Curiously, however, mixtures of $k$NN particles are far less explored. For instance, the 0NN-1NN case has been investigated on the square lattice by means of series expansions \cite{poland} and transfer matrix calculations \cite{Jim01,tiago15}. These later studies revealed a grand-canonical phase diagram featuring a fluid and a solid phase separated by a continuous and a discontinuous transition line, which meet at a tricritical point. The same scenario was found in a mean-field solution of this model on the Bethe lattice \cite{tiago11}. Interestingly, this model displays a thermodynamic anomaly characterized by minima in isobaric curves of the total density of particles \cite{tiago15,tiago11}. On the triangular lattice, some authors \cite{Lekkerkerker93,Lekkerkerker95} claimed to have numerically found a fluid-fluid transition, yielding three stable (gas-liquid-solid) phases for the 0NN-1NN mixture. However, evidence against the fluid-fluid demixing has been presented in other works \cite{frenkel0nn1nn,Nienhuis}. Particularly, in Ref. \cite{Nienhuis} strong numerical evidence for a phase diagram similar to the one just discussed for the square lattice was presented.
On the cubic lattice, to the best of our knowledge, only the 0NN-2NN mixture has been considered so far, via a semi-analytical (mean-field) solution of the model on a Husimi lattice built with cubes \cite{Nathann19}. Interestingly, a stable fluid-fluid transition was found in this system, so that three stable phases are present in its grand-canonical phase diagram: two fluids (one regular and the other characterized by a dominance of point particles) and a solid phase. These phases are separated by first-order transition lines which meet at a triple point, whilst the fluid-fluid coexistence line ends at a critical point. A density anomaly similar to the one found in the square lattice 0NN-1NN model is also present in this system. In view of the scarcity of binary systems presenting fluid-fluid transitions, it is very important to extend the existing studies to other mixtures of $k$NN particles, in order to determine the conditions (e.g, the difference in particle size) necessary for its onset. Moreover, binary mixtures of hard particles are certainly more appealing for the modeling of real fluids or granular matter than the monocomponent case - for instance, binary mixtures of colloidal hard spheres have been widely studied \cite{binarycolloids} -, which also justifies further investigations of these systems.
\begin{figure*}[t]
\includegraphics[width=14.0cm]{Fig1.png}
\caption{(a) Illustration of part of a Husimi lattice built with cubes. The ground states of the solid $S1$ (b) and $S2$ (c) phases on the cubic lattice. Definitions of sublattices in an elementary cube used to solve the 0NN-1NN (d) and 1NN-2NN (e) mixtures. (f) Possible states of the root site, where 0NN, 1NN and 2NN particles are indicated by an open square, a full square and a circle, respectively.}
\label{fig1}
\end{figure*}
Here, we address this by analyzing the 0NN-1NN and 1NN-2NN mixtures defined on the simple cubic lattice. From semi-analytical grand-canonical solutions of these models on Husimi lattices built with cubes (see Fig. \ref{fig1}a), we demonstrate that no fluid-fluid demixing occurs in such mixtures, in contrast with the 0NN-2NN one \cite{Nathann19}. In fact, the thermodynamic behavior of the 0NN-1NN model is qualitatively the same as described above for its counterpart on the square lattice. For the 1NN-2NN mixture, a solid-solid demixing is found between the ordered phases characteristic of the 1NN and 2NN particles, while a single fluid phase is present in the system. These three phases give rise to a very rich phase diagram, with continuous and discontinuous transition lines, a tricritical and a triple point.
The rest of the paper is organized as follows. In Sec. \ref{secModel} we define the models and devise their solutions on the Husimi lattice built with cubes. The thermodynamic properties of the 0NN-1NN and 1NN-2NN mixtures are presented in Secs. \ref{secRes0NN1NN} and \ref{secRes1NN2NN}, respectively. In Sec. \ref{secConc} our final discussions and conclusions are summarized. Some details on the model solutions are presented in the appendix.
\section{Models and their solution on a Husimi lattice built with cubes}
\label{secModel}
We consider lattice gases consisting of binary mixtures of hard particles placed on (and centered at) the vertices of a cubic lattice. Assuming that the lattice spacing is $a$, the small particles (0NN) are cubes of lateral size $\lambda=a$, which occupy a single lattice site and do not exclude their neighbors. On the other hand, the larger 1NN (2NN) particles are cubes of lateral size $\lambda = \sqrt{2}a$ ($\lambda = \sqrt{3}a$) placed on the lattice in a way that they exclude their first (first and second) nearest neighbors. An activity $z_k$ is associated with each $k$NN particle. The mixture 0NN-2NN ($z_1=0$ case) was already investigated by us in \cite{Nathann19}, so here we will focus on the cases $z_2=0$ and $z_0=0$, corresponding respectively to the mixtures 0NN-1NN and 1NN-2NN.
Let us remark that while the pure 0NN system ($z_1=z_2=0$) can be trivially solved and does not display any transition, the pure 1NN ($z_0=z_2=0$) and 2NN ($z_0=z_1=0$) models on the cubic lattice are known to undergo a continuous and a discontinuous transition, respectively, from disordered fluid phases to ordered solid ones \cite{Panagiotopoulos}. As illustrated in Figs. \ref{fig1}b and \ref{fig1}c, both solid phases are characterized by a sublattice ordering. In the 1NN solid ($S1$) phase one of two sublattices is preferentially occupied and in the ground state (the full occupancy limit) the density per site of 1NN particles is $\rho_1=1/2$. In the 2NN solid ($S2$) phase one of four sublattices is more occupied and the maximum density of 2NN particles is $\rho_2=1/4$. Therefore, to investigate the 0NN-1NN mixture one has to divide the lattice into two sublattices ($A$ and $B$), as shown in Fig. \ref{fig1}d. For the 1NN-2NN system, eight sublattices (as defined in Fig. \ref{fig1}e) are needed in order to capture the symmetries of both the $S1$ and $S2$ phases.
Following our previous study of the 0NN-2NN mixture \cite{Nathann19}, instead of investigating the models on the simple cubic lattice, we will solve them on a Husimi lattice built with cubes (see Fig. \ref{fig1}a). Let us note that the solution of a given model in the core of a Cayley tree of coordination $q$ usually corresponds to the (mean-field) Bethe approximation for this model on a regular lattice with the same coordination, and for this reason this is known as the \textit{Bethe lattice} \cite{baxterBook}. Since the Cayley tree (and so the Bethe lattice) has no loops, an improved approximation can be obtained by building such a tree with polygons or polyhedrons. The core of this cactus is the \textit{Husimi lattice} \cite{Husimi}. We remark that solutions on these lattices usually provide the qualitatively correct thermodynamic behavior of the investigated models, as is indeed the case for the 0NN-1NN mixture on the square \cite{Jim01,tiago15} and Bethe lattices \cite{tiago11}. Moreover, for certain lattice gases even a quantitative agreement with Monte Carlo simulation results on regular lattices has been observed \cite{Buzano,tiago10}.
To solve the $k$NN mixtures on Husimi lattices, we proceed as usual by defining partial partition functions (ppf's) according to the state of the root site of an elementary cube. As shown in Fig. \ref{fig1}f, in general, for each sublattice the root site can be empty ($j=\varnothing$), occupied by a 0NN ($j=0$), a 1NN ($j=1$) or a 2NN particle ($j=2$). In the 0NN-1NN model only $j=\varnothing$, $0$ and $1$ are allowed and, since there are two sublattices, one has a total of six possible states for the root site (and, so, six ppf's). The situation worsens in the 1NN-2NN mixture, where $j=\varnothing$, $1$ and $2$ are allowed and there are eight sublattices, totaling twenty-four states.
Let us now consider the operation of attaching seven cubes to the vertices of another cube, with exception of the root site. This gives us a 1-generation subtree. Repeating this process by attaching the root sites of seven subtrees to the vertices of another new cube, we can build a $(M+1)$-generation subtree from seven $M$-generation ones. So, if we keep the root site of the new cube in a given state $s$ and sublattice $g$, and sum over all the possible ways of attaching the seven $M$-generation subtrees to it, we obtain the ppf $g'_s$ in generation $(M+1)$ as a function of the ppf's $g_j$ in generation $M$, with $s,j=\varnothing,0,1$ and $g=a,b$ in the 0NN-1NN case and $s,j=\varnothing,1,2$ and $g=a_1,a_2,\ldots,d_1,d_2$ in the 1NN-2NN model. Therefore, we obtain six (twenty-four) recursion relations (RRs) for the ppf's of the 0NN-1NN (1NN-2NN) mixture. These RRs are presented in the appendix. Since they diverge in the thermodynamic limit ($M \rightarrow \infty$), we will work with ratios of them. From the RRs for the ppf's, we may define four (sixteen) RRs for the ratios in the 0NN-1NN (1NN-2NN) model, whose definitions are also presented in the appendix. The stable (real and positive) fixed points of these last RRs give us the stable thermodynamic phases of the models. To determine the stability limits of a given phase, we calculate the Jacobian matrix at the related fixed point. Wherever the largest eigenvalue ($\Lambda$) of this matrix is smaller than 1, the fixed point, and hence the corresponding thermodynamic phase, is stable. The condition $\Lambda = 1$ defines the stability limits.
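To illustrate this procedure numerically, the short Python sketch below performs a damped fixed-point iteration and the Jacobian stability check for a deliberately simplified stand-in: the two-sublattice Bethe-lattice recursion for the pure 1NN gas with coordination $q=6$, whose fluid fixed point loses stability at $z_{1,c}=5^5/4^6\simeq 0.7629$. The map \texttt{rr}, the damping parameter and the starting points are illustrative choices; replacing \texttt{rr} with the Husimi-cube RRs of the appendix, the same procedure yields the value $z_{1,c}=0.8298$ quoted in the next section.
\begin{verbatim}
import numpy as np

Q = 6  # coordination (Bethe approximation to the cubic lattice)

def rr(x, z):
    # One iteration of the two-sublattice Bethe recursion for the
    # pure 1NN gas; x = (A, B) are ratios of occupied to empty ppf's.
    A, B = x
    return np.array([z / (1.0 + B)**(Q - 1),
                     z / (1.0 + A)**(Q - 1)])

def fixed_point(z, x0, damp=0.3, tol=1e-13, it=200000):
    # Damped iteration; damping keeps the symmetric (fluid) branch
    # reachable even where the bare map would oscillate.
    x = np.asarray(x0, float)
    for _ in range(it):
        xn = (1.0 - damp) * x + damp * rr(x, z)
        if np.max(np.abs(xn - x)) < tol:
            break
        x = xn
    return x

def biggest_eig(z, x, eps=1e-7):
    # Largest |eigenvalue| of the Jacobian of the bare map at x.
    J = np.empty((2, 2))
    for j in range(2):
        d = np.zeros(2)
        d[j] = eps
        J[:, j] = (rr(x + d, z) - rr(x - d, z)) / (2.0 * eps)
    return np.max(np.abs(np.linalg.eigvals(J)))

for z in (0.70, 0.82):  # below and above z_c = 5**5/4**6 ~ 0.7629
    fluid = fixed_point(z, (0.1, 0.1))    # symmetric start
    solid = fixed_point(z, (0.6, 0.01))   # broken-symmetry start
    print(z, fluid, biggest_eig(z, fluid), solid, biggest_eig(z, solid))
\end{verbatim}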
The partition function ($Y$) of the models can be obtained, similarly to the ppf's, by attaching the root sites of eight subtrees to the eight vertices of a central cube. For the 0NN-1NN mixture, it can be written in a compact form, e.g, as
\begin{equation}
Y = a_{\varnothing} a'_{\varnothing} + z_0 a_{0} a'_{0} + z_1 a_{1} a'_{1} = (a_\varnothing b_\varnothing)^4 y,
\label{eqY}
\end{equation}
where $y$ depends only on the activities and the ratios $A_j=a_j/a_{\varnothing}$ and $B_j=b_j/b_{\varnothing}$, with $j=0,1$ (see the appendix) and, so, it attains a constant value in the thermodynamic limit. Using Eqs. \ref{RRs0NN1NN} in the appendix to write down the expanded expression for $y$, the densities of small and large particles at the central cube, in sublattice $A$, are given respectively by
\begin{equation}
\rho_0^{(A)} = \frac{A_0}{8Y} \frac{\partial Y}{\partial A_0} \quad \text{and} \quad \rho_1^{(A)} = \frac{A_1}{8Y} \frac{\partial Y}{\partial A_1}.
\label{eqDensities}
\end{equation}
By replacing $A$ with $B$ one obtains the densities in sublattice $B$. Then, $\rho_j = \rho_j^{(A)} + \rho_j^{(B)}$ gives the total density of small ($j=0$) and large ($j=1$) particles at the central cube in the 0NN-1NN mixture. We can obtain $Y$ and the densities in a similar fashion for the 1NN-2NN system, as detailed in the appendix.
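A minimal consistency check of Eq. (\ref{eqDensities}) can be made in the trivial non-interacting limit, where only 0NN particles are present, the ratios reduce to $A_0=B_0=z_0$ and (up to a normalization that cancels in the ratio) the central-cube partition function is assumed to factorize as $Y=(1+z_0)^8$. Since $z_0$ then enters all eight factors, the derivative yields the total density $\rho_0=\rho_0^{(A)}+\rho_0^{(B)}$, and the sympy snippet below recovers the exact result for the ideal lattice gas:
\begin{verbatim}
import sympy as sp

z0 = sp.symbols('z0', positive=True)
Y = (1 + z0)**8  # toy Y: eight decoupled cube vertices
rho0 = sp.simplify(z0 / (8 * Y) * sp.diff(Y, z0))
print(rho0)      # z0/(z0 + 1), exact for the pure 0NN gas
\end{verbatim}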
The bulk free energy (per site) of each phase of the models can be calculated following the ansatz proposed by Gujrati \cite{Gujrati}, and discussed in detail for Husimi lattices built with cubes in Ref. \cite{Nathann19}. For the 0NN-1NN model it reads
\begin{equation}
\phi_b = -\frac{1}{8} \ln \left[ \frac{\left( A_{\varnothing} B_{\varnothing} \right)^4}{y^{6}} \right],
\end{equation}
where $A_\varnothing=a'_\varnothing/a_\varnothing^3 b_\varnothing^4$ and $B_\varnothing=b'_\varnothing/a_\varnothing^4 b_\varnothing^3$. The equivalent expression for the 1NN-2NN mixture is presented in the appendix. In general, in regions of the parameter space where two or more phases are stable, the equality of their free energies will define the points, lines, or surfaces of coexistence (where the first-order transitions take place). Since each lattice site occupies a volume $v_0=a^3$, the pressure (in our grand-canonical formalism) is given by $P=-\phi_b/a^3$.
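As a simple sanity check of this identification, in the trivial pure-0NN limit the model reduces to an ideal lattice gas with per-site grand partition function $1+z_0$, so that
\begin{equation}
\phi_b = -\ln(1+z_0) \quad \text{and} \quad P a^3 = \ln(1+z_0) = -\ln(1-\rho_0),
\end{equation}
which is the exact equation of state of the ideal lattice gas.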
\section{Thermodynamic behavior of the 0NN-1NN mixture}
\label{secRes0NN1NN}
Before discussing the 0NN-1NN system, it is interesting to analyze the pure 1NN lattice gas. Since $z_0=0$ in this case, one has $A_0=A_1=A$ and $B_0=B_1=B$, so that only two recursion relations (RRs, for $A$ and $B$) have to be analyzed (see the appendix). For small activity ($z_1 \leq 0.8298$) only a homogeneous, disordered fluid phase with $A=B$ is stable, while for large activity ($z_1 \geq 0.8298$) there are two equivalent fixed points, where $A=R>B=r$ or $A=r<B=R$. These last ones correspond to the two possible states of the ordered solid ($S1$) phase. Since the stability limits of both the fluid and $S1$ phases coincide at $z_{1,c}=0.8298$, this is a critical point and there is a continuous fluid-solid transition in this model. This is consistent with several numerical studies of the 1NN model on the cubic lattice, where a critical point located at $z_1 \approx 1.05$ has been found \cite{HB,Yamagata,Panagiotopoulos}, which is a bit larger than the value from our mean-field approximation, as expected. In agreement with this, the critical density in the cubic lattice is given by $\rho_{1,c} \approx 0.21$ \cite{Panagiotopoulos,Gaunt}, while here one finds the slightly smaller value $\rho_{1,c} = 0.1762$.
By including the 0NN particles in the system, we observe that $A_0\neq A_1$ and $B_0\neq B_1$, so that now we have to deal with four RRs for the ratios. In the fluid phase $A_j=B_j$, while in the solid phase $A_j > B_j$ when sublattice $A$ is the more populated one, for $j=0,1$. For small activity (and so density) of point particles, one still finds a continuous fluid-solid transition, which becomes discontinuous for large $z_0$. Namely, for small $z_0$ the stability limits of the fluid and solid phases are coincident, giving rise to a critical line, but for $z_0 > z_{0,TC}=0.5958$ (and $z_1 > z_{1,TC}=1.1277$) they become different, yielding a coexistence region (see Fig. \ref{fig2}a). From the equality of the free energies of both phases, one finds a first-order transition line which starts at $(z_{0,TC},z_{1,TC})$ and extends to $z_0,z_1 \rightarrow \infty$. In this limit, the fixed point of the $S1$ phase is given, e.g, by $A_0=A_1=1$ and $B_0=B_1=0$, whilst the $F$ phase is characterized by $A_0=B_0= 1$ and $A_1=B_1=0$. By calculating the free energy with these limiting values, we find that the $F$-$S1$ coexistence line is given by
\begin{equation}
z_1 \simeq z_0^2 + z_0,
\end{equation}
for large $z_0$ and $z_1$. Therefore, for $z_0 \rightarrow \infty$ we have $z_1 \approx z_0^2$, which is consistent with the fact that effectively one 1NN particle occupies the volume of two 0NN ones.
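The leading behavior can also be understood from a simple close-packing estimate (a heuristic, not the full fixed-point calculation): at full occupancy the fluid phase has one 0NN particle per site, while the $S1$ phase has one 1NN particle per two sites, so that the free energies per site reduce to
\begin{equation}
\phi_b^{(F)} \simeq -\ln z_0 \quad \text{and} \quad \phi_b^{(S1)} \simeq -\frac{1}{2}\ln z_1,
\end{equation}
whose equality yields $z_1 \simeq z_0^2$ at leading order; the subleading term in the coexistence line above requires the first corrections to these limits.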
\begin{figure}[!t]
\includegraphics[width=8.cm]{Fig2a.pdf}
\includegraphics[width=8.cm]{Fig2b.pdf}
\includegraphics[width=8.cm]{Fig2c.pdf}
\caption{Phase diagrams for the 0NN-1NN mixture in (a) activities ($z_1,z_0$), (b) densities ($\rho_0,\rho_1$) and (c) pressure-composition ($P,x_0$) spaces. In all panels, the solid (blue) and dashed (red) lines indicated the continuous and discontinuous transitions loci, respectively. The dash-dotted (green) lines are the LMDs. The tricritical point (TC) is indicated by the circle. The dotted lines in (a) are the spinodals, while in (b) they are tie lines. The inset in (a) highlights the region around the minimum in the critical line and the TC point.}
\label{fig2}
\end{figure}
The critical line starts at $(z_{0},z_{1})=(0, 0.8298)$ and tangentially meets the coexistence line at $(z_{0,TC},z_{1,TC})$, demonstrating that this is a tricritical point. In contrast with the coexistence line, which is a monotonically increasing function of $z_0$ [in the ($z_0,z_1$) space], the critical line initially decreases with $z_0$, having a slope $d z_1/d z_0|_{z_0 \rightarrow 0} = -1$. Then, it passes through a minimum at $(z_{0},z_{1})=(0.1974, 0.7284)$ and finally increases towards the tricritical point. See Fig. \ref{fig2}a. Such an initial decrease of the critical line shows that a small density of small particles facilitates the ordering of the large ones, which is certainly due to an effective attractive depletion interaction among the 1NN particles. We remark that a very similar behavior has been found for small $z_0$'s in the solution of the 0NN-2NN mixture on a Husimi lattice built with cubes \cite{Nathann19}, although in that case the fluid-solid transition is discontinuous. Moreover, in studies of the 0NN-1NN model on the square \cite{Jim01,tiago15} and Bethe lattices \cite{tiago11} a minimum in the critical fluid-solid transition line has also been found. Actually, the full phase diagram for the 0NN-1NN mixture on these lattices is qualitatively identical to the one found here, with a continuous and a discontinuous transition line meeting at a tricritical point \cite{Jim01,tiago15,tiago11}.
This is true also for the diagram in density space, as displayed in Fig. \ref{fig2}b, where the critical line starts at $(\rho_{0},\rho_{1})=(0,0.1762)$ and ends at the tricritical point, which is located at $(\rho_{0,TC},\rho_{1,TC})=(0.2190,0.0761)$. There, one can see that in general the solid phase is characterized by a small density of 0NN particles ($\rho_0 \leq \rho_{0,TC}$), whereas the opposite happens in the fluid phase (where $\rho_1 \leq 0.1762$). For large $z_0$ (and $z_1$), using the asymptotic behaviors discussed above for the RRs and the coexistence line, it is straightforward to demonstrate that $\rho_0^{(S1)} \approx \frac{1}{2z_0}$ and $\rho_1^{(S1)} \approx \frac{1}{2} - \frac{1}{2z_0}$ in the solid phase, while $\rho_0^{(F)} \approx 1 - \frac{1}{z_0}$ and $\rho_1^{(F)} \approx \frac{1}{z_0^5}$ in the fluid one. Thus, as $z_0 \rightarrow \infty$ one obtains the expected densities in the full occupancy limit: $\rho_0^{(S1)} \rightarrow 0$ and $\rho_1^{(S1)} \rightarrow 1/2$, and $\rho_0^{(F)} \rightarrow 1$ and $\rho_1^{(F)} \rightarrow 0$, as indeed observed in Fig. \ref{fig2}b.
The fluid-solid demixing transition is also clearly observed in the pressure ($P$) versus composition ($x_0$) phase diagram displayed in Fig. \ref{fig2}c. Here, the molar fraction of 0NN particles was defined as $x_0=\rho_0/\rho_T$, where $\rho_T = \rho_0 + 2\rho_1$ is the total density of particles. In this phase diagram one sees that for small pressures ($P < 0.3219$) only the fluid phase is stable, regardless of the composition $x_0$. For larger $P$, up to the tricritical point, which is located at $(x_{0,TC},P_{TC})=(0.5898,0.5258)$, there is a continuous $F$-$S1$ transition line. For $P>P_{TC}$ a coexistence region is observed. We notice that in the limit of large $P$ (corresponding to large $z_0$ and $z_1$) the molar fractions in the fluid and solid phases tend to $x_0=1$ and $x_0=0$, respectively. In this limit, it is simple to demonstrate that $P$ diverges as $P^{(F)} \sim -\ln (1-x_0)$ and $P^{(S1)} \sim -\ln(x_0)$.
Finally, let us notice that, similarly to what happens in the 0NN-1NN mixture on the square lattice, a density anomaly is also observed in our cubic approximation, characterized by minima in the isobaric curves of the total density of particles $\rho_T$ against $z_0$ (or $z_1$), a behavior analogous to the ones displayed, e.g, in Figs. 5a and 6 from Refs. \cite{Nathann19} and \cite{tiago15}, respectively. The ($z_0,z_1$) coordinates of these minima give rise to a line of minimum density (LMD), which is also shown in the phase diagram of Fig. \ref{fig2}a. Such a line starts inside the fluid phase at $z_0 \approx 0.2$ as $z_1 \rightarrow 0$, a result consistent with the one analytically predicted in the solution of the model on the Bethe lattice \cite{tiago11}, where the LMD was found to start at $z_0=1/(q-1)$, with $q$ being the lattice coordination ($q=6$, here). Then, the LMD crosses the stable and metastable fluid regions, ending at the spinodal of this phase. The LMDs in other thermodynamic variables are also depicted in Figs. \ref{fig2}b and \ref{fig2}c, where the same behavior is observed, as expected. We remark that such behavior is somewhat different from that found on the square lattice, where the LMD seems to end at the tricritical point \cite{tiago11,tiago15}. On the other hand, it is similar to the LMD found for the 0NN-2NN mixture on the Husimi lattice built with cubes, which also ends inside the region where the fluid is metastable \cite{Nathann19}, indicating that this can be a general feature of the LMDs of such mixtures on the cubic lattice. Regardless of the location of their end points, in all these systems the LMDs divide the fluid phase into two regions: a regular one, for small $z_0$, $\rho_0$ and $x_0$, where $\partial \rho_T/\partial z_0 <0$, as expected; and the anomalous region, for large $z_0$, $\rho_0$ and $x_0$, where $\partial \rho_T/\partial z_0 >0$.
\section{Thermodynamic behavior of the 1NN-2NN mixture}
\label{secRes1NN2NN}
\begin{figure}[!b]
\includegraphics[width=8.cm]{Fig3a.pdf}
\includegraphics[width=8.cm]{Fig3b.pdf}
\caption{(a) Stability limits of the three stable (fluid, $S1$ and $S2$) phases found in the 1NN-2NN mixture. (b) Phase diagram of this model in variables $z_1,z_2$. In (b), the solid and dashed lines represent continuous and discontinuous transitions, respectively. The triple point is indicated by a circle and the tricritical point by a triangle. The inset highlights the region around the minimum in the $F$-$S2$ coexistence line.}
\label{fig3}
\end{figure}
Now, we investigate the 1NN-2NN model. Let us start by noticing that three stable phases are found in this system: \textit{i)} the isotropic, disordered fluid ($F$) phase, for which the RRs (at the fixed point) have the form $A_{i,j}=B_{i,j}=C_{i,j}=D_{i,j}$ for $i=1,2$ and $j=1,2$; \textit{ii)} the solid ($S1$) phase, associated with the ordering of 1NN particles, whose fixed point is characterized by, e.g, $A_{1,j}=B_{1,j}=C_{1,j}=D_{1,j} > A_{2,j}=B_{2,j}=C_{2,j}=D_{2,j}$, for $j=1,2$, when the sublattices indexed by $1$ (see Fig. \ref{fig1}e) are the more populated ones; and \textit{iii)} the solid ($S2$) phase, associated with the ordering of 2NN particles, where we find, e.g, $A_{i,j} > B_{i,j}=C_{i,j}=D_{i,j}$, for $i=1,2$ and $j=1,2$, when the sublattices labeled by $A$ are the more occupied ones. Although the notations are different, the fixed-point symmetries for the fluid and $S1$ phases are the same as in the previous section. Since for the pure 1NN model ($z_2=0$) a continuous fluid-solid ($F$-$S1$) transition is found (as discussed above), we might expect to find a continuous $F$-$S1$ transition line for small $z_2$. On the other hand, for the pure 2NN model ($z_1=0$) a discontinuous fluid-solid ($F$-$S2$) transition takes place, as observed by Panagiotopoulos \cite{Panagiotopoulos} in Monte Carlo simulations on the cubic lattice and by us in the cubic Husimi lattice approximation \cite{Nathann19}. Therefore, for small $z_1$ a first-order $F$-$S2$ transition line is expected. Such behaviors are indeed found here for the 1NN-2NN mixture, as confirmed in Fig. \ref{fig3}. There, the stability limits of the three stable phases are shown in panel (a), where one sees that for small $z_2$ the spinodals of the fluid and $S1$ phases indeed coincide, up to $(z_{1,TC},z_{2,TC})=(1.5273,2.5016)$, after which they become different, giving rise to a coexistence region between these phases. The spinodal of the $S2$ phase, on the other hand, never coincides with the ones for the fluid and $S1$ phases, with the exception of a few points where they cross each other, indicating that such phases are always separated by first-order transition lines.
\begin{figure}[!t]
\includegraphics[width=8.cm]{Fig4a.pdf}
\includegraphics[width=8.cm]{Fig4b.pdf}
\includegraphics[width=8.cm]{Fig4c.pdf}
\caption{Densities of small ($\rho_1$) and large ($\rho_2$) particles in the indicated phases, for the 1NN-2NN mixture, as functions of $z_2$ for (a) $z_1=1$, (b) $z_1=2$ and (c) $z_1=3$. The vertical lines are the transition loci between the indicated phases.}
\label{fig4}
\end{figure}
In fact, from the equality of the bulk free energies, three coexistence ($F$-$S1$, $F$-$S2$ and $S1$-$S2$) lines are obtained, all of them meeting at a triple point located at $(z_{1,TP},z_{2,TP})=(2.3102,7.8746)$. See Fig. \ref{fig3}b. The $F$-$S1$ coexistence line tangentially meets the $F$-$S1$ critical line at ($z_{1,TC},z_{2,TC}$), showing that this is a tricritical point. The $F$-$S2$ coexistence line starts at $z_1=0$ (and $z_2=5.7932$) and ends at the triple point. In between, however, it presents a minimum located at $(z_1,z_2)=(0.5702,5.4907)$, as shown in the inset of Fig. \ref{fig3}b. Therefore, this line initially decreases, indicating that a small fraction of 1NN particles facilitates the ordering of the larger 2NN ones. This is similar to the effect of the 0NN particles on the ordering of 1NN ones discussed in the previous section or in Refs. \cite{tiago11,tiago15} for the square lattice, as well as on the ordering of 2NN particles, as demonstrated in \cite{Nathann19} for the 0NN-2NN mixture. Interestingly, in all these systems the initial slope of the decreasing critical or coexistence lines is $d z_k/d z_0|_{z_0\rightarrow 0}=-1$ (with $k=1,2$), and here one also finds $d z_2/d z_1|_{z_1 \rightarrow 0}=-1$.
The $S1$-$S2$ coexistence line starts at the triple point and extends to $z_1,z_2\rightarrow\infty$. In this limit, one finds that all RRs vanish in the $S1$ phase, with exception, e.g, of $A_{1,1}=B_{1,1}=C_{1,1}=D_{1,1}= 1$, while in the $S2$ phase one has, e.g, $A_{i,j} = 1$ and $B_{i,j}=C_{i,j}=D_{i,j} = 0$, for $i=1,2$ and $j=1,2$. Inserting these limiting values in the free energies, we find the $S1$-$S2$ coexistence line as
\begin{equation}
z_2 \simeq z_1^2 + z_1,
\end{equation}
for large $z_1$. So, for $z_1 \rightarrow \infty$ one obtains $z_2 \approx z_1^2$, which is again consistent with the fact that effectively two 1NN particles occupy the same volume as a 2NN one.
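The leading order again follows from the close-packing estimate used for the 0NN-1NN mixture: with maximum densities $\rho_1=1/2$ and $\rho_2=1/4$, the free energies per site reduce to $\phi_b^{(S1)} \simeq -\frac{1}{2}\ln z_1$ and $\phi_b^{(S2)} \simeq -\frac{1}{4}\ln z_2$, whose equality gives $z_2 \simeq z_1^2$, while the subleading term again requires the full fixed-point free energies.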
To make the differences among the three phases clear, especially between the two solid ones, it is worth analyzing the particle densities in detail. Figures \ref{fig4}a-c display the densities $\rho_1$ and $\rho_2$ as functions of $z_2$ for three values of $z_1$, chosen such that all transition lines are crossed, where the continuous and discontinuous nature of the transitions is confirmed. As expected, for all phases and parameters $\rho_1$ decreases, while $\rho_2$ increases, with $z_2$. The $S1$ ($S2$) phase is always characterized by a large density of 1NN (2NN) particles and a small density of 2NN (1NN) ones. In the fluid phase, on the other hand, the densities are more sensitive to the activities. For instance, at the $F$-$S2$ coexistence we find $\rho_1^{(F)}<\rho_2^{(F)}$ for $z_1=1$, but $\rho_1^{(F)}>\rho_2^{(F)}$ for $z_1=2$ (see Figs. \ref{fig4}a and \ref{fig4}b). These density behaviors are also confirmed in the phase diagram in the ($\rho_1,\rho_2$) space, which is depicted in Fig. \ref{fig5}a. In such a diagram, the tricritical point is located at $(\rho_{1,TC},\rho_{2,TC})=(0.1463,0.0435)$, while at the triple point one has $\left( \rho_{1}^{(F)},\rho_{2}^{(F)} \right) = (0.1126,0.0823)$, $\left( \rho_{1}^{(S1)},\rho_{2}^{(S1)}\right)=(0.3000,0.0115)$ and $\left( \rho_{1}^{(S2)},\rho_{2}^{(S2)}\right)=(0.0550,0.1679)$. The $F$-$S1$ critical line starts at $(\rho_{1},\rho_{2})=(0.1762,0)$ and ends at the tricritical point. From the asymptotic behaviors discussed above for the RRs and the $S1$-$S2$ coexistence line in the limit of large $z_1$, it is easy to show that $\rho_1^{(S1)} \approx \frac{1}{2}-\frac{1}{2z_1}$ and $\rho_2^{(S1)} \approx \frac{1}{2z_1^4}$, while $\rho_1^{(S2)} \approx \frac{1}{4z_1}$ and $\rho_2^{(S2)} \approx \frac{1}{4}-\frac{1}{4 z_1}$. Hence, for $z_1 \rightarrow \infty$ we obtain $\rho_1^{(S1)} \rightarrow 1/2$ and $\rho_2^{(S1)} \rightarrow 0$, and $\rho_1^{(S2)} \rightarrow 0$ and $\rho_2^{(S2)} \rightarrow 1/4$, as expected and confirmed in Fig. \ref{fig5}a.
\begin{figure}[!t]
\includegraphics[width=8.cm]{Fig5a.pdf}
\includegraphics[width=8.cm]{Fig5b.pdf}
\caption{Phase diagrams for the 1NN-2NN mixture in (a) density ($\rho_1,\rho_2$) and (b) pressure-composition ($P,x_1$) space. In both panels the solid and dashed lines denotes the continuous and discontinuous transitions, respectively. The tricritical point is indicated by the triangle, while the circles represent the triple point, which are connect by thicker dotted lines. The thin dotted lines in (a) are tie lines.}
\label{fig5}
\end{figure}
Figure \ref{fig5}b shows the phase diagram in the pressure-composition ($P,x_1$) plane, where the molar fraction of 1NN particles was defined as $x_1=2\rho_1/\rho_T$, with $\rho_T = 2\rho_1 + 4\rho_2$ being the total density of particles in this model. For small pressures, specifically for $P < 0.3219$, only the fluid phase is stable, but above this point the $S1$ phase also becomes stable and both phases are separated by the critical line. This line ends at the tricritical point, located at $(P_{TC},x_{1,TC})=(0.4816,0.6270)$, above which a $F$-$S1$ coexistence appears. For higher pressures, the $F$-$S2$ and $S1$-$S2$ coexistence regions also show up. For instance, the $F$-$S2$ one starts at $(P,x_1)=(0.4822,0)$. The triple point, which separates the three regions where each kind of coexistence is the stable one, is located at $P_{TP}=0.6059$, where $x_1^{(S2)}=0.1408$, $x_1^{(F)}=0.4061$ and $x_1^{(S1)}=0.9286$. For large pressures the $S1$ line tends to $x_1 = 1$, while the $S2$ one goes to $x_1=0$, as expected. From the asymptotic behaviors discussed above for the RRs, the coexistence line, and the densities, it is simple to demonstrate that $P$ diverges as $P^{(S1)} \sim -\ln(1-x_1)$ and $P^{(S2)} \sim -\ln(x_1)$. These behaviors are the same as those found in the previous section for the 0NN-1NN mixture and for the 0NN-2NN model in \cite{Nathann19}, indicating that such logarithmic divergences are universal in these systems.
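A heuristic consistency check of these divergences follows from the asymptotics above (this is a sketch, assuming the standard large-activity growth $P \sim \ln z_1$ of hard-core lattice gases near close packing, up to a model-dependent prefactor). In the $S1$ phase, $\rho_1^{(S1)} \approx 1/2$ and $\rho_2^{(S1)} \approx 1/(2z_1^4)$, so that
\begin{displaymath}
1-x_1 = \frac{4\rho_2}{2\rho_1 + 4\rho_2} \approx \frac{2}{z_1^4},
\end{displaymath}
and hence $P^{(S1)} \sim \ln z_1 \sim \frac{1}{4}\ln\left[2/(1-x_1)\right] \sim -\ln(1-x_1)$, up to prefactors. Analogously, in the $S2$ phase one has $x_1 \approx 1/(2z_1)$, which yields $P^{(S2)} \sim \ln z_1 \sim -\ln(x_1)$.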
Interestingly, in contrast with the other mixtures of $k$NN particles studied so far, the isobaric curves of the total density of particles ($\rho_T$) are always monotonically decreasing functions of $z_1$ or $z_2$. Namely, no density anomaly exists in the 1NN-2NN mixture. This is certainly related to the fact that now the fluid phase is limited to a region of small activities and, thus, of small densities. This contrasts with the 0NN-1NN mixture (discussed above and in previous studies on the square lattice \cite{tiago15,Jim01,tiago11}), as well as with the 0NN-2NN system \cite{Nathann19}, where the fluid phase extends to $z_0 \rightarrow \infty$ and, then, $\rho_0^{(F)},\rho_T^{(F)} \rightarrow 1$. Since the anomaly appears only for relatively large densities of the small particles, this strongly suggests that it is absent in the 1NN-2NN case because it is preempted by the $F$-$S1$ transitions.
\section{Conclusions}
\label{secConc}
We have investigated two binary mixtures of (hard) $k$NN particles, which exclude up to their $k$th nearest neighbors and are discretized approximations for hard spheres on the cubic lattice. More specifically, we have obtained the grand-canonical solution, on a Husimi lattice built with cubes, of the model with point particles (0NN) and 1NN ones, and of the 1NN-2NN mixture. The 0NN-1NN system displays the same thermodynamic behavior already observed for it on the square \cite{tiago15,Jim01,poland} and on the Bethe \cite{tiago11} lattices, with a disordered fluid ($F$) and a solid ($S1$) phase where 1NN particles tend to preferentially occupy one of two sublattices. These phases are separated by a critical line and a first-order transition line, both meeting at a tricritical point. As demonstrated in Ref. \cite{tiago15}, the critical line and the tricritical point found for this mixture on the square lattice belong to the critical and tricritical Ising universality classes in 2D, respectively. Moreover, 3D Ising exponents were obtained by Heringa and Bl\"ote \cite{HB} for the pure 1NN model on the cubic lattice. These results suggest that the entire critical line, as well as the tricritical point, might also be in the 3D Ising universality classes. Furthermore, on the Bethe lattice with coordination $q=6$, which is also a mean-field approximation for the cubic lattice, the tricritical point is located at $(z_{0,TC},z_{1,TC})=(0.5333,0.9957)$ \cite{tiago11}, which is smaller than the values found here: $(z_{0,TC},z_{1,TC})=(0.5958,1.1277)$. Since the Husimi lattice solution is an improved approximation when compared with the Bethe lattice one, this strongly suggests that $(z_{0,TC},z_{1,TC})$ on the cubic lattice shall be slightly larger than the values found here. Numerical investigations of this mixture on the cubic lattice are necessary to confirm these expectations.
For the 1NN-2NN mixture, a disordered fluid ($F$) and the ordered $S1$ phases are still present in the phase diagram and, once again, they are separated by a critical and a coexistence line, which meet at a tricritical point. Nonetheless, there exists also a second solid ($S2$) phase, characterized by the ordering of 2NN particles into one of four sublattices. This phase is separated from the $F$ and $S1$ phases by first-order transition lines. Therefore, beyond the fluid-solid ($F$-$S1$) demixing also observed in the 0NN-1NN mixture, in the 1NN-2NN case there exists also another fluid-solid ($F$-$S2$) demixing, as well as a solid-solid ($S1$-$S2$) phase separation. In this case, the three phases coexist at a triple point.
It is interesting to remark that the cubic Husimi lattice solution of the 0NN-2NN mixture presents a fluid-fluid demixing \cite{Nathann19}, which is absent in the mixtures analyzed here. We have indeed scanned the parameter space looking for additional phases, even metastable ones, but it seems that only the three ($F$, $S1$ and $S2$) phases discussed above exist in our approach. These results, together with those for the 0NN-1NN model on the square lattice, where no fluid-fluid demixing was observed \cite{poland,Jim01,tiago15}, indicate that the condition for observing fluid-fluid transitions in such systems might be $k$NN-$k'$NN with $k' \geq k+2$ or, possibly, that they are limited to 0NN-$k$NN mixtures with $k \geq 2$. This is also an important issue to be addressed in future works.
Finally, let us remark that a density anomaly is present in the 0NN-1NN mixture, but absent in the 1NN-2NN one. Since this anomaly, which might be important for understanding more complex fluids, has already been observed in previous studies of the 0NN-1NN mixture \cite{tiago11,tiago15} and also in the 0NN-2NN model \cite{Nathann19}, its absence in the 1NN-2NN system suggests that the presence of point (0NN) particles in the mixtures is imperative for its existence. In fact, in $0$NN-$k$NN mixtures, only the fluid phase is expected to be stable in the phase diagram for small enough activities ($z_k$) of the larger $k$NN particles, so that it shall exist for $z_0$ ranging from $0$ to $\infty$. This was indeed observed in all mixtures of this type investigated so far. For mixtures of the type $k$NN-$k'$NN, with $0 < k < k'$, the fluid phase is expected to be stable only in a limited region of parameter space (for small $z_k$ and $z_{k'}$), since it shall present transitions (at least) to the two solid phases associated with the ordering of the $k$NN and $k'$NN particles, as indeed observed here in the 1NN-2NN model. In this scenario, it may be the case that the density of small particles $\rho_k$ does not become large enough within the fluid phase to allow the onset of the anomalous behavior, which would explain its absence in the 1NN-2NN mixture. For other mixtures of this type, however, there is no guarantee that this shall happen. So, once again, more studies of these $k$NN mixtures are important to unveil the conditions for the appearance of such an entropy-driven anomaly. We notice that the symmetries of the ordered phases of $k$NN systems with $k>2$ cannot be captured by a Husimi lattice built with cubes, so other methods, such as Monte Carlo simulations, shall be employed to answer the important questions raised here.
\acknowledgments
This work is partially supported by CNPq, CAPES and FAPEMIG (Brazilian agencies).
\section{Introduction}
The magnetic field in the solar corona dominates over non-magnetic forces such as plasma pressure and gravity because of
low plasma beta \citep{Gary}. Knowledge of the coronal magnetic field is therefore important in understanding the
structure of the coronal plasma and obtaining insights into dynamical processes such as flares and coronal mass ejections.
Routine measurements of the solar magnetic field are still mainly carried out in the photosphere.
Therefore, one has to infer the field strength in the higher layers of the solar atmosphere
from the measured photospheric field based on the assumption that the corona is force-free.
The extrapolation methods involved in this assumption include potential field extrapolation
\citep{Schmidt,Semel67}, linear force-free field extrapolation \citep{Chiu,Seehafer,Seehafer82,Semel88,clegg00}, and
nonlinear force-free field extrapolation
\citep{Amari97,Amari99,Amari,cuperman91,demoulin92,mikic94,Roumeliotis,Sakurai81,valori05,Wheatland04,Wiegelmann04,wu90,yan00}.
Among these, the nonlinear force-free field has the most realistic
description of the coronal magnetic field. The computation of nonlinear force-free fields is however, more challenging for
several reasons. Mathematically, problems regarding the existence and uniqueness of various boundary value problems
dealing with nonlinear force-free fields remain unsolved \citep[see][for details]{Amari}.
Another issue is the numerical computation of these fields from given boundary values.
An additional complication is to derive the boundary data from observed photospheric
vector magnetic field measurements, which are consistent with the force-free
assumption. High noise in the transverse components of the measured
field vector, ambiguities regarding the field direction, and non-magnetic forces in the
photosphere complicate the task of deriving suitable boundary conditions from measured data.
For a more complete review of existing methods for computing nonlinear force-free coronal
magnetic fields, we refer to the review papers by \citet{Amari97},
\citet{Schrijver06}, \cite{Metcalf}, and \citet{Wiegelmann08}.
The magnetic field is not force-free in either the photosphere or the lower chromosphere
(with the possible exception of sunspot areas, where the field is strongest). Furthermore,
measurement errors, in particular for the transverse field components (e.g., perpendicular to the line of sight
of the observer), would destroy the compatibility of a magnetogram with the condition of being force-free.
One way to ease these problems is to preprocess the magnetograph data as suggested by \citet{Wiegelmann06sak}.
The vector components of the total magnetic force and the total magnetic torque on the volume considered are given by six
boundary integrals that must vanish if the magnetic field is force-free in the full
volume \citep{Molodenskii69,Aly84,Aly89,Low85}.
The preprocessing changes the boundary values of $\vec{B}$ within the error margins of the measurement in such
a way that the moduli of the six boundary integrals are minimized. The resulting boundary values are expected to be more
suitable for an extrapolation into a force-free field than the original values.
In the practical calculations, the convergence properties of the preprocessing iterations, as
well as the calculated fields themselves, are very sensitive to small-scale noise and apparent discontinuities in
the photospheric magnetograph data. This problem should, in principle, disappear if small spatial scales were
sufficiently resolved.
However, the numerical effort for that would be enormous. The small-scale fluctuations in the magnetograms
are also presumed to affect the solutions only in a very thin boundary layer close to the
photosphere \citep{Fuhrmann}. Therefore, smoothing of the data is included in the preprocessing.
The good performance of the optimization method, as indicated in \citet{Schrijver06}, encouraged us to develop a spherical
version of the optimization code such as in \citet{Wiegelmann07} for a full sphere.
In the first few sections of this paper, we describe a newly developed code that originates from a cartesian force-free
optimization method implemented by \citet{Wiegelmann04}. Our new code takes the curvature of the Sun's surface into
account when modeling the coronal magnetic field in restricted area of the Sun. The optimization procedure
considers six boundary faces, but in practice only the bottom boundary face is measured. On the other five faces,
the assumed boundary data may have a strong influence on the solution. For this reason, it is desirable to move these
faces as far away as possible from the region of interest. This, however, eventually requires that the surface
curvature is taken into account.
\citet{DeRosa} compared several nonlinear force-free codes in cartesian geometry with stereoscopically
reconstructed loops as produced by \citet{Aschwanden}. The codes used as input vector magnetograms
from the Hinode-SOT-SP, which were unfortunately available for only a very small field of view
(about 10 percent of the area spanned by STEREO-loops). Outside the Hinode FOV (field of view), line-of-sight
magnetograms from SOHO/MDI were used, and in the MDI area different assumptions about the transverse
magnetic field were made. Unfortunately, the comparison showed that, because the separate codes treated the
region outside the Hinode-FOV in different ways, the resulting coronal magnetic field models
were not consistent with the STEREO loops. The recommendations of the authors are that
one needs far larger high resolution vector magnetograms, the codes need to account for uncertainties in the
magnetograms, and one must have a clearer understanding of the photospheric-to-corona interface. Full disc vector magnetograms
will soon become available with SDO/HMI, but for a meaningful application we have to take the curvature of the Sun into account
and carry out nonlinear force-free computations in spherical geometry. In this paper, we carry out the appropriate tests.
We investigate first ideal model data and later data that contain artificial noise. To deal with noisy data and data
with other uncertainties, we developed a preprocessing routine in spherical geometry. While preprocessing does not
model the details of the interface between the forced photosphere and the force-free base of the solar corona, the
procedure helps us to find suitable boundary conditions for a force-free modelling from measurements with inconsistencies.
In this paper, we develop a spherical version of both the preprocessing and the optimization code for
a restricted part of the Sun. We follow the suggestion of \citet{Wiegelmann06sak} and generalize their method of
preprocessing photospheric vector magnetograms to spherical geometry by considering the curvature of the Sun's
surface for larger fields of view. The paper is organized as follows: in Sect. 2, we describe an optimization procedure
in spherical geometry; then, in Sect. 3, we apply it to a known nonlinear force-free test field and
calculate some figures of merit for different boundary conditions. We derive force-free consistency
criteria and describe the preprocessing procedure in spherical geometry in Sect. 4 and Sect. 5, respectively.
In Sect. 6, we use a known semi-analytic force-free model to check our method and apply it
to different noise models. Finally, in Sect. 7, we draw conclusions and discuss our results.
\section{Optimization procedure}
Stationary states of the magnetic field configuration are described by the requirement that the Lorentz force be
zero. The optimization procedure is one of several methods that have been developed over the past few decades to compute
this most general class of force-free fields.
\subsection{Optimization principle in spherical geometry }
Force-free magnetic fields must obey the equations
\begin{equation}
(\nabla \times\vec{B})\times\vec{B}=0 \,,\label{one}
\end{equation}
\begin{equation}
\nabla \cdot\vec{B}=0 \label{two}
\end{equation}
Equations (\ref{one}) and (\ref{two}) can be solved with the help of an optimization principle, as proposed
by \citet{Wheatland00} and generalized by \citet{Wiegelmann04} for cartesian
geometry. The method minimizes a joint measure $(L_\mathrm{\omega})$ of the normalized Lorentz forces and the divergence of the
field throughout the volume of interest, $V$.
Here we define a functional in spherical geometry \citep{Wiegelmann07}:
\begin{equation}
L_\mathrm{\omega}=\int_{V}\omega(r,\theta,\phi)\Big[B^{-2}\big|(\nabla\times {\vec{B}})\times {\vec{B}}\big|^2+
\big|\nabla\cdot {\vec{B}}\big|^2\Big]
r^2\sin\theta dr d\theta d\phi \,,\label{three}
\end{equation}
where $\omega(r,\theta,\phi)$ is a weighting function and $V$ is a computational
box of wedge-shaped volume, which includes the inner physical domain $V'$ and the
buffer zone (the region outside the physical domain), whose bottom boundary on the photosphere is
shown in Fig. \ref{fig3}. The physical domain $V'$ is a wedge-shaped volume, with two latitudinal boundaries at
$\theta_\mathrm{min}=20^{\degr}$ and $\theta_\mathrm{max}=160^{\degr}$ , two longitudinal boundaries at
$\phi_\mathrm{min}=90^{\degr}$ and
$\phi_\mathrm{max}=270^{\degr}$, and two radial boundaries at the photosphere ($r=1R_{\sun}$) and $r=2R_{\sun}$.
The idea is to define an interior physical region $V'$ in which we wish to calculate the
magnetic field so that it fulfills the force-free or MHS equations. We define $V'$ to be the inner region of $V$
(including the photosphere) with $\omega = 1$ everywhere including its
six inner boundaries $\delta V'$. We use the position-dependent weighting function to introduce a buffer
boundary of $nd = 6$ grid points towards the side and top boundaries of the computational box, $V$.
The weighting function, $\omega$ is chosen to be constant within the inner physical domain $V'$ and declines to 0 with a
cosine profile in the buffer boundary region. The framed region in Figs. \ref{fig3}(a-c) corresponds to the lower
boundary of the physical domain $V'$ with a resolution of $48\times 62$ pixels in the photosphere.
It is obvious that the force-free Eqs. (\ref{one}) and (\ref{two}) are fulfilled when
$L_\mathrm{\omega}$ equals zero.
For fixed boundary conditions, the functional $L_{\omega}$ in Eq. (\ref{three}) can be numerically minimized with
the help of the iteration
\begin{equation}
\frac{\partial\vec{B}}{\partial t}=\mu\widetilde{\vec{F}} \,,\label{four}
\end{equation}
where $\mu$ is a positive constant and the vector field
$\widetilde{\vec{F}}$ is calculated from
\begin{equation}
\widetilde{\vec{F}}=\omega \vec{F}+(\Omega_{a}\times\vec{B} )\times\nabla\omega+(\Omega_{b}\cdot\vec{B})
\nabla \omega \,,\label{five}
\end{equation}
\begin{equation}
{\vec{F}}=\nabla\times({\vec{\Omega}}_{a}\times {\vec{B}} )-{\vec{\Omega}}_{a}\times(\nabla\times {\vec{B}})+
\nabla ({\vec{\Omega}}_{b}\cdot {\vec{B}})
-{\vec{\Omega}}_{b}(\nabla\cdot {\vec{B}})+ \\
(\Omega_{a}^{2}+\Omega_{b}^{2})\vec{B} \,,\label{six}
\end{equation}
\begin{equation}
\vec{\Omega}_{a}=\textit{B}^{-2}(\nabla\times\vec{B})\times {\vec{B}} \,,\label{seven}
\end{equation}
\begin{equation}
\vec{\Omega}_{b}=\textit{B}^{-2}(\nabla\cdot\vec{B})\vec{B} \label{eight}
\end{equation}
The field on the outer boundaries is always held fixed here as a Dirichlet boundary condition. Relaxing these
boundaries is possible \citep{Wiegelmann03} and leads to additional terms.
For $\omega (r,\theta,\phi)=1$, the optimization method requires that the
magnetic field is given on all six boundaries of $V'$.
This causes a serious limitation of the method because these data are only
available for model configurations. For the reconstruction of the coronal
magnetic field, it is necessary to develop a method that reconstructs the
magnetic field only from photospheric vector magnetograms \citep{Wiegelmann04}. Since only the bottom
boundary is measured, one has to make assumptions about the lateral and top boundaries, e.g., assume a
potential field. This leads to inconsistent boundary conditions \citep[see][regarding the compatibility of photospheric
vector magnetograph data]{Aly89}. With the help of the weighting function, the five inconsistent boundaries are replaced
by boundary layers and we consequently obtain more flexible boundaries around the physical domain that will be
adjusted automatically during the iteration. This diminishes the effect of the top and lateral boundaries on the
magnetic field solution inside the computational box. Additionally, the influence of the boundaries is diminished,
the farther we move them away from the region of interest.
The theoretical derivation of the iterative Eq. (\ref{four}) as outlined by \citet{Wheatland00}
does not depend on the use of a specific coordinate system. Previous numerical implementations of this method
were demonstrated by \citet{Wiegelmann07} for the full sphere. Within this work, we use a spherical geometry, but for only a
limited part of the sphere, e.g., large active regions, several (magnetically connected) active regions and full
disc computations. Full disc vector magnetograms should become available soon from SDO/HMI.
This kind of computational box will become necessary when the observed photospheric vector
magnetogram is available for only part of the photosphere.
We use a spherical grid $r$, $\theta$, $\phi$ with $n_{r}$, $n_{\theta}$, $n_{\phi}$ grid points in the
direction of radius, latitude, and longitude, respectively. We normalize the magnetic field with the average radial magnetic
field on the photosphere and the length scale with a solar radius.
The method works as follows (a minimal numerical sketch of the iteration loop is given after the enumeration): \begin{enumerate}
\item We compute an initial source surface potential field in the computational domain from
$B_{r}$ in the photosphere at $r = 1R_\mathrm{\sun}$.
\item We replace $B_\mathrm{\theta}$ and $B_{\phi}$ at the bottom photospheric boundary at $r = 1R_{\sun}$ with the measured
vector magnetogram. The outer radial and lateral boundaries are unchanged from the initial potential
field model. For the purpose of code testing, we also tested different boundary conditions (see next section).
\item We iterate for a force-free magnetic field in the computational box by minimizing the
functional $L$ of Eq.(\ref{three}) by applying Eq.(\ref{four}). For each iteration step ($k$), the vector field
${\widetilde{\vec{F}}}^{(k)}$ is calculated from the known field ${\vec{B}}^{(k)}$, and a new field may simply be computed as
${\vec{B}}^{(k+1)} = {\vec{B}}^{(k)} + {\widetilde{\vec{F}}}^{(k)}\Delta t$ for
sufficiently small $\Delta t$.
\item The continuous form of Eq.(\ref{four}) ensures a monotonically decreasing functional $L$. For finite
time steps, this is also ensured if the iteration time step $dt$ is sufficiently small. If $L(t +
dt) \geq L(t)$, this step is rejected and we repeat this step with $dt$ reduced by a factor of
$2$.
\item After each successful iteration step, we increase $dt$ by a factor of $1.01$ to ensure a time
step as large as possible within the stability criteria. This ensures an iteration time step
close to its optimum.
\item The iteration stops if $dt$ becomes too small. As a stopping criterion, we use $dt \leq10^{-6}$.
\end{enumerate}
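For illustration, the following minimal sketch (in Python; the function names and the toy example are ours, not part of any published code) implements the adaptive step-size control of steps 3-6. The discretized functional $L_{\omega}$ of Eq. (\ref{three}) and the descent field $\mu\widetilde{\vec{F}}$ of Eq. (\ref{four}) are passed in as abstract callables, since their spherical-grid expressions are lengthy:
\begin{verbatim}
import numpy as np

def minimize_L(B, functional, direction, dt=1e-5,
               dt_min=1e-6, max_steps=10000):
    """Adaptive-step iteration of Eq. (4), i.e. steps 3-6 above.

    functional(B) should return the discretized L_omega of Eq. (3)
    and direction(B) the field mu*F_tilde of Eqs. (4)-(8); both are
    kept abstract here, since their spherical-grid discretization
    is lengthy.
    """
    L = functional(B)
    for _ in range(max_steps):
        B_trial = B + dt * direction(B)  # explicit Euler step, Eq. (4)
        L_trial = functional(B_trial)
        if L_trial >= L:                 # L must decrease monotonically:
            dt *= 0.5                    # reject the step and halve dt
            if dt < dt_min:              # stop once dt is too small
                break
            continue
        B, L = B_trial, L_trial          # accept the step
        dt *= 1.01                       # keep dt close to its optimum
    return B, L

# Toy check with L(B) = sum(B**2), whose descent direction is -2B:
B0 = np.random.default_rng(1).standard_normal((3, 8, 8, 8))
Bf, Lf = minimize_L(B0, lambda B: np.sum(B**2), lambda B: -2.0 * B)
\end{verbatim}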
\subsection{Figures of merit}
To quantify the degree of agreement between vector fields $\vec{B}$ (for the input
model field) and $\vec{b}$ (the NLFF model solutions) specified on identical sets
of grid points, we use five metrics that compare either local characteristics (e.g.,
vector magnitudes and directions at each point) or the global energy content in
addition to the force and divergence integrals as defined in \citet{Schrijver06}.
The vector correlation ($C_\mathrm{vec}$) metric generalizes the standard correlation
coefficient for scalar functions given by
\begin{equation}
C_\mathrm{ vec}= \sum_i \vec{B}_{i} \cdot \vec{ b}_{i}/ \left( \sum_i |\vec{ B}_{i}|^2 \sum_i
|\vec{b}_{i}|^2 \right)^{1/2},\label{nine}
\end{equation}
where $\vec{B}_{i}$ and $\vec{b}_{i}$ are the vectors at each grid point $i$. If the vector fields are identical,
then $C_{vec}=1$; if $\vec{B}_{i}\perp \vec{b}_{i}$ , then
$C_{vec}=0$.\\
The second metric, $C_{CS}$, is based on the Cauchy-Schwarz inequality ($|\vec{a}\cdot \vec{b}|\leq
|\vec{a}||\vec{b}|$ for any vectors $\vec{a}$ and $\vec{b}$)
\begin{equation}
C_\mathrm{CS} = \frac{1}{N} \sum_i \frac{\vec{B}_{i} \cdot \vec{ b}_{i}} {|{\vec{B}_{i}}||{\vec
{b}_{i}}|},\label{ten}
\end{equation}
where $N$ is the number of vectors in the field. This metric is mostly a measure of the
angular differences between the vector fields: $C_{CS} = 1$, when $\vec{B}$ and $\vec{b}$ are parallel, and
$C_{CS} = -1$, if they are anti-parallel; $C_{CS} = 0$, if $\vec{B}_{i}\perp \vec{b}_{i}$ at each
point.\\
We next introduce two measures of the vector errors, one normalized to the
average vector norm, one averaging over relative differences. The normalized vector
error $E_{N}$ is defined as
\begin{equation}
E_\mathrm{N} = \sum_i |{\vec{b}_{i}}-{\vec{B}_{i}}|/ \sum_i |{\vec{B}_{i}}|,\label{eleven}
\end{equation}
The mean vector error $E_{M}$ is defined as
\begin{equation}
E_\mathrm{M} = \frac{1}{N} \sum_i \frac{|{\vec{ b}_{i}}-{\vec{B}_{i}}|}{|{\vec{B}_{i}}|}, \label{twelve}
\end{equation}
Unlike the first two metrics, perfect agreement between the two vector fields results in
$E_{M }= E_{N} = 0$.\\
Since we are also interested in determining how well the models estimate the
energy contained in the field, we use the total magnetic energy in the model field
normalized to the total magnetic energy in the input field as a global measure of
the quality of the fit
\begin{equation}
\epsilon = \frac{\sum_i |{\vec{b}_{i}}|^2}{\sum_i |{\vec{B}_{i}}|^2}, \label{therteen}
\end{equation}
where $\epsilon=1$ for closest agreement between the model field and the nonlinear force-free model
solutions.
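These metrics translate directly into a few lines of code. The following minimal sketch (in Python; all names are ours, and the reference field is assumed to be nonvanishing at every grid point) evaluates Eqs. (\ref{nine})-(\ref{therteen}) for two vector fields sampled on the same grid:
\begin{verbatim}
import numpy as np

def figures_of_merit(B, b):
    """Evaluate Eqs. (9)-(13) for reference field B and model field b,
    both given as arrays of shape (npoints, 3) on the same grid."""
    dots = np.sum(B * b, axis=1)
    nB = np.linalg.norm(B, axis=1)
    nb = np.linalg.norm(b, axis=1)
    diff = np.linalg.norm(b - B, axis=1)
    C_vec = dots.sum() / np.sqrt((nB**2).sum() * (nb**2).sum())
    C_CS = np.mean(dots / (nB * nb))     # Cauchy-Schwarz metric
    E_N = diff.sum() / nB.sum()          # normalized vector error
    E_M = np.mean(diff / nB)             # mean vector error
    eps = (nb**2).sum() / (nB**2).sum()  # normalized magnetic energy
    return C_vec, C_CS, E_N, E_M, eps

# Identical fields give (1, 1, 0, 0, 1):
B = np.random.default_rng(2).standard_normal((1000, 3))
print(figures_of_merit(B, B.copy()))
\end{verbatim}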
\section{Test case and application to ideal boundary conditions}
\subsection{Test case }
To test the method, a known semi-analytic nonlinear solution
is used. \citet{Low90} presented a class of axisymmetric nonlinear force-free fields with a multipolar
character. The authors solved the Grad-Shafranov equation for
axisymmetric force-free fields in spherical coordinates $r$, $\theta$, and $\phi$. The magnetic field can be
written in the form
\begin{equation}
{\vec{B}}=\frac{1}{r\sin\theta}\Big( \frac{1}{r}\frac{\partial A}{\partial \theta }{\hat{\bf{e}}}_{r}-
\frac{\partial A}{\partial r} {\hat{\bf{e}}}_{\theta\ }+Q{\hat{\bf{e}}}_{\phi\ }\Big) \,,\label{fourteen}
\end{equation}
where $A$ is the flux function and $Q$ represents the $\phi$-component of ${\vec{B}}$, depending only on $A$.
The flux function $A$ satisfies the Grad-Shafranov equation
\begin{equation}
\frac{\partial ^{2} A}{\partial r^{2} }+\frac{1-\mu^{2}}{r^{2}}\frac{\partial ^{2} A}{\partial \mu^{2} }+
Q\frac{dQ}{dA}=0 \,,\label{fifteen}
\end{equation}
where $\mu = \cos\theta$. \citet{Low90} derived solutions for
\begin{equation}
\\\frac{dQ}{dA}=\alpha=\textrm{const},\label{sixteen}
\end{equation}
by looking for separable solutions of the form
\begin{equation}
\\ A(r,\theta)=\frac{P(\mu)}{r^{n}}\label{seventeen}
\end{equation}
\citet{Low90} suggested that these field solutions are the ideal solutions for testing methods of reconstructing
force-free fields from boundary values. They have become a standard test for nonlinear force-free extrapolation
codes in cartesian geometry \citep{Amari99,Amari,Wheatland00,Wiegelmann03,Yan06,Inhester06,Schrijver06} because the
symmetry in the solution is no longer obvious after a translation that places the point source
outside the computational domain and a rotation of the symmetry axis with respect to the domain edges.
Here we use a Low and Lou solution in spherical coordinates. The solution used is labelled
$P_\mathrm{1,1}$ with $\Phi=\pi/10$, in Low \& Lou's notation \citep{Low90}. The original equilibrium
is invariant in $\phi$, but we can produce a $\phi$-variation in our coordinate system by placing the origin of
the solution at a position $l=0.25$ solar radii from the Sun's centre. The corresponding configuration
is then no longer symmetric in $\phi$ with respect to the solar surface, as seen in the magnetic field map
in the top row of Fig. \ref{fig3}, which shows the three components $B_{r}$, $B_{\theta}$, and $B_{\phi}$ on the photosphere,
respectively. We remark that we use the solution only for the purpose of testing our code and the equilibrium is
not assumed to be a realistic model for the coronal magnetic field. We do the test runs on spherical grids $(r,
\theta,\phi)$ of $20\times 48\times 62$ and $40\times 96\times 124 $ grid points.
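For illustration, Eq. (\ref{fourteen}) can be evaluated numerically once the flux function $A$ and the source term $Q$ are given on an axisymmetric grid. The sketch below (in Python; names are ours, centred differences are used for the derivatives, and the grid is assumed to exclude the polar axis) is not the code used for our test runs, but shows the structure of Eq. (\ref{fourteen}):
\begin{verbatim}
import numpy as np

def field_from_flux(A, Q, r, theta):
    """Evaluate Eq. (14) for an axisymmetric equilibrium.

    A, Q  : 2-D arrays with A[i, j] = A(r_i, theta_j) and the
            corresponding Q(A(r_i, theta_j)).
    r     : 1-D array of radii; theta : 1-D array of colatitudes,
            assumed to exclude the polar axis (sin(theta) != 0).
    """
    dA_dr = np.gradient(A, r, axis=0)       # centred differences
    dA_dth = np.gradient(A, theta, axis=1)
    rsin = r[:, None] * np.sin(theta)[None, :]
    B_r = dA_dth / (r[:, None] * rsin)      # (1/(r sin))(1/r) dA/dth
    B_theta = -dA_dr / rsin                 # -(1/(r sin)) dA/dr
    B_phi = Q / rsin                        # (1/(r sin)) Q
    return B_r, B_theta, B_phi
\end{verbatim}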
\subsection{Application to ideal boundary conditions}
Here we used different boundary conditions extracted from the
Low and Lou model magnetic field.
\begin{itemize}
\item Case 1: The boundary fields are specified on $V'$ (all the six boundaries $\delta V'$ of $V'$).
\item Case 2: The boundary fields are only specified on the photosphere (the lower boundary of
the physical domain $V'$).
\item Case 3: The boundary fields are only specified on the photosphere (the lower boundary of
the physical domain $V'$) and with boundary layers (at the buffer zone) of $nd=6$ grid points toward top and lateral
boundaries of the computational box $V$.
\end{itemize}
For the boundary conditions in case 1, the field line plot (as shown in Fig. $\ref{fig1}$) agrees
with the original Low and Lou reference field, because the optimization method is supplied with
the field on all boundaries of the computational volume.
For the boundary conditions in case $2$, we used an optimization code without a weighting function ($nd = 0$)
and with a photospheric boundary. Here the boundaries of the physical domain coincide with the computational boundaries.
The lateral and top boundaries assume the value of the potential field during the iteration. Some low-lying
field lines are represented quite well (right-hand picture in Fig. $\ref{fig1}$ second row).
The field lines close to the box center are of course close to the bottom boundary
and far away from the other boundaries. The (observed) bottom boundary has a
higher influence on the field here than the potential lateral and top boundary. Other
field lines, especially high-reaching field lines, deviate from the analytic solution.
For the boundary condition in case $3$, we implemented an optimization code with a weighting function of $nd = 6$
grid points outside the physical domain. This reduces the effect of top and lateral boundaries where $\vec{B}$ is
unknown as $\omega$ drops from $1$ to $0$ outward across the boundary layer around the physical domain.
The comparison of the field lines of the Low \& Lou model field with the reconstructed field of case $3$
(the last picture in Fig. \ref{fig1}) shows that the quality of the reconstruction
improves significantly with the use of the weighting function. Additionally, the size and shape of the boundary layer
influence the quality of the reconstruction in cartesian geometry \citep{Wiegelmann04}.
The larger computational box displaces the lateral and top boundary further away from the physical domain and
its influence on the solution consequently decreases. As a result, the magnetic field in the physical domain is
dominated by the vector magnetogram data, which is exactly what is required for application to measured vector magnetograms.
A potential field reconstruction obviously does not agree with the reference field. In particular, it does not allow us to
estimate the magnetic energy content of the coronal magnetic field correctly. The figures of merit show
that the potential field is far away from the true solution and contains only $67.6\%$ of the magnetic energy.
\begin{table*}
\caption {
Quality of our reconstructions quantified by the figures of merit explained
in Sect. 3.2. We compute the figures for the three different cases, along with the model reference field and the potential
field.}
\hspace*{-0.5cm}
\centering
\begin{tabular}{l|lllcc|lllll|rr}
\hline \hline Model & $L_{\omega}$&$L_{f}$&$L_{d}$&
$\parallel \nabla \cdot {\vec{B}} \parallel_{\infty}$&
$\parallel {\vec{ j} } \times {\vec{B}} \parallel_{\infty} $ &
$C_{\rm vec}$&$C_{\rm CS}$&$E_{N}$&$E_{M}$&$\epsilon$ &Steps& Time \\
\hline
&\multicolumn{3}{c}{Spherical grid $20 \times 48 \times 62$} &&&&&&&& \\
Original &$0.029$&$0.015$&$0.014$&$1.180$&$1.355$& $ 1$&$ 1$&$ 0$&$ 0$&$ 1$&$ $&$ $ \\
Potential &$0.020$&$0.007$&$0.014$&$1.706$&$1.091$&$0.736$&$0.688$&$0.573$&$0.535$&$0.676$&& \\
Case 1 &$0.006$&$0.004$&$0.002$&$0.454$&$0.774$&$0.999$&$0.983$&$0.012$&$0.016$&$1.005$&$10000$& 7.14 min \\
Case 2 &$33.236$&$7.806$&$25.430$&$47.843$&$24.135$&$0.757$&$0.726$&$0.397$&$0.451$&$0.745$&$110$& 1.28 min \\
Case 3 &$0.009$&$0.006$&$0.03$&$0.367$&$0.787$&$0.994$&$0.967$&$0.187$&$0.097$&$0.989$&$12011$& 17.54 min \\
\hline
&\multicolumn{3}{c}{Spherical grid $40 \times 96 \times 124$} &&&&&&&& \\
Original &$0.005$&$0.003$&$0.002$&$0.38$&$0.71$&$ 1$&$ 1$&$ 0$&$ 0$&$ 1$&$ $&$ $ \\
Potential &$0.30$&$0.0003$&$0.30$&$0.44$&$0.23$&$0.67$&$0.77$&$0.70$&$0.67$&$0.75$&$ $&$ $ \\
Case 1 &$0.002$&$0.001$&$0.0006$&$0.38$&$0.32$&$0.998$&$0.999$&$0.004$&$0.007$&$1.001$&$12522$& 1h 21min \\
Case 2 &$26.27$&$10.20$&$16.07$&$20.40$&$30.53$&$0.799$&$0.759$&$0.411$&$0.456$&$0.798$&$5673$&1h 1min \\
Case 3 &$0.24$&$0.20$&$0.04$&$0.630$&$0.747$&$0.996$&$0.971$&$0.186$&$0.112$&$0.996$&$12143$ & 4h 57min\\
\hline
\end{tabular}
\end{table*}
\begin{figure*}[htp!]
\centering
\mbox{\subfigure[Original ]
{\includegraphics[bb=120 60 445 350,clip,height=6.0cm,width=7.0cm]{12529f1a.eps}}
\subfigure[Potential]
{\includegraphics[bb=120 60 445 350,clip,height=6.0cm,width=7.0cm]{12529f1b.eps}}}
\mbox{\subfigure[Case 1]
{\includegraphics[bb=120 60 445 350,clip,height=6.0cm,width=7.0cm]{12529f1c.eps}}
\subfigure[Case 2]
{\includegraphics[bb=120 60 445 350,clip,height=6.0cm,width=7.0cm]{12529f1d.eps}}}
\mbox{\subfigure[Case 3]
{\includegraphics[bb=120 60 445 350,clip,height=6.0cm,width=7.0cm]{12529f1e.eps}}}
\caption{The figure shows the original reference field, a potential field, and
the results of a nonlinear force-free reconstruction with different boundary
conditions (case 1-3, see text). The color coding shows $B_r$ on the photosphere and
the disc centre corresponds to $180^{\degr}$ longitude. }
\label{fig1}
\end{figure*}
The degree of convergence towards a force-free and divergence-free model solution
can be quantified by the integral measures of the Lorentz force and divergence
terms in the minimization functional in Eq. (\ref{three}), computed over the entire
model volume $V$:
\begin{displaymath} L_{f}=\int_{V}\omega(r,\theta,\phi)B^{-2}\big|(\nabla\times {\vec{B}})\times
{\vec{B}}\big|^2 r^2\sin\theta dr d\theta d\phi ,
\end{displaymath}
\begin{displaymath}L_{d}=\int_{V}\omega(r,\theta,\phi)\big|\nabla\cdot {\vec{B}}\big|^2
r^2\sin\theta dr d\theta d\phi ,
\end{displaymath}
\begin{displaymath}L_{\omega}=L_{f}+L_{d},
\end{displaymath}
where $L_{f}$ and $L_{d}$ measure how well the force-free and divergence-free conditions
are fulfilled, respectively.
In Table $1$, we list the figures of merit for our extrapolations results as
introduced in previous section.
Column 1 indicates the corresponding test case. Columns $2-4$ show how well the force-free and
solenoidal conditions are fulfilled, where Col. $2$ contains the value of the functional
$L_{\omega}$
as defined in Eq. (\ref{three}), and $L_{f}$ and $L_{d}$ in Cols. $3$ and $4$ correspond to the first (force-free) and
second (solenoidal) parts of $L_{\omega}$. The evolution of the functional $L_{\omega}$, $|\vec{j}\times \vec{B}|$,
and $|\nabla \cdot \vec{B}|$ during the optimization process is shown in Fig. \ref{fig2}. One can see from this figure that
the calculation does not converge for case 2, because of the problematic boundaries where the fields are unknown.
Column $5$ contains the $L^{\infty}$ norm of the divergence of the magnetic field
\begin{displaymath}\parallel \nabla \cdot {\vec{ B}} \parallel_{\infty}=\sup_{{\bf x} \in V} |\nabla \cdot {\vec{B}}|
\end{displaymath}
and Col. $6$ lists the $L^{\infty}$ norm of the Lorentz force of the magnetic field
\begin{displaymath}\parallel {\vec{j} } \times {\vec{B}} \parallel_{\infty}=\sup_{{\bf x} \in V} |{\vec{j} } \times {\vec{B}} |.
\end{displaymath}
The next five columns of Table $1$ contain different measurements comparing our reconstructed field with the semi-analytic
reference field. The two vector fields agree perfectly if $C_{\rm vec}$, $C_{\rm CS}$, and $\epsilon$ are
unity and if $E_{\rm N}$ and $E_{\rm M}$ are zero. Column 12 contains the number of
iteration steps until convergence, and Col. 13 shows the computing time on $1$ processor.
A comparison of the original reference field (Fig. \ref{fig1}(a)) with our nonlinear force-free reconstructions
(cases 1-3) shows that the magnetic field line plots agree with the original for case 1 and case 3
within the plotting precision. Case 2 shows some deviations from the original, but the reconstructed field lines are much
closer to the reference field than the initial potential field.
\begin{figure}
\centering
\includegraphics[bb=55 375 552 693,clip,height=6.0cm,width=8.5cm]{12529f2a.eps}
\includegraphics[bb=55 375 552 693,clip,height=6.0cm,width=8.5cm]{12529f2b.eps}
\includegraphics[bb=55 375 552 693,clip,height=6.0cm,width=8.5cm]{12529f2c.eps}
\caption{Evolution of $L_{\omega}$ (as defined in Eq. 3), $\max(|\mathrm{force}|)$, and $\max(|\mathrm{div}\,\vec{B}|)$ during the optimization process.
The solid line corresponds to case 3, the dash-dotted line to case 1, the long-dashed line to case 2.}
\label{fig2}
\end{figure}
The visual inspection of Fig. \ref{fig1} is supported by the quantitative criteria shown in Table 1. For case 1 and case 3
the formal force-free criteria $(L_{\omega},L_{f},L_{d})$ are smaller than the discretization error of the analytic
solution and the comparison metrics show almost perfect agreement with the reference field. The comparison metrics
(of Table 1) show that there is discrepancy between the reference field and case 2 as the magnetic field solution is
affected by nearby problematic top and lateral boundaries. In Fig. \ref{fig1} we compare magnetic field line plots of
the original model field with a corresponding potential field and nonlinear force-free reconstructions with different
boundary conditions (case 1 - case 3). The colour coding shows the radial magnetic field in the photosphere, as also shown in the
magnetogram in Fig. $\ref{fig3}$(a). The images show the results of the computation on the $20 \times 48 \times 62$ grid.
\section{Consistency criteria in spherical geometry}
A more fundamental requirement of the boundary data is its consistency with the force-free field approximation. As
shown by \citet{Molodenskii69} and \citet{Aly89}, a balance between the total momentum and angular momentum
exerted onto the numerical box in cartesian geometry by the magnetic field leads to a set of
boundary integral constraints on the magnetic field. These constraints should also be satisfied on the solar surface
for the field at the coronal base in the vicinity of a sufficiently isolated magnetic region and in a
situation where there is no rapid dynamical development. As explained in detail in
\citet{Molodenskii74}, the sense of these relations is that on average a force-free field cannot exert a net tangential
force on the boundary or shear stresses along axes lying along the boundary.
In summary, the boundary data for the force-free extrapolation should fulfill the following conditions:
\begin{enumerate}
\item The boundary data should coincide with the photospheric observations within
measurement errors.
\item The boundary data should be consistent with the assumption of a force-free
magnetic field above.
\item For computational reasons (finite differences), the boundary data should be
sufficiently smooth.
\end{enumerate}
Additional a priori assumptions about the photospheric data are that the magnetic flux from the photosphere is
sufficiently distant from the boundaries of the observational domain and that the net flux is balanced,
i.e.,
\begin{equation}
\int_{S}B_{r}(r=1R_{s},\theta,\phi)d\Omega =0,\label{eighteen}
\end{equation}
where $S$ is the area of a bottom boundary of the physical domain on the photosphere.
Generally, the flux balance criterion must be applied to the entire, closed surface of the numerical box. However, we can
only measure the magnetic field vector on the bottom photospheric boundary and the contributions of the lateral and top
boundaries remain unspecified. If a major part of the known flux from the bottom boundary
is uncompensated, the final force-free magnetic field solution will depend markedly on how the uncompensated flux is
distributed over the other five boundaries. This would result in a major uncertainty on the final force free magnetic
field configuration. We therefore demand that the flux balance is satisfied with the bottom
data alone \citep{Wiegelmann06}. If this is not the case, we classify the reconstruction problem as not being uniquely
solvable within the given box. \citet{Aly89} used the virial theorem to define the conditions that a
vector magnetogram must fulfill to be consistent with the assumption of a force-free field above in cartesian geometry.
In this paper, we use the force-free and torque-free conditions in spherical geometry as formulated in \citet{Sakurai94},
i.e.,
1. The total force on the boundary vanishes
\begin{equation}
\int_{S}\Big[\frac{1}{2}\big(B_{\theta}^{2}+B_{\phi}^{2}-B_{r}^{2}\big)\sin\theta\cos\phi-B_{r}B_{\theta}\cos\theta\cos\phi
+ B_{r}B_{\phi}\sin\phi\Big]d\Omega=0,\label{nineteen}
\end{equation}
\begin{equation}
\int_{S}\Big[\frac{1}{2}\big(B_{\theta}^{2}+B_{\phi}^{2}-B_{r}^{2}\big)\sin\theta\sin\phi-B_{r}B_{\theta}\cos\theta\sin\phi
-B_{r}B_{\phi}\cos\phi\Big]d\Omega=0,\label{twenty}
\end{equation}
\begin{equation}
\int_{S}\Big[\frac{1}{2}\big(B_{\theta}^{2}+B_{\phi}^{2}-B_{r}^{2}\big)\cos\theta+B_{r}B_{\theta}\sin\theta\Big]d\Omega=0
\label{twenty_one}
\end{equation}
2. The total torque on the boundary vanishes\footnote{See Appendix A for derivation of those torque-balance equations.}
\begin{equation}
\int_{S}B_{r}\big(B_{\phi}\cos\theta\cos\phi+B_{\theta}\sin\phi\big)d\Omega=0,\label{twenty_two}
\end{equation}
\begin{equation}
\int_{S}B_{r}\big(B_{\phi}\cos\theta\sin\phi-B_{\theta}\cos\phi\big)d\Omega=0,\label{twenty_three}
\end{equation}
\begin{equation}
\int_{S}B_{r}B_{\phi}\sin\theta d\Omega=0\label{twenty_four}
\end{equation}
As with the flux balance, these criteria must, in general, be applied to the entire surface of the numerical box.
Since we assumed that the photospheric flux is sufficiently concentrated in the center and the net flux is in balance,
we can expect the magnetic field on the lateral and top boundaries to remain weak and hence these surfaces do not
represent a significant contribution to the integrals of the constraints above. We therefore impose the criteria on
the bottom boundary alone. From here on, we use the following notation for simplicity:
\begin{displaymath}
E_{B}^{-}=\frac{1}{2}\big(B_{\theta}^{2}+B_{\phi}^{2}-B_{r}^{2}\big) \, , \;
E_{B}=\int_{S}\big(B^{2}_{r}+B^{2}_{\theta}+B^{2}_{\phi}\big)d\Omega
\, , \;
\end{displaymath}
\begin{displaymath}
B_{1}=B_{\theta}\cos\theta\cos\phi-B_{\phi}\sin\phi \, , \;
B_{2}=B_{\theta}\cos\theta\sin\phi+B_{\phi}\cos\phi
\, , \;
\end{displaymath}
\begin{displaymath}
B_{3}=B_{\phi}\cos\theta\cos\phi+B_{\theta}\sin\phi \, , \;
B_{4}=B_{\phi}\cos\theta\sin\phi-B_{\theta}\cos\phi
\;
\end{displaymath}
To quantify the quality of the vector magnetograms with respect to the above criteria, we introduce three dimensionless
parameters similar to those in \citet{Wiegelmann06sak}, but now for spherical geometry:
\begin{enumerate}
\item The flux balance parameter
$$\varepsilon_{flux}=\frac{\int_{S}B_{r}d\Omega}{\int_{S}|B_{r}|d\Omega}$$
\item The force balance parameter
\begin{displaymath}\begin{split}
\varepsilon_{force}=&\Big(\big| \int_{S}\big[E_{B}^{-}\sin\theta\cos\phi
-B_{r}B_{1}\big]d\Omega\big|
+\big|\int_{S}\big[E_{B}^{-}\sin\theta\sin\phi
\\ &-B_{r}B_{2}\big]d\Omega\big|
+\big|\int_{S}\big[E_{B}^{-}\cos\theta +B_{r}B_{\theta}\sin\theta\big]d\Omega\big|\Big)
\Big/E_{B}
\end{split}\end{displaymath}
\item The torque balance parameter
\begin{displaymath}\begin{split}
\varepsilon_{torque}=&\Big(\big|\int_{S}B_{r}B_{3}d\Omega\big|+\big|\int_{S}B_{r}B_{4}d\Omega\big|
+\big|\int_{S}B_{r}B_{\phi}\sin\theta d\Omega\big|\Big)\Big/E_{B}
\end{split}\end{displaymath}
\end{enumerate}
An observed vector magnetogram is then flux-balanced and consistent
with the force-free assumption if: $\varepsilon_{flux}\ll 1$, $\varepsilon_{force}\ll
1$ and $\varepsilon_{torque}\ll 1$.
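These parameters can be evaluated directly from a discretized magnetogram. In the following sketch (in Python; names are ours), the surface integrals are replaced by sums weighted with $\sin\theta$, and the constant grid spacings are omitted since they cancel out of the dimensionless ratios:
\begin{verbatim}
import numpy as np

def consistency(Br, Bt, Bp, theta, phi):
    """Flux, force, and torque balance parameters of Sect. 4.

    Br, Bt, Bp : photospheric field components on a (theta, phi) grid;
    theta, phi : 1-D coordinate arrays.
    """
    th, ph = np.meshgrid(theta, phi, indexing='ij')
    def S(f):                          # discrete surface integral,
        return np.sum(f * np.sin(th))  # d(theta) d(phi) omitted
    EBm = 0.5 * (Bt**2 + Bp**2 - Br**2)    # E_B^- in the text
    EB = S(Br**2 + Bt**2 + Bp**2)
    B1 = Bt * np.cos(th) * np.cos(ph) - Bp * np.sin(ph)
    B2 = Bt * np.cos(th) * np.sin(ph) + Bp * np.cos(ph)
    B3 = Bp * np.cos(th) * np.cos(ph) + Bt * np.sin(ph)
    B4 = Bp * np.cos(th) * np.sin(ph) - Bt * np.cos(ph)
    eps_flux = S(Br) / S(np.abs(Br))
    eps_force = (abs(S(EBm * np.sin(th) * np.cos(ph) - Br * B1))
                 + abs(S(EBm * np.sin(th) * np.sin(ph) - Br * B2))
                 + abs(S(EBm * np.cos(th) + Br * Bt * np.sin(th)))) / EB
    eps_torque = (abs(S(Br * B3)) + abs(S(Br * B4))
                  + abs(S(Br * Bp * np.sin(th)))) / EB
    return eps_flux, eps_force, eps_torque
\end{verbatim}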
\section{Preprocessing method}
The strategy of preprocessing is to define a functional $L$ of the boundary values of $\vec{B}$, such that on
minimizing $L$ the total magnetic force and the total magnetic torque on the considered volume,
as well as a quantity measuring the degree of small-scale noise in the boundary data, simultaneously become small. Each
of the quantities to be made small is measured by an appropriately defined subfunctional included in $L$. The different
subfunctionals are weighted to control their relative importance.
Even if we choose a sufficiently flux balanced isolated active region ($\varepsilon_{flux}\ll 1$), we find that the force-free
conditions $\varepsilon_{force}\ll 1$ and $\varepsilon_{torque}\ll 1$ are not usually fulfilled for measured
vector magnetograms. We therefore conclude that force-free extrapolation methods
should not be used directly on observed vector magnetograms (see \citet{Gary} for $\beta >1$ in the photosphere), particularly
not on very noisy transverse photospheric magnetic field measurements. The large noise in the transverse
components of the photospheric field vector (here $B_{\theta}$ and $B_{\phi}$ at the bottom boundary), which is one order
of magnitude higher than in the LOS field, provides us with the freedom to adjust these data within the
noise level. We use this freedom to drive the data towards being more consistent with Aly's force-free and torque-free conditions.
The preprocessing scheme of \citet{Wiegelmann06sak} involves
minimizing a two-dimensional functional of quadratic form similar to the following:
\begin{equation}
L=\mu_{1}L_{1}+\mu_{2}L_{2}+\mu_{3}L_{3}+\mu_{4}L_{4}\label{twenty_five}
\end{equation}
Here we write the individual terms in spherical co-ordinates as: \begin{equation}\begin{split}
L_{1}= \Big(\sum_{p}\big[E_{B}^{-}\sin\theta\cos\phi
-B_{r}B_{1}\big]\sin\theta\Big)^{2}
+\Big(\sum_{p}\big[ E_{B}^{-}\sin\theta\sin\phi
\\-B_{r}B_{2}\big]\sin\theta\Big)^{2}
+ \Big(\sum_{p}\Big[E_{B}^{-}\cos\theta
+B_{r}B_{\theta}\sin\theta\big]\sin\theta\Big)^{2},\label{twenty_six}
\end{split}\end{equation}
\begin{equation}
L_{2}=\Big(\sum_{p} B_{r}B_{3}\sin\theta\Big)^{2}
+\Big(\sum_{p}B_{r}B_{4}\sin\theta \Big)^{2}
+\Big(\sum_{p} B_{r}B_{\phi}\sin^{2}\theta\Big)^{2},\label{twenty_seven}
\end{equation}
\begin{equation}
L_{3}=\sum_{p}\big(B_{r}-B_{robs}\big)^{2}+\sum_{p}\big(B_{\theta}-B_{\theta obs}\big)^{2}
+\sum_{p}\big(B_{\phi}-B_{\phi obs}\big)^{2},\label{twenty_eight}
\end{equation}
\begin{equation}
L_{4}=\sum_{p}\big[ \big(\Delta B_{r}\big)^{2}+\big(\Delta B_{\theta}\big)^{2}
+\big(\Delta B_{\phi}\big)^{2}\big]\label{twenty_nine}
\end{equation}
The surface integrals are replaced by summations over all $p$ nodes of the bottom surface grid,
$\int_{S}\ldots\, d\Omega\rightarrow\sum_{p}\ldots\sin\theta$, where each node carries an elementary surface of
$\sin\theta\,\Delta\theta\,\Delta\phi$ and the constant factor $\Delta\theta\Delta\phi$ has been omitted.
The differentiation in the smoothing term ($L_{4}$) is achieved by the usual five-point
stencil for the 2D-Laplace operator. Each of the constraints $L_{n}$ is weighted by a yet undetermined factor
$\mu_{n}$. The first term $(n=1)$ corresponds to the force-balance condition,
and the next $(n=2)$ to the torque-free condition. The following term $(n=3)$
ensures that the optimized boundary condition agrees with the measured
photospheric data, and that the last term $(n=4)$ controls the smoothing. The
2D-Laplace operator is designated by $\Delta$.
The aim of our preprocessing procedure is to minimize $L$ so that all terms
$L_{n}$, if possible, become small simultaneously. This will yield a surface
magnetic field:
\begin{equation}
{\vec{B}}_{min}=\mathrm{argmin}(L)\label{thirty}
\end{equation}
Besides a dependence on the observed magnetogram, the solution in Eq. (\ref{thirty}) now also
depends on the coefficients $\mu_{n}$. These coefficients are a formal necessity because the terms $L_{n}$ represent
different quantities. By means of these coefficients, however, we can also give more or less weight to the individual
terms in the case where a reduction in one term opposes a reduction in another. This competition obviously exists
between the observation term $(n=3)$ and the smoothing term $(n=4)$.
The smoothing is performed consistently for all three magnetic field components.
To obtain Eq.(\ref{thirty}) by iteration, we need the derivative of $L$ with respect to each of the three field components at
every node $(q)$ of the bottom boundary grid. We have, however, taken into account that $B_{r}$ is measured
with much higher accuracy than $B_{\theta}$ and $B_{\phi}$. This is achieved by assuming that the vertical component is invariable
compared to horizontal components in all terms where mixed products of the vertical and horizontal
field components occur, e.g., within the constraints \citep{Wiegelmann06sak}.
The relevant functional derivatives of $L$ are therefore\footnote{See Appendix B for partial derivative of
$L_{4}$ with respect to each of the three field components.}
\begin{equation}\begin{split}
\frac{\partial L}{\partial(B_{\theta})_{q}}=& 2\mu_{1}(B_{\theta}\sin^{2}\theta\cos\phi-
B_{r}\sin\theta\cos\theta\cos\phi )_{q}\times
\\&\sum_{p}\big[E_{B}^{-}\sin\theta\cos\phi-B_{r}B_{1}\big]\sin\theta
\\&+2\mu_{1}(B_{\theta}\sin^{2}\theta\sin\phi- B_{r}\sin\theta\cos\theta\sin\phi )_{q}\times
\\&\sum_{p}\big[E_{B}^{-}\sin\theta\sin\phi-B_{r}B_{2}\big]\sin\theta
\\&+2\mu_{1}( B_{\theta}\sin\theta\cos\theta+B_{r}\sin^{2}\theta)_{q}\times
\\&\sum_{p}\big[E_{B}^{-}\cos\theta+B_{r}B_{\theta}\sin\theta\big]\sin\theta
\\ &+2\mu_{2}\Big[(B_{r}\sin\theta\sin\phi)_{q}\sum_{p} B_{r}B_{3}\sin\theta
\\&-(B_{r}\sin\theta\cos\phi)_{q}\sum_{p} B_{r}B_{4}\sin\theta\Big]
\\ &+2\mu_{3}(B_{\theta}-B_{\theta obs})_{q}+2\mu_{4}(\Delta(\Delta B_{\theta}))_{q},\label{thirty_one}
\end{split}\end{equation}
\begin{equation}\begin{split}
\\ \frac{\partial L}{\partial(B_{\phi})_{q}}=&2\mu_{1}(B_{\phi}\sin^{2}\theta\cos\phi+B_{r}\sin\theta\sin\phi )_{q}\times
\\&\sum_{p}\big[E_{B}^{-}\sin\theta\cos\phi-B_{r}B_{1}\big]\sin\theta
\\ &+2\mu_{1}(B_{\phi}\sin^{2}\theta\sin\phi- B_{r}\sin\theta\cos\phi )_{q}\times
\\&\sum_{p}\big[E_{B}^{-}\sin\theta\sin\phi-B_{r}B_{2}\big]\sin\theta
\\ &+2\mu_{1}( B_{\phi}\sin\theta\cos\theta)_{q}\sum_{p}\big[E_{B}^{-}\cos\theta
+B_{r}B_{\theta}\sin\theta\big]\sin\theta
\\ &+2\mu_{2}\Big[(B_{r}\cos\theta\cos\phi\sin\theta)_{q}\sum_{p}B_{r}B_{3}\sin\theta
\\ &+(B_{r}\cos\theta\sin\phi\sin\theta)_{q}\sum_{p}B_{r}B_{4}\sin\theta
\\ &+(B_{r}\sin^{2}\theta)_{q}\sum_{p}B_{r}B_{\phi}\sin^{2}\theta\Big]
+2\mu_{3}(B_{\phi}-B_{\phi obs})_{q}\\ &+2\mu_{4}(\Delta(\Delta B_{\phi}))_{q},\label{thirty_two}
\end{split}
\end{equation}
\begin{equation}
\frac{\partial L}{\partial(B_{r})_{q}}=2\mu_{3}(B_{r}-B_{r obs})_{q}+2\mu_{4}(\Delta(\Delta B_{r}))_{q}
\label{thirty_three}
\end{equation}
The optimization is performed iteratively by a simple Newton or Landweber iteration, which
replaces
\begin{equation}
(B_{r})_{q}\longleftarrow (B_{r})_{q}-\mu \frac{\partial L}{\partial(B_{r})_{q}},\label{thirty_four}
\end{equation}
\begin{equation}
(B_{\theta})_{q}\longleftarrow (B_{\theta})_{q}-\mu \frac{\partial L}{\partial(B_{\theta})_{q}},\label{thirty_five}
\end{equation}
\begin{equation}
(B_{\phi})_{q}\longleftarrow (B_{\phi})_{q}-\mu \frac{\partial L}{\partial(B_{\phi})_{q}},\label{thirty_six}
\end{equation}
at every step. The convergence of this scheme towards a solution of Eq. (\ref{thirty}) is
obvious: $L$ has to decrease monotonically at every step as long as the derivatives in Eqs. (\ref{thirty_one})-(\ref{thirty_three})
have a nonzero component. These derivatives, however, vanish only if an extremum of $L$
is reached. Since $L$ is fourth order in $B$, this may not necessarily be a global minimum; in rare cases, if the step size
is handled carelessly, it may even be a local maximum. In practical calculations, this should not, however, be a problem
and from our experience we rapidly obtain a minimum ${\vec{B}}_{min}$ of $L$, once the parameters $\mu_{n}$ are specified
\citep{Wiegelmann06sak}.
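To illustrate the scheme, the following sketch (in Python; names are ours) implements the Landweber update of Eqs. (\ref{thirty_four})-(\ref{thirty_six}) together with the five-point Laplacian stencil mentioned above. It is deliberately reduced to the observation term ($L_{3}$) and the smoothing term ($L_{4}$); the lengthy force and torque gradients of Eqs. (\ref{thirty_one})-(\ref{thirty_three}) are omitted:
\begin{verbatim}
import numpy as np

def laplace(f):
    """Five-point stencil for the 2-D Laplacian (unit grid spacing),
    with the outermost pixels left untouched."""
    L = np.zeros_like(f)
    L[1:-1, 1:-1] = (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:]
                     + f[1:-1, :-2] - 4.0 * f[1:-1, 1:-1])
    return L

def preprocess(B_obs, mu3=1.0, mu4=1.0, mu=1e-4, steps=5000):
    """Landweber iteration of Eqs. (34)-(36), reduced to the
    observation (L3) and smoothing (L4) terms; the force and torque
    gradients of Eqs. (31)-(33) are omitted from this sketch.

    B_obs : dict with 'r', 'theta', 'phi' 2-D measured components.
    """
    B = {k: v.copy() for k, v in B_obs.items()}
    for _ in range(steps):
        for k in B:
            grad = (2.0 * mu3 * (B[k] - B_obs[k])          # dL3/dB
                    + 2.0 * mu4 * laplace(laplace(B[k])))  # dL4/dB
            B[k] -= mu * grad                              # update step
    return B
\end{verbatim}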
\begin{figure*}[htp!]
\centering
\mbox{
\includegraphics[bb=91 105 470 355,clip,height=4.5cm,width=6.0cm]{12529f3a.eps}
\includegraphics[bb=91 105 470 355,clip,height=4.5cm,width=6.0cm]{12529f3b.eps}
\includegraphics[bb=91 105 470 355,clip,height=4.5cm,width=6.0cm]{12529f3c.eps}}
\mbox{
\includegraphics[bb=91 105 470 355,clip,height=4.5cm,width=6.0cm]{12529f3d.eps}
\includegraphics[bb=91 105 470 355,clip,height=4.5cm,width=6.0cm]{12529f3e.eps}
\includegraphics[bb=91 105 470 355,clip,height=4.5cm,width=6.0cm]{12529f3f.eps}}
\mbox{
\includegraphics[bb=91 105 470 355,clip,height=4.5cm,width=6.0cm]{12529f3g.eps}
\includegraphics[bb=91 105 470 355,clip,height=4.5cm,width=6.0cm]{12529f3h.eps}
\includegraphics[bb=91 105 470 355,clip,height=4.5cm,width=6.0cm]{12529f3i.eps}}
\includegraphics[bb=91 30 470 45,clip,height=0.3cm,width=14.0cm]{12529f3g.eps}
\includegraphics[bb=91 45 470 74,clip,height=0.8cm,width=14.0cm]{12529f3g.eps}
\caption{ {\bf{Top row:}} vector magnetogram derived from the Low and Lou solution. From left to right, the three components
$B_{r}$, $B_{\theta}$ \& $B_{\phi}$ are shown. {\bf{Middle row:}} the same magnetogram as in
the first row, but with noise added (noise model I). {\bf{Bottom row:}} magnetogram resulting from preprocessing
of the disturbed magnetogram shown in the second row. The magnetic fields are measured in gauss. The vertical and horizontal
axes show latitude, $\theta$, and longitude, $\phi$, on the photosphere, respectively.}
\label{fig3}
\end{figure*}
\begin{figure*}[htp!]
\centering
\mbox{\subfigure[Original reference field]
{\includegraphics[bb=120 60 445 350,clip,height=6.0cm,width=7.0cm]{12529f1a.eps}}
\subfigure[Potential field]
{\includegraphics[bb=120 60 445 350,clip,height=6.0cm,width=7.0cm]{12529f1b.eps}}}
\mbox{\subfigure[Field from noisy data]
{\includegraphics[bb=120 60 445 350,clip,height=6.0cm,width=7.0cm]{12529f4c.eps}}
\subfigure[Field from preprocessed data]
{\includegraphics[bb=120 60 445 350,clip,height=6.0cm,width=7.0cm]{12529f4d.eps}}}
\caption{a) Some field lines for the original Low and Lou solution. b) Potential
field reconstruction. c) Nonlinear force-free reconstruction from noisy data (noise
model I) without preprocessing. d) Nonlinear force-free reconstruction from noisy
data (noise model I) after preprocessing the vector magnetogram with our newly
developed spherical code. }
\label{fig4}
\end{figure*}
\section{Application to different noise-models}
We extract the bottom boundary of the Low and Lou equilibrium and use it as input for our extrapolation code
\citep[see][]{Wiegelmann04}. This artificial vector magnetogram (see first row of Fig. \ref{fig3}), derived
from a semi-analytical solution, is of course in perfect agreement with the assumption of a force-free field
above (Aly criteria), and the result of our extrapolation code was in reasonable agreement with the original.
True measured vector magnetograms are of course neither ideal nor smooth, and we simulate this effect by adding noise
to the Low and Lou magnetogram \citep{Wiegelmann06sak}. We add noise to this ideal solution in the
form:
\newline\indent\textbf{Noise model I:}\\
$\delta B_{i}=n_{l}\cdot r_{n}\cdot \sqrt{B_{i}},$ where $n_{l}$ is the noise level and $r_{n}$ a random number
in the range $-1....1$. The noise level was chosen to be $n_{l}=10.0$ for the transverse magnetic field
$(B_{\theta},B_{\phi})$ and $n_{l}=0.5$ for $B_{r}$. This mimics a real magnetogram (see the middle row of Fig.
\ref{fig3}) with Gaussian noise and significantly higher noise in the transverse components of the magnetic field.
\newline\indent\textbf{Noise model II:}\\
$\delta B_{i}=n_{l}\cdot r_{n},$ where $n_{l}$ is the noise level and $r_{n}$ a random number in the range $-1....1$.
The noise level was chosen to be $n_{l}=20.0$ for the transverse magnetic field $(B_{\theta},B_{\phi})$ and $n_{l}=1.0$ for
$B_{r}$. This noise model adds noise independent of the local magnetic field strength.
\newline\indent\textbf{Noise model III:}\\
$\delta B_{r}=\textrm{constant}$, $\delta B_{t}=\frac{\delta
B^{2}_{t{\rm min}}}{\sqrt{B^{2}_{t}+B^{2}_{t{\rm min}}}}$, where we choose a constant noise level $\delta B_{r}$ of $1$ and a minimum
detection level $\delta B_{t{\rm min}}=20$. This noise model mimics the effect that the transverse noise
level is higher in regions of low magnetic field strength \citep{Wiegelmann06sak}.
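For concreteness, the following minimal Python sketch (not part of the original pipeline; the array names are hypothetical stand-ins, and we take the absolute value inside the square root since the field components may be negative) illustrates how noise model I could be applied to a magnetogram:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def add_noise_model_I(B, n_l):
    # delta B = n_l * r_n * sqrt(|B|), with r_n uniform in [-1, 1]
    r_n = rng.uniform(-1.0, 1.0, size=B.shape)
    return B + n_l * r_n * np.sqrt(np.abs(B))

# stand-in 2-D arrays playing the role of the magnetogram components
B_r, B_theta, B_phi = (rng.normal(0.0, 100.0, (60, 74)) for _ in range(3))

B_r_noisy     = add_noise_model_I(B_r,     n_l=0.5)
B_theta_noisy = add_noise_model_I(B_theta, n_l=10.0)
B_phi_noisy   = add_noise_model_I(B_phi,   n_l=10.0)
\end{verbatim}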
\begin{table*}
\caption {
Figures of merit for the three different noise models with and without preprocessing along
with model reference field and potential field.
}
\hspace*{-0.5cm}
\centering
\begin{tabular}{lc|lllcc|rrrrrr}
\hline \hline Model&Preprocessed&$L$&$L_1$&$L_2$&
$\parallel \nabla \cdot {\bf B} \parallel_{\infty}$&
$\parallel {\bf j } \times {\bf B} \parallel_{\infty} $ &
$C_{\rm vec}$&$C_{\rm CS}$&$E_{N}$&$E_{M}$&$\epsilon$ &Steps\\
\hline
Original &&$0.029$&$0.015$&$0.014$&$1.180$&$1.355$& $ 1$&$ 1$&$ 0$&$ 0$&$ 1$&$ $ \\
Potential &&$0.020$&$0.007$&$0.014$&$1.706$&$1.091$&$0.736$&$0.688$&$0.573$&$0.535$&$0.676$&$$ \\
Noise model I &No&$22.015$&$8.612$&$13.403$&$25.531$&$11.671$&$0.819$&$0.767$&$0.337$&$0.421$&$0.861$&$1337$ \\
Noise model I &Yes&$0.105$&$0.066$&$0.039$&$1.746$&$1.806$&$0.951$&$0.947$&$0.197$&$0.105$&$0.964$&$12191$ \\
Noise model II &No&$18.957$&$7.915$&$11.042$&$23.089$&$9.871$&$0.828$&$0.774$&$0.321$&$0.417$&$0.869$&$1484$ \\
Noise model II &Yes&$0.097$&$0.057$&$0.040$&$1.533$&$1.617$&$0.963$&$0.951$&$0.191$&$0.099$&$0.971$&$11423$ \\
Noise model III &No&$17.718$&$7.615$&$10.103$&$20.763$&$8.992$&$0.859$&$0.781$&$0.310$&$0.402$&$0.873$&$1497$ \\
Noise model III &Yes&$0.081$&$0.043$&$0.038$&$1.382$&$1.407$&$0.979$&$0.957$&$0.189$&$0.098$&$0.982$&$10378$ \\
\hline
\end{tabular}
\end{table*}
The bottom row of Fig. \ref{fig3} shows the preprocessed vector magnetogram (for noise model I) after applying our procedure.
The aim of the preprocessing is to use the resulting magnetogram as input for a nonlinear force-free magnetic field extrapolation.
Figure \ref{fig4} shows in panel a) the original Low and Lou solution and in panel b) a corresponding
potential field reconstruction. In Fig. \ref{fig4} we present only the
inner region of the whole magnetogram (marked with the black rectangular box in Fig. \ref{fig3}a) because the surrounding magnetogram
is used as a boundary layer (6 grid points) for our nonlinear force-free code. The computation was done on a $26 \times 60 \times 74$
grid including a 6 pixel boundary layer towards the lateral and top boundaries of the computational box $V$. In the remaining
panels of Fig. \ref{fig4}, we demonstrate the effect of noise model I on the reconstruction. The noise
levels were chosen so that the mean noise was similar for all three noise models. Fig. \ref{fig4}c shows a nonlinear
force-free reconstruction from noisy data (noise model I, magnetogram shown in the middle row of Fig.
\ref{fig3}), and Fig. \ref{fig4}d presents a nonlinear force-free reconstruction after preprocessing (magnetogram shown
in the bottom row of Fig. \ref{fig3}). After preprocessing (see Fig. \ref{fig4}d), we achieve far closer agreement with
the original solution (Fig. \ref{fig4}a). Field lines are plotted from the same photospheric footpoints in the positive polarity.
For the other noise models, II and III, we likewise find that the preprocessed data agree more closely with the original solution (Fig. \ref{fig4}a).
We check the correlation of the original solution with our reconstruction with the help of the vector correlation function defined
in (\ref{nine}).
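For reference, assuming the standard definition of the vector correlation metric, $C_{\rm vec}=\sum_i \mathbf{B}_i\cdot \mathbf{b}_i / \left(\sum_i \lVert \mathbf{B}_i \rVert^2 \sum_i \lVert \mathbf{b}_i \rVert^2\right)^{1/2}$ (our reading of Eq. (\ref{nine}), which is defined earlier in the paper), a minimal sketch of its evaluation is:
\begin{verbatim}
import numpy as np

def c_vec(B_ref, B_rec):
    # B_ref, B_rec: arrays of shape (N, 3) holding the reference and
    # reconstructed field vectors at N grid points
    num = np.sum(B_ref * B_rec)
    den = np.sqrt(np.sum(B_ref**2) * np.sum(B_rec**2))
    return num / den
\end{verbatim}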
Table $2$ confirms the visual inspection of Fig. \ref{fig4}. The correlation of the reconstructed magnetic field with the original
improves significantly after preprocessing of the data for all noise models.
We knew already from previous studies \citep{Wiegelmann03,Wiegelmann04} that noise and inconsistencies in vector magnetograms
have a negative influence on the nonlinear force-free reconstruction, and the preprocessing routine described in this paper shows
how to overcome these difficulties in the case of spherical geometry. As indicated by Fig. \ref{fig5}, the higher the noise level
added to the original magnetogram, the smaller the vector correlation between the field reconstructed from the noisy
magnetogram and the reference field. However, the corresponding vector correlations for the fields
reconstructed from the preprocessed magnetograms show no significant change, because the preprocessing largely removes the
noise added to the original magnetogram at the various noise levels.
\begin{figure}
\centering
\includegraphics[bb=90 45 496 342,clip,height=6.5cm,width=8.5cm]{12529fg5.eps}
\caption{Vector correlation plotted against noise level for noise model I.}
\label{fig5}
\end{figure}
\section{Conclusion and outlook}
In this paper, we have developed and tested the optimization method for the reconstruction of nonlinear
force-free coronal magnetic fields in spherical geometry by restricting the code to limited parts of the Sun, as
suggested by \citet{Wiegelmann07}.
The optimization method minimizes a functional consisting of a quadratic form of the force
balance and the solenoidal condition. Without a weighting function, all six boundaries are equally
likely to influence the solution. The effect of the top and lateral boundaries can be reduced by introducing a boundary layer
around the physical domain \citep{Wiegelmann04}. The physical domain is a wedge-shaped area within which we
reconstruct the coronal magnetic field that is consistent with the photospheric vector magnetogram data.
The boundary layer replaces the hard lateral and top boundaries used previously. In the physical domain, the weighting function
is unity. It drops monotonically in the boundary layer and reaches zero at the boundary of the computational box.
At the boundary of the computational box, we set the field to the value of the potential field computed from $B_{r}$ at
the bottom boundary. Our test calculations show that a finite-sized weighted boundary yields far more reliable results.
The depth $nd$ of this boundary layer influences the quality of the reconstruction, since the magnetic
flux in these test cases is not well concentrated inside the interior of the box.
In this work, we have presented a method for preprocessing vector magnetogram data to be able to use the preprocessing result
as input for a nonlinear force-free magnetic field extrapolation with help of an optimization code in spherical geometry.
We extended the preprocessing routine developed by \citet{Wiegelmann06sak} to spherical geometry. As a first test of the
method, we use the Low and Lou solution with added noise from different noise models.
A direct use of the noisy photospheric data for a nonlinear force-free extrapolation showed no good agreement with the
original Low and Lou solution, but after applying our newly developed preprocessing method
we obtained a reasonable agreement with the original. The preprocessing method changes the boundary data within their noise limits
to drive the magnetogram towards boundary conditions that are consistent with the assumption of a force-free field above.
The transverse field components with higher noise level are modified more than the radial components.
To carry out the preprocessing, we use a minimization principle.
On the one hand, we require the final boundary data to be as close as possible (within the noise level)
to the original measured data; on the other hand, the data are forced to fulfill the consistency criteria and to be sufficiently smooth.
Smoothness of the boundary data is required by the nonlinear force-free extrapolation code, but is also physically necessary because
the magnetic field at the base of the corona should be smoother than in the photosphere, where it is measured. In addition,
we found that adding a larger amount of noise to the magnetogram decreases its vector correlation with the model reference
field whenever we reconstruct without preprocessing.
We plan to use this newly developed code for future missions such as SDO (Solar Dynamics Observatory)
when full-disc magnetogram data become available.
\begin{acknowledgements}Tilaye Tadesse acknowledges a fellowship of the International Max-Planck Research School
at the Max-Planck Institute for Solar System Research, and the work of T.
Wiegelmann was supported by DLR-grant $50$ OC $453$ $0501$. The authors would like to thank the referee for
his/her constructive and helpful comments.
\end{acknowledgements}
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
\subsection{Background}
\hspace{5mm}
For a polynomial $P$ in the complex plane, the {\it Julia set} $J_P$ is the complement of the open set in $\mathbb{C}$ where the sequence $\{P^n\}_{n\geq 1}$ forms a normal family locally. The dynamical behavior of the points in the Julia set is extremely chaotic and, in general, the Julia set of a polynomial has a very complicated structure. It is easy to check that if two polynomials $P$ and $Q$ commute, then their Julia sets coincide. Conversely, it follows from the work of Baker--Er\"{e}menko \cite{BE} and Beardon \cite{Be1} that if $J_P=J_Q$ for two polynomials $P$ and $Q$ of degree greater than or equal to $2$, then
\[
P \circ Q = \sigma \circ Q \circ P
\]
where $\sigma(z) = az + b$ with $\vert a \vert = 1$ and $\sigma(J_P) = J_P$. Thus the Julia sets of two polynomials in the complex plane, of degree greater than or equal to $2$, coincide if and only if they commute up to a rigid motion of the complex plane. From now on, this property of Julia sets will be referred to as the {\it{rigidity}} property of Julia sets.
\medskip
Recently, in \cite{BPK}, an analogue of the above-mentioned {\it rigidity} phenomenon has been proved for the H\'{e}non maps in $\mathbb{C}^2$ which, by the classification theorem of Friedland--Milnor (\cite{FM}), form the most important class of automorphisms of $\mathbb{C}^2$ from the point of view of dynamics. The class of H\'{e}non maps consists of the polynomial automorphisms of
$\mathbb{C}^2$ of the form
\begin{equation}\label{henon form}
H = H_m \circ H_{m-1} \circ \cdots \circ H_1
\end{equation}
where
\begin{equation*}
H_j(x, y) = (y, p_j(y) - \delta_j x)
\end{equation*}
with $p_j$ a polynomial of degree $d_j \ge 2$ with highest degree coefficient $c_j\in \mathbb{C}$ and $0\neq \delta_j \in \mathbb{C}$. The degree of $H$ is $d = d_1d_2 \ldots d_m$. Interested readers can look at \cite{BS1}, \cite{BS2} and \cite{BS3} for a detailed study of the dynamics of H\'{e}non maps.
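For instance, taking $m=1$ and $p_1(y)=y^2+c$ in (\ref{henon form}) gives the familiar quadratic H\'{e}non family
\[
H(x,y)=(y,\, y^2+c-\delta_1 x),
\]
which has degree $d=2$.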
\medskip
As in the case of polynomials in $\mathbb{C}$, one can give an analogous definition of Julia sets for H\'{e}non maps in $\mathbb{C}^2$. For a H\'{e}non map $H$ in $\mathbb{C}^2$, the {\it Julia sets} $J_H^\pm$ are defined as the complements of the open sets in $\mathbb{C}^2$ where the sequences $\{H^{\pm n}\}$ form normal families locally. Here $H^{\pm n}$ denote the $n$-fold iterates of $H$ and $H^{-1}$, respectively. It turns out that
\[
J_H^\pm =\partial K_H^\pm
\]
where
\[
K^{\pm}_H = \{(x, y) \in \mathbb C^2 : \;\text{the sequence}\; \left(H^{\pm n}(x, y) \right) \; \text{is bounded} \},
\]
the set of {\it non-escaping} points.
\begin{defn}
Let $H$ be an automorphism on $\mathbb{C}^2$. A set $S\subseteq \mathbb{C}^2$ is called completely invariant under the map $H$ if $H(S)=S$.
\end{defn}
Note that $K_H^\pm$ are completely invariant under $H$. In \cite{BPK}, for a H\'{e}non map $H$, we proved the following {\it {rigidity}} theorem.
\subsection*{Known Theorem 1 (KT1).}
If $F$ is an automorphism of $\mathbb{C}^2$ which keeps the non-escaping sets $K_H^\pm$ completely invariant, then $F$ is a polynomial automorphism. If $\deg F \geq 2$, then either $F$ or $F^{-1}$ is a H\'{e}non map. Further, $F$ shares a close relation with $H$, viz.,
\begin{equation} \label{relFH1}
F^{\pm 1} \circ H = C \circ H \circ F^{\pm 1}
\end{equation}
where $C$ is a linear map of the form $(x, y) \mapsto (\delta_- x, \delta_+ y)$ with $\vert \delta_\pm \vert = 1$. Also,
\begin{equation}\label{relFH2}
\text{ either } F^m=\sigma H^n \text{ or } F^{-m}=\sigma H^n
\end{equation}
for some $m,n \in \mathbb{N}$ and for some affine automorphism $\sigma$ in $\mathbb{C}^2$.
\medskip
The same techniques that are used to prove the above theorem give the following version of the {\it {rigidity}} theorem for Julia sets of H\'{e}non maps.
\subsection*{Known Theorem 2 (KT2).}
Let $H$ and $F$ be two H\'{e}non maps such that their Julia sets coincide, i.e., $J_H^\pm=J_F^\pm$; then (\ref{relFH1}) and (\ref{relFH2}) hold. Conversely, if $F$ and $H$ are two H\'{e}non maps satisfying
\[
F\circ H=C\circ H \circ F
\]
where $C:(x,y)\mapsto (\delta_- x, \delta_+ y)$ with $\lvert \delta_\pm \rvert =1$ and $C(K_H^\pm)=K_H^\pm$, then $J_H^\pm=J_F^\pm$.
\medskip
The goal of the present article is to improve and extend the {\it{rigidity}} result of H\'{e}non maps obtained in \cite{BPK}.
\subsection{Main Results}
\hspace{20mm}
\noindent
Let $H$ be a H\'{e}non map. Now if we start with a polynomial automorphism $F$ of $\mathbb{C}^2$ such that $F(K_H^+)=K_H^+$, then we can recover the same relations between $H$ and $F$ as in (\ref{relFH1}) and (\ref{relFH2}) (see (KT1)). This shows that the condition $F(K_H^-)=K_H^-$ in (KT1) is redundant if we start with a polynomial automorphism $F$ in $\mathbb{C}^2$. With this, we present our first theorem.
\begin{thm} \label{new rigidity}
Let $H$ be a H\'{e}non map in $\mathbb{C}^2$ and $F$ be a polynomial automorphism of $\mathbb{C}^2$ which keeps $K_H^+$ completely invariant. Then,
\begin{itemize}
\item[(a)]
if $\deg(F) = 1$, $F$ is of the form
\[
(x,y)\mapsto (ax+f, dy+g)
\]
with $\lvert a \rvert= \lvert d \rvert=1$ and
\item[(b)]
if $\deg F \geq 2$, then either $F$ or $F^{-1}$ is a H\'{e}non map and accordingly there exist $m,n \in \mathbb{N}^*$ such that
$$
\text{either } F^{ m}= H^n \text{ or } F^{ -m}= H^n .
$$
Further, there exists $m\in \mathbb{N}^*$ such that
$$
\text{either } F^{ m}\circ H= H \circ F^m \text{ or } F^{ -m}\circ H= H\circ F^{-m} .
$$
\end{itemize}
\end{thm}
\noindent
The Green functions of a H\'{e}non map $H$ are defined as follows:
\begin{equation} \label{Green}
G^{\pm}_H(x, y) := \lim_{n \rightarrow \infty} \frac{1}{d^n} \log^+ \Vert H^{\pm n}(x, y) \Vert
\end{equation}
for all $(x,y)\in \mathbb{C}^2$. Here $\log^+(x)=\max \{\log x, 0\}$. The functions $G_H^\pm$ are continuous, plurisubharmonic, non-negative on $\mathbb C^2$ and pluriharmonic on $\mathbb C^2 \setminus K^{\pm}_H$ vanishing precisely on $K^{\pm}_H$.
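As a purely illustrative aside (not used in any of the proofs), the limit in (\ref{Green}) can be estimated numerically by iterating until an orbit leaves a large bidisk; the following minimal Python sketch does this for the quadratic family above, with all names and thresholds hypothetical:
\begin{verbatim}
import numpy as np

def green_plus(H, d, z, n_max=100, R=1e8):
    # estimate G_H^+(z) = lim d^{-n} log^+ ||H^n(z)||
    for n in range(n_max):
        if max(abs(z[0]), abs(z[1])) > R:
            return np.log(np.linalg.norm(z)) / d**n
        z = H(z)
    return 0.0  # orbit stayed bounded: treat z as a point of K_H^+

c, delta = -1.2, 0.3
H = lambda z: (z[1], z[1]**2 + c - delta * z[0])  # degree-2 Henon map
print(green_plus(H, 2, (4.0, 4.0)))
\end{verbatim}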
\medskip
In the case of Theorem 1.1 in \cite{BPK}, since we start with an automorphism $F$ which keeps both $K_H^\pm$ invariant, a direct analysis of the possible forms of $F$ (using Jung's theorem, \cite{J}) shows that $F$ (or $F^{-1}$) must be a H\'{e}non map. Consequently, it follows immediately that $G_H^+=G_F^+$ (or $G_H^+=G_F^-$). But in the present case, since only the invariance of $K_H^+$ is available, a similar analysis using Jung's theorem does not a priori guarantee that $F$ is a H\'{e}non map; rather, we get that $F$ (and hence $F^{-1}$) is a regular (hence H\'{e}non-type) map (see Section 2 for the definitions of regular maps and H\'{e}non-type maps). It then requires some work to show that the Green functions of $H$ and $F$ coincide, i.e., $G_H^+=G_F^+$ or $G_H^+=G_F^-$ (one can define Green functions of regular and H\'{e}non-type maps in a similar fashion as in the case of H\'{e}non maps; this is discussed in Section 2).
We then use Lamy's theorem (\cite{L}) to show that some iterates of $H$ and $F$ (or $F^{-1}$) coincide, i.e., there exist $m,n \in \mathbb{N}$ such that $F^m=H^n$ (or $F^{-m}=H^n$), which shows that $F$ (or $F^{-1}$) is indeed a H\'{e}non map. In fact, in spirit Theorem \ref{new rigidity} is similar to Theorem 5.4 in \cite{L}.
\medskip
For each $c>0$, let (see \cite{DS})
\begin{equation*}
\tilde{\Omega}_{H,c}^\pm =\left\{(x,y)\in \mathbb{C}^2: G_H^\pm(x,y) \leq c\right\}, \:\ K_{H,c}^\pm =\left\{(x,y)\in \mathbb{C}^2: G_H^\pm(x,y) = c\right\},\;\ J_{H,c}^\pm= \partial K_{H,c}^\pm.
\end{equation*}
Note that for $c>0$, the set $K_{H,c}^+$ has empty interior and thus
\[
K_{H,c}^+=J_{H,c}^+.
\]
Further define
\begin{equation*}
G_{H,c}^\pm(x,y) := \max \left \{G_H^\pm(x,y) -c,0\right\}
\end{equation*}
for $(x,y)\in \mathbb{C}^2$.
The functions $G_{H,c}^\pm$ are continuous, plurisubharmonic, non-negative on $\mathbb C^2$ and pluriharmonic on $\mathbb C^2 \setminus J_{H,c}^\pm$ vanishing precisely on $\tilde{\Omega}_{H,c}^\pm$.
It can be shown that for a H\'{e}non map $H$, the {\it non-escaping} sets are the zero level sets of the Green functions $G_H^\pm$, i.e.,
$$
K_H^\pm=\{(x,y)\in \mathbb{C}^2: G_H^\pm(x,y)=0\}.
$$
Clearly, $K_{H,0}^\pm=K_H^\pm$. Theorem \ref{new rigidity} characterizes, in terms of $H$, the automorphisms of $\mathbb{C}^2$ which keep $K_{H,0}^+$ completely invariant. The next theorem shows that in the case $c>0$, there exists no automorphism, except possibly affine ones, which keeps $K_{H,c}^+$ completely invariant.
\begin{thm} \label{Glevel}
Let $H$ be a H\'{e}non map in $\mathbb{C}^2$. If $F$ is a polynomial automorphism in $\mathbb{C}^2$ such that
\begin{equation*}
F\left(K_{H,c}^+\right)=K_{H,c}^+
\end{equation*}
for some $c >0$, then $F$ is an affine automorphism of the following form
\[
(x,y)\mapsto (ax+by+f, dy+g).
\]
\end{thm}
In \cite{F}, Forn{\ae}ss showed the existence of so-called \short{k}s. A domain $\Omega$ which can be expressed as an increasing union of unit balls (up to biholomorphism), such that the Kobayashi metric vanishes identically on $\Omega$ but $\Omega$ admits a plurisubharmonic function bounded from above, is called a \short{k}. For a H\'{e}non map $H$, it can be shown that the interior of any non-zero sublevel set of the Green function $G_H^+$, i.e.,
\[
\Omega_{H,c}=\left\{(x,y)\in \mathbb{C}^2: G_H^+(x,y) < c\right\}
\]
is a \short{2} for any $c>0$ (see \cite{F}). Since $\Omega_{H,c}$ is essentially an increasing union of Euclidean balls in $\mathbb{C}^2$ (whose automorphism group is well understood), it is an interesting task to understand the automorphism group of $\Omega_{H,c}$. Since any polynomial automorphism $F$ of $\mathbb{C}^2$ which acts as an automorphism of $\Omega_{H,c}$ will keep $K_{H,c}^+$ completely invariant, a simple application of Theorem \ref{Glevel} gives the following proposition.
\begin{prop}
For any $c>0$, there exists no polynomial automorphism of $\mathbb{C}^2$, except possibly the affine automorphisms
of the form
\[
(x,y)\mapsto (ax+by+f, dy+g),
\]
which acts as an automorphism of $\Omega_{H,c}$.
\end{prop}
It follows from Theorem 1.1 in \cite{BPK} that if the zero level sets of Green functions of two H\'{e}non maps (or the Julia sets) coincide, then they almost commute. We prove that the same is true if any two level sets of the Green functions of a pair of H\'{e}non maps coincide.
\begin{thm} \label{two levels}
Let $H_1$ and $H_2$ be two H\'{e}non maps of degree $d_{H_1}$ and $d_{H_2}$, respectively, such that
\[
J_{H_1,c_1}^+= J_{H_2,c_2}^+ \text{ and } J_{H_1,d_1}^-= J_{H_2,d_2}^-
\]
for some $c_1, c_2, d_1, d_2 \geq 0$; then
$$
H_2\circ H_1=C \circ H_1 \circ H_2.
$$
Here $C(x,y)=(\delta_- x, \delta_+ y)$ with $\lvert \delta_+\rvert= e^ {c(d_{H_1}-1) (d_{H_2}-1)}$ and $\lvert \delta_-\rvert=e^ {d(d_{H_1}-1) (d_{H_2}-1)}$ where $c=c_1-c_2$ and $d=d_1-d_2$.
\end{thm}
Before we start proving our main theorems, we gather some preparatory material in the next section. Proofs of Theorem \ref{new rigidity}, Theorem \ref{Glevel} and Theorem \ref{two levels} appear in Section 3, Section 4 and Section 5, respectively.
\section{Preliminaries}
Readers are referred to \cite{BS1}, \cite{BS2}, \cite{BS3} and \cite{DS} for a detailed study of the material in this section.
\medskip
For $R>0$, let us first define a filtration of $\mathbb{C}^2$ as follows:
\begin{align*}
V^+_R &= \{ (x,y) \in \mathbb C^2: \vert x \vert < \vert y \vert, \vert y \vert > R \},\\
V^-_R &= \{ (x,y) \in \mathbb C^2: \vert y \vert < \vert x \vert, \vert x \vert > R \},\\
V_R &= \{ (x, y) \in \mathbb C^2: \vert x \vert, \vert y \vert \le R \}.
\end{align*}
For a given H\'{e}non map $H$ of degree $d$, there exists $R > 0$ such that
\[
H(V^+_R) \subset V^+_R, \; H(V^+_R \cup V_R) \subset V^+_R \cup V_R,
\]
\[
H^{-1}(V^-_R) \subset V^-_R, \; H^{-1}(V^-_R \cup V_R) \subset V^-_R \cup V_R,
\]
\[
K_H^{\pm} \subset V_R \cup V^{\mp}_R \text{ and } \mathbb C^2 \setminus K^{\pm}_H = \bigcup_{n=0}^{\infty} (H^{\mp n})(V^{\pm}_R).
\]
Recall that
\[
K^{\pm}_H = \{(x, y) \in \mathbb C^2 : \;\text{the sequence}\; \left(H^{\pm n}(x, y) \right) \; \text{is bounded} \}.
\]
As defined in the previous section, the Green functions
\[
G^{\pm}_H(x, y) = \lim_{n \rightarrow \infty} \frac{1}{d^n} \log^+ \Vert H^{\pm n}(x, y) \Vert
\]
for $(x,y)\in \mathbb{C}^2$. The functions $G_H^\pm$ are continuous, plurisubharmonic, non-negative on $\mathbb C^2$ and pluriharmonic on $\mathbb C^2 \setminus K^{\pm}_H$ vanishing precisely on $K^{\pm}_H$. By construction, the following functorial property holds:
\[
G^{\pm}_H \circ H = d^{\pm 1} G^{\pm}_H.
\]
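For instance, the relation for $G^+_H$ follows directly from the definition:
\[
G^{+}_H(H(z))=\lim_{n \rightarrow \infty} \frac{1}{d^n} \log^+ \Vert H^{n+1}(z) \Vert = d \lim_{n \rightarrow \infty} \frac{1}{d^{n+1}} \log^+ \Vert H^{n+1}(z) \Vert = d\, G^{+}_H(z),
\]
and the relation for $G^-_H$ is obtained in the same way using $H^{-1}$.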
Both $G^{\pm}_H$ have logarithmic growth near infinity, i.e., there exists $R>0$ such that
\begin{equation}\label{L1}
G_H^+ (x,y)= \log^+ \lvert y \rvert+ O(1)
\end{equation}
in $\overline{V_R^+ \cup V_R}$, and
\begin{equation}\label{L2}
G_H^- (x,y)= \log^+ \lvert x \rvert+ O(1)
\end{equation}
in $\overline{V_R^- \cup V_R}$. Hence
\begin{equation}\label{L3}
G_H^\pm (x,y)\leq \max \{\log^+ \lvert x\rvert, \log^+ \lvert y \rvert \}+ C
\end{equation}
for all $(x,y)\in \mathbb{C}^2$ and for some $C>0$.
The supports of the positive closed $(1,1)$ currents
\[
\mu^{\pm}_H = dd^c G^{\pm}_H
\]
are $J^{\pm}_H$ and $\mu_H = \mu^+_H \wedge \mu^-_H$ is an invariant measure for $H$.
\medskip
Recall that, for each $c>0$, we define (see \cite{DS})
\begin{equation*}
\tilde{\Omega}_{H,c}^\pm =\left\{(x,y)\in \mathbb{C}^2: G_H^\pm(x,y) \leq c\right\}, K_{H,c}^\pm =\left\{(x,y)\in \mathbb{C}^2: G_H^\pm(x,y) = c\right\} , J_{H,c}^\pm= \partial K_{H,c}^\pm
\end{equation*}
and recall that $K_{H,c}^+=J_{H,c}^+$ for $c>0$. Further define
\begin{equation*}
G_{H,c}^\pm(x,y) = \max \left \{G_H^\pm(x,y) -c,0\right\}
\end{equation*}
for $(x,y)\in \mathbb{C}^2$.
The functions $G_{H,c}^\pm$ are continuous, plurisubharmonic, non-negative on $\mathbb C^2$ and pluriharmonic on $\mathbb C^2 \setminus J_{H,c}^\pm$, vanishing precisely on $\tilde{\Omega}_{H,c}^\pm$. Clearly, $G_{H,c}^\pm$ satisfy the same inequalities as in (\ref{L1}), (\ref{L2}) and (\ref{L3}). Further, the supports of the positive closed $(1,1)$ currents
\[
\mu_{H,c}^{\pm} = dd^c G_{H,c}^{\pm}
\]
are $J^{\pm}_{H,c}$.
\medskip
The following theorem, proved by Dinh--Sibony (see \cite{DS}), has been crucially used to establish the main theorems of the present article.
\begin{thm}\label{DS1}
The current $\mu_H^+$ is the unique closed positive $(1,1)$ current of mass $1$ supported on $J_H^+$. For any $c>0$, the current $\mu_{H,c}^+$ is a closed positive $(1,1)$ current of mass $1$ supported on $J_{H,c}^+$.
\end{thm}
Any H\'{e}non map extends meromorphically to $\mathbb{P}^2$ with an isolated indeterminacy point at $I^+ = [1:0:0]$ and similarly, $H^{-1}$ extends to $\mathbb{P}^2$ with a lone indeterminacy point at $I^{-} = [0:1:0]$.
The class of H\'{e}non maps form the most important class of {\it regular} maps in $\mathbb{C}^2$.
\medskip
For a polynomial automorphism $f$ of $\mathbb{C}^2$, let $\hat{f}$ and $\hat{f}^{-1}$ be the meromorphic extensions of $f$ and $f^{-1}$ to $\mathbb{P}^2$, respectively. Let $I_f^+$ and $I_f^-$ be the indeterminacy points of $\hat{f}$ and
$\hat{f}^{-1}$ in $\mathbb{P}^2$, respectively.
\begin{defn}
We say that $f$ is regular if $I_f^+ \cap I_f^{-}=\emptyset$.
\end{defn}
For a regular map $f$ in $\mathbb{C}^2$ of degree $d$, the Green functions are defined as
\[
G^{\pm}_f(x, y) := \lim_{n \rightarrow \infty} \frac{1}{d^n} \log^+ \Vert f^{\pm n}(x, y) \Vert
\]
for $(x,y)\in \mathbb{C}^2$. We define
\[
K^{\pm}_f = \{(x, y) \in \mathbb C^2 : \;\text{the sequence}\; \left(f^{\pm n}(x, y) \right) \; \text{is bounded} \} \text{ and } J_f^{\pm}=\partial K_f^\pm.
\]
The functions $G_f^\pm$ are continuous, plurisubharmonic, non-negative on $\mathbb C^2$ and pluriharmonic on $\mathbb C^2 \setminus K^{\pm}_f$ vanishing precisely on $K^{\pm}_f$. By construction, the following functorial property holds:
\[
G^{\pm}_f\circ f = d^{\pm 1} G^{\pm}_f
\]
where $d$ is the degree of $f$. Further, the functions $G^{\pm}_f$ have logarithmic growth near $I_f^{\mp}$, and inequalities similar to (\ref{L1}), (\ref{L2}) and (\ref{L3}) hold for $f$. See \cite{SW} (Section 2) for the following proposition.
\begin{prop}\label{relegular attraction}
The points $I_f^+$ and $I_f^-$ are attracting fixed points for $f^{-1}$ and $f$, respectively. Furthermore, for any point $z\in \mathbb{C}^2\setminus K_f^\pm$,
\[
f^{\pm n} (z) \rightarrow I_f^\mp
\]
as $n\rightarrow \infty$.
\end{prop}
The supports of the positive closed $(1,1)$ currents
\[
\mu^{\pm}_f = dd^c G^{\pm}_f
\]
are $J^{\pm}_f$.
\medskip
The following theorem is due to Dinh--Sibony (\cite{DS})
\begin{thm}\label{DS2}
The current $\mu_f^+$ is the unique closed positive $(1,1)$ current of mass $1$ supported on the sets $K_f^+$ and $J_f^+$.
\end{thm}
\begin{defn}
A polynomial automorphism $f$ in $\mathbb{C}^2$ is called H\'{e}non-type if
\[
f=\varphi \circ h \circ \varphi^{-1}
\]
where $h$ is a composition of H\'{e}non maps and $\varphi$ is a polynomial automorphism of $\mathbb{C}^2$.
\end{defn}
Clearly, a regular map is a H\'{e}non-type map.
\medskip
Let $\mathcal{PSH}(\mathbb{C}^2)$ be the collection of all plurisubharmonic functions on $\mathbb{C}^2$. Set
\[
\mathcal{L}=\{v\in \mathcal{PSH}(\mathbb{C}^2): v(z)\leq \log^+ \lVert z \rVert+M \text{ with some } M>0\}
\]
where $\log^+(x)=\max \{\log x, 0\}$.
\begin{defn}
For a subset $S\subseteq \mathbb{C}^2$, the function
\[
L_S(z):=\sup \{u(z): u\in \mathcal{L}, u\leq 0 \text{ on } S\}
\]
for $z\in \mathbb{C}^2$, is called the pluricomplex Green function of $S$.
\end{defn}
One can look at Proposition 8.4.10 in \cite{MNTU} for the proof of the following proposition.
\begin{prop}\label{pluri henon}
The pluricomplex Green functions of the sets $K_H^\pm$ and of the sets $J_H^\pm$ are $G_H^\pm$.
\end{prop}
\section{Proof of Theorem \ref{new rigidity}}
Suppose that $\deg(F) \ge 2$. Then we show that
\begin{center}
{\it $F$ is a regular automorphism:}
\end{center}
\noindent
That $F$ is a regular automorphism is obtained by following the same line of arguments as in the proof of Theorem 1.1 in \cite{BPK}, which shows that any polynomial automorphism preserving the non-escaping sets $K_H^\pm$ is essentially a H\'{e}non map. In the present case, due to the unavailability of the invariance of $K_H^-$ under $F$, we need to stretch the arguments given in \cite{BPK} accordingly to conclude that $F$ is a regular polynomial automorphism.
\medskip
By Jung's theorem (see \cite{J}), $F$ can be written as a composition of affine maps and elementary maps in $\mathbb{C}^2$. Recall that an elementary map is of the form
\[
e(x, y) = (\alpha x + p(y), \beta y + \gamma)
\]
where $\alpha \beta \not= 0$ and $p(y)$ is a polynomial in $y$. We consider the following cases.
\medskip
\noindent
{\it{Case (i)}}: Let
\[
F=a_1\circ e_1 \circ a_2 \circ e_2\circ \cdots \circ a_k \circ e_k
\]
for some $k\geq 1$ where the $a_i$'s are non-elementary affine maps and the
$e_i$'s are non-affine elementary maps.
Without loss of generality, suppose that
\[
F=a_1\circ e_1 \circ a_2 \circ e_2.
\]
Let
\[
a_1(x,y)=(\alpha_1 x+ \beta_1 y+ \delta_1,\alpha_2 x+ \beta_2 y+ \delta_2 )
\]
with $\alpha_2\neq 0$. Now consider the maps
\[
a_1^2(x,y)=(\alpha_2 x+ \beta_2 y+ \delta_2, s_2y+r_2)
\]
and
\begin{equation}\label{aff}
a_1^1(x,y)=(bx+cy,y)
\end{equation}
where $b \not= 0, c=\alpha_1/\alpha_2$, $r_2=(\delta_1-c\delta_2)/b$ and $s_2=(\beta_1-c\beta_2)/b$. Note that $a_1= a_1^1\circ \tau\circ a_1^2$ where $\tau(x,y)=(y,x)$ for any $b\neq 0$. Expressing $a_2$ in a similar way, it follows that
\[
F=a_1^1\tau a_1^2 e_1 a_2^1\tau a_2^2 e_2.
\]
Now $a_1^2 e_1 a_2^1$ and $a_2^2 e_2$ are elementary maps, say $E_1$ and $E_2$, respectively. Therefore,
\[
F=a_1^1 \tau E_1 \tau E_2
\]
where $E_i(x,y)=(m_i x+p_i(y), k_i y+ r_i)$ and the $p_i$'s are polynomials in
$y$ of degree at least $2$ for $i=1,2$. Since $\tau E_1$ and $\tau E_2$ are H\'{e}non maps, it follows that $[1:0:0]$ is an indeterminacy point of $F$. But $F^{-1}([1:0:0])=[1:0:0]$. Thus, in this case, $F$ is a regular map with $[1:0:0]$ as the forward indeterminacy point. Note that the point $[w:1:0]$ is the indeterminacy point of $F^{-1}$ for some $w\in \mathbb{C}$.
\medskip
\noindent
{\it{Case (ii):}} Let
\[
F=a_1\circ e_1\circ a_2 \circ\cdots\circ e_{k-1} \circ a_k
\]
for some $k\geq 2$. That $F$ cannot be of this form, provided $F(K_H^+)=K_H^+$, follows from exactly the same set of arguments as in the proof of Theorem 1.1 in \cite{BPK} (or Case (ii) in the proof of Theorem \ref{Glevel} in the present paper).
\medskip
\noindent
{\it{Case (iii):}} Let
\[
F=e_1\circ a_1 \circ e_2 \circ a_2 \circ \cdots \circ e_k \circ a_k
\]
for some $k\geq 1$. Note that $F^{-1}$ has a form as in Case (i). Since $F^{-1}$ also keeps $K_H^+$ invariant, it follows that $F^{-1}$ is a regular map with $[1:0:0]$ as the indeterminacy point. Hence, $F$ is a regular map.
\medskip
\noindent
{\it{Case (iv):}} Let
\[
F=e_1\circ a_1 \circ e_2 \circ a_2 \circ \cdots\circ a_{k-1}\circ e_k
\]
for some $k\geq 1$. For simplicity, we work with
\[
F=e_1\circ a_1 \circ e_2
\]
and as in the previous cases, we write
\[
F=e_1 a_1^1\tau a_1^2 e_2
\]
and thus,
\[
\tau F=\tau e_1 a_1^1\tau a_1^2 e_2.
\]
Note that both $\tau e_1 a_1^1$ and $\tau a_1^2 e_2$ are H\'{e}non maps. Thus $\tau F$ is a H\'{e}non map. Therefore, $F[0:1:0]=[1:0:0]$ and consequently, $F(V_R^+)$ will intersect $K_H^+$, since $[1:0:0]$ is a limit point of $K_H^+$ in $\mathbb{P}^2$. This implies $V_R^+ \cap K_H^+\neq \emptyset$, since $F(K_H^+)=K_H^+$. This is clearly a contradiction. Therefore, $F$ cannot be of this form.
\medskip
Thus we have proved that $F$ is a regular map. Further, the indeterminacy point of $F$ is either $[1:0:0]$ or $[w:1:0]$, and accordingly the indeterminacy point of $F^{-1}$ is either $[w:1:0]$ or $[1:0:0]$.
\begin{center}
{\it Green functions of $H$ and $F$ coincide:}
\end{center}
Note that we have shown above that if $F(K_H^+)=K_H^+$, then the forms appearing in Case (i) and Case (iii) are the two possible forms of $F$. Now in Case (i), $I_F^+=[1:0:0]$ is the indeterminacy point of $F$. Hence $I_F^+$ is the attracting fixed point for $F^{-1}$ (see Proposition \ref{relegular attraction}, Section 2).
Now we have $F(K_H^+)=K_H^+$. If $z\notin K_F^+$, then $F^n(z)\rightarrow I_F^-$ as $n\rightarrow \infty$, where $I_F^{-}=[w:1:0]$ is the indeterminacy point of $F^{-1}$. But $\overline { K_H^+}=K_H^+ \cup I^+$ where $I^+=[1:0:0]$. Therefore, $K_H^+ \subset K_F^+$. Using the Dinh--Sibony {\it rigidity} result (see Theorem \ref{DS2}, Section 2) for regular maps, we conclude that
\[
J^+=J_H^+=J_F^+.
\]
Since $H$ is a H\'{e}non map, $G_H^+$ is the pluricomplex Green function of $J_H^+$ (see Proposition \ref{pluri henon}, Section 2). Further, we claim that
{\it{$G_F^+$ is the pluricomplex Green function of $J_F^+$.}} For $\epsilon>0$ sufficiently small, let
\[
U_F^-=\left\{(x,y)\in \mathbb{C}^2: \lvert x \rvert < (\lvert c \rvert+ \epsilon )\lvert y \rvert , \lvert y \rvert >R \right\}
\]
which is clearly away from the point $[1:0:0]$.
By Theorem 8.4 in \cite{DS},
\begin{equation*}
\log \lVert z \rVert-M_2 \leq G_F^+(z)\leq \log \lVert z \rVert+M_1
\end{equation*}
for some $M_1, M_2>0$ and for all $z\in U_F^-$.
Fix $x_0\in \mathbb{C}$ and consider the complex line $L_{x_0}=\{(x_0,y): y\in \mathbb{C}\}$. For $R>0$ sufficiently large
\begin{equation*}
\log \lVert z \rVert-K_2 \leq G_H^+(z)\leq \log \lVert z \rVert+K_1
\end{equation*}
in $V_R^+$ (see (\ref{L1})) for some $K_1, K_2 >0$. Now note that
\begin{equation*}
L_{x_0}^R=\{(x_0,y): y\in \mathbb{C}, \lvert y \rvert>R\} \subseteq U_F^- \cap V_R^+
\end{equation*}
and therefore $G_H^+-G_F^+$ is bounded at infinity along the line $L_{x_0}$. Further, since $J^+=J_H^+=J_F^+$ and the function $G_H^+-G_F^+$ is harmonic in $L_{x_0}\setminus J^+$ and vanishes identically on $J^+$, it follows that
\begin{equation*}
G_H^+ \leq G_F^+.
\end{equation*}
Since
\[
G_F^+(z)\leq \log \lVert z \rVert+M
\]
for some $M>0$ and for all $z\in \mathbb{C}^2$, and $G_F^+$ vanishes identically on $J_F^+$, the function $G_F^+$ is a competitor in the definition of the pluricomplex Green function of $J^+$, and hence
\begin{equation*}
G_F^+ \leq G_H^+
\end{equation*}
in $\mathbb{C}^2$.
Therefore, the Green function of $H$ and the Green function of $F$ coincide, i.e.,
$$
G_H^+=G_F^+
$$
in $\mathbb{C}^2$.
\medskip
In the other case, that is, if $[1:0:0]$ is the indeterminacy point of $F^{-1}$, using a similar set of arguments as before, it follows that
\[
G_H^+=G_{F^{-1}}^+.
\]
\begin{center}
{\it Some iterates of $F$ and $H$ agree:}
\end{center}
Note that since $F^{\pm 1}$ are regular maps, they are H\'{e}non-type maps, i.e., $F^{\pm 1}$ are conjugate to compositions of H\'{e}non maps. Further, without loss of generality, we assume that $G_H^+=G_F^+$. Therefore, by Theorem 5.4 in \cite{L}, it follows that there exist $m,n\in \mathbb{N}^*$ such that
\begin{equation}\label{eq iterates}
F^m=H^n.
\end{equation}
\medskip
\begin{center}
{\it $F$ is a H\'{e}non map: }
\end{center}
Since a root of a H\'{e}non map is also a H\'{e}non map (see Theorem 4.1, \cite{BF}), it follows from (\ref{eq iterates}) that $F$ is a H\'{e}non map.
\medskip
\noindent
\begin{center}
{\it Some iterates of $F$ commute with $H$:}
\end{center}
\noindent
It follows from (\ref{eq iterates}) that
$$
\text{either } F^{ m}\circ H= H \circ F^m \text{ or } F^{ -m}\circ H= H\circ F^{-m}
$$
for some $m\in \mathbb{N}^*$.
\begin{center}
{\it Description of linear automorphisms which keep $K_H^+$ invariant:}
\end{center}
\noindent
Let $\sigma$ be an affine automorphism of the form
\[
\sigma(x,y)=(ax+by+f,cx+dy+g)
\]
such that $\sigma(K_H^+)=K_H^+$.
Thus if we take a sequence $\{(x_n,y_n)\}_{n\geq 1}\subseteq K_H^+$ which converges to $[1:0:0]\in \mathbb{P}^2$, then
\[
(a x_n+b y_n+ f, c x_n +d y_n +g) \rightarrow [1:0:0]
\]
as $n\rightarrow \infty$, which in turn gives that $c=0$. Hence
\[
\sigma(x,y)=(ax+by+f,dy+g).
\]
Now since $\sigma \circ H (K_H^+)=K_H^+$ and $\deg (\sigma \circ H) \geq 2$, it follows from the previous description that $\sigma \circ H$ is a H\'{e}non map and thus $b=0$. Therefore
\[
\sigma(x,y)=(ax+f,dy+g).
\]
Let $(x,y)\in K_H^+$, then
\begin{equation} \label{sigma n}
\sigma^n(x,y)=\left (a^n x+f \frac{(a^n-1)}{a-1}, d^n y + g \frac{(d^n-1)}{(d-1)}\right).
\end{equation}
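(To verify (\ref{sigma n}) by induction, note that applying $\sigma$ once more sends the first coordinate to $a\left(a^n x+f \frac{a^n-1}{a-1}\right)+f=a^{n+1}x+f\frac{a^{n+1}-1}{a-1}$, and similarly for the second coordinate; here we implicitly assume $a,d\neq 1$, the cases $a=1$ or $d=1$ being analogous with $nf$ and $ng$ in place of the geometric sums.)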
Now note that if $\lvert a \rvert \leq 1$, then $\lvert d \rvert \leq 1$ since $\sigma^n (K_H^+)=K_H^+\subseteq V_R \cup V_R^-$ for all $n\geq 1$. Choose $(x_n,y_n)\in K_H^+$ such that $\lvert x_n \rvert \rightarrow \infty$ and ${y_n}/{x_n} \rightarrow 0$ as $n\rightarrow \infty$.
As in (\ref{sigma n}), we have
\begin{equation} \label{sigma nn}
\sigma^n(x_n,y_n)=\left (a^n x_n+f \frac{(a^n-1)}{a-1}, d^n y_n + g \frac{(d^n-1)}{(d-1)}\right).
\end{equation}
If $\lvert a \rvert <1$, then it follows from (\ref{sigma nn}) that $\sigma^n (x_n,y_n)$ converges to a finite point of $\mathbb{C}^2$ as $n\rightarrow \infty$, which is a contradiction since $\overline{ K_H^+}=K_H^+ \cup I^+$. Thus $\lvert a \rvert\geq 1$. Now
\[
\sigma^{-1}(x,y)=\left (\frac{x}{a}-\frac{f}{a},\frac{y}{d}-\frac{g}{d} \right).
\]
Since $\sigma(K_H^+)=K_H^+$, applying the same argument as before to $\sigma^{-1}$, we get $\lvert a^{-1}\rvert \geq 1$. Thus $\lvert a\rvert=1$. We have already proved that if $\lvert a\rvert \leq 1$, then $\lvert d \rvert \leq 1$, and thus we have $\lvert d \rvert=1$. So finally we get that
\[
\sigma(x,y)=(ax+f,dy+g)
\]
with $\lvert a \rvert=\lvert d \rvert=1$.
\begin{cor}
Let $H$ and $F$ be two H\'{e}non maps such that $J_H^+=J_F^+$, then there exist $m,n \in \mathbb{N}$ such that $F^m=H^n$.
\end{cor}
\section{Proof of Theorem \ref{Glevel}}
Before starting the proof of Theorem \ref{Glevel}, we state the following proposition, which we shall require later. The proof of the proposition follows exactly as the proof of Proposition \ref{pluri henon}; hence we omit it.
\begin{prop} \label{pluri-complex}
For any $c>0$, the functions $G_{H,c}^\pm$ are the pluricomplex Green functions of the sets $\tilde{\Omega}_{H,c}^\pm$ and of the sets $K_{H,c}^\pm=J_{H,c}^\pm$.
\end{prop}
\subsection*{Proof of Theorem \ref{Glevel}:}
Let $F$ be a polynomial automorphism of $\mathbb{C}^2$ such that $F(K_{H,c}^+)=K_{H,c}^+$, for some $c>0$. Then we prove the following equalities:
\begin{equation}\label{Gc1}
G_{H,c}^+ \circ F^{\pm 1} (x,y)= b^\pm G_{H,c}^+(x,y)
\end{equation}
for $(x,y)\in \mathbb{C}^2$.
Note that if (\ref{Gc1}) holds, then $b^-={(b^+)}^{-1}$ with $b^+>0$. The idea of the proof is due to Buzzard--Forn{\ae}ss (\cite{BF}).
\medskip
Since $F(K_{H,c}^+)=K_{H,c}^+$, it follows that $G_{H,c}^+\circ F=0$ on $K_{H,c}^+$. For any $x \in \mathbb{C}$, consider
\[
g_{x}(y):=G_{H,c}^+\circ F (x,y)
\]
for $y\in \mathbb{C}$. The function $g_x$ is harmonic outside the compact set $K_{H,c}^+ \cap (\{x\}\times \mathbb{C})$ and thus it is harmonic outside a large disk of radius $R>0$. Let $h_{x}$ be the harmonic conjugate of $g_{x}$ in $\{\vert y \vert > R\}$ with period $p_{x}$. Therefore
\[
\psi_{x}(y)=g_{x}(y)-p_{x}\log \lvert y\rvert+ i h_{x}(y)
\]
is holomorphic in $\{\lvert y\rvert >R\}$. Since
\[
\left\lvert\exp(-\psi_{x}(y))\right\rvert \leq {\lvert y\rvert}^{p_{x}},
\]
the function $\exp(-\psi_{x}(y))$ has at most a pole at infinity and thus,
\[
\exp(-\psi_{x}(y))=y^k \exp f(y)
\]
where $f$ is a holomorphic function in $\{\lvert y \rvert >R\}$ having a removable singularity at infinity.
Taking absolute values and then log, we get the following:
\[
g_{x}(y)-p_{x}\log \lvert y \rvert=-k \log \lvert y \rvert-{\rm{Re}}(f(y))
\]
in $\{\lvert y\rvert >R\}$. Therefore,
\[
g_{x}(y)=b_{x}\log \lvert y\rvert +O(1)
\]
in $\{\lvert y\rvert >R\}$ and, since $g_x \ge 0$ in $\mathbb{C}$,
\[
g_{x}(y)=b_{x}\log^+ \lvert y\rvert +O(1)
\]
in $\mathbb{C}$.
\medskip
We prove that $b_x$ is independent of $x$. To prove this, we work in a small neighbourhood of a fixed $x_0$ and consider $R>0$ large enough. Let $p, q$ be two distinct points near $x_0$ and let $I$ be the straight line segment joining them. Then
\[
\Sigma = \{(x, y) : x \in I, \vert y \vert = R \}
\]
is a smooth real 2-surface with two boundary components namely,
\[
\{(p, y) : \vert y \vert = R \} \cup \{ (q, y): \vert y \vert = R \}.
\]
We get
\[
b_p - b_q = \int_{\partial \Sigma} d^c (G^+_{H,c} \circ F) = \int_{\Sigma} dd^c(G^+_{H,c} \circ F) = 0
\]
applying Stokes' theorem. Here the last equality holds due to the pluriharmonicity of $G^+_{H,c} \circ F$ on $V^+_R$. Thus $b_x$ is locally constant and therefore constant everywhere. Let us write $b_x = b^+$ for all $x \in \mathbb{C}$.
\medskip
Now we have
$
g_{x_0}(y)=b^+ \log^+ \lvert y \rvert+ O(1) \text{ and } G_{H,c}^+(x,y)=\log^+ \lvert y\rvert +O(1)
$
in $V_R^+$. Further, the difference $g_{x_0}(y)-b^+ G_{H,c}^+(x_0,y)$ is harmonic at each $y$ for which $(x_0,y)\in \mathbb{C}^2\setminus K_{H,c}^+$, with a removable singularity at infinity, and vanishes for $(x_0,y)\in K_{H,c}^+$. Therefore, $g_{x_0}(y)=b^+G_{H,c}^+(x_0,y)$ for each $y\in \mathbb{C}$.
Applying the same argument, we get that $G_{H,c}^+\circ F-b^+G_{H,c}^+ \equiv 0$ on $\Delta(x_0;r_0)\times \mathbb{C}$ for some small $r_0>0$. Since the difference is pluriharmonic in $\mathbb{C}^2\setminus K_{H,c}^+$ and vanishes on $K_{H,c}^+$, we have $G_{H,c}^+\circ F=b^+G_{H,c}^+$ in $\mathbb{C}^2$. Using similar arguments, we get that $$G_{H,c}^+\circ F^{-1}=b^-G_{H,c}^+$$ in $\mathbb{C}^2$ where $b^-={(b^+)}^{-1}$.
\medskip
\noindent
Since for any $c>0$, $\overline{K_{H,c}^\pm}=K_{H,c}^\pm \cup I^\pm$ in $\mathbb{P}^2$ (see \cite{DS}), if $\deg F=1$, then using similar arguments as in the proof of Theorem \ref{new rigidity}, we get that
\[
F(x,y)=(ax+by+f,dy+g)
\]
for $(x,y)\in \mathbb{C}^2$. In case $\deg F\geq 2$, we prove that
\begin{center}
{\it{ $F$ is a regular automorphism:}}
\end{center}
To prove that $F$ is a regular polynomial automorphism, we shall use the similar set of arguments as in the first part of the proof of Theorem \ref{new rigidity}. As before, the following cases arise:
\medskip
\noindent
{\it{Case (i):}}
Let
\[
F=a_1\circ e_1 \circ a_2 \circ e_2\circ \cdots \circ a_k \circ e_k
\]
for some $k\geq 1$ where the $a_i$'s are non-elementary affine maps and the
$e_i$'s are non-affine elementary maps. As shown in the proof of Theorem \ref{new rigidity}, in this case $F$ is a regular map.
\medskip
\noindent
{\it{Case (ii):}} Let
\[
F=a_1\circ e_1\circ a_2 \circ\cdots\circ e_{k-1} \circ a_k
\]
for some $k\geq 2$.
For simplicity, assume that $F=a_1\circ e_1 \circ a_2$. As in the previous case we can write
\[
F=a_1^1 \tau a_1^2 e_1 a_2^1 \tau a_2^2
\]
where $a_1^1(x,y)=(bx+cy,y)$ and $ \tau a_2^2(x,y)=(s_2 y+r_2, \alpha_2 x+ \beta_2 y+ \delta_2)$.
That $F$ cannot be of this form follows exactly as in the proof of Theorem 1.1 in \cite{BPK}. However, since in our present case it is $K_{H,c}^+$ that is invariant under $F$ rather than the non-escaping set $K_H^+$ (as in Theorem 1.1 in \cite{BPK}), we need to modify the proof accordingly.
\medskip
Note that for any given $c>0$, there exists $R_c>0$ sufficiently large such that
\begin{equation} \label{JHc}
K_{H,c}^+ \cap V_{R_c}^+=\emptyset.
\end{equation}
Suppose that for each $n\in \mathbb{N}$ there exists $(x_n,y_n)\in K_{H,c}^+ \cap V_n^+$. Now by Lemma 6.3 in \cite{DS}, $\overline{ K_{H,c}^+}=K_{H,c}^+ \cup I^+$ in $\mathbb{P}^2$. Therefore $[x_n: y_n :1]\rightarrow [1:0:0]$ as $n\rightarrow \infty$, which contradicts the fact that $(x_n,y_n)\in V_n^+$ for each $n\geq 1$. Thus (\ref{JHc}) follows.
\medskip
\noindent
{\it{Claim}:} There exists a sequence $\{(x_n,y_n)\}_{n \geq 1}\subseteq K_{H,c}^+ \cap V_R^-$ with $\lvert x_n \rvert \geq \lvert y_n\rvert\geq n$ for all $n\geq 1$.
\medskip
If no such sequence exists, then we can choose a sequence $(x_n,y_n)\in K_{H,c}^+\cap V_R^-$ such that $\lvert y_n \rvert$ is bounded by a fixed constant $M>1$ for all $n\geq 1$ and $\lvert x_n \rvert \rightarrow \infty$ as $n\rightarrow \infty$. Without loss of generality, we choose $R>0$ sufficiently large such that $K_{H,c}^+$ and $K_{H,dc}^+$ are both contained in $V_R \cup V_R^-$, where $d=\deg H$. Suppose that the H\'{e}non map is of the form $H:(x,y)\mapsto (y, p(y)-\delta x)$ with $\delta\neq 0$. Then there exists a subsequence $\{(x_{n_k}, y_{n_k})\}\subset K_{H,c}^+ \cap V_R^-$ such that $\{H(x_{n_k}, y_{n_k})\}\subset K_{H,dc}^+ \cap V_R^-$ and thus
\[
\lvert y_{n_k} \rvert \geq \lvert p(y_{n_k})-\delta x_{n_k}\rvert \geq \lvert \delta\rvert\lvert x_{n_k} \rvert-\lvert p(y_{n_k})\rvert.
\]
Therefore, the sequence $\{x_{n_k}\}$ is bounded, which is a contradiction.
\medskip
Since $\overline{K_{H,c}^+}=K_{H,c}^+ \cup I^+$ in $\mathbb{P}^2$,
\[
\frac{\vert y_n \vert}{\vert x_n \vert}\rightarrow 0 \text{ as } n\rightarrow \infty.
\]
Thus
\begin{equation}\label{eps}
\lvert y_n \rvert \leq \epsilon_n\lvert x_n\rvert
\end{equation}
for all $n\geq 1$ with $\epsilon_n \rightarrow 0$.
\medskip
Note that $\tau a_2^2(x_n,y_n)=(s_2 y_n+r_2, \alpha_2 x_n + \beta_2 y_n + \delta_2)$ and
\begin{eqnarray}
\lvert \alpha_2 x_n + \beta_2 y_n +\delta_2\rvert
&\geq& (\lvert \alpha_2 \rvert - \epsilon_n \lvert \beta_2\rvert)\lvert x_n\rvert-\lvert\delta_2\rvert \nonumber\\
&\geq& \frac{1}{2}\lvert \alpha_2 \rvert \lvert x_n \rvert-\lvert \delta_2 \rvert \nonumber\\
&\geq& \frac{1}{2} \lvert \alpha_2\rvert \lvert y_n \rvert-\lvert \delta_2 \rvert \geq \lvert s_2\rvert \lvert y_n\rvert + \lvert r_2\rvert \lvert y_n\rvert-\lvert \delta_2\rvert \nonumber
\end{eqnarray}
for all $n\geq n_0$. Now since $\lvert s_2\rvert$ and $\lvert r_2\rvert$ can be chosen sufficiently small, we choose them such that ${\lvert\alpha_2\rvert}/{2}\geq\lvert s_2\rvert+ \lvert r_2\rvert$. Thus the last inequality follows.
\medskip
Since $\lvert y_n\rvert \rightarrow \infty$ as $n\rightarrow \infty$, it follows that
\[
\lvert \alpha_2 x_n + \beta_2 y_n +\delta_2\rvert \geq \lvert s_2 \rvert \lvert y_n\rvert + \lvert r_2\rvert \geq \lvert s_2 y_n + r_2\rvert
\]
and
\[
\lvert \alpha_2 x_n + \beta_2 y_n +\delta_2\rvert \geq R
\]
for sufficiently large $n$.
\medskip
Thus, for a sequence $\{(x_n,y_n)\}_{n\geq 1}\subseteq K_{H,c}^+\cap V_R^-$ with $\lvert x_n \rvert \geq \lvert y_n \rvert \geq n$, it turns out that $\tau a_2^2 (x_n,y_n)\in V_R^+$ for sufficiently large $n$. Thus
\[
(x_n',y_n')= \tau a_1^2 e_1 a_2^1 \tau a_2^2 (x_n,y_n)\in V_R^+
\]
and
\[
\lvert b x_n'+c y_n'\rvert\leq (\lvert b\rvert+\lvert c\rvert)\lvert y_n'\rvert.
\]
Hence
\begin{equation}\label{contra}
\lvert y_n''\rvert \geq \frac{1}{(\lvert b \rvert+ \lvert c \rvert)} \lvert x_n''\rvert
\end{equation}
where $(x_n'',y_n'')=a_1^1(x_n',y_n')$. Now since $F(K_{H,c}^+)=K_{H,c}^+$,
\[
(x_n'',y_n'')=F(x_n,y_n)\in K_{H,c}^+\cap V_R^-
\]
for sufficiently large $n\geq 1$ and $\Vert (x_n'',y_n'')\Vert \rightarrow \infty$ as $n\rightarrow \infty$. By (\ref{eps}), we get that
\[
\lvert y_n''\rvert \leq \epsilon_n \lvert x_n''\rvert
\]
where $\epsilon_n\rightarrow 0$ as $n\rightarrow \infty$ which contradicts (\ref{contra}). Thus $F$ cannot be of this form.
\medskip
\noindent
{\it{Case (iii):}} Let
\[
F=e_1\circ a_1 \circ e_2 \circ a_2 \circ \cdots \circ e_k \circ a_k
\]
for some $k\geq 1$. Note that $F^{-1}$ has a form as in Case (i). Since $F^{-1}$ also keeps $K_{H,c}^+$ invariant, it follows that $F^{-1}$ is a regular map. Hence, $F$ is a regular map with $[w:1:0]$ as its indeterminacy point for some $w\in \mathbb{C}$.
\medskip
\noindent
{\it{Case (iv):}} Let
\[
F=e_1\circ a_1 \circ e_2 \circ a_2 \circ \cdots\circ a_{k-1}\circ e_k
\]
for some $k\geq 1$. For simplicity, we work with
\[
F=e_1\circ a_1 \circ e_2.
\]
As in the previous cases, we can write
\[
F=e_1 a_1^1\tau a_1^2 e_2
\]
and thus,
\[
\tau F=\tau e_1 a_1^1\tau a_1^2 e_2.
\]
Note that both $\tau e_1 a_1^1$ and $\tau a_1^2 e_2$ are H\'{e}non maps. Thus $\tau F$ is a H\'{e}non map. Therefore, $F[0:1:0]=[1:0:0]$ and consequently, $F(V_R^+)$ will intersect $K_{H,c}^+$, since $[1:0:0]$ is a limit point of $K_{H,c}^+$ in $\mathbb{P}^2$. This implies $V_R^+ \cap K_{H,c}^+\neq \emptyset$, since $F(K_{H,c}^+)=K_{H,c}^+$. This is clearly a contradiction. Therefore, $F$ cannot be of this form.
\medskip
Thus we have proved that $F$ is a regular map. Furthermore, the point $[1:0:0]$ is the indeterminacy point either of $F$ or of $F^{-1}$. Without loss of generality, let $[1:0:0]$ be the indeterminacy point of $F$.
\begin{center}
\it{$K_{H,c}^+$ never remains invariant under an automorphism $F$ with $\deg F\geq 2$ and $c>0$:}
\end{center}
From (\ref{Gc1}), it follows that
\begin{equation}\label{max}
\max \left \{G_H^+\circ F (x,y)-c,0 \right \}= b^+ \max \left \{G_H^+(x,y)-c,0 \right \}
\end{equation}
in $\mathbb{C}^2$. Comparing both sides of (\ref{max}), we get that
\begin{equation*} \label{GFc}
G_H^+ \circ F (x,y)=b^+ G_H^+(x,y)-b^+ c + c
\end{equation*}
for all $(x,y)$
such that $G_H^+(x,y)>c$. In particular for sufficiently large $R>0$, we have that $G_H^+(x,y)>c$ for all $(x,y)\in V_R^+$.
\medskip
Since
\begin{equation*}
G_H^+ \circ F (x,y)=b^+ G_H^+(x,y)-b^+ c + c
\end{equation*}
for all $(x,y)$ such that $G_H^+ (x,y)> c$,
it follows that
$$
F\left(\left \{(x,y)\in \mathbb{C}^2:G_H^+(x,y) > c \right \}\right)\subseteq \left \{(x,y)\in \mathbb{C}^2:G_H^+(x,y) > c \right \}.
$$
Also,
\begin{equation*}
G_H^+ \circ F^{-1} (x,y)=b^- G_H^+(x,y)-b^- c + c
\end{equation*}
for all $(x,y)$
such that $G_H^+(x,y)>c$, it follows that
$$
F^{-1}\left(\left \{(x,y)\in \mathbb{C}^2:G_H^+(x,y) > c \right \}\right)\subseteq \left \{(x,y)\in \mathbb{C}^2:G_H^+(x,y) > c \right \}.
$$
Therefore,
$$
F\left(\left \{(x,y)\in \mathbb{C}^2:G_H^+(x,y) > c \right \}\right)=\left \{(x,y)\in \mathbb{C}^2:G_H^+(x,y) > c \right \}.
$$
This implies
$$
F\left(\left \{(x,y)\in \mathbb{C}^2:G_H^+(x,y) < c \right \}\right)=\left \{(x,y)\in \mathbb{C}^2:G_H^+(x,y) < c \right \}
$$
since $F(K_{H,c}^+)=K_{H,c}^+$.
\medskip
\noindent
{\it{Claim:}}
$
\Omega_{H,c}=\left \{(x,y)\in \mathbb{C}^2: G_H^+(x,y) <c \right \} \subseteq K_F^+.
$
\medskip
Since $F(K_{H,c}^+)=K_{H,c}^+$, by Proposition \ref{relegular attraction} it follows that $J_{H,c}^+=K_{H,c}^+ \subset K_F^+$. By the {\it{rigidity}} results of Dinh--Sibony (see Theorem \ref{DS1} and Theorem \ref{DS2}), it follows that
\[
J_{H,c}^+=J_F^+.
\]
By Proposition \ref{pluri-complex}, the function $G_{H,c}^+$ is the pluricomplex Green function of $J_{H,c}^+$. Again, as in the proof of Theorem \ref{new rigidity}, we can show that $G_F^+$ is the pluricomplex Green function of $J_F^+$ and thus
\[
G_{H,c}^+=G_F^+.
\]
Now since $G_{H,c}^+$ vanishes identically on $\Omega_{H,c}$, it follows that $G_F^+$ also vanishes identically on $\Omega_{H,c}$. Since
\[
K_F^+=\left\{(x,y): G_F^+(x,y)=0 \right\},
\]
we have that
\[
\Omega_{H,c}\subseteq K_F^+.
\]
Now by Theorem \ref{DS1}, it follows that for each positive $r<c$, the set $J_{H,r}^+$ supports a positive closed $(1,1)$ current of mass $1$. On the other hand, $K_F^+$ supports a unique positive closed $(1,1)$ current of mass $1$, a contradiction. This finishes the proof.
\medskip
Thus $F$ must be an affine automorphism. Since $F(K_{H,c}^+)=K_{H,c}^+$ and $\overline{ K_{H,c}^+}=K_{H,c}^+ \cup I^+$ in $\mathbb{P}^2$, applying similar arguments as before we can show that
$$
F(x,y)=(ax+by+f, dy+g).
$$
\section{Proof of Theorem \ref{two levels}}
Let us first state the following proposition (see \cite{BPK}), which we require in order to prove Theorem \ref{two levels}. Suppose $H$ is of the form (\ref{henon form}).
\begin{prop}\label{Bot}
For a given H\'{e}non map $H$, there exist non-vanishing holomorphic functions
$\phi_H^\pm: V_R^\pm \rightarrow \mathbb{C}$ such that
\[
\phi_H^+\circ H(x,y)=c_H{(\phi_H^+(x,y))}^d
\]
in $V_R^+$ where
\[
c_H = \prod_{j=1}^m {c_j}^{d_{j+1}\cdots d_m}
\]
with the convention that $d_{j+1}\cdots d_m=1$ when $j=m$, $d=d_1 \cdots d_m$ and
\[
\phi_H^-\circ H^{-1}(x,y)=c_H'{(\phi_H^-(x,y))}^d
\]
in $V_R^-$ where
\[
c_H'=\prod_{j=1}^m {\left({c_j}\delta_j^{-1}\right)}^{d_{j-1}\cdots d_1}
\]
with the convention that $d_{j-1}\cdots d_1=1$ when $j=1$.
Further,
\[
\phi_H^+(x,y)\sim y \text{ as } \lVert(x,y)\rVert\rightarrow \infty \text{ in } V_R^+
\]
and
\[
\phi_H^-(x,y)\sim x \text{ as } \lVert(x,y)\rVert\rightarrow \infty \text{ in } V_R^-.
\]
\end{prop}
The functions $\phi_H^\pm$ are called the B\"{o}ttcher functions corresponding to the H\'{e}non map $H$.
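As a quick heuristic check of the normalizations (not needed in the sequel), consider the single quadratic map $H(x,y)=(y,\,y^2-\delta x)$, for which $m=1$, $c_1=1$, $d=2$ and hence $c_H=1$: in $V_R^+$ one has $\phi_H^+(x,y)\sim y$ and $\lvert x\rvert<\lvert y\rvert$, so
\[
\phi_H^+\circ H(x,y)\sim y^2-\delta x \sim y^2 \sim {\left(\phi_H^+(x,y)\right)}^2,
\]
consistent with the first relation of Proposition \ref{Bot}.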
\subsection*{Proof of Theorem \ref{two levels}:}
Since $K_{H_1,c_1}^+=K_{H_2,c_2}^+$, by Prop. \ref{pluri-complex}, we have
$G_{H_1,c_1}^+=G_{H_2,c_2}^+$ in $\mathbb{C}^2$, i.e.,
\begin{equation*}
\max \left \{G_{H_1}^+ -c_1,0\right \}=\max\left \{G_{H_2}^+ -c_2,0\right \}
\end{equation*}
in $\mathbb{C}^2$. Therefore,
\[
G_{H_1}^+-c_1=G_{H_2}^+-c_2
\]
in $V_R^+$, for $R>0$ sufficiently large. Thus by Prop. \ref{Bot}, it follows that
\begin{equation}
\log\left \lvert \phi_{H_1}^+\right \rvert+\frac{1}{d_{H_1}-1}\log\left \lvert c_{H_1} \right \rvert=\log \left \lvert \phi_{H_2}^+ \right \rvert+\frac{1}{d_{H_2}-1}\log\left \lvert c_{H_2} \right\rvert+c
\end{equation}
with $c=c_1-c_2$ in $V_R^+$. Also since
\begin{equation} \label{phi y}
\phi_{H_i}^+(x,y)\sim y \text{ as } \lVert (x,y) \rVert \rightarrow \infty \text{ in } V_R^+
\end{equation}
for $i=1,2$, we have
\begin{equation}\label{relC}
\frac{1}{d_{H_1}-1}\log\left \lvert c_{H_1} \right \rvert=\frac{1}{d_{H_2}-1}\log\left \lvert c_{H_2} \right\rvert+c
\end{equation}
which in turn gives
\[
\log\left \lvert \phi_{H_1}^+\right \rvert=\log\left \lvert \phi_{H_2}^+\right \rvert
\]
in $V_R^+$. Again, using (\ref{phi y}), we have
\begin{equation}
\phi_{H_1}^+ = \phi_{H_2}^+=\phi^+
\end{equation}
in $V_R^+$. Using (\ref{relC}), we get
\[
\log {\lvert c_{H_1}\rvert}^{d_{H_2}-1}=\log {\lvert c_{H_2}\rvert}^{d_{H_1}-1}+c(d_{H_2}-1)(d_{H_1}-1)
\]
which implies that
\begin{equation} \label{delta+}
c_{H_1}^{d_{H_2}} c_{H_2}=\delta_+ c_{H_2}^{d_{H_1}} c_{H_1}
\end{equation}
where $\lvert \delta_+ \rvert= e^ {c(d_{H_1}-1) (d_{H_2}-1)}$.
\medskip
By Proposition \ref{Bot},
\begin{equation*}
\phi^+ \circ H_2 \circ H_1 (x,y)=c_{H_2}{\left(\phi^+ \circ H_1 (x,y)\right)}^{d_{H_2}}=c_{H_2} c_{H_1}^{d_{H_2}}{(\phi^+(x,y))}^{d_{H_1} d_{H_2}}
\end{equation*}
and similarly,
\begin{equation*}
\phi^+ \circ H_1 \circ H_2 (x,y)=c_{H_1}{\left(\phi^+ \circ H_2 (x,y)\right)}^{d_{H_1}}=c_{H_1} c_{H_2}^{d_{H_1}}{(\phi^+(x,y))}^{d_{H_1} d_{H_2}}.
\end{equation*}
Thus,
\begin{equation*}\label{equality}
\phi^+ ( H_2 \circ H_1) =\delta_+ \phi^+ ( H_1 \circ H_2)
\end{equation*}
on $V_R^+$.
Now since
\begin{equation*}
\phi^+ \circ H_2 \circ H_1 (x,y)\sim {(H_2 \circ H_1)}_2(x,y)
\end{equation*}
and
\begin{equation*}
\phi^+ \circ H_1 \circ H_2 (x,y)\sim {(H_1 \circ H_2)}_2(x,y)
\end{equation*}
as $\lVert(x,y)\rVert\rightarrow \infty$ in $V_R^+$, it follows that for any fixed $x_0 \in \mathbb{C}$,
\[
{(H_2 \circ H_1)}_2(x_0,y)- \delta_+{(H_1\circ H_2 )}_2(x_0,y)\sim 0 \text{ as } \lvert y \rvert \rightarrow \infty.
\]
Since the expression on the left of the above equation is a polynomial in $y$, it follows that
\[
{(H_2 \circ H_1)}_2(x_0,y)=\delta_+{(H_1 \circ H_2 )}_2(x_0,y)
\]
for all $y\in \mathbb{C}$. Therefore,
\begin{equation}\label{rel1}
{(H_2 \circ H_1)}_2 \equiv \delta_+{(H_1 \circ H_2)}_2
\end{equation}
in $\mathbb{C}^2$.
\medskip
We again use B\"{o}ttcher coordinates to recover the relation between the first components of these maps. As in the previous case, similarly one can show that
\begin{equation*}
\phi_{H_1}^- = \phi_{H_2}^-=\phi^-.
\end{equation*}
Thus using Prop. \ref{Bot}, we get that
\begin{equation}\label{F1}
{(c_{H_1}')}^{d_{H_2}} c_{H_2}' {(\phi^- \circ H_1 \circ H_2 (x,y))}^{d_{H_1} d_{H_2}} = \phi^-(x,y)
\end{equation}
and
\begin{equation}\label{F2}
{(c_{H_2}')}^{d_{H_1}} c_{H_1}' {(\phi^- \circ H_2 \circ H_1 (x,y))}^{d_{H_1} d_{H_2}} = \phi^-(x,y)
\end{equation}
for all $(x,y)\in {(H_1\circ H_2)}^{-1}(V_R^-)\cap {(H_2\circ H_1)}^{-1}(V_R^-)=\mathcal{A}$. Note that $\mathcal A$ is an open neighbourhood of $I^+ = [1:0:0]$ in $\mathbb P^2$.
As in (\ref{delta+}), it can be shown that
\begin{equation} \label{eta}
{\left(c_{H_1}'\right)}^{d_{H_2}} c_{H_2}'=\eta {\left(c_{H_2}'\right)}^{d_{H_1}} c_{H_1}'
\end{equation}
where $\lvert \eta\rvert= e^ {d(d_{H_1}-1) (d_{H_2}-1)}$ with $d=d_1-d_2$. Hence
\[
{(\phi^- \circ H_1 \circ H_2 (x,y))}^{d_{H_1} d_{H_2}} = \eta {(\phi^- \circ H_2 \circ H_1 (x,y))}^{d_{H_1} d_{H_2}}
\]
on $\mathcal{A}$. Consequently, there exists $\delta_-$ (an appropriate $d_{H_1} d_{H_2}$-th root of $\eta$) such that
\begin{equation} \label{phi h-}
\phi^-\circ (H_1\circ H_2)=\delta_- \phi^- \circ (H_2 \circ H_1)
\end{equation}
on $\mathcal A$ where $\vert \delta_- \vert = e^ {d(d_{H_1}-1) (d_{H_2}-1)}$.
\medskip
Note that for a fixed $w\neq 0$, there exists $\epsilon>0$ sufficiently small such that
\[
\mathcal{A}_{\epsilon,w}=\{[{1}/{y}:w:1] \text{ where }0\neq \lvert y\rvert <\epsilon\}
\]
intersects $\mathcal A$ and contains $I^+=[1:0:0]$ in its boundary. Choose $[x_n:w:1]\in \mathcal {A}_{\epsilon,w}$ such that $\lvert x_n \rvert \rightarrow \infty$. Now since $(H_2 \circ H_1)(x_n,w), (H_1\circ H_2)(x_n,w)\in V_R^-$, it follows that
\[
{(H_2 \circ H_1)}_1(x_n,w),{(H_1\circ H_2)}_1(x_n,w)\rightarrow \infty
\]
as $n\rightarrow \infty$.
\medskip
\noindent
Using (\ref{phi h-}),
\begin{equation*}
{(H_1\circ H_2)}_1(x_n,w)-\delta_-{(H_2\circ H_1)}_1(x_n,w)\rightarrow 0
\end{equation*}
as $n\rightarrow \infty$. The expression on the left is a polynomial in $x$ for each fixed $w$ and thus
\begin{equation*}
{(H_1\circ H_2)}_1(x,w)=\delta_-{(H_2 \circ H_1)}_1(x,w)
\end{equation*}
for all $x\in \mathbb{C}$. Using a similar argument as in the previous case, we get
\begin{equation}\label{rel2}
{(H_2\circ H_1)}_1 \equiv \delta_-{(H_1\circ H_2)}_1
\end{equation}
in $\mathbb{C}^2$.
\medskip
Hence using (\ref{rel1}) and (\ref{rel2}), we get
\[
H_2 \circ H_1= C\circ H_1\circ H_2
\]
where $C(x, y) = (\delta_- x,\delta_+ y)$ with $\lvert \delta_+\rvert=e^ {c(d_{H_1}-1) (d_{H_2}-1)}$ and $\lvert \delta_-\rvert=e^ {d(d_{H_1}-1) (d_{H_2}-1)}$ with $c=c_1-c_2$ and $d=d_1-d_2$.
\section{Introduction}
\label{intro}
Internet of Things (IoT) sensors have seen explosive growth over the last twenty years. The consumer IoT market is estimated to reach 142 billion dollars by 2026 at a CAGR of $17\%$. Estimates forecast that by 2025, there will be 152,200 IoT devices connecting to the internet per minute. However, the expanding IoT landscape has likewise created an increased attack surface. Malware attacks targeting IoT have increased by $30\%$, with $76\%$ of IoT risk professionals believing their organization's IoT security posture leaves them vulnerable to cyber attacks~\cite{puf-sec}. This is particularly important for the subset of IoT devices, such as medical sensors, that are used in telehealth applications~\cite{hassan2017internet, tehranipoor2017exploring, wortman2017proposing, tehranipoor2017investigation}.
Telecommunication in healthcare, or telehealth, provides a means by which patients can interact with medical professionals and health-related services virtually. In light of the COVID-19 worldwide pandemic, the need for secure and readily available remote patient monitoring has never been more important. Rural and low-income communities, in particular, have been severely impacted by the lack of accessibility to in-person healthcare. This has created the need for access to remote patient monitoring and virtual health visits in order to provide greater accessibility to premier care. However, the convenience of connecting medical providers with patients remotely also introduces significant security and privacy risks. One such risk of using unsecured medical devices is the potential for a major privacy breach, as they store sensitive information such as vital signals, diagnosed conditions, therapies, and a variety of personal data~\cite{TelehealthII}. For example, on the dark web, an individual's private health information (PHI) sells for 20 to 100 times the value of a social security number or credit card. This creates a strong financial incentive for malicious actors to illicitly steal healthcare data from small embedded telehealth sensors, which are the most exposed computing elements due to the tight processing, energy, and latency requirements of IoT devices~\cite{anagnostopoulos2018securing}.
Physical Unclonable Functions (PUFs) can provide a lightweight and tamper-proof security primitive for IoT devices, particularly telehealth sensors~\cite{puf-sec, tehranipoor2018towards, tehranipoor2017design11}. PUFs create cryptographic signatures that can be used for authentication~\cite{yan2015novel}, software attestation, and cryptographic key generation~\cite{DRAM-PUF, aguirre2020systematic}. These signatures are derived from the sub-micron process variations present in integrated circuits (ICs)~\cite{DLA-PUF}. The signatures are never stored on the device itself and in many cases are extremely difficult to tamper with~\cite{logic-locking,DRAM}.
In this paper, we propose a novel PUF extraction scheme called the High-Low method (HaLo), which extracts process variations found in commercially available NAND flash memory chips. The HaLo method aims at minimizing authentication latency and maximizing the lifetime of the flash chip by limiting program/erase cycles. The HaLo method is also proven to work using commercially available NAND flash chips while being entirely controlled by a microcontroller that costs under \$15, making it, to our knowledge, one of the most cost-effective published flash-based solutions. Our main contributions are summarized as follows:
\begin{itemize}
\medskip
\item We have built a novel PUF extraction technique called the HaLo method, which supports edge deployment on low-cost microcontrollers. This will lower the cost of entry and help encourage secure data transmission for remote devices.
\item The proposed HaLo method offers a PUF solution for accurate authentication and has lower latency and lower power consumption than other PUF generation techniques.
\item The HaLo method is compatible with off-the-shelf ONFI 2.2 compliant flash chips, which could lead to backward compatibility implementation on existing health sensors.
\end{itemize}
\medskip
The organization of the remainder of this paper is as follows. First, \textbf{Section II} will discuss the preliminary background information required in order to understand the HaLo method. Next, \textbf{Section III} will give an in-depth look at related published authentication schemes using process variation, as well as a look at the advantages of the novel HaLo extraction method. \textbf{Section IV} will discuss the HaLo extraction method in detail including design constraints and considerations, and \textbf{Section V} will provide experimental results and validation of the novel HaLo method. Finally, \textbf{Section VI} will provide details on the telehealth application proof of concept built with the HaLo extraction method, and \textbf{Section VII} will give a brief conclusion and summary of the work, along with details on proposed future work for this project.
\section{Preliminaries}
In this section, we will briefly discuss the general background information involving our proposed authentication scheme. Specifically, this section covers an introduction to current wireless authentication protocol solutions, a brief understanding of process variations found in flash memory chips, and a general understanding of PUFs~\cite{gordon2021flash}.
\subsection{Wireless Authentication Methods}
There are a variety of wireless authentication methods used by IoT and telehealth devices. One of the most common ways to authenticate these resource-constrained devices is by utilizing pre-shared keys~\cite{Wifi-PSK-Hack}. Pre-shared keys are authentication keys or tokens that are shared with a device prior to its deployment; these keys are then exchanged with a gateway in order to authenticate an IoT device. There are several important vulnerabilities within this model that our solution wishes to address. First, the keys can be extracted from firmware by a sophisticated hacker who has access to the physical device. This has happened in the industry with Link Plugs and other smaller IoT devices~\cite{link}. Secondly, these keys can be cloned through replay attacks and deauthentication attacks~\cite{Wifi-PSK-Hack,Adversarial}. This was shown to be effective on WEP Wi-Fi routers, where Wi-Fi keys could be extracted due to the lack of 'freshness' in the messages sent between endpoints and routers. By using PUFs and TRNGs, many of these security vulnerabilities can be drastically mitigated through random nonces and authentication signatures~\cite{wortman2020exploring}. Firstly, PUFs allow secrets to be embedded in process variations rather than stored in any nonvolatile memory, which prevents hackers from simply dumping onboard firmware and finding pre-shared keys. Secondly, the onboard TRNG mitigates replay attacks by preventing attackers from arbitrarily replaying authentication messages.
\subsection{MLC NAND Flash Memory Architecture}
\begin{figure}[t]
\centering
\includegraphics[width=0.56\linewidth]{FlashCellMLC.JPG}
\vspace{-7pt}
\caption{A) Overview of NAND Flash Cell B) MLC Flash Chip Digitization}
\label{fig:flashcell}
\vspace{-10pt}
\end{figure}
Flash NAND memory is a type of non-volatile memory that stores user data in the physical form of charge on a \textit{floating gate}. Programming these cells requires electrons to move from the polysilicon channel on/off the floating gate via electron tunneling, as depicted in Figure~\ref{fig:flashcell}A. It is important to note that this electron tunneling can damage the tunnel oxide, which means that flash cells become less reliable after many program/erase cycles (PECs). Most flash memory is rated anywhere from 1,000 to 10,000 PECs. A flash chip's lifetime can be increased drastically by using a memory management controller that distributes PECs evenly across all cells, a technique called wear leveling.
In the case of \textit{Multi-Level Cell (MLC)} flash chips, 2 bits of data are stored on each cell, where the current floating gate voltage is compared to multiple threshold voltages in order to determine the cell's digital value, as shown in Figure~\ref{fig:flashcell}B. Owing to its high memory density, low cost, and ability to be programmed and erased electronically without moving parts, flash NAND memory has exploded in popularity for remote devices and IoT systems.
\subsection{Process Variations and PUFs for Flash Memory}
Like all other silicon-based ICs, flash memory chips are subject to many different forms of process variations. In general, most process variations are uncontrollable imperfections caused by limitations in modern lithography processes. In flash memory, they can be exposed through disturbs driven by parasitic capacitance: by performing multiple read or program operations on sections of the flash chip, disturbs can be induced, causing random fluctuations and bit flips. These fluctuations depend on each cell's relative gate thickness and width, both of which are uncontrollable and extremely hard to model. This allows unique signatures to be generated for each page of the flash memory. These signatures are known as PUFs and can be used for secure key generation and authentication mechanisms~\cite{FlashVariation}. Each PUF has a challenge and a response: the challenge is the input to the function, which then outputs an unclonable response. Key mathematical metrics are used to describe PUF efficacy and will be touched on in more detail in \textbf{Section V}.
\subsection{Physical Unclonable Functions (PUFs)}
A PUF is a function that produces a unique signature based on challenge-response pairs. These unique signatures, often referred to as \textit{silicon fingerprints}, are by-products of intrinsic, uncontrollable process variations found on a silicon-based IC~\cite{ProcessVariation}. As mentioned in the previous section, these process variations are present on every chip and can differ greatly from one chip to another. While understanding specific instances of process variations can help guide the design of the PUF extraction, the absolute variations do not typically matter.
\section{Related Works}
Flash memory has gained popularity in recent years due to its cheap cost and high density for storage applications. This is particularly the case for Flash NAND memory~\cite{DRAM-Overview}. Rather than including extra CMOS as a PUF, utilizing the onboard Flash memory can conserve space and energy on extremely resource-constrained devices such as those seen in telehealth applications.
When looking at the development of different Flash PUFs, there are several important trends to recognize. The first Flash PUFs were created from 2012 to 2017~\cite{Prabhu,Wang,Jia,Saito}. These seminal works were predominantly focused on showing that Flash memory is a viable candidate for memory-based PUFs. They were important for demonstrating the promise of flash memory, but they struggled to account for several factors such as aging, temperature, and design constraints. For example, Prabhu et al.~\cite{Prabhu} required hundreds of thousands of programs in order for PUF signatures to develop, which can take several hours. Similarly, Kim et al.~\cite{Kim} required very fine-grained flash programming, which in turn requires knowledge of sensing voltages that are typically proprietary on commercial flash chips. Wang et al.~\cite{Wang} created both a novel PUF and a TRNG, but incurred very high processing overheads that lead to high latency.
From 2018 to 2020, a second wave of Flash PUF development took place, marked by the work of~\cite{Wu,Clark,Poudel,Mahmoodi,Sakib,Larimian}. Many of these proposed designs significantly enhanced Flash PUF architectures by taking many different factors into account, incorporating advanced features such as aging resistance, temperature resistance, and unique architectures. For example, work from Clark et al.~\cite{Clark} designed a Flash PUF that is voltage resistant. Secondly, Poudel et al.~\cite{Poudel} designed a Flash PUF that works on the microcontroller's onboard NOR flash memory. Similarly, Larimian et al.~\cite{Larimian} verified the machine-learning resilience of their Flash-based PUFs by performing extensive deep learning tests. Finally, Mahmoodi et al.~\cite{Mahmoodi} created one of the most stable and resilient Flash PUFs by modifying the cells to extract leakage current.
However, several open challenges remain. In order to make PUFs commercially viable, they must be generated using cheap commercial microcontrollers to keep their price point down. Furthermore, it is helpful if these PUFs can be deployed on legacy systems using commercial off-the-shelf flash memory, and such systems should avoid the excessively long latencies seen in~\cite{Prabhu}. However, even some of the most cutting-edge works do not use commercial off-the-shelf components, which can reduce the adoption of Flash PUF technology. Secondly, many of these Flash PUFs require slowly building charge differences on the floating gate through hundreds or thousands of program/erase cycles. This is effective at generating signatures; however, it drastically increases the latency required to generate PUF responses. Finally, many of the systems that avoid the slow build-up of charge have to use expensive FPGAs with gigahertz clocking speeds to generate fine-grained interrupts, as shown in~\cite{Clark}. This is not viable for edge deployments, particularly those in telehealth-based applications.
Our work seeks to bridge this gap by developing a Flash-based PUF that uses only standard ONFI commands; works on a \$15 microcontroller; uses off-the-shelf commodity flash chips~\cite{yan2020bit}; generates signatures with low latency using a novel interrupt technique; and is able to compete with some of the most resilient PUF structures identified in previous work.
\section{HaLo Flash PUF Extraction Technique}
HaLo extracts process variations found in flash NAND memory cells for authentication applications. This section will discuss the HaLo experimental setup, design considerations for PUF extraction, and the HaLo method itself.
\subsection{Experimental Setup}
As mentioned in the summary of contributions, it is vital to keep the experimental setup as minimal as possible in terms of both cost and complexity. While previous works have mostly required high clock frequencies and custom chips, our design uses a 100 MHz clock and a simple TSOP adapter to connect to the flash chip. Our bill of materials consists of:
\begin{itemize}
\item STM32 based microcontroller with 100 MHz maximum clocking frequency. This will serve as the memory controller and is used to read/write/erase the flash memory.
\item 32 Gb MLC Flash NAND memory chip from Micron that does not have an integrated memory controller. The memory chip can read/store application data and is also used in the HaLo extraction method.
\item TSOP Adapter in order to interact with packaged flash NAND chip
\end{itemize}
The entire experimental setup was purchased for just under \$20, and all items are readily available for inclusion in a commercial application. It is worth noting that we used an unmanaged flash chip in order to have full control of the device. This extra control allowed us to write directly to specific memory locations without worrying about wear-leveling control or error-correcting code (ECC), which are implemented in most managed flash components. This gave us the greatest control over interrupting the writing, erasing, and reading processes, because we were directly controlling the chip's behavior without interference from a memory controller. The HaLo method only requires the most basic memory access and modification functions, namely read page, write page, and erase block.
The experimental setup is shown in Figure~\ref{fig:setup}. The memory chip interface was \textit{Open NAND Flash Interface (ONFI)} version 2.2, so the interrupt sequence and reading of data are ubiquitous across all ONFI 2.2 memory chips. ONFI 2.2 requires 7 control signals and an 8-bit bus for data, as shown with jumper cables connecting the flash NAND chip to the memory controller. Additionally, a USB connection was made between the memory controller and a computer. PUF trial data was transferred via the USB cable so that all statistical testing could be performed on the computer. The computer is simply an analysis tool, and no part of the actual PUF extraction method is performed on it.
The flash chip tested in this work stores pages consisting of 4096 bytes of data and 224 extra bytes for ECC. Blocks are organized in chunks of 256 pages, with each 32 Gb chip having access to 4096 unique blocks ~\cite{DRAM-TRNG,COTS-DRAM}.
\begin{figure}[t]
\centering
\includegraphics[width=0.68\linewidth]{LabledSetup.png}
\vspace{-7pt}
\caption{Experimental Setup Diagram}
\label{fig:setup}
\vspace{-10pt}
\end{figure}
\subsection{Proposed PUF Extraction: Design Considerations}
With the experimental setup described in the previous section, work began on the PUF extraction method itself. When designing the PUF extraction method, there were two major design considerations. The first was minimizing the number of program/erase cycles (PECs) required in order to extract a reliable signature. As mentioned in the \textit{Preliminaries} section, flash memory devices have a limited lifetime measured in PECs. As charge is tunneled onto and off of the floating gate, the tunnel oxide that separates the silicon channel and the floating gate begins to deteriorate. As the deterioration continues, the data held within each cell becomes unreliable due to the increase of charge leakage on the floating gate. Wear on the cells is an issue for data retention and general application use, but it also poses an issue for reliable signature extraction, because the programming and erase times of each cell are altered as the cell reaches its end of life. In order to combat this aging effect, HaLo was designed to require as few PECs as possible.
The second major consideration was lowering the signature extraction latency. Faster signature extraction allows for faster authentication and lower power consumption~\cite{RO}. Telehealth sensors in the scope of this work can be treated as edge devices that are constrained by both processor speed and limited battery life. In order to prolong the sensor's battery lifetime as well as create a reasonably fast authenticated connection, the method needs to be lightweight and fast.
While additional results and metrics will be discussed in detail in the \textit{Experimental Results and Validation} section below, these two design considerations helped guide the construction of the HaLo method. In the next section, the HaLo technique will be explained in detail, and additional design constraints and considerations will be discussed.
\subsection{PUF Extraction Observations and Techniques}
As mentioned in the \textit{Related Works} section, the PUF extraction must have low latency and can only use standard edge-deployable microcontrollers within the 100 MHz frequency range. This introduces several important design challenges in making Flash PUFs achievable on low-cost microcontrollers. First, the low latency requirement prevents the design from utilizing hundreds of repeated program and read cycles that slowly increase the charge on the floating gate. Therefore, our scheme must use fine-grained interrupts that abort the operation of the cells and force the flash into unsteady states. However, a single interrupt scheme such as the one seen in Clark et al.~\cite{Clark} does not have a high enough clock resolution to interrupt the flash programming at a point that generates a 50/50 split between ones and zeros. In fact, signatures generated from a single interrupted program consist of either about $80\%$ 1's or about $80\%$ 0's. This is highlighted in Figure~\ref{clocking_granularity}, where the programming operation is interrupted at different sysTick clock times on the microcontroller. As shown in the figure, the clock does not have the granularity needed to interrupt the programming operation so as to generate an even distribution of 1's and 0's. This creates two distinct regions, denoted the low programming interrupt and the high programming interrupt, where the low interrupt generates signatures with about $80\%$ ones and the high interrupt generates signatures with about $80\%$ zeros. The steep drop-off occurs within a single sysTick clock instruction, which makes interrupting the program a distinct challenge.
\begin{figure}
\center
\includegraphics[width=0.5\linewidth]{clocking_granularity.PNG}
\caption{Ratio of 1's and 0's for Different Program Interrupt Times}
\label{clocking_granularity}
\end{figure}
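To make the interrupt mechanism concrete, a minimal sketch of an interrupted page program is given below (Python-style pseudocode for readability). The bus helper functions (\texttt{send\_command}, \texttt{send\_address}, \texttt{send\_data}, \texttt{delay\_cycles}) are hypothetical placeholders for the controller's ONFI bus routines and not a specific vendor API; only the opcodes (\texttt{0x80}/\texttt{0x10} for program, \texttt{0xFF} for reset) are fixed by the ONFI standard.
\begin{verbatim}
PROGRAM_SETUP, PROGRAM_CONFIRM, RESET = 0x80, 0x10, 0xFF

def interrupted_program(bus, page_addr, data, interrupt_delay):
    """Start a page program, then abort it after a fixed number of
    controller clock cycles, leaving the cells partially programmed."""
    bus.send_command(PROGRAM_SETUP)    # begin program operation
    bus.send_address(page_addr)        # row/column address cycles
    bus.send_data(data)                # e.g. an all-zero pattern
    bus.send_command(PROGRAM_CONFIRM)  # start internal programming
    bus.delay_cycles(interrupt_delay)  # sysTick-based busy wait
    bus.send_command(RESET)            # abort mid-program: cells end in a
                                       # variation-dependent, unsteady state
\end{verbatim}
The low and high interrupts of the text correspond to two different values of \texttt{interrupt\_delay}, one clock tick apart.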
Secondly, the lack of granularity has another limiting effect: the signatures generated from a single interrupt are extremely noisy, with an error rate that can approach $10\%$ within 80 total P/E cycles. Because of the maximum 100 MHz clocking speed, an interrupt at a particular clocking delay can vary slightly from run to run, causing unintended errors for cells that are sensitive to the exact interrupt timing. This necessitates several programs during enrollment to ensure that the selected bytes are stable across small variations in the actual interrupt delay.
Although the error increases rapidly and the clocking speed cannot generate signatures with $50\%$ ones and zeros, several further insights led to solutions for these design challenges. Specifically, a well-defined enrollment scheme is designed to select only the most stable cells, which can be reliably decoded as one or zero and then used to build highly stable signatures.
\subsection{Proposed PUF Extraction: Enrollment Scheme}
As shown by Poudel et al.~\cite{Poudel}, unstable cells can be identified (e.g. as TRNG bits) by applying several reads. These reads apply a smaller voltage to the floating gate of the flash cells, which only slightly disturbs them. This can quickly identify unstable cells, which are flagged during enrollment. Approximately $95\%$ of these unstable bits flip within five reads; therefore, applying only five reads is sufficient for identifying stable cells through successive read operations. Figure~\ref{stability_read} highlights this observation. The figure graphically shows all 32,000 bits on a single page, with each yellow line indicating a flipped bit. Many of these bits flip within the first five reads, making repeated reads an effective filter for identifying stable bits. Combining multiple reads (which expose bits that flip under read disturb) with multiple programs (which expose bits that are unstable due to the lack of clocking granularity) helps identify the unstable bits on each page.
\begin{figure}
\center
\includegraphics[width=0.6\linewidth]{bit_stability_read.PNG}
\caption{Bit Flips on a Single Page over Repeated Reads}
\label{stability_read}
\end{figure}
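As an illustration, this read-based filtering step can be expressed in a few lines; the sketch below (with illustrative names, not our actual analysis script) marks a byte as stable only if its value is identical across all repeated reads.
\begin{verbatim}
import numpy as np

def stable_byte_mask(reads):
    """reads: (n_reads, n_bytes) uint8 array of repeated page reads.
    Returns a boolean mask of bytes whose value never changed."""
    return np.all(reads == reads[0], axis=0)

# e.g. five reads of a 4096-byte page stacked into a (5, 4096) array
\end{verbatim}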
Secondly, flash cell failure is highly spatially dependent. This is evidenced by plotting a histogram of the distance between successive cell flips (Figure~\ref{poisson-dist}): failures tend to cluster in groups, and the distance between errors can be modeled by a negative exponential distribution. Therefore, instead of flagging individual stable bits, entire bytes are flagged, and a byte is passed through the enrollment process only if it is completely stable.
Combining these observations, a first enrollment strategy was crafted. An interrupted program is applied on the low side and another on the high side, and then five reads are performed. If bytes remain stable across both the low and the high interrupt, they are flagged as stable. After this technique is applied, about 1,500 bytes are extracted per page out of a total of 8,000 bytes. Approximately 800 of these bytes are over $98\%$ accurate during testing; however, approximately 700 bytes fall below this threshold, because many unstable bytes are incorrectly flagged as stable during the enrollment process. A slight modification of this first algorithm, however, is able to generate highly stable PUF responses; this leads to our new enrollment strategy, the HaLo method.
\begin{figure}
\center
\includegraphics[width=0.5\linewidth]{histogram.PNG}
\caption{Histogram of Distance Between Sequential Failures}
\label{poisson-dist}
\end{figure}
The HaLo extraction method uses a novel technique to generate highly reliable signatures, beginning with a byte selection process. As in the previous strategy, rather than aborting the program early and using signatures with $80\%$ ones, two different types of interrupted programs are performed per page: the high interrupt and the low interrupt. The low program generates a signature with approximately $80\%$ ones, and the high program generates a signature with approximately $20\%$ ones. Each interrupted program is repeated five times, with five reads after each program. Since errors tend to be close together, only bytes that are entirely stable are chosen. This creates two separate sets, each containing 25 signatures (five reads for each of five programs). Next, the stable high byte locations and the stable low byte locations for a single page are identified, and bytes that are either highly resistant to programming or highly susceptible to over-programming are chosen. This information is captured by selecting bytes that fully resist programming in the under-programmed section (the low side) and bytes that are easily programmed in the over-programmed section (the high side). If a byte is simultaneously fully programmed in the over-programmed section and fully under-programmed in the under-programmed section, it is removed during the enrollment process, because greater than $97\%$ of errors come from the aforementioned 700 bytes that are stable in both sections. In Figure~\ref{enroll}, the only byte selected from the low side is Byte 2 and the byte selected from the high side is Byte 8000. This creates a 'map' for each page of the stable low bytes and stable high bytes.
After enrollment, the telehealth device receives a list of byte locations, along with the page number, at which to apply a high program and a low program. It performs two separate programs, one high and one low; then the majority bit is selected from each byte and decoded. Collisions are minimized since the 'mapped' byte values are either highly resistant to partial programming or highly susceptible to it; if a collision does occur, the byte value with the strongest majority weight is selected. Furthermore, any undefined states (such as an equal Hamming weight of four) are decoded as low, since more errors come from the high side than the low side. This process is highlighted in Figure~\ref{decode} and reduces the percent error by several orders of magnitude, to around $10^{-4}$. A major downside of this extraction process is that it consumes a tremendous number of bits to extract only the most stable ones and performs two programs instead of one: each page, consisting of roughly 32,000 bits, yields an output response of about 500 bits.
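A minimal sketch of this decoding rule is shown below (illustrative code; collision resolution between mapped bytes is omitted). Each requested byte contributes one response bit, taken as the value held by the majority of its eight cells, with a 4--4 tie decoded as low, exactly as described above.
\begin{verbatim}
def decode_response(byte_values):
    """byte_values: 8-bit integers read back from the requested
    byte locations after the two interrupted programs."""
    bits = []
    for b in byte_values:
        ones = bin(b).count("1")           # popcount of the byte
        bits.append(1 if ones > 4 else 0)  # majority vote; ties -> low
    return bits
\end{verbatim}
With this rule, a byte tolerates up to three flipped cells before it is decoded incorrectly, consistent with the error resilience quoted in the next subsection.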
\begin{figure}
\center
\includegraphics[width=0.65\linewidth]{enroll.png}
\caption{This figure highlights the enrollment process, where low interrupt and high interrupt signatures are generated to provide a 'map' of stable low byte and high byte values. These values, together with the page number, comprise a challenge.}
\label{enroll}
\end{figure}
\begin{figure}
\center
\includegraphics[width=0.65\linewidth]{decode.png}
\caption{This figure highlights the decoding process for a device generating a response to a PUF challenge. The device applies the two interrupted programs to the page requested in the challenge and examines the stable byte locations, which are also sent in the challenge. It then chooses the majority-represented bit in each byte. In the event of a collision, the byte value with the strongest weight is selected, which is why the collision scenario in the figure decodes to 0.}
\label{decode}
\end{figure}
\subsection{Proposed PUF Extraction: Challenge and Response}
After the HaLo enrollment process is complete, a 'map' of the stable bytes is stored on the gateway. Each byte in the map is either extremely susceptible to over-programming or highly resistant to it. With this information, the gateway requests particular byte locations from the sensor for authentication. Approximately 256 low byte locations and 256 high byte locations are sent, along with the specific page number to be used. This comprises approximately 600 bytes of space, which can be sent in the payload of a single Ethernet frame to the sensor. The microcontroller then applies two programs and ten total reads and identifies which bytes are stable and which are not. If a byte location produces a majority of ones, it is decoded as one and is assumed to be one of the high side bytes. Conversely, if a byte produces a majority of zeros, then it is assumed to be from the low side and is decoded as zero. This extraction technique can resist a maximum of three errors before a byte is incorrectly decoded. The gateway therefore receives a bit string of ones and zeros from the sensor.
This design allows for two important features for the gateway. Firstly, the gateway can control how long an authentication response it needs, which allows our application to adapt to various levels of required security. For example, certain cryptographic applications may only require 100-bit signatures, so the gateway only has to send 100 byte locations to the sensor. On the other hand, sensitive security tasks that demand 512-bit signatures can also be accommodated. Secondly, many other approaches use helper data, such as Hamming codes, Fire codes or, more recently, low-density parity-check codes, to recover any errors sent back from the device. These codes leak polarity information about the response values, which is what allows for error correction. Our scheme leaves out any polarity data and simply sends byte locations to read. This makes our helper data significantly more robust to side-channel or modeling attacks, since it never reveals any polarity information about the bytes it is requesting. It is important to note, however, that this advantage is realized because the HaLo enrollment algorithm filters bytes from pages very aggressively: on average, each page returns approximately 700 usable bytes, so about $83\%$ of bytes are not usable.
\begin{figure}
\center
\includegraphics[width=0.6\linewidth]{Auth_Protocol.png}
\caption{Authentication Protocol}
\label{auth-protocol}
\end{figure}
The final step in the HaLo extraction method is authenticating a sensor by issuing a challenge and verifying the response. While most memory-based PUF solutions use a challenge-response scheme that links a challenge value with a specific page in memory, HaLo extraction includes additional information in the challenge packet. As mentioned in the \textit{PUF Extraction Observations and Techniques} section, the granularity of the interrupted program is not high enough to generate reliably stable signatures, because some cells appear stable only in small sample sizes. In order to combat this, the challenge issued to the sensor includes the location in memory as well as a list of 512 stable byte locations across the high side and low side of the page programming. It is important to note that these byte indexes do not reveal the polarity of the bytes themselves -- they simply represent a map of bytes that were identified as extremely stable in the enrollment process. The basic authentication procedure is shown in Figure \ref{auth-protocol}.
\section{Experimental Results and Validation}
In this section, we discuss the relevant PUF metrics of the novel HaLo extraction process. All of the results in this section were gathered at room temperature using the experimental setup discussed in the \textit{HaLo Flash PUF Extraction Technique} section. The main characteristics of the HaLo PUF extraction method are its reliability over multiple challenges, the uniqueness of signatures on every page, and the minimal time required to generate a signature.
\subsection{PUF Metrics}
There are three major metrics for any PUF: uniqueness, randomness, and reliability. Uniqueness defines how different each PUF response is from the others and is best captured through the Inter-Hamming Distance (Inter-HD): the percentage Inter-HD describes what percentage of bits differ between two different responses, with an ideal value of approximately $50\%$. Our system had an average of $51\%$ (minimum $47.2\%$), as shown in Figure~\ref{inter-hd}. The second metric, randomness, describes how random each signature is and can be measured as Shannon entropy per bit. With 256 high bits and 256 low bits, the average Shannon entropy per bit is approximately 0.999, against an ideal value of 1. The final metric, reliability, is extremely strong for this system. The error rate does change as the flash cells age; however, the average percent error is approximately $7.1 \times 10^{-6}$, with a maximum of $1.9 \times 10^{-2}$ and a minimum of $5.9 \times 10^{-7}$. This reliability is strong enough to possibly support cryptographic key generation and is maintained through the end of the flash chip's life, as shown in Figures~\ref{aging} and~\ref{reliability}.
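For reference, these three metrics can be computed from a set of binary responses as in the following sketch (illustrative code, not the exact analysis scripts used for the figures).
\begin{verbatim}
import numpy as np
from itertools import combinations

def inter_hd(responses):
    """Mean fractional Hamming distance over all pairs of bit
    arrays from different pages/devices (ideal value: 0.5)."""
    return np.mean([np.mean(a != b)
                    for a, b in combinations(responses, 2)])

def entropy_per_bit(responses):
    """Average per-bit Shannon entropy (ideal value: 1)."""
    p = np.clip(np.mean(np.asarray(responses), axis=0),
                1e-12, 1 - 1e-12)          # P(bit = 1), avoiding log(0)
    return np.mean(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def error_rate(reference, trials):
    """Fraction of bits differing from the enrolled response."""
    return np.mean([np.mean(t != reference) for t in trials])
\end{verbatim}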
\subsection{Aging Adaptation}
In order to simulate aging, the flash memory chips were programmed up to their maximum rating, which ranges between 3,000 and 4,000 P/E cycles for MLC chips. Aging causes the oxide to deteriorate from the program voltage stress. The low side bytes that are susceptible to over-programming are barely affected, since these bytes simply program and leak charge faster~\cite{BOCA-NET}. For the high side bytes that are resistant to programming, however, aging more readily changes the number of bit flips in each byte. In general, two approaches were considered. The first was to adapt the polling interval, decreasing it as the flash cells program faster; however, this approach can be difficult to model and control due to the lack of granularity in the interrupt mechanism. Consequently, the second approach was taken: the number of bits required for a decode is changed once the cells reach a particular percentage of life used, which drops the error rate significantly. Instead of always taking a fixed majority (in our case, 5 bits) for a decode, the decode threshold is varied according to the life of the flash cells. At $50\%$ of the lifetime, the number of bits required for a proper decode becomes 6; this changes to a threshold of 7 at $90\%$, after which it remains fixed and keeps the error rate significantly lower. This trend is reflected in Figure~\ref{aging}, and all of the aggregate statistics are reported in Table~\ref{tabel-stats}.
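The life-dependent decode threshold amounts to a simple lookup; the breakpoints in the sketch below follow the values quoted above (illustrative code).
\begin{verbatim}
def decode_threshold(life_used):
    """Bits (out of 8) that must agree to decode a byte, as a
    function of the fraction of rated P/E cycles consumed."""
    if life_used < 0.5:
        return 5       # plain majority early in life
    elif life_used < 0.9:
        return 6
    return 7           # strictest threshold near end of life
\end{verbatim}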
\begin{figure}
\center
\includegraphics[width=0.5\linewidth]{Reliability_Data.png}
\caption{Reliability of 250 Responses at $50\%$ of Life Used}
\label{reliability}
\end{figure}
\begin{figure}
\center
\includegraphics[width=0.5\linewidth]{Inter-HD.png}
\caption{Inter-Hamming Distance Calculations between 250 Responses}
\label{inter-hd}
\end{figure}
\begin{figure}
\center
\includegraphics[width=0.5\linewidth]{AgingData.png}
\caption{Error Rate vs Percentage of Life Used}
\label{aging}
\end{figure}
\subsection{PUF Latency and Power Consumption}
Finally, we compare the latency and power consumption of our PUF to an AES encryption scheme built on one of the most portable AES implementations for resource-constrained devices, Tiny AES. Just encrypting a pre-shared key uses about 22.3 mW of power, and the encryption process takes around 190 ms. The HaLo extraction technique, in contrast, consumes less than 1 mW of power, and PUF signatures are generated in 34.8 ms. This relative power difference is shown in Figure~\ref{power}. This highlights a significant performance advantage with a stronger security guarantee, since the PUF signatures are never stored anywhere~\cite{variations}. Furthermore, this low latency and power consumption make the HaLo PUF a strong candidate for the telehealth application space.
\begin{figure}
\center
\includegraphics[width=0.5\linewidth]{TimingPower.png}
\caption{Relative Power Consumption and Latency Comparison between the FPUF and Traditional Tiny AES Encryption Scheme}
\label{power}
\end{figure}
\begin{table}
\center
\caption{PUF Entropy, Reliability, and Randomness Metrics}
\label{tabel-stats}
\begin{tabular}{llll}
\hline\noalign{\smallskip}
Metric & Minimum & Average & Max \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Shannon Entropy & 0.99 & 0.99 & 0.99 \\
Error Rate & $5.9 \times 10^{-7}$ & $7.1 \times 10^{-6}$ & $1.9 \times 10^{-2}$ \\
Inter-Hamming Distance & $47.2\%$ & $51.0\%$ & $53.1\%$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\section{Telehealth Application}
The first step in developing our authentication protocol was to build our telehealth application, which involved using a DS18B20 temperature sensor to collect temperature data and store it in a format that we can then use for edge deployment. Body temperature collection is just one of the many ways remote patient monitoring can be utilized. For our experiment, we used a Raspberry Pi to collect temperature data at 10-second intervals and saved the data in .csv format.
We then built a TCP/IP dynamic challenge-response authentication scheme using the PUF. This authentication process starts with the gateway sending a challenge to the health sensor, denoted Challenge$_1$. This request includes a challenge location and stable byte locations of the NAND flash memory. The health sensor performs the interrupted program and extracts the stable byte values. The health sensor then randomly generates a nonce, denoted Nonce$_1$, and the generated hash value, denoted Response$_1$, is XORed with Nonce$_1$. The resulting value is sent to the gateway as the reply to Challenge$_1$. The gateway knows what Response$_1$ should be, so it XORs the received message with Response$_1$ to determine Nonce$_1$. The gateway then randomly generates another nonce, denoted Nonce$_2$. A second challenge, using a different index of stable byte locations, is generated by the gateway. Challenge$_2$ is then concatenated with (Response$_1$ XOR Nonce$_2$), and this message is sent back to the health sensor. The health sensor is able to separate Challenge$_2$ from (Response$_1$ XOR Nonce$_2$), and (Response$_1$ XOR Nonce$_2$) is XORed with Response$_1$ to determine Nonce$_2$. Both the gateway and the health sensor now know the values of Challenge$_1$, Challenge$_2$, Nonce$_1$, and Nonce$_2$. Using Challenge$_2$, the health sensor again performs the interrupted program and extracts the stable byte values. By the end, if the health sensor and the gateway are legitimate sources, the challenges and responses can be properly decoded and the transaction is authenticated.
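A compact sketch of the first round of this exchange is given below. Here \texttt{puf\_response} stands in for the HaLo extraction (interrupted programs plus decoding) and \texttt{os.urandom} for the on-board TRNG; hashing and packet framing are omitted, so this is a simplified model of the protocol rather than the deployed implementation.
\begin{verbatim}
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# sensor side: answer Challenge_1
def sensor_reply(challenge1, puf_response):
    r1 = puf_response(challenge1)         # interrupted programs + decode
    nonce1 = os.urandom(len(r1))          # Nonce_1 from the TRNG
    return xor_bytes(r1, nonce1), nonce1  # transmit R1 XOR N1, keep N1

# gateway side: recover Nonce_1 using the enrolled Response_1
def gateway_recover_nonce(message, expected_r1):
    return xor_bytes(message, expected_r1)
\end{verbatim}
The second round (Challenge$_2$, Nonce$_2$) proceeds symmetrically, with the roles of the two parties exchanged.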
There are many advantages to using this authentication scheme. First, the use of hashes masks the plaintext values of the data being sent, meaning that attackers won’t be able to read the data in transit. Second, the use of randomly generated nonces means that the values being transmitted will always change with every transaction. Finally, and most importantly, the use of a PUF provides a unique identifier that the gateway can reliably authenticate, and attackers won’t be able to model the authentication responses of the PUF.
The challenge and response pairs described above are exchanged between the health sensor and the gateway. Man-in-the-middle (MITM) and replay attacks can occur in any networked environment~\cite{DVFS, Bio}. With regard to MITM, however, any modification of the packets being sent would result in a complete breakdown of the authentication process, making it obvious that the connection was tampered with, and would result in the gateway refusing the connection~\cite{P2M}. With replay attacks, the use of randomly generated nonces means the values being sent change with every transaction; an attacker therefore cannot replay a message with an old pair of nonces, as the values would be entirely different from the current nonces~\cite{FLASH, Forte}.
\section{Conclusions and Future Works}
As medical care continues to improve in both effectiveness and accessibility on a global scale, remote patient monitoring is a logical next step for healthcare investment. We have witnessed a global pandemic that disrupted the delivery of potentially life-saving medical care for over a year, which may have been partially mitigated by the adoption of remote patient monitoring (RPM) technology. While the technology proves useful for many medical monitoring applications, it is important to carefully consider the security vulnerabilities and possible exploits of large-scale adoption of RPM. The HaLo extraction method is a lightweight, fast, and easily implementable security measure that could help ensure the authenticity of RPM sensor data. The HaLo method is designed in such a way that it could be implemented on new RPM sensors with minor software updates, and no additional components. The method is resource-efficient and offers financial benefits when compared with many other commercial PUF solutions.
When considering future work for this project, more extensive testing needs to be done using additional flash chips from different manufacturers. Additionally, testing must be done in high-temperature environments in order to verify that the reliability remains high regardless of external factors such as heat. Finally, the HaLo extraction method could be tested on other flash memory densities (SLC/TLC), or possibly even on different flash architectures (3-D), in order to broaden the possible application space.
\section{Introduction}
\noindent
Optical properties of metals are important for novel technological applications where the optical response needs to be engineered for specific purposes, such as for plasmonic devices (e.g. in spectrally-selective coatings~\cite{Bilokur2017, Guo2014}), for optoelectronics devices (e.g. in ultra-thin films for transparent conductive electrodes~\cite{Yun2017, Ren2015}), and also for microscopy and optical data storage~\cite{Hatwar1997}.
Also, the colours of metals (which are related to the optical properties within the visible range of the electromagnetic spectrum) play a significant role in the jewellery industry, decoration and dentistry.
For these applications, the most used materials are metallic alloys based on gold or other coinage or precious metals, such as silver, copper, palladium and platinum.
In particular, gold and copper are among the few elemental metals that show a characteristic colour, due to the presence of a drop in the reflectivity curve inside the visible range; the reflectivities of nearly all other metals are instead generally high and flat across all visible frequencies, making them appear shiny and silvery white.
Moreover, gold alloys and intermetallics are known to show a broad spectrum of colours (yellow, red, purple, to name a few), which can be tuned by varying the alloying elements and concentrations in the material~\cite{Cretu1999}.
Since the jewellery industry demands, owing to market and fashion trends, new precious-metal alloys with specific colours, the search for and identification of novel alloys with novel optical properties is also of great interest there. \\
Generally speaking, the common route followed by researchers and manufacturers in order to identify any type of novel material is through trial-and-error experiments, which, however, have the drawback of being time-consuming and, when dealing with precious-metal-based systems, expensive.
In order to streamline this process, an alternative route that can help in guiding the search for new promising candidate systems is computational modelling, so that the physical properties under investigation are assessed through computer simulations rather than by real experiments.
Here, we show and discuss how it is possible to perform photorealistic simulations of metals by means of first-principles methods and, as a consequence, predict the colour of novel metallic alloys.
Previously published studies about first-principles simulation of optical properties of both elemental metals and alloys already point towards
the feasibility of this approach. Indeed, in 1988 Maksimov \textit{et al}.~\cite{Maksimov1988} computed the optical properties of several elemental metals whereas, more recently, Werner \textit{et al}.~\cite{Werner2009} performed a similar study on 17 elemental metals; both studies found qualitative agreement with experimental results.
For compounds, on the other hand, Blaber \textit{et al}.~\cite{Blaber2009} calculated the optical properties of several intermetallic compounds, with a particular focus on alkali-noble intermetallics, for new possible candidates as plasmonic materials while, in another work, Keast \textit{et al}.~\cite{Keast2014} computed the density of states and dielectric function of gold intermetallics compounds and gold binary alloys.
Regarding the simulation of specific coloured intermetallic compounds, the reflectivity and colour of the three well-known coloured gold intermetallics AuAl$_2$, AuGa$_2$ and AuIn$_2$ was first computed in Ref.~\cite{Keast2011} and, afterwards, Keast \textit{et al}.~\cite{Keast2013} studied the influence of alloying concentrations on the reflectivity and colour of the intermetallic AuAl$_2$ by considering the ternary compounds having the Au$_{1-x}$Pt$_x$Al$_2$ composition, with $x=0.0, 0.5, 0.75, 1.0$; equivalent computational results were obtained independently by Kecik~\cite{Kecik2013}. Calculated and experimental~\cite{Vishnubhatla1967, Furrer2014} reflectivity curves and colours for these compounds showed good agreement and the trends in colour as a function of the composition were well reproduced.
In addition, the effect of disorder on the optical properties of Au$_{0.5}$Cu$_{0.5}$ was studied by comparing the dielectric function of the random solid solution, simulated using the supercell approach, with that of the ordered intermetallic compound~\cite{DeSilva2015}, and the main spectral differences between the two different types of compounds were captured by the simulations. \\
In this work, first we establish a general computational approach that can be used for the photorealistic simulation of metals, showing how the reflectivity and colour of metallic crystals can be estimated by means of first-principles techniques.
We then demonstrate through a systematic study on elemental metals and extensive comparisons with experimental data that the theoretical and numerical approximations adopted are able to reproduce the correct behaviour of the reflectivity curves and to capture the main differences in optical properties across the periodic table.
Finally, we perform a similar study on metal alloys by considering different types of compounds, i.e. ordered intermetallics, disordered solid solutions and heterogeneous alloys.
In particular, we show through a comparison with experimental results that, if the appropriate methods are used for the simulation of the different types of compounds, (i) the simulated colours of known coloured intermetallics are in qualitative and most often in quantitative agreement with experiments and that (ii) one can reproduce the main colour trends in noble-metal-based binary alloys.
\section{Results}
\noindent
\subsection{Computational approach}
\noindent
The computational workflow that allows one to obtain the reflectivity and colour of a given metal from an initial crystal structure, schematically depicted in Fig.~\ref{fig:colourworkflow}, can be divided into four main computational steps: (i) evaluation of the electronic structure, (ii) calculation of the dielectric function, (iii) calculation of the reflectivity and colour and (iv) photorealistic rendering of the material. \\
The quantity we consider for the first-principles simulation of optical properties is the complex, wavevector- and frequency-dependent, dielectric function $\varepsilon(\mathbf{q},\omega) = \varepsilon_1(\mathbf{q},\omega) + i \varepsilon_2(\mathbf{q},\omega)$.
In fact, knowledge of the dielectric function then gives access to all the optical constants measurable in optical experiments, such as the absorption coefficient and the reflectivity. \\
Throughout this work the electronic structure is computed using density-functional theory (DFT)~\cite{Kohn1964dft} within the generalized gradient approximation (GGA) and relying on the PBE exchange-correlation functional~\cite{Perdew1996}. We emphasize here that the electronic structure could alternatively be obtained with more accurate techniques; for example, the accuracy of the band structures could be improved by computing quasi-particle corrections on top of PBE results (typically at the $GW$ level~\cite{Hybertsen1986,Onida2002,Reining2017}), albeit at a largely increased computational cost.
So, while in the present work we just rely on the Kohn-Sham (KS) PBE bands~\cite{Kohn1965}, the use of conceptually and quantitatively correct $GW$ bands would take place seamlessly inside this workflow.
Subsequently, we calculate the dielectric function within the independent particle approximation (IPA), which amounts to neglecting (i) effects related to electron-hole interactions (excitonic effects), since these are effectively screened by the conduction electrons and (ii) effects related to the rapidly varying microscopic electric fields inside the material (local-field effects) since these are typically small in homogeneous systems such as bulk metals~\cite{Marini2001}.
In the optical regime the momentum $\mathbf{q}$ transferred by the photon is negligible so that we can consider the optical limit, $\mathbf{q} \to \mathbf{0}$, of the expression for the IPA dielectric function $\varepsilon(\mathbf{q},\omega)$.
In general, the dielectric function still depends on the direction $\hat{\mathbf{q}} = \mathbf{q}/|\mathbf{q}|$ of the perturbing electric field and only for crystals with cubic symmetry it is the same in every direction.
In the optical limit it is convenient to divide the evaluation of the IPA dielectric function of metals into two separate contributions, an intraband Drude-like term $\varepsilon^{\text{intra}}(\hat{\mathbf{q}},\omega)$ due to the conduction electrons at the Fermi surface and an interband term $\varepsilon^{\text{inter}}(\hat{\mathbf{q}},\omega)$ due to vertical transitions between occupied and unoccupied bands, so that $\varepsilon(\hat{\mathbf{q}},\omega) = \varepsilon^{\text{inter}}(\hat{\mathbf{q}},\omega) + \varepsilon^{\text{intra}}(\hat{\mathbf{q}},\omega)$.
Using the solutions of the one-particle Schr\"{o}dinger equation for periodic systems, $H^{\text{KS}}\ket{\psi_{n\mathbf{k}}}=E_{n\mathbf{k} }\ket{\psi_{n\mathbf{k}}}$ (where $H^{\text{KS}}$ is the KS Hamiltonian from DFT), the explicit expression of the IPA dielectric function can be written as~\cite{Wooten1972, Marini2001, Harl2008}
\begin{align} \label{eq: eps_IP_inter}
\varepsilon^{\text{inter}}(\hat{\mathbf{q}},\omega) &= 1 - \frac{4\pi}{V} \sum_{\mathbf{k}} \sum_{\substack{n, n' \\ n \neq n'}}
\frac{| \bra{\psi_{n'\mathbf{k}}} \hat{\mathbf{q}} \cdot \mathbf{v} \ket{\psi_{n\mathbf{k}}} |^2 }{(E_{n'\mathbf{k}} - E_{n\mathbf{k}})^2}
\frac{ f_{n\mathbf{k}} - f_{n'\mathbf{k}} }{ \omega - ( E_{n'\mathbf{k}} - E_{n\mathbf{k} } )
+ i\eta }, \\ \label{eq: eps_IP_intra_dissipation}
\varepsilon^{\text{intra}}(\hat{\mathbf{q}},\omega) &= - \frac{\omega^2_{\text{D}}(\hat{\mathbf{q}})}{\omega(\omega + i\gamma)},
\end{align}
\noindent
where we have defined the IPA Drude plasma frequency as
\begin{equation} \label{eq: drude_plasma_freq}
\omega^2_{\text{D}}(\hat{\mathbf{q}}) = \frac{4\pi}{V} \sum_{\mathbf{k}} \sum_{n} | \bra{\psi_{n\mathbf{k}}} \hat{\mathbf{q}} \cdot \mathbf{v} \ket{\psi_{n\mathbf{k}}} |^2 \left( -\frac{\partial f_{n\mathbf{k}}}{\partial E_{n\mathbf{k}} } \right).
\end{equation}
\noindent
In the expressions above $\mathbf{v}= -i \, [\mathbf{r},H^{\text{KS}}]$ is the velocity operator, $f_{n\mathbf{k}}$ is the Fermi-Dirac occupation function of the KS Bloch state $\ket{\psi_{n\mathbf{k}}}$ identified by band index $n$ and wavevector $\mathbf{k}$ within the Brillouin zone (BZ) and $V$ is the volume of the crystal.
Instead, $\eta$ is an infinitesimal broadening introduced to perform the adiabatic switching of the perturbation within linear-response theory; in practical calculations,
it is used as an empirical broadening which accounts for scattering processes, always present in real materials, and/or for finite experimental resolution. Similarly, $\gamma$ is an empirical broadening representing dissipation effects of the conduction electrons (see the Methods section for more details on the parameters effectively used in the simulations). A more extensive discussion on the first-principles theory of optical properties and the derivation of the expression of the IPA dielectric function in the optical limit (Eq.~\ref{eq: eps_IP_inter}, Eq.~\ref{eq: eps_IP_intra_dissipation} and Eq.~\ref{eq: drude_plasma_freq}) can be found in Ref.~\cite{Prandini2019}. \\
As the most typical experimental situation is to have polycrystalline materials in which grains have random orientations, in the following we always deal with the dielectric function averaged over the three Cartesian directions
\begin{equation} \label{eq: eps_average_cartesian}
\varepsilon(\omega) = \frac{ \varepsilon(\hat{\mathbf{x}},\omega) + \varepsilon(\hat{\mathbf{y}},\omega) + \varepsilon(\hat{\mathbf{z}},\omega) }{3},
\end{equation}
\noindent
so that we can drop the dependence on the direction $\hat{\mathbf{q}}$. Similarly we also define a corresponding average IPA Drude plasma frequency as $\omega^2_{\text{D}} = [ \omega^2_{\text{D}}(\hat{\mathbf{x}}) + \omega^2_{\text{D}}(\hat{\mathbf{y}}) + \omega^2_{\text{D}}(\hat{\mathbf{z}}) ]/3$. \\
In order to compute the reflectivity from the knowledge of the dielectric function we first introduce the refractive index $n(\omega)$ and the extinction coefficient $k(\omega)$ that are defined from the equation $[n(\omega) + ik(\omega)]^2 = \varepsilon(\omega)$.
The reflectivity at normal incidence and assuming a vacuum-material interface is then simply linked to $n(\omega)$ and $k(\omega)$ through the Fresnel equations of classical electromagnetism (see for example Ref.~\cite{Griffiths2007}):
\begin{equation} \label{eq: reflectivity_refractive}
R(\omega) = \frac{\left[ n(\omega) -1 \right]^2 + k(\omega)^2 }{\left[ n(\omega) + 1 \right]^2 + k(\omega)^2}.
\end{equation}
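As an illustration of these last two steps, the following sketch evaluates a purely intraband (Drude) dielectric function on a frequency grid and converts it to the normal-incidence reflectivity; the parameter values are placeholders, not those of any specific metal.
\begin{verbatim}
import numpy as np

omega = np.linspace(0.5, 6.0, 500)   # photon energies (eV, placeholder)
w_D, gamma = 9.0, 0.1                # Drude parameters (eV, placeholder)

eps = 1.0 - w_D**2 / (omega * (omega + 1j * gamma))  # intraband term
nk = np.sqrt(eps)                    # n + ik (principal branch)
n, k = nk.real, nk.imag
R = ((n - 1)**2 + k**2) / ((n + 1)**2 + k**2)  # normal incidence
\end{verbatim}
For a real metal, the interband term $\varepsilon^{\text{inter}}(\omega)$ would be added to \texttt{eps} before taking the square root.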
\noindent
Eventually, we relate the reflectivity of a material to its perceived colour using the standard colour spaces introduced by the \textit{Commission Internationale de l'Eclairage} (CIE)~\cite{cie_webpage} for quantitative measures of colour.
For this purpose, trichromatic theory gives the rigorous mathematical framework that permits one to estimate the colour of an opaque material (e.g. a metal) from the knowledge of its reflectivity $R(\lambda)$ for all the wavelengths $\lambda$ in the visible range (i.e. in the range [380, 780] nm), and to condense this information into three numbers, i.e. the colour coordinates~\cite{Schanda2007}.
In particular, according to the \textit{CIE 1931 standard colorimetric observer}, the tristimulus values ($X$, $Y$, $Z$) which define the CIE-$XYZ$ colour space completely describe a colour stimulus and are given by the following integrals over the visible range
\begin{eqnarray}
X & = & k\int\limits_{380 \, \text{nm}}^{780 \, \text{nm}} d\lambda \, \bar{\text{x}}(\lambda)R(\lambda)S(\lambda) , \\
Y & = & k\int\limits_{380 \, \text{nm}}^{780 \, \text{nm}} d\lambda \, \bar{\text{y}}(\lambda)R(\lambda)S(\lambda) , \\
Z & = & k\int\limits_{380 \, \text{nm}}^{780 \, \text{nm}} d\lambda \, \bar{\text{z}}(\lambda)R(\lambda)S(\lambda) ,
\end{eqnarray}
\noindent
where $\bar{\text{x}}(\lambda)$, $\bar{\text{y}}(\lambda)$ and $\bar{\text{z}}(\lambda)$ are the three so-called colour-matching functions and describe the chromatic response of the observer, being related to the sensitivity of the three different colour-sensitive photoreceptors present in the human eye.
$S(\lambda)$ is instead the spectral power distribution of one of the standard CIE illuminants (throughout this work the D65 illuminant is used, which corresponds to average daylight), while the constant $k$ is chosen so that $Y = 100$ for objects for which $R(\lambda) = 1$ for all visible wavelengths.\\
In practice, it is more convenient to work within the CIELAB colour space rather than in the CIE-$XYZ$ colour space, which is defined by three coordinates ($L^*$, $a^*$, $b^*$) that are easily computed from the knowledge of the tristimulus values ($X$, $Y$, $Z$) through a coordinate transformation~\cite{Schanda2007}.
Indeed, since the CIELAB colour space is nearly uniform, euclidean distances can be used to approximately represent the perceived magnitude of colour differences between two objects in the same external conditions. Therefore, if ($L_1^*$, $a_1^*$, $b_1^*$) and ($L_2^*$, $a_2^*$, $b_2^*$) are the CIELAB coordinates of two objects, their colour difference is simply given by
\begin{equation}
\label{eq: delta_e}
\Delta E = \sqrt{(L_1^* - L_2^*)^2 + (a_1^*-a_2^*)^2+(b_1^*-b_2^*)^2}.
\end{equation}
\noindent
Typically, a difference $\Delta E$ larger than 1--2 units can be perceived by the human eye.
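
The ($X$, $Y$, $Z$) $\to$ ($L^*$, $a^*$, $b^*$) transformation and the colour difference of Eq.~\ref{eq: delta_e} can be sketched as follows, using the standard CIE formulas; the default white point given here is the commonly tabulated one for the D65 illuminant and the 1931 observer, and should be adapted to the illuminant in use:
\begin{verbatim}
import math

def xyz_to_lab(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
    # CIE-XYZ -> CIELAB; (Xn, Yn, Zn) is the reference white.
    def f(t):
        d = 6.0 / 29.0
        return t**(1.0/3.0) if t > d**3 else t/(3.0*d*d) + 4.0/29.0
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    return 116.0*fy - 16.0, 500.0*(fx - fy), 200.0*(fy - fz)

def delta_e(lab1, lab2):
    # Euclidean CIELAB colour difference between two triplets.
    return math.sqrt(sum((p - q)**2 for p, q in zip(lab1, lab2)))
\end{verbatim}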
In addition, we use photorealistic rendering, which is based on the solution of the light-transport equation~\cite{Kajiya1986}, to simulate the actual appearance of an object
made of a material with specified optical constants in the visible range within a realistic 3D scene.
\noindent
Our goal is to apply the computational approach described above to study metals in their crystalline form.
From the point of view of first-principles calculations, elemental crystals are the easiest and most computationally efficient systems to simulate since they are periodic and their primitive cell, which typically consists of only a few atoms, can simply be taken as the simulation cell.
For multi-component systems instead (in this work we focus on binary alloys), we distinguish different types of compounds according to their atomic configuration and microstructure.
In particular, we consider the following three limiting cases: (i) perfectly ordered phases, i.e. pure intermetallic compounds, (ii) perfectly disordered phases, i.e. pure solid solutions, and (iii) heterogeneous alloys, i.e. alloys consisting of a mixture of two different phases.
Since we are exclusively interested in the study of the intrinsic bulk colours of metals, we neglect the influence on the optical properties of defects (e.g. vacancies, dislocations, etc.) and any type of surface effects. \\
We use different simulation methods in order to properly model the reflectivity and colour of these three different types of compounds, as summarized in Table~\ref{tab:alloy-type_simulations}.
As for the case of elemental crystals, pure intermetallic compounds are periodic systems and are simply simulated in their primitive cell.
On the other hand, we use the supercell approach, based on the use of special quasi-random structures (SQS)~\cite{Zunger1990, Wei1990},
to take into account effects related to disorder in the simulation of the optical properties of solid solutions (see Supplementary Discussion 1 for a comparison between the SQS supercell approach and the virtual-crystal approximation).
Instead, for heterogeneous alloys made of two phases $\alpha$ and $\beta$, we use the Bruggeman model~\cite{Niklasson1981} to estimate the optical properties of the alloy.
Within the Bruggeman model the dielectric function of the mixture, that we indicate as $\varepsilon_{\text{Br}}(\omega)$, is given by the following expression
\begin{equation} \label{eq: bruggeman_model}
(1-x_{\beta}) \frac{ \varepsilon_{\alpha}(\omega) - \varepsilon_{\text{Br}}(\omega) }{\varepsilon_{\alpha}(\omega) + 2\varepsilon_{\text{Br}}(\omega)} + x_{\beta} \frac{ \varepsilon_{\beta}(\omega) - \varepsilon_{\text{Br}}(\omega) }{\varepsilon_{\beta}(\omega) + 2\varepsilon_{\text{Br}}(\omega)} = 0,
\end{equation}
\noindent
in terms of the dielectric functions $\varepsilon_{\alpha}(\omega)$ and $\varepsilon_{\beta}(\omega)$ of the single phases, and where $x_{\alpha}$ and $x_{\beta}$ (with $x_{\alpha} + x_{\beta} = 1$) are the fractions of the two phases present in the material.
The dielectric function of the single phases can be obtained with the methods of Table~\ref{tab:alloy-type_simulations} for intermetallic compounds and solid solutions. \\
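For two phases, Eq.~\ref{eq: bruggeman_model} is simply a quadratic equation in $\varepsilon_{\text{Br}}(\omega)$ at each frequency, so it can be solved in closed form. A possible Python sketch (illustrative only; the root with non-negative imaginary part is taken as the physical, causal solution) is:
\begin{verbatim}
import numpy as np

def bruggeman(eps_a, eps_b, x_b):
    # Effective dielectric function of a two-phase mixture from the
    # quadratic form 2*e**2 - b*e - eps_a*eps_b = 0 of the Bruggeman
    # condition; eps_a, eps_b are complex arrays on a common frequency
    # grid and x_b is the fraction of phase beta (x_a = 1 - x_b).
    x_a = 1.0 - x_b
    b = (2.0*x_a - x_b)*eps_a + (2.0*x_b - x_a)*eps_b
    disc = np.sqrt(b**2 + 8.0*eps_a*eps_b)
    r1, r2 = (b + disc)/4.0, (b - disc)/4.0
    return np.where(r1.imag >= 0.0, r1, r2)  # causal root
\end{verbatim}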
In the following, we apply and validate the computational approach described here, and discuss its limitations, on several elemental metals and binary compounds.
\subsection{Elemental metals}
\noindent
Fig.~\ref{fig:refl_elemental-metals_IP-vs-Exp} shows the comparison between IPA results and experimental data for the reflectivity curves of 18 elemental metals, focusing on frequencies centered around the visible range (i.e. in the range [1.59, 3.26] eV).
Experimentally, we observe high and flat reflectivities along the visible spectrum for the ``precious'' transition metals (i.e. Rh, Ir and Pd) while we observe flat but slightly lower reflectivities for the other transition metals considered (i.e. V, Nb, Ta, Cr, Mo and W).
As a consequence, in terms of CIELAB colour coordinates, metals in the first group have a large CIELAB brightness $L^*$ and thus whitish colour, while the others have smaller brightness and thus a more greyish colour (e.g. rhodium has $L^*$=90 while vanadium has $L^*$=78).
An interesting exception among the transition metals is osmium, which shows a reflectivity curve that is low in the low-energy part of the visible spectrum but then rises sharply in the blue-violet part, thus giving a bluish tint to pure osmium. A similar behaviour is also found in tantalum, but the rise of the reflectivity curve in the blue-violet region is significantly smaller and, consequently, the bluish tint of the material is less pronounced.
Instead, the simple $sp$ metals lithium, potassium and aluminium all have very high and nearly flat reflectivity curves in the visible range (and therefore a whitish colour), while in beryllium the reflectivity is lower and comparable to that of the transition metals (giving it a greyish colour). Interestingly, the reflectivity curve of caesium decreases significantly within the visible range, so that red and yellow radiation is strongly reflected while all other visible frequencies are absorbed, giving a yellow tint to the material.
As clearly shown in Fig.~\ref{fig:refl_elemental-metals_IP-vs-Exp}, the IPA simulations reproduce these different features of the elemental metals.
In contrast, for noble metals, while the characteristic drop in the reflectivity curve in the visible range (for Cu and Au) or in the ultraviolet (for Ag) is also reproduced by the simulations, it happens at smaller energies compared to experiments due to the well-known underestimation of the interband gap between valence $d$ bands and conduction $sp$ bands of PBE band structures.
This discrepancy can be corrected using approaches beyond DFT, such as the $GW$ approximation of many-body perturbation theory. By correcting the DFT band energies at the $G_0W_0$ level, a quantitative agreement with respect to experiments is obtained for the optical spectra of Cu~\cite{Marini2001a} and Ag~\cite{Marini2002} but not for Au, for which $G_0W_0$ gives very similar results to PBE~\cite{Rangel2012}. For this latter case, the quasi-particle self-consistent $GW$ ($QSGW$)~\cite{vanSchilfgaarde2006, Kotani2007} approach is required for the occupied $5d$ bands of gold to be lowered in energy by the right amount~\cite{Rangel2012}. \\
A quantitative measure of the accuracy of the simulations can be obtained through the colour difference $\Delta E$ (given in Eq.~\ref{eq: delta_e}) with respect to experiments. Its average value is found to be $\langle\Delta E\rangle = 6.4$.
For a more qualitative visual comparison, Fig.~\ref{fig:rendering_elemental-metals} shows the simulated rendering of a metallic surface of elemental gold, osmium and caesium together with the appearance of experimental samples of the same materials.
In gold, the shift of the reflectivity edge in the simulations with respect to experiments makes the rendered colour more reddish than the true red-yellow colour of pure gold.
On the other hand the bluish colour of osmium and the yellow colour of caesium are well reproduced by the IPA simulations. \\
Moreover, as shown in Table~\ref{tab:drude_exp-vs-ip}, the IPA results for the Drude plasma frequency are in good agreement both with experiments and with previous simulations~\cite{Harl2008} performed at the same level of theory for some elemental metals. \\
From all these results, we conclude that the IPA approach applied on top of PBE band structures predicts the reflectivity and colour of elemental metals surprisingly well. Although the colour is not always in quantitative agreement with experiments, the shape and the main features of the experimental reflectivity curves are reproduced in elemental metals.
These results are somewhat surprising because it is known that quasi-particle corrections modify significantly the PBE band structure in metals and that the corrections are k-dependent~\cite{Marini2001a, Marini2002} (i.e. they do not act as a simple scissor operator). Nonetheless, these approximate simulations manage to capture the correct features of the optical constants. This can intuitively be understood from the fact that the dielectric function is given by the sum of all possible vertical transitions over the whole BZ, so that small differences in the positions and features of the bands (such as gradient and curvature) are averaged out in the spectra.
In the special case of noble metals, the position of the occupied $d$ bands in PBE is not correct and, since there are no other allowed interband transitions in that energy range, the onset of interband optical absorption (i.e. $\varepsilon_2^{\text{inter}}(\omega)$) in PBE is also not at the correct position (similar to the case of semiconductors for which the PBE band gap is systematically underestimated~\cite{Onida2002}).
On the other hand, the shape of $\varepsilon(\omega)$ for noble metals is reasonably well reproduced.
\subsection{Alloys}
\noindent
In order to validate the theoretical approach used on binary compounds, we first compare the reflectivity and colour between simulations and experiments for known coloured intermetallic compounds, as previously done for elemental metals.
Second, we check the predictive accuracy of the simulations by studying in noble-metal-based alloys both the trends in reflectivity with respect to composition (in Ag-Au and Ag-Cu) and the differences in optical properties among different types of compounds for a given alloy composition (in Au-Cu and Ag-Cu).
\paragraph{Intermetallics.}
We first simulate the reflectivity and colour of intermetallic compounds that are experimentally known to be coloured.
The compounds studied are the purple AuAl$_2$, blue AuIn$_2$, bluish AuGa$_2$, yellow PtAl$_2$, red PdIn, blue-grey NiSi$_2$ and dark blue CoSi$_2$~\cite{Steinemann1997, Cretu1999}.
All these intermetallics have cubic symmetry: AuAl$_2$, AuGa$_2$, AuIn$_2$, PtAl$_2$, CoSi$_2$ and NiSi$_2$ crystallize in the FCC CaF$_2$ prototype structure (space group Fm$\bar{3}$m) while PdIn crystallizes in the BCC CsCl prototype structure (space group Pm$\bar{3}$m). \\
As shown in Fig.~\ref{fig:refl_intermetallic_sim-vs-exp}, the experimental shape of the reflectivity curve for the coloured intermetallics is well reproduced by the simulations.
The colour differences between simulations and experiments are summarized in Table~\ref{tab:colour_coloured-intermetallics_exp-vs-ip}, where the comparison with other first-principles simulations~\cite{Keast2011, Keast2013} is also reported.
The agreement with previous simulations is satisfactory and, moreover, we reproduce the true colour of the intermetallic compounds studied (although the CIELAB brightness is typically overestimated by the simulations).
For example, the comparison between photorealistic rendering and real material samples clearly shows that the simulations predict the correct colours of purple AuAl$_2$, bluish AuGa$_2$ and yellow PtAl$_2$ (see Fig.~\ref{fig:rendering_intermetallic}).
The characteristic colours of these highly symmetric intermetallic compounds are due to selective optical absorption in confined regions of the visible spectrum~\cite{Steinemann1997}.
For the gold compounds, the optical absorption inside the visible range is given by transitions from $sp$ conduction states below the Fermi level to unoccupied states above the Fermi level. The bands originating from the $5d$ states of gold, which are problematic in the study of elemental gold, are located at $\sim 5$ eV below the Fermi level and do not contribute to the characteristic colours of these compounds~\cite{Keast2015}. This explains the better agreement with experiments found for the gold intermetallic compounds compared to the case of elemental gold.
\paragraph{Au-Ag-Cu.}
\noindent
The Au-Ag-Cu system is an ideal test case for the application of the computational approach described above to alloys since (i) several experimental optical data on this system are available, (ii) its constituent binaries show very different behaviours in terms of phase stability and so different types of compounds are observed and (iii) it is the basis of the most common jewellery and dental alloys in use today.
Concerning the phase stability of the constituent binaries, Ag is completely soluble in Au, thus Au and Ag form solid solutions at all compositions and no long-range order is observed at low temperatures.
Also Au and Cu form solid solutions over all concentrations at high temperatures but, for certain composition ranges, ordered intermetallic phases can be obtained at lower temperatures. In particular the known intermetallic compounds are the cubic AuCu$_3$ and Au$_3$Cu (space group Pm$\bar{3}$m), the low-temperature phase AuCu(I) (space group P4/mmm) and the high-temperature phase AuCu(II) (space group Imma).
The phase diagram of Ag-Cu instead exhibits eutectic behaviour with a wide miscibility gap and the system tends to segregate in phases of nearly pure Ag and pure Cu at room temperature~\cite{Okamoto1990}. \\
We study the effect of composition on the reflectivity of the Ag-Au system and compare experimental data of solid solutions with SQS simulations.
Fig.~\ref{fig:Ag-Au_refl_ip-vs-exp} shows that the gradual shift to lower wavelengths of the reflectivity edge of gold by increasing the Ag content is reproduced by the simulations.
However, as already discussed above for the case of elemental noble metals, the position of the reflectivity edge in IPA simulations based on PBE band structures does not correspond to the experimental one, but it is instead systematically shifted to longer wavelengths for each atomic concentration $x$ considered.
Although the simulations are not in quantitative agreement with experiments, the qualitative trends in reflectivity, and thus in colour, with respect to the alloy composition of Ag-Au are reproduced.
Similarly, we simulate the optical properties of Ag$_{1-x}$Cu$_x$ two-phase alloys by employing the Bruggeman model described above and also study, in this system, the effect of composition on the reflectivity.
The $\alpha$ and $\beta$ phases entering the expression for the alloy dielectric function $\varepsilon_{\text{Br}}(\omega)$ of Eq.~\ref{eq: bruggeman_model} are assumed to be elemental Ag and elemental Cu, respectively, and the dielectric functions of the two constituent elements are taken from the simulations of elemental metals discussed above.
As shown in Fig.~\ref{fig:Ag-Cu_refl_ip-vs-exp}, Ag additions in Cu increase the reflectivity at wavelengths shorter than the reflectivity edge of elemental Cu but, unlike in Ag-Au solid solutions, do not shift the position of the edge.
The Bruggeman model provides the correct trend with composition but the effect on the drop in the reflectivity is less evident because the reflectivity edge of elemental Cu in IPA simulations is less steep than the experimental one. Note that the application of the Bruggeman model to experimental data of the dielectric function of elemental Ag and elemental Cu gives very good agreement with experimental data for the two-phase alloy and validates the use of the model. \\
Summarizing, for Ag-Au solid solutions, where there is a gradual shift of the reflectivity edge by varying alloying additions from elemental Au to elemental Ag, the colour of the alloy changes from red-yellow to yellow, pale greenish-yellow and eventually white of pure Ag. Au-Cu solid solutions show a similar behaviour~\cite{Rivory1977} and the colour of the alloy changes from red-yellow to reddish and eventually red of pure Cu.
Instead, in Ag-Cu two-phase alloys there is no shift of the reflectivity edge but, for all wavelengths in the visible range below the reflectivity edge of elemental Cu, the reflectivity curve rises roughly uniformly so that the colour of Ag-Cu changes from the red of pure Cu to reddish and then directly to whitish and white of pure Ag~\cite{Cretu1999}. \\
After considering the effect of composition on the reflectivity of binary alloys, we now study, for a given fixed atomic concentration $x$, the effect of the type of compound directly on the dielectric function of the Au-Cu and Ag-Cu systems.
Indeed, for Au-Cu at the composition $x=0.81$, experimental data are available in the literature for the optical absorption of both the solid solution and the intermetallic compound AuCu$_3$~\cite{Rivory1977}.
Analogously, for Ag-Cu at the composition $x=0.30$, experimental data are available for both a segregated two-phase sample made of a pure Cu phase and a pure Ag phase, and for a metastable solid solution obtained by vapor quenching~\cite{Rivory1977}.
We compare the optical absorption of the Au$_{1-x}$Cu$_x$ solid solution, at $x = 0.81$ in experiments and at $x=0.75$ in simulations, with the optical absorption of the intermetallic compound appearing around the composition $x=0.75$, i.e. the cubic AuCu$_3$ phase.
The purpose of this comparison is to study the differences in optical properties between ordered and disordered phases.
As shown in Fig.~\ref{fig:AuCu3_eps_ip-vs-exp}, the optical absorption of the intermetallic compound is very similar to the one of the random alloy with the notable exception of the presence of an additional peak at around 3.6 eV, which is missing in $\varepsilon_2(\omega)$ for the solid solution.
The comparison of the SQS results for the disordered alloy with the simulated results of the intermetallic compound shows that the simulations clearly capture this small difference.
Nonetheless, we underline that, for this system, there is no significant change in the resulting colour between the ordered and disordered alloy, because the position of the onset of optical absorption, and thus the colour, is not modified by the presence of long-range order.\\
Similarly, Fig.~\ref{fig:Ag-Cu_eps_ip-vs-exp} compares the optical absorption of the Ag$_{1-x}$Cu$_x$ two-phase alloy, at $x = 0.70$ in experiments and at $x=0.75$ in simulations, with that of the metastable solid solution having the same composition.
In the two-phase alloy, where the alloy optical properties are well approximated by a combination of those of pure Cu and pure Ag (Bruggeman model), we observe two onsets of absorption: the first one at $\sim$ 2.1 eV corresponding to the absorption edge of pure Cu and the second one at $\sim$ 4.0 eV corresponding to the absorption edge of pure Ag.
The optical absorption of the solid solution instead is very similar to the one of pure Ag but, in addition, we observe the presence of a supplementary broad peak at energies below the onset of absorption of pure Ag due to Cu impurity states.
The SQS results for the solid solution and the results of the Bruggeman model applied on the IPA dielectric function of elemental Ag and Cu reproduce the two different trends, although the SQS shows a small blueshift of the peak that follows the absorption edge of pure Ag which is not observed experimentally.
\section{Discussion}
\noindent
We have shown that the theoretical methods and approximations considered in this paper, i.e. IPA optical spectra computed on top of the DFT-PBE electronic structure, can be employed in systematic studies on the optical properties of metals in order to predict trends in real metallic systems and to help the search for novel materials with specific optical properties, and therefore also colours, by exploring the composition space through the computational screening of materials~\cite{Prandini2019}.
Moreover, this work could help stimulate future studies aiming to achieve the photorealistic simulation of different types of materials by means of first-principles techniques.
For example, the systematic validation of the approach performed on elemental metals and binary alloys can be seen as a necessary preliminary step for the photorealistic simulation of more complex metallic alloys having a larger number of constituent elements, such as ternaries, quaternaries, etc., which are more relevant for technological applications (e.g. superalloys and high-entropy alloys).
\section{Methods}
\noindent
\subsection{Workflow}
\noindent
All DFT calculations are performed with the Quantum ESPRESSO distribution~\cite{Giannozzi2009}, which is based on the plane-wave pseudopotential method for the numerical solution of the KS equations.
We use Shirley's interpolation method~\cite{Shirley1996, Prendergast2009} as implemented in the \texttt{SIMPLE} code~\cite{simple_cpc} to evaluate the IPA dielectric function of metals including both interband and intraband contributions.
Photorealistic rendering is performed with the Mitsuba renderer~\cite{mitsuba_webpage}.
Pseudopotentials and plane-wave cutoffs are chosen according to the results of the standard solid-state pseudopotential (SSSP) protocol~\cite{sssp_npj} in order to have reliable and converged band structures as the starting ingredients for the evaluation of the IPA dielectric function. Since the \texttt{SIMPLE} code supports only norm-conserving pseudopotentials, we use optimized norm-conserving Vanderbilt (ONCV)~\cite{Hamann2013} pseudopotentials from the SG15~\cite{Schlipf2015} and PseudoDojo~\cite{Dojo2017} PBE pseudopotential libraries for all elements considered (see Supplementary Discussion 2 for more details on the choice of the pseudopotentials from the SSSP database of tests).
For the purpose of automation, the sequence of calculations required by the computational approach described in this work is implemented as a workflow within the framework of the AiiDA~\cite{Pizzi2016} infrastructure for computational science. Thanks to this ColourWorkflow (see Fig.~\ref{fig:colourworkflow}), given a generic crystal structure as input, one directly obtains as output the reflectivity and colour of the corresponding material. \\
In all simulations, relativistic effects are accounted for at the scalar-relativistic level (see Supplementary Discussion 3 for an analysis on the effect of spin-orbit coupling on the optical properties of heavy elements) while the IPA dielectric function is always evaluated by including the non-local contribution of the pseudopotentials in the computation of the velocity matrix elements, as implemented in \texttt{SIMPLE}.
\subsection{Elemental metals}
\noindent
All calculations on elemental metals are performed on the ground-state crystal structures at zero temperature, as provided in Ref.~\cite{Lejaeghere2014}.
The equilibrium volume of each structure corresponds to the reference PBE value obtained by extensively tested all-electron calculations for the equation of state~\cite{Lejaeghere2016}.
If needed, the crystal structures are reduced to the primitive cell using the spglib library~\cite{spglib_webpage}.
Spin-polarization is not included in our calculations.
In the self-consistent DFT calculations for the evaluation of the ground-state density we use a Monkhorst-Pack grid~\cite{MonkhorstPack} of $24 \times 24 \times 24$ and a cold smearing~\cite{Marzari1999} of 0.02 Ry. In the non-self-consistent band structure calculations needed for the construction of the Shirley basis we use a uniform k-grid of $2 \times 2 \times 2$ including the seven periodic images of the $\Gamma$-point of the BZ and at least 30 empty conduction bands.
From a convergence study on the dielectric function we decide to employ an interpolation k-grid of $64 \times 64 \times 64$ and $\eta=\gamma=0.1$ eV in \texttt{SIMPLE} for each elemental metal considered, with the exception of elemental aluminium for which, because of a very slow convergence of $\varepsilon^{\text{inter}}(\omega)$ with respect to the k-point sampling, the interpolation k-grid used is $80 \times 80 \times 80$ and $\eta$ is set to 0.2 eV. The Shirley basis is constructed by setting the threshold for the Gram-Schmidt orthonormalization algorithm equal to 0.0075 a.u. (input variable named $s_b$ in \texttt{SIMPLE}).
\subsection{Alloys}
\noindent
For the simulation of all binary compounds considered, we always use as plane-wave cutoff the largest value between the plane-wave cutoffs of the two constituent elements, as taken from Supplementary Table 1. The Shirley basis is constructed by setting $s_b = 0.01$ a.u. in \texttt{SIMPLE} and by considering a number of empty bands at least equal to the number of occupied bands.
We choose the interpolation k-grid to be used in the evaluation of the dielectric function in terms of a k-point density, which is defined as the maximum distance between adjacent k-points along the reciprocal axes (in \AA$^{-1}$).
For all the seven cubic intermetallic compounds considered we select a k-point density of 0.04 \AA$^{-1}$. With this choice the number of k-points included in the uniform k-grids is of the order $O(10^{5})$, which corresponds to uniform k-grids in the range from $46 \times 46 \times 46$ up to $56 \times 56 \times 56$. \\
All the SQSs used in this work to simulate solid solutions of the systems Ag$_{1-x}$Au$_x$, Au$_{1-x}$Cu$_x$ and Ag$_{1-x}$Cu$_x$ are generated with the ATAT package~\cite{vandewalle2002, vandewalle2013}.
Since we consider only the simple stoichiometric ratios $x=0.25,0.5,0.75$, we use small FCC SQSs with 16 atoms per cell.
The interpolation k-grid is set according to a k-point density of 0.04 \AA$^{-1}$ (corresponding roughly to 11,000 points in the BZ).
\section*{Acknowledgements}
\noindent
The authors warmly thank Fanny Lalire and Fr\'ed\'eric Diologent for several useful discussions and for sharing with us confidential experimental results.
\section*{Competing Interests}
\noindent
The authors declare no competing interests.
\section*{Author contributions}
\noindent
N. M. and G.-M. R. designed the study; G. P. developed the computational workflow, performed the calculations and wrote the manuscript. All authors discussed and analysed the results and commented on the manuscript.
\section*{Funding}
\noindent
This research was supported by Varinor SA (CH 2800 Del\'emont, Switzerland).
\section*{Data availability}
\noindent
The data that support the findings of this study are available from the corresponding
authors upon reasonable request.
The source code of the ColourWorkflow and the input scripts necessary in order to reproduce the simulations performed for this work are available at https://github.com/giprandini/colour-workflow.
\section*{Additional information}
\noindent
\textbf{Supplementary information} is available at \textit{npj Computational Materials} website.\\
\section{References}
\bibliographystyle{naturemag}
\section{Introduction}
Two-dimensional Bin Packing (2BP) is a well-studied problem in combinatorial optimization.
It finds numerous applications in logistics, databases, and cutting stock.
In 2BP, we are given a set of $n$ rectangular items and square bins of side length 1.
The $i^{\textrm{th}}$ item is characterized by its width $w(i) \in (0,1]$ and height $h(i) \in (0,1]$.
Our goal is to find an axis-aligned nonoverlapping packing of
these items into the minimum number of square bins of side length 1.
There are two well-studied variants: (i) where the items cannot be rotated, and
(ii) they can be rotated by 90 degrees.
As is conventional in bin packing, we focus on asymptotic approximation algorithms.
For any optimization problem, the asymptotic approximation ratio (AAR)
of algorithm $\mathcal{A}$ is defined as $\lim_{m \to \infty} \sup_{I: \opt(I) = m} ({\mathcal{A}(I)}/{\opt(I)})$,
where $\opt(I)$ and $\mathcal{A}(I)$ are, respectively, the optimal objective value
and the objective value of the solution output by algorithm $\mathcal{A}$ on input $I$.
Intuitively, AAR captures the algorithm's behavior
when $\opt(I)$ is large.
We call a bin packing algorithm $\alpha$-asymptotic-approximate iff its AAR is at most $\alpha$.
An Asymptotic Polynomial-Time Approximation Scheme (APTAS) is an algorithm
that accepts a parameter $\eps$ and has AAR of $(1+\eps)$.
2BP is a generalization of classical 1-D bin packing problem \cite{HobergR17, bp-aptas}.
However, unlike 1-D bin packing, 2BP does not admit an APTAS unless P=NP \cite{bansal2006bin}.
In 1982, Chung, Garey, and Johnson~\cite{chung1982packing} gave an approximation algorithm
with AAR 2.125 for 2BP. Caprara~\cite{caprara2008} obtained
a $T_{\infty}(\approx 1.691)$-asymptotic-approximation algorithm.
Bansal, Caprara, and Sviridenko~\cite{rna} introduced the Round and Approx framework
to obtain an AAR of $1+\ln(T_{\infty})$ ($\approx 1.525$).
Then Jansen and Praedel~\cite{JansenP2013} obtained an AAR of 1.5.
The present best AAR is $1+\ln(1.5)$ ($\approx 1.405$),
due to Bansal and Khan~\cite{bansal2014binpacking},
and works for both the cases with and without rotations.
The best lower bounds on the AAR for 2BP are
1 + 1/3792 and 1 + 1/2196 \cite{chlebik2009hardness},
for the versions with and without rotations, respectively.
In the context of geometric packing, guillotine cuts are well-studied and heavily used in practice \cite{sweeney1992cutting}.
The notions of {\em guillotine cuts} and {\em $k$-stage packing} were introduced
by Gilmore and Gomory in their seminal paper \cite{gilmore1965multistage} on cutting stock problem.
In $k$-stage packing, each stage consists of either vertical or horizontal (but not both) axis-parallel end-to-end cuts, also called guillotine cuts.
In each stage, each of the rectangular regions obtained in the previous stage is considered separately
and can be cut again by using guillotine cuts. In $k$-stage packing,
the minimum number of cuts to obtain each rectangle from the initial packing is at most $k$, plus an
additional cut to trim (i.e., separate the rectangles themselves from waste area).
Note that in the cutting process we change the orientation (vertical or horizontal) of the cuts $k-1$ times.
2-stage packing, also called {\em shelf packing}, has been studied extensively.
In {\em guillotine packing}, the packing of items in each bin should be \emph{guillotinable},
i.e., items have to be packed in alternate horizontal and vertical stages
but there is no limit on the number of stages that can be used.
See \cref{sec:guill-examples} for examples.
Caprara et al.~\cite{caprara2005fast} gave an APTAS for 2-stage 2BP.
Bansal et al.~\cite{bansal2005tale} showed an APTAS for guillotine 2BP.
The presence of an APTAS for guillotine 2BP raises an important question:
can the optimal solution to guillotine 2BP be used as a good approximate solution to 2BP?
Formally, let $\opt(I)$ and $\opt_g(I)$ be the minimum number of bins and the
minimum number of guillotinable bins, respectively, needed to pack items $I$.
Let $\lambda$ be the smallest constant such that for some constant $c$
and for every set $I$ of items, we get $\opt_g(I) \le \lambda\opt(I) + c$.
Then $\lambda$ is called the Asymptotic Price of Guillotinability (APoG).
It is easy to show that $\mathrm{APoG} \ge 4/3$\footnote{Consider
a set $I$ of items containing $2m$ rectangles of width 0.6 and height 0.4 and
$2m$ rectangles of width 0.4 and height 0.6.
Then $\opt(I) = m$ and $\opt_g(I) = \ceil{4m/3}$.}.
Bansal and Khan~\cite{bansal2014binpacking} conjectured that $\mathrm{APoG} = 4/3$.
If true, this would imply a $(4/3+\eps)$-asymptotic-approximation algorithm for 2BP.
However, the present upper bound on APoG is only $T_\infty$ ($\approx1.691$),
due to Caprara's HDH algorithm~\cite{caprara2008} for 2BP, which produces a 2-stage packing.
APTASes are known for some special cases for 2BP,
such as when all items are squares~\cite{bansal2006bin} or
when all rectangles are small in both dimensions
\cite{coffman1980performance} (see \cref{thm:nfdh-small-2} in \cref{sec:nfdh}).
Another important class is {\em skewed} rectangles.
We say that a rectangle is $\delta$-large if,
for some constant $\delta>0$, its width and height are more than $\delta$;
otherwise, the rectangle is $\delta$-skewed.
We just say that a rectangle is large or skewed when $\delta$ is clear from the context.
An instance of 2BP is skewed if all the rectangles in the input are skewed.
Skewed instances are important in geometric packing (see Section \ref{subs:prior}).
This special case is practically relevant~\cite{galvez2020tight}:
e.g., in scheduling, it captures scenarios where
no job can consume a significant amount of a shared resource
(energy, memory space, etc.) for a significant amount of time.
Even for skewed instances of 2BP, the best known AAR is 1.406 \cite{bansal2014binpacking}.
Also, for skewed instances, the best known upper bound on APoG is $T_\infty \approx 1.691$.
\subsection{Related Works}
\label{subs:prior}
Multidimensional packing and covering problems are fundamental in combinatorial optimization \cite{CKPT17}.
Vector packing (VP) is another variant of bin packing, where the input is a set of vectors in $[0, 1]^d$ and the
goal is to partition the vectors into the minimum number of parts (bins) such that in each part, the sum of
vectors is at most 1 in every coordinate.
The present best approximation algorithm attains an AAR of $(0.807+\ln(d+1))$ \cite{bansal2016improved}
and there is a matching $\Omega(\ln d)$-hardness \cite{sandeep2021optimal}. Generalized multidimensional
packing \cite{aco-gvbp, aco-gvks} generalizes both geometric and vector packing.
In two-dimensional strip packing (2SP) \cite{coffman1980performance, steinberg1997strip}, we are given a
set of rectangles and a bounded width strip. The goal is to obtain an axis-aligned nonoverlapping packing
of all rectangles such that the height of the packing is minimized.
The best-known approximation ratio for 2SP is $5/3+\eps$ \cite{harren20115}, and it is NP-hard to obtain better than a $3/2$-approximation.
However, there exist APTASes for the problem, for both the cases
with and without rotations \cite{kenyon1996strip, jansen2005strip}.
In two-dimensional knapsack (2GK) \cite{jansen2004rectangle}, the rectangles have associated profits and
our goal is to pack the maximum profit subset into a unit square knapsack.
The present best polynomial-time (resp.~pseudopolynomial-time) approximation ratio
for 2GK is 1.809~\cite{galvez2017approximating} (resp.\ 4/3~\cite{GalSocg21}).
These geometric packing problems are also studied in $d$ dimensions ($d\ge 2$) \cite{eku-hdhk}.
Both 2SP and 2GK are also well-studied under guillotine packing.
Seiden and Woeginger~\cite{seiden2005two} gave an APTAS for guillotine 2SP.
Khan et al.~\cite{KhanSocg21} have recently given a pseudopolynomial-time
approximation scheme for guillotine 2GK.
Recently, guillotine cuts \cite{pach2000cutting} have received attention due to
their connection with the maximum independent set of rectangles (MISR) problem~\cite{AdamaszekHW19}.
In MISR, we are given a set of possibly overlapping rectangles and the goal is to find the maximum cardinality set of rectangles so that there is
no pairwise overlap. It was noted in \cite{khan2020guillotine, abed2015guillotine} that for any set of $n$ non-overlapping axis-parallel rectangles, if there is a guillotine cutting sequence
separating $\alpha n$ of them, then it implies a $1/\alpha$-approximation for MISR.
Skewed instance is an important special case in these problems.
In some problems, such as MISR and 2GK, if all items are $\delta$-large then we can solve them exactly in polynomial time.
So, the inherent difficulty of these problems lies in skewed instances.
For VP, hard instances are again skewed, e.g.,
Bansal, Eli\'a\v{s} and Khan~\cite{bansal2016improved} showed that
hard instances for 2-D VP (for a class of algorithms called {\em rounding based algorithms})
are skewed instances, where one dimension is $1-\eps$ and the other dimension is $\eps$.
Galvez el al.~\cite{galvez2020tight} recently studied strip packing when all items are skewed.
For skewed instances, they showed $(3/2-\eps)$ hardness of approximation and a matching $(3/2+\eps)$-approximation algorithm.
For 2GK, when the height of each item is at most $\eps^3$,
a $(1-72\eps)$-approximation algorithm is known~\cite{fishkin2005efficient}.
\subsection{Our Contributions}
We study 2BP for the special case of $\delta$-skewed{} rectangles,
where $\delta \in (0, 1/2]$ is a constant.
First, we make progress towards the conjecture \cite{bansal2014binpacking} that $\mathrm{APoG} = 4/3$.
Even for skewed{} rectangles, we only knew $4/3 \le \mathrm{APoG} \le T_{\infty}(\approx 1.691)$.
We resolve the conjecture for skewed{} rectangles, by giving lower and upper bounds of roughly $4/3$
when $\delta$ is a small constant.
Specifically, we give an algorithm for 2BP, called $\thinGPack_{\eps}$,
that takes a parameter $\eps \in (0, 1)$ as input.
For a set $I$ of $\delta$-skewed{} rectangles, we show that when
$\delta$ and $\eps$ are close to 0, $\thinGPack_{\eps}(I)$ outputs a
4-stage packing of $I$ into roughly $4\opt(I)/3 + O(1)$ bins.
\begin{restatable}{theorem}{rthmThinGPack}
\label{thm:thin-gpack}
Let $I$ be a set of $\delta$-skewed items, where $\delta \in (0, 1/2]$.
Then $\thinGPack_{\eps}(I)$ outputs a 4-stage packing of $I$
in time $O((1/\eps)^{O(1/\eps)} + n\log n)$.
Furthermore, the number of bins used is at most
$({4}/{3})(1+8\delta)(1+7\eps)\opt(I) + ({8}/{\eps^2}) + 30$.
\end{restatable}
A tighter analysis shows that
when $\delta \le 1/16$ and $\eps \le 10^{-4}$,
$\thinGPack$ has AAR $(76/45)(1+7\eps) < T_{\infty}$,
which improves upon the best-known bound on APoG for the general case.
The lower bound of $4/3$ on APoG can be extended to skewed{} items.
We formally prove this in \cref{sec:apog-lb}.
Hence, our bounds on APoG are tight for skewed{} items.
Our result indicates that to improve the bounds for APoG in the general case,
we should focus on $\delta$-large items.
Our bounds on APoG also hold when items can be rotated.
See \cref{sec:guill-rot} for details.
Our other main result is an APTAS for 2BP for skewed{} items.
Formally, we give an algorithm for 2BP, called $\thinCPack$,
and we show that for every constant $\eps \in (0, 1)$,
there exists a constant $\delta \in (0, \eps)$ such that the algorithm has an AAR of $1+\eps$
when all items in the input are $\delta$-skewed{} rectangles.
$\thinCPack$ can also be extended to the case where items can be rotated. %
The best-known AAR for 2BP is $1 + \ln(1.5) + \eps$.
Our result indicates that to improve upon algorithms for 2BP,
one should focus on $\delta$-large items.
In \cref{sec:guill-thin}, we describe the $\thinGPack$ algorithm
and prove \cref{thm:thin-gpack}.
In \cref{sec:thin-bp}, we describe the $\thinCPack$ algorithm
and prove that it has an AAR of $1+\eps$.
\section{Preliminaries}
\label{sec:preliminaries}
Let $[n] := \{1, 2, \ldots, n\}$, for $n \in \mathbb{N}$.
For a rectangle $i$, its area $a(i) := w(i)h(i)$.
For a set $I$ of rectangles, let $a(I) := \sum_{i \in I} a(i)$.
An \emph{axis-aligned packing} of an item $i$ in a bin
is specified by a pair $(x(i), y(i))$, where $x(i), y(i) \in [0,1]$,
so that $i$ is placed in the region
$[x(i), x(i)+w(i)] \times [y(i), y(i)+h(i)]$.
A packing of rectangles in a bin is called \emph{nonoverlapping} iff for any two
distinct items $i$ and $j$, the rectangles
$(x(i), x(i)+w(i)) \times (y(i), y(i)+h(i))$ and
$(x(j), x(j)+w(j)) \times (y(j), y(j)+h(j))$ are disjoint.
Equivalently, items may only intersect at their boundaries.\\
\noindent \textbf{Next-Fit Decreasing Height (NFDH):}
The NFDH algorithm~\cite{coffman1980performance}
is a simple algorithm for 2SP and 2BP. We will use the following results on NFDH.
We defer the proofs to \cref{sec:nfdh}.
\begin{restatable}{lemma}{rthmNfdhSmall}
\label{thm:nfdh-small}
Let $I$ be a set of items where each item $i$ has $w(i) \le \delta_W$
and $h(i) \le \delta_H$. Let there be a bin of width $W$ and height $H$.
If $a(I) \le (W - \delta_W)(H - \delta_H)$, then NFDH can pack $I$ into the bin.
\end{restatable}
\begin{lemma}
\label{thm:nfdh-wide-tall}
\label{thm:nfdh-tall}
\label{thm:nfdh-wide}
Let $I$ be a set of rectangular items. Then NFDH uses less than
$(2a(I)+1)/(1-\delta)$ bins to pack $I$ when $h(i) \le \delta$ for each item $i$
and $2a(I)/(1-\delta) + 3$ bins when $w(i) \le \delta$ for each item $i$.
\end{lemma}
If we swap the coordinate axes in NFDH, we get the
Next-Fit Decreasing Width (NFDW) algorithm.
Analog{}s of the above results hold for NFDW.\\
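For concreteness, the shelf construction at the heart of NFDH can be summarized by the following Python sketch (a simplified illustration of the strip version, assuming every width is at most the strip width $W$; the bin results above additionally distribute the resulting shelves among unit bins):
\begin{verbatim}
def nfdh_strip(items, W=1.0):
    # Next-Fit Decreasing Height, strip version (simplified sketch).
    # items: list of (w, h) with w <= W. Returns the height used and
    # the bottom-left corner of each item, in sorted order.
    order = sorted(items, key=lambda it: it[1], reverse=True)
    placements, x, shelf_y, shelf_h = [], 0.0, 0.0, 0.0
    for (w, h) in order:
        if x + w > W:            # item does not fit on current shelf:
            shelf_y += shelf_h   # close it, open a new shelf on top
            x, shelf_h = 0.0, 0.0
        if shelf_h == 0.0:       # first item fixes the shelf height
            shelf_h = h
        placements.append((x, shelf_y))
        x += w
    return shelf_y + shelf_h, placements
\end{verbatim}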
\noindent \textbf{Slicing Items:}
We will consider variants of 2BP where some items can be \emph{sliced}.
Formally, slicing a rectangular item $i$ using a horizontal cut is the operation of
replacing $i$ by two items $i_1$ and $i_2$ such that
$w(i) = w(i_1) = w(i_2)$ and $h(i) = h(i_1) + h(i_2)$.
Slicing using vertical cut is defined analogously.
Allowing some items to be sliced may reduce the number of bins required to pack.
See \cref{fig:frac-pack} for an example.
\begin{figure}[htb]
\centering
\begin{tikzpicture}[
myarrow/.style = {->,>={Stealth},semithick},
mybrace/.style = {decoration={amplitude=3pt,brace,mirror,raise=1pt},semithick,decorate},
every node/.style = {scale=0.8},
scale=0.8
]
\draw (0,0) rectangle +(3,3);
\draw[fill={black!30}] (0,0) rectangle +(1.2,3);
\draw[fill={black!10}] (3,0) rectangle +(-1.5,1.2);
\draw[fill={black!10}] (3,1.2) rectangle +(-1.5,1.2);
\draw[mybrace] (0,0) -- node[below=1pt] {0.4} +(1.2,0);
\draw[mybrace] (1.5,0) -- node[below=1pt] {0.5} +(1.5,0);
\draw[mybrace] (3,0) -- node[right=2pt] {0.4} +(0,1.2);
\draw[mybrace] (3,1.2) -- node[right=2pt] {0.4} +(0,1.2);
\draw[fill={black!30}] (-8,0) rectangle +(1.2,3);
\path (-8,0) -- node[below=0pt] {0.4} +(1.2,0);
\path (-8,0) -- node[left=0pt] {1} +(0,3);
\node at (-6.2,1.5) {+};
\draw[fill={black!10}] (-5,1) rectangle +(3,1.2);
\path (-5,1) -- node[left=0pt] {0.4} +(0,1.2);
\draw[dashed] (-3.5,2.5) -- (-3.5,0.5);
\node[rotate=90,transform shape] at (-3.5,0.3) {\large\ding{34}};
\draw[mybrace] (-5,1) -- node[below=1pt] {0.5} +(1.5,0);
\draw[mybrace] (-3.5,1) -- node[below=1pt] {0.5} +(1.5,0);
\draw[myarrow] (-1.5,1.5) -- (-0.5,1.5);
\end{tikzpicture}
\caption{Packing two items into a bin, where one item is sliced using a vertical cut.
If slicing were forbidden, two bins would be required.}
\label{fig:frac-pack}
\end{figure}
Alamdari et al.~\cite{alamdari2013smart} explored algorithms for
a variant of 2SP where items can be sliced using vertical cuts,
which has applications in smart-grid electricity allocation.
Many packing algorithms \cite{kenyon1996strip,JansenP2013,bansal2005tale}
solve the sliceable version of the problem as a subroutine.
\section{Guillotinable Packing of Skewed{} Rectangles}
\label{sec:guill-thin}
An item is called $(\delta_W, \delta_H)$-skewed{} iff its width is at most $\delta_W$
or its height is at most $\delta_H$.
In this section, we consider the problem of obtaining tight upper and lower bounds
on APoG for $(\delta_W, \delta_H)$-skewed{} items.
We will describe the $\thinGPack$ algorithm and prove \cref{thm:thin-gpack}.
\subsection{Packing With Slicing}
Before describing $\thinGPack$,
let us first look at a closely-related variant of this problem,
called the \emph{sliceable 2D bin packing problem}, denoted as S2BP.
In this problem, we are given two sets of rectangular items, $\widetilde{W}$ and $\widetilde{H}$, where
items in $\widetilde{W}$ have width more than $1/2$, and items in $\widetilde{H}$ have height more than $1/2$.
$\widetilde{W}$ is called the set of wide items and $\widetilde{H}$ is called the set of tall items.
We are allowed to \emph{slice} items in $\widetilde{W}$ using horizontal cuts
and slice items in $\widetilde{H}$ using vertical cuts, and our task is to pack
$\widetilde{W} \cup \widetilde{H}$ into the minimum number of bins without rotating the items.
See \cref{fig:bp-vs-sbp} for an example that illustrates the difference
between 2BP and S2BP.
\begin{figure}[htb]
\begin{subfigure}{0.45\textwidth}
\centering
\begin{tikzpicture}[
witem/.style={draw,fill={black!30}},
hitem/.style={draw,fill={black!10}},
bin/.style={draw,thick},
myarrow/.style={->,>={Stealth},thick},
scale=0.7,
]
\begin{scope}
\node at (0.3, 2.0) {$\widetilde{W}$:};
\node at (0.3, 0.6) {$\widetilde{H}$:};
\path[hitem] (1.0, 0.0) rectangle +(1.2, 1.2);
\path[hitem] (2.4, 0.0) rectangle +(1.2, 1.2);
\path[witem] (1.0, 1.4) rectangle +(1.2, 1.2);
\path[witem] (2.4, 1.4) rectangle +(1.2, 1.2);
\end{scope}
\draw[myarrow] (3.9, 1.3) -- (5.2, 1.3);
\begin{scope}[xshift={5.5cm},yshift={-0.8cm}]
\path[hitem] (0.0, 0.0) rectangle +(1.2, 1.2);
\path[hitem] (2.2, 0.0) rectangle +(1.2, 1.2);
\path[witem] (0.0, 2.2) rectangle +(1.2, 1.2);
\path[witem] (2.2, 2.2) rectangle +(1.2, 1.2);
\path[bin] (0.0, 0.0) rectangle +(2, 2);
\path[bin] (2.2, 0.0) rectangle +(2, 2);
\path[bin] (0.0, 2.2) rectangle +(2, 2);
\path[bin] (2.2, 2.2) rectangle +(2, 2);
\end{scope}
\end{tikzpicture}
\caption{Packing items into 4 bins without slicing.}
\end{subfigure}
\hfil
\begin{subfigure}{0.5\textwidth}
\centering
\begin{tikzpicture}[
witem/.style={draw,fill={black!30}},
hitem/.style={draw,fill={black!10}},
bin/.style={draw,thick},
myarrow/.style={->,>={Stealth},thick},
cutline/.style={draw={black!50!red},dashed,semithick},
scale=0.7,
]
\begin{scope}
\node at (0.3, 2.0) {$\widetilde{W}$:};
\node at (0.3, 0.6) {$\widetilde{H}$:};
\path[hitem] (1.0, 0.0) rectangle +(1.2, 1.2);
\path[hitem] (2.4, 0.0) rectangle +(1.2, 1.2);
\path[witem] (1.0, 1.4) rectangle +(1.2, 1.2);
\path[witem] (2.4, 1.4) rectangle +(1.2, 1.2);
\path[cutline] (2.3, 2.0) -- (3.7, 2.0);
\node[xscale=-1,transform shape] at (3.9, 1.98) {\large\ding{34}};
\path[cutline] (3.0, -0.1) -- (3.0, 1.3);
\node[rotate=90,transform shape] at (3.02, -0.3) {\large\ding{34}};
\end{scope}
\draw[myarrow] (3.9, 1.3) -- (5.2, 1.3);
\begin{scope}[xshift={5.5cm},yshift={0.3cm}]
\path[witem] (0.0, 0.0) rectangle +(1.2, 1.2);
\path[hitem] (2.2, 0.0) rectangle +(1.2, 1.2);
\path[witem] (0.0, 1.2) rectangle +(1.2, 0.6);
\path[witem] (2.2, 1.2) rectangle +(1.2, 0.6);
\path[hitem] (1.2, 0.0) rectangle +(0.6, 1.2);
\path[hitem] (3.4, 0.0) rectangle +(0.6, 1.2);
\path[bin] (0.0, 0.0) rectangle +(2, 2);
\path[bin] (2.2, 0.0) rectangle +(2, 2);
\end{scope}
\end{tikzpicture}
\caption{Packing items into 2 bins by horizontally slicing an item in $\widetilde{W}$
and vertically slicing an item in $\widetilde{H}$.}
\end{subfigure}
\caption[2BP vs.~S2BP]{Example illustrating 2BP vs.~S2BP.
There are 2 wide items ($\widetilde{W}$) and 2 tall items ($\widetilde{H}$).
The items are squares of side length 0.6 and the bins are squares of side length 1.}
\label{fig:bp-vs-sbp}
\end{figure}
We first describe a simple $4/3$-asymptotic-approximation algorithm
for S2BP, called $\greedyPack$, that outputs a 2-stage packing.
Later, we will show how to use $\greedyPack$ to design $\thinGPack$.
We assume that the bin is a square of side length 1. Since we can slice items,
we allow items in $\widetilde{W}$ to have height more than 1
and items in $\widetilde{H}$ to have width more than 1.
For $X \subseteq \widetilde{W}$, $Y \subseteq \widetilde{H}$, define $\hsum(X) := \sum_{i \in X} h(i)$;
$\wsum(Y) := \sum_{i \in Y} w(i)$;
$\wmax(X) := \max_{i \in X} w(i) \textrm{ if } X \neq \emptyset$, and $0 \textrm{ if } X = \emptyset$;
$\hmax(Y) := \max_{i \in Y} h(i) \textrm{ if } Y \neq \emptyset$, and $0 \textrm{ if } Y = \emptyset$.
In the algorithm $\greedyPack(\widetilde{W}, \widetilde{H})$, we first sort items $\widetilde{W}$ in decreasing order
of width and sort items $\widetilde{H}$ in decreasing order of height.
Suppose $\hsum(\widetilde{W}) \ge \wsum(\widetilde{H})$. Let $X$ be the largest prefix of $\widetilde{W}$
of total height at most 1, i.e., if $\hsum(\widetilde{W}) > 1$,
then $X$ is a prefix of $\widetilde{W}$ such that $\hsum(X) = 1$ (slice items if needed),
and $X = \widetilde{W}$ otherwise.
Pack $X$ into a bin such that the items touch the right edge of the bin.
Then we pack the largest possible prefix of $\widetilde{H}$
into the empty rectangular region of width $1 - \wmax(X)$ in the left side of the bin.
We call this a type-1 bin. See \cref{fig:greedy-pack:1} for an example.
If $\hsum(\widetilde{W}) < \wsum(\widetilde{H})$, we proceed analogously in a coordinate-swapped way,
i.e., we first pack tall items in the bin and then pack wide items in the remaining space.
Call this bin a type-2 bin.
We pack the rest of the items into bins in the same way.
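
The following Python sketch mirrors this description (illustrative only: the function names are ours, items are represented by their dimensions alone, slicing is modelled by splitting an entry, and the actual coordinates of the 2-stage packing are omitted):
\begin{verbatim}
def greedy_pack(wide, tall):
    # wide: list of (w, h) with w > 1/2; tall: list of (w, h) with
    # h > 1/2. Returns the number of bins used by greedyPack.
    wide = sorted(wide, key=lambda it: it[0], reverse=True)
    tall = sorted(tall, key=lambda it: it[1], reverse=True)
    bins = 0
    while wide or tall:
        bins += 1
        if sum(h for _, h in wide) >= sum(w for w, _ in tall):
            wmax = take_prefix(wide, 1.0, axis=1)   # type-1 bin
            take_prefix(tall, 1.0 - wmax, axis=0)
        else:
            hmax = take_prefix(tall, 1.0, axis=0)   # type-2 bin
            take_prefix(wide, 1.0 - hmax, axis=1)
    return bins

def take_prefix(items, budget, axis):
    # Remove (in place) a maximal prefix of total size at most
    # `budget` along `axis` (0 = width, 1 = height), slicing the item
    # that overflows; returns the largest cross dimension used.
    largest, used = 0.0, 0.0
    while items and used < budget:
        it = items[0]
        largest = max(largest, it[1 - axis])
        if used + it[axis] <= budget:
            items.pop(0)
            used += it[axis]
        else:                      # slice: keep only the leftover part
            leftover = list(it)
            leftover[axis] = it[axis] - (budget - used)
            items[0] = tuple(leftover)
            used = budget
    return largest
\end{verbatim}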
\begin{figure}[htb]
\begin{subfigure}{0.45\textwidth}
\centering
\tikzset{mytransform/.style={scale=0.7}}
\tikzset{wItem/.style={draw,fill={black!30}}}
\tikzset{hItem/.style={draw,fill={black!10}}}
\ifcsname pGameL\endcsname\else\newlength{\pGameL}\fi
\setlength{\pGameL}{0.2cm}
\tikzset{bin/.style={draw,thick}}
\begin{tikzpicture}[mytransform]
\path[wItem] (5\pGameL, 0\pGameL) rectangle +(15\pGameL, 2\pGameL);
\path[wItem] (6\pGameL, 2\pGameL) rectangle +(14\pGameL, 3\pGameL);
\path[wItem] (6\pGameL, 5\pGameL) rectangle +(14\pGameL, 2\pGameL);
\path[wItem] (7\pGameL, 7\pGameL) rectangle +(13\pGameL, 3\pGameL);
\path[wItem] (8\pGameL, 10\pGameL) rectangle +(12\pGameL, 4\pGameL);
\path[wItem] (8\pGameL, 14\pGameL) rectangle +(12\pGameL, 2\pGameL);
\path[wItem] (9\pGameL, 16\pGameL) rectangle +(11\pGameL, 2\pGameL);
\path[wItem] (10\pGameL, 18\pGameL) rectangle +(10\pGameL, 2\pGameL);
\path[hItem] (1\pGameL, 4\pGameL) rectangle +(2\pGameL, 16\pGameL);
\path[hItem] (0\pGameL, 2\pGameL) rectangle +(1\pGameL, 18\pGameL);
\path[hItem] (3\pGameL, 5\pGameL) rectangle +(2\pGameL, 15\pGameL);
\draw[semithick,dashed] (5\pGameL, 0\pGameL) -- +(0\pGameL, 20\pGameL);
\path[bin] (0\pGameL, 0\pGameL) rectangle (20\pGameL, 20\pGameL);
\end{tikzpicture}
\caption{A type-1 bin produced by $\greedyPack$.
Wide items are packed on the right. Tall items are packed on the left.}%
\label{fig:greedy-pack:1}
\end{subfigure}
\hfill
\begin{subfigure}{0.45\textwidth}
\centering
\tikzset{mytransform/.style={xscale=-0.7,yscale=0.7,rotate=90}}
\tikzset{wItem/.style={draw,fill={black!10}}}
\tikzset{hItem/.style={draw,fill={black!30}}}
\ifcsname pGameL\endcsname\else\newlength{\pGameL}\fi
\setlength{\pGameL}{0.2cm}
\tikzset{bin/.style={draw,thick}}
\begin{tikzpicture}[mytransform]
\path[wItem] (5\pGameL, 0\pGameL) rectangle +(15\pGameL, 2\pGameL);
\path[wItem] (6\pGameL, 2\pGameL) rectangle +(14\pGameL, 3\pGameL);
\path[wItem] (6\pGameL, 5\pGameL) rectangle +(14\pGameL, 2\pGameL);
\path[wItem] (7\pGameL, 7\pGameL) rectangle +(13\pGameL, 3\pGameL);
\path[wItem] (8\pGameL, 10\pGameL) rectangle +(12\pGameL, 4\pGameL);
\path[wItem] (8\pGameL, 14\pGameL) rectangle +(12\pGameL, 2\pGameL);
\path[wItem] (9\pGameL, 16\pGameL) rectangle +(11\pGameL, 2\pGameL);
\path[wItem] (10\pGameL, 18\pGameL) rectangle +(10\pGameL, 2\pGameL);
\path[hItem] (1\pGameL, 4\pGameL) rectangle +(2\pGameL, 16\pGameL);
\path[hItem] (0\pGameL, 2\pGameL) rectangle +(1\pGameL, 18\pGameL);
\path[hItem] (3\pGameL, 5\pGameL) rectangle +(2\pGameL, 15\pGameL);
\draw[semithick,dashed] (5\pGameL, 0\pGameL) -- +(0\pGameL, 20\pGameL);
\path[bin] (0\pGameL, 0\pGameL) rectangle (20\pGameL, 20\pGameL);
\end{tikzpicture}
\caption{A type-2 bin produced by $\greedyPack$.
Tall items are packed above. Wide items are packed below.}%
\label{fig:greedy-pack:2}
\end{subfigure}
\caption{Examples of type-1 and type-2 bins produced by $\greedyPack$.}
\label{fig:greedy-pack}
\end{figure}
\begin{claim}
\label{thm:greedy-pack}
$\greedyPack(\widetilde{W}, \widetilde{H})$ outputs a 2-stage packing of $\widetilde{W} \cup \widetilde{H}$.
It runs in $O(m + |\widetilde{W}|\log|\widetilde{W}| + |\widetilde{H}|\log|\widetilde{H}|)$ time,
where $m$ is the number of bins used.
Furthermore, it slices items in $\widetilde{W}$ by making at most $m-1$ horizontal cuts
and slices items in $\widetilde{H}$ by making at most $m-1$ vertical cuts.
\end{claim}
Since items in $\widetilde{W}$ have width more than $1/2$,
no two items can be placed side-by-side.
Hence, $\smallceil{\hsum(\widetilde{W})} = \opt(\widetilde{W}) \le \opt(\widetilde{W} \cup \widetilde{H})$.
Similarly, $\smallceil{\wsum(\widetilde{H})} \le \opt(\widetilde{W} \cup \widetilde{H})$.
So, if all bins have the same type, $\greedyPack$ uses
$\max(\smallceil{\hsum(\widetilde{W})}, \smallceil{\wsum(\widetilde{H})}) = \opt(\widetilde{W} \cup \widetilde{H})$ bins.
We will now focus on the case where
some bins have type 1 and some have type 2.
\begin{definition}
In a type-1 bin, let $X$ and $Y$ be the wide and tall items, respectively.
The bin is called \emph{full} iff $\hsum(X) = 1$ and $\wsum(Y) = 1 - \wmax(X)$.
Define fullness for type-2 bins analogously.
\end{definition}
We first show that full bins pack items of a large total area,
and then we show that if some bins have type 1 and some bins have type 2,
then there can be at most 2 non-full bins.
This will help us get an upper-bound on the number of bins used by $\greedyPack(\widetilde{W}, \widetilde{H})$
in terms of $a(\widetilde{W} \cup \widetilde{H})$.
\begin{lemma}
\label{thm:area-bound}
Let there be $m_1$ type-1 full bins.
Let $J_1$ be the items in them.
Then $m_1 \le 4a(J_1)/3 + 1/3$.
\end{lemma}
\begin{proof}
In the $j^{\textrm{th}}$ full bin of type 1, let $X_j$ be the items from $\widetilde{W}$
and $Y_j$ be the items from $\widetilde{H}$. Let
$\ell_j := \wmax(X_j) \textrm{ if } j \le m_1$
and $\ell_{m_1+1} := 1/2$.
Since all items have their larger dimension more than $1/2$,
$\ell_j \ge 1/2$ and $\hmax(Y_j) > 1/2$, for any $j \in [m_1]$.
$a(X_j) \ge \ell_{j+1}$, since $X_j$ has height 1 and width at least $\ell_{j+1}$.
$a(Y_j) \ge (1-\ell_j)/2$, since $Y_j$ has width $1 - \ell_j$ and height more than $1/2$.
Therefore,
\begin{align*}
a(J_1) &= \sum_{j=1}^{m_1} \bigl(a(X_j) + a(Y_j)\bigr)
\ge \sum_{j=1}^{m_1} \left(\ell_{j+1} + \frac{1-\ell_j}{2}\right)
\ge \sum_{j=1}^{m_1} \left(\frac{\ell_{j+1}}{2} + \frac{1}{4} + \frac{1}{2} - \frac{\ell_j}{2}\right) \\
&= \frac{3m_1}{4} + \frac{1}{4} - \frac{\ell_1}{2}
\ge \frac{3m_1-1}{4}.
\end{align*}
In the above inequalities, we used that $\ell_{j+1} \ge 1/2$
and $\ell_1 \le 1$.
Therefore, $m_1 \le 4a(J_1)/3 + 1/3$.
\end{proof}
An analog{} of \cref{thm:area-bound} can be proven for type-2 bins.
Note that \cref{thm:area-bound} implies that the average area of full bins is close to $3/4$.
It is possible for an individual full bin to have area close to 1/2,
but the number of such bins is small, due to the telescopic sum in \cref{thm:area-bound}.
Let $m$ be the number of bins used by $\greedyPack(\widetilde{W}, \widetilde{H})$.
After $j$ bins have been packed, let $A_j$ be the total height of the remaining items in $\widetilde{W}$
and $B_j$ be the total width of the remaining items in $\widetilde{H}$.
Let $t_j$ be the type of the $j^{\textrm{th}}$ bin (1 for type-1 bin and 2 for type-2 bin).
So $t_j = 1 \iff A_{j-1} \ge B_{j-1}$.
We first show that $|A_{j-1} - B_{j-1}| \le 1 \implies |A_j - B_j| \le 1$.
This means that once the difference between $\hsum(\widetilde{W})$ and $\wsum(\widetilde{H})$ becomes at most 1,
it continues to stay at most 1.
Next, we show that $t_j \neq t_{j+1} \implies |A_{j-1} - B_{j-1}| \le 1$.
This means that if some bins have type 1 and some have type 2,
then the difference between $\hsum(\widetilde{W})$ and $\wsum(\widetilde{H})$ will eventually become at most 1.
In the first non-full bin, we will use up all the wide items or the tall items.
We will show that the remaining items have total height or total width at most 1,
so we can have at most 1 more non-full bin.
Hence, there can be at most 2 non-full bins when we have both type-1 and type-2 bins.
In the $j^{\textrm{th}}$ bin, let $a_j$ be the total height of items from $\widetilde{W}$
and $b_j$ be the total width of items from $\widetilde{H}$.
Hence, for all $j \in [m]$,
$A_{j-1} = A_j + a_j$ and $B_{j-1} = B_j + b_j$.
\begin{lemma}
\label{thm:diff-capture}
$|A_{j-1} - B_{j-1}| \le 1 \implies |A_j - B_j| \le 1$.
\end{lemma}
\begin{proof}
W.l.o.g.{}, assume $A_{j-1} \ge B_{j-1}$. So, $t_j = 1$. Suppose $a_j < b_j$.
Then $a_j < 1$, so we used up $\widetilde{W}$ in the $j^{\textrm{th}}$ bin. Therefore,
$A_j = 0 \implies A_{j-1} = a_j < b_j \le b_j + B_j = B_{j-1}$,
a contradiction. Hence, $a_j \ge b_j$.
As $0 \le (A_{j-1} - B_{j-1}), (a_j - b_j) \le 1$, we get
$A_j - B_j = (A_{j-1} - B_{j-1}) - (a_j - b_j) \in [-1, 1]$.
\end{proof}
\begin{lemma}
\label{thm:tdiff-implies-adiff}
$t_j \neq t_{j+1} \implies |A_{j-1} - B_{j-1}| \le 1$.
\end{lemma}
\begin{proof}
W.l.o.g.{}, assume $t_j = 1$ and $t_{j+1} = 2$. Then
\[ A_{j-1} \ge B_{j-1} \textrm{ and } A_j < B_j
\implies B_{j-1} \le A_{j-1} < B_{j-1} + a_j - b_j
\implies A_{j-1} - B_{j-1} \in \ropenInterval{0, 1}. \qedhere \]
\end{proof}
\begin{lemma}
\label{thm:non-full-ub}
If not all bins have the same type, then there can be at most 2 non-full bins.
\end{lemma}
\begin{proof}
Let there be $p$ full bins. %
Assume w.l.o.g.{} that in the $(p+1)^{\textrm{th}}$ bin, we used up all items from $\widetilde{W}$ but not $\widetilde{H}$.
Hence, $A_{p+1} = 0$ and $\forall i \ge p+2$, $t_i = 2$.
Since not all bins have the same type, $\exists k \le p+1$ such that
$t_k = 1$ and $t_{k+1} = 2$.
By \cref{thm:tdiff-implies-adiff,thm:diff-capture}, we get $|A_{p+1} - B_{p+1}| \le 1$,
implying $B_{p+1} \le 1$.
Hence, the $(p+2)^{\textrm{th}}$ bin will use up all the remaining tall items,
implying at most 2 non-full bins.
\end{proof}
\begin{theorem}
\label{thm:greedy-pack-bins}
The number of bins $m$ used by $\greedyPack$ is at most
\\ $\max\left(\smallceil{\hsum(\widetilde{W})}, \smallceil{\wsum(\widetilde{H})},
\frac{4}{3}a(\widetilde{W} \cup \widetilde{H}) + \frac{8}{3}\right)$.
\end{theorem}
\begin{proof}
If all bins have the same type, then $m \le \max(\smallceil{\hsum(\widetilde{W})}, \smallceil{\wsum(\widetilde{H})})$.
Let there be $m_1$ (resp.~$m_2$) full bins of type 1 (resp.~type 2)
and let $J_1$ (resp.~$J_2$) be the items inside those bins.
Then by \cref{thm:area-bound}, we get $m_1 \le 4a(J_1)/3 + 1/3$ and $m_2 \le 4a(J_2)/3 + 1/3$.
Hence, $m_1 + m_2 \le 4a(\widetilde{W} \cup \widetilde{H})/3 + 2/3$.
If not all bins have the same type, then by \cref{thm:non-full-ub},
there can be at most 2 non-full bins, so $\greedyPack(\widetilde{W}, \widetilde{H})$
uses at most $4a(\widetilde{W} \cup \widetilde{H})/3 + 8/3$ bins.
\end{proof}
\subsection{The \texorpdfstring{$\thinGPack$}{skewed4Pack} Algorithm}
\label{sec:thin-gpack}
We now return to the 2BP problem.
$\thinGPack$ is an algorithm for 2BP that takes as input a set $I$ of rectangular items
and a parameter $\eps \in (0, 1)$ where $\eps^{-1} \in \mathbb{Z}$.
It outputs a 4-stage bin packing of $I$.
$\thinGPack$ has the following outline:
\begin{enumerate}[A.]
\item Use linear grouping \cite{bp-aptas,kenyon1996strip} to round up the width or height of each item in $I$.
This gives us a new instance $\widehat{I}$.
\item Pack $\widehat{I}$ into at most $1/\eps^2 + 1$ wide shelves and at most $1/\eps^2 + 1$ tall shelves,
after possibly \emph{slicing} some items.
Each shelf is a rectangular region with width or height more than $1/2$ and is fully packed, i.e.,
the total area of items in a shelf equals the area of the shelf.
If we treat each shelf as an item, we get a new instance $\widetilde{I}$.
\item Compute a packing of $\widetilde{I}$ into bins, after possibly slicing some items,
using $\greedyPack$.
\item Pack most of the items of $I$ into the shelves in the bins. We will prove that
the remaining items have very small area, so they can be packed separately using NFDH.
\end{enumerate}
\noindent \textbf{A. Item Classification and Rounding.}
Define $W := \{i \in I: h(i) \le \delta_H \}$ and $H := I - W$.
Items in $W$ are called \emph{wide} and items in $H$ are called \emph{tall}.
Let $W^{(L)} := \{i \in W: w(i) > \eps \}$ and $W^{(S)} := W - W^{(L)}$.
Similarly, let $H^{(L)} := \{i \in H: h(i) > \eps \}$ and $H^{(S)} := H - H^{(L)}$.
We will now use \emph{linear grouping}~\cite{bp-aptas,kenyon1996strip}
to round up the widths of items $W^{(L)}$ and the heights of items $H^{(L)}$
to get items $\What^{(L)}$ and $\Hhat^{(L)}$, respectively.
By \cref{thm:lingroup-n} in \cref{sec:lingroup},
items in $\What^{(L)}$ have at most $1/\eps^2$ distinct widths
and items in $\Hhat^{(L)}$ have at most $1/\eps^2$ distinct heights.
Let $\widehat{W} := \What^{(L)} \cup W^{(S)}$,
$\widehat{H} := \Hhat^{(L)} \cup H^{(S)}$,
and $\widehat{I} := \widehat{W} \cup \widehat{H}$.
Let $\fopt(\widehat{I})$ be the minimum number of bins needed to pack $\widehat{I}$,
where items in $\What^{(L)}$ can be sliced using horizontal cuts,
items in $\Hhat^{(L)}$ can be sliced using vertical cuts,
and items in $W^{(S)} \cup H^{(S)}$ can be sliced using both vertical and horizontal cuts.
Then the following lemma follows from \cref{thm:lingroup-repack} in \cref{sec:lingroup}.
\begin{lemma}
\label{thm:lingroup-opt-compare}
$\fopt(\widehat{I}) < (1+\eps)\opt(I) + 2$.
\end{lemma}
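For concreteness, here is a Python sketch of a standard way to realize this grouping (the precise scheme is \cref{thm:lingroup-n} in \cref{sec:lingroup}; the grouping rule below is our assumption): sort the wide items by decreasing width, cut the sorted list into groups of total height roughly $\eps^2 \hsum(W^{(L)})$, and round every width in a group up to the largest width of that group. Heights of the items in $H^{(L)}$ are rounded analogously.
\begin{verbatim}
def linear_group(items, eps):
    # items: (width, height) pairs for W^(L).  Returns rounded items
    # with at most roughly 1/eps^2 distinct widths.
    items = sorted(items, key=lambda it: -it[0])   # decreasing width
    total_h = sum(h for _, h in items)
    group_h = eps * eps * total_h                  # height budget per group
    rounded, acc, w_max = [], 0.0, items[0][0] if items else 0.0
    for w, h in items:
        if acc >= group_h:                         # start a new group
            acc, w_max = 0.0, w
        rounded.append((w_max, h))                 # round up to group max
        acc += h
    return rounded
\end{verbatim}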
\noindent \textbf{B. Creating Shelves.}
We will use ideas from Kenyon and R\'emila's 2SP algorithm~\cite{kenyon1996strip}
to pack $\widehat{I}$ into \emph{shelves}.
Roughly, we solve a linear program to compute an optimal strip packing of $\widehat{W}$,
where the packing is 3-stage. The first stage of cuts gives us shelves
and the second stage gives us containers.
From each shelf, we trim off space that doesn't belong to any container.
See \cref{sec:guill-thin-extra:shelves} for details.
Let $\widetilde{W}$ be the shelves thus obtained.
Analogously, we can pack items $\widehat{H}$ into shelves $\widetilde{H}$.
Shelves in $\widetilde{W}$ are called \emph{wide shelves}
and shelves in $\widetilde{H}$ are called \emph{tall shelves}.
Let $\widetilde{I} := \widetilde{W} \cup \widetilde{H}$.
We can interpret each shelf in $\widetilde{I}$ as a rectangular item.
We allow slicing $\widetilde{W}$ and $\widetilde{H}$ using horizontal cuts and vertical cuts, respectively.
In \cref{sec:guill-thin-extra:shelves}, we prove the following facts.%
\begin{restatable}{lemma}{rthmCreateShelves}
\label{thm:shelves}
$\widetilde{I}$ has the following properties:
(a) $|\widetilde{W}| \le 1+1/\eps^2$ and $|\widetilde{H}| \le 1+1/\eps^2$;
(b) Items in $\widetilde{W}$ have width more than $1/2$
and items in $\widetilde{H}$ have height more than $1/2$;
(c) $a(\widetilde{I}) = a(\widehat{I})$;
(d) $\max(\smallceil{\hsum(\widetilde{W})}, \smallceil{\wsum(\widetilde{H})}) \le \fopt(\widehat{I})$.
\end{restatable}
\noindent \textbf{C. Packing Shelves Into Bins.}
So far, we have packed $\widehat{I}$ into shelves $\widetilde{W}$ and $\widetilde{H}$.
We will now use $\greedyPack(\widetilde{W}, \widetilde{H})$ to pack the shelves into bins.
By \cref{thm:greedy-pack}, we get a 2-stage packing of $\widetilde{W} \cup \widetilde{H}$
into $m$ bins, where we make at most $m-1$ horizontal cuts in $\widetilde{W}$
and at most $m-1$ vertical cuts in $\widetilde{H}$.
The horizontal cuts (resp.~vertical cuts) increase the number of wide shelves
(resp.~tall shelves) from at most $1 + 1/\eps^2$
to at most $m + 1/\eps^2$.
By \cref{thm:greedy-pack-bins}, \cref{thm:shelves}(d)
and \cref{thm:shelves}(c), we get
$m \le \max\left(\smallceil{\hsum(\widetilde{W})}, \smallceil{\wsum(\widetilde{H})},
\frac{4}{3}a(\widetilde{I}) + \frac{8}{3}\right)
\le \frac{4}{3}\fopt(\widehat{I}) + \frac{8}{3}$.
\noindent \textbf{D. Packing Items Into Containers.}
So far, we have a packing of shelves into $m$ bins,
where the shelves contain slices of items $\widehat{I}$.
We will now repack a large subset of the items $\widehat{I}$ into the shelves
without slicing them. See \cref{fig:thin-gpack-output} for an example output.
We will do this using a standard greedy algorithm.
See \cref{sec:guill-thin-extra:pack-into-containers} for details of the
algorithm and proof of the following lemma.
\begin{figure}[htb]
\centering
\ifcsname myu\endcsname\else\newlength{\myu}\fi
\setlength{\myu}{0.3cm}
\tikzset{mytransform/.style={}}
\tikzset{sepline/.style={draw,thick}}
\tikzset{halfsepline/.style={draw}}
\tikzset{wShelf/.style={draw,thick}}
\tikzset{hShelf/.style={draw,thick}}
\tikzset{wItem/.style={draw,fill={black!30},very thin}}
\tikzset{hItem/.style={draw,fill={black!10},very thin}}
\tikzset{bin/.style={draw,thick}}
\tikzset{binGrid/.style={draw,step=1\myu,{black!20}}}
\begin{tikzpicture}[mytransform]
\path[hItem] (0.0\myu, 0.0\myu) rectangle +(1.0\myu, 1.0\myu);
\path[hItem] (1.0\myu, 0.0\myu) rectangle +(1.0\myu, 1.0\myu);
\path[hItem] (2.0\myu, 0.0\myu) rectangle +(0.8\myu, 0.9\myu);
\path[hItem] (0.0\myu, 1.0\myu) rectangle +(1.0\myu, 0.8\myu);
\path[hItem] (1.0\myu, 1.0\myu) rectangle +(0.8\myu, 0.7\myu);
\path[hItem] (1.8\myu, 1.0\myu) rectangle +(0.8\myu, 0.7\myu);
\path[hItem] (0.0\myu, 1.8\myu) rectangle +(0.7\myu, 0.7\myu);
\path[hItem] (0.7\myu, 1.8\myu) rectangle +(0.8\myu, 0.65\myu);
\path[hItem] (1.5\myu, 1.8\myu) rectangle +(0.6\myu, 0.6\myu);
\path[hItem] (2.1\myu, 1.8\myu) rectangle +(0.6\myu, 0.55\myu);
\path[hItem] (0.0\myu, 2.5\myu) rectangle +(0.7\myu, 0.5\myu);
\path[hItem] (0.7\myu, 2.5\myu) rectangle +(0.8\myu, 0.5\myu);
\path[hItem] (1.5\myu, 2.5\myu) rectangle +(0.7\myu, 0.5\myu);
\path[hItem] (2.2\myu, 2.5\myu) rectangle +(0.7\myu, 0.5\myu);
\path[hItem] (0.0\myu, 3.0\myu) rectangle +(0.6\myu, 0.5\myu);
\path[hItem] (0.6\myu, 3.0\myu) rectangle +(0.7\myu, 0.48\myu);
\path[hItem] (1.3\myu, 3.0\myu) rectangle +(0.5\myu, 0.46\myu);
\path[hItem] (1.8\myu, 3.0\myu) rectangle +(0.4\myu, 0.44\myu);
\path[hItem] (2.2\myu, 3.0\myu) rectangle +(0.5\myu, 0.42\myu);
\path[hItem] (0.0\myu, 3.5\myu) rectangle +(0.5\myu, 0.4\myu);
\path[hItem] (0.5\myu, 3.5\myu) rectangle +(0.5\myu, 0.37\myu);
\path[hItem] (1.0\myu, 3.5\myu) rectangle +(0.5\myu, 0.34\myu);
\path[hItem] (1.5\myu, 3.5\myu) rectangle +(0.5\myu, 0.31\myu);
\path[hItem] (2.0\myu, 3.5\myu) rectangle +(0.4\myu, 0.28\myu);
\path[hItem] (2.4\myu, 3.5\myu) rectangle +(0.4\myu, 0.25\myu);
\path[hItem] (0.0\myu, 4\myu) rectangle +(0.5\myu, 6\myu);
\path[hItem] (0.5\myu, 4\myu) rectangle +(0.3\myu, 6\myu);
\path[hItem] (0.8\myu, 4\myu) rectangle +(0.7\myu, 6\myu);
\path[hItem] (1.5\myu, 4\myu) rectangle +(0.5\myu, 6\myu);
\path[hItem] (2.0\myu, 4\myu) rectangle +(0.6\myu, 6\myu);
\path[hItem] (0.0\myu, 10\myu) rectangle +(0.4\myu, 10\myu);
\path[hItem] (0.4\myu, 10\myu) rectangle +(0.6\myu, 10\myu);
\path[hItem] (1.0\myu, 10\myu) rectangle +(0.3\myu, 10\myu);
\path[hItem] (1.3\myu, 10\myu) rectangle +(0.8\myu, 10\myu);
\path[hItem] (2.1\myu, 10\myu) rectangle +(0.6\myu, 10\myu);
\path[hItem] (3.0\myu, 5\myu) rectangle +(0.6\myu, 7\myu);
\path[hItem] (3.6\myu, 5\myu) rectangle +(0.6\myu, 7\myu);
\path[hItem] (4.2\myu, 5\myu) rectangle +(0.7\myu, 7\myu);
\path[hItem] (3.0\myu, 12\myu) rectangle +(0.4\myu, 8\myu);
\path[hItem] (3.4\myu, 12\myu) rectangle +(0.5\myu, 8\myu);
\path[hItem] (3.9\myu, 12\myu) rectangle +(0.4\myu, 8\myu);
\path[hItem] (4.3\myu, 12\myu) rectangle +(0.4\myu, 8\myu);
\path[wItem] (5\myu, 0.0\myu) rectangle +(5\myu, 0.5\myu);
\path[wItem] (5\myu, 0.5\myu) rectangle +(5\myu, 0.8\myu);
\path[wItem] (5\myu, 1.3\myu) rectangle +(5\myu, 0.6\myu);
\path[wItem] (10\myu, 0.0\myu) rectangle +(5\myu, 0.5\myu);
\path[wItem] (10\myu, 0.5\myu) rectangle +(5\myu, 0.6\myu);
\path[wItem] (10\myu, 1.1\myu) rectangle +(5\myu, 0.4\myu);
\path[wItem] (10\myu, 1.6\myu) rectangle +(5\myu, 0.4\myu);
\path[wItem] (15\myu, 0.0\myu) rectangle +(5\myu, 0.7\myu);
\path[wItem] (15\myu, 0.7\myu) rectangle +(5\myu, 0.8\myu);
\path[wItem] (15\myu, 1.5\myu) rectangle +(5\myu, 0.3\myu);
\path[wItem] (6\myu, 2.0\myu) rectangle +(7\myu, 0.5\myu);
\path[wItem] (6\myu, 2.5\myu) rectangle +(7\myu, 0.8\myu);
\path[wItem] (6\myu, 3.3\myu) rectangle +(7\myu, 0.5\myu);
\path[wItem] (6\myu, 3.8\myu) rectangle +(7\myu, 0.5\myu);
\path[wItem] (6\myu, 4.3\myu) rectangle +(7\myu, 0.4\myu);
\path[wItem] (13\myu, 2.0\myu) rectangle +(7\myu, 0.8\myu);
\path[wItem] (13\myu, 2.8\myu) rectangle +(7\myu, 0.9\myu);
\path[wItem] (13\myu, 3.7\myu) rectangle +(7\myu, 0.2\myu);
\path[wItem] (13\myu, 3.9\myu) rectangle +(7\myu, 0.5\myu);
\path[wItem] (13\myu, 4.4\myu) rectangle +(7\myu, 0.3\myu);
\path[wItem] (6\myu, 5.0\myu) rectangle +(6\myu, 0.6\myu);
\path[wItem] (6\myu, 5.6\myu) rectangle +(6\myu, 0.3\myu);
\path[wItem] (6\myu, 5.9\myu) rectangle +(6\myu, 0.9\myu);
\path[wItem] (12\myu, 5.0\myu) rectangle +(4\myu, 0.3\myu);
\path[wItem] (12\myu, 5.3\myu) rectangle +(4\myu, 0.7\myu);
\path[wItem] (12\myu, 6.0\myu) rectangle +(4\myu, 0.4\myu);
\path[wItem] (12\myu, 6.4\myu) rectangle +(4\myu, 0.2\myu);
\path[wItem] (16\myu, 5.0\myu) rectangle +(4\myu, 0.5\myu);
\path[wItem] (16\myu, 5.5\myu) rectangle +(4\myu, 0.8\myu);
\path[wItem] (16\myu, 6.3\myu) rectangle +(4\myu, 0.4\myu);
\path[wItem] (7\myu, 7.0\myu) rectangle +(4\myu, 0.5\myu);
\path[wItem] (7\myu, 7.5\myu) rectangle +(4\myu, 0.8\myu);
\path[wItem] (7\myu, 8.3\myu) rectangle +(4\myu, 0.6\myu);
\path[wItem] (7\myu, 8.9\myu) rectangle +(4\myu, 0.8\myu);
\path[wItem] (11\myu, 7.0\myu) rectangle +(5\myu, 0.5\myu);
\path[wItem] (11\myu, 7.5\myu) rectangle +(5\myu, 0.6\myu);
\path[wItem] (11\myu, 8.1\myu) rectangle +(5\myu, 0.4\myu);
\path[wItem] (11\myu, 8.5\myu) rectangle +(5\myu, 0.8\myu);
\path[wItem] (11\myu, 9.3\myu) rectangle +(5\myu, 0.2\myu);
\path[wItem] (11\myu, 9.5\myu) rectangle +(5\myu, 0.3\myu);
\path[wItem] (16\myu, 7.0\myu) rectangle +(4\myu, 0.7\myu);
\path[wItem] (16\myu, 7.7\myu) rectangle +(4\myu, 0.8\myu);
\path[wItem] (16\myu, 8.5\myu) rectangle +(4\myu, 0.3\myu);
\path[wItem] (16\myu, 8.8\myu) rectangle +(4\myu, 0.5\myu);
\path[wItem] (16\myu, 9.3\myu) rectangle +(4\myu, 0.4\myu);
\path[wItem] (8\myu, 10.0\myu) rectangle +(5\myu, 0.5\myu);
\path[wItem] (8\myu, 10.5\myu) rectangle +(5\myu, 0.8\myu);
\path[wItem] (8\myu, 11.3\myu) rectangle +(5\myu, 0.5\myu);
\path[wItem] (8\myu, 11.8\myu) rectangle +(5\myu, 0.5\myu);
\path[wItem] (8\myu, 12.3\myu) rectangle +(5\myu, 0.4\myu);
\path[wItem] (8\myu, 12.6\myu) rectangle +(5\myu, 0.4\myu);
\path[wItem] (8\myu, 13.0\myu) rectangle +(5\myu, 0.8\myu);
\path[wItem] (13\myu, 10.0\myu) rectangle +(7\myu, 0.8\myu);
\path[wItem] (13\myu, 10.8\myu) rectangle +(7\myu, 0.9\myu);
\path[wItem] (13\myu, 11.7\myu) rectangle +(7\myu, 0.2\myu);
\path[wItem] (13\myu, 11.9\myu) rectangle +(7\myu, 0.5\myu);
\path[wItem] (13\myu, 12.4\myu) rectangle +(7\myu, 0.3\myu);
\path[wItem] (13\myu, 12.7\myu) rectangle +(7\myu, 0.9\myu);
\path[wItem] (8\myu, 14.0\myu) rectangle +(6\myu, 0.5\myu);
\path[wItem] (8\myu, 14.5\myu) rectangle +(6\myu, 0.3\myu);
\path[wItem] (8\myu, 14.8\myu) rectangle +(6\myu, 0.7\myu);
\path[wItem] (14\myu, 14.0\myu) rectangle +(6\myu, 0.7\myu);
\path[wItem] (14\myu, 14.7\myu) rectangle +(6\myu, 0.6\myu);
\path[wItem] (14\myu, 15.3\myu) rectangle +(6\myu, 0.6\myu);
\path[wItem] (9\myu, 16.0\myu) rectangle +(6\myu, 0.7\myu);
\path[wItem] (9\myu, 16.7\myu) rectangle +(6\myu, 0.6\myu);
\path[wItem] (9\myu, 17.3\myu) rectangle +(6\myu, 0.7\myu);
\path[wItem] (15\myu, 16.0\myu) rectangle +(5\myu, 0.6\myu);
\path[wItem] (15\myu, 16.6\myu) rectangle +(5\myu, 0.8\myu);
\path[wItem] (15\myu, 17.4\myu) rectangle +(5\myu, 0.2\myu);
\path[wItem] (10\myu, 18.0\myu) rectangle +(10\myu, 0.6\myu);
\path[wItem] (10\myu, 18.6\myu) rectangle +(10\myu, 0.7\myu);
\path[wItem] (10\myu, 19.3\myu) rectangle +(10\myu, 0.4\myu);
\path[sepline] (0\myu, 10\myu) -- +(3\myu, 0);
\path[halfsepline] (0\myu, 1.0\myu) -- +(3\myu, 0);
\path[halfsepline] (0\myu, 1.8\myu) -- +(3\myu, 0);
\path[halfsepline] (0\myu, 2.5\myu) -- +(3\myu, 0);
\path[halfsepline] (0\myu, 3.0\myu) -- +(3\myu, 0);
\path[halfsepline] (0\myu, 3.5\myu) -- +(3\myu, 0);
\path[sepline] (0\myu, 4\myu) -- +(3\myu, 0);
\path[sepline] (3\myu, 12\myu) -- +(2\myu, 0);
\path[sepline] (10\myu, 0\myu) -- +(0, 2\myu);
\path[sepline] (15\myu, 0\myu) -- +(0, 2\myu);
\path[sepline] (13\myu, 2\myu) -- +(0, 3\myu);
\path[sepline] (12\myu, 5\myu) -- +(0, 2\myu);
\path[sepline] (16\myu, 5\myu) -- +(0, 2\myu);
\path[sepline] (11\myu, 7\myu) -- +(0, 3\myu);
\path[sepline] (16\myu, 7\myu) -- +(0, 3\myu);
\path[sepline] (13\myu, 10\myu) -- +(0, 4\myu);
\path[sepline] (14\myu, 14\myu) -- +(0, 2\myu);
\path[sepline] (15\myu, 16\myu) -- +(0, 2\myu);
\path[wShelf] (5\myu, 0\myu) rectangle +(15\myu, 2\myu);
\path[wShelf] (6\myu, 2\myu) rectangle +(14\myu, 3\myu);
\path[wShelf] (6\myu, 5\myu) rectangle +(14\myu, 2\myu);
\path[wShelf] (7\myu, 7\myu) rectangle +(13\myu, 3\myu);
\path[wShelf] (8\myu, 10\myu) rectangle +(12\myu, 4\myu);
\path[wShelf] (8\myu, 14\myu) rectangle +(12\myu, 2\myu);
\path[wShelf] (9\myu, 16\myu) rectangle +(11\myu, 2\myu);
\path[wShelf] (10\myu, 18\myu) rectangle +(10\myu, 2\myu);
\path[hShelf] (0\myu, 0\myu) rectangle +(3\myu, 20\myu);
\path[hShelf] (3\myu, 5\myu) rectangle +(2\myu, 15\myu);
\draw[semithick,dashed] (5\myu, 0\myu) -- +(0\myu, 20\myu);
\path[bin] (0\myu, 0\myu) rectangle (20\myu, 20\myu);
\end{tikzpicture}
\caption[A type-1 bin in the packing of $\widehat{I}$ computed by $\thinGPack$.]%
{A type-1 bin in the packing of $\widehat{I}$ computed by $\thinGPack$.
The packing contains 5 tall containers in 2 tall shelves
and 18 wide containers in 8 wide shelves.}%
\label{fig:thin-gpack-output}
\end{figure}
\begin{restatable}{lemma}{rthmDiscardAreaUb}
\label{thm:discard-area-ub}
Let $P$ be a packing of $\widetilde{I}$ into $m$ bins, where we sliced wide shelves by making
at most $m-1$ horizontal cuts and sliced tall shelves by making at most $m-1$ vertical cuts.
Then we can (non-fractionally) pack a large subset of items $\widehat{I}$
into the shelves in $P$ such that
the unpacked items (also called \emph{discarded items}) from $\widehat{W}$ have area less than
$\eps \hsum(\widetilde{W}) + \delta_H(1 + \eps)(m + 1/\eps^2)$,
and the unpacked items from $\widehat{H}$ have area less than
$\eps \wsum(\widetilde{H}) + \delta_W(1 + \eps)(m + 1/\eps^2)$.
\end{restatable}
We will pack the wide discarded items into new bins using NFDH
and pack the tall discarded items into new bins using NFDW.
Finally, we prove the performance guarantee of $\thinGPack_{\eps}(I)$.
\begin{lemma}
\label{thm:thin-gpack-strong}
Let $I$ be a set of $(\delta_W, \delta_H)$-skewed items.
Then $\thinGPack_{\eps}(I)$ outputs a 4-stage packing of $I$
in time $O((1/\eps)^{O(1/\eps)} + n\log n)$
and uses less than $\alpha(1+\eps)\opt(I) + 2\beta$ bins, where
$\Delta := \frac{1}{2}\left(\frac{\delta_H}{1-\delta_H}
+ \frac{\delta_W}{1-\delta_W}\right),\;
\alpha := \frac{4}{3}(1+4\Delta)(1+3\eps),\;
\beta := \frac{2\Delta(1+\eps)}{\eps^2} + \frac{10}{3}
+ \frac{19\Delta}{3} + \frac{16\Delta\eps}{3}$.
\end{lemma}
\begin{proof}
The discarded items are packed using NFDH or NFDW, which output a 2-stage packing.
Since $\greedyPack$ outputs a 2-stage packing of the shelves
and the packing of items into the shelves is a 2-stage packing,
the bin packing of non-discarded items is a 4-stage packing.
The time taken by $\thinGPack$ is at most $O((1/\eps)^{O(1/\eps)} + n\log n)$.
Suppose $\greedyPack$ uses at most $m$ bins. Then by \cref{thm:greedy-pack-bins},
$m \le 4\fopt(\widehat{I})/3 + 8/3$.
Let $W^d$ and $H^d$ be the items discarded from $W$ and $H$, respectively.
By \cref{thm:discard-area-ub} and \cref{thm:shelves}(d),
$a(W^d) < \eps\fopt(\widehat{I}) + \delta_H(1 + \eps)(m + 1/\eps^2)$
and $a(H^d) < \eps\fopt(\widehat{I}) + \delta_W(1 + \eps)(m + 1/\eps^2)$.
By \cref{thm:nfdh-wide}, the number of bins used by $\thinGPack_{\eps}(I)$ is less than
$m + \frac{2a(W^d)+1}{1-\delta_H} + \frac{2a(H^d)+1}{1-\delta_W}
\le (1 + 4\Delta(1+\eps))m + 4\eps(1+\Delta)\fopt(\widehat{I})
+ 2(1+\Delta) + 4\Delta(1+\eps)/\eps^2
\le \alpha\fopt(\widehat{I}) + 2(\beta - 1)
< \alpha(1+\eps)\opt(I) + 2\beta.$
The last inequality follows from \cref{thm:lingroup-opt-compare}.
\end{proof}
Now we conclude with the proof of \cref{thm:thin-gpack}.
\begin{proof}[Proof of \cref{thm:thin-gpack}]
This is a simple corollary of \cref{thm:thin-gpack-strong}, where
$\delta \le 1/2$ gives us $\Delta \le 2\delta$,
$\alpha(1+\eps) \le (4/3)(1+8\delta)(1+7\eps)$,
and $\beta \le 4/\eps^2 + 15$.
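To spell out the first of these bounds: the map $x \mapsto x/(1-x)$ is increasing on $[0,1)$, and in the setting of \cref{thm:thin-gpack} we have $\delta_W, \delta_H \le \delta$, so for $\delta \le 1/2$,
\[ \Delta = \frac{1}{2}\left(\frac{\delta_H}{1-\delta_H} + \frac{\delta_W}{1-\delta_W}\right)
\le \frac{\delta}{1-\delta} \le 2\delta. \]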
\end{proof}
\section{Almost-Optimal Bin Packing of Skewed{} Rectangles}
\label{sec:thin-bp}
In this section, we describe the algorithm $\thinCPack$, which takes as input
a set $I$ of items and a parameter $\eps \in (0, 1/2]$, where $\eps^{-1} \in \mathbb{Z}$.
We will prove that $\thinCPack$ has AAR $1+20\eps$ when $\delta$ is sufficiently small.
$\thinCPack$ works roughly as follows:
\begin{enumerate}
\item Invoke the subroutine $\round(I)$ (described in \cref{sec:thin-bp:round}).
$\round(I)$ returns a pair $(\widetilde{I}, I_{\mathrm{med}})$.
Here $I_{\mathrm{med}}$, called the set of \emph{medium items}, has low total area,
so we can pack it in a small number of bins.
$\widetilde{I}$, called the set of \emph{rounded items}, is obtained by
rounding up the width or height of each item in $I - I_{\mathrm{med}}$,
so that $\widetilde{I}$ has special properties that help us pack it easily.
\item Compute the optimal \emph{fractional compartmental} bin packing of $\widetilde{I}$
(we will define \emph{compartmental} and \emph{fractional} later).
\item Use this packing of $\widetilde{I}$ to obtain a packing of $I$
that uses only slightly more bins.
\end{enumerate}
To bound the AAR of $\thinCPack$, we will prove a structural theorem
in \cref{sec:thin-bp:struct}, i.e., we will prove that
the optimal fractional compartmental packing of $\widetilde{I}$ uses close to $\opt(I)$ bins.
\subsection{Classifying and Rounding Items}
\label{sec:thin-bp:round}
\label{sec:thin-bp:remmed}
We now describe the algorithm $\round$ and
show that its output satisfies important properties.
First, we will find a set $I_{\mathrm{med}} \subseteq I$
and positive constants $\eps_1$ and $\eps_2$
such that $a(I_{\mathrm{med}}) \le \eps a(I)$, $\eps_2 \ll \eps_1$,
and $I - I_{\mathrm{med}}$ is $(\eps_2, \eps_1]$-free, i.e.,
no item in $I - I_{\mathrm{med}}$ has its width or height in the interval $(\eps_2, \eps_1]$.
Then we can remove $I_{\mathrm{med}}$ from $I$ and pack it separately
into a small number of bins using NFDH. We will see that
the $(\eps_2, \eps_1]$-freeness of $I - I_{\mathrm{med}}$
will help us pack $I - I_{\mathrm{med}}$ efficiently.
Specifically, we require $\eps_1 \le \eps$, $\eps_1^{-1} \in \mathbb{Z}$,
and $\eps_2 = f(\eps_1)$, where $f(x) := \frac{\eps x}{104(1+1/(\eps x))^{2/x-2}}$.
We explain this choice of $f$ in \cref{sec:thinCPack}.
Intuitively, such an $f$ ensures that $\eps_2 \ll \eps_1$
and $\eps_2^{-1} \in \mathbb{Z}$.
For $\thinCPack$ to work, we require $\delta \le \eps_2$.
Finding such an $I_{\mathrm{med}}$ and $\eps_1$ is a standard technique \cite{JansenP2013, bansal2014binpacking},
so we defer the details to \cref{sec:thin-bp-extra:remmed}.
Next, we classify the items in $I - I_{\mathrm{med}}$ into three disjoint classes:
\begin{itemize}[noitemsep]
\item Wide items: $W := \{i \in I - I_{\mathrm{med}}: w(i) > \eps_1 \textrm{ and } h(i) \le \eps_2 \}$.
\item Tall items: $H := \{i \in I - I_{\mathrm{med}}: w(i) \le \eps_2 \textrm{ and } h(i) > \eps_1 \}$.
\item Small items: $S := \{i \in I - I_{\mathrm{med}}: w(i) \le \eps_2 \textrm{ and } h(i) \le \eps_2 \}$.
\end{itemize}
We will now use \emph{linear grouping}~\cite{bp-aptas,kenyon1996strip}
to round up the widths of items $W$ and the heights of items $H$
to get items $\widetilde{W}$ and $\widetilde{H}$, respectively.
By \cref{thm:lingroup-n} in \cref{sec:lingroup},
items in $\widetilde{W}$ have at most $1/(\eps\eps_1)$ distinct widths
and items in $\widetilde{H}$ have at most $1/(\eps\eps_1)$ distinct heights.
Let $\widetilde{I} := \widetilde{W} \cup \widetilde{H} \cup S$.
\begin{definition}[Fractional packing]
Suppose we are allowed to slice wide items in $\widetilde{I}$ using horizontal cuts,
slice tall items in $\widetilde{I}$ using vertical cuts and slice
small items in $\widetilde{I}$ using both horizontal and vertical cuts.
For any $\widetilde{X} \subseteq \widetilde{I}$, a bin packing of the slices of $\widetilde{X}$
is called a \emph{fractional packing} of $\widetilde{X}$.
The optimal fractional packing of $\widetilde{X}$ is denoted by $\fopt(\widetilde{X})$.
\end{definition}
\begin{lemma}
\label{thm:thin-bp:lingroup-opt-compare}
$\fopt(\widetilde{I}) < (1+\eps)\opt(I) + 2$.
\end{lemma}
\begin{proof}
Directly follows from \cref{thm:lingroup-repack} in \cref{sec:lingroup}.
\end{proof}
\subsection{Structural Theorem}
\label{sec:thin-bp:struct}
We will now define compartmental packing
and prove the structural theorem, which says that
the number of bins in the optimal fractional compartmental packing of $\widetilde{I}$
is roughly equal to $\fopt(\widetilde{I})$.
We first show how to \emph{discretize} a packing, i.e.,
we show that given a fractional packing of items in a bin,
we can remove a small fraction of tall and small items
and shift the remaining items leftwards so that the left and right edges
of each wide item belong to a constant-sized set $\mathcal{T}$,
where $|\mathcal{T}| \le (1+1/\eps\eps_1)^{2/\eps_1 - 2}$.
Next, we define \emph{compartmental} packing and show how to convert
a discretized packing to a compartmental packing.
For any rectangle $i$ packed in a bin, let $x_1(i)$ and $x_2(i)$ denote the $x$-coordinates
of its left and right edges, respectively, and let $y_1(i)$ and $y_2(i)$
denote the $y$-coordinates of its bottom and top edges, respectively.
Let $R$ be the set of distinct widths of items in $\widetilde{W}$.
Given the way we rounded items, $|R| \le 1/\eps\eps_1$.
Recall that $\eps_1 \le \eps \le 1/2$ and $\eps_1^{-1}, \eps^{-1} \in \mathbb{Z}$.
\begin{theorem}
\label{thm:disc-hor-pos}
Given a fractional packing of items $\widetilde{J} \subseteq \widetilde{I}$ into a bin,
we can remove tall and small items of total area less than $\eps$
and shift some of the remaining items to the left such that for every wide item $i$,
we get $x_1(i), x_2(i) \in \mathcal{T}$.
\end{theorem}
\begin{proof}
For wide items $u$ and $v$ in the bin, we say that $u \prec v$ iff
the right edge of $u$ is to the left of the left edge of $v$.
Formally $u \prec v \iff x_2(u) \le x_1(v)$.
We call $u$ a \emph{predecessor} of $v$.
A sequence $[i_1, i_2, \ldots, i_k]$ such that $i_1 \prec i_2 \prec \ldots \prec i_k$
is called a \emph{chain} ending at $i_k$.
For a wide item $i$, define $\level(i)$ as the number of items in the longest chain
ending at $i$. Formally, $\level(i) := 1$ if $i$ has no predecessors,
and $\left(1 + \max_{j \prec i} \level(j)\right)$ otherwise.
Let $W_j$ be the items at level $j$, i.e., $W_j := \{i: \level(i) = j\}$.
Note that the level of an item can be at most $1/\eps_1-1$,
since each wide item has width more than $\eps_1$.
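The level computation is easy to implement. The following Python sketch (names are ours) uses the fact that if $j \prec i$ then $x_2(j) \le x_1(i) < x_2(i)$, so scanning items in increasing order of right edge finalizes every predecessor before it is needed.
\begin{verbatim}
def levels(items):
    # items: list of (x1, x2) horizontal extents of the wide items;
    # u precedes v iff x2(u) <= x1(v).
    order = sorted(range(len(items)), key=lambda i: items[i][1])
    lvl = {}
    for i in order:
        preds = [lvl[j] for j in lvl if items[j][1] <= items[i][0]]
        lvl[i] = 1 + max(preds, default=0)
    return [lvl[i] for i in range(len(items))]
\end{verbatim}
Run on the six items of \cref{fig:precedence-graph}, with extents read off the picture, it returns the levels listed in the caption.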
\begin{figure}[htb]
\centering
\ifcsname pGameL\endcsname\else\newlength{\pGameL}\fi
\setlength{\pGameL}{0.2cm}
\tikzset{bin/.style={draw,thick}}
\tikzset{item/.style={draw,fill={black!15}}}
\tikzset{myarrow/.style={draw,->,>={Stealth}}}
\tikzset{mynode/.style={pos=0.5,inner sep=0,minimum size=0.45cm,shape=circle,semithick,draw}}
\tikzset{cutline/.style={draw,dashed}}
\begin{tikzpicture}
\path[cutline] (2\pGameL, 0\pGameL) -- +(0, 20\pGameL);
\path[cutline] (5\pGameL, 0\pGameL) -- +(0, 20\pGameL);
\path[cutline] (7\pGameL, 0\pGameL) -- +(0, 20\pGameL);
\path[cutline] (9\pGameL, 0\pGameL) -- +(0, 20\pGameL);
\path[cutline] (14\pGameL, 0\pGameL) -- +(0, 20\pGameL);
\path[cutline] (16\pGameL, 0\pGameL) -- +(0, 20\pGameL);
\path[item] (0\pGameL, 15\pGameL) rectangle +(7\pGameL, 2\pGameL) node[mynode] (wa) {$a$};
\path[item] (5\pGameL, 11\pGameL) rectangle +(9\pGameL, 2\pGameL) node[mynode] (wb) {$b$};
\path[item] (16\pGameL, 10\pGameL) rectangle +(4\pGameL, 2\pGameL) node[mynode] (wc) {$c$};
\path[item] (9\pGameL, 18\pGameL) rectangle +(5\pGameL, 2\pGameL) node[mynode] (wd) {$d$};
\path[item] (2\pGameL, 6\pGameL) rectangle +(7\pGameL, 2\pGameL) node[mynode] (we) {$e$};
\path[item] (9\pGameL, 2\pGameL) rectangle +(11\pGameL, 2\pGameL) node[mynode] (wf) {$f$};
\path[bin] (0\pGameL, 0\pGameL) rectangle (20\pGameL, 20\pGameL);
\path[myarrow] (wa) -- (wc);
\path[myarrow] (wa) -- (wd);
\path[myarrow] (wa) -- (wf);
\path[myarrow] (wb) -- (wc);
\path[myarrow] (wd) -- (wc);
\path[myarrow] (we) -- (wc);
\path[myarrow] (we) -- (wd);
\path[myarrow] (we) -- (wf);
\end{tikzpicture}
\caption[Relation $\prec$ among items in a bin]%
{Example illustrating the $\prec$ relationship between wide items in a bin.
An edge is drawn from $u$ to $v$ iff $u \prec v$.
Here $W_1 = \{a, e, b\}$, $W_2 = \{d, f\}$ and $W_3 = \{c\}$.}
\label{fig:precedence-graph}
\end{figure}
We will describe an algorithm for discretization.
But first, we need to introduce two recursively-defined set families
$(S_1, S_2, \ldots)$ and $(T_0, T_1, \ldots)$.
Let $T_0 := \{0\}$ and $t_0 := 1$. For any $j > 0$, define
$t_j := (1+1/\eps\eps_1)^{2j},\, \delta_j := \eps\eps_1/t_{j-1},\,
S_j := T_{j-1} \cup \{k\delta_j: k \in \mathbb{Z}, 0 \le k < 1/\delta_j\},\,
T_j := \{x + y: x \in S_j, y \in R \cup \{0\}\}$.
Note that $\forall j > 0$, we have $T_{j-1} \subseteq S_j \subseteq T_j$
and $\delta_j^{-1} \in \mathbb{Z}$.
Define $\mathcal{T} := T_{1/\eps_1-1}$.
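The recursion is easy to materialize in exact rational arithmetic; the following Python sketch (names are ours) also lets one verify the bound $|T_j| \le t_j$ proved below.
\begin{verbatim}
from fractions import Fraction as F

def build_T(eps, eps1, R):
    # eps, eps1: Fractions with integer reciprocals;
    # R: set of distinct rounded widths (Fractions).
    T, t = {F(0)}, F(1)                # T_0 and t_0
    for j in range(1, int(1 / eps1)):  # stages j = 1, ..., 1/eps1 - 1
        delta = eps * eps1 / t         # delta_j; 1/delta_j is integral
        S = T | {k * delta for k in range(int(1 / delta))}
        T = {x + y for x in S for y in (R | {F(0)})}
        t = (1 + 1 / (eps * eps1)) ** 2 * t   # t_j
        assert len(T) <= t
    return T                           # the set script-T

# e.g. build_T(F(1, 2), F(1, 2), {F(3, 4)}) has 7 elements.
\end{verbatim}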
Our discretization algorithm proceeds in stages, where in the $j^{\textrm{th}}$ stage,
we apply two transformations to the items in the bin,
called \emph{strip-removal} and \emph{compaction}.
\\ \textbf{Strip-removal}: For each $x \in T_{j-1}$,
consider a strip of width $\delta_j$ and height 1 in the bin
whose left edge has coordinate $x$.
Discard the slices of tall and small items inside the strips.
\\ \textbf{Compaction}: Move all tall and small items as much towards the left as possible
(imagine a gravitational force acting leftwards on the tall and small items)
while keeping the wide items fixed.
Then move each wide item $i \in W_j$ leftwards till $x_1(i) \in S_j$.
Observe that the algorithm maintains the following invariant:
\textsl{after $k$ stages, for each $j \in [k]$,
each item $i \in W_j$ has $x_1(i) \in S_j$ (and hence $x_2(i) \in T_j$).}
This ensures that after the algorithm ends, $x_1(i), x_2(i) \in \mathcal{T}$.
All that remains to prove is that the total area of items discarded
during strip-removal is at most $\eps$ and that compaction is always possible.
\begin{lemma}
For all $j \ge 0$, $|T_j| \le t_j$.
\end{lemma}
\begin{proof}
We will prove this by induction. The base case holds because $|T_0| = t_0 = 1$.
Now assume $|T_{j-1}| \le t_{j-1}$. Then
$|T_j| \le (|R|+1)|S_j|
\le \left(\frac{1}{\eps\eps_1}+1\right)\left(|T_{j-1}| + \frac{1}{\delta_j}\right)
\le \left(\frac{1}{\eps\eps_1}+1\right)^2 t_{j-1}
= t_j.$
\end{proof}
Therefore, $|\mathcal{T}| \le t_{1/\eps_1-1} = (1+1/\eps\eps_1)^{2/\eps_1 - 2}$.
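For concreteness, for $\eps = \eps_1 = 1/2$ this evaluates to $|\mathcal{T}| \le (1+4)^2 = 25$.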
\begin{lemma}
Items discarded by strip-removal (across all stages)
have total area less than $\eps$.
\end{lemma}
\begin{proof}
In the $j^{\textrm{th}}$ stage, we create $|T_{j-1}|$ strips,
and the items discarded from each strip have total area at most $\delta_j$, the area of the strip.
Therefore, the area discarded in the $j^{\textrm{th}}$ stage is at most
$|T_{j-1}|\delta_j \le t_{j-1}\delta_j = \eps\eps_1$.
Since there are at most $1/\eps_1-1$ stages, the total discarded area
is at most $(1/\eps_1-1)\eps\eps_1 = \eps(1-\eps_1) < \eps$.
\end{proof}
\begin{lemma}
Compaction always succeeds, i.e., in the $j^{\textrm{th}}$ stage, while moving
item $i \in W_j$ leftwards, no other item will block its movement.
\end{lemma}
\begin{proof}
Let $i \in W_j$. Let $z$ be the $x$-coordinate of the left edge of the strip
immediately to the left of item $i$, i.e.,
$z := \max(\{x \in T_{j-1}: x \le x_1(i)\})$.
For any wide item $i'$, we have $x_2(i') \le x_1(i) \iff i' \prec i \iff \level(i') \le j-1$.
By our invariant, we get
$\level(i') \le j-1 \implies x_2(i') \in T_{j-1} \implies x_2(i') \le z$.
Therefore, for every wide item $i'$, $x_2(i') \not\in (z, x_1(i)]$.
In the $j^{\textrm{th}}$ strip-removal, we cleared the strip $[z, z+\delta_j] \times [0, 1]$.
If $x_1(i) \in [z, z+\delta_j]$, then $i$ can freely move to $z$,
and $z \in T_{j-1} \subseteq S_j$.
Since no wide item has its right edge in $(z, x_1(i)]$, if $x_1(i) > z + \delta_j$,
all the tall and small items whose left edge lies in $[z+\delta_j, x_1(i)]$
will move leftwards by at least $\delta_j$ during compaction.
Hence, there would be an empty space of width at least $\delta_j$
to the left of item $i$
(see \cref{fig:compaction-zoom}).
Therefore, we can move $i$ leftwards to make
$x_1(i)$ a multiple of $\delta_j$, and then $x_1(i)$ would belong to $S_j$.
\end{proof}
\begin{figure}[!htb]
\centering
\ifcsname myu\endcsname\else\newlength{\myu}\fi
\setlength{\myu}{0.6cm}
\tikzset{mypic/.pic={
\path[item] (3\myu, 1\myu) rectangle (7\myu, 1.5\myu);
\path[item-boundary] (7\myu, 1\myu) -- (3\myu, 1\myu) -- (3\myu, 1.5\myu) -- (7\myu, 1.5\myu);
\path[item] (4.8\myu, 2\myu) rectangle (7\myu, 2.5\myu);
\path[item-boundary] (7\myu, 2\myu) -- (4.8\myu, 2\myu) -- (4.8\myu, 2.5\myu) -- (7\myu, 2.5\myu);
\path[item] (4\myu, 3\myu) rectangle (7\myu, 3.5\myu) node[pos=0.5] {$i$};
\path[item-boundary] (7\myu, 3\myu) -- (4\myu, 3\myu) -- (4\myu, 3.5\myu) -- (7\myu, 3.5\myu);
\path[item] (-1\myu, 2\myu) rectangle (0\myu, 3\myu);
\path[item-boundary] (-1\myu, 2\myu) -- (0\myu, 2\myu) -- (0\myu, 3\myu) -- (-1\myu, 3\myu);
\path[item] (-1\myu, 4.5\myu) rectangle (6\myu, 5\myu) node[pos=0.5] {$k$};
\path[item-boundary] (-1\myu, 4.5\myu) -- (6\myu, 4.5\myu) -- (6\myu, 5\myu) -- (-1\myu, 5\myu);
\draw[thick]
(-1\myu, 0\myu) -- (7\myu, 0\myu)
(-1\myu, 6\myu) -- (7\myu, 6\myu);
\node[anchor=east] at (-1\myu, 3\myu) {$\cdots$};
\node[anchor=west] at (7\myu, 3\myu) {$\cdots$};
\draw[dashed,semithick]
(0\myu, 0\myu) -- (0\myu, 6\myu)
(4\myu, 0\myu) -- (4\myu, 6\myu)
(6\myu, 3.5\myu) -- (6\myu, 6\myu);
\node[anchor=south] at (0\myu, 6\myu) {$z$};
\node[anchor=south] at (4\myu, 6\myu) {$x_1(i)$};
\node at (6.5\myu, 4.75\myu) {$C$};
}}
\begin{tikzpicture}[
item-boundary/.style={draw,semithick},
item/.style={fill={black!25}},
myarrow/.style={->,>={Stealth},thick},
mybrace/.style = {decoration={amplitude=5pt,brace,mirror,raise=1pt},semithick,decorate},
]
\begin{scope}
\path[fill={black!10}]
(-1\myu, 0\myu) rectangle (0\myu, 6\myu)
(1\myu, 0\myu) rectangle (7\myu, 6\myu);
\draw[very thin] (0\myu, 0\myu) -- (0\myu, 6\myu) (1\myu, 0\myu) -- (1\myu, 6\myu);
\draw[mybrace] (0\myu, 0\myu) -- node[below=5pt] {$\delta_j$} (1\myu, 0\myu);
\pic at (0\myu, 0\myu) {mypic};
\end{scope}
\draw[myarrow] (9\myu, 3\myu) -- (13\myu, 3\myu)
node[pos=0.5,anchor=north,align=center,text width=3cm]
{shift tall and small items leftwards by $\delta_j$};
\begin{scope}[xshift={9.5cm}]
\path[fill={black!10}] (-1\myu, 0\myu) rectangle (7\myu, 6\myu);
\draw[very thin,fill=white]
(2\myu, 1\myu) rectangle (3\myu, 1.5\myu)
(3.8\myu, 2\myu) rectangle (4.8\myu, 2.5\myu)
(3\myu, 3\myu) rectangle (4\myu, 3.5\myu);
\pic at (0\myu, 0\myu) {mypic};
\draw[mybrace] (2\myu, 1\myu) -- node[below=5pt] {$\delta_j$} (3\myu, 1\myu);
\end{scope}
\end{tikzpicture}
\caption[Creation of empty space during compaction.]%
{This figure shows a region in the bin in the vicinity of item $i \in W_j$.
It illustrates how shifting tall and small items during compaction in the $j^{\textrm{th}}$ stage
creates a free space of width $\delta_j$ to the left of some wide items, including $i$.
Wide items are shaded dark and the lightly shaded region
potentially contains tall and small items.
Note that some tall and small items in the region $C$
may be unable to shift left because item $k$ is blocking them.
All other tall and small items in this figure to the right of $z$
can shift left by $\delta_j$.}
\label{fig:compaction-zoom}
\end{figure}
Hence, compaction always succeeds and we get $x_1(i), x_2(i) \in \mathcal{T}$
for each wide item $i$.
\end{proof}
\begin{definition}[Compartmental packing]
\label{defn:thin-bp:compartmental}
Consider a bin with some items packed into it. A \emph{compartment} $C$
is defined as a rectangular region in the bin satisfying the following properties:
\begin{itemize}[noitemsep]
\item $x_1(C), x_2(C) \in \mathcal{T}$.
\item $y_1(C), y_2(C)$ are multiples of $\eps_{\mathrm{cont}} := \eps\eps_1/(6|\mathcal{T}|)$.
\item $C$ does not contain both wide items and tall items.
\item If $C$ contains tall items, then $x_1(C)$ and $x_2(C)$
are consecutive values in $\mathcal{T}$.
\end{itemize}
If a compartment contains a wide item, it is called a \emph{wide compartment}.
Otherwise it is called a \emph{tall compartment}.
A packing of items into a bin is called \emph{compartmental}
iff there is a set of non-overlapping \emph{compartments} in the bin
such that each wide or tall item lies completely inside some compartment,
and there are at most $n_W := 3(1/\eps_1-1)|\mathcal{T}| + 1$ wide compartments
and at most $n_H := (1/\eps_1-1)|\mathcal{T}|$ tall compartments in the bin.
A packing of items into multiple bins is called compartmental iff
each bin is compartmental.
\end{definition}
Note that small items can be packed both inside and outside compartments.
The following two results are proved in \cref{sec:thin-bp-extra:compartmentalize}
using standard techniques.
\begin{restatable}{lemma}{rthmCompartmentalize}
\label{thm:thin-bp:compartmentalize}
Suppose $x_1(i), x_2(i) \in \mathcal{T}$ for each wide item $i$ in a bin.
Then by removing wide and small items of area less than $\eps$,
we can get a compartmental packing of the remaining items.
\end{restatable}
\begin{restatable}{theorem}{rthmStruct}
\label{thm:struct}
For a set $\widetilde{I}$ of $\delta$-skewed{} rounded items,
define $\fcopt(\widetilde{I})$ as the number of bins in
the optimal fractional compartmental packing%
\footnote{A \emph{fractional compartmental} packing of $\widetilde{I}$ is a fractional packing
of $\widetilde{I}$ that is also compartmental.}\!
of $\widetilde{I}$.
Then $\fcopt(\widetilde{I}) < (1+4\eps)\fopt(\widetilde{I}) + 2$.
\end{restatable}
\subsection{Packing Algorithm}
\label{sec:thin-bp:algo}
We now describe the $\thinCPack$ algorithm for packing a set $I$
of $\delta$-skewed{} items.
Roughly, $\thinCPack$ first computes $(\widetilde{I}, I_{\mathrm{med}}) := \round(I)$.
It then computes the optimal fractional compartmental packing of $\widetilde{I}$
by first guessing a packing of empty compartments into bins
and then fractionally packing the wide and tall items into the compartments
using a linear program.
It then converts the fractional packing of $\widetilde{I}$ to a non-fractional packing of $I$
with only a tiny increase in the number of bins.
See \cref{fig:thincpack} for a visual overview of $\thinCPack$.
We defer the details to \cref{sec:enum-configs,sec:feas-lp,sec:greedy-cont,sec:thinCPack}
and simply state the final result.
\begin{restatable}{theorem}{rthmThinCPack}
The number of bins used by $\thinCPack_{\eps}(I)$ is less than
\[ (1+20\eps)\opt(I) +
\frac{1}{13}\left(1 + \frac{1}{\eps\eps_1}\right)^{2/\eps_1 - 2} + 23. \]
\end{restatable}
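For illustration, if $\round$ returns $\eps_1 = \eps = 1/2$, the additive term above evaluates to $25/13 + 23 < 25$.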
\begin{figure}[htb]
\hbadness=10000
\begin{subfigure}[t]{0.3\textwidth}
\centering
\tikzset{compartment/.style={draw,thick,fill={black!10}},
container/.style={},item/.style={}}
\ifcsname myu\endcsname\else\newlength{\myu}\fi
\setlength{\myu}{0.8cm}
\tikzset{pic1/.pic={
\path[item]
(0.00\myu, 0\myu) rectangle +(0.09\myu, 0.6\myu)
(0.09\myu, 0\myu) rectangle +(0.10\myu, 0.6\myu)
(0.19\myu, 0\myu) rectangle +(0.08\myu, 0.6\myu)
(0.27\myu, 0\myu) rectangle +(0.08\myu, 0.6\myu);
\path[item]
(0\myu, 0.6\myu) rectangle +(0.10\myu, 0.4\myu)
(0.10\myu, 0.6\myu) rectangle +(0.07\myu, 0.4\myu)
(0.17\myu, 0.6\myu) rectangle +(0.07\myu, 0.4\myu)
(0.24\myu, 0.6\myu) rectangle +(0.08\myu, 0.4\myu)
(0.32\myu, 0.6\myu) rectangle +(0.07\myu, 0.4\myu);
\path[item]
(0.40\myu, 0\myu) rectangle +(0.11\myu, 0.5\myu)
(0.50\myu, 0\myu) rectangle +(0.09\myu, 0.5\myu)
(0.59\myu, 0\myu) rectangle +(0.09\myu, 0.5\myu);
\path[item]
(0.40\myu, 0.5\myu) rectangle +(0.08\myu, 0.5\myu)
(0.48\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.57\myu, 0.5\myu) rectangle +(0.08\myu, 0.5\myu);
\path[item]
(0.70\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu)
(0.80\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu)
(0.90\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu);
\path[item]
(0.70\myu, 0.4\myu) rectangle +(0.07\myu, 0.3\myu)
(0.77\myu, 0.4\myu) rectangle +(0.07\myu, 0.3\myu)
(0.84\myu, 0.4\myu) rectangle +(0.07\myu, 0.3\myu)
(0.91\myu, 0.4\myu) rectangle +(0.07\myu, 0.3\myu);
\path[item]
(0.70\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu)
(0.78\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu)
(0.86\myu, 0.7\myu) rectangle +(0.09\myu, 0.3\myu);
\path[container]
(0\myu, 0\myu) rectangle +(0.4\myu, 0.6\myu)
(0\myu, 0.6\myu) rectangle +(0.4\myu, 0.4\myu)
(0.4\myu, 0\myu) rectangle +(0.3\myu, 0.5\myu)
(0.4\myu, 0.5\myu) rectangle +(0.3\myu, 0.5\myu)
(0.7\myu, 0\myu) rectangle +(0.3\myu, 0.4\myu)
(0.7\myu, 0.4\myu) rectangle +(0.3\myu, 0.3\myu)
(0.7\myu, 0.7\myu) rectangle +(0.3\myu, 0.3\myu);
\path[compartment] (0\myu, 0\myu) rectangle (1\myu, 1\myu);
}}
\tikzset{pic2/.pic={
\path[item]
(0.0\myu, 0\myu) rectangle +(0.1\myu, 1\myu)
(0.1\myu, 0\myu) rectangle +(0.1\myu, 1\myu)
(0.2\myu, 0\myu) rectangle +(0.1\myu, 1\myu)
(0.3\myu, 0\myu) rectangle +(0.1\myu, 1\myu)
(0.4\myu, 0\myu) rectangle +(0.1\myu, 1\myu);
\path[item]
(0.50\myu, 0\myu) rectangle +(0.08\myu, 0.5\myu)
(0.58\myu, 0\myu) rectangle +(0.08\myu, 0.5\myu)
(0.66\myu, 0\myu) rectangle +(0.08\myu, 0.5\myu)
(0.74\myu, 0\myu) rectangle +(0.08\myu, 0.5\myu)
(0.82\myu, 0\myu) rectangle +(0.08\myu, 0.5\myu)
(0.90\myu, 0\myu) rectangle +(0.08\myu, 0.5\myu);
\path[item]
(0.50\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.59\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.68\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.77\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.86\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu);
\path[container]
(0\myu, 0\myu) rectangle +(0.5\myu, 1\myu)
(0.5\myu, 0\myu) rectangle +(0.5\myu, 0.5\myu)
(0.5\myu, 0.5\myu) rectangle +(0.5\myu, 0.5\myu);
\path[compartment] (0\myu, 0\myu) rectangle (1\myu, 1\myu);
}}
\tikzset{pic3/.pic={
\path[item]
(0.0\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu)
(0.1\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu)
(0.2\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu)
(0.3\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu);
\path[item]
(0.00\myu, 0.4\myu) rectangle +(0.09\myu, 0.3\myu)
(0.09\myu, 0.4\myu) rectangle +(0.09\myu, 0.3\myu)
(0.18\myu, 0.4\myu) rectangle +(0.09\myu, 0.3\myu)
(0.27\myu, 0.4\myu) rectangle +(0.09\myu, 0.3\myu);
\path[item]
(0.00\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu)
(0.08\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu)
(0.16\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu)
(0.24\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu)
(0.32\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu);
\path[item]
(0.4\myu, 0\myu) rectangle +(0.1\myu, 0.5\myu)
(0.5\myu, 0\myu) rectangle +(0.1\myu, 0.5\myu)
(0.6\myu, 0\myu) rectangle +(0.1\myu, 0.5\myu)
(0.7\myu, 0\myu) rectangle +(0.1\myu, 0.5\myu)
(0.8\myu, 0\myu) rectangle +(0.1\myu, 0.5\myu)
(0.9\myu, 0\myu) rectangle +(0.1\myu, 0.5\myu);
\path[item]
(0.40\myu, 0.5\myu) rectangle +(0.07\myu, 0.5\myu)
(0.47\myu, 0.5\myu) rectangle +(0.07\myu, 0.5\myu)
(0.54\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.63\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.72\myu, 0.5\myu) rectangle +(0.08\myu, 0.5\myu)
(0.80\myu, 0.5\myu) rectangle +(0.08\myu, 0.5\myu)
(0.88\myu, 0.5\myu) rectangle +(0.08\myu, 0.5\myu);
\path[container]
(0\myu, 0\myu) rectangle +(0.4\myu, 0.4\myu)
(0\myu, 0.4\myu) rectangle +(0.4\myu, 0.3\myu)
(0\myu, 0.7\myu) rectangle +(0.4\myu, 0.3\myu)
(0.4\myu, 0\myu) rectangle +(0.6\myu, 0.5\myu)
(0.4\myu, 0.5\myu) rectangle +(0.6\myu, 0.5\myu);
\path[compartment] (0\myu, 0\myu) rectangle (1\myu, 1\myu);
}}
\tikzset{pic4/.pic={
\path[item] foreach \h/\y in {0.15/0.00,0.14/0.15,0.13/0.29,0.12/0.42,0.11/0.54,
0.10/0.65,0.09/0.75,0.08/0.84,0.07/0.92}{
foreach \hdiff/\w/\x in {0.000/0.15/0.00,0.001/0.14/0.15,0.002/0.13/0.29,0.003/0.12/0.42,
0.004/0.11/0.54,0.005/0.10/0.65,0.006/0.09/0.75,0.007/0.08/0.84,0.008/0.07/0.92}{
(\x\myu, \y\myu) rectangle +(\w\myu, \h\myu-\hdiff\myu)
}};
}}
\begin{tikzpicture}
\pic at (2\myu, 0\myu) {pic4};
\pic[xscale=-3,rotate=90] at (2\myu, 2\myu) {pic1};
\pic[xscale=-3,rotate=90] at (1\myu, 4\myu) {pic2};
\pic[xscale=-2,rotate=90] at (1\myu, 1\myu) {pic3};
\pic[xscale=-2,rotate=90] at (2\myu, 3\myu) {pic1};
\pic[xscale=-2,rotate=90] at (0\myu, 0\myu) {pic2};
\pic[yscale=2] at (1\myu, 2\myu) {pic1};
\pic[yscale=2] at (3\myu, 0\myu) {pic2};
\pic[yscale=2] at (4\myu, 0\myu) {pic3};
\pic[yscale=2] at (4\myu, 3\myu) {pic1};
\pic[yscale=4] at (0\myu, 1\myu) {pic2};
\draw[ultra thick] (0\myu, 0\myu) rectangle (5\myu, 5\myu);
\end{tikzpicture}
\caption{Guess the packing of empty compartments in each bin (\cref{sec:enum-configs}).}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\tikzset{compartment/.style={draw,thick},
container/.style={draw,fill={black!25}},
item/.style={}}
\ifcsname myu\endcsname\else\newlength{\myu}\fi
\setlength{\myu}{0.8cm}
\tikzset{pic1/.pic={
\path[item]
(0.00\myu, 0\myu) rectangle +(0.09\myu, 0.6\myu)
(0.09\myu, 0\myu) rectangle +(0.10\myu, 0.6\myu)
(0.19\myu, 0\myu) rectangle +(0.08\myu, 0.6\myu)
(0.27\myu, 0\myu) rectangle +(0.08\myu, 0.6\myu);
\path[item]
(0\myu, 0.6\myu) rectangle +(0.10\myu, 0.4\myu)
(0.10\myu, 0.6\myu) rectangle +(0.07\myu, 0.4\myu)
(0.17\myu, 0.6\myu) rectangle +(0.07\myu, 0.4\myu)
(0.24\myu, 0.6\myu) rectangle +(0.08\myu, 0.4\myu)
(0.32\myu, 0.6\myu) rectangle +(0.07\myu, 0.4\myu);
\path[item]
(0.40\myu, 0\myu) rectangle +(0.11\myu, 0.5\myu)
(0.50\myu, 0\myu) rectangle +(0.09\myu, 0.5\myu)
(0.59\myu, 0\myu) rectangle +(0.09\myu, 0.5\myu);
\path[item]
(0.40\myu, 0.5\myu) rectangle +(0.08\myu, 0.5\myu)
(0.48\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.57\myu, 0.5\myu) rectangle +(0.08\myu, 0.5\myu);
\path[item]
(0.70\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu)
(0.80\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu)
(0.90\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu);
\path[item]
(0.70\myu, 0.4\myu) rectangle +(0.07\myu, 0.3\myu)
(0.77\myu, 0.4\myu) rectangle +(0.07\myu, 0.3\myu)
(0.84\myu, 0.4\myu) rectangle +(0.07\myu, 0.3\myu)
(0.91\myu, 0.4\myu) rectangle +(0.07\myu, 0.3\myu);
\path[item]
(0.70\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu)
(0.78\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu)
(0.86\myu, 0.7\myu) rectangle +(0.09\myu, 0.3\myu);
\path[container]
(0\myu, 0\myu) rectangle +(0.4\myu, 0.6\myu)
(0\myu, 0.6\myu) rectangle +(0.4\myu, 0.4\myu)
(0.4\myu, 0\myu) rectangle +(0.3\myu, 0.5\myu)
(0.4\myu, 0.5\myu) rectangle +(0.3\myu, 0.5\myu)
(0.7\myu, 0\myu) rectangle +(0.3\myu, 0.4\myu)
(0.7\myu, 0.4\myu) rectangle +(0.3\myu, 0.3\myu)
(0.7\myu, 0.7\myu) rectangle +(0.3\myu, 0.3\myu);
\path[compartment] (0\myu, 0\myu) rectangle (1\myu, 1\myu);
}}
\tikzset{pic2/.pic={
\path[item]
(0.0\myu, 0\myu) rectangle +(0.1\myu, 1\myu)
(0.1\myu, 0\myu) rectangle +(0.1\myu, 1\myu)
(0.2\myu, 0\myu) rectangle +(0.1\myu, 1\myu)
(0.3\myu, 0\myu) rectangle +(0.1\myu, 1\myu)
(0.4\myu, 0\myu) rectangle +(0.1\myu, 1\myu);
\path[item]
(0.50\myu, 0\myu) rectangle +(0.08\myu, 0.5\myu)
(0.58\myu, 0\myu) rectangle +(0.08\myu, 0.5\myu)
(0.66\myu, 0\myu) rectangle +(0.08\myu, 0.5\myu)
(0.74\myu, 0\myu) rectangle +(0.08\myu, 0.5\myu)
(0.82\myu, 0\myu) rectangle +(0.08\myu, 0.5\myu)
(0.90\myu, 0\myu) rectangle +(0.08\myu, 0.5\myu);
\path[item]
(0.50\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.59\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.68\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.77\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.86\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu);
\path[container]
(0\myu, 0\myu) rectangle +(0.5\myu, 1\myu)
(0.5\myu, 0\myu) rectangle +(0.5\myu, 0.5\myu)
(0.5\myu, 0.5\myu) rectangle +(0.5\myu, 0.5\myu);
\path[compartment] (0\myu, 0\myu) rectangle (1\myu, 1\myu);
}}
\tikzset{pic3/.pic={
\path[item]
(0.0\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu)
(0.1\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu)
(0.2\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu)
(0.3\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu);
\path[item]
(0.00\myu, 0.4\myu) rectangle +(0.09\myu, 0.3\myu)
(0.09\myu, 0.4\myu) rectangle +(0.09\myu, 0.3\myu)
(0.18\myu, 0.4\myu) rectangle +(0.09\myu, 0.3\myu)
(0.27\myu, 0.4\myu) rectangle +(0.09\myu, 0.3\myu);
\path[item]
(0.00\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu)
(0.08\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu)
(0.16\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu)
(0.24\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu)
(0.32\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu);
\path[item]
(0.4\myu, 0\myu) rectangle +(0.1\myu, 0.5\myu)
(0.5\myu, 0\myu) rectangle +(0.1\myu, 0.5\myu)
(0.6\myu, 0\myu) rectangle +(0.1\myu, 0.5\myu)
(0.7\myu, 0\myu) rectangle +(0.1\myu, 0.5\myu)
(0.8\myu, 0\myu) rectangle +(0.1\myu, 0.5\myu)
(0.9\myu, 0\myu) rectangle +(0.1\myu, 0.5\myu);
\path[item]
(0.40\myu, 0.5\myu) rectangle +(0.07\myu, 0.5\myu)
(0.47\myu, 0.5\myu) rectangle +(0.07\myu, 0.5\myu)
(0.54\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.63\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.72\myu, 0.5\myu) rectangle +(0.08\myu, 0.5\myu)
(0.80\myu, 0.5\myu) rectangle +(0.08\myu, 0.5\myu)
(0.88\myu, 0.5\myu) rectangle +(0.08\myu, 0.5\myu);
\path[container]
(0\myu, 0\myu) rectangle +(0.4\myu, 0.4\myu)
(0\myu, 0.4\myu) rectangle +(0.4\myu, 0.3\myu)
(0\myu, 0.7\myu) rectangle +(0.4\myu, 0.3\myu)
(0.4\myu, 0\myu) rectangle +(0.6\myu, 0.5\myu)
(0.4\myu, 0.5\myu) rectangle +(0.6\myu, 0.5\myu);
\path[compartment] (0\myu, 0\myu) rectangle (1\myu, 1\myu);
}}
\tikzset{pic4/.pic={
\path[item] foreach \h/\y in {0.15/0.00,0.14/0.15,0.13/0.29,0.12/0.42,0.11/0.54,
0.10/0.65,0.09/0.75,0.08/0.84,0.07/0.92}{
foreach \hdiff/\w/\x in {0.000/0.15/0.00,0.001/0.14/0.15,0.002/0.13/0.29,0.003/0.12/0.42,
0.004/0.11/0.54,0.005/0.10/0.65,0.006/0.09/0.75,0.007/0.08/0.84,0.008/0.07/0.92}{
(\x\myu, \y\myu) rectangle +(\w\myu, \h\myu-\hdiff\myu)
}};
}}
\begin{tikzpicture}
\pic at (2\myu, 0\myu) {pic4};
\pic[xscale=-3,rotate=90] at (2\myu, 2\myu) {pic1};
\pic[xscale=-3,rotate=90] at (1\myu, 4\myu) {pic2};
\pic[xscale=-2,rotate=90] at (1\myu, 1\myu) {pic3};
\pic[xscale=-2,rotate=90] at (2\myu, 3\myu) {pic1};
\pic[xscale=-2,rotate=90] at (0\myu, 0\myu) {pic2};
\pic[yscale=2] at (1\myu, 2\myu) {pic1};
\pic[yscale=2] at (3\myu, 0\myu) {pic2};
\pic[yscale=2] at (4\myu, 0\myu) {pic3};
\pic[yscale=2] at (4\myu, 3\myu) {pic1};
\pic[yscale=4] at (0\myu, 1\myu) {pic2};
\draw[ultra thick] (0\myu, 0\myu) rectangle (5\myu, 5\myu);
\end{tikzpicture}
\caption{Fractionally pack wide and tall items into compartments.
This partitions each compartment into \emph{containers} (\cref{sec:feas-lp}).}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\tikzset{compartment/.style={draw,ultra thick},
container/.style={draw,thick},
item/.style={draw,very thin,fill={black!25}}}
\ifcsname myu\endcsname\else\newlength{\myu}\fi
\setlength{\myu}{0.8cm}
\tikzset{pic1/.pic={
\path[item]
(0.00\myu, 0\myu) rectangle +(0.09\myu, 0.6\myu)
(0.09\myu, 0\myu) rectangle +(0.10\myu, 0.6\myu)
(0.19\myu, 0\myu) rectangle +(0.08\myu, 0.6\myu)
(0.27\myu, 0\myu) rectangle +(0.08\myu, 0.6\myu);
\path[item]
(0\myu, 0.6\myu) rectangle +(0.10\myu, 0.4\myu)
(0.10\myu, 0.6\myu) rectangle +(0.07\myu, 0.4\myu)
(0.17\myu, 0.6\myu) rectangle +(0.07\myu, 0.4\myu)
(0.24\myu, 0.6\myu) rectangle +(0.08\myu, 0.4\myu)
(0.32\myu, 0.6\myu) rectangle +(0.07\myu, 0.4\myu);
\path[item]
(0.40\myu, 0\myu) rectangle +(0.11\myu, 0.5\myu)
(0.50\myu, 0\myu) rectangle +(0.09\myu, 0.5\myu)
(0.59\myu, 0\myu) rectangle +(0.09\myu, 0.5\myu);
\path[item]
(0.40\myu, 0.5\myu) rectangle +(0.08\myu, 0.5\myu)
(0.48\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.57\myu, 0.5\myu) rectangle +(0.08\myu, 0.5\myu);
\path[item]
(0.70\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu)
(0.80\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu)
(0.90\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu);
\path[item]
(0.70\myu, 0.4\myu) rectangle +(0.07\myu, 0.3\myu)
(0.77\myu, 0.4\myu) rectangle +(0.07\myu, 0.3\myu)
(0.84\myu, 0.4\myu) rectangle +(0.07\myu, 0.3\myu)
(0.91\myu, 0.4\myu) rectangle +(0.07\myu, 0.3\myu);
\path[item]
(0.70\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu)
(0.78\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu)
(0.86\myu, 0.7\myu) rectangle +(0.09\myu, 0.3\myu);
\path[container]
(0\myu, 0\myu) rectangle +(0.4\myu, 0.6\myu)
(0\myu, 0.6\myu) rectangle +(0.4\myu, 0.4\myu)
(0.4\myu, 0\myu) rectangle +(0.3\myu, 0.5\myu)
(0.4\myu, 0.5\myu) rectangle +(0.3\myu, 0.5\myu)
(0.7\myu, 0\myu) rectangle +(0.3\myu, 0.4\myu)
(0.7\myu, 0.4\myu) rectangle +(0.3\myu, 0.3\myu)
(0.7\myu, 0.7\myu) rectangle +(0.3\myu, 0.3\myu);
\path[compartment] (0\myu, 0\myu) rectangle (1\myu, 1\myu);
}}
\tikzset{pic2/.pic={
\path[item]
(0.0\myu, 0\myu) rectangle +(0.1\myu, 1\myu)
(0.1\myu, 0\myu) rectangle +(0.1\myu, 1\myu)
(0.2\myu, 0\myu) rectangle +(0.1\myu, 1\myu)
(0.3\myu, 0\myu) rectangle +(0.1\myu, 1\myu)
(0.4\myu, 0\myu) rectangle +(0.1\myu, 1\myu);
\path[item]
(0.50\myu, 0\myu) rectangle +(0.08\myu, 0.5\myu)
(0.58\myu, 0\myu) rectangle +(0.08\myu, 0.5\myu)
(0.66\myu, 0\myu) rectangle +(0.08\myu, 0.5\myu)
(0.74\myu, 0\myu) rectangle +(0.08\myu, 0.5\myu)
(0.82\myu, 0\myu) rectangle +(0.08\myu, 0.5\myu)
(0.90\myu, 0\myu) rectangle +(0.08\myu, 0.5\myu);
\path[item]
(0.50\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.59\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.68\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.77\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.86\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu);
\path[container]
(0\myu, 0\myu) rectangle +(0.5\myu, 1\myu)
(0.5\myu, 0\myu) rectangle +(0.5\myu, 0.5\myu)
(0.5\myu, 0.5\myu) rectangle +(0.5\myu, 0.5\myu);
\path[compartment] (0\myu, 0\myu) rectangle (1\myu, 1\myu);
}}
\tikzset{pic3/.pic={
\path[item]
(0.0\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu)
(0.1\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu)
(0.2\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu)
(0.3\myu, 0\myu) rectangle +(0.1\myu, 0.4\myu);
\path[item]
(0.00\myu, 0.4\myu) rectangle +(0.09\myu, 0.3\myu)
(0.09\myu, 0.4\myu) rectangle +(0.09\myu, 0.3\myu)
(0.18\myu, 0.4\myu) rectangle +(0.09\myu, 0.3\myu)
(0.27\myu, 0.4\myu) rectangle +(0.09\myu, 0.3\myu);
\path[item]
(0.00\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu)
(0.08\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu)
(0.16\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu)
(0.24\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu)
(0.32\myu, 0.7\myu) rectangle +(0.08\myu, 0.3\myu);
\path[item]
(0.4\myu, 0\myu) rectangle +(0.1\myu, 0.5\myu)
(0.5\myu, 0\myu) rectangle +(0.1\myu, 0.5\myu)
(0.6\myu, 0\myu) rectangle +(0.1\myu, 0.5\myu)
(0.7\myu, 0\myu) rectangle +(0.1\myu, 0.5\myu)
(0.8\myu, 0\myu) rectangle +(0.1\myu, 0.5\myu)
(0.9\myu, 0\myu) rectangle +(0.1\myu, 0.5\myu);
\path[item]
(0.40\myu, 0.5\myu) rectangle +(0.07\myu, 0.5\myu)
(0.47\myu, 0.5\myu) rectangle +(0.07\myu, 0.5\myu)
(0.54\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.63\myu, 0.5\myu) rectangle +(0.09\myu, 0.5\myu)
(0.72\myu, 0.5\myu) rectangle +(0.08\myu, 0.5\myu)
(0.80\myu, 0.5\myu) rectangle +(0.08\myu, 0.5\myu)
(0.88\myu, 0.5\myu) rectangle +(0.08\myu, 0.5\myu);
\path[container]
(0\myu, 0\myu) rectangle +(0.4\myu, 0.4\myu)
(0\myu, 0.4\myu) rectangle +(0.4\myu, 0.3\myu)
(0\myu, 0.7\myu) rectangle +(0.4\myu, 0.3\myu)
(0.4\myu, 0\myu) rectangle +(0.6\myu, 0.5\myu)
(0.4\myu, 0.5\myu) rectangle +(0.6\myu, 0.5\myu);
\path[compartment] (0\myu, 0\myu) rectangle (1\myu, 1\myu);
}}
\tikzset{pic4/.pic={
\path[item] foreach \h/\y in {0.15/0.00,0.14/0.15,0.13/0.29,0.12/0.42,0.11/0.54,
0.10/0.65,0.09/0.75,0.08/0.84,0.07/0.92}{
foreach \hdiff/\w/\x in {0.000/0.15/0.00,0.001/0.14/0.15,0.002/0.13/0.29,0.003/0.12/0.42,
0.004/0.11/0.54,0.005/0.10/0.65,0.006/0.09/0.75,0.007/0.08/0.84,0.008/0.07/0.92}{
(\x\myu, \y\myu) rectangle +(\w\myu, \h\myu-\hdiff\myu)
}};
}}
\begin{tikzpicture}
\pic at (2\myu, 0\myu) {pic4};
\pic[xscale=-3,rotate=90] at (2\myu, 2\myu) {pic1};
\pic[xscale=-3,rotate=90] at (1\myu, 4\myu) {pic2};
\pic[xscale=-2,rotate=90] at (1\myu, 1\myu) {pic3};
\pic[xscale=-2,rotate=90] at (2\myu, 3\myu) {pic1};
\pic[xscale=-2,rotate=90] at (0\myu, 0\myu) {pic2};
\pic[yscale=2] at (1\myu, 2\myu) {pic1};
\pic[yscale=2] at (3\myu, 0\myu) {pic2};
\pic[yscale=2] at (4\myu, 0\myu) {pic3};
\pic[yscale=2] at (4\myu, 3\myu) {pic1};
\pic[yscale=4] at (0\myu, 1\myu) {pic2};
\draw[ultra thick] (0\myu, 0\myu) rectangle (5\myu, 5\myu);
\end{tikzpicture}
\caption{Pack the items non-fractionally (\cref{sec:greedy-cont}).}
\end{subfigure}
\caption{Major steps of $\thinCPack$ after $\round$ing $I$.}
\label{fig:thincpack}
\end{figure}
\input{skewedbp.bbl}
\section{Introduction}
\label{intro}
An interacting Bose system cooled toward absolute zero is believed either to crystallize, or, if it escapes solidification, to undergo Bose-Einstein condensation (BEC), with the concomitant onset of superfluidity (SF), both SF and BEC occurring at the same temperature $T_c$, in $d=3$ physical dimensions
\cite{Leggett2006,Kora2020b}.
A dimensional reduction, effectively achievable experimentally in a number of ways, brings about important modifications of this behavior.
Indeed, while in
$d=3$, at temperatures $T \le T_c$,
the one-body density matrix
plateaus to a finite value $n_\circ$ (known as the condensate fraction) at large interparticle separations,
in $d=2$
it decays algebraically, while the superfluid density
shows a ``universal jump'' at $T=T_c$
\cite{Nelson1977}, in what is generically described as a Berezinskii-Kosterlitz-Thouless (BKT)
phase transition \cite{Jose1977}.
The situation is completely different in $d=1$ dimension, in which
Galilean invariant (continuous)
systems cannot order, even at zero temperature. The low-energy,
long-wavelength
dynamics is
described by the ``universal'' harmonic Tomonaga-Luttinger liquid (TLL)
model of Haldane (HM) \cite{Haldane1981},
making stringent predictions on the behavior of several
observable quantities. For instance, while
there is no superfluid phase in the thermodynamic
($L \to \infty$, $L$ being the system length) limit,
superfluidity
manifests itself as a finite-size effect.
According to the TLL theory,
the superfluid fraction $\rho_S (L,T)$ is
a universal function
of $LT/v_J$, $v_J$ being the superfluid velocity \footnote{We set the Boltzmann constant $k_B=1$.}, with $\rho_S(L,0)=1$ \cite{Delmaestro2010}. Furthermore, the one-body density matrix displays a power-law decay modulated by oscillations
reflecting
the atomic nature of the fluid at the microscopic scale \cite{Cazalilla2004,Giamarchi2004}. Striking fingerprints of the
unique behavior are also present in the pair correlation function (PCF), which,
according to Haldane's theory,
features a power-law decay
on top of a uniform contribution $\propto n_0^2$, where $n_0$ is the average one-dimensional (1D) particle density, and with
higher-order harmonics at frequencies that are integer multiples of $2 \pi n_0$.
Correspondingly, the static structure factor $S(k)$ exhibits a universal, linear dependence on $k$ as $k \to 0$, with a slope depending on (non-universal) TLL parameters, and peaks at wave vectors $K_l = 2 \pi l n_0$, with integer $l$
\cite{Haldane1981,Giamarchi2004,Citro2007,Citro2008}.
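In the $L\to\infty$, $T\to 0$ limit, a minimal single-mode (Feynman) estimate of this slope, assuming a purely linear phonon branch of velocity $v=\pi\hbar n_0/(mK)$ (as appropriate for a Galilean-invariant TLL of Luttinger parameter $K$), reads
\[
S(k) \simeq \frac{\hbar |k|}{2 m v} = \frac{K\,|k|}{2\pi n_0}, \qquad k\to 0,
\]
with $m$ the atomic mass; we make use of this form below to extract $K$ from our numerical data.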
\\ \indent
$^4$He is a paradigmatic system displaying all the main features of SF and BEC in Bose systems.
Thus, a $^4$He fluid confined in (quasi) 1D structures appears as a natural playground allowing for the experimental detection of the most peculiar aspects of 1D TLL physics, most of which remains yet unobserved.
To this aim,
several experimental avenues have been considered to achieve the quasi-1D confinement of $^4$He, chiefly by adsorbing helium gas inside elongated cavities of nanometer-size diameter, such as those that exist in a variety of porous glasses
\cite{Sokol1996,Dimeo1998,Plantevin2001,Anderson2002,Toda2007,Prisk2013}, or nanoholes in Si$_3$N$_4$ membranes \cite{Savard2011}, as well as
carbon nanostructures \cite{Teizer1999,Ohba2016}.
Despite the obvious imperfections of these confining agents (defects, tortuosity, surface corrugation, etc.), the basic physics of an imbibed fluid should at least broadly mimic that taking place inside (possibly interconnected \cite{boninsegni2007}) long, smooth cylindrical channels; and, if the length $L$ of a typical channel greatly exceeds the characteristic size $R$ of confinement in the transverse direction (i.e., the radius of the cylinder), quasi-1D behavior ought to arise.
\\ \indent
On the theory side, the phase diagram of $^4$He
confined in long cylindrical channels of radius $R$ of the order of 1 nm, has been extensively studied with
the combined use of analytical and numerical methods \cite{Cole2000,Boninsegni2001,Delmaestro2010,Hernandez2011}.
The behavior of the confined $^4$He fluid can be qualitatively understood taking into account basic features of the interaction between two helium atoms,
as well as of that between the atoms and the walls of the cylinder (i.e., the adsorption potential).
At sufficiently small $R$ (typically $\lesssim$ 4 \AA, although this varies depending on the adsorption potential) and realistic values of the pressure, $^4$He atoms line up in a single file along the axis of the cylinder, zero-point excursions off the axis mainly resulting in a softening of the repulsion at short distances of the helium pair potential \cite{Omiyinka2016}. Atomic exchanges are suppressed, and the physics of the system quantitatively approaches the 1D paradigm \cite{Delmaestro2010_b,Markic2015,Markic2018}.
\\ \indent
As $R$ is increased, the structure of the imbibed fluid for different pore fillings quite generally consists of one (or more) cylindrical (concentric) shell(s), coaxial with the pore, whose location and sharpness depend on the pressure and on the adsorption potential, upon which depends also the presence of a well-defined file of atoms along the axis; the radial distance between contiguous shells is roughly set by the repulsive core of the helium interatomic potential, of the order of 2.5 \AA\ \cite{Urban2006,Delmaestro2010,Delmaestro2010_b,Delmaestro2012,Markic2018}.
\\ \indent
Bulk three-dimensional (3D) physics must obviously emerge as both $R, L$ become macroscopic, i.e., formally in the $R, L \to\infty$ limit, with $R/L$ held constant (albeit possibly $\ll 1$) \cite{Delmaestro2010_b}. Under what conditions (quasi-)1D behavior may be observable in this geometry, especially if $R$ is large enough to allow for a multishell fluid structure inside the pore,
is a long-debated issue. Based on an analysis of the axial PCF computed by quantum Monte Carlo (QMC), Del Maestro et al. \cite{Delmaestro2010_b} concluded
that, even for $R$ large enough to accommodate
a few coaxial cylindrical shells in the pore, the helium fluid that fills the inner cylinder displays 1D behavior in the $L\to\infty$ limit.
This can be understood by regarding the multishell structure as a
set of coupled bosonic TLLs
(see, for instance, Ref. \onlinecite{Orignac1998}); despite the low-energy mode gapping due to the coupling between the TLLs, the ``center of mass mode'' survives as a gapless, low-lying degree of freedom, a signature of which is present in the
1D behavior of the PCF (an extension of this idea, accounting for effects of disorder, has been recently developed in Ref. \onlinecite{Yang2020}).
\\ \indent
This scenario has been recently challenged in Ref. \onlinecite{Markic2015}, where,
based on an analysis of the scaling properties of $\rho_S ( L , T )$, computed by QMC,
it is contended that a single cylindrical shell of $^4$He, of effective radius as tiny as 1.75 \AA, displays 2D, rather than 1D, behavior.
Specifically, their claim is that the values of $\rho_S(L,T)$
do not conform with the predictions of the TLL theory, but are consistent instead with 2D BKT scaling. Furthermore, they argue that even in the $(R/L) \to 0$ limit there exists nonetheless a characteristic value of $R$ (presumably dependent on $n_0$ and on the adsorption potential) where a crossover takes place between 2D and 3D, i.e., the $^4$He fluid filling the pore features the same qualities as bulk $^4$He. If these conclusions were confirmed, they might conceivably deal a serious setback to the ongoing experimental effort aimed at observing signatures of TLL physics in fluids adsorbed in nanopores, as they may place daunting requirements on the confining agents, in terms of the smallness of the diameter to achieve and/or the type of material to utilize.
\\ \indent
In this paper, we weigh in on the above controversy and attempt to settle the issue of the effective dimensionality of a $^4$He fluid in the confines of a cylindrical nanopore, modeled as a smooth cylindrical channel of length $L$ and radius $R$, by performing extensive, first-principles computer (QMC) simulations of a realistic model of $^4$He in cylindrical confinement. We consider pores of radius ranging from 3 to 10 \AA, as well as different adsorption potentials, allowing us to explore a variety of distinct physical settings. Specifically, we go from a single, tightly confined axial file, or a single cylindrical shell of $^4$He atoms, for pores of small radii, to systems enclosed in wider pores, in which multiple shells form: in some cases sharply defined shells, with little or no particle exchange taking place between them; in other cases loose shells, only identifiable as local maxima of the radial $^4$He density, with $^4$He atoms essentially delocalized throughout the system.
\\ \indent
We compute the superfluid fraction $\rho_S (L,T)$ of $^4$He inside the pore, just like in Ref. \onlinecite{Markic2020}, as well as the axial static structure factor $S(k)$. As mentioned above, $S(k)$ has a known analytical form within the TLL, and is expected to display well-defined scaling properties with respect to the product $LT$ of pore length $L$ and temperature. This information can be used to assess the degree to which our simulated systems conform to the 1D paradigm, or whether there are significant deviations in some cases.
\\ \indent
The main conclusion of our study is that, in the limit $(R/L)\to\ 0$, the 1D behavior of $^4$He is {\it always} recovered, regardless of pore radius and/or shell structure inside the pore, in contrast with the claim of Ref. \onlinecite{Markic2020}. We see no evidence of any dimensional crossover to 2D or 3D.
We attribute the disagreement between our conclusions and those of Ref. \onlinecite{Markic2020} to the fact that the length $L$ of the systems studied therein (making use of the same computational technology adopted here) is insufficient to reach the TLL regime \footnote{This conclusion appears to be corroborated by a comment made in Ref. \onlinecite{Markic2020}, to the effect that the agreement between their numerical data and the TLL predictions improves with increased system size, i.e., 1D behavior is approached asymptotically.}. We generally confirm both the findings of Refs. \onlinecite{Delmaestro2010_b,Delmaestro2012}, as well as the validity of the basic assumption of the theory expounded in Ref. \onlinecite{Yang2020}, namely that a single cylindrical shell is a 1D object as $L\to\infty$, lending support to the notion that the properties of a fluid in cylindrical confinement may be predicted on the basis of a formalism of coupled concentric shells.
\\ \indent
The paper is organized as follows: in Section \ref{nparham}, we briefly discuss the microscopic $N$-particle Hamiltonian for $^4$He in a nanopore;
in Section \ref{4hepores} we provide and discuss our results, while in Section \ref{conclusions} we outline
our conclusions and provide some further future perspectives of our work.
\section{Model and Methodology}
\label{nparham}
We model the system of interest as an ensemble of $N$ $^4$He atoms, regarded as point particles of mass $m$ and spin zero, i.e., obeying Bose statistics. The microscopic many-body Hamiltonian reads as follows:
\begin{equation}
H_N = - \frac{\hbar^2}{2 m } \: \sum_{ i = 1}^N \nabla^2_i + \sum_{ i < j = 1}^N V ( r_{ij} ) +
\sum_{ i = 1}^N U ( {\bf r}_i ),
\label{nparh.1}
\end{equation}
\noindent
where ${\bf r}_i$ is the position of the $i$th atom, $r_{ij}\equiv |{\bf r}_i-{\bf r}_j|$, and $V(r)$
is the accepted (Aziz) interatomic pair potential for helium \cite{Aziz1979}.
The last term on the right-hand side of
Eq.(\ref{nparh.1}) corresponds to the one-body potential describing the interactions of $^4$He atoms with the walls of a smooth cylindrical channel of radius $R$, representing the pore, whose axis is taken along the $z$ direction (specifically, it is the locus of all points with $x=y=0$). \\ \indent The expression for $U$ utilized here is described in detail in Ref. \onlinecite{Delmaestro2012}; it is derived from the well-known ``3-9'' potential describing the interaction of a particle with an infinite, smooth wall, and it features an attractive well of depth $D$, as well as a repulsive core at short distance of characteristic length $a$.
These two parameters, together with the radius $R$, can be adjusted to reproduce, as closely as allowed by such a simplified model, the adsorption properties of real pores. In this work, we have made three different choices (summarized in Table \ref{table.1}), not with the aim of reproducing any actual physical system, but rather of gaining understanding of the physics of $^4$He in rather different confining environments. The set ${\cal A}$ of potential parameters has been proposed to describe the interaction of helium atoms with a Na substrate \cite{Chizmeshya1998}, while ${\cal B}$ and ${\cal C}$ respectively have been used to model porous glasses \cite{Delmaestro2012,Pricaupenko1995}.
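For reference, the sketch below shows one common parameterization of the underlying planar ``3-9'' form, which has a minimum of depth $-D$ at a distance $3^{1/6}a$ from the wall; the actual cylindrical potential of Ref. \onlinecite{Delmaestro2012} is obtained by integrating the atom--wall interaction over the curved pore surface, which we do not reproduce here.
\begin{verbatim}
import numpy as np

def wall_3_9(z, D, a):
    """Planar 3-9 wall potential (one common convention):
    minimum of depth -D at z = 3**(1/6) * a."""
    x = a / np.asarray(z, dtype=float)
    return 1.5 * np.sqrt(3.0) * D * (x**9 - x**3)

# parameters of set A in Table 1 (Na substrate);
# energies in K, lengths in Angstrom
z = np.linspace(3.0, 12.0, 200)
u = wall_3_9(z, D=12.53, a=3.99)
\end{verbatim}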
\\ \indent
While the parameter sets labeled ${\cal B}$ and ${\cal C}$ may be more closely related to actual experimental systems (i.e., porous glasses), set ${\cal A}$ has been included with the aim of exploring the physics of the confined helium fluid in the presence of a weak adsorption potential. Alkali metal surfaces are known for their unusual adsorption properties \cite{Cheng1991,Boninsegni2004}; for example, a Cs substrate is not wetted at all by liquid helium, whereas a superfluid $^4$He monolayer has been predicted to form on a Li substrate \cite{Boninsegni1999}; Na is only slightly weaker an adsorber than Li. In confined geometries, alkali substrates have been shown to enhance the superfluid response of nanoscale size fluid parahydrogen clusters \cite{Omiyinka2014}.
\begin{table}
\centering
\begin{tabular}{| c | c | c | }
\hline
{\rm System} & $D$ (K) & $a$ (\AA) \\
\hline
$\mathcal{A}$ & 12.53 & 3.99 \\
\hline
$\mathcal{B}$ & 32 & 2.25 \\
\hline
$\mathcal{C}$ & 100 & 2.05 \\
\hline
\end{tabular}
\caption{Parameters $D$ (well depth) and $a$ (hard core range) of the $^4$He-wall adsorption potentials ($U$ term in Eq. \ref{nparh.1}) adopted in this work.
The values for choice $\mathcal{A}$ (Na substrate) are taken from Ref. \onlinecite{Chizmeshya1998}, those for $\mathcal{B}$ (Glass 1) from Ref. \onlinecite{Delmaestro2012} and for $\mathcal{C}$ (Glass 2) from Ref. \onlinecite{Pricaupenko1995}. }
\label{table.1}
\end{table}
Fig. \ref{potentials} shows the various potentials considered in this work, for different pore radii. Clearly, substrate
$\mathcal{C}$ is the strongest among those considered here, whereas, for a given radius $R$, the well depth of substrate $\mathcal{B}$ is roughly intermediate between that of the other two. The adsorption potential is nearly flat in the vicinity of the axis of the channel ($r=0$), but displays a minimum off the axis for $R\gtrsim 3$ \AA, whose depth decreases with increasing $R$. It is important to note that the potential labeled $\mathcal{A}6$ in Fig. \ref{potentials} is very similar to that used in Ref. \onlinecite{Markic2020}, the latter being shifted upward by about 23 K, compared to ours.
\begin{figure}
\center
\includegraphics*[width=1. \linewidth]{fig1.pdf}
\caption{Pore adsorption potential
corresponding to the parameters in Table \ref{table.1}, with pore radius $R$ set to 6 \AA\ ($\mathcal{A}6$) and 10 \AA\
($\mathcal{A}10$,\ $\mathcal{B}$ and $\mathcal{C}$ ).
}
\label{potentials}
\end{figure}
\\ \indent
The low temperature properties of the system described by (\ref{nparh.1}) were investigated in this work by means of QMC simulations based on the canonical \cite{Mezzacapo2006,Mezzacapo2007} continuous-space Worm Algorithm \cite{Boninsegni2006,Boninsegni2006_2}. As this methodology is extensively described in the original references, we shall not review it here. Details of the simulation are standard; the system is enclosed in a supercell shaped as a cuboid, with periodic boundary conditions in all directions \footnote{As the helium fluid is confined inside the channel, boundary conditions in the $x$ and $y$ directions are effectively irrelevant. The cell side in these directions is taken typically a few times $2R$.}. We made use of a fourth-order approximation for the high-temperature density matrix (see, for instance, Ref. \onlinecite {Boninsegni2005}), and all of the results quoted here are extrapolated to the limit of time step $\tau\to 0$. In general, we found that a value of the time step equal to $1.6\times 10^{-3}$ K$^{-1}$ yields estimates indistinguishable from the extrapolated ones.
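As an illustration, for a fourth-order propagator the leading time-step error of an observable is expected to scale as $\tau^4$; a minimal extrapolation sketch (the numerical arrays below are made-up stand-ins for actual QMC output) is:
\begin{verbatim}
import numpy as np

# hypothetical estimates of an observable at three time steps (in 1/K)
tau = np.array([3.2e-3, 1.6e-3, 0.8e-3])
obs = np.array([0.7120, 0.7020, 0.7015])

# fit obs(tau) = obs0 + c * tau**4 and read off the tau -> 0 intercept
A = np.vstack([np.ones_like(tau), tau**4]).T
obs0, c = np.linalg.lstsq(A, obs, rcond=None)[0]
print("extrapolated value:", obs0)
\end{verbatim}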
\\ \indent
The main physical quantities of interest are the radial $^4$He density profile $n(r)$, which provides information on the presence of one or more shells and their spatial definition, and the static structure factor $S(k)$, computed along the $z$ direction (i.e., the axis of the channel). The latter is the key quantity that we use to assess the possible 1D nature of the fluid inside the pore, based on its scaling behavior with respect to $L$ and $T$, and on the expectation that Haldane's TLL model ought to apply.
\\ \indent
We also compute the superfluid fraction $\rho_S (L,T)$, estimated using the standard winding number estimator \cite{Pollock1987,Fisher1973}, with the goal of comparing our results to those of Ref. \onlinecite{Markic2020}, which are obtained using the same methodology \footnote{As superfluidity is underlain by quantum-mechanical exchanges of indistinguishable atoms taking place at low temperature, qualitative insight on the propensity of the system to flow without dissipation can also be obtained from the computed statistics of exchange cycles.}.
In principle, the scaling of $\rho_S (L,T)$ in the $L\to\infty, T\to 0$ limit can also provide information about the 1D nature of the system; however, its calculation in the limit of long channels (i.e., the limit in which the most stringent statements of the TLL apply) quickly becomes a daunting task, due to the increasingly large fluctuations of the winding number and, correspondingly, the impractically long time required to accumulate the statistics needed to reduce error bars to a meaningful size.
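Schematically, the winding-number estimator along the pore axis takes the form below (a sketch in SI units; $W$ is a sample of integer winding numbers accumulated during the simulation):
\begin{verbatim}
import numpy as np

def superfluid_fraction(W, L, T, N, m, hbar=1.0546e-34, kB=1.3806e-23):
    """One-direction winding-number estimator (Pollock-Ceperley):
    rho_s / rho = m * L**2 * <W**2> / (hbar**2 * beta * N)."""
    beta = 1.0 / (kB * T)
    W = np.asarray(W, dtype=float)
    return m * L**2 * np.mean(W**2) / (hbar**2 * beta * N)
\end{verbatim}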
\section{Results}
\label{4hepores}
In this section we present and discuss our
results, chiefly focusing on the presence of TLL physics in $^4$He in nanopores, with the different geometries and adsorption potentials utilized.
\subsection{Dimensionality}
We begin by addressing immediately the central issue of this work, namely the possible crossovers from 1D to 2D, and then from 2D to 3D behavior, which the authors of Ref. \onlinecite{Markic2020} claim take place in the pore as the radius and/or the density of helium inside are varied. In particular, based on a scaling analysis of the superfluid fraction $\rho_S
(L,T)$, they assert that the crossover from 1D to 2D occurs as soon as the helium atoms inside the channel no longer form a single file along the axis, morphing into a cylindrical shell, even one of radius as small as $\sim$ 2 \AA.
\begin{figure}
\center
\includegraphics*[width=1. \linewidth]{fig2.pdf}
\caption{{\it Top:} radial density profile $n(r)$, computed with respect to the axis of the pore. The linear $^4$He density is $n_0=0.25$ \AA$^{-1}$. Inset shows the $^4$He density projected on a plane along the axis of the cylinder (brighter colors mean higher density). This result is obtained with choice $\mathcal{A}$ of pore adsorption potential $U$, for a nominal pore radius $R=6$ \AA.\\
{\it Bottom:} Same as top panel, but for linear density $n_0=0.6$ \AA$^{-1}$. }
\label{bulkd_25}
\end{figure}
\\ \indent
Fig. \ref{bulkd_25} shows $^4$He radial density profiles computed at low temperature ($T=1$ K, they remain unchanged at lower temperature) for a helium fluid inside a pore of radius $R=6$ \AA, whose adsorption potential parameters are those of set $\mathcal{A}$ in Table \ref{table.1}, top (bottom) panel showing the result for linear density $n_0=0.25 \ (0.60)$ \AA$^{-1}$. Also shown are projected density maps along the axis of the pore, illustrating, together with the $n(r)$, the structural change taking place as the linear density is increased, i.e., the helium atoms go from forming a single axial file at lower density, to a cylindrical shell of radius slightly less than 2 \AA\ at the higher $n_0$.
\\ \indent
These results are quantitatively very similar to those of Ref. \onlinecite{Markic2020} for the same linear densities, although their results are generally obtained with different choices of pore radius and/or potential parameters. This similarity allows us to make a cogent comparison of our results and predictions for these systems with theirs.
For the purpose of our analysis, we momentarily postpone the discussion of the superfluid fraction, and present instead our results for the axial static structure factor $S(k)$.
\begin{figure}
\center
\includegraphics*[width=1. \linewidth]{fig3.pdf}
\caption{ Static structure factor $S(k)$ computed along the axis of a pore of radius $R=6$ \AA, for a linear $^4$He pore density equal to $n_0=0.25$ \AA$^{-1}$ (top) and $n_0=0.6$ \AA$^{-1}$ (bottom). The parameters of the interaction between $^4$He atoms and the wall of the pore are those corresponding to set $\mathcal{A}$ in Table \ref{table.1}. Data shown pertain to three different system sizes; the temperature of each simulation is such that the product $LT$ is constant. Statistical errors are smaller than the sizes of the symbols. Solid line represents a fit of the data in the $k\to 0$ limit based on TLL theory, as explained in the text.}
\label{sofk}
\end{figure}
\\ \indent
Fig. \ref{sofk} shows our QMC results for $S(k)$, for the two systems shown in Fig. \ref{bulkd_25}. The key result of TLL theory that we utilize to interpret our results is the analytical expression for $S(k)$
for a system of linear size $L$ at temperature $T$ \cite{Citro2007,Citro2008}. As mentioned in Sec. \ref{intro}, its main implications are that, in the $L\to\infty$, $T\to 0$ limits, $S(k)$ ({\it a}) becomes a function of the product $LT$ alone and ({\it b}) behaves linearly as $k\to 0$.
\\
\indent
The results of Fig. \ref{sofk} clearly show that both these conditions are met by our numerical data. In particular, for each of the two values of $n_0$, the collapse of data obtained at constant $LT$ for three systems of different lengths (each length differing from the other two by at least a factor of two) is downright impressive. Our results are consistent with the linear behavior predicted at low $k$ by the TLL theory (solid lines in Fig. \ref{sofk}), allowing us to infer the value of the Luttinger parameter \cite{Kora2020}.
We obtain $K\sim
1.38$ (1.88) for $n_0=0.25$ ($n_0=0.60$) \AA$^{-1}$. These values of $K$, both in the $1 \le K\le 2$ range, are significantly lower than those reported, e.g., in Ref. \onlinecite {Delmaestro2010_b}; this is consistent with the weakness of the adsorption potential $\mathcal{A}$, compared to that utilized in those works (as has already been shown for parahydrogen \cite{Omiyinka2016}, the strength of the potential $U$ affects the value of $K$).
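Concretely, given the single-mode form $S(k)\simeq K|k|/(2\pi n_0)$ quoted in Sec.~\ref{intro}, $K$ follows from a zero-intercept linear fit at small $k$; a minimal sketch (the cutoff \texttt{kmax} is a fitting choice):
\begin{verbatim}
import numpy as np

def luttinger_K(k, S, n0, kmax):
    """Estimate K from the small-k slope of S(k),
    assuming S(k) = K * k / (2 * pi * n0) as k -> 0."""
    sel = (k > 0) & (k < kmax)
    slope = np.linalg.lstsq(k[sel, None], S[sel], rcond=None)[0][0]
    return 2.0 * np.pi * n0 * slope
\end{verbatim}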
\\
\indent
The $S(k)$ shown in Fig. \ref{sofk} displays well-defined peaks in both cases. For the lower density system, i.e., $n_0=0.25$ \AA$^{-1}$, which as shown above is structurally quasi-1D, the peak is
located at ${K}_1=2\pi n_0$, as expected. On the other hand, for the case $n_0=0.6$ \AA$^{-1}$, i.e., with $^4$He atoms arranged on a cylindrical shell, the location of the peak allows one to infer an {\it effective} 1D density roughly equal to $n_0/2$. One may interpret this by imagining the cylindrical shell as consisting of annuli coaxial with the pore, the distance between adjacent annuli being $2/n_0$, each annulus comprising on average two atoms, both located off the axis of the pore.
\\ \indent
Altogether, the results of Fig. \ref{sofk} constitute strong evidence of 1D behavior in {\it both} cases, the only effect of the structural change undergone by the system as it evolves from a single file to a shell being the above-mentioned renormalization of the 1D density. There is no evidence whatsoever of any dimensional crossover from 1D to 2D, as the results obtained in this work at both linear densities are entirely consistent with TLL theory. Since, as stated above, we expect our results to be directly comparable to those of Ref. \onlinecite{Markic2020}, this apparent disagreement needs to be addressed.
\begin{figure}[h]
\center
\includegraphics*[width=1. \linewidth]{fig4.pdf}
\caption{ Superfluid fraction $\rho_S (L,T)$ for strictly 1D $^4$He at linear density $n_0=0.1$ \AA$^{-1}$, computed by
QMC for different system sizes, plotted as a function of the product $LT$ of system size by temperature. $N$ is
the number of particles. Statistical errors on
$\rho_S$ are of the order of 0.007, i.e., considerably smaller than symbol sizes. Finite-size effects are clearly seen.}
\label{1dhe}
\end{figure}
\\ \indent
As mentioned above, the contention of a dimensional crossover made in Ref. \onlinecite{Markic2020} is based on a scaling analysis of estimates of the superfluid fraction $\rho_S (L,T)$. We have computed $\rho_S
(L,T)$ in this work as well, obtaining results which are in quantitative agreement with those of Ref. \onlinecite{Markic2020}, at least within the statistical errors of both calculations \footnote{The numerical agreement between our results for $\rho_S(L,T)$ and those of Ref. \onlinecite{Markic2020} is scarcely surprising, as both calculations make use of the same computing methodology (in fact, the same computer code).}.
We argue that the findings of Ref. \onlinecite{Markic2020} are in fact consistent with TLL theory, if finite-size effects affecting estimates of $\rho_S(L,T)$ are properly taken into account.
It is well known that the superfluid fraction computed by QMC using the winding number estimator, is affected by finite-size effects, in {\it any} dimension; 1D is no exception. In 1D, finite-size corrections to the estimate of $\rho_S$ have been extensively studied in the context of the classical XY
model, which is a minimal model of superfluidity (see, for instance, Ref. \onlinecite{Hirashima2020}).
\\ \indent
Fig. \ref{1dhe} shows the superfluid fraction $\rho_S (L,T)$ computed for a strictly 1D system, namely $^4$He, at a linear density
$n_0=0.1$ \AA$^{-1}$. The estimates are obtained in this work using the same QMC computational methodology, on systems comprising different numbers $N$ of particles, comparable to those utilized in Ref. \onlinecite{Markic2020}, and plotted as a function of the product $LT$. It is important to note that statistical errors on $\rho_S$ are of the order of 0.007, i.e., the obvious differences between results obtained for different numbers $N$ of particles are {\it well outside} statistical uncertainties. The trend is the same as that observed in Ref. \onlinecite{Markic2020}, i.e., for a given value of $LT$, estimates of the superfluid fraction obtained with a greater number of particles are greater in value. Thus, the results of Ref. \onlinecite{Markic2020} {\it are entirely consistent with the} 1D {\it scenario}.
\\ \indent
The contention is made in Ref. \onlinecite{Markic2020} that an accurate fit to the estimates of the superfluid fraction obtained for the system of linear density $n_0=0.6$ \AA$^{-1}$ can be obtained by assuming a 2D scenario, i.e., in terms of an actual (BKT) superfluid transition occurring at a finite temperature (around 0.2 K). Besides the fact that, as noted above, there is no reason to discard the 1D scenario, the interpretation of the QMC data in terms of a 2D BKT transition offered in Ref. \onlinecite{Markic2020} (Fig. 10) is unconvincing, as evidenced by the large finite-size effects affecting their estimates of $\rho_S(L,T)$ {\it below} the estimated transition temperature $T_c$, which are not at all typical of a BKT transition (see, for instance, Ref. \onlinecite{Nguyen2021}).
\begin{figure}
\center
\includegraphics*[width=1. \linewidth]{fig5a.pdf}
\includegraphics*[width=1. \linewidth]{fig5b.pdf}
\caption{{\it Top}: Radial density profile of a $^4$He fluid of linear density $n_0=3$ \AA$^{-1}$, confined inside a pore of radius $R=10$ \AA. The result shown is at $T=1$ K, but does not change appreciably at lower $T$.
The interaction between $^4$He atoms and the wall of the pore is that of set $\mathcal{A}$ in Table \ref{table.1}. Inset shows the $^4$He density projected over a plane perpendicular to the axis of the cylindrical pore; brighter regions correspond to higher (3D) density.
{\it Bottom}: Static structure factor $S(k)$, computed along the pore axis for the same system, at three different system lengths and temperatures, such that $LT$ is held constant. Solid line is a linear fit to the data in the $k\to0$ limit, based on TLL theory.}
\label{radial_3}
\end{figure}
\\ \indent
One might wonder if a dimensional crossover may still take place, perhaps at different $n_0$ and/or pore radius. As we show below, the results obtained in this work, within the range of pore radii and density considered, are {\it all} amenable to a description in terms of the TLL theory as $L\to \infty$. The crucial point is that one needs to perform calculations on systems of sufficient length, in order for the 1D physics to emerge.
\\ \indent
Fig. \ref{radial_3} (top) shows the radial density profile $n(r)$ at $T=1$ K (it is independent of $L$ and remains unchanged at lower $T$, within the precision of the calculation) for a $^4$He fluid of linear density $n_0=3$ \AA$^{-1}$, confined inside a pore of radius $R=10$ \AA. The parameters of the adsorption potential are those of set $\mathcal{A}$ in Table \ref{table.1}. Clearly, in this case the fluid is nearly structure-less, with two floppy, largely overlapping concentric shells, arising essentially as a result of the hard core repulsion of $^4$He atoms at short distance. At low $T$ (i.e., $T \lesssim 2$ K) quantum-mechanical exchanges of helium atoms become frequent. An analysis of the number of particles involved in cycles of exchanges shows that atoms are delocalized between the two shells (or, as expressed in the formalism of Ref. \onlinecite{Yang2020}, ``hopping'' of helium atoms from one shell to the other occurs).
\\ \indent
It might be imagined that such a system ought not conform to the 1D paradigm, on account of the significant atomic motion in the transverse direction, especially since it paves the way to quantum-mechanical exchanges, which underlie SF in 2D and 3D $^4$He, but are increasingly suppressed as the 1D limit is approached. However, as shown in the bottom panel of Fig. \ref{radial_3}, our QMC-computed axial static structure factor displays the same data collapse described above for the single-file and single-shell systems, i.e., it
only depends on the product $LT$ for sufficiently long $L$, a hallmark of TLL behavior. We therefore conclude that even this double-shell confined $^4$He fluid behaves like a 1D system in the thermodynamic limit, one to which the formalism of Yang and Affleck \cite{Yang2020} ought to be applicable. The value of the Luttinger parameter $K$ in this case is $\sim 1.85$, again inferred from the low-$k$ behavior of $S(k)$.
\\ \indent
It must be stressed that numerically observing the 1D limiting behavior of cylindrically shaped 3D systems with a significant spatial extension in the radial direction requires that computer simulations be carried out on systems of sufficient length $L$.
To give a sense of how crucial this aspect is, we discuss in detail the values of $\rho_S (L,T)$ at $T=1$ K, computed by QMC for the system of Fig. \ref{radial_3}, for the three quoted lengths. For $L=40,\ 80$ \AA, i.e., with $N=120$ and 240 $^4$He atoms, the estimate of $\rho_S$ is the same within statistical uncertainties, namely 0.95(3).
A value so close to unity, obtained at a relatively high temperature and apparently insensitive to $L$, might lead one to conclude that the system is behaving essentially like bulk $^4$He, i.e., a 3D SF. However, as soon as $L$ is increased to $160$ \AA, corresponding to $N=480$,
$\rho_S$ falls to 0.70(3), showing how the estimates obtained with the shorter sizes do not offer a reliable representation of the behavior of the system in the $L\to\infty$ limit.
\\ \indent
It has been explicitly shown in the context of the 2D XY model how the length $L$ of a quasi-1D system must be significantly greater than a characteristic length $L_c$, which grows proportionally to the perimeter (area) of an empty (filled) cylindrical shell, in order for the 1D behavior to emerge \cite{Yamashita2009,Hirashima2020}.
To illustrate this point, Fig. \ref{xy_all} shows numerical results for the superfluid density $\rho_S(L,T)$ of an XY ladder system defined on an $L \times M$ lattice, with periodic boundary conditions in both directions. As shown in the figure, the results obtained for different numbers $M$ of rungs collapse onto the $M=1$ curve (i.e., a 1D system) if the length $L$ of the ladder is rescaled by $M$, i.e., $\rho_S ( L , M , T )= \rho_S ( L_{eff} , T )$ with $L_{eff}=L/M$, in the $L\to\infty$ limit.
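A minimal numerical check of this collapse, given $\rho_S$ curves sampled on a common ascending grid of $LT$ values for an $M$-rung ladder and for the 1D ($M=1$) chain, is to compare the former with the 1D curve evaluated at the rescaled abscissa $LT/M$ (a sketch; the function name is illustrative):
\begin{verbatim}
import numpy as np

def collapse_error(LT, rho_M, rho_1, M):
    """Maximum deviation between rho_S(L, M, T) and the 1D curve
    rho_S(L/M, T), obtained by interpolating the latter at LT/M."""
    rho_1_rescaled = np.interp(LT / M, LT, rho_1)
    return np.max(np.abs(rho_M - rho_1_rescaled))
\end{verbatim}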
\begin{figure}
\center
\includegraphics*[width=1 \linewidth]{fig6.pdf}
\caption{Superfluid fraction $\rho_S ( L , T )$ for a 2D XY ladder of length $L=4096$ and with $M$ rungs. Results are shown as a function of $LT$. Left panel shows results for
$M=1$ (green curve - one-dimensional case), $M=4$ (red curve - 4 rungs), and $M=8$ (blue curve - 8 rungs). The same results are shown in the right panel, with the system length rescaled by the corresponding $M$, showing the collapse of the curves onto each other.} \label{xy_all}
\end{figure}
\noindent
\subsection{Role of adsorption potential}
We conclude this section by briefly discussing the dependence of the physics of the system on the adsorption potential. Generally speaking, if the radius of the pore is small (how small depends on the adsorption potential, but typically $R\lesssim 4$ \AA), only a single file of atoms forms
along the axis; in the opposite limit, i.e., as $R$ becomes large, adsorption of $^4$He inside the pore approaches that on a flat substrate, i.e., continuous growth of a superfluid $^4$He film as a function of chemical potential, on top of possibly one or a few ``inert'' solid layers, depending on the strength of the substrate.
Inside a pore of diameter of the order of few nm, interesting, intermediate behavior can be observed, as adsorption
begins to take place near the surface, and proceeds through the formation of concentric shells. The degree of floppiness of each shell, and the corresponding overlap of adjacent shells (and quantum-mechanical exchanges of atoms at low $T$) depend on both the value of $R$ and on the adsorption potential.
\\ \indent
Fig. \ref{rcmp} shows QMC-computed radial density profiles of $^4$He inside a cylindrical pore of radius $R=10$ \AA, for the three different adsorption potentials whose parameters are summarized in Table \ref{table.1}. The $^4$He linear density is $n_0 = 4$ \AA$^{-1}$. As one can see, as the depth of the attractive well of the potential is increased, and concurrently the distance of closest approach $a$ decreases, the outer shell becomes sharper, forms closer to the substrate, and can pack a greater number of $^4$He atoms.\\ \indent For the particular case of potential $\mathcal{C}$, which is the most strongly adsorbing of the three, only one shell forms, coating the surface of the pore, of effective 2D density over 0.08 \AA$^{-2}$, i.e., above the 2D freezing density of $^4$He \cite{Gordillo1998}. Atoms confined within this shell have very little mobility, and exchanges are virtually non-existent. On increasing the $^4$He density inside the pore,
a second shell forms, well separated from the first and with no atomic exchanges between the two (inset of Fig. \ref{rcmp}). On further increasing the density, a central file of atoms may or may not appear, depending on geometry and adsorption potential, both of which affect commensuration.
\\ \indent
Very different behavior is observed on the weaker $\mathcal{A}$ and $\mathcal{B}$ substrates, on which the outer shell forms further away from the surface, and the shells are broad, floppy, liquid-like, and overlapping. For a given choice of radius and for a specific linear density, the value of the Luttinger parameter $K$ is lower for weaker adsorption potentials.
\begin{figure}
\centering
\includegraphics{fig7.pdf}
\caption{Radial density profiles for a $^4$He fluid in a cylindrical pore of radius $R=10$ \AA, for the three different adsorption potentials of Table \ref{table.1}, labeled accordingly. The linear $^4$He density in all cases is 4.0 \AA$^{-1}$. Inset compares profiles for case $\mathcal{C}$ of Table \ref{table.1} for linear densities 4.0 and 6.0 \AA$^{-1}$, the latter corresponding to the presence of two shells. All the results shown are at temperature $T=1$ K, and remain unchanged at lower $T$.}
\label{rcmp}
\end{figure}
\section{Conclusions}
\label{conclusions}
We have carried out in this work extensive QMC simulations at low temperature (typically 1 K or less) of a fluid of $^4$He atoms in the confines of a cylindrical pore of axial length greatly exceeding its radius. We have considered different values of the radius, up to 1 nm, and three different adsorption potentials, with greatly varying adsorption strength. The main purpose of this study was to assess recent contentions of dimensional crossover(s) taking place inside cylindrical pores, including for values of the pore radius as small as 4 \AA, corresponding to the formation of a single shell (as opposed to a single file of atoms along the axis).\\
\indent
Our analysis of the static structure factor, computed along the axis of the pore, shows data collapse in the limit of pore length $L\to \infty$, consistent with TLL theory, i.e., the accepted 1D paradigm, for all values of the linear $^4$He density $n_0$, pore radius $R$, and adsorption potentials utilized in this work. We also find this to be true regardless of the structure of the fluid inside the pore, i.e., whether it is a single file of atoms along the axis, a single hollow shell, or a fluid filling a section of the pore, in which few overlapping shells can be identified. This is consistent with the formalism of Yang and Affleck, which models the multi-shell structure of the confined fluid as a system of coupled TLLs, still exhibiting overall 1D behavior \cite{Yang2020}.
\\ \indent
We find the value of the Luttinger parameter $K$ to fall in the range between 1 and 2 for the two weaker adsorption potentials considered, i.e., considerably less than the values found in previous work, in which stronger adsorption potentials were used. This is consistent with what was found in other calculations \cite{Omiyinka2016}, namely that adsorption in confined geometries characterized by weak potentials can reduce the value of $K$, i.e., bring a fluid close to a topologically protected, quasi-superfluid phase, characterized by $K < 0.5$. Whether a further, substantial reduction of $K$ could be achieved by selecting even weaker substrates, e.g., Cs \cite{Chizmeshya1998}, will be the subject of future studies.
\\ \indent
We find that, as long as $L/R \gg 1$, the system always displays 1D behavior, i.e., we find no dimensional crossover.
We attribute the disagreement between our conclusions and those of Ref. \onlinecite{Markic2020} to the smallness of the system length considered, and to the incorrect assumption
that numerical data for the superfluid fraction should not be affected by finite-size corrections. In our view, the data presented in Ref. \onlinecite{Markic2020} are consistent with 1D behavior.
\\ \indent
As a final remark, the possibility of clearly identifying TLL behavior in $^4$He in nanopores paves the
way to a plethora of possible extensions of our research, toward, for instance, constructing suitably
designed junctions and/or networks of pores, in which to realize, in a tunable and controlled setting,
the novel phases and phase transitions predicted in junctions of TLLs \cite{Chamon2003,Oshikawa2006,giuliano_dual,Hou2012,Giuliano2007,Giuliano2019,Kane2020},
including the ones involving fixed points at reduced decoherence \cite{Novais2005,Giuliano2008}, with potentially countless applications.
\vspace{0.2cm}
{\bf Acknowledgements:} We thank Ian Affleck and Arturo Tagliacozzo for a critical review of the manuscript. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC). M. B. gratefully acknowledges the hospitality of the Universit\`a della Calabria, where most of the research work was performed. Computing support of ComputeCanada is acknowledged as well. A. N. was financially supported by POR Calabria FESR-FSE 2014/2020 - Linea B) Azione 10.5.12,
grant no.~A.5.1. D.G. acknowledges financial support from Italy's MIUR PRIN project TOP-SPIN (Grant No. PRIN 20177SL7HC).
\normalem
\section{Introduction}
\vspace{-0.0em}
Deep architectures have achieved significant success in various vision tasks including image classification and object detection. Such success has relied heavily on massive numbers of annotated examples. However, in real-world scenarios, we are frequently unable to collect enough labeled examples. This has motivated the study of few-shot learning (FSL), which focuses on building classifiers for novel categories from one or very few labeled examples.
Previous approaches to FSL include meta-learning and metric learning. Meta-learning aims to learn task-agnostic knowledge that improves optimization. Metric learning focuses on learning representations on base categories that can generalize to novel categories. Most previous FSL methods attempt to borrow a strong inductive bias from the \textit{supervised} learning of base classes. However, the challenge of FSL is that a {\em helpful} inductive bias, i.e., one that improves performance on novel classes, is hard to develop when there is a large difference between the base and novel classes.
To address this challenge, previous research explores using unlabeled examples from novel classes to improve the generalization on novel classes, which is referred to as transductive few-shot learning. Typical transductive few-shot learning (TFSL) methods include exploiting unlabeled novel examples that have been classified (by an initial classifier) with high confidence in order to self-train the model \cite{liu2018learning,li2019learning,chen2019block} or fine-tuning the model on unlabeled novel examples with an auxiliary loss serving as a regularizer \cite{dhillon2019baseline,rodriguez2020embedding,lichtenstein2020tafssl}. These methods still focus on \textbf{improving} the generalization of inductive bias borrowed from the supervised learning of base classes.
In comparison, our key motivation is that, unlabeled examples from novel classes not only can fine-tune or retrain a pre-trained model, but also can effectively train a new model from scratch. The advantage of doing so is that the model can generalize better on novel classes.
In this paper, we demonstrate the effectiveness of \textbf{an extremely simple baseline} for transductive few-shot learning.
\textbf{Our baseline does not use any labels on the base classes}.
We conduct self-supervised learning on unlabeled data from both the base and the novel classes to learn a feature embedding. When doing few-shot classification, we directly learn a linear classifier on top of the feature embedding from the few given labeled examples and then classify the testing examples.
Surprisingly, this baseline significantly outperforms state-of-the-art transductive few-shot learning methods, which have additional access to base-class labels.
Despite its empirical performance, this baseline should not be taken as ``the final solution'' for few-shot learning. We believe that meta-learning, metric learning, data augmentation, and transfer learning are also critical for effective few-shot learning. However, this baseline can help us interpret existing results and indicates that using self-supervised learning to learn a generalized representation could be another important tool in addressing few-shot learning.
To investigate the best possible way to use self-supervised learning in few-shot learning, it is necessary to examine more carefully the role of features learned through self-supervision in few-shot learning. For brevity, we refer to these features as `self-supervised features'.
\textbf{(1)} In a non-transductive few-shot learning setting, we explore the \textit{complementarity} and \textit{transferability} of supervised and self-supervised features. By directly concatenating self-supervised and supervised features, we get a 2-3\% performance boost and achieve new state-of-the-art results. We conduct cross-domain few-shot learning and show that supervised features have better transferability than self-supervised features. However, when more novel labeled examples are given, self-supervised features overtake supervised features.
\textbf{(2)} In a transductive few-shot learning setting, we show that simple off-the-shelf self-supervised learning significantly outperforms other competitors who have additional access to base-class labels.
We confirm the performance gain is not from a better representation per se, but from a representation that generalizes better to the novel classes. The evidence is that the self-supervised features achieve the top performance on the novel classes but \emph{not} on other unseen classes.
\textbf{(3)} For both non-transductive and transductive settings, we conduct comprehensive experiments to explore the effect of different backbone architectures and datasets. We report results using a shallow ResNet, a very deep ResNet, a very wide ResNet, and a specially designed shallow ResNet that is commonly used for few-shot learning. While deeper models generally have significantly better performance for the standard classification task on both large (e.g, ImageNet~\cite{deng2009imagenet}) and small datasets (e.g., CIFAR-10) as shown in \cite{he2015deep}, the performance gain is relatively small for supervised features in few-shot learning. In comparison, self-supervised features show a much larger improvement when using a deeper network, especially in the transductive setting.
We also conduct experiments on various datasets, including large datasets, small datasets, and datasets that have small or large domain differences between base and novel classes. We show the efficiency and robustness of self-supervised features on all kinds of datasets except for very small datasets.
\vspace{-0.0em}
\section{Related Work}
\vspace{-0.0em}
\textbf{Few-shot Learning.} Few-shot learning is a classic problem \cite{miller2000learning}, which refers to learning from one or a few labeled examples for each novel class. Existing FSL methods can be broadly grouped into three categories: data augmentation, meta-learning, and metric learning. Data augmentation methods synthesize \cite{imaginaryData,Delta-encoder,chen2019multi}, hallucinate \cite{2017ICCVaug} or deform \cite{chen2019image} images to generate additional examples to address the training data scarcity. Meta-learning~\cite{MAML,Sachin2017,MetaNetwork,lee2019meta} attempts to learn a parameterized mapping from limited training examples to hidden parameters that accelerate or improve the optimization procedure. Metric learning \cite{relation_net,bateni2020improved,li2020boosting} aims at learning a transferable metric space (or embedding). MatchingNet \cite{matchingnet_1shot} and ProtoNet \cite{prototype_network} adopt cosine and Euclidean distance to separate instances belonging to different classes. Recently, some works \cite{chen2019multi,liu2020negative,tian2020rethinking} showed that learning a classifier on top of supervised features can achieve surprisingly competitive performance.
\begin{figure*}[t]
\centering
\includegraphics[width=16cm]{fig/setting.pdf}
\vspace{-0.1em}
\vspace{-0.2cm}
\caption{\textbf{An illustration of different few-shot learning settings.} There are four few-shot settings, including few-shot learning (FSL), transductive few-shot learning (TFSL), unlabeled-base-class few-shot learning (UBC-FSL), and unlabeled-base-class transductive few-shot learning (UBC-TFSL).
The differences between these settings lie in \textbf{whether labels for the base-class examples and unlabeled examples from the novel classes are available.} }
\label{fig:setting}
\vspace{-0.2em}
\end{figure*}
\textbf{Transductive Few-shot Learning.}
TFSL methods use the distribution support of unlabeled novel instances to help few-shot learning. Some TFSL methods \cite{liu2018learning,li2019learning,Wang_2020_CVPR} exploit unlabeled instances classified with high confidence to train the model.
\cite{chen2019block} propose a data augmentation method to directly mix base examples and selected novel examples in the image domain to learn generalized features.
In addition, previous works \cite{dhillon2019baseline,rodriguez2020embedding,lichtenstein2020tafssl} use unlabeled testing instances to construct an auxiliary loss serving as a regularizer to adapt the inductive bias.
These methods borrow inductive bias from the supervised learning of the base classes and further utilize unlabeled novel examples to improve it.
In comparison, we show that unlabeled novel examples, together with unlabeled base examples, can directly develop a very strong inductive bias, without any base-class labels.
\textbf{Self-supervised Learning.} Self-supervised learning aims to explore the internal data distribution and learns discriminative features without annotations. Some work takes predicting rotation \cite{gidaris2018unsupervised}, counting \cite{noroozi2017representation}, predicting the relative position of patches \cite{doersch2015unsupervised}, colorization \cite{zhang2016colorful,larsson2017colorization}, and solving jigsaw puzzles \cite{noroozi2016unsupervised} as self-supervised tasks to learn representations. Recently, instance discrimination \cite{wu2018unsupervised,chen2020simclr,grill2020bootstrap,tian2019contrastive} has attracted much attention.
He et al. \cite{he2020momentum} propose a momentum contrast to update models and show performance superior to supervised learning.
In this work, we explore the generalization ability of self-supervised features to new classes in the few-shot setting, i.e., in circumstances where few labeled examples of novel classes are given. Other works that have explored transductive techniques, e.g., \cite{he2020momentum}, have used large training sets for new classes.
Gidaris et al. \cite{gidaris2019boosting} and Su et al. \cite{su2019does} take rotation prediction and jigsaw solving as auxiliary tasks to learn better representations on base classes to help few-shot learning.
Tian et al. \cite{tian2020rethinking} utilize contrastive learning to learn features for non-transductive few-shot learning.
In comparison, while previous works only conduct self-supervised learning in the non-transductive setting, we confirm the effectiveness of self-supervised learning in a transductive few-shot setting. We claim this as our major contribution.
\vspace{-0.0em}
\section{Methods}
\vspace{-0.0em}
\label{setting}
In Fig.~\ref{fig:setting}, we illustrate our few-shot learning settings.
We denote the base category set as $C_{base}$ and the novel category set as $C_{novel}$, in which $C_{base} \cap C_{novel} = \emptyset$. Correspondingly, we denote the labeled base dataset as $D_{base} = \{(I_i,y_i)\}$ with $y_i\in C_{base}$, the labeled novel dataset as $D_{novel}=\{(I_i,y_i)\}$ with $y_i\in C_{novel}$, the unlabeled base dataset as $U_{base} = \{I_i\}$ with (hidden) $y_i\in C_{base}$, and the unlabeled novel dataset as $U_{novel}=\{I_i\}$ with (hidden) $y_i\in C_{novel}$.
In a standard few-shot learning task, we are only given labeled examples from base classes, so the training set is $D_{FSL} = D_{base}$. For transductive few-shot learning (TFSL), we are given $D_{TFSL} = D_{base} \cup U_{novel}$. For unlabeled-base-class few-shot learning (UBC-FSL), we have $D_{UBC-FSL} = U_{base}$. For unlabeled-base-class transductive few-shot learning (UBC-TFSL), we denote the training set as $D_{UBC-TFSL} = U_{base} \cup U_{novel}$.
Note that UBC-TFSL has strictly less supervision than TFSL; to our knowledge, \textbf{we are the first to explore this setting}.
These four few-shot learning settings use the same evaluation protocol as in previous works \cite{matchingnet_1shot}. At inference time, we are given a collection of \emph{N-way-m-shot} classification tasks sampled from $D_{novel}$ to evaluate our methods.
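For concreteness, a minimal sketch of how an \emph{N-way-m-shot} episode is drawn from a labeled pool (the function name and the choice of 15 query images per class are illustrative) is the following:
\begin{verbatim}
import random
from collections import defaultdict

def sample_episode(labels, n_way=5, m_shot=1, n_query=15):
    """Return indices of support and query examples for one episode.
    Assumes each class has at least m_shot + n_query examples."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = random.sample(sorted(by_class), n_way)
    support, query = [], []
    for c in classes:
        picked = random.sample(by_class[c], m_shot + n_query)
        support += picked[:m_shot]
        query += picked[m_shot:]
    return support, query
\end{verbatim}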
\begin{figure*}[t]
\centering
\includegraphics[width=16cm]{fig/keyvis.pdf}
\vspace{-0.1em}
\vspace{-0.2cm}
\caption{\textbf{Comparison between different methods under the non-transductive and transductive few-shot settings.} For non-transductive few-shot learning, there is great complementarity between supervised and self-supervised features. For transductive few-shot learning, our UBC-TFSL outperforms other competitors even without using base-class labels.
}
\label{fig:keyvis}
\vspace{-0.2em}
\end{figure*}
\vspace{-0.0em}
\subsection{Self-supervised learning}
\vspace{-0.0em}
Here we take instance discrimination as our self-supervision task due to its efficiency.
We follow momentum contrast \cite{he2020momentum}, where each training example $x_i$ is augmented twice into $x_i^q$ and $x_i^k$, which are then fed into two encoders, yielding the embeddings $q_i=f_q(x_i^q)$ and $k_i=f_k(x_i^k)$. A standard log-softmax function is used to discriminate a positive pair (two views augmented from the same image) from several negative pairs (views augmented from two different images):
\begin{equation}
L(q_i,k_i) = -\log \left ( \frac{\exp(q_i^T k_i/\tau)}{\exp(q_i^T k_i/\tau) + \sum_{j\neq i} \exp(q_i^T k_j /\tau)}\right)
\label{eqn:loss}
\end{equation}
where $\tau$ is a temperature hyper-parameter. Our implementation is based on MoCo-v2 \cite{chen2020mocov2}; please refer to it for further details. We also try other self-supervised methods in \cref{selfmethods}.
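A minimal PyTorch sketch of the loss in Eq.~(\ref{eqn:loss}), following the MoCo pseudocode, is given below; the momentum update of $f_k$ and the maintenance of the negative-key queue are omitted, and all embeddings are assumed L2-normalized:
\begin{verbatim}
import torch
import torch.nn.functional as F

def info_nce(q, k, queue, tau=0.2):
    """q, k: (B, d) embeddings of two views of the same B images.
    queue: (d, K) memory bank of negative keys."""
    l_pos = torch.einsum('bd,bd->b', q, k).unsqueeze(1)  # (B, 1)
    l_neg = torch.einsum('bd,dk->bk', q, queue)          # (B, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)  # positive key sits at index 0
\end{verbatim}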
\vspace{-0.0em}
\subsection{Evaluation Protocols}
\vspace{-0.0em}
Here we introduce our protocols for the four different few-shot learning settings.
All protocols consist of a training phase and an evaluation phase. In the training phase, we learn a feature embedding on the training sets $D_{FSL}$, $D_{TFSL}$, $D_{UBC-FSL}$, and $D_{UBC-TFSL}$. In the evaluation phase, we evaluate the few-shot classification performance.
For simplicity and efficiency, we learn a logistic regression classifier on top of the learned feature embeddings of the $N \cdot m$ training examples and then classify the testing examples.
Training and testing examples come from the given \emph{N-way-m-shot} classification task. This procedure is repeated $1000$ times and we report the average few-shot classification accuracies with 95\% confidence intervals. A minimal sketch of this evaluation loop is given below, after which we introduce our methods.
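The sketch assumes \texttt{feats} and \texttt{labels} are precomputed arrays of embeddings and class labels for the evaluation split, and reuses the illustrative \texttt{sample\_episode} helper sketched in Sec.~\ref{setting}:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def evaluate(feats, labels, episodes=1000, n_way=5, m_shot=1):
    accs = []
    for _ in range(episodes):
        s, q = sample_episode(labels, n_way, m_shot)
        clf = LogisticRegression(max_iter=1000).fit(feats[s], labels[s])
        accs.append(clf.score(feats[q], labels[q]))
    accs = np.asarray(accs)
    # mean accuracy with a 95% confidence interval over episodes
    return accs.mean(), 1.96 * accs.std() / np.sqrt(len(accs))
\end{verbatim}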
\textbf{Few-shot learning baseline.}
We learn our embedding network on $D_{FSL}$ using cross-entropy loss under a standard classification process. We use the logit layer as the feature embedding, as it is slightly better than the pre-classification layer. This baseline is very simple and achieves state-of-the-art performance.
\textbf{Unlabeled-base-class few-shot learning.} For UBC-FSL, we learn from self-supervised supervision on $D_{UBC-FSL}$. We follow MoCo-v2 to do instance discrimination. The output of the final layer of the model is used as the feature embedding.
\textbf{Unlabeled-base-class transductive few-shot learning.} For UBC-TFSL, our method is similar to our UBC-FSL method. The difference is that we train on $D_{UBC-TFSL}$, which has additional access to unlabeled test instances.
\textbf{Combination of FSL baseline and UBC-FSL.} This method works under the standard, non-transductive few-shot learning setting. We explore the complementarity between supervised features (from the FSL baseline) and self-supervised features (from UBC-FSL). We directly concatenate normalized supervised features and normalized self-supervised features and then do normalization again. This feature is used as the feature embedding, and we refer to this method as ``Combined''.
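In code, the combination is simply the following (a numpy sketch):
\begin{verbatim}
import numpy as np

def l2norm(x, eps=1e-10):
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

def combined(feat_sup, feat_self):
    """Concatenate L2-normalized supervised and self-supervised
    features, then L2-normalize the concatenation again."""
    return l2norm(np.concatenate([l2norm(feat_sup), l2norm(feat_self)],
                                 axis=1))
\end{verbatim}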
\begin{table*}[h]
\caption{\textbf{Top-1 accuracies(\%) on \emph{mini}ImageNet and \emph{tiered}ImageNet.} We report the mean of 1000 randomly generated test episodes as well as the 95\% confidence intervals. The top results are highlighted in \first{blue} and the second-best results in \second{green}.
We provide results on Caltech-256 and \emph{mini}ImageNet\&CUB in the \textbf{supplementary}.
}
\centering
\small
\begin{tabular}{cclcccc}
\hline
& & & \multicolumn{2}{c}{
\textbf{\emph{mini}ImageNet}
} & \multicolumn{2}{c}{\textbf{\emph{tiered}ImageNet}}\tabularnewline
\textbf{setting} & \textbf{method} & \textbf{backbone} & \textbf{1-shot} & \textbf{5-shot} & \textbf{1-shot} & \textbf{5-shot}\tabularnewline
\hline
\multirow{17}{*}{\textbf{Non-transductive}}
& MetaOptNet & ResNet-12$^*$ & 62.6$\pm$0.6 & 78.6$\pm$0.4 & 65.9$\pm$0.7 & 81.5$\pm$0.5\tabularnewline
& Distill & ResNet-12$^*$ & \second{64.8$\pm$0.6} & \first{82.1$\pm$0.4} & 71.5$\pm$0.6 & 86.0$\pm$0.4\tabularnewline
& Neg-Cosine & ResNet-12$^*$ & 63.8$\pm$0.8 & 81.5$\pm$0.5 & - & -\tabularnewline
& Neg-Cosine & WRN-28-10 & 61.7$\pm$0.8 & 81.7$\pm$0.5 & - & -\tabularnewline
& UBC-FSL (Ours) & ResNet-12$^*$ & 47.8$\pm$0.6 & 68.5$\pm$0.5 & 52.8$\pm$0.6 & 69.8$\pm$0.6\tabularnewline
& UBC-FSL (Ours) & ResNet-12 & 56.9$\pm$0.6 & 76.5$\pm$0.4 & 58.0$\pm$0.7 & 76.3$\pm$0.5\tabularnewline
& UBC-FSL (Ours) & ResNet-50 & 56.2$\pm$0.6 & 75.4$\pm$0.4 & 66.6$\pm$0.7 & 83.1$\pm$0.5\tabularnewline
& UBC-FSL (Ours) & ResNet-101 & 57.5$\pm$0.6 & {77.2$\pm$0.4} & {68.0$\pm$0.7} & {84.3$\pm$0.5}\tabularnewline
& UBC-FSL (Ours) & WRN-28-10 & 57.1$\pm$0.6 & {76.5$\pm$0.4} & {67.5$\pm$0.7} & {83.9$\pm$0.5}\tabularnewline
& FSL baseline & ResNet-12$^*$ & 61.7$\pm$0.7 & 79.4$\pm$0.5 & 69.6$\pm$0.7 & 84.2$\pm$0.6\tabularnewline
& FSL baseline & ResNet-12 & 61.1$\pm$0.6 & 76.1$\pm$0.6 & 66.4$\pm$0.7 & 81.3$\pm$0.5\tabularnewline
& FSL baseline & ResNet-50 & 61.3$\pm$0.6 & 76.0$\pm$0.4 & 69.4$\pm$0.7 & 83.3$\pm$0.5\tabularnewline
& FSL baseline & ResNet-101 & 62.7$\pm$0.7 & 77.6$\pm$0.5 & 70.5$\pm$0.7 & 83.8$\pm$0.5\tabularnewline
& FSL baseline & WRN-28-10 & 62.4$\pm$0.7 & 77.5$\pm$0.5 & 70.2$\pm$0.7 & 83.5$\pm$0.5\tabularnewline
& Combined (Ours) & ResNet-12$^*$ & 59.8$\pm$0.8 & 73.3$\pm$0.7 & 69.2$\pm$0.7 & 82.0$\pm$0.6\tabularnewline
& Combined (Ours) & ResNet-12 & 63.8$\pm$0.7 & 79.9$\pm$0.6 & 67.8$\pm$0.7 & 83.0$\pm$0.5\tabularnewline
& Combined (Ours) & ResNet-50 & 63.9$\pm$0.9 & 79.9$\pm$0.5 & {72.3$\pm$0.7} & {86.1$\pm$0.5}\tabularnewline
& Combined (Ours) & ResNet-101 & \first{65.6$\pm$0.6} & \second{81.6$\pm$0.4} & \first{73.5$\pm$0.7} & \first{86.7$\pm$0.5}\tabularnewline
& Combined (Ours) & WRN-28-10 & {65.2$\pm$0.6} & {81.2$\pm$0.4} & \second{73.1$\pm$0.7} & \second{86.4$\pm$0.5}\tabularnewline
\hline
\multirow{11}{*}{\textbf{Transductive}} & ICI & ResNet-12$^*$ & 66.8$\pm$1.1 & 79.1$\pm$0.7 & 80.7 $\pm$1.1 & 87.9$\pm$0.6\tabularnewline
& ICI & ResNet-50 & 60.2$\pm$1.1 & 75.2$\pm$0.7 & 78.6$\pm$1.1 & 86.8$\pm$0.6\tabularnewline
& ICI & ResNet-101 & 64.3$\pm$1.2 & 78.1$\pm$0.7 & 82.4$\pm$1.0 & 89.4$\pm$0.6\tabularnewline
& TAFSSL & DenseNet& 80.1$\pm$0.2 & 85.7$\pm$0.1 & {86.0$\pm$0.2} & 89.3$\pm$0.1\tabularnewline
& EPNet & WRN-28-10 & 79.2$\pm$0.9 & 88.0$\pm$0.5 & 83.6$\pm$0.9 & 89.3$\pm$0.5\tabularnewline
& EPNet (full) & WRN-28-10 & {80.2$\pm$0.8} & {88.9$\pm$0.5} & 84.8$\pm$0.8 & 89.9$\pm$0.6\tabularnewline
& UBC-TFSL (Ours) & ResNet-12$^*$ & 51.1$\pm$0.9 & 74.6$\pm$0.6 & 57.2$\pm$0.6 & 74.7$\pm$0.6\tabularnewline
& UBC-TFSL (Ours) & ResNet-12 & 70.3$\pm$0.6 & 86.9$\pm$0.3 & 65.7$\pm$0.7 & 81.4$\pm$0.5\tabularnewline
& UBC-TFSL (Ours) & ResNet-50 & 79.1$\pm$0.6 & {92.1$\pm$0.3} & 81.0$\pm$0.6 & {90.7$\pm$0.4}\tabularnewline
& UBC-TFSL (Ours) & ResNet-101 & \first{80.4$\pm$0.6} & \first{92.8$\pm$0.2} & \first{87.0$\pm$0.6} & \first{93.6$\pm$0.3}\tabularnewline
& UBC-TFSL (Ours) & WRN-28-10 & \second{80.3$\pm$0.6} & \second{92.4$\pm$0.2} & \second{85.7$\pm$0.6} & \second{93.0$\pm$0.3}\tabularnewline
\hline
\end{tabular}
\label{tab:benchmark}
\vspace{-0.2em}
\end{table*}
\vspace{-0.0em}
\section{Experiments}
\vspace{-0.0em}
We define two types of experiments based upon whether the base and novel classes come from the same dataset or not. We refer to the standard FSL paradigm in which the base and novel classes come from the same dataset (e.g., ImageNet) as \textit{single-domain} FSL. We also perform experiments in which the novel classes are chosen from a separate dataset, which we call \textit{cross-domain} FSL. In cross-domain FSL, the domain differences between the base and novel classes are much larger than in single-domain FSL. For both settings, we report 5-way-1-shot and 5-way-5-shot accuracies.
\textbf{Datasets.} For single-domain FSL, we run experiments on three datasets: \emph{mini}ImageNet~\cite{matchingnet_1shot}, \emph{tiered}ImageNet~\cite{ren2018meta}, and Caltech-256~\cite{griffin2007caltech}.
\emph{mini}ImageNet contains 100 classes randomly selected from ImageNet~\cite{deng2009imagenet} with 600 images per class. We follow \cite{Sachin2017} to split the categories into 64 base, 16 validation, and 20 novel classes. \emph{tiered}ImageNet is another subset of ImageNet but has far more classes (608 classes). These classes are first divided into 34 groups and then further divided into 20 training groups (351 classes), 6 validation groups (97 classes), and 8 testing groups (160 classes), which ensures the distinction between training and testing sets. Caltech-256 (Caltech) has 30607 images from 256 classes. Following \cite{chen2019multi}, we split it into 150, 56, and 50 classes for training, validation,
and testing respectively.
For the cross-domain experiments, we construct a dataset that has high dissimilarity between base and novel classes by drawing the base classes from one dataset and the novel classes from another.
We denote this dataset as ``\emph{mini}ImageNet\&CUB'', which is a combination of the \emph{mini}ImageNet and CUB-200-2011 (CUB) datasets \cite{wah2011caltech}.
CUB is a fine-grained image classification dataset including 200 bird classes and 11788 bird images. We follow \cite{2018arXiv180204376H} to split the categories into 100 base, 50 validation, and 50 novel classes.
In \emph{mini}ImageNet\&CUB, the training set (base classes) contains 64 classes from \emph{mini}ImageNet and the testing set (novel classes) contains 100 classes from CUB. Specifically, the 64 classes in the training set are the 64 base classes in \emph{mini}ImageNet and the 100 classes in the test set are the 100 base classes in CUB.
\textbf{Competitors.} We compare our methods with the top few-shot learning methods: MetaOptNet \cite{lee2019meta}, Distill \cite{tian2020rethinking}, and Neg-Cosine \cite{liu2020negative}.
We also compare with three transductive few-shot learning methods:
ICI \cite{Wang_2020_CVPR}, TAFSSL \cite{lichtenstein2020tafssl}, and EPNet \cite{rodriguez2020embedding}. TFSL methods have 100 unlabeled images per novel class by default. EPNet (full)
and our UBC-TFSL use all of the images of the novel classes as unlabeled training samples.
\textbf{Implementation details.}
Most of our settings are the same as \cite{chen2020mocov2}. We use a mini-batch size of 256 with 8 GPUs. We set the learning rate to 0.03 and use cosine annealing to decrease it. The feature dimension for the contrastive loss is 128. The momentum for the memory update is 0.5 and the temperature is set to 0.07. For \emph{mini}ImageNet, \emph{mini}ImageNet\&CUB, and Caltech-256, we sample 2048 negative pairs in our contrastive loss and train for 1000 epochs. For \emph{tiered}ImageNet, we sample 20480 negative pairs and train for 800 epochs.
\textbf{Architecture.} We use ResNet-12$^*$, ResNet-12, ResNet-50, ResNet-101, and WRN-28-10 as our backbone architectures. ResNet-12$^*$ is a modified version of ResNet-12 and will be introduced in Sec.~\ref{arch}. WRN-28-10~\cite{BMVC2016_87} is a wide 28-layer variant of ResNet with 36.5M parameters, whereas ResNet-50 and ResNet-101 have 25.6M and 44.4M parameters respectively.
\vspace{-0.0em}
\subsection{\zt{Self-supervised learning can develop a strong inductive bias with no base-class labels} }
\vspace{-0.0em}
\label{alone}
\label{benchmark}
\begin{figure*}[t]
\centering
\includegraphics[width=16cm]{fig/arch.pdf}
\vspace{-0.1em}
\vspace{-0.2cm}
\caption{\textbf{Few-shot classification accuracy with various depths of backbone architectures. } Our UBC-FSL, FSL baseline, UBC-TFSL, and Combined have better performance with a deeper network.
\textbf{The performance gain is relatively small for supervised features (FSL baseline) and large for self-supervised features (UBC-FSL), especially in a transductive setting (UBC-TFSL).}
}
\label{fig:arch}
\vspace{-0.2em}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=16cm]{fig/transfer2_1.pdf}
\vspace{-0.1em}
\vspace{-0.2cm}
\caption{
\textbf{Accuracy of 1-shot cross-domain FSL (first row) or single-domain FSL (second row).} First row: we visualize 1-shot test accuracy on the source dataset (x-axis) and the target dataset (y-axis). Second row: we visualize 1-shot accuracy on the base classes (x-axis) and the novel classes (y-axis). Squares, diamonds, and triangles denote ResNet-12, ResNet-50, and ResNet-101 respectively. We provide detailed statistics in the supplementary.
From the first row, the results suggest that \textbf{supervised features are better when transferring to a new target dataset}, even though self-supervised features (UBC-TFSL) have more training data and better performance on the source dataset. From the second row, the results suggest that in few-shot learning, even though supervised features perform better on base classes, they underperform self-supervised features (UBC-TFSL) on novel classes. This confirms that \textbf{UBC-TFSL benefits from a representation that generalizes better to the novel classes}.
}
\label{fig:transfer}
\vspace{-0.2em}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=13cm]{fig/shots.pdf}
\vspace{-0.1em}
\vspace{-0.2cm}
\caption{\textbf{Few-shot classification accuracy with larger shots.} We use ResNet-50 as our backbone architecture and evaluate on \emph{tiered}ImageNet and Caltech (transferred from \emph{tiered}ImageNet). Trained on the same data, supervised features (FSL baseline) outperform self-supervised features (UBC-FSL) in the few-shot setting. However, self-supervised features are better when a large number (100-shot) of labeled novel examples is given. }
\label{fig:shots}
\vspace{-0.2em}
\end{figure*}
The authors of \cite{su2019does} shed light on improving few-shot learning with self-supervision and claim that ``self-supervision alone is not enough'' for FSL. We agree that a gap remains between unlabeled-base-class few-shot learning and standard few-shot learning.
However, in the transductive few-shot classification setting, we present the surprising result that \textbf{state-of-the-art performance can be obtained without using any labeled examples from the base classes at all.}
The results on \emph{mini}ImageNet and \emph{tiered}ImageNet are shown in Table \ref{tab:benchmark}. A better visualization is shown in Fig.~\ref{fig:keyvis}. Results on Caltech-256 and \emph{mini}ImageNet\&CUB are provided in supplementary material.
We notice that \textbf{(1) UBC-FSL shows some potential.} Even without any base-class labels, it underperforms the state-of-the-art few-shot methods by only $2$--$7\%$ in 1-shot and 5-shot accuracy on \emph{mini}ImageNet and \emph{tiered}ImageNet.
\textbf{(2) There is great complementarity between supervised and self-supervised features.}
Combining supervised and self-supervised features (``Combined'') beats the FSL baseline on all four datasets for all backbone networks. Specifically, it gives 4\% and 2.9\% improvements in 5-shot accuracy on \emph{mini}ImageNet and \emph{tiered}ImageNet when using ResNet-101. Also, it beats all other FSL competitors on \emph{tiered}ImageNet.
\textbf{(3) In the transductive few-shot classification setting, state-of-the-art performance can be obtained without using any labeled examples at all.} Even without any base-class labels, UBC-TFSL significantly surpasses all other methods.
In Table~\ref{tab:benchmark}, it outperforms all other TFSL methods by $3.5\%$ and $3.9\%$ for 5-shot accuracy on \emph{mini}ImageNet and \emph{tiered}ImageNet respectively.
\textbf{(4) The FSL baseline struggles to learn a strong inductive bias when there is high dissimilarity between base and novel classes (cross-domain), whereas such dissimilarity has a relatively minor effect on UBC-TFSL.} On \emph{mini}ImageNet\&CUB (please refer to the supplementary), UBC-TFSL outperforms the FSL baseline by $\sim$$15\%$ and $\sim$$13\%$ for 1-shot and 5-shot accuracy respectively.
\vspace{-0.0em}
\subsection{A deeper network is better}
\vspace{-0.0em}
\label{arch}
While deeper models generally perform better in the standard classification task on both large (e.g., ImageNet \cite{deng2009imagenet}) and small datasets (e.g., CIFAR-10), as shown in \cite{he2015deep}, most previous few-shot methods \cite{tian2020rethinking,lee2019meta} report their best results with a modified version (ResNet-12$^*$) of ResNet-12 \cite{he2015deep}.
ResNet-12$^*$ employs several modifications: it is $1.25\times$ wider, the input size is changed from $224\times 224$ to $84\times 84$, Leaky ReLUs replace ReLUs, additional Dropblock layers \cite{ghiasi2018dropblock} are added, and the global pooling layer after the last residual block is removed.
The effect of the backbone architecture has not been examined thoroughly in the few-shot learning literature. We want to know whether a very deep network (e.g., ResNet-101) can bring significant improvements in few-shot classification, as it does in the standard classification task.
More importantly, we want to explore the differences between supervised and self-supervised features when using various backbone architectures in few-shot learning.
As shown in Table \ref{tab:benchmark}, we report results using ResNet-12$^*$, ResNet-12, ResNet-50, ResNet-101, and WRN-28-10. To better compare the effect of the depth of the backbone architecture, we visualize the performance in Fig.~\ref{fig:arch}. We notice that (1) ResNet-101 has the best performance and all our baselines benefit from a deeper network in most cases. (2) The commonly used ResNet-12$^*$ works well for the FSL baseline but is not well suited to the self-supervised baselines. (3) The wide network WRN-28-10 performs very well for all our baselines, only slightly underperforming ResNet-101. We confirm that few-shot learning can indeed benefit from a deeper or wider backbone architecture. \textbf{The performance gain is small for supervised features (FSL baseline) and large for self-supervised features (UBC-FSL), especially in a transductive setting (UBC-TFSL).}
\begin{figure*}[h]
\centering
\includegraphics[width=13cm]{fig/scale.pdf}
\vspace{-0.1em}
\vspace{-0.2cm}
\caption{\textbf{1-shot testing accuracy under various scales of dataset size.} ResNet-12 is our backbone architecture. In (a), we compare UBC-FSL, FSL baseline, UBC-TFSL, and Combined on three datasets of different sizes (30607, 60000, and 779165 images). In (b), we randomly select part of \emph{mini}ImageNet (e.g., 20\% of the whole dataset) and compare our methods.}
\label{fig:scale}
\vspace{-0.2em}
\end{figure*}
\vspace{-0.0em}
\subsection{Supervised vs.~self-supervised features in cross-domain FSL}
\vspace{-0.0em}
Another interesting question is whether models learned in a single domain can perform well in a new domain (with highly dissimilar classes). To study this, we conduct cross-domain FSL, in which we learn models on \emph{mini}ImageNet or \emph{tiered}ImageNet and evaluate them on Caltech-256 and CUB. Specifically, the FSL baseline and UBC-FSL are trained on the base classes of the source dataset, and UBC-TFSL is trained on both the base and novel classes of the source dataset. Then, we evaluate our methods on the testing set of the target datasets (Caltech-256 and CUB).
Notice that, as applied here, UBC-TFSL does not qualify as a truly transductive method, since the model does not have access to unlabeled data from the testing set. Instead, we are testing whether this model can improve its performance on cross-domain classes with unlabeled data from \textbf{additional} classes in the source dataset.
Previous work \cite{he2020momentum} compares supervised and self-supervised features when transferring to a new domain for classification, object detection, and instance segmentation, and shows that self-supervised features transfer better for these tasks. However, that conclusion applies when large numbers of labeled examples are used to learn the final linear classifier. In the few-shot setting, we show that \textbf{supervised features do better than self-supervised features}.
In the first row of Fig.~\ref{fig:transfer}, we compare UBC-FSL, the FSL baseline, UBC-TFSL, and Combined in cross-domain FSL. The x-axis and y-axis denote the 1-shot testing accuracy on the source and target dataset respectively. Surprisingly, supervised features (FSL baseline, Combined) significantly outperform self-supervised features (UBC-FSL, UBC-TFSL) on the target dataset, even though they have lower accuracy on the source dataset. In the second row of Fig.~\ref{fig:transfer}, we visualize the performance of our methods on base and novel classes in single-domain FSL. The x-axis and y-axis denote the 1-shot accuracy on base and novel classes respectively. As the figure shows, UBC-TFSL (gray points) outperforms the FSL baseline (orange) on novel classes but underperforms on base classes.
These experiments show that UBC-TFSL has mediocre performance when it does \textbf{not} have access to unlabeled data from the test classes, but performs extremely well when it does. In other words, it is not simply access to additional unlabeled data that helps, but rather, data from the test classes themselves.
\vspace{-0.0em}
\subsection{Supervised vs.~self-supervised features with larger shots}
\vspace{-0.0em}
\label{shots}
In Fig.~\ref{fig:shots}, we compare UBC-FSL, the FSL baseline, UBC-TFSL and Combined with larger shots using ResNet-50 on \emph{tiered}ImageNet and \emph{tiered}ImageNet-Caltech (cross-domain FSL). For 1-shot learning, there is a large gap of around 5\% between UBC-FSL and the FSL baseline. However, as the number of shots grows, this gap gradually diminishes. For 100-shot on \emph{tiered}ImageNet and 80-shot on Caltech, UBC-FSL even outperforms the FSL baseline by 1.3\% and
0.6\% respectively.
We suggest that \textbf{supervised features may contain higher-level semantic concepts that are easier to digest with a few training instances} while {self-supervised features have better transferability with abundant training data}. This statement is compatible with previous work~\cite{he2020momentum}, which uses abundant labeled data to learn the final classification layer and claims that self-supervised features have better transferability.
\vspace{-0.0em}
\subsection{Supervised vs.~self-supervised features and dataset size}
\vspace{-0.0em}
\label{size}
In this section, we compare supervised and self-supervised features under various dataset sizes. We conduct experiments on Caltech, \emph{mini}ImageNet, and \emph{tiered}ImageNet, which have 30607, 60000, and 779165 images respectively. We also randomly select subsets of \emph{mini}ImageNet (20\%, 40\%, 60\%, 80\%, and 100\%) and report the 1-shot accuracy. An equal portion of examples from each class is randomly selected.
As shown in Fig.~\ref{fig:scale}, self-supervised features (UBC-TFSL) significantly outperform the other methods on large datasets. However, when the dataset is small (e.g., Caltech-256 and 20\% of \emph{mini}ImageNet), they are overtaken by the FSL baseline.
This result suggests that \textbf{supervised features are more robust to dataset size.}
\vspace{-0.0em}
\subsection{Comparing different self-supervised methods}
\vspace{-0.0em}
\label{selfmethods}
\begin{table}[t]
\caption{\textbf{Few-shot classification accuracy with different self-supervised methods.} We run experiments using MoCo-v2~\cite{chen2020mocov2}, CMC~\cite{tian2019contrastive}, and SimCLR~\cite{chen2020simclr} as our self-supervised methods to learn the feature embedding.
The top results are highlighted in \first{blue} and the second-best results in \second{green}. }
\centering
\small
\begin{tabular}{llcccc}
\hline
& & \multicolumn{2}{c}{
\textbf{\emph{mini}ImageNet}
} \tabularnewline
\textbf{method} & \textbf{backbone} & \textbf{1-shot} & \textbf{5-shot} & \tabularnewline
\hline
UBC-FSL (MoCo-v2) & ResNet-101 & \second{57.5$\pm$0.6} & \first{77.2$\pm$0.4}\tabularnewline
UBC-FSL (CMC) & ResNet-101 & 56.9$\pm$0.6 & \second{76.9$\pm$0.5}\tabularnewline
UBC-FSL (SimCLR) & ResNet-101 & \first{57.6$\pm$0.7} & 76.7$\pm$0.6\tabularnewline
\hline
UBC-TFSL (MoCo-v2) & ResNet-101 & \first{80.4$\pm$0.6} & \first{92.8$\pm$0.2}\tabularnewline
UBC-TFSL (CMC) & ResNet-101 & \second{79.7$\pm$0.6} & 92.1$\pm$0.3\tabularnewline
UBC-TFSL (SimCLR) & ResNet-101 & 79.5$\pm$0.7 & \second{92.2$\pm$0.3}\tabularnewline
\hline
\end{tabular}
\label{tab:compare}
\vspace{-0.2em}
\vspace{-0.1cm}
\end{table}
As \zt{shown in Table \ref{tab:compare}, we compare three different instance-discrimination methods for learning the feature embedding: MoCo-v2~\cite{chen2020mocov2}, CMC~\cite{tian2019contrastive}, and SimCLR~\cite{chen2020simclr}. From the results, we can see that all these self-supervised methods learn a powerful inductive bias, especially in the transductive setting, suggesting that most self-supervised methods can be generalized to learn a good embedding for few-shot learning. }
\vspace{-0.0em}
\section{Conclusion}
\vspace{-0.0em}
Most previous FSL methods borrow a strong inductive bias from the supervised learning of base classes. In this paper, we show that no base class labels are needed to develop such an inductive bias and that self-supervised learning can provide a powerful inductive bias for few-shot learning.
{We examine the role of features learned through self-supervision in few-shot learning through comprehensive experiments.}
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
The nature of the dark matter (DM) is still unclear, because its
existence is so far only evident via its gravitational interaction
with normal matter.
There are many possible ways to find DM, other than through its
gravitational interactions. Direct detection experiments try to
observe nuclear recoils from weak interactions with DM particles.
Accelerator experiments search for hints of physics beyond the
\textit{standard model} (SM) of particle physics, which may provide
clues as to the identity of dark matter. Indirect detection
experiments try to identify secondary products of DM annihilation or
decay, such as photons, neutrinos and anti-particles. Experiments
typically search for characteristic spectral signatures of DM in the
cosmic fluxes of such particles, allowing them to (hopefully)
differentiate the DM signal from the myriad of astrophysical
backgrounds they face.
There are many candidates for DM in extensions of the SM. The most
popular is the neutralino, a linear combination of the superpartners
of the neutral Higgs and electroweak gauge bosons seen in
supersymmetric (SUSY) extensions of the SM. If $R$-parity is
conserved, and the lightest neutralino is also the lightest
supersymmetric particle (LSP), it can -- depending on the underlying
SUSY model parameters -- deliver a relic density in the favoured range
$0.094 \leq \Omega h^{2} \leq 0.129$. Neutralinos are also Majorana
particles, so would self-annihilate. If SUSY is to constitute a valid
solution to the well-known \textit{hierarchy problem} of the SM, it
must be broken at $\sim$1\,TeV, giving sparticles such as the lightest
neutralino masses of between $\sim$10\,GeV and $\sim$10\,TeV.
In the annihilation process, very high energy (VHE) $\gamma$-ray
photons are produced with energies up to the neutralino mass. The
emissivity of annihilating DM is proportional to $\varrho^{2}$, the
square of the DM density. It is thus useful to search for VHE
$\gamma$-radiation from regions where a high density of DM is
expected. One such region is the centre of our own galaxy, the
Galactic Centre (GC).
Limits on DM annihilation are generally based on assumptions about the
form of the annihilation spectrum, ignoring the individual spectra of
actual SUSY (or any other) models. This was perhaps reasonable until
it was found that internal bremsstrahlung (IB), consisting of both
final-state radiation (FSR) and virtual IB (VIB), can make large
contributions to the photon spectrum \cite{bib_IB}. In this case,
gamma-ray spectra from different supersymmetric models can be very
different, even when the neutralino mass is kept fixed. With this new
development, it is necessary to compare the observed and predicted
energy spectra from annihilation processes on an individual,
model-by-model basis. This was first performed in a full SUSY scan
using \textit{Fermi}-LAT data on the dwarf galaxy Segue 1
\cite{bib_Pat}.
The GC region has also been observed by the High Energy Stereoscopic
System (H.E.S.S.), and high-energy gamma radiation has been detected
\cite{bib_hessGC1, bib_hessGC5}. Because the observations seem to be
incompatible with the total observed flux coming exclusively from
neutralino annihilation, the hypothesis that DM annihilation makes a
subdominant contribution has been investigated, resulting in limits on
the DM self-annihilation cross-section \cite{bib_hessGC2, bib_hessGC4}.
In this article we show the results of two full model scans in the
parameter space of the Constrained Minimal Supersymmetric SM (CMSSM),
comparing model predictions with H.E.S.S. data from the Sagittarius
(SgrD), Carina and Sculptor dwarf galaxies, as well as the Galactic
halo and Galactic Centre. First we show a simple random scan,
producing a set of CMSSM models compatible with constraints on the
relic density and accelerator bounds included in DarkSUSY 5.0.4
\cite{bib_DS, bib_DSweb}. Later we show more advanced statistical
scans, using the SuperBayeS package \cite{bib_SB1, bib_SB4, bib_SBweb,
bib_SB3, bib_SB5, bib_SB2, bib_SB6}.
In section \ref{sec_hess} we introduce the H.E.S.S. experiment and the
data that we use for this work. Section \ref{sec_theo} is about the
theoretical framework of supersymmetric DM, and Section \ref{sec_ana}
describes our analysis of the H.E.S.S. data. Section
\ref{subsec_GC1} gives our results for the random scan using a
spectrum from the GC source. Section \ref{subsec_GC2} introduces
the CMSSM parameter scan with SuperBayeS, considering the same GC
spectrum. Section \ref{subsec_SgD} describes a SuperBayeS scan
taking into account the H.E.S.S. observations on the SgrD, whilst
Secs.~\ref{sec_2dwarfs} and \ref{sec_halo} introduce further
constraints from the Carina and Sculptor dwarfs, and the Galactic
halo, respectively. Section \ref{sec_sum} finishes with a summary and
outlook.
\section{The H.E.S.S. telescope and data} \label{sec_hess}
H.E.S.S. is a system of $4$ imaging atmospheric \v{C}erenkov
telescopes located in the Khomas highlands of Namibia, $120 \,
\text{km}$ south west of Windhoek, and $1800 \, \text{m}$ above sea
level. It is a $\gamma$-ray observatory sensitive to photons with
energies between around $100 \, \text{GeV}$ and $100 \, \text{TeV}$.
The energy resolution is better than $15 \%$. The angular resolution
is better than $0.1^{\circ}$ per event \cite{bib_hesscrab}.
The observed $\gamma$-ray spectrum for our scans including the GC data
is from \cite{bib_hessGC5}. It contains $92.9 \, \text{h}$ of
observations in the years $2004$, $2005$ and $2006$. For the following
analysis we employed the spectral points seen in the left-hand
subfigure of Figure 2 in \cite{bib_hessGC5}. These data were already
deconvolved from the instrumental response at the time of publication
(see Ref.~\cite{bib_hessGC5} for details), removing any need for us to
convolve our predicted CMSSM spectra with the H.E.S.S. response.
H.E.S.S. observed the SgrD in June 2006 for $\sim$$12\,\text{h}$. No
significant $\gamma$-ray excess was found and a flux upper limit of
$\Phi (E > 250 \, \text{GeV}) = 3.6 \cdot 10^{-12} \,
\text{cm}^{-2}\,\text{s}^{-1}$ (95\% CL) was calculated. Using
these observations, and assuming a generic annihilation spectrum as
well as two different DM density profiles, upper limits on the
annihilation cross section $\langle \sigma v \rangle$ as function of
the neutralino mass $m_{\chi}$ were calculated \cite{bib_hessSGD}.
Observations of the Carina and Sculptor dwarf galaxies took place
between January $2008$ and December $2009$ with $\sim$$15 \, \text{h}$
on Carina and $\sim$$12 \, \text{h}$ on Sculptor. Here too, no
significant $\gamma$-ray excess was found, again leading to upper limits
on the annihilation cross section as function of the neutralino mass
\cite{bib_carscul}.
Observations of the region around the GC were also used to search for
diffuse $\gamma$-radiation originating from DM annihilation in the
galactic halo. No such radiation was found, so that upper limits
were again calculated \cite{bib_halo}.
\section{Theoretical framework} \label{sec_theo}
Adding the minimal additional particle content required to
supersymmetrise the SM, along with the most general `soft'
SUSY-breaking Lagrangian terms (required to break but retain SUSY as a
solution of the hierarchy problem), one arrives at the Minimal
Supersymmetric Standard Model (MSSM). The addition of the soft terms
introduces over 100 new parameters to the model, so even in the MSSM,
simplifying assumptions are required in order to make any meaningful
estimates of the parameters of the model. One way to arrive at such a
simplified version of the model is to choose a specific breaking
scheme, with the symmetry breaking parameters set at a high energy
scale, and then use renormalisation group equations to arrive at the
corresponding masses and couplings at lower energies. One particular
example, which we will consider in this paper, is the CMSSM, where the
model is defined
by five free parameters:
\begin{equation}
m_0, \quad m_\frac12, \quad A_0, \quad \tan \beta, \quad \sgn \mu .
\end{equation}
Here $m_0$ is the universal scalar mass, $m_\frac12$ the gaugino mass
parameter, $A_0$ the trilinear coupling between Higgs bosons, squarks
and sleptons, $\tan\beta$ the ratio of vacuum expectation values of
up-type and down-type Higgs bosons, and $\sgn\mu$ the sign of the
Higgs mixing parameter. The parameters $m_0$, $m_\frac12$ and $A_0$
are defined at the GUT scale (10$^{16}$ GeV), whereas $\tan\beta$ and
$\sgn\mu$ are defined at the weak scale. Most authors define the CMSSM
and mSUGRA (a `minimal SUperGRAvity-inspired' parametrisation of
the MSSM) identically, and refer to them interchangeably; some other
definitions of mSUGRA do exist, but the CMSSM is unambiguous.
In the literature, several regions have been identified where a
neutralino LSP provides the right relic abundance of dark matter.
These regions are then further constrained by accelerator searches.
The regions that are still viable are the stau coannihilation region,
where the stau is almost degenerate with the LSP (and the correct DM
abundance is achieved by coannihilations), the focus point region
(where the LSP is Higgsino-like), and the funnel regions, where LSP
annihilation is increased by resonance interactions with MSSM Higgs
particles.
\section{Analysis} \label{sec_ana}
The flux delivered by annihilating DM can be calculated with
\cite{bib_dmgamma}:
\begin{equation}
\label{eq_dmflux}
\begin{split}
\Phi(E) &= 2.8 \cdot 10^{-12} \, \text{cm}^{-2} \text{s}^{-1}
\text{sr}^{-1} \cdot \frac{dN_{\gamma}}{dE} \frac{\langle \sigma v
\rangle}{\text{pb} \cdot c} \Bigl{(} \frac{1 \,
\text{TeV}}{m_{\chi}} \Bigr{)}^{2}
\cdot \bar{J}(\Delta \Omega)\Delta \Omega \\
\bar{J}(\Delta \Omega)\Delta \Omega &= \frac{1}{8.5 \, \text{kpc} \cdot
(0.3 \, \text{GeV} \, \text{cm}^{-3})^{2}} \int_{\Delta \Omega} d
\Omega \int_{\text{los}} ds \, \varrho^{2}
\end{split}
\end{equation}
where $dN/dE$ describes the photon spectrum per annihilation, $\langle
\sigma v \rangle$ is the thermally-averaged, velocity-weighted annihilation
cross-section in the zero velocity limit (in the following simply
denoted ``cross section''), $m_{\chi}$ is the mass of the annihilating
DM particle and
$\varrho$ is its density, which is integrated along the line of sight
(los) and over the observed solid angle of
$\Delta \Omega = 1.16 \cdot 10^{-5} \, \text{sr}$ for the GC,
$\Delta \Omega = 2 \cdot 10^{-5} \, \text{sr}$ for SgrD, and $\Delta
\Omega = 10^{-5} \, \text{sr}$ for Carina and Sculptor. For the galactic
halo, the signal and background regions are defined in a more complicated
way than for the other targets; in that case the $J$ factor
$\bar{J}(\Delta \Omega)$ represents the difference between the
line-of-sight integrals averaged over the signal and background regions.
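As an illustration, the flux of equation \ref{eq_dmflux} can be evaluated as in the following sketch (Python); the per-annihilation spectrum \texttt{dN\_dE} would in practice be supplied by a package such as DarkSUSY, and all names are illustrative:
\begin{verbatim}
# Sketch of the flux formula: differential gamma-ray flux from annihilating
# DM. dN_dE(E) is the per-annihilation photon spectrum (assumed given,
# e.g. from DarkSUSY); E and m_chi in TeV, sigma_v in units of pb*c,
# Jbar_dOmega in sr. Returns the flux in cm^-2 s^-1 TeV^-1.
def dm_flux(E, dN_dE, sigma_v_pb, m_chi_TeV, Jbar_dOmega_sr):
    return (2.8e-12 * dN_dE(E) * sigma_v_pb
            * (1.0 / m_chi_TeV) ** 2 * Jbar_dOmega_sr)
\end{verbatim}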
\section{CMSSM random scan with data from the Galactic Centre}
\label{subsec_GC1}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.44\linewidth]{modelhessaux0363.eps}
\includegraphics[width=0.44\linewidth]{cross_flux.eps}
\end{center}
\caption{\textbf{Left}: Example of a comparison between data and model
annihilation spectrum. The crosses show the measured spectrum
published by the H.E.S.S. collaboration \cite{bib_hessGC5}. The red
line shows $\Phi_\text{DM}$, the calculated dark matter spectrum, for
$m_{0} = 438 \, \text{GeV}$, $m_\frac12 = 1030 \, \text{GeV}$,
$A_{0} = 0$, $\tan \beta = 39.1$, $\sgn \mu = +1$ and
$\bar{J}(\Delta \Omega) \Delta \Omega = 350 \, \text{sr}$ (Moore
profile). The blue line represents $\Phi_{\text{bg}}$, the
background model (a power law with exponential cutoff) that delivers
the best fit as part of $\Phi_{\text{total}} = \Phi_{\text{DM}} +
\Phi_{\text{bg}}$. \textbf{Right}: Correlation plot for the
annihilation cross section $\langle \sigma v \rangle$ and
$\gamma$-ray yield $dN/dx$ with $x = E_{\gamma}/m_{\chi}$ at $x =
0.7$, showing the indicative number of photons per annihilation with
energies just below the WIMP mass. Because IB has a harder
gamma-ray spectrum than pion decay, for a fixed $\langle \sigma v
\rangle$ models with larger yields at $E=0.7m_\chi$ show stronger
IB. Here we see that for the points that passed our relic density
and accelerator cuts, the yield into photons with energies near the
WIMP mass decreases as the cross-section increases, indicating that
IB is much stronger in models with lower annihilation
cross-sections.}
\label{fig_model}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.9\linewidth]{rejectionhessaux.eps}
\end{center}
\caption{Rejection plots in the plane spanned by the neutralino mass
$m_{\chi}$ and the annihilation cross section $\langle \sigma v
\rangle$. The green points represent models consistent with the
data, and red points models that are not consistent (at $90\% \,
\text{CL}$). The black line shows the upper limit of $\langle \sigma
v \rangle$ as function of $m_{\chi}$, if annihilation of neutralinos
proceeded entirely as $\chi \, \chi \rightarrow b \, \bar{b}$. In
the top row, $\bar{J}(\Delta \Omega)\Delta \Omega$ increases from
$10 \, \text{sr}$ (upper left) to $100 \, \text{sr}$ (upper right).
The plots for an NFW profile ($\bar{J}(\Delta \Omega)\Delta \Omega =
0.15 \, \text{sr}$) and a Moore profile ($\bar{J}(\Delta
\Omega)\Delta \Omega) = 350 \, \text{sr}$)
are shown in the lower row.}
\label{fig_rej}
\end{figure}
Earlier analysis of the GC by H.E.S.S. showed that DM alone cannot be
responsible for the observed spectrum \cite{bib_hessGC2}. We
therefore consider an energy spectrum
\begin{equation}
\Phi_{\text{total}}(E) = \Phi_{\text{DM}}(E) + \Phi_{\text{bg}}(E),
\end{equation}
composed of a DM component $\Phi_{\text{DM}}$ and an
empirically-determined background, assumed to take the form of a power
law with an exponential cut-off:
\begin{equation}
\Phi_{\text{bg}}(E) = \Phi_{0} \cdot
\Bigl{(}\frac{E}{1 \, \text{TeV}}\Bigr{)}^{-\Gamma} \exp(-E/E_\text{cut}),
\end{equation}
where $\Phi_{0}$ is the flux normalisation, $\Gamma$ represents the
spectral slope and $E_\text{cut}$ the cutoff energy. Such a form for
the background provides quite a good fit to the observed spectrum
\cite{bib_hessGC2, bib_hessGC5}, and is generally representative of
typical astrophysical gamma-ray sources. In our scans, we fit
$\Phi_{0}$, $\Gamma$ and $E_\text{cut}$ individually for each DM model
in the CMSSM.
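A sketch of this per-model fit is given below (Python, using standard SciPy routines; the data arrays and the fixed DM spectrum \texttt{phi\_dm} are assumed to be available):
\begin{verbatim}
# Sketch of the per-model fit: the DM component is held fixed while the
# background parameters (phi0, Gamma, E_cut) are fitted to the measured
# spectral points (E, phi, err); E and E_cut in TeV.
import numpy as np
from scipy.optimize import minimize

def chi2(bg_pars, E, phi, err, phi_dm):
    phi0, gamma, E_cut = bg_pars
    phi_bg = phi0 * E**(-gamma) * np.exp(-E / E_cut)
    return np.sum(((phi_dm + phi_bg - phi) / err) ** 2)

def best_fit_chi2(E, phi, err, phi_dm):
    res = minimize(chi2, x0=[1e-8, 2.0, 10.0],
                   args=(E, phi, err, phi_dm), method='Nelder-Mead')
    return res.fun  # best-fit chi^2, compared with the threshold below
\end{verbatim}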
As a first check, we randomly chose $622$ CMSSM models whose relic
densities fit within the observed band ($0.094 \leq \Omega_{\text{DM}}
h^{2} \leq 0.129$) from $3$ years of WMAP observations \cite{bib_WMAP3},
and pass similar accelerator bounds included in DarkSUSY 5.0.4. The
CMSSM parameters in this scan lie in the following ranges: $10 \,
\text{GeV} \leq m_{0} \leq 1000 \, \text{GeV}$, $10 \, \text{GeV} \leq
m_\frac12 \leq 1000 \, \text{GeV}$, $A_{0} = 0$, $0 \leq \tan \beta
\leq 60$, $\sgn \mu \in \{-1, 1\}$. Whether a model (CMSSM parameters
and chosen $J$ factor) is compatible with the measured data is decided
by a $\chi^{2}$-test. We fit $\Phi_{\text{total}}(E)$ to the data for
each model, keeping the parameters for $\Phi_{\text{DM}}(E)$ fixed and
the parameters for $\Phi_{\text{bg}}(E)$ free. An example of such a
comparison is shown in the left panel of Figure \ref{fig_model}.
A model (with an assumed value for $\bar{J}(\Delta \Omega)\Delta
\Omega$) is defined as compatible with the data if the resulting
$\chi^{2} < 14.04$; the $90\%$ threshold value of the $\chi^{2}$
distribution with $N_{\text{bins}}-N_{\text{free}} = 25-3$ degrees of
freedom. Results can be seen in Figure \ref{fig_rej}. Here we show
whether a model -- represented by a point in the $m_{\chi}$-$\langle
\sigma v \rangle$-plane -- is compatible with the measured spectrum or
not, given different assumed values of $\bar{J}(\Delta \Omega)\Delta
\Omega$. We also indicate the upper limit obtained if one assumes
100\% annihilation into $b \, \bar{b}$, as has often been done in
previous analysis. For comparison, the $J$ factor for an NFW profile
would be $\bar{J}(\Delta \Omega)\Delta \Omega \lvert_{\text{NFW}} =
0.15 \, \text{sr}$ and for a Moore profile $\bar{J}(\Delta
\Omega)\Delta \Omega \lvert_{\text{Moore}} = 350 \, \text{sr}$.
For $\bar{J}(\Delta \Omega) \Delta \Omega \gtrsim 10 \, \text{sr}$ the
data begin to limit models from high cross-sections downward (into the
focus point region). In addition, the parameter space is truncated
from low cross-sections upward (into the coannihilation region) due to
IB, as the number of photons from these processes and the annihilation
cross-section are anti-correlated (see the right panel of Figure
\ref{fig_model}). For $\bar{J}(\Delta \Omega) \Delta \Omega \gtrsim
100 \, \text{sr}$ the two limiting fronts meet. Models with $m_{\chi} \lesssim 200
\, \text{GeV}$ remain allowed, because they do not affect the spectrum
in the energy range observed by H.E.S.S.. A few models with $m_{\chi}
\gtrsim 500 \, \text{GeV} - 1000 \, \text{GeV}$ and $\langle \sigma v
\rangle \lesssim 10^{-27} \, \text{cm}^{3} \, \text{s}^{-1}$ also remain
allowed.
A random scan is however not sufficient when dealing with a
complicated parameter space with many dimensions, such as the CMSSM.
Nuisance parameters that could substantially affect model predictions,
such as the top quark mass, should be taken into account. Points
should be measured against a whole range of observables, and given a
properly-defined statistical likelihood rather than just ruled in or
out. Sophisticated scanning algorithms should be used to make sure
that all relevant parts of the parameter space have been probed, and
in a way that allows valid statistical inference to be performed on
the resultant points. For these reasons, we performed additional
likelihood-based scans.
\section{CMSSM likelihood scan with data from the Galactic Centre}
\label{subsec_GC2}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.31\linewidth]{noHess_2D_profl_1.eps}
\includegraphics[width=0.31\linewidth]{HessGC010_aux_2D_profl_1.eps}
\includegraphics[width=0.31\linewidth]{HessGC100_aux_2D_profl_1.eps}
\includegraphics[width=0.31\linewidth]{noHess_2D_marg_1.eps}
\includegraphics[width=0.31\linewidth]{HessGC010_aux_2D_marg_1.eps}
\includegraphics[width=0.31\linewidth]{HessGC100_aux_2D_marg_1.eps}
\end{center}
\caption{Profile likelihood ratios
$\mathcal{L}/\mathcal{L}_{\text{best fit}}$ (upper row) and
posterior probability density functions (lower row) normalized to
the best fit point in the $m_\frac12$-$m_{0}$ plane. In the left
column H.E.S.S. data from the Galactic Centre have not been
included in the scans. For the other plots, $\bar{J}(\Delta
\Omega)\Delta \Omega = 10 \, \text{sr}$ (middle) and $100 \,
\text{sr}$ (right). The $\otimes$ marks the best fit point, while
the $\bullet$ marks the centre of gravity of the distribution.
Contours in the lower plots surround $68 \%$ and $95 \%$ credible
regions.}
\label{fig_sb1}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.31\linewidth]{noHess_2D_profl_7.eps}
\includegraphics[width=0.31\linewidth]{HessGC010_aux_2D_profl_7.eps}
\includegraphics[width=0.31\linewidth]{HessGC100_aux_2D_profl_7.eps}
\includegraphics[width=0.31\linewidth]{noHess_2D_marg_7.eps}
\includegraphics[width=0.31\linewidth]{HessGC010_aux_2D_marg_7.eps}
\includegraphics[width=0.31\linewidth]{HessGC100_aux_2D_marg_7.eps}
\end{center}
\caption{Profile likelihood ratios $\mathcal{L}/\mathcal{L}_{\text{best
fit}}$ (upper row) and posterior probability density functions
(lower row) normalized to the best fit point in the
$m_{\chi}$-$\langle \sigma v \rangle$ plane. In the left column
H.E.S.S. data from the Galactic Centre have not been included in the
scans. For the other plots, $\bar{J}(\Delta \Omega)\Delta \Omega =
10 \, \text{sr}$ (middle) and $100 \, \text{sr}$ (right). The
$\otimes$ marks the best fit point, while the $\bullet$ marks the
centre of gravity of the distribution. Contours in the lower plots
surround $68 \%$ and $95 \%$ credible regions.}
\label{fig_sb2}
\end{figure}
Our analysis is based on SuperBayeS, a package that scans the CMSSM
parameter space and computes various observables, in particular the
gamma-ray spectrum for a given CMSSM model, by interfacing with
DarkSUSY. It performs statistical inference by comparing the computed
values of the observables to experimental data, using a full
likelihood construction. SuperBayeS also implements sophisticated
scanning algorithms; in our scans, we chose the MultiNest
\cite{bib_SB4} nested sampling algorithm, with $4000$ live points. In
each iteration of this algorithm, the point with the worst likelihood
in a set of `live' points in parameter space is replaced by a new point
with a better likelihood. To make finding such a point efficient, the
new point is drawn from within a boundary approximating the region
occupied by the remaining points; in this way, ever-smaller regions are
iteratively nested around the best-fit points. We used the modified
version of SuperBayeS 1.35
described in \cite{bib_Pat} for the analysis of H.E.S.S. data,
supplemented with an appropriate H.E.S.S. likelihood term (given
below). This modified version employs DarkSUSY 5.0.4 for the
calculation of relic densities and gamma-ray spectra, including the
full calculation of internal bremsstrahlung (both VIB and FSR)
\cite{bib_IB} crucial for our analysis.
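The following highly simplified sketch illustrates the basic nested-sampling loop (real implementations such as MultiNest draw the replacement point from within an ellipsoidal bound around the live points rather than from the full prior):
\begin{verbatim}
# Toy nested-sampling loop: iteratively replace the worst live point by a
# new point of higher likelihood. loglike and prior_sample are assumed
# user-supplied callables.
import numpy as np

def nested_sampling(loglike, prior_sample, n_live=4000, n_iter=20000):
    live = [prior_sample() for _ in range(n_live)]
    logL = np.array([loglike(p) for p in live])
    for _ in range(n_iter):
        worst = np.argmin(logL)            # point with worst likelihood
        threshold = logL[worst]
        while True:                        # draw until a better point found
            p = prior_sample()             # (MultiNest: sample inside an
            if loglike(p) > threshold:     #  ellipsoidal bound instead)
                break
        live[worst], logL[worst] = p, loglike(p)
    return live, logL
\end{verbatim}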
We scanned over $60 \, \text{GeV} \leq m_0 \leq 4000 \, \text{GeV}$,
$60 \, \text{GeV} \leq m_\frac12 \leq 4000 \, \text{GeV}$, $-7000 \leq
A_0 \leq 7000$ and $2 \leq \tan\beta \leq 65$, setting $\mu>0$ and
applying linear priors to the other parameters. We also scanned over
the top and bottom quark masses, and the strong and electromagnetic
coupling constants, treating them as SM nuisance parameters. We
incorporate the effects of the nuisance parameters in our analysis by
either computing the profile likelihood (i.e. maximising the
likelihood with respect to the nuisance parameters, at each point in
the CMSSM parameter space), or by marginalising over them, integrating
the posterior distribution at each point in the CMSSM parameter space
over the nuisance space. Similarly, we choose to present
distributions and likelihoods for subsets of CMSSM parameters by
further profiling or marginalising over the remaining CMSSM
parameters.
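Given the weighted samples returned by such a scan, profiling and marginalising reduce to simple operations, as in this sketch (illustrative; \texttt{x} is the parameter to be displayed, \texttt{logl} the log-likelihoods and \texttt{w} the posterior weights of the samples):
\begin{verbatim}
# Sketch: 1D profile likelihood and marginal posterior from scan output.
import numpy as np

def profile_and_marginal(x, logl, w, bins=50):
    edges = np.linspace(x.min(), x.max(), bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    prof = np.full(bins, -np.inf)
    marg = np.zeros(bins)
    for i in range(bins):
        sel = idx == i
        if sel.any():
            prof[i] = logl[sel].max()  # maximise over all other parameters
            marg[i] = w[sel].sum()     # integrate posterior weight
    return prof, marg
\end{verbatim}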
We use the unfolded (deconvolved; see \cite{bib_hessGC5}) H.E.S.S.
spectrum to directly compare with the theoretically predicted
gamma-ray spectrum. We also use the observables, experimental
likelihoods and SM nuisance likelihoods described in
Ref.~\cite{bib_SB6}; these are also the same as we employed in
Refs.~\cite{bib_Yashar, bib_Pat}. In particular, we compared the
relic density to data from the 5-year Wilkinson Microwave Anisotropy
Probe (WMAP), which found $\Omega_{\text{DM}} h^{2} = 0.1099 \pm
0.0062$ at the $1\sigma$ level \cite{bib_WMAP5}. Other observables
were: LEP constraints on sparticle masses and the Higgs mass,
measurements of the anomalous magnetic moment of the muon $(g - 2)$,
the mass difference $m_{\bar{B}_{s}} - m_{B_{s}}$, and the branching
fractions of the rare processes $b \rightarrow s \gamma$, $\bar{B}_{u}
\rightarrow \nu \tau^{-}$ and $\bar{B}_{s} \rightarrow \mu^{+}
\mu^{-}$.
The likelihood of one CMSSM model is defined by
\begin{equation}
- \ln \mathcal{L} = \sum_{i} - \ln \mathcal{L}_{i}
\end{equation}
where $\mathcal{L}_{i}$ is the likelihood associated with each
individual observable. For the H.E.S.S. spectrum from the GC, we used
\begin{equation}
- \ln \mathcal{L}_{\text{H.E.S.S., GC}} = \frac{\chi^{2}}{2}
\end{equation}
with the $\chi^{2}$ described in Section \ref{subsec_GC1}.
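Schematically, the combined likelihood is assembled as in the following sketch (the helper functions are illustrative stand-ins for the SuperBayeS routines):
\begin{verbatim}
# Sketch of the total negative log-likelihood: the H.E.S.S. GC term enters
# as chi^2/2 on top of the other observables (relic density, collider,
# flavour and (g-2) terms). neg_loglike_obs and hess_gc_chi2 are assumed
# helpers, not actual SuperBayeS routine names.
def total_neg_loglike(model, data):
    terms = [neg_loglike_obs(model, obs) for obs in data.observables]
    terms.append(0.5 * hess_gc_chi2(model, data.gc_spectrum))
    return sum(terms)
\end{verbatim}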
In Figures \ref{fig_sb1} and \ref{fig_sb2} we show both the profile
likelihood and the posterior probability density function (assuming
flat priors) for three different scans. The value of $\bar{J}(\Delta
\Omega) \Delta \Omega$ increases left to right from $0 \, \text{sr}$
-- no dark matter in the GC region, no H.E.S.S. data included in the
scan -- to $100 \, \text{sr}$ in the last column. Figure \ref{fig_sb1}
shows the results of the scans projected down into the
$m_\frac12$-$m_{0}$ plane, while Figure \ref{fig_sb2} shows the
resulting distributions in the $m_{\chi}$-$\langle \sigma v \rangle$
plane.
We see from Figures \ref{fig_sb1} and \ref{fig_sb2} that with
increasing $\bar{J}(\Delta \Omega)\Delta \Omega$, the likelihoods of
points in the coannihilation region and the higher-mass part of the
focus point are reduced. This can also be seen in the movement of the
best-fit point from the tip of the coannihilation region to a low-mass
part of the focus point when GC data are introduced. In contrast, the
posterior mean does not move substantially when GC data are included
in fits, reflecting the fact that the focus point carries the majority
of the posterior mass when linear priors are employed, and is left
largely intact after the application of GC data.
However, the values we have used for $\bar{J}(\Delta \Omega)\Delta
\Omega$ in these scans with GC data are unrealistically large. The GC
source delivers a strong astrophysical background, hindering dark
matter investigations. The scanning technique will, however, be useful
for future observations of the GC region, especially with upcoming
experiments like the \v{C}erenkov Telescope Array (CTA).
\section{CMSSM likelihood scan with data from the Sagittarius dwarf
galaxy} \label{subsec_SgD}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.31\linewidth]{noHess_2D_profl_1.eps}
\includegraphics[width=0.31\linewidth]{HessSGC_NFW_2D_profl_1.eps}
\includegraphics[width=0.31\linewidth]{HessSGC_Cored_2D_profl_1.eps}
\includegraphics[width=0.31\linewidth]{noHess_2D_marg_1.eps}
\includegraphics[width=0.31\linewidth]{HessSGC_NFW_2D_marg_1.eps}
\includegraphics[width=0.31\linewidth]{HessSGC_Cored_2D_marg_1.eps}
\end{center}
\caption{Profile likelihood ratios
$\mathcal{L}/\mathcal{L}_{\text{best fit}}$ (upper row) and
posterior probability density functions (lower row) normalized to
the best fit point in the $m_\frac12$-$m_{0}$ plane. In the left
column H.E.S.S. data from the SgrD have not been included in the
scans. For the other plots, we assume an NFW (middle) or a cored DM
profile (right). The $\otimes$ marks the best fit point, while the
$\bullet$ marks the centre of gravity of the distribution. The
contours in the lower row plots surround the $68 \%$ and the $95 \%$
CL regions.}
\label{fig_sgd1}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.31\linewidth]{noHess_2D_profl_7.eps}
\includegraphics[width=0.31\linewidth]{HessSGC_NFW_2D_profl_7.eps}
\includegraphics[width=0.31\linewidth]{HessSGC_Cored_2D_profl_7.eps}
\includegraphics[width=0.31\linewidth]{noHess_2D_marg_7.eps}
\includegraphics[width=0.31\linewidth]{HessSGC_NFW_2D_marg_7.eps}
\includegraphics[width=0.31\linewidth]{HessSGC_Cored_2D_marg_7.eps}
\end{center}
\caption{Profile likelihood ratios
$\mathcal{L}/\mathcal{L}_{\text{best fit}}$ (upper row) and
posterior probability density functions (lower row) normalized to
the best fit point in the $m_{\chi}$-$\langle \sigma v \rangle$
plane. In the left column H.E.S.S. data from the SgrD have not been
included in the scans. For the other plots, we assume an NFW
(middle) or a cored DM profile (right). The $\otimes$ marks the best
fit point, while the $\bullet$ marks the center of gravity of the
distribution. The contours in the lower row plots surround the $68
\%$ and the $95 \%$ CL regions.}
\label{fig_sgd2}
\end{figure}
We assume two different DM profiles for the SgrD: a (cuspy) NFW and a
cored profile. The first one delivers a scale factor of
$\bar{J}(\Delta \Omega)\Delta \Omega \lvert_{\text{NFW}} =
0.0186 \, \text{sr}$, whereas the second one gives $\bar{J}(\Delta
\Omega)\Delta \Omega \lvert_{\text{cored}} = 0.636 \, \text{sr}$ (with
the definition in equation \ref{eq_dmflux}). Although it is less
concentrated very close to the centre, the cored halo gives a larger
$J$ factor because its dark matter core radius is only 1.5\,pc, which
is within the observed solid angle. The small core radius leads to a
steeper profile than NFW beyond $r=1.5\,pc$, and a higher DM density
around $r\sim1.5\,pc$. The calculated upper limits just begin to
touch interesting parts of parameter space (see the Erratum to
\cite{bib_hessSGD}).
Because the SgrD experiences heavy tidal disruption, there are large
uncertainties in its density profile, leading to a large uncertainty
in the resultant $J$ factor. We refer the reader to e.g.
Refs.~\cite{bib_hessSGD,bib_Viana} and references therein for more
extensive discussions. Because of this uncertainty, in the following
sections we also investigate the impacts of H.E.S.S. observations of
the Galactic diffuse emission and other dwarf galaxies on the CMSSM,
for which the $J$ factors are better constrained.
In the H.E.S.S. analysis, the large cosmic-ray background is
estimated via a dedicated OFF-region and then subtracted. This
region might also contain a significant fraction of a hypothetical DM
annihilation signal, which in this case would also be subtracted
\cite{bib_Mack}. Since more than $90 \%$ of the DM signal of both the
density profiles that we consider here originates from inside the
ON-region (see \cite{bib_hessSGD}), this effect is negligible in our
case.
To include these data into our likelihood calculation, we need an
estimate of the flux and its error. $437$ events were observed by
H.E.S.S. in the ``ON-region'' centred on SgrD, and in the surrounding
annular ``OFF-region'' $4270$ events were collected. Since the sky
areas covered by the two regions differ by a factor of $10.1$, this
corresponds to an excess of $14.2$ events in the ON-region, which is
not statistically significant. Assuming that
observed events follow Poisson statistics, the actual observed flux
and its error are $\Phi(E>250 \, \text{GeV}) = (0.9 \pm 1.4) \cdot
10^{-12} \, \text{cm}^{-2} \text{s}^{-1}.$ We use this for the
calculation of a (gaussian) likelihood.
Calculating the expected integrated flux for each CMSSM model and
comparing it with this value gives a simple estimate of the
likelihood:
\begin{equation}
- \ln \mathcal{L}_{\text{H.E.S.S., Sag}} =
\frac{(\Phi_{\text{measured}}-\Phi_{\text{model}})^{2}}{2 \sigma_{\Phi}^{2}}
\end{equation}
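The numbers entering this likelihood follow directly from the ON/OFF counts, as in this sketch (Python; the conversion factor from excess events to flux is an assumed effective-exposure factor, chosen here to reproduce the quoted flux):
\begin{verbatim}
# Sketch of the SgrD flux estimate and its Gaussian likelihood.
import numpy as np

n_on, n_off, alpha = 437, 4270, 1.0 / 10.1   # counts and ON/OFF area ratio
excess = n_on - alpha * n_off                # ~14.2 events
sigma_ev = np.sqrt(n_on + alpha**2 * n_off)  # Poisson error, ~21.9 events

events_to_flux = 6.3e-14   # cm^-2 s^-1 per event (illustrative exposure)
phi = excess * events_to_flux                # ~0.9e-12 cm^-2 s^-1
sigma_phi = sigma_ev * events_to_flux        # ~1.4e-12 cm^-2 s^-1

def neg_loglike_sgrd(phi_model):
    return (phi - phi_model) ** 2 / (2 * sigma_phi ** 2)
\end{verbatim}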
The results of these scans (with somewhat more realistic density
profiles than we employed for the GC) can be seen in Figures
\ref{fig_sgd1} and \ref{fig_sgd2}. We see that the coannihilation
region becomes steadily more disfavoured for increasing $J$, due to
the large virtual IB signal produced by models in this region. In
general the only observable that strongly favours the coannihilation
region over higher sparticle masses (as found in e.g. the focus point
region) is $g-2$, the anomalous magnetic moment of the muon
\cite{bib_SB6}. When H.E.S.S. observations of the SgrD are included
in the total likelihood, we see that their preference for the focus
point over the stau coannihilation region essentially nullifies the
impact of $g-2$. This allows $b\rightarrow s\gamma$ to more clearly
exert its preference for higher sparticle masses, leading to a
stronger preference for focus point SUSY over the stau coannihilation
region.
\section{CMSSM likelihood scan with two other dwarf spheroidal galaxies}
\label{sec_2dwarfs}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.31\linewidth]{noHess_2D_profl_1.eps}
\includegraphics[width=0.31\linewidth]{HessDwarfs_woSgD_aveJ_2D_profl_1.eps}
\includegraphics[width=0.31\linewidth]{HessDwarfs_wiSgD_aveJ_2D_profl_1.eps}
\includegraphics[width=0.31\linewidth]{noHess_2D_marg_1.eps}
\includegraphics[width=0.31\linewidth]{HessDwarfs_woSgD_aveJ_2D_marg_1.eps}
\includegraphics[width=0.31\linewidth]{HessDwarfs_wiSgD_aveJ_2D_marg_1.eps}
\end{center}
\caption{Profile likelihood ratios
$\mathcal{L}/\mathcal{L}_{\text{best fit}}$ (upper row) and
posterior probability density functions (lower row) normalized to
the best fit point in the $m_\frac12$-$m_{0}$ plane. In the left
column no H.E.S.S. data are included in the scans. In the middle
column Carina and Sculptor data are included. In the right column
SgrD is also included.}
\label{fig_dwarfs1}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.31\linewidth]{noHess_2D_profl_7.eps}
\includegraphics[width=0.31\linewidth]{HessDwarfs_woSgD_aveJ_2D_profl_7.eps}
\includegraphics[width=0.31\linewidth]{HessDwarfs_wiSgD_aveJ_2D_profl_7.eps}
\includegraphics[width=0.31\linewidth]{noHess_2D_marg_7.eps}
\includegraphics[width=0.31\linewidth]{HessDwarfs_woSgD_aveJ_2D_marg_7.eps}
\includegraphics[width=0.31\linewidth]{HessDwarfs_wiSgD_aveJ_2D_marg_7.eps}
\end{center}
\caption{Profile likelihood ratios
$\mathcal{L}/\mathcal{L}_{\text{best fit}}$ (upper row) and
posterior probability density functions (lower row) normalized to
the best fit point in the $m_{\chi}$-$\langle \sigma v \rangle$
plane. In the left column no H.E.S.S. data are included in the
scans. In the middle column Carina and Sculptor data are included.
In the right column SgrD is also included.}
\label{fig_dwarfs2}
\end{figure}
H.E.S.S. has observed the dwarf spheroidals Carina and Sculptor in
$2008$ and $2009$ for $14.8 \, \text{h}$ and $11.8 \, \text{h}$ live
time. No significant excess was detected \cite{bib_carscul}. We handle
these two sources in the same way as the SgrD. The original H.E.S.S.
paper gives event numbers, both in total and above some minimal energy
$E_{\text{min}}$, the resulting upper limits on the number of excess
events, and an integrated flux with $E>E_{\text{min}}$. In order to
estimate the flux as we did with the SgrD, we used a simple toy Monte
Carlo simulation to determine all combinations of event numbers with
$E>E_{\text{min}}$ that reproduce the given upper limits on the excess
of events. For our corresponding flux estimates, we then selected the
largest event numbers that delivered the stated upper limits, as
$E_{\text{min}}$ is chosen very close to the energy threshold of the
observations, so the majority of events should have
$E>E_{\text{min}}$. Because in the majority of combinations the
resulting estimated flux varies well within one standard deviation,
the error we make with this method is small.
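A toy sketch of this procedure is given below (Python). Note that the Gaussian approximation to the $95\%$ upper limit used here is an assumption made for illustration; the published limits were derived with a dedicated statistical method.
\begin{verbatim}
# Toy sketch: scan ON/OFF count combinations and keep those whose
# (approximate, Gaussian) 95% CL upper limit on the excess matches the
# published value; the largest matching counts are then converted to flux.
import numpy as np

def matching_counts(n_ul_published, alpha, tol=0.5):
    matches = []
    for n_on in range(0, 500):
        for n_off in range(0, 5000, 10):
            excess = n_on - alpha * n_off
            sigma = np.sqrt(n_on + alpha**2 * n_off)
            if abs(excess + 1.64 * sigma - n_ul_published) < tol:
                matches.append((n_on, n_off, excess, sigma))
    return matches
\end{verbatim}
The estimated fluxes are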
$\Phi (E > 320 \, \text{GeV}) = (-1.99 \pm 1.88) \cdot 10^{-13} \,
\text{cm}^{-2} \, \text{s}^{-1}$ for Carina and $\Phi (E > 220 \,
\text{GeV}) = (0.42 \pm 3.38) \cdot 10^{-13} \, \text{cm}^{-2} \,
\text{s}^{-1}$ for Sculptor. The resulting implications for scans of
the CMSSM parameter space can be seen in Figs.~\ref{fig_dwarfs1} and
\ref{fig_dwarfs2}. In the centre panels of these figures, we see that
the addition of Carina and Sculptor -- with median values of the $J$
factors reported in \cite{bib_carscul}: $\bar{J}(\Delta \Omega)\Delta
\Omega \lvert_{\text{Carina}} = 1.35 \cdot 10^{-4} \, \text{sr}$ and
$\bar{J}(\Delta \Omega)\Delta \Omega \lvert_{\text{Sculptor}} = 1.91
\cdot 10^{-3} \, \text{sr}$ -- reduces the posterior probability of
the stau coannihilation region relative to the focus point. This
effect is not so dramatic as was seen with the cored-profile SgrD in
the rightmost panels of Figs.~\ref{fig_sgd1} and \ref{fig_sgd2}. In
the rightmost panels of Figs.~\ref{fig_dwarfs1} and \ref{fig_dwarfs2},
we also show the impact of including all three dwarfs, this time with
a SgrD $J$ factor calculated as the mean of the $J$ factors derived
from NFW and cored profiles of $\bar{J}(\Delta \Omega)\Delta \Omega
\lvert_{\text{SgrD}} = 0.327 \, \text{sr}$. As expected, the coannihilation
region is further disfavoured by the inclusion of the SgrD, though
again not so severely as when this particular dwarf is employed with
the (maximal) $J$ factor corresponding to a cored density profile.
Profile likelihoods follow essentially similar trends to posteriors,
except for the fact that a highly isolated, very high likelihood
best-fit point has been found in the scan including only Carina and
Sculptor, but not in other scans. When the profile likelihood ratio
is calculated using this best-fit value and plotted, the effect is to
make all parts of the parameter space appear to have low likelihoods
(i.e. essentially all of the allowed parameter space appears green in
the middle panels of Figs.~\ref{fig_dwarfs1} and \ref{fig_dwarfs2}).
This is easily understood as a result of the highly spiked nature of
the CMSSM parameter space; here the scan has in fact managed to find
its way part-way up the isolated focus point likelihood spike
identified in \cite{bib_Yashar}. This spike is typically missed in
scans employing the standard configuration of the MultiNest algorithm
(as we do here) \cite{bib_Yashar, bib_SBproflike}, as this mode is
optimised more for mapping the posterior than producing
fully-converged profile likelihoods. Posteriors produced with these
scanning parameters are of course fully converged; the profile
likelihood results we present here should therefore be taken with
something of a grain of salt, and the posteriors considered to be the
primary result of this paper.
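For concreteness, the following is a minimal sketch of the kind of toy
Monte Carlo used above. It is purely illustrative: the event numbers,
the ON/OFF exposure ratio \texttt{alpha} and the simple Gaussian
approximation to the $95\%$ CL upper limit on the excess are our own
stand-ins, not the exact prescription of the H.E.S.S. analysis.
\begin{verbatim}
def excess_upper_limit(n_on, n_off, alpha=0.2):
    # Gaussian approximation to a 95% CL upper limit on the excess
    # N_on - alpha*N_off; a stand-in for the exact H.E.S.S. method.
    excess = n_on - alpha * n_off
    sigma = (n_on + alpha ** 2 * n_off) ** 0.5
    return excess + 1.645 * sigma

# Enumerate event-number combinations compatible with a published
# upper limit UL on the excess (the numbers below are made up), then
# keep the combination with the largest event numbers.
UL, TOL = 25.0, 0.5
compatible = [(n_on, n_off)
              for n_on in range(2000)
              for n_off in range(2000)
              if abs(excess_upper_limit(n_on, n_off) - UL) < TOL]
n_on_best, n_off_best = max(compatible)
\end{verbatim}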
\section{CMSSM likelihood scan with observations on the galactic halo}
\label{sec_halo}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.31\linewidth]{noHess_2D_profl_1.eps}
\includegraphics[width=0.31\linewidth]{HessDwarfs_woSgD_halo_aveJ_2D_profl_1.eps}
\includegraphics[width=0.31\linewidth]{HessDwarfs_wiSgD_halo_aveJ_2D_profl_1.eps}
\includegraphics[width=0.31\linewidth]{noHess_2D_marg_1.eps}
\includegraphics[width=0.31\linewidth]{HessDwarfs_woSgD_halo_aveJ_2D_marg_1.eps}
\includegraphics[width=0.31\linewidth]{HessDwarfs_wiSgD_halo_aveJ_2D_marg_1.eps}
\end{center}
\caption{Profile likelihood ratios
$\mathcal{L}/\mathcal{L}_{\text{best fit}}$ (upper row) and
posterior probability density functions (lower row) normalized to
the best fit point in the $m_{1/2}$-$m_{0}$ plane. In the left
column no H.E.S.S. data are included in the scans. In the other two
columns observations on the galactic halo are included, together
with Carina and Sculptor (middle) and with Carina, Sculptor and the SgrD
(right).}
\label{fig_halo1}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.31\linewidth]{noHess_2D_profl_7.eps}
\includegraphics[width=0.31\linewidth]{HessDwarfs_woSgD_halo_aveJ_2D_profl_7.eps}
\includegraphics[width=0.31\linewidth]{HessDwarfs_wiSgD_halo_aveJ_2D_profl_7.eps}
\includegraphics[width=0.31\linewidth]{noHess_2D_marg_7.eps}
\includegraphics[width=0.31\linewidth]{HessDwarfs_woSgD_halo_aveJ_2D_marg_7.eps}
\includegraphics[width=0.31\linewidth]{HessDwarfs_wiSgD_halo_aveJ_2D_marg_7.eps}
\end{center}
\caption{Profile likelihood ratios
$\mathcal{L}/\mathcal{L}_{\text{best fit}}$ (upper row) and
posterior probability density functions (lower row) normalized to
the best fit point in the $m_{\chi}$-$\langle \sigma v \rangle$
plane. In the left column no H.E.S.S. data are included in the
scans. In the other two columns observations on the galactic halo
are included, together with Carina and Sculptor (middle) and with
Carina, Sculptor and the SgrD (right).}
\label{fig_halo2}
\end{figure}
H.E.S.S. has performed observations near the galactic centre in the
years from $2004$ to $2008$ in order to measure diffuse
$\gamma$-radiation from the galactic halo. The residual spectrum does
not show evidence of any excess $\gamma$-radiation \cite{bib_halo}.
This spectrum can be handled like the spectrum of the Galactic Centre,
with the only difference being that the observable is intensity rather
than flux. As in the previous section, we assume for the halo a
median $J$ factor between the minimum and maximum values given in
\cite{bib_halo}: $\bar{J}(\Delta \Omega) \lvert_{\text{halo}} = 1257$.
For scans including the halo, we also included Carina and Sculptor
(middle and rightmost panels of Figs.~\ref{fig_halo1} and
\ref{fig_halo2}), as well as the SgrD (rightmost panels of
Figs.~\ref{fig_halo1} and \ref{fig_halo2}). Comparing the middle
panels of Figs.~\ref{fig_halo1} and \ref{fig_halo2} to the middle
panels of Figs.~\ref{fig_dwarfs1} and \ref{fig_dwarfs2}, we see that
the addition of the Galactic halo data to scans including Carina and
Sculptor in fact serves to \textit{increase} the relative probability
of the coannihilation region with respect to the focus point. This is
because the halo constraint is rather weak, and serves only to
directly constrain a few focus-point models with large cross-sections,
slightly disfavouring the focus point in comparison to the
coannihilation region, and therefore tempering the negative effect of
the Carina and Sculptor dwarfs upon the relative probability of the
coannihilation strip. When the SgrD is added (rightmost panels of
Figs.~\ref{fig_halo1} and \ref{fig_halo2}), this effect is essentially
swamped by the strong constraining effect of the SgrD data. This
results in essentially the same level of preference for the focus
point in scans including the SgrD with or without the halo data;
the differences between the rightmost panels of Figs.~\ref{fig_halo1}
and \ref{fig_halo2} and those of Figs.~\ref{fig_dwarfs1} and
\ref{fig_dwarfs2} are within the level of scanning noise.
\section{Summary, conclusions, and outlook} \label{sec_sum} We have
performed a scan over the CMSSM parameter space, taking into account a
large range of experimental data at the composite likelihood level,
and using nested sampling. We have done this in order to check what
constraints are placed on CMSSM models by the combination of H.E.S.S.
observations of dwarf spheroidal galaxies, the Galactic halo and the
Galactic Centre.
Due to the strong astrophysical $\gamma$-ray source in or very near
the GC, the search for DM there is strongly handicapped, so the data
are not very constraining. With unrealistic assumptions about the DM
density profile around the GC, we showed some example constraints on
the coannihilation region and focus-point neutralinos with large
masses. These examples show how the scanning technique will be useful
for future observations with the next generation of $\gamma$-ray
experiments, such as CTA.
For dwarf galaxies and the Galactic halo we also obtained constraints
on the coannihilation region and high-mass parts of the focus point,
even with realistic density profiles. These constraints result from
the combination of the energy reach of H.E.S.S. and a full treatment
of IB. Our results give the tightest constraints to date upon the
coannihilation region of the MSSM.
There are however still large uncertainties in the DM density profile
of the SgrD, due to strong tidal forces \cite{bib_hessSGD,bib_Viana}.
This is unfortunate, as the SgrD potentially provides the strongest
constraint on CMSSM coannihilation models. Future scans and limits
based on the SgrD should become more robust as they eventually come to
include the dark matter halo parameters as nuisances, and
observational constraints upon those parameters improve. Ultimately
however, we see that including observations of Carina and Sculptor
along with those of the SgrD, and assuming median values of all $J$
factors, results in almost as strong a constraint on the
coannihilation region as taking just the SgrD on its own, and using a
maximal $J$ factor. This speaks strongly to the robustness of the
results we have presented in this paper.
The recently presented results from LHC \cite{bib_Aad:2011hh} are not
directly comparable with our results, since constraints are presented
for fixed $\tan \beta=3$ and $A_{0}=0$, which is actually not part of
the most favoured $68\%$ region that we find. However, ATLAS
constrains gaugino masses below about $310 \, \text{GeV}$ and scalar
masses below about $740 \, \text{GeV}$. Most of the favoured region
that we find is at either larger gaugino or larger scalar masses.
Thus, the present ATLAS constraint can be expected to have a minor
effect on the results presented here, see e.g.
\cite{bib_Bertone:2011nj} for a more detailed discussion.
\section{Acknowledgments}
We would like to thank Agnieszka Jacholkowski and Ullrich Schwanke for
fruitful discussions. We are grateful to the Swedish Research Council
(VR) for financial support. JR is grateful to the Knut and Alice
Wallenberg Foundation for financial support. JC is a Royal Swedish
Academy of Sciences Research Fellow supported by a grant from the Knut
and Alice Wallenberg Foundation. PS is supported by the Lorne Trottier
Chair in Astrophysics and an Institute for Particle Physics Theory
Fellowship.
\section{Introduction}
Dynamic graphs, that is, graphs that evolve over time, can conveniently model dynamic networks, which recently received a lot of interest from the academic community (\emph{e.g.} mobile sensor networks, vehicular networks, disruption tolerant networks, interaction flows, etc.).
Depending on the problem considered, various models were used: among others, static graphs can be used to represent a snapshot in time of a dynamic graph, functions can be used to define continuously when an edge appears over time, and sequences of tuples can represent atomic interactions between nodes over time.
The problem we consider in this paper assumes an arbitrary dynamic network, such as sensors deployed on a human body, cars evolving in a city that communicate with each other in an ad hoc manner, etc. We suppose that initially, each node in the network originates some data (\emph{e.g.} from a sensor, or from computation), and that these data must be aggregated at some designated node, the \emph{sink}. To this end, a node may send its data to a communication neighbor at a given time (the duration of this communication is supposed to be one time unit). We assume that there exists an aggregation function that takes two data as input and gives one data as output (the function is aggregating in the sense that the size of the output is supposed to be the same as that of a single input; such functions include $\min$, $\max$, etc.).
The main constraint for communications between nodes is that a node is allowed to send its data (be it its original data, or aggregated data) exactly once (\emph{e.g.} to keep energy consumption low). A direct consequence of this constraint is that a node must aggregate data anytime it receives some, provided it did not send its data previously. It also implies that a node cannot participate in the data aggregation protocol once it has transmitted its data. A nice property of any algorithm implementing this constraint is that the number of communications is minimum. The problem of aggregating all data at the sink with minimum duration is called the \emph{minimum data aggregation time problem}~\cite{bramas2015complexity}.
The essence of such a data aggregation algorithm is to decide whether or not to send a node's data when encountering a given communication neighbor: by waiting, a node may be able to aggregate more data, while by sending a node disseminates data but excludes itself for the rest of the computation.
In this paper, we consider that nodes may base their decision on their initial knowledge and past experience (past interactions with other nodes) only. Then, an algorithm accommodating those constraints is called an \emph{online distributed data aggregation} algorithm. The existence of such an algorithm is conditioned by the (dynamic) topology, initial knowledge of the nodes (\emph{e.g.} about their future communication neighbors), etc.
For simplicity, we assume that interactions between the nodes are carried out through pairwise operations. Anytime two nodes $a$ and $b$ are communication neighbors (or, for short, are interacting), either no data transfer happens, or one of them sends its data to the other, that executes the aggregation function on both its previously stored data and the received data, the output is then stored in the (new) stored data of the receiver.
In the sequel, we use the term \emph{interaction} to refer to a pairwise interaction.
We assume that an adversary controls the dynamics of the network, that is, the adversary decides which are the interactions. As we consider atomic interactions, the adversary decides what sequence of interactions is to occur in a given execution. Then, the sequence of static graphs to form the evolving graph can be seen as a sequence of single edge graphs, where the edge denotes the interaction that is chosen by the scheduler at this particular moment. Hence, the time when an interaction occurs is exactly its index in the sequence. Our model of dynamic graphs as a sequence of interactions differs from existing models on several points. First, general models like \textit{Time-varying-graph}~\cite{casteigts2011time} make use of continuous time, which adds a lot of complexity. Also, discrete time general models such as \emph{evolving graph}~\cite{casteigts2011time} capture the network evolution as a sequence of static graphs. Our model is a simplification of the evolving graph model where each static graph has a single edge. \textit{Population protocols}~\cite{angluin2007thecumputational} also consider pairwise interactions, but focus on finite state anonymous nodes with limited computational power and unlimited communication power (a given node can transmit its information many times), while we consider powerful nodes (that can record their past interactions) that are communication limited (they can send their data only once). Finally, \textit{Dynamic edge-relabeling}~\cite{casteigts2010srtuctural} is similar to population protocols, but the sequence of pairwise interactions occurs inside an evolving graph. This model shares the same differences as population protocols with our model.
\subsection{Related Work}
The problem of data aggregation has been widely studied in the context of wireless sensor networks. The literature on this problem can be divided in two groups depending on the assumption made about the collisions being handled by an underlying MAC layer.
\textit{In the case when collisions are not handled by the MAC layer}, the goal is to find a collision-free schedule that aggregates the data in minimum duration. The problem was first studied by Annamalai \textit{et al.} ~\cite{annamalai2003tree}, and formally defined by Chen \textit{et al.} ~\cite{chen2005minimum}, which proved that the problem is NP-complete. Then, several papers~\cite{yu2009distributed, xu2011delay, ren2010new, nguyen2011efficient} proposed centralized and distributed approximation algorithms for this problem. The best known algorithm is due to Nguyen \textit{et al.} ~\cite{nguyen2011efficient}. More recently, Bramas \textit{et al.} ~\cite{bramas2015complexity} considered the generalization of the problem to dynamic wireless sensor networks (modeled by evolving graphs). Bramas \textit{et al.} ~\cite{bramas2015complexity} show that the problem remains NP-complete even when restricted to dynamic WSNs of degree at most $2$ (compared to $3$ in the static case).
\textit{When collisions are handled by the MAC layer}, various problems related to data aggregation have been investigated. The general term \emph{in-network aggregation} includes several problems such as gathering and routing information in WSNs, mostly in a practical way. For instance, a survey~\cite{fasolo2007network} relates aggregation functions, routing protocols, and MAC layers with the objective of reducing resource consumption. \emph{Continuous aggregation}~\cite{abshoff2014continuous} assumes that data have to be aggregated, and that the result of the aggregation is then disseminated to all participating nodes. The main metric is then the delay before aggregated data is delivered to all nodes, as no particular node plays the role of a sink. Most related to our concern is the work by Cornejo \textit{et al.} ~\cite{cornejo2012aggregation}. In their work, each node starts with a token, the time is finite and no particular node plays the role of a sink node. Then, the topology evolves with time, and at each time instant, a node has at most one neighbor with which it can interact and send or not its token. The goal is to minimize the number of nodes that own at least one token. Assuming an algorithm does not know the future, Cornejo \textit{et al.} ~\cite{cornejo2012aggregation} prove that its competitive ratio is $\Omega(n)$ with high probability (w.r.t. the optimal offline algorithm) against an oblivious adversary.
\subsection{Our Contributions}
In this paper we define the problem of distributed online data aggregation in dynamic graphs, and study its complexity. It turns out that the problem difficulty strongly depends on the power of the adversary (that chooses which interactions occur in a given execution).
For the oblivious and the online adaptive adversaries, we give several impossibility results when nodes have no knowledge about the future evolution of the dynamic graph, nor about the topology. Also, when nodes are aware of the underlying graph (where an edge between two nodes exists if those nodes interact at least once in the execution), the data aggregation is impossible in general.
To examine the possibility cases, we define a cost function whose purpose is to compare the performance of a distributed online algorithm to the optimal offline algorithm for the same sequence of interactions. Our results show that if all interactions in the sequence occur infinitely often, there exists a distributed online data aggregation algorithm whose cost is finite. Moreover, if the underlying graph is a tree, we present an optimal algorithm.
For the randomized adversary, we first present tight bounds when nodes have full knowledge about the future interactions in the whole graph. In this case, the best possible algorithm terminates in $\Theta(n\log(n))$ interactions, in expectation and with high probability.
Then, we consider nodes with restricted knowledge, and we present two optimal distributed online data aggregation algorithms that differ in the knowledge that is available to nodes. The first algorithm, called \emph{Gathering}, assumes nodes have no knowledge whatsoever, and terminates in $O(n^2)$ interactions on average, which we prove is optimal with no knowledge. The second one, called \emph{Waiting Greedy}, terminates in $O\left(n^{3/2}\sqrt{\log(n)}\right)$ interactions with high probability, which we show is optimal when each node only knows the time of its next interaction with the sink (the knowledge assumed by Waiting Greedy).
We believe our research paves the way for stimulating future research, as our proof arguments present techniques and analyses that can be of independent interest for studying dynamic networks.
\section{Model}
A dynamic graph is modeled as a couple $(V, I)$, where $V$ is a set of nodes and $I = \left(I_t\right)_{t\in \mathbb{N}}$ is a sequence of pairwise interactions (or simply interactions). A special node in $V$ is the \emph{sink} node, and is denoted by $s$ in the sequel. In the sequence $\left(I_t\right)_{t\in \mathbb{N}}$, the index $t$ of an interaction also refers to its \emph{time of occurrence}. In the sequel $V$ always denotes the set of nodes, $n\geq 3$ its size, and $s\in V$ the sink node.
In general, we consider that nodes in $V$ have unique identifiers, unlimited memory and unlimited computational power. However, we sometimes consider nodes with no persistent memory between interactions; those nodes are called \emph{oblivious}.
Initially, each node in $V$ receives a data. During an interaction $I_t = \{u,v\}$, if both nodes still own data, then one of the nodes has the possibility to transmit its data to the other node. The receiver aggregates the received data with its own data. The transmission and the aggregation take exactly one time unit. If a node decides to transmit its data, then it no longer owns any data, and is not able to receive others' data anymore.
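As an illustration of these rules, the following minimal Python sketch
(ours, not part of the formal model) applies a single interaction,
using $\min$ as an example aggregation function.
\begin{verbatim}
class Node:
    def __init__(self, ident, data, is_sink=False):
        self.ident = ident
        self.data = data          # None once the node has transmitted
        self.is_sink = is_sink

def interact(u, v, receiver):
    # Apply one interaction {u, v}. `receiver` is u, v or None; if it
    # is a node, the other node sends its data, the receiver
    # aggregates it, and the sender leaves the computation for good.
    if receiver is None:
        return
    sender = v if receiver is u else u
    if sender.data is None or receiver.data is None:
        return                    # both nodes must still own data
    receiver.data = min(receiver.data, sender.data)
    sender.data = None
\end{verbatim}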
\subsection{Problem Statement}
The data aggregation problem consists in choosing, at each interaction, whether a node transmits its data (and if so, which node) so that, after a finite number of interactions, the sink is the only node that owns a data.
In this paper we study distributed and online algorithms that solve this problem. Such algorithms are called
\emph{distributed online data aggregation} (DODA) algorithms.
A DODA is an algorithm that takes as input an interaction $I_t = \{u,v\}$ and its time of occurrence $t\in \mathbb{N}$, and outputs either $u$, $v$ or $\bot$. If a DODA outputs a node, this node is the receiver of the other node's data. In more detail, if $u$ is the output, this means that before the interaction both $u$ and $v$ own data, and the algorithm orders $v$ to transmit its data to $u$. The algorithm is able to change the memory of the interacting nodes, for instance to store information that can be used in future interactions. In the sequel, $\mathcal{D}_\mathsf{ODA}$ denotes the set of all DODA algorithms, and $\mathcal{D}_\mathsf{ODA}^{\emptyset}$ denotes the set of DODA algorithms that only require oblivious nodes.
A DODA can require some knowledge to work. A knowledge is a function (or just an attribute) given to every node, that provides some information about the future, the topology, or anything else. By default, a node $u\in V$ has two pieces of information: its identifier $u.ID$ and a boolean $u.isSink$ that is true if $u$ is the sink, and false otherwise. A DODA algorithm may use additional functions associated with different knowledge. $\mathcal{D}_\mathsf{ODA}(\mathfrak{i}_1, \mathfrak{i}_2, \ldots)$ denotes the set of DODA algorithms that use the functions $\mathfrak{i}_1, \mathfrak{i}_2, \ldots$. For instance, we define for a node $u\in V$ the function $u.meetTime$ that maps a time $t\in\mathbb{N}$ to the smallest time $t'>t$ such that $I_{t'} = \{u,s\}$ \textit{i.e.}, the time of the next interaction with the sink (for $u=s$, we define $s.meetTime$ as the identity, $t\mapsto t$). Then $\mathcal{D}_\mathsf{ODA}(meetTime)$ refers to the set of DODA algorithms that use the information $meetTime$.
\subsection{Adversary Models}
In this paper we consider three models of adversaries:
\begin{itemize}
\item The oblivious adversary. This adversary knows the algorithm's code, and must construct the sequence of interactions before the execution starts.
\item The adaptive online adversary. This adversary knows the algorithm's code and can use the past execution of the algorithm to construct the next interaction. However, it must make its own decision as it does not know in advance the decision of the algorithm. In the case of deterministic algorithms, this adversary is equivalent to the oblivious adversary.
\item The randomized adversary. This adversary constructs the sequence of interactions by picking pairwise interactions uniformly at random.
\end{itemize}
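The randomized adversary, in particular, admits a very short sketch:
it draws every interaction uniformly at random among all possible
couples (node objects as in the sketch of the previous section).
\begin{verbatim}
import random

def randomized_adversary(nodes):
    # Yield an infinite sequence of pairwise interactions, each
    # couple drawn uniformly at random among the n(n-1)/2 couples.
    while True:
        yield tuple(random.sample(nodes, 2))
\end{verbatim}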
Section \ref{sec:oblivious adversary} presents our results with the oblivious and the adaptive online adversary. The results with the randomized adversary are given in Section \ref{sec:random adversary}.
\subsection{Definition of Cost}
To study and compare different DODA algorithms, we use a tool slightly different from the competitive analysis that is generally used to study online algorithms. The competitive ratio of an algorithm is the ratio between its performance and the optimal offline algorithm's performance. However, one can hardly define objectively the performance of an algorithm. For instance, if we just consider the number of interactions before termination, then an oblivious adversary can construct a sequence of interactions starting with the same interaction repeated an arbitrary number of times. In this case, even the optimal algorithm has infinite duration. Moreover, the adversary can choose the same interaction repeatedly after the optimal offline algorithm terminates. This can prevent any non-optimal algorithm from terminating and make it have an infinite competitive ratio.
To prevent this we define the cost of an algorithm. Our cost is a way to define the performance of an algorithm, depending on the performance of the optimal offline algorithm. We believe our definition of cost is well-suited for a lot of problems where the adversary has a strong power, especially in dynamic networks. One of its main advantages is that it is invariant under trivial transformations of the sequence of interactions, like inserting or deleting duplicate interactions.
For the sake of simplicity, a data aggregation schedule with minimum duration (performed by an offline optimal algorithm) is called a \emph{convergecast}.
Consider a sequence of interactions $I$. Let $opt(t)$ be the ending time of a convergecast on $I$, starting at time $t\in\mathbb{N}$. If the ending time is infinite (if the optimal offline algorithm does not terminate) we write $opt(t)=\infty$. Let $T:\mathbb{N}_{\ge 1} \mapsto \mathbb{N}\cup\{\infty\}$ be the function defined as follows:
\begin{align*}
T(1) &= opt(0)\\
\forall i\geq1\quad T(i+1) &= opt(T(i) + 1)
\end{align*}
$T(i)$ is the duration of $i$ successive convergecasts (two convergecasts are consecutive if the second one starts just after the first one completes).
Let $duration(A,I)$ be the termination time of algorithm $A$ executed on the sequence of interactions $I$.
Now, we define the cost $\cost{A}{I}$ of an algorithm $A$ on the sequence $I$, as the smallest integer $i$ such that $duration(A,I)\leq T(i)$:
\[
\cost{A}{I} = \min \{i\;|\; duration(A,I)\leq T(i)\}
\]
This means that $\cost{A}{I}$ is an upper bound on the number of successive convergecasts we can perform during the execution of $A$, on the sequence $I$.
It follows from the definition that an algorithm performs an optimal data aggregation if and only if $\cost{A}{I} = 1$.
Also, if $duration(A,I)=\infty$, then it is possible that $\cost{A}{I}<\infty$. Indeed, if $i_{\max} = \min_i\{i\,|\,T(i) = \infty\}$ is well-defined, then $\cost{A}{I} = i_{\max}$, otherwise $\cost{A}{I}=\infty$.
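In code, the definition reads as follows. The oracle \texttt{opt}
hides all the difficulty: it must return the ending time of an optimal
convergecast starting at time $t$ (\texttt{None} standing for
$opt(t)=\infty$), a computation that is itself hard in general; the
sketch only illustrates the bookkeeping of $T(i)$.
\begin{verbatim}
def cost(duration_A, opt):
    # Smallest i with duration_A <= T(i), where T(1) = opt(0) and
    # T(i+1) = opt(T(i) + 1).  opt(t) returns None when the optimal
    # offline algorithm does not terminate, i.e. T(i) = infinity.
    i, t = 1, opt(0)
    while t is not None and t < duration_A:
        i, t = i + 1, opt(t + 1)
    return i
\end{verbatim}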
\section{Oblivious and Online Adaptive Adversaries}\label{sec:oblivious adversary}
In this section we give several impossibility results when nodes have no knowledge, and then show several results depending on the amount of knowledge. We choose to limit our study to some specific knowledge, but one can be interested in studying the possible solutions for different kinds of knowledge.
\subsection{Impossibility Results When Nodes Have no Knowledge}
\begin{theorem}
For every algorithm $A\in \mathcal{D}_\mathsf{ODA}$, there exists an adaptive online adversary generating a sequence of interactions $I$ such that $\cost{A}{I} = \infty$.
\end{theorem}
\begin{proof}
Let $I$ be the sequence of interactions between $3$ nodes $a$, $b$, and the sink $s$, defined as follows.
$I_0 = \{a,b\}$. If $a$ transmits, then for every $i\in\mathbb{N}$, $I_{2i+1} = \{a,s\}$ and $I_{2i+2} = \{a,b\}$ so that $b$ will never be able to transmit.
Symmetrically if $b$ transmits the same thing happens.
If no node transmits, then $I_1 = \{b,s\}$. If $b$ transmits, then $I_{2i+2} = \{a,b\}$ and $I_{2i+3} = \{b,s\}$ so that $a$ will never be able to transmit.
Otherwise $I_2 = \{a,b\}$ and we continue as in the first case.
$A$ never terminates, and a convergecast is always possible, so that $\cost{A}{I} = \infty$.
\end{proof}
In the case of deterministic algorithms, the previous theorem holds even with an oblivious adversary. However, for randomized algorithms, the problem is more complex. The following theorem states the impossibility result for oblivious randomized algorithms, leaving the case of general randomized algorithms against an oblivious adversary as an open question.
\begin{theorem}
For every randomized algorithm $A\in \mathcal{D}_\mathsf{ODA}^\emptyset$, there exists an oblivious adversary generating a sequence of interactions $I$ such that $\cost{A}{I} = \infty$ with high probability\footnote{An event $A$ occurs with high probability, when $n$ tends to infinity, if $P(A) > 1-o\left(\dfrac{1}{\log(n)}\right)$}.
\end{theorem}
\begin{proof}
Let $V=\{s, u_0, \ldots, u_{n-2}\}$. In the sequel, indexes are modulo $n-1$ \textit{i.e.}, $\forall i,j\geq0$, $u_i=u_j$ with $i\equiv j \mod n-1$. Let $I^\infty$ defined by, for all $i\in\mathbb{N}$, $I^\infty_i = \{u_i, s\}$.
Let $I^l$ be the finite sequence, prefix of length $l>0$ of $I^\infty$.
For every $l>0$, the adversary can compute the probability $P_l$ that no node transmits its data when executing $A$ on $I^l$. $(P_l)_{l>0}$ is a non-increasing sequence; it converges to a limit $\mathcal{P}\geq 0$.
For a given $l$, if $P_l\geq 1/n$, there are at least two nodes whose probability not to transmit when executing $A$ on $I^l$ is at least $n^{-\frac{1}{n-2}} = 1 - O\left(\frac{1}{\sqrt{n}}\right)$.
To prove this, we can see the probability $P_l$ as the product of $n-1$ probabilities $p_0$, $p_1$, $\ldots$, $p_{n-2}$ where $p_i$ is the probability that node $u_i$ does not transmit during $I^l$. Those events are independent since the algorithm is oblivious. Let $p_{d}\geq p_{d'}$ be the two greatest probabilities in $\{p_i\}_{0\leq i\leq n-2}$, we have:
\[\left(\prod_{i=0}^{n-2} p_i \geq \frac{1}{n}\right)
\Rightarrow \left(\sum_{i=0}^{n-2} \log(p_i) \geq \log\left(\frac{1}{n}\right)\right)
\Rightarrow \left((n-2)\log(p_{d'}) \geq\log\left(\frac{1}{n}\right)\right)
\Rightarrow \left(p_{d'} \geq n^{-\frac{1}{n-2}}\right)
\]
This implies that, if $\mathcal{P}\geq 1/n$, then $A$ does not terminate on the sequence $I^{\infty}$ with high probability.
Otherwise, let $l_0$ be the smallest index such that $P_{l_0} < 1/n$. Hence, with high probability, at least one node transmits when executing $A$ on $I^{l_0}$.
Also, $P_{l_0 - 1}\geq 1/n$, so that the previous argument implies that there are at least two nodes $u_d$ and $u_{d'}$ whose probability to still have data (after executing $A$ on $I^{l_0 - 1}$) is at least $n^{-\frac{1}{n-2}}$. If $l_0 = 0$ we can choose $\{u_d, u_{d'}\} = \{u_1, u_2\}$. We have $u_d\neq u_{l_0}$ or $u_{d'}\neq u_{l_0}$. Without loss of generality, we can suppose $u_d\neq u_{l_0}$, so that the probability that $u_d$ transmits is the same in $I^{l_0 - 1}$ and in $I^{l_0}$.
Now, $u_d$ is a node whose probability not to transmit when executing $A$ on $I^{l_0}$ is at least $n^{-\frac{1}{n-2}} = 1 - O\left(\frac{1}{\sqrt{n}}\right)$. Let $I'$ be the sequence of interactions defined as follows:
\[
\forall i\in [0,n-2]\setminus\{d-1\},\; I'_i = \{u_{i}, u_{i+1}\},\;
I'_{d-1} = \{u_{d-1}, s\}
\]
$I'$ is constructed such that $u_d$ (the node that has data with high probability) must send its data along a path that contains all the other nodes in order to reach the sink. But this path contains a node that no longer owns data.
Let $I$ be the sequence of interactions starting with $I^{l_0}$ and followed by $I'$ infinitely often.
We have shown that with high probability, after $l_0$ interactions, at least one node transmits its data and the node $u_d$ still has a data. The node that does not have data prevents the data owned by $u_d$ from reaching $s$. So that $A$ does not terminate, and since a convergecast is always possible, then $\cost{A}{I} = \infty$.
\end{proof}
\subsection{When Nodes Know The Underlying Graph}
Let $\bar{G}$ be the underlying graph \textit{i.e.}, $\bar{G}=(V,E)$ with $E=\left\{(u,v)\,|\,\exists t \in \mathbb{N}, \, I_t = \{u,v\}\right\}$. The following results assume that the underlying graph is given initially to every node.
\begin{theorem}\label{thm: impossibility results with knowledge of the underlying graph agains online adversary}
If $n\geq 4$, then, for every algorithm $A\in \mathcal{D}_\mathsf{ODA}(\bar{G})$, there exists an online adaptive adversary generating a sequence of interactions $I$ such that $\cost{A}{I} = \infty$.
\end{theorem}
\begin{proof}
$V = \{s, u_1, u_2, u_3\}$. We create a sequence of interactions with the underlying graph $\bar{G}=\left(V,\left\{(s, u_1), (u_1, u_2),(u_2, u_3), (u_3, s)\right\}\right)$. We start with the following interactions:
\begin{equation}\label{eq:interaction to fail DODA with underlying graph}
\left(\{u_1, s\},\{u_3, s\},\{u_2, u_1\},\{u_2, u_3\}\right).
\end{equation}
If $u_2$ transmits to $u_1$ in $I_2$, then we repeat infinitely often the three following interactions:
\[\left(\{u_1, u_2\},\{u_2, u_3\},\{u_3, s\}, ...\right).\]
Else, if $u_2$ transmits to $u_3$ in $I_3$, then we repeat infinitely often the three following interactions:
\[\left(\{u_3, u_2\},\{u_2, u_1\},\{u_1, s\}, ...\right).\]
Otherwise, we repeat the four interactions (\ref{eq:interaction to fail DODA with underlying graph}), and apply the previous reasoning. Then, $A$ never terminates, and a convergecast is always possible, so that $\cost{A}{I} = \infty$.
\end{proof}
\begin{theorem}
If every interaction that occurs at least once occurs infinitely often, then there exists $A \in \mathcal{D}_\mathsf{ODA}^{\emptyset}(\bar{G})$ such that $\cost{A}{I} < \infty$ for every sequence of interactions $I$. However, $\cost{A}{I}$ is unbounded.
\end{theorem}
\begin{proof}
Nodes can compute a spanning tree $T$ rooted at $s$ (they compute the same tree, using node identifiers). Then, each node waits to receive the data from its children and then transmits to its parent as soon as possible. All transmissions are done in finite time because each edge of the spanning tree appears infinitely often.
However, when $\bar{G}$ is not a tree, there exists another spanning tree $T'$. Let $e$ be an edge of $T$ that is not in $T'$. By repeated interactions along edges of $T'$, an arbitrary number of convergecasts can be performed while a node waits to send its data to its parent through $e$ in the execution of $A$.
\end{proof}
\begin{theorem}
If $\bar{G}$ is a tree, there exists $A\in \mathcal{D}_\mathsf{ODA}^{\emptyset}(\bar{G})$ that is optimal.
\end{theorem}
\begin{proof}
Each node waits to receive the data from its children, then transmits to its parent as soon as possible.
\end{proof}
\subsection{If Nodes Know Their Own Future}
For a node $u\in V$, $u.future$ denotes the future of $u$ \textit{i.e.}, the sequence of interactions involving $u$, with their times of occurrences. In this case, according to the model, two interacting nodes exchange their future and non-oblivious nodes can store it. This may seem in contradiction with the motivation of the problem that aims to reduce the number of transmissions. However, it is possible that the data must be sent only once for reasons not related to energy (such as data that cannot be duplicated, tokens, etc.). That is why we consider this case, for the sake of completeness, even if oblivious algorithms should be favored.
\begin{theorem}
There exists $A\in \mathcal{D}_\mathsf{ODA}(future)$ such that $\cost{A}{I} \leq n$ for every sequence of interactions $I$.
\end{theorem}
\begin{proof}
One can show that the duration of $n-1$ successive convergecasts is sufficient to perform a broadcast from any source. So every node broadcasts its future to the other nodes. After that, all the nodes are aware of the future of every node and can compute the optimal data aggregation schedule. So that it takes only one convergecast to aggregate the data of the whole network. In total, $n$ successive convergecasts are sufficient.
\end{proof}
\section{Randomized Adversary}\label{sec:random adversary}
The randomized adversary constructs the sequence of interactions by picking a couple of nodes among all possible couples, uniformly at random. Thus, the underlying graph is a complete graph
of $n$ nodes (including the sink) and every interaction occurs with
the same probability $p = \frac{2}{n(n-1)}$.
In this section, the complexity is computed on average (because the adversary is randomized) and no longer ``in the worst case'' as before. In this case, considering the number of interactions is sufficient to represent the complexity of an algorithm. We see in Theorem~\ref{thm:performance of offline algorithm agains randomized adversary} that an offline algorithm terminates in $\Theta(n\log(n))$ interactions w.h.p. This bound gives a way to convert a complexity in terms of number of interactions into a cost. Indeed, if an algorithm $\mathcal{A}$ terminates in $O(n^2)$ interactions, then its performance is $O(n/\log(n))$ times worse than the offline algorithm and $\cost{A}{I} = O(n/\log(n))$ for a randomly generated sequence of interactions $I$. For the sake of simplicity, in the remainder of the section, we give the complexity in terms of number of interactions.
Since an interaction does not depend on previous interactions, the algorithms we propose here are oblivious, i.e., they do not modify the memory of the nodes. In more detail, the output of our algorithms depends only on the current interaction and on the information available in the node.
First, we introduce three oblivious DODA algorithms; a Python sketch of all three is given after the list. For the sake of simplicity, we assume that the output is ignored if the interacting nodes do not both have data. Also, to break symmetry, we suppose the nodes that interact are given as input ordered by their identifiers.
\begin{itemize}
\item Waiting ($\ensuremath{\mathcal{W}}\in \mathcal{D}_\mathsf{ODA}^{\emptyset}$): A node transmits only when it is connected to the sink $s$:
\[
\ensuremath{\mathcal{W}}: (u_1, u_2, t)=\left\{
\begin{array}{ll}
u_i\,&\text{if $u_i.isSink$}\\
\bot\,&\text{otherwise}
\end{array}
\right.
\]
\item
Gathering (${\cal GA} \in \mathcal{D}_\mathsf{ODA}^{\emptyset}$): A node transmits its data
when it is connected to the sink $s$ or to a node having data:
\[
{\cal GA}: (u_1, u_2, t) = \left\{ \begin{array}{ll}
u_i & {\rm if}\ u_i.isSink \\
u_1 & {\rm otherwise} \\
\end{array} \right.
\]
\item Waiting Greedy with parameter $\tau\in\mathbb{N}$ ($\WaitingGreedy{\tau}\in \mathcal{D}_\mathsf{ODA}^{\emptyset}(meetTime)$): The node with the greatest meet time transmits, if its meet time is greater than $\tau$:
\begin{align*}
m_1 = u_1.meetTime(t) \\
m_2 = u_2.meetTime(t)
\end{align*}
\[\arraycolsep=2.5pt\renewcommand{\arraystretch}{1}
\WaitingGreedy{\tau}: (u_1, u_2, t){=}\left\{
\begin{array}{ll}
{u_1}&\text{if $m_1 \leq m_2\wedge \tau < m_2$}\\
{u_2}&\text{if $m_1 > m_2\wedge \tau < m_1$}\\
{\bot}&\text{otherwise}
\end{array}
\right.
\]
One can observe that after time $\tau$, the algorithm acts as the Gathering algorithm.
\end{itemize}
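The sketch announced above follows the three definitions literally;
\texttt{None} stands for $\bot$, and the hypothetical method
\texttt{meet\_time} stands for the $meetTime$ oracle.
\begin{verbatim}
def waiting(u1, u2, t):
    if u1.is_sink: return u1
    if u2.is_sink: return u2
    return None                    # bottom: nobody transmits

def gathering(u1, u2, t):
    if u1.is_sink: return u1
    if u2.is_sink: return u2
    return u1                      # inputs are ordered by identifier

def waiting_greedy(tau):
    def algo(u1, u2, t):
        m1, m2 = u1.meet_time(t), u2.meet_time(t)
        if m1 <= m2 and tau < m2: return u1
        if m1 > m2 and tau < m1: return u2
        return None
    return algo
\end{verbatim}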
\subsection{Lower Bounds}
We show a lower bound $\Omega (n^2)$ on the number of interactions
required for DODA against the randomized adversary. The lower bound holds for all algorithms (including randomized
ones) that do not have knowledge about the future of the evolving network.
The lower bound matches the upper bound of the \emph{Gathering} algorithm given in the next subsection. This implies that this bound is tight.
\begin{theorem}
The expected number of interactions required for DODA is $\Omega(n^2)$.
\end{theorem}
\begin{proof}
We show that the last data transmission requires $\Omega(n^2)$
interactions in expectation.
We consider any (randomized) algorithm {\cal A} and its execution
for DODA. Before the last transmission (from some node, say $v$,
to the sink $s$), only $v$ has data except for $s$.
The probability that $v$ and $s$ interact in the next interaction is
$\frac{2}{n(n-1)}$.
Thus, the expected number $EI$ of interactions required for $v$ to transmit to $s$ is:
\[ EI = \frac{n(n-1)}{2} \]
So that the whole aggregation requires at least $EI=\Omega(n^2)$.
\end{proof}
We also give a tight bound for algorithms that know the full sequence of interactions.
\begin{theorem}\label{thm:performance of offline algorithm agains randomized adversary}
The best algorithm in $\mathcal{D}_\mathsf{ODA}^{\emptyset}(\text{full knowledge})$ terminates in $\Theta(n \log(n))$ interactions, in expectation and with high probability.
\end{theorem}
\begin{proof}
First, we show that the expected number of interactions of a broadcast algorithm is $\Theta(n \log(n))$.
The first data transmission occurs when the source node (say $v_0$)
interacts with another node.
The probability of occurrence of the first data transmission is
$\frac{2(n-1)}{n(n-1)}$.
After the $(i-1)$-th data transmission, $i$ nodes (say
$V_{i-1}=\{v_0, v_1, \ldots , v_{i-1}\}$) have the data and
the $i$-th data transmission occurs when a node in
$V_{i-1}$ interacts with a node not in $V_{i-1}$. This happens with probability $\frac{2i(n-i)}{n(n-1)}$.
Thus, if $X$ is the number of interactions required to perform a broadcast, then we have:
\begin{align*}
E(X)&=\sum^{n-1}_{i=1} \frac{n(n-1)}{2i(n-i)}
=\frac{n(n-1)}{2} \sum^{n-1}_{i=1} \frac{1}{i(n-i)} \\
&=\frac{n(n-1)}{2n} \sum^{n-1}_{i=1} (\frac{1}{i}+\frac{1}{n-i}) \\
&= (n-1) \sum^{n-1}_{i=1} \frac{1}{i} \in \Theta(n \log(n)).
\end{align*}
And the variance is
\begin{align*}
Var(X) &= \sum^{n-1}_{i=1} \left(1-\frac{2i(n-i)}{n(n-1)} \right)/\left(\frac{2i(n-i)}{n(n-1)}\right)^2 \\
&= n(n-1)\sum^{n-1}_{i=1} \frac{n(n-1) - 2i(n-i)}{\left(2i(n-i)\right)^2}\\
&=O\left(n^4\sum^{\lfloor n/2\rfloor-1}_{i=1} \left(\frac{1}{i(n-i)}\right)^2\right)
\end{align*}
The last sum is obtained from the previous one by observing that it is symmetric with respect to the index $i=n/2$, and the removed elements ($i=\lfloor n/2 \rfloor$ and possibly $i=\lceil n/2 \rceil$) are negligible.
We define $f: x\mapsto \frac{1}{x^2(n-x)^2}$. Since $f$ is decreasing between $1$ and $n/2$, we have
\begin{align*}
\sum^{\lfloor n/2\rfloor - 1}_{i=1} f(i) &\leq f(1) + \int_{1}^{n/2}f(x)dx \\
&= \frac{1}{(n-1)^2} + \frac{\frac{(n-2) n}{n-1}+2 \log (n-1)}{n^3} = O\left(\frac{1}{n^2}\right)
\end{align*}
So that the variance is in $O(n^2)$.
Using Chebyshev's inequality, we have
\begin{align*}
P(|X - E(X)| > n\log(n))
&=O\left(\frac{1}{\log^2(n)}\right)
\end{align*}
Therefore, a sequence of $\Theta(n\log(n))$ interactions is sufficient to perform a broadcast with high probability.
By reversing the order of the interactions in the sequence of interactions, this implies that a sequence of $\Theta(n\log(n))$ interactions is also sufficient to perform a convergecast with the same probability. Aggregating data along the convergecast tree gives a valid data aggregation schedule.
\end{proof}
\begin{corollary}
The best algorithm in $\mathcal{D}_\mathsf{ODA}(future)$ terminates in $\Theta(n \log(n))$ interactions, in expectation and with high probability.
\end{corollary}
\begin{proof}
If each node starts with its own future, $O(n\log(n))$ interactions are sufficient to retrieve with high probability the future of the whole network. Then $O(n\log(n))$ interactions are sufficient to aggregate all the data with the full knowledge.
\end{proof}
\subsection{Algorithm Performance Without Knowledge}
\tolerance=2000
\begin{theorem}
The expected number of interactions the Waiting algorithm
requires to terminate is $O(n^2 \log(n))$.
The expected number of interactions the Gathering algorithm
requires to terminate is $O(n^2)$.
\end{theorem}
\begin{proof}
In the \emph{Waiting} algorithm, data is sent to the sink when a node with data is connected to the sink. We denote by $X_W$ the random variable that equals the number of interactions for the algorithm Waiting to terminate.
The probability of occurrence of the first data transmission is
$\frac{2(n-1)}{n(n-1)}$.
The probability of occurrence of the $i$-th data transmission
after the $(i-1)$-th data transmission is $\frac{2(n-i)}{n(n-1)}$.
Thus, the expected number of interactions required for DODA is
\begin{align*}
E(X_W) &= \sum^{n-1}_{i=1} \frac{n(n-1)}{2(n-i)} \\
&= \frac{n(n-1)}{2} \sum^{n-1}_{i=1} \frac{1}{i} \in O(n^2 \log(n))
\end{align*}
Since those events are independent, we also have that the variance of the number of interactions required for DODA is
\begin{align*}
Var(X_W)
&=\sum^{n-1}_{i=1} \frac{n(n-1) - 2i}{n(n-1)}\times\frac{(n(n-1))^2}{4i^2}\\
&=\sum^{n-1}_{i=1} \frac{n^2(n-1)^2 - 2in(n-1)}{4i^2}\\
&\sim_{+\infty} \sum^{n-1}_{i=1} \frac{n^4}{4i^2}\qquad\sim_{+\infty} \frac{n^4\pi^2}{24}
\end{align*}
Using Chebyshev's inequality, we have
\begin{align*}
P(\left|X_W - E(X_W)\right| > n^2\log(n))
&= O\left( \frac{n^4\pi^2}{24n^4\log^2(n)}\right)\\
&= O\left( \frac{1}{\log^2(n)} \right)
\end{align*}
Therefore, algorithm Waiting terminates after $O(n^2\log(n))$ interactions with probability greater than $1-1/\log^2(n)$.
In the Gathering algorithm, data is sent when a node with data is connected to the sink or to another node with data. We denote by $X_G$ the random variable that equals the number of interactions for the algorithm Gathering to terminate.
Notice that the total number of data transmissions required to terminate
is exactly $n-1$.
The probability of occurrence of the first data transmission is
$\frac{n(n-1)}{n(n-1)}=1$.
The probability of occurrence of the $i$-th data transmission
after the $(i-1)$-th data transmission is $\frac{(n-i+1)(n-i)}{n(n-1)}$.
Thus, the expected number of interactions required to terminate is
\begin{align*}
E(X_G) &= \sum^{n-1}_{i=1} \frac{n(n-1)}{(n-i+1)(n-i)} \\
&= n(n-1) \sum^{n-1}_{i=1} \frac{1}{i(i+1)} \in O(n^2)
\end{align*}
\end{proof}
\begin{corollary}\label{thm:optimality of gathering}
Algorithm Gathering is optimal in $\mathcal{D}_\mathsf{ODA}$.
\end{corollary}
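Both bounds are easy to check empirically. The following
self-contained sketch (integer identifiers, node $0$ as the sink,
aggregation by $\min$) runs Gathering against the randomized
adversary; the printed ratio stays bounded as $n$ grows, matching the
$O(n^2)$ estimate, and replacing \texttt{gathering} by a rule that
only transmits to the sink reproduces the extra $\log(n)$ factor of
Waiting.
\begin{verbatim}
import random

def simulate(algo, n):
    # Number of interactions before only the sink (node 0) owns
    # data, under the uniform randomized adversary.
    data = {v: v for v in range(n)}   # each node owns its own data
    steps = 0
    while set(data) != {0}:
        u, v = random.sample(range(n), 2)
        steps += 1
        if u in data and v in data:
            r = algo(u, v)            # receiver, or None
            if r is not None:
                s = v if r == u else u
                data[r] = min(data[r], data.pop(s))
    return steps

gathering = lambda u, v: 0 if 0 in (u, v) else min(u, v)
n, runs = 50, 100
mean = sum(simulate(gathering, n) for _ in range(runs)) / runs
print(mean / n ** 2)
\end{verbatim}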
\subsection[Algorithm Performance With meetTime]{Algorithm Performance With $\mathit{meetTime}$}
In this subsection we study the performance of our algorithm Waiting Greedy, find the optimal value of the parameter $\tau$ and prove that this is the best possible algorithm with only the $meetTime$ information (even if nodes have unbounded memory).
We begin with a lemma that determines how many interactions are needed for a given number of nodes to interact with the sink.
\begin{lemma}\label{lemme: nf(n) interactions allow f(n) nodes to interact with the sink}
If $f$ is a function such that $f(n) = \omega(1)$ and $f(n) = o(n)$ then, in $nf(n)$ interactions, $\Theta(f(n))$ nodes interact with the sink with high probability.
\end{lemma}
\begin{proof}
The probability of the $i$-th interaction between the sink and a node that has data, after $i-1$ such interactions, is $\frac{2(n-i)}{n(n-1)}$. Let $X$ be the number of interactions needed for the sink to meet $f(n)$ different nodes. We have:
\begin{align*}
E(X) &=\sum^{f(n)}_{i=1} \frac{n(n-1)}{2(n-i)} \\
&=\frac{n(n-1)}{2} (H(n - 1) - H(n-f(n))) \\
&\sim \frac{n^2}{2} \left(-\log\left(1-\frac{f(n)}{n}\right)+ o(1)\right)\\
&\sim \frac{n^2}{2} \frac{f(n)}{n}\sim \frac{f(n)n}{2}
\end{align*}
and the variance is
\begin{align*}
Var(X)&=\sum^{f(n)}_{i=1} \left(1-\frac{2(n-i)}{n(n-1)}\right)/\left(\frac{2(n-i)}{n(n-1)}\right)^2\\
&\sim \sum^{f(n)}_{i=1} \frac{n^4}{4n^2} \sim \frac{n^2}{4}f(n)
\end{align*}
Using Chebyshev's inequality, we have
\begin{align*}
P(|X - E(X)| > nf(n)) &=O\left( \frac{n^2f(n)}{4n^2f(n)^2}\right)\\
&=O\left( \frac{1}{f(n)}\right)
\end{align*}
So that $X = \Theta\left(nf(n)\right)$ with high probability if $1/f(n) = o(1)$ (or equivalently $f(n) = \omega(1)$).
\end{proof}
Now we can state our theorem about the performance of Waiting Greedy depending on the parameter $\tau$.
\begin{theorem}
Let $f$ be a function such that $f(n) = o(n)$ and $f(n) = \omega(1)$. The algorithm Waiting Greedy with $\tau = \Theta\left(\max\left(nf(n), n^2\log(n)/f(n)\right)\right)$ terminates in $\tau$ interactions with high probability.
\end{theorem}
\begin{proof}
To have an upper bound on the number of interactions needed by Waiting Greedy to terminate, we decompose the execution into two phases, one between time 0 and a time $t_1$ and the other between time $t_1$ and a time $t_2=\tau$. In the last phase, a set of nodes $L\subset V$ interacts at least once directly with the sink. Nodes in $L$ do not transmit to anyone in the first phase by definition of the algorithm (they have a meetTime smaller than $\tau$). Nodes in $L$ help the other nodes (in $L^c = V\backslash L$) to transmit their data in the first phase. Nodes in $L^c$ may also transmit to nodes in $L$ in the second phase, but we do not take this into account; that is why we obtain an upper bound.
If a node $u$ in $L^c$ interacts with a node in $L$ in the first phase, either it transmits its data, or (by definition of the algorithm) it has a meetTime smaller than $\tau$ (and smaller than $t_1$ because it is not in $L$). In every case, a node in $L^c$ that meets a node in $L$ in the first phase transmits its data.
To prove the theorem, \textit{i.e.}, in order for the algorithm to terminate before $\tau$ with high probability, we prove two claims: (a) the number of nodes in $L$ is $f(n)$ with high probability if $t_2 - t_1 = nf(n)$ and (b) all nodes in $L^c$ interact with a node in $L$ with high probability if $t_1 = \Theta(n^{2}\log(n)/f(n))$.
The first claim is implied by Lemma~\ref{lemme: nf(n) interactions allow f(n) nodes to interact with the sink}. Now we prove the second claim.
Let $X$ be the number of interactions required for the nodes in $L^c$ to meet a node in $L$.
The probability of the $i$-th interaction between a node in $L^c$ (with a data) and a node in $L$, after $i-1$ such interactions already occurred, is ${2f(n)(n-f(n)-i)}/{n(n-1)}$.
It follows that the expected number of interactions to aggregate all the data of $L^c$ is
\begin{align*}
E(X) &= \sum_{i = 1}^{n - f(n)-1}\frac{n(n-1)}{2f(n)(n-f(n)-i)} \\
&= \frac{n(n-1)}{2f(n)} \sum_{i = 1}^{n - f(n)-1}\frac{1}{n-f(n)-i}\\
&\sim_{+\infty} \frac{n^2}{2f(n)}\log(n - f(n)) \\
&=\frac{n^2}{2f(n)}\log(n(1 - f(n)/n)) \sim
\frac{n^2\log(n)}{2f(n)}
\end{align*}
And the variance is
\begin{align*}
Var(X)&=\sum^{n - f(n)-1}_{i=1}\frac{ \left(1-\frac{2f(n)(n-f(n)-i)}{n(n-1)}\right)}{\left(\frac{2f(n)(n-f(n)-i)}{n(n-1)}\right)^2}\\
%
&\sim \sum^{n - f(n)-1}_{i=1} \frac{n^4}{4f(n)^2n^2}\sim \frac{n^3}{4f(n)^2}
\end{align*}
Using Chebyshev's inequality, we have
\begin{align*}
P\left(|X - E(X)| > \frac{n^2\log(n)}{2f(n)}\right)
&=O\left( \frac{1}{n\log^2(n)}\right)
\end{align*}
Thus $X = O\left(\frac{n^2\log(n)}{f(n)}\right)$ with high probability.
\end{proof}
\begin{corollary}
The algorithm Waiting Greedy, with $\tau = \Theta(n^{3/2}\sqrt{\log(n)})$ terminates in $\tau$ interactions with high probability.
\end{corollary}
\begin{proof}
In the last theorem, the bound $O\left(\max\left(nf(n), n^2\log(n)/f(n)\right)\right)$ is minimized by the function $f: n \mapsto \sqrt{n\log(n)}$.
\end{proof}
\begin{theorem}\label{thm:optimality of greedy waiting}
Waiting Greedy with $\tau = \Theta(n^{3/2}\sqrt{\log(n)})$ is optimal in $\mathcal{D}_\mathsf{ODA}(meetTime)$.
\end{theorem}
\begin{proof}
For the sake of contradiction, we suppose the existence of an algorithm $A\in \mathcal{D}_\mathsf{ODA}(meetTime)$ that terminates in $T(n)$ interactions with high probability, with $T(n) = o\left(n^{3/2}\sqrt{\log(n)}\right)$. Without loss of generality we can suppose that $A$ does nothing after $T(n)$ interactions. Indeed, the algorithm $A'$ that executes $A$ up to $T(n)$ and does nothing afterward has the same upper bound (since the bound holds with high probability).
Let $L$ be the set of nodes that interact directly with the sink during the first $T(n)$ interactions. Let $L^c$ be its complementary in $V\backslash\{s\}$. We know from Lemma~\ref{lemme: nf(n) interactions allow f(n) nodes to interact with the sink} that $\#L=O(T(n)/n)=o\left(\sqrt{n\log(n)}\right)$ w.h.p.
We can show that $T(n)$ interactions are not sufficient for all the nodes in $L^c$ to interact with nodes in $L$. If nodes in $L^c$ want to send their data to the sink, some data must be aggregated among nodes in $L^c$, then the remaining nodes in $L^c$ that still own data must interact with a node in $L$ before $T(n)$ interactions (this is not even sufficient to perform the DODA, but is enough to reach a contradiction).
When two nodes in $L^c$ interact, their meetTimes (which are greater than $T(n)$) and the previous interactions are independent of the future interactions occurring before $T(n)$. This implies that when two nodes in $L^c$ interact, using this information to decide which node transmits is the same as choosing the sender randomly. From Corollary \ref{thm:optimality of gathering}, this implies that the optimal algorithm to aggregate data in $L^c$ is the Gathering algorithm.
Now, we show that, even after the nodes in $L^c$ use the Gathering algorithm, there is with high probability at least one node in $L^c$ that still owns data and that does not interact with any node in $L$. This node prevents the termination of the algorithm before $T(n)$ interactions with high probability, which is a contradiction.
Formally, we have the following lemmas.
\begin{lemma}
Let $g(n)$ be the number of nodes in $L^c$. After using the Gathering algorithm during $T(n)$ interactions, the number of nodes in $L^c$ that still own data is in $\omega(\sqrt{n/\log(n)})$ with high probability.
\end{lemma}
\begin{proof}
Let $X$ be the number of interactions needed for $R(n)$ nodes in $L^c$ to transmit their data.
For the sake of contradiction, we suppose that
\begin{equation}\label{eq:g-R=o(g)}
g(n) - R(n) = O(\sqrt{n/\log(n)}) = o(g(n))
\end{equation}
and show that $X$ is greater than $T(n)$ w.h.p.
The probability of the $i$-th interaction between two nodes in $L^c$ that own data, after the $(i-1)$-th interaction already occurred, is $\frac{(g(n) - i)(g(n) - i - 1)}{n(n-1)}$. Thus we have:
\begin{align*}
E(X) &= \sum_{i=0}^{R(n) - 1}\frac{n(n-1)}{(g(n) - i)(g(n) - i - 1)}\\
&= n(n-1)\sum_{i=g(n) - R(n) + 1}^{g(n)}\frac{1}{i(i - 1)}\\
&= n(n-1)\left(\frac{1}{g(n) - R(n) + 1} - \frac{1}{g(n)}\right)\\
&= n(n-1)\frac{R(n)}{g(n)(g(n) - R(n))}
\end{align*}
From equation (\ref{eq:g-R=o(g)}) we deduce that $g\sim R$ and we have:
\begin{align*}
E(X) &\sim n^2\frac{1}{g(n) - R(n)}
\end{align*}
which implies
\[
E(X) = \Omega\left(n^2\sqrt{\frac{\log(n)}{n}}\right) = \Omega\left(n^{3/2}\sqrt{\log(n)}\right).
\] As in the previous proofs, the expectation is reached with high probability. This contradicts the fact that $T(n)=o(n^{3/2}\sqrt{\log(n)})$.
\end{proof}
\begin{lemma}
Let $H\subset L^c$ be the nodes in $L^c$ that still own data after the gathering. With high probability, $T(n)$ interactions are not sufficient for all the nodes in $H$ to interact with nodes in $L$.
\end{lemma}
\begin{proof}
We know from the previous lemma that the number of nodes in $H$ is $h(n) = \omega(\sqrt{n/\log(n)})$.
Let $X$ be the random variable that equals the number of interactions needed for the nodes in $H$ to interact with the nodes in $L$. We show that $X$ is in $\omega(n^{3/2}\sqrt{\log(n)})$ with high probability. Indeed, the probability of the $i$-th interaction between a node in $H$ that owns data and a node in $L$, after the $(i-1)$-th such interaction already occurred, is $\frac{2f(n)(h(n) - i)}{n(n-1)}$, where $f(n)=\#L$. Thus we have:
\begin{align*}
E(X) &= \sum_{i=0}^{h(n) - 1}\frac{n(n-1)}{2f(n)(h(n) - i)}\\
&= \frac{n(n-1)}{2f(n)}\sum_{i=1}^{h(n)}\frac{1}{i}
\sim \frac{n^2}{2f(n)}\log(h(n))
\end{align*}
But since $f(n) = o\left(\sqrt{n\log(n)}\right)$, we have
\begin{align*}
E(X) &= \omega\left(\frac{n^{3/2}}{\sqrt{\log(n)}}\log(h(n))\right) \\
&= \omega\left( \frac{n^{3/2}}{\sqrt{\log(n)}}\log(n/\log(n)) \right) \\
&= \omega\left( n^{3/2}\sqrt{\log(n)} \right)
\end{align*}
Again the bound holds with high probability. This implies that, with high probability, $T(n)=o( n^{3/2}\sqrt{\log(n)} )$ interactions are not sufficient for all the nodes in $H$ to interact with nodes in $L$.
\end{proof}
\textit{End of the proof of theorem \ref{thm:optimality of greedy waiting}}.
We have shown that $T(n)$ interactions are not sufficient for the nodes in $L^c$ to transmit their data (directly or indirectly) to the nodes in $L$. Indeed, we have shown that the nodes in $L^c$ can apply the Gathering algorithm so that $\omega(\sqrt{n/\log(n)})$ nodes in $L^c$ still own data with high probability. But, with high probability, one of the $\omega(\sqrt{n/\log(n)})$ remaining nodes does not interact with a node in $L$ in $T(n)$ interactions. This implies that, with high probability, at least one node cannot send its data to the sink in $T(n)$ interactions and an algorithm $A$ with such a bound $T$ does not exist.
\end{proof}
\section{Concluding remarks}
We defined and investigated the complexity of the distributed online data aggregation problem in dynamic graphs where interactions are controlled by an adversary. We obtained various tight complexity results for different adversaries and node knowledge, that open several scientific challenges:
\begin{enumerate}
\item What knowledge has a real impact on the lower bounds or algorithm efficiency?
\item Can similar optimal algorithms be obtained with fixed memory or limited computational power?
\item Can randomized adversaries that use a non-uniform probabilistic distribution alter significantly the bounds presented here, in the same way as in the work by Yamauchi \textit{et al.}~\cite{YTKY12c}?
\end{enumerate}
\bibliographystyle{plain}
\section{Introduction}
In \cite{R} it was shown that one could lift a mod $p$ representation $\overline{\rho}$ to a power series ring in infinitely many variables, which was generalized to totally real fields in \cite{P}. In this paper, we extend these results to a reducible representation $\overline{\rho}: G_{\mathbb{Q}} \rightarrow GL_2(\mathbb{F}_q)$, where $\mathbb{F}_q$ is a finite field of characteristic $p$ and cardinality $q = p^t$. We use cohomology classes that work for all lifts $\rho_n$, unlike those of \cite{HR}, which cannot be used to lift from mod $p$ to mod $p^2$. This allows us to get an irreducible deformation of a reducible representation in infinitely many variables. The case of a reducible deformation of a residually reducible representation was addressed in \cite{S}. The author hopes to use these methods to generalize other lifting results of Ramakrishna to arbitrary number fields in an ongoing project.
Our main theorem is the following:
\begin{theorem} \label{t1}
Let $\overline{\rho}: G_{\mathbb{Q}} \rightarrow GL_2( \mathbb{F}_q)$ where $\overline{\rho} = \left(
\begin{array}{cc}
\phi & * \\
0 & 1 \\
\end{array}
\right)$ and $S$ be the set of primes containing $p$ and $\infty$ and all those at which $\overline{\rho}$ is ramified. Suppose:
\begin{itemize}
\item $p \geq 3$
\item $\overline{\rho}$ is indecomposable
\item the $\mathbb{F}_p$ span of the elements in the image of $\phi$ is all of $\mathbb{F}_q$,
\item $\phi^2 \ne1$
\item $\phi \ne \chi, \chi^{-1}$, where $\chi$ is the mod $p$ reduction of the cyclotomic character
\item for $\overline{\rho}$ odd that $\overline{\rho} |_{G_p}$ is not unramified of the form $ \left(
\begin{array}{cc}
1 & * \\
0 & 1 \\
\end{array}
\right)$, and for $\overline{\rho}$ even that $\overline{\rho} |_{G_p}$ is not $ \left(
\begin{array}{cc}
\chi & 0 \\
0 & 1 \\
\end{array}
\right)$ or $ \left(
\begin{array}{cc}
\chi^{-1} & * \\
0 & 1 \\
\end{array}
\right)$, where the $*$ may be trivial.
\end{itemize}
then there exists an irreducible deformation $\rho : G_{\mathbb{Q}} \rightarrow GL_2 (\mathbb{W} [[T_1, T_2, \ldots, T_r, \ldots]])$ of $\overline{\rho}$ ramified at infinitely many primes, where $\mathbb{W}$ denotes the ring of Witt vectors of $\mathbb{F}_q$.
\end{theorem}
We start with $\overline{\rho} : G_{\mathbb{Q}} \rightarrow GL_2 (\mathbb{F}_q)$ and by adding primes to the ramification we lift it successively to $\rho_n : G_{\mathbb{Q}, S_n} \rightarrow GL_2 (\mathbb{W}[[ T_1,...,T_n]] / (p, T_1,...,T_n)^n)$, and define $\rho = \displaystyle\lim_{\overleftarrow{n}} \rho_n$. If $R_n$ is the deformation ring of $\rho_n$ with $m_{R_n}$ its maximal ideal, then we see that $R_n / m^n_{R_n} = \mathbb{W}[[ T_1,...,T_n]] / (p, T_1,...,T_n)^n$. We will add more primes of ramification to $S_n$ and get a new set of primes $S_{n+1}$, such that the deformation ring associated to $S_{n+1}$ has $R_{n+1}/m^{n+1}_{R_{n+1}}$ as a quotient. This gives us a surjection from $R_{n+1} / m_{R_{n+1}}^{n+1} \twoheadrightarrow R_n /m_{R_n}^n$, which allows us to get the inverse limit $R = \displaystyle\lim_{\overleftarrow{n}} R_n/ m_{R_n}^n$.
\section{Notation}
We refer the reader to the notation used in \cite{HR} but briefly outline some definitions and notations here.
\begin{itemize}
\item $G_Z$ is the Galois group over $\mathbb{Q}$ of its maximal extension unramified outside a finite set of primes $Z$.
\item For $w \in Z$, $G_w = Gal (\overline{\mathbb{Q}}_w/ \mathbb{Q}_w)$, where $\mathbb{Q}_w$ is the completion of $\mathbb{Q}$ at $w$.
\item For a $G_{\mathbb{Q}} = Gal (\overline{\mathbb{Q}}/\mathbb{Q})$ module $M$, $\mathbb{Q}(M)$ is the field fixed by the subgroup of $G_{\mathbb{Q}}$ that acts trivially on $M$.
\item The $\mathbb{G}_m$-dual of $M$ is denoted by $M^*$.
\item For $f \in H^1(G_{\mathbb{Q}}, M)$, we denote by $L_f$ the field fixed by the kernel of the homomorphism $f |_{Gal ({\overline{\mathbb{Q}}/\mathbb{Q}(M)})}$.
\item $X = Ad^0 (\overline{\rho})$ is the set of trace zero $2 \times 2$ matrices over $\mathbb{F}_q$ with Galois action through $\overline{\rho}$ by conjugation.
\item Let $K = \mathbb{Q} (X^*)$ which is equivalent to $ \mathbb{Q}(X,\mu_p)$.
\item For $w$ unramified in a Galois extension $L /\mathbb{Q}$ we denote a Frobenius at $w$ by $\sigma_w$.
\item $S$ is the set of primes containing $p,\infty$ and all those at which $\overline{\rho}$ is ramified.
\item For a character $\kappa : G_{\mathbb{Q}} \rightarrow \mathbb{F}_q^*$, we denote by $\mathbb{F}_q (\kappa)$ the module $\mathbb{F}_q$ with Galois action via $\kappa$.
\end{itemize}
\section{Trivial primes and the modification of $N_v$}
We modify the lemmas in \cite{R} for a residually reducible representation $\overline{\rho}: G_{\mathbb{Q}} \rightarrow GL_2(\mathbb{F}_q)$ using the language of \cite{HR} and some ideas from \cite{CP}. The following lemmas are used in the next section to find sets of primes that we add to the ramification to remove global obstructions, and cohomology classes associated to these new primes which we use to overcome local obstructions to lifting at each level $n$. The lemmas are adaptations of lemmas of Ramakrishna, so we only show the modifications and outline the rest of the argument.
\begin{definition}
Let $\overline{\rho}$ be as in the hypothesis of Theorem \ref{t1}. For $v$ unramified in $\overline{\rho}$ we say $v$ is a trivial prime if:
\begin{itemize}
\item $v$ is unramified in $\mathbb{Q}(\overline{\rho})$ and $\overline{\rho}(\sigma_v)$ is trivial, and
\item $v \equiv 1 \mod p$
\end{itemize}
\end{definition}
Since $\overline{\rho}$ is reducible, the Galois module $X=Ad^0(\overline{\rho})$ has a filtration of Galois stable $\mathbb{F}_q$-subspaces of the form $U_1 = \left(\begin{array}{cc}
0 & b \\
0 & 0\\
\end{array}
\right), U_2 =\left(\begin{array}{cc}
a & b \\
0 & -a \\
\end{array}
\right), U_3 =\left( \begin{array}{cc}
a & b \\
c & -a \\
\end{array}
\right)$, while the Galois module $X^*$ has a filtration of the $\mathbb{F}_q$-subspaces $V_1 = (X/U_2)^*, V_2 = (X/U_1)^*, V_3 = X^*$. For a subquotient $M$ of $X$ or $X^*$, the $\phi$, trivial, $\phi^{-1}$, $\chi \phi$, $\chi$, $\chi \phi^{-1}$ eigenspaces are the eigenspaces under the prime to $p$ action of $Gal(\mathbb{Q} (\phi, \mu_p)/\mathbb{Q})$ under a splitting of the short exact sequence
$$1 \rightarrow Gal(K/\mathbb{Q} (\phi, \mu_p)) \rightarrow Gal(K/\mathbb{Q}) \rightarrow Gal (\mathbb{Q} (\phi, \mu_p)/ \mathbb{Q}) \rightarrow 1$$
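These characters can be checked directly: under the chosen splitting, an element $g$ of prime to $p$ order acts on $X$ through conjugation by $\left(\begin{array}{cc} \phi(g) & 0 \\ 0 & 1 \end{array}\right)$, and
$$\left(\begin{array}{cc} \phi(g) & 0 \\ 0 & 1 \end{array}\right) \left(\begin{array}{cc} a & b \\ c & -a \end{array}\right) \left(\begin{array}{cc} \phi(g)^{-1} & 0 \\ 0 & 1 \end{array}\right) = \left(\begin{array}{cc} a & \phi(g)\, b \\ \phi(g)^{-1} c & -a \end{array}\right)$$
so the graded pieces $U_1$, $U_2/U_1$ and $X/U_2$ carry the characters $\phi$, the trivial character and $\phi^{-1}$ respectively, and dualising twists each of these by $\chi$.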
\begin{definition}
For any $M \in \{U_1, U_2, U_3, V_1, V_2, V_3\}$ and $Z$ a finite set of primes containing $S$ we define $\Sha_Z^i (M)$ to be the kernel of the map $H^i(G_Z, M) \rightarrow \oplus_{v \in Z} H^i(G_v, M)$.
\end{definition}
\begin{definition}
Let $N_w$ be a subgroup of $H^1(G_w, M)$ and let $N_w^*$ be its annihilator in $H^1(G_w, M^*)$ under local Tate duality. Let $N = \{N_w \}_{w \in Z}$. The Selmer group $H^1_N (G_Z, M)$ is the kernel of the restriction map:
$$ H^1(G_Z, M) \rightarrow \oplus_{w \in Z} H^1(G_w, M)/N_w$$
Let $N^* = \{ N^*_w \}_{w \in Z}$. We define the dual Selmer group $H^1_{N^*}(G_Z, M^*)$ to be the kernel of the restriction map:
$$ H^1(G_Z, M^*) \rightarrow \oplus_{w \in Z} H^1(G_w, M^*)/N_w^*$$
\end{definition}
\begin{definition}
We say an element $f \in H^1(G_Z, X)$ (resp. $\psi \in H^1(G_Z, X^*)$) has rank $d$ if $d$ is the smallest number such that $f$ (resp. $\psi$) is in the image of the map $H^1(G_Z, U_d) \rightarrow H^1(G_Z, X)$ (resp. $H^1(G_Z, V_d) \rightarrow H^1(G_Z, X^*)$)
\end{definition}
\begin{lemma} \label{l1}
Let $M$ be any of the subspaces $\{U_1, U_2, U_3, V_1, V_2, V_3\}$, then there exists a finite set $Q_1$ of trivial primes such that $\Sha^1_{S \cup Q_1}(M) = 0$.
\end{lemma}
\begin{proof}
We mimic the proof of Prop 13 of \cite{HR} and outline the argument.
Let $\psi \in H^1 (G_S, M)$ and $L_{\psi}$ be the field fixed by the kernel of $\psi$. Let $H= Gal(L_{\psi}/ \mathbb{Q})$ and let $P$ be the subgroup of $H$ that acts trivially on $M$. By Proposition 8 of \cite{HR}, we can assume that $H^1(Gal(\mathbb{Q}(M)/\mathbb{Q}),M)$ is trivial. We have the inflation-restriction sequence:
$$0 \rightarrow H^1(H/P, M^P) \rightarrow H^1(H, M) \rightarrow H^1 (P, M)^{H/P}$$
Now, $H/P = Gal(\mathbb{Q}(M)/\mathbb{Q})$ and $P$ acts trivially on $M$, so $H^1(H/P, M^P) = H^1(Gal(\mathbb{Q}(M)/\mathbb{Q}),M)$, which was assumed to be trivial. So a non-trivial $\psi \in H^1(H,M)$ gives rise to a non-trivial element of $H^1 (P, M)^{H/P}$, which is $Hom(P,M)^{H/P}$ as $P$ acts trivially on $M$. This shows that $L_{\psi}$ is a non-trivial extension of $\mathbb{Q}(M)$. If $\psi \in \Sha^1_S (M)$, then we choose a trivial prime $q$ that splits completely from $\mathbb{Q}$ to $\mathbb{Q}(M)$ but not from $\mathbb{Q}(M)$ to $L_{\psi}$, which means that $\psi |_{G_{q}} \ne 0$, so $\psi \notin \Sha^1_{S \cup \{q\}}(M)$. As $H^1(G_S, M)$ is finite, we repeat this procedure and get a finite set of trivial primes $Q_1$ such that $\Sha^1_{S \cup Q_1}(M) =0$.
\end{proof}
Let $(z_v)_{v \in S \cup Q_1} \notin \oplus_{v \in S \cup Q_1} N_v \oplus \psi_{S \cup Q_1} (H^1(G_{S \cup Q_1}, X))$ be a set of cohomology classes which we will use eventually to overcome obstructions to lifting in the next section.
\begin{lemma} \label{l2}
Let $Q_1$ be as in lemma \ref{l1}. There exists a Cebotarev class $L$ of trivial primes such that
\begin{itemize}
\item $\beta |_{G_v}=0$ for all $\beta \in H^1 (G_{S \cup Q_1}, U_i^*)$ for $i=1,2$ and for all $\beta \in H^1(G_{S \cup Q_1}, X)$
\item There exists an $\mathbb{F}_p$-basis $\{ \psi,\psi_1,..,\psi_r \}$ of $H^1(G_{S \cup Q_1},X^*)$ such that $\{\psi_1,..,\psi_r \}$ is a basis of $\psi_{S \cup Q_1}^{*-1}(Ann(z_{w})_{w \in S \cup Q_1})$, $\psi |_{G_v} \ne 0$ and $\psi_i |_{G_v} = 0$ for all $i \geq 1$.
\end{itemize}
Furthermore, there is for each $v \in L$, a rank $3$ element $h^v \in H^1 (G_{S \cup Q_1 \cup \{v\}}, X)$ and a decomposition group above $v$ such that $h^v |_{G_w} =(z_w)_{w \in {S \cup Q_1}}$ and $h^v(\tau_v) = \left( \begin{array}{cc}
0 & 0 \\
s & 0 \\
\end{array}
\right)$ with $s \ne 0$.
\end{lemma}
\begin{proof}
The difference between the above lemma and Prop 34 of \cite{HR} is that we have added the additional condition of $\beta |_{G_v} = 0$ for all $\beta \in H^1(G_{S \cup Q_1}, X)$, where $X$ corresponds to $U_3$ in the notation above. This means that we need the prime $v$ to split completely in the $\phi, \phi^{-1}$ and identity eigenspaces, which are disjoint from the $\chi\phi^{-1}$ eigenspace of $U_1^*$ and the $\chi\phi^{-1}$ and $\chi$ eigenspaces of $U_2^*$. The modified definition of trivial primes imposes only splitting conditions, and the only non-splitting condition in the hypothesis above is in the $\chi \phi$ eigenspace of $Gal(K_{\psi}/K)$, none of which are in $U_1^*, U_2^*$ and $X$. Now, following the argument of Prop 34 of \cite{HR} we see that $v$ comes from a Cebotarev condition.
\end{proof}
As $h^v(\tau_v) = \left( \begin{array}{cc}
0 & 0 \\
s & 0 \\
\end{array}
\right)$ with $s \ne 0$, we define the sets $C_v$ and $N_v$ of \cite{CP}, to be the conjugates by the matrix $ \left( \begin{array}{cc}
0 & 1 \\
1 & 0 \\
\end{array}
\right)$ for the primes $v$ that we add.
We cannot control the behavior of $h^v$ at $\sigma_v$, so we add a pair of primes $v_1, v_2$ such that $h = -h^{v_1} +2 h^{v_2}$ has the appropriate image at Frobenius and $h |_{G_w} = z_w$ for $w \in S \cup Q_1$. Altering the definition of trivial primes still allows us to use the same techniques of \cite{HR} so we can use the following result (Theorem 41 of \cite{HR}).
\begin{theorem} \label{t3}
There is a set of two primes $\{v_1,v_2\}$ coming from the Cebotarev class $L$ in the previous lemma such that for $h = -h^{v_1} +2 h^{v_2}$ we can choose the values of $h(\sigma_{v_i})$ arbitrarily for $i=1,2$.
\end{theorem}
\section{Main theorem and its proof}
Let $C_l$ be the set of deformation classes of $\overline{\rho}$ to $\mathbb{W}$ satisfying
$ \rho(\sigma_l) = \left(
\begin{array}{cc}
l & 0 \\
0 & 1 \\
\end{array}
\right)$ and $ \rho(\tau_l) = \left(
\begin{array}{cc}
1 & * \\
0 & 1 \\
\end{array}
\right)$
We define $u_1, u_2 \in H^1 (G_l,X)$ by:
$ u_1 (\sigma_l) = \left(
\begin{array}{cc}
0 & 1 \\
0 & 0 \\
\end{array}
\right)$ and $ u_1 (\tau_l) = \left(
\begin{array}{cc}
0 & 0 \\
0 & 0 \\
\end{array}
\right)$
$ u_2 (\sigma_l) = \left(
\begin{array}{cc}
0 & 0 \\
0 & 0 \\
\end{array}
\right)$ and $ u_2 (\tau_l) = \left(
\begin{array}{cc}
0 & 1 \\
0 & 0 \\
\end{array}
\right)$
Note that these two cohomology classes are the same as in \cite{HR}. We refer the reader to the calculations of Lemma 4.1 in \cite{CP} to produce the third cohomology class $u_3$ to get a three dimensional subspace $N_l$ which preserves $C_l$.
Recall that $R_n$ is the deformation ring of $\rho_n$ with $m_{R_n}$ its maximal ideal. We assume that there exists $\rho_n : G_{S_n} \rightarrow GL_2 (R_n / m^n_{R_n})$, $\Sha^2_{S_n}(X) =0$ and $\dim H^1_N (G_{S_{n}}, X) = n$. By theorem \ref{t3} we can find a set of primes $B$ such that $\dim H^1_N (G_{S_n \cup B}, X) = n +1$ (we simply choose the $\alpha_i \in C_{v_i}$ in the proof of theorem \ref{t3}). Let $U$ be the deformation ring and $\rho_U$ be the deformation associated to the augmented set $S_n \cup B$, with the deformation conditions ($N_v, C_v$). If $B$ consists of primes such that $\rho_n |_{G_{v}} \in C_v$ for $v \in B$, we have a surjection $\pi : U \twoheadrightarrow R_n /m_{R_n}^n$ and we follow the argument as in \cite{R} or \cite{P}.
If $\rho_n |_{G_{v}} \notin C_v$ for $v \in B$, then we choose a set of cohomology classes $(z_{v})_{v \in S_n \cup B}$ such that the action of $z_{v}$ on $\rho_{n}|_{G_{v}}$ overcomes the local obstructions at $v \in S_n \cup B$. By theorem \ref{t3} we can find a set $A$ of two primes and a cohomology class $h$ such that:
\begin{itemize}
\item $\tilde{\rho_n} = (I+p^n h)\rho_n |_{G_q} \in C_q $ for $q \in A$ (no new obstructions at $A$)
\item $h |_{G_{v}} = z_{v}$, for $v \in S_n \cup B$ ($h$ overcomes local obstructions at $S_n \cup B$)
\end{itemize}
We now show that adding this set of primes $A$ does not alter the dimension of the Selmer groups, hence does not add more variables to the ring of power series.
\begin{lemma} \label{l4}
For a set $A = \{v_1, v_2\}$ of two primes chosen as in theorem \ref{t3} and $(z_v)_{v \in S_n \cup B} \notin \oplus_{v \in S_n \cup B} N_v \oplus \psi_{S_n \cup B} (H^1(G_{S_n \cup B}, X))$ we have $H^1_N(G_{S_n \cup B}, X) = H^1_N (G_{S_n \cup B \cup A}, X)$.
\end{lemma}
\begin{proof}
We adapt the argument of Prop 4.1 of \cite{R}.
Recall that in Lemma \ref{l2} the trivial primes $v$ were chosen so that $\beta \in H^1 (G_{S_n \cup B}, X) \Rightarrow \beta |_{G_v} =0$. As $N_v$ is a three dimensional subspace containing the zero cocycle, we see that $\beta |_{G_v} =0 \Rightarrow \beta \in N_v$. Thus, $H^1_{N} (G_{S_n \cup B }, X) \subset H^1_{N} (G_{S_n \cup B \cup A}, X)$.
Any element of $H^1_{N} (G_{S_n \cup B \cup A}, X) \setminus H^1_{N} (G_{S_n \cup B}, X)$ necessarily looks like $f + \alpha_1 h^{v_1} + \alpha_2 h^{v_2}$, where $f \in H^1 (G_{S_n \cup B}, X)$ and $h^{v_i}$ are as in theorem \ref{t3}. Since $\alpha_1 h^{v_1} + \alpha_2 h^{v_2} |_{G_v} = (\alpha_1 + \alpha_2) z_v \in f +N_v$ for $v \in S_n \cup B \cup A $, we see that $\alpha_1 + \alpha_2 = 0$. We know that $\alpha_1 (h^{v_1} - h^{v_2})|_{G_{q}}= 0$ for all $q \in S_n \cup B$ which means that $f|_{G_{q}} \in N_{q}$ for all $q \in S_n \cup B$ $\Rightarrow f \in H^1_N (G_{S_n \cup B}, X)$. We also know that $f|_{G_{v_{i}}} = 0$ for $i = 1,2$, so $f \in H^1_N (G_{S_n \cup B \cup A}, X)$. Thus, $\alpha_1 (h^{v_1} - h^{v_2}) \in H^1_N (G_{S_n \cup B \cup A}, X) \Rightarrow \alpha_1 ( h^{v_1} - h^{v_2} ) |_{G_{v_i}} \in N_{v_i}$, for $i = 1,2$. We now look at the construction of the $h^{v_i}$ in the proof of the previous lemma to get a contradiction.
If $\rho_n |_{G_{v_i}} \notin C_{v_i}$, we choose $h = -h^{v_1} + 2 h^{v_2}$ with $h |_{G_{v_i}} \notin N_{v_i}$ for $i=1,2$. Writing $A$ and $E$ for the restrictions of $h^{v_1}$ and $h^{v_2}$ to $G_{v_1}$ (not to be confused with the set of primes $A$), this implies that $-A+2E \notin N_{v_1}$, while we have that $(h^{v_1} - h^{v_2}) |_{G_{v_1}} \in N_{v_1} \Rightarrow A-E \in N_{v_1}$. Combining these two conditions we get that $A, E \notin N_{v_1}$, so $\alpha_1 =0$ and $f \in H^1_N (G_{S_n \cup B \cup A}, X)$, which is a contradiction. A similar argument works for $N_{v_2}$.
If $\rho_n |_{G_{v_i}} \in C_{v_i}$ and $h^{v_i} (\sigma_{v_i}) \notin N_{v_i}$ i.e. $A \notin N_{v_1}$, then using the fact that $A-E \in N_{v_1}$ and $-A +2E \in N_{v_1}$, we get that $A \in N_{v_1}$, which is a contradiction.
(Note that if $A$ consists of only one prime, then the proof is exactly the same as in the first part of Prop 4.1 in \cite{R})
\end{proof}
Let $\tilde{W}$ be the deformation ring and $\rho_{\tilde{W}}$ the deformation associated to the augmented problem with deformation conditions ($N_v, C_v$). As $\tilde{\rho_n} |_{G_q} \in C_q$ for $q \in A$ we have a surjection $\pi : \tilde{W} \twoheadrightarrow R_n /m_{R_n}^n$, which means that for some ideal $I_1$ we have $\tilde{W} / I_1 = R_n / m_{R_n}^n$. As $\dim H^1_N (G_{ S_n \cup B}, X) = \dim H^1_N (G_{S_n \cup B \cup A}, X) =n +1$, we see that as a ring $\tilde{W}$ is a quotient of a power series ring in $(n+1)$ variables. Thus, for some $I_2$, $\tilde{W}/I_2 = \mathbb{F}_q [[T_1,...,T_{n+1}]]/(T_1,...,T_{n+1})^2 $. Let $I = I_1 \cap I_2$, and define $W_0 = \tilde{W} /I$.
Our goal is to get a deformation ring which has $R_{n+1}/m_{R_{n+1}}^{n+1}$ as a quotient. If $W_0$ is such a deformation ring, we are done. If not, we get a sequence:
$$R_{n+1}/m_{R_{n+1}}^{n+1} \twoheadrightarrow ... \twoheadrightarrow W_1 \twoheadrightarrow W_0$$
where the kernel at each stage has order $p$. We add more primes of ramification to $S_n \cup B \cup A$ so that the augmented deformation ring has $W_1$ as a quotient and keep iterating to get our required deformation ring.
As $W_0$ is a quotient of $\tilde{W}$, we let $\rho_{W_0}$ be the deformation induced by $\rho_{\tilde{W}}$. As $\rho_{W_0} |_{G_v} \in C_v$ for $v \in S_n \cup B \cup A$ we can lift $\rho_{W_0}$ to $W_1$. Let us call this deformation $\rho_{W_1}$. Iterating the same argument as for $\rho_n$ we can lift $\rho_{W_1}$ to $W_2$ by adding a suitable set of primes $A_1$ to the set of ramification allowing us to eventually find a deformation that has $R_{n+1} / m^{n+1}_{R_{n+1}} = \mathbb{W}[[ T_1,...,T_{n+1}]] / (p, T_1,...,T_{n+1})^{n+1}$ as a quotient.
Now we are in a position to state the final theorem.
\begin{theorem}
There exists an irreducible deformation of $\overline{\rho}$, ramified at infinitely many primes, $\rho : G_{\mathbb{Q}} \rightarrow GL_2 (\mathbb{W} [[T_1, T_2, \ldots, T_r, \ldots]])$.
\end{theorem}
\begin{proof}
We let $\rho = \displaystyle\lim_{\overleftarrow{n}} \rho_n$ and see that at each stage $n$, $Im \rho_n \supseteq GL_2 (\mathbb{W}[[ T_1,...,T_n]]/(p, T_1,...,T_n)^n)$. Hence, we get our desired deformation. By Corollary 43 of \cite{HR}, the deformation is irreducible.
\end{proof}
\section{Concluding remarks}
\begin{itemize}
\item
In \cite{P}, the results of \cite{R} could not be generalized to all number fields. One of the problems in using our definition of trivial primes is that, when one adds them to the ramification set to satisfy the local condition property (finding an $h^v$ such that $h^v |_{G_w} =(z_w)_{w \in Z}$), the behavior at inertia is hard to control. In the reducible case one can use the subspaces $U_i$ to find a suitable $h^v$, but in the irreducible case it is hard to guarantee the behavior of $h^v$ at inertia.
\item In \cite{R}, the image of the deformation is full, i.e., the image of $\rho$ contains $SL_2(\mathbb{Z}_p [[T_1, T_2, \ldots, T_r, \ldots]])$; but this requires that the image of the residual representation $\overline{\rho}$ contains $SL_2(\mathbb{Z} / p \mathbb{Z})$, which is not true in our case. Hence, we do not conclude that the image of our deformation is full.
\end{itemize}
\section{Acknowledgements}
The author would like to thank A. Pacetti and M. Camporini for many conversations on deformation theory during two visits to Buenos Aires in 2014 which were funded by MathAmSud project DGMFPGT. This article was finished during a one year stay in 2015 at Universite Paris-6 which was funded by the CNPQ grant PDE 200845/2014-4. The author gratefully acknowledges support from all the agencies and thanks the Universidad de Buenos Aires and Universite Paris-6 for their hospitality.
\section{Introduction}
Blockchain can provide both distributed data storage and a distributed computing platform, where a large network of untrusted participants needs to reach agreement on transactional data states~\cite{scheuermann2015iacr}. As an innovative distributed ledger technology, blockchain improves certain software quality attributes and brings its distinctive features, e.g., transparency, immutability, and on-chain autonomy, into various application scenarios. In recent years, blockchain has been leveraged as a software component in a substantial number of application systems by enabling a decentralised infrastructure~\cite{2019-Bratanova-ACS}, for instance, in energy supply~\cite{energy}, industrial IoT~\cite{IIoT}, etc.
Despite blockchain being considered a viable solution for re-architecting application systems, concerns have grown significantly that blockchain systems may suffer from defects in on-chain algorithmic mechanisms, and from tedious disputes and debates in off-chain communities. Crises in two world-renowned blockchain systems, Ethereum and Bitcoin, have severely affected the trustworthiness of blockchain. In 2016, the ``DAO" (Decentralised Autonomous Organisation) attack in Ethereum was caused by flaws in smart contract code and resulted in the loss of over 60 million US dollars. It was remedied by conducting a hard fork to reverse the impacted transactions~\cite{DAOattack}. In Bitcoin, the debate over whether to increase the block size caused a split of the whole ecosystem~\cite{BitcoinSize}. After these events, the blockchain community started to explore more trustworthy governance processes for both on-chain and off-chain business.
Blockchain governance refers to the structures and processes that are designed to ensure the development and use of blockchain are compliant with legal regulations and ethical responsibilities~\cite{liu2021systematic}. It determines the allocation of decision rights, incentives, and accountability based on the blockchain decentralisation level, which further regulates stakeholders' behaviour throughout the whole blockchain development lifecycle, and the overall blockchain ecosystem. Blockchain governance can refer to existing governance frameworks (e.g., IT governance~\cite{weill2004governance, cobit2012business}, data governance~\cite{ballard2014ibm, ISO38505}), while further investigation is also necessary considering the absence of a clear source of authority in blockchain systems. In recent years, this research topic has attracted continuously increasing attention, including customised governance methods in permissioned blockchains~\cite{selected1, selected14}, regulations for blockchain-based decentralised finance (e.g., cryptocurrencies)~\cite{selected3, selected16}, etc.
Nevertheless, we found a lack of consideration of software architecture design in this area, which may hinder the design and implementation of blockchain with proper governance solutions, resulting in conflicts between stakeholders and failures of blockchain systems. In this regard, this paper presents a pattern-oriented reference architecture, which can serve as a guideline to assist system architects and developers in the development of governance-driven blockchain systems, with reusable patterns as architecture components.
The contributions of this paper are as follows:
\begin{itemize}
\item We propose a reference architecture to guide and facilitate the design and development of governance-driven blockchain systems. To the best of our knowledge, this is the first study of blockchain governance from the perspective of architecture design.
\item We associate multiple architectural patterns with the different components in the proposed reference architecture, to address the recurring governance-related issues in blockchain systems. The architectural patterns are gathered and analysed via a systematic literature review and further review of multiple blockchain systems.
\item We evaluate the correctness and utility of our proposed reference architecture, by mapping two blockchain system architectures on the proposed reference architecture.
\end{itemize}
The remainder of this paper is organised as follows. Section~\ref{sec:background} introduces background knowledge and related work. Section~\ref{sec:methodology} explains our research methodology. Section~\ref{sec:architecture} presents the overall reference architecture with annotated patterns. Section~\ref{sec:evaluation} evaluates our architecture. Section \ref{sec:conclusion} concludes the paper and outlines future work.
\section{Background and Related Work}
\label{sec:background}
\subsection{Blockchain}
Blockchain was popularised by Bitcoin~\cite{Satoshi:bitcoin} and the subsequent cryptocurrencies. The concept of blockchain was then generalised to distributed ledger technology, and considered an emerging paradigm for building next-generation applications in a decentralised way. In a software system, the blockchain component can provide two core elements: (i) a distributed ledger, and (ii) a decentralised ``compute" infrastructure.
Blockchain can verify and store digital transactions via the underlying distributed ledger, without relying on any central authority to establish trust between the interoperating entities~\cite{scheuermann2015iacr}. In permissionless blockchains, trust is preserved via game theoretic incentives to maintain a majority of honest nodes~\cite{Satoshi:bitcoin}. While in permissioned blockchains, trust is achieved through the compulsory identity verification of participating entities. On-chain transactions carry the changing states of data, and blocks are containers for storing transactions. Except for the genesis block, all the blocks are linked to the previous block and thus form a chain.
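To make the hash-linking concrete, the following minimal Python sketch shows how each block commits to its predecessor; the block fields and JSON serialisation are simplifications chosen for illustration, not any particular platform's format.
\begin{verbatim}
# Illustrative sketch of hash-linked blocks (fields invented for exposition).
import hashlib
import json

def block_hash(block):
    # Hash a canonical JSON serialisation of the block contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(prev_hash, transactions):
    return {"prev_hash": prev_hash, "transactions": transactions}

genesis = make_block("0" * 64, [])                  # genesis: no predecessor
block1 = make_block(block_hash(genesis), ["tx1"])   # commits to genesis

assert block1["prev_hash"] == block_hash(genesis)
genesis["transactions"].append("forged-tx")         # tampering...
assert block1["prev_hash"] != block_hash(genesis)   # ...breaks the link
\end{verbatim}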
Blockchain can be leveraged as a ``compute" infrastructure via the on-chain programmability (i.e., smart contracts). Smart contracts are user-defined programs that can be deployed and executed in a blockchain system to enable complex business logic such as triggers and conditions~\cite{Omohundro:2014}. For instance, Ethereum provides a built-in Turing-complete scripting language, Solidity, for developing smart contracts.
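As a rough illustration of such trigger-and-condition logic, the Python sketch below mimics a toy escrow-style contract; real smart contracts execute inside the blockchain runtime and persist their state on-chain, and the class and field names here are invented.
\begin{verbatim}
# Toy contract-style logic: a function is "triggered" by a transaction
# and only changes state when its conditions hold.
class EscrowContract:
    def __init__(self, seller, price):
        self.seller = seller
        self.price = price
        self.paid_by = None

    def pay(self, sender, amount):
        if self.paid_by is not None:      # condition: not yet paid
            raise ValueError("already paid")
        if amount != self.price:          # condition: exact price
            raise ValueError("wrong amount")
        self.paid_by = sender             # state change recorded on-chain

contract = EscrowContract(seller="alice", price=10)
contract.pay(sender="bob", amount=10)
assert contract.paid_by == "bob"
\end{verbatim}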
\subsection{Blockchain Governance}
Existing studies have analysed the governance frameworks for blockchain~\cite{selected11, selected14, hofman2021blockchain, liu2021defining}. Please note that in this study, the term ``blockchain governance" refers to ``governance of blockchain", where we focus on how governance fits in the development and use of blockchain.
Essentially, blockchain can be classified into three types (i.e., public, consortium, and private) to meet different requirements. The different types of blockchain reflect different levels of decentralisation, and affect the governance structure regarding the allocation of decision rights, accountability, and incentives. In a blockchain system, a transparent decision-making process helps stakeholders oversee whether decisions are reasonable, and hence gain the trust of all stakeholders. Accountability can be established via both institutional and technical means, to ensure the identifiability and answerability of stakeholders for their decisions. In blockchain governance, incentives are factors that may influence stakeholders' behaviour. The governance structure can provide incentives to motivate desirable behaviour and resolve conflicts between stakeholders.
In addition, blockchain governance should be realised throughout the overall ecosystem. For on-chain transactions, the governance emphasises accountable access control. The capabilities of sending, validating, and reading transactions are assigned to different stakeholders considering the selected blockchain type. Regarding the blockchain platforms, they need to undergo a series of formalised procedures to finalise improvement proposals. Further, blockchain-based applications need to comply with industry regulations and specifications, any changes may lead to upgrades of the underlying blockchain platform. For the off-chain community governance, the stakeholders are gradually divided into different groups regarding their roles and decision rights, e.g., stakeholders may have different communication channels for certain issues.
Furthermore, blockchain governance should ensure that the related decisions and processes conform to legal regulations and ethical responsibilities. Specifically, managing legal compliance relies on local and international policies regarding where to deploy a blockchain, while promoting ethical guidelines can preserve human values in blockchain governance.
In recent years, many researchers explore the topic of blockchain governance from diverse perspectives. For instance, Katina et al.~\cite{selected5} propose and analyse seven interrelated elements of blockchain governance, including philosophy, theory, axiology, methodology, axiomatic, method and applications. Beck et al.~\cite{selected14} adopt the three major dimensions of IT governance (i.e., decision rights, incentives, and accountability), and discuss their allocations in blockchain governance. Allen and Berg~\cite{selected7} focus on the exogenous and endogenous governance methods for blockchain platforms, while John and Pam~\cite{selected10} and Pelt et al.~\cite{selected11} both investigate this topic regarding on-chain and off-chain development processes. Hofman et al.~\cite{hofman2021blockchain} propose a high-level analytic framework for blockchain governance, covering six different aspects (i.e., why, who, when, what, where, and how). However, the existing studies only provide general discussion and high-level principles to realise governance, while practitioners require more detailed solutions to face the learning curve and facilitate the architecture design of governance-driven blockchain systems.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\columnwidth]{figures/methodology_RA.pdf}
\caption{Methodology.}
\label{fig:methodology}
\end{figure*}
\subsection{Reference Architecture}
A reference architecture can be regarded as ``a reference model mapped onto software elements (that cooperatively implement the functionality defined in the reference model) and the data flow between them. Whereas a reference model divides the functionality, a reference architecture is the mapping of that functionality onto a system decomposition"~\cite{bass2003software}. A reference architecture can support system development by addressing the inclusive business rules, architectural styles, best practices of software development, and software elements~\cite{RA_concept}. There are existing studies about blockchain reference model and reference architecture.
For instance, Yuan and Wang propose a 6-layer reference model of blockchain~\cite{FeiYue_RA}, while Ellervee et al.~\cite{ellervee2017comprehensive} present a blockchain reference model from the perspectives of actors, services, processes, and data models. Homoliak et al.~\cite{security_RA} present a security reference architecture for blockchain based on the blockchain network implementation stacks proposed by Wang et al.~\cite{BC_stack_model}. In addition, there are reference architectures for diverse blockchain-based applications, for instance, crowdsourcing~\cite{Crowdsourcing_RA}, healthcare~\cite{Healthcare_RA}, and government services~\cite{Government_RA}, etc. Nonetheless, we found a lack of consideration of software architecture design for blockchain governance. Hence, this study illustrates a pattern-oriented reference architecture for governance-driven blockchain systems.
\section{Methodology}
\label{sec:methodology}
This section introduces the methodology of this study. We adopted an empirically-grounded design methodology for reference architecture~\cite{RA_methodology}, and the overall research process is illustrated in Figure~\ref{fig:methodology}. There are six steps, and each step has an output to the following step.
The first step is to determine the type of our reference architecture. Based on Galster and Avgeriou's proposal~\cite{RA_methodology}, the reference architecture is an industry-crosscutting (\textit{usage context}), classical (\textit{when}), facilitation (\textit{why}) architecture to be implemented in multiple organisations (\textit{where}). Here, ``industry-crosscutting" means that the reference architecture can cover more than one industry, and ``classical" means that the reference architecture is developed based on existing blockchain systems. ``Facilitation" indicates that the reference architecture aims to provide guidance for the future design of blockchain systems, while ``multiple organisations" is determined by the decentralised nature of blockchain.
The second step is to select our design strategy. In this study, our design strategy is a combination of ``research-driven" and ``practice-driven". ``Research-driven" means that the design of this reference architecture is based on state-of-the-art research from a systematic literature review, while ``practice-driven" is also applicable since this study is based on the review results of multiple extant blockchain systems and summary of the best-practices for blockchain governance.
The third step is the empirical acquisition of data. This study adopts and extends several existing studies as data acquisition. Specifically, Liu et al. performed a systematic literature review, in which 37 primary studies were selected and analysed regarding six research questions~\cite{liu2021systematic}. The extracted results covered the definition, motivations, objects, process, stakeholders, and mechanisms of blockchain governance. Afterwards, the researchers reviewed the existing governance frameworks and standards (i.e., IT governance~\cite{weill2004governance, cobit2012business}, data governance~\cite{ballard2014ibm, ISO38505}, OSS governance~\cite{o2007emergence, de2007governance}, platform ecosystem governance~\cite{tiwana2010platform}), to understand the characteristics of blockchain governance. They also scrutinised the open websites and documents of five blockchain platforms (i.e., Bitcoin, Ethereum, Dash, Tezos, and Hyperledger Fabric), to understand how blockchain governance is implemented in a real-world context~\cite{liu2021defining}. In addition, they presented a pattern language for blockchain governance~\cite{pattern_collection}.
Based on the acquired data, in the next step, we constructed a reference architecture for governance-driven blockchain systems by integrating the architectural patterns into a widely-accepted reference model of blockchain~\cite{ISO23257}. Meanwhile, the variability of our reference architecture design in step five was enabled by annotating that applying different patterns can lead to the instantiation of various concrete architectures for blockchain systems.
In the final step, the evaluation of our proposed reference architecture was carried out by reviewing two other blockchain systems, Polkadot and Quorum. We map the architectural components of Polkadot and Quorum onto the proposed reference architecture, to show that the reference architecture can be transformed into meaningful concrete architectures. Further, the evaluation results can help refine the reference architecture.
\begin{figure*}[t]
\centering
\includegraphics[width=0.7\textwidth]{figures/reference_architecture.pdf}
\caption{A pattern-oriented reference architecture for governance-driven blockchain systems.}
\label{fig:architecture}
\end{figure*}
\section{Reference Architecture}
\label{sec:architecture}
In this section, we present a pattern-oriented reference architecture for governance-driven blockchain systems. Figure~\ref{fig:architecture} illustrates the overview of the architecture, which consists of: 1) infrastructure layer, 2) platform layer, 3) API layer, 4) user layer, and 5) cross-layer functions. Specifically, the platform layer and API layer should be implemented in each participating node, and all nodes in a blockchain system share the same infrastructure layer and cross-layer functions. Moreover, this figure includes non-blockchain systems and other blockchain systems to illustrate the interactions of the API layer. We apply a set of patterns as architectural components to realise governance in the reference architecture, which are annotated in the figure. In addition, we summarise these components in Table~\ref{tab:components}, to explain the applicable decentralisation level, type, and responsibility of each annotated component.
\begin{table*}[tbp]
\footnotesize
\centering
\caption{Pattern-oriented components in the reference architecture.}
\label{tab:components}
\begin{tabular}{p{0.14\columnwidth}p{0.15\columnwidth}p{0.075\columnwidth}p{0.585\columnwidth}}
\toprule
{\bf Component} &
{\bf Decentralisation level} &
{\bf Type} &
\multicolumn{1}{c}{\bf Responsibility}\\
\midrule
\multirow{2}{0.14\columnwidth}{Network freezer} & \multirow{2}{0.15\columnwidth}{Permissioned \& permissionless} & \multirow{2}{0.2\columnwidth}{Optional} & \multirow{2}{0.585\columnwidth}{Suspending blockchain transactions at the network level, to stop the broadcast of malicious transactions.}
\\
\cmidrule(l){1-4}
\multirow{3}{0.14\columnwidth}{Sharded chain} & \multirow{3}{0.15\columnwidth}{Permissioned \& permissionless} & \multirow{3}{0.2\columnwidth}{Optional} & \multirow{3}{0.585\columnwidth}{Splitting blockchain into multiple shards, where the data storage, computation, and communication are accordingly split, to improve the scalability of blockchain systems.}
\\ \\
\cmidrule(l){1-4}
\multirow{2}{0.14\columnwidth}{Incentive distributor} & \multirow{2}{0.15\columnwidth}{Permissioned \& permissionless} & \multirow{2}{0.2\columnwidth}{Optional} & \multirow{2}{0.585\columnwidth}{Providing on-chain tokens to drive the motivation and behaviour of stakeholders in decision-making process.}
\\
\cmidrule(l){1-4}
\multirow{2}{0.14\columnwidth}{Protocol upgrade} & \multirow{2}{0.15\columnwidth}{Permissioned \& permissionless} & \multirow{2}{0.2\columnwidth}{Mandatory} & \multirow{2}{0.585\columnwidth}{Implementing the software upgrades to a blockchain system.}
\\
\cmidrule(l){1-4}
\multirow{2}{0.14\columnwidth}{Data migrator} & \multirow{2}{0.15\columnwidth}{Permissioned \& permissionless} & \multirow{2}{0.2\columnwidth}{Optional} & \multirow{2}{0.585\columnwidth}{Migrating data from a source blockchain system to target blockchain system(s) based on data governance and management requirements.}
\\
\cmidrule(l){1-4}
\multirow{2}{0.14\columnwidth}{Participation permission} & \multirow{2}{0.15\columnwidth}{Permissioned} & \multirow{2}{0.2\columnwidth}{Optional} & \multirow{2}{0.585\columnwidth}{Managing the participation to a blockchain system, requiring real-world identity verification and the approval of authorities.}
\\
\cmidrule(l){1-4}
\multirow{2}{0.14\columnwidth}{Accountability tracer} & \multirow{2}{0.15\columnwidth}{Permissioned \& permissionless} & \multirow{2}{0.2\columnwidth}{Mandatory} & \multirow{2}{0.585\columnwidth}{Identifying the source of a blockchain transaction, to ensure the accountability of transaction senders.}
\\
\cmidrule(l){1-4}
\multirow{2}{0.14\columnwidth}{Benevolent dictator} & \multirow{2}{0.15\columnwidth}{Permissioned \& permissionless} & \multirow{2}{0.2\columnwidth}{Mandatory} & \multirow{2}{0.585\columnwidth}{Specific stakeholders possess additional decision rights for certain governance-related issues.}
\\
\cmidrule(l){1-4}
\multirow{2}{0.14\columnwidth}{Transaction filter} & \multirow{2}{0.15\columnwidth}{Permissioned \& permissionless} & \multirow{2}{0.2\columnwidth}{Optional} & \multirow{2}{0.585\columnwidth}{Examining submitted transactions to ensure the validity of transaction format/content, rejecting and discarding invalid transactions.}
\\
\cmidrule(l){1-4}
\multirow{2}{0.14\columnwidth}{Validator selection} & \multirow{2}{0.15\columnwidth}{Permissioned \& permissionless} & \multirow{2}{0.2\columnwidth}{Mandatory} & \multirow{2}{0.585\columnwidth}{Selecting the node operator who is eligible to validate and append the candidate block to the blockchain.}
\\
\cmidrule(l){1-4}
\multirow{2}{0.14\columnwidth}{Block finality decider} & \multirow{2}{0.15\columnwidth}{Permissioned \& permissionless} & \multirow{2}{0.2\columnwidth}{Optional} & \multirow{2}{0.585\columnwidth}{Waiting for a certain number of subsequent blocks to confirm that a previous block and its contained data is finalised and immutable.}
\\
\cmidrule(l){1-4}
\multirow{2}{0.14\columnwidth}{Log extractor} & \multirow{2}{0.15\columnwidth}{Permissioned \& permissionless} & \multirow{2}{0.2\columnwidth}{Optional} & \multirow{2}{0.585\columnwidth}{Extracting logged event information from the blockchain system for further analysis and audit.}
\\
\cmidrule(l){1-4}
\multirow{2}{0.14\columnwidth}{Contract freezer} & \multirow{2}{0.15\columnwidth}{Permissioned \& permissionless} & \multirow{2}{0.2\columnwidth}{Optional} & \multirow{2}{0.585\columnwidth}{Suspending all the operations to a particular smart contract.}
\\
\cmidrule(l){1-4}
\multirow{2}{0.14\columnwidth}{Social contract} & \multirow{2}{0.15\columnwidth}{Permissioned \& permissionless} & \multirow{2}{0.2\columnwidth}{Optional} & \multirow{2}{0.585\columnwidth}{Specifying the future maintainer or qualification of maintainers for a blockchain system.}
\\
\cmidrule(l){1-4}
Scam list & Permissionless & Optional & Listing the malicious blockchain addresses to warn all stakeholders of risky interactions.
\\
\cmidrule(l){1-4}
\multirow{2}{0.14\columnwidth}{Token locker} & \multirow{2}{0.15\columnwidth}{Permissionless} & \multirow{2}{0.2\columnwidth}{Optional} &
Locking a certain amount of on-chain tokens for a specified time period, to restrict the token holder's behaviour in a decision-making process.
\\
\cmidrule(l){1-4}
\multirow{2}{0.14\columnwidth}{Carbonvote} & \multirow{2}{0.15\columnwidth}{Permissionless} & \multirow{2}{0.2\columnwidth}{Optional} & Counting votes for improvement proposals according to the tokens held by blockchain addresses, to prevent Sybil attack.
\\
\cmidrule(l){1-4}
\multirow{2}{0.14\columnwidth}{Quadratic voting} & \multirow{2}{0.15\columnwidth}{Permissionless} & \multirow{2}{0.2\columnwidth}{Optional} & Consuming $n^{2}$ number of tokens when a blockchain address submits $n$ votes for an improvement proposal, to capture the preference of stakeholders' decisions.
\\
\cmidrule(l){1-4}
Cross-chain token voting & \multirow{2}{0.15\columnwidth}{Permissionless} & \multirow{2}{0.2\columnwidth}{Optional} & Issuing tokens and holding votes in other blockchain systems, for the improvement proposals in the original blockchain system.
\\
\cmidrule(l){1-4}
\multirow{2}{0.14\columnwidth}{Liquid democracy} & \multirow{2}{0.15\columnwidth}{Permissionless} & \multirow{2}{0.2\columnwidth}{Optional} & Delegating the decision rights and revoking the delegation for improvement proposals to/from other stakeholders.
\\
\bottomrule
\end{tabular}
\end{table*}
\subsection{Infrastructure Layer}
First, the infrastructure layer of a blockchain system consists of the functional components for building a fundamental operating environment, including \textit{data storage}, \textit{communication network}, and \textit{computation}. For \textit{data storage}, the blockchain system architects or developers need to determine the physical and logical location of on-chain data, and corresponding CRUD (i.e., create, read, update and delete) operations. \textit{Communication network} includes the peer-to-peer network for blockchain nodes, connection between stakeholders and the blockchain system, and interactions between a blockchain system with other systems. In the \textit{communication network}, \textit{\textbf{network freezer}} can be leveraged by the system administrators or governors to suspend all on-chain business. This pattern can disconnect the nodes or block data traffic in a short time, to avoid the negative impact in an emergency situation. A frozen blockchain system requires human interventions for reactivation. \textit{Computation} enables the runtime environment for each node, and on-chain programmability with complex business logic. In this layer, the structure of blockchain may influence the other three main components. Specifically, a blockchain can have either a single or multiple shards. A \textit{\textbf{sharded chain}} refers to that a blockchain is partitioned into different shards, consequently, the data storage, communication network, and computation are accordingly split regarding the shards. Blockchain nodes only need to process the transactions in their own shards. Compared with single-shard blockchains, a \textit{\textbf{sharded chain}} is more scalable and has better performance.
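The following Python sketch illustrates the two patterns above under simplifying assumptions of our own: four shards with hash-based shard assignment, and freezing modelled as a flag checked before a transaction is broadcast.
\begin{verbatim}
# Sketch of the sharded chain and network freezer patterns.
import hashlib

N_SHARDS = 4
frozen = False   # set to True by administrators/governors to freeze

def shard_of(address):
    # Hash-based shard assignment: a node only processes its own shard.
    return hashlib.sha256(address.encode()).digest()[0] % N_SHARDS

def broadcast(sender, tx):
    if frozen:
        # Network freezer: refuse to broadcast while suspended.
        raise RuntimeError("network frozen: transaction rejected")
    return shard_of(sender)   # route the transaction to one shard

print(broadcast("0xabc", "tx1"))   # routed to one of the 4 shards
frozen = True
# broadcast("0xabc", "tx2") would now raise RuntimeError
\end{verbatim}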
\subsection{Platform Layer}
The core services and features of a blockchain system are implemented and embodied in the platform layer. The main components in this layer include \textit{cryptographic services}, \textit{on-chain ledger}, \textit{membership services}, \textit{transaction system}, \textit{consensus mechanism}, \textit{node communication}, \textit{event management}, \textit{runtime environment}, and \textit{smart contract}.
A blockchain system incorporates a series of \textit{cryptographic services} to preserve data confidentiality and integrity. For instance, \textit{hash functions} can map arbitrary-size data to fixed-size data; the \textit{Merkle tree} discussed below is built on \textit{hash functions}. \textit{Encryption algorithms} can generate and decrypt ciphertext via secret keys. \textit{Zero-knowledge proof} can preserve privacy in verification issues by only confirming that an entity knows or possesses certain data without revealing the actual data.
Essentially, the \textit{on-chain ledger} is the implementation of \textit{data storage} in the infrastructure layer. Each participating node maintains a local replica of the blockchain ledger to preserve data integrity, availability, and consistency. A full node keeps all historical transaction information while a light node can only store the block headers, but a light node needs to rely on full nodes for data enquiry. To align the distinct objectives of stakeholders, especially the nodes, the \textit{\textbf{incentive distributor}} rewards tokens (i.e., programmable digital assets) to stakeholders who obey the codified rules and contribute to the operation of a blockchain system. More generally, the \textit{\textbf{incentive distributor}} can drive stakeholders' motivation and behaviour in a decision-making process. In addition, new functionalities of the blockchain system are implemented via \textit{\textbf{protocol upgrade}}, which may affect the records of on-chain ledger if forking is required. Please note that there are two types of forking: i) backward-compatible upgrades as soft forks, and ii) backward-incompatible upgrades as hard forks. For on-chain ledger data, a \textit{\textbf{data migrator}} is responsible for interacting with external migration tools, which can help migrate the ledger data from a source blockchain system to a target blockchain system, enabling more comprehensive on-chain data management and governance services. Migrating on-chain data can be realised via multiple ways~\cite{data_migration}, for instance, generating a snapshot of the source blockchain (including the entire states, smart contracts, and transactions), cloning a node from the source blockchain system, etc.
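The distinction between the two kinds of forking can be made precise with a small Python sketch: an upgrade is backward-compatible (a soft fork) when every block valid under the new rules is still valid under the old rules. The size limits below are invented for illustration.
\begin{verbatim}
# Soft fork: the new rule tightens the old one; hard fork: it loosens it.
def old_rules(block):
    return block["size"] <= 2_000_000

def new_soft(block):           # stricter limit: backward-compatible
    return block["size"] <= 1_000_000

def new_hard(block):           # looser limit: old nodes reject some blocks
    return block["size"] <= 4_000_000

blocks = [{"size": s} for s in (500_000, 1_500_000, 3_000_000)]
soft_ok = all(old_rules(b) for b in blocks if new_soft(b))   # True
hard_ok = all(old_rules(b) for b in blocks if new_hard(b))   # False
print(soft_ok, hard_ok)
\end{verbatim}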
\textit{Membership services} implemented in blockchain systems are related to the decentralisation level of the deployed blockchain. Specifically, \textit{\textbf{participation permission}} refers to the identity verification of stakeholders, and the approval of authorities (e.g., the system administrator), in permissioned blockchain systems. \textit{\textbf{Accountability tracer}} is enabled by the digital signature of transaction senders. Every transaction needs the sender's signature, which is generated via two steps: i) hashing the original data, and ii) encrypting the hash value with the sender's private key. Transactions need to be verified by block validators before they are officially recorded. If a transaction carries malicious or altered information, the decrypted hash value can be used to detect that the transaction data was modified during transmission, and the signature ensures the traceability and identifiability of the transaction sender. Please note that in permissioned blockchain systems, accountability can be realised in terms of identifying the real-world stakeholders, while in permissionless blockchain systems, only accountable blockchain addresses can be located due to the inherent anonymity. In permissionless blockchain systems, a \textit{\textbf{benevolent dictator}} arrangement exists throughout the blockchain development and operation stages, referring to stakeholders who have more decision rights than others. For instance, stakeholders may trust the decisions of core developers, who are considered benevolent dictators based on their technical meritocracy and the collective benefits of their update decisions.
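The two-step signature can be illustrated with textbook RSA over tiny primes; this is for exposition only and is not secure, and production systems use standardised signature schemes instead.
\begin{verbatim}
# Toy hash-then-sign: hash the transaction, "encrypt" the hash with the
# sender's private key, and let validators check it with the public key.
import hashlib

n, e, d = 3233, 17, 2753   # n = 61 * 53; e*d = 1 mod 3120 (textbook RSA)

def tx_digest(tx):
    # Step i): hash the original data (reduced mod n for this toy).
    return int.from_bytes(hashlib.sha256(tx).digest(), "big") % n

def sign(tx):
    # Step ii): "encrypt" the hash with the private exponent d.
    return pow(tx_digest(tx), d, n)

def verify(tx, signature):
    # Decrypting with e must reproduce the hash: integrity + identity.
    return pow(signature, e, n) == tx_digest(tx)

tx = b"pay 5 tokens to 0xabc"
sig = sign(tx)
assert verify(tx, sig)
assert not verify(b"pay 500 tokens to 0xabc", sig)   # tampering detected
\end{verbatim}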
The \textit{transaction system} is closely connected to the \textit{on-chain ledger}. This component can be regarded as the data entry of blockchain. All varying data states on blockchain are carried by the transactions. When a transaction is generated and sent to the blockchain system, a deployed \textit{\textbf{transaction filter}} can examine whether the submitted transactions meet the format or content requirements predefined by the blockchain project team or administrators, to avoid unauthorised or harmful information being fed to blockchain. The valid transactions are temporarily collected in the \textit{transaction pool}, which is a local memory in each node. Nodes can select transactions from the pool and generate candidate blocks, in which the transaction information is compressed in the form of \textit{Merkle tree}. A \textit{Merkle tree} is created via hashing the transactions, and then iteratively summarising and hashing the hash values until a Merkle root is generated. This data structure can preserve data integrity, facilitate the verification of historical transactions, and save the local space of nodes.
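A minimal Python sketch of the Merkle-root computation follows, assuming one common convention of duplicating the last node when a level has an odd number of entries.
\begin{verbatim}
# Compress a list of transactions into a single Merkle root.
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(transactions):
    level = [h(tx.encode()) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2 == 1:        # duplicate the last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root(["tx1", "tx2", "tx3", "tx4"])
# A block header committing to the root commits to every transaction:
assert root != merkle_root(["tx1", "tx2", "tx3", "tx5"])
\end{verbatim}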
Blockchain in practice is a distributed ledger technology where each participating node holds a local replica of the whole ledger contents. A critical issue is how the multiple nodes can agree on the state of the blockchain. Conflicts about the blockchain state will impact the security and availability of the overall platform: i) the ledger contents may be compromised if they are not synchronised across the nodes, as attackers can easily modify historical transactions and claim theirs to be the valid version; ii) requests for the same data at the same time must receive identical responses. To address the above problems, \textit{consensus mechanisms} are leveraged as a governance method to align the agreement of different nodes. When node operators join a blockchain system, they should all synchronise the ledger contents. During blockchain operation, each node collects pending transactions and generates a candidate block. \textit{\textbf{Validator selection}} selects a block validator each round, who is allowed to append its block to the blockchain, while all other nodes need to synchronise this block in their local replicas. The block validator can be selected according to different criteria, which are implemented as diverse consensus mechanisms (e.g., computation capability in Proof-of-Work, possessed stakes in Proof-of-Stake, appointment by the system administrator in Proof-of-Authority, etc.). \textit{\textbf{Block finality decider}} reinforces the immutability of blocks and their contained transactions: after a block is appended to the blockchain, a certain number of subsequent blocks serve as confirmations ensuring that the previous block is recorded and finalised with high probability.
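As an illustration of \textit{\textbf{validator selection}}, the following Python sketch draws a validator with probability proportional to stake, in the style of Proof-of-Stake; the node names, stakes, and seed handling are invented, and real protocols derive the seed from on-chain randomness.
\begin{verbatim}
# Stake-weighted validator selection: deterministic given a shared seed,
# so all honest nodes agree on the winner of each round.
import random

stakes = {"node-a": 50, "node-b": 30, "node-c": 20}

def select_validator(round_seed):
    rng = random.Random(round_seed)            # shared, deterministic seed
    nodes, weights = zip(*stakes.items())
    return rng.choices(nodes, weights=weights, k=1)[0]

assert select_validator(7) == select_validator(7)   # nodes agree
print(select_validator(7))
\end{verbatim}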
In a blockchain system, each node needs to keep listening to the network to collect broadcast transactions and synchronise appended blocks, which is accomplished via \textit{peer discovery and management} and \textit{overlay P2P protocols}. These two components compose the \textit{node communication} component. The \textit{state management} component can be exploited to update the on-chain digital assets (e.g., tokens) based on the new transactions, while the \textit{event management} component logs the required information when a transaction triggers particular event(s). Stakeholders can analyse the logged information via \textit{\textbf{log extractor}}. In addition, \textit{runtime environment} supports the execution of smart contracts.
\textit{Smart contract} denotes the programs run on a blockchain, which enable the decentralised applications built on a blockchain system. A smart contract function is triggered by transactions, and the outputs can be stored on-chain. Smart contracts can be codified with various algorithms to provide different functionalities, or access control mechanisms to restrict users' behaviour~\cite{xu2018pattern}. Developers can implement a \textit{\textbf{contract freezer}} in smart contracts, and define the stakeholders who are eligible to trigger the freezer. \textit{\textbf{Contract freezer}} can pause or terminate all operations to a smart contract when an attack on this contract is detected. In addition, the blockchain project team or system administrators can deploy a special kind of smart contract, a \textit{\textbf{social contract}}, to announce the future maintainers, or the qualification of future maintainers, for a blockchain system. If malicious operations are identified, the related blockchain addresses, of both stakeholders and smart contracts, can be recorded and listed in the \textit{\textbf{scam list}}, which is also in the form of a smart contract. Other stakeholders can stop interacting with these scams by referring to the list. \textit{\textbf{Token locker}} can be deployed in permissionless blockchain systems, and grants stakeholders the decision rights for specific governance issues (e.g., the approval of improvement proposals) when they lock a particular number of tokens for a certain time period as a security deposit. If stakeholders do not obey the rules during a decision-making process, the decision rights will be revoked, and the locked tokens may be destroyed. In addition, a series of patterns for voting can be implemented via smart contracts to resolve conflicts and reach consensus. For instance, to update a blockchain system, stakeholders can submit, broadcast, and discuss improvement proposals via off-chain means, while the final decisions are usually made by voting. In \textit{\textbf{carbonvote}}, the votes are counted in terms of the number of tokens possessed by stakeholders, to prevent the Sybil attack where a stakeholder registers multiple blockchain addresses to compromise the vote. \textit{\textbf{Quadratic voting}} can express stakeholders' preferences when finalising improvement proposals. In this voting scheme, voting for improvement proposals consumes tokens as funds, and preference is indicated via the quadratic growth of consumed tokens: submitting $n$ votes costs $n^2$ tokens. \textit{\textbf{Cross-chain token voting}} requires interactions between blockchain systems. The blockchain project team or system administrator needs to issue tokens and deploy smart contracts for voting in other blockchain systems, and the token holders are eligible to vote for improvement proposals in the original blockchain system. \textit{\textbf{Liquid democracy}} allows stakeholders to delegate their decision rights to other trusted stakeholders, and to revoke the delegation at any time. Finally, all approved improvement proposals are implemented via \textit{\textbf{protocol upgrade}}.
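To make two of these voting patterns concrete, the following Python sketch implements the tallying rule of \textit{\textbf{carbonvote}} and the cost rule of \textit{\textbf{quadratic voting}}; the balances and addresses are invented for illustration.
\begin{verbatim}
# Carbonvote: weight votes by token balance (Sybil addresses add nothing).
# Quadratic voting: submitting n votes consumes n**2 tokens.
balances = {"0xa1": 100, "0xa2": 40, "0xa3": 5}

def carbonvote(yes, no):
    yes_weight = sum(balances[a] for a in yes)
    no_weight = sum(balances[a] for a in no)
    return yes_weight > no_weight

def quadratic_cost(votes):
    return votes ** 2

assert carbonvote(yes={"0xa1"}, no={"0xa2", "0xa3"})   # 100 vs 45
assert quadratic_cost(3) == 9                          # 3 votes cost 9 tokens
\end{verbatim}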
\subsection{API Layer, User Layer, and Other Systems}
The API layer consists of four main types of functions for a blockchain node to provide services to different applications and systems. Specifically, \textit{admin API} and \textit{user API} can provide access to the platform layer components for system administrators and users via \textit{admin application} and \textit{user application} in the user layer respectively. The \textit{external interface} can connect a blockchain system to non-blockchain systems such as oracles, and off-chain databases, while the \textit{inter-system interface} is for the communication between nodes in different blockchain systems.
\subsection{Cross-Layer Functions}
Cross-layer functions include \textit{governance and compliance}, \textit{development}, \textit{management and operation}, and \textit{security}, to provide auxiliary services to components in all other layers. In particular, \textit{governance and compliance} highlights the allocation of decision rights, incentives, and accountability within a blockchain system. This component specifies the high-level guidelines of how a blockchain system is operated and maintained, to meet the legal regulations, industry specifications, and broader ethical requirements, while other cross-layer functions need to refer to its guidance. The \textit{development} component is exclusively for developers' activities, for instance, codifying and testing the updated rules, building software packages, etc. This component needs to record developers' contributions for further incentive distribution and accountability process. \textit{Management and operation} can be regarded as the execution of \textit{governance and compliance} by system administrators, who need to monitor, manage, and control the blockchain system to ensure normal operation and risk management. The \textit{security} component can preserve data confidentiality, integrity and availability in a blockchain system, via predefined decision rights and fine-grained access control of certain stakeholders (e.g., node operators) to restrict their behaviour, positive or negative incentives to drive their motivations, and identity verification and management to establish a complete accountability process.
\section{Evaluation}
\label{sec:evaluation}
In this section, we evaluate the correctness and utility of our proposed reference architecture by mapping existing blockchain system architectures onto the reference architecture. Since our reference architecture is adapted from a widely-accepted reference model, the evaluation focuses on the validation of the pattern-oriented architecture components for governance. We selected two blockchain systems: Polkadot\footnote{https://polkadot.network/} and Quorum\footnote{https://consensys.net/quorum/}. Polkadot maintains a permissionless multi-chain ecosystem well-known for its cross-chain interoperability, while Quorum provides permissioned blockchain systems for enterprises and individuals. We collected and scrutinised the available documents provided by these two blockchain systems, and mapped the pattern-oriented components onto the reference architecture. Figures~\ref{fig:polkadot} and~\ref{fig:quorum} demonstrate the simplified architecture mappings of Polkadot and Quorum respectively, and Table~\ref{tab:comparison} presents an intuitive comparison of these two blockchain systems regarding the use of components for governance.
\begin{table*}[tbp]
\footnotesize
\centering
\caption{Comparison of the use of pattern-oriented components in Polkadot and Quorum architectures.}
\label{tab:comparison}
\begin{tabular}{p{0.07\columnwidth}p{0.4\columnwidth}p{0.4\columnwidth}}
\toprule
{\bf Component} &
\multicolumn{1}{c}{\bf Polkadot} &
\multicolumn{1}{c}{\bf Quorum} \\
\midrule
\multirow{2}{0.07\columnwidth}{Network freezer} & Block validators can vote to suspend the validation system of a certain parachain. & System administrators can suspend the operations of blockchain nodes or accounts to freeze the system. \\
\cmidrule(l){1-3}
\multirow{2}{0.07\columnwidth}{Sharded chain} & Parachains are operated as shards, while the relay chain is regarded as the coordinator between different parachains. & \multirow{2}{0.4\columnwidth}{N/A} \\
\cmidrule(l){1-3}
\multirow{2}{0.07\columnwidth}{Incentive distributor} & Stakeholders are rewarded for their contributions to Polkadot operation. & Incentives are inactivated by default, and can be triggered during deployment. \\
\cmidrule(l){1-3}
\multirow{2}{0.07\columnwidth}{Protocol upgrade} & \multirow{2}{0.4\columnwidth}{Polkadot can upgrade the on-chain protocol without forking.} & All participants need to approve the upgrade, and update the configuration file. \\
\cmidrule(l){1-3}
\multirow{2}{0.07\columnwidth}{Data migrator} & Polkadot can interact with external blockchain systems via cross-consensus messages. & \multirow{2}{0.4\columnwidth}{N/A} \\
\cmidrule(l){1-3}
\multirow{2}{0.07\columnwidth}{Participation permission} & \multirow{2}{0.4\columnwidth}{N/A} & The deployer of a Quorum system needs to directly send invitations to other participants. \\
\cmidrule(l){1-3}
Accountability tracer & Polkadot participants are identified via their on-chain addresses. & The accountability includes stakeholders' real-world identities based on \textbf{\textit{participation permission}}. \\
\cmidrule(l){1-3}
\multirow{2}{0.07\columnwidth}{Benevolent dictator} & The council and technical committee have additional decision rights for improvement proposals. & The Quorum project team can provide additional support to a Quorum system. \\
\cmidrule(l){1-3}
\multirow{2}{0.07\columnwidth}{Transaction filter} & Polkadot defines a universal transaction format across the system. & Quorum offers different transaction types; transactions not meeting specific type requirements cannot be executed. \\
\cmidrule(l){1-3}
\multirow{2}{0.07\columnwidth}{Validator selection} & Block validators are selected according to the staked tokens of candidates and nominators. & Block validators are determined and assigned by the system administrators. \\
\cmidrule(l){1-3}
\multirow{2}{0.07\columnwidth}{Block finality decider} & Block validators vote to decide the valid chain, where the blocks are finalised. & In Quorum, certain consensus protocols can achieve immediate finality after a new block is appended to the blockchain. \\
\cmidrule(l){1-3}
\multirow{2}{0.07\columnwidth}{Log extractor} & \multirow{2}{0.4\columnwidth}{N/A} & System administrators can monitor and analyse all activities within a Quorum system. \\
\cmidrule(l){1-3}
\multirow{2}{0.07\columnwidth}{Token locker} & Acquiring certain decision rights requires the locking of Polkadot tokens, e.g., becoming a block validator. & \multirow{2}{0.4\columnwidth}{N/A} \\
\cmidrule(l){1-3}
\multirow{2}{0.07\columnwidth}{Carbonvote} & Votes are counted regarding the number and locking period of Polkadot tokens. & \multirow{2}{0.4\columnwidth}{N/A} \\
\bottomrule
\end{tabular}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[width=\columnwidth]{figures/polkadot.pdf}
\caption{Architecture mapping of Polkadot.}
\label{fig:polkadot}
\end{figure*}
\subsection{Architecture mapping of Polkadot}
Polkadot consists of multiple parachains which can process transactions independently, and a relay chain to collect and confirm all blocks generated by each parachain and enable communication between them. In general, Polkadot can be regarded as a replicated sharded state machine (\textit{\textbf{sharded chain}}) where all parachains operate as shards, and the relay chain is responsible for preserving consensus among all shards~\cite{Polkadot_design}. While inter-shard communication is facilitated by the relay chain, \textit{\textbf{data migrator}} is realised via the cross-consensus message format and protocols, through which Polkadot can send, receive and process data to/from external blockchain systems~\cite{Polkadot_xcm}.
Polkadot implements \textbf{\textit{incentive distributor}} via inherent token issuance. Stakeholders contributing to the relay chain and parachain operation are rewarded with tokens, e.g., validators and nominators can obtain tokens according to their staked tokens after block inclusion in the relay chain, and fishermen can get rewards by reporting illegal actions in parachains~\cite{Polkadot_design}. In Polkadot, transaction fees are split into two parts: one fraction is paid to the validator, while the other fraction is saved to support the implementation of future improvement proposals. Meanwhile, \textbf{\textit{token locker}} is enforced in different activities in Polkadot to assign certain decision rights to stakeholders while also restraining their behaviour~\cite{wood2016polkadot, Polkadot_design}. For instance, the selection of block validators, the auction of parachain slots, and the voting on improvement proposals all require stakeholders to deposit a certain number of tokens during the event. Any malicious operations may cause the loss of Polkadot tokens.
In Polkadot, all on-chain \textit{\textbf{protocol upgrades}} need to undergo a referendum before implementation. A variant of \textbf{\textit{carbonvote}} is applied in Polkadot: votes are counted in terms of both the number of staked tokens and the staking period~\cite{Polkadot_design}. A voluntary extended locking of tokens increases the voting power of stakeholders, since long-term locking expresses the strength of stakeholders' preferences to some extent. Accepted proposals are implemented by upgrading Polkadot's WebAssembly execution host without the need for forking~\cite{Polkadot_upgrade}.
Polkadot employs the Nominated Proof-of-Stake consensus mechanism, where \textbf{\textit{validator selection}} is according to the staked tokens of validator candidates themselves or their nominators~\cite{Polkadot_design}. Finally, a set of validators is selected and randomly assigned to each parachain at the beginning of every era (i.e., roughly one day). In each parachain, collators are responsible for the collection and execution of transactions, and generate blocks for the assigned validators. Note that Polkadot leverages \textbf{\textit{transaction filter}} by defining a universal transaction format~\cite{Polkadot_transaction}. Validators need to examine the parachain blocks, while a relay chain block is produced via the Blind Assignment for Blockchain Extension protocol~\cite{Polkadot_babe}. For each parachain, the \textbf{\textit{block finality decider}} is the relay chain block, which means that a parachain block is finalised when it is included in a relay chain block. Meanwhile, the \textbf{\textit{block finality decider}} for Polkadot's relay chain is the GRANDPA protocol~\cite{Polkadot_grandpa}, in which validators need to vote for the longest relay chain. When more than two thirds of the validators affirm the same chain containing a particular block, this block and all its predecessors are finalised.
In Polkadot, the council and technical committee can be considered the \textbf{\textit{benevolent dictator}}, who have the rights to trigger fast-tracked referenda, or cancel an improvement proposal or referendum via internal voting. Within Polkadot, the \textit{\textbf{accountability tracer}} is realised via participants' on-chain addresses, and a common penalty is to destroy the staked tokens of malicious participants. Finally, validators can vote to activate the \textbf{\textit{network freezer}} of a parachain to suspend its validation system, and the recovery can be decided via either a validator-voting or referendum.
\subsection{Architecture mapping of Quorum}
Blockchain application providers can deploy Quorum blockchain systems via Blockchain as a service. Basically, Quorum blockchain systems can be considered permissioned Ethereum systems for enterprises or individuals. Consequently, \textbf{\textit{participation permission}} is realised in that the deployer (i.e., the blockchain application provider) can directly send invitations to other participants, and only the entities with a valid invitation code can join a particular Quorum blockchain system~\cite{Quorum_Blockchain_Service}. In terms of \textbf{\textit{validator selection}}, Quorum supports Proof-of-Authority, where the block validators are defined and managed by the deployer. Specifically, Quorum supports alternative consensus protocols, including QBFT, IBFT, Raft, and Clique~\cite{Go_Quorum}. It is noted that the \textit{\textbf{block finality decider}} may differ according to the employed consensus protocol. In particular, QBFT, IBFT, and Raft can achieve immediate finality when a new block is appended, whilst forks might occur when Clique is selected.
\begin{figure}[t]
\centering
\includegraphics[width=0.39\columnwidth]{figures/quorum.pdf}
\caption{Architecture mapping of Quorum.}
\label{fig:quorum}
\end{figure}
In addition to the deployer, Consensys (i.e., the project team of Quorum) is also regarded as a \textbf{\textit{benevolent dictator}}: in the circumstance that the deployer or appointed administrator leaves the Quorum system without appointing a new administrator, Consensys can provide support based on the deployment agreement~\cite{Quorum_Blockchain_Service}. Quorum does not implement inherent token issuance or an \textbf{\textit{incentive distributor}}, since the stakeholders of a permissioned blockchain system need to adhere to an off-chain organisational hierarchy or business agreements in which incentives and decision rights are ascertained. However, the deployer can allocate Ether tokens in a Quorum system if needed~\cite{Go_Quorum}.
In a Quorum system, all activities are recorded and can be used for further analysis and audit via the \textbf{\textit{log extractor}}~\cite{Go_Quorum}. The \textit{\textbf{accountability tracer}} is enabled by the generated blockchain account and a public and private key pair~\cite{Quorum_Blockchain_Service}. Considering the above-mentioned \textbf{\textit{participation permission}}, accountability in Quorum can be extended to stakeholders' real-world identities. \textbf{\textit{Protocol upgrade}} in a Quorum blockchain system requires the approval of all participants, followed by updating the configuration file and restarting the nodes~\cite{Go_Quorum, Besu_upgrade}. Quorum realises \textbf{\textit{network freezer}} by managing node and account permissioning~\cite{EEA_spe}: suspending the operations of all accounts freezes a Quorum system. Further, Quorum can specify the types of transactions that a blockchain account is permitted to send~\cite{EEA_spe}; \textbf{\textit{transaction filter}} is deployed to examine the transaction type.
\subsection{Discussion}
From the evaluation results, it can be seen that our reference architecture is correct and usable, as both blockchain system architectures can be mapped onto the proposed reference architecture. Comparing Polkadot and Quorum, it is observed that permissionless blockchain systems may implement additional components for voting. The decision rights are allocated to all stakeholders to engage their participation and increase their trust in system upgrades, since the results are finalised according to their own choices. In contrast, permissioned blockchain systems like Quorum tend to centralise the decision rights in the system deployer/administrator, and highlight the permissioning of stakeholders to replicate real-world positions.
We noticed that several patterns are applied in the off-chain environment, instead of as blockchain architectural components. For instance, Polkadot provides a \textbf{\textit{scam list}} introducing several common types of scams to raise stakeholders' awareness~\cite{Polkadot_scam}. Quorum posts an official press release claiming that Consensys acquired Quorum from J.P. Morgan~\cite{Quorum_release}, which can be regarded as an off-chain \textbf{\textit{social contract}}. Besides, since Quorum deploys Ethereum blockchain instances, \textbf{\textit{contract freezer}} is supported by the internal Ethereum virtual machine by default~\cite{pattern_collection}. In addition, we observed more novel governance mechanisms in the evaluation process. For example, Polkadot proposes a concept of \textit{``adaptive turnout biasing''}, where the threshold in a voting process is related to the turnout rate~\cite{Polkadot_design}. Finally, we remark that the proposed reference architecture is adaptive, so that future research can explore more governance patterns and integrate them into this reference architecture.
\section{Conclusion}
\label{sec:conclusion}
Governance is a significant factor throughout the lifecycle of a blockchain system, ensuring normal operation and continuous evolution. Nevertheless, most existing studies only provide high-level guidelines, and there is a lack of consideration from the perspective of architecture design. In this article, we presented a reference architecture to help architects operationalise governance approaches in the future design and development of governance-driven blockchain systems. Specifically, we adopt a widely-accepted blockchain reference model, and apply a set of architectural patterns for governance to its components. We explain the responsibility of each component, and evaluate the correctness and utility of our proposed reference architecture by mapping two existing blockchain system architectures onto it. In future work, we plan to develop decision models for governance-driven blockchain system design.
\input{main.bbl}
\end{document}
\section{Introduction}\label{sec:intro}
Locating the thresholds for various Ramsey properties of random structures has been of prime interest of late. After \L{}uczak, Ruci\'nski and Voigt~\cite{LRV91} launched the systematic study of these thresholds a great number of results followed. In a series of papers, R\"odl and Ruci\'nski~\cite{RR94,RR93,RR95} established a version of Ramsey's theorem\footnote{Ramsey's theorem asserts that for fixed positive integers $r$, $k$ and $\ell$ any $r$-colouring of $\binom{[n]}{k}$ with $n$ sufficiently large yields an $\ell$-element set $S\subset [n]$ with all sets from $\binom{S}{k}$ being of the same colour.}~\cite{Ram30} in random graphs, often referred to as the {\em symmetric random Ramsey theorem}; the term `symmetric' indicates that the (hyper)graph sought monochromatically is the same across all colours, while we use the term {\em asymmetric} if the configurations assigned to the colours may differ. Ramsey properties of random hypergraphs were pursued in~\cite{CG16,FRS10,GNPSST17,NPSS17,NS16,RR98,RRS07}; asymmetric Ramsey properties of random graphs and hypergraphs were studied in~\cite{GNPSST17,KSS14,MSSS09,MNS18}.
Ramsey theory also houses numerous problems seeking monochromatic configurations in the set of integers $[n]:=\{1,\ldots,n\}$; to name a few, one encounters Schur's theorem\footnote{Schur's theorem asserts that in any finite colouring of $\mbb{N}$ there is always a monochromatic additive triple $(a,b,c)$ with $a+b=c$.}~\cite{Schur} and van der Waerden's theorem\footnote{van der Waerden's theorem asserts that any finite colouring of $\mbb{N}$ contains a monochromatic progression of any fixed length.}~\cite{vdW27}. The reader can consult the book~\cite{GRS90} by Graham, Rothschild and Spencer for further such examples; in particular, in what follows we devote much attention to a theorem by Rado that generalises the last two theorems. Such ``Ramsey on the integers''-type problems were explored in the random setting as well~\cite{CG16,FRS10,GRR96,HST19,RR97,Sp17}, and in fact for some of these problems sharp thresholds are known~\cite{FHPS16,FK00,FRRT06,SchSch18}.
An $\ell \times k$ matrix $A$ with integer entries is said to be {\em partition-regular} if any finite colouring of $\mbb{N}$ admits a monochromatic solution to the homogeneous matrical equation $A x = 0$. The matrical equation of Schur's theorem is the simple equation $x_1+x_2-x_3=0$; for van der Waerden's theorem the system of linear equations consists of $x_1-2x_2+x_3=0$, $x_2-2x_3+x_4=0$,\ldots, $x_{k-2}-2x_{k-1}+x_k=0$.
The characterisation of all partition-regular matrices is a classical result by Rado~\cite{Rado}, who showed that such matrices are captured through the so called {\em columns condition} (see, e.g.,~\cite[Chapter~3]{GRS90} for details). We would be remiss if we were not to remark that the matrix associated with van der Waerden's theorem is an example of what is commonly referred to as a {\em density-regular} system.
A partition-regular matrix $A$ is said to be {\em irredundant} if the equation $Ax = 0$ has a solution $\begin{bmatrix}x_1\; \cdots \; x_k\end{bmatrix}^{\top}$ {\em non-repetitive} in the sense that $x_i \not= x_j$ for $1\leq i < j \leq k$; otherwise the matrix $A$ is said to be {\em redundant}. Every redundant matrix admits an $\ell' \times k'$ irredundant submatrix $A'$ with $\ell' < \ell$ and $k' < k$ such that the sets of solutions for the equations $Ax =0$ and $A'y=0$ are the same when viewed as sets (see e.g.,~\cite{FRS10,RR97} for details). Owing to this, one may restrict the discussion to irredundant partition-regular matrices, for which we may also assume full row rank. Consequently, we refer to irredundant partition-regular matrices of full row rank as {\em Rado matrices}.
For a subset $X \subseteq [n]$, an integer $r \geq 1$, and a Rado matrix $A$, we write
$
X \to (A)_r
$
in order to denote that every $r$-colouring of $X$ admits a monochromatic solution for the matrical equation $A x = 0$. The aforementioned result of Rado~\cite{Rado} coupled with a classical compactness argument (see, e.g.,~\cite{GRS90}) asserts that if $A$ is a Rado matrix then $[n] \to (A)_r$ for every sufficiently large $n$.
For $p \in [0,1]$, let $[n]_p$ denote the binomial random subset of $[n]$ where members of $[n]$ are included independently at random each with probability $p$. Since $X \to (A)_r$ is
an increasing monotone property\footnote{If $X \to (A)_r$ then $Y \to (A)_r$ whenever $Y\supseteq X$.}, a {\sl threshold} for the property $[n]_p \to (A)_r$ exists by a result of Bollob\'as and Thomason~\cite{BT87}; i.e., there exists a function $\hat{p}\colon \mbb{N}\to[0,1]$ such that $\mbb{P}\left[[n]_p\to (A)_r\right]\longrightarrow 1$ whenever $p=\omega(\hat{p})$ (the \emph{$1$-statement}, hereafter), and such that $\mbb{P}\left[[n]_p\to (A)_r\right]\longrightarrow 0$ whenever $p=o(\hat{p})$ (the \emph{$0$-statement}, hereafter).
Graham, R\"odl and Ruci\'nski~\cite{GRR96} studied Schur's theorem for two colours in random sets and determined the threshold of this Ramsey property to be $n^{-1/2}$. R\"odl and Ruci\'nski~\cite{RR97} studied the Rado's theorem in random sets of integers; they determined the $0$-statement for the associated property and provided the $1$-statement for a special case of Rado matrices, namely the aforementioned {\sl density regular} matrices.
The $1$-statement in its full generality was established later on by Friedgut, R\"odl and Schacht in~\cite{FRS10}, thus establishing the so-called {\em symmetric random Rado theorem}. More recently, resilience versions of this problem were studied by Hancock, Staden and Treglown~\cite{HST19} and by Spiegel~\cite{Sp17}.
The following parameter introduced first in~\cite{RR97} arises in the threshold of the symmetric random Rado property. For an $\ell\times k$ Rado matrix $A$, set
\begin{equation}\label{eq:mA}
m(A) := \max_{\substack{W \dot{\cup} \overline{W} = [k] \\ |W| \geq 2 }} \frac{|W|-1}
{|W|-1+\mathrm{\mbs{rk}}\,(A_{\overline{W}}) - \mathrm{\mbs{rk}}\, A},
\end{equation}
where $\mathrm{\mbs{rk}}\, A$ denotes the rank of $A$ and, for $I \subseteq [k]$, the term $A_I$ denotes the submatrix of $A$ obtained by restricting $A$ to the columns whose indices lie in $I$. This parameter is well-defined~\cite{RR97}.
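For orientation, consider Schur's equation $x_1+x_2-x_3=0$, i.e., the Rado matrix $A=\begin{bmatrix}1 & 1 & -1\end{bmatrix}$ with $k=3$ and $\mathrm{\mbs{rk}}\, A=1$; the following quick computation serves only as an illustration of~\eqref{eq:mA}. For $W=[3]$ we have $\overline{W}=\emptyset$ and $\mathrm{\mbs{rk}}\,(A_{\emptyset})=0$, yielding $\frac{3-1}{3-1+0-1}=2$, whereas every $W$ with $|W|=2$ yields $\frac{1}{1+1-1}=1$. Hence $m(A)=2$, in accordance with the threshold $n^{-1/2}$ for Schur's theorem mentioned above.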
\begin{theorem}\label{thm:FSR}{\em (Symmetric random Rado theorem~\cite[Theorem~3.1]{RR97},~\cite[Theorem~1.1]{FRS10})}\label{thm:random_Rado}\\
Let $A$ be a Rado matrix and let $r \in \mbb{N}$. There exist constants $0 < c < C$ such that the following holds
\[
\lim_{n \to \infty} \mbb{P} \bigg[ [n]_{p(n)} \to (A)_r \bigg] =
\begin{cases}
0, & p(n) \leq cn^{-1/m(A)}\\
1, & p(n) \geq Cn^{-1/m(A)},
\end{cases}
\]
where $p\colon\mbb{N}\to[0,1]$.
\end{theorem}
Given $r \geq 2$ partition-regular matrices, namely $A_1,\ldots,A_r$, we write $X \to (A_1,\ldots,A_r)$ to denote that $X$ has the property that for any $r$-colouring of its elements there exists a colour $i \in [r]$ such that the matrical equation $A_i x = 0$ has a solution in colour $i$. In this case $X$ is said to have the {\em asymmetric} Rado property (w.r.t. the matrices $A_1,\ldots,A_r$).
The asymmetric Rado property for $\mbb{N}$ and any $r$ Rado matrices can be deduced directly from the characterisation of partition-regular matrices due to Rado~\cite{Rado}. Indeed, given Rado matrices $A_1,\ldots,A_r$, the diagonal block matrix $B:=\mathrm{\boldsymbol{diag}}(A_1,\ldots,A_r)$ is also partition-regular as it satisfies the columns condition of Rado~\cite{Rado}, which is equivalent to partition-regularity. As such, in any finite colouring of $\mbb{N}$ the homogeneous matrical equation $B x =0$ has a monochromatic solution, which can further be ``decomposed'' into $r$ monochromatic solutions, one for each equation $A_i y = 0$ -- this in fact exceeds the requirement in the asymmetric case. However, this observation does not yield a good estimate on the threshold for the random set $[n]_p$, since the ``density'' $m(B)$ is much higher than what the heuristics suggest.
The only nontrivial result about the threshold for the random set $[n]_p\to (A_1,\ldots,A_r)$ in the literature is due to
Hancock, Staden, and Treglown~\cite[Theorem~4.1]{HST19}, who in fact considered the {\sl resilience} version of Theorem~\ref{thm:Rado} and were the first to study the asymmetric Rado property in a random setting.
They proved an upper bound of the form $Cn^{-1/m(A_1)}$ for the threshold, where $m(A_1)\ge m(A_i)$ for all $i\in[2,r]$. Again, the heuristics below suggest that this is far from the right threshold whenever $m(A_1)>m(A_2)\ge m(A_i)$ for $i\in[3,r]$.
Our main result is the $1$-statement for what we conjecture to be the threshold for $[n]_p \to (A_1,\ldots,A_r)$. The following parameter arises in our result.
\begin{definition}\label{def:mAB}
Let $A$ and $B$ be two Rado matrices, where $A$ is an $\ell_A \times k_A$-matrix and $B$ is an $\ell_B \times k_B$-matrix. Set
\[
m(A,B):=\max_{\substack{W\subseteq [k_A] \\ |W|\ge
2}}\frac{|W|}{|W|-\mathrm{\mbs{rk}}\, A+\mathrm{\mbs{rk}}\,(A_{\overline{W}}) -1+1/m(B)}.
\]
\end{definition}
\noindent
The parameter $m(A,B)$ is well-defined (see~\eqref{eq:luck}).
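To illustrate Definition~\ref{def:mAB}, let $A=\begin{bmatrix}1 & 1 & -1\end{bmatrix}$ be the Schur matrix, so that $m(A)=2$ by the computation above, and let $B=\begin{bmatrix}1 & 1 & 1 & -1\end{bmatrix}$, which is easily checked to be a Rado matrix with $m(B)=3/2$. Writing out the maximisation for $A$ with $1/m(B)=2/3$, the set $W=[3]$ yields $\frac{3}{3-1+0-1+2/3}=\frac{9}{5}$ and every $W$ with $|W|=2$ yields $\frac{2}{2-1+1-1+2/3}=\frac{6}{5}$, whence $m(A,B)=9/5$. Note that $m(B)=3/2\le m(A,B)\le 2=m(A)$ here; in particular, for the pair $(A_1,A_2)=(A,B)$ our main result below provides the $1$-statement already at $p\ge Cn^{-5/9}$. This computation is ours and is included solely as an illustration.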
Our main result reads as follows.
\begin{theorem}\label{thm:Rado}{\em (Main Result)}
Let $A_1, \ldots, A_r$ be $r$ Rado matrices satisfying
$m(A_1) \geq m(A_2) \geq \cdots \geq m(A_r)$. Then there exists a constant $C >0$ such that
\[
\lim_{n \to \infty} \mbb{P} \left[ [n]_{p} \to (A_1,\ldots,A_r) \right] = 1
\]
whenever $p \ge C n^{-1/m(A_1,A_2)}$.
\end{theorem}
A special case, when the matrices $A_i$ describe arithmetic progressions (an asymmetric van der Waerden theorem), was proved independently and simultaneously by Zohar~\cite{Zohar}; see the concluding remarks in Section~\ref{sec:conclude} for more information.
One can easily verify the equality $m(A,A) = m(A)$ (see the proof of Observation~\ref{obs:const} in Section~\ref{sec:aux}), and therefore Theorem~\ref{thm:Rado} retrieves the $1$-statement of the symmetric random Rado theorem (see Theorem~\ref{thm:FSR}) when the matrices $A_1,\ldots,A_r$ coincide. We conjecture that $n^{-1/m(A_1,A_2)}$ is in fact the threshold for the associated property $[n]_p\longrightarrow (A_1,\ldots, A_r)$.
\begin{conjecture}\label{conj:AHP}
Let $A_1, \ldots, A_r$ be $r$ Rado matrices satisfying
$m(A_1) \geq m(A_2) \geq \cdots \geq m(A_r)$. There exist constants $0 <c < C$ such that the following holds
\[
\lim_{n \to \infty} \mbb{P} \bigg[ [n]_{p} \to (A_1,\ldots,A_r) \bigg] =
\begin{cases}
1, & p \geq Cn^{-1/m(A_1,A_2)}\\
0, & p \leq cn^{-1/m(A_1,A_2)}.
\end{cases}
\]
\end{conjecture}
Our main result, namely Theorem~\ref{thm:Rado}, arises quite naturally in arithmetic Ramsey problems for randomly {\sl perturbed} dense sets of integers.
That is, for $n$ sufficiently large, given a set $N \subseteq [n]$ with positive density, the distribution $N \cup [n]_p$ is viewed as a random perturbation of $N$. The limiting behaviour of the {\sl symmetric} Rado property $N \cup [n]_p \to (A)_2$ is then of interest, where $A$ is a prescribed Rado matrix. The study of {\sl symmetric} Ramsey properties of randomly perturbed dense graphs, namely $G \cup G(n,p)$ with $G$ a dense graph, originates with the work of Krivelevich, Sudakov and Tetali~\cite{KST}. Recently much progress has been attained by Das and Treglown~\cite{DT19} for the case of graphs. The $1$-statement of the Kohayakawa-Kreuter conjecture arises fairly naturally in results of this type for graphs and we forgo the details here. For the integers, much less is known. The authors~\cite{AHP} have established that $p = n^{-2/3}$ is the threshold for the densely perturbed set $N \cup [n]_p$ to admit the Schur property; yet no other result in this direction is currently known for any other Rado matrix. This is partly due to an asymmetric random Rado type theorem at the correct threshold being missing from the literature; an issue addressed in the present work.
Our proof of Theorem~\ref{thm:Rado} builds upon the ideas of Mousset, Nenadov and Samotij~\cite{MNS18}. We in fact deduce Theorem~\ref{thm:Rado} from a broader result (namely Theorem~\ref{thm:main}) that provides a general {\sl combinatorial framework} for deducing $1$-statements for asymmetric random Ramsey-type results in random (hyper)graphs and sets alike. Theorem~\ref{thm:main} generalises a result of Friedgut, R\"odl and Schacht from~\cite{FRS10}, who provide such a combinatorial framework for $1$-statements of symmetric random Ramsey-type problems. Our proof of Theorem~\ref{thm:main} relies on the {\em container method}~\cite{BMS15,ST15} and the clever sparsification ``trick'' from~\cite{MNS18}. We postpone the statement of Theorem~\ref{thm:main} until the next section. Roughly speaking though, given an asymmetric Ramsey-type problem in random integer sets or (hyper)graphs involving configurations, say $C_1,\ldots,C_r$, for which one seeks a $1$-statement, Theorem~\ref{thm:main} calls for the examination of certain combinatorial properties of the {\sl solution hypergraphs} associated with each of the configurations $\{C_i\}_{i \in [r]}$. That is, for a configuration $C_i$ the {\sl solution hypergraph} associated with $C_i$ is the one whose edge set consists of all ``copies''/``solutions'' (of) $C_i$ in the {\sl complete} universe (i.e., $K_n$ or $[n]$). Theorem~\ref{thm:main} asserts that if these $r$ solution hypergraphs satisfy a short list of combinatorial properties then the desired $1$-statement for the associated asymmetric random Ramsey-type problem follows. We will make this precise in \S~\ref{sec:technical_thm}.
The intuition underlying the parameter $m(A,B)$ and its involvement in our result is as follows. Let $A$ be an $\ell\times k$ Rado matrix and
let $H^{(A)}$ denote the set of $k$-tuples $x\in[n]^k$ forming solutions to $Ax=0$ and $H^{(A)}_I$ the set of these tuples {\sl projected} to the $I$-indexed coordinates for some $I\subseteq [k]$. The common {\sl rule of thumb}, so to speak, for the location of the threshold in the symmetric setting arises from requiring that the expected number of solutions to $Ax=0$ captured by $[n]_p$ dominates the expected size of $[n]_p$. Often, this is not enough and one in fact must require that the expected number of {\sl projected} solutions to $Ax=0$, i.e.\ the sets of the form $\{x_I\colon Ax=0\}$ for \emph{any} $\emptyset\neq I\subseteq [k]$, captured by $[n]_p$ dominates the expected size of $[n]_p$.
This requirement is embodied in the maximisation seen in~\eqref{eq:mA}.
The parameter $m(A,B)$ arises in a similar manner. For a sequence $p$ sufficiently ``small'', one would like to colour $[n]_p$ so as to avoid, say, a red solution to $Ax=0$ and a blue solution to $B y=0$. With $p$ set, the (expected) density of the set of solutions to $Ax=0$ in $[n]_p$ is $q:=\Theta(\min_{\emptyset\neq I\subseteq[k_A]} |H^{(A)}_I|p^{|I|}/n)$. Moreover, at least one element from each of the solutions to $Ax=0$ in $[n]_p$ should be coloured blue. In fact this set of blue elements can be thought of as being randomly distributed in some sense. But if the `expected' number (which is of order at least $\min_{\emptyset\neq I\subseteq[k_B]} |H^{(B)}_I|q^{|I|}$) of projected solutions to $By=0$ captured by this (random) set exceeds $qn$, then it ``should'' be impossible to avoid a blue solution to $By=0$, and here one observes the similarity with the symmetric case. This intuitive explanation is of course quite crude; nevertheless, the parameter $m(A,B)$ can be seen to emerge in this way.
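To make this heuristic concrete in the simplest (symmetric) instance, take $A=B$ to be the Schur matrix and $p=n^{-1/2}=n^{-1/m(A,A)}$. Then $|H^{(A)}_I|=\Theta(n)$ for $|I|=1$ and $|H^{(A)}_I|=\Theta(n^2)$ for $|I|\in\{2,3\}$, whence
\[
q=\Theta\big(\min\{np,\,n^2p^2,\,n^2p^3\}/n\big)=\Theta(n^{-1/2}),
\]
and in turn $\min_{\emptyset\neq J\subseteq[3]} |H^{(B)}_J|\,q^{|J|}=\Theta(n^{1/2})=\Theta(qn)$; that is, the two quantities in the above discussion balance precisely at the conjectured threshold.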
The reader familiar with asymmetric Ramsey properties in random (hyper)graphs will undoubtedly draw parallels between the so-called {\em Kohayakawa-Kreuter conjecture} (see Conjecture~\ref{conj:KK} below) and our Conjecture~\ref{conj:AHP}; more specifically, one cannot help but compare $m(A,B)$ to the graph parameter arising for the threshold in the Kohayakawa-Kreuter conjecture.
For indeed, the intuition underlying the location of `most' thresholds in the Kohayakawa-Kreuter conjecture is as described above for the asymmetric random Rado problem.
Ramsey's theorem~\cite{Ram30} asserts that, for a fixed $r$-uniform hypergraph $F$ and $n$ sufficiently large, any colouring of the edges of the complete $r$-uniform hypergraph $K^{({r})}_n=([n],\binom{[n]}{r})$ with $k$ colours admits a monochromatic copy of $F$; this is captured concisely with the notation $K^{({r})}_n\longrightarrow (F)_k$. This generalises to the asymmetric case so as to read $H\longrightarrow (F_1,\ldots,F_k)$ in a straightforward manner. The binomial random hypergraph $H^{({r})}(n,p)$ is defined by choosing each of the $\binom{n}{r}$ possible edges independently at random with probability $p$. If $r=2$ then this is the binomial random graph model commonly denoted by $G(n,p)$. For a nonempty $r$-uniform hypergraph $F$ the $m_r$-{\em density} of $F$ is given by $m_r(F):=\max_{F'\subseteq F, v(F')>r} d_r(F')$, where $d_r(F)=\frac{e(F)-1}{v(F)-r}$ if its number of edges $e(F)>1$, and $d_r(F)=1/r$ if $e(F)=1$ and $v(F)=r$.
For a fixed $k\ge 2$, $F$ an $r$-uniform hypergraph and $p\ge Cn^{-1/m_r(F)}$ (for some absolute constant $C>0$) it does indeed hold that $\mbb{P}\left[H^{({r})}(n,{p})\longrightarrow (F)_k\right]\longrightarrow 1$ as $n\to\infty$. In \emph{many} cases $n^{-1/m_r(F)}$ is known to be the \emph{threshold}; nevertheless there are exceptions. R\"odl and Ruci\'nski~\cite{RR94,RR93,RR95} determined for every graph $F$ and any fixed number of colours the thresholds for the random graph $G(n,p)$. The case of random hypergraphs is not fully solved, but a general $1$-statement was given by Friedgut, R\"odl and Schacht~\cite{FRS10} and by Conlon and Gowers~\cite{CG16} (in the strictly balanced case), the matching $0$-statement for cliques was provided by Nenadov, Person, \v{S}kori\'c and Steger~\cite{NPSS17}. Gugelmann, Nenadov, Person, \v{S}kori\'c, Steger and Thomas~\cite{GNPSST17} also discovered another type of behaviour in random hypergraphs which is not exhibited in the random graph $G(n,p)$.
Amongst the first to consider asymmetric Ramsey properties in random graphs were Kohayakawa and Kreuter~\cite{KK97} who studied the case of cycles and put forth a conjecture as to where to locate the thresholds. For graphs $F_1$ and $F_2$ with $m_2(F_1)\ge m_2(F_2)$ the
asymmetric \emph{$m_2$-density} of $F_1$ and $F_2$ is given by
\[
m_2(F_1,F_2):=\max_{F_1'\subseteq F_1, e(F'_1)\ge 1}\frac{e(F'_1)}{v(F'_1)-2+1/m_2(F_2)}.
\]
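For instance, for $F_1=K_4$ and $F_2=K_3$ one has $m_2(K_3)=2$ and the maximum in the definition is attained at $F'_1=K_4$, so that
\[
m_2(K_4,K_3)=\frac{e(K_4)}{v(K_4)-2+1/2}=\frac{6}{5/2}=\frac{12}{5},
\]
and in particular $m_2(K_3)=2\le m_2(K_4,K_3)\le 5/2=m_2(K_4)$.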
The Kohayakawa-Kreuter conjecture is then as follows, where here we take the version from~\cite{KSS14}.
\begin{conjecture}\label{conj:KK} {\em (The Kohayakawa-Kreuter conjecture)}\\
Let $F_1$, \ldots, $F_r$ be graphs with $m_2(F_1)\ge m_2(F_2)\ge \ldots\ge m_2(F_r)$ and $m_2(F_2)>1$. Then
the threshold for $G(n,p)\longrightarrow (F_1,\ldots,F_r)$ is $n^{-1/m_2(F_1,F_2)}$.
\end{conjecture}
This conjecture has been studied in~\cite{KK97,KSS14,MSSS09}. Kohayakawa and Kreuter verified the conjecture for cycles,
Marciniszyn, Skokan, Sp\"ohel and Steger~\cite{MSSS09} proved the $0$-statement for cliques and observed that the $1$-statement in the strictly `balanced' case would follow from the so-called K\L{}R-conjecture (which was verified later in~\cite{BMS15,CGSS14,ST15}) and Kohayakawa, Schacht and Sp\"ohel~\cite{KSS14} proved the strictly-balanced case. The asymmetric hypergraph analogue was studied in~\cite{GNPSST17}, where the $1$-statement was proven for general graphs with an additional $\log n$-factor.
In a recent paper Mousset, Nenadov and Samotij~\cite{MNS18} managed to erase this $\log n$-factor using a clever sparsification trick. The proof of~\cite{MNS18} (as well as of~\cite{GNPSST17}) makes use of the container method~\cite{BMS15,ST15}.
Support for Conjecture~\ref{conj:AHP} can be found in the fact that our combinatorial framework for deducing $1$-statements for asymmetric random Ramsey-type results, namely Theorem~\ref{thm:main}, recovers the $1$-statements for graphs and hypergraphs at the threshold conjectured by the Kohayakawa-Kreuter conjecture and its extension to hypergraphs. We revisit this statement in the remarks following the statement of Theorem~\ref{thm:main}.
The organisation of the paper is as follows. In Section~\ref{sec:technical_thm} we describe our main technical result (Theorem~\ref{thm:main}). In Section~\ref{sec:MNS} we prove Theorem~\ref{thm:main} by following closely the approach of Mousset, Nenadov and Samotij~\cite{MNS18}. In Section~\ref{sec:aux} we provide combinatorial results about Rado matrices, in Section~\ref{sec:Rado} we deduce Theorem~\ref{thm:Rado} from Theorem~\ref{thm:main} and Section~\ref{sec:conclude} contains some concluding remarks.
\section{Main technical result}\label{sec:technical_thm}
The purpose of this section is to state our main technical result, namely Theorem~\ref{thm:main}, from which Theorem~\ref{thm:Rado} is deduced with relative ease. Theorem~\ref{thm:main} is proven using the container method~\cite{BMS15,ST15} and is stated in the spirit of the main result~\cite[Theorem~2.5]{FRS10}.
We begin with the statements of the combinatorial properties that the aforementioned solution hypergraphs are required to satisfy for Theorem~\ref{thm:main} to take effect. Roughly speaking these properties fall into four categories to which we refer to as: {\sl containerability}, {\sl Ramseyness}, {\sl tameness}, and {\sl boundedness}. In what follows we make these precise.
Throughout, a sequence of hypergraphs $\boldsymbol{H}:=(H_n)_{n \in \mbb{N}}$ is said to have property $\P$ if $H_n$ has property $\P$ whenever $n$ is sufficiently large. We sometimes refer to $H_n$ as solution hypergraphs.
\vspace{1.5ex}
\noindent
\scaps{Containerability.} For some of the solution hypergraphs involved in the asymmetric random Ramsey-type problem we require that the container method can be applied to them. One can capture this using the following functions introduced in~\cite[Section~3.1]{ST15}.
Let $H$ be a $k$-uniform $n$-vertex hypergraph with average vertex degree $d > 0$. For $2 \leq j \leq k$ and $v \in V(H)$ set
\[
\deg_H^{(j)}(v) : = \max \{\deg_H(T): v \in T \subseteq V(H) \; \text{and}\; |T|= j\},
\]
where $\deg_H(T)$ is the number of edges of $H$ that contain $T$.
For $\tau > 0$ one defines
\[
\delta_j(H,\tau) := \frac{\sum_{v\in V(H)} \deg_H^{(j)}(v)}{\tau^{j-1}nd},
\]
and
\[
\delta(H,\tau):= 2^{\binom{k}{2}-1}\sum_{j=2}^k 2^{-\binom{j-1}{2}}\delta_j(H,\tau),
\]
which is the \emph{co-degree function} from~\cite[Section~3.1]{ST15}.
\vspace{1.5ex}
\noindent
\scaps{Tameness.} Like containerability, the next property also involves degrees of the associated solution hypergraphs. However, tameness can be seen to be more intimately related to the configuration at hand; indeed, this property calls for the {\sl extendability} of so-called {\sl sub-solutions} into complete solutions to be controllable to a certain extent.
An {\sl ordered} ($k$-uniform) hypergraph $\mcal{H}: = (H,\boldsymbol{\pi})$ is a pair comprised of an $n$-vertex $k$-uniform hypergraph $H$ and a set of bijections $\boldsymbol{\pi}$,
where each element $\pi\in\boldsymbol{\pi}$, which may be viewed as a $k$-tuple, is a bijection from $[k]$ to some edge $e\in E(H)$ denoted $e_{\pi}$, i.e.\ $e_\pi=\pi([k])$. We thus view elements of $\boldsymbol{\pi}$ as \emph{ordered edges} of $\mcal{H}$. We also write $e$, $f\in\mcal{H}$ for such ordered edges, and the notations $e\subseteq A$ or $e\cap f$ mean that we view $e$ and $f$ as the sets $e([k])$ and $f([k])$ (by dropping the order).
We write $|\mcal{H}|$ for $|\boldsymbol{\pi}|$, i.e.\ the number of ordered edges in $\mcal{H}$, and we also identify $\mcal{H}$ with $\boldsymbol{\pi}$.
To put this in the context of, say, the Rado problem, the edges of such ordered hypergraphs $\mcal{H}$ will arise later in Section~\ref{sec:Rado} as solution vectors $x$ with distinct entries to equations of the form $Ax=0$, where $A$ is some Rado matrix and $x\in [n]^k$. The r\^ole of the bijections $\pi$ from $\boldsymbol{\pi}$ is to record the positions of the elements of $e_\pi$ as these are to be placed into the solution vector $x$.
For $\emptyset \not= I \subseteq [k]$ and $\pi \in \boldsymbol{\pi}$, we write $\pi|_I$ for the restriction of $\pi$ to $I$.
The $I$-{\em projection} of $\mcal{H}$ is defined to be
\[
\mcal{H}_I: = \left(H_I,\boldsymbol{\pi}_I\right)\text{ where } H_I:=(V(H),\{\pi(I)\colon \pi\in \boldsymbol{\pi}\})\text{ and } \boldsymbol{\pi}_I:=\left\{\pi|_I: \pi\in\boldsymbol{\pi} \right\}.
\]
In particular $\mcal{H}_{[k]}$ coincides with $\mcal{H}$. Given $e$, $f\in\mcal{H}$ and $u\in\mcal{H}_I$, we write $e\cap f=u$ if $e|_I=u=f|_I$ and $e([k])\cap f([k])=u(I)$.
Observe that for $I \subset W \subseteq [k]$ the $I$-projection of $\mcal{H}_W$ coincides with the $I$-projection of $\mcal{H}$, and that several edges of $\mcal{H}$ may indeed be projected onto the same edge of an $I$-projection.
For $\emptyset \not= I\subseteq [k]$ and (an ordered edge) $y\in \mcal{H}_I$ write
$
\deg_{\mcal{H}}(y) := |\{\pi\in \boldsymbol{\pi}:\pi|_I=y\}|
$
to denote the \emph{degree} of $y$ in $\mcal{H}=(H,\boldsymbol{\pi})$.
The following definition captures a setting in which the degrees of projected edges are not much larger than the average.
\begin{definition}
Let $K \in \mbb{N}$. An ordered hypergraph $\mcal{H}=(H,\boldsymbol{\pi})$ is said to be $K$-{\em tamed} if
for all $\emptyset \not= I \subset W \subseteq [k]$
\begin{equation}\label{eq:extend}
\deg_{\mcal{H}_W}(u) \leq K \frac{|\mcal{H}_W|}{|\mcal{H}_I|}
\end{equation}
holds for all $u \in\boldsymbol{\pi}_I$.
\end{definition}
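For a concrete example (included here only for illustration), let $\mcal{H}$ be the ordered hypergraph of Schur triples, i.e.\ all $(x_1,x_2,x_3)\in[n]^3$ with $x_1+x_2=x_3$. Then $\mcal{H}$ is $K$-tamed for a suitable absolute constant $K$: for instance, for $I=\{1\}$ and $W=[3]$ every fixed value of $x_1$ extends to at most $n$ solutions, while $|\mcal{H}|/|\mcal{H}_I|=\Theta(n^2)/\Theta(n)=\Theta(n)$; the remaining pairs $I\subset W$ are checked in the same manner.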
For future reference let us note that, since $\sum_{u \in \mcal{H}_I} \deg_{\mcal{H}_W}(u) = |\mcal{H}_W|$, $K$-tamed hypergraphs $\mcal{H}$ have the property that whenever $\emptyset \not= I \subset W \subseteq [k]$ it holds that:
\begin{equation}\label{eq:cherry}
\sum_{u \in \mcal{H}_I} \deg_{\mcal{H}_W}(u)^2 \le K^2 \frac{|\mcal{H}_W|^2}{|\mcal{H}_I|}.
\end{equation}
In particular when $W = [k]$ the condition becomes
\begin{equation}\label{eq:traditional-boundedness}
\sum_{u \in \mcal{H}_I} \deg_{\mcal{H}}(u)^2 \le K^2 \frac{|\mcal{H}|^2}{|\mcal{H}_I|}.
\end{equation}
Later (in Section~\ref{sec:Rado}), we will use $K$-tameness of an ordered $k$-uniform hypergraph $\mcal{H}=(H,\boldsymbol{\pi})$ to get bounds
on the co-degree function $\delta_j(H,\tau)$ by using $\deg^{(j)}_H (v) \le \sum_{I \in \binom{[k]}{j}} \Delta^{I}(\mcal{H})$, where $\Delta^{I}(\mcal{H})$ is the maximum over all
$\deg_{\mcal{H}}(u)$ with $u\in\mcal{H}_I$.
\vspace{1.5ex}
\noindent
\scaps{Ramseyness}. Another combinatorial property that we shall require is {\em Ramsey supersaturation} that is fit to the asymmetric setting.
Given (possibly ordered) hypergraphs $H_1$, \ldots, $H_r$ on a common vertex set, the tuple $(H_i\colon i\in[r])$ is said to be $r$-{\em Ramsey} if for any partition $U_1 \dot{\cup} \cdots \dot{\cup} U_r$ of the common vertex set there exists an $i \in [r]$ such that $e(H_i[U_i]) > 0$. Observe that for $H_1=\ldots=H_r$ this reduces to the symmetric setting.
We will be working with the following quantitative version of the Ramsey property.
Given $r \in \mbb{N}$ and $i \in [r]$, let $(H_n^{(i)})_{n\in \mbb{N}}$ be a sequence of $k_i$-uniform (possibly ordered) hypergraphs with the property that for every $n \in \mbb{N}$ the hypergraphs $(H_n^{(i)})_{i \in [r]}$ are all defined on the common vertex set $[n]$.
Put $\mathfrak{H}_n:=\left((H_n^{(i)})\colon i \in [r]\right)$ and $\boldsymbol{\fH}:=\left((H_n^{(i)})\colon i \in [r]\right)_{n\in\mbb{N}}$.
The sequence $\boldsymbol{\fH}$ is said to be $(r,\zeta)$-{\em Ramsey} if for every sufficiently large $n$ and for any vertex partition $U_1 \dot{\cup} U_2 \dot{\cup} \cdots \dot{\cup} U_r$ of $[n]$ there exists an $i\in [r]$ such that $e(H^{(i)}_n[U_i]) > \zeta e(H^{(i)}_n)$.
\vspace{1.5ex}
\noindent
\scaps{Boundedness.} Most technical of all properties is that of {\sl boundedness}, and it is here that we encounter the sparsification trick of~\cite{MNS18}. Roughly speaking, the property calls for a {\sl weight} function to be put on the elements of the random set, thus {\sl sparsifying} it, so that after sparsification the expected number of $I$-projected solutions for every nontrivial set of indices $I$ is asymptotically comparable with the size of the containers employed through the containerability property.
\begin{definition}\label{def:bounded}{\em ($(p,w,\tau)$-boundedness)}
Let $2 \leq k \in \mbb{N}$, let $p\colon\mbb{N}\to(0,1]$, $w:\P\left([k]\right) \to [1,\infty)$ and $\tau := \tau(n)\colon \mbb{N}\to (0,1)$ be functions, where additionally $\tau n \to \infty$ with $n$.
A sequence $(\mcal{H}_n)_{n \in \mbb{N}}$ of ordered $k$-uniform hypergraphs is said to be $(p,w,\tau)$-{\em bounded} if
\begin{equation}\label{eq:bounded}
\min_{\emptyset \not= I \subseteq [k]} p^{w(I)}|(\mcal{H}_n)_I| = \Theta(\tau n)
\end{equation}
holds for all sufficiently large $n$, i.e.\ there exist absolute constants $c',C'>0$ with
\[
c'\tau n\le \min_{\emptyset \not= I \subseteq [k]} p^{w(I)}|(\mcal{H}_n)_I|\le C'\tau n
\]
for all large $n$.
\end{definition}
\noindent
In the context of~\eqref{eq:bounded}, a set $W \subseteq [k]$ satisfying
\[
p^{w(W)}|(\mcal{H}_n)_W| = \min_{\emptyset \not= I \subseteq [k]} p^{w(I)}|(\mcal{H}_n)_I|
\]
is called a $w$-{\em minimiser set}. If the associated function $w$ has the property that for every $ i\in [k]$ there exists a $w$-minimiser set containing $i$ then $w$ is called {\em proper}. We write $\mcal{W}(w)$ for the set of $w$-minimisers whenever $w$ is proper. A sequence of ordered hypergraphs that is $(p,w,\tau)$-bounded with $w$ being proper is said to be $(p,w,\tau)$-{\em properly bounded}.
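For a flavour of these notions, consider once more the ordered hypergraph $\mcal{H}_n$ of Schur triples together with the (purely illustrative) weight function $w(I):=|I|$ and $p=n^{-1/2}$. Then $p^{w(I)}|(\mcal{H}_n)_I|$ is of order $n^{1/2}$ for $|I|\in\{1,3\}$ and of order $n$ for $|I|=2$, so that $(\mcal{H}_n)_{n\in\mbb{N}}$ is $(p,w,\tau)$-bounded with $\tau=n^{-1/2}$; moreover, each singleton $\{i\}$ is a $w$-minimiser set, whence $w$ is proper. We stress that this particular choice of $w$ serves only to illustrate the definitions.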
\vspace{2ex}
We are now ready to state our main technical result.
In the context of the asymmetric random Rado problem, say, this result conveys the message that upon collating all solution hypergraphs for all matrical equations involved in the problem into a single (ordered not necessarily uniform) hypergraph, then if the latter satisfies the above four combinatorial properties (namely containerability, Ramseyness, tameness, and boundedness) then a.a.s. this hypergraph, once restricted to the elements/vertices chosen by the random set, will support the desired Rado property. While true in spirit (and certainly in the symmetric case), the asymmetric nature of the Ramsey-type properties we are after
renders the above description slightly inaccurate in the sense that asymmetry will require the satisfaction of the above combinatorial properties in a manner not as
{\sl homogeneous} as described. This we now make precise; to that end we will prefer to write $V_{n,p}$ in order to denote the binomial random set $[n]_p$.
\begin{theorem}\label{thm:main}{\em (Main technical result)}
Let $2 \leq r, k_1,\ldots, k_r \in \mbb{N}$, and let $\zeta , K, \eps>0$ such that $\eps < 1/2$ and $\zeta > t! \eps$, where $t:= \max_{i \in [2,r]}k_i$. For $i \in [r]$, let $(\mcal{H}_n^{(i)})_{n\in \mbb{N}}$ be a sequence of $k_i$-uniform ordered hypergraphs such that the hypergraphs $\{H_n^{(1)},\ldots,H_n^{(r)}\}$ are all defined on the same vertex set $[n]$ for every $n \in \mbb{N}$. There exists a $C>0$ such that the following holds for every $p\colon\mbb{N}\to (0,1]$
satisfying $p(n)\longrightarrow 0$ as $n\to\infty$.
If $\boldsymbol{\fH}=\left((\mcal{H}_n^{(i)})\colon i \in [r]\right)_{n\in\mbb{N}}$ is $(r,\zeta)$-Ramsey,
$\mcal{H}_n^{(1)}$ is $K$-tamed and $(p,w,\tau)$-properly bounded, and $\delta(H^{(i)}_n,\tau) \leq \eps/12 t!$ holds for all $i=2$,\ldots, $r$, then a.a.s. $(\mcal{H}^{(i)}_n[ V_{n,q}]\colon i\in[r])$ is $r$-Ramsey whenever $q:= q(n) \geq C p(n)$.
\end{theorem}
\begin{remark}
The condition $p(n)\longrightarrow 0$ is not very restrictive (and, in fact, it can be omitted at the expense of choosing $c'$ in~\eqref{eq:bounded} appropriately). Moreover, in applications one is concerned with sparse cases anyway. Since Ramsey properties are monotone, the truth of the statement for $p(n)\longrightarrow 0$ implies it for larger probabilities as well.
\end{remark}
\begin{remark}
Our Theorem~\ref{thm:main} is stated in the spirit of the main result for symmetric Ramsey problems of Friedgut, R\"odl and Schacht~\cite[Theorem~2.5]{FRS10}.
The boundedness condition there implicitly involves a form of the co-degree function combined with probabilities $p(n)$, whereas our theorem treats them separately. As a consequence, the verification for the (ordered) hypergraphs arising as solutions to the linear equations involving Rado matrices are short and straightforward.
\end{remark}
\begin{remark}
In~\S~\ref{sec:intro} we claimed to have Theorem~\ref{thm:main} reproduce the $1$-statement seen in the Kohayakawa-Kreuter conjecture (see Conjecture~\ref{conj:KK}) which has been recently proved in its full generality by Mousset, Nenadov and Samotij~\cite{MNS18}. This while the condition $m_2(F_2)>1$ stated in the Kohayakawa-Kreuter conjecture is missing from the premise of Theorem~\ref{thm:main}. As noted in~\cite{KSS14}, in the Kohayakawa-Kreuter conjecture the condition $m_2(F_2) > 1$ is required for the conjectured {\sl threshold} to hold; dropping this condition does not refute the $1$-statement; the latter remains true only not at the optimal density.
\end{remark}
\section{A generalised Mousset-Nenadov-Samotij type argument}\label{sec:MNS}
In this section we prove Theorem~\ref{thm:main}. The proof is an adaptation of the arguments from~\cite{MNS18} and thus we follow~\cite{MNS18} closely throughout this section.
Let $(\mcal{H}_n^{(i)})_{n\in \mbb{N}}$ and $\boldsymbol{\fH}=\left((\mcal{H}_n^{(i)})\colon i \in [r]\right)_{n\in\mbb{N}}$ be as in Theorem~\ref{thm:main}. We will be considering sequences of the form $(A_n,\xi_n)_{n \in \mbb{N}}$ where $A_n \subseteq [n]$ and $\xi_n : [n] \to [k_1]$ is a function viewed as a $k_1$-partition of $A_n=\dot\cup_{i=1}^{k_1}(\xi_n^{-1}(i)\cap A_n)$; consequently, we refer to $(A_n,\xi_n)$ as a $k_1$-{\em partite set}. Often we will suppress the index $n$ and treat $(A_n,\xi_n)$ as the pair
$$
\mcal{A} := (A \subseteq [n],\xi: [n] \to [k_1]).
$$
Whenever $\xi$ is clear from the context, we identify $\mcal{A}$ with $A$.
A set $e \subseteq [n]$ with $|e| =k_1$ is said to be $\xi$-{\em partite} if all its members lie in different parts of $\xi$, i.e.\ $\xi(e)=[k_1]$.
We denote by $\mcal{H}^{(1)}_n [\mcal{A}]$ the ordered subgraph of $\mcal{H}^{(1)}_n$ with edges $\pi$ satisfying $\pi([k_1])\subseteq A_n$ and $\pi^{-1}=\xi|_{\pi([k_1])}$ (i.e.\ those edges $\pi$ which respect the partition $\xi$), and we refer to such edges as \emph{$\xi$-partite}. More generally, we also call a projection $\pi|_W$ of $\pi$ to $W$ $\xi$-partite if $\pi^{-1}|_{\pi(W)}=\xi|_{\pi(W)}$.
For $X \subseteq [n]$ we write $E_\xi(\mcal{H}^{(1)}_n[X])$ to denote the $\xi$-partite edges of $\mcal{H}^{(1)}_n$ spanned by $X$ and $e_\xi(H^{(1)}_n[X])$ to denote the cardinality of this set. We say that $\mathfrak{H}_n[\mcal{A}]$ is $r$-{\em partite-Ramsey} if for every partition $A_1 \dot{\cup} A_2\dot\cup \cdots \dot\cup A_r$ of $A$ either $e_\xi(\mcal{H}^{(1)}_n[A_1]) > 0$ or there exists an $i \in [2,r]$ such that $e(\mcal{H}^{(i)}_n[A_i]) > 0$.
Let us assume that $\mathfrak{H}_n[\mcal{A}]$ is not $r$-partite-Ramsey. Then there exists an $r$-colouring $A_1 \dot{\cup} A_2 \dot{\cup} \cdots \dot{\cup} A_r$ of $A$ such that $e_\xi (\mcal{H}^{(1)}_n[A_1]) = 0$ and $e(\mcal{H}^{(i)}_n[A_i]) = 0$ for every $i=2$,\ldots, $r$. This in turn implies that there exists an $r$-colouring $A'_1 \dot{\cup} A'_2 \dot{\cup} \cdots \dot{\cup} A'_r$ of $A$ such that any $v \in A$ that does not lie in a $\xi$-partite edge of $\mcal{H}^{(1)}_n$ satisfies $v \in A'_1$.
Since $e(\mcal{H}^{(i)}_n[A_i]) = 0$, the sets $A'_2$,\ldots, $A'_r$ are independent in the hypergraphs $\mcal{H}^{(2)}_n$,\ldots, $\mcal{H}^{({r})}_n$ and also in the hypergraphs $H^{(2)}_n$,\ldots, $H^{({r})}_n$ in which the order of the vertices within edges is dropped. Moreover, every edge of the $k_i$-uniform $H^{(i)}_n$ gives rise to at most $k_i!$ ordered hyperedges in $\mcal{H}^{({i})}_n$, and hence the numbers of edges of $\mcal{H}^{({i})}_n[X]$ and of $H^{({i})}_n[X]$ always differ by at most a constant factor.
The independent sets above can be
approximately described by the following version of the container theorem due to Saxton and Thomason~\cite{ST15}. The set $\P A$ below denotes the power set of a set $A$, and $\P(A)^s$ denotes the $s$-fold Cartesian product of $\P A$.
\begin{theorem}\label{thm:containers} {\em~\cite[Corollary~3.6]{ST15}} Let $H$ be a $k$-uniform hypergraph on $[n]$. Let $0 < \eps,\tau < 1/2$ and let $\tau$ satisfy $\delta(H,\tau) \leq \eps/12k!$.
Then there exists a constant $c := c(k)=800k!^3k$ and a function $f\colon \P([n])^s\to\P[n]$ where $s\le c\log(1/\eps)$, with the following properties. Let
$\mcal{T}=\{(T_1,\ldots, T_s)\in\mcal{P}([n])^s\colon |T_i|\le c\tau n, 1\le i\le s\}$, and let $\mcal{C}=\{f(T)\colon T\in \mcal{T}\}$. Then
\begin{enumerate}
\item [\namedlabel{itm:C1}{(C.1)}] for every independent set $I$ there exists a {\em signature} $T := (T_1,\ldots,T_s) \in \mcal{T}\cap\P(I)^s$ such that
$
I \subseteq f(T) \in \mcal{C},
$
\item [\namedlabel{itm:C2}{(C.2)}] $e(H[C]) \leq \eps e(H)$ for all $C \in \mcal{C}$,
\item [\namedlabel{itm:C3}{(C.3)}] $\log|\mcal{C}|\le c\log (1/\eps)n\tau\log(1/\tau)$.
\end{enumerate}
\end{theorem}
We further call the sets from $\mcal{C}$ \emph{containers}.
Given $\zeta,\eps, K, \tau$, and $p$ (per the quantification of Theorem~\ref{thm:main}) let $c_i := c_i(k_i)$ and $s_i := s_i(\eps,c_i)$ be the constants guaranteed by Theorem~\ref{thm:containers}, as this will be applied to every member of $(H^{(i)}_n)_{i=2,\ldots, r}$ with $\eps$ and $\tau$. Set
\begin{equation}\label{eq:const}
c := \max_{i \geq 2} c_i \; \text{and}\; s := \max_{i\geq 2} s_i.
\end{equation}
In addition let $(f_i)_{i\geq 2}$ denote the mappings from signatures to containers as defined in~\ref{itm:C1} for each such application.
By Theorem~\ref{thm:containers} applied with $\eps$ and $\tau$ to $H^{(i)}_n$, $i \geq 2$, there exists a collection of containers $\mcal{C}_i \subseteq \P[n]$ such that for each $A'_i$, $i\geq 2$, there exists a signature $T^{(i)} := (T^{(i)}_1,\ldots,T^{(i)}_{s_i}) \in \P(A'_i)^{s_i}$ (per~\ref{itm:C1}).
Given the signatures $(T^{(2)},\ldots,T^{(r)})$, note that $A \setminus (f_2(T^{(2)}) \cup \cdots \cup f_r(T^{(r)})) \subseteq A'_1$ and thus
$$
e_\xi(\mcal{H}^{(1)}_n \big[ A \setminus (f_2(T^{(2)}) \cup \cdots \cup f_r(T^{(r)}))\big])=e_\xi(\mcal{H}^{(1)}_n \big[ A'_1\big]) = 0.
$$
The following observation summarises the above discussion.
\begin{observation}\label{obs:witnesses}
Let $\mcal{A} : = (A,\xi)$ be a $k_1$-partite set. If $\mathfrak{H}_n[\mcal{A}]$ is not $r$-partite-Ramsey then there is a partition of
$A=A'_1\dot\cup A'_2\dot\cup \ldots\dot\cup A'_r$ (as described above), where the following two properties are met:
\begin{enumerate}
\item [\namedlabel{itm:P1}{(P.1)}] there exists a sequence of signatures $
(T^{(2)},\ldots,T^{(r)})$ as defined above such that
$$
\bigcup_{i=2}^r \bigcup_{j = 1}^{s_i} T^{(i)}_j \subseteq A,
$$
and every member of $\bigcup_{i=2}^r\bigcup_{j = 1}^{s_i} T^{(i)}_j$ lies in a $\xi$-partite edge of $\mcal{H}^{(1)}_n$. By assumption $(\mcal{H}^{(1)}_n)_{n\in\mbb{N}}$ is $(p,w,\tau)$-properly bounded; thus, for any $v \in \bigcup_{i=2}^r \bigcup_{j = 1}^{s_i} T^{(i)}_j$ we may fix an arbitrary $\xi$-partite edge $\pi_v$ of $\mcal{H}^{(1)}_n[A]$ containing $v$, and assign to $v$ a $w$-minimiser set $W_v \subseteq [k_1]$ satisfying $\xi(v) \in W_v$. The set $Z_v:=\pi_v(W_v)$ (which contains $v$, as $v=\pi_v(\xi(v))$ and $\xi(v)\in W_v$; in particular $|Z_v|\le k_1$) is viewed as a \emph{witness} for $v$. Define
\begin{equation}\label{eq:witness-collection}
\tilde{A}:=\bigcup_{v\in \bigcup_{i=2}^r \bigcup_{j = 1}^{s_i} T^{(i)}_j } Z_v,
\end{equation}
and note that
\begin{equation}\label{eq:tilde-A-bound}
|\tilde{A}|\le k_1\cdot r\cdot s\cdot c\cdot \tau n
\end{equation}
holds.
\item [\namedlabel{itm:P2}{(P.2)}] We have $e_\xi(\mcal{H}^{(1)}_n \big[ A \setminus (f_2(T^{(2)}) \cup \cdots \cup f_r(T^{(r)}))\big])=0$.
\end{enumerate}
\end{observation}
Next we define a weighted partite random set, which generalises the model $[n]_p$.
\begin{definition} \label{def:Vnpwk}
Given $n,k \in \mbb{N}$, $p \in [0,1]$, and $w : [k] \to [1,\infty)$, the probability distribution $V_{n,p,w,k}$ on $[n]$ is defined as follows.
\begin{enumerate}
\item Choose a function $\xi:[n] \to [k]$ uniformly at random.
\item An element $x \in [n]$ is included independently in $V_{n,p,w,k}$ with probability $p^{w(\xi(x))}$.
\end{enumerate}
\end{definition}
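For concreteness, the distribution $V_{n,p,w,k}$ may be sampled directly along the two steps above; the following minimal sketch (ours, with illustrative names) does exactly this:
\begin{verbatim}
import random

def sample_V(n, p, w, k):
    # Step 1: a uniformly random partition function xi : [n] -> [k].
    xi = {x: random.randrange(1, k + 1) for x in range(1, n + 1)}
    # Step 2: include x independently with probability p ** w(xi(x));
    # since w >= 1 this probability is at most p, which is what
    # allows coupling V_{n,p,w} inside V_{n,p}.
    V = {x for x in range(1, n + 1) if random.random() < p ** w(xi[x])}
    return xi, V

# Example: unit weights recover the usual binomial random set [n]_p.
xi, V = sample_V(1000, 0.1, lambda j: 1, 3)
\end{verbatim}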
From now on we write $V_{n,p,w}$ to denote $V_{n,p,w,k_1}$, where $k_1$ is the uniformity of the hypergraphs in $(\mcal{H}^{(1)}_n)_{n\in\mbb{N}}$.
We consider two probabilities namely $p\colon\mbb{N}\to[0,1]$ and $q\colon\mbb{N}\to[0,1]$, and we
shall write $p$ and $q$ instead of $p(n)$ and $q(n)$, respectively. Since $w \ge 1$, and hence $q^{w(j)} \le q$ for every $j \in [k_1]$, the random sets $V_{n,q,w}$ and $V_{n,q}$ can be coupled so that $V_{n,q,w} \subseteq V_{n,q}$,
and thus the following holds
\begin{equation}\label{eq:reason}
\mbb{P}[\mathfrak{H}_n[V_{n,q}] \; \text{is not $r$-Ramsey}] \leq \mbb{P} [\mathfrak{H}_n[V_{n,q,w}] \; \text{is not $r$-partite-Ramsey}].
\end{equation}
Before proceeding further, we introduce the following quantity $X_I:= |\{\boldsymbol{i} \in (\mcal{H}^{(1)}_n)_I: \boldsymbol{i} \subseteq V_{n,q,w} \}|$ and prove the following fact about it.
\begin{lemma}\label{lem:concentration}
For every $\emptyset \not= I \subseteq [k_1]$,
$$
\lim_{n\to \infty}\mbb{P} \left[X_I \leq 2 q^{w(I)}|(\mcal{H}^{(1)}_n)_I|\right] = 1.
$$
\end{lemma}
\begin{proof}
We start by noting that $\mbb{E} X_I = q^{w(I)}|(\mcal{H}^{(1)}_n)_I|$. Moreover, we may write $X_I = \sum_{f \in (\mcal{H}^{(1)}_n)_I} \mathbbm{1}_{f}$, where $\mathbbm{1}_{f}$ is the indicator variable of the event that $f \in (\mcal{H}^{(1)}_n)_I$ satisfies $f \subseteq V_{n,q,w}$. Since indicators of disjoint tuples are independent, we have $\mathrm{Var}\, X_I \leq \mbb{E} X_I + \Delta$ with $\Delta$ as defined below, so Chebyshev's inequality yields
\begin{equation}\label{eq:cheby}
\mbb{P} \left[X_I > 2 q^{w(I)}|(\mcal{H}^{(1)}_n)_I| \right] \leq \frac{1}{\Omega(\mbb{E} X_I)} + \frac{\Delta}{\Omega((\mbb{E} X_I)^2)}
\end{equation}
where
$$
\Delta := \sum_{\substack{f,f' \in (\mcal{H}^{(1)}_n)_I \\ f \not= f' \\ f \cap f' \not = \emptyset }}\mbb{E} [\mathbbm{1}_{f} \cdot \mathbbm{1}_{f'}].
$$
For the latter quantity we have
\begin{align*}
\Delta &\overset{\phantom{\eqref{eq:cherry}}}{\leq} \sum_{\emptyset \not= J \subset I} \sum_{\substack{f,f' \in (\mcal{H}^{(1)}_n)_I \\ \xi(f\, \cap\, f') = J}} \mbb{E} [\mathbbm{1}_{f} \cdot \mathbbm{1}_{f'}]\\
& \overset{\phantom{\eqref{eq:cherry}}}{\leq} \sum_{\emptyset \not=J} q^{2w(I)-w(J)} \sum_{u \in (\mcal{H}^{(1)}_n)_J} \deg_{(\mcal{H}^{(1)}_n)_I}(u)^2\\
& \overset{\eqref{eq:cherry}}{\leq} \sum_{\emptyset \not=J} q^{2w(I)-w(J)} K^2 \frac{|(\mcal{H}^{(1)}_n)_I|^2}{|(\mcal{H}^{(1)}_n)_J|}\\
& \overset{\phantom{\eqref{eq:cherry}}}{=} K^2 q^{2w(I)}|(\mcal{H}^{(1)}_n)_I|^2 \sum_{\emptyset \not=J} \left(q^{w(J)}|(\mcal{H}^{(1)}_n)_J| \right)^{-1}.
\end{align*}
Substituting this estimate for $\Delta$ in~\eqref{eq:cheby} one arrives at
\begin{align*}
\mbb{P} \left[X_I > 2 q^{w(I)}|(\mcal{H}^{(1)}_n)_I| \right] & \leq \frac{1}{\Omega(q^{w(I)}|(\mcal{H}^{(1)}_n)_I|)} + \frac{K^2 q^{2w(I)}|(\mcal{H}^{(1)}_n)_I|^2 \sum_{\emptyset \not=J} \left(q^{w(J)}|(\mcal{H}^{(1)}_n)_J| \right)^{-1}}{\Omega(q^{2w(I)}|(\mcal{H}^{(1)}_n)_I|^2)}\\
& \leq \frac{1}{\Omega(q^{w(I)}|(\mcal{H}^{(1)}_n)_I|)} + \sum_{\emptyset \not=J} \frac{O(1)}{q^{w(J)}|(\mcal{H}^{(1)}_n)_J|}.
\end{align*}
Both summands in the last estimate vanish owing to~\eqref{eq:bounded} since it guarantees that $q^{w(X)}|(\mcal{H}^{(1)}_n)_X| \to \infty$ for every $\emptyset\neq X \subseteq [k_1]$.
\end{proof}
The role of the set $\mcal{A}$ from Observation~\ref{obs:witnesses} will be assumed by the set $V_{n,q,w}$; that is, we take $A=V_{n,q,w}$, with $\xi$ the random function from the definition of $V_{n,q,w}$.
Since $\tilde{A}\subseteq A$, we will assume from now on that the set $\tilde{A}$ satisfies $|(\mcal{H}^{(1)}_n[\tilde{A}])_I|\le 2 q^{w(I)}|(\mcal{H}^{(1)}_n)_I|$ for all $\emptyset \not= I \subseteq [k_1]$. Indeed, this holds with probability $1-o(1)$ by Lemma~\ref{lem:concentration}, and this fact will be exploited towards the end of the proof.
Now we exploit our Observation~\ref{obs:witnesses} as follows:
\begin{align*}
\mbb{P} [\mathfrak{H}_n[V_{n,q,w}] & \; \text{is not $r$-partite-Ramsey}] \\
& \le \sum_{(T^{(2)},\ldots,T^{(r)})}\sum_{\tilde{A}} \mbb{P}[e_\xi(\mcal{H}^{(1)}_n \big[ V_{n,q,w}\setminus (f_2(T^{(2)}) \cup \cdots \cup f_r(T^{(r)})) \big]) = 0\text{ and }\tilde{A}\subseteq V_{n,q,w}]\\
& \le \sum_{(T^{(2)},\ldots,T^{(r)})}\sum_{\tilde{A}} \mbb{P}[e_\xi(\mcal{H}^{(1)}_n \big[ V_{n,q,w}\setminus (f_2(T^{(2)}) \cup \cdots \cup f_r(T^{(r)})\cup\tilde{A}) \big]) = 0\text{ and }\tilde{A}\subseteq V_{n,q,w}]\\
&= \sum_{(T^{(2)},\ldots,T^{(r)})}\sum_{\tilde{A}} \mbb{P}[e_\xi(\mcal{H}^{(1)}_n \big[ V_{n,q,w}\setminus (f_2(T^{(2)}) \cup \cdots \cup f_r(T^{(r)})\cup\tilde{A}) \big]) = 0]\cdot\mbb{P}[\tilde{A}\subseteq V_{n,q,w}],
\end{align*}
where the sums are over all possible signatures $(T^{(2)},\ldots,T^{(r)})$ and the sets $\tilde{A}$ associated with them, which may arise as described in~\ref{itm:P1}, and $\xi$ is the random partition of $[n]$ from the definition of $V_{n,q,w}$.
Therefore, the remainder of the section will be concerned with establishing
\begin{equation}
\sum_{(T^{(2)},\ldots,T^{(r)})}\sum_{\tilde{A}} \mbb{P}[e_\xi(\mcal{H}^{(1)}_n \big[ V_{n,q,w}\setminus (f_2(T^{(2)}) \cup \cdots \cup f_r(T^{(r)})\cup\tilde{A}) \big]) = 0]\cdot\mbb{P}[\tilde{A}\subseteq V_{n,q,w}]
= o(1),\label{eq:main}
\end{equation}
which would prove Theorem~\ref{thm:main}.
\begin{lemma}\label{lem:Janson1}
For any choice of $\tilde{A}$ per~\eqref{eq:witness-collection}
\begin{equation}\label{eq:Janson-arg}
\mbb{P} \Bigg[ e_\xi(\mcal{H}^{(1)}_n \big[ V_{n,q,w}\setminus (f_2(T^{(2)}) \cup \cdots \cup f_r(T^{(r)})\cup\tilde{A}) \big]) = 0\Bigg] \leq 2^{-\Omega\left(\min_{\emptyset\neq I \subseteq [k_1]} q^{w(I)}\left|\left(\mcal{H}^{(1)}_n\right)_I\right|\right)}
\end{equation}
holds.
\end{lemma}
\begin{proof} Let $(T^{(2)},\ldots,T^{(r)})$ be the signatures associated with $\tilde{A}$ (as specified in~\ref{itm:P1}).
Owing to~\ref{itm:C2}, the $r$-partition of $[n]$ given by
$$
[n] \setminus (f_2(T^{(2)}) \cup \cdots \cup f_r(T^{(r)})), f_2(T^{(2)}), \ldots ,f_r(T^{(r)}),
$$
has the property that $e(H^{(i)}_n[f_i(T^{(i)})]) \leq \eps e(H^{(i)}_n)$ and hence $e(\mcal{H}^{(i)}_n[f_i(T^{(i)})]) \leq \eps k_i! e(\mcal{H}^{(i)}_n)$ for every $i=2$,\ldots, $r$.
As $\mathfrak{H}_n$ is $(r,\zeta)$-Ramsey and $\eps t! < \zeta$, we obtain
$$
e(\mcal{H}^{(1)}_n \big[ [n] \setminus (f_2(T^{(2)}) \cup \cdots \cup f_r(T^{(r)}))\big]) \geq \zeta e(\mcal{H}^{(1)}_n).
$$
The assumption that $\mcal{H}^{(1)}_n$ is $K$-tamed implies that each member of $\tilde A$ lies in at most $K \frac{|\mcal{H}^{(1)}_n|}{\min_{i\in [k_1]} \left|(\mcal{H}^{(1)}_n)_{\{i\}}\right|}$ edges of $\mcal{H}^{(1)}_n$. Owing to~\eqref{eq:bounded}, which $\mcal{H}^{(1)}_n$ satisfies by assumption,
$$
\min_{\emptyset \not= I \subseteq [k_1]} \left|(\mcal{H}^{(1)}_n)_I\right| = \Omega(\tau n / p )
$$
holds; where here we used the fact that $w(I) \geq 1$ for every $\emptyset \not= I \subseteq [k_1]$. Recalling that $|\tilde{A}|\le k_1\cdot r\cdot s\cdot c\cdot \tau n$, by~\eqref{eq:tilde-A-bound}, it follows that at most
$$
k_1\cdot r\cdot s\cdot c\cdot \tau n \cdot \frac{|\mcal{H}^{(1)}_n|}{\Omega(\tau n/p)} \leq (\zeta /2) |\mcal{H}^{(1)}_n|
$$
edges of $\mcal{H}^{(1)}_n$ involve $\tilde{A}$; where here we use the fact that $p \to 0$. We may then write that
\[
e(\mcal{H}^{(1)}_n \big[[n] \setminus (f_2(T^{(2)}) \cup \cdots \cup f_r(T^{(r)}) \cup \tilde{A})\big])\ge \zeta e(\mcal{H}^{(1)}_n)-|\tilde{A}|\cdot K \frac{|\mcal{H}^{(1)}_n|}{\min_{i\in [k_1]} \left|(\mcal{H}^{(1)}_n)_{\{i\}}\right|}\ge
(\zeta/2) e(\mcal{H}^{(1)}_n).
\]
For an edge $e \in E(\mcal{H}^{(1)}_n)$, we have
$$
\mbb{P} \left[ e \in E_\xi(\mcal{H}^{(1)}_n[V_{n,q,w}]) \right] \geq q^{w([k_1])}/k_1^{k_1}
$$
where $w([k_1]) := \sum_{i \in [k_1]} w(i)$ and the term $k_1^{k_1}$ is incurred by the need for the edge to be $k_1$-partite. Consequently,
$$
\mu:= \mbb{E} \left[ e_\xi(\mcal{H}^{(1)}_n \big[ V_{n,q,w} \setminus (f_2(T^{(2)}) \cup \cdots \cup f_r(T^{(r)}) \cup \tilde{A})\big]) \right] = \Omega_{\zeta,k_1}(1)q^{w([k_1])} e(\mcal{H}^{(1)}_n).
$$
Gearing up towards an application of Suen's inequality (see below), set
$$
\mathbbm{1}_{e} :=
\begin{cases}
1, & e \in E_\xi(\mcal{H}^{(1)}_n[V_{n,q,w}]),\\
0, & \text{otherwise}
\end{cases}
$$
and consider the quantities:
$$
\Delta:=\frac{1}{2}\sum_{\substack{e,f \in E(\mcal{H}^{(1)}_n)\\ e\, \cap\, f \not= \emptyset}} \mbb{E} [\mathbbm{1}_{e} \mathbbm{1}_{f}]
\; \text{and}\;
\delta:=\max_{e\in E(\mcal{H}^{(1)}_n)}\sum_{\substack{f \in E(\mcal{H}^{(1)}_n)\\ e\, \cap\, f \not= \emptyset}} \mbb{E} [\mathbbm{1}_{f}]
$$
estimates of which are required for the subsequent application of Suen's inequality.
For $\delta$, the following upper bound
$$
\delta\le \max_{\emptyset\neq I\subseteq [k_1]} k_1!\cdot k_1\cdot K q^{w([k_1])}\frac{|\mcal{H}^{(1)}_n|}{\left|\left(\mcal{H}^{(1)}_n\right)_I\right|}
$$
holds; where here we relied on~\eqref{eq:extend}.
Next, for the correlation term $2\Delta$ we have that
\begin{align*}
2\Delta & \overset{\phantom{\eqref{eq:traditional-boundedness}}}{:=} \sum_{\substack{e,f \in E(\mcal{H}^{(1)}_n)\\ e\, \cap\, f \not= \emptyset}} \mbb{E} [\mathbbm{1}_{e} \mathbbm{1}_{f}] \\
& \overset{\phantom{\eqref{eq:traditional-boundedness}}}{\leq} \sum_{\emptyset\neq I \subseteq [k_1]} \sum_{\substack{u \in \left(\mcal{H}^{(1)}_n\right)_I} }\sum_{\substack{e,f \in E(\mcal{H}^{(1)}_n) \\ e \, \cap \, f = u}} \mbb{E}[\mathbbm{1}_{e} \mathbbm{1}_{f}] \\
& \overset{\phantom{\eqref{eq:traditional-boundedness}}}{\leq} \sum_{I\neq \emptyset} \sum_{u} \sum_{e,f} q^{2w([k_1])-w(I)} \\
& \overset{\phantom{\eqref{eq:traditional-boundedness}}}{\leq} \sum_{I\neq \emptyset} q^{2w([k_1])-w(I)} \sum_{u \in \left(\mcal{H}^{(1)}_n\right)_I} \deg_{\mcal{H}^{(1)}_n}(u)^2\\
& \overset{\eqref{eq:traditional-boundedness}}{\leq} \sum_{I\neq \emptyset} q^{2w([k_1])-w(I)} K^2 \frac{e(\mcal{H}^{(1)}_n)^2}{\left|\left(\mcal{H}^{(1)}_n\right)_I\right|} \\
& \overset{\phantom{\eqref{eq:traditional-boundedness}}}{=} O \left( \frac{\mu^2}{\min_{\emptyset\neq I \subseteq [k_1]}q^{w(I)}|\left(\mcal{H}^{(1)}_n\right)_I|} \right).
\end{align*}
The claim now follows by Janson's version~\cite{J98} of Suen's inequality:
\[
\mbb{P}\left[e_\xi(\mcal{H}^{(1)}_n \big[V_{n,q,w}\setminus (f_2(T^{(2)}) \cup \cdots \cup f_r(T^{(r)})\cup\tilde{A})\big])=0 \right]\le \exp\left(-\min\left(\frac{\mu^2}{8\Delta},\frac{\mu}{2},\frac{\mu}{6\delta}\right)\right),
\]
and the estimates on $\mu$, $\Delta$, and $\delta$.
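For clarity, let us record how these estimates combine to give~\eqref{eq:Janson-arg}. Writing $m_q := \min_{\emptyset\neq I \subseteq [k_1]} q^{w(I)}\left|\left(\mcal{H}^{(1)}_n\right)_I\right|$, the bound on $2\Delta$ yields $\mu^2/\Delta = \Omega(m_q)$; the bound on $\delta$ yields $\mu/\delta = \Omega\left(\min_{\emptyset\neq I \subseteq [k_1]} \left|\left(\mcal{H}^{(1)}_n\right)_I\right|\right)$, which is $\Omega(m_q)$ since $q^{w(I)} \le 1$; and $\mu = \Omega_{\zeta,k_1}\left(q^{w([k_1])}\left|\left(\mcal{H}^{(1)}_n\right)_{[k_1]}\right|\right) = \Omega(m_q)$. Hence $\min\left(\mu^2/8\Delta,\, \mu/2,\, \mu/6\delta\right) = \Omega(m_q)$, as required.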
\end{proof}
Equipped with Lemma~\ref{lem:Janson1} we return to~\eqref{eq:main} and, upon the appropriate substitution, obtain
\begin{align}
\sum_{(T^{(2)},\ldots,T^{(r)})}\sum_{\tilde{A}} \mbb{P}[e_\xi(\mcal{H}^{(1)}_n \big[ V_{n,q,w}\setminus (f_2(T^{(2)}) \cup \cdots \cup f_r(T^{(r)})\cup\tilde{A}) \big]) = 0]\cdot\mbb{P}[\tilde{A}\subseteq V_{n,q,w}]\nonumber\\
\le 2^{-\Omega\left(\min_{\emptyset\neq I \subseteq [k_1]} q^{w(I)}\left|\left(\mcal{H}^{(1)}_n\right)_I\right|\right)}\sum_{(T^{(2)},\ldots,T^{(r)})}\sum_{\tilde{A}}\mbb{P}\left[\tilde{A}\subseteq V_{n,q,w}\right].\label{eq:centre2}
\end{align}
Recall that every set of the form $\tilde{A}$ has the property
that each of its elements $v$ is covered by some witness set $Z_{u} \in \binom{[n]}{\le k_1}$ (for some element $u$, not necessarily $v$ itself) that is a subset of some partite edge
$e \in E_\xi(\mcal{H}^{(1)}_n)$ such that $\xi(Z_{u})$ is a $w$-minimiser (cf.~\ref{itm:P1} of Observation~\ref{obs:witnesses}).
Since each such $\tilde{A}$ arises as the union over all witnesses $Z_{u}$, where $u$ is an element in some of the
possible tuples of signatures $(T^{(2)},\ldots,T^{(r)})$, we can make the following definition:
\[
\mathfrak{U}_k := \{\tilde{A} \colon \text{$\tilde{A}$ is $k$-coverable}\},
\]
where $\tilde{A}$ is said to be $k$-{\em coverable} if the least number of sets $Z_v$ required to form $\tilde{A}$ as their union is $k$. By~\ref{itm:P1} each such set $\tilde{A}\in\mathfrak{U}_k$ can be covered using at most $r\cdot s\cdot c\cdot \tau n$ sets of the form $Z_v$.
Consequently, the double sum appearing on the r.h.s. of~\eqref{eq:centre2}
can be estimated as
\begin{equation}\label{eq:centre3}
\sum_{(T^{(2)},\ldots,T^{(r)})}\sum_{\tilde{A}}\mbb{P}\left[\tilde{A}\subseteq V_{n,q,w}\right] \leq
(rs)^{k_1\cdot r\cdot s \cdot c \cdot \tau n}\sum_{k=1}^{ r\cdot s \cdot c \cdot \tau n} \sum_{\tilde{A} \in \mathfrak{U}_k} \mbb{P} [\tilde{A} \subseteq V_{n,q,w}],
\end{equation}
where here the factor $(rs)^{k_1\cdot r\cdot s \cdot c \cdot \tau n}$ accounts for the number of possibilities to reconstruct the signature ensemble $(T^{(2)},\ldots,T^{(r)})$ from a given $\tilde{A}$.
The minimality of $k$ involved in the $k$-coverability of a set $\tilde{A} \in \mathfrak{U}_k$ implies that every set $\tilde{A} \in \mathfrak{U}_k$ gives rise to at most $k$ members in $\mathfrak{U}_{k-1}$ which can be attained by simply discarding precisely one of the sets of the form $Z_v$ involved in building $\tilde{A}$. That is, there are at most $k$ distinct members $\tilde{A}' \in \mathfrak{U}_{k-1}$ such that $\tilde{A} = \tilde{A}' \cup Z_v$ for some $v\in \tilde{A}$.
Peering closer into this union, we write $\tilde{A} = \tilde{A}' \cup \boldsymbol{i} \cup \boldsymbol{r}$ so as to distinguish between the {\sl intersection} $\boldsymbol{i} = Z_v \cap \tilde{A}'$ and the {\sl remainder} $\boldsymbol{r}$ of this set. With this in mind, let us recall that $\mcal{W}:= \mcal{W}(w)$ was defined to be the set of proper $w$-minimisers and write
\begin{equation}\label{eq:break}
k \sum_{\tilde{A} \in \mathfrak{U}_k} \mbb{P} [\tilde{A} \subseteq V_{n,q,w}] \leq
\sum_{\tilde{A}' \in \mathfrak{U}_{k-1}} \sum_{W \in \mcal{W}} \sum_{I \subseteq W} \sum_{\substack{\boldsymbol{i} \subseteq \tilde{A}' \\ \boldsymbol{i} \in (\mcal{H}^{(1)}_n)_I}} \sum_{\substack{\boldsymbol{r} \in (\mcal{H}^{(1)}_n)_{W \setminus I} \\ \boldsymbol{i} \cup \boldsymbol{r} \in (\mcal{H}^{(1)}_n)_W}} \mbb{P} [\tilde{A}' \cup \boldsymbol{r} \subseteq V_{n,q,w}].
\end{equation}
The sums on the right hand side of~\eqref{eq:break} are as follows. We consider the generation of all members of $\mathfrak{U}_k$ from the members of $\mathfrak{U}_{k-1}$ via unions of the latter with all possible sets of the form $Z_v$. Given $\tilde{A}' \in \mathfrak{U}_{k-1}$, we seek to traverse the sets of the form $Z_v$ which extend $\tilde{A}'$. As each such set $Z_v$ is associated with a $w$-minimiser (through $\xi$), the second sum runs over all possible options for $\xi(Z_v)$. The set $\boldsymbol{i} \cup \boldsymbol{r}$, playing the r\^ole of $Z_v$, is required to be $\xi$-partite and to satisfy $\xi(\boldsymbol{i} \cup \boldsymbol{r}) = W$ (for the $W \in \mcal{W}$ chosen in the second sum). The third sum ranges over all index sets $I$ that $\xi(\boldsymbol{i})$ may assume, the fourth over all subsets of $\tilde{A}'$ that may assume the r\^ole of $\boldsymbol{i}$, and the fifth over all possible completions $\boldsymbol{r}$. We remind the reader that the notation $\boldsymbol{i} \subseteq \tilde{A}'$ and $\tilde{A}' \cup \boldsymbol{r}$ means that we may view $\boldsymbol{i}$, resp.\ $\boldsymbol{r}$, also as sets (by forgetting the order), and that we denote by $\boldsymbol{i} \cup \boldsymbol{r}$ an ordered tuple according to $\xi$.
The events $\{\tilde{A}' \subseteq V_{n,q,w}\}$ and $\{\boldsymbol{r} \subseteq V_{n,q,w} \}$ are independent, since the sets $\tilde{A}'$ and $\boldsymbol{r}$ are disjoint. Then
\begin{align}
k \sum_{\tilde{A} \in \mathfrak{U}_k} & \mbb{P} [\tilde{A} \subseteq V_{n,q,w}] \leq \nonumber \\
& \sum_{\tilde{A}' \in \mathfrak{U}_{k-1}} \mbb{P} [\tilde{A}' \subseteq V_{n,q,w}]\sum_{W \in \mcal{W}} \sum_{I \subseteq W} \sum_{\substack{\boldsymbol{i} \subseteq \tilde{A}' \\ \boldsymbol{i} \in (\mcal{H}^{(1)}_n)_I}} \sum_{\substack{\boldsymbol{r} \in (\mcal{H}^{(1)}_n)_{W \setminus I} \\ \boldsymbol{i} \cup \boldsymbol{r} \in (\mcal{H}^{(1)}_n)_W}} \mbb{P} [\boldsymbol{r} \subseteq V_{n,q,w}] \leq \nonumber\\
& \sum_{\tilde{A}' \in \mathfrak{U}_{k-1}} \mbb{P} [\tilde{A}' \subseteq V_{n,q,w}]\sum_{W \in \mcal{W}} \sum_{I \subseteq W}|(\mcal{H}^{(1)}_n[\tilde{A}])_I| q^{w(W \setminus I)} \Delta^{(I)}((\mcal{H}^{(1)}_n)_W) \overset{\eqref{eq:extend}}{\leq} \nonumber \\
& \sum_{\tilde{A}' \in \mathfrak{U}_{k-1}} \mbb{P} [\tilde{A}' \subseteq V_{n,q,w}]\sum_{W \in \mcal{W}} \sum_{I \subseteq W} |(\mcal{H}^{(1)}_n[\tilde{A}])_I| \, q^{w(W \setminus I)} K \frac{|(\mcal{H}^{(1)}_n)_W|}{|(\mcal{H}^{(1)}_n)_I|};
\label{eq:upper}
\end{align}
An application of Lemma~\ref{lem:concentration} allows us to further estimate~\eqref{eq:upper} by appealing to the fact that $|(\mcal{H}^{(1)}_n[\tilde{A}])_I|\le X_I\le 2 q^{w(I)}|(\mcal{H}^{(1)}_n)_I|$ holds with high probability ($1-o(1)$):
\begin{align}
k \sum_{\tilde{A} \in \mathfrak{U}_k} \mbb{P} [\tilde{A} \subseteq V_{n,q,w}] & \leq \sum_{\tilde{A}' \in \mathfrak{U}_{k-1}} \mbb{P} [\tilde{A}' \subseteq V_{n,q,w}]\sum_{W \in \mcal{W}} \sum_{I \subseteq W} 2q^{w(I)}|(\mcal{H}^{(1)}_n)_I| q^{w(W \setminus I)} K \frac{|(\mcal{H}^{(1)}_n)_W|}{|(\mcal{H}^{(1)}_n)_I|} \nonumber \\
& = \sum_{\tilde{A}' \in \mathfrak{U}_{k-1}} \mbb{P} [\tilde{A}' \subseteq V_{n,q,w}]\sum_{W \in \mcal{W}} \sum_{I \subseteq W}2 K q^{w(W)}|(\mcal{H}^{(1)}_n)_W| \nonumber\\
& \leq \sum_{\tilde{A}' \in \mathfrak{U}_{k-1}} \mbb{P} [\tilde{A}' \subseteq V_{n,q,w}]\sum_{W \in \mcal{W}} (2K)^{2|W|} q^{w(W)}|(\mcal{H}^{(1)}_n)_W| \label{eq:middle}
\end{align}
Noting that
$$
\mbb{E} |\mathfrak{U}_k| = \sum_{\tilde{A} \in \mathfrak{U}_k} \mbb{P} [\tilde{A} \subseteq V_{n,q,w}] \; \text{and}\; \mbb{E} |\mathfrak{U}_{k-1}| = \sum_{\tilde{A}' \in \mathfrak{U}_{k-1}} \mbb{P} [\tilde{A}' \subseteq V_{n,q,w}]
$$
we may rewrite~\eqref{eq:middle} as
$$
\mbb{E} |\mathfrak{U}_k| \leq \frac{\mbb{E} |\mathfrak{U}_{k-1}| \sum_{W \in \mcal{W}} (2K)^{2|W|} q^{w(W)}|(\mcal{H}^{(1)}_n)_W|}{k}.
$$
Owing to $|\mathfrak{U}_0| = 1$ we may write
$$
\mbb{E} |\mathfrak{U}_k| \leq \frac{\left(\sum_{W \in \mcal{W}} (2K)^{2|W|} q^{w(W)}|(\mcal{H}^{(1)}_n)_W|\right)^k}{k!}.
$$
This combined with~\eqref{eq:centre3} and~\eqref{eq:break} now yields
$$
\sum_{(T^{(2)},\ldots,T^{(r)})}\sum_{\tilde{A}}\mbb{P}\left[\tilde{A}\subseteq V_{n,q,w}\right] \leq (rs)^{k_1\cdot r\cdot s \cdot c \cdot \tau n}\sum_{k=1}^{r\cdot s \cdot c \cdot \tau n} \frac{\left(\sum_{W \in \mcal{W}} (2K)^{2|W|} q^{w(W)}|(\mcal{H}^{(1)}_n)_W|\right)^k}{k!}.
$$
Substituting this into~\eqref{eq:centre2} and using $k!\ge (k/e)^k$ we arrive at
\begin{multline}
\mbb{P} [\mathfrak{H}_n[V_{n,q,w}] \; \text{is not $r$-partite-Ramsey}]\\
\overset{\phantom{w \geq 1}}{\leq}
(rs)^{k_1\cdot r\cdot s \cdot c \cdot \tau n}2^{-\Omega\left(\min_{\emptyset\neq I \subseteq [k_1]} q^{w(I)}|(\mcal{H}^{(1)}_n)_I|\right)} \sum_{k=1}^{r\cdot s \cdot c \cdot \tau n} \left(\frac{\sum_{W \in \mcal{W}} e(2K)^{2|W|} q^{w(W)}|(\mcal{H}^{(1)}_n)_W|}{k}\right)^k\label{eq:final_R}
\end{multline}
The function $x \mapsto (d/x)^x$ is increasing for $0 < x \leq d/e$. Therefore, the expression in the inner sum is maximised for $k=M$, where
\[
M:=\min\left\{\sum_{W \in \mcal{W}} (2K)^{2|W|} q^{w(W)}|(\mcal{H}^{(1)}_n)_W|,r\cdot s \cdot c \cdot \tau n\right\}.
\]
In what follows we replace $q$ with $Cp$ (since the Ramsey property is monotone and $q\ge Cp$) and we also use
$\min_{\emptyset \not= I \subseteq [k_1]} p^{w(I)}|(\mcal{H}^{(1)}_n)_I| = \Theta(\tau n)$, which leads us to
\[
\left(\frac{\sum_{W \in \mcal{W}} e(2K)^{2|W|} q^{w(W)}|(\mcal{H}^{(1)}_n)_W|}{M}\right)^M\le \max\left\{e^M, e^{O(\tau n\log C)}\right\}\le e^{O(\tau n\log C)},
\]
where we used $M\le r\cdot s \cdot c \cdot \tau n$ and we hide $s$, $r$, $c$ in the $O(\cdot)$-notation. From this we can bound the right hand side of~\eqref{eq:final_R}
from above by:
\[
(rs)^{k_1\cdot r\cdot s \cdot c \cdot \tau n}2^{-\Omega\left(\min_{\emptyset\neq I \subseteq [k_1]} q^{w(I)}|(\mcal{H}^{(1)}_n)_I|\right)} r\cdot s \cdot c \cdot \tau n\cdot e^{O(\tau n\log C)}.
\]
Appealing to $(p,w,\tau)$-boundedness and setting again $q=Cp$, we may write for $C$ sufficiently large:
\begin{equation*}
\mbb{P} [\mathfrak{H}_n[V_{n,q,w}] \; \text{is not $r$-partite-Ramsey}]
\le (rs)^{k_1\cdot r\cdot s \cdot c \cdot \tau n}2^{-C \cdot \Omega (\tau n)} e^{O(\log(\tau n) + (\log C)\, \tau n)}=o(1),
\end{equation*}
where we exploited that $\tau n \to \infty$ due to $(p,w,\tau)$-boundedness. This completes the proof of Theorem~\ref{thm:main}.
\section{Auxiliary results for partition-regular matrices}\label{sec:aux}
Here we collect some facts about bounds on the number of solutions to the matrical equation $A x = b$, where $A$ is some $\ell\times k$ matrix with integer entries, $x\in\mbb{N}^k$ and $b\in\mbb{N}^\ell$. We write $\mathrm{\mbs{rk}}\, A$ to denote the rank of $A$. Further define $\overline{I} : = [k] \setminus I$ whenever $I \subseteq [k]$. Recall that $A_I$ denotes the submatrix of $A$ in which we only keep the columns indexed by $i\in I$. For a given $\ell\times k$-matrix $A$, we write $\mcal{H}$ for the set of solution vectors $x\in[n]^k$ to the equation $Ax=0$. Given $I\subseteq [k]$, we write $\mcal{H}_I$ for the set of all projections $x_I$, where $x\in[n]^k$ is a solution to $Ax=0$. For $j\in[k]$ we write $A_j$ to denote the $j$th column of $A$. For $J\subseteq [k]$, let $V(A_J)$ denote the vector space spanned by the columns of $A_J$.
Given two sets $X$ and $Y$ we write $X^Y$ for the set of functions of the form $Y \to X$. For a set $N\subset \mathbb{Z}$, we write
$A_{J}\cdot N^{J}:=\{\sum_{j\in J} \alpha_j A_{j}\colon \alpha_j\in N\}$.
\begin{lemma}\label{lem:project}
Let $A$ be an $\ell\times k$ matrix with integer entries and with $\mathrm{\mbs{rk}}\, A=\ell$. Then there exists a constant $K=K_A>0$ so that
for every $I \subseteq [k]$ we have
\begin{equation}\label{eq:project-upper}
|\mcal{H}_I| \leq K n^{|I| - \mathrm{\mbs{rk}}\, A + \mathrm{\mbs{rk}}\, A_{\overline{I}}}.
\end{equation}
\end{lemma}
\begin{proof}
Set $C:=V(A_{I})\cap V(A_{\overline{I}})$ and observe that
$\mathrm{\mbs{rk}}\, A+\dim C=\mathrm{\mbs{rk}}\, A_{I}+\mathrm{\mbs{rk}}\, A_{\overline{I}}$ holds. For $y\in\mcal{H}_I$ there is an $x\in[n]^k$ with $Ax=0$ and $x_I=y$. Since $A_Iy=-A_{\overline{I}}x_{\overline{I}}$
we infer that $A_Iy\in C$. Let $b\in C\cap A_{\overline{I}}\cdot [n]^{\overline{I}}$. Next we estimate the number of solutions $y\in[n]^I$ with $A_Iy=-b$.
Let $I_1\subseteq I$ be such that $\mathrm{\mbs{rk}}\, A_{I_1}=\mathrm{\mbs{rk}}\, A_I$, hence for any choice of $z\in[n]^{I\setminus I_1}$, there is at most one solution to $A_Iy=-b$ with $y_{I\setminus I_1}=z$ (because $A_{I_1}y_{I_1}=-b-A_{I\setminus I_1}y_{I\setminus I_1}=-b-A_{I\setminus I_1}z$ has at most one solution due to the linear independence of the columns of $A_{I_1}$).
Thus, for each $b\in C\cap A_{\overline{I}}\cdot [n]^{\overline{I}}$, there are at most $n^{|I|-|I_1|}$ vectors $y\in\mcal{H}_I$ with $A_Iy=-b$.
Since every solution $x\in[n]^k$ to $Ax=0$ must satisfy $A_{\overline{I}}x_{\overline{I}}\in C$, it remains to estimate $|C\cap A_{\overline{I}}\cdot [n]^{\overline{I}}|$.
For every $i\in\overline{I}$, let $A'_i$ be the orthogonal projection of the column $A_i$ onto $C$, i.e.\ $(A_i-A'_i)^T z=0$ for all $z\in C$, and let $A'$ denote the matrix whose columns are the orthogonal projections of the columns of $A$ onto $C$.
If $b\in C\cap A_{\overline{I}}\cdot [n]^{\overline{I}}$, then $b$ is a linear integer combination of $A'_i$ with $i\in\overline{I}$. Let $J\subseteq \overline{I}$ with $|J|=\dim C$ be such that $A'_j$ with $ j\in J$ form a basis for $C$. Every other $A'_i$ ($i\in \overline{I}$) is a rational linear combination of $\{A'_j\colon j\in J\}$ where the coefficients only depend on the entries of the matrix $A$, i.e.\ $A'_i=\sum_{j\in J}A'_j\beta_{ij}$ with $\beta_{ij}=\frac{c}{d}$ with $c$, $d\in \mathbb{Z}$ and $|c|$, $|d|\le K'$ for some absolute constant $K'=K'_A$. It follows that $A'_{\overline{I}}\cdot[n]^{\overline{I}}\subseteq \{\sum_{j\in J} A'_j (\alpha_j+\sum_{i\in \overline{I}\setminus J}\alpha_i\beta_{ij})\colon \alpha_i\in [n]\}$, and it is not difficult to see that the number of possible coefficients for every $A'_j$ is at most $2 |\overline{I}| (K'!)^2 n=O(n)$. It follows that there exists a constant $K=K_A$ (we can take $K$ to be at most $(2 |\overline{I}| (K'!)^2)^{\dim C}$) with
\[
|\mcal{H}_I|\le K n^{|I|-|I_1|+\dim C}= Kn^{|I|-\mathrm{\mbs{rk}}\, A+\mathrm{\mbs{rk}}\, A_{\overline{I}}},
\]
where we used $\mathrm{\mbs{rk}}\, A+\dim C=\mathrm{\mbs{rk}}\, A_{I}+\mathrm{\mbs{rk}}\, A_{\overline{I}}$.
\end{proof}
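Although the proof is elementary, a small numerical experiment may help build intuition for Lemma~\ref{lem:project}. The following Python sketch (illustrative only, and not part of the argument) enumerates the solutions of the Schur equation $x+y=z$ over $[n]$ and compares $|\mcal{H}_I|$ with $n^{|I| - \mathrm{\mbs{rk}}\, A + \mathrm{\mbs{rk}}\, A_{\overline{I}}}$ for every $I$.
\begin{verbatim}
import numpy as np
from itertools import combinations, product

n = 40
A = np.array([[1, 1, -1]])        # Schur equation: x + y = z
k = A.shape[1]

# all solutions x in [n]^k of A x = 0
H = [x for x in product(range(1, n + 1), repeat=k)
     if not (A @ np.array(x)).any()]

def rk(M):
    return np.linalg.matrix_rank(M) if M.size else 0

for s in range(1, k + 1):
    for I in combinations(range(k), s):
        Ibar = [j for j in range(k) if j not in I]
        proj = {tuple(x[i] for i in I) for x in H}      # the set H_I
        bound = n ** (len(I) - rk(A) + rk(A[:, Ibar]))
        print(I, len(proj), bound)
\end{verbatim}
For instance, for $I=\{1,2\}$ the sketch reports $|\mcal{H}_I| = n(n-1)/2$ against the bound $n^{2-1+1}=n^2$, while for $I=\{1\}$ both sides are of order $n$.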
For an $\ell\times k$-matrix $A$, a subset $I\subseteq [k]$ and a vector $y\in [n]^I$, the
\emph{degree} $\deg_\mcal{H}(y)$ of $y$ in $\mcal{H}$ is given by the number of $z \in [n]^{\overline{I}}$ such that
\[
A_I y + A_{\overline{I}} z = 0.
\]
Similarly, for $I \subseteq W \subseteq [k]$ and $y\in[n]^I$, we write $\deg_{\mcal{H}_W}(y)$ for the number of $y'\in[n]^W$ with $y=y'_I$.
\begin{lemma}\label{lem:deg}
Let $A$ be an $\ell\times k$ matrix with $\mathrm{\mbs{rk}}\, A=\ell$.
Let $\emptyset \neq I \subseteq W \subseteq [k]$. Then there exists a constant $K=K_A>0$ so that
\begin{equation}\label{eq:deg}
\deg_{\mcal{H}_W}(y) \leq K n^{|W \setminus I| - \mathrm{\mbs{rk}}\, A_{_{\overline{I}}} + \mathrm{\mbs{rk}}\, A_{_{\overline{W}}}}
\end{equation}
holds for every $y \in \mcal{H}_I$.
\end{lemma}
\begin{proof}
For a given $y\in \mcal{H}_I$, we need to estimate the number of projections $x_W$, where $x\in [n]^k$ is a solution
to $A_{\overline{I}}x_{\overline{I}}=-A_Iy$ and $x_I=y$. Since for two solution vectors $x'$, $x''$ with $x'_I=y=x''_I$ we have $A_{\overline{I}} (x'_{\overline{I}}-x''_{\overline{I}})=0$, we will instead be interested in estimating
the number of $z_{W\setminus I}$ so that the vectors $z$ are solutions to $A_{\overline{I}} z=0$ with $z\in[-n+1,n-1]^{\overline{I}}$ (as this would be an upper bound for $\deg_{\mcal{H}_W}(y)$).
A straightforward adaptation of Lemma~\ref{lem:project} above yields an upper bound of the form
\[
K n^{|W\setminus I| - \mathrm{\mbs{rk}}\, A_{\overline{I}} + \mathrm{\mbs{rk}}\, A_{([k]\setminus I)\setminus(W\setminus I)}}=
K n^{|W\setminus I| - \mathrm{\mbs{rk}}\, A_{\overline{I}} + \mathrm{\mbs{rk}}\, A_{\overline{W}}}.
\]
\end{proof}
For $\emptyset \not= I \subset W \subseteq [k]$, set $\Delta^{(I)}(\mcal{H}_W):= \max \{\deg_{\mcal{H}_W}(\boldsymbol{i}) : \boldsymbol{i} \in \mcal{H}_I\}$.
\begin{lemma}\label{lem:lower}
Let $A$ be an $\ell\times k$ matrix with $\mathrm{\mbs{rk}}\, A=\ell$.
If the matrical equation $A x = 0$ has $\Omega(n^{k - \mathrm{\mbs{rk}}\, A})$ solutions over $[n]^k$ then
\[
|\mcal{H}_I| = \Omega\left(n^{|I|-\mathrm{\mbs{rk}}\, A + \mathrm{\mbs{rk}}\, A_{_{\overline{I}}}}\right)
\]
holds for every $I\subseteq [k]$.
\end{lemma}
\begin{proof}
Suppose for contradiction that $|\mcal{H}_I| = o\left(n^{|I|-\mathrm{\mbs{rk}}\, A + \mathrm{\mbs{rk}}\, A_{_{\overline{I}}}}\right)$ for some $I \subseteq [k]$. Then this assumption together with Lemma~\ref{lem:deg} (applied with $W=[k]$, so that $\mathrm{\mbs{rk}}\, A_{\overline{[k]}} = 0$) yields
\[
\Omega(n^{k-\mathrm{\mbs{rk}}\, A}) = \left|\mcal{H}\right| \le |\mcal{H}_I|\Delta^{(I)}(\mcal{H}) = o (n^{|I| - \mathrm{\mbs{rk}}\, A + \mathrm{\mbs{rk}}\, A_{\overline{I}}} \cdot n^{|[k]\setminus I| - \mathrm{\mbs{rk}}\, A_{\overline{I}} + \mathrm{\mbs{rk}}\, A_{\overline{[k]}}}) = o(n^{k-\mathrm{\mbs{rk}}\, A}),
\]
a contradiction.
\end{proof}
\noindent
Recall that a matrix $A$ is \emph{partition-regular} if in any finite colouring of $\mbb{N}$ there is a monochromatic solution to $Ax=0$. Frankl, Graham and R\"odl~\cite{FGR88}
proved the following supersaturation property of partition-regular matrices.
\begin{theorem}\label{thm:super} {\em~\cite[Theorem~1]{FGR88}}
Let $A$ be a partition-regular $\ell \times k$ matrix of rank $\ell$ and let $r \in \mbb{N}$. There exists a $c:=c(r,A)$ such that for any $r$-colouring of $[n]$ with $n$ sufficiently large there exists a colour $i$ for which at least $c n^{k - \ell}$ solutions are entirely coloured $i$.
\end{theorem}
This Ramsey supersaturation result, Lemma~\ref{lem:lower}, and Lemma~\ref{lem:project} yield the following.
\begin{corollary}\label{cor:project}
Let $A$ be an $\ell \times k$ partition-regular matrix of rank $\ell$. Then for every $I \subseteq [k]$
\begin{equation}\label{eq:project}
|\mcal{H}_I|= \Theta \left(n^{|I| - \mathrm{\mbs{rk}}\, A + \mathrm{\mbs{rk}}\, A_{\overline{I}}}\right).
\end{equation}
\end{corollary}
Finally, we will be using some further properties of partition-regular matrices, which we collect in the following lemma; see, e.g.,~\cite[Proposition~4.3]{HST19}.
\begin{lemma}{\cite[Proposition~4.3]{HST19}}
Let $A$ be an $\ell \times k$ irredundant partition-regular matrix and let $I \subseteq [k]$.
\begin{enumerate}
\item If $|I| = 1$ then
\begin{equation}\label{eq:depend}
\mathrm{\mbs{rk}}\, A - \mathrm{\mbs{rk}}\, A_{\overline{I}} = 0.
\end{equation}
\item If $|I| \geq 2$ then
\begin{equation}\label{eq:luck}
k-|I| - \mathrm{\mbs{rk}}\, A_{\overline{I}} \leq k- \mathrm{\mbs{rk}}\, A-1 - \frac{|I|-1}{m(A)}.
\end{equation}
\item
\begin{equation}\label{eq:bound_mA}
m(A)>1
\end{equation}
\end{enumerate}
\end{lemma}
We conclude this section with the following observation regarding the parameter $m(A,B)$ (see Definition~\ref{def:mAB}).
\begin{observation}\label{obs:const}
Let $A$ and $B$ be two matrices of dimensions $\ell_A \times k_A$ and $\ell_B \times k_B$, respectively. If $m(A) \geq m(B)$ then $m(A,B) \geq m(B)$.
In particular, $m(A,A)=m(A)$.
\end{observation}
\begin{proof}
Let $U \subseteq [k_A]$ with $|U|\ge 2$ be the set defining $m(A)$. It suffices to show that
\[
\frac{|U|}{|U|-\mathrm{\mbs{rk}}\, A + \mathrm{\mbs{rk}}\, A_{\overline{U}} - 1 + 1/m(B)} \geq m(B),
\]
since $\frac{|U|}{|U|-\mathrm{\mbs{rk}}\, A + \mathrm{\mbs{rk}}\, A_{\overline{U}} - 1 + 1/m(B)}$ is a lower bound on $m(A,B)$.
We rewrite the inequality above as
\[
|U| \geq m(B) (|U|-\mathrm{\mbs{rk}}\, A + \mathrm{\mbs{rk}}\, A_{\overline{U}} - 1) +1
\]
and then again as
\[
\frac{|U|-1}{|U|-\mathrm{\mbs{rk}}\, A + \mathrm{\mbs{rk}}\, A_{\overline{U}} - 1} \geq m(B).
\]
Noticing that the l.h.s.\ of the last inequality equals $m(A)$ (by our choice of $U$),
we conclude the proof of this observation because $m(A) \geq m(B)$ holds by the initial assumption.
\end{proof}
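The quantities $m(A)$ and $m(A,B)$ are readily computed for small matrices, and Observation~\ref{obs:const} can be sanity-checked numerically. The following Python sketch is illustrative only; it assumes the identity $m(A)=\max_{|I|\ge 2}\,(|I|-1)/(|I|-\mathrm{\mbs{rk}}\, A+\mathrm{\mbs{rk}}\, A_{\overline{I}}-1)$ (which is how $m(A)$ enters the proof above) and, in line with Definition~\ref{def:mAB}, takes $m(A,B)$ to be the maximum over $|I|\ge 2$ of the lower bounds appearing in the proof.
\begin{verbatim}
import numpy as np
from itertools import combinations

def rk(M):
    return np.linalg.matrix_rank(M) if M.size else 0

def indexsets(k):                      # all I with |I| >= 2
    for s in range(2, k + 1):
        yield from combinations(range(k), s)

def denom(A, I):                       # |I| - rk A + rk A_{Ibar} - 1
    k = A.shape[1]
    Ibar = [j for j in range(k) if j not in I]
    return len(I) - rk(A) + rk(A[:, Ibar]) - 1

def m(A):                              # denominators positive for the matrices below
    return max((len(I) - 1) / denom(A, I) for I in indexsets(A.shape[1]))

def mAB(A, B):
    mB = m(B)
    return max(len(I) / (denom(A, I) + 1.0 / mB) for I in indexsets(A.shape[1]))

schur = np.array([[1, 1, -1]])         # x + y = z
ap3   = np.array([[1, 1, -2]])         # x + y = 2z  (3-term APs)
print(m(ap3), m(schur), mAB(ap3, schur))     # -> 2.0 2.0 2.0
print(mAB(schur, schur) == m(schur))         # m(A,A) = m(A): True
\end{verbatim}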
\section{Proof of Theorem~\ref{thm:Rado}}\label{sec:Rado}
In this section we deduce Theorem~\ref{thm:Rado} from Theorem~\ref{thm:main}. To that end let $A_1,\ldots,A_r$, $r \geq 1$, be Rado matrices such that
$
m(A_1) \geq m(A_2) \geq \cdots \geq m(A_r)
$
with $A_i$ having dimensions $\ell_i \times k_i$.
For $n \in \mbb{N}$ and $i \in [r]$ define $\mcal{H}^{(i)}_n=(H^{(i)}_n,\boldsymbol{\pi}^{(i)}_n)$ to be the ordered $k_i$-uniform hypergraph whose vertex set is $[n]$ and whose edge set consists of all solutions over $[n]$ of the matrical equation $A_i x = 0$ with pairwise distinct entries of the vectors $x$. The sequences $(\mcal{H}^{(i)}_n)_{n \in \mbb{N}}$ are thus defined, as well as the sequence $\boldsymbol{\fH}=(\mathfrak{H}_n)_{n \in \mbb{N}}$. Set $p:=p(n):= n^{-1/m(A_1,A_2)}$.
We seek to apply Theorem~\ref{thm:main} to the sequences $\boldsymbol{\fH}$ and $(\mcal{H}^{(i)}_n)_{n \in \mbb{N}}$. Hence we need to verify that these sequences satisfy the premise of Theorem~\ref{thm:main}.
\vspace{2ex}
\TPARA{Ramseyness}
The existence of $\zeta >0$ such that $\mathfrak{H}_n$ is $(r,\zeta)$-Ramsey whenever $n$ is sufficiently large is asserted by~\cite[Lemma~4.4]{HST19}, where this is deduced from the removal lemma of~\cite[Theorem~2]{KSV12}. Somewhat more simply, one may set $B:= \mathrm{\boldsymbol{diag}}(A_1,\ldots,A_r)$, which, as noted in the Introduction, is partition-regular. Then Theorem~\ref{thm:super} yields a constant $c(r,B)$ such that for any $n$ sufficiently large, any $r$-colouring of $[n]$ admits at least $c(r,B) n^{\sum_i k_i - \mathrm{\mbs{rk}}\,(B)} = c(r,B) n^{\sum_i (k_i - \mathrm{\mbs{rk}}\, A_i)}$ monochromatic solutions to the matrical equation $B x = 0$. It follows that $\boldsymbol{\fH}$ is $(r,c(r,B)/r)$-Ramsey.
\TPARA{Tameness of $\mcal{H}^{(1)}_n$}
Fix $\emptyset \not= I \subset W \subseteq [k_1]$. By Corollary~\ref{cor:project}
$$
\frac{\left|\left(\mcal{H}^{(1)}_n\right)_W\right|}{\left|\left(\mcal{H}^{(1)}_n\right)_I\right|} =\Theta \left( \frac{n^{|W|-\mathrm{\mbs{rk}}\, A_1 + \mathrm{\mbs{rk}}\, (A_1)_{\overline{W}}}}{n^{|I|-\mathrm{\mbs{rk}}\, A_1+\mathrm{\mbs{rk}}\, (A_1)_{\overline{I}}}}\right) = \Theta \left(n^{|W \setminus I| -\mathrm{\mbs{rk}}\, (A_1)_{\overline{I}} + \mathrm{\mbs{rk}}\, (A_1)_{\overline{W}}}\right).
$$
By~\eqref{eq:deg}
$$
\deg_{\left(\mcal{H}^{(1)}_n\right)_W}(y) \leq K n^{|W \setminus I| - \mathrm{\mbs{rk}}\, (A_1)_{\overline{I}} + \mathrm{\mbs{rk}}\, (A_1)_{\overline{W}}}
$$
holds for every $y \in \left(\mcal{H}^{(1)}_n\right)_I$.
It follows that $\mcal{H}^{(1)}_n$ is $O(1)$-tamed.
\vspace{2ex}
\TPARA{Containerability for $(H^{(i)}_n)_{i=2}^r$}
We check that the conditions for `containerability' specified in~\cite[Corollary~3.6]{ST15} (see Theorem~\ref{thm:containers}) are met by $H^{(2)}_n,\ldots, H^{(r)}_n$. Pick $\eps < \min\{1/2, c(r,B)/r\}$ and set $\tau := C'n^{-1/m(A_2)}$, where $C'$ is some sufficiently large constant. As $\tau = o(1)$, we need to verify for every $i \in [2,r]$ that a sufficiently large $C'$ implies (with our choice of $\tau$):
\begin{equation}\label{eq:tau}
\delta(H^{(i)}_n,\tau) \leq \eps/(12\, k_i!).
\end{equation}
To see~\eqref{eq:tau}, fix $i \in [2,r]$ and fix $j \in [2,k_i]$.
Then (with $\overline{I}=[k_i]\setminus I$):
\begin{align*}
\sum_{v \in [n]} \deg^{(j)}_{H^{(i)}_n}(v)
& \overset{\phantom{\eqref{eq:luck}}}{\leq} \sum_{v \in [n]}\sum_{I \in \binom{[k_i]}{j}} \Delta^{(I)}(\mcal{H}^{(i)}_n)\\
& \overset{\eqref{eq:deg}}{\leq} \sum_{v \in [n]} \sum_{I \in \binom{[k_i]}{j}} K n^{k_i - j - \mathrm{\mbs{rk}}\, (A_i)_{\overline{I}} + \mathrm{\mbs{rk}}\, (A_i)_{\overline{[k_i]}}} \\
& \overset{\phantom{\eqref{eq:luck}}}{=} \sum_{v \in [n]} \sum_{I \in \binom{[k_i]}{j}} K n^{k_i - j - \mathrm{\mbs{rk}}\, (A_i)_{\overline{I}}}\\
& \overset{\eqref{eq:luck}}{\leq} \sum_{v \in [n]} \sum_{I \in \binom{[k_i]}{j}} K n^{k_i - \mathrm{\mbs{rk}}\, A_i - 1 - \frac{j-1}{m(A_i)}} \\
& \overset{\eqref{eq:project}}{=} O_{k_i} \left( n\cdot n^{-\frac{j-1}{m(A_i)}} \cdot \frac{e(H^{(i)}_n)}{n}\right).
\end{align*}
Then
\begin{align*}
\delta_j(H^{(i)}_n,\tau) & \leq \frac{\sum_{v \in [n]} \deg^{(j)}(v)}{\tau^{j-1} \cdot n \cdot \left(\frac{k_i e(H^{(i)}_n)}{n}\right)} \\
& = \frac{O_{k_i} \left( n^{-\frac{j-1}{m(A_i)}} e(H^{(i)}_n)\right)}{(C')^{j-1} \cdot k_i\cdot n^{-\frac{j-1}{m(A_2)}}\cdot e(H^{(i)}_n)} \\
& = \frac{O_{k_i}(n^{-1/m(A_i)})}{(C')^{j-1} \cdot k_i\cdot n^{-1/m(A_2)}};
\intertext{owing to $m(A_2) \geq m(A_i)$ for all $i \in [2,r]$ it follows that $n^{-1/m(A_i)} \leq n^{-1/m(A_2)}$ and thus}
\delta_j(H^{(i)}_n,\tau) & \leq \frac{O_{k_i}(1)}{(C')^{j-1}}.
\end{align*}
Then
$$
\delta(H^{(i)}_n,\tau) = 2^{\binom{k_i}{2} - 1} \sum_{j=2}^{k_i}2^{-\binom{j-1}{2}} \delta_j(H^{(i)}_n,\tau) \leq 2^{\binom{k_i}{2} - 1} \sum_{j=2}^{k_i}2^{-\binom{j-1}{2}}\frac{O_{k_i}(1)}{(C')^{j-1}};
$$
from which the existence of a choice of $C'$ yielding~\eqref{eq:tau} is clear.
\TPARA{Boundedness of $\mcal{H}^{(1)}_n$}
First we observe that there exists a function $w:[k_1] \to [1,\infty)$ such that for every $x \in [k_1]$ the following is true
\begin{equation}\label{eq:w}
\min \{ |I| - \mathrm{\mbs{rk}}\, A_1 + \mathrm{\mbs{rk}}\, (A_1)_{\overline{I}} - w(I)/m(A_1,A_2): I \subseteq [k_1], x \in I\} = 1 - 1/m(A_2).
\end{equation}
A similar statement concerning asymmetric graph densities was proven in~\cite[Lemma~8]{MNS18}.
The proof of~\eqref{eq:w} is almost verbatim as the proof of~\cite[Lemma~8]{MNS18} (which follows by a compactness argument),
but we provide the details for completeness here.
To see~\eqref{eq:w}, define
$$
r_x(w) : = \min\{|I| - \mathrm{\mbs{rk}}\, A_1 + \mathrm{\mbs{rk}}\, (A_1)_{\overline{I}} - w(I)/m(A_1,A_2): I \subseteq [k_1], x \in I\} - 1 + 1/m(A_2)
$$
whenever $w:[k_1] \to [1,\infty)$ and $x \in [k_1]$.
It suffices to prove that there exists a function $w$ such that $r_x(w) = 0$ for every $x \in [k_1]$. To that end set
$$
\mcal{F} := \{w : [k_1] \to [1,\infty): r_x(w) \geq 0, \;\; \forall x \in [k_1]\}.
$$
Viewed as a subset of $\mbb{R}^{{k_1}}$, the set $\mcal{F}$ is non-empty, closed (each $r_x$ being continuous), and bounded, and thus compact. The fact that $\mcal{F}$ is bounded is simple: for every $w\in\mcal{F}$ and $x\in[k_1]$ we have
$0\le r_x(w)\le \frac{1}{m(A_2)}-\frac{w(x)}{m(A_1,A_2)}$, whence $w(x)\le m(A_1,A_2)/m(A_2)$. We focus on the non-emptiness of $\mcal{F}$. We argue that
\begin{equation}\label{eq:1-is-in}
w \equiv 1 \in \mcal{F}.
\end{equation}
To see~\eqref{eq:1-is-in}, recall first that by~\eqref{eq:depend} we have $\mathrm{\mbs{rk}}\, A_1 - \mathrm{\mbs{rk}}\, (A_1)_{\overline{I}} = 0$ whenever $I \subseteq [k_1]$ satisfies $|I| = 1$. Consequently, for $w \equiv 1$ and $I=\{x\}$ we get
$$
1-\mathrm{\mbs{rk}}\, A_1 + \mathrm{\mbs{rk}}\, (A_1)_{\overline{I}} -1/m(A_1,A_2)-1+1/m(A_2)\ge 1/m(A_2)-1/m(A_1,A_2)\ge 0,
$$
which follows from $m(A_1) \ge m(A_2)$ and Observation~\ref{obs:const}. If $|I|\ge 2$ and $I\ni x$, then we have from the definition of $m(A_1,A_2)$ that
$w(I)/m(A_1,A_2)=|I|/m(A_1,A_2)\le |I|-\mathrm{\mbs{rk}}\, A_1+\mathrm{\mbs{rk}}\,(A_1)_{\overline{I}}-1+1/m(A_2)$,
and hence
\[
|I| - \mathrm{\mbs{rk}}\, A_1 + \mathrm{\mbs{rk}}\, (A_1)_{\overline{I}} - w(I)/m(A_1,A_2) - 1 + 1/m(A_2)\ge 0.
\]
Thus, in any case we have $r_x(1)\ge 0$ for all $x\in[k_1]$. This concludes the proof of~\eqref{eq:1-is-in}, so that $\mcal{F}$ is non-empty.
The function $w \mapsto w([k_1])$ is continuous on the compact set $\mcal{F}$; let $w^*\in\mcal{F}$ be a function at which its maximum is attained. The function $w^*$ has the property $r_x(w^*) = 0$ for every $x \in [k_1]$. Otherwise, there is an $x'\in[k_1]$ with $r_{x'}(w^*)>0$; setting $\tilde{w}(x):=w^*(x)+\eps\cdot \mathbbm{1}_{x=x'}$ for a sufficiently small $\eps>0$ then yields a member of $\mcal{F}$, contradicting the maximality of $w^*$.
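Computationally, the argument above amounts to a small linear program: maximise $w([k_1])$ subject to $w \geq 1$ and to the constraints $w(I) \leq m(A_1,A_2)\left(|I| - \mathrm{\mbs{rk}}\, A_1 + \mathrm{\mbs{rk}}\, (A_1)_{\overline{I}} - 1 + 1/m(A_2)\right)$ for all $\emptyset\neq I\subseteq[k_1]$. The following Python sketch (illustrative only) solves this program for the pair $A_1 = (1\; 1\; -2)$ and $A_2=(1\; 1\; -1)$, for which $m(A_2)=m(A_1,A_2)=2$ (computed as in the sketch of Section~\ref{sec:aux}).
\begin{verbatim}
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

A1 = np.array([[1, 1, -2]])            # x + y = 2z  (3-term APs)
k1 = A1.shape[1]
mA2, mA1A2 = 2.0, 2.0                  # m(A_2), m(A_1,A_2) for A_2 = (1 1 -1)

def rk(M):
    return np.linalg.matrix_rank(M) if M.size else 0

def gap(I):                            # |I| - rk A_1 + rk (A_1)_{Ibar} - 1
    Ibar = [j for j in range(k1) if j not in I]
    return len(I) - rk(A1) + rk(A1[:, Ibar]) - 1

Is = [I for s in range(1, k1 + 1) for I in combinations(range(k1), s)]
A_ub = np.array([[1.0 if i in I else 0.0 for i in range(k1)] for I in Is])
b_ub = np.array([mA1A2 * (gap(I) + 1.0 / mA2) for I in Is])
res = linprog(-np.ones(k1), A_ub=A_ub, b_ub=b_ub, bounds=[(1.0, None)] * k1)
w = res.x
r = [min(gap(I) - w[list(I)].sum() / mA1A2 for I in Is if x in I) + 1.0 / mA2
     for x in range(k1)]
print('w* =', w, '  r_x(w*) =', r)     # expect w* = [1 1 1] and r_x(w*) = 0
\end{verbatim}
The optimum returns $w^*\equiv 1$ with $r_x(w^*)=0$ for all $x$, in line with~\eqref{eq:1-is-in}.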
Now we proceed with the verification of the boundedness of $\mcal{H}^{(1)}_n$.
Recall that $\tau := C'n^{-1/m(A_2)}$ so that $\tau n \to \infty$ with $n$, by~\eqref{eq:bound_mA}. We prove that $\mcal{H}^{(1)}_n$ is $(p,w,\tau)$-bounded. This amounts to establishing
$$
\min_{\emptyset\neq I \subseteq [k_1]}p^{w(I)}\left|\left(\mcal{H}^{(1)}_n\right)_I\right| = \Theta(\tau n)
$$
for all sufficiently large $n$. Owing to Corollary~\ref{cor:project} and since $p=n^{-1/m(A_1,A_2)}$ and $\tau=C'n^{-1/m(A_2)}$ this has the form
$$
\Theta\left(\min_{\emptyset\neq I \subseteq [k_1]}n^{|I|-\mathrm{\mbs{rk}}\, A_1 + \mathrm{\mbs{rk}}\, (A_1)_{\overline{I}}-\frac{w(I)}{m(A_1,A_2)}}\right) = \Theta\left(n^{\min_{\emptyset\neq I \subseteq [k_1]}\left(|I|-\mathrm{\mbs{rk}}\, A_1 + \mathrm{\mbs{rk}}\, (A_1)_{\overline{I}}-\frac{w(I)}{m(A_1,A_2)}\right)}\right) = \Theta(n^{1-1/m(A_2)})
$$
where the last equality is owing to~\eqref{eq:w}, i.e., the ``definition'' of $w$.
\vspace{3ex}
This concludes the proof of Theorem~\ref{thm:Rado}.
\section{Concluding remarks}\label{sec:conclude}
While finalising the writing of this manuscript we were made aware of the work of Zohar~\cite{Zohar}, who established a so-called {\sl asymmetric} random van der Waerden theorem as follows. Given integers $\ell_1 \ge \cdots \ge \ell_r \ge 3$ there exist constants $0 < c <C$ such that
$$
\lim_{n \to \infty} \Pr([n]_p \to (\ell_1, \dotsc, \ell_r)) =
\begin{cases}
1, & p \geq C n^{-\frac{\ell_2}{\ell_1(\ell_2-1)}},\\
0, & p \leq c\, n^{-\frac{\ell_2}{\ell_1(\ell_2-1)}};
\end{cases}
$$
where for $A \subseteq[n]$ we write $A \to (\ell_1, \dotsc, \ell_r)$ to denote that $A$ has the property that for every $r$-colouring of $A$ there is a colour $i \in [r]$ admitting a monochromatic arithmetic progression of length $\ell_i$. While the $1$-statement of the result of Zohar~\cite{Zohar} is a special case of our main result, namely Theorem~\ref{thm:Rado}, the $0$-statement of the result of Zohar~\cite{Zohar} is of course not covered by our result.
An extension of the $0$-statement above to more general systems of Rado matrices could lead to the proof of Conjecture~\ref{conj:AHP}.
\bibliographystyle{amsplain_yk}
\section{Introduction}
A $p$:$q$ mean-motion resonance (MMR) occurs when the ratio of the periods of two interacting planets is close to $p/q$. This commensurability allows planetary conjunctions to occur at consistent locations in the planets' orbits, leading to periodic transfers of energy and angular momentum between the two bodies. Many examples of bodies in mean motion resonance are known in the solar system (for a review, see e.g. \citealt{p_1986}) and in exoplanetary systems (e.g. \citealt{lff_2011}, \citealt{ior_2017}). In this paper, we restrict our focus to systems of giant planets in MMR. Mean motion resonance between Jupiter and Saturn has been suggested as a possible phenomenon early in the solar system's history (e.g. \citealt{mtc_2007}, \citealt{wmr_2011}). Several sets of giant planets in resonance have been identified directly (e.g., GJ 876, \citealt{lp_2002}; HD 5319, \citealt{gfp_2015}; HD 33844, \citealt{wjb_2016}; HD 47366, \citealt{mwh_2018}; HD 202696, \citealt{tsh_2019}; and TOI-216, \citealt{knh_2019}) and resonance has been inferred due to stability constraints in the directly imaged system of giants HR 8799 \citep[e.g.,][]{fmc_2010,wgd_2018}. Understanding the population of giant planets in MMRs is important for constraining the typical migration histories of giant planets, as convergent migration of giant planets in a gas disk is a commonly cited mechanism for formation of gas giants in MMR (e.g. \citealt{lp_2002}).
Resonances often constitute stable regions in otherwise unstable parts of phase space. Because the interactions between planets in MMR can generate periodic oscillations of the system's line of conjunctions, they can protect planets from close encounters. Thus, MMRs are often invoked to explain observed systems that initially appear to be unstable.
Unfortunately, the presence of MMRs greatly complicates analysis of RV systems. Strong planet-planet interactions cause the planets to deviate from pure Keplerian motion even on the timescale of typical RV observations. This complicates the usual RV fitting process, where planets are often allowed to move on unperturbed Keplerian orbits. Furthermore, the additional frequencies introduced by these dynamical interactions can shift the peaks in a periodogram of the RV signal away from the true orbital periods of the planets. This difficulty in identifying the periods of the planets in turn means that, perhaps counterintuitively, the particular resonance that a system is in is not clear from the outset of fitting. Further exacerbating this issue is the fact that libration of the MMR's resonant angle occurs on timescales that are generally longer than the timescale of the RV observations, meaning that our observations only capture part of the full libration. This sampling issue, along with error in the observations, means that the best-fit solutions to RV signals may lie far from solutions that actually exhibit long term stability.
Thus, fitting RV systems in MMR necessitates different methods than those traditionally used to fit radial velocity systems. Firstly, theoretical radial velocities must be generated through full numerical integration of the equations of motion of the system (e.g., \citealt{tpl_2013}, \citealt{wtl_2014}, \citealt{nfw_2014}, \citealt{tkz_2017}, \citealt{mlt_2018}). Furthermore, while initial searches through parameter space can be performed without incorporating long term stability, the ``true'' posterior distribution of the planetary orbital parameters should not include points that are unstable on short timescales. In some cases ``rejection sampling'', i.e. throwing out all points that do not exhibit stability, can produce posterior distributions conditioned on long-term stability. However, as will be seen in this work, it is often the case that the fraction of stable points is so small that the posterior produced by rejection sampling does not adequately represent the underlying probability distribution. Thus, in order to find long-term stable posterior distributions it is often necessary to incorporate stability during the search through parameter space, though this is often not explicitly done. Incorporating long term stability makes exploring the parameter space difficult, as while regions close to particular MMRs will exhibit long term stability, intermediate regions will generally have no stable solutions, meaning that each proposed resonance must be investigated separately.
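To make the preceding point concrete, the following Python pseudocode sketches the difference between rejection sampling and building stability directly into the likelihood; here \texttt{nbody\_rv} and \texttt{is\_stable} are purely schematic, hypothetical stand-ins for a user-supplied $N$-body RV model and stability check.
\begin{verbatim}
import numpy as np

def log_likelihood(theta, t, rv, err):
    """Gaussian log-likelihood with a hard stability cut.

    nbody_rv and is_stable are hypothetical stand-ins for a
    user-supplied N-body RV model and a (short) stability
    integration; theta collects the orbital parameters and
    the stellar jitter.
    """
    if not is_stable(theta, t_max=1.0e5):      # unstable -> probability zero
        return -np.inf
    model = nbody_rv(theta, t)                 # full N-body RV prediction
    s2 = err**2 + theta["jitter"]**2           # jitter added in quadrature
    return -0.5 * np.sum((rv - model)**2 / s2 + np.log(2.0 * np.pi * s2))

# Rejection sampling, by contrast, filters a posterior computed
# without the stability cut:
#   keep = [is_stable(th, t_max=1.0e7) for th in samples]
#   stable_samples = samples[keep]
\end{verbatim}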
In this work, we illustrate these difficulties and ways they can be mitigated through the example of the planetary system orbiting the star HD 200964. HD 200964 is an intermediate mass subgiant (see Table \ref{tab:stell_pars} for a summary of the stellar parameters), which was reported by \citet{jphc11} (hereafter \citetalias{jphc11}) to host two massive ($M_p \gtrsim M_J$) giant planets in a tight orbital configuration ($P_c$:$P_b$ $\sim$800:600 days). \citetalias{jphc11} gave a best-fit, long term ($>10^7$ years) stable solution that was close to a 4:3 MMR. In this work, we include additional observations from both the Keck telescope and the Automated Planet Finder (APF), which increase the length of time spanned by the RV data. In addition, we explicitly require stability in our search over parameter space, which greatly aids in finding regions of parameter space that both fit the data well and exhibit long term stability. We find that, in addition to the 4:3 solution identified by \citetalias{jphc11}, the system can be fit by both a 3:2 MMR and a 7:5 MMR, with the 7:5 providing the best fit to the measured radial velocity. The presence of multiple plausible MMRs highlights the general difficulty in pinning down MMR in observed radial velocity systems. We also note that if the system is truly in a 3:2 MMR, this would mitigate difficulties in forming the system through convergent migration.
In Section \ref{obs}, we discuss how our observations of HD 200964 were performed. In Section \ref{prev}, we discuss the results of previous analyses of HD 200964. In Section \ref{methods} we discuss the various methods we employed to find best-fit, long-term stable solutions to the observed radial velocity. In Section \ref{res} we analyze the MMRs that stabilize the best-fit solutions we find. In Section \ref{reanaly} we perform our methodology on the \citetalias{jphc11} dataset and compare our results with theirs, and in Section \ref{third} we discuss the possibility of a third planet in the system. Finally, in Section \ref{conc} we summarize our results and give our conclusions.
\begin{deluxetable}{cc} \label{tab:stell_pars}
\setlength{\tabcolsep}{24pt}
\tabletypesize{\footnotesize}
\tablecaption{Stellar parameters for HD 200964, taken from \cite{bfv_2016}}
\tablehead{\colhead{Parameter}& \colhead{Value}}
\startdata
$V_{\text{mag}}$ & 6.48 \\
Distance [pc] & 72.2 \\
$T_{\rm{eff}}$ [K] & 4982 \\
$\log g$ & 3.22 \\
$[\rm{M}/\rm{H}]$ & -0.1 \\
$\log L \left[L_\odot\right]$ & 1.13 \\
$R_* \left[R_\odot\right]$ & 4.92 \\
$M_* \left[ M_\odot \right]$ & 1.45 \\
Age [Gyr] & 3.3 \\
\enddata
\end{deluxetable}
\section{Observations} \label{obs}
The radial velocity measurements of HD 200964 used in this analysis come from three different facilities: the Hamilton spectrometer \citep{Vogt1987} paired with the Shane 3 m or the 0.6 m Coude Auxiliary Telescope, the HIRES spectrometer \citep{vab_1994} on Keck I, and the Levy spectrometer on the Automated Planet Finder (APF) telescope \citep{vrk_2014}. In all cases, the star's Doppler shifts were measured by placing a cell of gaseous iodine in the converging beam of the telescope, imprinting the stellar spectrum with a dense forest of iodine lines from 5000-6200 \AA\ \citep{Butler1996}. These iodine lines were used to generate a wavelength calibration that reflects any changes in temperature or pressure that the spectrometer undergoes, and enables the measurement of each spectrometer's point spread function. Although each spectrometer covers a much broader wavelength range, 3400-9000 \AA\ for the Hamilton and 3700-8000 \AA\ for HIRES and the Levy, only the iodine-rich 5000-6200 \AA\ region was used for determining the observation's RV shift. For each stellar spectrum, the iodine region was divided into $\sim$700 individual 2\AA\ chunks. Each chunk produces an independent measure of the wavelength, point spread function, and Doppler shift. The final measured velocity is the weighted mean of the velocities of all the individual chunks. It is important to note that all RVs reported here have been corrected to the solar system barycenter, but are not tied to any absolute RV system. As such, they are ``relative'' velocities, with a zero point that is usually set simply to the mean of each dataset.
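Schematically, the chunk combination corresponds to an inverse-variance weighted mean; the actual pipeline weights are derived from each chunk's long-term performance, so the Python snippet below, with made-up numbers, is only meant to fix ideas.
\begin{verbatim}
import numpy as np

# v: per-chunk Doppler shifts [m/s]; sig: per-chunk uncertainties [m/s]
rng = np.random.default_rng(0)
v = rng.normal(12.0, 5.0, size=700)        # ~700 chunks of 2 Angstroms each
sig = np.full(700, 5.0)

wts = 1.0 / sig**2
rv = np.sum(wts * v) / np.sum(wts)         # weighted-mean relative velocity
rv_err = 1.0 / np.sqrt(np.sum(wts))        # formal internal uncertainty
print(f"RV = {rv:.2f} +/- {rv_err:.2f} m/s")
\end{verbatim}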
We make use of two previously published RV datasets, denoted here as the ``Lick'' and ``Keck11'' datasets, taken from \citet{Johnson2011}, which originally announced the detection of these planets in the context of their intermediate-mass subgiant host star survey \citep{Johnson2006, Peek2009, Bowler2010, Johnson2010}. The Lick data have SNR of $\sim$120 in the center of the iodine region ($\lambda = 5500$\AA), corresponding to an internal uncertainty of 4-5\ensuremath{\mathrm{m\,s}^{-1}}, while the Keck11 data have SNR $\sim$180 in the same region, which brings the internal uncertainties down to 1.5-2\ensuremath{\mathrm{m\,s}^{-1}}.
For additional details on these data, see \citet{Johnson2011}. New to this paper are an additional 50 velocities taken with Keck HIRES and 36 velocities taken with the APF, all obtained as part of the long running LCES Doppler survey \citep{Butler2017} and denoted as ``Keck'' and ``APF'', respectively. For our HIRES observations the median SNR in the iodine region is 159, corresponding to an average internal uncertainty of 1.4\ensuremath{\mathrm{m\,s}^{-1}}. The APF observations have a median SNR of 101 in the iodine region, which produces an average internal uncertainty of 1.5\ensuremath{\mathrm{m\,s}^{-1}}. These internal uncertainties reflect only one term in the overall RV error budget, and arise from a combination of systematic effects such as imperfect characterization of the point spread function, detector imperfections, optical aberrations, and undersampling of the iodine lines, among others.
The new Keck and APF radial velocities are given in Tables \ref{tab:Keck_RV} and \ref{tab:APF_RV} respectively. Additionally, all four data sets, along with our maximum likelihood solution without stability taken into account (see Section \ref{fitting}), are plotted in Figure \ref{fig:rvt_nostab}.
\begin{figure*}[htbp]
\centering
\includegraphics[width=7in]{HD200964_best_nostab_crop}
\caption{The four data sets for the radial velocity of HD 200964, along with the theoretical radial velocity curve obtained using the parameters given in Table \ref{tab:bestfit_nostab}. The data sets are: Lick (pink points), Keck11 (green points), Keck (red points), and APF (blue points). Note that, as discussed in Section \ref{fitting}, each data set has a constant offset that we fit separately. Furthermore, the jitter term given in Table \ref{tab:bestfit_nostab} is added in quadrature to the quoted error bars to obtain the error bars shown in the figure. The residuals between the theoretical velocity and the data are shown in the bottom panel.}
\label{fig:rvt_nostab}
\end{figure*}
\begin{table} \label{tab:Keck_RV}
\centering
\caption{Keck Radial velocities for HD 200964}
\input{Keck_table.tex}
\end{table}
\begin{table} \label{tab:APF_RV}
\centering
\caption{APF Radial velocities for HD 200964}
\input{APF_table.tex}
\end{table}
\section{Previous Analysis} \label{prev}
The first analysis of the planetary system around HD 200964 was given by \citetalias{jphc11}, using the ``Lick" and ``Keck11" datasets. These authors first perform a Markov chain Monte Carlo (MCMC) analysis of the system assuming Keplerian orbits for both of the planets in the system, i.e. neglecting planet-planet interactions. They use the results of this Keplerian MCMC to initialize a Differential Evolution Markov Chain Monte Carlo (DEMCMC) algorithm. The theoretical radial velocity at a given time is calculated using an $N$-body integrator, with a constraint that the system must remain stable for 100 years. They then perform rejection sampling on their final posterior, throwing out points which are not stable for $10^7$ years. Their best-fit, long term stable solution appears to have an RMS scatter of 28.1 m/s, which would indicate poor agreement between the model and the data. Furthermore, as also reported by \citet{tcm15}, we find that the best fit solution reported by \citetalias{jphc11} does not exhibit long-term stability, regardless of whether the reported orbital elements are taken to be astrocentric or Jacobi. However, \citetalias{jphc11} do not appear to specify the epoch at which the planets have the reported orbital elements. When planet-planet interactions are included, the orbital elements of the planets change as a function of time. Thus, in order to fully specify an orbit, the time at which the orbital elements are referenced must be stated in addition to the elements themselves. For example, for the parameters given by \citetalias{jphc11}, the period of the outer planet ranges from $\sim 772$ to $857$ days over the timescale of the radial velocity observations. Given the degree to which the orbital elements change over the timescale of the observations for the parameters reported by \citetalias{jphc11}, it is quite possible that the discrepancy we find between their best-fit solution and the data is because the epoch to which the elements are referenced is not specified.
The reported 4:3 MMR exhibited by the system is interesting, as it is quite difficult to capture planets of gas giant mass into this resonance through convergent migration alone, as discussed by \citet{rpv12}. Subsequent works have explored the stable regions of parameter space for the parameters reported by \citetalias{jphc11} (\citealt{wht12}), investigated in more detail the resonant behavior exhibited for the reported parameters (\citealt{mk_2016}), and investigated other, more complex scenarios for the formation of HD 200964 (\citealt{e_2012}; \citealt{tcm15}).
\section{Methods} \label{methods}
In this work, in addition to analyzing a baseline of data longer than that used in \citetalias{jphc11}, we investigate the underlying posterior by explicitly conditioning our Markov chain Monte Carlo (MCMC) search on long-term stability. MCMC is a commonly used method to sample from a probability distribution (see e.g. \citealt{s_2017} for a review); in this context it is used to sample from the posterior probability distributions for the orbital parameters of the planetary system (as well as the stellar jitter, see below). In this section we specify the methods employed to find these stable, best-fit solutions. We begin by investigating best-fit solutions including planet-planet interactions but neglecting stability (Section \ref{fitting}). After constructing the posterior distribution of orbital parameters without stability, we show that ``rejection sampling'', i.e. discarding solutions that do not exhibit long-term stability, yields few long-term stable solutions (Section \ref{rej}). Thus, to improve our measurement of the long-term stable posterior distribution, we explore parameter space using a likelihood function that explicitly takes stability into account (Section \ref{stab_fits}). We find that this method does a much better job of fitting the posterior distribution, though we find the posterior is multi-modal (Section \ref{post}). Finally, we perform a Monte Carlo search to verify after the fact that we have identified all relevant stable regions of parameter space (Section \ref{dart}).
\subsection{Fits Incorporating Planet-Planet Interactions} \label{fitting}
We begin our analysis by searching for fits to the RV data without explicitly requiring our solutions to be stable. First, we note that inspection of a standard generalized Lomb-Scargle periodogram (GLS, see e.g. \citealt{zk_2009}) leads inexorably to the conclusion that the two planets in the system are closely packed. A GLS for the RV data shown in Figure \ref{fig:rvt_nostab} is plotted in Figure \ref{fig:GLS}. The two largest peaks of the GLS (note that we have omitted a peak at $\sim$1 day, which is likely an alias of the sampling period of the data) are near $\sim$600 and $\sim$900 days. While the actual periods of the planets we determine will be affected by planetary eccentricity and dynamical interactions between the planets, these close peaks nonetheless indicate that the system likely contains two closely packed planets.
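For reference, such a periodogram can be computed with the \texttt{LombScargle} implementation in \texttt{astropy}; the sketch below is illustrative only, with a hypothetical input file and a frequency grid chosen by hand rather than the exact one used for Figure \ref{fig:GLS}.
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

# Hypothetical input: time [JD], RV [m/s], internal error [m/s]
t, rv, err = np.loadtxt("hd200964_rv.txt", unpack=True)

# Floating-mean (generalized) Lomb-Scargle periodogram
freq, power = LombScargle(t, rv, dy=err).autopower(
    minimum_frequency=1.0 / 2000.0,   # P <= 2000 days
    maximum_frequency=1.0 / 2.0)      # P >= 2 days

periods = 1.0 / freq
print("Strongest peak: P = %.1f days" % periods[np.argmax(power)])
\end{verbatim}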
\begin{figure}[htbp]
\centering
\plotone{HD200964_GLS}
\caption{A generalized Lomb-Scargle periodogram for the RV data of HD 200964. Note the two strong peaks at $\sim$600 and $\sim$900 days, demonstrating that the system likely features two closely-packed planets. The full width at half maximum of each peak is indicated by the gray rectangle.}
\label{fig:GLS}
\end{figure}
Thus the gravitational interactions between the planets constitute an important component of the observed radial velocity of HD 200964, and cannot be neglected. Often, theoretical radial velocity values are calculated by advancing the planets along Keplerian orbits, in effect neglecting any perturbations between the planets. For non-closely packed systems this is generally a fine approximation, as perturbations between the planets are unimportant over the timescale of the RV observations. As illustrated in Figure \ref{fig:int_kep_comp}, however, this is not the case for HD 200964. Figure \ref{fig:int_kep_comp} plots the radial velocity as a function of time determined both by using only Keplerian orbits and by a full $N$-body integration of the equations of motion. The difference between the two values is shown in the bottom panel. The orbital parameters used correspond to our best-fit, long-term stable solution (see Section \ref{stab_fits}).
\begin{figure} [h]
\centering
\includegraphics[width=1.15\linewidth]{int_kep_comp}
\caption{A comparison of the radial velocity determined by numerically integrating the motions of the planets and by advancing the planets forward on Keplerian orbits. The orbital parameters used are our best-fit long-term stable solution, as discussed in Section \ref{stab_fits}. The top panel shows the stellar radial velocity determined by the two methods, while the bottom shows the difference in the two curves. There is substantial disagreement between the integrated and Keplerian radial velocities due to the strong planet-planet interactions present.}
\label{fig:int_kep_comp}
\end{figure}
Clearly, neglecting the planet-planet interactions is a poor approximation; in what follows, all calculations of radial velocity values are done by numerically integrating the star-planet system forward in time. In order to perform our numerical integrations, both to calculate the theoretical radial velocity and to determine the lifetime of our planetary systems (see Sections \ref{rej} and \ref{stab_fits}), we use the $N$-body integration package \texttt{REBOUND} (\citealt{rl12}).
For the purpose of computing the theoretical radial velocity for comparison with the observations, we use the IAS15 integrator (\citealt{rs15}), which is a 15th-order integrator with adaptive time-stepping. All orbital elements provided in this paper are quoted relative to the primary star, i.e. they are astrocentric coordinates, and are given at the epoch of the first data point, i.e. JD 2453213.895. Following \citet{bfv_2016}, we take the central star to have mass $M_* = 1.45 M_\odot$. We use the standard radial velocity coordinate system, such that the inclination $i$ represents the angle between the orbital plane and the plane of the sky, which we take to be the reference plane. The argument of periapse, $\omega$, is the angle between the ascending node and the periapse direction. The observer is taken to lie in the $-\hat{z}$ direction relative to the reference plane; in keeping with convention, velocities in this direction, i.e. towards the observer, are quoted as positive. For clarity, due to the strong planet-planet interactions we specify the mean longitudes of the two planets at epoch, $\lambda$, as opposed to the planets' times of periastron passage. In this work we fix $i=90^\circ$, corresponding to edge-on orbits, and fix the longitude of ascending node, $\Omega=0$. We comment on the degeneracy between the system's inclination and the masses of the planets in Section \ref{post}.
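As an illustration of this setup, the sketch below generates model radial velocities with \texttt{REBOUND} under the conventions just described. The function and parameter names are hypothetical, and the sign flip in the final line encodes our (assumed) choice of observer direction:
\begin{verbatim}
import numpy as np
import rebound

AUDAY_TO_MS = 1.7314568e6   # 1 AU/day in m/s

def model_rv(params, times, m_star=1.45):
    # `params` is a hypothetical dict of astrocentric elements
    # (angles in radians, periods in days, masses in Msun) at the
    # epoch of the first data point; `times` must be sorted.
    sim = rebound.Simulation()
    sim.units = ('day', 'AU', 'Msun')
    sim.integrator = "ias15"
    sim.add(m=m_star)
    for pl in ("b", "c"):
        sim.add(primary=sim.particles[0],        # astrocentric
                m=params["m_" + pl],
                P=params["P_" + pl],
                e=params["e_" + pl],
                omega=params["omega_" + pl],
                l=params["lambda_" + pl],
                inc=np.pi / 2.0, Omega=0.0)      # edge-on, Omega = 0
    sim.move_to_com()
    sim.t = times[0]
    rv = np.empty(len(times))
    for i, t in enumerate(times):
        sim.integrate(t)
        # Observer along -z; velocities toward the observer positive.
        rv[i] = -sim.particles[0].vz * AUDAY_TO_MS
    return rv
\end{verbatim}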
Following other works (e.g.\ \citealt{jfm_2007}, \citealt{cbm_2008}, \citetalias{jphc11}), we introduce a ``stellar jitter'' term in our fitting, which is an additional error term that is added in quadrature to the ``known'' error, i.e. the error on each measurement is taken to be $\sqrt{\sigma_k^2 + \sigma_j^2}$, where $\sigma_k$ is the given error and $\sigma_j$ is the proposed value of the stellar jitter term. We also note that we are using a single value to characterize the stellar jitter, meaning that we are neglecting variation in jitter between different instruments (\citealt{b_2009}). We have checked that the inclusion of multiple jitters has no qualitative effect on the posterior distribution shown in Figure \ref{fig:per_ratio_comp}. However, fitting a different jitter for each data set (as is done in e.g. \citealt{nrp_2016} or \citealt{mlt_2018}) would allow us to characterize the difference in instrumental noise between the various datasets.
We calculate the likelihood for a given set of orbital parameters by assuming that the radial velocity measurements are all independent and Gaussian distributed, with error given by $\sigma_i = \sqrt{\sigma_k^2 + \sigma_j^2}$, as discussed in the preceding paragraph. In this case, the log likelihood $\mathcal{L}$ is given by
\begin{align}
\mathcal{L} = -\sum_i \left[ \frac{\left(v_i - RV(t_i) - O_D\right)^2}{2 \sigma_i^2} + \log\left(\sigma_i \sqrt{2 \pi} \right) \right]
\end{align}
where $v_i$ are the measured radial velocities and $RV(t_i)$ are the model radial velocities. Here $O_D$ refers to the constant offset to each dataset (see Section \ref{obs}), which must also be fit, introducing 4 additional parameters into our fitting. Instead of including the 4 offsets as parameters in our MCMC search, the offsets are separately optimized for every proposed set of orbital parameters. That is, once the model radial velocities are known, it is straightforward to show that the constant offset to each dataset that maximizes the likelihood can be obtained by calculating the weighted mean of the difference between the model and the data
\begin{align}
O_D = \frac{1}{S} \sum_{i \in D} \frac{v_i - RV(t_i)}{\sigma_i^2}
\end{align}
where $S \equiv \sum_{i \in D} 1/\sigma_i^2$ and the sums run over the points in dataset $D$. This simplifies our fitting algorithm, but does mean that we may miss degeneracies between the constant offsets and the orbital elements.
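A minimal sketch of this likelihood, with the offsets profiled out exactly as in the expression above, is given below (the array names are illustrative; all arrays are aligned 1D \texttt{numpy} arrays):
\begin{verbatim}
import numpy as np

def log_likelihood(v, model, sigma_k, sigma_j, dataset_ids):
    sigma2 = sigma_k**2 + sigma_j**2         # jitter in quadrature
    lnL = 0.0
    for d in np.unique(dataset_ids):
        sel = dataset_ids == d
        w = 1.0 / sigma2[sel]
        resid = v[sel] - model[sel]
        O_D = np.sum(w * resid) / np.sum(w)  # weighted-mean offset
        lnL -= np.sum(0.5 * w * (resid - O_D)**2
                      + 0.5 * np.log(2.0 * np.pi * sigma2[sel]))
    return lnL
\end{verbatim}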
For our priors, we assume uniform probability in some specified domain for each parameter, except for the planetary eccentricities, where the priors are uniform in log space. For the period of each planet, the priors are uniform between 400 and 1000 days for planet b, and between 500 and 1100 days for planet c. The prior on planetary mass is uniform between 0.1 and 10 $M_J$ for both planets. For the planetary eccentricities, the priors are uniform in log space between $-4.5$ and 0. For all the angles, the priors are taken to be uniform between $-720^\circ$ and $720^\circ$. This is done to ensure that the arguments of pericenter do not diverge to arbitrarily large values when the planet's eccentricity is low. In practice the actual values of parameters in our searches are quite far from the limiting bounds, with the exception of planetary eccentricity and the corresponding argument of pericenter, where the bounds are important for cases of low eccentricity.
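These bounds translate directly into a log-prior function. The sketch below assumes a particular (hypothetical) parameter ordering, works in the units quoted above, and adds a simple positivity constraint on the jitter, which is not specified in the text:
\begin{verbatim}
import numpy as np

def log_prior(theta):
    # theta = (P_b, P_c, m_b, m_c, log10(e_b), log10(e_c),
    #          lambda_b, lambda_c, omega_b, omega_c, jitter)
    Pb, Pc, mb, mc, leb, lec, lb, lc, wb, wc, jit = theta
    if not (400.0 < Pb < 1000.0 and 500.0 < Pc < 1100.0):  # days
        return -np.inf
    if not (0.1 < mb < 10.0 and 0.1 < mc < 10.0):          # M_J
        return -np.inf
    if not (-4.5 < leb < 0.0 and -4.5 < lec < 0.0):
        return -np.inf
    if not all(-720.0 < a < 720.0 for a in (lb, lc, wb, wc)):  # deg
        return -np.inf
    if jit <= 0.0:                  # assumed, not from the text
        return -np.inf
    return 0.0    # flat within bounds; eccentricity flat in log e
\end{verbatim}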
To explore the parameter space, we initially use the \texttt{scipy} minimizer to optimize the orbital parameters. We initially fix the orbital periods and masses of the planets, using the GLS and the amplitude of the RV signal to provide rough estimates of these parameters, and perform an optimization of the remaining parameters, starting from random values. We chose five of these optimizations, which had both high likelihoods and distinct final parameters, to initialize our MCMCs.
We used the software \texttt{emcee} \citep{fhl13} to perform our MCMC search. We initialized different MCMC searches from our converged optimizations, let these MCMCs run for $\sim$1000 steps, and examined the regions of high likelihood. We found that all of these searches identified a single region as having the highest likelihood. We then reinitialized a final search in this region. We ran this MCMC for an initial burn-in period, then discarded these walker positions and ran the MCMC to convergence. To assess convergence of our MCMC runs, we used the potential scale reduction factor (PSRF, \citealt{gr_1992}). A common method to assess convergence is to run the MCMC until the PSRF for every parameter has a value $<1.1$ \citep{bg_1998}. However, for our MCMC runs the PSRF for the two eccentricities and arguments of pericenter often does not fall below 1.1, likely because at low eccentricities the posterior probability is completely insensitive to these parameters. Thus, in practice we consider our MCMC converged if the PSRF for all parameters, except for the two eccentricities and two arguments of pericenter, is below 1.1.
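The PSRF itself is simple to compute from an ensemble of chains; a sketch of the standard estimator is given below, where \texttt{chains} has shape \texttt{(n\_chains, n\_steps, n\_params)}:
\begin{verbatim}
import numpy as np

def psrf(chains):
    m, n, _ = chains.shape
    means = chains.mean(axis=1)                  # per-chain means
    B = n * means.var(axis=0, ddof=1)            # between-chain var.
    W = chains.var(axis=1, ddof=1).mean(axis=0)  # within-chain var.
    var_hat = (n - 1.0) / n * W + B / n          # pooled estimate
    return np.sqrt(var_hat / W)                  # one per parameter

# Converged when psrf(chains)[keep] < 1.1 for all retained
# parameters, where `keep` masks out the eccentricities and
# arguments of pericenter, as described in the text.
\end{verbatim}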
A corner plot showing our best fit posterior distribution for the orbital parameters is shown in Appendix \ref{corner_plots} (Figure \ref{fig:nostab_corner}). The model radial velocity produced from our best-fit parameters (maximum likelihood) is shown in Figure \ref{fig:rvt_nostab}, the median values of our posterior distribution are given in Table \ref{tab:median_nostab}, and the maximum likelihood orbital parameters are given in Table \ref{tab:bestfit_nostab}.
The periods of the planets in our posterior distribution are much more constrained than the results obtained by \citetalias{jphc11}. The median period ratio of the system has also moved to $P_c/P_b \sim 7/5$, whereas \citetalias{jphc11} found values much closer to $4/3$. This is due to our observations spanning a longer timescale. To illustrate this point, in Figure \ref{fig:per_ratio_comp_early} we plot the posterior for our new data along with the $N$-body integrated posterior distribution produced by analyzing just the \citetalias{jphc11} data (see Section \ref{reanaly}). This is consistent with the results of \citet{lbw_2019}, who also report the period of planet c to be around 850 days based on a Keplerian fit to the data.
Interestingly, using $N$-body integration to determine the theoretical RV values broadens the posterior distribution of $P_b$ and $P_c$ compared to a purely Keplerian fit for the full dataset. For comparison with our $N$-body integrated fits, we repeat our analysis with the assumption of Keplerian orbits for both planets. The 2D histogram of a Keplerian fit to the data is plotted in red in Figure \ref{fig:per_ratio_comp}. In particular, it appears that the dynamical interaction between the planets allows for period ratios close to both 3:2 and 4:3 to fit the data, which are more strongly ruled out in a purely Keplerian fit.
\begin{center}
\begin{deluxetable*}{ccc}
\tabletypesize{\footnotesize}
\tablecaption{Median orbital parameters, no long term stability}
\tablehead{\colhead{Parameter\tablenotemark{a}}& \colhead{HD 200964 b}& \colhead{HD 200964 c}}
\startdata
Orbital Period, $P$ [days] & $604.69^{+3.38}_{-3.10}$ & $852.55^{+9.42}_{-8.30}$ \\
Mass, $m$ $\left[M_J\right]$ & $1.72^{+0.05}_{-0.05}$ & $1.20^{+0.06}_{-0.06}$ \\
Mean longitude, $\lambda$ [deg] & $307.40^{+5.26}_{-5.06}$ & $239.47^{+6.27}_{-6.42}$\\
Argument of periastron, $\omega$ [deg] & $294.48^{+21.08}_{-22.70}$ & $259.32^{+57.71}_{-47.07}$\\
$\log_{10}$ Eccentricity, $e$ & $-1.15^{+0.11}_{-0.15}$ & $-1.49^{+0.55}_{-1.81}$\\
Stellar Jitter, $\sigma_j$ [m/s] & $6.05^{+0.46}_{-0.39}$ &
\enddata
\vspace{1mm}
\tablenotetext{a}{Values for orbital elements are in astrocentric coordinates, are referenced to the epoch of the first data point, JD 2453213.895, and assume an inclination $i=90^\circ$. The reported values are median values for the posterior distribution, and the reported error bars are 84\% and 16\% quantiles.}
\label{tab:median_nostab}
\end{deluxetable*}
\begin{deluxetable*}{ccc}
\tabletypesize{\footnotesize}
\tablecaption{Maximum likelihood orbital parameters, no long term stability}
\tablehead{\colhead{Parameter\tablenotemark{a}}& \colhead{HD 200964 b}& \colhead{HD 200964 c}}
\startdata
Orbital Period, $P$ [days] & 607.7 & 845.3 \\
Mass, $m$ $\left[M_J\right]$ & 1.71 & 1.21 \\
Mean longitude, $\lambda$ [deg] & 312.5 & 233.7\\
Argument of periastron, $\omega$ [deg] & 297.4 & 270.5\\
$\log_{10}$ Eccentricity, $e$ & -1.13 & -0.92\\
Stellar Jitter, $\sigma_j$ [m/s] & 5.60 &
\enddata
\vspace{1mm}
\tablenotetext{a}{Values for orbital elements are in astrocentric coordinates, are referenced to the epoch of the first data point, JD 2453213.895, and assume an inclination $i=90^\circ$.}
\label{tab:bestfit_nostab}
\end{deluxetable*}
\end{center}
\begin{figure}[htbp]
\centering
\includegraphics[trim=70 0 0 20, clip, width=1.1\linewidth]{per_ratio_comp}
\caption{2D histograms of the posterior distributions for the planets' periods, using $N$-body integration to calculate the radial velocity but without long-term stability (black points, see Section \ref{fitting}) and advancing the planets on Keplerian orbits (red points). Lines denoting exact ratios of $P_c/P_b$ are shown for ratios of 3:2 (blue), 7:5 (gray) and 4:3 (orange).}
\label{fig:per_ratio_comp}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[trim=50 0 0 40, clip, width=1.1\linewidth]{per_ratio_comp_early.pdf}
\caption{2D histograms of the posterior distributions for the planets' periods, using $N$-body integration to calculate the radial velocity but without long-term stability. The black points show the posterior produced by using the full dataset, while the red points show the posterior obtained by analyzing only the \citetalias{jphc11} data. Lines denoting exact ratios of $P_c/P_b$ are shown for ratios of 3:2 (blue), 7:5 (gray), 4:3 (orange), and 5:4 (green).}
\label{fig:per_ratio_comp_early}
\end{figure}
Closer examination of our $N$-body integrated posterior distribution shows that many of the points, including our best fit solution, feature extremely close encounters between the two planets. An example from our best-fit parameters is shown in Figure \ref{fig:d_planet_star}, which plots the distance between each planet and the central star as a function of time. While neither of the planets is ejected over this timescale, the two planets, particularly the outer planet, experience large amplitude fluctuations in distance from the central star. Thus, it is extremely unlikely, if the system were truly in this orbital configuration initially, that we would observe it before the configuration changed substantially. Furthermore, integration over long time scales indicates the outer planet is scattered out past 100 AU on $10^5$ year timescales.
The majority of the solutions in Figure \ref{fig:per_ratio_comp} do not exhibit long term stability.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{HD200964_bestfit_nostab_d_planet_star}
\epsscale{2.0}
\caption{Distance between planet b (black line) and planet c (blue line), and the host star for our best-fit solution without long-term stability (see Section \ref{fitting}). The planets experience large, non-periodic fluctuations in distance from the star due to their strong mutual perturbations. The short timescale of these fluctuations relative to the age of the host star makes it unlikely, if the proposed best fit solution were correct, that the system would be observed in the original orbital configuration. Furthermore, these fluctuations are a strong indication that the system will become unstable on timescales much less than the age of the system. This is indeed the case---planet c is eventually scattered to a distance $> 100\, \rm{AU}$ on $10^5$ year timescales.}
\label{fig:d_planet_star}
\end{figure}
\subsection{Rejection Sampling} \label{rej}
In order to find long term stable solutions, we begin by using ``rejection sampling" on the posterior distribution found in Section \ref{fitting}. Rejection sampling is less computationally intensive than doing a full search conditioned on stability, and has been employed in other works to find best fit orbital parameters for planetary systems which are also stable (e.g. \citealt{wgd_2018}). In rejection sampling, we first construct a posterior distribution for the planetary system that does not take stability into account. Some fraction of the points (or, in our case, all of the points) in the posterior are chosen at random, and are then tested for long-term stability. All of the points in the posterior that pass the stability criteria then make up the new best-fit posterior which is conditioned on stability.
The converged posterior distribution shown in Figure \ref{fig:per_ratio_comp} contains 287,296 points in parameter space. We tested all of these points for stability for $10^3$ orbital periods of planet c. We considered a system stable if the distance of each planet from the central star remained between 50\% of its initial apastron distance and 150\% of its initial periastron distance during the course of the integration. We considered distance from the central star, as opposed to the semi-major axis of the planets, because many of our best fit solutions feature extremely close encounters between the planets, as discussed above. Such encounters can cause the semi-major axis of planet b to diverge as its velocity is temporarily excited above the escape velocity from the system, despite the fact that the system remains stable after the close encounter. Though it is unlikely a system featuring such a close encounter will survive on long timescales, we did not want to prematurely discard these solutions without checking for long term stability. These integrations were again carried out using the IAS15 integrator.
Of the points in the initial posterior, 2,295, i.e. $<1\%$ of the systems, survived for $10^3$ orbital periods. We then tested these remaining points for longer term stability: each set of orbital parameters was integrated for $10^7$ orbital periods of planet c. For these long term stability analyses we use the WHFAST integrator (\citealt{rt15}), an implementation of the symplectic Wisdom-Holman integrator.
Unless otherwise noted, we set the timestep for our integrations with WHFAST to be $dt = P_{\rm{min}}/100$, where $P_{\rm{min}}$ is the shortest initial orbital period of the planets in the system. This is five times shorter than the timestep recommended by \citet{dll98}, who suggest $dt = P_{\rm{min}}/20$ for a second-order symplectic integrator. Of the points tested, only 1,111 survive for $10^7$ orbital periods. This is far too few points to construct a converged posterior for stable, best fit solutions to the data. We would require 1--2 orders of magnitude more points in our original, non-stable posterior in order to retain enough points in the rejection sampling to construct a converged stable posterior, which would be extremely computationally intensive. Though the posterior obtained through rejection sampling is clearly not converged, the points do appear to lie in the general region of parameter space identified in Section \ref{stab_fits}. With rejection sampling, we only identify stable fits near the 7:5 period ratio (cf.\ Figure \ref{fig:per_ratio_comp_stab}, purple points), while the broader search described in Section \ref{stab_fits} identifies other possible period ratios.
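A sketch of such a stability check with \texttt{REBOUND} is given below; \texttt{build\_sim} is a hypothetical helper that constructs the simulation as in Section \ref{fitting}, and the checking cadence is illustrative rather than the one used in our production runs:
\begin{verbatim}
def survives(params, n_periods=10**7, n_checks=10**4):
    # Distance-based criterion (see text): each planet must stay
    # between 50% of its initial apastron and 150% of its initial
    # periastron distance from the star.
    sim = build_sim(params)              # hypothetical helper
    sim.integrator = "whfast"
    sim.dt = min(params["P_b"], params["P_c"]) / 100.0
    star = sim.particles[0]
    planets = [sim.particles[1], sim.particles[2]]
    # Element attributes default to Jacobi coordinates, which are
    # close to astrocentric for this system.
    bounds = [(0.5 * p.a * (1.0 + p.e),    # 50% of apastron
               1.5 * p.a * (1.0 - p.e))    # 150% of periastron
              for p in planets]
    t_end = n_periods * params["P_c"]
    for k in range(1, n_checks + 1):
        sim.integrate(k * t_end / n_checks)
        for p, (d_min, d_max) in zip(planets, bounds):
            d = ((p.x - star.x)**2 + (p.y - star.y)**2
                 + (p.z - star.z)**2)**0.5
            if not d_min < d < d_max:
                return False
    return True
\end{verbatim}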
It is also interesting to note that when this exercise was carried out for fits on just the Keck and APF datasets (i.e. omitting the Lick and Keck11 datasets), \textit{none} of the points in the initial posterior survived for $10^7$ orbital periods. It is only when we have data spanning a longer timescale that we appear to be able to find \textit{any} best-fit solutions that also exhibit stability. We suspect that this effect stems from the longer time baseline and better coverage of the RV signal that inclusion of the two later data sets provides. As more data is included the parameters of the planets in the system become better constrained, and our posterior distribution moves closer to the ``true'' parameters of the underlying system, which presumably does exhibit long term stability. Thus, with more data, we expect a greater likelihood that the posterior distribution we construct without explicitly including stability will overlap with stable regions of parameter space.
While rejection sampling is insufficient to construct a converged posterior distribution, some of these points are useful places to initialize MCMC searches with stability included, which we discuss in the next section.
\subsection{Likelihood Function Conditioned on Stability} \label{stab_fits}
As rejection sampling is insufficient to produce a converged posterior distribution, we therefore try a different approach---we modify the likelihood function used in our searches of parameter space, setting it to zero if the system is not found to be stable for a predetermined period of time. Here we consider a planetary system to be stable if the semi-major axes of both planets remain between 50\% and 150\% of their initial values. This means that any samples in our final posterior distributions now exhibit long-term stability, but also means that our search has trouble moving between stable regions of parameter space. If we were merely looking for maximum likelihood solutions, the stable solutions we found through rejection sampling would be sufficient for initializing our long-term stable MCMC searches. However, given the large upwards shift in period ratio that occurs when more data is included in the fitting compared to the \citetalias{jphc11} data (see Figure \ref{fig:per_ratio_comp_early}), we feel it is quite important to explore other possible modes near the best-fitting solutions, since it is quite possible, as we discuss below, that additional frequencies introduced by the dynamical interaction between the planets are obscuring the underlying period ratio. Due to this difficulty, we use several different methods and initializations for our search, which we discuss in detail below. We ultimately identify three peaks in our posterior distribution, which are discussed in Section \ref{post}. We do require multiple different initializations to find these various modes, which leaves open the question of whether other initialization methods might find additional modes in the posterior distribution. We return to this question in Section \ref{dart}.
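Schematically, the stability-conditioned posterior is evaluated with a thin wrapper of the kind sketched below, which can be passed directly to \texttt{emcee.EnsembleSampler}; \texttt{survives\_sma} and \texttt{log\_likelihood\_from\_theta} are hypothetical helpers implementing the semi-major-axis criterion above and the likelihood of Section \ref{fitting}:
\begin{verbatim}
import numpy as np

def log_prob_stable(theta, data):
    lp = log_prior(theta)            # uniform bounds described earlier
    if not np.isfinite(lp):
        return -np.inf
    # Likelihood set to zero (log-likelihood to -inf) if either
    # semi-major axis leaves 50-150% of its initial value.
    if not survives_sma(theta, n_periods=10**6):
        return -np.inf
    return lp + log_likelihood_from_theta(theta, data)
\end{verbatim}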
The simplest method of initialization, as well as the method that overall finds the best-fitting region of parameter space, is to simply initialize our MCMC near the best-fit solution found by rejection sampling. This method produces a peak near a 7:5 period commensurability, which is unsurprising given that this is where our non-stable posterior distribution is located.
For another initialization, we use a genetic algorithm (GA) to explore the parameter space. As we suspect there may be multiple local maxima of our posterior distribution, a GA may be useful to identify these different maxima and ultimately identify the global maximum. We use the open-source optimization framework Pyevolve \citep{p_2009}. Our genetic algorithm calculates likelihood scores using the same criteria discussed above, i.e. a log-likelihood derived from assuming the observations are Gaussian distributed and independent, conditioned on long-term stability. The negative of the log-likelihood is used as the ``fitness'' for the GA. For our GA runs we test for stability for $10^6$ periods of planet c's orbit. We find that allowing the algorithm to evolve until an average fitness score of at least 800 is reached, or until there is no significant increase in likelihood between sequential generations, is sufficient time for the algorithm to find useful starting points for the MCMC. We initialize the MCMC in a small Gaussian ball around the best fit parameters determined by the genetic algorithm, and allow the MCMC to run to convergence.
The GA strongly favors a region of stability similar to the parameters identified in Figure \ref{fig:nostab_corner}, but with the period of the larger planet closer to $\sim 900$ days, which places the system firmly in a 3:2 MMR, as discussed in Section \ref{res}. This region is extremely stable, making it easier for the GA to explore. The GA misses the stable region of parameter space near $P_c/P_b \sim 7/5$ identified by our rejection sampling in Section \ref{rej}; this is likely because the search by the GA is \textit{too} broad for this application, and the stable region near 7:5 is much more narrow than the region near 3:2.
We also begin a search starting from the orbital parameters identified by \citet{tcm15}, who explored the formation and evolution of HD 200964 using the data of \citetalias{jphc11}, with a higher stellar mass of $M_* = 1.57 M_\odot$, and gave long-term stable solutions in the 4:3 MMR. The specific parameters reported in this work do not match the data well according to our model, likely because of a disagreement between the coordinate systems used. Thus, beginning with their reported planetary masses (scaled by a factor $M_p/M_*$) and eccentricities, we first optimize over the angular parameters, before performing an optimization over all parameters and a subsequent MCMC search. This search does find stable solutions near a 4:3 period ratio that fit the data well, but it also finds a smaller number of solutions near the 7:5 period ratio. Though the walkers in our search spend more time near 4:3, the solutions near 7:5 clearly have higher posterior probability; it is likely the MCMC has difficulty moving between the two period ratios due to a dearth of stable solutions at period ratios intermediate between the two regions. We therefore initialize another MCMC at our best fit solution from the previous run. This MCMC converges to a region similar to the region identified by starting at the best-fit obtained through rejection sampling.
Thus, we have identified three peaks in our posterior distribution---one near a 3:2 period ratio, another near a 4:3 period ratio, and a peak containing our best fit solution near a 7:5 period ratio. In the next section we discuss these peaks in more detail.
\subsection{Final Posterior Distribution} \label{post}
We give median values of the orbital parameters from each mode of the posterior distribution in Table \ref{tab:median_stab}, and maximum likelihood parameters in Table \ref{tab:bestfit_stab}. Since the 4:3 distribution joins on to the 7:5 distribution, we remove all points with $P_c > 7/5 \, P_b$ before calculating the median or the errors. Theoretical radial velocity curves for the maximum likelihood parameters are shown in Figure \ref{fig:rvt_comp}. The full posterior distributions are plotted in Appendix \ref{corner_plots}. We also stress that it is more meaningful to talk about overall stable regions of parameter space rather than particular orbital configurations. Long-term orbital integrations are inherently chaotic, and lifetimes of a given set of orbital parameters can vary by an order of magnitude depending on the machine used to carry out the integration.
All of our parameters discussed above are reported for $i=90^\circ$. Though there are still strong degeneracies between $M_p$ and $i$ in our modeling, we note both the theoretical RV signal and the long-term stability of the system are directly sensitive to the planetary mass $M_p$, as opposed to just $M_p \sin i$, which is the relevant quantity when planets are allowed to move on purely Keplerian orbits. One extension of our work would be to directly constrain the masses of the planets by allowing the overall inclination of the system to vary, while still keeping the planets coplanar. We could also allow mutual inclinations between the planets, which would necessitate allowing $\Omega$ to vary. This could improve our stability constraints, and allow us to further constrain $M_p$. We leave these investigations as avenues for future work.
We also note that all three posterior distributions identified, that is, near period ratios of 3:2, 4:3, and 7:5, feature a long tail in the eccentricity of planet c consistent with planet c on a circular orbit. We therefore re-run our MCMC, now setting planet c to be circular, which eliminates two parameters from our fitting. The resultant searches identify very similar regions of parameter space to the solutions with eccentricity included, but none of the solutions are truly consistent with planet c being circular. Instead, planet c's eccentricity is quickly excited by the companion, and, over longer timescales, both planets' eccentricities oscillate, with average values that are both of order $10^{-1}$. We also comment that for two planets to be in MMR, the ``test'' particle must have some eccentricity. Thus, in what follows we use our orbital solutions with eccentricity included.
\begin{center}
\begin{deluxetable*}{c|cc|cc|cc}
\tabletypesize{\footnotesize}
\tablecaption{Median orbital parameters, $10^6 P_c$ stability}
\tablehead{\colhead{Parameter\tablenotemark{a}}\vrule& \colhead{HD 200964 b, 7:5}& \colhead{HD 200964 c, 7:5} \vrule& \colhead{HD 200964 b, 4:3}& \colhead{HD 200964 c, 4:3} \vrule& \colhead{HD 200964 b, 3:2}& \colhead{HD 200964 c, 3:2}}
\startdata
Orbital Period, $P$ [days] & $603.27^{+2.33}_{-2.17}$ & $854.46^{+4.56}_{-4.39}$ & $605.85^{+2.53}_{-2.48}$ & $837.51^{+4.62}_{-6.12}$ & $598.70^{+2.79}_{-2.77}$ & $881.11^{+7.62}_{-6.62}$ \\
Mass, $m$ $\left[M_J\right]$ & $1.72^{+0.05}_{-0.05}$ & $1.16^{+0.05}_{-0.05}$ & $1.74^{+0.05}_{-0.05}$ & $1.13^{+0.05}_{-0.06}$ & $1.68^{+0.06}_{-0.06}$ & $1.26^{+0.07}_{-0.07}$ \\
Mean longitude, $\lambda$ [deg] & $307.90^{+4.32}_{-4.04}$ & $236.76^{+4.28}_{-4.50}$ & $311.31^{+4.49}_{-4.46}$ & $223.98^{+4.70}_{-4.91}$ & $287.17^{+6.15}_{-4.57}$ & $269.31^{+5.52}_{-5.80}$\\
Argument of periastron, $\omega$ [deg] & $325.76^{+13.16}_{-13.51}$ & $252.58^{+112.94}_{-103.12}$ & $293.97^{+14.15}_{-13.89}$ & $273.05^{+96.72}_{-118.05}$ & $317.12^{+17.78}_{-19.11}$ & $169.28^{+160.12}_{-35.34}$\\
$\log_{10}$ Eccentricity, $e$ & $-1.21^{+0.05}_{-0.05}$ & $-3.10^{+0.90}_{-0.98}$ & $-1.16^{+0.06}_{-0.05}$ & $-2.99^{+1.02}_{-1.06}$ & $-1.12^{+0.14}_{-0.19}$ & $-1.47^{+0.38}_{-1.91}$\\
Stellar Jitter, $\sigma_j$ [m/s] & $6.27^{+0.42}_{-0.40}$ & \, & $6.57^{+0.47}_{-0.42}$ & \, & $7.47^{+0.53}_{-0.49}$ & \,
\enddata
\tablenotetext{a}{Values for orbital elements are in astrocentric coordinates, are referenced to the epoch of the first data point, JD 2453213.895, and assume an inclination $i=90^\circ$. The reported values are median values for the posterior distribution, and the reported error bars are 84\% and 16\% quantiles.}
\label{tab:median_stab}
\end{deluxetable*}
\begin{deluxetable*}{c|cc|cc|cc}
\tabletypesize{\footnotesize}
\tablecaption{Maximum Likelihood orbital parameters, $10^6 P_c$ stability}
\tablehead{\colhead{Parameter\tablenotemark{a}}\vrule& \colhead{HD 200964 b, 7:5}& \colhead{HD 200964 c, 7:5} \vrule& \colhead{HD 200964 b, 4:3}& \colhead{HD 200964 c, 4:3} \vrule& \colhead{HD 200964 b, 3:2}& \colhead{HD 200964 c, 3:2}}
\startdata
Orbital Period, $P$ [days] & 601.5 & 856.8 & 605.6 & 839.3 & 598.8 & 886.4 \\
Mass, $m$ $\left[M_J\right]$ & 1.75 & 1.18 & 1.77 & 1.16 & 1.72 & 1.33 \\
Mean longitude, $\lambda$ [deg] & 304.7 & 238.5 & 308.1 & 227.6 & 286.4 & 272.8\\
Argument of periastron, $\omega$ [deg] & 327.1 & 246.2 & 304.1 & 293.8 & 304.1 & 181.1\\
$\log_{10}$ Eccentricity, $e$ & -1.18 & -2.02 & -1.3 & -3.36 & -1.12 & -1.08\\
Stellar Jitter, $\sigma_j$ [m/s] & 6.1 & \, & 6.4 & \, & 7.2 & \,
\enddata
\tablenotetext{a}{Values for orbital elements are in astrocentric coordinates, are referenced to the epoch of the first data point, JD 2453213.895, and assume an inclination $i=90^\circ$.}
\label{tab:bestfit_stab}
\end{deluxetable*}
\end{center}
\begin{figure*}[htbp]
\centering
\includegraphics[width=7in]{HD200964_bestfit_stab_comp}
\caption{Comparison of the theoretical radial velocity curves for our best-fit, long-term stable solutions with different period ratios. \textit{Top Panel}: Our overall maximum likelihood solution, which has $P_c/P_b \sim 7/5$. \textit{Middle Panel}: An example solution which shows clear libration of the 4:3 resonant angle. \textit{Bottom Panel}: Our maximum likelihood solution that also shows libration of the 3:2 resonant angle.}
\label{fig:rvt_comp}
\end{figure*}
\begin{figure}[htbp]
\centering
\includegraphics[ width=\linewidth]{HD200964_modes_comp.pdf}
\caption{2D histograms of the posterior distributions for the planets' periods without long-term stability (black points, see Section \ref{fitting}) and the three modes identified for fits conditioned on stability for $10^6 \, P_c$ (pink, purple, and red points, see Section \ref{stab_fits}). Note that the plotted values refer to the periods at JD 2453213.895. Lines denoting exact ratios of $P_c/P_b$ are shown for ratios of 3:2 (blue), 7:5 (gray) and 4:3 (orange).}
\label{fig:per_ratio_comp_stab}
\end{figure}
As previously discussed, all three of these posteriors represent different modes of the overall posterior distribution of orbital parameters. A 2D histogram of the posterior distribution of $P_c$ vs.\ $P_b$ is shown in Figure \ref{fig:per_ratio_comp_stab}, overplotted with the non-stable 2D histogram. We use this plot to give an idea of where each mode lies in $P_c$ vs.\ $P_b$ space; we stress that each mode is drawn from a separate posterior distribution, meaning that the relative likelihood of the modes is not indicated by the density of points in each 2D histogram. Given where each mode lies relative to the non-stable histogram, however, it is clear that the mode at period ratios slightly larger than 7:5 will have the overall highest likelihood. To further emphasize this point, in Figure \ref{fig:blobs} we plot $P(D|\theta)$, i.e. the likelihood, hexagonally binned in $P_b$ vs.\ $P_c$ space and averaged. Again, we stress that this is not a proper marginalization over the other parameters in our space; however, since it can be seen in Appendix \ref{corner_plots} that the posterior distributions for the other parameters occupy similar regions of parameter space, this plot still gives a rough idea of the relative probability in each mode without being quantitatively rigorous.
Figure \ref{fig:blobs} makes it clear that the mode identified near 7:5 is by far the most likely---it is higher in likelihood than the 4:3 mode by a factor of $\sim\exp(10)$--$\exp(15)$, and than the 3:2 mode by a factor of $\sim\exp(20)$--$\exp(25)$. If we were concerned only with agreement between the data and our model, this mode would constitute our full posterior distribution. However, given the large shift in period ratio seen when more data is added to the RV signal, it is important to identify possible modes near the best-fit solution, as these modes may prove to be the ``true'' solution when more data is added.
\begin{figure}[htbp]
\centering
\epsscale{1.25}
\plotone{P1_P2_blobs}
\caption{Posterior probability distributions shown in Figure \ref{fig:per_ratio_comp_stab}, with the points hexagonally binned and averaged. The points are colored by $\log P(D|\theta)$. The plotted values refer to the periods at JD 2453213.895. Note that the probability has not been properly marginalized over the other parameters, and is only meant to give a rough idea of the relative probability between the peaks (see text). Lines denoting exact ratios of $P_c/P_b$ are shown for ratios of 3:2 (blue), 7:5 (gray) and 4:3 (orange).}
\label{fig:blobs}
\end{figure}
Though we have identified three different modes of our posterior distribution clustered around three different values of the period ratio, all of the periods discussed thus far refer to the periods of the two planets at epoch; over time, the periods of the two planets will oscillate due to their mutual perturbations. To get a better sense of the mean period ratios of these three modes, and to give a sense of the likelihood in each mode, we randomly sample 1000 points from each of our posterior distributions. For each point, we numerically integrate the system for $500 \, P_c$ and compute the mean values of $P_b$ and $P_c$. These values are plotted in Figure \ref{fig:mean_per}, along with $P(D|\theta)$ for each point. Integrating out the solutions has little effect on the period ratios for points in the 4:3 posterior---these points remain at values slightly larger than a 4:3 period ratio. For the 7:5 posterior, however, the average periods all lie much closer to an exact 7:5 ratio, or slightly below, whereas their initial ratios were generally above 7:5. For the 3:2 points the ratios now all lie above 3:2, while their initial period ratios were all below.
After this long term averaging over orbital elements, the period ratio distributions of our posterior modes lie even more clearly along or near lines of constant period ratio. This provides further support to the idea that these orbital configurations are stabilized by mean motion resonance. We explore this idea further in Section \ref{res}.
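A sketch of this averaging for a single posterior draw is given below, with \texttt{build\_sim} again the hypothetical setup helper and the output cadence illustrative:
\begin{verbatim}
import numpy as np

def mean_periods(params, n_orbits=500, n_out=500):
    # Time-average the osculating periods over n_orbits * P_c.
    sim = build_sim(params)              # hypothetical helper
    times = np.linspace(0.0, n_orbits * params["P_c"], n_out)
    Pb, Pc = [], []
    for t in times:
        sim.integrate(t)
        Pb.append(sim.particles[1].P)    # osculating periods
        Pc.append(sim.particles[2].P)
    return np.mean(Pb), np.mean(Pc)
\end{verbatim}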
\begin{figure}[htbp]
\centering
\epsscale{1.25}
\includegraphics[ width=\linewidth]{mean_per}
\caption{Osculating period values averaged over $500 P_c$ for 1000 draws from the three modes of the posterior distribution identified by our MCMC search. Each mode lies close to a different fixed value of $P_c/P_b$. Colors are the same as those in Figure \ref{fig:blobs}. Lines denoting exact ratios of $P_c/P_b$ are shown for ratios of 3:2 (blue), 7:5 (gray) and 4:3 (orange). See Section \ref{post} for a discussion.}
\label{fig:mean_per}
\end{figure}
\subsection{Stable Regions of Period-Period Space} \label{dart}
To check whether we have identified all of the possible modes, we perform a simple Monte Carlo simulation to analyze the stable regions near the planetary parameters we have identified. We initialize $10^6$ planetary systems, randomly drawing all parameters, except for the planetary periods, from normal distributions centered on the values for the parameters identified from the other modes. We used the following parameters for the normal distributions, where $\mu$ denotes the mean and $\sigma$ the standard deviation: $\mu_{m_b} = 1.7 M_J$, $\sigma_{m_b} = 0.1 M_J$, $\mu_{m_c} = 1.2 M_J$, $\sigma_{m_c} = 0.2 M_J$, $\mu_{\lambda_b} = 300^\circ$, $\sigma_{\lambda_b} = 20^\circ$, $\mu_{\lambda_c} = 250^\circ$, $\sigma_{\lambda_c} = 40^\circ$, $\mu_{\log e_b} = -1.1$, $\sigma_{\log e_b} = 0.2$, $\mu_{\log e_c} = -1.5$, $\sigma_{\log e_c} = 0.1$, $\mu_{\omega_b} = 310^\circ$, $\sigma_{\omega_b} = 100^\circ$, $\mu_{\omega_c} = 200^\circ$, $\sigma_{\omega_c} = 100^\circ$. The periods of the two planets are drawn from uniform distributions in the range 575 to 635 days for planet b, and 790 to 925 days for planet c. Each planetary system is tested for stability in the manner described above, and the stable systems are recorded. A 2D histogram of the stable solutions in period space, along with the 3 modes and the non-stable posterior, is shown in Figure \ref{fig:dart_throw}.
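These draws are straightforward to generate with \texttt{numpy}; in the sketch below, \texttt{survives} is the stability check sketched in Section \ref{rej} and \texttt{system\_from\_draws} is a hypothetical constructor:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()
N = 10**6

draws = {
    "m_b":    rng.normal(1.7, 0.1, N),       # M_J
    "m_c":    rng.normal(1.2, 0.2, N),
    "lam_b":  rng.normal(300.0, 20.0, N),    # degrees
    "lam_c":  rng.normal(250.0, 40.0, N),
    "loge_b": rng.normal(-1.1, 0.2, N),
    "loge_c": rng.normal(-1.5, 0.1, N),
    "w_b":    rng.normal(310.0, 100.0, N),   # degrees
    "w_c":    rng.normal(200.0, 100.0, N),
    "P_b":    rng.uniform(575.0, 635.0, N),  # days
    "P_c":    rng.uniform(790.0, 925.0, N),
}

stable = [i for i in range(N)
          if survives(system_from_draws(draws, i))]
\end{verbatim}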
Several features are apparent from Figure \ref{fig:dart_throw}. Firstly, the stable regions of parameter space lie along diagonals running from the lower left to the upper right, i.e. along lines of constant period ratio. Secondly, there is an extremely stable region of parameter space near the 3:2 MMR, and another stable region at ratios slightly larger than 4:3. Interestingly, the 7:5 mode, which has the overall highest likelihood, lies between these two stable regions. This is likely because the 7:5 resonance is second order, making it weaker than the first order 3:2 and 4:3 resonances it is adjacent to. The scarcity of stable parameter space near the 7:5 emphasizes the need to account for stability when considering the posterior probability distribution of the orbital parameters. From Figure \ref{fig:dart_throw}, it seems that if we are interested in additional possible stable modes, the only possibilities are the two regions near 3:2 and 4:3, which are precisely the other locations our search uncovered. Since any possible modes cannot lie too far from the non-stable posterior, Figure \ref{fig:dart_throw} provides further evidence that we have identified all relevant modes of the long-term stable posterior distribution.
\begin{figure}[htbp]
\centering
\epsscale{1.25}
\includegraphics[trim=0 0 0 30, clip, width=1.1\linewidth]{dart_throw.pdf}
\caption{2D histogram of results from a Monte Carlo simulation of stable planetary systems. Orbital parameters for the two planets are randomly drawn, and systems that pass the stability criteria described in the text are recorded. The non-stable posterior distribution of the planetary periods is shown in red, and the three long-term stable modes are shown in pink.}
\label{fig:dart_throw}
\end{figure}
\section{Analysis of Underlying Mean-Motion Resonance} \label{res}
As discussed in the last section, and demonstrated in Figure \ref{fig:mean_per}, the period ratios of the points in our posterior distribution lie near lines of constant period ratio, which supports the idea that these systems are in MMR. In order to investigate whether our stable best fit solutions are truly in resonance, we track the evolution of the resonant angle, $\phi$, over time. A $p/q$ MMR between a massive perturber and a massless test particle is characterized by libration of the angle
\begin{align} \label{eq:phi}
\phi = p \lambda_{\rm{outer}} - q \lambda_{\rm{inner}} - (p-q) \varpi_{\rm{test}}
\end{align}
(see e.g. \citealt{Murray1998}). For a truly massless test particle, if the semi-major axis ratio and initial angles are perfectly tuned, $\phi$ is constant; for values slightly off from this region, $\phi$ oscillates sinusoidally. In the case of HD 200964, libration of the resonant angle will be complicated by the large masses of both planets---not only are both planets of comparable mass, but in addition both planets are relatively massive compared to the central star. Thus, we do not expect libration of the resonant angle to be particularly ``clean.''
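For reference, the sketch below shows how such an angle can be tracked from a \texttt{REBOUND} simulation; the orbital-element attributes are assumed to be evaluated in the package's default (Jacobi) coordinates, which are close to astrocentric for this system:
\begin{verbatim}
import numpy as np

def resonant_angle(sim, p=7, q=5, test="inner"):
    # phi = p*lambda_outer - q*lambda_inner - (p - q)*pomega_test,
    # wrapped to [0, 2*pi); `test` selects which planet supplies
    # the longitude of pericenter.
    inner, outer = sim.particles[1], sim.particles[2]
    pomega = inner.pomega if test == "inner" else outer.pomega
    phi = p * outer.l - q * inner.l - (p - q) * pomega
    return np.mod(phi, 2.0 * np.pi)
\end{verbatim}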
We begin by discussing our solutions near a 3:2 period ratio, as they most clearly exhibit libration. The resonant angles for the maximum likelihood 3:2 solution are plotted in Figure \ref{fig:phi_lib_3_2}. The two resonant angles, $\phi_{\rm{inner}}$ and $\phi_{\rm{outer}}$, obtained by considering the inner and outer planets to be the test particle in Equation \eqref{eq:phi}, are shown. Both angles show clear libration, albeit with a large amplitude. Thus, it is straightforward to conclude that our long-term stable solutions near a 3:2 period ratio are in a 3:2 MMR.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{HD200964_bestfit_phi_lib_3_2_new}
\epsscale{2.0}
\caption{Value of the inner and outer 3:2 resonant angles for our best-fit 3:2 solution. Both angles clearly librate.}
\label{fig:phi_lib_3_2}
\end{figure}
For our 7:5 solutions however, the situation is more complex. The evolution of the 7:5 resonant angle for our maximum likelihood long term stable solution is shown in Figure \ref{fig:bestfit_phi_lib}. The two resonant angles, $\phi_{\rm{inner}}$ and $\phi_{\rm{outer}}$ are again shown. As can be seen in the figure, there does appear to be periodic variation in the value of $\phi$, but it is complicated by the presence of several other effects, which we enumerate in Figure \ref{fig:phi_lib_ev} by examining the evolution of $\phi$ as both the masses and the mass ratio of the planets involved in the resonance are increased.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{HD200964_bestfit_phi_lib_new}
\epsscale{3.0}
\caption{Value of the inner and outer 7:5 resonant angles for our best-fit solution, which are defined in Equation \eqref{eq:phi}. The angles do appear to show libration, but the large masses of both planets involved in the resonance complicate the libration pattern, as discussed in the text and demonstrated in Figure \ref{fig:phi_lib_ev}.}
\label{fig:bestfit_phi_lib}
\end{figure}
To begin, we plot the value of $\phi_{\rm{inner}}$ for two planets with $M_c = 10^{-4} M_J$ and $M_b = 0$. The angles of the planets are initialized such that the system begins perfectly in resonance, and the resonant angle remains fixed at $\phi = \pi$ over the integration. As we increase the mass of the outer planet, the center of the resonance shifts away from an exact 7:5 period ratio, so the system is initialized slightly off resonance and $\phi$ librates about $\pi$. Increasing the mass further to $0.5 M_J$ adds two new effects---firstly, the period of the libration of the resonant angle decreases dramatically, which is expected as the mass of the planets involved in the resonance increases. Secondly, a much shorter-period variation is now introduced into $\phi$. This variation is caused by the outer planet perturbing the test particle during their closest approach, and therefore occurs on the synodic period of the planets. To illustrate this, we have marked conjunctions between the planets with dashed vertical lines. The strength of these synodic kicks makes the libration of the resonant angle less clear, though it can still be discerned by eye in this case.
If we now give both planets comparable mass, as seen in the top right-hand panel, the fact that the ``test'' particle now has the same mass as the ``perturber'' causes the center of the libration to circulate as well, though the oscillation of $\phi$ about this circulating center can still be clearly discerned.
Finally, we increase the mass of both planets to $0.5 M_J$. In this case, we see a combination of the two effects that were present previously---$\phi$ oscillates about a center that circulates, while the strong synodic kicks cause large oscillations of $\phi$ on a synodic period.
These effects combine to produce the complicated behavior seen in the libration of $\phi_{\rm{inner}}$ for our best fit solution---for such high mass planets, the synodic kicks are extremely strong, and sit on top of the rapid circulation of the center of the resonance. For contrast, in Appendix \ref{3_2_ev} we give analogous plots for the 3:2 resonant angle in Figure \ref{fig:phi_lib_ev_3_2}. In this case, the strength of the 3:2 resonance causes much less significant deviation from the test-particle case, even when both planets are $\sim M_J$.
\begin{figure*}[htbp]
\centering
\includegraphics[width=7in]{phi_lib_ev}
\caption{Evolution of $\phi_{\rm{inner}}$ for the 7:5 MMR as the masses of the planets involved in the resonance are increased. For low mass planets, libration of $\phi_{\rm{inner}}$ is easily discerned (middle left panel). As the mass of the perturbing planet is increased, kicks on a synodic timescale distort the libration pattern (bottom left panel; blue dashed lines denote conjunctions between the planets). If both planets have comparable mass, the center of the libration begins to circulate on the secular timescale (upper right panel). Finally, for large, comparably massive planets, both these effects serve to ``wash out" the libration of $\phi_{\rm{inner}}$ (lower right panel).}
\label{fig:phi_lib_ev}
\end{figure*}
For the 4:3 resonance, we can find orbital configurations that show clear libration even for very massive planets. However, the orbital configurations that match the data well appear to be only marginally in the 4:3 resonance, or not in it at all, since for the 4:3 the complicated behavior seen in $\phi$ cannot be attributed to the large planetary masses alone. To illustrate this, in Appendix \ref{3_2_ev}, Figure \ref{fig:phi_lib_ev_4_3}, we plot the evolution of $\phi$ in a manner analogous to the plots made for the 3:2 and 7:5 MMRs.
For the points in our posterior distribution, we only observed behavior similar to libration for the 4:3 resonance in $\phi_{\rm{outer}}$. An illustration of this is shown in Figure \ref{fig:bestfit_phi_lib_4_3}, which plots the outer resonant angle for a solution that does appear to show libration, and for our best-fit solution, which shows circulation. There appears to be a continuous evolution in behavior as the period of planet c is increased: for lower values of period, the outer 4:3 resonant angle does appear to librate about $\phi = \pi$, which is expected for a 4:3 MMR, though with a complex structure. For the larger period ratio solutions we find, i.e. those near a period ratio of 7:5, the angle appears to circulate instead.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{HD200964_bestfit_phi_lib_4_3}
\epsscale{2.0}
\caption{Value of the outer 4:3 resonant angle for two orbital configurations drawn from our posterior distribution. In the upper panel, we plot $\phi_{\rm{outer}}$ for a case where the period ratio of the planets is close to 4:3, and $\phi_{\rm{outer}}$ appears to librate. In the lower panel we plot the 4:3 outer resonant angle for our maximum likelihood solution; the angle appears to circulate in this case.}
\label{fig:bestfit_phi_lib_4_3}
\end{figure}
In summary, the 3:2 solutions we find are the only ones for which identification of the MMR through libration of the resonant angle is straightforward. For the 7:5 period ratio solutions, $\phi$ does appear to show periodic behavior which is clearly distinct from circulation. Interpretation of this behavior is not straightforward, though it does appear that the behavior of $\phi$ for the 7:5 MMR is consistent with libration for two Jupiter-mass planets perturbing one another. For the 4:3 solutions, we see continuous behavior as the period ratio is increased, ranging from clear libration for period ratios closer to 4:3 to clear circulation for period ratios equal to or larger than 7:5.
\section{Reanalysis of Early Data} \label{reanaly}
Having now found several viable period ratios for long-term stable fits to the data, this now raises the question of whether the multiple resonances we have identified could have been found with just the data published in \citetalias{jphc11}. We therefore apply our methodology to just the Lick and Keck11 datasets, and analyze what aspects of the results we have presented can be found from those data sets alone.
To begin, we use a methodology similar to that discussed in Section \ref{stab_fits} to find a long term stable posterior distribution of orbital parameters. We perform initial optimizations from several different locations in parameter space, including the parameters reported by \citetalias{jphc11}. We then run an initial $N$-body MCMC search from the best-fit obtained through optimization, without stability included, until we have a converged posterior distribution with $\sim 10^6$ points. At this point we perform rejection sampling on our posterior, requiring stability for $10^6$ years, which leaves us with around 200 points in parameter space. This rejection sampling identifies two clear regions of stability, one near a 4:3 period ratio and one near a 3:2. We follow up our rejection sampling with MCMC searches conditioned on stability starting in both of these regions.
The resulting posterior distribution is shown in Figure \ref{fig:per_ratio_early}, along with the $N$-body integrated posterior without stability. As can be seen in the figure, the posterior distribution near 3:2 is quite similar to the one found for our longer dataset, while the 4:3 distribution is broader and at slightly larger values of $P_b$. It is notable here that the stable regions of parameter space are quite distant from the best-fitting region, which for the early data is at low values of period ratio. This result is in contrast to our analysis of the full data set, for which the best-fitting and stable regions lie on top of one another. This means that stability analysis is even more important when the data set is not as complete.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.1\linewidth]{HD200964_early_Pb_Pc}
\caption{2D histograms of the posterior distributions for our planetary parameters from analysis of just the datasets used in \citetalias{jphc11}. The black points show the distribution without long-term stability, while the orange and green points show our posterior conditioned on stability for $10^6 P_c$. Lines denoting exact ratios of $P_c/P_b$ are shown for ratios of 3:2 (blue), 7:5 (gray), 4:3 (orange), and 5:4 (green).}
\label{fig:per_ratio_early}
\end{figure}
Thus, in addition to the 4:3 solution, we can identify the 3:2 orbital solution from analysis of the early data alone. However, it is interesting to note that the 7:5 solutions are not identified by this early search; it is only with the inclusion of more data that the 7:5 is even identified as a solution.
\section{Possibility of a Third Planet} \label{third}
Though the two planet configurations we have identified provide plausible long-term stable fits to the data, it is still possible there are other planets in the system. We briefly investigate this possibility by adding a third planet to our model and investigate the resulting change in our maximum likelihood.
We initialize our fitting of a third planet by looking at a periodogram of the residuals of our data. We take the strongest peak identified by the periodogram, which is at $\sim$7 days, and use this orbital period as our starting point when adding the third planet. Given the low period, the residual could be due to a stellar signature. The rotation period of the star is likely to be too long to be causing this signature: \citet{jps_2015} found a $v \sin i$ value of $1.88 \pm 0.23 \, \rm{km/s}$. Even at the upper end of the error bar, a simple calculation of the rotation period using the value $R_{*}=4.92\,R_{\odot}$ gives $P_{{\rm rot}}=2\pi R_{*}/v\sin i\approx118\,{\rm days}$, which is clearly too long to give the $\sim$7 day planetary signal unless the star is rotating very close to pole-on. On the other hand, the S-index values for HD 200964 do show some power at 8 days in the Keck data set, with a moderate correlation (Pearson correlation coefficient of 0.29), though this signal is not present in the APF data. Furthermore, there is significant power in both datasets around 26 days, which could likely be driving the correlation.
Because the GLS favors a lower period for the third planet, it is unlikely that planet-planet interactions are important for modeling this third body. An initial optimization over the third planet's parameters further reinforces this point, as the optimization favors the third planet having low mass compared to the other two, with $M_d \sim 5 \times 10^{-2} M_J$. To enforce long-term stability in the system, we therefore fix the orbital parameters of planets b and c, and fit only the parameters of planet d. This means we will miss any covariances between the parameters of the hypothetical third planet and the two outer planets, but this method also ensures that the resulting three planet system exhibits long term stability.
We perform an MCMC search over the third planet's parameters, starting from the point identified by our optimization. The underlying parameter space is difficult to probe, with many of the solutions having log likelihoods that are comparable to the two-planet case. We do find orbital configurations that improve our log likelihood enough to be potentially significant. For a simple comparison we use the Bayesian information criterion (BIC) to compare our two models. We note, however, that the BIC is a surrogate for calculating the evidence, which is the more robust method (see e.g. \citealt{l_2007} for a discussion). For a given model, the BIC is calculated via
\begin{align}
\text{BIC} = k \log n - 2 \log \hat{L}
\end{align}
where $\hat{L}$ is the maximum likelihood, $n$ is the number of observations, and $k$ is the number of model parameters (i.e. 15 for the 2 planet case and 20 for the 3 planet case). In order to compare different models we calculate the BIC for each model and select the model with the lowest BIC.
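A minimal sketch of this comparison is given below; the variables \texttt{logL\_2pl}, \texttt{logL\_3pl} and \texttt{n\_obs} are placeholders for the fitted maximum log-likelihoods and the number of radial velocity observations.
\begin{verbatim}
# BIC comparison sketch; logL_2pl, logL_3pl and n_obs are placeholders.
import numpy as np

def bic(log_L_hat, k, n):
    return k * np.log(n) - 2.0 * log_L_hat

delta_bic = bic(logL_2pl, k=15, n=n_obs) - bic(logL_3pl, k=20, n=n_obs)
# delta_bic > 0 favors the three-planet model; only a large difference
# (conventionally >~ 10) would count as strong evidence.
\end{verbatim}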
Our maximum likelihood third planet parameters are similar to those identified above: the planet is low mass ($M_d = 4.22 \times 10^{-2} M_J$), in a short period ($P_d$ = 7.89 days), highly eccentric ($e_d$ = 0.588) orbit. The $\Delta$BIC for this model versus our two planet model is $\Delta\text{BIC} = 4.90$, meaning the three planet model is formally preferred. The radial velocity signal for the third planet with the signals from planets b and c removed is plotted in Figure \ref{fig:planet_d}. Thus, while a three planet model does provide a smaller BIC, the BIC difference between the two models is not large, indicating that the three planet model is not strongly preferred over the two planet model.
\begin{figure}[htbp]
\centering
\plotone{HD200964_planet_d_fold}
\epsscale{2.0}
\caption{Radial velocity of the best-fitting third planet as a function of orbital phase. The radial velocity of planets b and c has been removed.}
\label{fig:planet_d}
\end{figure}
\section{Summary and Conclusions} \label{conc}
In this paper we have investigated the mean motion resonance between the two planets orbiting the star HD 200964. We find that the system is stabilized because it is in, or near, a mean motion resonance. However, which of three possible resonances the system is in (3:2, 4:3, or 7:5) remains unclear, as the full libration period of the system's resonance angle ($\sim$30 years) is longer than the observational baseline ($\sim$14 years). We also find indications of a possible ``low'' mass ($M_p \sim 0.05 M_J$) third planet in the system on a short period ($\sim 8$ day) orbit, though this third planet is not strongly preferred over our two planet model.
Previous analyses (\citetalias{jphc11}) identified the system as being in a 4:3 resonance. By including stability in our searches we were able to identify additional long-term stable solutions near a 3:2 MMR, even using the same data analyzed in \citetalias{jphc11}, though 4:3 solutions remain better fits to this data set. Furthermore, by using radial velocity data spanning a longer timescale than previous works, we found that the best fitting orbital configurations were not in the 3:2 or 4:3 MMR, but instead had period ratios much closer to 7:5.
The original identification of a 4:3 resonance was puzzling on theoretical grounds, as convergent migration of gas giants strongly prefers capture into the 3:2 rather than the 4:3 or 7:5. It is interesting to note that with the inclusion of more data the period ratio has gone up. We conclude that this fact, along with the errors underlying the radial velocity measurements and the long-timescale variation provided by libration of the resonant angle, generates sufficient uncertainty in the period ratio that the 3:2 remains a plausible solution to the observed signal.
If long period observations are not available, it is of paramount importance that long-term stability is included in fitting RV systems in MMR. For these shorter period solutions, the region of parameter space identified by simply finding the best fit to the RV data can be a considerable distance from the regions of parameter space that exhibit long-term stability. Thus, requiring any proposed set of best-fit parameters to exhibit long-term stability is invaluable in identifying the ``true'' values of the underlying planetary system, which may be obscured by the strong perturbations of the planets on one another.
\vspace{1mm}
\noindent The authors wish to thank Daniel Thorngren and Asher Wasserman for numerous insightful discussions on Bayesian statistics and interpretation of MCMC results. We would also like to thank the anonymous referee for their numerous helpful comments and suggestions, which greatly improved the quality of the manuscript. MMR and RMC acknowledge support from NSF CAREER grant number AST-1555385. WJG, BN, RMC, EC, NK, and JY also wish to thank the Heising-Simons foundation for supporting this work. JAB acknowledges support from MIT’s Kavli Institute as a Torres postdoctoral fellow. SSV gratefully acknowledges support from NSF grant AST-0307493. RPB gratefully acknowledges support from NASA OSS Grant NNX07AR40G, the NASA Keck PI program, and from the Carnegie Institution of Washington.
The work herein is based on observations obtained at the W. M. Keck Observatory, which is operated jointly by the University of California and the California Institute of Technology, and we thank the UC-Keck and NASA-Keck Time Assignment Committees for their support. We also wish to extend our special thanks to those of Hawaiian ancestry on whose sacred mountain of Mauna Kea we are privileged to be guests. Without their generous hospitality, the Keck observations presented herein would not have been possible. The work herein was also based on observations obtained at the Automated Planet Finder (APF) facility and its Levy Spectrometer at Lick Observatory. Computations were carried out using the Hummingbird computational cluster at UC Santa Cruz. Simulations in this paper made use of the REBOUND code which is freely available at \url{http://github.com/hannorein/rebound}.
\software{emcee \citep{fhl13}, REBOUND \citep{rl12}, Scipy \citep{scipy}, Pyevolve \citep{p_2009}, corner \citep{f16}.}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\end{equation}}{\end{equation}}
\newcommand{\begin{displaymath}}{\begin{displaymath}}
\newcommand{\end{displaymath}}{\end{displaymath}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\begin{eqnarray*}}{\begin{eqnarray*}}
\newcommand{\end{eqnarray*}}{\end{eqnarray*}}
\renewcommand{\k}{\kappa}
\newcommand{\mu}{\mu}
\newcommand{\nu}{\nu}
\newcommand{\phi}{\phi}
\renewcommand{\r}{\rho}
\newcommand{\varrho}{\varrho}
\newcommand{{\cal H}}{{\cal H}}
\newcommand{{\cal N}}{{\cal N}}
\newcommand{{\cal D}}{{\cal D}}
\newcommand{{\cal O}}{{\cal O}}
\newcommand{{\cal L}}{{\cal L}}
\newcommand{\scriptscriptstyle}{\scriptscriptstyle}
\newcommand{\labell}[1]{\label{#1}\qquad_{#1}}
\newcommand{\reef}[1]{(\ref{#1})}
\newcommand{\nonumber}{\nonumber}
\newcommand{\partial}{\partial}
\newcommand{\textrm{d}}{\textrm{d}}
\newcommand{\mt}[1]{\textrm{\tiny #1}}
\newcommand{{\cal F}}{{\cal F}}
\newcommand{{\cal A}}{{\cal A}}
\newcommand{{\cal B}}{{\cal B}}
\newcommand{\ell_s}{\ell_s}
\newcommand{\zD}{\ensuremath{z_{D7}}}
\newcommand{\tzD}{\ensuremath{\zeta_m}}
\newcommand{\ensuremath{\tilde{\rho}}}{\ensuremath{\tilde{\rho}}}
\newcommand{\ensuremath{\tilde{z}}}{\ensuremath{\tilde{z}}}
\newcommand{\Rl}{\ensuremath{(R/l_s)^2}}
\newcommand{\tc}{\ensuremath{\sqrt{g_s N}}}
\newcommand{\stc}{\ensuremath{(g_sN)^{\frac{1}{4}}}}
\newcommand{\mq}{\ensuremath{m_q}}
\newcommand{\ensuremath{\mbox{\small eff.}}}{\ensuremath{\mbox{\small eff.}}}
\newcommand{\mbox{${\cal N}$}}{\mbox{${\cal N}$}}
\newcommand{\ensuremath{{\cal Y}}}{\ensuremath{{\cal Y}}}
\newcommand{\ensuremath{{\cal Y}^{\ell,\pm}}}{\ensuremath{{\cal Y}^{\ell,\pm}}}
\newcommand{\ensuremath{\frac{\ell}{2}}}{\ensuremath{\frac{\ell}{2}}}
\newcommand{\ensuremath{SU(2)_R\times SU(2)_L}}{\ensuremath{SU(2)_R\times SU(2)_L}}
\newcommand{\ensuremath{\bar{\rho}}}{\ensuremath{\bar{\rho}}}
\newcommand{\ensuremath{{SU(N)}} }{\ensuremath{{SU(N)}} }
\newcommand{\ensuremath{\frac{1}{2}}}{\ensuremath{\frac{1}{2}}}
\newcommand{\ensuremath{M_{\pi}}}{\ensuremath{M_{\pi}}}
\newcommand{\ensuremath{\Lambda_{\mbox{\small QCD}}}}{\ensuremath{\Lambda_{\mbox{\small QCD}}}}
\newcommand{\ensuremath{\chi+i\,e^{-\phi}}}{\ensuremath{\chi+i\,e^{-\phi}}}
\newcommand{\ensuremath{SL(2,\bbz{})}}{\ensuremath{SL(2,\bbz{})}}
\newcommand{\ensuremath{{\mathcal Im}}}{\ensuremath{{\mathcal Im}}}
\newcommand{\ensuremath{\bar{1}}}{\ensuremath{\bar{1}}}
\newcommand{\ensuremath{\bar{2}}}{\ensuremath{\bar{2}}}
\newcommand{\ensuremath{\bar{\imath}}}{\ensuremath{\bar{\imath}}}
\newcommand{\ensuremath{\bar{\jmath}}}{\ensuremath{\bar{\jmath}}}
\newcommand{\ensuremath{\bar{k}}}{\ensuremath{\bar{k}}}
\newcommand{\ensuremath{\bar{l}}}{\ensuremath{\bar{l}}}
\newcommand{\ensuremath{\bar{a}}}{\ensuremath{\bar{a}}}
\newcommand{\ensuremath{\bar{b}}}{\ensuremath{\bar{b}}}
\newcommand{\ensuremath{\bar{c}}}{\ensuremath{\bar{c}}}
\newcommand{\ensuremath{\bar{d}}}{\ensuremath{\bar{d}}}
\newcommand{\ensuremath{\bar{z}}}{\ensuremath{\bar{z}}}
\newcommand{\ensuremath{\bar{w}}}{\ensuremath{\bar{w}}}
\newcommand{\ensuremath{\bar{\zeta}}}{\ensuremath{\bar{\zeta}}}
\newcommand{\ensuremath{\bar{\tau}}}{\ensuremath{\bar{\tau}}}
\newcommand{\ensuremath{\bar{A}}}{\ensuremath{\bar{A}}}
\newcommand{\ensuremath{\bar{B}}}{\ensuremath{\bar{B}}}
\newcommand{\ensuremath{\bar{C}}}{\ensuremath{\bar{C}}}
\newcommand{\ensuremath{\bar{D}}}{\ensuremath{\bar{D}}}
\newcommand{\N}[1]{\ensuremath{{\cal N}=#1}}
\newcommand{\ensuremath{\tilde{K}}}{\ensuremath{\tilde{K}}}
\newcommand{{\bf Ai}}{{\bf Ai}}
\newcommand{{\bf I}}{{\bf I}}
\newcommand{{\bf J}}{{\bf J}}
\newcommand{{\bf K}}{{\bf K}}
\newcommand{\ensuremath{\tilde{\eta}}}{\ensuremath{\tilde{\eta}}}
\newcommand{\ensuremath{\bar{\partial}}}{\ensuremath{\bar{\partial}}}
\def\tilde{\lambda} {\tilde{\lambda}}
\def\tilde{r} {\tilde{r}}
\def\tilde{\rho} {\tilde{\rho}}
\def r_\mt{vac}{r_\mt{vac}}
\newcommand{\ensuremath{\vec{n}}}{\ensuremath{\vec{n}}}
\newcommand{\ensuremath{\tilde{\lambda}}}{\ensuremath{\tilde{\lambda}}}
\newcommand{\ensuremath{\cos\theta}}{\ensuremath{\cos\theta}}
\newcommand{\ensuremath{\sin\theta}}{\ensuremath{\sin\theta}}
\newcommand{\ensuremath{\partial_\sigma}}{\ensuremath{\partial_\sigma}}
\newcommand{\ensuremath{\dot{\theta}}}{\ensuremath{\dot{\theta}}}
\newcommand{\ensuremath{\dot{\varphi}}}{\ensuremath{\dot{\varphi}}}
\newcommand{\ensuremath{\varphi}}{\ensuremath{\varphi}}
\newcommand{\ensuremath{\partial_t}}{\ensuremath{\partial_t}}
\newcommand{\ensuremath{\partial_{\tau}}}{\ensuremath{\partial_{\tau}}}
\newcommand{\ensuremath{\tilde{\sigma}}}{\ensuremath{\tilde{\sigma}}}
\newcommand{\ensuremath{\varepsilon_i}}{\ensuremath{\varepsilon_i}}
\newcommand{\ensuremath{\sigma_0}}{\ensuremath{\sigma_0}}
\newcommand{\ensuremath{\mathrm{N}}}{\ensuremath{\mathrm{N}}}
\newcommand{\ensuremath{\NC^{rs}_{mn}}}{\ensuremath{\ensuremath{\mathrm{N}}^{rs}_{mn}}}
\newcommand{\ensuremath{\NC^{rs}_{mn}(\ei,\sz)}}{\ensuremath{\ensuremath{\mathrm{N}}^{rs}_{mn}(\ensuremath{\varepsilon_i},\ensuremath{\sigma_0})}}
\newcommand{\ensuremath{1+\ensuremath{\sin\frac{\sz}{2}}}}{\ensuremath{1+\ensuremath{\sin\frac{\sz}{2}}}}
\newcommand{\ensuremath{\sin\frac{\sz}{2}}}{\ensuremath{\sin\frac{\ensuremath{\sigma_0}}{2}}}
\newcommand{\ensuremath{\cos\frac{\sz}{2}}}{\ensuremath{\cos\frac{\ensuremath{\sigma_0}}{2}}}
\newcommand{\ensuremath{\mathrm{P}^l_m(\cos\sz)}}{\ensuremath{\mathrm{P}^l_m(\cos\ensuremath{\sigma_0})}}
\newcommand{\ensuremath{\mathrm{sign}}}{\ensuremath{\mathrm{sign}}}
\newcommand{\ensuremath{\hat{P}}}{\ensuremath{\hat{P}}}
\newcommand{\ensuremath{\mathbb{I}}}{\ensuremath{\mathbb{I}}}
\newcommand{{\cal E }}{{\cal E }}
\newcommand{\ensuremath{\mbox{arccosh}}}{\ensuremath{\mbox{arccosh}}}
\newcommand{\ensuremath{\mbox{cotan}}}{\ensuremath{\mbox{cotan}}}
\newcommand{\ensuremath{\mathcal{U}}}{\ensuremath{\mathcal{U}}}
\begin{document}
\title{\LARGE \bf Notes on Euclidean Wilson loops and Riemann Theta functions}
\author{Riei Ishizeki, Martin Kruczenski, Sannah Ziama\thanks{E-mail: \texttt{[email protected], [email protected], [email protected]}}\\
Department of Physics, Purdue University, \\
525 Northwestern Avenue, W. Lafayette, IN 47907-2036. }
\maketitle
\begin{abstract}
The AdS/CFT correspondence relates Wilson loops in \N{4} SYM theory to minimal
area surfaces in \ads{5} space. In this paper we consider the case of Euclidean flat Wilson loops which
are related to minimal area surfaces in Euclidean \ads{3} space. Using known mathematical results for such
minimal area surfaces we describe an infinite parameter family of analytic solutions for closed Wilson loops.
The solutions are given in terms of Riemann theta functions and the validity of the equations of motion is proven
based on the trisecant identity. The world-sheet has the topology of a disk and the renormalized area is written
as a finite, one-dimensional contour integral over the world-sheet boundary. An example is discussed in detail
with plots of the corresponding surfaces. Further, for each Wilson loop we explicitly construct a one-parameter family of
deformations that preserve the area. The parameter is the so-called spectral parameter. Finally, for genus three
we find a map between these Wilson loops and closed curves inside the Riemann surface.
\end{abstract}
\clearpage
\newpage
\section{Introduction}
\label{intro}
One of the first results of the AdS/CFT correspondence \cite{malda} was the computation of Wilson loops and from there the quark anti-quark potential
as done by Maldacena, Rey and Yee \cite{MRY}. Although much work has been devoted to the computation of Wilson loops, only a few explicit examples
of minimal area surfaces in \ads{5} space are known. In the case of closed Euclidean Wilson loops (with constant scalar) the most studied one is the
circular Wilson loop \cite{cWL} which is dual to a half-sphere. The only other one we are aware of is the two intersecting arcs (lens shaped) \cite{lens}.
For infinite Wilson loops, parallel lines \cite{MRY} and the cusp are known \cite{DGO}. In the case of multiple contours as for example two concentric
circles, interesting results were found using integrability \cite{DF}. In the language we use here they correspond to elliptic functions which appear
for genus one $g=1$.
In the case of Minkowski signature AdS space and in particular light-like lines more is known starting with the light-like cusp \cite{cusp} and
culminating with a large recent activity \cite{scatampl} in relation to scattering amplitudes following \cite{AM, AM2}.
In this paper we point out that in the case of flat Euclidean Wilson loops which are dual to minimal area surfaces in Euclidean \ads{3}, much
can be done by using known results from the mathematical literature \cite{BB}. In fact, an infinite parameter family of solution is known in terms
of Riemann theta functions.
This type of construction using theta functions is described in detail in \cite{BBook} and was already used in the case of strings moving
in $t\times S^3$ by Dorey and Vicedo \cite{DV} and in the case of an Euclidean world-sheet inside \ads{3} space by Sakai and Satoh in \cite{SS}.
Here we consider
Euclidean Wilson loops inside Euclidean \ads{3} and rederive the original results by perhaps more pedestrian methods based on the trisecant identity
for theta functions. In this way theta functions are thought as special functions whose properties fit well with the equations of motion of the string in \ads{3}
space much in the same way as trigonometric functions fit the harmonic oscillator equation. Each theta function and therefore each Wilson loop is associated
with an auxiliary Riemann surface of given genus $g$. A relatively simple formula is derived for the area and an example for genus $g=3$ is worked out in detail. Perhaps our main contribution is to find closed Wilson loops and to derive a formula for the renormalized area that follows the AdS/CFT prescription.
The calculations are done at the classical level, it should be interesting to extend them for example to one-loop as can be done in the case of the circular
Wilson loop \cite{olcWL}.
This paper is organized as follows. We start by writing the equations of motion and use the Pohlmeyer reduction procedure to simplify the equations
and arrive at the cosh-Gordon equation plus a set of linear equations. In the following section we review the properties of the theta functions and show how
they can be used to solve the equations of motion and compute the regularized area. Finally we construct a particular example of genus three where we
show that there are closed Wilson loops that can be described by this method. We plot the corresponding surfaces and compute the areas. Besides, we also describe
the mapping of the Wilson loop into a curve embedded inside the Riemann surface. In the last section we give our conclusions.
\section{Equations of motion}
In this section we write the equations of motion and simplify them using the Pohlmeyer reduction \cite{Pohlmeyer}. In the context of Minkowski space-time
this procedure was used by Jevicki and Jin \cite{JJ} to find new spiky string \cite{spiky} solutions and by Alday and Maldacena \cite{AM2} to compute certain light-like
Wilson loops. In the case of Euclidean \ads{3} that we are interested in here, we can use embedding coordinates $X_{\mu=0\ldots 3}$ parameterizing a
space $R^{3,1}$ and subjected to the constraint
\begin{equation}
X_0^2 - X_1^2 - X_2^2 - X_3^2 = 1\ ,
\label{Xsq1}
\end{equation}
with an obvious $SO(3,1)\equiv SL(2,\mathbb{C})$ global invariance. The space has an $S^2$ boundary at infinity.
Other useful coordinates are Poincare coordinates $(X,Y,Z)$ given by:
\begin{equation}
X+iY= \frac{X_1+iX_2}{X_0-X_3}, \ \ \ \ Z= \frac{1}{X_0-X_3}.
\end{equation}
The boundary is now an $R^2$ space and located at $Z=0$.
A string is parameterized by world-sheet coordinates $\sigma_a=(\sigma,\tau)$ or equivalently complex coordinates
$z=\sigma+i\tau$, $\bar{z}=\sigma-i\tau$. The action in conformal gauge is given by
\begin{eqnarray}
S &=& \ensuremath{\frac{1}{2}} \int \left( \partial X_\mu \bar{\partial} X^\mu - \Lambda (X_\mu X^\mu -1)\right) \ d\sigma\, d\tau\\
&=& \ensuremath{\frac{1}{2}} \int \frac{1}{Z^2}\left(\partial_a X \partial^a X +\partial_a Y \partial^a Y +\partial_a Z \partial^a Z \right) \ d\sigma\, d\tau,
\label{action}
\end{eqnarray}
where $\Lambda$ is a Lagrange multiplier and the $\mu$ indices are raised and lowered with the $R^{3,1}$ metric.
An Euclidean classical string is given by functions $X_\mu(z,\bar{z})$ obeying the equations of motion:
\begin{equation}
\partial \bar{\partial} X_\mu = \Lambda X_\mu\ ,
\label{eomX}
\end{equation}
where $\Lambda$, the Lagrange multiplier is given by
\begin{equation}
\Lambda = -\partial X_\mu \bar{\partial} X^{\mu}. \label{Lcomp}
\end{equation}
These equations should be supplemented by the Virasoro constraints which read
\begin{equation}
\partial X_\mu \partial X^\mu = 0 = \bar{\partial} X_\mu \bar{\partial} X^\mu.
\label{Vc1}
\end{equation}
Later on we will be interested in finding the solutions in Poincare coordinates $(X,Y,Z)$ but for the moment it is convenient
to study the problem in embedding coordinates $X_\mu$. We can rewrite the equations using the matrix
\begin{equation}
\mathbb{X} = \left(\begin{array}{cc} X_0+X_3 & X_1-i X_2 \\ X_1 + i X_2 & X_0-X_3 \end{array} \right) = X_0 + X_i \sigma^i \ ,
\end{equation}
where $\sigma^i$ denote the Pauli matrices. Notice also that Poincare coordinates are simply given by
\begin{equation}
Z = \frac{1}{\mathbb{X}_{22}} , \ \ \ X+iY =\frac{\mathbb{X}_{21}}{\mathbb{X}_{22}}.
\label{PoinX}
\end{equation}
The matrix $\mathbb{X}$ satisfies
\begin{equation}
\mathbb{X}^\dagger = \mathbb{X} , \ \ \det \mathbb{X} = 1, \ \ \ \partial\bar{\partial} \mathbb{X}=\Lambda \mathbb{X}, \ \
\det(\partial\mathbb{X})=0=\det(\bar{\partial}\mathbb{X})\ ,
\end{equation}
as follows from the definition of $\mathbb{X}$, the constraint (\ref{Xsq1}), the equations of motion (\ref{eomX}) and the Virasoro constraints (\ref{Vc1}).
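These algebraic statements, and the map (\ref{PoinX}), are easy to verify numerically. The following minimal check (an illustration only) picks a random point on the hyperboloid:
\begin{verbatim}
# Check det X = 1, hermiticity, and the Poincare map Z = 1/X_22,
# X + iY = X_21/X_22 for a random point with X_0^2-X_1^2-X_2^2-X_3^2 = 1.
import numpy as np

rng = np.random.default_rng(1)
X1, X2, X3 = rng.normal(size=3)
X0 = np.sqrt(1.0 + X1**2 + X2**2 + X3**2)       # upper sheet, X_0 - X_3 > 0

XX = np.array([[X0 + X3, X1 - 1j*X2],
               [X1 + 1j*X2, X0 - X3]])
assert np.isclose(np.linalg.det(XX).real, 1.0)  # hyperboloid constraint
assert np.allclose(XX, XX.conj().T)             # X is hermitian

Z = 1.0 / XX[1, 1].real
W = XX[1, 0] / XX[1, 1]                         # W = X + iY
print(Z, W.real, W.imag)
\end{verbatim}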
We can solve the constraint $\mathbb{X}^\dagger=\mathbb{X}$ by writing
\begin{equation}
\mathbb{X}=\mathbb{A}\mathbb{A}^\dagger, \ \ \ \ \det \mathbb{A}=1 , \ \ \ \ \mathbb{A} \in SL(2,\mathbb{C}).
\end{equation}
The equations of motion have a global $SL(2,\mathbb{C})\equiv SO(3,1)$ symmetry under which
\begin{equation}
\mathbb{X} \rightarrow U\mathbb{X}U^\dagger, \ \ \ \mathbb{A}\rightarrow U\mathbb{A}, \ \ \ U\in SL(2,\mathbb{C}).
\end{equation}
In the new variable there is an $SU(2)$ gauge symmetry
\begin{equation}
\mathbb{A} \rightarrow \mathbb{A} \ensuremath{\mathcal{U}}, \ \ \ \ensuremath{\mathcal{U}}(z,\bar{z})\in SU(2)\ ,
\end{equation}
since this leaves $\mathbb{X}$ invariant. We can define the current
\begin{equation}
J = \mathbb{A}^{-1} \partial \mathbb{A}, \ \ \ \bar{J}=\mathbb{A}^{-1} \bar{\partial} \mathbb{A}\ ,
\label{Jdef}
\end{equation}
which is invariant under the global symmetry and, under the local symmetry transform as
\begin{equation}
J \rightarrow \ensuremath{\mathcal{U}}^{\dagger} J\ensuremath{\mathcal{U}} + \ensuremath{\mathcal{U}}^\dagger \partial \ensuremath{\mathcal{U}}, \ \ \ \bar{J} \rightarrow \ensuremath{\mathcal{U}}^{\dagger} \bar{J} \ensuremath{\mathcal{U}} + \ensuremath{\mathcal{U}}^\dagger \bar{\partial} \ensuremath{\mathcal{U}}.
\end{equation}
From the definition of $J$, $\bar{J}$, the property that $\det \mathbb{A}=1$, the equations of motion and the constraints we find:
\begin{eqnarray}
\bar{\partial} J -\partial\bar{J} + [\bar{J},J] &=& 0\ ,\\
\mbox{Tr} J = \mbox{Tr} \bar{J} &=& 0\ ,\\
\partial ( \bar{J} + J^\dagger ) + \ensuremath{\frac{1}{2}} [J-\bar{J}^\dagger,\bar{J}+J^\dagger] &=& 0\ ,\\
\det (\bar{J}+J^\dagger) &=& 0\ ,
\end{eqnarray}
and the corresponding equations found by hermitian conjugations of the ones given. In the third equation we used that for $SU(2)$ currents we
have for example:
\begin{equation}
J \bar{J} = \ensuremath{\frac{1}{2}} [J,\bar{J}] + \ensuremath{\frac{1}{2}} \mbox{Tr}\left( J\bar{J}\right)\ ,
\end{equation}
and similarly for the other products. The trace part gives the Lagrange multiplier $\Lambda$ as:
\begin{equation}
\Lambda = \ensuremath{\frac{1}{2}} \mbox{Tr} \left(\left(J+\bar{J}^\dagger\right) \left(J^\dagger+\bar{J}\right)\right) \label{LJ} \ ,
\end{equation}
which does not provide an equation but is useful later to determine the world-sheet metric.
From the form of the equations it seems convenient to define
\begin{equation}
\mathcal{A} =\ensuremath{\frac{1}{2}}(\bar{J}+J^\dagger), \ \ \ \mathcal{B}=\ensuremath{\frac{1}{2}}(J-\bar{J}^\dagger).
\label{ABdef}
\end{equation}
The equations read now
\begin{eqnarray}
\mbox{Tr} \mathcal{A} = \mbox{Tr} \mathcal{B} &=& 0\ , \label{eqAB1}\\
\det \mathcal{A} &=&0 \ ,\label{eqAB2} \\
\partial \mathcal{A} + [\mathcal{B},\mathcal{A}] &=& 0 \ , \label{eqAB3}\\
\bar{\partial} \mathcal{B} + \partial \mathcal{B}^\dagger &=& [\mathcal{B}^\dagger,\mathcal{B}] + [\mathcal{A}^\dagger,\mathcal{A}]. \label{eqAB4}
\end{eqnarray}
The $SU(2)$ gauge symmetry acts on these currents as
\begin{equation}
\mathcal{A} \rightarrow \ensuremath{\mathcal{U}}^\dagger \mathcal{A} \ensuremath{\mathcal{U}}, \ \ \ \ \mathcal{B} \rightarrow \ensuremath{\mathcal{U}}^\dagger \mathcal{B} \ensuremath{\mathcal{U}} +\ensuremath{\mathcal{U}}^\dagger\partial \ensuremath{\mathcal{U}},\ \ \
\ensuremath{\mathcal{U}}(z,\bar{z}) \in SU(2).
\end{equation}
In a sense, $\mathcal{B}$ plays the role of a gauge field.
Since $ \mbox{Tr} \mathcal{A} = 0 $ we can write in terms of Pauli matrices $\sigma^j$:
\begin{equation}
\mathcal{A} = \left(\mathcal{A}^{(1)}_j + i \mathcal{A}^{(2)}_j\right) \sigma^j \ ,
\end{equation}
with $\mathcal{A}^{1,2}$ two real three-dimensional vectors. The property $\det \mathcal{A} =0$ implies that they are orthogonal and of the same
length. The $SU(2)$ gauge symmetry acts on them as three-dimensional rotations, so we can take $\mathcal{A}^{(1)}$ to be along the $\hat{x}$ axis, and
$\mathcal{A}^{(2)}$ along the $\hat{y}$ axis. In that way we can choose the gauge such that
\begin{equation}
\mathcal{A} = \ensuremath{\frac{1}{2}} e^{\alpha(z,\bar{z})} (\sigma_1+ i \sigma_2) = e^{\alpha(z,\bar{z})} \sigma_+ \ ,
\end{equation}
where $\alpha(z,\bar{z})$ is a real function and $\sigma_+ = \ensuremath{\frac{1}{2}}(\sigma_1+i\sigma_2) = \left(\begin{array}{cc} 0&1\\ 0 & 0 \end{array}\right)$.
Equation (\ref{eqAB3}), namely $\partial \mathcal{A} + [\mathcal{B},\mathcal{A}] =0$, then implies that
\begin{equation}
\mathcal{B} = -\ensuremath{\frac{1}{2}} \partial \alpha \sigma_z + b_2(z,\bar{z}) \sigma_+ \ ,
\end{equation}
for some function $b_2(z,\bar{z})$.
Finally the equation for $\mathcal{B}$ implies that $b_2=f(z) e^{-\alpha}$ for an arbitrary holomorphic function $f(z)$. It also implies that
\begin{equation}
\partial\bar{\partial}\alpha = e^{2\alpha} +f \bar{f} e^{-2\alpha} .
\label{alphafeq}
\end{equation}
So, up to a gauge transformation the most general solution is given by
\begin{eqnarray}
\mathcal{A} &=& e^{\alpha} \sigma_+ \ ,\\
\mathcal{B} &=& -\ensuremath{\frac{1}{2}} \partial \alpha \sigma_z + f(z) e^{-\alpha} \sigma_+ \ ,
\end{eqnarray}
with $\alpha$ satisfying eq.(\ref{alphafeq}).
Finally the function $f(z)$ can be locally eliminated by changing to world-sheet coordinates $w$ such that $dw=\sqrt{f} dz$. If we further redefine
$\alpha\rightarrow \alpha + \frac{1}{4}\ln(f\bar{f})$ and do a gauge transformation with $\ensuremath{\mathcal{U}}=e^{\frac{i\phi}{2}\sigma_z}$
(where $i\phi=\frac{1}{4}\ln(f/\bar{f})$) then the result is equivalent to setting $f=1$.
From the equations of motion for $\mathcal{A}$ and $\mathcal{B}$, namely eqns.(\ref{eqAB1})-(\ref{eqAB4}) it can be seen that they are invariant
under multiplying $\mathcal{A}$ by a constant of modulus one that we call $\bar{\lambda}$ ($|\lambda|=1$). We get:
\begin{eqnarray}
\mathcal{A} &=& \bar{\lambda} e^{\alpha} \sigma_+ \ ,\\
\mathcal{B} &=& -\ensuremath{\frac{1}{2}} \partial \alpha \sigma_z + e^{-\alpha} \sigma_+ \ ,\\
\partial\bar{\partial}\alpha &=& 2 \cosh(2\alpha) \ ,
\end{eqnarray}
where we set $f=1$ by the reasons indicated before.
The constant $\lambda$ can be eliminated by a gauge transformation and a redefinition of $\alpha$ but we keep it for later convenience. It is called
the spectral parameter and should not be confused with the coupling constant in the dual gauge theory.
Having computed $\mathcal{A}$ and $\mathcal{B}$ we can use eq.(\ref{ABdef}) to reconstruct $J$ and $\bar{J}$ obtaining:
\begin{equation}
J=\left(\begin{array}{cc}-\ensuremath{\frac{1}{2}}\partial\alpha & e^{-\alpha}\\ \lambda e^{\alpha} & \ensuremath{\frac{1}{2}} \partial\alpha \end{array}\right), \ \ \ \
\bar{J}=\left(\begin{array}{cc}\ensuremath{\frac{1}{2}} \bar{\partial}\alpha & \bar{\lambda} e^{\alpha}\\ -e^{-\alpha} & -\ensuremath{\frac{1}{2}} \bar{\partial}\alpha \end{array}\right) .\label{JJbar}
\end{equation}
Finally we should use eq.(\ref{Jdef}) to compute $\mathbb{A}$. Summarizing we need first to solve the equation:
\begin{equation}
\partial\bar{\partial}\alpha = 2\cosh 2\alpha\ , \label{alphaeqn}
\end{equation}
then plug $\alpha$ into the definitions for $J$, $\bar{J}$, namely eq.(\ref{JJbar}), and solve for $\mathbb{A}$:
\begin{eqnarray}
\partial \mathbb{A} &=& \mathbb{A} J \ , \label{A1eqn} \\
\bar{\partial} \mathbb{A} &=& \mathbb{A} \bar{J} . \label{A2eqn}
\end{eqnarray}
Finally, the string solution is determined as $\mathbb{X}=\mathbb{A}\mathbb{A}^\dagger$. The equation for $\alpha$ is non-linear but the ones for $\mathbb{A}$ are linear since $J$, $\bar{J}$ are known once $\alpha$ is known. This is the main idea of the Pohlmeyer reduction \cite{Pohlmeyer} which we rederive here as it applies to our particular problem. Similar considerations in the context of string theory are well-known, for example see \cite{JJ}, \cite{DDJK}, \cite{AM2}, \cite{SS}.
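In real world-sheet coordinates $\partial\bar{\partial}=\frac{1}{4}\nabla^2$, so the nonlinear step amounts to the elliptic problem $\nabla^2\alpha=8\cosh 2\alpha$. Independently of the exact theta-function solutions constructed below, the structure of this problem can be illustrated by a simple fixed-point relaxation on a small domain (the zero Dirichlet data below are an arbitrary choice, not one of our solutions):
\begin{verbatim}
# Fixed-point (Jacobi) relaxation of nabla^2 alpha = 8 cosh(2 alpha),
# i.e. the cosh-Gordon equation with del delbar = nabla^2 / 4, on a
# small square with alpha = 0 on the boundary (arbitrary test data).
import numpy as np

N, L = 41, 0.4
h = L / (N - 1)
alpha = np.zeros((N, N))

for _ in range(5000):
    nb = (alpha[:-2, 1:-1] + alpha[2:, 1:-1] +
          alpha[1:-1, :-2] + alpha[1:-1, 2:])
    alpha[1:-1, 1:-1] = 0.25 * (nb - h*h*8.0*np.cosh(2.0*alpha[1:-1, 1:-1]))

res = (alpha[:-2, 1:-1] + alpha[2:, 1:-1] + alpha[1:-1, :-2] + alpha[1:-1, 2:]
       - 4.0*alpha[1:-1, 1:-1]) / h**2 - 8.0*np.cosh(2.0*alpha[1:-1, 1:-1])
print(np.abs(res).max())   # PDE residual on the interior points
\end{verbatim}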
Notice that, since $\mbox{Tr}J=\mbox{Tr}\bar{J}=0$ we
automatically find that $\det \mathbb{A}$ is constant independent of $z,\bar{z}$. However we need $\det \mathbb{A}=1$ so we just need to normalize
$\mathbb{A}$ dividing by an appropriate constant. Furthermore it is convenient to write
\begin{equation}
\mathbb{A} = \left(\begin{array}{cc}\psi_1 & \psi_2\\ \tilde{\psi}_1 & \tilde{\psi}_2\end{array}\right)\ ,
\end{equation}
where the vectors $\psi=(\psi_1,\psi_2)$ and $\tilde{\psi}=( \tilde{\psi}_1, \tilde{\psi}_2)$ are linearly independent and satisfy
\begin{equation}
\partial \psi = \psi J, \ \ \ \bar{\partial} \psi = \psi \bar{J}\ , \label{p12eq}
\end{equation}
and the same for $\tilde{\psi}$. They have to be linearly independent so the determinant $(\psi_1 \tilde{\psi}_2 -\psi_2 \tilde{\psi}_1)$ is non-vanishing
(but is constant as discussed before). Even with these conditions there is a certain ambiguity in choosing $\psi$, $\tilde{\psi}$, but these ambiguities boil down to
$SL(2,\mathbb{C})\equiv SO(3,1)$ transformations of $\mathbb{X}$.
\section{Solutions}
As shown in \cite{BB} solutions to eqns.(\ref{alphaeqn},\ref{A1eqn},\ref{A2eqn}) can be found using theta functions. In this section we rederive
the results of \cite{BB} in a perhaps more pedestrian way. The method we use is to consider the theta functions as special functions whose derivatives are such that
they are good candidates to solve eqns.(\ref{alphaeqn},\ref{A1eqn},\ref{A2eqn}) in much the same way as trigonometric functions are good candidates to solve the harmonic oscillator.
The equations are solved by simple substitution and adjustment of the parameters. To start we need to review some properties of the theta functions.
\subsection{Riemann theta functions and their properties.}
There is a vast literature on Riemann theta functions \cite{ThF}. In this section we review the minimal knowledge necessary to find
solutions to the equations. We follow the notation of \cite{FK} which also gives a good introduction to Riemann surfaces. Notice also
that in the next section we develop an example in detail which can be read in parallel with this section.
Consider a compact Riemann surface of genus $g$ with fundamental cycles $a_i$, $b_i$ ($i=1\ldots g$) and intersections
\begin{equation}
a_i \circ a_j =0 = b_i \circ b_j, \ \ \ a_i\circ b_j = \delta_{ij}\ ,
\end{equation}
which means that $a_i$ only intersects $b_i$. The Riemann surface is taken to be a hyperelliptic one defined by the function:
\begin{equation}
\mu(\lambda) = \sqrt{\lambda \prod_{j=1}^{2g} (\lambda-\lambda_j)} .
\end{equation}
The square root has cuts with branching points at $0,\infty, \lambda_j$ but is well defined in a double cover of the complex plane.
This double cover is the Riemann surface we consider. For all values of $\lambda\neq 0,\infty, \lambda_j$ there are two points on the Riemann surface,
one in the upper sheet, and one in the lower sheet.
Consider now $\omega_{i=1\ldots g}$ to be the unique basis of holomorphic abelian differentials
satisfying $\oint_{a_i} \omega_j = \delta_{ij}$, and define the $g\times g$ period matrix
\begin{equation}
\Pi_{ij} = \oint_{b_i} \omega_j.
\label{Pidef}
\end{equation}
It can be proved that $\Pi$ is a symmetric matrix and its imaginary part is positive definite allowing the definition of an associated $\theta$-function
\begin{equation}
\theta(\zeta) = \sum_{n\in \mathbb{Z}^g} e^{ 2\pi i \left(\frac{1}{2}n^t\Pi n+n^t \zeta\right)}.
\end{equation}
The arguments of the $\theta$-function are $\zeta$ which is a vector in $\mathbb{C}^g$ and the period matrix $\Pi$ (which we consider fixed and therefore do not
explicitly write as an argument). The sum runs over all $n\in\mathbb{Z}^g$, that is, over all $g$-component vectors with integer entries. All vectors ({\it e.g.}\ $n,\zeta$) are
taken to be column vectors (and therefore their transposes $n^t,\zeta^t$ are row vectors).
Simple but important properties of the theta function are
\begin{equation}
\theta(-\zeta)=\theta(\zeta)\ ,
\end{equation}
and the (quasi)-periodicity:
\begin{equation}
\theta\left(\zeta+\Delta_{2}+\Pi \Delta_{1}\right) =e^{-2\pi i\left[\Delta_{1}^t \zeta+\ensuremath{\frac{1}{2}}\Delta_{1}^t\Pi \Delta_{1}\right]}
\theta(\zeta)\ ,
\end{equation}
where $\Delta_1,\Delta_2\in \mathbb{Z}^g$, namely are vectors with integer components.
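Since these properties are used repeatedly below, it is reassuring (though of course not a proof) to check them against a brute-force truncation of the defining sum; the genus-two period matrix in the sketch is an arbitrary admissible example:
\begin{verbatim}
# Truncated evaluation of theta(zeta) and numerical checks of evenness
# and quasi-periodicity. Pi is an arbitrary symmetric example with
# positive-definite imaginary part; N truncates the lattice sum.
import numpy as np
from itertools import product

def theta(zeta, Pi, N=8):
    g = len(zeta)
    return sum(np.exp(2j*np.pi*(0.5*np.dot(n, Pi @ n) + np.dot(n, zeta)))
               for n in (np.array(m) for m in product(range(-N, N+1), repeat=g)))

Pi   = np.array([[1.0j, 0.3], [0.3, 1.5j]])
zeta = np.array([0.10 + 0.20j, -0.30 + 0.10j])
D1, D2 = np.array([1, 0]), np.array([0, 1])

assert np.isclose(theta(-zeta, Pi), theta(zeta, Pi))   # evenness
lhs = theta(zeta + D2 + Pi @ D1, Pi)
rhs = np.exp(-2j*np.pi*(D1 @ zeta + 0.5*D1 @ Pi @ D1)) * theta(zeta, Pi)
assert np.isclose(lhs, rhs)                            # quasi-periodicity
\end{verbatim}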
To shorten some equations, it is also useful to define the $\theta$ function with characteristics:
\begin{equation}
\hat{\theta}(\zeta)=\theta\left[\begin{array}{c} \Delta_1 \\ \Delta_2 \end{array}\right](\zeta)=
\exp\left\{2\pi i\left[\frac{1}{8}\Delta_1^t \Pi \Delta_1+\ensuremath{\frac{1}{2}} \Delta_1^t \zeta +\frac{1}{4}\Delta_1^t \Delta_2 \right]\right\}
\theta\left(\zeta+\ensuremath{\frac{1}{2}}\Delta_2+\ensuremath{\frac{1}{2}}\Pi \Delta_1\right)\ ,
\end{equation}
where again $\Delta_1,\Delta_2\in \mathbb{Z}^g$.
We introduced the notation $\hat{\theta}$ for this function
because, in the rest of the paper, $\Delta_1$ and $\Delta_2$ are fixed vectors. In particular from now on we are going to consider that $\Delta_1^t \Delta_2$ is an
odd integer which is also described as saying that $\left[\begin{array}{c} \Delta_1 \\ \Delta_2 \end{array}\right]$ is an odd characteristic.
In such case we have
\begin{equation}
\hat{\theta}(-\zeta) = e^{i\pi\Delta_1^t \Delta_2} \hat{\theta}(\zeta) = -\hat{\theta}(\zeta)\ ,
\end{equation}
as can be derived from the definition of $\hat{\theta}$ and we used that, in our case, $\Delta_1^t\Delta_2$ is odd. In particular this implies
\begin{equation}
\hat{\theta}(0)=0 \ \ \ \Rightarrow \ \ \ \theta\left(\ensuremath{\frac{1}{2}}\Delta_2+\ensuremath{\frac{1}{2}}\Pi \Delta_1\right)=0\ ,
\end{equation}
namely the vector $a=\ensuremath{\frac{1}{2}}\Delta_2+\ensuremath{\frac{1}{2}}\Pi \Delta_1$ is a zero of the theta function.
The (quasi)-periodicity of the theta function implies that
\begin{equation}
\theta\left[\begin{array}{c} \Delta_1 +2 \varepsilon_1 \\ \Delta_2 +2 \varepsilon_2 \end{array}\right](\zeta) =
e^{i\pi\Delta_1^t \varepsilon_2} \theta\left[\begin{array}{c} \Delta_1 \\ \Delta_2 \end{array}\right](\zeta)\ ,
\end{equation}
for any $\varepsilon_{1,2}\in \mathbb{Z}^g$. Therefore it only makes sense to consider $\Delta_{1,2}$ modulo two, namely with components equal to zero or one.
The most important property of the theta functions that we need in this paper is Fay's trisecant identity:
\begin{equation}
\label{eq:fay}
\theta(\zeta)\; \theta\left(\zeta+\int_{p_{2}}^{p_1}\!\!\!\! \omega+\int_{p_3}^{p_{4}}\!\!\!\!\omega\right) =
\gamma_{1234}\, \theta\Big(\zeta+\int_{p_{2}}^{p_{1}}\!\!\!\!\omega \Big)\, \theta\Big(\zeta+\int_{p_{3}}^{p_{4}}\!\!\!\!\omega \Big)
+\gamma_{1324}\, \theta\Big(\zeta+\int_{p_{3}}^{p_{1}}\!\!\!\!\omega \Big) \,\theta\Big(\zeta+\int_{p_{2}}^{p_{4}}\!\!\!\!\omega \Big) \ ,
\end{equation}
with
\begin{equation}
\gamma_{ijkl}=\frac{\theta(a+\int_{p_{k}}^{p_{i}}\!\!\omega)\, \theta(a+\int_{p_{l}}^{p_{j}}\!\!\omega)}
{\theta(a+\int_{p_{l}}^{p_{i}}\!\!\omega)\, \theta(a+\int_{p_{k}}^{p_{j}}\!\!\omega)} .
\end{equation}
In these formulas $p_j$ are points on the Riemann surface, and $a$ is a non-singular zero of the Riemann theta function, {\it i.e.}\ the function is zero but not its gradient.
In particular we are going to use $a=\ensuremath{\frac{1}{2}}\Delta_2+\ensuremath{\frac{1}{2}}\Pi \Delta_1$ which is a zero as noticed before. Also notice that the contour integrals
$\int_{p_a}^{p_b} \omega_j $ define a vector which from now on, following standard convention will be abbreviated as:
\begin{equation}
\int_{p_a}^{p_b} \omega_j \rightarrow \int_{p_a}^{p_b}.
\end{equation}
The function $\gamma$ may be viewed as a generalization of the cross-ratio function on $\mathbb{CP}^{1}$ to functions on Riemann surfaces.
Some immediate properties of these functions are:
\begin{equation}
\gamma_{1233}\,=\,\gamma_{1134}\,=\,1, \ \ \gamma_{2134}\,=\,\gamma_{1234}^{-1},\ \ \ \gamma_{1214} =0=\gamma_{1232}.
\label{gammaid}
\end{equation}
One important use of Fay's trisecant formula is that it provides a direct way of obtaining directional derivatives of theta functions, or of ratios
of them. Taking the derivative with respect to $p_{1}$ and then letting $p_{2}\rightarrow p_1$ we get
\begin{eqnarray}
\lefteqn{D_{p_{1}}\!\ln\left[\frac{\theta(\zeta)}{\theta(\zeta+\int_{p_{3}}^{p_{4}})}\right] =
- D_{p_{1}}\!\ln\Big[\frac{\theta(a+\int_{p_{3}}^{p_{1}})}{\theta(a+\int_{p_{4}}^{p_{1}})}\Big] } && \nonumber \\
&& \ \ \ \ \ \ \ \ \ \ \ \ - \frac{D_{p_{1}}\!\theta(a)\:\theta\left(a+\int_{p_{4}}^{p_{3}}\right)}
{\theta\Big(a+\int_{p_{4}}^{p_{1}}\Big)\:\theta\Big(a+\int_{p_{1}}^{p_{3}}\Big)}
\frac{\theta\Big(\zeta+\int_{p_{3}}^{p_{1}}\Big)\:\theta\Big(\zeta+\int_{p_{1}}^{p_{4}}\Big)}
{\theta(\zeta)\:\theta\Big(\zeta+\int_{p_{3}}^{p_{4}}\Big)}.
\label{eq:finaldp1}
\end{eqnarray}
Here $D_{p_1}$ indicates a directional derivative defined as (summation over $j$ implied):
\begin{equation}
D_{p_1} F(\zeta) = \omega_j(p_1) \frac{\partial F(\zeta)}{\partial \zeta_j} \ ,
\end{equation}
and should not be confused with a derivative with respect to $p_1$ which, if it appears, we will denote as $\partial_{p_1}$. Also, the final expression is simplified
using the identities (\ref{gammaid}).
We can further derive with respect to $p_3$ and take $p_4\rightarrow p_3$ obtaining:
\begin{equation}
D_{p_3p_1} \ln \theta(\zeta) = D_{p_3p_1} \ln \theta\left(a+\int_{p_{3}}^{p_{1}} \right)
- \frac{ D_{p_{1}}\!\theta(a) D_{p_3}\theta\left(a\right)}
{\theta\Big(a+\int_{p_{3}}^{p_{1}}\Big)\:\theta\Big(a+\int_{p_{1}}^{p_{3}}\Big)}
\frac{\theta\Big(\zeta+\int_{p_{3}}^{p_{1}}\Big)\:\theta\Big(\zeta+\int_{p_{1}}^{p_{3}}\Big)}
{\theta^2(\zeta)}.
\label{Thetapp}
\end{equation}
This summarizes the basic properties we need. Much more is known about these functions as can be found in the references \cite{FK}, \cite{ThF}.
\subsection{Solution to cosh-Gordon equation}
Eq.(\ref{Thetapp}) shows that the second derivative of the logarithm of a theta function contains the theta function. So solutions of
\begin{equation}
\partial\bar{\partial}\alpha = 2\cosh 2\alpha = e^{2\alpha} + e^{-2\alpha}\ ,
\end{equation}
should naturally be sought as logs of theta functions. To eliminate the constant term in eq.(\ref{Thetapp}) we subtract two such derivatives and get
\begin{eqnarray}
\lefteqn{D_{p_1p_3} \ln \frac{\theta(\zeta)}{\theta(\zeta+\int_{p_1}^{p_3})} =} \ \ \ \ \ \ && \\ &&
\frac{D_{p_1}\theta(a)D_{p_3}\theta(a)}{\theta(a+\int_{p_3}^{p_1})\theta(a+\int_{p_1}^{p_3})}
\left\{\frac{\theta(\zeta+2\int_{p_1}^{p_3})\theta(\zeta)}{\theta^2(\zeta+\int_{p_1}^{p_3})}
-\frac{\theta(\zeta+\int_{p_1}^{p_3}-2\int_{p_1}^{p_3})\theta(\zeta+\int_{p_1}^{p_3})}{\theta^2(\zeta)}\right\} . \nonumber
\end{eqnarray}
To get back the same theta functions we need to exploit their periodicity and therefore require
\begin{equation}
2\int_{p_1}^{p_3}\!\!\omega = \Delta_2 + \Pi \Delta_1\ ,
\end{equation}
where $\Delta_2, \Delta_1$ are integer vectors and $\Pi$ is the period matrix in eq.(\ref{Pidef}). This gives
\begin{equation}
\theta(\zeta+2\int_{p_1}^{p_3}) = e^{-2\pi i \Delta_1^t \zeta -i \pi \Delta_1^t \Pi \Delta_1} \theta(\zeta).
\end{equation}
We obtain
\begin{eqnarray}
\lefteqn{D_{p_1p_3} \ln \frac{\theta(\zeta)}{\theta(\zeta+\int_{p_1}^{p_3})} =
\frac{D_{p_1}\theta(a)D_{p_3}\theta(a)}{\theta(a+\int_{p_3}^{p_1})\theta(a+\int_{p_1}^{p_3})} e^{-\frac{i\pi}{2} \Delta_1^t \Pi \Delta_1}
} \ \ \ \ \ \ && \label{aeq1} \\ &&
\times \left\{e^{-2\pi i \Delta_1^t\zeta} e^{-\frac{i\pi}{2}\Delta_1^t\Pi\Delta_1}\frac{\theta^2(\zeta)}{\theta^2(\zeta+\int_{p_1}^{p_3})}
-e^{i\pi\Delta_1^t\Delta_2}e^{2\pi i \Delta_1^t\zeta} e^{\frac{i\pi}{2}\Delta_1^t\Pi\Delta_1}\frac{\theta^2(\zeta+\int_{p_1}^{p_3})}{\theta^2(\zeta)}\right\} .\nonumber
\end{eqnarray}
We should now choose $p_1$, $p_3$ and the path of integration between them such that $\Delta_1^t \Delta_2$ is odd so that $e^{i\pi\Delta_1^t\Delta_2}=-1$. Then
we take
\begin{equation}
e^{2\alpha} = - e^{-2\pi i \Delta_1^t\zeta - \frac{i\pi}{2} \Delta_1^t\Pi\Delta_1} \frac{\theta^2(\zeta)}{\theta^2(\zeta+\int_{p_1}^{p_3})}
=\frac{\theta^2(\zeta)}{\hat{\theta}^2 (\zeta)}\ ,
\label{alphasol}
\end{equation}
and
\begin{equation}
\zeta=2\omega(p_1) \bar{z} + 2 \omega(p_3) z.
\end{equation}
The last choice implies that, acting on functions of $\zeta$, $\partial_z = 2 D_{p_3}$ and $\bar{\partial} = 2 D_{p_1}$.
The correct normalization for $\zeta$ and $\alpha$ follows from the result
\begin{equation}
\frac{D_{p_1}\theta(a)D_{p_3}\theta(a)}{\theta^2(0)} = -\frac{1}{4} e^{-i\frac{\pi}{2}\Delta_1^t\Pi\Delta_1}\ ,
\end{equation}
which is explained in the appendix. In any case it should be clear at this point that the overall normalization of $\alpha$ can always be adjusted so
that eq.(\ref{aeq1}) becomes the cosh-Gordon eq. $\partial\bar{\partial}{\alpha} = 2\cosh 2\alpha$.
The final and very important point is that the theta functions are generically complex but $\alpha$ should be real. Again, following \cite{BB} we impose a
reality condition as follows. Suppose there is a $g\times g$ symmetric matrix $T$ such that
\begin{equation}
\bar{\Pi} = -T \Pi T, \ \ \ \ \bar{\zeta} =-T\zeta, \ \ \ \ T^2=1.
\end{equation}
Then, it is easy to prove, using the definition of the theta function that $\theta(\zeta)$, $\hat{\theta}(\zeta)$ are real whereas for example
$e^{i\pi\Delta_1^t\zeta}\theta(\zeta+\int_{p_1}^{p_3}\!\!\omega)=e^{i\pi\Delta_1^t\zeta}\theta(\zeta+\ensuremath{\frac{1}{2}}\Delta_2 + \ensuremath{\frac{1}{2}} \Pi\Delta_1)$ is purely imaginary.
This explains the minus sign in (\ref{alphasol}) and proves that $\alpha$ is real. The matrix $T$ is constructed \cite{BB} from an involution of the Riemann
surface that shuffles the basis of cycles $(a_i,b_i)$. To prove for example that $\theta(\zeta)$ is real we use
\begin{eqnarray}
\bar{\theta}(\zeta) &=& \sum_{n\in\mathbb{Z}^g} e^{-2\pi i (n^t \bar{\zeta} +\ensuremath{\frac{1}{2}} n^t \bar{\Pi} n)} \\
&=& \sum_{n\in\mathbb{Z}^g} e^{-2\pi i (-n^t T \zeta - \ensuremath{\frac{1}{2}} n^t T \Pi T n)} \\
&=& \theta(\zeta) \ ,
\end{eqnarray}
where in the last equation we redefine the summation variable $n\rightarrow Tn$ and used $T^t=T$, $T^2=1$.
\subsection{Solution to equations for $\psi$, $\tilde{\psi}$}
In the previous section we showed in detail how to use the properties of the theta function to solve the cosh-Gordon equation. Now we are going to do the same
for the equations determining $\psi$ but in a more sketchy way. Notice that $\psi$ and $\tilde{\psi}$ are two linearly independent solutions of the same equations:
\begin{eqnarray}
\partial \psi_1 &=& -\ensuremath{\frac{1}{2}} \partial\alpha \psi_1 + \frac{1}{\lambda} e^{\alpha} \psi_2 \ , \label{p1d}\\
\partial \psi_2 &=& e^{-\alpha} \psi_1 + \ensuremath{\frac{1}{2}} \partial\alpha \psi_2 \ , \label{p2d}\\
\bar{\partial} \psi_1 &=& \ensuremath{\frac{1}{2}} \bar{\partial}\alpha \psi_1 - e^{-\alpha} \psi_2\ , \label{p1db}\\
\bar{\partial} \psi_2 &=& \frac{1}{\lambda} e^{\alpha} \psi_1 - \ensuremath{\frac{1}{2}} \bar{\partial}\alpha \psi_2 \ ,\label{p2db}
\end{eqnarray}
which are the expanded version of eq.(\ref{p12eq}).
As a first step we can define a function $F=e^\alpha \frac{\psi_1}{\psi_2}$ which satisfies
\begin{equation}
\partial \ln F = \frac{1}{\lambda} e^{2\alpha} \frac{1}{F} - e^{-2\alpha} F .
\end{equation}
By using the identities for the first derivatives of the theta functions and the value of $e^{2\alpha}$ already given one can readily see that
\begin{equation}
F = -\frac{2 D_{p_3}\theta(a) \theta(a+\int_{p_4}^{p_1})}{\theta(a+\int_{p_4}^{p_3})\theta(a+\int_{p_3}^{p_1})}
e^{-2\pi i \Delta_1^t\zeta-\frac{i\pi}{2}\Delta_1^t\Pi\Delta_1}
\frac{\theta(\zeta)\theta(\zeta+\int_{p_3}^{p_4})}{\theta(\zeta+\int_{p_1}^{p_3})\theta(\zeta+\int_{p_1}^{p_4})}\ ,
\end{equation}
where we introduced another special point in the Riemann surface that we call $p_4$. The points $p_{1,3}$ are going to be taken as branching points, in particular
for definiteness we take $p_1=0$ and $p_3=\infty$. On the other hand $p_4$ is not a branching point and we take it to be on the upper sheet with $p_4=\lambda$,
the spectral parameter. Going back to the equations for $\psi_{1,2}$ and using the same techniques we find that
\begin{eqnarray}
\psi_1 &=& C e^{-\ensuremath{\frac{1}{2}} \alpha} \frac{\theta(\zeta+\int_{p_1}^{p_3}+\int_{p_1}^{p_4})}{\theta(\zeta+\int_{p_1}^{p_3})}
e^{2 z D_{p_3}\ln\theta(\int_{p_4}^{p_1})+ 2\bar{z} D_{p_1}\ln\theta(a+\int_{p_4}^{p_1})+2\pi i \bar{z} \Delta_1^t\omega(0) }\ ,\nonumber \\
\psi_2 &=& e^{\ensuremath{\frac{1}{2}} \alpha} \frac{\theta(\zeta+\int_{p_1}^{p_4})}{\theta(\zeta)}
e^{2 z D_{p_3}\ln\theta(\int_{p_4}^{p_1})+ 2\bar{z} D_{p_1}\ln\theta(a+\int_{p_4}^{p_1})+2\pi i \bar{z} \Delta_1^t\omega(0)}\ ,
\end{eqnarray}
the constant $C$ is determined to be
\begin{equation}
C = \frac{2 D_{p_3} \theta(a) \theta(a-\int_{p_1}^{p_4})}{\theta(\int_{p_1}^{p_4})\theta(0)} e^{\frac{i\pi}{2}\Delta_1^t\Pi\Delta_1}.
\end{equation}
Again, we emphasize that the technique is to match the equation with the properties of the theta functions and choose the parameters appropriately.
Another, linearly independent solution can be obtained by choosing a different point $p_4$ that we call $\bar{p}_4$. However it has to be associated
to the same value of $\lambda$ and therefore it can only be the same point but on the other (lower) sheet of the Riemann surface. Namely both $p_4$
and $\bar{p}_4$ project on $\lambda$. It should be noticed that, when $p_1=0$, namely one of the branching points, this implies
\begin{equation}
\int_{p_1}^{p_4} \omega_j = - \int_{p_1}^{\bar{p}_4} \omega_j\ ,
\end{equation}
because the first integral is done on the upper sheet and the second one on the lower sheet where the function $\mu$ changes sign
(and therefore $\omega_j$ changes sign). We obtain
\begin{equation}
\mathbb{A} = \frac{1}{(\psi_1 \tilde{\psi}_2-\psi_2\tilde{\psi}_1)^{\ensuremath{\frac{1}{2}}}} \left(\begin{array}{cc} \psi_1 & \psi_2 \\ \tilde{\psi}_1 & \tilde{\psi}_2 \end{array}\right).
\end{equation}
The (constant) normalization factor can be computed using the trisecant identity to give
\begin{eqnarray}
(\psi_1 \tilde{\psi}_2-\psi_2\tilde{\psi}_1) &=& -2 D_{p_3}\theta(a) e^{\frac{i\pi}{2}\Delta_1^t\Pi\Delta_1}
\frac{\theta(a+2\int_{p_1}^{p_4})}{\theta^2(\int_{p_1}^{p_4})} e^{2\pi i \Delta_1^t \int_{p_1}^{p_4}} \\
&=& 2 D_{p_3} \hat{\theta}(0) \frac{\hat{\theta}(2\int_{p_1}^{p_4})}{\theta^2(\int_{p_1}^{p_4})}.
\end{eqnarray}
To finish this section we rewrite the solution using the function $\hat{\theta}$ to obtain
\begin{eqnarray}
\psi_1 &=& 2 \frac{D_{p_3}\hat{\theta}(0)}{\theta(0)}\frac{\hat{\theta}(\int_{p_1}^{p_4})}{\theta(\int_{p_1}^{p_4})}
\frac{\hat{\theta}(\zeta+\int_{p_1}^{p_4})}{\hat{\theta}(\zeta)} e^{-\ensuremath{\frac{1}{2}}\alpha} e^{\mu z + \nu \bar{z}} \\ &&\nonumber\\
\psi_2 &=& \frac{\theta(\zeta+\int_{p_1}^{p_4})}{\theta(\zeta)} e^{\ensuremath{\frac{1}{2}} \alpha} e^{\mu z + \nu \bar{z}}\ ,
\end{eqnarray}
with
\begin{equation}
\mu = -2 D_{p_3} \ln \theta(\int_{p_1}^{p_4}), \ \ \ \ \nu = -2 D_{p_1} \ln \hat{\theta}(\int_{p_1}^{p_4}) . \label{munudef}
\end{equation}
It is straightforward to check directly that these functions satisfy equations (\ref{p1d}),(\ref{p2d}),(\ref{p1db}),(\ref{p2db}).
The only identities that are needed are
\begin{equation}
- 4 D_{p_1}\theta(a) D_{p_3}\theta(a) e^{i\frac{\pi}{2}\Delta_1^t\Pi\Delta_1} = 4 D_{p_1}\hat{\theta}(0) D_{p_3}\hat{\theta}(0) = \theta(0)^2\ ,
\end{equation}
and
\begin{equation}
\lambda = -4 e^{-i\pi\Delta_1^t\Pi\Delta_1+2\pi i \Delta_1^t \int_{p_1}^{p_4}}
\left[\frac{D_{p_3}\theta(a) \theta(\int_{p_3}^{p_4})}{\theta(a+\int_{p_4}^{p_3}) \theta(0)}\right]^2
= -4 \left[\frac{D_{p_3}\hat{\theta}(0) \hat{\theta}(\int_{p_1}^{p_4})}{\theta(\int_{p_1}^{p_4}) \theta(0)}\right]^2 \ ,
\end{equation}
which are explained in the appendix.
The last identity allows us to define (that is to appropriately choose the sign of the square root)
\begin{equation}
\sqrt{-\lambda} \equiv 2 \frac{D_{p_3}\hat{\theta}(0) \hat{\theta}(\int_{p_1}^{p_4})}{\theta(\int_{p_1}^{p_4}) \theta(0)}.
\end{equation}
Then the final form for $\psi_{1,2}$ is simply:
\begin{eqnarray}
\psi_1 &=& \sqrt{-\lambda}\ \frac{\hat{\theta}(\zeta+\int_{p_1}^{p_4})}{\hat{\theta}(\zeta)} e^{-\ensuremath{\frac{1}{2}}\alpha} e^{\mu z + \nu \bar{z}} \\ &&\nonumber\\
\psi_2 &=& \frac{\theta(\zeta+\int_{p_1}^{p_4})}{\theta(\zeta)} e^{\ensuremath{\frac{1}{2}} \alpha} e^{\mu z + \nu \bar{z}}\ ,
\end{eqnarray}
where the sign of the square root is chosen according to the previous equation and $\mu$, $\nu$ were defined in eq.(\ref{munudef}).
We can also compute (remembering that $\int_{p_1}^{p_4}=-\int_{p_1}^{\bar{p}_4}$ because $p_1=0$):
\begin{eqnarray}
\tilde{\psi}_1 &=& -\sqrt{-\lambda}\ \frac{\hat{\theta}(\zeta-\int_{p_1}^{p_4})}{\hat{\theta}(\zeta)} e^{-\ensuremath{\frac{1}{2}}\alpha} e^{-\mu z - \nu \bar{z}} \ ,\\ &&\nonumber\\
\tilde{\psi}_2 &=& \frac{\theta(\zeta-\int_{p_1}^{p_4})}{\theta(\zeta)} e^{\ensuremath{\frac{1}{2}} \alpha} e^{-\mu z - \nu \bar{z}}.
\end{eqnarray}
At this point we can replace $\psi_{1,2}$ and $\tilde{\psi}_{1,2}$ in $\mathbb{A}$ and then in $\mathbb{X}$. This allows us to compute the
solution directly in Poincare coordinates as:
\begin{eqnarray}
Z &=& \left|\frac{\hat{\theta}(2\int_{p_1}^{p_4})}{\hat{\theta}(\int_{p_1}^{p_4})\theta(\int_{p_1}^{p_4})}\right|
\frac{|\theta(0)\theta(\zeta)\hat{\theta}(\zeta)|\left|e^{\mu z + \nu \bar{z}}\right|^2}{|\hat{\theta}(\zeta-\int_{p_1}^{p_4})|^2+|\theta(\zeta-\int_{p_1}^{p_4})|^2} \ ,
\label{Zsol}\\
&& \nonumber \\
X+iY &=& e^{2\bar{\mu}\bar{z}+2\bar{\nu}z}\
\frac{\theta(\zeta-\int_{p_1}^{p_4})\overline{\theta(\zeta+\int_{p_1}^{p_4})}-\hat{\theta}(\zeta-\int_{p_1}^{p_4})\overline{\hat{\theta}(\zeta+\int_{p_1}^{p_4})}}{|\hat{\theta}(\zeta-\int_{p_1}^{p_4})|^2+|\theta(\zeta-\int_{p_1}^{p_4})|^2}\ , \label{XYsol}
\end{eqnarray}
\subsection{Shape of the Wilson loop}
The shape of the Wilson loop is determined by the intersection of the surface with the boundary.
The boundary is located at $Z=0$ which, from eq.(\ref{Zsol}), for finite $z,\bar{z}$ happens only if
\begin{equation}
Z=0 \ \ \ \Leftrightarrow \ \ \ \theta(\zeta)=0 \ \mbox{or} \ \hat{\theta}(\zeta)=0 .
\end{equation}
Later on we are going to deal just with the second case so we determine the shape of the Wilson loop by
\begin{equation}
\hat{\theta}(\zeta)=0 .
\end{equation}
This equation defines a curve in the world-sheet which in turn is mapped to a curve in the
$(X,Y)$ plane using the solution to the equations of motion (\ref{XYsol}).
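Numerically, this curve can be traced by sampling $\hat{\theta}(\zeta(z,\bar{z}))$ on a world-sheet grid and extracting its zero level set. In the sketch below all period data are placeholders; with genuine data satisfying the reality conditions of the previous section, $\hat{\theta}$ is real on the world-sheet and the extracted level set is the curve that eq.(\ref{XYsol}) maps to the $(X,Y)$ plane:
\begin{verbatim}
# Trace theta_hat(zeta(z, zbar)) = 0 on a world-sheet grid. All period
# data below are placeholders; with genuine data obeying the reality
# conditions theta_hat is real and this level set is the boundary curve.
import numpy as np
import matplotlib.pyplot as plt
from itertools import product

Pi = np.array([[1.0j, 0.3], [0.3, 1.5j]])     # placeholder period matrix
w1 = np.array([0.20 + 0.05j, -0.10 + 0.15j])  # placeholder omega(p_1)
w3 = np.array([0.15 - 0.10j, 0.25 + 0.05j])   # placeholder omega(p_3)
D1, D2 = np.array([1, 0]), np.array([1, 1])   # odd characteristic

def theta(zeta, N=5):
    return sum(np.exp(2j*np.pi*(0.5*np.dot(n, Pi @ n) + np.dot(n, zeta)))
               for n in (np.array(m) for m in product(range(-N, N+1), repeat=2)))

def theta_hat(zeta):
    pref = np.exp(2j*np.pi*(D1 @ Pi @ D1/8 + D1 @ zeta/2 + D1 @ D2/4))
    return pref * theta(zeta + 0.5*D2 + 0.5*(Pi @ D1))

s = np.linspace(-1.0, 1.0, 80)
vals = np.empty((80, 80))
for i, tau in enumerate(s):
    for j, sig in enumerate(s):
        z = sig + 1j*tau
        vals[i, j] = theta_hat(2*w1*np.conj(z) + 2*w3*z).real
plt.contour(s, s, vals, levels=[0.0])         # candidate boundary curve(s)
plt.savefig('ws_boundary.png')
\end{verbatim}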
\subsection{Computation of the Area}
The expectation value of the Wilson loop is determined by the area of the minimal surface we described.
In conformal gauge the area is computed as:
\begin{equation}
A = 2 \int \partial X_\mu \bar{\partial}X^\mu d\sigma d\tau = 2 \int \Lambda d\sigma d\tau
= 4 \int e^{2\alpha} d\sigma d\tau \ ,
\label{Areadef}
\end{equation}
where we used eqns.(\ref{Lcomp}) to write the area in terms of the Lagrange multiplier $\Lambda$ and then used that
\begin{equation}
\Lambda=2e^{2\alpha}\ ,
\end{equation}
as follows from eqns.(\ref{LJ}) and (\ref{JJbar}) using that $|\lambda|=1$.
We could in principle replace $\alpha$ by its expression in eq.(\ref{alphasol}) and evaluate the integral
numerically, but such a procedure fails because the area is divergent. The correct procedure is to identify the divergent
piece analytically and then extract the finite piece in terms of a finite integral that can be easily evaluated
numerically. In order to do so we use eqs. (\ref{alphasol}) and (\ref{Thetapp}) (using $a=\int_{p_1}^{p_3}$) to get:
\begin{eqnarray}
e^{2\alpha} &=& 4 \left\{ D_{p_1p_3}\ln \theta(0)- D_{p_1p_3}\ln \hat{\theta}(\zeta)\right\} \\
&=& 4 D_{p_1p_3}\ln \theta(0) - \partial\bar{\partial}\ln \hat{\theta}(\zeta) . \label{alphaD13}
\end{eqnarray}
The first term in the expression for $e^{2\alpha}$ is a constant and the second one is a total derivative.
The first integral is clearly finite but the second one contains the divergent piece that we need to regulate.
In order to do that we observe that for these solutions
\begin{equation}
Z = |\hat{\theta}(\zeta)| h(z,\bar{z})\ ,
\end{equation}
where $Z$ is one of the Poincare coordinates and
\begin{equation}
h(z,\bar{z})=\left|\frac{\hat{\theta}(2\int_{p_1}^{p_4})}{\hat{\theta}(\int_{p_1}^{p_4})\theta(\int_{p_1}^{p_4})}\right|
\frac{|\theta(0)\theta(\zeta)|\left|e^{\mu z + \nu \bar{z}}\right|^2}{|\hat{\theta}(\zeta-\int_{p_1}^{p_4})|^2+|\theta(\zeta-\int_{p_1}^{p_4})|^2} .
\end{equation}
Furthermore, using the two-dimensional divergence theorem we find that for any well-behaved
function $F$:
\begin{equation}
\int d\sigma d\tau \partial \bar{\partial} F = \frac{1}{4} \int d\sigma d\tau \nabla^2 F
= \frac{1}{4} \oint \hat{n}\cdot\nabla F d\ell \ ,
\end{equation}
where the contour integral is over the boundary of the Wilson loop (in the world-sheet), $\hat{n}$ is an outgoing
normal vector and $d\ell$ is the differential of arc length.
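As a sanity check of this step, the two sides can be compared numerically for any smooth test function; e.g.\ for $F=x^2y^2$ on the unit disk both sides equal $\pi$:
\begin{verbatim}
# Check int nabla^2 F dA = oint dF/dn dl for F = x^2 y^2 on the unit disk.
import numpy as np
from scipy import integrate

lap_F = lambda r, th: 2.0 * r**2          # nabla^2 (x^2 y^2) = 2(x^2 + y^2)
lhs, _ = integrate.dblquad(lambda r, th: lap_F(r, th) * r, 0, 2*np.pi, 0, 1)

dF_dn = lambda th: 4.0 * np.cos(th)**2 * np.sin(th)**2   # dF/dr at r = 1
rhs, _ = integrate.quad(dF_dn, 0, 2*np.pi)
print(lhs, rhs)                           # both equal pi
\end{verbatim}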
The area is then:
\begin{equation}
A = 16 D_{p_1p_3}\ln \theta(0) \int d\sigma d\tau + \oint \hat{n}\cdot \nabla\ln h \ d\ell
- \oint \hat{n}\cdot \nabla \ln Z\ d\ell .
\end{equation}
The last integral is divergent and we concentrate now on extracting the leading divergence. The correct AdS/CFT prescription
is to cut the surface at $Z=\epsilon$ and write the area as
\begin{equation}
A = \frac{L}{\epsilon} + A_{f}\ ,
\end{equation}
where $L$ should be the length of the Wilson loop and $A_f$ is the finite part which is identified with the expectation value of the Wilson loop
through:
\begin{equation}
\langle W \rangle = e^{-\frac{\sqrt{\lambda}}{2\pi} A_f}\ ,
\end{equation}
where here $\lambda$ is the 't Hooft coupling of the gauge theory (not to be confused with the spectral parameter). This prescription is
equivalent to subtracting the area $A=\frac{L}{\epsilon}$ of a string ending on the contour of length $L$ and stretching along $Z$ from the boundary
to the horizon. To see that the coefficient of the divergence is indeed the length, let us compute
\begin{equation}
A_{\mbox{div.}} = - \oint_{Z=\epsilon} \frac{1}{Z} \hat{n}\cdot \nabla Z d\ell
= \frac{1}{\epsilon} \oint_{Z=\epsilon} |\nabla Z| d\ell\ ,
\end{equation}
where we used that the normal is precisely in the opposite direction of $\nabla Z$ because the contour is a curve of constant
$Z=\epsilon$ and $Z$ increases toward the inside. On the other hand the length in the boundary is given by
\begin{equation}
L = \oint \sqrt{|\hat{t}\cdot\nabla X|^2 + |\hat{t}\cdot\nabla Y|^2} d\ell \ ,
\end{equation}
where $\hat{t}$ is a unit vector tangent to the contour. We can move forward if we write the equation of motion for $X$ as derived from the
action (\ref{action}):
\begin{equation}
2 \nabla X \cdot \nabla Z = Z \nabla^2 X\ ,
\end{equation}
which, when $Z\rightarrow 0$, becomes $\nabla X\cdot \nabla Z=0$, namely $\nabla X$ is perpendicular to the normal and
therefore parallel to the tangent $\hat{t}$. The same is true for $Y$ so we find
\begin{equation}
L = \oint \sqrt{|\nabla X|^2 + |\nabla Y|^2} d\ell + {\cal O}(\epsilon^2) .
\end{equation}
Finally the equation of motion for $Z$ is
\begin{equation}
(\nabla Z)^2 - Z \nabla^2 Z = (\nabla X)^2 + (\nabla Y)^2\ ,
\end{equation}
which for $Z\rightarrow 0$ implies that $\sqrt{|\nabla X|^2 + |\nabla Y|^2}=|\nabla Z|$.
Therefore the length of the Wilson loop is given by
\begin{equation}
L = \oint |\nabla Z| d\ell - \frac{\epsilon}{2} \oint \frac{\nabla^2 Z}{|\nabla Z|}\ d\ell \ ,
\end{equation}
and the divergent piece of the area is indeed $A_{\mbox{div.}}=\frac{L}{\epsilon}$. There is a finite part remaining:
\begin{eqnarray}
A &=& \frac{L}{\epsilon} + A_{\mbox{f}} \ ,\\
A_{\mbox{f}} &=&
16 D_{p_1p_3}\ln \theta(0) \int d\sigma d\tau + \oint \hat{n}\cdot \nabla\ln h \ d\ell + \ensuremath{\frac{1}{2}} \oint \frac{\nabla^2 Z}{|\nabla Z|}\ d\ell . \nonumber
\end{eqnarray}
The integrals are performed on the world-sheet parameterized by $\sigma$, $\tau$. The first integral is proportional to the area of the world-sheet. The last two
integrals are done over the world-sheet boundary. The final expression can be simplified by rewriting $Z = |\hat{\theta}(\zeta)| h(z,\bar{z})$ and using that
$\hat{\theta}(\zeta)$ vanishes on the boundary where the contour integral is performed. It is then easy to check that $h(z,\bar{z})$ cancels and the final
formula for the area is:
\begin{eqnarray}
A &=& \frac{L}{\epsilon} + A_{\mbox{f}} \ ,\\
A_{\mbox{f}} &=&
16 D_{p_1p_3}\ln \theta(0) \int d\sigma d\tau - \ensuremath{\frac{1}{2}} \oint \frac{\nabla^2 \hat{\theta}(\zeta)}{|\nabla \hat{\theta}(\zeta)|}\ d\ell \\
&=&
16 D_{p_1p_3}\ln \theta(0) \int d\sigma d\tau - 2 \oint \frac{D_{p_1p_3} \hat{\theta}(\zeta)}{|D_{p_1} \hat{\theta}(\zeta)|}\ d\ell \label{Afres} \\
&=&
8 D_{p_1p_3}\ln \theta(0) \oint (\sigma d\tau-\tau d\sigma)
- 2 \oint \frac{D_{p_1p_3} \hat{\theta}(\zeta)}{|D_{p_1} \hat{\theta}(\zeta)|}\ d\ell.
\end{eqnarray}
where in the last step we used the well-known formula for the area of the region encircled by a given curve. The renormalized area is then expressed as a
finite one dimensional contour integral over the boundary of the world-sheet. Perhaps it would also be useful to clarify that
$|\nabla \hat{\theta}(\zeta)|$ denotes the norm of a real 2-vector whereas $|D_{p_1} \hat{\theta}(\zeta)|$ is the modulus of a complex number.
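For reference, the identity invoked in the last step is the standard consequence of Green's theorem for a boundary traversed counterclockwise,
\[
\int d\sigma\, d\tau = \frac{1}{2} \oint \left(\sigma\, d\tau-\tau\, d\sigma\right) ,
\]
which is what converts the remaining bulk integral over the world-sheet into a contour integral.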
The final expression for the area is quite interesting because it does not depend on the spectral parameter $\lambda$.
Therefore the shapes of both the Wilson loop and the dual surface depend on the parameter $\lambda$, but the area $A_f$ does not.
In this way we explicitly find a one parameter family of deformations that preserve the area. It is not obvious at
first that the area $A_f$ should be independent of the spectral parameter because, although the definition (\ref{Areadef}) does not contain $\lambda$,
the regularized area does since, as we said, $L$ depends on $\lambda$. It so happens that the finite part $A_f$ does not.
The situation is similar to scale transformations that modify $L$ but not $A_f$.
\begin{figure}
\centering
\includegraphics[width=10cm]{contour}
\caption{The boundary is determined by the contour $Z=0$. However the area is computed by integrating up to a
contour $Z=\epsilon\rightarrow 0$ and then the leading divergence $\frac{L}{\epsilon}$ is subtracted. Here $L$ is
the length of the contour in the boundary (not in this ($\sigma$,$\tau$) plane).}
\label{contour}
\end{figure}
\section{Example with $g=3$}
We are going to consider an example to illustrate the shape of the Wilson loops that are obtained in this way. The main purpose is to show that we find closed
Wilson loops whose dual surface is known analytically.
Consider the function
\begin{equation}
\mu = i \sqrt{-i(\lambda+1-i)} \sqrt{-i(\lambda+1+i)} \sqrt{-i(\lambda-\frac{1+i}{2})} \sqrt{-i(\lambda-\frac{1-i}{2})}
\sqrt{2-\lambda} \sqrt{\lambda} \sqrt{\lambda+\ensuremath{\frac{1}{2}}}\ ,
\end{equation}
where each square root is taken with a cut along the negative real axis. So defined, the function $\mu$ has cuts in the complex plane as illustrated in fig.\ref{cuts}
but is smooth in a double cover of the plane which defines a hyperelliptic Riemann surface of genus $g=3$. The cycles $a_i$, $b_i$ are taken as in the
figure. The Riemann surface has an involution $\lambda\rightarrow -\frac{1}{\bar{\lambda}}$ meaning that knowing the cuts for $|\lambda|>1$ we can
reconstruct the cuts inside the unit circle. This involution is important to construct the matrix $T$ that fixes the correct reality conditions (see \cite{BB}).
A basis for the holomorphic abelian differentials is given by
\begin{equation}
\nu_k = \frac{\lambda^{k-1}}{\mu} d\lambda, \ \ \ \ k=1\ldots 3 .
\end{equation}
If we compute
\begin{equation}
C_{ij} = \oint_{a_i} \nu_j , \ \ \ \ \tilde{C}_{ij} = \oint_{b_i} \nu_j\ ,
\end{equation}
then a normalized basis of holomorphic abelian differentials is
\begin{equation}
\omega_i = \nu_j \left(C^{-1}\right)_{ji}\ ,
\label{wdef}
\end{equation}
and the period matrix is
\begin{equation}
\Pi = \tilde{C} C^{-1} =\left( \begin{array}{ccc}0.5+0.64972 i&0.14972 i&-0.5\\ 0.14972 i&-0.5+0.64972 i&0.5\\ -0.5&0.5&0.639631 i\end{array}\right) .
\end{equation}
The Jacobi map (with base at 0) is defined as
\begin{equation}
\phi(\lambda)_j = \int_0^{\lambda} \omega_j = \int_0^\lambda \sum_{k=1}^{3}\frac{\lambda^{k-1}}{\mu} \left(C^{-1}\right)_{kj} d\lambda .
\end{equation}
The function $\theta(\phi(\lambda))$ has three zeros: $\lambda=\infty, \frac{1-i}{2}, -1+i$. To prove this we can take, on the upper sheet,
a path from $\lambda=0$ to each of the zeros of $\mu$. Coming back along the lower sheet defines a closed path $\mathcal{C}_\lambda$
equivalent to:
\begin{equation}
\begin{array}{lclcl}
\lambda= -\ensuremath{\frac{1}{2}} &\ \ \rightarrow\ \ & \mathcal{C}_\lambda \equiv b_3-a_2 &\ \ \rightarrow\ \ &
\theta\left[ \begin{array}{ccc} 0&0&1\\0&-1&0 \end{array} \right]\!(\zeta) \\
\lambda= \ensuremath{\frac{1}{2}} - \ensuremath{\frac{1}{2}} i & \rightarrow & \mathcal{C}_\lambda \equiv a_2+b_2 & \rightarrow &
\theta\left[ \begin{array}{ccc} 0&1&0\\0&1&0 \end{array}\right]\!(\zeta) \\
\lambda= \ensuremath{\frac{1}{2}} + \ensuremath{\frac{1}{2}} i & \rightarrow & \mathcal{C}_\lambda \equiv b_2 & \rightarrow &
\theta\left[ \begin{array}{ccc} 0&1&0\\0&0&0 \end{array}\right]\!(\zeta) \\
\lambda= 2 & \rightarrow & \mathcal{C}_\lambda \equiv a_3-a_2 & \rightarrow &
\theta\left[ \begin{array}{ccc} 0&0&0\\0&-1&1 \end{array}\right]\!(\zeta) \label{Ch} \\
\lambda= -1+i & \rightarrow & \mathcal{C}_\lambda \equiv -a_2+a_3+b_1+b_3 & \rightarrow &
\theta\left[ \begin{array}{ccc} 1&0&1\\0&-1&1 \end{array}\right]\!(\zeta) \\
\lambda= -1-i & \rightarrow & \mathcal{C}_\lambda \equiv -a_1+a_2+a_3+b_1-b_3 & \rightarrow &
\theta\left[ \begin{array}{ccc} 1&0&-1\\-1&1&1 \end{array}\right]\!(\zeta)\\
\lambda= \infty & \rightarrow & \mathcal{C}_\lambda \equiv a_1-a_2+a_3+b_3 & \rightarrow &
\theta\left[ \begin{array}{ccc} 0&0&1\\1&-1&1 \end{array}\right]\!(\zeta) .
\end{array}
\end{equation}
The last column gives a theta function with characteristic determined by $\left[ \begin{array}{c}\varepsilon_1 \\ \varepsilon_2 \end{array} \right]$ where
$\varepsilon_{1,2}$ are given by $\mathcal{C}_\lambda \equiv \sum_{i=1}^3 (\varepsilon_{1i} b_i + \varepsilon_{2i} a_i)$.
The point is that this theta function vanishes if and only if $\theta(\phi(\lambda))$ vanishes because $\phi(\lambda) = \ensuremath{\frac{1}{2}}(\varepsilon_2 + \Pi \varepsilon_1)$
(since the integrals on the upper and lower sheet are equal to half the total integral).
If the characteristic is odd (namely $\varepsilon_1^t \varepsilon_2$ is odd) then by symmetry the theta function is zero at the origin and therefore the
theta function without characteristic is zero at the corresponding point.
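As a quick consistency check (for illustration only; not part of the original derivation), one can verify from the table in (\ref{Ch}) that precisely the characteristics associated with $\lambda=\frac{1-i}{2}$, $-1+i$ and $\infty$ are odd. A minimal Python sketch, with the $\varepsilon_{1,2}$ vectors transcribed from the table:
\begin{verbatim}
import numpy as np

# (eps1, eps2) transcribed from the table; parity = eps1 . eps2 (mod 2)
chars = {
    "-1/2":     ([0, 0, 1], [0, -1, 0]),
    "1/2-i/2":  ([0, 1, 0], [0, 1, 0]),
    "1/2+i/2":  ([0, 1, 0], [0, 0, 0]),
    "2":        ([0, 0, 0], [0, -1, 1]),
    "-1+i":     ([1, 0, 1], [0, -1, 1]),
    "-1-i":     ([1, 0, -1], [-1, 1, 1]),
    "infinity": ([0, 0, 1], [1, -1, 1]),
}
for point, (e1, e2) in chars.items():
    print(point, "odd" if np.dot(e1, e2) % 2 else "even")
# only 1/2-i/2, -1+i and infinity come out odd, matching the quoted zeros
\end{verbatim}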
The vector of Riemann constants \cite{FK} is computed from the sum of the characteristics of the zeros, which is (mod 2)
\begin{equation}
\kappa \equiv \left[ \begin{array}{ccc} 1&1&0\\1&1&0 \end{array} \right]\ ,
\end{equation}
and has the property that for any $\lambda_{1,2}$ on the Riemann surface we have
\begin{equation}
\theta\left[ \begin{array}{ccc} 1&1&0\\1&1&0 \end{array} \right]\left(\phi(\lambda_1)+\phi(\lambda_2)\right)=0 . \label{kappadef}
\end{equation}
Moreover, this completely defines the set of zeros of the theta functions we are considering \cite{FK}.
\begin{figure}
\centering
\includegraphics[width=12cm]{cuts}
\caption{Genus three Riemann surface. A basis of fundamental cycles is depicted with solid paths being in the upper sheet and dotted lines in the lower
sheet. The circles show their intersection points. }
\label{cuts}
\end{figure}
To write the solution we choose the points $p_1=0$ and $p_3=\infty$, which works well since, as seen above, a path between 0 and $\infty$ defines
an odd characteristic that can be taken to be $\Delta_1=\left[\begin{array}{c}0\\0\\1\end{array}\right]$,
$\Delta_2=\left[\begin{array}{c}1\\1\\1\end{array}\right]$. We therefore define
\begin{equation}
\hat{\theta}(\zeta)= \theta\left[\begin{array}{ccc} 0&0&1 \\ 1&1&1 \end{array}\right](\zeta) .\label{thetahatdef}
\end{equation}
We can now write
\begin{equation}
\zeta= 2 ( \omega(\infty) z + \omega(0) \bar{z}) = \left[\begin{array}{c} -0.4903\sigma-0.19069\tau\\ -0.4903\sigma+0.19069\tau\\ 0.59321\tau \end{array}\right] \ ,
\end{equation}
where we defined $z=\sigma+i\tau$.
The zeros of the function $\hat{\theta}$ in the complex plane $z$ determine the boundary and therefore the shape of the Wilson loops. We plot the contours
where $\hat{\theta}$ becomes zero in fig.\ref{zeros_theta_hat}.
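The contours of fig.\ref{zeros_theta_hat} can be reproduced by truncating the lattice sum that defines the theta function. The following Python sketch is for illustration only (it is not the code used for the computations reported here): it assumes the numerical values of $\Pi$ and $\zeta(\sigma,\tau)$ quoted above (in particular the purely imaginary $(3,3)$ entry of $\Pi$), a standard convention for theta functions with characteristics, and an arbitrary truncation $N$.
\begin{verbatim}
import numpy as np
from itertools import product

Pi = np.array([[0.5 + 0.64972j, 0.14972j, -0.5],
               [0.14972j, -0.5 + 0.64972j, 0.5],
               [-0.5, 0.5, 0.639631j]])

# characteristic [0,0,1; 1,1,1] of theta-hat, stored as half-vectors
e1 = np.array([0.0, 0.0, 1.0]) / 2
e2 = np.array([1.0, 1.0, 1.0]) / 2

# truncated summation lattice n + e1 with |n_i| <= N
N = 4
M = np.array([np.array(n, float) + e1
              for n in product(range(-N, N + 1), repeat=3)])
Qph = np.exp(1j * np.pi * np.einsum('ni,ij,nj->n', M, Pi, M))

def theta_hat(zeta):
    # truncated genus-3 theta function with the characteristic above
    return np.sum(Qph * np.exp(2j * np.pi * (M @ (zeta + e2))))

def zeta(sigma, tau):
    # numerical form of zeta(z) quoted in the text, z = sigma + i*tau
    return np.array([-0.4903 * sigma - 0.19069 * tau,
                     -0.4903 * sigma + 0.19069 * tau,
                      0.59321 * tau])

sig, tau = np.linspace(-4, 4, 121), np.linspace(-4, 4, 121)
vals = np.array([[abs(theta_hat(zeta(s, t))) for s in sig] for t in tau])
# a zero-level contour plot of vals reproduces the closed curves
\end{verbatim}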
\begin{figure}
\centering
\includegraphics[width=10cm]{contourSTall}
\caption{Zeros of the function $\hat{\theta}$. We can see the (quasi)-periodicity of the theta function as well as closed contours that map into closed Wilson loops in the boundary. }
\label{zeros_theta_hat}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm]{contourSTone}
\caption{Particular contour (taken from fig.\ref{zeros_theta_hat}) in the $z$ plane chosen to compute the minimal area surface.}
\label{zeros_theta_hat_1}
\end{figure}
The (quasi)-periodicity of the theta function is evident from the figure, as is the existence
of closed curves which in turn give rise to closed Wilson loops. Choosing the contour displayed in fig.\ref{zeros_theta_hat_1} determines a Wilson loop up
to the spectral parameter $\lambda$ that is still arbitrary. Taking into account that $|\lambda|=1$ we choose as examples
\begin{equation}
\lambda_1=i, \ \ \ \ \ \lambda_2=-\frac{1+i}{\sqrt{2}} .
\end{equation}
Therefore the point $p_4=i$ (or $p_4=-\frac{1+i}{\sqrt{2}}$) in the formulas for the surface (understood to be in the upper sheet). The shape of the
Wilson loop can be immediately obtained by mapping the contour into the $X,Y$ plane. The result is displayed in fig.\ref{WL_shape}
where we see a non-trivial closed Wilson loop. The minimal surfaces ending on these contours are displayed in fig.\ref{WL_surface}.
Finally we can compute the length of the Wilson loop and the finite part of the area using eq.(\ref{Afres}). The result is
\begin{eqnarray}
L_1 &=& 13.901, \ \ \ L_2 = 6.449 \ ,\\
A_f &=& -6.598 \ \ \ \mbox{for both.}
\end{eqnarray}
The length $L_{1,2}$ can be changed by a scale transformation but the finite part $A_f$ is scale invariant and independent of $\lambda$, the spectral parameter.
For comparison, for a circle of radius $R$ the corresponding values
are:
\begin{equation}
L^{(circle)} = 2\pi R , \ \ \ A_f^{(circle)} = -2\pi.
\end{equation}
Although in the end the results are numerical, we emphasize that the shape of the surface is known analytically. Also a relatively simple expression
was found for the expectation value (after subtracting the infinities) in terms of a one-dimensional finite integral. At the end, the integral was
evaluated numerically to get the value for the area. Notice also that the area is smaller than the one for the circle. For a given length $L$
the area is not bounded from below since we can take a contour made out of two parallel lines of length $L/2$ and separated by a distance
$\delta\rightarrow 0$ in which case the area goes to minus infinity as $A_f \sim -\frac{L}{2\delta}$ \cite{MRY}.
On the other hand, for fixed $L$, the circle is expected to be an upper bound as shown in \cite{SARM}. Our result agrees with that bound.
\begin{figure}
\centering
\subfloat[$\lambda=i$]{\includegraphics[width=6cm]{contourXY_1}}
\subfloat[$\lambda=-\frac{1+i}{\sqrt{2}}$]{\includegraphics[width=5.5cm]{contourXY_2}}
\caption{Shape of the Wilson loops that we use as an example. It is obtained by mapping the contour in fig.\ref{zeros_theta_hat_1} into the $X,Y$ plane
for two values of the spectral parameter $\lambda$. }
\label{WL_shape}
\end{figure}
\begin{figure}
\centering
\subfloat[$\lambda=i$]{\includegraphics[width=7.5cm]{surface3D_1}}
\subfloat[$\lambda=-\frac{1+i}{\sqrt{2}}$]{\includegraphics[width=7cm]{surface3D_2}}
\caption{Minimal area surfaces ending on the contours illustrated in fig.\ref{WL_shape}. We emphasize that the surfaces are known analytically.}
\label{WL_surface}
\end{figure}
Having described a particular example in detail, we want to elaborate further on the properties of these Wilson loops. We know that the zeros of $\hat{\theta}(\zeta)$ determine the boundary of the Wilson loop; moreover, from eqs.(\ref{kappadef}), (\ref{thetahatdef}) it follows that all zeros are given by
\begin{equation}
\zeta = \ensuremath{\frac{1}{2}}\left[\begin{array}{c} 0\\ 0 \\1 \end{array}\right] +\ensuremath{\frac{1}{2}} \Pi\left[\begin{array}{c} 1\\ 1 \\ 1 \end{array}\right]+ \phi(\lambda_1)+\phi(\lambda_2)\ ,
\end{equation}
for arbitrary points $\lambda_{1,2}$ on the Riemann surface. The vectors $\left[\begin{array}{c} 0\\ 0 \\1 \end{array}\right] $ and
$\left[\begin{array}{c} 1\\ 1 \\ 1 \end{array}\right]$ represent the difference (mod 2) between the characteristics of $\hat{\theta}(\zeta)=\theta\left[\begin{array}{ccc} 0&0&1 \\ 1&1&1 \end{array}\right](\zeta)$ and $\kappa \equiv \left[ \begin{array}{ccc} 1&1&0\\1&1&0 \end{array} \right]$.
We cannot take arbitrary points $\lambda_{1,2}$ on the Riemann surface because we also have
\begin{equation}
\zeta= 2 \left( \omega(\infty) z + \omega(0) \bar{z} \right)\ ,
\end{equation}
which satisfies
\begin{equation}
\bar{\zeta} =-T\zeta, \ \ \ \ \ T=\left(\begin{array}{ccc}0&-1&0\\-1&0&0\\0&0&-1\end{array}\right) .
\end{equation}
With some work it can be seen that we can take
\begin{equation}
\zeta =\ensuremath{\frac{1}{2}}\left[\begin{array}{c} 2\, n_1\\ 2\, n_2 \\1+2\, n_3 \end{array}\right] +\ensuremath{\frac{1}{2}} \Pi\left[\begin{array}{c} 1+2\, m_1\\ 1+2\, m_2 \\ 1+2\, m_3 \end{array}\right]- \phi(\lambda_1)+\phi\left(-\frac{1}{\bar{\lambda}_1}\right) , \label{iRs}
\end{equation}
where $n_{1,2,3}$ and $m_{1,2,3}$ are integers which can be absorbed in the definition of the path used to compute the function $\phi$.
It therefore follows that for genus 3 we can map the Wilson loop into a curve inside the Riemann surface. Namely for each point in the contour displayed in
fig.\ref{zeros_theta_hat_1} there is a point $\lambda_1$ in the Riemann surface such that:
\begin{equation}
2 \left( \omega(\infty) z + \omega(0) \bar{z} \right)=\ensuremath{\frac{1}{2}}\left[\begin{array}{c} 2\, n_1\\ 2\, n_2 \\1+2\, n_3 \end{array}\right] +\ensuremath{\frac{1}{2}} \Pi\left[\begin{array}{c} 1+2\, m_1\\ 1+2\, m_2 \\ 1+2\, m_3 \end{array}\right]- \phi(\lambda_1)+\phi\left(-\frac{1}{\bar{\lambda}_1}\right) . \label{iRs2}
\end{equation}
The set of such points describes a curve inside the Riemann surface which is depicted in fig.\ref{WL_in_RS}.
The statement is the following: for each point $\lambda_1$ in the curve and for a given choice of path used to define the function $\phi$ there is a set of integers
$n_{1,2,3}$ and $m_{1,2,3}$ such that eq.(\ref{iRs2}) can be solved for $z$. These values of $z$ lie in the closed curves depicted in fig.\ref{zeros_theta_hat}
which are the zeros of the function $\hat{\theta}$. Equivalently we can set all integers $n_{1,2,3}=0$ and $m_{1,2,3}=0$ but then we have to choose the
path used to define $\phi$ appropriately so that eq.(\ref{iRs2}) has a solution. Furthermore, with an appropriate choice we can map the curve in fig.\ref{WL_in_RS}
to the curve in fig.\ref{zeros_theta_hat_1} which was the one used in our examples.
For higher genus, there should be a set of curves in the Riemann surface for each closed
Wilson loop. In this paper we do not explore this issue further but we believe it should be an interesting subject to pursue.
As a final comment, it should be noted that these Wilson loops are not BPS since, for Euclidean Wilson loops with constant scalar, the only BPS ones are
straight lines \cite{Zarembo:2002an}.
\begin{figure}
\centering
\includegraphics[width=10cm]{rieman1}
\caption{For genus three, each Wilson loop computable in terms of theta functions maps into a closed curve inside the corresponding Riemann surface. In this
figure we display the closed curve corresponding to the Wilson loop of fig.\ref{WL_shape}. We notice it encircles two cuts.}
\label{WL_in_RS}
\end{figure}
\section{Conclusions}
\label{conclusions}
In this paper we discussed minimal area surfaces in \ads{3} space which are dual to Wilson loops in \N{4} SYM. We essentially follow the results of the paper
\cite{BB} where such solutions were found but provide a different derivation of the solutions and also a formula for the finite part of the area in accordance
with the one used in the AdS/CFT correspondence. Finally we make the observation that closed Wilson loops appear from these solutions in which case the
world-sheet has the topology of a disk and the area is expressed as a (finite) contour integral over the world-sheet boundary.
In this way we construct an infinite parameter family of new examples of Wilson loops whose dual surface is analytically known.
It should be noticed that, to our knowledge, only the circular Wilson loop and the lens-shaped loop were previously known examples of closed Euclidean Wilson loops (with constant scalar). Furthermore, the result gives, for each individual Wilson loop, a one-parameter family
of deformations (given by the spectral parameter $\lambda$) such that the area remains the same.
Finally, for genus 3 we pointed out an interesting map between Wilson loops computable in this way and curves inside the Riemann surface.
We hope that these new solutions will give rise to a better understanding of Wilson loops in the context of the AdS/CFT correspondence and also in general.
It is evident that an important integrable structure lies behind them that should be explored in detail. It would be interesting if these ideas allow us to reconstruct
the shape of the Wilson loop from the field theory using some sort of coherent states formalism as in \cite{spinchain}.
\section{Acknowledgments}
We are grateful to Nadav Drukker, Georgios Michalogiorgakis, Alin Tirziu and Peter Ouyang for comments and suggestions.
This work was supported in part by NSF through grants PHY-0805948, a CAREER Award PHY-0952630, a Graduate Fellowship (S.Z.) and an AGEP
grant \#0450373, by DOE through grant DE-FG02-91ER40681 and by the SLOAN Foundation.
\section{Appendix}
In this appendix we derive a useful identity for the theta functions. Instead of doing a general derivation we show how it works in the example
we are dealing with in the main text and then the generalization should be clear.
Consider the function
\begin{equation}
h(\lambda) = c \left(\frac{e^{i\pi\Delta_1^t \int_{p_1}^\lambda} \theta(a+\int_{p_1}^\lambda)}{\theta(\int_{p_1}^\lambda)}\right)^2\ ,
\end{equation}
defined on the Riemann surface. Also $c$ is a constant that we choose later. By looking at the results (\ref{Ch}) one can see that the numerator has zeros at $\lambda=0, \ensuremath{\frac{1}{2}}(1-i), -1+i$ whereas
the denominator vanishes for $\lambda=\infty,\ensuremath{\frac{1}{2}}(1-i), -1+i$. It follows that $h(\lambda)$ has a zero at $\lambda=0$ and a pole at $\lambda=\infty$.
To see the behavior near $\lambda=0$ we use that (remembering that $p_1=0$ and eq.(\ref{wdef}))
\begin{equation}
\int_0^{\lambda} \omega_j \simeq \int_0^{\lambda} \frac{d\lambda}{i\sqrt{\lambda}}\, C^{-1}_{1j} = -2 i \sqrt{\lambda}\, C^{-1}_{1j}, \ \ \ \lambda\rightarrow 0 .
\end{equation}
Thus,
\begin{equation}
h(\lambda) \simeq - 4 c \lambda \left(\frac{D_{p_1}\theta(a)}{\theta(0)}\right)^2 .
\end{equation}
If we choose
\begin{equation}
c = -\frac{1}{4} \left(\frac{\theta(0)}{D_{p_1}\theta(a)}\right)^2\ ,
\end{equation}
then $h(\lambda)\simeq \lambda$, that is, it has a simple zero at $\lambda=0$. Similarly one can check that it has a simple pole at $\lambda=\infty$. Finally it
can be seen to have the right periodicity properties to be well defined on the Riemann surface. The only such function is $h(\lambda)=\lambda$ so we conclude
\begin{equation}
\lambda = -\frac{1}{4} \left(\frac{\theta(0)}{D_{p_1}\theta(a)}\right)^2
\left(\frac{e^{i\pi\Delta_1^t \int_{p_1}^\lambda} \theta(a+\int_{p_1}^\lambda)}{\theta(\int_{p_1}^\lambda)}\right)^2\ ,
\end{equation}
as used in the main text. Furthermore, by expanding around $\lambda=\infty$ we find that $h(\lambda)=\lambda$ if we choose the constant $c$ to be
\begin{equation}
c = -4 e^{i\pi\Delta_1^t\Pi\Delta_1} \left(\frac{D_{p_3}\theta(a)}{\theta(0)}\right)^2 .
\end{equation}
Equating the two expressions for $c$ we find
\begin{equation}
\left(\frac{D_{p_3}\theta(a)D_{p_1}\theta(a)}{\theta^2(0)}\right)^2 = \frac{1}{16} e^{-i\pi\Delta_1^t\Pi\Delta_1}\ ,
\end{equation}
as we also used in the main text. Although we derived the result for our particular case it is clearly valid in general (see for example \cite{FK} for the case
of a generic hyperelliptic Riemann surface). Taking the square root of the last equation we find
\begin{equation}
\frac{D_{p_3}\theta(a)D_{p_1}\theta(a)}{\theta^2(0)} = \pm \frac{1}{2} e^{-\frac{i}{2}\pi\Delta_1^t\Pi\Delta_1} .\label{idch}
\end{equation}
The sign cannot be determined by this reasoning but it is easily found to be minus by a simple numerical computation. The result was used in the main
text to find the correct normalization of $\zeta$ so that $\alpha$ is a solution to the cosh-Gordon equation.
\section{Introduction}
The relation between Hermitian quantum mechanics and its parity-time ($\PT$--) symmetric extension became the focus of intense debates shortly after the latter paradigm was introduced in~\cite{Bender}. In particular, it was shown that any $\PT$-symmetric Hamiltonian with purely real spectrum, i.e., in the unbroken $\PT$-symmetric phase, can be transformed to a Hermitian one using a similarity transformation
\cite{Mostaf2002}. This fact establishes a certain equivalence between the two formulations. In the classical limit, $\PT$-symmetric models occupy an ``intermediate position'' between conservative and dissipative nonlinear systems, featuring properties of both these types (see the discussion in~\cite{KYZ}). A typical real-world $\PT$-symmetric system consists of an active element coupled to a lossy one, {\color{black} like those theoretically introduced~\cite{ElGan07} and experimentally studied~\cite{Ruter} in non-Hermitian discrete optics.} Generally, gain and dissipation break the conservation laws for energy or other physically relevant quantities. This strongly affects the nonlinear dynamics, making its description impossible (or highly nontrivial) using the analytical approaches available, say, for Hamiltonian systems.
Recently, it has been discovered that the dynamics of some nonlinear $\PT$-symmetric models with gain and dissipation can still be described using the Hamiltonian formalism
and is therefore characterized by a rather high degree of regularity. More specifically, the Hamiltonian structure has been revealed for $\PT$-symmetric coupled oscillators~\cite{coupled_oscil}, for some completely integrable dimer models~\cite{Ramezani,BG14,Barash_Hamilt, JC93,JCAH93,BPD15}, and for chains of $\PT$-symmetric pendula \cite{CP}. We also mention that systems which do not possess $\PT$ symmetry but display characteristics of conservative and dissipative ones are also known; they are described by time-reversible Hamiltonians~\cite{PoOpBa}. However, all the mentioned examples belong to the class of dynamical systems, i.e., the corresponding models are described by systems of ordinary differential equations.
The main goal of this paper is to introduce a nonlinear $\PT$-symmetric dispersive system which incorporates gain and losses and at the same time allows for the Hamiltonian formulation. This model emerges as a $\PT$-generalization of the resonant four-wave mixing process in a spinor Bose-Einstein condensate loaded in linear and nonlinear lattices. We demonstrate that such a two-component model, where one component experiences gain and another one loses atoms, can be obtained from a real-valued Hamiltonian. Next, we explore the associated conserved quantities and present various exact solutions and some traits of the system's nonlinear dynamics. We in particular focus on exact bright soliton solutions and demonstrate numerically that they are stable in a wide parameter range.
The paper is organized as follows. In Sec.~\ref{sec:model} we discuss a physical example where the proposed $\PT$-symmetric model arises. The model itself and its basic properties are presented in Sec.~\ref{sec:main}. Section~\ref{sec:exact} demonstrates that the introduced system admits a variety of exact solutions which can be written down in the analytical form, and Sec.~\ref{sec:solitons} studies exact solutions in the form of bright solitons. Finally, Sec.~\ref{sec:concl} concludes the paper and outlines some perspectives.
\section{On the physical model}
\label{sec:model}
{\color{black} To introduce the model, we start with a physical example. We consider a one-dimensional Bose-Einstein condensate (BEC)
loaded in a linear lattice which can be created experimentally by two counter-propagating laser beams~\cite{lin_latt}. We also assume that the scattering length varies periodically in space, i.e., creates a nonlinear lattice. The latter can be created, for example, by a periodically varying external field affecting the scattering length by means of the Feshbach resonance. Nonlinear lattices have been realized in laboratory~\cite{nonlin_latt} and studied in numerous theoretical works (see e.g. Refs.~\cite{NL_meanfiled,Trippenbach}). In particular, in \cite{Trippenbach} it was shown that the matching condition for resonant four-wave processes can be achieved by modifying the momentum, while in linear lattices the matching condition is achieved by modifying the dispersion relation~\cite{lin_matching}.}
{\color{black}
The meanfield Hamiltonian describing the BEC in linear and nonlinear lattices reads
\begin{eqnarray}
\label{Hmilt_BEC}
\hat{H}_{\rm BEC}=\int_{-\infty}^{\infty}\left\{ {\Psi}^*\hat{h}{\Psi}+2\tOmega \cos(\nu t)V_1(x) |\Psi|^2 + \chi(x) [1+2\cos(2\nu t)] |\Psi|^4 \right\} dx,
\end{eqnarray}
where $\Psi$ is the order parameter, $\hat{h}=-\partial_x^2 +V_{0}(x)$, $V_{0,1}(x)=V_{0,1}(x+L)$ are the even ("$0$") and odd ("$1$") components of the optical lattice, i.e., $V_0(-x)=V_0(x)$ and $V_1(-x)=-V_1(x)$, $L$ is the lattice period, $\tOmega\ll 1$ is the small parameter defining the relative depth of the odd component, and $\chi(x)=\chi(x+L)$ describes the nonlinear lattice which undergoes periodic oscillations with the frequency $2\nu$. The value of $\nu$ will be specified below. The amplitude of the shallow odd lattice $V_1(x)$ also periodically varies in time, but with the frequency $\nu$. In (\ref{Hmilt_BEC}) dimensionless units with $\hbar=1$ and $m=1/2$, where $m$ is the atomic mass, are used.
}
{\color{black} Let us consider the evolution of a superposition $\Psi=\Psi_1(x,t)+\Psi_{2}(x,t)$ of two wave packets of Bloch states. Choosing these states as shown in Fig.~\ref{fig:bands}, i.e. Bloch modes with zero group velocities and having energies at the band edges with equal signs of curvatures of the dispersion relations, one can look for a solution in the form
\begin{eqnarray}
\label{ansatz}
\begin{array}{l}
\displaystyle{ \Psi_1(x,t)=\left [\epsilon u(\tx,\tilt)\psi_1(x)+\epsilon^2 \frac{\partial u}{\partial\tx }\tpsi_1(x) +\cdots\right] e^{-i\cE_1 t},
}
\\[3mm]
\displaystyle{
\Psi_2(x,t)=\left[\epsilon v(\tx,\tilt)\psi_2(x)+\epsilon^2 \frac{\partial v}{\partial\tx }\tpsi_2(x) +\cdots\right]e^{-i\cE_2 t}.
}
\end{array}
\end{eqnarray}
Here $\epsilon\ll 1$ is a formal small parameter, $u(\tx,\tilt)$ and $v(\tx,\tilt)$ are functions of slow variables $\tx=\epsilon x$ and $\tilt=\epsilon^2 t$, i.e., envelopes of the Bloch states $\psi_{1,2}$ corresponding to the energies $\cE_{1,2}$: $\hat{h}\psi_j=\cE_j\psi_j$ ($j=1,2$). In this case, $\psi_{1}$ and $\psi_{2}$ are real-valued periodic functions (as follows from the Floquet theorem). Detailed calculations of the other functional coefficients of the expansions (\ref{ansatz}), i.e., of the functions $\tpsi_{1,2}(x)$, can be performed within the framework of the multiple scale analysis. Here we do not present all the details as they are available in the literature (see e.g.~\cite{KS}), but only show their effect on the Hamiltonian structure of the model. In the meantime, it will be important that $\psi_j$ and $\tpsi_j$ are orthogonal to each other, which in our case means that $ \int_{-L/2}^{L/2}\psi_j(x)\tpsi_j(x)dx=0$.
Thus the dynamics of the superposition $\Psi$ is completely determined by the evolution of $u(\tx,\tilt)$ and $v(\tx,\tilt)$, and our aim is to obtain the Hamiltonian of this dynamics.
}
\begin{figure
\begin{center}
\includegraphics[width=0.8\textwidth]{fig00.eps}
\end{center}
\caption{{\color{black} Two schematic representations of resonantly interacting pairs of Bloch modes in the linear lattice. Panels (a) and (b) correspond to two pairs of modes (shown with circles) with positive and negative curvatures of dispersion relations (i.e., effective masses), respectively. In both panels, $k$ is the Bloch momentum.}}
\label{fig:bands}
\end{figure}
{\color{black} Now we use substitution (\ref{ansatz}) in the Hamiltonian (\ref{Hmilt_BEC}) and require $\nu$ to be the difference between the energies of the states: $\nu=\cE_2-\cE_1$ (see Fig.~\ref{fig:bands}). The formulated assumptions allow us to approximate the different terms as follows:
\begin{eqnarray}
\label{aux1}
\int_{-\infty}^{\infty}{\Psi}^*\hat{h}{\Psi}dx
\approx \epsilon^2 \int_{-\infty}^{\infty}\left(\cE_1 \psi_1^2|u|^2+\cE_2 \psi_2^2|v|^2\right)dx
\nonumber \\
+
\epsilon^4\int_{-\infty}^{\infty}
\left(\psi_1^2-\tpsi_1\frac{d\psi_1}{dx}-\tpsi_1\hat{h}\tpsi_1\right)|u_{\tx}|^2dx \nonumber\\
+ \epsilon^4\int_{-\infty}^{\infty}
\left(\psi_2^2-\tpsi_2\frac{d\psi_2}{dx}-\tpsi_2\hat{h}\tpsi_2\right)|v_{\tx}|^2 dx,
\end{eqnarray}
where we dropped the terms $\sim \psi_jd\psi_j/dx$ whose contribution to the integral is negligible due to the opposite parities of $\psi_j$ and $d\psi_j/dx$, as well as the terms $\sim \psi_j \tpsi_j$ due to their orthogonality;
}
{\color{black}
\begin{eqnarray}
\label{aux2}
\int_{-\infty}^{\infty} 2\tOmega\cos(\nu t)V_1(x) |\Psi|^2dx\approx
\epsilon^2 \tOmega\int_{-\infty}^{\infty}V_1(x) \psi_{1}\psi_2 \left(u^*v+uv^*\right)dx
\end{eqnarray}
where we neglected the terms $\sim V_1(x)|\psi_{j}|^2$ because of their symmetry (these terms can also be discarded in the rotating wave approximation as rapidly varying $\sim \cos (\pm\nu t)$);
\begin{eqnarray}
\label{aux3}
\int_{-\infty}^{\infty}\chi(x) |\Psi|^4dx\approx \epsilon^4 \int_{-\infty}^{\infty}\chi(x)\left( \psi_{1}^4|u|^4 + \psi_{2}^4|v|^4+ 4\psi_{1}^2\psi_{2}^2|u|^2|v|^2\right)dx
\end{eqnarray}
where the terms oscillating as $e^{\pm i\nu t}$ and $e^{\pm 2 i\nu t}$
are dropped as rapidly oscillating;
\begin{eqnarray}
\label{aux4}
\int_{-\infty}^{\infty}2\cos(2\nu t) \chi(x)|\Psi|^4dx\approx \epsilon^4 \int_{-\infty}^{\infty} \chi(x)\psi_{1}^2\psi_{2}^2\left[ u^2(v^*)^2+ (u^*)^2 v^2 \right]dx
\end{eqnarray}
where all other terms are rapidly oscillating.
}
\textcolor{black}{The term $\sim \epsilon^2$ in (\ref{aux1}) amounts to $\int_{-\infty}^{\infty} (\cE_1 |\Psi_1|^2 + \cE_2 |\Psi_2|^2)dx$. It does not affect the dynamics of slowly varying amplitudes $u$ and $v$, but simply describes the leading order of the Hamiltonian equation
\begin{equation}
\label{Hmilt}
i\frac{\partial \Psi}{\partial t} = \hat{H}_{BEC}\Psi,
\end{equation}
whose left hand side reads
\begin{equation}
\label{time}
i\frac{\partial \Psi}{\partial t} = \cE_1 \Psi_1 + \cE_2 \Psi_2 + i\epsilon^3\left(e^{-i\cE_1 t}\psi_1\frac{\partial u}{\partial \tilde{t}} + e^{-i\cE_2 t}\psi_2\frac{\partial v}{\partial \tilde{t}} \right)
\end{equation} }
{\color{black} For other integrals in the right hand sides of (\ref{aux1})-(\ref{aux4}), we note that they contain functions depending on fast ($\psi_{1,2}$) and slow ($u,v$) variables. In order to obtain formally the effective Hamiltonian for slow envelopes $u$ and $v$, we substitute the fast functions in the integrands by their mean values over the lattice period $L$. Then assuming the Bloch functions normalized, we obtain
$\displaystyle{\psi_{j}^2\to
\int_{-L/2}^{L/2}\psi_{j}^2(x)dx=1}$. Next,
we assume that the linear lattice is chosen such that
}
{\color{black}
\begin{eqnarray}
\int_{-L/2}^{L/2}\left(\psi_1^2-\tpsi_1\frac{d\psi_1}{dx}-\tpsi_1\hat{h}\tpsi_1\right) dx=
\int_{-L/2}^{L/2}\left(\psi_2^2-\tpsi_2\frac{d\psi_2}{dx}-\tpsi_2\hat{h}\tpsi_2\right) dx
\nonumber \\
:=\frac{1}{2m_{eff}}
\end{eqnarray}
where $m_{eff}$ is the effective mass, which is proportional to the radius of curvature of the dispersion curves~\cite{KS} and can be positive as in Fig.~\ref{fig:bands}(a) or negative as in Fig.~\ref{fig:bands}(b). To simplify the consideration, we address only the case of positive effective mass: $m_{eff}>0$ (the generalization for $m_{eff} <0$ is straightforward).
}
{\color{black}
Additionally, we assume that one can find a constant $g$ such that within the accepted accuracy
\begin{eqnarray}
\label{cond}
\frac{g}{2}= \int_{-L/2}^{L/2} \chi(x)\psi_{1}^2\psi_{2}^2dx=\int_{-L/2}^{L/2}\chi(x) \psi_{1}^4dx = \int_{-L/2}^{L/2} \chi(x)\psi_{2}^4dx.
\end{eqnarray}
Finally, we consider weak coupling, allowing us to define $\Omega = {\cal O}(1)$ through the relation
\begin{equation}
\epsilon^2\Omega = \tOmega\int_{-L/2}^{L/2} V_1(x)\psi_{1}\psi_2 dx
\end{equation}
}
{\color{black}
Having substituted the products of the fast functions by their average values, the integrals of the remaining slow envelopes can be computed with respect to the renormalized slow variable $\tx$ using that $$dx =
\frac{1}{\epsilon} d\tx.$$ Then collecting all the integrals $\sim \epsilon^3$ with respect to $d\tx$ leads to the energy functional
\begin{eqnarray}
\label{energy}
E= \int_{-\infty}^\infty \Bigl[|u_x|^2 + |v_x|^2 + \Omega (uv^* + u^* v) \nonumber \\
+ \frac{g}{2}(|u|^4 + |v|^4+u^2(v^*)^2+(u^*)^2 v^2 + 4|u|^2|v|^2)\Bigr]dx.
\end{eqnarray}
where we have omitted the tildes over $x$, since in the rest of the paper we deal only with the envelopes $u$ and $v$, and have renormalized the coefficients $2 m_{eff} \Omega \to \Omega$, $2 m_{eff} g \to g$. }
The equations for the evolution of the slow amplitudes are obtained from {\color{black} the Schr\"odinger equation (\ref{Hmilt}), with the time derivative given by (\ref{time}), projected onto $\psi_1$ and $\psi_2$, and can be expressed in the Hamiltonian form}
\begin{eqnarray}
\label{eq:motionE}
\frac{\delta E}{\delta u} = -iu_t^*, \quad \frac{\delta E}{\delta v} = -iv_t^*, \quad
\frac{\delta E}{\delta u^*} = iu_t, \quad
\frac{\delta E}{\delta v^*} = iv_t,
\end{eqnarray}
which gives
\begin{equation}
\label{eq:nonlin1D_cons}
\eqalign{iu_t = -u_{xx} +\Omega v + g |u|^2 u + 2g |v|^2u + g u^*v^2,\\
iv_t = -v_{xx} +\Omega u + g |v|^2 v+ 2g |u|^2v + g u^2v^*.}
\end{equation}
For $x$-independent solutions system (\ref{eq:nonlin1D_cons}) reduces to that explored in~\cite{Trippenbach}. In general, in the basis of new functions $u_{\pm}(x, t) = u(x,t) \pm v(x,t)$ system (\ref{eq:nonlin1D_cons}) decouples into two nonlinear Schr\"odinger equations \cite{Gergjikov,IB}
\begin{equation}
\label{eq:IB}
iu_{\pm,t} = -u_{\pm, xx} \pm \Omega u_\pm + g |u_\pm|^2 u_\pm,
\end{equation}
and is therefore completely integrable.
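To make the reduction explicit, the decoupling follows from the elementary algebraic identity
\[
|u\pm v|^2 (u\pm v) = \left(|u|^2 u + 2|v|^2 u + u^* v^2\right) \pm \left(|v|^2 v + 2|u|^2 v + u^2 v^*\right),
\]
so that adding and subtracting the two equations of (\ref{eq:nonlin1D_cons}) immediately yields (\ref{eq:IB}).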
Another interesting observation is that the obtained system (\ref{eq:nonlin1D_cons}) has a {\em bi-Hamiltonian}~\cite{AC91} structure. Indeed, it can also be obtained from the Hamiltonian
\begin{equation}
\label{eq:Hlin_cons}
H = \int_{-\infty}^\infty \left[ u_x^* v_x + u_x v_x^* + \Omega (|u|^2 + |v|^2) + g(|u|^2+|v|^2)(u^*v + uv^*) \right] dx
\end{equation}
in the new canonical variables according to the following equations:
\begin{eqnarray}
\label{eq:motion}
\frac{\delta H}{\delta u} = -iv_t^*, \quad \frac{\delta H}{\delta v} = -iu_t^*,\quad
\frac{\delta H}{\delta u^*} = iv_t, \quad
\frac{\delta H}{\delta v^*} = iu_t.
\end{eqnarray}
\section{Hamiltonian $\PT$-symmetric coupler}
\label{sec:main}
Hamiltonian equations (\ref{eq:motion}) feature the \textit{cross-gradient} structure which has been recently revealed for some $\PT$-symmetric Hamiltonian systems of coupled oscillators \cite{coupled_oscil,BG14,BPD15}. This observation opens the route to generalize the system (\ref{eq:nonlin1D_cons}) by including $\PT$-symmetric gain and loss terms. To this end, let us introduce the following generalization of the Hamiltonian (\ref{eq:Hlin_cons}):
\begin{eqnarray}
\label{eq:Hlin_PT}
H = \int_{-\infty}^\infty \left [u_x^* v_x + u_x v_x^* + i\gamma (uv^* - u^*v) + \Omega (|u|^2 + |v|^2) \right.
\nonumber \\
\hspace{4cm}\left. + g(|u|^2+|v|^2)(e^{i\phi}u^*v + e^{-i\phi}uv^*) \right] dx,
\end{eqnarray}
where $\phi$ is a real constant.
Then, applying equations (\ref{eq:motion}) to the Hamiltonian (\ref{eq:Hlin_PT}), we arrive at the \textit{Hamiltonian $\PT$-symmetric coupler}:
\begin{equation}
\label{eq:nonlin1D}
\eqalign{
iu_t = -u_{xx} + i\gamma u +\Omega v + g e^{-i\phi} |u|^2 u + 2g e^{-i\phi} |v|^2u + ge^{i\phi} u^*v^2,\\
iv_t = -v_{xx} - i\gamma v +\Omega u + ge^{i\phi} |v|^2 v+ 2g e^{i\phi} |u|^2v + ge^{-i\phi} u^2v^*.}
\end{equation}
\textcolor{black}{To the best of our knowledge, system (\ref{eq:nonlin1D}) has not been considered in the previous literature. This system will be in the focus of our attention in the rest of this study.}
From the physical perspective, system (\ref{eq:nonlin1D}) can be considered as the described above model of the spinor BEC in the nonlinear lattice, where atoms are loaded in the $u$-component and are eliminated from the $v$-component with the strength characterized by the gain-loss coefficient $\gamma$. The model (\ref{eq:nonlin1D}) also includes nonlinear gain and losses due to inelastic two-body interactions characterized by the real parameter $\phi$. Without loss of generality, in what follows we assume that $\gamma, \Omega \geq 0$.
{\color{black} Model (\ref{eq:nonlin1D}) belongs to the class of nonlinear dispersive systems. Indeed, considering the propagation of small-amplitude plane waves $(u,v) = (p, q) e^{i(kx-\omega t)}$, where $|p|, |q| \ll 1$, $\omega$ is the frequency, and $k$ is the wavenumber, we obtain the two branches of the dispersion relation in the form
\begin{equation}
\label{eq:disp}
\omega_\pm(k)=k^2\pm\sqrt{\Omega^2-\gamma^2}.
\end{equation}
Thus the waves propagating in such a coupler have a dispersive nature and, in order to reflect this fact, the system (\ref{eq:nonlin1D}) can be referred to as a dispersive nonlinear $\PT$-symmetric coupler.}
{\color{black} To complete the formulation of the model, we note that the underlying physical setup for system (\ref{eq:nonlin1D}) resulted from the process of four-wave mixing in linear and nonlinear lattices modifying, respectively, the energy and momentum conservation laws. It turns out that the presence of gain and loss in (\ref{eq:nonlin1D}) allows one to modify the dispersion relation, which makes it possible to obtain matching conditions for four-wave mixing in terms of the two-component field $(u,v)$~\cite{Marek}. Thus the phenomenon of wave mixing is also expected in the framework of the model (\ref{eq:nonlin1D}). The peculiarities of this effect are beyond the scope of the present work: here we concentrate mainly on localized solitonic solutions.}
In the linear case ($g=0$) system (\ref{eq:nonlin1D}) assumes the form
\begin{equation}
i\frac{\partial\ }{\partial t} \left(\begin{array}{c}
u\\v
\end{array}\right) = {L} \left(\begin{array}{c}
u\\v
\end{array}\right), \quad {L} = \left(\begin{array}{cc}
-\partial^2_x + i\gamma &\Omega \\\Omega & -\partial^2_x - i\gamma
\end{array}\right).
\end{equation}
The linear operator $L$ is $\PT$ symmetric: it commutes with the $\PT$ operator, where ${\cal P}$ swaps the components,
\begin{equation}
{\cal P} \left(\begin{array}{c} u\\v\end{array}\right) = \left(\begin{array}{c} v\\u\end{array}\right),
\end{equation}
and $\cal T$ is the component-wise complex conjugation combined with the time reversal: ${\cal T }u(x,t) = u^*(x,-t)$, ${\cal T }v(x,t) = v^*(x,-t)$. \textcolor{black}{As readily follows from the dispersion relation (\ref{eq:disp})}, the linear coupler is stable (i.e., $\PT$ symmetry is unbroken) if $\gamma/\Omega<1$. For $\gamma/\Omega\geq 1$ there are solutions which grow unboundedly in time (i.e., $\PT$ symmetry is broken). The transition from unbroken to broken $\PT$ symmetry, i.e., the situation $\gamma = \Omega$, corresponds to the exceptional point (EP).
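The location of the $\PT$-symmetry breaking is straightforward to check numerically. A minimal Python sketch (for illustration only, with arbitrary parameter values) diagonalizes the matrix obtained by substituting $(u,v)=(p,q)e^{i(kx-\omega t)}$ into the linear system:
\begin{verbatim}
import numpy as np

def plane_wave_frequencies(k, gamma, Omega=1.0):
    # 2x2 matrix acting on the plane-wave amplitudes (p, q)
    M = np.array([[k**2 + 1j*gamma, Omega],
                  [Omega, k**2 - 1j*gamma]])
    return np.linalg.eigvals(M)

print(plane_wave_frequencies(0.7, 0.5))  # gamma < Omega: two real omega
print(plane_wave_frequencies(0.7, 1.5))  # gamma > Omega: complex pair
\end{verbatim}
For $\gamma<\Omega$ the eigenvalues reproduce the two real branches of (\ref{eq:disp}); for $\gamma>\Omega$ they form a complex-conjugate pair.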
The nonlinear ($g\ne 0$) system (\ref{eq:nonlin1D}) is $\PT$ symmetric in the following sense: if functions $u(x,t)$ and $v(x,t)$ solve equations (\ref{eq:nonlin1D}) in some domain $(x,t)\in \mathbb{R}\times [-t_0, t_0]$, then the new functions $u_{\PT}(x,t):={v}^*(x,-t)$ and $v_{\PT}(x,t):={u}^*(x,-t)$ also solve system (\ref{eq:nonlin1D}) in the same domain.
The model (\ref{eq:nonlin1D}) generically
does not conserve the number of particles (the $L^2$-norm of the solution)
\begin{equation}
N=\| u \|_{L^2}^2 + \| v \|_{L^2}^2 = \int_{-\infty}^\infty (|u|^2+|v|^2)dx.
\end{equation}
Indeed, it is straightforward to obtain the relation
\begin{equation}
\label{eq:dN}
\frac{dN(t)}{dt} = 2\gamma\int_{-\infty}^\infty (|u|^2 - |v|^2)\ dx - 2g\sin\phi\int_{-\infty}^\infty (|u|^4 - |v|^4)\ dx.
\end{equation}
At the same time, by construction, system (\ref{eq:nonlin1D}) conserves the real-valued Hamiltonian $H$ given by (\ref{eq:Hlin_PT}): $dH/dt=0$. In order to identify other conserved quantities, it is convenient to use the Lagrangian formalism, starting with the (real-valued) Lagrangian density
\begin{eqnarray}
{\cal L} = \frac{i}{2}(u_tv^* - u_t^*v + u^*v_t - uv_t^*) - [ u_x^* v_x + u_xv_x^*+i\gamma(uv^*-u^*v) \nonumber \\ \hspace{2cm}
+ \Omega(|u|^2+|v|^2) + g(|u|^2+|v|^2)(e^{i\phi}u^*v + e^{-i\phi}uv^*)].
\end{eqnarray}
Then system (\ref{eq:nonlin1D}) is equivalent to the Euler--Lagrange equations
\begin{equation}
{\frac{\partial \cL}{\partial u}} = \frac{\partial\ }{\partial x} \left(\frac{\partial\cL}{\partial u_x}\right) + \frac{\partial\ }{\partial t} \left(\frac{\partial\cL}{\partial u_t}\right),\qquad
{\frac{\partial \cL}{\partial v}} = \frac{\partial\ }{\partial x} \left(\frac{\partial\cL}{\partial v_x}\right) + \frac{\partial\ }{\partial t} \left(\frac{\partial\cL}{\partial v_t}\right).
\end{equation}
Since the action functional $S=\int_0^t\int_{-\infty}^\infty {\cL}\, dx dt$ is invariant under
space and time translations, as well as under the phase rotation, from Noether's theorem (see e.g.~\cite{Sulem}) we obtain three conserved quantities for the model (\ref{eq:nonlin1D}): one of them corresponds to the Hamiltonian $H$ in (\ref{eq:Hlin_PT}), and the two others correspond to the quasi-power
\begin{equation}
\label{eq:Q}
Q = \int (uv^*+u^*v)dx,\qquad \frac{dQ}{dt}=0,
\end{equation}
and the quasi-momentum
\begin{equation}
P = i\int_{-\infty}^\infty ( u_x v^* - u_x^* v)dx, \qquad \frac{dP}{dt}=0.
\end{equation}
As an immediate consequence of the established conservation laws, we observe that the total number of particles (as well as the $L^2(\mathbb{R})$-norm of the solution) is bounded from below by a nonnegative constant:
\begin{equation}
\label{eq:N}
N(t) \geq |Q(t)|=|Q(0)|.
\end{equation}
Additionally, the total $H^1$-norm
is also bounded from below by a nonnegative constant which is {\it a priori} defined by the quasi-momentum:
\begin{equation}
N(t) + \|u_x\|_{L^2}^2 + \|v_x\|_{L^2}^2 = \| u\|_{H^1}^2 + \| v\|_{H^1}^2 \geq |P(t)|=|P(0)|.
\end{equation}
Here, we used the standard definition of $H^1(\mathbb{R})$-norm, that is
\begin{equation}
\| u\|_{H^1}^2 = \| u\|_{L^2}^2 + \| u_x\|_{L^2}^2, \quad \| v\|_{H^1}^2 = \| v\|_{L^2}^2 + \| v_x\|_{L^2}^2.
\end{equation}
\section{Reductions and exact solutions}
\label{sec:exact}
The $\PT$-symmetric Hamiltonian coupler (\ref{eq:nonlin1D}) introduced in the previous section admits several important reductions and allows for exact analytical solutions in the form of continuous families of solitons. To obtain them, we look for solutions in the form
\begin{eqnarray}
\left(\begin{array}{c}
u \\ v
\end{array}\right)=R_1R_2\left(\begin{array}{c}
U \\ V
\end{array}\right),
\end{eqnarray}
where $R_1$ and $R_2$ are the constant matrices
\begin{eqnarray}
R_1=\left(\begin{array}{cc}
e^{-i\delta/2} &-e^{i\delta/2}\\
e^{i\delta/2} &e^{-i\delta/2}
\end{array}\right) \qquad
R_2=\left(\begin{array}{cc}
\cos(\alpha) & -\sin(\alpha)
\\
\sin(\alpha) & \cos(\alpha)
\end{array}\right)
\end{eqnarray}
and real parameters $\delta$ and $\alpha$ are to be defined.
The resulting equations are not shown here as they appear too lengthy. However, it is straightforward to verify that for a certain choice of parameters they allow for ``one-component'' solutions with $U\neq 0$ and $V\equiv0$ (or $U\equiv 0$ and $V\ne 0$). {\color{black} These ``one-component'' solutions correspond to the situation when the components $u$ and $v$ are proportional, i.e., $u(x,t)=r v(x,t)$, where $r$ is a time- and space-independent constant.}
\subsection{Unbroken linear $\PT$ symmetry}
\label{sec:below}
In the domain of unbroken linear $\PT$ symmetry we have $\gamma\leq \Omega$. For the existence of a one-component solution $U\neq0$ and $V\equiv0$ we require $\alpha=0$ and $\sin(\delta) =- \gamma/\Omega$ \cite{DM11}, i.e.,
\begin{eqnarray}
\label{delta12}
\delta=\delta_1=-\arcsin (\gamma/\Omega) \quad \mbox{or} \quad \delta=\delta_2 = \pi +\arcsin (\gamma/\Omega).
\end{eqnarray}
Additionally, the coefficient $\phi$ depends on $\gamma$ as
\begin{equation}
\label{eq:phi}
\phi_{1,2} = -\arctan\left(\frac{\sin(2 \delta_{1,2})}{\cos(2\delta_{1,2})-3}\right) = \mp \arctan\left(\frac{\gamma\sqrt{\Omega^2-\gamma^2}}{\Omega^2+\gamma^2}\right).
\end{equation}
Now the system is reduced to the standard (conservative) nonlinear Schr\"odinger (NLS) equation with real coefficients
\begin{equation}
\label{eq:NLS}
iU_t = -U_{xx} \pm \sqrt{\Omega^2-\gamma^2}U + \frac{4g\Omega}{\sqrt{3\gamma^2 + \Omega^2}}|U|^2U,
\end{equation}
where the upper and the lower signs correspond to $\delta_1$ and $\delta_2$ in (\ref{delta12}). The original fields can be recovered from $U$ as $u=e^{-i\delta/2} U$ and $v=e^{i\delta/2} U$.
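For completeness (this step is only implicit above), substituting $u=e^{-i\delta/2}U$ and $v=e^{i\delta/2}U$ into the first equation of (\ref{eq:nonlin1D}) gives
\[
iU_t = -U_{xx} + \left(i\gamma + \Omega e^{i\delta}\right)U + g\left(3e^{-i\phi} + e^{i(\phi+2\delta)}\right)|U|^2U .
\]
Since $\sin\delta=-\gamma/\Omega$, the linear coefficient reduces to $\Omega\cos\delta=\pm\sqrt{\Omega^2-\gamma^2}$, while the requirement that the nonlinear coefficient be real yields $\tan\phi=\sin(2\delta)/[3-\cos(2\delta)]$, i.e. Eq.~(\ref{eq:phi}); the coefficient then evaluates to $g[3\cos\phi+\cos(\phi+2\delta)]=4g\Omega/\sqrt{3\gamma^2+\Omega^2}$, as in (\ref{eq:NLS}).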
The NLS equation (\ref{eq:NLS}) is completely integrable and supports a variety of exact solutions, including bright ($g < 0$) and dark ($g > 0$) solitons \cite{AC91, Sulem}, rogue waves and breathers, etc. (see e.g. \cite{Peregrine(1983), AKM87}). Each of these solutions has two counterparts [for two choices of $\delta$ in (\ref{delta12})] in the Hamiltonian $\PT$-symmetric system (\ref{eq:nonlin1D}).
For the sake of illustration, we plot $\cos\phi_{1,2}$ and $\sin\phi_{1,2}$ in Fig.~\ref{fig:AB}(a).
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{fig01.eps}
\end{center}
\caption{(a): Dependencies $\cos\phi_{1,2}$ and $\sin\phi_{1,2}$ for $\phi_{1,2}$ in Eq.~(\ref{eq:phi}) for $\Omega=1$ and changing $\gamma$. (b)-(c) Parameters $A$ and $B$ of bright solitons (\ref{eq:AB}) plotted as functions of $\gamma$ for fixed $\Omega=1$ and $\mu=-2$. In all panels, blue and red curves correspond to symmetric and antisymmetric exact bright solitons, respectively [i.e., to upper and lower signs in Eqs. (\ref{delta12})--(\ref{eq:NLS}) and (\ref{eq:solitons})--(\ref{eq:AB})]. Note that $\cos\phi_1=\cos\phi_2$, and the corresponding blue and red curve coincide in (a).}
\label{fig:AB}
\end{figure}
\subsection{Broken linear $\PT$ symmetry}
For broken linear $\PT$ symmetry ($\gamma \geq \Omega$), we assume that $\sin\delta = -{\Omega}/{\gamma}$, i.e.,
\begin{eqnarray}
\label{delta12EP}
\delta=\delta_1=-\arcsin (\Omega/\gamma) \quad \mbox{or} \quad \delta=\delta_2 = \pi +\arcsin(\Omega/\gamma).
\end{eqnarray}
The solution $U\neq0$ and $V\equiv0$ is valid if $\phi=0$, $\alpha=\pi/4$ and $U$ solves the NLS equation with linear dissipation ($\delta=\delta_1$) or gain ($\delta=\delta_2$)
\begin{eqnarray}
\label{NLS_dissip}
iU_t = -U_{xx} +i\Omega \cot(\delta) U + 2 g|U|^2U.
\end{eqnarray}
The original fields are recovered as
\begin{equation}
\label{eq:uv}
u=-i\sqrt{2}\sin(\delta/2)U, \quad v=\sqrt{2}\cos(\delta/2)U.
\end{equation}
For $\delta=\delta_1$ or $\delta=\delta_2$ the total number of particles $N(t)$ decays to zero or grows, respectively. Note that the decaying solution does not violate inequality (\ref{eq:N}) since the substitution (\ref{eq:uv}) implies identically zero quasi-power (\ref{eq:Q}): $Q(t)=Q(0)=0$. Additionally we note that solution (\ref{eq:uv}) is asymmetric, i.e., the field amplitudes $u$ and $v$ are not equal: $|u/v| = |\tan(\delta/2)|$.
In the particular case of the exceptional point (EP) of the underlying linear $\PT$-symmetric system, $\Omega=\gamma$, Eq.~(\ref{NLS_dissip}) becomes the integrable conservative NLS equation. Thus the decaying and growing solutions bifurcate from the conservative solution at the EP $\Omega=\gamma$, with $\gamma/\Omega\geq 1$ being the bifurcation parameter.
\section{Bright solitons and their dynamics}
\label{sec:solitons}
As we have shown in Sec.~\ref{sec:below}, the introduced Hamiltonian $\PT$-symmetric system contains as a particular case the standard NLS equation and therefore admits various exact solutions. In this section, we explore an important class of solutions in the form of bright solitons, which can be found for the self-focusing nonlinearity in the NLS equation (\ref{eq:NLS}). We therefore assume $g=-1$. Additionally, in this section we set $\Omega=1$. Then, using the results of Sec.~\ref{sec:below}, we readily find two families of bright solitons {\color{black}of system (\ref{eq:nonlin1D})}:
\begin{equation}
\label{eq:solitons}
u_{1,2}(x,t) = e^{-i\delta_{1,2}/2}A_{1,2}\sech(B_{1,2} x) e^{-i\mu t}, \quad v_{1,2}(x,t) = e^{i\delta_{1,2}} u_{1,2},
\end{equation}
where
\begin{equation}
\label{eq:AB}
\eqalign{
A_{1,2} = \frac{\sqrt{2}}{2} (3\gamma^2+1)^{1/4}\sqrt{-\mu \pm \sqrt{1-\gamma^2}}, \quad
B_{1,2} = \sqrt{-\mu \pm \sqrt{1-\gamma^2}}.}
\end{equation}
Subscripts $1$ and $2$ correspond to upper and lower signs in Eqs.~(\ref{delta12})--(\ref{eq:NLS}). In what follows, we call the two identified families symmetric (with subscript $1$) and antisymmetric (with subscript $2$), because in the limit $\gamma=0$ the two components are identical for the symmetric solitons ($u_1=v_1$), but are opposite for antisymmetric solitons ($u_2=-v_2$).
In Eqs.~(\ref{eq:solitons})--(\ref{eq:AB}), $\mu$ is the real parameter which characterizes the temporal frequency of the solution (i.e., the BEC's chemical potential). From Eqs.~(\ref{eq:AB}) it follows that the symmetric solitons exist for $\mu<\sqrt{1-\gamma^2}$, and the antisymmetric solitons require $\mu<-\sqrt{1-\gamma^2}$. As is typical for $\PT$-symmetric systems, the solutions constitute a continuous family: if the parameter $\gamma$ is fixed, one can construct a continuous set of solutions by changing the ``internal'' parameter $\mu$. Notice, however, that for $\gamma$ in the interval $(0,1)$ the families of symmetric and antisymmetric solitons do not coexist, because, as readily follows from Eq.~(\ref{eq:phi}), symmetric solitons require $\phi<0$, and antisymmetric solitons exist for $\phi>0$. For fixed $\mu$, branches of symmetric and antisymmetric solitons coalesce at $\gamma=1$, as one can observe in Fig.~\ref{fig:AB}.
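The existence regions are easy to tabulate numerically; a small Python sketch (for illustration only) that merely evaluates Eqs.~(\ref{eq:AB}) with $\Omega=1$, $g=-1$:
\begin{verbatim}
import numpy as np

def soliton_params(mu, gamma, branch=+1):
    # A and B of the exact bright solitons; branch=+1: symmetric,
    # branch=-1: antisymmetric; (nan, nan) where no soliton exists
    if not 0.0 <= gamma <= 1.0:
        return float('nan'), float('nan')
    disc = -mu + branch * np.sqrt(1.0 - gamma**2)
    if disc <= 0.0:
        return float('nan'), float('nan')
    B = np.sqrt(disc)
    A = np.sqrt(2.0) / 2.0 * (3.0 * gamma**2 + 1.0)**0.25 * B
    return A, B

for gamma in (0.0, 0.3, 0.6, 0.9):
    print(gamma, soliton_params(-2.0, gamma, +1),
                 soliton_params(-2.0, gamma, -1))
\end{verbatim}
Scanning $(\mu,\gamma)$ in this way reproduces the curves of Fig.~\ref{fig:AB}(b)-(c).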
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{fig02.eps}
\end{center}
\caption{Stability domains for bright symmetric (a) and antisymmetric (b) solitons. White domains correspond to $(\mu, \gamma)$ where the solitons do not exist. Dark gray domains correspond to regions where the solitons exist and are stable; light gray domains correspond to unstable solitons.}
\label{fig:stab}
\end{figure}
In order to check the linear stability of solitons, we used the substitution
\begin{equation}
\label{eq:linstab}
\eqalign{
u(x,t) = [e^{-i\delta/2}A\,\sech(B x) + w(x)e^{\lambda t} + z^*(x)e^{\lambda^*t}] e^{-i\mu t},\\%
v(x,t) = [e^{i\delta/2}A\,\sech(B x) + W(x)e^{\lambda t} + Z^*(x)e^{\lambda^*t}] e^{-i\mu t}}
\end{equation}
and linearized system (\ref{eq:nonlin1D}) with respect to the perturbations $(w, z, W, Z)$. [In (\ref{eq:linstab}) we omitted subscripts $1$ and $2$ because the linear stability substitution has the same form for symmetric and antisymmetric solitons.] The growth rates of eventual instabilities, given by $\textrm{Re}\, \lambda$, were computed numerically from the eigenvalues $\lambda$ of a large sparse matrix obtained after approximating the second spatial derivatives by second-order finite differences.
The outcomes of our stability study are summarized in Fig.~\ref{fig:stab}, which shows that solitons of either type are stable in wide parameter ranges.
Although the reported solitons are exact solutions, the system apparently is not integrable. This raises the question about solitons' interactions. \textcolor{black}{In order to address this issue, we prepare the initial condition for system (\ref{eq:nonlin1D}) in the form of a superposition of two separated solitons (\ref{eq:solitons}) which are launched towards each other:
\begin{eqnarray}
u(x, 0) = u_{j_+}(x-l, 0) e^{-i c x} + u_{j_-}(x+l, 0) e^{+i c x},\\%
v(x, 0) = v_{j_+}(x-l, 0) e^{-i c x} + v_{j_-}(x+l, 0) e^{+i c x},
\end{eqnarray}
where constants $l \gg 1$ and $c>0$ determine the initial separation between the solitons and initial solitons' velocity, respectively. Subscripts $j_+$ and $j_-$ can acquire values $1$ or $2$, depending on the type of the soliton (symmetric or antisymmetric). Both solitons correspond to the same value of the gain-and-loss coefficient $\gamma$ [which is the parameter of the system (\ref{eq:nonlin1D})], but, generically speaking, have different chemical potentials $\mu_+$ and $\mu_-$ [because the chemical potential does not enter system (\ref{eq:nonlin1D}) but represents an internal parameter of each soliton]. Once $\gamma$, $j_\pm$, and $\mu_\pm$ are chosen, the initial amplitudes and width of the solitons are computed from Eqs.~(\ref{eq:AB}). Then the dynamics of system (\ref{eq:nonlin1D}) is simulated numerically.}
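For reproducibility, a minimal integrator is sketched below (for illustration only; this is not the code used to produce the figures): a Fourier pseudospectral discretization of (\ref{eq:nonlin1D}) advanced by a fixed-step RK4, with illustrative grid, time step, and parameter values chosen to mirror the in-phase collision of Fig.~\ref{fig:dynstab}(a). The same routine can also be used to probe the stability of a single perturbed soliton.
\begin{verbatim}
import numpy as np

Nx, Lbox = 1024, 80.0
x = np.linspace(-Lbox/2, Lbox/2, Nx, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(Nx, d=Lbox/Nx)
Omega, g, gamma, mu = 1.0, -1.0, 0.2, -2.5
delta = np.pi + np.arcsin(gamma/Omega)   # antisymmetric branch
phi = np.arctan(gamma*np.sqrt(Omega**2 - gamma**2)/(Omega**2 + gamma**2))

def rhs(u, v):
    uxx = np.fft.ifft(-k**2*np.fft.fft(u))
    vxx = np.fft.ifft(-k**2*np.fft.fft(v))
    Nu = g*np.exp(-1j*phi)*(abs(u)**2 + 2*abs(v)**2)*u \
         + g*np.exp(1j*phi)*np.conj(u)*v**2
    Nv = g*np.exp(1j*phi)*(abs(v)**2 + 2*abs(u)**2)*v \
         + g*np.exp(-1j*phi)*u**2*np.conj(v)
    return (-1j*(-uxx + 1j*gamma*u + Omega*v + Nu),
            -1j*(-vxx - 1j*gamma*v + Omega*u + Nv))

def rk4(u, v, dt):
    k1u, k1v = rhs(u, v)
    k2u, k2v = rhs(u + 0.5*dt*k1u, v + 0.5*dt*k1v)
    k3u, k3v = rhs(u + 0.5*dt*k2u, v + 0.5*dt*k2v)
    k4u, k4v = rhs(u + dt*k3u, v + dt*k3v)
    return (u + dt/6*(k1u + 2*k2u + 2*k3u + k4u),
            v + dt/6*(k1v + 2*k2v + 2*k3v + k4v))

# two identical antisymmetric solitons launched toward each other
B = np.sqrt(-mu - np.sqrt(1 - gamma**2))
A = np.sqrt(2)/2*(3*gamma**2 + 1)**0.25*B
c, l = 1.0, 15.0
U = (A/np.cosh(B*(x - l))*np.exp(-1j*c*x)
     + A/np.cosh(B*(x + l))*np.exp(1j*c*x))
u, v = np.exp(-1j*delta/2)*U, np.exp(1j*delta/2)*U
dt = 1e-3
for _ in range(20000):   # evolve to t = 20
    u, v = rk4(u, v, dt)
\end{verbatim}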
Quite surprisingly, in our simulations we observe that the system behaves as nearly integrable, and solitons interact elastically, similarly to the solitons of the integrable NLS equation. In Fig.~\ref{fig:dynstab}(a) we show the collision of two identical antisymmetric in-phase solitons, which pass through each other without any visible distortion. Two out-of-phase solitons in Fig.~\ref{fig:dynstab}(b) elastically repel each other and recover their original shapes. Moreover, in spite of the gain and losses, we also observed elastic interactions of (stable) solitons with considerably different amplitudes, as illustrated in Fig.~\ref{fig:dynstab}(c). Similar results were also obtained for collisions of symmetric solitons, when their parameters are chosen from the stability domain.
\begin{figure}
\begin{center}
\includegraphics[width=1.00\textwidth]{fig03.eps}
\end{center}
\caption{Interactions of stable antisymmetric solitons. (a) Two identical in-phase solitons with $\mu=-2.5$ and $\gamma=0.2$; (b) Two identical out-of-phase solitons with $\mu=-2.5$ and $\gamma=0.2$; (c) Two solitons with $\gamma=0.3$ and different amplitudes: the large-amplitude soliton has $\mu=-4$, and the small-amplitude soliton has $\mu=-0.5$. All panels show the amplitude of field in the first component, i.e., $|u|$; \textcolor{black}{the behavior of the second component $|v|$ is almost identical to $|u|$}.}
\label{fig:dynstab}
\end{figure}
Finally, we explored numerically the dynamics of unstable solitons. \textcolor{black}{To this end, we numerically integrated system (\ref{eq:nonlin1D}) with initial conditions in the form (\ref{eq:solitons})--(\ref{eq:AB}) taken at $t=0$ (small-amplitude random distortions were introduced in the initial conditions in order to boost the development of an eventual dynamical instability). Three different dynamical scenarios observed for different $\gamma$ and $\mu$} are presented in Fig.~\ref{fig:dynunstab}. The first observed scenario (found to be the most typical for antisymmetric solitons) consists in the infinite growth of the $u$-component (the one subjected to the linear gain $i\gamma$), as shown in Fig.~\ref{fig:dynunstab}(a). \textcolor{black}{Because of the fast growth of the amplitude in the first component (i.e., $u$), the numerical process eventually diverges, and the computation terminates; the amplitude of the second component $v$ remains moderate and does not grow (at least until the moment when the numerical process diverges).} For symmetric solitons, we recorded two different scenarios: for relatively small gain-and-loss $\gamma$ the instability manifests itself in the emergence of a long-living oscillating (breather-like) mode [Fig.~\ref{fig:dynunstab}(b)], which is another pattern typical of integrable models. For large gain-and-loss $\gamma$ the initially quiescent
soliton breaks into a pair of pulses which propagate with different velocities [Fig.~\ref{fig:dynunstab}~(c)]. \textcolor{black}{For dynamics in Fig.~\ref{fig:dynunstab}(b) and (c), behaviors of $u$ and $v$ components are almost identical.}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{fig04.eps}
\end{center}
\caption{Different behaviours of unstable solitons. (a) An antisymmetric soliton with $\gamma=0.8$ and $\mu=-2$ exhibits the unbounded growth of $|u|$. (b) A symmetric soliton with $\gamma=0.6$ and $\mu=-2$ evolves into a long-living oscillating state. (c) A symmetric soliton with $\gamma=0.95$ and $\mu=-2$ breaks into a pair of spontaneously moving and long-living solitons. All panels show the amplitude of the field in the first component, i.e., $|u|$; \textcolor{black}{the behavior of the second component $v$ is explained in the text}.}
\label{fig:dynunstab}
\end{figure}
\section{Discussion and Conclusion}
\label{sec:concl}
In this paper, we proposed and investigated a nonlinear $\PT$-symmetric coupler which has several remarkable properties. First, it admits Hamiltonian and Lagrangian formulations, which is not a typical property of nonlinear $\PT$-symmetric systems in general. Second, it has (at least) three conservation laws which can be derived from Noether's theorem. In the conservative limit, the system becomes bi-Hamiltonian, i.e., it admits two different Hamiltonian representations simultaneously.
Additionally, the introduced $\PT$-symmetric coupler supports a variety of exact solutions which can take the form of bright and dark solitons or more complex patterns. Moreover, it was demonstrated numerically that some of the exact solutions are dynamically stable and undergo elastic collisions, similar to collisions of solitons of the integrable NLS equation. We have also outlined the physical relevance of the introduced model in the context of Bose-Einstein condensates in nonlinear lattices.
While the main goal of this paper was to introduce a dispersive Hamiltonian $\PT$-symmetric system, the proposed model admits several generalizations which are worth future study. The first evident generalization of the Hamiltonian (\ref{eq:Hlin_PT}) is to consider the case of two different nonlinear coefficients $g_1$ and $g_2$, i.e.,
\begin{eqnarray}
\label{eq:g12}
H = \int_{-\infty}^\infty \left[ u_x^* v_x + u_x v_x^* + i\gamma (uv^* - u^*v) + \Omega (|u|^2 + |v|^2)
\right. \nonumber \\ \left. \hspace{2cm}
+ (g_1|u|^2+g_2|v|^2)(e^{i\phi}u^*v + e^{-i\phi}uv^*) \right] dx.
\end{eqnarray}
Then the Hamiltonian equations (\ref{eq:motion}) lead to a generalized version of system (\ref{eq:nonlin1D}):
\begin{equation}
\label{eq:nonling12}
\eqalign{
iu_t = -u_{xx} + i\gamma u +\Omega v + g_1 e^{-i\phi} |u|^2 u + 2g_2 e^{-i\phi} |v|^2u + g_2e^{i\phi} u^*v^2,\\
iv_t = -v_{xx} - i\gamma v +\Omega u + g_2e^{i\phi} |v|^2 v+ 2g_1 e^{i\phi} |u|^2v + g_1e^{-i\phi} u^2v^*.}
\end{equation}
For $g_1\ne g_2$ this nonlinear system is not $\PT$ symmetric (in the sense discussed in Sec.~\ref{sec:main}). Interestingly enough, this model admits a stationary solitonic solution. Indeed, under the assumptions
\begin{eqnarray}
u= i\cot^{1/2}(\delta/2) U, \quad v= \tan^{1/2}(\delta/2) U,\\
\label{eq:Im2}
\gamma = \Omega\, \textrm {cosec}\, \delta, \quad
g_2 = -g_1 \cot^2 \delta,
\end{eqnarray}
where $\delta \in (0, \pi)$ is a free parameter, system (\ref{eq:nonling12}) reduces to a scalar NLS equation with linear gain or loss (depending on the value of $\delta$) and purely imaginary nonlinearity [compare with (\ref{NLS_dissip})]:
\begin{equation}
\label{eq:cGL}
iU_t = -U_{xx} + i\Omega \cot(\delta) U + 2ig_1 \cot(\delta/2) \sin\phi|U|^2U.
\end{equation}
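As a quick consistency check of the linear part of this reduction (a step not spelled out above): substituting $u= i\cot^{1/2}(\delta/2)\, U$ and $v= \tan^{1/2}(\delta/2)\, U$ into the linear terms of the two equations of (\ref{eq:nonling12}) and using $\gamma = \Omega\,\textrm{cosec}\,\delta$, both equations yield one and the same linear coefficient,
\begin{equation*}
i\left(\gamma - \Omega\tan\frac{\delta}{2}\right) = i\left(\Omega\cot\frac{\delta}{2} - \gamma\right) = i\,\Omega\cot\delta,
\end{equation*}
in agreement with the linear term of (\ref{eq:cGL}).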
This equation has a well-known stationary solution in the form of the Pereira-Stenflo soliton \cite{PS77}
\begin{equation}
U = A e^{-i B^2 t}\textrm{sech}^{1+i\sqrt{2}}(B x),
\end{equation}
where
\begin{equation}
A^2 = -\frac{3\Omega \cos\delta}{8g_1 \sin\phi(1+\cos\delta)}, \quad B^2 = \frac{\Omega \cot(\delta)}{2\sqrt{2}}.
\end{equation}
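A quick way to test such closed-form solutions is to evaluate the PDE residual of the ansatz numerically; the minimal sketch below does this for Eq.~(\ref{eq:cGL}). The parameter values are illustrative assumptions, chosen only so that $A^2>0$ and $B^2>0$.
\begin{verbatim}
import numpy as np

# Evaluate the residual of the Pereira-Stenflo ansatz in Eq. (eq:cGL).
# Parameter values are illustrative assumptions (with A^2, B^2 > 0).
delta, Omega, g1, phi = np.pi/3, 1.0, 1.0, -np.pi/2
c = Omega/np.tan(delta)                  # coefficient of the linear term
d = 2*g1*np.sin(phi)/np.tan(delta/2)     # coefficient of the cubic term
A = np.sqrt(-3*Omega*np.cos(delta)/(8*g1*np.sin(phi)*(1+np.cos(delta))))
B = np.sqrt(Omega/(np.tan(delta)*2*np.sqrt(2)))

x = np.linspace(-20, 20, 4001)
h = x[1] - x[0]
U = A/np.cosh(B*x)**(1 + 1j*np.sqrt(2))  # ansatz at t = 0
Ut = -1j*B**2*U                          # exact time derivative of the ansatz
Uxx = (U[2:] - 2*U[1:-1] + U[:-2])/h**2  # central differences, interior points

# residual of  i U_t = -U_xx + i c U + i d |U|^2 U
res = 1j*Ut[1:-1] + Uxx - 1j*c*U[1:-1] - 1j*d*abs(U[1:-1])**2*U[1:-1]
print("max |residual| =", np.abs(res).max())
# up to the O(h^2) discretization error, the printed value quantifies
# how accurately the ansatz satisfies the equation
\end{verbatim}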
The existence of this stationary
solution is remarkable in view of the fact that the coupler operates in the domain of broken $\PT$ symmetry, i.e., $|\gamma/\Omega|\geq 1$ [as readily follows from the first of equations (\ref{eq:Im2})].
Regarding further generalizations of the introduced models, we notice that while herein we have considered a spatially one-dimensional dispersive system, the Hamiltonian structure is expected to survive in the case of multiple dimensions as well, i.e., for $x\in \mathbb{R}^D$, $D\geq 2$.
Another potential generalization is related to the possibility of addressing nonlinear multi-component systems with three (or more) coupled waveguides. Some of such dispersive systems should also admit a Hamiltonian structure.
\ack
The research of D.A.Z. is supported by the Russian Science Foundation (Grant No. 17-11-01004).
V.V.K. was supported by the FCT (Portugal) Grant No. UID/FIS/00618/2013.
\bigskip
\section*{References}
\section{Introduction}
Symbolic dynamics for the planar three-body problem is not yet fully
developed, and only a few authors have worked in this direction
(Chernin et al. 2006; Myllari 2007; Moeckel 2007).
We need to find a good procedure for assigning symbols, comparable to
the one available in the one-dimensional case (Tanikawa \& Mikkola 2000).
In the free-fall three-body problem, we saw that binary collision curves
(formed by the initial condition points of orbits which experience collision)
constitute the boundaries of regions in the initial value plane
(Tanikawa \& Umehara 1998).
We expect this may also be the case when angular momentum is present.
In the present report, we introduce symbols so that the boundaries between
regions of different symbol sequences are binary collision curves.
After defining the symbols, we start the numerical symbolic dynamics
of the planar three-body problem with angular momentum, extending
the free-fall problem.
\section{Introduction of symbols}
We introduce symbols in this section. First, we define the
signed area of the triangle formed by the three bodies.
If the three bodies are arranged in counter-clockwise order
with respect to the body numbers 1, 2, and 3, we consider the area to be positive
(see Fig.~\ref{vectorial-area}).
If the order is reversed, the area is considered to be negative.
The absolute value of the area of the triangle is its usual area.
Using this definition of area, we assign a symbol to a particular event
on the orbits. Suppose that the configuration of the three bodies becomes
collinear. If the angular momentum of the system is not zero, this
configuration cannot be maintained for a finite non-zero time interval,
except in the rectilinear case of the three-body problem.
Before and after this configuration, triangles of non-zero area are
recovered.
\begin{figure}[h]
\begin{center}
\plotfiddle{fig-1.eps}{4 cm}{0}{50}{50}{-180}{-40}
\end{center}
\caption{ Vectorial area of triangles}
\label{vectorial-area}
\end{figure}
We ask here whether the middle particle may cross, or only be tangent to,
the syzygy line. The latter does not happen, since otherwise
the trajectory of the middle body would be convex towards the remaining two
bodies when it is on the syzygy. This is impossible because the trajectory
of any body must be concave towards either or both of the remaining bodies
due to the gravitational attraction.
We need to consider the case of binary collision. At binary collision,
the configuration becomes collinear. As is well known, the trajectory
of a body relative to the other body is a parabola in the vicinity of
binary collision. The third body can be considered to stand still
compared with the high speed of the binary components. This again shows
that the collinear configuration cannot be maintained before and after
collision.
Finally, we need to make an important remark. At binary collision,
an orbit experiences syzygy crossings not just once; the orbit may
experience at most three syzygy crossings.
In addition, the number of crossings differs depending on
whether the collision is considered as a limit of retrograde encounters
or of prograde encounters. The limit should be taken so as to keep continuity
with the neighboring initial conditions.
The analysis of collision will be given elsewhere.
We note here that two or three symbols may be given
to the orbit at collision.
Now, suppose that the area changes sign from $+$ to $-$ at some
instant. Then we give
symbol '1' if the longest edge is 2--3, that is, the edge connecting
body 2 and body 3. Similarly, we give symbol '2'
if the longest edge is 3--1, and symbol '3'
if the longest edge is 1--2. When the area changes sign from $-$ to
$+$, we give symbol '4' if the longest edge is 2--3.
Similarly, we give symbol '5' if the longest edge is 3--1, and
symbol '6' if the longest edge is 1--2.
See Fig.~\ref{symbolass}.
\begin{figure}[h]
\begin{center}
\plotfiddle{fig-2.eps}{4 cm}{0}{50}{50}{-180}{-40}
\end{center}
\caption{Assignment of symbols.}
\label{symbolass}
\end{figure}
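The symbol-assignment rule can be implemented in a few lines. The following minimal sketch is an illustration only; it assumes that the integrated planar positions of the three bodies are available as arrays \texttt{r1}, \texttt{r2}, \texttt{r3} of shape (number of time steps, 2), produced by some numerical integrator.
\begin{verbatim}
import numpy as np

def signed_area(r1, r2, r3):
    # twice the oriented area of the triangle (1,2,3); positive when the
    # bodies are arranged counter-clockwise in the order 1, 2, 3
    d12, d13 = r2 - r1, r3 - r1
    return d12[:, 0]*d13[:, 1] - d12[:, 1]*d13[:, 0]

def symbol_sequence(r1, r2, r3):
    area = signed_area(r1, r2, r3)
    symbols = []
    for i in range(len(area) - 1):
        if area[i]*area[i+1] >= 0:      # no sign change: no syzygy crossing
            continue
        # the longest edge at the crossing selects the symbol
        edges = [np.linalg.norm(r2[i] - r3[i]),   # edge 2-3 -> '1' or '4'
                 np.linalg.norm(r3[i] - r1[i]),   # edge 3-1 -> '2' or '5'
                 np.linalg.norm(r1[i] - r2[i])]   # edge 1-2 -> '3' or '6'
        j = int(np.argmax(edges))
        symbols.append(1 + j if area[i] > 0 else 4 + j)  # '+' to '-' / '-' to '+'
    return symbols
\end{verbatim}
(Collisions, where two or three symbols may be required, need the separate limiting analysis mentioned above.)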
\section{Orbits and Symbol sequences}
Each time a triple system becomes collinear, a symbol is given according
to the rule described in the preceding section, except at collision.
Therefore, an orbit, represented by a continuous curve in the phase space,
is replaced by a bi-infinite symbol sequence.
Here we only consider the future symbol sequence.
We denote the present by a point and the symbols by $s_i$; then a symbol
sequence $s$ can be written as
$$
s = .s_1 s_2 s_3 \ldots . \eqno(1)
$$
\subsection{The planar system with angular momentum}
Let us briefly explain the initial conditions of the numerical integrations.
We want to extend the free-fall problem with equal masses
(Anosova \& Orlov 1986; Tanikawa et al. 1995), in which bodies 2 and 3
initially stand still at $(-0.5,0)$ and $(0.5,0)$, respectively,
whereas body 1 stands still at
$(x,y) \in D_{11} = \{ x \geq 0, y \geq 0, (x+0.5)^2 + y^2 \leq 1 \}$.
$D_{11}$ is called the initial condition plane
(Fig.~\ref{geomerty}, left panel).
If body 1 ranges over the whole of $D_{11}$, all possible triangles are
realized.
Now, we give velocities to the bodies and angular momentum to the
system, still with equal masses. In the planar three-body problem with
angular momentum, there are too many degrees of freedom. We need somehow
to restrict the initial condition space so as to be able to display the
numerical results visibly. Here, we give the maximal angular momentum for
a given configuration triangle (see Kuwabara \& Tanikawa, this issue).
In this case, one of the symmetries
is lost, so the initial condition space is doubled.
See the right panel of Fig.~\ref{geomerty}.
This time $D_{11} \cup D_{12}$ is the initial condition plane
(Tanikawa \& Kuwabara 2007).
\begin{figure}[htbp]
\begin{center}
\plotfiddle{fig-3a.eps}{2 cm}{0}{60}{60}{-300}{-180}
\plotfiddle{fig-3b.eps}{2 cm}{0}{60}{60}{-80}{-100}
\end{center}
\caption{Geometry of the free-fall initial conditions (left) and
of the problem with angular momentum (right).}
\label{geomerty}
\end{figure}
\subsection{Boundaries of symbols in the initial condition space}
Now, suppose that all the points in the initial condition space
have their own symbol sequences (1), that is, the orbits starting
at all points of the initial condition space are integrated.
If we truncate the symbol sequences at the $n$-th digit, there is a
finite number of possible combinations of symbols in these
length-$n$ ``words''. The initial condition space is divided into
``cylinders'' which contain these words.
We ask what kind of points, or equivalently orbits, constitute
the boundaries of these cylinders.
For simplicity of discussion, let us consider the case $n=1$, the first
digit. It is apparent from Fig.~4 that the initial triangles have positive
areas, so only '1', '2', and '3' can appear at the first syzygy crossing.
These three symbols divide the initial condition space. Consider the
region where the first digit is '1'. Take an orbit $O$ in which body
1 crosses the syzygy between bodies 2 and 3 at finite non-zero
distances from both bodies 2 and 3.
Then all neighboring orbits also experience a syzygy
crossing of body 1 between bodies 2 and 3. This means that orbit
$O$ is inside the region occupied by symbol '1', which we call region
'1'. This argument
applies as long as body 1 crosses the syzygy at finite distances from
both bodies 2 and 3. Orbits at the boundaries of region '1' must therefore be
collision orbits. There can be two boundaries: between regions '1' and
'2', and between regions '1' and '3'. Similarly, there can be two
boundaries of region '2', and two boundaries of region '3'. In general,
there are six symbols, so region '$i$' ($i=1,2,\ldots$)
can have more than two boundaries.
\begin{figure}[htbp]
\begin{center}
\plotfiddle{fig-4.eps}{2 cm}{0}{50}{50}{-150}{-70}
\end{center}
\caption{Initial triangle and the momentum.}
\label{triangle-init}
\end{figure}
\section{Numerical results}
\subsection{The free-fall problem}
In order to justify the argument of \S 3.2, we first give
results on the free-fall problem (Tanikawa \& Umehara 1998).
Figure 5 shows how the initial condition plane is divided by
collision curves. It is to be noted that the boundaries of
the plane, i.e., the $x$-axis, the $y$-axis, and the circular boundary,
are all collision curves.
Our numerical results are shown in Fig.~6. In the free-fall case,
body 1 passes between bodies 2 and 3 at the first syzygy
crossing, so the whole space is occupied by region '1'.
The initial condition space is divided into '15' and '16' at the first two
digits. The reason is simple. In the right part of the space, bodies
1 and 3 form a binary, so body 3 passes between bodies 1 and 2,
and the vectorial area goes from '$-$' to '$+$', giving symbol '6'.
In the left part, body 2 passes through the syzygy, giving symbol '5'.
The boundary of the two regions is the $y$-axis. This is seen in the
left panel of Fig.~6. In the lower-left and lower-right corners
of the figure, some different colors are seen. These are considered to
be numerical artifacts.
\begin{figure}[htbp]
\begin{center}
\plotfiddle{fig-5.eps}{6 cm}{0}{40}{40}{-140}{-50}
\end{center}
\caption{Binary collision curves divide the plane
(reproduced from Tanikawa \& Umehara 1998).}
\label{free-fall}
\end{figure}
In the middle panel of Fig.~6, the initial plane is divided into
regions according to the first three digits of the symbol sequences.
Here we have a new result. The boundary of the yellow and orange
regions corresponds to the binary collision curve (broken curve) of
type 3 (Tanikawa et al. 1995) emanating from point $T_2$ in Fig.~5.
In Tanikawa et al. (1995), we could not follow the collision curve
down to the $x$-axis due to numerical difficulties. In the present
work, it has been shown that the collision curve actually arrives at
the $x$-axis.
\begin{figure}[h]
\begin{center}
\plotfiddle{fig-6.eps}{3 cm}{0}{80}{80}{-200}{-60}
\end{center}
\caption{The structure of the initial condition plane.
From left to right: the first two digits of the symbol sequences,
the first three digits, and the first four digits.}
\end{figure}
In the right panel of Fig.~6, the results for the first four digits
are shown. New collision curves appear. One is the boundary between
the light blue and pink regions. This is the collision curve of type 1
emanating from $T_2$. The other two curves, between the yellow and green
regions, are a collision curve emanating from $T_{1,1}$ and a curve
of type 2 starting midway between $T_1$ and $T_{1,1}$ in Fig.~5.
\subsection{The first digit for systems with angular momentum}
In the preceding section, we have numerically shown that the boundaries
of regions belonging to different cylinders are formed by binary collision
curves. In this section, let us look at the initial condition plane from
a different viewpoint. We gradually add angular momentum to triple
systems. The parameter is the virial coefficient $k = T/|U|$, where
$T$ is the kinetic energy and $U$ is the potential energy of a triple
system. As $k$ increases from 0, the contribution of the velocities
increases relative to a fixed potential energy.
At $k=1$, the energy of the triple system
is equal to zero, so we consider the range $0 \leq k \leq 1$.
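(For reference, with our normalization of equal unit masses and $G=1$, the virial coefficient of a state can be computed as in the following sketch, where \texttt{r} and \texttt{v} are arrays of shape (3, 2) holding the positions and velocities.)
\begin{verbatim}
import numpy as np

def virial_coefficient(r, v):
    # k = T/|U| for an equal-mass (m = 1, G = 1) planar triple system
    T = 0.5*np.sum(v**2)
    U = -sum(1.0/np.linalg.norm(r[i] - r[j])
             for i in range(3) for j in range(i + 1, 3))
    return T/abs(U)
\end{verbatim}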
\begin{figure}[h]
\begin{center}
\plotfiddle{fig-7a.eps}{1 cm}{0}{50}{50}{-120}{-60}
\plotfiddle{fig-7b.eps}{3 cm}{0}{50}{50}{-120}{-40}
\plotfiddle{fig-7c.eps}{1 cm}{0}{50}{50}{-120}{-75}
\plotfiddle{fig-7d.eps}{3 cm}{0}{50}{50}{-120}{-50}
\end{center}
\caption{ The structure of the initial condition plane
for $k=0, 0.001$ (top), $0.01, 0.1$ (second top), $0.2, 0.3$ (third
top), and $0.5, 0.7$ (bottom) for the first digit of symbol sequences.
Red is for symbol '1', green is for symbol '2', and blue is
for symbol '3'.
}
\end{figure}
In this report, we consider only the boundaries of the one-digit cylinders
as a function of $k$. In Fig.~7, results are given
for $k=0, 0.001$ (top), $0.01, 0.1$ (second top), $0.2, 0.3$ (third top),
and $0.5, 0.7$ (bottom).
Red is for symbol '1', green is for symbol '2', and blue is
for symbol '3'.
As mentioned before, the whole plane is occupied by region '1'
for $k=0$. Even for a very small $k>0$, region '2' appears
(top-right panel). This is because near the left boundary of the plane
bodies 1 and 2 form a binary and, even for a small angular momentum,
the binary components revolve around each other; hence body 2 instead of body 1
experiences the first syzygy crossing. The boundary of the two regions is
formed by a binary collision curve of type 3 (collision between bodies
1 and 2). It seems that this curve extends from the bottom center (the
Euler point) to the top (the Lagrange point).
As $k$ increases, the boundary collision curve deforms and `evolves'
to the right in the initial condition space (see the second row of
panels in Fig.~7). Near the top of the plane, region '3' appears.
Though this region is visible only for $k \geq 0.1$ in Fig.~7,
it is already present for $k = 0.001$ if we enlarge the neighborhood of
the Lagrange point. In addition, the binary collision curve seems to
spiral into the Lagrange point as long as the point is unstable
(see Fig.~8).
\begin{figure}[h]
\begin{center}
\plotfiddle{fig-8.eps}{3 cm}{0}{80}{80}{-200}{-60}
\end{center}
\caption{The structure of the initial condition plane around the
Lagrange point for the first one digit.
$k = 0.001$ (left) for $-0.005 \leq x \leq 0.005$,
$0.86 \leq y \leq 0.87$; $k=0.01$ (center) for
$-0.01 \leq x \leq 0.01$, $0.85 \leq y \leq 0.87$, and $k=0.1$ (right)
for $-0.1 \leq x \leq 0.1$, $0.76 \leq y \leq 0.96$.}
\end{figure}
\vspace{0.3cm}
\noindent
\section{Conclusions}
We have demonstrated that symbolic dynamics is effective in the
planar three-body problem. The main results are as follows.
\vspace{0.3cm}
\noindent
(i) In the free-fall problem, it has been shown that binary collision
curves divide the initial condition space. This had been suggested in
Tanikawa et al. (1995); in the present work, the divisions are seen much
more clearly.
\vspace{0.3cm}
\noindent
(ii) It has been shown for the three-body problem with angular momentum
that a binary collision curve connects the Euler point and the
Lagrange point in the initial condition plane, which suggests
that the Euler and Lagrange points are connected by an invariant manifold.
The collision curve seems to spiral into the Lagrange point if the
angular momentum is not zero.
\vspace{0.3cm}
\noindent
{\bf Acknowledgment}
This project was supported by JSPS and RFBR under the Japan--Russia
Research Cooperative Program.
One of the authors (K.T.) expresses hearty thanks to the Russian
members of the Japan--Russia Joint Research program for their hospitality
during his stay in St.~Petersburg at the end of August, 2007.
\section{Introduction}
The relations between, on the one hand, the evolution equation and semigroup theory and, on the other hand, functional integration and the theory of stochastic processes form an extensively studied topic
\cite{Bal,EthKur,IkeWat,Kal,Kol,RogWil} with a long history. Its roots can be traced back to the pioneering papers by Richard Feynman \cite{f-1948, f-1951}, who proposed a heuristic representation of the solution to the Schr\"odinger equation in terms of limits of integrals over finite Cartesian powers of some spaces. Feynman's ideas inspired Marc Kac \cite{Kac1949}, who rigorously proved a representation of the solution of the heat equation in terms of an integral on the space of continuous paths with respect to the Wiener measure. This formula, nowadays known as the celebrated ``Feynman-Kac formula'', is the first and most famous example of the connections between parabolic equations associated with second-order elliptic operators and stochastic processes.
Remarkably,
Feynman heuristically presented two mathematical constructions which are now associated with the names of Trotter \cite{Tro}
and Chernoff \cite{Chernoff}, who rigorously proved them much later.
The Trotter and Chernoff formulas provide approximations of evolutions (semigroups) that, in several cases, pave the way for the proof of representation formulas of Feynman-Kac type.
In the present paper, new Chernoff approximations are established for a particular class of Feller semigroups on a class of generally non-compact Riemannian manifolds.
In addition, these formulas are also proved to have a nice probabilistic interpretation on the said class of manifolds, since they allow the proof of the weak convergence of a sequence of random walks on the manifold to
the diffusion process associated with the elliptic operator generating the said Feller semigroups. \\
\noindent {\bf Literature on the subject}.
From a general perspective, this work refers to the theory of some strongly continuous semigroups of linear operators $(V(t))_{t\in {\mathbb{R}}^+}$ on the Banach space $C_0(\mathcal{M})$
of continuous real-valued functions vanishing at $\infty$ on a locally compact metric space $\mathcal{M}$. Such semigroups are called {\em Feller semigroups}.
They are naturally associated with strong Markov stochastic processes $(X^x(t))_{t\in {\mathbb{R}}^+}$ with values in the one-point compactification of $\mathcal{M}$ in such a way
that the action of the operators $V(t)$ on a function $f\in C_0(\mathcal{M})$ can be represented in terms of the following formula
$$(V(t)f)(x)={\mathbb{E}}[f(X^x(t))], \qquad x\in \mathcal{M},\: t\in {\mathbb{R}}^+\:.$$
Here ${\mathbb{E}}$ denotes the expected value. This paper considers the concrete case where $\mathcal{M}$ is a smooth Riemannian manifold $M$ and the generator of the Feller semigroup, when restricted to the space
$C_c^\infty(M)$ of smooth functions with compact support is given by the second-order differential operator
\begin{equation}\label{eqL_0-1}
(L_0f)(x)=\frac{1}{2}\sum _{k=1}^r(A_kA_kf)(x)+A_0f(x), \qquad x\in M,
\end{equation}
where $A_k$, $k=0, \ldots, r$ are smooth vector fields. The stochastic processes associated with this particular kind of Feller semigroups are named {\em Feller-Dynkin diffusions}. They have continuous paths and can be constructed in terms of the (martingale) solution of stochastic differential equations of the form \cite{Elw,Hsu,IkeWat,Wang}
\begin{equation}\label{SPDEi}dX(t)=\sum_{j=1}^r A_j(X(t))\circ dB^j(t) +A_0(X(t))dt.\end{equation}
This work in particular is devoted to the application of the Chernoff theorem (see theorem \ref{FormulaChernova} below) to the construction of an approximation formula for, on the one hand, the Feller semigroup and, on the other hand, the associated diffusion process and solutions to the evolution equation. This technique has been extensively implemented, e.g. in the study of
Chernoff approximations of
Feller semigroups (and corresponding Feller processes) \cite{Butko-DM2010, Butko-IDAQP2012, Butko-FCAA2018, Butko-SD2018}, in the construction of solutions to evolution equations \cite{BauConGro,BoOrSa,Butko-FPM2006}, and in the construction of the Wiener measure on compact manifolds \cite{Baer,SWW2007}
(see for overviews \cite{Butko-2019,SmHist,SmSchrHist}). Most of the results presented in literature are restricted to the case where either $M={\mathbb{R}}^d$ or $M$ is compact. More general classes of $C^k$ (with $k=1,2 \ldots, \infty$ depending on the case) Riemannian manifolds were studied in \cite{Jorgensen,Pinsky,Li} (see also \cite{Manton} for an introductory overview of Brownian motion and diffusion processes on manifolds).
In those papers, generally speaking, conditions are assumed about (a) the existence of a specific cover of open sets with both uniform metric properties and uniform bounds
on the vector fields $\{A_k\}_{k=0, \ldots, r}$ associated with the dynamical system \eqref{SPDEi} and (b) the validity of specific bounds on some curvatures. Under these conditions it is possible to prove the existence of Feller semigroups associated with the differential operator \eqref{eqL_0-1} as well as the non-explosion property of the associated process \cite{Li}. In \cite{Jorgensen,Pinsky} similar conditions allow proving the convergence of geodesic random walks to the Brownian motion on the manifold.
A recent remarkable book on semigroups on $L^2(M)$ (instead of $C_0(M)$) for generally non-compact manifolds $M$
and the special case of Schr\"odinger-like operators
is \cite{Gu}. There, heat kernels are extensively studied for Schr\"odinger-like operators on Hermitian bundles
on generally non-compact base manifolds, extending many known results valid in ${\mathbb{R}}^n$ to these geometric structures.\\
\noindent {\bf Results of this work}.
In contrast to the quoted literature, the present work focuses on continuous semigroups on $C_0(M)$ with generators of the
form (\ref{eqL_0-1}) for
the case of a generic smooth Riemannian manifold $(M,g)$
{\em of bounded geometry}, also requiring uniform boundedness properties of the involved vector fields for general
elliptic operators (\ref{eqL_0-1}).
Manifolds of bounded geometry are for instance ${\mathbb{R}}^d$, compact manifolds, and a wide class of {\em non-compact}
manifolds that are also relevant in applications, like Lie groups and homogeneous manifolds. The main results of this
work follow.
(a) As the first result, in Section \ref{sec3} we show that if the vector fields $\{A_k\}_{k=0,\ldots,r}$ enjoy a property known as
$C^\infty-$boundedness \cite{shubin}, then an extension of the differential operator $L_0$ in \eqref{eqL_0-1}
is the generator of a Feller-Dynkin semigroup on $C_0(M)$, and we provide a family of operator cores. This result paves the way for
the proof of theorem \ref{teo3.1}, the second result of this paper, where a Chernoff approximation formula (Eq. \eqref{convergenceformula}) for the Feller semigroup in terms of a family of rather simple shift operators is presented. The idea of using shift operators instead of integral operators on $\mathbb{R}^d$
goes back to \cite{R-JMP2019, R-AMC2018, RS-MN2018,R-PotAnonlinefirst2018} and is now applied to manifolds for the first time.
We also extend the described results to more general operators $L_0+c$, where $c\leq 0$ is a bounded continuous scalar potential.
(b) The probabilistic interpretation of the approximation formulas \eqref{S(t)}
and \eqref{convergenceformula} in the case of $c=0$ is discussed in
Section \ref{SecProbabilisticInterpretation}.
There, as the third main result, we show that it allows us to construct the diffusion process associated to the Feller semigroup in terms of a weak limit of a sequence of random walks on $M$.
Several interesting convergence results for diffusion processes on manifolds can be found in literature, see e.g. \cite{Mol,DeGaMa, Jorgensen,Pinsky,Li}. It is worth mentioning the approximation schemes for the Wiener measure proposed in \cite{AndDri,Baer}, the proof of convergence of random walks to Brownian motion on sub-Riemannian manifolds \cite{GorLae} and the recent application of the notion of controlled rough path to Riemannian manifolds \cite{DriSem}.
In contrast to the above-mentioned results, in particular \cite{Jorgensen,Pinsky,Li}, where only geodesic paths are used in $M$, so that 2nd-order ODEs are relevant,
in this paper we provide three different approximation schemes associated with {\em 1st-order} differential equations of curves in $M$.
These are the equations of the integral curves of the aforementioned vector fields $\{A_k\}_{k=0,\ldots,r}$.
Indeed, the first approximation scheme involves a sequence of jump processes with random jumps along integral curves of the vector fields $\{A_k\}_{k=0, \ldots, r}$.
Notice that more than one vector field is necessary to change the direction of the random walk when dealing with vector fields in $M$ instead of geodesics.
The second approximation scheme is a sequence of random walks with continuous piecewise geodesic paths. Finally, the third approximation scheme involves a sequence of random walks with continuous paths where the single steps are integral curves of the vector fields $\{A_k\}_{k=0, \ldots, r}$.
(c) These techniques are eventually applied
in section \ref{sez5}
to the Chernoff approximation of the specific case of the heat
semigroup and the Brownian motion on {\em parallelizable} Riemannian manifolds. In this context we achieve the final results presented in this work.
As noted above, besides the traditional
approximation of Brownian motion in terms of the weak limit of a sequence of random walks with piecewise geodesic
paths (theorem \ref{teo43}), we provide a new approximation result in terms of the limit of random walks with
paths along the integral curves of a family of parallelizing vector fields (theorem \ref{teo44}).\\
\noindent {\bf Structure, notations, and conventions}. The paper is organized as follows. Section \ref{sez2} presents
some basic definitions and results on Feller semigroups, Chernoff approximations and Riemannian
geometry notions that are used throughout the paper. Section \ref{sec3} presents the construction
of the Feller semigroup and its Chernoff approximation. Section \ref{SecProbabilisticInterpretation} is
devoted to the probabilistic interpretation of the Chernoff approximation formula and to the construction
of three different sequences of random walks on $M$ converging weakly to the diffusion process associated to
the Feller semigroup. Finally, section \ref{sez5} extends these results to the study of approximations of the heat
semigroup and the Brownian motion on parallelizable manifolds of bounded geometry. The appendix contains the
proofs of several technical propositions used in the main text.
From now on the notation $A\subset B$ includes the case $A=B$ and,
referring to a universe set $\mathcal{M}$, if $A \subset \mathcal{M}$, then $A^c := \mathcal{M} \setminus A$.
Throughout the paper we adopt the definition ${\mathbb{R}}^+ := [0,+\infty)$.
If $M$ is a smooth manifold the symbol $C_c^\infty(M)$ denotes the complex space of smooth {\em compactly supported} complex-valued functions on $M$.
An operator $A$ is always understood as a {\em linear} operator and its domain, denoted by $D(A)$, is always assumed to be a {\em linear subspace}.
The symbol ${\mathcal{B}}$ denotes
a Banach space over the field $\mathbb{C}$ or $\mathbb{R}$ and $\mathscr{L}({\mathcal{B}})$ denotes the set of all bounded
linear operators $A: D(A) \to {\mathcal{B}}$ with $D(A)={\mathcal{B}}$.
If $A:D(A) \to {\mathcal{B}}$
and $B: D(B) \to {\mathcal{B}}$ are operators with $D(A),D(B) \subset {\mathcal{B}}$, then
(i) the domain of $A+B$ is defined as $D(A+B) := D(A)\cap D(B)$,
(ii) the domain of $AB$ is defined as $D(AB):= \{x \in D(B) \:|\: Bx \in D(A)\}$,
(iii) the domain of $aA$, with $a\in {\mathbb{R}}$ or ${\mathbb{C}}$, is $D(aA) :=D(A)$
except for $a=0$, where $D(0A)={\mathcal{B}}$; finally, $A \subset B$ means $D(A)\subset D(B)$ and $B|_{D(A)}=A$.
\section{Analytic and Geometric Preliminaries}\label{sez2}
We assume that the reader is familiar with the theory of $C_0$-semigroups and we recall here just some basic definitions and results in order to fix the notation and the used terminology.
We also recall some basic facts about the connection of the theory of $C_0$-semigroups and the theory of random processes with particular emphasis on Feller semigroups and Feller processes.
Generally speaking, we shall focus attention only on the notions and the results which are strictly necessary to state and prove the results in the work. Details appear in the classical monographs \cite{EN1, Kol,EthKur, Bil, RogWil, IkeWat} and references therein.
Section \ref{Chernoffsec} contains some basic notions about {\em Chernoff-functions} \cite{Chernoff} which will be used in this work.
In sections \ref{riemmannian} and \ref{Riemannian2} we shall remind the reader some basic notions of Riemannian geometry used throughout. Classical reference texts are \cite{Lee,DoCarmo,ONe, KobayashiNomizu}. Section \ref{bgeometry} introduces the basic notions and results on {\em manifolds of bounded geometry}. A recent review on the subject is \cite{DSS}.
\subsection{$C_0$-semigroups and evolution equations}\label{semgreveq}
\begin{definition}\label{semigrdef}
A mapping $V:{\mathbb{R}}^+\to \mathscr{L}({\mathcal{B}})$ is called a {\bf $C_0$-semigroup}, or {\bf a strongly continuous one-parameter semigroup} ({\bf of bounded operators}), if it satisfies the following conditions,
\begin{itemize}
\item[(1)] $V(0)= I$ the identity operator on $\mathcal{B}$,
\item[(2)] $V(t+s)=V(t)V(s)$ if $t,s \in {\mathbb{R}}^+$ ({\bf semigroup law}),
\item[(3)] ${\mathbb{R}}^+ \ni t \mapsto V(t)x$ is continuous for every $x\in \mathcal{B}$, i.e., $V$ is continuous in the {\em strong operator topology}.\hfill $\blacksquare$
\end{itemize}
\end{definition}
As is well known \cite{EN1}, if $(V(t))_{t\geq 0}$ is a $C_0$-semigroup in Banach space ${\mathcal{B}}$, then the set
\begin{equation} D(L) := \left\{\varphi\in {\mathcal{B}} \: \left|\: \exists \lim_{t\to +0}\frac{V(t)\varphi-\varphi}{t}\right.\right\}
\label{DL}\end{equation} is a dense linear subspace of ${\mathcal{B}}$ invariant under the action of each $V(t)$, $t\geq 0$.
The operator $L: D(L)\to \mathcal{B}$ $$L\varphi=\lim_{t\to +0}\frac{V(t)\varphi-\varphi}{t}\:, \quad \varphi \in D(L)$$
is called the ({\bf infinitesimal}) {\bf generator} of the $C_0$-semigroup $V$.
The generator turns out to be a closed linear operator that defines $V$ uniquely which, in turn, is denoted\footnote{As is well
known, this notation is only formal in the general case even if in some situations it has a rigorous
meaning in terms of norm-converging series if $L$ is bounded respectively spectral functional calculus in Hilbert spaces when $L$ is normal.} as $V(t)=e^{tL}$.
If $L: D(L) \to \mathcal{B}$ with $D(L) \subset \mathcal{B}$ is an operator, the problem of finding a function $u\colon {\mathbb{R}}^+\to \mathcal{B}$ such that
\begin{equation}\label{ACP1}
\left\{ \begin{array}{ll}
\frac{d}{d t}u(t)= Lu(t); & t\geq 0,\\
u(0)=u_0,\\
\end{array} \right.
\end{equation}
is called the {\bf abstract Cauchy problem} (for the evolution equation) associated
to $L$. A function $u\colon {\mathbb{R}}^+\to \mathcal{B}$ is called a {\bf classical solution} to abstract Cauchy problem (\ref{ACP1})
if, for every $t\geq 0$, the function $u$ has a continuous derivative (in the topology of ${\mathcal{B}}$) $u'\colon {\mathbb{R}}^+\to \mathcal{B}$, it holds
$u(t)\in D(L)$ for $t\in {\mathbb{R}}^+$, and (\ref{ACP1}) holds. The following fact can be found as Proposition 6.2 in \cite{EN1}, p. 145.
\begin{proposition}\label{ACPsol}
Let the operator $L: D(L) \to \mathcal{B}$
be the generator of a strongly continuous semigroup $(V(t))_{t\geq 0}$ in the Banach space $\mathcal{B}$.
Then, for every $u_0\in D(L)$ there is a unique classical solution to abstract Cauchy problem (\ref{ACP1}), which is
given by the formula $u(t)=V(t)u_0$.
\end{proposition}
\subsection{Feller semigroups and random processes}
$C_0$-semigroups are of particular interest because of their strong interplay with the theory of evolution equations, on the one hand, and with probability theory, on the other hand; from the probabilistic point of view the so-called {\em Feller semigroups} \cite{Kol,EthKur} are particularly important.
Let $\mathcal{M}$ be a locally-compact metric space. With the symbol
$C(\mathcal{M})$ we denote the space of continuous functions $f: \mathcal{M} \to {\mathbb{C}}$.
With $C_0(\mathcal{M})$ we shall denote the Banach space of continuous functions {\bf vanishing at $\infty$}, i.e. $$C_0(\mathcal{M}):=\{f \in C(\mathcal{M}) \:|\: \forall \varepsilon>0 \; \exists K \subset \mathcal{M} \hbox{ compact such that } |f(x)|<\varepsilon \; \forall x\in K^c \},$$ endowed with the $\|\cdot\|_{\infty}$-norm.
If $\mathcal{M}$ is compact, it is natural to define $C_0(\mathcal{M}):=C(\mathcal{M})$.
A linear operator $U:C_0(\mathcal{M})\to C_0(\mathcal{M})$ is said to be {\bf positive} if $(Uf)(x)\geq 0$
for $x\in \mathcal{M}$ whenever $f\in C_0(\mathcal{M})$ and $f(x)\geq 0$ if $x\in \mathcal{M}$. $U$ is said to be a {\bf contraction } if $\|Uf\|\leq \|f\|$ for $f \in C_0(\mathcal{M})$.
\begin{definition}\label{semFellerdef} If $\mathcal{M}$ is a locally-compact metric space,
a strongly continuous semigroup made of positive contractions on $C_0(\mathcal{M})$ is called a {\bf Feller semigroup}. \hfill $\blacksquare$
\end{definition}
A crucial result is the following one (theorem 2.2 Ch.4 in \cite{EthKur}):
\begin{theorem}\label{theoremFellergenerator} Let $\mathcal{M}$ be a locally compact metric space and $L_1 \colon D \to C_0(\mathcal{M})$ an operator whose domain $D\subset C_0(\mathcal{M})$ is a subspace.
Then $L_1$ is closable and its closure $L:= \overline{L_1}$ is the generator of a Feller semigroup if the following conditions are valid.
\begin{itemize}
\item[{\bf (a)}] $D$ is dense in $C_0(\mathcal{M})$,
\item[{\bf (b)}] $L_1$ satisfies the {\bf positive maximum principle}: \\
\begin{equation}\hbox{for each } f\in D:
\hbox{ if }\sup_{x\in \mathcal{M}}f(x)=f(x_0)\geq 0 \hbox{
for } x_0\in \mathcal{M},\hbox{
then }(L_1f)(x_0)\leq 0\:,\label{PMP}\end{equation}
\item[{\bf (c)}] $Ran(L_1 - \lambda I)$ is dense in $C_0(\mathcal{M})$ for some $\lambda >0$.
\end{itemize}
\end{theorem}
\begin{remark}$\null$\\
{\bf (1)} Given a closed operator $L:D(L)\subset {\mathcal{B}}\to {\mathcal{B}}$ on a Banach space ${\mathcal{B}}$, a dense subspace $D\subset D(L)$ is called a {\bf core} for $L$ if $L|_{D}$ is closable and $\overline{L|_{D}}=L$.\\
Theorem \ref{theoremFellergenerator} in fact yields the existence of the semigroup as well as a core for its generator.\\
{\bf (2)} In this paper, $\mathcal{M}$ is a Riemannian manifold $(M,g)$. We will introduce and use three types of operators: $L_0$ is always a differential operator defined on the whole $C^\infty(M)$, $L_1$ is its restriction to a suitable subspace $D_k$ satisfying the theorem above, $L= \overline{L_1}$ is the generator of the Feller semigroup. $\hfill \blacksquare$
\end{remark}
By the {\em Riesz-Markov theorem}, it is possible to associate to any Feller semigroup $V$ a family $(p_t(x))_{t\geq 0, x\in \mathcal{M}}$ of positive Borel measures on $\mathcal{M}$ such that, for all $t\geq 0$,
$$(V(t)f)(x)=\int_\mathcal{M} f(y) p_t(x, dy), \qquad x\in \mathcal{M}$$
and, for all $f\in C_0(\mathcal{M})$,
$$\lim_{x_n\to x}\int_\mathcal{M} f(y) p_t(x_n, dy)=\int_\mathcal{M} f(y) p_t(x, dy).$$
Moreover $p_t(x, \mathcal{M})\leq 1$.
If all the measures of the family $(p_t(x))_{t\geq 0, x\in \mathcal{M}}$ are {\em probability measures}, then the Feller semigroup is said to be {\bf conservative}. In this case, by the semigroup law, the family of probability measures satisfies the {\em Chapman-Kolmogorov} equation:
\begin{equation}\label{CK}p_{t+s}(x, A)=\int_\mathcal{M}p_t(y, A)p_s(x, dy), \qquad \hbox{for every Borel set } A\subset \mathcal{M}.\end{equation}
As a consequence,
given an arbitrary probability measure $\mu$ on the Borel $\sigma $-algebra ${\mathcal{B}}(\mathcal{M})$ of $\mathcal{M}$, it is possible to construct a Markov process $(X^{\mu}_t)_{t\geq 0}$ with values in $\mathcal{M}$ with finite dimensional distributions
\begin{equation}\label{fin-dim-distr}{\mathbb{P}}(X^{\mu}_{t_1}\in A_1, \dots X^{\mu}_{t_n}\in A_n)=
\int 1_{A_1}(x_1)\cdots 1_{A_n}(x_n)p_{t_n-t_{n-1}}(x_{n-1}, dx_n)\cdots p_{t_1}(x_0, dx_1)d\mu (x_0),
\end{equation}
for $0\leq t_1\leq \dots \leq t_n$ and $A_1, ..., A_n \in {\mathcal{B}}(\mathcal{M})$. The existence of the process is guaranteed by the {\em Kolmogorov existence theorem} \cite{Bil}, the family of measures \eqref{fin-dim-distr} being consistent due to the Chapman-Kolmogorov identity \eqref{CK}.
In the general case, it is still possible to define the associated Markov process $(X^{\mu}_t)_{t\geq 0}$ with values in the one-point compactification $\mathcal{M}':=\mathcal{M}\cup \{\partial\}$ of $\mathcal{M}$, and
the process enjoys the {\em strong Markov property} \cite{RogWil}. If $X_s=\partial $ $\forall s\geq t$ whenever either $X_{t^-}=\partial $ or $X_t=\partial$, then these processes are called {\em Feller-Dynkin} (FD-) processes. The random variable
$$\xi:=\inf\{t\in {\mathbb{R}}^+ | X_t=\partial\}$$
is called {\em lifetime} or {\em explosion time} of the process.
In fact, if the Feller semigroup is conservative, then $\xi=+\infty$ almost surely; hence the FD-process can be thought of as a stochastic process with values in $\mathcal{M}$ instead of $\mathcal{M}'$, and it is called conservative.
By \eqref{fin-dim-distr} the action of the semigroup admits the following probabilistic representation
\begin{equation}(V(t)f)(x)={\mathbb{E}}[f(X_t^x)],\qquad x\in \mathcal{M}, \label{prorep}\end{equation}
where $X_t^x$ is the aforementioned Markov process with initial distribution $\mu=\delta_x$, the Dirac measure concentrated at $x\in \mathcal{M}$.
An important class of FD-processes are the {\em diffusions}, also called Feller-Dynkin diffusions \cite{IkeWat,RogWil}. They are defined as FD-processes with continuous paths up to the explosion time. The generator $L$ of the associated semigroup is a local operator with a domain that includes the set of smooth functions with compact support and $L$ satisfies the maximum principle \eqref{PMP} there. If $x\in \mathcal{M}$ and $(X^x_t)$ is the diffusion process starting at $x$, then its law $P^x$ is a probability measure on the metric space $C({\mathbb{R}}^+, \mathcal{M})$ of continuous paths on $\mathcal{M}$ or, more generally in the case of explosion, on $C({\mathbb{R}}^+, \mathcal{M}')$. The family $\{P^x\}_{x\in \mathcal{M}}$ is called a {\em system of diffusion measures}.\\
In the case where the state space $\mathcal{M}$ of the Feller-Dynkin diffusion is ${\mathbb{R}}^d$, it is well known (see e.g. \cite{RogWil,Kol}) that
the restriction of $L$ to $C^\infty_c({\mathbb{R}}^d)$ is a second-order elliptic operator of the form
\begin{equation}\label{generator1}(L_0f)(x)=\sum_{i,j}a^{ij}(x)\frac{\partial^2 f}{\partial x^i\partial x^j}(x)+\sum_{j}b^{j}(x)\frac{\partial f}{\partial x^j}(x)+ c(x)f(x), \qquad x\in {\mathbb{R}}^d,\quad f \in C_c^\infty({\mathbb{R}}^d) \:.\end{equation}
where $a^{ij}, b^j$, $c$, $i,j=1,\dots,d$, are real-valued continuous functions, $c\leq 0$ and the matrix of coefficients $a^{ij}(x)$ is symmetric and non-negative definite.
The corresponding semigroup $V$ provides a classical solution of the Cauchy problem (in the above semigroup sense) for $u_0\in C_c^\infty({\mathbb{R}}^d)$,
\begin{equation}\label{CPee}
\left\{ \begin{array}{ll}
u'_t(t,x)=Lu(t,x) \ \mathrm{ for }\ t>0, x\in {\mathbb{R}}^d\\
u(0,x)=u_0(x)\ \mathrm{ for } \ x\in {\mathbb{R}}^d
\end{array} \right.
\end{equation}
Actually, by formula \eqref{prorep}, the function $ u:{\mathbb{R}}^+\times {\mathbb{R}}^d\to {\mathbb{R}}$ admits the probabilistic representation formula $u(t,x)={\mathbb{E}}[u_0(X_t^x)]$.
Conversely, given globally Lipschitz maps $\sigma _k^i: {\mathbb{R}}^d\to {\mathbb{R}}$ and $b^i:{\mathbb{R}}^d\to {\mathbb{R}}$ and setting $a^{ij}=\sum_k\sigma _k^i\sigma _k^j$, it is possible to prove that there exists a Feller semigroup whose generator restricted to $C^\infty_c({\mathbb{R}}^d)$ has the form \eqref{generator1} with $c=0$. The associated diffusion process is constructed in terms of the so called martingale solution of the stochastic differential equation
\begin{equation}\label{SPDE1}dX^i_t=\sum_{k=1}^d \sigma _k^i(X_t)dB_t^k+b^i(X_t)dt,\end{equation}
where $(B_t)_{t\in {\mathbb{R}}^+}$ is a $d$-dimensional Brownian motion. For an extended discussion of this topic see, e.g., \cite{RogWil,IkeWat}.
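For illustration, the probabilistic representation $u(t,x)={\mathbb{E}}[u_0(X_t^x)]$ can be evaluated by combining the Euler--Maruyama discretization of \eqref{SPDE1} with a Monte Carlo average. The following minimal sketch (with $d=1$, a single diffusion field, and illustrative Ornstein--Uhlenbeck-type coefficients, all of which are our assumptions) is not part of the theory above but may help fix ideas.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def b(x):     return -x                  # drift, globally Lipschitz
def sigma(x): return np.ones_like(x)     # single diffusion coefficient
def u0(x):    return np.exp(-x**2)       # initial datum

def euler_maruyama(x0, t, nsteps, npaths):
    dt = t/nsteps
    X = np.full(npaths, x0, dtype=float)
    for _ in range(nsteps):
        dB = rng.normal(0.0, np.sqrt(dt), size=npaths)
        X += sigma(X)*dB + b(X)*dt       # one Euler-Maruyama step
    return X

t, x = 1.0, 0.5
X = euler_maruyama(x, t, nsteps=1000, npaths=100_000)
print("u(t,x) ~", u0(X).mean())          # Monte Carlo estimate of E[u0(X_t^x)]
\end{verbatim}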
\subsection{Chernoff approximations for $C_0$-semigroups} \label{Chernoffsec}
Here we recall {\em Chernoff's theorem} \cite{Chernoff,EN1,BS}, which provides an approximation method for $C_0$-semigroups
on a Banach space in terms of suitable operator-valued functions.
\begin{theorem}[The Chernoff theorem]\label{FormulaChernova}
Let $(e^{tL})_{t\geq 0}$ be a $C_0$-semigroup on a Banach space ${\mathcal{B}}$ with generator $L: D(L) \to {\mathcal{B}}$ and let $S:{\mathbb{R}}^+\to \mathscr{L}({\mathcal{B}})$ be a map satisfying the following conditions:
\begin{enumerate}
\item There exists $\omega\in\mathbb{R}$ such that $\|S(t)\|\leq e^{\omega t}$ for all $t\geq 0$;
\item The function $S$ is continuous in the strong topology in $\mathscr{L}({\mathcal{B}})$;
\item $S(0) = I$, i.e., $S(0)f=f$ for every $f \in {\mathcal{B}}$;
\item There exists a linear subspace $\mathcal{D} \subset D(L)$ that is a core for the operator $L : D(L) \to {\mathcal{B}}$ and such that $\lim_{t \to 0}(S(t)f-f-tLf)/t=0$ for each $f \in \mathcal{D}$.
\end{enumerate}
Then the following holds:
\begin{equation}\label{Chest}
\lim_{n\to\infty}\sup_{t\in[0,T]}\left\|S(t/n)^nf - e^{tL}f\right\|=0,\quad \mbox{for every $f\in {\mathcal{B}}$ and every $T>0$,}
\end{equation}
where $S(t/n)^n$ is a composition of $n$ copies of the linear bounded operator $S(t/n)$.
\end{theorem}
\begin{remark}{
Let $(e^{tL})_{t\geq 0}$ be a $C_0$-semigroup on a Banach space ${\mathcal{B}}$ with generator $L: D(L) \to {\mathcal{B}}$ and let $S:{\mathbb{R}}^+\to \mathscr{L}({\mathcal{B}})$ be a map satisfying formula \eqref{Chest}. Then:
\begin{itemize}
\item[(a)]
$S$ is called a {\bf Chernoff function} for operator $L$ or
{\bf Chernoff-equivalent} to $C_0$-semigroup $(e^{tL})_{t\geq 0}$ \cite{STT}.
\item[(b)]The expression $S(t/n)^nf$ is called a \textit{\bf Chernoff approximation expression} for $e^{tL}f$.
\item[(c)] The $\mathcal{B}$-valued function
$$U(t):=\lim_{n\to\infty}S(t/n)^nu_0=e^{tL}u_0$$ is the classical solution of the Cauchy problem (\ref{ACP1}) due to Proposition \ref{ACPsol} and Theorem \ref{FormulaChernova} if $u_0\in D(L)$, so
Chernoff approximation expressions become approximations to the solution with respect to norm in $\mathcal{B}$.
\end{itemize}
}\end{remark}
A definition of Chernoff equivalence and Chernoff function was suggested in 2002 \cite{STT} and developed in \cite{SWWdan,SWWcan,SWW2007,SmHist,SmSchrHist, R5}. A new wording was proposed in \cite{R1, R-AMC2018}. Every $C_0$-semigroup $S(t)=e^{tL}$ is a Chernoff function for its generator $L$; actually it is the only
Chernoff function which has the semigroup composition property.
There are also other statements known as Chernoff-type theorems, and they produce different notions of Chernoff function. We will not give an overview of this topic here; we just fix one version of the Chernoff theorem and one definition of Chernoff function, and work with them.
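As a concrete one-dimensional illustration of the theorem, and of the shift-operator approach mentioned in the Introduction, consider $S(t)f(x)=\frac{1}{2}\big(f(x+\sqrt{2t})+f(x-\sqrt{2t})\big)$ on suitable functions: $S(0)=I$, $\|S(t)\|\leq 1$, and a Taylor expansion gives $S(t)f=f+tf''+o(t)$, so that $S$ is a Chernoff function for $L=d^2/dx^2$ and $S(t/n)^nf\to e^{tL}f$. A minimal numerical sketch follows (the grid sizes and the Gaussian test function are our illustrative choices; the shifts are performed exactly in Fourier space on a periodic grid).
\begin{verbatim}
import numpy as np

L, N = 40.0, 4096
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
f = np.exp(-x**2)                        # test function

def shift(g, h):
    # g(x + h) for a band-limited periodic function, exact via FFT
    return np.real(np.fft.ifft(np.fft.fft(g)*np.exp(1j*k*h)))

def S(tau, g):
    h = np.sqrt(2*tau)
    return 0.5*(shift(g, h) + shift(g, -h))

t, n = 1.0, 2000
g = f.copy()
for _ in range(n):                       # Chernoff expression S(t/n)^n f
    g = S(t/n, g)

# reference: e^{t d^2/dx^2} f, computed by the Fourier multiplier e^{-k^2 t}
exact = np.real(np.fft.ifft(np.fft.fft(f)*np.exp(-k**2*t)))
print("sup-norm error:", np.abs(g - exact).max())
\end{verbatim}
The printed error tends to zero as \texttt{n} grows, in accordance with \eqref{Chest}.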
\subsection{Structures on Riemannian manifolds}\label{riemmannian}
In this section we recall some general notions of Riemannian geometry. For more details we refer to
\cite{Lee,DoCarmo,ONe, KobayashiNomizu}.
Let $(M,g)$ be a smooth (i.e., $C^\infty$) Riemannian manifold, which we will always assume to be connected, Hausdorff, and second countable.
The {\bf Riemannian
distance} of $p,q \in M$ is defined as
\begin{equation} \label{defd}d_{(M,g)}(p,q) = \inf_{\gamma \in C_{p,q}} L_g(\gamma)\:.\end{equation}
Above, $C_{p,q}$ is the set of the smooth curves $\gamma : [a,b] \to M$ with $\gamma(a)=p$ and $\gamma(b)=q$ ($a<b$ depend on $\gamma$) and
$$L_g(\gamma) := \int_a^b \|\dot{\gamma}(t)\|_g dt\:,$$
-- where $\dot{\gamma}$ is the tangent vector to $\gamma$ and $\|\dot{\gamma}(t)\|_g=\sqrt{g_{\gamma(t)}(\dot\gamma(t),\dot\gamma(t))}$ its standard $g$-norm (see below) --
is the {\bf length} of the curve $\gamma$ computed with respect to $g$.
The Riemannian distance makes $M$ a metrical space whose metrical topology
coincides with the original topology of $M$ as topological manifold.
If $p\in M$ and $U_p \subset T_pM$ is a sufficiently small open neighborhood of the origin $0\in T_pM$,
the {\bf exponential map} at $p$, denoted by $\exp_p :U_p \to M$, is the map associating $v\in U_p$ with $\sigma(1,p,v)$, where
$[0,1] \ni s \mapsto \sigma(s,p,v) \in M$ is the restriction to $[0,1]$ of the maximal $g$-geodesic in $M$ starting from $p$, at $s=0$,
with initial tangent vector $v$. It is known that if $U_p$ is sufficiently small, $\exp_p$ is a diffeomorphism
from $U_p\subset T_pM$ onto the open neighborhood $V_p := \exp_p(U_p) \subset M$ of $p$.
Furthermore, such $V_p$ can be chosen to be an {\em open $d_{(M,g)}$-metric ball} $V_p= B^{(M,g)}_r(p)$ of
sufficiently small radius $r>0$ (in this case $U_p$ will be the open ball in $T_pM$ with radius $r$).
With the said choice of $B^{(M,g)}_r(p)$,
if $N:= \{ e_1,\ldots e_d\}$ is a $g$-orthonormal basis of $T_pM$, we can construct a bijective map denoted by $\exp^{-1}_{p,N}: B^{(M,g)}_r(p)\to B_r(0) \subset {\mathbb{R}}^d$ as:
$$\exp^{-1}_{p,N}: B^{(M,g)}_r(p) \ni q \mapsto (y^1(q), \ldots, y^d(q)) \in B_r(0) \subset {\mathbb{R}}^d\quad \mbox{where $\sum_{j=1}^d y^j(q) e_j =
\exp_p^{-1}(q)$.}$$ This map is smooth together with its inverse, and its image (i.e., the coordinate representation of the open neighborhood of
the origin of $T_pM$ previously denoted
by $U_p$)
is a standard ball $B_r(0) \subset {\mathbb{R}}^d$ centered at the origin with the same radius $r$ as $B_r^{(M,g)}(p)$. The pair
$(B^{(M,g)}_r(p), \exp^{-1}_{p,N})$ is called a (local) {\bf normal Riemannian chart centered on} $p$ and the coordinates $y^1,\ldots,y^d$, {\bf Riemannian coordinates centered on} $p$.
It turns out that, referring to this coordinate patch,
\begin{itemize} \item[(a)] the components at $y\in B_r(0)$ of the metric and its inverse respectively
satisfy $g_{ab}(0)= \delta_{ab}$ and $g^{ab}(0)= \delta^{ab}$ for $a,b = 1,\ldots, d$;
\item[(b)] the {\bf Levi-Civita connection coefficients} (see (\ref{CCLC}) below) $\Gamma^c_{ab}(y)$ associated to metric satisfy
$\Gamma^c_{ab}(0)= 0$ and it also holds $\frac{\partial g_{ab}}{\partial y^c}|_0 = \frac{\partial g^{ab}}{\partial y^c}|_0 =0$ for $a,b, c = 1,\ldots, d$;
\item[(c)] the ${\mathbb{R}}^d$-Euclidean norm in $B_r(0)$ coincides with the distance from
$p$ in the following sense: \begin{equation} \|y\| = d_{(M,g)}\left(\exp_p\left( \sum_{j=1}^d y^j(q) e_j \right), p\right);\label{distanceG}\end{equation}
\item[(d)] there is a unique geodesic segment $\gamma$ joining $p$ and $q\in B_r^{(M,g)}(p)$ and completely included in $B_r^{(M,g)}(p)$. In Riemannian coordinates centered on $p$, it coincides with the ${\mathbb{R}}^d$ segment joining the origin to $(y^1(q), \ldots, y^d(q))$.
The length $L_g(\gamma)$ is
$d_{(M,g)}(p,q)$.
\end{itemize}
$(M,g)$ is said to be {\bf geodesically complete} if all geodesics are defined for all values of their affine parameter in ${\mathbb{R}}$. Another way to say the same is that the exponential map $\exp_x$, for every given $x\in M$, is defined on the whole $T_xM$ (even if this does not imply that it defines a diffeomorphism on the whole $T_xM$).
The celebrated {\em Hopf-Rinow theorem} proves that geodesic
completeness is equivalent to the fact that $M$ is complete as a metric space with respect to $d_{(M,g)}$. In turn, this is equivalent
to the fact that closed bounded (with respect to the geodesic distance) subsets of $M$ are compact. Finally, for
geodesically complete manifolds, every pair $p,q\in M$ admits a (not necessarily unique) geodesic joining
them, and the length of this geodesic segment coincides with $d_{(M,g)}(p,q)$, since the said geodesic
minimizes the length of the curves joining the points.
The {\bf injectivity radius at $p\in M$}, denoted by $I_{(M,g)}(p) \in {\mathbb{R}}^+$, is the supremum of the set of radii $r$ of the open ball
$B^{(M,g)}_r(p) \subset M$ such that $(B^{(M,g)}_r(p), \exp^{-1}_{p,N})$ is a normal Riemannian chart centered at $p$ for
an orthonormal basis $N$ of $T_pM$ (it does not depend on $N$). The {\bf injectivity radius} of $(M,g)$ is
$$I_{(M,g)} := \inf_{p\in M} I_{(M,g)}(p)\:.$$
\begin{remark}\label{compact-finiteI}
Compact smooth Riemannian manifolds in particular always have a strictly positive injectivity radius, as the reader easily proves. \hfill $\blacksquare$
\end{remark}
Strict positivity of the injectivity radius has several important consequences, the following one in particular.
\begin{lemma}\label{lemmaC} If $(M,g)$ is a connected smooth manifold with strictly positive injectivity radius, then $(M,g)$ is geodesically complete and all closed bounded sets are compact.
\end{lemma}
\begin{proof} See the appendix. \end{proof}
\subsection{Manifolds of bounded geometry}\label{bgeometry}
For future use, we introduce the definition of manifold $(M,g)$ {\em of bounded geometry}. This is a class of Riemannian
manifolds where, in particular, the thesis of Lemma \ref{lemmaC} is valid. See \cite{DSS} for a recent extended review
and \cite{Kor,shubin} for a summary of notions and results used in this paper.
Roughly speaking (see remark \ref{remBG} below), bounded geometry means that, on the one hand, around every point $p\in M$ there is a geodesic ball $B^{(M,g)}_r(p)$ covered by Riemannian coordinates centered on $p$ of radius $r>0$ independent of $p$. On the other hand, there are uniform bounds on all derivatives of the components of
the metric in the said Riemannian coordinates in $B^{(M,g)}_r(p)$, independent of $p$. Here is the formal definition.
\begin{definition}\label{defboundedgeom}
A connected smooth Riemannian manifold $(M,g)$ is said to be
{\bf of bounded geometry} if $(M,g)$ has strictly positive injectivity radius and, for some constants $c_k<+\infty$, $k=0,1,\ldots$,
$$\| \|\nabla^{(g)k} R\|_g\|_\infty \leq c_k \:, \quad k=0,1,\ldots\:. $$
\hfill $\blacksquare$
\end{definition}
Above and henceforth, $\nabla^{(g)}$ indicates the {\em covariant derivative} of the {\em Levi-Civita connection} associated to $g$, $R$ indicates the {\em Riemannian curvature tensor} and $\|\cdot\|_g$ denotes the natural point-wise norm associated to the metric $g$ acting on smooth tensor fields of a given order
(order $(1, 3+k)$ concerning $\nabla^{(g)k} R$). For instance, if $T$ is a smooth tensor field of order $(n,m)$, so that their components at $q\in M$
in coordinates $y^1,\ldots, y^d$ around $q$ are
${T^{a_1\cdots a_n}}_{b_1\cdots b_m}(y(q))$, we have
\begin{multline}\|T(q)\|^2_g =\sum_{a_1,\dots, a_n, b_1,\dots, b_n, c_1,\dots, c_n, d_1,\dots, d_n } g_{a_1c_1}(y(q)) \cdots g_{a_nc_n}(y(q)) g^{b_1d_1}(y(q))\cdots g^{b_md_m}(y(q))\\ {T^{a_1\cdots a_n}}_{b_1\cdots b_m}(y(q))
{T^{c_1\cdots c_n}}_{d_1\cdots d_m}(y(q))\:.
\end{multline}
\begin{example} From the definition above, the following manifolds in particular are of bounded geometry (Example 2.1 in \cite{DSS,shubin}):
\begin{itemize}
\item[(i)] every smooth compact Riemannian manifold;
\item[(ii)] ${\mathbb{R}}^m$ equipped with its natural metric;
\item[(iii)] every smooth Riemannian locally flat manifold with strictly positive injectivity radius;
\item[(iv)] some classical manifolds, such as the $m$-dimensional hyperbolic space (the unit ball $B_1(0)$ in ${\mathbb{R}}^m$
equipped with the {\em Poincar\'e disk metric});
\item[(v)] Homogeneous manifolds with invariant metric;
\item[(vi)] covering manifolds of compact manifolds with a Riemannian
metric which is lifted from the base manifold.\hfill $\blacksquare$
\end{itemize}
\end{example}
Another crucial feature of a smooth Riemannian manifold of bounded geometry is the one that follows \cite{DSS}. For every given $r \in (0, I_{(M,g)}]$, there is a sequence of finite constants
$C^{(r)}_k \in {\mathbb{R}}^+$, $k=0,1,2,\ldots$ and a constant $c^{(r)}>0$ such that
\begin{equation}
\det [g_{ab}(y)]
\geq c^{(r)}\:, \mbox{ if $y\in B_r(0)$} \quad \mbox{and} \quad \max_{|\alpha|\leq k} \| \partial_y^\alpha
g_{ab}(y)\|^{(B_r(0))}_\infty \leq C_k^{(r)}\:, \quad a,b = 1,\ldots, d \label{estimateg}
\end{equation}
where $y^1,\ldots, y^n$ are the coordinates of every normal Riemannian chart with domain $B_r^{(M,g)}(p)$ centered at $p\in M$ and $g_{ab}(y)$ are the components of the metric in that local coordinate system.
We stress that the constants $C_k^{(r)}$ do {\em not} depend on $p$ and that {\em all domains have the same geodesic radius $r$}.
From (\ref{estimateg}),
taking advantage of Cramer's rule to compute the elements $g^{ab}(y)$ of the inverse of the matrix of the coefficients $g_{ab}(y)$, as well as recursively using the identity
$$\frac{\partial g^{ab}}{\partial y^i}= -\sum_{c,d}g^{ac}g^{bd} \frac{\partial g_{cd}}{\partial y^i}\:,$$
one easily obtains the existence of another sequence
of finite constants
$H^{(r)}_k \in {\mathbb{R}}^+$, $k=0,1,2,\ldots$ such that
\begin{equation}
\max_{|\alpha|\leq k} \| \partial_y^\alpha g^{ab}(y)\|^{(B_r(0))}_\infty \leq H_k^{(r)}\:, \quad a,b = 1,\ldots, d \label{estimateg2}
\end{equation}
where, as above, $y^1,\ldots, y^d$ are the coordinates of every normal Riemannian chart with domain $B_r^{(M,g)}(p)$ centered at $p\in M$ of radius $r \in (0, I_{(M,g)}]$.
Finally, referring to Levi-Civita's connection coefficients \begin{equation}\label{CCLC}
\Gamma^a_{bc}(y):= \frac{1}{2}\sum_{d}g^{ad}(y)\left( \partial_{y^c}g_{bd} +\partial_{y^b} g_{dc} -\partial_{y^d} g_{bc}\right)\:,\end{equation}
from the above pair of results, we obtain the existence of another sequence
of finite constants
$J^{(r)}_k \in {\mathbb{R}}^+$, $k=0,1,2,\ldots$ such that
\begin{equation}
\max_{|\alpha|\leq k} \| \partial_y^\alpha \Gamma^{a}_{bc} (y)\|^{(B_r(0))}_\infty \leq J_k^{(r)}\:, \quad a,b,c = 1,\ldots, d \label{estimateg3}
\end{equation}
valid in every normal Riemannian chart around every $p\in M$ as before defined on a metric ball of radius $r \in (0, I_{(M,g)}]$
with center $p$.
\begin{remark}\label{remBG}
We observe {\em en passant} that if $(M,g)$ has strictly positive injectivity radius and satisfies (\ref{estimateg})
for a given $r \in (0, I_{(M,g)})$ -- so that it also satisfies (\ref{estimateg2}) and (\ref{estimateg3}) -- then it is necessarily of bounded geometry,
just in view of the polynomial expression of the components of the Riemann tensor in terms of the $\Gamma^a_{bc}$ and their first derivatives.\hfill $\blacksquare$
\end{remark}
\subsection{Completeness of vector fields}\label{Riemannian2}
Let $M$ be a general smooth manifold.
As a vector field $A$ on $M$ is a map $A:M\to TM$, we use the notation $A(p)\in T_pM$.
Assuming that $A$ is smooth, let us consider the Cauchy problem
\begin{equation}\label{CP-M} \left\{
\begin{array}{l}\dot \gamma (s)=A(\gamma(s))\\
\gamma (t_0)=x
\end{array}
\right.\end{equation}
A solution $\gamma :(\alpha, \beta)\to M$ of \eqref{CP-M} is called {\bf maximal} if it is not the proper restriction of any other solution of \eqref{CP-M}.
By the uniqueness of local solutions of the Cauchy problem
\cite{ONe}, there exists only one maximal solution $\gamma$ of \eqref{CP-M} and any other solution is one of its restrictions. $\gamma$ is called the {\bf maximal integral curve of $A$ starting at $x$}.
A smooth vector field $A$ on the smooth manifold $M$ is said to be {\bf complete} [\cite{ONe}, p. 51] if each of its maximal integral curves is defined on the entire real line.
We finally quote an elementary but crucial technical result whose proof is included for completeness in the appendix.
\begin{lemma}\label{lemma-complete-solutions}
Let $(M,g)$ be a connected geodesically complete Riemannian manifold. Let $A$ be a smooth vector field such that
\begin{equation}\label{assAalpha}
\| \|A\|_g\|_\infty<+\infty\,.
\end{equation}
Then the maximal solutions of \begin{equation}\label{CPVF}\frac{d}{dt}\gamma (t)=A(\gamma (t))
\end{equation} are complete.
\end{lemma}
\begin{remark}\label{remcompleteness}
The thesis of the lemma is automatically satisfied for a smooth field in the case of compact manifolds (for instance as a consequence of remark \ref{compact-finiteI} and lemma \ref{lemmaC}, but the result is elementary and valid also in the absence of a metric $g$). Yet, assuming that $A$ is {\em $C^\infty$-bounded} (see definition \ref{def22} below), the remaining hypotheses are true for manifolds of bounded geometry, as a consequence of lemma \ref{lemmaC}. Hence the thesis of lemma \ref{lemma-complete-solutions} is valid also in this case. \hfill $\blacksquare$
\end{remark}
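The dichotomy behind lemma \ref{lemma-complete-solutions} is easy to visualize numerically already for $M={\mathbb{R}}$ with the standard metric. The following minimal Python sketch (an illustration of ours, not part of the theory: the two fields and the use of \texttt{scipy} are our ad hoc choices) integrates a bounded field, whose integral curves are complete, and an unbounded one, whose maximal solution $\tan t$ lives only on $[0,\pi/2)$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Bounded field A(x) = sin(x): the integral curves are complete.
sol_b = solve_ivp(lambda t, x: np.sin(x), (0.0, 100.0), [0.5])
print(sol_b.status, sol_b.t[-1])   # 0, 100.0: the final time is reached

# Unbounded field A(x) = 1 + x^2: the maximal solution is tan(t),
# defined only on [0, pi/2); we stop the solver once |x| is huge.
blowup = lambda t, x: abs(x[0]) - 1.0e6
blowup.terminal = True
sol_u = solve_ivp(lambda t, x: 1.0 + x[0]**2, (0.0, 100.0), [0.0],
                  events=blowup)
print(sol_u.t[-1])                 # ~ pi/2: the field is not complete
\end{verbatim}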
\section{Feller semigroups and Chernoff approximations for diffusions on Riemannian manifolds}\label{sec3}
This section is devoted to the study of diffusions on Riemannian manifolds $(M,g)$ of bounded geometry. We consider second-order elliptic operators $L_0: C^\infty(M) \to C^\infty(M)$ of the form \eqref{Hoperator0} proving that they admit an extension $L:D(L)\subset C_0(M)\to C_0(M)$ that generates a Feller semigroup $(e^{tL})_{t\in {\mathbb{R}}^+}$ on $C_0(M)$. We also provide a family of operator-cores for $L$. This result is finally applied in section \ref{sez3.3} to the construction of Chernoff approximations for the semigroup $(e^{tL})_{t\in {\mathbb{R}}^+}$ in terms of a family of shift operators.
\subsection{Relevant operators and subspaces of $C_0(M)$}
Let $(M,g)$ be a $d$-dimensional $C^\infty$ connected Riemannian manifold which we also assume to be geodesically complete.
Let $\{A_k\}_{k=0, 1,...,r}$ be a family of $C^\infty$ vector fields on $M$. We start by considering the second order differential operator $L_0: C^\infty(M) \to C^\infty(M)$
\begin{equation}\label{Hoperator0}
(L_0f)(x):=\frac{1}{2}\sum_{k =1}^rA_k(A_k f)(x)+(A_0f)(x), \qquad x\in M\:, \quad f \in C^\infty(M)
\end{equation}
In every local coordinate neighbourhood $U$ containing $x$, if $\sigma^i_k(x)$ are the components of the vector field $A_k$, the operator $L_0$ can be represented by the differential operator
\begin{equation}\label{op-coordinates}
(L_0f)(x)=\frac{1}{2} \sum_{i, j}a^{ij} (x)\frac{\partial ^2}{\partial x^i\partial x^j} f(x)+\sum_{i}b^i(x)\frac{\partial }{\partial x^i}f(x), \qquad x\in U,\end{equation}
where $b^i(x)=\sigma_0^i(x)+\frac{1}{2}\sum_{j,k}\sigma^j_k(x)\frac{\partial}{\partial x^j}\sigma ^i_k(x)$, and where the $a^{ij} (x)=\sum _k\sigma _k^i(x)\sigma_k^j(x)$ are the entries of a
positive semidefinite matrix.
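As an elementary illustration of \eqref{op-coordinates} (a standard special case, spelled out here for concreteness), take $M={\mathbb{R}}^d$ with the identity chart, $r=d$, $A_k=e_k$ the $k$-th constant coordinate field for $k=1,\ldots,d$, and $A_0=b$ a constant field. Then $\sigma^i_k=\delta^i_k$, all derivative terms vanish, $a^{ij}=\delta^{ij}$, and
$$(L_0f)(x)=\frac{1}{2}\Delta f(x)+\sum_{i=1}^d b^i\frac{\partial }{\partial x^i}f(x)\:,$$
the generator of a Brownian motion with constant drift $b$.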
$(L_0 + c): C^\infty(M) \to C(M)$ with $L_0$ taking the form (\ref{op-coordinates})
in every coordinate patch, and $c \in C(M)$ used as a multiplicative operator,
is said to be
{\bf elliptic at $x\in M$} if the matrix of coefficients $a^{ij}(x)$ is positive semidefinite and {\em non-singular} in every local coordinate system of $M$ around $x$. If this condition holds for every $x\in M$, then $L_0+c$
is said to be {\bf elliptic}.
It is easy to see that $L_0+c$ is elliptic if the matrices of coefficients $a^{ij}$ are positive semidefinite and non-singular in every chart of an atlas of $M$.
\begin{remark} If $A_k$, $k=0,\ldots r$, are smooth vector fields on the smooth manifold $M$, then the 2nd order operator
$L_0+c:= \frac{1}{2}\sum_{i=1}^r A_iA_i + A_0 +c$ is elliptic at $p\in M$ if and only if the vector fields $A_k$, with $k=1,\ldots, r$, define a set of generators of $T_pM$.
(In particular, ellipticity requires $r \geq d:= \dim M$ necessarily).
In order to prove this fact, it is sufficient to notice that $a^{ij} (p)=\sum _k\sigma _k^i(p)\sigma_k^j(p)$ is automatically positive semidefinite, hence ellipticity at $p$ is equivalent to
\begin{equation}\label{ellipticp}\sum_{k=1}^r \langle \sigma_k(p), \omega \rangle \sigma_k(p) =0 \quad \mbox{iff}\quad \omega=0 \quad \mbox{when
$\omega \in T_p^*M$}\:,\end{equation}
where $\langle \cdot, \cdot \rangle$ is the standard pairing on $T_pM \times T^*_pM$ and (\ref{ellipticp}) holds iff $\{A_j(p)\}_{j=1,\ldots,r}$ generates $T_pM$. $\hfill \blacksquare$
\end{remark}
$L_0+c$ is said to be {\bf uniformly elliptic} (with respect to the metric $g$) if there is a constant $C>0$ such that
$$\sum_{i,j=1}^d a^{ij}(x) \xi_i \xi_j \geq C\sum_{i,j=1}^d g^{ij}(x) \xi_i \xi_j \quad \mbox{for every $\xi_k\in {\mathbb{R}}$, $k=1,\ldots, d$, and every coordinate patch over $M$.}$$
It is easy to see that if the condition above is true for the local charts of an atlas of $M$ and a given $C>0$, then it is true for all local charts of $M$ for the same $C$.
\begin{remark}
It is elementary to prove that, if $L_0+c$ is elliptic and $M$ is compact, then $L_0+c$ is uniformly elliptic. \hfill $\blacksquare$
\end{remark}
In general, the space $C_c^\infty(M)$ is dense in $C_0(M)$.
\begin{proposition}\label{corollaryD} If $M$ is a smooth manifold, then $C_c^\infty(M)$
is dense in $C_0(M)$ in the norm $||\cdot||_\infty$.
\end{proposition}
\begin{proof}
See the appendix.
\end{proof}
\subsection{Generators of Feller semigroups on Riemannian manifolds}
This section is devoted to the construction of generators of Feller semigroups on $C_0(M)$ as well as to the description of their cores. In the following we shall always assume that $(M,g) $ is a smooth Riemannian manifold of bounded geometry. We start by giving the definition of some relevant subspaces of smooth functions.
\begin{definition}
Let $(M,g)$ be a manifold of bounded geometry. A function $f:M\to {\mathbb{R}}$ is said to be {\bf $C^k$-bounded} if $f\in C^k(M) $ and if, for every $r_0 \in (0, I_{(M,g)})$ and every multiindex $\alpha$ with $|\alpha |\leq k$, there is a constant $C_\alpha <+\infty$ such that $|\partial ^\alpha _xf(x)|\leq C_\alpha$ in every local Riemannian chart $(B_{r_0}^{(M,g)}(p), \exp^{-1}_{p,N})$ centered at every $p\in M$. \\
A function $f:M\to {\mathbb{R}}$ is said to be {\bf $C^\infty$-bounded} if $f$ is $C^k$-bounded for any $k\geq 0$.\\
The space of $C^k$-bounded functions on $M$ is denoted with the symbol $C^k_b(M)$ for $k=0,1,\ldots, \infty$. \hfill $\blacksquare$
\end{definition}
\begin{remark}
It is easy to prove \cite{shubin} that $f\in C^k (M)$ is $C^k$-bounded iff there exists a constant $C< +\infty$ such that the covariant derivatives satisfy $\| \|\nabla^{(g)j} f\|_g\|_\infty<C$ for all $j\leq k$.
\end{remark}
Let us consider the operator $L_0$ (\ref{Hoperator0}) and define $L_1$ as its restriction to one of the linear subspaces $D_k\subset C_0(M)$
\begin{equation}\label{domainD}
D_k:=\{f \in C_0(M)\cap C^\infty(M) \cap C_b^k (M) \:\: |\: \: L_0f\in C_0(M)\} \quad \mbox{for $k=0,1,\ldots, \infty$.}
\end{equation}
Each
$D_k$ is non-trivial and dense in $C_0(M)$, since $C_c^\infty(M) \subset D_k $ and by proposition \ref{corollaryD}. Actually, for every given $k$, $L_1$
satisfies hypotheses (a) and (b) of theorem \ref{theoremFellergenerator}; the latter can be trivially proved by direct inspection.
If we are able to prove that also hypothesis (c) of theorem \ref{theoremFellergenerator} is fulfilled (there exists a $\lambda >0$ such that $Ran (L_1-\lambda I)$ is dense in $C_0(M)$),
then theorem \ref{theoremFellergenerator} proves that $L:= \overline{L_1}$ is the generator of a Feller semigroup $(V(t))_{t\geq 0}$ on $C_0(M)$.
\begin{remark}\label{exampleRd2} In the case $M={\mathbb{R}}^d$ and the coefficients $a^{ij}, b^j$ of the differential operator \eqref{generator1} are bounded and globally Lipschitz (their smoothness is guaranteed by the assumption that the vector fields $A_k$ are smooth), probabilistic arguments \cite{RogWil} provide the existence of a Feller semigroup. The associated diffusion process is constructed in terms of the martingale solution of the stochastic differential equation \eqref{SPDE1}. In this case the representation formula \eqref{prorep} allows one to prove that the generator restricted to the space $C^\infty _c({\mathbb{R}}^d)$ is actually given by the second order operator \eqref{generator1}. \\
Analogous results can be obtained in the case where the manifold $M$ is compact, extensively studied, e.g., in \cite{IkeWat}. If $A_j$, $j=0,...,r$, are smooth vector fields, it is possible to construct a diffusion process $X=(X(t))$ solving the stochastic differential equation
$$dX(t)=\sum_{j=1}^r A_j(X(t))\circ dB^j(t) +A_0(X(t))dt$$
where $\circ$ denotes the Stratonovich stochastic integral.
The Feller semigroup $V(t):C(M)\to C(M)$ is given by $V(t)f(x)={\mathbb{E}}^x[f(X(t))]$ and its generator extends the operator \eqref{Hoperator0} (see \cite{IkeWat,Hsu} for details).
However, we stress that this technique does not directly provide a core for the generator.
\end{remark}
This section presents some sufficient conditions for the validity of hypothesis (c) in Riemannian manifolds different from ${\mathbb{R}}^d$.
\begin{definition}\label{def22}\cite{shubin} Let $(M,g)$ be a manifold of bounded geometry. A differential operator of order $n$, $P : C^\infty(M) \to C^\infty(M)$, in local coordinates,
$$(P f)(x)= \sum_{|\alpha| \leq n}P_\alpha(x) \partial^\alpha_xf$$
is said to be
{\bf $C^\infty$-bounded} if, for every $r_0 \in (0, I_{(M,g)})$ and every pair of multiindices $\alpha, \beta $, there is a constant $C_{\alpha,\beta}\geq 0$ such that $|\partial ^{\beta}_xP_\alpha(x)| \leq C_{\alpha, \beta}$ in every local Riemannian chart $(B_{r_0}^{(M,g)}(p), \exp^{-1}_{p,N})$ centered at every $p\in M$.
\end{definition}
\begin{remark} $\null$\\
{\bf(1)} It is possible to prove \cite{shubin}
that a $C^\infty$-bounded vector field $A$ on $M$ fulfills the following conditions
$$\| \|\nabla^{(g)k} A\|_g\|_\infty \leq a_k\:, \quad k=0,1,\ldots\:.$$
for some constants $a_k<+\infty$, $k=0,1,\ldots$.\\
{\bf(2)} It is possible to prove \cite{shubin} that if a vector field $A$ on $M$ is {\bf $C^\infty$-bounded},
then every differential operator given by the $p$-th power $A^p$ is $C^\infty$-bounded. Obviously, linear combinations of $C^\infty$-bounded operators are $C^\infty$-bounded operators. Therefore the operator $L_0$ (\ref{Hoperator0}) is $C^\infty$-bounded if $(M,g)$ is of bounded geometry and the smooth vector fields $A_j$ are $C^\infty$-bounded for $j=0,\ldots,r$. \\
{\bf (3)}
Every $C^\infty$ vector field on a compact Riemannian manifold is automatically $C^\infty$-bounded.
Analogously, the operator $L_0$ (\ref{Hoperator0}) is $C^\infty$-bounded in the case where the smooth Riemannian manifold $M$ is {\em compact} and the fields $\{ A_j\}_{j=0,\ldots ,r}$ are smooth.$\hfill \blacksquare$\end{remark}
From now on $\nabla^{(g)} \cdot A$ denotes the scalar field, called the {\bf covariant divergence} of $A$, completely defined in local coordinates around $p\in M$ as
$$\nabla^{(g)}\cdot A := \sum_{j=1}^d (\nabla^{(g)}_j A)^j
= \sum_{j=1}^d\left( \partial_j A^j|_p+A^j\partial_j\log \sqrt{|g|}\right)\:.$$
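As an elementary check of this formula (an example of ours), consider ${\mathbb{R}}^2\setminus\{0\}$ with polar coordinates $(\rho,\theta)$ and the flat metric $g=d\rho^2+\rho^2 d\theta^2$, so that $\sqrt{|g|}=\rho$: for the coordinate components $A^\rho, A^\theta$ of $A$ one finds
$$\nabla^{(g)}\cdot A=\partial_\rho A^\rho+\frac{A^\rho}{\rho}+\partial_\theta A^\theta\:,$$
the familiar expression of the divergence in polar coordinates.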
Let us move on to state and prove the pivotal technical result of this section, which we will use to prove that the closure $L= \overline{L_1}$ generates a Feller semigroup. Everything relies upon the following technical result proved in the appendix and based on fundamental
achievements by Shubin (Theorem 2.2 in \cite{shubin}), some of them already established in \cite{Kor} where analytic semigroups in $L^p$-spaces
are in particular studied in manifolds of bounded geometry.
\begin{proposition}\label{teoL} Let $(M,g)$ be a smooth Riemannian manifold of bounded geometry and consider
a uniformly elliptic 2nd order differential operator
$L_0 :C^\infty(M) \to C^{\infty}(M)$ of the form (\ref{Hoperator0}), where the $r\geq d$ real smooth vector fields $A_i$
are $C^\infty$-bounded and $A_0$ is defined
as
\begin{equation}A_0 :=\frac{1}{2} \sum_{i=1}^r (\nabla^{(g)} \cdot A_i) A_i\:.\label{linkA}\end{equation}
Then,
\begin{itemize}
\item[(i)] $L := \overline{L_1}$ --
with $L_1 := L_0|_{D_k}$ and $D_k$ defined in (\ref{domainD}) -- is the generator
of a Feller semigroup in $C_0(M)$ for every fixed $k=0,1, \ldots, \infty$.
\item[(ii)] Both the generator $L$ and the generated semigroup are independent of $k$.
\end{itemize}
\end{proposition}
\begin{proof}
(i) What we have to prove is nothing but that the three hypotheses of theorem \ref{theoremFellergenerator} are satisfied for $L_1 : D_k \to C_0(M)$.
Condition (a) has been established in proposition \ref{corollaryD}. Condition (b) immediately arises from the form of $L_0$ and the
ellipticity property it satisfies. Regarding (c), the pivotal result appears in the following lemma proved in the appendix.
\begin{lemma}\label{propL}
With $(M,g)$ and $A_j$ ($j=0,\ldots,r$) and $L_0$ as in the hypothesis -- in particular $A_0$
as in (\ref{linkA})-- for every $h \in C_c^\infty(M)$
and $\lambda >0$ there exists $f \in C_0(M) \cap C_b^\infty(M)$
fulfilling
\begin{equation} L_0f - \lambda f = h\:.\label{EQR2}\end{equation}
\end{lemma}
\begin{proof}
See the appendix.
\end{proof}
Now observe that, due to lemma \ref{propL}, if $\lambda>0$ and $h \in C_c^\infty(M)$, there
is $f \in C_0(M)\cap C_b^\infty(M)$ (hence $f\in D_k$ for all $k=0,1,\ldots, \infty$)
such that $L_0f = \lambda f + h$. This fact can be rephrased as $(L_1 - \lambda I)f = h$. Since $C_c^\infty(M)$ is dense in $C_0(M)$ due to proposition \ref{corollaryD}, we have proved that $Ran(L_1-\lambda I)$ is dense in $C_0(M)$ for $\lambda >0$, demonstrating that hypothesis (c) in theorem \ref{theoremFellergenerator} is satisfied as well. Let us finally prove (ii). This is a consequence of the following general lemma.
\begin{lemma}\label{lemmaMN} Let $M: D(M) \to \mathcal{B}$ and $N : D(N) \to \mathcal{B}$ be two closed densely defined operators in the Banach space $\mathcal{B}$ which are generators of corresponding strongly continuous semigroups. If $M \subset N$, then $M=N$.
\end{lemma}
\begin{proof} See the appendix.
\end{proof}
\noindent The proof ends observing that
$L_0|_{D_{k+1}} \subset L_{0}|_{D_{k}}$
so that $\overline{L_0|_{D_{k+1}}} \subset \overline{L_{0}|_{D_{k}}}$ and both operators are generators of strongly-continuous semigroups on $C_0(M)$.
The case $D_\infty$ is encompassed since, e.g., $D_\infty \subset D_1$.
\end{proof}
We can finally prove the main result of this section, by relaxing the requirement on the form of $A_0$.
\begin{theorem}\label{teoL2} Let $(M,g)$ be a smooth Riemannian manifold of bounded geometry and consider a
uniformly elliptic 2nd order differential operator $L_0 :C^\infty(M) \to C^{\infty}(M)$
of the form (\ref{Hoperator0}), where $A_0$ and the $r\geq d$ vector fields $A_i$ are real, smooth and $C^\infty$-bounded.
Then,
\begin{itemize}
\item[(i)] $L := \overline{L_1}$ --
with $L_1 := L_0|_{D_k}$ and $D_k$ defined in (\ref{domainD}) -- is the generator
of a Feller semigroup in $C_0(M)$ for every fixed $k=0,1, \ldots, \infty$.
\item[(ii)] Both the generator $L$ and the generated semigroup are independent of $k$.
\end{itemize}
\end{theorem}
\begin{proof}
(ii) has the same proof as that of (ii) in proposition \ref{teoL}.
The proof of (i) is based on the following technical result.
\begin{lemma}\label{propL2} With $(M,g)$, $A_j$ ($j=1,\ldots,r$) and $L_0$ as in the hypotheses, assume that
\begin{equation}
A_0 := \frac{1}{2}\sum_{i=1}^r (\nabla^{(g)}\cdot A_i) A_i + B \label{A0new}\:,
\end{equation}
for a real $C^\infty$-bounded vector field $B$.
If there exists $c>0$, independent of the local chart used around $x\in M$, such that
\begin{equation}\label{dominance}
\sum_{a,b=1}^d B^a(x)B^b(x)\xi_a\xi_b \leq c\sum_{a,b=1}^d \sum_{i=1}^r A^a_i(x)A^b_i(x)\xi_a\xi_b \quad \mbox{for every
$\xi_k \in {\mathbb{R}}$ and every $x\in M$}
\end{equation}
then $L := \overline{L_1}$ -- with $L_1 := L_0|_{D_k}$ and $D_k$ defined in (\ref{domainD}) -- is the generator
of a Feller semigroup in $C_0(M)$.
\end{lemma}
\begin{proof}
See the appendix
\end{proof}
In view of lemma \ref{propL2}, to prove (i) it is sufficient to prove that (\ref{dominance}) is satisfied for every choice of the real smooth $C^\infty$-bounded vector field $B$. If we think of the numbers $\xi_k$ as the components of a form $\xi \in T^*_xM$, dividing both sides by $||\xi||^2_g\neq 0$, the inequality can be rephrased as
$$\frac{|\langle B(x), \xi(x) \rangle|^2}{||\xi||_g^2}\leq c \frac{\sum_{i=1}^r |\langle A_i(x), \xi(x) \rangle|^2}{||\xi||_g^2}\:,$$
where $\langle \cdot, \cdot \rangle$ is the standard pairing on $T_xM\times T^*_xM$.
The left-hand side above satisfies
$$\frac{|\langle B(x), \xi(x) \rangle|^2}{||\xi||_g^2} \leq \frac{||B(x)||^2_g||\xi||^2_g}{||\xi||^2_g} \leq || \|B\|_g||^2_\infty < +\infty$$
whereas the right-hand side fulfils
$$\frac{\sum_{i=1}^r |\langle A_i(x), \xi(x) \rangle|^2}{||\xi||_g^2} \geq C \frac{||\xi||_g^2}{||\xi||_g^2} =C >0$$
just in view of the uniform ellipticity condition. Choosing $c:= || \|B\|_g||^2_\infty /C$, which is necessarily finite, (\ref{dominance}) is satisfied.
\end{proof}
To conclude, we prove that we can modify $L_0$ by adding a zero-order term in a certain class of continuous functions
preserving the results above.
\begin{theorem}\label{teoL3} Let $(M,g)$ be a smooth Riemannian manifold of bounded geometry and consider a
uniformly elliptic 2nd order differential operator $L_{0c} :C^\infty(M) \to C(M)$
of the form
\begin{equation}L_{0c}:= L_0 + c\:, \label{L0u}\end{equation}
where $L_0$ is the operator defined in theorem \ref{teoL2} and $c\in C^0_b(M)$, being bounded and continuous, defines
a multiplicative operator
$c\in \mathscr{L}(C_0(M))$.
Then,
\begin{itemize}
\item[(i)] Assuming additionally that $c(x) \leq 0$ for all $x\in M$, we obtain that $L := \overline{L_{1c}}$ --
with $L_{1c} := L_{0c}|_{D_k}$ and $D_k$ defined in (\ref{domainD}) -- is the generator
of a Feller semigroup in $C_0(M)$ for every fixed $k=0,1, \ldots, \infty$.
\item[(ii)] Under the condition $c(x) \leq 0$ for all $x\in M$, both the generator $L$ and the generated semigroup are independent of $k$.
\item[(iii)]
If condition $c(x) \leq 0$ for all $x\in M$ does not hold, then $L$ as in (i) is still the generator of a strongly continuous semigroup in $C_0(M)$ for every fixed $k=0,1, \ldots, \infty$, and (ii) is still valid.
\end{itemize}
\end{theorem}
\begin{proof}
(i) Let us start the proof by establishing that the multiplicative operator $-c$ is {\em accretive} (\cite{RS2} Definition on p. 240). In fact, if
$f\in C_0(M)$, let $p\in M$ be such that $|f(p)|= \sup_{x\in M} |f(x)|$.
Let us construct a {\em normalized functional} $\lambda \in C_0(M)'$ {\em tangent} to $f\in C_0(M)$ as
$$\lambda(h) := \overline{f(p)} h(p) \:, \quad h \in C_0(M)\:.$$
Trivially, $||\lambda||=||f||$ and $\lambda(f) = ||f||^2$, so that $\lambda$ is normalized and tangent to $f$, and also $\lambda((-c)f)\geq 0$
(notice that $c\leq 0$), so that $-c$ is accretive. At this juncture we can apply the lemma on p. 244 of \cite{RS2}
with $0\leq a<1/2$, $b:= \sup_M |c|$,
$A:= -\overline{L_0|_{D_k}}$, and\footnote{Notice that in \cite{RS2} semigroups are represented as $e^{-tA}$ whereas for us they are
represented as $e^{tL}$; this explains the minus sign in front of the operators.} $B:= -c \in \mathscr{L}(C_0(M))$.
Since $\overline{L_0|_{D_k}}$
generates a Feller semigroup which is a contraction semigroup by definition, we conclude from the above lemma
that $\overline{L_0|_{D_k}} + c$ is the generator of a contraction semigroup.
Since $c \in \mathscr{L}(C_0(M))$, we also have $\overline{L_0|_{D_k}} + c = \overline{(L_0+c)|_{D_k}} = \overline{L_{0c}|_{D_k}}$.
According to definition \ref{semFellerdef} of Feller semigroup, the proof of (i) ends by proving that the generated semigroup of contractions is made of positive operators.
This fact immediately arises from the Trotter product formula
$$e^{-t\overline{A+B}}f = \lim_{n\to +\infty} \left(e^{-tA/n} e^{-tB/n}\right)^n f\:,$$
i.e., Theorem X.51 in \cite{RS2}, with $A=-\overline{L_{0}|_{D_k}}$ and
$B= -c$, which is valid because $A+B$ generates a contraction semigroup as established above.
Now observe that $e^{-tA/n}$ is positive, since it is an element of a Feller semigroup, and $e^{-tB/n}$ is positive as well just because,
by direct inspection, it is nothing but the multiplicative operator with a positive function
$e^{tc(x)}$. Since the limit in the Trotter formula here is computed with respect to the norm $||\cdot||_\infty$, we find
$e^{-t\overline{A+B}}f \geq 0$ if $f\geq 0$, so that the semigroup generated by $L$ is made of positive elements and the proof
of (i) ends.
The proof of (ii) is identical
to that of (ii) in theorem \ref{teoL2}.
To prove (iii)
it is sufficient to write $c(x)=\tilde c(x) +\sup_x c(x)$ with $\tilde c=c-\sup_x c(x)$ and apply items (i) and (ii) to $L_0+\tilde c$, noting that the added constant $\sup_x c(x)$ does not affect domains and closures. The resulting semigroup $V_c(t)$ has the form $V_c(t)=e^{t\sup_x c(x)}V_{\tilde c}(t)$, where $V_{\tilde c}(t)$ is the Feller semigroup generated by $\overline{L_{0\tilde c}|_{D_k}}$.
\end{proof}
\subsection{Chernoff functions for the Feller semigroup }\label{sez3.3}
In this section we discuss how the Feller semigroup $V(t)$ generated by $L$ can be obtained by a suitable Chernoff function $S$ again constructed out of the vector fields $A_j$.
In the following we shall assume that the smooth Riemannian manifold $(M,g)$ is of bounded geometry. In particular this implies that $(M,g)$ is geodesically complete (see definition \ref{defboundedgeom} and lemma \ref{lemmaC}).
\begin{theorem}\label{teo3.1}
Let $(M,g)$ be a smooth Riemannian manifold of bounded geometry and consider a
uniformly elliptic 2nd order differential operator $L_0 :C^\infty(M) \to C^{\infty}(M)$
of the form (\ref{Hoperator0}), where $A_0$ and the $r\geq d$ vector fields $A_i$ are real, smooth and $C^\infty$-bounded. Let $c \in C_b^0(M)$ and let $L_{0c}:=L_0+c $ and
$L := \overline{L_{1c}}$ -- with $L_{1c} := L_{0c}|_{D_k}$ and $D_k$ defined in (\ref{domainD}) for $k=0,1,\ldots, \infty$. \\
For any $x\in M$, $t\geq 0$ and $f\in C_0(M)$ let us define
\begin{equation}\label{S(t)}(S(t)f)(x)=\frac{1}{4r}\sum_{j =1}^r\left(f\left(\gamma_{x,A_j}(\sqrt{2rt})\right)+
f\left(\gamma_{x,-A_j}(\sqrt{2rt})\right)\right)+\frac{1}{2}f(\gamma_{x,A_0}(2t)) + tc(x)f(x).
\end{equation}
where $\gamma_{x,A_j}:{\mathbb{R}}^+\to M$ is the integral curve of the vector field $A_j$ starting at time $t=0$ at the point $x\in M$, namely the solution of the initial value problem
\begin{equation}\label{CPAk}
\left\{ \begin{array}{ll}
\frac{d}{dt} \gamma_{x,A_j} (t)=A_j(\gamma_{x,A_j}(t)), \\
\gamma_{x,A_j} (0)=x.
\end{array} \right.
\end{equation}
Then the following holds.
\begin{enumerate}
\item For all $t\geq 0$, $S(t)(C_0(M))\subset C_0(M)$.
\item If $(V(t))_{t\geq 0}$ is the strongly continuous
semigroup on $C_0(M)$ generated by $L$ (according to theorems \ref{teoL2} and \ref{teoL3})
then for any $f\in C_0(M)$ and $T>0$ the following holds
\begin{equation}\label{convergenceformula}\lim_{n\to \infty}\sup _{t\in [0,T]}\|S(t/n)^nf-V(t)f\|=0\:.\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
We remark that the right-hand side of \eqref{S(t)} is well defined for all $t\geq 0$ since, by lemma
\ref{lemma-complete-solutions}, the maximal solution of the Cauchy problem \eqref{CPAk}
is defined for all $t\geq 0$, the manifold $(M,g)$ being geodesically complete by the assumption of bounded geometry.
Let us first assume $c=0$.
\begin{enumerate}
\item
The continuity of the functions $x\longmapsto f(\gamma_{x,A_0}(2t))$ and $x\longmapsto f(\gamma_{x,A_j}(\sqrt{2rt}))$, $j=1,\dots , r$, follows from the continuity of the maps $x\longmapsto \gamma_{x,A_j}(\tau)$ for all $j=0,\dots , r$ and $\tau \in {\mathbb{R}}^+$. Moreover, if $f\in C_0(M)$, then for any $x\in M$, $\tau\in {\mathbb{R}}^+$ and $j=0,...,r$, the map $x\longmapsto f(\gamma_{x,A_j}(\tau))$ belongs to $C_0(M)$,
proving 1 in the thesis. Indeed, given $\varepsilon >0$ there exists a compact set $K_\varepsilon$ such that $|f(y)|<\varepsilon $ for $y\in K_\varepsilon^c$. Set $c_j:=\sup _{x\in M}\|A_j(x)\|<\infty$ and consider the set $K_{\varepsilon, \tau}$ defined as the closure of the set of points $y\in M$ whose distance from $K_\varepsilon$ is not larger than $c_j\tau$:
\begin{equation}\label{ket}K_{\varepsilon, \tau}:=\overline{\{y\in M\:|\: d(y, K_\varepsilon)\leq c_j\tau \}},\end{equation}
where $d(y, K_\varepsilon):=\inf _{x\in K_\varepsilon} d(y,x)$. Since $K_\varepsilon$ is compact, it is bounded, namely it is
contained in some closed geodesical ball of finite radius $R$ centered on some $x_0\in M$. Therefore, the closed
set $K_{\varepsilon, \tau}$ is bounded as well since it is enclosed in a closed ball of radius $R+c_j\tau$ centered on $x_0$
and it is therefore compact by the Hopf--Rinow theorem because $(M,g)$ is complete. If $x \in K_{\varepsilon, \tau}^c$ then $\gamma_{x,A_j}(\tau) \in K_\varepsilon ^c$,
hence $|f(\gamma_{x,A_j}(\tau))|<\varepsilon$. Indeed, if this were not true, i.e. if $\gamma_{x,A_j}(\tau)\in K_\varepsilon$, then
$$d(x, K_\varepsilon)\leq d(x,\gamma_{x,A_j}(\tau))\leq \int_0^\tau \|\dot\gamma_{x,A_j}(s)\|ds= \int_0^\tau \|A_j(\gamma_{x,A_j}(s))\|ds\leq c_j\tau,$$
contradicting $x \in K_{\varepsilon, \tau}^c$.
\item
\begin{enumerate}
\item
First of all we prove that if $f\in C_0(M)$ then $\sup_{x\in M}|(S(t)f)(x)|\leq \sup_{x\in M}|f(x)|.$\\
Indeed, for all $x\in M$ we use the fact that function $f$ is bounded and obtain
$$
|(S(t)f)(x)|\leq\frac{1}{4r}\sum_{j =1}^r\left(\left|f\left(\gamma_{x,A_j}(\sqrt{2rt})\right)\right|+\left|f\left(\gamma_{x,-A_j}(\sqrt{2rt})\right)\right|\right)+\frac{1}{2}|f(\gamma_{x,A_0}(2t))|
$$
$$
\leq\frac{1}{4r}\sum_{j =1}^r\left(2\sup_{z\in M}|f(z)|\right) + \frac{1}{2}\sup_{z\in M}|f(z)|=\sup_{z\in M}|f(z)|.
$$
\item The mapping ${\mathbb{R}}^+\ni t\longmapsto S(t)f\in C_0(M)$ is continuous.\\
It is sufficient to show that for any $j=0,\dots, r$ the map ${\mathbb{R}}^+ \ni \tau \longmapsto S_j(\tau )f\in C_0(M)$ given by $S_j(\tau )f(x):=f(\gamma_{x,A_j}(\tau))$ is continuous in the $\sup$-norm.\\
Let $\tau _0\in {\mathbb{R}}^+$ and fix $\varepsilon >0$. Since $f\in C_0(M)$, there exists a compact set $K_\varepsilon $ such that $|f(y)| <\varepsilon/2$
for $y\in K_\varepsilon ^c$. With $c_j:=\sup _{x\in M}\|A_j(x)\|$ and considering the compact set
$K_{\varepsilon, \tau}$ defined in \eqref{ket} with $\tau =\tau _0+1$, we have that if $t\in [0, \tau_0+1]$ then
$\gamma_{x,A_j}(t)\in K^c_\varepsilon $ for any $x\in K^c_{\varepsilon, \tau_0+1}$, hence
$$|f(\gamma_{x,A_j}(\tau))- f(\gamma_{x,A_j}(\tau_0))|<\varepsilon, \qquad \forall x\in K^c_{\varepsilon, \tau_0+1}.$$
If $x\in K_{\varepsilon, \tau_0+1}$, then for $t\in [0, \tau _0+1]$ we have $\gamma_{x, A_j}(t)\in K'_{\varepsilon, \tau_0+1}$,
where $K'_{\varepsilon, \tau_0+1}$ is the compact set defined as
$$K'_{\varepsilon, \tau_0+1}=\overline{\{y\in M \:|\:d(y, K_{\varepsilon, \tau_0+1})\leq c_j(\tau_0+1) \}}.
$$
Since $f$ is continuous on $M$, it is uniformly continuous on the compact set $K'_{\varepsilon, \tau_0+1}$: for any $\varepsilon >0 $ there exists a $\delta >0$ such that $|f(x)-f(y)|<\varepsilon$ for $x,y\in K'_{\varepsilon, \tau_0+1}$ with $ d(x,y) <\delta$. If $x\in K_{\varepsilon, \tau_0+1}$ and $|\tau-\tau_0|<\min\{1,\delta /c_j\}$, then $\gamma_{x,A_j}(\tau), \gamma_{x,A_j}(\tau_0)\in K'_{\varepsilon, \tau_0+1}$ and $d(\gamma_{x,A_j}(\tau),\gamma_{x,A_j}(\tau_0))<\delta$, hence:
$$|f(\gamma_{x,A_j}(\tau))- f(\gamma_{x,A_j}(\tau_0))|<\varepsilon, \qquad \forall x\in K_{\varepsilon, \tau_0+1}.$$
\item If $\varphi$ belongs to the core $D_k$ of $L$ with $k\geq 3$ we have $$S(t)\varphi=\varphi+tL_1\varphi+o(t) \quad \mbox{as ${\mathbb{R}}^+ \ni t\to 0$
in the uniform norm}$$
-- where $D_k$ is defined in \eqref{domainD} and $L_1 := L_0|_{D_k}$ with $L_0$ defined in (\ref{Hoperator0}).\\% -- and
For fixed $x\in M$ and $j\in \{1,\dots ,r\}$ let us consider the map $t\mapsto \varphi (\gamma_{x,A_j}(t))$, which is smooth by the stated assumptions on $\varphi \in D_k$ and $A_j$. By Taylor expansion we have for $t\downarrow 0$:
\begin{align}
\varphi (\gamma_{x,A_j}(t))&=\varphi (\gamma_{x,A_j}(0))+t\frac{d}{dt}|_{t=0}\varphi (\gamma_{x,A_j}(t)) +\frac{t^2}{2}\frac{d^2}{dt^2}|_{t=0}\varphi (\gamma_{x,A_j}(t))+R_j(x,t)\\
&=\varphi (x)+t\left(A_j\varphi\right)(x)+\frac{t^2}{2}\left(A_jA_j\varphi\right) (x)+R_j(x,t), \label{Taylor1}
\end{align}
where $$R_j(x,t)=\frac{t^3}{3!}\left(A_jA_jA_j\varphi\right)(u),$$ with $u=\gamma _{x,A_j}(\xi)$, $\xi\in [0, t]$. Analogously for $j=0$ we have:
\begin{align*}
\varphi (\gamma_{x,A_0}(t))&=\varphi (\gamma_{x,A_0}(0))+t\frac{d}{dt}|_{t=0}\varphi (\gamma_{x,A_0}(t)) +R_0(x,t)\\
&=\varphi (x)+t\left(A_0\varphi\right)(x)+R_0(x,t), \label{Taylor2}
\end{align*}
with $R_0(x,t)=\frac{t^2}{2!}\left(A_0A_0\varphi\right)(u)$, where $u=\gamma _{x,A_0}(\xi)$, $\xi\in [0, t]$. Hence
\begin{align*}
S(t)\varphi (x)&= \frac{1}{4r}\sum_{j =1}^r\left(\varphi\left(\gamma_{x,A_j}(\sqrt{2rt})\right)+
\varphi\left(\gamma_{x,-A_j}(\sqrt{2rt})\right)\right)+\frac{1}{2}\varphi(\gamma_{x,A_0}(2t))\\
&=\varphi(x)+\frac{1}{4r}\sum_{j =1}^r 2rt \left(A_jA_j\varphi\right) (x) +t\left(A_0\varphi\right)(x)+ t^{3/2}\tilde R(t,x)\\
&= \varphi(x) +tL_1\varphi(x)+t^{3/2}\tilde R(t,x)
\end{align*}
where
$$\tilde R(t,x)=\sqrt t (A_0A_0 \varphi)(u_0)+\frac{\sqrt{2r}}{12}\sum_{j=1}^r\left(\left(A_jA_jA_j\varphi\right)(u_j)+\left(A_jA_jA_j\varphi\right)(u'_j)\right),$$
for suitable $u_0, u_j, u'_j\in M$, $j=1,\ldots, r$. The proof concludes by showing that $\sup_{t\in [0,1], x\in M}|\tilde R(t,x)|<\infty$. This fact arises from the finiteness of the bounds
$$\|A_0A_0 \varphi\|_\infty\:,\quad \|A_jA_jA_j\varphi\|_\infty\:,\quad j=1, \ldots, r,
$$
due to the very definition (\ref{domainD}) of $D_k$, to the assumption that the vector fields $\{A_j\}_{j=0, \ldots, r}$ are $C^\infty$-bounded, and to $\varphi \in D_k$ with $k\geq 3$.
\end{enumerate}
This concludes the proof of (2) since the conditions (1)-(4) in theorem \ref{FormulaChernova}
assuring the validity of (2) are valid
in view of the results above ((3) is trivially true).
\end{enumerate}
The case $c\neq 0$ has now an easy proof. Let $S_0$ denote the Chernoff function of $L$ with $c=0$
and let $S$
denote the analog for the case $c\neq 0$. If $f\in C_0(M)$ then $S(t)f = S_0(t)f + tcf \in C_0(M)$ because $S_0(t)f \in C_0(M)$, $f\in C_0(M)$ and
$c$ is continuous and
bounded. Hence (1) is true. Regarding (2), the estimate
$\|S(t)f\|=\|S_0(t)f+tcf\|\leq \|S_0(t)\|||f||+t\|c\|\|f\|=(1+t\sup_{x\in M}|c(x)|)||f||\leq e^{t\|c\|}||f||$
proves that condition (1) in theorem \ref{FormulaChernova} is valid. Requirement (2) is valid because $t\mapsto S(t)$ is the sum of
two continuous $\mathscr{L}(C_0(M))$-valued functions of $t$. (3) is trivially true. Condition (4) is satisfied because if $\varphi \in D_k$
with $k\geq 3$, exploiting condition (c) in (2) above, and where $L_1$ refers to the case $c=0$,
$$S(t)\varphi=S_0(t)\varphi+tc\varphi=\varphi+tL_1\varphi+o(t)+tc\varphi=\varphi+t(L_{1} + c)\varphi+o(t)
= \varphi+tL_{1c}\varphi+o(t)\:. $$
Hence theorem \ref{FormulaChernova} implies that (2) is valid.
\end{proof}
\begin{theorem} Under the assumptions of theorem \ref{teo3.1}, the following facts hold.
\begin{itemize}
\item[(1)] For the operator $L$ defined in theorem \ref{teoL3} and $S(t)$ defined in (\ref{S(t)}), we have that the classical solution\footnote{In the sense of Proposition \ref{ACPsol}.} $u$ of the Cauchy problem
\begin{equation*
\left\{ \begin{array}{l}
\frac{\partial }{\partial t}u(t,x)=Lu(t,x)\\
u(0,x)=u_0(x)
\end{array}
\right.
\end{equation*}
is given for $u_0\in D(L)$ by \begin{equation}\label{C-M-1}u(t,x)=\lim_{n\to\infty}(S(t/n)^{n }u_0)(x).\end{equation}
\item[(2)] In the case $A_0=0$ and $c=0$, an alternative equivalent form for the operator $S(t):C_0(M)\to C_0(M)$, $t\geq 0$, is:
\begin{equation}\label{S(t)-vers-2}
(S(t)f)(x)=\frac{1}{2r}\sum_{j =1}^r\left(f\left(\gamma_{x,A_j}(\sqrt{2rt})\right)+f\left(\gamma_{x,-A_j}(\sqrt{2rt})\right)\right), \qquad f\in C_0(M)
\end{equation}
\end{itemize}
\end{theorem}
\begin{proof}
Result (1) immediately arises from \eqref{convergenceformula}, which is valid for all $f\in C_0(M)$, for all $x\in M$, and all $t\geq 0$.
(2) It can be established with a proof strictly analogous to that of the corresponding statement in theorem \ref{teo3.1}.
\end{proof}
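To make the convergence \eqref{convergenceformula} concrete, here is a minimal numerical sketch (entirely our illustration, not part of the theory above): on $M={\mathbb{R}}$ take $r=1$, $A_1$ the constant unit field, $A_0=0$ and $c=0$, so that $L_0=\frac{1}{2}\frac{d^2}{dx^2}$, $V(t)$ is the heat semigroup, and the integral curves are straight lines. The grid truncation and the linear interpolation below are our ad hoc choices.
\begin{verbatim}
import numpy as np

x = np.linspace(-12.0, 12.0, 4001)       # grid truncating the real line

def S_step(dt, f):
    # One application of S(dt) from eq. (S(t)) with r = 1, A_0 = 0, c = 0.
    # Integral curves of +-A_1 are straight lines, so
    # f(gamma_{x,+-A_1}(h)) = f(x +- h), evaluated here by interpolation.
    h = np.sqrt(2.0 * dt)                # sqrt(2 r dt) with r = 1
    f_p = np.interp(x + h, x, f)         # f(gamma_{x, A_1}(h))
    f_m = np.interp(x - h, x, f)         # f(gamma_{x,-A_1}(h))
    return 0.25 * (f_p + f_m) + 0.5 * f  # the A_0-term reduces to f(x)/2

def chernoff(t, n, f):
    # The approximation S(t/n)^n f of V(t)f.
    for _ in range(n):
        f = S_step(t / n, f)
    return f

t = 0.5
u0 = np.exp(-x**2)
u_n = chernoff(t, 200, u0.copy())
u_exact = np.exp(-x**2/(1 + 2*t)) / np.sqrt(1 + 2*t)  # heat semigroup
print(np.max(np.abs(u_n - u_exact)))     # shrinks as n grows
\end{verbatim}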
\section{A probabilistic interpretation of Chernoff construction}\label{SecProbabilisticInterpretation}
The convergence result stated by the Chernoff construction can be equivalently formulated (see \cite{EN1}, Th.\ 5.2, Ch.\ III) in the following way, for all $t\geq 0$ and $f\in C_0(M)$:
$$V(t)f=\lim_{n\to\infty}(S(1/n)^{\lfloor nt\rfloor }f)\:.$$
Assuming that $c=0$, the latter formula admits a probabilistic interpretation in terms of the limit of expectations with respect to a sequence of random walks on the manifold $M$. Actually, in the following sections we shall set $c=0$ and provide three different constructions.
\subsection{A jump process on $M$}
Let $\{X_n(t)\}_{n\geq 1} $ be a sequence of jump processes on $M$ defined as
\begin{equation}\label{Xnjump}
\begin{cases}
X_n(0)\equiv x ,\\ X_n(t):=Y_n(\lfloor nt\rfloor), \quad t>0,
\end{cases}\end{equation}
where the jump chain $\{Y_n(m)\}_{m\geq 1}$ is a Markov chain with transition probabilities given (for each Borel set $B\subset M$) by
\begin{multline} {\mathbb{P}}(Y_n(m)\in B|Y_n(m-1)=y)= \\ =\frac{1}{4r}\sum_{j=1}^r\left(\delta_{\gamma_{y, A_j}\left(\sqrt{2r/n}\right)}\left(B\right) + \delta_{\gamma_{y, -A_j}\left(\sqrt{2r/n}\right)}\left(B\right)\right)+\frac{1}{2}\delta_{\gamma_{y, A_0}(2/n)}\left(B\right), \quad B\in {\mathcal{B}}(M).\end{multline}
Actually $(X_n(t))_{t\geq 0}$
is a random walk on $M$ with steps given by integral curves of the vector fields $A_k$, $k =0,\dots, r$.
Now equation \eqref{C-M-1} can be written in the following form:
\begin{equation}u(t,x)=\lim_{n\to\infty}(S(1/n)^{\lfloor nt\rfloor }u_0)(x)
=
\lim_{n\to\infty}{\mathbb{E}}[u_0(X_n(t))]\label{limit1}\end{equation}
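In the same toy setting as the numerical sketch at the end of section \ref{sez3.3} ($M={\mathbb{R}}$, $r=1$, $A_1$ the constant unit field, $A_0=0$, $c=0$), formula \eqref{limit1} can be checked by direct Monte Carlo simulation of the jump chain: each step of $Y_n$ stays put with probability $1/2$ and moves by $\pm\sqrt{2/n}$ with probability $1/4$ each. The following Python sketch is again ours.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_Xn(x0, t, n, n_paths):
    # Endpoints X_n(t): floor(n t) i.i.d. steps, each 0 w.p. 1/2
    # and +-sqrt(2/n) w.p. 1/4 each (here r = 1 and A_0 = 0).
    steps = int(np.floor(n * t))
    jumps = rng.choice([0.0, 1.0, -1.0], size=(n_paths, steps),
                       p=[0.5, 0.25, 0.25])
    return x0 + np.sqrt(2.0 / n) * jumps.sum(axis=1)

u0 = lambda y: np.exp(-y**2)
x0, t, n = 0.0, 0.5, 400
estimate = u0(sample_Xn(x0, t, n, 20_000)).mean()  # E[u0(X_n(t))]
exact = 1.0 / np.sqrt(1.0 + 2.0 * t)               # u(t,0), heat flow
print(estimate, exact)   # close for large n and many sample paths
\end{verbatim}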
Actually, the sequence of jump processes $\{X_n\}$ converges weakly to the diffusion process $(X(t))_{t\in {\mathbb{R}}^+}$ on $M$ associated to the Feller semigroup $V(t)$, as we are going to show.
Let $D_{M}[0,+\infty)$ denote the space of c\`adl\`ag $M$-valued functions over the interval $[0,+\infty)$, i.e.\ the functions which are right-continuous and admit left-hand limits. It is possible to define a distance function (i.e.\ a metric) on $D_{M}[0,+\infty)$ under which it becomes a separable metric space. The topology induced by the metric is called the {\em Skorohod topology} \cite{Bil,EthKur}.
In the following, with the symbol ${\mathcal{S}}_M$ we shall denote the corresponding Borel $\sigma$-algebra on $D_{M}[0,+\infty)$. In fact ${\mathcal{S}}_M$ coincides with the $\sigma$-algebra generated by the projection maps $\pi_t:D_{M}[0,+\infty)\to M$
\begin{equation}\label{sigmaDM}
{\mathcal{S}}_ M=\sigma (\pi_t, t\geq 0)
\end{equation}
where
\begin{equation}\label{pit}\pi_t(\gamma):= \gamma (t), \qquad \gamma \in D_{M}[0,+\infty).\end{equation}
As a consequence, a stochastic process $X=(\Omega, {\mathcal{F}} , {\mathcal{F}}_t, (X(t)), {\mathbb{P}})$ with trajectories in $D_{M}[0,+\infty)$ can be looked at as a $D_{M}[0,+\infty)-$valued random variable, i.e. as a map $X:\Omega \to D_{M}[0,+\infty)$ defined as:
$$X(\omega):=\gamma _\omega, \qquad \gamma_\omega (t):=X(t)(\omega), \qquad t\in [0, +\infty), \, \omega \in \Omega. $$
The measurability of the map $X$ from $(\Omega, {\mathcal{F}})$ to $(D_{M}[0,+\infty), {\mathcal{S}}_M)$ follows from \eqref{sigmaDM}.
We shall denote with $\mu_X$ the probability measure on ${\mathcal{S}}_M$ obtained as the pushforward of ${\mathbb{P}}$ under $X$, defined for any Borel set $I\in {\mathcal{S}}_M$ as $\mu_X(I)={\mathbb{P}}(X(\omega )\in I)$.
Given the sequence of jump processes $(X_n)$ defined on a probability space $(\Omega, {\mathcal{F}}, {\mathbb{P}})$ by \eqref{Xnjump}, let $\mu_{X_n}$ be the corresponding distribution on $(D_{M}[0,+\infty), {\mathcal{S}}_M)$. Further, let $\mu_X$ be the analogous distribution corresponding to the Feller process $X$.
\begin{theorem}\label{thconvD}Under the assumptions of theorem \ref{teo3.1},
the sequence of processes $X_n$ converges weakly in $D_M[0,+\infty)$ and its weak limit is the Feller process $X$.
\end{theorem}
\begin{proof} The proof is a direct application of \eqref{limit1} and of theorem 2.6 Ch 4 of \cite{EthKur}, see also theorem 19.25 in \cite{Kal}.\end{proof}
\subsection{A piecewise geodesic random walk }
For any $T>0$, let us consider the space $C_{M}[0,T]$ of continuous functions $\gamma :[0, T] \to M$ endowed with the topology of uniform convergence. The corresponding Borel $\sigma $-algebra ${\mathcal{B}}_M$ is generated by the coordinate projections $\pi_t$, $t\in [0,T] $ defined as above (see Eq. \eqref{pit}) \cite{Bal}.
The stochastic process $X$ associated to the Feller semigroup $V(t)$ is a diffusion process, hence it has continuous trajectories.
Let us consider the sequence of processes $(Z_n)_n$ with sample paths in $C_{M}[0,T]$, obtained by continuous interpolation of the paths of $(X_n)_n$ by means of geodesic arcs.
More precisely, the process $(Z_n(t))_{t\geq 0}$ is defined as
\begin{equation}
\begin{cases}
Z_n (0)\equiv x, \\
Z_n(m/n)\equiv X_n(m/n), \quad m\in {\mathbb{N}},\\
\quad Z_n(t)=\gamma_{X_n(m/n),X_n((m+1)/n)}(t-m/n), \: t\in [m/n, (m+1)/n]
\end{cases}
\end{equation}
where $\gamma_{x,y}(t)$ denotes an arbitrarily chosen shortest geodesic in $M$ such that $\gamma_{x,y}(0)=x$ and $\gamma_{x,y}(1/n)=y$.
Let us denote with $\mu_n$, resp. $\mu$, the Borel measure over the space $C_{M}[0,T]$ induced by the process $Z_n$, resp. $X$. The following holds.
\begin{theorem}\label{teoconvC}
Under the assumptions above, $Z_n$ converges weakly to $X$ on $C_{M}[0,T]$.
\end{theorem}
In other words, theorem \ref{teoconvC} states that the sequence of measures $\{\mu_n\}$ converges weakly to $\mu$.
Before proving theorem \ref{teoconvC} we recall some preliminary results.
\begin{definition}
Let $(M,d)$ be a separable metric space. The {\em modulus of continuity } of a function $\gamma:[0,T]\to M$ is defined for any $\delta>0$ as:
$$w(\gamma, \delta):=\sup\{d(\gamma (t),\gamma(s)), s,t\in [0,T], |t-s|<\delta\}.$$
\end{definition}
\begin{lemma}\label{lemma4}
Let $\nu_n$ be a sequence of probability measures on $D_M[0,T]$ converging weakly to a finite measure $\nu$ which is concentrated on $C_M[0,T]$. Then for any $\varepsilon >0$
\begin{equation}
\lim_{\delta \downarrow 0}\limsup_{n\in {\mathbb{N}}}\nu_n(\{\gamma\in D_M[0,T]\colon w(\gamma , \delta )>\varepsilon\})=0
\end{equation}
\end{lemma}
For a proof see \cite{SWW2007}.
\begin{proof}[Proof of theorem \ref{teoconvC}]
Let us consider the trajectories $\gamma_\omega$ of the process $Z_n$, defined as $\gamma_\omega (t):=Z_n(t)(\omega)$. Fix $\delta >0$ and take $n$ sufficiently large in such a way that $1/n<\delta$. Consider $s,t\in [0,T]$, $s<t$, $|t-s|<\delta$. We will have $s\in [m/n,(m+1)/n]$ and $t\in [m'/n,(m'+1)/n]$, with $m\leq m'$. We have:
\begin{align*}
& d(\gamma_\omega (s), \gamma _\omega (t))\\
&\leq
d\left(\gamma_\omega(s),\gamma_\omega((m+1)/n)\right)+d\left(\gamma_\omega((m+1)/n),\gamma_\omega(m'/n)\right)+ d\left(\gamma_\omega(m'/n),\gamma_\omega(t)\right)\\
&\leq d\left(\gamma_\omega(m/n),\gamma_\omega((m+1)/n)\right)+d\left(\gamma_\omega((m+1)/n),\gamma_\omega(m'/n)\right)+ d\left(\gamma_\omega(m'/n),\gamma_\omega((m'+1)/n)\right)\\
&\leq 3\max\{ d\left(\gamma_\omega(m/n),\gamma_\omega(m'/n)\right), |m/n-m'/n|<\delta\}
\end{align*}
We can then estimate the probability that the modulus of continuity of the trajectories of $Z_n$ exceeds a given $\varepsilon >0$ as
\begin{eqnarray*}
&&\mu_n \left(\{\gamma\in C_M[0,T]\colon w(\gamma ,\delta)>\varepsilon\}\right)\\
&&\leq \mu_n \left(\{\gamma\in C_M[0,T]\colon \max_m\{d(\gamma (m/n),\gamma ((m+1)/n))\}>\varepsilon/3\}\right)\\
& &=\mu_{X_n}\left(\{\gamma\in D_M[0,T]\colon
w(\gamma ,\delta)>\varepsilon /3\}\right)
\end{eqnarray*}
By theorem \ref{thconvD} and lemma \ref{lemma4}, we get for any $\varepsilon>0$
$$\lim_{\delta\downarrow 0}\limsup_n \mu_n \left(\{\gamma\in C_M[0,T]\colon w(\gamma ,\delta)>\varepsilon\}\right) =0$$
Since $Z_n(0)=x$ for any $n$, the sequence of probability measures $\{\mu_n\}$ is tight \cite{Bil} and the measure $\mu$, i.e.\ the law of $X$, is the only possible limit point.
\end{proof}
\subsection{A different interpolation scheme}
Let us consider the sequence of processes $(\tilde Z_n)_n$ with sample paths in $C_{M}[0,T]$, obtained by continuous interpolation of the paths of $(X_n)_n$ by means of integral curves of the vector fields $A_k$, $k=0,...,r$.
More precisely, having introduced a sequence of i.i.d. discrete random variables $\xi_j$ with distribution
$${\mathbb{P}}(\xi_j =0)=1/2, \qquad {\mathbb{P}}(\xi_j =k)=\frac{1}{4r}, \quad k=1,\dots, 2r,$$
and the map $\tau:\{0,\dots,2r \}\times [0,1]\to {\mathbb{R}}$ defined by
$$\tau(k, t)=\begin{cases}
2t & k=0\\
\sqrt{2rt} & k=1,\dots 2r
\end{cases}$$
the process $(\tilde Z_n(t))_{t\in {\mathbb{R}}^+}$ can be defined as
\begin{equation}\label{deftildeZ-n}
\begin{cases}
\tilde Z_n (0)\equiv x,\\
\tilde Z_n(t)= \gamma_{\tilde Z_n(m/n),\, (-1)^{\xi_m}A_{\lceil \xi_m/2\rceil}}(\tau (\xi_m, t-m/n)) \qquad t\in [m/n,(m+1)/n],
\end{cases}
\end{equation}
where, for $x\in M$ and a smooth vector field $A$ on $M$, $\gamma _{x,A}$ denotes the maximal solution of the Cauchy problem \eqref{CP-M}.
In particular the following holds:
\begin{equation*
\quad \tilde Z_n (m/n)=X_n(m/n), \;\: m\in {\mathbb{N}}.
\end{equation*}
Analogously to the case of geodesic interpolation studied in the previous section, it is possible to prove the weak convergence of $\tilde Z_n $ to $X$ on $C_M[0,T]$. Let $\tilde \mu _n$ (resp. $\mu$) be the Borel probability measure on $C_M[0,T]$ induced by the process $\tilde Z_n$ (resp. $X$).
\subsubsection{A technical interlude}
In this subsection we introduce some results that will be applied to the proof of theorem \ref{teo-conv-lastZn}.
Henceforth, if $t=
\sum_{i=1}^dt^ie_i$ and $s=
\sum_{i=1}^ds^ie_i$,
where $(e_j)_{j=1,\ldots,d}$ is the standard orthonormal basis of ${\mathbb{R}}^d$,
$$\|t\|:= \sqrt{\sum_{i=1}^d (t^i)^2}\quad\mbox{and}\quad \langle t, s\rangle := \sum_{i=1}^d t^is^i$$ respectively denote the
standard Euclidean norm and the standard inner product in ${\mathbb{R}}^d$. Furthermore, $d_{{\mathbb{R}}^d}(p,q) := \|p-q\|\in [0,+\infty)$ denotes the usual Euclidean distance of $p,q \in {\mathbb{R}}^d$.
\\
Let us start by considering the case where $M={\mathbb{R}}^d$.
\begin{proposition}\label{TRd}
Let $A\colon{\mathbb{R}}^d\to{\mathbb{R}}^d$
be a smooth vector field such that, for some $M_1,M_2 \in (0,+\infty)$,
\begin{enumerate}
\item $\|A(x)\|\leq M_1$ if $x\in {\mathbb{R}}^d$
\item the components $A^i\colon{\mathbb{R}}^d\to{\mathbb{R}}$ satisfy
$\|\nabla A^i(x)\|\leq M_2$ for all $i=1,..,d$ if $x\in {\mathbb{R}}^d$.
\end{enumerate}
Consider the unique maximal and complete (by condition 1) smooth solution $\gamma\colon {\mathbb{R}} \to \mathbb{R}^d$ of the Cauchy problem
\begin{equation}\label{CPZn}
\left\{\begin{array}{l}
\dot \gamma (t)=A(\gamma(t))\\
\gamma (0)=\gamma_0
\end{array}\right.
\end{equation}
for every $\gamma_0 \in {\mathbb{R}}^d$ and define $d_{\gamma_0}:[0,+\infty) \to {\mathbb{R}}$ as $$d_{\gamma_0}(t):= d_{{\mathbb{R}}^d}(\gamma(0), \gamma(t)).$$
Then there exists $T>0$ independent of $\gamma_0$ such that the function $d_{\gamma_0}$ is non-decreasing in $[0,T]$.
Even more, $d_{\gamma_0}$ is strictly increasing in $[0,T]$ if $A(\gamma_0)\neq 0$.
\end{proposition}
\begin{proof}
First of all let us remark that if $A(\gamma(0))=0$ then $d_{\gamma_0}(t)=d_{{\mathbb{R}}^d}(\gamma(0), \gamma(t)) =0$
and the result holds trivially for any $T>0$.
Let us therefore restrict ourselves to the case $A(\gamma(0))\neq 0$ where, by the local uniqueness
of the solutions of the Cauchy problem \eqref{CPZn},
we have that $A(\gamma(t))\neq 0$ for all $t\neq 0$.
Let $f_{\gamma_0}\colon[0,+\infty) \to \mathbb{R}$ be the smooth map $f_{\gamma_0}(t)=d_{\gamma_0}(t)^2=\|\gamma (t)-\gamma (0)\|^2$.
To prove the thesis it is enough to demonstrate that
\begin{equation} \mbox{\em if $A(\gamma_0)\neq 0$, then there exists $T>0$ independent of $\gamma_0$ such that
$f_{\gamma_0}'(t)>0$ for all $t\in (0,T]$.} \label{newstat} \end{equation}
To prove (\ref{newstat}), we start by noticing that
linearity and symmetry of the inner product in $\mathbb{R}^d$ and
the trivial identity arising from (\ref{CPZn})
\begin{equation}\gamma(t)-\gamma(u) = \int_u^t A(\gamma(s))ds\label{easyEq}\end{equation}
yield
\begin{equation}
f_{\gamma_0}(t)=\left<\int_0^t A(\gamma (s))ds,\int_0^t A(\gamma (u))du\right>=
2\int_0^t\int_0^s \left<A(\gamma (s)),A(\gamma (u))\right>duds.\nonumber
\end{equation}
The derivative $f_{\gamma_0}'(t)$ appearing in (\ref{newstat}) therefore admits the explicit form
\begin{equation} f_{\gamma_0}'(t)=2\int_0^t \left<A(\gamma (t)),A(\gamma (u))\right>du. \label{firstf}\end{equation}
The components $A^i(\gamma (u))$ ($i=1,..,d$) of the vector field $A(\gamma(u))$
can be expanded as
\begin{equation}\label{Corr-eq}A^i(\gamma (u))= A^i(\gamma (t))+\left<\nabla A^i(\xi_{i,u,t} ),\gamma (u)-\gamma(t)\right>\end{equation}
where, according to the Lagrange form of the remainder of the Taylor expansion in $\mathbb{R}^d$,
\begin{equation}\label{Corr-eq2} \xi_{i,u,t}= \gamma (t)+\theta_i (\gamma (u)-\gamma(t))\quad \mbox{with $ \theta_i\in [0,1]$.
}
\end{equation}
Plugging (\ref{Corr-eq}) in the right-hand side of (\ref{firstf}), a trivial computation leads to
\begin{equation}\label{3.3.2}f_{\gamma_0}'(t)=2\int_0^t \sum_{i=1}^d\Big(|A^i(\gamma (t))|^2+A^i(\gamma (t))\left<\nabla A^i(\xi_{i,u,t}),\gamma (u)-\gamma(t)\right>\Big)du. \end{equation}
The proof of the theorem ends by proving that there exists
$T>0$ such that,
if $0\leq t\leq T$, then
\begin{equation}\label{3.3.3}
\sum_{i=1}^d|A^i(\gamma (t))\left<\nabla A^i(\xi_{i,u,t}),\gamma (u)-\gamma(t)\right>|\stackrel{\textrm{wish to prove}}{<}
\sum_{i=1}^d |A^i(\gamma (t))|^2=\|A(\gamma (t))\|^2.
\end{equation}
Indeed, (\ref{3.3.3}) entails that the integrand in \eqref{3.3.2} -- that is the one in (\ref{firstf}) -- is strictly positive so that (\ref{newstat}) is valid
because the integrand of (\ref{firstf}) is also $u$-continuous.
To prove (\ref{3.3.3}), let us focus on its left-hand side.
It is bounded by
\begin{multline}\label{3.3.3-1}
\sum_{i=1}^d|A^i(\gamma (t))\left<\nabla A^i(\xi_{i,u,t}),\gamma (u)-\gamma(t)\right>|
\leq \sum_{i=1}^d|A^i(\gamma (t))|\ |\left<\nabla A^i(\xi_{i,u,t}),\gamma (u)-\gamma(t)\right>|\\
\leq \sum_{i=1}^d \|A(\gamma (t))\| \|\nabla A^i(\xi_{i,u,t})\| \|\gamma (u)-\gamma(t)\|
\leq d M_2 \|A(\gamma (t))\| \|\gamma (u)-\gamma(t)\|.
\end{multline}
The bound (\ref{3.3.3-1}) can be further improved by
estimating $ \|\gamma (u)-\gamma(t)\|$
with the following argument, where we
use the notation $\gamma(t)=\sum_{i=1}^d\gamma^i(t)e_i$ and
exploit again (\ref{easyEq}) and (\ref{Corr-eq})-(\ref{Corr-eq2}):
\begin{align*}
\gamma^i(u)-\gamma^i(t)&=\int_t^uA^i(\gamma (s))ds
=\int_t^uA^i(\gamma (t))ds+\int_t^u \left<\nabla A^i(\xi_{i,s,t}),\gamma (s)-\gamma (t)\right>ds\\
&= A^i(\gamma (t)) (u-t)+\int_t^u\left<\nabla A^i(\xi_{i,s,t}),\gamma (s)-\gamma (t)\right>ds. \label{in-45}
\end{align*}
Since $\|\nabla A^i(x)\|\leq M_2$ due to condition 2, we therefore have
\begin{equation*}
|\gamma^i (u)-\gamma^i(t)|\leq \|A(\gamma (t))\|(t-u)+\int_u^t M_2\|\gamma (s)-\gamma (t)\|ds,
\end{equation*}
so that
\begin{equation}\label{ineqforiter}
\|\gamma (u)-\gamma(t)\|\leq \sqrt d\left(\|A(\gamma(t))\|(t-u)+\int_u^t M_2\|\gamma (s)-\gamma (t)\|ds\right).
\end{equation}
Let us iterate this inequality for $\|\gamma (u)-\gamma(t)\|$ finding an improved
estimate in terms of $\|A(\gamma(t))\|$ and $t-u$, hence in terms of $T$ because $0\leq u\leq t\leq T$.
Let us start by applying inequality (\ref{ineqforiter}) to the term $\|\gamma (s)-\gamma (t)\|$ on the
integrand in the right-hand side of (\ref{ineqforiter}):
\begin{multline*}
\|\gamma (u)-\gamma(t)\|\leq \sqrt d\|A(\gamma(t))\|\left((t-u)
+M_2
\sqrt d\int_u^t(t-s_1)ds_1\right)\\
+(M_2
\sqrt d)^2\int_u^t \int_{s_1}^t
\|\gamma (s_2)-\gamma (t)\|ds_2
ds_1.
\end{multline*}
Applying (\ref{ineqforiter}) again, we obtain
\begin{multline*}
\|\gamma (u)-\gamma(t)\|\leq \sqrt d\|A(\gamma(t))\|\left((t-u)
+M_2
\sqrt d\int_u^t(t-s_1)ds_1+(M_2
\sqrt d)^2\int_u^t \int_{s_1}^t(t-s_2)ds_2ds_1\right)\\
+(M_2
\sqrt d)^3\int_u^t \int_{s_1}^t
\int_{s_2}^t \|\gamma (s_3)-\gamma (t)\|ds_3
ds_2
ds_1.
\end{multline*}
To state the general estimate, let us introduce the $n$-dimensional orthogonal simplex,
$$\Delta_n:=\{(s_1,...,s_n)\in [u,t]^n\colon u\leq s_1\leq...\leq s_n\leq t\}$$
which is a corner of the $n$-dimensional cube $[u,t]^n$, the edges meeting at that corner having length $t-u$.
The $n$-dimensional Lebesgue measure of $\Delta_n$ equals $|\Delta_n|=(t-u)^n/n!$ and a direct calculation shows that
$|\Delta_{n+1}|=\int_{\Delta_{n}}(t-s_n)ds_n\dots ds_1$. With this notation, the last inequality reads
\begin{multline*}
\|\gamma (u)-\gamma(t)\|\leq \sqrt d\|A(\gamma(t))\|\left(|\Delta_1|
+M_2
\sqrt d|\Delta_2|+(M_2
\sqrt d)^2|\Delta_3|\right)\\
+(M_2
\sqrt d)^3\int_{\Delta_3} \|\gamma (s_3)-\gamma (t)\|ds_3
ds_2
ds_1.
\end{multline*}
Applying (\ref{ineqforiter}) to this inequality as many times as we need for each $n\geq 1$, and recalling that
$M_2\neq 0$, we have
\begin{multline*}
\|\gamma (u)-\gamma(t)\|\leq \sqrt d\|A(\gamma(t))\|\frac{1}{M_2
\sqrt d}\left(M_2
\sqrt d|\Delta_1|
+(M_2
\sqrt d)^2|\Delta_2|+\dots+(M_2
\sqrt d)^n|\Delta_{n}|\right)\\
+(M_2
\sqrt d)^{n+1}\int_{\Delta_{n}} \|\gamma (s_{n})-\gamma (t)\|ds_{n}
\dots
ds_1,
\end{multline*}
which, after exploiting $|\Delta_n|=(t-u)^n/n!$, becomes
\begin{multline}\label{alslastest}
\|\gamma (u)-\gamma(t)\|\leq \frac{\|A(\gamma(t))\|}{M_2}\sum_{m=1}^n\frac{\left(M_2
\sqrt d(t-u)\right)^m}{m!}
+(M_2
\sqrt d)^{n+1}\int_{\Delta_{n}} \|\gamma (s_{n})-\gamma (t)\|ds_{n}
\dots
ds_1.
\end{multline}
An estimate of the remainder in this formula arises
from the trivial bound
\begin{equation}\label{derivedfrom3.3.1}
\|\gamma (t)-\gamma (u)\|=
\left\|\int _u^t A(\gamma(s))ds\right\|
\leq \int _u^t \|A(\gamma(s))\|ds
\leq M_1(t-u)\quad \mbox{if $0\leq u\leq t$}
\end{equation}
which specializes to $\|\gamma (s_{n})-\gamma (t)\|\leq M_1(t-s_n)$ in (\ref{alslastest}), yielding
\begin{multline*}
\left|(M_2 \sqrt d)^{n+1}\int_{\Delta_{n}} \|\gamma (s_{n})-\gamma (t)\|ds_{n}\dots ds_1\right|
\leq (M_2 \sqrt d)^{n+1}\int_{\Delta_{n}} M_1(t-s_n)ds_{n}\dots ds_1\\
=M_1(M_2 \sqrt d)^{n+1}|\Delta_{n+1}|=\frac{M_1\left(M_2 \sqrt d(t-u)\right)^{n+1}}{(n+1)!}\longrightarrow 0 \textrm{ as }n\to\infty.
\end{multline*}
As a consequence, taking the limit as $n\to\infty$ in (\ref{alslastest}), we finally obtain
\begin{equation}\label{alslastest-1}
\|\gamma (u)-\gamma(t)\|\leq \frac{\|A(\gamma(t))\|}{M_2}\left(e^{M_2\sqrt d(t-u)}-1\right).
\end{equation}
Combining (\ref{alslastest-1}) with (\ref{3.3.3-1}) we find
\begin{multline*}
\sum_{i=1}^d|A^i(\gamma (t))\left<\nabla A^i(\xi_{i,u,t}),\gamma (u)-\gamma(t)\right>|
\leq d M_2 \|A(\gamma (t))\| \|\gamma (u)-\gamma(t)\|\\
\leq
d M_2\|A(\gamma (t))\|\frac{\|A(\gamma(t))\|}{M_2}\left(e^{M_2\sqrt d(t-u)}-1\right)=d\|A(\gamma (t))\|^2\left(e^{M_2\sqrt d(t-u)}-1\right)\\
\leq d\|A(\gamma (t))\|^2\left(e^{M_2\sqrt dT}-1\right)\:,
\end{multline*}
where at the last step we used the fact that $\mathbb{R}\ni y\longmapsto e^y$ is monotonically increasing and that
$t-u\leq T$ because $0\leq u\leq t\leq T$.
In summary, we have established that, for all $T>0$, if $0\leq u\leq t\leq T$, then
\begin{equation}\label{semifinalest-1}
\sum_{i=1}^d|A^i(\gamma (t))\left<\nabla A^i(\xi_{i,u,t}),\gamma (u)-\gamma(t)\right>|
\leq d\|A(\gamma (t))\|^2\left(e^{M_2\sqrt dT}-1\right).
\end{equation}
This inequality is sufficient to prove (\ref{3.3.3}) concluding the proof, just choosing
$T>0$ such that \begin{equation}\label{semifinalest-11}d\|A(\gamma (t))\|^2\left(e^{M_2\sqrt dT}-1\right)<\|A(\gamma (t))\|^2\quad \mbox{if $0\leq t \leq T$}\:.\end{equation}
This is always feasible because, as observed at the beginning of the proof,
$A(\gamma(t)) \neq 0$ if $A(\gamma(0)) \neq 0$ as we supposed in (\ref{newstat}).
We can therefore divide both sides of (\ref{semifinalest-11})
by $||A(\gamma(t))|| \neq 0$, and the resulting inequality is solved (taking the constraint $T>0$ into account) by
\begin{equation}
0< T < \frac{1}{M_2\sqrt{d}}\ln \left( 1+ \frac{1}{d}\right) \label{T}
\end{equation}
Notice that this $T$ can be chosen independent of $\gamma_0=\gamma(0)$.
\end{proof}
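For instance, in dimension $d=1$ the bound \eqref{T} reads $0<T<\ln 2/M_2$, while for $d=3$ it gives $0<T<\frac{1}{\sqrt{3}\,M_2}\ln\frac{4}{3}$: in every dimension the admissible $T$ depends only on $d$ and on the gradient bound $M_2$, and neither on $M_1$ nor on the initial datum $\gamma_0$.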
This result can be extended to Riemannian manifolds $(M,g)$ of bounded geometry. Indeed, in this case the following result allows one to prove a bound for the Euclidean norm of the components of a vector field $A$ and of its covariant derivative $\nabla A$ in local normal charts in terms of their Riemannian norms $\|A\|_g$ and $\|\nabla A\|_g$.
\begin{proposition}\label{teo2} Let $(M,g)$ be a $d$-dimensional smooth Riemannian manifold of bounded geometry. If $r_0 \in (0,I_{(M,g)})$ is sufficiently small, then there exist four constants $k_1,k_2,k_3,k_4 \in [0,+\infty)$ such that for every local normal Riemannian chart centered at every $p\in M$
$(B^{(M,g)}_{r_0}(p), \exp^{-1}_{p,N})$ with
coordinates $y^1,\ldots,y^d$ and every smooth vector field $A$ on $M$, the following uniform bounds hold:
\begin{itemize}
\item[(a)] $||A(y(q))||^2 \leq k_1 ||A(q)||^2_g$ \:,
\item[(b)] $||\nabla A(y(q))||^2 \leq k_2 ||\nabla^{(g)} A(q)||^2_g + k_3 ||A(q)||^2_g + k_4 ||\nabla^{(g)} A(q)||_g ||A(q)||_g$ \:,
\end{itemize}
when $q\in B^{(M,g)}_{r_0}(p)$ (i.e. $y(q)\in B_{r_0}(0) \subset {\mathbb{R}}^d$).
\end{proposition}
\noindent Above $\nabla$ denotes the standard gradient in ${\mathbb{R}}^d$ and $||\cdot ||$ indicates the standard pointwise Euclidean norm of vectors and ${\mathbb{R}}^d$-$(1,1)$ tensors referring to their components in Cartesian coordinates $y^1,\ldots, y^d$:
$$||A(y)||^2 = \sum_{a=1}^d |A^a(y)|^2\:\quad \mbox{and}\quad ||T(y)||^2 := \sum_{a,b=1}^d |T_b^a(y)|^2\:,$$
whereas $||\cdot ||_g$ denotes the previously defined natural point-wise norm associated to the metric $g$ acting on vector fields and tensor fields of order $(1,1)$ and $\nabla^{(g)}$ is the Levi-Civita covariant derivative associated to the metric.
\begin{proof}
See the appendix.
\end{proof}
We are now in a position to state the final result which extends proposition \ref{TRd} to Riemannian manifolds of bounded geometry.
\begin{proposition}\label{teo4.5}
Let $(M,g)$
be a smooth Riemannian manifold of bounded geometry (thus geodesically complete) and $A$ a smooth vector field on $M$ such that, for some $c_1,c_2 \in (0,+\infty)$,
\begin{enumerate}
\item $\sup_{x\in M}\|A(x)\|_g \leq c_1$,
\item $\sup_{x\in M}\|\nabla^{(g)} A\|_g \leq c_2 $.
\end{enumerate}
Consider the unique maximal and complete (by condition 1) smooth solution $\gamma\colon {\mathbb{R}} \to M$ of the Cauchy problem
\begin{equation}\label{CPZn2}
\left\{\begin{array}{l}
\dot \gamma (t)=A(\gamma(t))\\
\gamma (0)=\gamma_0
\end{array}\right.
\end{equation}
for every $\gamma_0 \in M$ and define $d_{\gamma_0}:[0,+\infty) \to {\mathbb{R}}$ as $$d_{\gamma_0}(t):= d_{(M,g)}(\gamma(0), \gamma(t)).$$
Then, there exists $T>0$ independent of $\gamma_0$ such that the function $d_{\gamma_0}$ is non-decreasing in $[0,T]$.
Even more, $d_{\gamma_0}$ is strictly increasing in $[0,T]$ if $A(\gamma_0)\neq 0$.
\end{proposition}
\begin{proof}
First of all, exactly as for the case $M={\mathbb{R}}^d$, we remark that if $A(\gamma(0))=0$ then
$d_{\gamma_0}(t)=d_{(M,g)}(\gamma(0), \gamma(t)) =0$
and the result holds trivially for any $T>0$.
Let us therefore restrict ourselves to the case $A(\gamma(0))\neq 0$ where, by the local uniqueness
of the solutions of the Cauchy problem \eqref{CPZn2},
we have that $A(\gamma(t))\neq 0$ for all $t$.
Let $f_{\gamma_0}\colon[0,+\infty) \to \mathbb{R}$ be the smooth map
$f_{\gamma_0}(t)=d_{\gamma_0}(t)^2$.
To prove the thesis it is enough to demonstrate that
\begin{equation} \mbox{\em if $A(\gamma_0)\neq 0$, then there exists $T>0$ independent of $\gamma_0$ such that
$f_{\gamma_0}'(t)>0$ for all $t\in (0,T]$.} \label{newstat2} \end{equation}
Statement (\ref{newstat2}) will be demonstrated by reduction to the analogous proof in ${\mathbb{R}}^d$, here
performed in a suitable Riemannian coordinate patch centered on $\gamma(0)$. To this end it is
essential to prove that the solution $\gamma(t)$ cannot exit such a Riemannian coordinate domain.
For a given $\gamma(0)\in M$ take $r\in (0, I_{(M,g)})$ and consider the geodesic ball $B_r^{(M,g)}(\gamma(0))$.
We prove that there is $T'>0$, independent of $\gamma(0)$,
such that $\gamma(t) \in B_r^{(M,g)}(\gamma(0))$ for $t\in [0,T']$.
From the definition (\ref{defd}) of $d_{(M,g)}$ we have that
$$d_{(M,g)}(\gamma(T'),\gamma(0)) \leq \int_0^{T'} \|\dot{\gamma}(t)\| dt =
\int_0^{T'} \|A(\gamma(t))\| dt \leq \int_0^{T'} c_1 dt =T' c_1\:.$$
We conclude that, defining $T' := r/c_1$, we have that $\gamma(t) \in B_r^{(M,g)}(\gamma(0))$ for $t\in [0,T')$ as wanted.
We henceforth restrict our attention to the ball $B_r^{(M,g)}(\gamma(0))$, since the curve cannot exit it if $t\in [0,T')$,
looking for $T \in (0,T')$ satisfying (\ref{newstat2}). We can describe the curve $\gamma$ in Riemannian coordinates
$y^1,\ldots, y^d$ centered on $\gamma(0)$ inside the ball $B_r(0)\subset {\mathbb{R}}^d$, taking advantage of the results
already proved in ${\mathbb{R}}^d$ in proposition \ref{TRd}. Now, the crucial observation is that, due to (\ref{distanceG})
and noticing that $\gamma(0)$ coincides with the origin $0$ of ${\mathbb{R}}^d$ when describing it in Riemannian
coordinates $y^1,\ldots, y^d$, we have that
$$d_{\gamma(0)}(t) = ||\gamma(t)-\gamma(0)||\:,$$
where the norm is the Euclidean one in ${\mathbb{R}}^d$ when describing the curve $\gamma$ in coordinates
$\gamma(t) \equiv (y^1(t), \ldots, y^d(t))$. From now on the proof of (\ref{newstat2}) is identical to that
of (\ref{newstat}), using the fact that, in the said coordinate patch,
conditions 1 and 2 in proposition \ref{TRd} hold for $x \in B_r(0)$, provided the initial radius $r=r_0$ is chosen sufficiently
small that proposition \ref{teo2} is valid (observe that this choice is independent of $\gamma(0)$).
As a matter of fact, with the said $r_0$, taking advantage of (a) and (b) in proposition \ref{teo2}, we can choose
$$M_1\geq \sqrt{k_1}c_1\quad \mbox{and}\quad M_2 \geq \sqrt{k_2c_2^2+ k_3c_1^2 + k_4c_1c_2}\:.$$
With the proof of proposition \ref{TRd} and $M_1,M_2$ as above (taking $M_2>0$ as in the proof of proposition \ref{TRd}), the wanted
$T$ is every $T \in (0,T')$ which also satisfies (\ref{T}). It is clear from the procedure that $T$ can be chosen independent of $\gamma(0)$.
\end{proof}
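\noindent A simple example illustrating the uniformity in $\gamma_0$: on the round sphere $S^2$, let $A$ be the Killing field generating the rotations about a fixed axis; it is smooth with $\|A\|_g$ and $\|\nabla^{(g)}A\|_g$ bounded. Every integral curve runs along a circle of latitude with period $2\pi$, and $d_{\gamma_0}(t)$ increases until $\gamma(t)$ reaches the point of that circle farthest from $\gamma_0$, after which it decreases. Since this happens at parameter $t=\pi$ for every starting point, $T=\pi$ works uniformly in $\gamma_0$, in agreement with the proposition, and $d_{\gamma_0}$ is strictly increasing on $[0,\pi]$ exactly when $A(\gamma_0)\neq 0$, i.e.\ off the rotation axis.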
\subsubsection{Weak convergence of the sequence $\tilde Z_n$ to $X$}
Coming back to the sequence $\tilde Z_n$ of random walks defined in \eqref{deftildeZ-n}, the results of proposition \ref{teo4.5} allow one to prove that, for any $t>0$, the sequence of measures $\tilde \mu _n$ on $(C([0,t], M), {\mathcal{B}}(C([0,t], M)))$ induced by $\tilde Z_n$ converges weakly to the measure $\mu$ induced by the diffusion process $X$.
\begin{theorem}\label{teo-conv-lastZn}
Under the assumptions of theorem \ref{teo3.1}, the sequence of measures $\tilde\mu _n$ on $(C([0,t], M), {\mathcal{B}}(C([0,t], M)))$ induced by the random walks $\tilde Z_n$ defined by \eqref{deftildeZ-n} converges weakly to the measure $\mu$ on $(C([0,t], M), {\mathcal{B}}(C([0,t], M)))$ induced by the diffusion process $X$ associated with the elliptic operator $L$.
\end{theorem}
\begin{proof}
Since by assumption $(M,g)$ is of bounded geometry and the vector fields $\{A_k\}_{k=0,\ldots , r}$ are $C^\infty$-bounded, they satisfy the assumptions of proposition \ref{teo4.5}. In particular there exist two constants $c_1,c_2\in {\mathbb{R}}^+$ such that for all $k=0,\dots, r$
$$\sup_{x\in M}\|A_k(x)\|_g \leq c_1,\quad
\sup_{x\in M}\|\nabla^{(g)} A_k\|_g \leq c_2, $$
and there exists a $T>0$ such that for all $k=0,\dots, r$ and $x\in M$ the functions $d^k:{\mathbb{R}}^+\to{\mathbb{R}}$ defined as
$d^k(t):=d(x, \gamma_{x,\pm A_k}(t))$ are non-decreasing for $t\in [0,T]$, with $\gamma_{x,A}$ denoting the maximal solution of the Cauchy problem \eqref{CP-M}.
The main argument is now completely similar to the one in the proof of theorem \ref{teoconvC}.
Let us consider the trajectories $\gamma_\omega$ of the process $\tilde Z_n$, defined as $\gamma_\omega (t):=\tilde Z_n(t)(\omega)$.
By proposition \ref{teo4.5} there exists $T>0$ such that for any $x\in M$ we have $d(x,\gamma_x(t))\leq d(x,\gamma_x(t'))$ for all $0\leq t\leq t'\leq T$, where $\gamma _x:[0,+\infty )\to M$ is the maximal solution of the Cauchy problem \eqref{CPZn2}.
Fix $\delta >0$ and take $n$ sufficiently large in such a way that $1/n<\min (\delta, T)$. Consider $s,t\in [0,T]$, $s<t$, $|t-s|<\delta$. We will have $s\in [m/n,(m+1)/n]$ and $t\in [m'/n,(m'+1)/n]$, with $m\leq m'$, hence:
\begin{align*}
& d(\gamma_\omega (s), \gamma _\omega (t))\\
&\leq
d\left(\gamma_\omega(s),\gamma_\omega((m+1)/n)\right)+d\left(\gamma_\omega((m+1)/n),\gamma_\omega(m'/n)\right)+ d\left(\gamma_\omega(m'/n),\gamma_\omega(t)\right)\\
&\leq d\left(\gamma_\omega(m/n),\gamma_\omega((m+1)/n)\right)+d\left(\gamma_\omega((m+1)/n),\gamma_\omega(m'/n)\right)+ d\left(\gamma_\omega(m'/n),\gamma_\omega((m'+1)/n)\right)\\
&\leq 3\max\{ d\left(\gamma_\omega(l/n),\gamma_\omega(l'/n)\right)\colon |l/n-l'/n|<\delta\}\:.
\end{align*}
The probability that the modulus of continuity of the trajectories of $\tilde Z_n$ exceeds a given $\varepsilon >0$ can be estimated by
\begin{eqnarray*}
&&\mu_n \left(\{\gamma\in C_M[0,T]\colon w(\gamma ,\delta)>\varepsilon\}\right)\\
&&\leq \mu_n \left(\{\gamma\in C_M[0,T]\colon \max\{d(\gamma (l/n),\gamma (l'/n))\colon |l/n-l'/n|<\delta\}>\varepsilon/3\}\right)\\
& &=\mu_{X_n}\left(\{\gamma\in D_M[0,T]\colon
w(\gamma ,\delta)>\varepsilon /3\}\right)
\end{eqnarray*}
By theorem \ref{thconvD} and lemma \ref{lemma4}, we get for any $\varepsilon>0$
$$\lim_{\delta\downarrow 0}\limsup_n \mu_n \left(\{\gamma\in C_M[0,T]\colon w(\gamma ,\delta)>\varepsilon\}\right) =0$$
Since $\tilde Z_n(0)=x$ for any $n$, the sequence of probability measures $\{\mu_n\}$ is tight \cite{Bil} and the measure $\mu$, i.e.\ the law of $X$, is the only possible limit point.
\end{proof}
\section{Heat equation and Brownian motion on parallelizable manifolds }\label{sez5}
The results of the previous sections can also be applied to the construction of the Brownian motion on $M$. Here we shall assume that the manifold $M$ is {\bf parallelizable}, i.e. that there exist smooth vector fields $\{e_k\}_{k =1,...,d}$ such that for any $x\in M$ the vectors $\{e_k \}_{k=1,...,d}$ provide a linear basis of $T_xM$.
Examples of such manifolds are e.g. the spheres $S^1$, $S^3$, $S^7$ and Lie groups as well as orientable 3-manifolds.
Without loss of generality, we can take
$\{e_k \}_{k =1,...,d}$ in such a way that for any $x\in M$ the vectors $\{e_k\}_{k=1,...,d}$ are orthonormal with respect to the metric tensor $g$. Further, given a local coordinate neighborhood $U$, the components $e_k^i$ of the vectors $e_k$ with respect to the local basis $\partial _i:=\frac{\partial}{\partial x^i}$ satisfy the following identity:
$$\sum _{k=1}^de_k^i(x)e_k^j(x)=g^{ij}(x)\:.$$
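\noindent As a concrete example (a standard fact recalled only for illustration), the sphere $S^3=\{x\in {\mathbb{R}}^4 : \|x\|=1\}$ is parallelized by the three vector fields obtained from quaternionic multiplication,
$$e_1(x)=(-x^1,x^0,-x^3,x^2)\:,\quad e_2(x)=(-x^2,x^3,x^0,-x^1)\:,\quad e_3(x)=(-x^3,-x^2,x^1,x^0)\:,$$
which are everywhere tangent to $S^3$, orthonormal for the round metric, and hence satisfy the identity above.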
Let us consider the Laplace-Beltrami operator $L_0:=\Delta_{LB}$ on $M$ defined in local coordinates on the smooth maps $u\in C^\infty(M)$ as:
$$ \Delta_{LB}u=\sum_{i,j=1}^dg^{ij}\nabla^{(g)}_i\nabla^{(g)}_ju\:,$$
or, more explicitly
$$(\Delta_{LB}u)(x)=\sum _{i,j=1}^dg^{ij}(x)\left(\frac{\partial^2u}{\partial x^i\partial x^j}(x)-\sum_{k=1}^d\Gamma_{ij}^k\frac{\partial u}{\partial x^k}(x)\right).$$
Under suitable hypotheses, the results of previous sections can be applied to $\Delta_{LB}$ providing on the one hand the existence of an associated Feller semigroup - the {\em heat semigroup} - in $C_0(M)$ and, on the other hand, a Chernoff approximation in terms of translation operators of the form \eqref{S(t)} or \eqref{eqshift1-hm}. From the probabilistic point of view, these results yield also an approximation for the Brownian motion on $M$, i.e. the diffusion process associated to the heat semigroup, in terms of the weak limit of sequences of different types of random walks on $M$.
More precisely we have the following result.
\begin{theorem} Let $(M,g)$ be a smooth Riemannian manifold of bounded geometry.
Then the closure in $C_0(M)$ of $\Delta_{LB}|_{D_k}$, where $D_k$ is defined in (\ref{domainD}) with $L_0:= \frac{1}{2}\Delta_{LB}$, is the generator of a (unique) Feller semigroup on $C_0(M)$. Both the generator and the semigroup are independent of $k=0,1, \ldots$.
\end{theorem}
\begin{proof} Since $(M,g)$ is of bounded geometry, $-\Delta_{LB}$ is $C^\infty$-bounded; furthermore
$\Delta_{LB}|_{C_c^\infty}$ is symmetric and $-\Delta_{LB}|_{C_c^\infty}\geq 0$.
Finally $-\Delta_{LB}$ is automatically uniformly elliptic since the matrix defining its principal symbol is nothing but the metric $g$.
Hence $\Delta_{LB}$ enjoys exactly the same properties as those of the operator $L_0$ we used in the proof of
lemma \ref{propL} and
proposition \ref{teoL}.
The proof for $\Delta_{LB}$ is therefore identical.
\end{proof}
\subsection{An approximation in terms of random walk with piecewise geodesic paths }
\begin{lemma}\label{lemma-fin}Let $(M,g)$ be a smooth parallelizable Riemannian manifold of bounded geometry.
For each $x\in M$, $t\geq 0$, $f\in C_0(M)$ set
\begin{equation}\label{eqshift1-hm}
(S(t)f)(x)=\frac{1}{2d}\sum_{k=1}^d\bigg(f\left(\gamma_{x,\sqrt{d }e_k}(\sqrt t)\right)+f\left(\gamma_{x,-\sqrt{d }e_k}(\sqrt t)\right)\bigg)
\end{equation}
where $\gamma _{x,v}$ denotes the geodesic starting at time 0 at the point $x\in M$ with initial velocity $v\in T_xM$. Further let $ L_0:C^\infty(M)\to C^\infty(M)$ be the differential operator
$ L _0=\frac{1}{2}
\Delta_{LB}$ and let $L_1:=L_0|_D$, where $D$ is given by \eqref{domainD}. \\
Then, with respect to the norm $\|f\|=\sup_{x\in M}|f(x)|$, the following holds:
\begin{itemize}
\item[(I)] for each $t\geq 0$ and $f\in C_0(M)$ we have $S(t)f\in C_0(M)$ and $\|S(t)f\|\leq \|f\|$.
\item[(II)] for each $f\in D_k$, with $k\geq 3$, we have $\lim_{t\to+0}\|S(t)f-f-tL_1f\|/t=0$.
\item[(III)] for each $f\in C_0(M)$ and each $t_0\geq 0$ we have $\lim\limits_{t\to t_0}\|S(t)f- S(t_0)f\|=0$.
\end{itemize}
\end{lemma}
\begin{proof}
First of all we remark that under the stated assumptions the manifold is geodesically complete. Indeed, this follows from the bounded geometry assumption and lemma \ref{lemmaC}.\\
The proof of I) and III) is completely analogous to the proof of points 2., 3a. and 3b. of theorem \ref{teo3.1}. We can restrict ourselves to prove point II).\\
For $t\downarrow 0$, we have
$$f(\gamma _{x,v}(t))=f(x)+vf(x)t+\frac{1}{2}\frac{d^2}{ds^2}f(\gamma _{x,v}(s))_{|s=0}t^2+\frac{t^3}{3!}R(t,x),$$
with $R(t,x)=\frac{d^3}{ds^3}f(\gamma _{x,v}(s))_{|s=u}$, $u\in [0,t]$.
In particular, by the geodesic equation
\begin{equation}\label{geod-eq}
{\ddot\gamma}^k _{x,v}(t)=-\Gamma_{ij}^k\dot\gamma^i _{x,v}(t)\dot\gamma^j _{x,v}(t),
\end{equation}
we obtain
\begin{align*}
\frac{d^2}{dt^2}f(\gamma _{x,v}(t))&=\sum_{i,j}\partial ^2_{ij}f(\gamma _{x,v}(t))\dot\gamma^i _{x,v}(t)\dot\gamma^j _{x,v}(t)+\sum_{i}\partial _{i}f(\gamma _{x,v}(t))\ddot\gamma^i _{x,v}(t),\\
&=\sum_{i,j}\partial ^2_{ij}f(\gamma _{x,v}(t))\dot\gamma^i _{x,v}(t)\dot\gamma^j _{x,v}(t)-\sum_{i,j,k}\partial _{k}f(\gamma _{x,v}(t))\Gamma_{ij}^k\dot\gamma^i _{x,v}(t)\dot\gamma^j _{x,v}(t).\\
\end{align*}
Analogously,
\begin{equation}\label{trdDerivative}\frac{d^3}{dt^3}f(\gamma _{x,v}(t))=
\left((2\Gamma_{mj}^i\Gamma_{lk}^m-\partial _l\Gamma_{kj}^i)\partial _i f+\partial _{lkj}f+3\Gamma^i_{kl}\partial _{ij}f\right)
\dot\gamma^l _{x,v}(t)\dot\gamma^k _{x,v}(t)\dot\gamma^j _{x,v}(t),
\end{equation}
(where, for notational simplicity, we have used the summation convention over repeated indices).
Hence, by using the identity $\sum_k e_k ^ie_k ^j=g(x)^{ij}$:
\begin{align*}
S(t)f(x)&=
f(x)+\frac{1}{2}\sum_{k=1}^d\left(\sum_{i,j}\partial ^2_{ij}f(x)e_k^ie_k ^j -\sum_{l,i,j}\partial _{l}f(x)\Gamma^l_{ij}e_k^ie_k ^j\right)t+t^{3/2}R(t,x)\\
&=
f(x)+tL_1f(x)+t^{3/2}R(t,x),\\
\end{align*}
with
$$R(t,x)= \frac{1}{12d}\sum_{k=1}^d \left(\frac{d^3}{dt^3}f(\gamma _{x,\sqrt d e_k}(t))_{|t=u_k}+\frac{d^3}{dt^3}f(\gamma _{x,-\sqrt d e_k}(t))_{|t=u'_k}\right)$$
with $u_k, u'_k\in [0, \sqrt t]$, $k=1, \ldots , d$, and $\frac{d^3}{dt^3}f(\gamma _{x,\sqrt d e_k}(t))$ is given by \eqref{trdDerivative}.\\
Let us take an $r_0\in (0, I_{(M,g)})$ sufficiently small in such a way that the thesis of proposition \ref{teo2} holds and consider an atlas made of local normal Riemannian charts $(B^{(M,g)}_{r_0}(p), \exp^{-1}_{p,N})$.
By the assumption that $(M,g)$ is of bounded geometry, estimate \eqref{estimateg3}, the bound
$$|\dot \gamma_{x,v}^i(t) |\leq \sqrt{\sum_{i=1}^d |\dot \gamma_{x,v}^i(t) |^2} \leq \sqrt{k_1} \|v\|_g$$
resulting from statement (a) of proposition \ref{teo2} and by the geodesic equation \eqref{geod-eq}, and the condition $f\in D_k$ with $k\geq 3$,
we obtain: $$\sup_{t\in [0,1], x \in M}|R(t,x)|<\infty,$$ which yields II.
\end{proof}
\begin{corollary}\label{corollaryheat}
Under the assumptions of lemma \ref{lemma-fin} the closure in $C_0(M)$ of $L_1$ is the generator of a Feller semigroup $V$ and for any $f\in C_0(M)$ and $T>0$:
\begin{equation}\label{convergenceformulaheat}\lim_{n\to \infty}\sup _{t\in [0,T]}\|S(t/n)^nf-V(t)f\|=0\:.\end{equation}
\end{corollary}
The heat semigroup $V$ provides a solution of the heat equation on $M$
\begin{equation}\label{CauchyProblemheat}
\left\{ \begin{array}{l}
\frac{\partial }{\partial t}u(t,x)=\frac{1}{2}\Delta_{LB}u(t,x)\\
u(0,x)=u_0(x)
\end{array}\right.\end{equation}
in the sense that if $u_0\in D(L)$ then $u(t):=V(t)u_0\in D(L)$ and $\frac{d}{dt}u(t)=Lu(t)$ in the strong sense.\par
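\noindent To illustrate \eqref{convergenceformulaheat} in the simplest case, consider $M=S^1$ with the flat metric, $d=1$ and $e_1=\frac{d}{d\theta}$, so that the geodesics are rotations and \eqref{eqshift1-hm} reduces to $(S(t)f)(\theta)=\frac{1}{2}\left(f(\theta+\sqrt t)+f(\theta -\sqrt t)\right)$. On the Fourier mode $f_k(\theta)=e^{ik\theta}$ one finds $S(t)f_k=\cos(k\sqrt t)\,f_k$, whence
$$S(t/n)^nf_k=\cos\left(k\sqrt{t/n}\right)^nf_k=\left(1-\frac{k^2t}{2n}+O(n^{-2})\right)^nf_k\longrightarrow e^{-k^2t/2}f_k=V(t)f_k \quad \mbox{as } n\to\infty\:,$$
which is exactly the action of the heat semigroup generated by $\frac{1}{2}\Delta_{LB}=\frac{1}{2}\frac{d^2}{d\theta^2}$ on that mode.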
Analogously to the case of diffusion processes on manifolds, the approximation result stated in corollary \ref{corollaryheat} admits a probabilistic interpretation.
Indeed, we can still define a sequence of random walks on $M$ with steps given by geodesic arcs according to the following construction.
For any $n\in {\mathbb{N}}$, let $X_n $ be a jump process defined as
$$ X_n(0)=x, \qquad X_n(t):=Y_n(\lfloor nt\rfloor),$$
where $\{Y_n(m)\}_m$ is a Markov chain with transition probabilities
\begin{multline} {\mathbb{P}}(Y_n(m)\in I|Y_n(m-1)=y) =\frac{1}{2d}\sum_{k=1}^d\left(\delta_{\gamma_{y, \sqrt{d}e_k(y)}(\sqrt{1/n})}\left(I\right) + \delta_{\gamma_{y, -\sqrt{d}e_k(y)}(\sqrt{1/n})}\left(I\right)\right), \quad I\in {\mathcal{B}}(M).\end{multline}
Analogously, let $(Z_n)$ be the sequence of processes with continuous paths obtained from $X_n$ by geodesic interpolation, namely:
$$ Z_n(0)=x, \quad Z_n(m/n)=X_n(m/n), \quad Z_n(t)=\gamma _{X_n(m/n),X_n((m+1)/n)}(t-m/n), \: t\in [m/n,(m+1)/n]$$
where $\gamma_{x,y}$ is the geodesic such that $\gamma_{x,y}(0)=x$ and $\gamma_{x,y}(1/n)=y$.
Denoting by $X$ the diffusion process on $M$ associated with the semigroup generated by the operator $L=\bar L_1$, we have the following result.
\begin{theorem}\label{teo43} Under the assumption of corollary \ref{corollaryheat},
for any $T>0$, $X_n$ converges weakly to $X$ in $D_M[0,T]$ and $Z_n$ converges weakly to $X$ in $C_M[0,T]$.
\end{theorem}
The proof is completely similar to the proofs of theorems \ref{thconvD} and \ref{teoconvC}.
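\noindent As a minimal numerical sketch of this construction (ours, purely illustrative), take $M=S^1$ with the flat metric, $d=1$ and $e_1 = d/d\theta$, so that each step of the chain $Y_n$ is a rotation by $\pm\sqrt{1/n}$ with probability $1/2$ each; by theorem \ref{teo43} the interpolated walk converges weakly to the Brownian motion generated by $\frac{1}{2}\Delta_{LB}$, and hence $\mathbb{E}[\cos(k\, Z_n(T))]$ should approach $e^{-k^2T/2}\cos(k\theta_0)$:
\begin{verbatim}
import numpy as np

# Geodesic random walk on the circle S^1: floor(n*T) steps, each moving
# by +-sqrt(1/n) along the geodesic (a rotation), signs chosen with
# probability 1/2; we compare E[cos(k * Z_n(T))] with exp(-k^2*T/2).
rng = np.random.default_rng(0)

def endpoint(n, T=1.0, theta0=0.0):
    signs = rng.choice([-1.0, 1.0], size=int(n * T))
    return np.mod(theta0 + np.sqrt(1.0 / n) * signs.sum(), 2 * np.pi)

k, T, n, m = 1, 1.0, 400, 20000
est = np.mean([np.cos(k * endpoint(n, T)) for _ in range(m)])
print(est, np.exp(-k**2 * T / 2))   # both close to 0.6065
\end{verbatim}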
\subsection{An approximation in terms of random walk with steps along integral curves of the parallelizing vector fields }
In the case where the parallelizing vector fields $e_1,\ldots,e_d$ of the manifold $(M,g)$
(simultaneously of bounded geometry and parallelizable)
are $C^\infty$-bounded, we can
view $\Delta_{LB}$ as a subcase of the operator $L_0$ discussed in Section \ref{sec3} and recast all the discussion therein using the paths constructed out of the integral
lines of the fields $e_k$ instead of the geodesics. In fact, since $\sum_{i=1}^d e_i^a(x) e_i^b(x) = g^{ab}(x)$ and using the fact that
$\nabla^{(g)}_k g^{ab} =0$, we can write
$$\Delta_{LB} = \sum_{a,b=1}^d g^{ab} \nabla_a^{(g)}\nabla_b^{(g)} = \sum_{a,b=1}^d \nabla_a^{(g)} g^{ab} \nabla_b^{(g)}
= \sum_{a,b=1}^d \nabla_a^{(g)} \sum_{i=1}^d e^a_i e^b_i \nabla_b^{(g)} =
\sum_{i=1}^d \sum_{a,b=1}^d \nabla_a^{(g)} e^a_i e^b_i \nabla_b^{(g)} $$
$$= \sum_{i=1}^d \sum_{a,b=1}^d e^a_i \nabla_a^{(g)} e^b_i \nabla_b^{(g)} + \sum_{i=1}^d (\nabla^{(g)}\cdot e_i) e_i\:. $$
In other words $\Delta_{LB}$ is the operator $L_0$ in (\ref{Hoperator0}) generated by the vector fields $e_1,\ldots, e_d$, with a suitable choice for
$e_0$ since,
if $f \in C^\infty(M)$,
$$ (\Delta_{LB} f)(x) = \sum_{i=1}^d e_i(e_i f)(x)+ (e_0 f)(x) \quad
\mbox{where}\quad e_0 := \sum_{i=1}^d (\nabla^{(g)}\cdot e_i) e_i\:. $$
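\noindent For instance, on $S^1$ one has $e_1=d/d\theta$ with $\nabla^{(g)}\cdot e_1=0$, so that $e_0=0$ and $\Delta_{LB}=e_1e_1$. Similarly, for the quaternionic frame on $S^3$ recalled above, the fields $e_1,e_2,e_3$ generate one-parameter groups of isometries of the round metric, hence are divergence-free, and $\Delta_{LB}=\sum_{i=1}^3e_ie_i$ with $e_0=0$.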
In this case theorem \ref{teo-conv-lastZn} holds, yielding the Brownian motion on $M$, i.e. the diffusion process associated with the Laplace-Beltrami operator $\Delta_{LB}$, as the weak limit of a sequence of random walks $\{\tilde Z_n\}$ of the form \eqref{deftildeZ-n}, with steps constructed out of integral curves of the vector fields $\{e_k\}_{k=1,\ldots, d}$. This result can be rephrased in the following form.
\begin{theorem}\label{teo44}
Let $(M,g)$ be a smooth parallelizable manifold of bounded geometry admitting a set of parallelizing vector fields $e_1,\ldots,e_d$ which are $C^\infty$-bounded. Then the Wiener measure $\mu$ on $(C([0,t], M), {\mathcal{B}}(C([0,t], M)))$, i.e. the law of the diffusion process associated to the Laplace-Beltrami operator $\Delta _{LB}$, is the weak limit of the sequence of probability measures $\tilde\mu _n$ on $(C([0,t], M), {\mathcal{B}}(C([0,t], M)))$ induced by the random walks $\tilde Z_n$ defined by \eqref{deftildeZ-n} with $A_k=e_k$.
\end{theorem}
\section{Acknowledgments}
We are grateful to Sergio Albeverio, Fernanda Florido-Calvo, Christian G\'erard, Simone Murro, and Andrea Pugliese for useful discussions,
suggestions, and for having pointed out relevant references to us. The financial support of CIRM (Centro Internazionale per la Ricerca Matematica) - FBK (Fondazione Bruno Kessler), is gratefully acknowledged. Ivan Remizov's work is partially supported by Laboratory of Dynamical
Systems and Applications NRU HSE, of the Ministry of science and higher education of
the RF grant ag. No 075-15-2019-1931.
O.G. Smolyanov acknowledges the financial support of the grant "Fundamental problems of mechanics and mathematics" of Lomonosov Moscow State University and also the financial support
of Moscow Institute of Physics and Technology, within the state program to support the leading Russian Universities.
\section{Proof of some technical propositions}
\noindent{\bf Proof of Lemma \ref{lemmaC}}. Suppose there is a maximal geodesic $\gamma : I\ni t \to \gamma(t) \in M$,
where $t$ is the arc-length parameter along $\gamma$ used as its affine parameter,
such that $\sup I = \omega <+\infty$ (the case $-\infty < \inf I$ is analogous). Let $\{t_n\}_{n\in \mathbb{N}} \subset I$ be an increasing sequence such that $t_n \to \omega$ as $n\to +\infty$.
Consider an element $t_n$.
If there were an open
ball $B_n\subset T_{\gamma(t_n)}M$ centered at the origin and of radius $r > \omega- t_n$ on which the exponential map $\exp_{\gamma(t_n)}: B_n
\to M$
is a diffeomorphism onto its image, then
$B_n$ would include in particular the tangent vector of $\gamma$ at $\gamma(t_n)$ and also a longer parallel vector. As a consequence
$\gamma$ could be extended to a longer geodesic. Since this is not possible, we conclude that $I_{(M,g)}(\gamma(t_n)) < \omega- t_n$. In turn, this implies $0\leq I_{(M,g)} \leq \inf_{n \in \mathbb{N}}I_{(M,g)}(\gamma(t_n))=0$, whereas $I_{(M,g)}>0$ by hypothesis. Hence all maximal geodesics must be complete. The last statement immediately arises from
Hopf-Rinow's theorem.
$\hfill \Box$\\
\noindent{\bf Proof of Lemma \ref{CPVF}}.
Let $\gamma:(a,b)\to M$ be a maximal solution of \eqref{CPVF} and let us assume, by contradiction, that $b <+\infty$. Consider a $t_0\in (a,b)$ and let $f:[t_0,b)\to {\mathbb{R}}$ be the continuous function defined as $$f(t):=d(\gamma(t),\gamma(t_0))\, ,$$
where $d:= d_{(M,g)}$ is the above defined distance induced by the Riemannian metric. Since we have assumed that $ b<\infty$, the function $f$ cannot be bounded on $[t_0,b)$. Indeed, if $f$ were bounded, then there would exist an $R>0$ such that $\gamma (t)\in B_R(\gamma(t_0))$ for all $t\in [t_0, b)$, where $B_R(\gamma(t_0))$ denotes the closed ball with radius $R$ and center $\gamma(t_0)$. On the other hand, under the stated assumptions on $M$,
Hopf-Rinow theorem assures the compactness of the closed metric balls. By a classical result (see, e.g., lemma 56, Ch. 1 in \cite{ONe}), if there exists a compact set $K$ such that the maximal solution $\gamma:[t_0,b)\to M$
satisfies the condition $\gamma ([t_0,b))\subset K$, then $b=+\infty$.
Hence, since $f$ cannot be bounded, there exists a monotonically increasing sequence $t_n\to b$ such that $d(\gamma(t_n),\gamma (t_0))\to \infty $. Let
$s\colon [t_0,b)\to {\mathbb{R}}$ be the curvilinear abscissa along the curve $\gamma$, namely:
\begin{equation}\label{st} s(t)=\int_{t_0}^t\sqrt{g(A(\gamma (u)), A(\gamma(u)))}du. \end{equation}
Clearly, for any $n\geq 1$, the following holds
$$\frac{d(\gamma(t_n), \gamma (t_0))}{t_n-t_0}\leq \frac{s(t_n)-s(t_0)}{t_n-t_0}.$$
The latter inequality, the boundedness of the sequence $\{t_n-t_0\}$ and the fact that $\{d(\gamma(t_n), \gamma (t_0))\}$ is unbounded and strictly positive give $$\lim_{n\to\infty}\frac{s(t_n)-s(t_0)}{t_n-t_0}= +\infty\:.$$
On the other hand, by Lagrange's theorem applied to the (known to be differentiable) function $s: [t_0, b )\to {\mathbb{R}}$ defined in \eqref{st}, for any $n$ there exists a $u_n\in (t_0, t_n)$ such that $$\sqrt{g(A(\gamma (u_n)), A(\gamma(u_n)))}=\frac{d s}{dt}(u_n)=\frac{s(t_n)-s(t_0)}{t_n-t_0}.$$
The left hand side of the above equality is bounded by the assumptions on $A$, while the right hand side is unbounded by the discussion above and we have obtained a contradiction. $\hfill \Box$\\
\noindent {\bf Proof of Proposition \ref{corollaryD}}.
Let us start with the following lemma.
\begin{lemma}\label{lemmaDEN} Let $M$ be a smooth manifold and $f \in C_0(M)$. For every $\varepsilon >0$ there is $\psi \in C^\infty(M) \cap C_0(M)$ such that $||f- \psi||_\infty <\varepsilon$.
\end{lemma}
\begin{proof} (There are different ways to prove this density result; the following is just one possibility.)
It is sufficient to prove the thesis for
real functions and, in turn, for
$f\geq 0$. The general statement follows by decomposing $f = f_+ -f_-$ where $0\leq f_\pm = \frac{1}{2}(|f|\pm f) \in C_0(M)$.
Let us therefore prove the thesis for $0\leq f \in C_0(M)$.\\
If $p\in M$, there is a local chart $(U,\psi)$ such that $p\in U$. We can always restrict $U$ to a smaller open neighborhood $V$ of $p$, such that
$\overline{V} \subset U$ is a compact set. Since there is such a local chart for every $p\in M$ and the topology of $M$ is $2$nd countable, we can extract a subcovering of $M$ made of charts $\{V_j, \psi_j\}_{j\in J}$ where $J$ is finite or countably infinite. Using the paracompactness of $M$, we can refine $\{V_j, \psi_j\}_{j\in J}$ to a locally finite covering (equipped with corresponding coordinate maps $\psi_j$, the restrictions of the original ones) still indicated with the same
symbol $\{V_j, \psi_j\}_{j\in J}$. Finally, we can define a partition of unity $\{\chi_j\}_{j\in J}$ subordinate to the covering $\{V_j\}_{j\in J}$. Therefore
\begin{itemize}
\item[(i)] $\chi_j \in C_c^\infty(M)$,
\item[(ii)] $0\leq \chi_j \leq 1$,
\item[(iii)] $supp(\chi_j) \subset V_j$,
\item[(iv)] $\sum_{j\in J}\chi_j(x)=1$ where, due to the local finiteness property, for every $x\in M$ there is an open set containing $x$ whose intersection with the $V_j$ is not empty only for a finite number of indices $j\in J$, hence the sum is always finite.
\end{itemize}
To go on we assume that $J={\mathbb{N}}$ (the case of $J$ finite is simpler). If $f \in C_0(M)$, the function $f|_{V_n}\geq 0$ represented in coordinates through the map $\psi_n$ turns out to be the restriction of a continuous function
defined on the compact set $\psi_n(\overline{V_n})\subset {\mathbb{R}}^d$. Using the Stone-Weierstrass theorem we conclude that, for every $\varepsilon>0$, there is a smooth function $p^{(n,\varepsilon)}$ defined on $V_n$ that, in coordinates, is the restriction to $V_n$ of a polynomial defined on the compact set $\psi_n(\overline{V_n})\subset {\mathbb{R}}^d$, such that, with obvious notation,
\begin{equation}
||f|_{V_n} - p^{(n,\varepsilon)}||^{(V_n)}_\infty < \varepsilon\label{one}.
\end{equation}
It is always possible to choose
\begin{equation}
f|_{V_n} \leq p^{(n,\varepsilon)} \leq f|_{V_n} + \varepsilon\label{two}.
\end{equation}
In fact, for $\mu>0$ define
$g_\mu := f + \mu$. Using the same argument as above, there is a smooth function $ q^{(n,\mu)}$ (in coordinates the restriction to the compact set
$\psi_n(\overline{V_n})$ of a polynomial) such that $|| q^{(n,\mu)} - g_\mu||^{(V_n)}_\infty < \mu/3$, that is, if $x\in V_n$,
$$-\mu/3 < q^{(n,\mu)}(x) - f(x) - \mu < \mu/3$$
which implies
$$2\mu/3 < q^{(n,\mu)}(x) - f(x) < 4\mu/3$$
so that
$$0 < f(x) + 2\mu/3 < q^{(n,\mu)}(x) < f(x) + 4\mu/3$$
Defining $\varepsilon := 4\mu/3$ and $p^{(n,\varepsilon)}:= q^{(n,\mu)}$, the last chain of inequalities shows that (\ref{one}) and (\ref{two}) are valid simultaneously.
In view of the definition of the functions $\chi_n$, (\ref{one}) and (\ref{two}) immediately imply
\begin{equation}
||f \cdot \chi_n - p^{(n,\varepsilon)}\chi_n||_\infty < \varepsilon\label{one2}.
\end{equation}
and
\begin{equation}
f\cdot \chi_n \leq p^{(n,\varepsilon)} \cdot \chi_n \leq f\cdot \chi_n + \varepsilon\,\chi_n \label{two2}.
\end{equation}
Notice that the functions $ p^{(n,\varepsilon)} \cdot \chi_n$ and $f\cdot \chi_n $ are everywhere well defined on $M$ and compactly supported, the former belonging to $C^\infty_c(M)$.
To conclude the proof, for $\varepsilon>0$ define
$$\psi := \sum_{n\in {\mathbb{N}}} \chi_n \cdot p^{(n,\varepsilon/2^{n+1})} $$
This function is well defined and belongs to $C^\infty(M)$, the sum being locally finite. Furthermore, by (\ref{two2}),
$$0\leq f = \sum_{n\in {\mathbb{N}}} \chi_n \cdot f \leq \psi \leq f + \sum_{n\in {\mathbb{N}}} \chi_n \frac{\varepsilon}{2^{n+1}}\:. $$
Moreover, since every $\overline{V_n}$ is compact, outside the compact set $K_N := \cup_{n\leq N}\overline{V_n}$ only the functions $\chi_n$ with $n>N$ can be non-zero, so that $0\leq \psi - f \leq \varepsilon/2^{N+1}$ there. Hence $\psi - f$ vanishes at infinity and, $f$ belonging to $C_0(M)$, also $\psi \in C^\infty(M) \cap C_0(M)$. Finally, in view of (\ref{one2}),
$$||f-\psi||_\infty = \left|\left| \sum_{n\in {\mathbb{N}}} \chi_n \cdot p^{(n,\varepsilon/2^{n+1})}- \chi_n \cdot f \right|\right|_\infty \leq \sum_{n\in {\mathbb{N}}} || \chi_n \cdot p^{(n,\varepsilon/2^{n+1})}- \chi_n \cdot f ||_\infty \leq \sum_{n\in {\mathbb{N}}} \frac{\varepsilon}{2^{n+1}} \leq \varepsilon\:.$$
\end{proof}
In view of the lemma, in turn, it is sufficient to prove that $C_c^\infty(M)$ is dense in $C_0(M) \cap C^\infty(M)$. If $f\in C_0(M) \cap C^\infty(M)$ and $\varepsilon>0$,
then there is a compact $K \subset M$ such that $|f(x)| <\varepsilon$ if $x \not \in K$. Let $A\supset K$ be an open set whose closure is compact
(It can be constructed as follows. Every $p\in K$ admits an open neighborhood which is relatively compact -- just work in a coordinate patch -- and, by compactness, $K$ is therefore covered by a finite class of those relatively-compact open sets. The union of those sets is the wanted $A$.) Define $B := M \setminus A$.
Since $K$ and $B$ are disjoint closed sets ($K$ is closed because $M$ is Hausdorff by hypothesis), from the {\em smooth Urysohn lemma}, there exists $\chi \in C^\infty(M)$ such that $|\chi(x)| \leq 1$ for $x\in M$ and
$K\subset\chi^{-1}(\{1\})$, $B\subset \chi^{-1}(\{0\})$. Furthermore, from the construction, we see that $supp(\chi) \subset A \cup \partial A= \overline{A}$. We conclude that
$\chi\in C_c^\infty(M)$. The function $\psi := \chi \cdot f$ belongs to $C_c^\infty(M)$ as well and furthermore
$$||f - \psi||_\infty \leq ||f|_K -\psi|_K||^{(K)}_\infty + ||f|_{M\setminus K} -\psi|_{M\setminus K}||^{(M\setminus K)}_\infty$$
$$ = ||f|_K -f|_K||^{(K)}_\infty + ||f\cdot (1-\chi)|_{M\setminus K}||^{(M\setminus K)}_\infty \leq 0 + ||f|_{M\setminus K}||^{(M\setminus K)}_\infty \leq \varepsilon\:.$$
The proof is over since we have proved that if $f\in C_0(M) \cap C^\infty(M)$ and $\varepsilon>0$, then there exists $\psi \in C_c^\infty(M)$ such that $||f-\psi||_\infty \leq \varepsilon$.
$\hfill\Box$\\
\noindent{\bf Proof of Lemma \ref{propL}}.
Noticing that $C_c^\infty(M)$ is dense in $L^2(M, \mu_g)$, let us first establish that $L_0|_{C_c^\infty(M)}$ is symmetric in $L^2(M, \mu_g)$ -- where from now on $\mu_g$ is the volume form (a positive Borel measure) associated to the metric $g$. Furthermore we also prove that $-L_0|_{C_c^\infty(M)} \geq 0$.
\begin{lemma}\label{lemma33} With the hypotheses of Lemma \ref{propL}, (\ref{linkA}) in particular,
$L_0|_{C_c^\infty(M)}$ is symmetric
and
$-\langle h, L_0 h \rangle \geq 0$
if $h\in C_c^\infty(M)$.
\end{lemma}
\begin{proof}If $A$ is a vector field viewed as a differential operator, taking advantage of a partition of unity, exploiting $Af = \nabla^{(g)}_Af = \sum_j A^j\nabla^{(g)}_j f$ and integrating by parts in $L^2(M, \mu_g)$, one immediately sees that, if $h,h' \in C_c^\infty(M)$,
$$\langle h', Ah \rangle = -\langle Ah', h \rangle - \langle h', (\nabla^{(g)} \cdot A) h \rangle \:,$$
where $ \nabla^{(g)} \cdot A$ acts as multiplicative operator.
Exploiting the fact that $C_c^\infty(M)$ is invariant under the action of $A_0$ and $A_i$ we find
$$\langle L_0 h', h\rangle =\langle h', L_0h \rangle - 2\langle h', A_0h \rangle + \sum_{i=1}^r\langle h', (\nabla^{(g)} \cdot A_i) A_ih \rangle
- \langle h', \nabla^{(g)} \cdot A_0 h\rangle $$
$$ + \frac{1}{2}\sum_{i=1}^r \langle h', \left(\nabla^{(g)} \cdot (\nabla^{(g)} \cdot A_i) A_i\right) h\rangle = \langle h', L_0 h \rangle$$
where we have used (\ref{linkA}) in the last passage. We have proved that $L_0|_{C^\infty_c(M)}$ is symmetric because $C^\infty_c(M)$ is dense and $\langle L_0 h', h\rangle =\langle h', L_0h\rangle $ for all
$h,h' \in C_c^\infty(M)$.\\
Regarding positivity, we have for $h\in C_c^\infty(M)$,
$$-\langle h, L_0h\rangle = -\frac{1}{2}\sum_{i=1}^r\int_M \overline{h}A_iA_i hd\mu_g - \int_M \overline{h} A_0h d\mu_g$$
$$=
\frac{1}{2}\sum_{i=1}^r\langle A_ih, A_ih\rangle+ \frac{1}{2}\sum_{i=1}^r\int_M (\overline{h} \nabla^{(g)}\cdot A_i) A_i h d\mu_g
- \int_M \overline{h} A_0h d\mu_g = \frac{1}{2}\sum_{i=1}^r\langle A_ih, A_ih\rangle \geq 0$$
where we have used again (\ref{linkA}) in the last passage.\end{proof}
Let us now prove that there is a solution $f\in C^\infty(M)$ of (\ref{EQR2}) when $h\in C_c^\infty(M)$.
Since $L_0|_{C_c^\infty(M)}$ is symmetric (``formally selfadjoint'' in Shubin's terminology), uniformly elliptic, and $C^\infty$-bounded,
Corollary 4.2 in \cite{shubin} implies that
$L_0|_{C_c^\infty(M)}$ is essentially selfadjoint in $L^2(M, \mu_g)$ and we will denote
by $L'$ the unique selfadjoint extension of $L_0|_{C_c^\infty(M)}$ (i.e., the closure of the latter in the Hilbert space $L^2(M, \mu_g)$). Let us focus on the equation for the unknown $f\in D(L')$
\begin{equation} L' f - \lambda f = h\:,\label{EQR}\end{equation}
when $h \in C_c^\infty(M)\subset L^2(M, \mu_g)$ and $\lambda >0$ are given.
By multiplying both sides by a test function $h'\in C_c^\infty(M)$ and integrating the result, using the fact that $L'$
is a selfadjoint extension of $L_0|_{C_c^\infty(M)}$, we find that an $f$ satisfying (\ref{EQR}), if any, must also satisfy (\ref{EQR2})
(where $L_0$ appears instead of $L'$!)
in {\em distributional sense}, since $f \in D(L') \subset L^2(M, \mu_g) \subset {\cal D}'(M)$.
Elliptic regularity (Theorem 8.3.1 and Corollary 8.3.2 in \cite{hormander}) applied to the elliptic operator $A= L_0-\lambda I$ implies
that, if $f$ exists, $f$ has to belong to $C^\infty(M)$ and also satisfies (\ref{EQR2}) in classical sense.
As a matter of fact, $f$ solving (\ref{EQR}) exists because every $\lambda>0$ belongs to the resolvent set of $L'$.
Indeed, $-L'\geq 0$ (that is true because $-L'$ is the Hilbert-space closure of $-L_0|_{C_c^\infty(M)}$ which is positive for the lemma above)
entails $\sigma(-L') \subset [0,+\infty)$.
A solution of (\ref{EQR}) (which also solves (\ref{EQR2}) and is smooth) therefore exists:
\begin{equation} f = R_\lambda(L')h\label{EQR3}\end{equation}
where $R_\lambda(L'): L^2(M, \mu_g) \to L^2(M, \mu_g)$ is the resolvent operator of $L'$. \\
Let us now prove that $f \in C_0(M) \cap C_b^\infty(M)$
when $M$ is not compact (otherwise there is nothing to prove). We henceforth assume that $M$ is non-compact.
We can say much more about $f$ in (\ref{EQR3}). First of all we observe that the map ${\cal D}(M) = C_c^\infty(M) \ni h \mapsto R_\lambda(L')h =f \in L^2(M; \mu_g) \subset {\cal D}'(M)$ is sequentially continuous with respect to the natural topologies \cite{hormander} of $C_c^\infty(M)$ and ${\cal D}'(M)$ because $R_\lambda(L')$ is bounded in $L^2(M, \mu_g)$. Therefore we can apply Schwartz' kernel theorem \cite{hormander} that establishes that there exists a distribution $G \in {\cal D}'(M\times M)$ such that, for every pair $h,h' \in C_c^\infty(M)$,
\begin{equation}\int_M h'(x)\left(R_\lambda(L')h\right)(x) d\mu_g(x) = \int_{M \times M} G(x,y) \:h'(x)h(y) \:d\mu_g(x)\otimes d\mu_g(y)\:.\label{intdist}\end{equation}
The integral on the left-hand side is a standard integral, the one on the right-hand side is just a formal expression accounting for the action of a distribution.
However, Theorem 2.2 in \cite{shubin} (in the case $p=2$) proves that \\
(a) the distribution
$G$ is smooth outside the diagonal, i.e., $G\in C^{\infty}(M\times M\setminus \Delta)$, where $\Delta = \{(x,x) \:|\: x \in M\}$, \\
(b)
there exists $\eta>0$ such that
for every $\delta >0$ and every pair of multiindices $\alpha,\beta$, there exists $C_{\alpha, \beta, \delta}>0$ with
\begin{equation}
|\partial_x^\alpha \partial_y^\beta G(x,y)| \leq C_{\alpha, \beta, \delta} e^{-\eta d_g(x,y)}\quad \mbox{if $d_g(x,y) \geq\delta$,}\label{stima1}
\end{equation}
where
$d_g$ is the geodesic distance on $(M,g)$, which is well defined since $M$ is connected, and the derivatives $\partial_x$ and $\partial_y$ are computed in a pair of Riemannian charts (possibly the same).
Let us
take $x_0 \not \in supp(h)$ and consider an open neighborhood $U$ of $x_0$ such that $\overline{U}$ is compact and $\overline{U}\cap supp(h) =\emptyset$. Since $U\times supp(h)$ does not intersect $\Delta$,
if $h' \in C_c^\infty(M)$ is supported in $U$, item (a) above permits us to interpret literally the integral on the right-hand side of (\ref{intdist}). Taking advantage of the Fubini theorem, we can rearrange (\ref{intdist}) to
$$\int_M h'(x) \left(f(x) - \int_M G(x,y) h(y) d\mu_g(y)\right) d\mu_g(x) =0\:.$$
Since $C_c^\infty(U)$ is dense in $L^2(U, d\mu_g )$ and $x_0$ and $U$ as above are arbitrary, we can conclude that
\begin{equation}f(x) = \int_M G(x,y) h(y) d\mu_g(y)\quad \mbox{almost everywhere if $x\not \in supp(h)$.}\label{fintG}\end{equation}
This result can be made even stronger observing that the function $\overline{U}\times supp(h) \ni (x,y) \mapsto G(x,y)h(y)$ is smooth due to (a) and thus continuous and bounded. Hence, a direct use of the dominated convergence theorem proves that $$U\ni x \mapsto \int_M G(x,y) h(y) d\mu_g(y)$$ is continuous as well. Since the left-hand side of (\ref{fintG}) is also continuous, we have proved that
\begin{equation}f(x) = \int_M G(x,y) h(y) d\mu_g(y)\quad \mbox{if $x \in M \setminus supp(h)$.}\label{fintG2}\end{equation}
Let us conclude the proof by establishing that $f$ vanishes at infinity and $||A_kf||_\infty < +\infty$ for $k=0,1,\ldots,r$.
Since $supp(h)$ is compact and the open geodesic balls are a basis of the topology of $M$, there is a finite covering
$\{B_{r_n}(x_n)\}_{n=1,\ldots, N}$ of $supp(h)$ made
of closed geodesic balls with finite radius. As a consequence there exists a sufficiently large closed ball $B_R(x_0)$ including $supp(h)$. (It is sufficient to take $R:= D+P$, where $D:= \max\{d_g(x_0,x_n)\:|\: n=1, \ldots, N\}$ and $P:= \max\{r_n\:|\: n=1, \ldots, N\}$, for an arbitrarily fixed $x_0 \in M$.) Notice that for every closed ball $B_R(x_0)$, with arbitrarily large $R>0$, it must hold
$M \setminus B_R(x_0) \neq \emptyset$ necessarily, otherwise $M$ would be compact due to Lemma \ref{lemmaC}
since $M$ is of bounded geometry, and $M$ is not compact by hypothesis. With $\eta>0$ as in (b), choose $\delta>0$ and define another closed ball $B_{R'}(x_0)$ with $R' > \delta + R$. If $y \in B_R(x_0)$ and $x \in M \setminus B_{R'}(x_0)$ we have
$d_g(x,y) \geq d_g(x,x_0) - R > R'-R > \delta$, so that we can use the inequality (\ref{stima1}) with $\alpha=\beta=0$, finding
\begin{equation} |f(x)| \leq \int_M |G(x,y)| |h(y)| d\mu_g(y) \leq vol_g(B_R(x_0)) C_\delta ||h||_\infty e^{\eta R} e^{-\eta d_g(x,x_0)}\quad \mbox{if $x \in M \setminus B_{R'}(x_0)$}\label{stima3}\end{equation}
where, for $x \in M \setminus B_{R'}(x_0)$ and $y\in B_R(x_0)$, we took advantage of
$$R+ d_g(x,y) \geq d_g(x,x_0)$$
so that
$$-\eta d_g(x,y) \leq -\eta d_g(x,x_0) + \eta R$$
which implies (\ref{stima3}) through (\ref{stima1}).
To conclude,
with $h, x_0,\eta,\delta, R, R', C_\delta$ fixed as above and if $||h||_\infty >0$ (otherwise there is nothing to prove since $f=0$),
for every $\varepsilon>0$ define
$$R_\varepsilon := - \frac{1}{\eta}\log \left( \frac{\varepsilon}{vol_g(B_R(x_0)) C_\delta ||h||_\infty e^{\eta R}} \right).$$
For every $\varepsilon>0$ (so small that $R_\varepsilon > R'$),
consider the closed ball $B_{R_\varepsilon}(x_0)$ which is compact in view of Lemma \ref{lemmaC}.
Here, (\ref{stima3}) yields
\begin{equation}|f(x)|\leq vol_g(B_R(x_0)) C_\delta ||h||_\infty e^{\eta R} e^{-\eta R_\varepsilon} = \varepsilon\quad \mbox{if $x \in M \setminus B_{R_\varepsilon}(x_0)$.}\label{stima3333}\end{equation} We have proved that $f \in C_0(M)$.
A procedure similar to the one used to prove (\ref{fintG2}), based on the Lagrange theorem and the dominated convergence theorem, shows that in every Riemannian coordinate patch,
\begin{equation}\partial_{x}^\alpha f(x) = \int_M \partial_{x}^\alpha G(x,y) h(y) d\mu_g(y)\quad \mbox{if $x \in M \setminus supp(h)$.}\label{fintG22}\end{equation}
Every $\partial_{x}^\alpha f$ is necessarily bounded on a finite covering of Riemannian charts of a compact ball $B_{R_\varepsilon}$ including $supp(h)$. Outside $B_{R_\varepsilon}$, a procedure similar to that followed to prove (\ref{stima3333}) and relying on (\ref{stima1}) for $\beta=0$ proves that there is a constant $H_\alpha<+\infty$ such that, in every local Riemannian coordinate patch on $M$,
\begin{equation}
|\partial_{x}^\alpha f(x)| < H_\alpha\:.\label{fintG23}
\end{equation} We have established that $f \in C_b^\infty(M)$ concluding the proof. $\hfill \Box$\\
\noindent{\bf Proof of Lemma \ref{lemmaMN}}.
Let us consider $u \in D(M)$ and the map $u(t) := e^{tM}u$ for $t\in [0,+\infty)$. Due to Proposition \ref{ACPsol} (i.e. Proposition 6.2 in \cite{EN1}) $u(t) \in D(M)$ and this map is the unique classical solution of the Cauchy problem associated to $M$ with initial datum $u$. In particular it is continuously differentiable and satisfies $\frac{du}{dt} = Mu(t)$.
Since $M\subset N$, it also satisfies $\frac{du}{dt} = Nu(t)$ and thus, again by Proposition \ref{ACPsol}, it is also the unique solution of the Cauchy problem associated to $N$ with initial datum $u$. That is, $u(t) = e^{tN}u$. We have in particular found that, if $u\in D(M)$, then $e^{tN}u \in D(M)$ for $t\in [0,+\infty)$, so that $D(M)$ is invariant under the semigroup generated by $N$. Proposition 6.2 in \cite{EN1} implies that $D(M)$ is a core for $N$. Since $M\subset N$
and both operators are closed, then $M=N$. $\hfill \Box$\\
\noindent{\bf Proof of Lemma \ref{propL2}}. Let us denote by $L''$ the {\em Hilbert-space} closure $\overline{L_0|_{C_c^\infty(M)}}$. We remark that $L_0|_{C_c^\infty(M)}$ is closable since its adjoint has a dense domain, as one can easily prove by an integration-by-parts argument. We write $L''$ in place of $L'$ to stress that the differential operator $L_0$,
of which $L''$ is the Hilbert-space closure over the domain $C_c^\infty(M)$, now
includes the perturbation $B$. The proof, except for one point, is identical to that of proposition \ref{teoL}
using Proposition 4.1 in place of its Corollary 4.2 in \cite{shubin}, observing that elliptic regularity
works also for $-L''$ since this property depends only
on the second order part of $L_0$, and noticing that the properties of $G$ established in
Theorem 2.2 of \cite{shubin}, (\ref{stima1}) in particular, are valid also if $L_0|_{C_c^\infty(M)}$ is not symmetric. The only new item to prove separately is that
there is a $\lambda>0$ in the resolvent set of $L''$, since $-L''$, differently from $-L'$, is no longer positive and
selfadjoint due to the presence of the term $B$. With this result the proof of the thesis is complete.
Let us prove the existence of such $\lambda >0$ by establishing that $L''$ is the generator of a strongly
continuous semigroup
in $L^2(M, \mu_g)$ under the hypothesis (\ref{dominance}): in this case, the standard spectral bound of generators of strongly continuous
semigroups (Corollary 1.13 in \cite{EN1}) implies that $Re(\sigma(L''))$ has finite upper
bound so that $\rho(L'') \cap (0,+\infty) \neq \emptyset$ and the requested $\lambda>0$ exists. In the rest of the proof $-L'$
will denote again the positive selfadjoint operator
used in the proof of proposition \ref{teoL}, which is the Hilbert-space closure of $L_0|_{C_c^\infty(M)}$, where $A_0$ does {\em not}
contain the perturbation
$B$. As is known from Proposition 4.1 in \cite{shubin}, $D(L'') = D(L')= W^2_2(M)$ (see \cite{shubin} for the definition of those
Sobolev spaces on smooth Riemannian manifolds of bounded geometry). The operator $B|_{C_c^\infty(M)}$
is $L^2(M,\mu_g)$-closable since its adjoint has dense domain (which includes $C_c^\infty(M)$) and the closure of $B|_{C_c^\infty(M)}$ has a domain that evidently includes
$W^2_2(M)$ because $C^\infty_c(M)$ is dense in $W^2_1(M) \supset W^2_2(M)$ \cite{shubin}. We intend to prove that, defining $L'+ \overline{B|_{C_c^\infty(M)}}$ on the domain $W^2_2(M)$ of the first addend,
then $L'+ \overline{B|_{C_c^\infty(M)}}$ is (i) closed and (ii) it is the generator of a strongly continuous semigroup. Notice that, in this case
$L'+ \overline{B|_{C_c^\infty(M)}}= L''$ since $L'' \subset L'+ \overline{B|_{C_c^\infty(M)}}$ by construction ($L''$ is the closure of $L_0|_{C_c^\infty(M)}$
whereas the right-hand side is a closed extension of that)
and the two sides of the
inclusion have the same domain $W^2_2(M)$. Hence (i) and (ii) imply that $L''$ itself is the generator of a strongly continuous
semigroup as wanted. To conclude the proof, we prove that (i) and (ii) are true if (\ref{dominance}) holds.
Since $\sigma(L')\subset (-\infty,0]$
and $L'$ is selfadjoint, $\{e^{tL'}\}_{t \in [0,+\infty)}$ is an analytic semigroup in $L^2(M, \mu_g)$. To prove (i) and (ii), according to
Theorem X.54 in \cite{RS2}, it is sufficient to demonstrate that for every $a>0$, there is a corresponding $b>0$ such that (the norm is that of
$L^2(M, \mu_g)$)
$$||\overline{B|_{C_c^\infty(M)}} \psi || \leq a|| L' \psi|| + b||\psi||\quad \mbox{for all $\psi \in W_2^2(M)$.}$$
Observe that, since $C_c^\infty(M)$ is a core for $L'$ (it is essentially selfadjoint thereon)
and $\overline{B|_{C_c^\infty(M)}}$ is closed, the condition above is equivalent to
$$||B \psi || \leq a|| L' \psi|| + b||\psi||\quad \mbox{for all $\psi \in C_c^\infty(M)$.}$$
In turn, according to the remark on the condition (iii) on p. 162 of \cite{RS2}, the condition above is equivalent to the next statement:
For every $a>0$ there is $b>0$ such that
\begin{equation} ||B \psi ||^2 \leq a|| L' \psi||^2 + b||\psi||^2\quad \mbox{for all $\psi \in C_c^\infty(M)$}\label{lastCC}
\end{equation}
(where these $a,b$ are generally different from those in the previous inequality).
To conclude we prove that (\ref{lastCC}) is consequence of (\ref{dominance}). From the latter, replacing $\xi_k$ with $\nabla^{(g)}_k\psi$,
if $\psi \in C^\infty_c(M)$, we have
$$\int_M \overline{(B\psi)(x)} (B\psi)(x) d\mu_g(x) \leq c\int_M
\sum_{i=1}^r \sum_{a,b=1}^d\overline{(A^a_i \nabla^{(g)}_a \psi)(x)}
(A^b_i \nabla^{(g)}_b \psi)(x) d\mu_g(x)$$ $$=-2c\int_M \overline{\psi(x)}(L'\psi)(x) d\mu_g(x)\:. $$
Namely, if $\langle\cdot,\cdot\rangle$ is the scalar product in $L^2(M,\mu_g)$, standard results of spectral theory
\cite{Moretti,Schm} yield
$$||B\psi||^2 \leq 2c\langle \psi, -L'\psi \rangle =2c\int_{{\mathbb{R}}^+} \lambda d\nu_{\psi}(\lambda) $$
where $\nu_\psi(E) = \langle \psi, P^{(-L')}(E)\psi\rangle$, with $P^{(-L')}$ being the {\em spectral measure} of the selfadjoint positive operator
$-L'$ and $E\subset {\mathbb{R}}$ any Borel set. Here observe that, since $c>0$, for every $a>0$ there is $b>0$ such that
$$2c\lambda \leq a\lambda^2 + b\quad \mbox{for all $\lambda\geq 0$.}$$
It is in fact sufficient to use $b= c^2/a$, since then $a\lambda^2 - 2c\lambda + b = \left(\sqrt{a}\,\lambda - c/\sqrt{a}\right)^2\geq 0$.
Therefore, again from standard results of spectral theory,
$$||B\psi||^2 \leq 2c\int_{{\mathbb{R}}^+} \lambda d\nu_{\psi}(\lambda) \leq a\int_{{\mathbb{R}}^+} \lambda^2 d\nu_{\psi}(\lambda) + b\int_{{\mathbb{R}}^+}1\: d\nu_{\psi}(\lambda)=
a||-L'\psi||^2 + b||\psi||^2\:. $$
In summary, for every $a>0$, there is $b>0$ such that (\ref{lastCC}) holds
$$ ||B \psi ||^2 \leq a|| L' \psi||^2 + b||\psi||^2\quad \mbox{for all $\psi \in C_c^\infty(M)$,}$$
concluding the proof.
$\hfill \Box$\\
\noindent {\bf Proof of Proposition \ref{teo2}}.
(a) Let us start with a given $r \in (0, I_{(M,g)})$ and
consider a Riemannian system of coordinates in the ball $B^{(M,g)}_r(p)$.
Expanding $g_{ab}(y)$ around $0$ up to the first order with the usual Taylor expansion (the first-order term vanishes because the first derivatives of the metric are zero at the center of a normal chart), we have
$$g_{ab}(y)=\delta_{ab} + 0 + R^{(2)}_{ab}(y)$$
where, for some $\xi \in B_r(0)$,
$$ R^{(2)}_{ab}(y)= \frac{1}{2!} \sum_{i,j}\frac{\partial^2g_{ab}}{\partial y^i\partial y^j}|_\xi y^iy^j \quad y \in B_r(0), \quad i,j=1,
\ldots, d\:.$$
Taking the second bound in (\ref{estimateg}) into account for $k=2$ and using $|y^k|\leq r$ we have
$$\left|||A(y(q))||^2 - ||A(y(q))||^2_g \right|= \left|\sum_{a,b=1}^d A^a(y) g_{ab}(y) A^b(y) -A^a(y)\delta_{ab} A^b(y)\right| = \left|\sum_{a,b=1}^dA^a(y) A^b(y) R^{(2)}_{ab}(y)\right|$$
$$\leq \sum_{a,b=1}^d |A^a(y)| |A^b(y)| \frac{1}{2}C^{(r)}_2d^2r^2\leq \frac{C^{(r)}_2 d^2r^2}{2}\sum_{a,b=1}^d \|A(y)\| \|A(y)\| = \frac{C^{(r)}_2d^4 r^2}{2}||A(y)||^2\:.$$
In particular
$$||A(y(q))||^2 - ||A(y(q))||^2_g \leq \frac{C^{(r)}_2d^4 r^2}{2}||A(y(q))||^2$$
namely, if $||y||<r$, we have,
$$ \left(1-\frac{C^{(r)}_2d^4 r^2}{2}\right)||A(y)||^2 \leq ||A(y(q))||^2_g \:.$$
Restricting $r$ to $r_0>0$ such that\footnote{It is always possible to find such $r_0$ since the functions $r\mapsto C_k^{(r)}$ are monotone non-decreasing.} $(1-d^4 r_0^2C^{(r_0)}_2/2) >0$ and defining $k_1 := (1-d^4 r_0^2C^{(r_0)}_2/2) ^{-1}$, we conclude that (a) is valid for
$y \in B_{r_0}(0)$, i.e., $q \in B^{(M,g)}_{r_0}(p)$.\\
(b) Let us first show that, if $r_0>0$ is suitably small, then
\begin{equation}\label{quasib} ||T(y(q))||^2 \leq k_2 ||T(q)||^2_g\:, \quad\mbox{for all}\:\: q \in B^{(M,g)}_{r_0}(p)\end{equation}
for some $k_2 \geq 0$ independent of $T$ and $p$, for every smooth tensor field $T$ of order $(1,1)$.
The proof is strictly analogous to that of (a), observing that
\begin{equation}\label{quasib2} ||T(y(q))||^2 - ||T(y(q))||_g^2 = \sum_{a,b,i,j=1}^d T_a^i(y) \left(\delta^{ab}\delta_{ij}
- g^{ab}(y)g_{ij}(y)\right)T_b^j(y)\end{equation}
and
$$ g^{ab}(y)g_{ij}(y) = \delta^{ab}\delta_{ij}+0 + R^{(2)ab}_{ij}(y)$$
where, for some $\xi \in B_r(0)$,
$$R^{(2)ab}_{ij}(y)= \frac{1}{2!} \sum_{l,m}\frac{\partial^2 (g^{ab}g_{ij})}{\partial y^l\partial y^m}\Big|_\xi\: y^ly^m \quad y \in B_r(0), \quad l,m=1,
\ldots, d\:.$$
Using in (\ref{quasib2}) both the second bound in (\ref{estimateg}) and (\ref{estimateg2}) for $k=0,1$, as we did in the proof of (a), we obtain (\ref{quasib}).
To conclude the proof of (b), observe that, if $y \in B_{r_0}(0)$,
$$\partial_{y^a}A^i = (\nabla^{(g)}_a A)^i - \sum_{c=1}^d \Gamma^i_{ac} A^c$$
so that, using (\ref{estimateg3}) together with the rough estimates $|A^i| \leq ||A||$, $|\nabla^{(g)}_a A^i| \leq \|\nabla^{(g)} A\|$, we have
$$\|\nabla A\|^2 \leq \| \nabla^{(g)} A\|^2 + 2d^3 J_{0}^{(r_0)} \|A\| \|\nabla^{(g)} A\| + d^4 (J_{0}^{(r_0)})^2 \|A\|^{2}.$$
Finally observe that (a) and (\ref{quasib}) respectively imply
$$\|A\| \leq \sqrt{k_1}\|A\|_g\quad \mbox{and}\quad \|\nabla^{(g)} A\| \leq \sqrt{k_2} \| \nabla^{(g)} A \|_g$$
which, inserted in the previous inequality, yield
$$\|\nabla A(y(q))\|^2 \leq k_2\| \nabla^{(g)} A(q)\|_g^2 + 2d^3 J_{0}^{(r_0)} \sqrt{k_1 k_2} \|A(q)\|_g \|\nabla^{(g)} A(q)\|_g + d^4 (J_{0}^{(r_0)})^2
k_1\|A(q)\|^2_g$$
which must hold if $q \in B^{(M,g)}_{r_0}(p)$. By construction, the constants $k_1$, $k_2$, $k_3 := d^4(J_{0}^{(r_0)})^2k_1$, and $k_4 := 2d^3 J_{0}^{(r_0)} \sqrt{k_1 k_2}$ do not depend on $A$ and the estimate is valid for every $p\in M$ provided $q \in B^{(M,g)}_{r_0}(p)$. $\hfill \Box$\\
\section{INTRODUCTION}
A growing body of theoretical and observational research suggests that charged solar energetic particles (SEPs) gain most of their energy at traveling shocks relatively close to the Sun \citep{Zank:2008}. Interplanetary shocks have been well studied with in-situ measurements near Earth and throughout the solar system \citep{Stone:1985,Forbes:2006}. Many SEP bursts observed close to Earth are not directly associated with Earth-detected shocks. This suggests that SEPs are accelerated much closer to the solar corona, possibly by shocks near the Sun \citep{Reiner:2007}. Coronal shocks could accelerate particles to very high energies in short periods \citep{Roussev:2004}. However, field and shock geometry are key parameters in the ability of shocks to accelerate particles regardless of the shock strength, especially near the Sun \citep{Giacalone:2006}.
Coronal shocks have been observed earlier \citep{Pick:2006,Nindos:2008}. \citet{Maia:2000} reported on fast coronal transients propagating with similar speeds in both radio and white light. \citet{Vourlidas:2003} used coronagraph observations to study a white-light coronal shock beyond 2.5 $R_S$. Recent results \citep[e.g.,][]{Gallagher:2010, Patsourakos:2009a, Patsourakos:2009b} have used EUV observations to show the intimate connection between EUV waves and CMEs. However, there is still considerable debate about how shocks appear in these observations. Additionally, a widely used means of characterizing coronal shock kinematics is observations of drifting metric radio emissions from the Sun (approximately 18--180 MHz). These type II radio bursts are associated with coronal shocks accelerating electrons that excite plasma radio emissions \citep{McLean:1985,Reiner:2003,Mancuso:2003}.
Ultra-high cadence EUV imaging observations of off-limb coronal waves are presented for two recent solar eruptions on June 12 and June 13, 2010. We use the Atmospheric Imaging Assembly (AIA) instrument \citep{Title:2006} aboard the Solar Dynamics Observatory. The temporal resolution, $\sim$12~s, allows for following the evolution of impulsive features in the low corona (1.2--2.0~$R_S$)---a capability newly available in EUV imaging of the Sun. About 3 hours after the June 12 event and 2 hours after the June 13 event, elevated proton fluxes ($\sim$6.5 MeV) were observed at 1 AU, leading us to investigate the connection between the remote wave observations and in-situ particle fluxes. We combine simultaneous EUV wave and type II radio burst observations with a coronal magnetic field model to investigate the morphology, kinematics, thermal and density properties of the wavefronts, and their energetic particle production capability.
The Letter is structured as follows: In Section~2, we detail the AIA and radio observations used. In Section~3 the kinematics, morphology, and physical properties of the EUV transients are described. We summarize our findings in Section~4.
\section{OBSERVATIONS AND DATA ANALYSIS}
\subsection{EUV observations}
We used observations from two AIA channels peaking at 193~{\AA} (FeXII) and 211~{\AA} (FeXIV). We refer to them as the 193 and 211 channels throughout the paper. Both channels have $\sim$12~s cadence, with a $\sim$6~s lag between the two. The data were processed to level 1.5 using a standard AIA pipeline. Base difference images were produced from an average of ten subframes immediately preceding the events. Event movies can be found as online supplemental materials to this Letter. For temperature and density analysis of the June 13 event, we use six EUV AIA channels dominated by Fe lines (details are given below).
The first event occurred on June 12, 2010 above active region (AR) 11081 located close to the northwestern limb (N23W43). The EUV transient coincided with an M2.0 X-ray flare between 00:30--01:02 UT, peaking at 00:57 UT. We considered observations between 00:56--01:03~UT, the times during which we could detect and measure the coronal transient feature in the FOV of the AIA instrument. During this period a faint, but discernible front was launched roughly radially above the AR.
The second event occurred on June 13, 2010 above AR11079, on the southwestern limb (S25W84). It coincided with an M1.0 flare between 05:30--05:44~UT, peaking at 05:39 UT. In EUV an eruption started at 05:34~UT on the limb, turning into a CME loop propagating radially outward. At 05:37~UT, a hemispheric wavefront appeared (in both 193 and 211~channels) in front of the CME and separated from it, traveling in the same direction but markedly faster. The wave reached the AIA FOV edge at 05:42 UT, followed by the CME at 05:44 UT.
In Figure \ref{fig1}, panels A and C show two base difference images in the 211 AIA channel of the June 12 and 13 events, respectively. Dashed lines trace the wavefronts. Although both events were only clearly visible in difference images, the second event was notably brighter, exhibiting lower velocities, as we show below.
Since wave signatures were very faint, we made manual measurements. For each subframe in each event, the expanding wavefront edge was selected along three radial profiles close to the wave's nose. To reduce measurement errors, the measurements were repeated ten times for each image sequence in both channels. We fitted second-order polynomials to the measured positions in order to obtain front velocities and accelerations, using MPFIT routines \citep{Markwardt:2009} combined with a statistical bootstrapping technique \citep{Efron:1979}. Since the waves were very dim, we only obtained meaningful measurements for two profiles in each event. Third-order fits were also attempted, but did not produce significantly different results.
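A minimal sketch of this fitting procedure (ours, for illustration only: synthetic front heights stand in for the actual AIA measurements, and \texttt{numpy.polyfit} for the MPFIT routines):
\begin{verbatim}
import numpy as np

# Second-order polynomial fit to (time, height) measurements of the
# front, with bootstrap resampling for velocity/acceleration errors.
rng = np.random.default_rng(1)
t = np.arange(0.0, 84.0, 12.0)            # ~12 s AIA cadence, in s
h = 7.0e5 + 1275.0*t - 0.5*1.0*t**2       # synthetic heights, in km
h += rng.normal(0.0, 2.0e3, size=t.size)  # measurement scatter

vels, accs = [], []
for _ in range(1000):                     # bootstrap resamples
    i = rng.integers(0, t.size, size=t.size)
    c2, c1, c0 = np.polyfit(t[i], h[i], 2)
    vels.append(c1)                       # initial velocity, km/s
    accs.append(2.0*c2*1.0e3)             # acceleration, m/s^2
print(np.mean(vels), np.std(vels))        # ~1275 km/s and its error
print(np.mean(accs), np.std(accs))        # ~ -1000 m/s^2 and its error
\end{verbatim}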
We also corrected for plane-of-sky projection of the wavefronts, assuming spherical waves propagating radially away from the Sun, so that the brightest EUV emission is detected at the front edge. We deprojected front positions by assuming $r=r'/\sin(\phi)$, where $r$ and $r'$ are the true and projected radial distances from the flare site, respectively, and $\phi$ is the AR heliographic longitude. Velocities and accelerations are presented in Table \ref{table1}.
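As a concrete illustration of this step, the following minimal sketch (our own; the measurement arrays are hypothetical placeholders, and the MPFIT/bootstrap machinery is replaced by a plain least-squares fit) deprojects measured front positions and extracts the kinematic parameters:
\begin{verbatim}
import numpy as np

def deproject(r_proj, phi_deg):
    # spherical wave assumption: r = r' / sin(phi)
    return r_proj / np.sin(np.radians(phi_deg))

# hypothetical measured times [s] and projected heights [km]
t = np.array([0.0, 12.0, 24.0, 36.0, 48.0])
r_proj = np.array([1.00e5, 1.15e5, 1.28e5, 1.39e5, 1.48e5])

r = deproject(r_proj, phi_deg=43.0)  # AR longitude, June 12 event
c2, c1, c0 = np.polyfit(t, r, 2)     # r(t) = c2 t^2 + c1 t + c0
v0 = c1                              # initial velocity [km/s]
a = 2.0 * c2                         # acceleration [km/s^2]
\end{verbatim}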
\subsection{Radio observations}
Metric radio spectra were provided by the Learmonth Solar Radio Observatory (Western Australia). Type II bursts indicate electron acceleration by coronal shocks, which may also accelerate protons and heavier ions. The Newkirk coronal electron density model \citep{Newkirk:1961} was used to relate the observed emission frequencies to the height of the emission source.
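For reference, a sketch of the frequency-to-height conversion under the one-fold Newkirk model, $n_e(R) = 4.2\times10^4 \cdot 10^{4.32/R}$~cm$^{-3}$ with $R$ in solar radii (the routine name is ours; harmonic emission is assumed by default):
\begin{verbatim}
import numpy as np

def newkirk_height(f_mhz, harmonic=True, fold=1.0):
    # plasma frequency: f_p [MHz] ~ 8.98e-3 * sqrt(n_e [cm^-3])
    f_p = f_mhz / 2.0 if harmonic else f_mhz
    n_e = (f_p / 8.98e-3) ** 2
    # invert the Newkirk model for the source height R [R_sun]
    return 4.32 / np.log10(n_e / (fold * 4.2e4))

print(newkirk_height(60.0))  # 60 MHz harmonic band -> R ~ 1.8 R_sun
\end{verbatim}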
Figure \ref{fig1}, panels B and D, show type II burst dynamic spectra. Multiple bands are visible for the June 12 type II radio burst, indicating that this event is rather complex. Both fundamental and harmonic emissions were observed for that event, starting at 00:57:45 UT. The harmonic emission persisted longer, until about 01:07 UT, but was too faint to be measured. A strong type III burst was also observed at 00:53 UT---an indication of an impulsive release of electrons in the corona. We separated two emission lanes in each spectrogram and fitted the peak emission frequencies.
We performed the same analysis for the June 13 radio burst, which started at 05:38:13 UT. The fundamental emission was barely discernible in the radio spectrogram (panel D). However, there were two parallel bands of harmonic emission.
\section{RESULTS AND DISCUSSION}
Figure \ref{fig1}, panel E, shows particle flux enhancements possibly associated with the coronal shocks observed by AIA. The in-situ particle measurements were made by the Energetic and Relativistic Nuclei and Electron \citep[ERNE;][]{Torsti:1995} instrument on SOHO. The time series of energetic protons (between 1.68--90.5 MeV) show an impulsive flux increase at all energies on June 12, followed by an additional increase in the low energies on June 13. Vertical dashed lines denote onsets of the EUV waves. Below we investigate EUV wave observations and radio shock properties in an attempt to characterize the solar sources of the elevated particle fluxes.
\subsection{Kinematics of the EUV waves and radio shocks}
Table \ref{table1} presents the measured EUV wavefront and radio shock kinematics. As described previously, we performed measurements of front edge positions along three linear profiles starting from the flare region (hereafter trials). These are denoted by Roman numerals in the second column of the Table, together with the AIA channel. The third and fourth columns show initial velocities and accelerations, respectively, derived from second-order polynomial fits to the de-projected position measurements in two trials for each event. The rows in bold show trial averages for each wavelength, for each event.
For June 12, we obtained velocities of $\sim1275\pm44$~km~s$^{-1}$ for the AIA/193 and $\sim1300\pm45$~km~s$^{-1}$ for the AIA/211 channel. For June 13, we get $\sim731\pm22$~km~s$^{-1}$ for the AIA/193 and $\sim741\pm31$~km~s$^{-1}$ for the AIA/211 channel. The fits imply average decelerations of $\sim-1170$~m~s$^{-2}$ for June 12 and $\sim-800$~m~s$^{-2}$ for June 13.
\citet{Patsourakos:2010} studied the June 13 CME in EUV with AIA, between 05:34--05:43 UT. They fit circles to the expanding CME bubble and determined its kinematics. They found that the bubble front in the direction of propagation away from the solar limb accelerated to a maximum speed of $\sim$400~km~s$^{-1}$, after which it decelerated. They did not comment on the wave kinematics in that work.
\citet{Veronig:2010} studied a very similar dome-like CME and wave event off the eastern solar limb (seen from the STEREO-B spacecraft) with EUV observations. They found upward expansion speeds of the dome-like wave of $\sim650$~km~s$^{-1}$. They also found that the EUV wavefront coincided with the white light transient observed by STEREO-B coronagraphs. This implies that the front edge of the white light emission may be caused by compressed electron plasma behind the traveling shock. Quadrature position modeling was done by \citet{Patsourakos:2009b} to first show this connection.
Figure \ref{fig2} compares measured AIA EUV wavefront positions and estimated shock locations from radio observations. Wavefront positions measured in the 193 and 211 channels for the trial with the lowest uncertainties are plotted as X-symbols. Diamonds denote radio shock positions. The June 12 radio emission occurred at lower heights than the EUV wavefront, suggesting electron acceleration away from the shock nose. Alternatively, the electron density model used might not apply for this case of open magnetic geometry (see Section~3.4). The radio shock was initially faster than the EUV wave, but had decelerated strongly by 01:00~UT, while the EUV wave continued to rise. By contrast, the radio emission on June 13 was split into two harmonic bands, which correlate very well with the EUV wave positions. This might imply local electron acceleration in front of and behind the nose of the traveling shock. Section~3.4 considers the coronal magnetic geometry in interpreting these observations.
\subsection{Temperature and density behavior of the EUV waves}
To investigate the thermal and density properties of the EUV waves, we performed differential emission measure (DEM) analysis on the June 13 wave (we were not able to do so for the June 12 wave due to data constraints), using region-averaged pixel values in the six EUV Fe-dominated channels (94, 131, 171, 193, 211, 335~\AA). We hand-selected four regions (labeled R1--R4 in the top panel of Fig.~\ref{fig3}) in two frames, at 05:37:00~UT and 05:39:00~UT (hereafter T1 and T2), corresponding to times before and during the wavefront passage. The first three regions were chosen to sample different parts of the wave; the fourth was chosen upstream of the wave for comparison. Calculations were done for 16 temperature bins between $\log T=5.5$--$7.0$, following the Monte Carlo method as implemented in \citet{Weber:2004}. Results for regions 1 and 4 were not statistically significant, so our analysis was limited to times T1 and T2 in R2 and R3.
For each region (of approximately 10,000 pixels), time, and channel, we constructed mean intensity data and errors. The mean observation sets were then solved for their DEM distributions. The DEM solutions we quote provide model intensities with the smallest $\chi^2$ fit to the data, when folded through the AIA responses (see bottom panels of Fig.~\ref{fig3}). We also checked that the spread among acceptable model fits is small compared with the difference between the T1 and T2 data.
The DEMs for regions 2 and 3 are plotted in the bottom panels of Fig.~\ref{fig3}, where T1 is shown in red and T2 is shown in green. The Monte Carlo analysis produces multiple solutions by varying the data by the errors, and these are represented as clouds of colored dotted lines with a very small spread. It can be seen that the observations for T1 and T2 are significantly different. We find that the DEM temperature profile does not change appreciably from T1 to T2, for either region, but the overall emission measure increases.
To roughly estimate the jump in density, we consider a simple model. Assume that all measured intensity is emitted along the region's line of sight only from the wave-affected volume, i.e., there is no foreground or background emission. Also assume no change in temperature. Then, since the integrated DEM is the full emission measure (EM) of the volume, we may estimate the density ratio as:
\begin{equation}
\frac{n_{e2}}{n_{e1}} \sim \frac{\sqrt{EM_2}}{\sqrt{EM_1}} \sim
\frac{\sqrt{\int \mathrm{DEM}_2(T) \mathrm{d}T}}{
\sqrt{\int \mathrm{DEM}_1(T) \mathrm{d}T}}
\end{equation}
For region 2, we find that $n_{e2}/n_{e1} \simeq 1.18$, and for region 3, we find that $n_{e2}/n_{e1} \simeq 1.12$, consistent with weak coronal shocks. For a more sophisticated model that accounts for foreground and background emission, the density changes within the wave-affected volume would have to be {\it even larger} in order to generate the observed change in intensities. Therefore, we find that $n_{e2}/n_{e1} \simeq 1.12$ is a {\it lower} limit.
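For completeness, the estimate amounts to the following few lines (a sketch with synthetic placeholder DEMs; with the measured DEM arrays substituted, it reproduces the ratios quoted above):
\begin{verbatim}
import numpy as np

logT = np.linspace(5.5, 7.0, 16)           # DEM temperature grid
T = 10.0 ** logT
dem_t1 = np.exp(-(logT - 6.2) ** 2 / 0.1)  # placeholder: before the wave
dem_t2 = 1.3 * dem_t1                      # placeholder: during the wave

em1 = np.trapz(dem_t1, T)                  # EM = integral of DEM over T
em2 = np.trapz(dem_t2, T)
print(np.sqrt(em2 / em1))                  # density ratio; sqrt(1.3) ~ 1.14
\end{verbatim}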
\subsection{EUV wave morphology}
In both 211 and 193 channels, brightness increases downstream of the EUV wavefronts, relative to upstream. In the 211 channel this brightening is more pronounced, and is also more uniform throughout the downstream region. In the 193 channel, by contrast, the downstream material emits only close to the leading front. In both cases, ripples of emission behind the wavefronts expand as the fronts sweep through regions of upstream coronal plasma. These features persist until the transients leave the AIA FOV for both events. However, the downstream sheath emission on June 12 was dimmer than the emission on June 13 (where a CME bubble was seen).
\subsubsection{Lateral Overexpansion of the June 13 CME}
\citet{Patsourakos:2010} studied the CME of June 13, and found a strong lateral overexpansion of the CME bubble in the first $\sim$4 minutes, after which the bubble expansion became equal in the radial and lateral directions (see top panel in their Fig.~4). Recently reported 3D numerical MHD simulations of coronal CME propagation \citep{Das:2011} show that a pile-up compression (PUC) of coronal plasma may form between coronal shocks and the CMEs behind them. This occurred in the simulation whenever the CME expanded faster laterally than radially. Their interpretation is that as a CME expands fast laterally, plasma piles up in front of it in a `sheath' behind the shock \citep{Opher:2010}. Additionally, there was no significant temperature increase in the PUC in the simulations, consistent with our DEM results.
Comparing the intensities of the June 13 wave with results from \citet[their Fig.~4]{Patsourakos:2010}, we find the wave brightness began increasing significantly towards 05:38~UT, roughly coincident with the maximum speed of the CME. Even as the CME bubble aspect ratio reduced to $\sim$1, the overall wave brightness increased, peaking around 05:42~UT (after which the wave begins to disappear from the AIA FOV). Since the waves are quite dim, it was difficult to obtain quantitative measurements. Future work will elucidate the temporal connection between lateral overexpansion and PUC formation. However, the DEM result of no significant temperature change supports the modeling finding of a plasma compression sheath behind the shock from \citet{Das:2011}.
\subsection{Importance of the Magnetic Geometry for Particle Acceleration and Release}
Figure \ref{fig4} shows SDO/AIA (green) and STEREO-Ahead/EUVI \citep[red;][]{Howard:2008} difference images during both events, with a magnetic potential field source surface \citep[PFSS;][]{Schrijver:2003} model overlaid. On June 12 (left, top and bottom) the field geometry above the AR was very open, so particles were free to escape into interplanetary space as they gained energy. However, the complex magnetic topology does not allow us to address the possible sites of particle acceleration, and thus the discrepancy in positions and velocities between the EUV wave and radio shock.
On June 13 (Fig.~4, right panel), the magnetic geometry above the AR was much more closed; the shock may have been quasi-perpendicular at its nose, accelerating particles more effectively there (evidenced by the radio emission bands positioned in front of and behind the AIA wavefront). However, the DEM analysis shows the shock was weak, so the impulsive proton fluxes above 8~MeV at 1 AU did not increase above the already elevated levels (Fig.~\ref{fig1}, panel~E).
\section{SUMMARY}
We have presented observations of two western off-limb coronal waves in very high-cadence EUV imaging data. The waves were associated with metric type II bursts and significant increases in proton fluxes observed at 1 AU. We characterized the wave events in relation to the elevated particle fluxes at 1 AU. Our findings are:
1) The June 12 and 13, 2010 waves were large-scale, dome-like off-limb coronal transients, seen in EUV light. Enhanced emission sheaths followed the wavefronts.
2) The June 12 wave had a high initial speed ($\sim1287$~km~s$^{-1}$) but no discernible driver, consistent with its high average deceleration rate ($\sim-1170$~m~s$^{-2}$). Similar behavior was observed in the radio shock emission, although a discrepancy is clear between the wave/shock positions and velocities. This might signify a more complex relation between the shock and wave, or alternatively, that the electron density model used for the radio data does not apply in this case. The June 13 wave started slower ($\sim736$~km~s$^{-1}$), but had a clear CME driver behind it sustaining its propagation, and consequently a lower deceleration rate ($\sim-800$~m~s$^{-2}$).
3) DEM analysis of the June 13 wave event shows the enhanced emission was likely due to a density increase in the sheath behind the shock, and not to a temperature increase. We deduce from the emission measure ratio a density jump of at least $\sim1.12$.
4) EUV, radio, and in-situ observations, combined with a potential magnetic field model, reveal differences between the two events in terms of the possible field-to-shock orientation. In our interpretation, the more open field geometry of the June 12 event allowed protons accelerated impulsively (to $\sim50$~MeV) to escape quickly into interplanetary space. A closed field geometry during the June 13 event is supported by radio observations indicating the shock was effective in accelerating electrons at its nose, although proton fluxes above $\sim$8~MeV at 1 AU did not increase appreciably.
The mechanisms of shock formation in the low corona are still under considerable debate. However, it seems that shocks do form low in the corona, and they are able to accelerate particles. The newly introduced capability for multi-wavelength, ultra-high cadence EUV observations of transients in the corona with SDO/AIA enables studying their dynamics in great detail. Based on our findings, the magnetic field geometry is important both for accelerating particles and for their release into interplanetary space. Future work will involve analyzing multiple events and associated in-situ particle fluxes from multiple spacecraft, in order to constrain the remote EUV wave observables significant for particle acceleration in the corona.
\acknowledgments
We acknowledge support under AIA subcontract SP02H1701R from Lockheed-Martin and NASA LWS EMMREM project NNX07AC14G. We thank David Long, Maher Dayeh, Marc De Rosa, Steve Saar, and Suli Ma for help and discussions.
\clearpage
\begin{deluxetable}{cccc}
\tabletypesize{\scriptsize}
\tablecaption{EUV wave/radio shock kinematics, June 12 and 13, 2nd order fits\label{table1}}
\tablewidth{0pt}
\tablehead{
\colhead{Time} & \colhead{Channel/profile\tablenotemark{a}} & \colhead{Initial Velocity (km s$^{-1}$)} &
\colhead{Acceleration (km s$^{-2}$)} }
\startdata
06/12 00:56 & 193/II & 1169.34$\pm$29.31 & -0.88$\pm$0.16\\
06/12 00:56 & 193/III & 1381.59$\pm$32.70 & -1.29$\pm$0.17\\
06/12 00:56 & 211/II & 1180.25$\pm$31.67 & -0.95$\pm$0.17\\
06/12 00:56 & 211/III & 1418.08$\pm$31.56 & -1.55$\pm$0.17\\
\tableline
{\bf 06/12 00:56} & {\bf 193/AVG\tablenotemark{b}} & {\bf 1275.46$\pm$43.91} & {\bf -1.09$\pm$0.23}\\
{\bf 06/12 00:56} & {\bf 211/AVG} & {\bf 1299.16$\pm$44.71} & {\bf -1.25$\pm$0.24}\\
\tableline
\\
06/13 05:37 & 193/I & 774.29$\pm$18.90 & -0.97$\pm$0.18\\
06/13 05:37 & 193/II & 688.86$\pm$11.93 & -0.45$\pm$0.11\\
06/13 05:37 & 211/I & 791.89$\pm$25.30 & -1.15$\pm$0.24\\
06/13 05:37 & 211/II & 691.31$\pm$18.98 & -0.62$\pm$0.18\\
\tableline
{\bf 06/13 05:37} & {\bf 193/AVG} & {\bf 731.57$\pm$22.35} & {\bf -0.71$\pm$0.21}\\
{\bf 06/13 05:37} & {\bf 211/AVG} & {\bf 741.60$\pm$31.63} & {\bf -0.89$\pm$0.30}\\
\tableline
\\
\tableline
\tableline
06/12 00:57 & FUND & 2819.09$\pm$95.79 & -26.78$\pm$1.76\\
06/12 00:57 & HARM & 2905.60$\pm$147.92 & -46.79$\pm$5.30\\
06/13 05:39 & HARM & 589.54$\pm$27.21 & -0.53$\pm$0.23\\
06/13 05:38 & HARM & 610.93$\pm$14.97 & 0.28$\pm$0.11\\
\enddata
\tablecomments{Measurements for the 06/12 event started at 00:56 UT and for the 06/13 event at 05:37 UT, the times at which we were first able to measure the waves.}
\tablenotetext{a}{For radio measurements, the emission type is given instead.}
\tablenotetext{b}{Average of the profile measurements for that channel and event.}
\end{deluxetable}
\clearpage
\begin{figure}
\epsscale{0.8}
\plotone{fig1.pdf}
\caption{Panels A and C: AIA/211 base difference images showing two stages of the June 12 and 13, 2010 coronal waves, respectively. The two frames in each panel are $\sim4$ minutes apart. Approximate positions of the wavefronts are marked with dashed black lines. Dotted lines in panel C outline the June 13 CME. The radial profiles along which velocity measurements were made are also shown. Panels B and D: radio spectra from the Learmonth observatory for June 12 and 13, respectively. Panel B also shows a strong type III burst around 00:56~UT on June 12. Panel E: proton fluxes observed between June 11 (DOY 162) and June 17 (DOY 168), 2010 by the SOHO/ERNE instrument. Proton energies vary between 1.68--90.5 MeV. Vertical dashed lines show the AIA wave onsets.\label{fig1}}
\end{figure}
\clearpage
\begin{figure}
\epsscale{0.8}
\plotone{fig2.pdf}
\caption{Time-height profiles of June 12 (top) and 13 (bottom) EUV waves and radio shocks. Wavefront positions from the lowest uncertainty trial (see Table \ref{table1}) from AIA/193 and 211 channels are shown as X-symbols. Diamonds denote shock positions estimated from radio type II burst observations with the Newkirk density model.\label{fig2}}
\end{figure}
\clearpage
\begin{figure}
\epsscale{1.0}
\plotone{fig3.pdf}
\caption{Top: a snapshot of the June~13 event (base difference) with overlaid regions for which DEM solutions were attempted. Bottom: the DEM solutions for regions 2 (left) and 3 (right) as overlaid dotted histograms for the two times T1 (red) and T2 (green).\label{fig3}}
\end{figure}
\clearpage
\begin{figure}
\epsscale{1.0}
\plotone{fig4.pdf}
\caption{SDO/AIA (top panels) and STEREO-Ahead/EUVI (bottom panels) base difference images during the June 12 (left) and 13 (right) events. The PFSS model coronal fields are overlaid to show the topology in which the waves/shocks propagated.\label{fig4}}
\end{figure}
\clearpage
\section{Introduction}
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth]{figures/variety}
\vspace{-0.2cm}
\caption{We compute frame fields as maps into the \emph{octahedral} and \emph{odeco varieties}, a projected slice of which is depicted here.}
\label{fig.variety-slice}
\vspace{-0.5cm}
\end{figure}
Inspired by the success of field-based approaches to quadrilateral meshing
on surfaces (\emph{cf.} \cite{vaxman2016}), recent research in applied geometry has focused on developing an
analogous approach to hexahedral meshing.
Motivated by applications in finite-element modeling, hexahedral meshing
is the problem of dividing a given volume into hexahedral elements (deformed cubes)
with minimal distortion and such that mesh boundary faces are aligned to the boundary
of the volume.
Hexahedral meshing couples a geometry problem---minimizing distortion
of mesh elements---to a combinatorial problem---placing mesh elements to achieve a desired
connectivity structure.
As in 2D, field-based approaches first ignore combinatorial constraints
and solve for a \emph{frame field}, which represents the local alignment
and singular structure of a mesh (see Figure \ref{fig.teaser}).
Then, they integrate that field to guide the placement of hex elements~\cite{nieser2011cubecover}.
So as not to impose unnatural constraints, the space
of frame fields must be expressive enough to represent the range of possible singularities that
may appear in hexahedral meshes.
These singularities are described by gluing relations
restricted to the symmetries of a cube, i.e., the octahedral group.
One might hope that 2D field-based methods would extend easily to 3D.
There are at least two obstructions to transferring ideas from the 2D case.
First, the singular structure of
a 3D frame field can be much more complicated than that of a cross field,
comprising an embedded graph rather than a set of isolated points.
Second, we must understand the geometry of the space of frames to measure and optimize smoothness of a frame field.
Complementing the recent results of \citet{liu} on frame field singularities
and necessary conditions for hex-meshability,
in the present work we study the second challenge, namely characterizing the geometry of
the space of frames.
A 3D frame field is, intuitively, an assignment of three mutually orthogonal directions
to each point in a volume. An orthonormal basis of three
vectors---comprising an orthogonal matrix---is sufficient to specify a frame,
but thanks to octahedral symmetry, multiple orthogonal matrices can specify the same frame.
Formally, the space of octahedral frames can be viewed as
the quotient of the group of 3D rotations
$\mathrm{SO}(3)$ by the right action of
the octahedral group $\mathrm{O}$ comprising the rotational symmetries of a cube
(see \S\ref{sec.frame-space}).
The non-commutativity of 3D rotations makes the geometry of this quotient more complicated
than that of its 2D counterpart.
Recent attempts to lift ideas and techniques from 2D have either ignored the
geometry of the frame space entirely or treated it as a black box for
nonlinear optimization.
Consider perhaps the simplest form of optimization on a space, projection---finding
the closest point in the space to a given point in an ambient space.
Previous work on frame fields has treated this projection problem
as a nonlinear, nonconvex optimization problem
over frames parametrized by Euler angles, with no guarantees on
convergence or global optimality.
Our description of the octahedral frame space as an algebraic variety
suggests a different approach to projection based on
semidefinite programming, which yields a certificate of global optimality in polynomial time.
Our semidefinite relaxation of projection is exact in a neighborhood of
the octahedral variety, and we conjecture---with strong empirical evidence---that it is
so universally.
Even when conducting local optimization on the space of octahedral frames, parametrization
by Euler angles may not be the best approach. We show that the map
from $\mathrm{SO}(3)$ to the octahedral variety is a local isometry, enabling us to compute
geodesics on the octahedral variety in closed form. Manifold-based optimization
that moves along geodesics can then be used to accelerate local optimization dramatically.
Beyond precisely characterizing the space of octahedral frames, our algebraic approach admits a generalization to frames whose axes scale independently. This larger space better captures frame field geometry, e.g.\ allowing for a nonzero direction aligned to singular arcs even if the directions orthogonal to the arcs must vanish. We call these new objects \emph{odeco frames}, thanks to their construction using orthogonally decomposable tensors, and we derive relevant projection operators.
Our experiments show how the theoretical objects we study enable volumetric frame field design in practice. In particular, we apply standard manifold-based optimization algorithms to field design, built on our differential and algebraic descriptions of octahedral and odeco frames (see Figure \ref{fig.variety-slice}). The end result is an efficient suite of techniques for producing smooth fields that obey typical constraints for our target application.
\paragraph{Outline} In \S\ref{subsec.octa} we reintroduce the spherical harmonic representation of octahedral frames and demonstrate how this amounts to an equivariant \emph{isometric} embedding of the quotient $\mathrm{SO}(3)/\mathrm{O}$ into $\mathbb{R}^9$. In \S\ref{subsec.odeco} we introduce the \emph{odeco} frames, whose axes can scale independently, and we exhibit the spaces of octahedral and odeco frames as nested varieties cut out by quadratic equations. In \S\ref{sec.ingredients} we describe essential primitives for optimization over octahedral and odeco frames, namely geodesics and projection via semidefinite programming. In \S\ref{sec.fields} we formulate the frame field optimization problem. In \S\ref{sec.alg} we describe two algorithms for optimizing fields. \S\ref{sec.exper} describes experiments using these algorithms. A discussion and conclusion follow in \S\ref{sec.concl}. Further experimental results are included in the supplemental material. In summary, our contributions are
\begin{itemize}[leftmargin=*]
\item a proof of isometric embedding of $\mathrm{SO}(3)/\mathrm{O}$ in $\mathbb{R}^9$;
\item descriptions of the spaces of octahedral and more general odeco frames as nested algebraic varieties; and
\item new state-of-the-art optimization techniques for volumetric frame fields valued in both varieties, featuring geodesics and SDP-based projection as primitives.
\end{itemize}
\section{Related Work}
\label{sec.related-work}
\subsection{2D Frame Fields and Quadrilateral Meshing}
Cross fields, and their application to quad meshing, have been studied extensively
in geometry processing (\emph{cf.} \cite{vaxman2016}). A useful insight from
cross field research is that it is advantageous to replace field values defined up to some
symmetry with a \emph{representation vector}---that
is, some function of the field value invariant under the relevant symmetry. In two dimensions,
four vectors forming a right-angled cross can be represented in a unified manner by
their common complex fourth power. This amounts to an embedding of a quotient manifold
into a Euclidean space; we show that the octahedral variety generalizes
this idea to 3D in a natural and isometric manner.
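To make the 2D construction concrete, the following sketch (our own illustration) maps a cross with axis angle $\theta$ to its representation vector $e^{4i\theta}$ and back:
\begin{verbatim}
import numpy as np

def cross_to_rep(theta):
    # common fourth power of the four unit directions of the cross
    return np.exp(4j * theta)

def rep_to_cross(z):
    # recover one of the four equivalent axis angles
    return np.angle(z) / 4.0

theta = 0.3
assert np.isclose(rep_to_cross(cross_to_rep(theta)), theta)
# all four directions theta + k*pi/2 map to the same point:
assert np.isclose(cross_to_rep(theta), cross_to_rep(theta + np.pi / 2))
\end{verbatim}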
Recently, an effort
has been made to unify and formalize the various algorithmic approaches to cross
fields, borrowing
from the Ginzburg--Landau theory from physics \cite{beaufort2017computing,viertel,osting2017}.
This amounts to replacing
an ill-posed unit norm constraint with a penalty term and taking
the limit as the penalty parameter goes to infinity. \citet{viertel}
show that local optima of such a procedure have isolated singularities
with indices $\pm 1/4$, as appropriate for quad meshing.
They propose a diffusion-generated algorithm to compute such local optima.
Inspired by the work of Merriman, Bence, and Osher (MBO) \shortcite{MBO1} on
mean curvature flow, this algorithm alternates between finite-time diffusion
and pointwise normalization. \citet{osting2017}
study an analogous method applied to orthogonal matrix--valued fields.
Our projection operators enable
us to develop similar MBO diffusion-generated methods for
optimization of octahedral and odeco fields. In \S\ref{sec.exper} we demonstrate that a modified strategy, where the diffusion parameter is adjusted on-the-fly, frequently helps to avoid local minima.
\subsection{3D Frame Fields and Hexahedral Meshing}
\citet{huang} introduce a representation of frames as functions over the sphere that
exhibit octahedral symmetry, parametrized by coefficients in the spherical
harmonic basis. As an initialization step, they solve a Laplace equation,
resulting in coefficients in the interior that
do not correspond to octahedral frames. These must be projected via nonconvex
optimization over frames parametrized by Euler angles.
They further optimize the results by minimizing a discrete Dirichlet energy
over Euler angles.
\citet{RaySokolov2} refine the three stages of this approach,
with improved boundary constraints in the initialization, an efficient projection technique,
and an L-BFGS optimization algorithm.
\citet{Solomon} reformulate the initialization step of
\citet{RaySokolov2} using the boundary element
method (BEM). This provides a way to harmonically interpolate the
boundary conditions exactly, ignoring the constraints in the interior. Finally,
sampled interior values are
approximately projected onto the constraints as in the previous methods.
As the constraints are ignored at the Dirichlet energy--minimization stage,
there is no sense in which the final frame fields are optimal.
A related but distinct problem is the computation of fields of symmetric
second order tensors (i.e., symmetric matrices) \cite{Palacios2017}.
Every symmetric matrix has an orthonormal basis of eigenvectors, which
corresponds to an octahedral frame. One might thus
think symmetric matrix fields can be used to parametrize octahedral frame fields.
While a symmetric matrix field corresponds to at least one
frame field, the correspondence is not one-to-one---for example, a field of
identity matrices corresponds to all frame fields. Moreover, symmetric matrix
fields can only represent singularities whose indices are multiples of $\pm 1/2$,
while indices $\pm 1/4$ are crucial for hex meshing \cite{liu}. Our fourth order octahedral and odeco fields are rich enough to represent all indices.
The work of \citet{Fang2016} on generalized polycubes imposes an even more extreme restriction: singularities may only appear on the boundary.
After cutting handles,
the regular frame field in the interior of the volume may be represented by a
field of rotation (orthogonal) matrices. This is further relaxed to a field of matrices, and
orthogonality is approximately enforced via a penalty term. The resulting field
is used to construct a polycube map of the cut volume, through which a
hex mesh is pulled back. By requiring regularity in the interior,
this method may exclude fields that achieve lower distortion overall.
The main driving force for research on 3D frame fields has been hexahedral mesh generation. For a broader overview we refer the reader to the surveys of \citet{Armstrong:2015:CTM} and \citet{Yu:2015:RAA}, and we limit the subsequent discussion to techniques involving frame fields. The idea of such methods is to construct a volumetric integer-grid map \cite{liu}, through which portions of the Cartesian integer grid are pulled back to a structure-aligned hexahedral mesh in the input domain. \citet{nieser2011cubecover} introduced a parametrization technique that turns a given frame field into an integer-grid map by solving a mixed-integer Poisson problem.
Extraction of the hexahedral mesh from the map is hampered by degeneracies in the map, motivating the sanitization technique of \citet{Lyon:2016:HRH} to improve robustness.
Several refinements of the above ideas have been proposed, including different guidance for the frame field \cite{li2012} and heuristics to improve the singularity structure by decimation or splitting \cite{li2012,Jiang:2013:AHM} to avoid degeneracies of the map.
While robust hexahedral meshing based on frame fields remains an open problem, recently several hex-dominant meshing algorithms have been proposed \cite{Sokolov:2016:HM,Gao:2017:RHM} that also use frame fields but circumvent the problem of non-meshable singularities. \citet{Gao:2017:RHM} propose a hierarchical optimizer for frame fields that is based on local relaxation.
However, many practical applications demand pure hexahedral meshes---e.g.~for the construction of volumetric spline spaces in isogeometric analysis---and consequently a full understanding of meshable field topology is required. To this end, \citet{liu} enumerate the singular vertex types that may occur in a hex mesh with bounded edge valence; they also develop a topological index formula analogous
to the Poincar\'e-Hopf formula for vector fields on surfaces. These local and global
constraints being established, they propose an algorithm to generate a (meshable)
frame field from a prescribed (meshable) singular structure.
The theoretical portion of our work complements the topological
work of \citeauthor{liu} with a closer study of the geometry of spaces of frames.
In concurrent work, \citet{Corman2019} propose an
alternative approach to computing a frame
field with prescribed singular structure via a discretized connection on a frame bundle.
This approach does not immediately extend to a method for \emph{de novo} frame field design.
However, the connection associated to a field is a natural object for the study of such
properties as integrability. We leave to future work the study of connections
associated to fields valued in the octahedral and odeco varieties and the extraction of such
connections directly from the field coefficients.
\subsection{Alternative Frame Representations}
Complementing the spherical harmonic--based representation in the papers above on octahedral frame fields, \citet{chemin} propose an equivalent representation of octahedral frames as
certain symmetric tensors of order four. They introduce algebraic equations
characterizing the octahedral frames among symmetric fourth-order tensors. These equations
are equivalent under a change of basis to our defining equations for the octahedral variety.
However, by using a basis suggested by the structure of $\mathrm{SO}(3)$, we are able to
not only present the defining equations of the octahedral variety,
but also illuminate how it is an isometric embedding
of the quotient space $\mathrm{SO}(3)/\mathrm{O}$, enabling us to compute geodesics.
Additionally, we use our algebraic description of the octahedral variety to introduce a
semidefinite relaxation of projection and to place it in the context of
the more general odeco variety.
In concurrent work, \citet{golovaty2019variational} also represent frames by fourth-order symmetric tensors subject to some algebraic conditions, and they propose gradient flow on a Ginzburg-Landau--type energy to optimize for smooth fields.
An additional alternative representation is proposed by \citet{Gao:2017:RHM}, who use quaternions to encode octahedral frames. Their representation uses relatively few values, but a matching procedure has to be embedded in their optimization objective to account for the fact that $48$ quaternions correspond to the same frame. The concurrent work \cite{beaufort2019quaternionic} uses quaternions to derive another parametrization of octahedral frames by points of a variety in three complex dimensions.
All previous work on 3D frame fields has only considered octahedral frames, which
do not capture the unidirectional behavior of frame fields near singular curves
(\emph{cf.} Figure \ref{fig.prism}).
In the algebraic geometry community,
\citet{Robeva} and \citet{Boralevi}
have completely characterized a family of algebraic varieties
of \emph{orthogonally decomposable} (odeco) tensors.
We show how the octahedral variety is embedded in one of the odeco varieties,
and we introduce a technique for optimization over the odeco frames.
For our purposes, odeco frames generalize octahedral frames by allowing independent
scaling of the ``axes'' of a frame, including degeneration to unidirectionality
at singular curves and to zero at singular nodes.
Finally, \citet{shen2016harmonic} consider fields having different local discrete
symmetry groups, such as the tetrahedral and icosahedral groups. They
extend the methods of \cite{huang,RaySokolov2} to compute such fields.
\subsection{Semidefinite Relaxations}
Relaxation of algebraic optimization problems to semidefinite
programs has been studied extensively in the field of real algebraic geometry
and optimization. \citet{blekherman2012semidefinite} provide an introduction to this discipline.
The efficacy of semidefinite
relaxation in computer science was demonstrated dramatically in the seminal paper
of \citet{goemans1995improved} on the maximum cut problem. Since then,
semidefinite relaxation has continued
to be employed to solve both combinatorial and continuous optimization problems,
such as angular synchronization \cite{singer2011angular}.
In geometry processing and graphics, this machinery has been applied to such
problems as correspondence \cite{kezurer2015tight}, consistent mapping \cite{huang2013consistent}, registration \cite{maron2016point}, camera motion estimation/calibration \cite{agrawal2003camera,ozyesil2015stable}, and deformation \cite{kovalsky2014controlling}.
Most relevant to the present work are relaxations of the Euclidean projection problem
onto a variety defined by quadratic equations,
an example of a quadratically-constrained quadratic program (QCQP).
\citet{cifuentes2017} have recently shown a stability
result implying that the semidefinite relaxation of Euclidean projection
onto a smooth, quadratically-defined variety is exact
in a neighborhood of the variety. \citet{cifuentes2018}
have additionally shown that the region in which the relaxation is exact is a
\emph{semialgebraic set}, and they have provided a formula for the degree of
its algebraic boundary. Unfortunately, computing this boundary is generally
intractable for interesting varieties.
A deeper theoretical understanding of when
semidefinite relaxations of Euclidean projection are globally exact is still lacking.
\section{Spaces of Frames}
\label{sec.frame-space}
As previously studied in \cite{huang, RaySokolov2, Solomon, chemin},
the basic unknown in the volumetric frame field problem is a tuple of three mutually orthogonal directions at each point in a volume. These directions
may be represented by an orthonormal basis of vectors, but the signs and order of the vectors
are irrelevant thanks to octahedral symmetry.
This redundancy makes detecting smoothness of a field of tuples difficult.
Hence, it pays to use a unified representation invariant to the symmetries of the frame.
Below, we apply machinery from differential and algebraic geometry to derive a succinct
description of this basic octahedral frame and show how it is related to
rotations of the function $x_1^4+x_2^4+x_3^4$ on the unit sphere expressed
in the spherical harmonic basis
(as used to represent frames in \cite{huang, RaySokolov2, Solomon}) and to tensorial
representations (as used in \cite{chemin}). Algebraic language not only provides a succinct
description of previous representations but also suggests a means of generalizing to frames
whose three axes scale independently
(e.g.\ rotations of the function $\sum_i \lambda_i x_i^4$ for
varying $\lambda\in\mathbb{R}^3$), whereas previous work requires them to have the same length
($\lambda_1=\lambda_2=\lambda_3$). This broader set better aligns with the realities of
the frame field problem, since singular edges have nontrivial directionality that
cannot be captured by existing representations.
Relevant background material may be found in Appendix \ref{app.background}.
\subsection{Octahedral Variety}
\label{subsec.octa}
Intuitively, an octahedral frame field is a smooth assignment of
(unoriented) orthonormal coordinate axes to each point $x$ in a region
$\Omega \subset \mathbb{R}^3$. Such frames are called octahedral because
they exhibit symmetries described by the octahedral group $\mathrm{O}$.\footnote{Technically,
their stabilizers are conjugate to $\mathrm{O}$.}
The space of such octahedral frames
can be identified with the quotient space $\mathcal{F}~=~\mathrm{SO}(3)/\mathrm{O}$ --- that is, the quotient
of the group of oriented rotations by the right action of the octahedral
group. Since $\mathrm{O}$ is not a normal subgroup of $\mathrm{SO}(3)$ (indeed, $\mathrm{SO}(3)$
is simple), $\mathcal{F}$ is not a group.
However, as $\mathrm{O}$ is a finite group acting freely on $\mathrm{SO}(3)$, $\mathcal{F}$
is a manifold, and the surjective map $\mathrm{SO}(3) \to \mathcal{F}$ is a covering map.
The universal cover of $\mathcal{F}$ is that of $\mathrm{SO}(3)$,
namely $\mathrm{SU}(2)$, and so its fundamental group
$\pi_1(\mathcal{F})$ is the lift of $\mathrm{O}$ to $\mathrm{SU}(2)$, the binary octahedral
group $\mathrm{BO}$. In particular, $\mathrm{BO}$ classifies singular curves of octahedral fields \cite{mermin_topological_1979}.
$\mathrm{SO}(3)$ admits a bi-invariant Riemannian metric \eqref{eq.bi_invariant}, which
means that the action of $\mathrm{O}$ by right-multiplication is isometric.
Thus the Riemannian metric on $\mathrm{SO}(3)$ descends to $\mathcal{F}$,
making it a Riemannian manifold. In particular, $\mathcal{F}$
has geodesics that lift to
geodesics of $\mathrm{SO}(3)$. We will employ these geodesics to compute optimization sub-steps
on $\mathcal{F}$ (\emph{cf.} \S\ref{subsec.geodesics}).
We have introduced $\mathcal{F}$ as an abstract smooth manifold, but an embedding
of $\mathcal{F}$ in a Euclidean space is necessary to make it
amenable to computation. Previous works have introduced this equivariant
embedding via the
action of $\mathrm{SO}(3)$ on polynomials and spherical harmonic coefficients.
For completeness, we reintroduce it here, but in a slightly different way
that highlights the role of representation theory and the orbit-stabilizer theorem.
Later, we will show that the embedding of $\mathcal{F}$ in $\mathbb{R}^9$ is an \emph{equivariant
isometric embedding} (\emph{cf.} \cite{Moore1976}).
Consider the irreducible representation $\rho: \mathrm{SO}(3) \to \mathrm{SO}(9)$ corresponding
to the fourth band of spherical harmonics.
The $9 \times 9$ orthogonal matrices $\rho(r)$ for $r \in \mathrm{SO}(3)$ are sometimes referred to as Wigner $D$-matrices. Form a linear operator $H\in\mathbb{R}^{9\times9}$ as
\[ H = \frac{1}{|\mathrm{O}|}\sum_{o \in \mathrm{O}} \rho(o). \]
This $H$ is a projection operator onto the subspace of $\mathbb{R}^9$ invariant under all octahedral rotations:
\begin{lemma} \label{lem.oct-avg}
For any $o' \in \mathrm{O}$, $\rho(o')H = H$.
\end{lemma}
\begin{proof}
\[ \rho(o')H = \frac{1}{|\mathrm{O}|}\sum_{o \in \mathrm{O}} \rho(o')\rho(o)
= \frac{1}{|\mathrm{O}|}\sum_{o \in \mathrm{O}} \rho(o' o)
= \frac{1}{|\mathrm{O}|}\sum_{o \in \mathrm{O}} \rho(o) = H. \qedhere \]
\end{proof}
\begin{corollary}
For $q \in \mathbb{R}^9$, $\rho(\mathrm{O})\cdot q = q$ if and only if $q \in \mathrm{Im}(H)$.
\end{corollary}
\begin{proof}
The forward implication follows by writing $q = Hq$. The
reverse implication follows directly from Lemma \ref{lem.oct-avg}.
\end{proof}
Because $\mathrm{O}$ is a maximal subgroup of $\mathrm{SO}(3)$,
we have the following corollary.
Here $\stab(q)$ denotes the stabilizer of $q$---that is,
the subgroup of $\mathrm{SO}(3)$ leaving $q$ invariant (see Appendix \ref{app.background}).
\begin{corollary}
Let $q \in \mathrm{Im}(H)$ be nonzero. Then $\stab(q) = \mathrm{O}$.
\end{corollary}
It happens that $H \in \mathbb{R}^{9\times 9}$ has rank one, i.e., its image is one-dimensional,
motivating the following definitions.
\begin{definition}
The \textbf{canonical octahedral frame} is the normalized vector
\[ q_0 = \left(0, 0, 0, 0, \sqrt{\frac{7}{12}}, 0, 0, 0, \sqrt{\frac{5}{12}}\right)^{\mathstrut\scriptscriptstyle{\top}} \in \mathbb{R}^9 \]
such that $H = q_0 q_0^{\mathstrut\scriptscriptstyle{\top}}$.
\end{definition}
Our canonical frame is the same as the normalized
projection of the polynomial $\sum_i x_i^4$ into the fourth band
of spherical harmonics, as in, e.g., \cite{Solomon}.
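These facts are easy to verify numerically. In the sketch below (our own), the $24$ rotations of $\mathrm{O}$ are realized as signed permutation matrices of determinant one, and \texttt{wigner\_d4} is a hypothetical routine returning the $9\times9$ band-4 rotation $\rho(r)$ (it can be assembled from the generators listed in the supplemental material):
\begin{verbatim}
import numpy as np
from itertools import permutations, product

def octahedral_group():
    # the 24 rotations of the cube: signed permutation
    # matrices with determinant +1
    out = []
    for perm in permutations(range(3)):
        for signs in product([1, -1], repeat=3):
            R = np.zeros((3, 3))
            for i, (j, s) in enumerate(zip(perm, signs)):
                R[i, j] = s
            if np.isclose(np.linalg.det(R), 1.0):
                out.append(R)
    return out

# wigner_d4(R): hypothetical helper returning the 9x9 band-4
# representation rho(R)
H = np.mean([wigner_d4(R) for R in octahedral_group()], axis=0)
assert np.linalg.matrix_rank(H, tol=1e-8) == 1  # H = q0 q0^T
\end{verbatim}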
\begin{definition}
The \textbf{octahedral variety} is the orbit of $q_0$ under the action of $\mathrm{SO}(3)$,
\[ F = \rho(\mathrm{SO}(3)) q_0 = \{ \rho(r) q_0 \mid r \in \mathrm{SO}(3) \}. \]
\end{definition}
To summarize, $F$ is the orbit of $q_0$, whose stabilizer
is $\mathrm{O}$. A smooth version of the orbit-stabilizer theorem
\cite[Theorem 21.18]{lee2012} shows that there is an equivariant diffeomorphism
$\phi: \mathcal{F} \to F$:
\begin{center}
\begin{tikzcd}
\mathrm{SO}(3) \arrow[d, two heads] \arrow[r, "\rho"] & \mathrm{SO}(9) \arrow[d, two heads, "\cdot q_0"] \\
\mathcal{F} \arrow[r, dashed, "\phi"] & F
\end{tikzcd}
\end{center}
Next, we will characterize the Riemannian geometry of $F$ by showing that $\phi$
is an isometry up to a uniform scale factor.
To our knowledge, this observation has not appeared in previous work.
\begin{proposition}
\label{thm.isometric}
Let $\alpha = \sqrt{3/20}$ and
$F_\alpha \coloneqq \alpha F = \{\alpha q : q \in F\}$.
Let $\pi_\alpha:~\mathbb{R}^{9\times 9}~\to~\mathbb{R}^9$ denote matrix multiplication by
the scaled canonical octahedral frame $q_\alpha \coloneqq \alpha q_0$. That is, $\pi_\alpha(A) = A q_\alpha$.
Then the map
\[\pi_\alpha \circ \rho: \mathrm{SO}(3) \to F_\alpha \]
is a local isometry, making
the induced diffeomorphism $\phi_\alpha : \mathcal{F} \to F_\alpha$ an isometry.
\end{proposition}
\begin{proof}
Taking the differential of $\rho$ at the identity yields
the associated Lie algebra representation
\[ (D\rho)_I : \mathfrak{so}(3) = T_I \mathrm{SO}(3) \to \mathfrak{so}(9) \subset \mathbb{R}^{9\times 9}.\]
$(D\rho)_I$ is characterized by
$L_i\coloneqq(D\rho)_I(l_i)$, the images of the Lie algebra generators $l_i$ (see Appendix \ref{app.background} for definitions and supplemental material for explicit expressions).
$\pi_\alpha$ is a linear map, so its differential is also multiplication by $q_\alpha$.
To see that $\pi_\alpha \circ \rho$ is a local isometry, first recall that
the metric on $\mathrm{SO}(3)$ is bi-invariant. In particular, for each $g \in \mathrm{SO}(3)$,
an orthonormal basis for $T_g\mathrm{SO}(3)$ is given by the right-translated Lie algebra generators
\[ \{(D\Rtrans_g)_I l_i\}_{i=1}^3. \]
So it suffices to prove that their images under $\pi_\alpha \circ \rho$ form an orthonormal basis in $T_{\rho(g)q_\alpha}F_\alpha$.
But
\begin{equation}
\begin{aligned}
D(\pi_\alpha \circ \rho)_g (D\Rtrans_g)_I l_i
&= (D\pi_\alpha) (D\rho)_g (D\Rtrans_g)_I l_i \\
&= (D\pi_\alpha) (D\Rtrans_{\rho(g)})_I (D\rho)_I l_i \\
&= (D\pi_\alpha) L_i \rho(g) \\
&= L_i \rho(g) q_\alpha,
\end{aligned} \end{equation}
where the second equality follows by differentiating the
representation property $\rho \circ \Rtrans_g = \Rtrans_{\rho(g)} \circ \rho$
at the identity.
So the equation we need to prove is
\begin{equation} \label{eq.iso_condition}
\left\langle L_i q, L_j q\right\rangle = \delta_{ij} \quad \forall q \in F_\alpha.
\end{equation}
This isometry condition can be checked explicitly at $q_\alpha$:
\begin{equation}
\label{eq.iso_at_qalpha}
\left\langle L_i q_\alpha, L_j q_\alpha \right\rangle = \delta_{ij}
= \left\langle l_i, l_j \right\rangle.
\end{equation}
For a general $q = \rho(g)q_\alpha \in F_{\alpha}$, we compute
\begin{equation}
\begin{aligned}
\left\langle L_i \rho(g) q_\alpha, L_j \rho(g) q_\alpha\right\rangle
&= \left\langle \rho(g)^{\mathstrut\scriptscriptstyle{\top}} L_i \rho(g) q_\alpha, \rho(g)^{\mathstrut\scriptscriptstyle{\top}} L_j \rho(g) q_\alpha\right\rangle \\
&= \left\langle D\rho_I(g^{\mathstrut\scriptscriptstyle{\top}} l_i g) q_\alpha, D\rho_I(g^{\mathstrut\scriptscriptstyle{\top}} l_j g) q_\alpha\right\rangle.
\end{aligned}
\label{eq.iso_conj}
\end{equation}
The first equality follows because $\rho(g) \in \mathrm{SO}(9)$, and the second follows by Lemma \ref{lem.adj_comm}.
Let $a_{ik} = a_{ik}(g)$ be the coefficients of the adjoint representation of $\mathrm{SO}(3)$ (see Appendix \ref{app.background}), i.e., $g^{\mathstrut\scriptscriptstyle{\top}} l_i g \eqqcolon \sum_k a_{ik} l_k$; these coefficients form an orthogonal matrix.
Now using linearity of $D\rho$ along with \eqref{eq.iso_at_qalpha}
and \eqref{eq.iso_conj}, we obtain
\begin{equation}\begin{aligned}
\left\langle L_i \rho(g) q_\alpha, L_j \rho(g) q_\alpha\right\rangle
&= \left\langle \sum_r a_{ir} L_r q_\alpha, \sum_s a_{js} L_s q_\alpha \right\rangle \\
&= \sum_{r, s} a_{ir} a_{js} \langle L_r q_\alpha, L_s q_\alpha \rangle \\
&= \sum_k a_{ik} a_{jk} = \delta_{ij}
\end{aligned}\end{equation}
as required.
\end{proof}
In summary, we have described the octahedral variety as an embedded submanifold in
$\mathbb{R}^9$ isometric to $\mathcal{F} = \mathrm{SO}(3)/\mathrm{O}$. This isometry means that we can do
manifold optimization over frames by working in the embedding (\emph{cf.} \S\ref{subsec.rtr}).
To show that $F$ is really an algebraic variety, we will need to exhibit equations
that cut it out. We will delay this until \S\ref{subsec.octa_odeco}, when we
can give a unified description of the octahedral and odeco varieties.
\subsection{Odeco Variety}
\label{subsec.odeco}
The previous section provides more insight
into the octahedral frames used in all previous work.
Octahedral frames are suitable for representing singularity-free frame fields.
However, frame fields commonly encountered in applications
have singularities comprising an embedded graph.
Consider the triangular prism shown in Figure \ref{fig.prism}.
Near the singular curve, a smooth octahedral field would rotate infinitely
quickly, and it would not have a well-defined value along the curve.
This issue is analogous to the case of \emph{unit} cross fields---note
that for cross fields, the hairy ball theorem \emph{requires}
singularities on simply-connected surfaces.
\begin{figure}
\centering
\begin{subfigure}[t]{0.45\columnwidth}
\includegraphics[width=\textwidth]{figures/prism_scaling_cubes_stacked2_composed}%
\end{subfigure}
\qquad%
\begin{subfigure}[t]{0.45\columnwidth}
\includegraphics[width=\textwidth]{figures/prism_scaling_composed2}%
\end{subfigure}
\caption{An odeco field
on a triangular prism. The norm of the second band coefficients
indicates degree of anisotropy (left). Odeco frames close to
the approximate position of the singular curve, indicated by a dot (right), scale toward
zero in the directions normal to the curve, but remain nondegenerate in the
direction along the curve.}
\label{fig.prism}
\end{figure}
One solution to this is to replace the hard constraint that the field values
lie in the octahedral variety with a penalty term,
which motivates the MBO method for octahedral fields detailed below (\S\ref{subsec.mbo}).
Another solution is homogenization---i.e., allowing field values to scale,
replacing singularities by zeros, as is considered for cross fields in \cite{knoppel2013}.
But consider the triangular prism again: a scaled octahedral field
would vanish completely at the singular curve since all three orthogonal axes must scale
uniformly. This makes the octahedral representation unable to capture the alignment
of the field to the singular curve.
To cope with this problem and to show the value of our algebraic approach,
we now describe a superset of the octahedral frames.
This set allows the axes to scale independently; for instance,
as shown in Figure \ref{fig.prism},
the frame axes orthogonal to the singular curve scale toward zero
while the axis tangent to the singular curve remains nonzero.
The symmetric orthogonally decomposable (\emph{odeco}) tensor varieties, introduced
in \cite{Robeva}, parametrize symmetric tensors $T \in \Sym^d \mathbb{R}^n$
that can be written
\[ T = \sum_{i=1}^n \lambda_i (v_i)^{\otimes d} \]
for some set of $n$ orthonormal vectors $v_i \in \mathbb{R}^n$,
where $v^{\otimes d}$ denotes the $d$-wise tensor power of the vector $v$.
That is, an odeco tensor
encodes a set of orthogonal vectors up to permutation.
If $d$ is even, $T$ is also invariant under sign changes to the $v_i$.
Moreover, an odeco tensor assigns \emph{weights} $\lambda_i$
to the vectors $v_i$. This property allows us to encode frames whose
axes scale independently.
There is a one-to-one correspondence between symmetric tensors of order $d$
over $\mathbb{R}^n$ and homogeneous polynomials of degree $d$ in $n$ variables
(\emph{cf.}\ \cite[\S1.2]{Robeva}).
This correspondence is given in one direction by taking a polynomial
$p \in \mathbb{R}[x_1, \dots, x_n]$ to
its tensor of $d$th derivatives, and in the other direction by symbolic evaluation
on the vector of formal variables $x = (x_1, \dots, x_n)$:
\[ T \in \Sym^d \mathbb{R}^n \mapsto
p_T(x) = \frac{1}{d!} T(x, \dots, x) \in \mathbb{R}[x]. \]
This is a generalization of the correspondence between symmetric bilinear forms
(i.e. symmetric $2$-tensors) and quadratic forms (i.e., homogeneous quadratic polynomials).
Note that $T$ is odeco if and only if $p_T$ can be written
as a sum of $d$th powers of linear forms
\[ p_T(x) = \sum_i \lambda_i (v_i^{\mathstrut\scriptscriptstyle{\top}} x)^d \]
where the $v_i$ are orthonormal as above.
In this case, we also refer to $p_T$ as an odeco polynomial.
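As a concrete illustration (a sketch of our own, not code from \cite{Robeva}), an odeco tensor and its associated quartic can be assembled directly from an orthonormal triple and weights:
\begin{verbatim}
import numpy as np

def odeco_tensor(V, lam):
    # T = sum_i lam_i * v_i^{(x)4} for orthonormal rows v_i of V
    T = np.zeros((3, 3, 3, 3))
    for v, l in zip(V, lam):
        T += l * np.einsum('a,b,c,d->abcd', v, v, v, v)
    return T

def quartic(T, x):
    # evaluate T(x, x, x, x) = sum_i lam_i (v_i . x)^4
    return np.einsum('abcd,a,b,c,d->', T, x, x, x, x)

V = np.eye(3)  # the canonical axes
T = odeco_tensor(V, lam=[1.0, 1.0, 1.0])
x = np.array([0.0, 0.0, 1.0])
print(quartic(T, x))  # x1^4 + x2^4 + x3^4 at (0,0,1) -> 1.0
\end{verbatim}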
The defining equations of the odeco varieties are quadrics---homogeneous quadratic
equations---in the tensor coefficients \cite[Theorem 4]{Boralevi}
or equivalently in the coefficients of the associated polynomial $p_T$. That is,
a homogeneous polynomial
\[ p(x) = \sum_{d_1 + \dots + d_n = d} u_{d_1, \dots, d_n}
x_1^{d_1} \dots x_n^{d_n} \]
is odeco if and only if the coefficients $u$ satisfy
\begin{equation} \label{eq.odeco_quads}
u^{\mathstrut\scriptscriptstyle{\top}} A_i u = 0 \end{equation}
for a finite set of symmetric matrices $A_i$. In the case
relevant to us, where $n = 3$ and $d = 4$,
there are exactly $27$ such defining equations.
While dimension counting might suggest otherwise, these equations are not redundant---as can be seen by computing a Gr\"obner basis of the ideal they generate.
The matrices $A_i$ are listed explicitly in the supplemental document.
We will henceforth refer to this particular odeco variety simply as
the \textbf{odeco variety} $\tilde{F}$. Figure \ref{fig.odeco_examples}
plots several fourth-order odeco polynomials over the unit sphere.
\begin{figure}[h]
\centering
\newcommand{\odecoheight}{0.2\columnwidth}
\begin{tabular}{cccc}
\includegraphics[align=c,height=\odecoheight]{figures/odeco1} &%
\includegraphics[align=c,height=\odecoheight]{figures/odeco4} &%
\includegraphics[align=c,height=\odecoheight]{figures/odeco2} &%
\includegraphics[align=c,height=\odecoheight]{figures/odeco5}
\end{tabular}
\caption{%
Examples of $\boldsymbol{z}$-aligned odeco polynomials, plotted over the sphere.}
\label{fig.odeco_examples}
\end{figure}
\subsubsection{Octahedral Variety as a Subvariety of Odeco Variety}
\label{subsec.octa_odeco}
Recall that our octahedral frames were represented by coefficients in an irreducible
representation of $\mathrm{SO}(3)$, while the odeco variety was defined using monomial or tensor
coefficients. To see the relationship between the two varieties,
it is beneficial to recast the odeco variety in the irreducible representation basis.
This corresponds to looking at the coefficients of odeco polynomials in
the basis of spherical harmonics.
The quartic polynomials comprising the odeco
variety decompose as linear combinations of the spherical
harmonics of bands $0$, $2$, and $4$. Consider
an odeco polynomial $\sum_{i=1}^3 \lambda_i (v_i^{\mathstrut\scriptscriptstyle{\top}} x)^4$, and let
\[ q = (q_0, q_2, q_4) \in V_0 \times V_2 \times V_4 \]
be its coefficients in this basis of even spherical harmonics. These $15$ coefficients
give us a different representation of odeco frames in $\mathbb{R}^{15}$,
where each band has a clear meaning.
In particular, $q_0 = C_0 \sum_i \lambda_i$, where $C_0$ is a constant, and similarly
\[ \|q_2\|^2 = C_2 \left(\sum_i \lambda_i^2 - \sum_{i < j} \lambda_i \lambda_j \right). \]
The parenthesized expression is the squared distance between $(\lambda_1, \lambda_2, \lambda_3)$
and the line $\lambda_1 = \lambda_2 = \lambda_3$.
Thus $q_2 = 0$ if and only if $q_4$ is a scalar multiple of an octahedral frame.
The set
\[ \{q \in \tilde{F} \mid q_2 = 0\} \]
consists of scalar multiples of the octahedral variety indexed by $q_0$.
Fix $q_2 = 0$ and fix $q_0$ at the constant value for which $\|q_4\| = 1$. The octahedral
variety is then the intersection of this affine subspace with the odeco variety.
Reducing the equations \eqref{eq.odeco_quads} of the odeco variety with respect to this subspace
yields $15$ inhomogeneous quadratic equations
\begin{equation} \label{eq.octa_quads}
\begin{pmatrix}
1 \\ q_4
\end{pmatrix}^{\mathstrut\scriptscriptstyle{\top}} P_i \begin{pmatrix}
1 \\ q_4
\end{pmatrix} = 0, \quad i = 1, \dots, 15 \end{equation}
cutting out the octahedral variety $F$.
As was the case for the odeco variety,
these equations are not redundant. The symmetric
matrices $P_i$ are listed explicitly in the supplemental document.
\section{Geodesics and Projection}
\label{sec.ingredients}
We have introduced two spaces, the octahedral and odeco varieties, that
can serve as target sets for frame fields. To compute
such fields, we will have to solve optimization problems over products
of many copies of these varieties. Na\"ively, one might
plug the quadratic constraints in \eqref{eq.odeco_quads}
and \eqref{eq.octa_quads} directly into a generic quadratic optimization solver.
However, the $A_i$ or $P_i$ are not positive semidefinite,
nor can their span be rewritten as the span of positive semidefinite matrices.
So the constraints are nonconvex and challenging to enforce.
As an alternative, we use optimization algorithms that are tailored
to the manifold-valued variables in our problem. These algorithms
employ sub-steps such as geodesic traversal and projection. We derive
these operations for a single frame below.
\subsection{Octahedral Geodesics}
\label{subsec.geodesics}
By Proposition \ref{thm.isometric}, the scaled
octahedral variety $F_\alpha$ is
locally isometric to $\mathrm{SO}(3)$ via the map $r \mapsto \rho(r) \cdot q_\alpha$.
It follows that geodesics of $\mathrm{SO}(3)$ push forward to geodesics of $F_\alpha$.
The relation \eqref{eq.exp_rep} in Appendix \ref{app.background}
then allows us to compute geodesics on $F_\alpha$ in closed form without evaluating
the representation map $\rho$ explicitly.
Let $\mathfrak{v} \in T_q F_\alpha$. Any such tangent vector can be
written in a basis induced by the $\mathrm{SO}(3)$ action, $\mathfrak{v} = \sum_{i=1}^3 v_i L_i q$.
Here the coefficient vector $v = (v_1, v_2, v_3) \in \mathbb{R}^3$ is
the ``axis-angle'' representation of a rotation, and
the $\mathrm{SO}(3)$ exponential maps it to the corresponding rotation matrix.
This exponential can be computed by conjugation
with a rotation $r$ taking the unit vector $v / \|v\|$
to $(0, 0, 1)$. Composing with the representation map, we have
\begin{equation} \label{eq.octa_exp}
\exp_q(\mathfrak{v})
= \rho(r^{\mathstrut\scriptscriptstyle{\top}} \exp(\|v\| l_3) r) q
= \rho(r)^{\mathstrut\scriptscriptstyle{\top}} \exp(\|v\| L_3) \rho(r) q,
\end{equation}
where (unsubscripted) $\exp$ denotes the ordinary matrix exponential.
To compute $\rho(r)$,
define spherical coordinates $\theta, \varphi$ such that
\[ v = \|v\| (\cos \theta \sin \varphi, \sin \theta \sin \varphi, \cos \varphi). \]
Then one possible choice for $r$ is
\[ r = \exp(-\varphi l_2)\exp(-\theta l_3)
= r_{23}^{\mathstrut\scriptscriptstyle{\top}} \exp(-\varphi l_3) r_{23} \exp(-\theta l_3), \]
where $r_{23} = \exp((\pi/2)l_1)$.
So
\begin{equation} \label{eq.rot_axis}
\rho(r) = R_{23}^{\mathstrut\scriptscriptstyle{\top}} \exp(-\varphi L_3) R_{23} \exp(-\theta L_3),
\end{equation}
where $R_{23} = \exp((\pi/2)L_1)$.
Combining \eqref{eq.rot_axis} with \eqref{eq.octa_exp},
we can compute geodesics by products of two simple ingredients:
$R_{23}$ and $\exp(t L_3)$ for $t \in \mathbb{R}$.
Closed-form expressions for both appear in the supplemental document, \S2.
Note that a formula similar to \eqref{eq.rot_axis} is common in the graphics literature,
e.g., for rotating BRDFs expressed in spherical
harmonic coefficients (\emph{cf.} \cite{kautz2002fast}, Appendix).
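To make \eqref{eq.octa_exp} and \eqref{eq.rot_axis} concrete, here is a minimal Python
sketch of a geodesic step (illustrative only, and independent of the accompanying
MATLAB implementation). It assumes the three $9 \times 9$ band-$4$ generators
$L_1, L_2, L_3$, listed in the supplemental document, are available as \texttt{L[0]},
\texttt{L[1]}, \texttt{L[2]}; the variable names are ours, not part of any library.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def octa_exp(q, v, L):
    # Geodesic step exp_q(v) on F_alpha, following (eq.octa_exp) and
    # (eq.rot_axis). q: point on the variety; v = (v1, v2, v3): tangent
    # coefficients in the basis {L_i q}; L: the three generators.
    t = np.linalg.norm(v)
    if t < 1e-12:
        return q
    # spherical coordinates (theta, phi) of the rotation axis v / ||v||
    theta = np.arctan2(v[1], v[0])
    phi = np.arccos(np.clip(v[2] / t, -1.0, 1.0))
    R23 = expm((np.pi / 2) * L[0])
    rho_r = R23.T @ expm(-phi * L[2]) @ R23 @ expm(-theta * L[2])
    return rho_r.T @ expm(t * L[2]) @ rho_r @ q
\end{verbatim}
In an optimized implementation, the matrix exponentials reduce to the closed-form
expressions for $R_{23}$ and $\exp(t L_3)$ mentioned above.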
\subsection{Projection via Semidefinite Relaxation}
\label{subsec.proj}
$F$ and $\tilde{F}$ are both varieties defined by quadratic equations.
$F \subset \mathbb{R}^9$ is
cut out by $15$ inhomogeneous quadratic equations \eqref{eq.octa_quads}, while
$\tilde{F} \subset \mathbb{R}^{15}$ is cut out by $27$
homogeneous quadratic equations \eqref{eq.odeco_quads}.
Consider the problem of finding the closest point
in $F$ to a given point $y \in \mathbb{R}^9$:
\begin{equation} \label{prob.proj} \tag{P$_{F}$}
\Pi_{F}(y) = \argmin_{q \in F} \|q - y\|^2.
\end{equation}
This problem is called Euclidean projection onto a quadratic variety.
It is an example of a QCQP, and the general recipe for semidefinite
relaxation detailed in \S\ref{subsec.relaxations} automatically applies.
The SDP will have the form
\begin{equation}
\label{sdp.proj}
\begin{alignedat}{4}
&\argmin_{Q \in \mathbb{R}^{10\times10}} &\langle Y, Q \rangle \\
&\text{subject to} &Q_{11} &= 1 \\
&&\langle P_i, Q \rangle &= 0, &\; i = 1, \dots, 15 \\
&&Q &\succeq 0,
\end{alignedat} \tag{SDP$_F$}
\end{equation}
where $P_i$ are the symmetric matrices from \eqref{eq.octa_quads}, and
\[ Y = \begin{pmatrix} \|y\|^2 & -y^{\mathstrut\scriptscriptstyle{\top}} \\ -y & I_{9\times 9} \end{pmatrix}. \]
Let $Q^*$ be an optimal solution to \eqref{sdp.proj}.
Since \eqref{sdp.proj} is a relaxation of \eqref{prob.proj}, $\langle Y, Q^*\rangle$
is a lower bound on the objective value of \eqref{prob.proj}.
If it happens that $\rank(Q^*) = 1$, then $Q^*$
can be written in the form
\[ Q^* = \begin{pmatrix} 1 \\ q^* \end{pmatrix} \begin{pmatrix} 1 \\ q^* \end{pmatrix}^{\mathstrut\scriptscriptstyle{\top}}, \]
from which it follows that
\[ \begin{pmatrix} 1 \\ q^* \end{pmatrix}^{\mathstrut\scriptscriptstyle{\top}} P_i \begin{pmatrix} 1 \\ q^* \end{pmatrix} = 0,
\quad i = 1, \dots, 15, \]
i.e., $q^* \in F$.
In this case, $q^*$ is the globally optimal solution to \eqref{prob.proj}.
This situation is called exact recovery.
The foregoing discussion also applies, \emph{mutatis mutandis}, to $\tilde{F}$.
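As an illustrative sketch (not an authoritative implementation), \eqref{sdp.proj}
can be posed in a few lines with the CVXPY modeling language. Here \texttt{P} stands
for the $15$ symmetric $10 \times 10$ matrices $P_i$ from the supplemental document,
and the rank-one extraction assumes exact recovery.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def project_octahedral(y, P):
    # Semidefinite relaxation (SDP_F) of Euclidean projection onto F.
    Y = np.block([[np.array([[y @ y]]), -y[None, :]],
                  [-y[:, None], np.eye(9)]])
    Q = cp.Variable((10, 10), PSD=True)
    cons = [Q[0, 0] == 1] + [cp.trace(Pi @ Q) == 0 for Pi in P]
    cp.Problem(cp.Minimize(cp.trace(Y @ Q)), cons).solve()
    w, V = np.linalg.eigh(Q.value)   # ascending eigenvalues
    if w[-2] / w[-1] > 1e-7:
        raise RuntimeError("relaxation not exact for this input")
    x = V[:, -1] * np.sqrt(w[-1])    # Q* = x x^T up to precision
    return np.sign(x[0]) * x[1:]     # normalize homogenizing coordinate
\end{verbatim}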
In our MBO algorithm presented below (\emph{cf.} \S\ref{subsec.mbo}), we alternate
projection with stepping off the variety.
We refer to the following theorem, which suggests
that when taking small enough steps from smooth points of a variety,
projection will be generically exact.
\begin{theorem}[\citet{cifuentes2017}, Theorem 1.2]
\label{thm.proj_stability}
Consider the Euclidean projection problem
\[ \argmin_{q \in \mathcal{V}} \|q - y\|, \quad y \in \mathbb{R}^n, \]
where $\mathcal{V} \subset \mathbb{R}^n$ is a real variety defined
by quadratic equations $f_1(q) = \dots = f_m(q) = 0$. Let $\bar{y} \in \mathcal{V}$
be a point at which the rank of the Jacobian $\nabla f(\bar{y})$ is equal
to the codimension of $\mathcal{V}$.
Then the semidefinite relaxation of projection is exact for $y \in \mathbb{R}^n$ sufficiently
close to $\bar{y}$.
\end{theorem}
In addition to this theoretical motivation, we have ample empirical evidence
that our relaxations are exact under much more general conditions.
The blue histogram in Figure \ref{fig.exactness}
shows the results of attempting to project $10^6$ random points
onto $F$ via semidefinite relaxation. We use the ratio of
the second largest eigenvalue $\lambda_2(Q^*)$ to the largest eigenvalue $\lambda_1(Q^*)$
as a proxy for exactness, as it
indicates whether $Q^*$ was rank-one up to machine precision.
For all octahedral projections, $\lambda_2(Q^*)/\lambda_1(Q^*)$ was $\approx 10^{-8}$ or less.
Based on these results, we make the following conjecture.
\begin{conjecture} \label{conj.octa_exact}
For a generic point $q_0 \in \mathbb{R}^9$,
the solution to \eqref{sdp.proj} has rank $1$,
and therefore yields the exact projection $\Pi_F(q_0)$.
\end{conjecture}
We also tried projecting $10^6$ random quartic polynomials onto
the odeco variety, as shown in the red histogram in Figure \ref{fig.exactness}.
The vast majority of odeco projections were also exact up to numerical precision:
out of $10^6$ solutions, only $60$ had an eigenvalue ratio greater than $10^{-8}$.
Theorem \ref{thm.proj_stability} gives us some intuition as to why this might happen:
the stability result only holds near smooth points of the variety, and whereas
the octahedral variety is a smooth manifold,
the odeco variety has a singularity at the origin, separating polynomials of different
signs.
\begin{conjecture} \label{conj.odeco_pos}
For a generic point $q_0 \in \mathbb{R}^{15}$ representing a \emph{positive}
polynomial, the SDP relaxation yields the exact projection $\Pi_{\tilde{F}}(q_0)$.
\end{conjecture}
Indeed, the green histogram in Figure \ref{fig.exactness}
shows the results of odeco projection on $10^6$ random
positive quartic polynomials, generated as sums of squares of random quadratic polynomials. For
all such positive initial points, the SDP solution had $\lambda_2(Q^*)/\lambda_1(Q^*) < 10^{-8}$,
supporting Conjecture \ref{conj.odeco_pos}.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{semilogxaxis}[
ymin=-0.05, ymax=1,
enlarge x limits = false,
minor y tick num = 4,
xlabel={\footnotesize $\lambda_2/\lambda_1$},
ylabel={\footnotesize Probability Density},
xlabel near ticks,
ylabel near ticks,
every tick label/.append style = {font=\tiny},
legend cell align = left,
legend style = {font=\tiny, row sep=0.1pt}]
\addplot[const plot,mark=none,red,fill,fill opacity=0.1] table[x index = 0, y index = 1, col sep = comma] {figures/OdecoExactnessTest2Ratio.csv};
\addlegendentry{Odeco}
\addplot[const plot,mark=none,blue,fill,fill opacity=0.1] table[x index = 0, y index = 1, col sep = comma] {figures/OctaExactnessTest3Ratio.csv};
\addlegendentry{Octahedral}
\addplot[const plot,mark=none,green,fill,fill opacity=0.1] table[x index = 0, y index = 1, col sep = comma] {figures/OdecoPosExactnessTest2Ratio.csv};
\addlegendentry{Odeco (positive initial polynomial)}
\addplot[mark=|, blue] coordinates {(2.41e-8, 0)};
\addplot[mark=|, green] coordinates {(1.54e-9, 0)};
\end{semilogxaxis}
\end{tikzpicture}
\caption{Histogram of eigenvalue ratio $\lambda_2(Q^*)/\lambda_1(Q^*)$
for solutions to the SDP
relaxations of Euclidean projection onto $F$ and $\tilde{F}$.
Projections of $10^6$ random points were tested for each. The maximum ratio
for octahedral projection was $\num{2.41e-8}$. The maximum ratio
for odeco projection of positive quartic polynomials was $\num{1.54e-9}$. See \S\ref{subsec.proj}.}
\label{fig.exactness}
\end{figure}
For octahedral frames, we also compare our SDP-based projection to the
previous state of the art method proposed by
\citet{RaySokolov2}. Because \citeauthor{RaySokolov2}'s
method is based on nonconvex optimization, we would expect it to get
stuck in local minima. Indeed, out of $\num{100000}$ trials,
we found $\num{600}$ cases for which the result of \citeauthor{RaySokolov2}'s method
was at least $10^{-3}$ further from the initial point than our projection---an
error rate of $0.6\%$. Moreover, the difference between the computed projections
can be substantial, as illustrated in Figure \ref{fig.proj_comparison}.
\begin{figure}
\newcommand{0.4\columnwidth}{0.2\columnwidth}
\begin{tabular}{cccc}
\includegraphics[align=c,width=0.4\columnwidth]{figures/octa_proj_arrows.1} &%
\includegraphics[align=c,width=0.4\columnwidth]{figures/octa_proj_arrows.2} &%
\includegraphics[align=c,width=0.4\columnwidth]{figures/octa_proj_arrows.3} &%
\includegraphics[align=c,width=0.4\columnwidth]{figures/octa_proj_arrows.4} \\
{\color{red} $d_\textrm{\citeauthor{RaySokolov2}} \hfill = 0.963$} & {\color{red} 1.07} & {\color{red} 1.08} & {\color{red} 1.07} \\
{\color{blue} $d_\textrm{Ours} \hfill = 0.496$} & {\color{blue} 0.850} & {\color{blue} 0.857} & {\color{blue} 0.871}
\end{tabular}
\caption{%
For some query polynomials, our globally optimal SDP-based projection onto the octahedral variety (blue) yields dramatically different results from \citeauthor{RaySokolov2}'s approximate projection (red). Our projections are closer to the query points $y$ (distances shown in blue) compared to \citeauthor{RaySokolov2}'s (distances in red). The query polynomials are plotted on the sphere, with color and distance from the center proportional to magnitude.}
\label{fig.proj_comparison}
\end{figure}
\section{From Frames to Frame Fields}
\label{sec.fields}
The overall aspiration of this work is to compute smooth frame fields
in a region $\Omega \subseteq \mathbb{R}^3$ aligned to the normal $\boldsymbol{n}$
on its boundary $\partial \Omega$. We assume $\Omega$ to be compact with
$\partial \Omega$ a union of smooth, embedded surfaces.
We think of a frame field as a map $\phi: \Omega \to \mathcal{V}$
into some space of frames $\mathcal{V}$ satisfying alignment boundary conditions.
We've examined the geometry of two candidates
for the fiber $\mathcal{V}$---the octahedral variety $F$ and
the odeco variety $\tilde{F}$.\footnote{
The term ``fiber'' is intended to be suggestive.
It may be fruitful to consider a bundle $\pi: P \to \Omega$ with fiber $\mathcal{V}$,
whose restriction $\partial P \to \partial \Omega$
is nontrivial and encodes the boundary alignment condition.
The map $\phi$ would be replaced with a section of $P$.}
It remains to describe the boundary conditions and to define an energy
we want $\phi$ to minimize.
\subsection{Boundary Conditions}
\label{subsec.bdry}
In both $F$ and $\tilde{F}$, the frames aligned to a given direction
$\boldsymbol{n} \in \mathbb{R}^3$ form the intersection of an affine subspace with the
respective variety. In each case, this intersection is a lower-dimensional variety
defined by a different set of quadratic equations. We can impose alignment boundary
conditions by working over this lower-dimensional variety. For boundaries with sharp
creases, we can optionally exclude creased vertices, where the normal direction
is ill-defined, from the alignment constraint. Other constraints
can also be imposed at creases---for example, alignment to the crease tangent.
\subsubsection{Octahedral boundary conditions}
Let $\rho: \mathrm{SO}(3) \to \mathrm{SO}(9)$ be the irreducible representation as in \S\ref{sec.frame-space}.
First consider the octahedral frames aligned to the vertical $\boldsymbol{z} = (0, 0, 1)^{\mathstrut\scriptscriptstyle{\top}}$.
The space of vertical-aligned frames is
$F_{\boldsymbol{z}} \coloneqq \rho(A_{\boldsymbol{z}}) q_0$,
where $A_{\boldsymbol{z}}$ is the coaxial subgroup consisting of all rotations about $\boldsymbol{z}$, i.e.,
\[ A_{\boldsymbol{z}} = \{\exp(t l_3) \mid t \in \mathbb{R}\}, \]
and $q_0$ is the canonical octahedral frame.
As derived in \cite{huang, RaySokolov2}, such frames have the form
\begin{equation}
\begin{aligned}
\exp(t L_3) q_0 &= \left(\sqrt{\frac{5}{12}}\sin(4t), 0, 0, 0, \sqrt{\frac{7}{12}}, 0, 0, 0, \sqrt{\frac{5}{12}}\cos(4t)\right)^{\mathstrut\scriptscriptstyle{\top}} \\
&= q_{\boldsymbol{z}} + B_{\boldsymbol{z}} s_{\boldsymbol{z}},
\end{aligned}
\end{equation}
where $q_{\boldsymbol{z}} = (0, 0, 0, 0, \sqrt{7/12}, 0, 0, 0, 0)^{\mathstrut\scriptscriptstyle{\top}}$,
$s_{\boldsymbol{z}} = (\cos(4t), \sin(4t))^{\mathstrut\scriptscriptstyle{\top}}$, and $B_{\boldsymbol{z}}$
is the obvious $9 \times 2$ matrix.
Given $q \in \mathbb{R}^9$, the closest vertical-aligned frame is
\[ \Pi_{F_{\boldsymbol{z}}}(q) = q_{\boldsymbol{z}} + B_{\boldsymbol{z}}\frac{B_{\boldsymbol{z}}^{\mathstrut\scriptscriptstyle{\top}} q}{\|B_{\boldsymbol{z}}^{\mathstrut\scriptscriptstyle{\top}} q\|}. \]
For a general unit normal $\boldsymbol{n}$, the aligned frames can be written
\begin{equation} \label{eq.octa_bdry}
\rho(r_{\boldsymbol{n}})(q_{\boldsymbol{z}} + B_{\boldsymbol{z}} s_{\boldsymbol{z}}), \end{equation}
where $r_{\boldsymbol{n}} \in \mathrm{SO}(3)$ is any rotation taking
$\boldsymbol{z}$ to $\boldsymbol{n}$. The projection of $q \in \mathbb{R}^9$
onto the $\boldsymbol{n}$-aligned frames is then
\begin{equation} \label{eq.proj_aligned} \Pi_{F_{\boldsymbol{n}}}(q)
= \rho(r_{\boldsymbol{n}})\Pi_{F_{\boldsymbol{z}}}(\rho(r_{\boldsymbol{n}})^{\mathstrut\scriptscriptstyle{\top}} q). \end{equation}
During optimization, this projection is used for boundary-aligned frames.
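For reference, here is a minimal Python sketch of \eqref{eq.proj_aligned}, combining
the closed-form $\boldsymbol{z}$-aligned projection above with the rotation
construction of \eqref{eq.rot_axis}; the generators \texttt{L} are the same assumed
inputs as in the geodesic sketch of \S\ref{subsec.geodesics}.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def project_aligned(q, n, L):
    # Projection onto the n-aligned octahedral frames, eq. (proj_aligned).
    # n: unit normal; L: the three 9x9 generators L_i.
    theta = np.arctan2(n[1], n[0])
    phi = np.arccos(np.clip(n[2], -1.0, 1.0))
    R23 = expm((np.pi / 2) * L[0])
    # rho(r_n): rotation taking z to n, the transpose of (eq.rot_axis)
    rho_rn = (R23.T @ expm(-phi * L[2]) @ R23 @ expm(-theta * L[2])).T
    q_z = np.zeros(9); q_z[4] = np.sqrt(7 / 12)
    B_z = np.zeros((9, 2))
    B_z[8, 0] = B_z[0, 1] = np.sqrt(5 / 12)  # columns scale (cos 4t, sin 4t)
    s = B_z.T @ (rho_rn.T @ q)
    return rho_rn @ (q_z + B_z @ (s / np.linalg.norm(s)))
\end{verbatim}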
\subsubsection{Odeco boundary conditions}
Consider the standard odeco frame $\sum_i \lambda_i x_i^4$ rotated by
some $r \in A_{\boldsymbol{z}}$, and fix $\lambda_3 = 1$.
As described in \S\ref{subsec.octa_odeco},
it has coefficients in the even-numbered irreducible representations
(bands of spherical harmonics)
\[ q = (q_0, q_2, q_4) \in V_0 \times V_2 \times V_4 = \mathbb{R} \times \mathbb{R}^5 \times \mathbb{R}^9. \]
Just as in the octahedral case, $q$
can be decomposed into a fixed part and a rotating part parametrized by
a lower-dimensional vector
\begin{equation} \label{eq.odeco_bdry}
q = q_{\boldsymbol{z}} + B_{\boldsymbol{z}} s_{\boldsymbol{z}}, \end{equation}
where now $s_{\boldsymbol{z}} \in \mathbb{R}^5$ and $B_{\boldsymbol{z}} \in \mathbb{R}^{15 \times 5}$.
The equations
\eqref{eq.odeco_quads} reduce to three quadratic equations in the coefficients of
$s_{\boldsymbol{z}}$, which
can be used to construct a semidefinite program to project onto the vertical-aligned
odeco frames, as in \S\ref{subsec.proj}. The details are
given in the supplementary material.
For frames aligned to a direction $\boldsymbol{n}$,
\[ q = \rho(r_{\boldsymbol{n}}) (q_{\boldsymbol{z}} + B_{\boldsymbol{z}} s_{\boldsymbol{z}}), \]
where $\rho$ is now the direct sum representation on $V_0 \times V_2 \times V_4$.
Projection onto the $\boldsymbol{n}$-aligned odeco frames can be constructed
from projection onto the $\boldsymbol{z}$-aligned odeco frames as in \eqref{eq.proj_aligned}.
\subsection{Objective Function}
As we are searching for the smoothest fields, a natural choice of energy
is the Dirichlet energy $\frac{1}{2}\int_\Omega \|\nabla\phi\|^2 \; dx$, where the norm
is taken with respect to a metric on $\mathcal{V}$. This formulation
must confront several problems. As we have
seen, $F$ is a smooth manifold. But being a quotient of the $3$-sphere,
it has positive curvature, making mere existence of harmonic maps
$\Omega \to F$ a hard problem. Even more fundamentally, it is
not clear how to represent singularities of $\phi$---the map cannot be consistently defined
along singular curves. $\tilde{F}$ attempts to
solve this problem by representing singular frames with only one direction,
while allowing the other two to vanish. Along a singular
curve, we would expect an odeco field to take rank-one values of the form
$\lambda v^{\otimes 4}$, where $v$ is tangent to the singular curve.
\subsection{Discretization}
Let $\mathcal{T} = (V, T)$ be a tetrahedral mesh of $\Omega$
with vertices $V$ and tetrahedra $T$. The set of boundary vertices will be denoted
by $\partial V$. A discrete
frame field on $\mathcal{T}$ is specified by a map $q : V \to \mathcal{V} = F$ or
$\tilde{F}$, giving a frame $q_i$ for each vertex $i$. Since the octahedral variety
satisfies $F \subset \mathbb{R}^9$ and the odeco variety satisfies
$\tilde{F} \subset \mathbb{R}^{15}$, we can think of $q$ as a $d \times n$ matrix, where $n = |V|$
and $d = 9$ or $15$, respectively.
We then define a discretized Dirichlet energy using the Euclidean metric in
the spherical harmonic basis to compare elements
of $\mathcal{V}$. This is equivalent to measuring $L^2$ distance between the
corresponding polynomials over the sphere, as employed in \cite{huang, RaySokolov2, Solomon}.
Note that this would not be the case if we compared odeco frames in the monomial basis. The
discrete energy is
\[ E(q) = \frac{1}{2} \tr\left(q L q^{\mathstrut\scriptscriptstyle{\top}}\right), \]
where $L$ is the linear finite element Laplacian on $\mathcal{T}$, with the
usual cotangent weights.
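In code, the discrete energy is a one-liner; a minimal sketch, assuming a precomputed
sparse symmetric Laplacian \texttt{L}:
\begin{verbatim}
import numpy as np

def dirichlet_energy(q, L):
    # E(q) = (1/2) tr(q L q^T) for a d x n field q and an
    # n x n sparse symmetric FEM Laplacian L.
    Lqt = L @ q.T                        # sparse-dense product, n x d
    return 0.5 * float(np.sum(q.T * Lqt))
\end{verbatim}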
\section{Algorithms}
\label{sec.alg}
We now have two \emph{variety-constrained optimization} problems of the form
\begin{equation} \label{eq.overall_problem}
\begin{alignedat}{4}
&\min \; &&\frac{1}{2}\tr\left(q L q^{\mathstrut\scriptscriptstyle{\top}}\right) \\
&\text{s.t. } && q_i \in \mathcal{V}, &&\forall i \in V \\
&&& q_i \in \mathcal{V}_{\boldsymbol{n}_i}, \quad &&\forall i \in \partial V,
\end{alignedat}
\end{equation}
where the variety $\mathcal{V}$ is either $F$ or $\tilde{F}$, $q_i$
are the columns of $q$, $\partial V$
denotes the set of boundary vertices, and $\mathcal{V}_{\boldsymbol{n}_i}$
is the variety of frames aligned to the normal at boundary vertex $i$.
\subsection{Manifold Optimization Methods}
\label{subsec.rtr}
\begin{wrapfigure}{l}{0.25\columnwidth}
\centering
\includegraphics[width=0.25\columnwidth]{figures/rtr}
\end{wrapfigure}
In the case $\mathcal{V} = F$, \eqref{eq.overall_problem} becomes a manifold-constrained
optimization problem.
The octahedral variety is a Riemannian manifold,
and we can compute geodesics on it
as described in \S\ref{subsec.geodesics}.
So we can apply the standard Riemannian trust-region (RTR) algorithm \cite{Absil2007}
as implemented in the Manopt toolbox for MATLAB \cite{manopt}.
We consider the field to be a point in the product manifold
\[ q \in \prod_{i \in V \setminus \partial V} F \times
\prod_{i \in \partial V}F_{\boldsymbol{n}_i}. \]
In addition to the geodesics computed in \S\ref{subsec.geodesics}, we also need
a way to project vectors (e.g. gradients) in the ambient space $\mathbb{R}^9$ into the
tangent space at a point $q$, $\operatorname{proj}_q: \mathbb{R}^9 \to T_qF$.
The vectors $\{L_i q\}_{i=1}^3$ form an orthogonal basis for this tangent
space (\emph{cf.} Proposition \ref{thm.isometric}), so projection
is just given by taking the inner product with each of these vectors.
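A minimal sketch of this tangent projection, again assuming the generators are
available as \texttt{L}; since the basis vectors $L_i q$ are orthogonal but not unit,
each coefficient is normalized by $\|L_i q\|^2$.
\begin{verbatim}
import numpy as np

def project_to_tangent(u, q, L):
    # Orthogonal projection of an ambient vector u onto T_q F
    # using the orthogonal basis {L_i q}.
    out = np.zeros_like(u)
    for Li in L:
        b = Li @ q
        out += ((b @ u) / (b @ b)) * b
    return out
\end{verbatim}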
The odeco variety $\tilde{F}$ is not a manifold, but it is smooth almost everywhere.
We know of no way to compute exact geodesics on $\tilde{F}$, but we can implement
a version of RTR by replacing the exact exponential map with a \emph{retraction}, i.e., a first-order
approximation (\emph{cf.} \cite[Definition 2.1]{Absil2007}).
In doing so, we give up the superlinear local convergence guarantees
of RTR (\emph{cf.} \cite[Theorem 4.12]{Absil2007}) but retain a global convergence
guarantee. In practice, we find that odeco RTR converges at a reasonable rate.
At a smooth point of $\tilde{F}$, it is easy to project vectors onto the tangent space using the fact that $\tilde{F}$ is defined
by \emph{quadratic} equations. Let $q \in \tilde{F}$ be expressed in the spherical
harmonic basis.
Then differentiating the equations
$q^{\mathstrut\scriptscriptstyle{\top}} \tilde{A}_i q = 0$ shows that the normal space at $q$ is
\[ N_q \tilde{F} = \Span \{\tilde{A}_i q\}, \]
where $\tilde{A}_i$ are expressed in the spherical harmonic basis.
Then $T_q \tilde{F} = (N_q \tilde{F})^\perp$,
where the orthogonal complement is taken with respect to the standard metric under which
the spherical harmonic functions are orthonormal.
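A sketch of this computation, with the $27$ matrices $\tilde{A}_i$ assumed available
as \texttt{At} (a hypothetical variable name):
\begin{verbatim}
import numpy as np

def odeco_tangent_basis(q, At):
    # Orthonormal basis of T_q F~ = (span{A~_i q})^perp at a smooth q.
    N = np.stack([Ai @ q for Ai in At], axis=1)  # spans the normal space
    U, s, _ = np.linalg.svd(N, full_matrices=True)
    r = int((s > 1e-10 * s[0]).sum())  # numerical rank = codim at smooth q
    return U[:, r:]                    # columns spanning the tangent space
\end{verbatim}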
We compute retractions as follows. The tangent
space to the odeco variety decomposes into a \emph{rotational part} and a \emph{scaling part}:
\[ T^r_q \tilde{F} = \Span \{\tilde{L}_i q\}_{i=1}^3, \qquad T^s_q \tilde{F} = (T^r_q \tilde{F})^\perp, \]
where the orthogonal complement is taken within $T_q \tilde{F}$. We then set
\[ \operatorname{retr}_q(v) = e^{v_r \cdot \tilde{L}} (q + v_s), \]
where $v_s$ is the component of $v$ in $T^s_ q \tilde{F}$, $v_r$ is the component of $v$ in
$T^r_q \tilde{F}$, and
\[ v_r \cdot \tilde{L} \coloneqq \sum_j (v_r)_j \tilde{L}_j, \]
where $v_r = \sum_j (v_r)_j \tilde{L}_j q$. It is simple to verify that
$\operatorname{retr}_q(v) \in \tilde{F}$ and
that
\[ \left. \frac{\partial}{\partial t} \operatorname{retr}_q(tv) \right\rvert_{t = 0} = v. \]
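A corresponding sketch of the retraction, assuming the generators $\tilde{L}_j$ are
available as \texttt{Lt} and that \texttt{v} already lies in $T_q \tilde{F}$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def odeco_retract(q, v, Lt):
    # retr_q(v) = exp(v_r . L~)(q + v_s) at a smooth point q.
    B = np.stack([Lj @ q for Lj in Lt], axis=1)  # spans the rotational part
    c, *_ = np.linalg.lstsq(B, v, rcond=None)    # coefficients (v_r)_j
    v_s = v - B @ c                              # scaling component
    G = sum(cj * Lj for cj, Lj in zip(c, Lt))    # the matrix v_r . L~
    return expm(G) @ (q + v_s)
\end{verbatim}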
We compare the convergence behavior of octahedral RTR to that of the method of \citet{RaySokolov2} under
two sets of conditions---\citeauthor{RaySokolov2}'s initialization and combinatorial
Laplacian (Figure \ref{fig.convergence-raycond}); and our random initialization and linear finite element Laplacian (Figure \ref{fig.convergence-ourcond}).
The quadratic local convergence of RTR stands in stark contrast
to the slower linear convergence behavior of
\citeauthor{RaySokolov2}'s method. RTR converges faster but finds solutions
of similar Dirichlet energy to previous work (see Table 1 in the supplemental document).
\begin{figure}
\centering
\begin{tikzpicture}
\begin{semilogyaxis}[
enlarge x limits = false,
minor y tick num = 4,
xlabel={\footnotesize Time (s)},
ylabel={\footnotesize Gradient Norm},
xlabel near ticks,
ylabel near ticks,
every tick label/.append style = {font=\tiny},
legend style={font=\tiny}]
\addplot[mark=none, color=red!60!yellow] table[x index = 0, y index = 1, col sep = comma] {figures/convergence/bone_80k_raycond_grad_ray.csv};
\addlegendentry{Ray et al. (Bone)}
\addplot[mark=none, color=blue!60!green] table[x index = 0, y index = 1, col sep = comma] {figures/convergence/bone_80k_raycond_grad_rtr.csv};
\addlegendentry{RTR (Bone)}
\addplot[mark=none, color=red] table[x index = 0, y index = 1, col sep = comma] {figures/convergence/cup_85k_raycond_grad_ray.csv};
\addlegendentry{Ray et al. (Cup)}
\addplot[mark=none, color=blue] table[x index = 0, y index = 1, col sep = comma] {figures/convergence/cup_85k_raycond_grad_rtr.csv};
\addlegendentry{RTR (Cup)}
\addplot[mark=none, color=red!60!black] table[x index = 0, y index = 1, col sep = comma] {figures/convergence/rockerarm_91k_raycond_grad_ray.csv};
\addlegendentry{Ray et al. (Rockerarm)}
\addplot[mark=none, color=blue!60!black] table[x index = 0, y index = 1, col sep = comma] {figures/convergence/rockerarm_91k_raycond_grad_rtr.csv};
\addlegendentry{RTR (Rockerarm)}
\end{semilogyaxis}
\end{tikzpicture}
\caption{Convergence behavior of octahedral RTR and the method of \citet{RaySokolov2}
on various models, starting from \citeauthor{RaySokolov2}'s initialization
and using the combinatorial Laplacian.}
\label{fig.convergence-raycond}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{semilogyaxis}[
enlarge x limits = false,
minor y tick num = 4,
xlabel={\footnotesize Time (s)},
ylabel={\footnotesize Gradient Norm},
xlabel near ticks,
ylabel near ticks,
every tick label/.append style = {font=\tiny},
legend style={font=\tiny}]
\addplot[mark=none, color=red!60!yellow] table[x index = 0, y index = 1, col sep = comma] {figures/convergence/bone_80k_ourcond_grad_ray.csv};
\addlegendentry{Ray et al. (Bone)}
\addplot[mark=none, color=blue!60!green] table[x index = 0, y index = 1, col sep = comma] {figures/convergence/bone_80k_ourcond_grad_rtr.csv};
\addlegendentry{RTR (Bone)}
\addplot[mark=none, color=red] table[x index = 0, y index = 1, col sep = comma] {figures/convergence/cup_85k_ourcond_grad_ray.csv};
\addlegendentry{Ray et al. (Cup)}
\addplot[mark=none, color=blue] table[x index = 0, y index = 1, col sep = comma] {figures/convergence/cup_85k_ourcond_grad_rtr.csv};
\addlegendentry{RTR (Cup)}
\addplot[mark=none, color=red!60!black] table[x index = 0, y index = 1, col sep = comma] {figures/convergence/rockerarm_91k_ourcond_grad_ray.csv};
\addlegendentry{Ray et al. (Rockerarm)}
\addplot[mark=none, color=blue!60!black] table[x index = 0, y index = 1, col sep = comma] {figures/convergence/rockerarm_91k_ourcond_grad_rtr.csv};
\addlegendentry{RTR (Rockerarm)}
\end{semilogyaxis}
\end{tikzpicture}
\caption{Convergence behavior of octahedral RTR and the method of \citet{RaySokolov2}
on various models, starting from random initialization
and using the geometric Laplacian.}
\label{fig.convergence-ourcond}
\end{figure}
\subsection{Generalized MBO Methods}
\label{subsec.mbo}
As we have observed, it is possible to do optimization on the octahedral
and odeco varieties by moving along curves that stay on the varieties exactly. However,
this hard constraint sometimes causes pure manifold optimization to get stuck in local minima.
To avoid such local minima, an approach that allows ``tunneling'' is required.
\begin{wrapfigure}{r}{0.25\columnwidth}
\centering
\includegraphics[width=0.25\columnwidth]{figures/mbo}
\end{wrapfigure}
The method of Merriman, Bence, and Osher \shortcite{MBO1} is a
diffusion-generated method for computing (possibly singular)
maps into a target space embedded
in a Euclidean space.
Following \cite{viertel,osting2017}, we first replace the hard constraint that the
field values lie in the variety with a penalty term, yielding an
energy of Ginzburg-Landau type,
\begin{equation}
E_\epsilon(q) = \frac{1}{2}\int_\Omega \|\nabla q\|^2 \; dx
+ \frac{1}{2\epsilon^2}\int_\Omega \dist(q, \mathcal{V})^2 \; dx, \label{eq.g-l}
\end{equation}
where $\mathcal{V}$ is our variety. Consider this energy in the limit $\epsilon \to 0$,
as in \cite[\S4]{viertel}.
The MBO method consists of alternating descent on the two terms of $E_\epsilon$.
Gradient flow on the first term---Dirichlet energy---yields
componentwise heat diffusion,
\begin{equation}
\begin{aligned}
\partial_t q(x, t) &= \Delta q(x, t) \\
q(x, 0) &= q_{k - 1},
\end{aligned}
\end{equation}
with the boundary constraints given in \S\ref{subsec.bdry}.
In practice, we do one step of implicit Euler integration per overall algorithm
step.
Gradient flow on the second term of \eqref{eq.g-l} in the limit $\epsilon \to 0$
amounts to projection onto the variety.
The overall algorithm is shown in Algorithm \ref{alg.mbo}.
\begin{algorithm}
\DontPrintSemicolon
\SetKwInOut{Input}{input}
\SetKwInOut{Result}{result}
\Input{initial $d \times n$ field $q_0$, diffusion parameter $\tau_0$,
optimization schedule $\beta(k)$}
\Result{$q_k$}
\BlankLine
$k \gets 1$\;
\Repeat{$\Delta E_k / E_k < \delta$ or $\Delta_k < \delta$}{
$\tau_k \gets \beta(k) \tau_0$\;
\textbf{Diffusion step: }Solve the linear system $(M - \tau_k L) \overline{q}_k^{\mathstrut\scriptscriptstyle{\top}} = M q_{k-1}^{\mathstrut\scriptscriptstyle{\top}}$
with columns $(\overline{q}_k)_i$ constrained to be in the affine span
\eqref{eq.octa_bdry}
or \eqref{eq.odeco_bdry} for each $i \in \partial V$.\;
\textbf{Projection step: } Project $\overline{q}_k$ into the variety:
$(q_k)_i \gets \begin{cases}
\Pi_{\mathcal{V}}((\overline{q}_k)_i) &i \notin \partial V \\
\Pi_{\mathcal{V}_{\boldsymbol{n}_i}}((\overline{q}_k)_i) &i \in \partial V.
\end{cases}$\;
$\Delta_k \gets \frac{\tr((q_k - q_{k-1}) M (q_k - q_{k-1})^{\mathstrut\scriptscriptstyle{\top}})}
{\tr(q_k M q_k^{\mathstrut\scriptscriptstyle{\top}})}$\;
$E_k \gets \tr(q_k L q_k^{\mathstrut\scriptscriptstyle{\top}})$\;
$\Delta E_k \gets E_k - E_{k-1}$\;
$k \gets k + 1$
}
\caption{MBO over a variety $\mathcal{V} \subset \mathbb{R}^d$\label{alg.mbo}}
\end{algorithm}
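For concreteness, here is a minimal Python sketch of the main loop, omitting the
boundary-alignment constraints of \S\ref{subsec.bdry}; \texttt{project} stands for
any of the per-vertex projections above, and \texttt{M}, \texttt{L} are signed so
that the diffusion step reads exactly as in the listing.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def mbo(q0, M, L, tau0, project,
        beta=lambda k: 1.0, delta=1e-4, kmax=200):
    q, E_prev = q0.copy(), np.inf
    for k in range(1, kmax + 1):
        tau = beta(k) * tau0
        # diffusion: one implicit Euler step, (M - tau L) qbar^T = M q^T
        qbar = spla.spsolve(sp.csc_matrix(M - tau * L), M @ q.T).T
        # projection: snap each vertex frame back onto the variety
        q_new = np.stack([project(qbar[:, i])
                          for i in range(qbar.shape[1])], axis=1)
        # stopping criteria from the algorithm listing
        d = q_new - q
        Delta = np.trace(d @ (M @ d.T)) / np.trace(q_new @ (M @ q_new.T))
        E = np.trace(q_new @ (L @ q_new.T))
        dE, E_prev, q = E - E_prev, E, q_new
        if abs(dE) / abs(E) < delta or Delta < delta:
            break
    return q
\end{verbatim}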
We follow the suggestion in \cite{viertel,vanGennip2014}
of setting $\tau_0$ relative to the inverse of the smallest nonzero eigenvalue of the Laplacian.
In ordinary MBO, $\beta(k) = 1$, i.e.,
$\tau$ does not change over the course of the algorithm.
In Figure \ref{fig.mbo_tau}, we test the effects of different values of
$\tau$. As we decrease $\tau$, the field is able to relax more and reduce the Dirichlet energy.
On the other hand, larger values of $\tau$ allow more tunneling so that
the field can escape local minima.
This suggests a modified
MBO scheme (mMBO) in which we start with a large $\tau$ and progressively reduce
it over the course of optimization---analogously to what happens to
the temperature in simulated annealing.
Our mMBO uses an optimization schedule $\beta(k) = 50 k^{-3}$.
This optimization schedule starts with a large diffusion time for
robustness to random initialization, then
sweeps $\tau$ through various orders of magnitude
quickly. We have found that this heuristic produces a good
balance between quick convergence and field quality.
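In terms of the MBO sketch above, the mMBO schedule is just a different \texttt{beta}:
\begin{verbatim}
q = mbo(q0, M, L, tau0, project, beta=lambda k: 50.0 * k**-3)
\end{verbatim}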
\begin{figure}
\centering
\newcommand{0.4\columnwidth}{0.3\columnwidth}
\begin{tabular}{ccc}
\includegraphics[align=c,width=0.4\columnwidth]{figures/torus_316k_tau0.5_sg_ic.2} &%
\includegraphics[align=c,width=0.4\columnwidth]{figures/torus_316k_tau0.05_sg_ic.2} &%
\includegraphics[align=c,width=0.4\columnwidth]{figures/torus_316k_tau0.05_rtr_sg_ic.2} \\
\includegraphics[align=c,width=0.4\columnwidth]{figures/torus_316k_tau0.5_ic} &%
\includegraphics[align=c,width=0.4\columnwidth]{figures/torus_316k_tau0.05_ic} &%
\includegraphics[align=c,width=0.4\columnwidth]{figures/torus_316k_tau0.05_rtr_ic} \\
\footnotesize{$\tau \approx 0.5$} &%
\footnotesize{$\tau \approx 0.05$} &%
\footnotesize{$\tau \approx 0.05$ + RTR} \\
\footnotesize{$E = 106.76$} &%
\footnotesize{$E = 106.76$} &%
\footnotesize{$E = 85.99$}%
\end{tabular}%
\caption{Results of octahedral MBO on a torus. With a relatively
large diffusion time $\tau$, MBO produces a field with tightly packed singular
regions (left).
At a smaller value of $\tau$, singular curves relax toward
the boundary, reducing the Dirichlet energy (center).
RTR reduces the energy even further (right),
pushing the singular curves to the boundary.}
\label{fig.mbo_tau}
\end{figure}
Figure \ref{fig.convergence} depicts the convergence behavior of RTR, MBO with various $\tau$ values, and mMBO
starting from a projection of an approximately harmonic field. While the energy value achieved by
ordinary MBO is limited for each $\tau$, mMBO achieves a lower value by sweeping
through various values of $\tau$.
Note that the iteration counts shown do not reflect wall-clock time; in particular,
RTR runs at least an order of magnitude
faster than the other algorithms.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
enlarge x limits = false,
minor y tick num = 4,
xlabel={\footnotesize Iteration},
ylabel={\footnotesize Dirichlet Energy $E$},
xlabel near ticks,
ylabel near ticks,
every tick label/.append style = {font=\tiny},
cycle list = {red, orange, green, blue, black}]
\addplot+[mark=none] table[x expr = \coordindex + 1, y index = 0, col sep = comma] {figures/MBO_E_only.csv};
\addlegendentry{MBO ($\tau \approx 120$)}
\addplot+[mark=none] table[x expr = \coordindex + 1, y index = 0, col sep = comma] {figures/MBO2_E_only.csv};
\addlegendentry{MBO ($\tau \approx 12$)}
\addplot+[mark=none] table[x expr = \coordindex + 1, y index = 0, col sep = comma] {figures/MBO3_E_only.csv};
\addlegendentry{MBO ($\tau \approx 1.2$)}
\addplot+[mark=none] table[x expr = \coordindex + 1, y index = 1, col sep = comma] {figures/RTR.csv};
\addlegendentry{RTR}
\addplot+[mark=none] table[x expr = \coordindex + 1, y index = 1, col sep = comma] {figures/MMBO.csv};
\addlegendentry{mMBO}
\end{axis}
\end{tikzpicture}
\caption{Energy convergence on the rockerarm\_91k model. MBO's convergence to lower
energies is limited by the diffusion time $\tau$. Decreasing $\tau$ according
to the optimization schedule used in mMBO achieves the best results overall.}
\label{fig.convergence}
\end{figure}
\section{Experiments}
\label{sec.exper}
We initialize our algorithms with
vertexwise random octahedral fields, generated
by---separately for each vertex---starting with the canonical frame,
rotating it by a random angle between $0$ and $2\pi$
about the positive $z$-axis, and then rotating the positive $z$-axis to
a random direction.
We do this even for optimization of odeco fields to avoid encountering odeco frames that
have negative weights $\lambda_i$ (\emph{cf.} Conjecture
\ref{conj.odeco_pos}).
When starting from octahedral initialization,
the odeco frames we compute always have nonnegative weights.
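A minimal sketch of this initialization; \texttt{rho}, mapping a $3 \times 3$
rotation matrix to its $9 \times 9$ band-$4$ representation, and the canonical frame
\texttt{q0} are assumed available.
\begin{verbatim}
import numpy as np

def random_octahedral_frame(rho, q0, rng=np.random.default_rng()):
    t = rng.uniform(0.0, 2.0 * np.pi)  # random twist about z
    Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
                   [np.sin(t),  np.cos(t), 0.0],
                   [0.0, 0.0, 1.0]])
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)             # uniform random direction
    v, c = np.cross([0.0, 0.0, 1.0], n), n[2]
    V = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    Rn = np.eye(3) + V + (V @ V) / (1.0 + c)  # Rodrigues: z -> n
    return rho(Rn @ Rz) @ q0           # assumes n is not antipodal to z
\end{verbatim}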
\begin{figure}[ht]
\newcommand{0.4\columnwidth}{0.48\columnwidth}
\centering
\includegraphics[width=0.4\columnwidth]{figures/sphere_sg_ic.1.3}%
\hfill%
\includegraphics[width=0.4\columnwidth]{figures/sphere_sg_ic.2} \\
\includegraphics[width=0.4\columnwidth]{figures/sphere_sg_ic.3}%
\hfill%
\includegraphics[width=0.4\columnwidth]{figures/sphere_sg_ic.5}
\caption{Robustness to random initialization.
Odeco MBO transforms vertexwise random initial fields into
qualitatively identical results up to the symmetry of the sphere.
All results were computed
on a sphere with $48950$ vertices.
Integral curves are depicted in regions of rapid rotation,
i.e., in proximity to singularities.}
\label{fig.mbo_init}
\end{figure}
In Figure \ref{fig.mbo_init}, we demonstrate the robustness of MBO to random initialization.
On the sphere, global rotations of a field do not change the objective value.
Initializing odeco MBO with different fields of vertexwise random frames, we find
that it converges to randomly rotated copies of a qualitatively identical, symmetric field.
Figure \ref{fig.energy_divergence} illustrates the behavior of octahedral
and odeco fields as the density of the underlying tetrahedral mesh increases.
Due to the unit-norm constraint,
the gradient of an octahedral field becomes unbounded close to its singularities.
This leads to logarithmic divergence of the total energy as the tet
mesh becomes finer. In contrast, the additional scaling degrees of freedom
available to odeco fields allow renormalization of singularities, as shown
in Figure \ref{fig.sing_energy}. Thus, odeco field energy plateaus
as mesh density increases. Note also the much smaller variance in energy between
runs for odeco fields, quantitatively illustrating robustness to random initialization.
\begin{figure}[hb]
\centering
\begin{tikzpicture}
\begin{semilogxaxis}[
x dir = reverse,
minor y tick num = 4,
xlabel={\footnotesize Tet Volume (Geometric Mean)},
ylabel={\footnotesize Energy},
xlabel near ticks,
ylabel near ticks,
every tick label/.append style = {font=\tiny},
legend pos = north west,
legend cell align = left,
legend style = {font=\tiny, row sep=0.1pt},
cycle list = {red, blue},
mark size = 0.5pt]
\addplot+[only marks] table[x index = 0, y index = 1, col sep = comma] {figures/sphere_energy_octa.csv};
\addlegendentry{Octahedral}
\addplot+[only marks] table[x index = 0, y index = 1, col sep = comma] {figures/sphere_energy_odeco.csv};
\addlegendentry{Odeco}
\end{semilogxaxis}
\end{tikzpicture}
\caption{Energy diverges for octahedral fields,
but plateaus for odeco fields, as mesh density increases. Also notice the higher
variance for octahedral fields.
Ten fields of each type were computed by RTR on each of thirteen
tetrahedral meshes of the unit sphere of various densities.}
\label{fig.energy_divergence}
\end{figure}%
\begin{figure}
\newcommand{0.4\columnwidth}{0.48\columnwidth}
\centering
\includegraphics[width=0.4\columnwidth]{figures/sphere_energy_octa}%
\hfill%
\includegraphics[width=0.4\columnwidth]{figures/sphere_energy_odeco}%
\caption{The energy density at singularities of an octahedral field
dominates the total energy (left). In contrast, an odeco field's energy is distributed more
uniformly (right). Results computed using MBO + RTR.}
\label{fig.sing_energy}
\end{figure}%
Table 1 (supplemental document) compares timings and energy values
for our methods and the method of \citet{RaySokolov2},
comprising $18$ types of experiments on $15$ different models, for $270$ total runs.
Direct comparison of energy values with \citet{RaySokolov2} is not possible, as their
method uses the graph Laplacian and initializes by solving a
linear system, whereas our method uses the geometric (finite element) Laplacian
and random initialization. To attempt a fair comparison, we report results of their
method alone, their method followed by RTR with the geometric Laplacian,
and their method modified to use the geometric Laplacian and random initialization.
All experiments in the table were conducted on an Ubuntu workstation with a four-core,
3.6 GHz Intel Core i7-7700 and 32 GB of RAM.
A MATLAB implementation of our algorithms accompanies this paper and also includes
our implementation of \cite{RaySokolov2}.
The results in Table 1 show that RTR converges very quickly but,
like previous work, easily gets stuck in local minima.
In contrast, mMBO takes longer to converge but produces higher-quality
fields that reflect the symmetries of the volume.
Running RTR to polish the results of mMBO produces the best energies.
Although \citeauthor{RaySokolov2}'s approximation of projection sometimes gets stuck
in local minima (see Figure \ref{fig.proj_comparison}), we observe
that it can be useful in practice.
Table 1 (supplemental document) includes experiments with MBO and mMBO
(see \S\ref{subsec.mbo}) substituting \citeauthor{RaySokolov2}'s projection
for the true projection. In most cases, the resulting energy is very similar,
suggesting that the diffusion iterations in MBO smooth out any errors resulting from
incorrect projection. This hybrid MBO can be a useful tradeoff between correctness
and speed.
In Figures \ref{fig.field_comparison.cup}--\ref{fig.field_comparison.bunny}, we compare
fields computed by octahedral mMBO + RTR to those computed by the method of \citet{RaySokolov2}.
Our fields not only have lower Dirichlet energy, but also better singular
structures. To visualize singular structures, we use the visualization technique of \citet{liu}.
To illustrate the importance of singular structure, we have generated hexahedral meshes
from both sets of fields, following the methods laid out in \cite{nieser2011cubecover} and \cite{Lyon:2016:HRH}.
The meshes are computed from the raw field data, with no preprocessing other than tetrahedral mesh refinement
to resolve and localize singular curves. In particular, we do not
``correct'' singularities---thus, both sets of meshes show some degeneracies resulting from collapsed
or flipped tetrahedra during the parametrization step. However, our fields yield meshes with fewer, smaller
degeneracies, less distortion, and more regular structure.
\begin{figure}
\newcommand{0.4\columnwidth}{0.28\columnwidth}
\centering
\begin{tabular}{lccc}
{\rotatebox[origin=c]{90}{\footnotesize{\cite{RaySokolov2}}}} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/cup_ray_noodles.5_cropped} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/cup_ray_sg.6_cropped} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/cup_ray_hex.2_cropped} \\
{\rotatebox[origin=c]{90}{\footnotesize{mMBO + RTR}}} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/cup_ours_noodles.4_cropped} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/cup_ours_sg.4_cropped} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/cup_ours_hex.1_cropped}%
\end{tabular}%
\caption{Twisted singular curves on the handle of the cup
in \citeauthor{RaySokolov2}'s result lead to twisting in the resulting mesh (top).
Our field yields a mesh with a more regular structure throughout (bottom).}%
\label{fig.field_comparison.cup}
\end{figure}
\begin{figure}
\newcommand{0.4\columnwidth}{0.48\columnwidth}
\centering
\begin{tabular}{cc}
\includegraphics[align=c,width=0.4\columnwidth]{figures/bone_80k_ray_sg.7_cropped} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/bone_80k_ours_sg.6_cropped} \\
\includegraphics[align=c,width=0.4\columnwidth]{figures/bone_80k_ray_hex.2_cropped} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/bone_80k_ours_hex.1_cropped} \\
\cite{RaySokolov2} & mMBO + RTR
\end{tabular}%
\caption{On the bone model, twisted singular curves in a field computed by
the method of \citet{RaySokolov2} lead to a degenerate integer grid map, producing
highly-distorted hexahedra (left). Our field yields a mesh without this degeneracy (right).}%
\label{fig.field_comparison.bone}
\end{figure}
\begin{figure}
\newcommand{0.4\columnwidth}{0.28\columnwidth}
\centering
\begin{tabular}{lccc}
{\rotatebox[origin=c]{90}{\footnotesize{\cite{RaySokolov2}}}} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/teddy_200k_ray_noodles.4_cropped} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/teddy_200k_ray_sg.4_cropped} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/teddy_200k_ray_hex.3_cropped} \\
{\rotatebox[origin=c]{90}{\footnotesize{mMBO + RTR}}} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/teddy_200k_ours_noodles.3_cropped} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/teddy_200k_ours_sg.3_cropped} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/teddy_200k_ours_hex.2_cropped}%
\end{tabular}%
\caption{
Due to the twisting of singular curves in the field generated by
\citeauthor{RaySokolov2}'s method (top), the mesh is highly distorted on the right arm.
In contrast, our field (bottom) has a regular cubic singular structure, leading to fewer mesh degeneracies.}%
\label{fig.field_comparison.teddy}
\end{figure}
\begin{figure}
\newcommand{0.4\columnwidth}{0.28\columnwidth}
\centering
\begin{tabular}{lccc}
{\rotatebox[origin=c]{90}{\footnotesize{\cite{RaySokolov2}}}} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/bunny_ray_noodles.3_cropped} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/bunny_ray_sg.2_cropped} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/bunny_ray_hex_sg.1_cropped} \\
{\rotatebox[origin=c]{90}{\footnotesize{mMBO + RTR}}} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/bunny_ours_noodles.2_cropped} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/bunny_ours_sg.1_cropped} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/bunny_ours_hex_sg.1_cropped}%
\end{tabular}%
\caption{Note the simpler, more regular singular curves in our result
as compared to the result of \citet{RaySokolov2}---especially
on the bunny's nose. The degeneracy in \citeauthor{RaySokolov2}'s result
leads to a collapse of the integer grid map on the head.}%
\label{fig.field_comparison.bunny}
\end{figure}
Figures \ref{fig.field_comparison.gear} and \ref{fig.field_comparison.hilbert}
compare fields produced by mMBO + RTR to fields produced by the method of \citet{Gao:2017:RHM}.
Our fields better respect the symmetries of the underlying models.
We note that mMBO + RTR yields a higher-quality result
without requiring a multiscale method like the one \citet{Gao:2017:RHM} employ.
\begin{figure}
\newcommand{0.4\columnwidth}{0.47\columnwidth}
\centering
\begin{tabular}{cc}
\includegraphics[align=c,width=0.4\columnwidth]{figures/gear_ic_comp_gao} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/gear_ic_comp_mmbo_rtr} \\
$E = 10580$ & $E = 9901.4$ \\
\cite{Gao:2017:RHM} & mMBO + RTR
\end{tabular}%
\caption{Comparison of octahedral field algorithms on the gear model. A field
produced with mMBO + RTR (right) exhibits lower energy and
greater symmetry than one produced by the method of \citet{Gao:2017:RHM} (left).}%
\label{fig.field_comparison.gear}
\end{figure}
\begin{figure}
\newcommand{0.4\columnwidth}{0.47\columnwidth}
\centering
\begin{tabular}{cc}
\includegraphics[align=c,width=0.4\columnwidth]{figures/hilbert_ic_comp_gao} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/hilbert_ic_comp_mmbo_rtr} \\
$E = 8150.1$ & $E = 6319.9$ \\
\cite{Gao:2017:RHM} & mMBO + RTR
\end{tabular}%
\caption{Comparison of octahedral field algorithms on the space-filling
torus model. A field
produced with mMBO + RTR (right) exhibits lower energy and
greater symmetry than one produced by the method of \citet{Gao:2017:RHM} (left).}%
\label{fig.field_comparison.hilbert}
\end{figure}
\section{Discussion and Future Work}
\label{sec.concl}
A stronger understanding of the unknowns in the volumetric frame field problem enables both theoretical and practical developments. On the theoretical side, our reexamination of the typical representation of octahedral frames not only yields useful geodesic formulas and projection operators but also suggests generalization to odeco frames. For both frame representations, optimization algorithms designed explicitly to traverse the manifold of unknowns yield gains in efficiency and in the quality of computed results.
Our work is intended not only to improve on existing frame field computation algorithms but also to inspire additional research into the structure of spaces of frames and to highlight the significance of this structure to practical aspects of field computation and meshing. Most immediately, a few open questions are left from our discussion.
On the theoretical side, conjectures \ref{conj.octa_exact} and
\ref{conj.odeco_pos} remain to be proven. We anticipate that both
can be embedded in a more general framework explaining when
relaxations of projection problems are exact.
On the algorithmic side, RTR seems to get stuck in local minima
much more often than MBO, especially on denser meshes.
We hypothesize that this is because RTR strictly moves along the manifold,
while MBO is free to tunnel through the ambient space. On the other hand,
RTR converges much faster and yields high-quality fields on coarse meshes.
Given these observations, we anticipate that it may be possible to incorporate RTR into a multiscale method that leverages its efficiency while avoiding local minima that appear at the finest scales.
\begin{figure}
\newcommand{0.4\columnwidth}{0.4\columnwidth}
\centering
\begin{tabular}{lcc}
{\rotatebox[origin=c]{90}{\footnotesize{\cite{RaySokolov2}}}} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/rockerarm_bottom_ray_sg.1_composed} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/rockerarm_bottom_ray_hex.1_cropped} \\
{\rotatebox[origin=c]{90}{\footnotesize{mMBO + RTR}}} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/rockerarm_bottom_ours_sg.1_composed} &
\includegraphics[align=c,width=0.4\columnwidth]{figures/rockerarm_bottom_ours_hex.1_cropped} %
\end{tabular}%
\caption{A field of lower Dirichlet energy (bottom) may
still result in more mesh degeneracies than a field of higher Dirichlet energy (top)
due to topological impediments to meshability,
namely the presence of an additional valence $3$--$5$ junction.}%
\label{fig.field_comparison.rockerarm}
\end{figure}
As with most existing frame field computation algorithms,
even when mMBO + RTR converges to a smooth field, the field topology does not always
admit a valid hexahedral mesh. Figure \ref{fig.field_comparison.rockerarm} shows a case where
the method of \citet{RaySokolov2} yields a field of higher
Dirichlet energy, but which leads to fewer degeneracies than our field. In
particular, the presence of a singular curve whose type changes from valence $3$
to $5$ leads to a degeneracy in the integer-grid map.
While our method succeeds at finding fields of lower
Dirichlet energy---the objective function of our method and that of
\citet{RaySokolov2}---minimizing this energy is an incomplete proxy
for our ultimate goal, namely to obtain
smooth, meshable fields. We would like to investigate additional
metrics, e.g., integrability of frame fields, that might make it possible
to express meshability constraints rigorously.
To define such additional metrics, it might be fruitful
to consider further alternative frame field representations. For example,
given our use of Lie algebra representations, a logical next step might be
to introduce an $\mathrm{SO}(3)$--principal bundle and work with connections on that
bundle as variables. In this theory, quantities such as curvature, torsion,
or the Chern-Simons functional might encode important features of frame fields.
The discretization of connections in \cite{Corman2019} represents a promising first
step.
Finally, while our algorithms do not explicitly take account of symmetry, we
find that fields computed by MBO consistently reproduce the symmetries of the
volume. It would be interesting to develop a better theoretical understanding
of this behavior and to develop machinery for explicitly promoting conformation to near-symmetry in deformed domains.
These and other challenges appear when extending well-known machinery from geometry processing on surfaces to volumetric domains, as demanded by applications in engineering, simulation, and other disciplines. Over the longer term, careful consideration of problems like the ones we consider in this paper will lead to a versatile and generalizable approach to geometry processing.
\section{Energy and Timing Results}
In the following table, ``X+RTR'' indicates that RTR was used to polish the result of algorithm X on the previous line, and the corresponding runtimes and iteration counts are the totals for X and RTR. Ray is the algorithm of \citet{RaySokolov2} with their initialization and the combinatorial Laplacian. Ray2 indicates their algorithm but with random initialization and geometric Laplacian (to provide a fairer comparison against our results). RayMBO and RayMMBO are MBO and mMBO, respectively, but using Ray et al.'s approximate projection approach rather than SDP-based projection. For consistency, all energy values are computed by converting to odeco frames and using the geometric Laplacian.
\tablecaption{Energy and computation time comparison on various models. \label{tbl.1}}
\tablefirsthead{%
\toprule Mesh & {Vertices} & Type & Method & {Energy} & {Time (\si{\second})} & {Iterations} \\ \midrule}
\tablehead{%
\toprule Mesh & {Vertices} & Type & Method & {Energy} & {Time (\si{\second})} & {Iterations} \\ \midrule}
\tabletail{\midrule}
\tablelasttail{\bottomrule}
\begin{center}
\begin{xtabular}{lS[table-format=6.0]llS[table-format=6.2]S[table-format=4.2]r}
37881\_gear.2 & 55337 & Octa & Ray & 1627.7881976314 & 90.886845 & 354 \\
37881\_gear.2 & 55337 & Octa & Ray+RTR & 1387.84483806352 & 100.502334 & 426 \\
37881\_gear.2 & 55337 & Octa & Ray2 & 1418.57420822576 & 1164.663405 & 4876 \\
37881\_gear.2 & 55337 & Octa & Ray2+RTR & 1418.57419221369 & 1167.082352 & 4879 \\
37881\_gear.2 & 55337 & Octa & RayMBO & 1830.1853713872 & 6.2545 & 14 \\
37881\_gear.2 & 55337 & Octa & RayMBO+RTR & 1407.08649047954 & 19.72424 & 111 \\
37881\_gear.2 & 55337 & Octa & RayMMBO & 1338.28078152288 & 31.995769 & 89 \\
37881\_gear.2 & 55337 & Octa & RayMMBO+RTR & 1334.91798361169 & 40.659427 & 117 \\
37881\_gear.2 & 55337 & Octa & RTR & 1410.86984069843 & 40.187156 & 371 \\
37881\_gear.2 & 55337 & Octa & MBO & 1830.19489568117 & 177.384508 & 14 \\
37881\_gear.2 & 55337 & Octa & MBO+RTR & 1407.08651635116 & 189.866174 & 103 \\
37881\_gear.2 & 55337 & Octa & mMBO & 1338.26794700124 & 1135.065046 & 89 \\
37881\_gear.2 & 55337 & Octa & mMBO+RTR & 1334.91793091988 & 1146.192842 & 118 \\
37881\_gear.2 & 55337 & Odeco & RTR & 1135.61318251883 & 435.184382 & 421 \\
37881\_gear.2 & 55337 & Odeco & MBO & 1450.33888301473 & 400.697329 & 13 \\
37881\_gear.2 & 55337 & Odeco & MBO+RTR & 1144.05131580336 & 521.300887 & 126 \\
37881\_gear.2 & 55337 & Odeco & mMBO & 1115.17470601942 & 3168.780669 & 91 \\
37881\_gear.2 & 55337 & Odeco & mMBO+RTR & 1111.32463016858 & 3215.482291 & 124 \\
\midrule
53754\_hilbert & 19901 & Octa & Ray & 2335.48736903533 & 75.006834 & 1461 \\
53754\_hilbert & 19901 & Octa & Ray+RTR & 2226.7012470211 & 80.30714 & 1550 \\
53754\_hilbert & 19901 & Octa & Ray2 & 2197.26386805774 & 254.358463 & 4839 \\
53754\_hilbert & 19901 & Octa & Ray2+RTR & 2197.26386661463 & 255.529156 & 4842 \\
53754\_hilbert & 19901 & Octa & RayMBO & 3244.82023717033 & 0.447835 & 4 \\
53754\_hilbert & 19901 & Octa & RayMBO+RTR & 2206.59997715256 & 5.45634 & 122 \\
53754\_hilbert & 19901 & Octa & RayMMBO & 2186.63180549537 & 22.535856 & 194 \\
53754\_hilbert & 19901 & Octa & RayMMBO+RTR & 2178.6347191175 & 25.994516 & 252 \\
53754\_hilbert & 19901 & Octa & RTR & 2218.77716086912 & 9.414869 & 250 \\
53754\_hilbert & 19901 & Octa & MBO & 3246.84487243297 & 7.22986 & 4 \\
53754\_hilbert & 19901 & Octa & MBO+RTR & 2204.45506552818 & 12.913441 & 147 \\
53754\_hilbert & 19901 & Octa & mMBO & 2186.60084134159 & 421.800114 & 195 \\
53754\_hilbert & 19901 & Octa & mMBO+RTR & 2178.64463106349 & 425.399151 & 246 \\
53754\_hilbert & 19901 & Odeco & RTR & 1689.466965234 & 78.001416 & 196 \\
53754\_hilbert & 19901 & Odeco & MBO & 2394.23484964477 & 24.533498 & 4 \\
53754\_hilbert & 19901 & Odeco & MBO+RTR & 1683.76701446998 & 70.031574 & 116 \\
53754\_hilbert & 19901 & Odeco & mMBO & 1690.20172528145 & 1413.47697 & 181 \\
53754\_hilbert & 19901 & Odeco & mMBO+RTR & 1683.24230653009 & 1433.605046 & 223 \\
\midrule
78322\_gear.2 & 8522 & Octa & Ray & 711.432530886834 & 5.925626 & 177 \\
78322\_gear.2 & 8522 & Octa & Ray+RTR & 659.381199067609 & 7.825635 & 252 \\
78322\_gear.2 & 8522 & Octa & Ray2 & 666.530296089721 & 148.161695 & 4860 \\
78322\_gear.2 & 8522 & Octa & Ray2+RTR & 666.514552591622 & 148.577403 & 4864 \\
78322\_gear.2 & 8522 & Octa & RayMBO & 927.393314730515 & 2.116394 & 18 \\
78322\_gear.2 & 8522 & Octa & RayMBO+RTR & 660.820605950343 & 4.387593 & 87 \\
78322\_gear.2 & 8522 & Octa & RayMMBO & 657.589044486314 & 6.452463 & 80 \\
78322\_gear.2 & 8522 & Octa & RayMMBO+RTR & 654.206358386474 & 7.340928 & 91 \\
78322\_gear.2 & 8522 & Octa & RTR & 671.160213013799 & 2.625101 & 126 \\
78322\_gear.2 & 8522 & Octa & MBO & 927.391571918813 & 27.893567 & 18 \\
78322\_gear.2 & 8522 & Octa & MBO+RTR & 660.820642231487 & 30.004544 & 84 \\
78322\_gear.2 & 8522 & Octa & mMBO & 657.583939368473 & 123.072349 & 80 \\
78322\_gear.2 & 8522 & Octa & mMBO+RTR & 654.206404790309 & 123.952688 & 91 \\
78322\_gear.2 & 8522 & Odeco & RTR & 557.439248933843 & 22.281006 & 156 \\
78322\_gear.2 & 8522 & Odeco & MBO & 752.342390269602 & 77.148586 & 18 \\
78322\_gear.2 & 8522 & Odeco & MBO+RTR & 553.784016559938 & 89.827963 & 104 \\
78322\_gear.2 & 8522 & Odeco & mMBO & 559.11157148746 & 333.326316 & 71 \\
78322\_gear.2 & 8522 & Odeco & mMBO+RTR & 556.119533318641 & 336.883245 & 86 \\
\midrule
anchor\_62k & 15704 & Octa & Ray & 36.0056864721536 & 30.154563 & 754 \\
anchor\_62k & 15704 & Octa & Ray+RTR & 32.9995413786798 & 32.73569 & 789 \\
anchor\_62k & 15704 & Octa & Ray2 & 34.6895978291598 & 193.142287 & 4853 \\
anchor\_62k & 15704 & Octa & Ray2+RTR & 34.6895397183706 & 194.041172 & 4856 \\
anchor\_62k & 15704 & Octa & RayMBO & 43.4494123060452 & 1.219396 & 8 \\
anchor\_62k & 15704 & Octa & RayMBO+RTR & 33.0231080701726 & 6.244455 & 112 \\
anchor\_62k & 15704 & Octa & RayMMBO & 33.0431188039382 & 9.508392 & 82 \\
anchor\_62k & 15704 & Octa & RayMMBO+RTR & 32.9506404647394 & 12.791621 & 114 \\
anchor\_62k & 15704 & Octa & RTR & 33.2599092659859 & 7.256053 & 194 \\
anchor\_62k & 15704 & Octa & MBO & 43.446053167056 & 12.335145 & 8 \\
anchor\_62k & 15704 & Octa & MBO+RTR & 33.0231027405009 & 16.892954 & 109 \\
anchor\_62k & 15704 & Octa & mMBO & 33.0431234369269 & 132.941786 & 82 \\
anchor\_62k & 15704 & Octa & mMBO+RTR & 32.9506362201387 & 135.998473 & 112 \\
anchor\_62k & 15704 & Odeco & RTR & 28.2246600426223 & 59.858448 & 187 \\
anchor\_62k & 15704 & Odeco & MBO & 35.2462988792237 & 44.704981 & 8 \\
anchor\_62k & 15704 & Odeco & MBO+RTR & 28.0064065851676 & 62.49842 & 58 \\
anchor\_62k & 15704 & Odeco & mMBO & 28.0784958661707 & 480.518111 & 76 \\
anchor\_62k & 15704 & Odeco & mMBO+RTR & 27.9733483375547 & 490.351929 & 96 \\
\midrule
bone\_80k & 21796 & Octa & Ray & 11.2574799519351 & 29.546018 & 533 \\
bone\_80k & 21796 & Octa & Ray+RTR & 9.48349318458969 & 35.534527 & 597 \\
bone\_80k & 21796 & Octa & Ray2 & 10.2375154765512 & 257.976529 & 4870 \\
bone\_80k & 21796 & Octa & Ray2+RTR & 10.2375154633285 & 258.934231 & 4872 \\
bone\_80k & 21796 & Octa & RayMBO & 13.1776624687465 & 1.034407 & 5 \\
bone\_80k & 21796 & Octa & RayMBO+RTR & 9.56730840445834 & 8.614894 & 100 \\
bone\_80k & 21796 & Octa & RayMMBO & 9.32812495796291 & 22.76487 & 167 \\
bone\_80k & 21796 & Octa & RayMMBO+RTR & 9.27697989868459 & 27.638714 & 191 \\
bone\_80k & 21796 & Octa & RTR & 10.5696504848295 & 13.635808 & 329 \\
bone\_80k & 21796 & Octa & MBO & 13.1780766431127 & 9.461959 & 5 \\
bone\_80k & 21796 & Octa & MBO+RTR & 9.56730404470509 & 15.629057 & 102 \\
bone\_80k & 21796 & Octa & mMBO & 9.32837290964278 & 345.514333 & 166 \\
bone\_80k & 21796 & Octa & mMBO+RTR & 9.27697620812697 & 349.758274 & 191 \\
bone\_80k & 21796 & Odeco & RTR & 6.68434055134972 & 153.424661 & 336 \\
bone\_80k & 21796 & Odeco & MBO & 9.49631024843654 & 34.939864 & 5 \\
bone\_80k & 21796 & Odeco & MBO+RTR & 6.634143250847 & 86.095409 & 104 \\
bone\_80k & 21796 & Odeco & mMBO & 6.646645872809 & 1238.84297 & 152 \\
bone\_80k & 21796 & Odeco & mMBO+RTR & 6.57524545748854 & 1261.850163 & 190 \\
\midrule
bunny\_324k & 60624 & Octa & Ray & 1401.01950662545 & 773.274404 & 3059 \\
bunny\_324k & 60624 & Octa & Ray+RTR & 1367.50329646521 & 792.414133 & 3129 \\
bunny\_324k & 60624 & Octa & Ray2 & 1368.5427185326 & 1203.046886 & 4870 \\
bunny\_324k & 60624 & Octa & Ray2+RTR & 1368.18024732712 & 1212.696356 & 4888 \\
bunny\_324k & 60624 & Octa & RayMBO & 1926.8324020679 & 6.8655 & 13 \\
bunny\_324k & 60624 & Octa & RayMBO+RTR & 1380.54714062876 & 36.618918 & 267 \\
bunny\_324k & 60624 & Octa & RayMMBO & 1350.81520691799 & 23.663039 & 88 \\
bunny\_324k & 60624 & Octa & RayMMBO+RTR & 1341.67868086027 & 36.498163 & 135 \\
bunny\_324k & 60624 & Octa & RTR & 1511.39813378183 & 68.327213 & 696 \\
bunny\_324k & 60624 & Octa & MBO & 1925.16760751121 & 253.436295 & 13 \\
bunny\_324k & 60624 & Octa & MBO+RTR & 1376.69359285074 & 282.747721 & 273 \\
bunny\_324k & 60624 & Octa & mMBO & 1350.7802465171 & 1631.805764 & 88 \\
bunny\_324k & 60624 & Octa & mMBO+RTR & 1341.79773491077 & 1646.435503 & 129 \\
bunny\_324k & 60624 & Odeco & RTR & 997.623179425844 & 1005.799612 & 1001 \\
bunny\_324k & 60624 & Odeco & MBO & 1415.54993813064 & 355.101635 & 9 \\
bunny\_324k & 60624 & Odeco & MBO+RTR & 984.879218072676 & 579.814421 & 216 \\
bunny\_324k & 60624 & Odeco & mMBO & 1003.97002594538 & 3679.236843 & 84 \\
bunny\_324k & 60624 & Odeco & mMBO+RTR & 980.745267474436 & 3848.204137 & 227 \\
\midrule
camille\_hand\_160k & 38778 & Octa & Ray & 3935.67980912147 & 119.028082 & 1186 \\
camille\_hand\_160k & 38778 & Octa & Ray+RTR & 3218.96415821507 & 129.845858 & 1273 \\
camille\_hand\_160k & 38778 & Octa & Ray2 & 3218.19641953673 & 495.450275 & 4868 \\
camille\_hand\_160k & 38778 & Octa & Ray2+RTR & 3218.19594372639 & 499.124449 & 4872 \\
camille\_hand\_160k & 38778 & Octa & RayMBO & 4374.80989057275 & 5.430821 & 14 \\
camille\_hand\_160k & 38778 & Octa & RayMBO+RTR & 3221.83914944056 & 19.060633 & 133 \\
camille\_hand\_160k & 38778 & Octa & RayMMBO & 3163.41028121077 & 21.726737 & 113 \\
camille\_hand\_160k & 38778 & Octa & RayMMBO+RTR & 3145.32505518525 & 29.60902 & 148 \\
camille\_hand\_160k & 38778 & Octa & RTR & 3429.21407221247 & 30.329409 & 450 \\
camille\_hand\_160k & 38778 & Octa & MBO & 4374.74460444532 & 65.355754 & 14 \\
camille\_hand\_160k & 38778 & Octa & MBO+RTR & 3221.83789144758 & 78.758923 & 111 \\
camille\_hand\_160k & 38778 & Octa & mMBO & 3161.73725317673 & 404.238249 & 92 \\
camille\_hand\_160k & 38778 & Octa & mMBO+RTR & 3143.65851219035 & 411.756848 & 126 \\
camille\_hand\_160k & 38778 & Odeco & RTR & 2504.88247531802 & 327.808727 & 423 \\
camille\_hand\_160k & 38778 & Odeco & MBO & 3316.36182813216 & 134.355212 & 9 \\
camille\_hand\_160k & 38778 & Odeco & MBO+RTR & 2458.21831339449 & 233.069882 & 127 \\
camille\_hand\_160k & 38778 & Odeco & mMBO & 2471.07438442528 & 1383.076433 & 84 \\
camille\_hand\_160k & 38778 & Odeco & mMBO+RTR & 2447.42672098434 & 1420.2088 & 121 \\
\midrule
cup\_85k & 21060 & Octa & Ray & 437.462863336574 & 32.292784 & 482 \\
cup\_85k & 21060 & Octa & Ray+RTR & 430.09109848822 & 39.110168 & 524 \\
cup\_85k & 21060 & Octa & Ray2 & 449.156678929421 & 294.484185 & 4868 \\
cup\_85k & 21060 & Octa & Ray2+RTR & 449.15665863193 & 296.09705 & 4871 \\
cup\_85k & 21060 & Octa & RayMBO & 522.689547570963 & 1.165037 & 6 \\
cup\_85k & 21060 & Octa & RayMBO+RTR & 428.28123303564 & 4.138869 & 41 \\
cup\_85k & 21060 & Octa & RayMMBO & 428.65039182988 & 6.97532 & 53 \\
cup\_85k & 21060 & Octa & RayMMBO+RTR & 428.289349076208 & 9.736845 & 68 \\
cup\_85k & 21060 & Octa & RTR & 457.998417602114 & 13.322019 & 299 \\
cup\_85k & 21060 & Octa & MBO & 522.713554003032 & 17.074732 & 7 \\
cup\_85k & 21060 & Octa & MBO+RTR & 428.281150690064 & 20.113414 & 45 \\
cup\_85k & 21060 & Octa & mMBO & 428.649637890993 & 137.725451 & 53 \\
cup\_85k & 21060 & Octa & mMBO+RTR & 428.28928111591 & 140.476922 & 68 \\
cup\_85k & 21060 & Odeco & RTR & 354.428921984561 & 137.677766 & 324 \\
cup\_85k & 21060 & Odeco & MBO & 419.903657366974 & 56.185528 & 7 \\
cup\_85k & 21060 & Odeco & MBO+RTR & 353.53051586124 & 74.37888 & 45 \\
cup\_85k & 21060 & Odeco & mMBO & 353.795913855106 & 503.605044 & 54 \\
cup\_85k & 21060 & Odeco & mMBO+RTR & 353.08959013222 & 512.115006 & 68 \\
\midrule
elk\_120k & 24998 & Octa & Ray & 4563.0642808578 & 84.867933 & 1041 \\
elk\_120k & 24998 & Octa & Ray+RTR & 4350.84194718717 & 89.007857 & 1078 \\
elk\_120k & 24998 & Octa & Ray2 & 4445.01909898391 & 401.959224 & 4850 \\
elk\_120k & 24998 & Octa & Ray2+RTR & 4445.01898251458 & 406.845053 & 4856 \\
elk\_120k & 24998 & Octa & RayMBO & 6152.81681831048 & 8.005883 & 16 \\
elk\_120k & 24998 & Octa & RayMBO+RTR & 4371.92690283341 & 17.386109 & 211 \\
elk\_120k & 24998 & Octa & RayMMBO & 4327.67035158438 & 20.678396 & 146 \\
elk\_120k & 24998 & Octa & RayMMBO+RTR & 4311.8152689282 & 25.125086 & 180 \\
elk\_120k & 24998 & Octa & RTR & 4503.97199211569 & 10.486997 & 254 \\
elk\_120k & 24998 & Octa & MBO & 6088.9598263369 & 68.495371 & 14 \\
elk\_120k & 24998 & Octa & MBO+RTR & 4370.79144580514 & 82.172627 & 238 \\
elk\_120k & 24998 & Octa & mMBO & 4327.07934048708 & 663.876338 & 140 \\
elk\_120k & 24998 & Octa & mMBO+RTR & 4311.47322082693 & 667.756932 & 178 \\
elk\_120k & 24998 & Odeco & RTR & 3178.78178912481 & 109.287722 & 249 \\
elk\_120k & 24998 & Odeco & MBO & 4482.30347508535 & 137.363207 & 11 \\
elk\_120k & 24998 & Odeco & MBO+RTR & 3156.06199657163 & 227.84347 & 214 \\
elk\_120k & 24998 & Odeco & mMBO & 3198.49754612327 & 1906.577191 & 141 \\
elk\_120k & 24998 & Odeco & mMBO+RTR & 3147.13092737582 & 1958.580046 & 255 \\
\midrule
fandisk\_75k & 15689 & Octa & Ray & 1546.12525838979 & 31.520845 & 587 \\
fandisk\_75k & 15689 & Octa & Ray+RTR & 1524.48253561324 & 32.720836 & 608 \\
fandisk\_75k & 15689 & Octa & Ray2 & 1530.71518434132 & 272.87045 & 4849 \\
fandisk\_75k & 15689 & Octa & Ray2+RTR & 1530.71506079569 & 273.919586 & 4853 \\
fandisk\_75k & 15689 & Octa & RayMBO & 1714.78072308148 & 1.196189 & 9 \\
fandisk\_75k & 15689 & Octa & RayMBO+RTR & 1531.74770279418 & 3.97561 & 65 \\
fandisk\_75k & 15689 & Octa & RayMMBO & 1527.74904410715 & 5.982631 & 63 \\
fandisk\_75k & 15689 & Octa & RayMMBO+RTR & 1523.01492293117 & 7.463257 & 79 \\
fandisk\_75k & 15689 & Octa & RTR & 1531.19007553433 & 6.163054 & 251 \\
fandisk\_75k & 15689 & Octa & MBO & 1714.75708806945 & 25.8378 & 9 \\
fandisk\_75k & 15689 & Octa & MBO+RTR & 1531.74757171682 & 29.329142 & 75 \\
fandisk\_75k & 15689 & Octa & mMBO & 1527.74684335567 & 192.373556 & 63 \\
fandisk\_75k & 15689 & Octa & mMBO+RTR & 1523.014848323 & 193.879218 & 79 \\
fandisk\_75k & 15689 & Odeco & RTR & 1328.25176918128 & 54.854635 & 201 \\
fandisk\_75k & 15689 & Odeco & MBO & 1482.10653143557 & 70.927686 & 9 \\
fandisk\_75k & 15689 & Odeco & MBO+RTR & 1323.26348078298 & 88.149173 & 65 \\
fandisk\_75k & 15689 & Odeco & mMBO & 1327.1800207322 & 564.682418 & 63 \\
fandisk\_75k & 15689 & Odeco & mMBO+RTR & 1322.21062272244 & 573.50418 & 84 \\
\midrule
genus3 & 11828 & Octa & Ray & 17.4004976601531 & 14.699231 & 468 \\
genus3 & 11828 & Octa & Ray+RTR & 14.8359439752782 & 18.070935 & 523 \\
genus3 & 11828 & Octa & Ray2 & 15.5117727872977 & 77.000292 & 2666 \\
genus3 & 11828 & Octa & Ray2+RTR & 15.511772787256 & 77.566222 & 2668 \\
genus3 & 11828 & Octa & RayMBO & 18.5893521606509 & 1.576914 & 10 \\
genus3 & 11828 & Octa & RayMBO+RTR & 14.8743906632279 & 5.227495 & 122 \\
genus3 & 11828 & Octa & RayMMBO & 14.5866045362088 & 15.355283 & 114 \\
genus3 & 11828 & Octa & RayMMBO+RTR & 14.5148961476322 & 18.885343 & 146 \\
genus3 & 11828 & Octa & RTR & 15.4594538861435 & 6.261069 & 220 \\
genus3 & 11828 & Octa & MBO & 18.5895699502403 & 11.2015 & 10 \\
genus3 & 11828 & Octa & MBO+RTR & 14.8743847067222 & 15.481857 & 127 \\
genus3 & 11828 & Octa & mMBO & 14.5866539703607 & 127.728328 & 114 \\
genus3 & 11828 & Octa & mMBO+RTR & 14.5148901377253 & 130.619952 & 148 \\
genus3 & 11828 & Odeco & RTR & 11.5128251461558 & 69.03679 & 287 \\
genus3 & 11828 & Odeco & MBO & 14.2882926303124 & 31.989904 & 8 \\
genus3 & 11828 & Odeco & MBO+RTR & 11.447028697553 & 76.717956 & 169 \\
genus3 & 11828 & Odeco & mMBO & 11.4689325573172 & 451.627696 & 103 \\
genus3 & 11828 & Odeco & mMBO+RTR & 11.3574832457972 & 461.509836 & 134 \\
\midrule
rockerarm\_91k & 18956 & Octa & Ray & 1146.53689650917 & 47.039329 & 774 \\
rockerarm\_91k & 18956 & Octa & Ray+RTR & 1103.36602780753 & 49.385221 & 807 \\
rockerarm\_91k & 18956 & Octa & Ray2 & 1162.39961885946 & 302.119105 & 4868 \\
rockerarm\_91k & 18956 & Octa & Ray2+RTR & 1162.39956491074 & 302.872089 & 4871 \\
rockerarm\_91k & 18956 & Octa & RayMBO & 1372.24220101733 & 0.884723 & 6 \\
rockerarm\_91k & 18956 & Octa & RayMBO+RTR & 1113.38625947076 & 3.938696 & 65 \\
rockerarm\_91k & 18956 & Octa & RayMMBO & 1105.39482281235 & 10.82882 & 102 \\
rockerarm\_91k & 18956 & Octa & RayMMBO+RTR & 1102.34287558757 & 12.825129 & 127 \\
rockerarm\_91k & 18956 & Octa & RTR & 1198.7289810018 & 7.337638 & 231 \\
rockerarm\_91k & 18956 & Octa & MBO & 1371.51855178122 & 18.967741 & 6 \\
rockerarm\_91k & 18956 & Octa & MBO+RTR & 1113.38601980429 & 22.168306 & 76 \\
rockerarm\_91k & 18956 & Octa & mMBO & 1105.39421790983 & 355.803039 & 102 \\
rockerarm\_91k & 18956 & Octa & mMBO+RTR & 1102.34268809852 & 357.826054 & 125 \\
rockerarm\_91k & 18956 & Odeco & RTR & 891.579050658908 & 87.653674 & 270 \\
rockerarm\_91k & 18956 & Odeco & MBO & 1086.38021213032 & 51.642116 & 6 \\
rockerarm\_91k & 18956 & Odeco & MBO+RTR & 895.875051984699 & 76.330619 & 71 \\
rockerarm\_91k & 18956 & Odeco & mMBO & 898.325305536742 & 929.905959 & 88 \\
rockerarm\_91k & 18956 & Odeco & mMBO+RTR & 894.741653270749 & 945.188489 & 116 \\
\midrule
sphere\_143k & 25383 & Octa & Ray & 26.5393000348803 & 85.164424 & 872 \\
sphere\_143k & 25383 & Octa & Ray+RTR & 24.9898022379363 & 88.896349 & 899 \\
sphere\_143k & 25383 & Octa & Ray2 & 25.2357665464039 & 480.697354 & 4846 \\
sphere\_143k & 25383 & Octa & Ray2+RTR & 25.2355979325505 & 482.491655 & 4849 \\
sphere\_143k & 25383 & Octa & RayMBO & 27.5956288027606 & 8.556322 & 23 \\
sphere\_143k & 25383 & Octa & RayMBO+RTR & 25.616555488109 & 14.175963 & 64 \\
sphere\_143k & 25383 & Octa & RayMMBO & 26.7861449404125 & 17.739617 & 92 \\
sphere\_143k & 25383 & Octa & RayMMBO+RTR & 25.6518104269633 & 26.966432 & 231 \\
sphere\_143k & 25383 & Octa & RTR & 31.699116164254 & 20.892226 & 421 \\
sphere\_143k & 25383 & Octa & MBO & 27.6168785858888 & 282.781156 & 39 \\
sphere\_143k & 25383 & Octa & MBO+RTR & 25.5200284691705 & 288.188512 & 105 \\
sphere\_143k & 25383 & Octa & mMBO & 26.3233705312071 & 635.644448 & 93 \\
sphere\_143k & 25383 & Octa & mMBO+RTR & 25.564566066823 & 647.298618 & 170 \\
sphere\_143k & 25383 & Odeco & RTR & 12.3948213136043 & 160.087314 & 357 \\
sphere\_143k & 25383 & Odeco & MBO & 17.7576608812635 & 543.465406 & 33 \\
sphere\_143k & 25383 & Odeco & MBO+RTR & 12.3890231472453 & 682.910434 & 340 \\
sphere\_143k & 25383 & Odeco & mMBO & 14.8907491153617 & 1686.499851 & 99 \\
sphere\_143k & 25383 & Odeco & mMBO+RTR & 12.3830194928387 & 1856.30069 & 473 \\
\midrule
teddy\_200k & 37924 & Octa & Ray & 33.8418672212724 & 174.743187 & 1772 \\
teddy\_200k & 37924 & Octa & Ray+RTR & 29.0957872858223 & 186.536541 & 1858 \\
teddy\_200k & 37924 & Octa & Ray2 & 30.1033321005087 & 481.329782 & 4849 \\
teddy\_200k & 37924 & Octa & Ray2+RTR & 30.103331849994 & 483.027425 & 4851 \\
teddy\_200k & 37924 & Octa & RayMBO & 42.3780547008977 & 4.9669 & 12 \\
teddy\_200k & 37924 & Octa & RayMBO+RTR & 30.9726175122927 & 21.851548 & 162 \\
teddy\_200k & 37924 & Octa & RayMMBO & 28.6556691713306 & 21.876527 & 100 \\
teddy\_200k & 37924 & Octa & RayMMBO+RTR & 28.5274616774123 & 30.988832 & 125 \\
teddy\_200k & 37924 & Octa & RTR & 32.5808048499773 & 26.164811 & 352 \\
teddy\_200k & 37924 & Octa & MBO & 42.3618766909961 & 55.298653 & 11 \\
teddy\_200k & 37924 & Octa & MBO+RTR & 30.9718730204497 & 71.252168 & 153 \\
teddy\_200k & 37924 & Octa & mMBO & 28.6572167207951 & 473.555237 & 100 \\
teddy\_200k & 37924 & Octa & mMBO+RTR & 28.527453817187 & 480.968247 & 132 \\
teddy\_200k & 37924 & Odeco & RTR & 21.6716266699613 & 382.480744 & 511 \\
teddy\_200k & 37924 & Odeco & MBO & 30.790585408057 & 120.406552 & 8 \\
teddy\_200k & 37924 & Odeco & MBO+RTR & 21.3084848134291 & 255.272108 & 168 \\
teddy\_200k & 37924 & Odeco & mMBO & 21.2838696435872 & 1549.883215 & 94 \\
teddy\_200k & 37924 & Odeco & mMBO+RTR & 20.9666969139953 & 1596.813034 & 137 \\
\midrule
torus\_316k & 56975 & Octa & Ray & 34.5519243414904 & 125.939267 & 539 \\
torus\_316k & 56975 & Octa & Ray+RTR & 32.5472980223157 & 138.051121 & 589 \\
torus\_316k & 56975 & Octa & Ray2 & 39.3743078012412 & 1094.406649 & 4881 \\
torus\_316k & 56975 & Octa & Ray2+RTR & 39.3743065310155 & 1095.953862 & 4883 \\
torus\_316k & 56975 & Octa & RayMBO & 52.3414014694667 & 6.758357 & 12 \\
torus\_316k & 56975 & Octa & RayMBO+RTR & 33.2451095688752 & 29.140859 & 199 \\
torus\_316k & 56975 & Octa & RayMMBO & 32.6308861505007 & 24.816486 & 112 \\
torus\_316k & 56975 & Octa & RayMMBO+RTR & 32.4770451792588 & 33.478607 & 143 \\
torus\_316k & 56975 & Octa & RTR & 40.0082103677944 & 55.302343 & 666 \\
torus\_316k & 56975 & Octa & MBO & 52.3826243806595 & 201.624188 & 12 \\
torus\_316k & 56975 & Octa & MBO+RTR & 33.2059836202069 & 221.748262 & 214 \\
torus\_316k & 56975 & Octa & mMBO & 32.630324711629 & 1840.399585 & 112 \\
torus\_316k & 56975 & Octa & mMBO+RTR & 32.477033221334 & 1849.876337 & 148 \\
torus\_316k & 56975 & Odeco & RTR & 22.8976488588223 & 593.216555 & 585 \\
torus\_316k & 56975 & Odeco & MBO & 39.3413234133634 & 397.581357 & 11 \\
torus\_316k & 56975 & Odeco & MBO+RTR & 22.432627635989 & 567.424407 & 171 \\
torus\_316k & 56975 & Odeco & mMBO & 22.9281981236922 & 4044.660572 & 102 \\
torus\_316k & 56975 & Odeco & mMBO+RTR & 22.3475736348343 & 4157.564081 & 197 \\%
\end{xtabular}
\end{center}
\section{Geodesics in the Octahedral Variety}
The \textbf{angular momentum operators} $L_i$ are the images of the Lie algebra generators $l_i$ under a Lie algebra representation of $\mathfrak{so}(3)$. For the nine-dimensional real irreducible representation of $\mathrm{SO}(3)$, we use
\[
L_1 = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & -\sqrt{2} & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & -\sqrt{\frac{7}{2}} & 0 & -\sqrt{2} \\
0 & 0 & 0 & 0 & 0 & -\frac{3}{\sqrt{2}} & 0 & -\sqrt{\frac{7}{2}} & 0 \\
0 & 0 & 0 & 0 & -\sqrt{10} & 0 & -\frac{3}{\sqrt{2}} & 0 & 0 \\
0 & 0 & 0 & \sqrt{10} & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{3}{\sqrt{2}} & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & \sqrt{\frac{7}{2}} & 0 & \frac{3}{\sqrt{2}} & 0 & 0 & 0 & 0 & 0 \\
\sqrt{2} & 0 & \sqrt{\frac{7}{2}} & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & \sqrt{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix} \]
\[
L_2 =
\begin{pmatrix}
0 & \sqrt{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-\sqrt{2} & 0 & \sqrt{\frac{7}{2}} & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & -\sqrt{\frac{7}{2}} & 0 & \frac{3}{\sqrt{2}} & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -\frac{3}{\sqrt{2}} & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -\sqrt{10} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \sqrt{10} & 0 & -\frac{3}{\sqrt{2}} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \frac{3}{\sqrt{2}} & 0 & -\sqrt{\frac{7}{2}} & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & \sqrt{\frac{7}{2}} & 0 & -\sqrt{2} \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \sqrt{2} & 0
\end{pmatrix} \]
\[
L_3 = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 4 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 3 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -2 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & -3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}
\]
In the main text (\S4.1) we give an explicit formula for geodesics in the octahedral variety
in terms of the following two ingredients:
\[ \exp(\theta L_3) = \left(
\begin{array}{ccccccccc}
\cos (4 \theta ) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \sin (4 \theta ) \\
0 & \cos (3 \theta ) & 0 & 0 & 0 & 0 & 0 & \sin (3 \theta ) & 0 \\
0 & 0 & \cos (2 \theta ) & 0 & 0 & 0 & \sin (2 \theta ) & 0 & 0 \\
0 & 0 & 0 & \cos (\theta ) & 0 & \sin (\theta ) & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & -\sin (\theta ) & 0 & \cos (\theta ) & 0 & 0 & 0 \\
0 & 0 & -\sin (2 \theta ) & 0 & 0 & 0 & \cos (2 \theta ) & 0 & 0 \\
0 & -\sin (3 \theta ) & 0 & 0 & 0 & 0 & 0 & \cos (3 \theta ) & 0 \\
-\sin (4 \theta ) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cos (4 \theta ) \\
\end{array}
\right) \]
and
\[ R_{23} = \exp\left(\frac{\pi}{2}L_1\right) =
\left(
\begin{array}{ccccccccc}
0 & 0 & 0 & 0 & 0 & \frac{\sqrt{\frac{7}{2}}}{2} & 0 & -\frac{1}{2 \sqrt{2}} & 0 \\
0 & -\frac{3}{4} & 0 & \frac{\sqrt{7}}{4} & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \frac{1}{2 \sqrt{2}} & 0 & \frac{\sqrt{\frac{7}{2}}}{2} & 0 \\
0 & \frac{\sqrt{7}}{4} & 0 & \frac{3}{4} & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \frac{3}{8} & 0 & \frac{\sqrt{5}}{4} & 0 & \frac{\sqrt{35}}{8} \\
-\frac{\sqrt{\frac{7}{2}}}{2} & 0 & -\frac{1}{2 \sqrt{2}} & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \frac{\sqrt{5}}{4} & 0 & \frac{1}{2} & 0 & -\frac{\sqrt{7}}{4} \\
\frac{1}{2 \sqrt{2}} & 0 & -\frac{\sqrt{\frac{7}{2}}}{2} & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \frac{\sqrt{35}}{8} & 0 & -\frac{\sqrt{7}}{4} & 0 & \frac{1}{8} \\
\end{array}
\right) \]
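These matrices are straightforward to verify mechanically. The following Python/NumPy sketch (ours, not part of the construction; the helper \texttt{skew} and all other names are ad hoc) rebuilds $L_1, L_2, L_3$ from their nonzero upper-triangular entries, checks the $\mathfrak{so}(3)$ commutation relation $[L_1, L_2] = L_3$, and confirms that $\exp(\theta L_3)$ reproduces the block-rotation closed form above.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def skew(entries, n=9):
    # Assemble an antisymmetric matrix from (row, col, value)
    # triples giving the nonzero entries above the diagonal (1-indexed).
    M = np.zeros((n, n))
    for i, j, v in entries:
        M[i - 1, j - 1] = v
        M[j - 1, i - 1] = -v
    return M

r2, r72, r10 = np.sqrt(2.0), np.sqrt(3.5), np.sqrt(10.0)
L1 = skew([(1, 8, -r2), (2, 7, -r72), (2, 9, -r2), (3, 6, -3 / r2),
           (3, 8, -r72), (4, 5, -r10), (4, 7, -3 / r2)])
L2 = skew([(1, 2, r2), (2, 3, r72), (3, 4, 3 / r2), (5, 6, -r10),
           (6, 7, -3 / r2), (7, 8, -r72), (8, 9, -r2)])
L3 = skew([(1, 9, 4.0), (2, 8, 3.0), (3, 7, 2.0), (4, 6, 1.0)])

# so(3) commutation relation in this representation.
assert np.allclose(L1 @ L2 - L2 @ L1, L3)

# exp(theta*L3) is the block rotation displayed above.
theta = 0.37
E = expm(theta * L3)
for m, (i, j) in [(4, (0, 8)), (3, (1, 7)), (2, (2, 6)), (1, (3, 5))]:
    assert np.isclose(E[i, i], np.cos(m * theta))
    assert np.isclose(E[i, j], np.sin(m * theta))

R23 = expm(np.pi / 2 * L1)  # should match the matrix R_23 above
\end{verbatim}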
\section{Defining Equations of the Odeco Variety}
For odeco frames aligned to the vertical $\boldsymbol{z}$ (\emph{cf.}
\S5.1.2), $s_{\boldsymbol{z}}$
satisfies three homogeneous quadratic equations defined by the following symmetric
matrices:
\begin{gather*}
\left(
\begin{array}{ccccc}
-4 & 0 & -3 \sqrt{2} & 0 & 0 \\
0 & 0 & 0 & 18 & 0 \\
-3 \sqrt{2} & 0 & 0 & 0 & 18 \\
0 & 18 & 0 & 72 & 0 \\
0 & 0 & 18 & 0 & 72 \\
\end{array}
\right) \\
\left(
\begin{array}{ccccc}
0 & 6 \sqrt{2} & 0 & 0 & 0 \\
6 \sqrt{2} & 0 & 0 & 0 & 36 \\
0 & 0 & 0 & -36 & 0 \\
0 & 0 & -36 & 0 & 0 \\
0 & 36 & 0 & 0 & 0 \\
\end{array}
\right) \\
\left(
\begin{array}{ccccc}
-4 & 0 & 3 \sqrt{2} & 0 & 0 \\
0 & 0 & 0 & -18 & 0 \\
3 \sqrt{2} & 0 & 0 & 0 & -18 \\
0 & -18 & 0 & 72 & 0 \\
0 & 0 & -18 & 0 & 72 \\
\end{array}
\right)
\end{gather*}
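To make the role of these matrices concrete: a coefficient vector $s \in \mathbb{R}^5$ satisfies the constraints exactly when all three quadratic forms $s^{\top} Q_i\, s$ vanish. The sketch below (ours, not part of the supplement; the test vector \texttt{s} is a placeholder, not a point on the variety) assembles the matrices and reports the residuals.
\begin{verbatim}
import numpy as np

r2 = np.sqrt(2.0)
Q1 = np.array([[-4, 0, -3 * r2, 0, 0],
               [0, 0, 0, 18, 0],
               [-3 * r2, 0, 0, 0, 18],
               [0, 18, 0, 72, 0],
               [0, 0, 18, 0, 72]])
Q2 = np.array([[0, 6 * r2, 0, 0, 0],
               [6 * r2, 0, 0, 0, 36],
               [0, 0, 0, -36, 0],
               [0, 0, -36, 0, 0],
               [0, 36, 0, 0, 0]])
Q3 = np.array([[-4, 0, 3 * r2, 0, 0],
               [0, 0, 0, -18, 0],
               [3 * r2, 0, 0, 0, -18],
               [0, -18, 0, 72, 0],
               [0, 0, -18, 0, 72]])

def odeco_z_residuals(s):
    # Values of the three homogeneous quadratics at s;
    # all three vanish iff s satisfies the constraints.
    return [float(s @ Q @ s) for Q in (Q1, Q2, Q3)]

s = np.random.randn(5)  # placeholder coefficient vector
print(odeco_z_residuals(s))
\end{verbatim}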
Below, we reproduce explicitly the $27$ symmetric $15 \times 15$ matrices $A_i$ in the equations
(8) cutting out the odeco variety, expressed in the spherical harmonics
basis.
{\tiny%
\input{odecoMats/odecoMats.1}
\input{odecoMats/odecoMats.2}
\input{odecoMats/odecoMats.3}
\input{odecoMats/odecoMats.4}
\input{odecoMats/odecoMats.5}
\input{odecoMats/odecoMats.6}
\input{odecoMats/odecoMats.7}
\input{odecoMats/odecoMats.8}
\input{odecoMats/odecoMats.9}
\input{odecoMats/odecoMats.10}
\input{odecoMats/odecoMats.11}
\input{odecoMats/odecoMats.12}
\input{odecoMats/odecoMats.13}
\input{odecoMats/odecoMats.14}
\input{odecoMats/odecoMats.15}
\input{odecoMats/odecoMats.16}
\input{odecoMats/odecoMats.17}
\input{odecoMats/odecoMats.18}
\input{odecoMats/odecoMats.19}
\input{odecoMats/odecoMats.20}
\input{odecoMats/odecoMats.21}
\input{odecoMats/odecoMats.22}
\input{odecoMats/odecoMats.23}
\input{odecoMats/odecoMats.24}
\input{odecoMats/odecoMats.25}
\input{odecoMats/odecoMats.26}
\input{odecoMats/odecoMats.27}}
\section{Defining Equations of the Octahedral Variety}
Below, we reproduce explicitly the $15$ symmetric $10 \times 10$ matrices $P_i$ in the equations
(9) cutting out the octahedral variety.
\input{octaMats/octaMats.1}
\input{octaMats/octaMats.2}
\input{octaMats/octaMats.3}
\input{octaMats/octaMats.4}
\input{octaMats/octaMats.5}
\input{octaMats/octaMats.6}
\input{octaMats/octaMats.7}
\input{octaMats/octaMats.8}
\input{octaMats/octaMats.9}
\input{octaMats/octaMats.10}
\input{octaMats/octaMats.11}
\input{octaMats/octaMats.12}
\input{octaMats/octaMats.13}
\input{octaMats/octaMats.14}
\input{octaMats/octaMats.15}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}\label{sec:intro}
\paragraph{Background.}
An \emph{algebraic circuit} over a field $\F$ for a multivariate polynomial $P(x_1,\ldots,x_N)$ is a directed acyclic graph (DAG)
whose internal vertices (called gates) are labeled as either $+$ (sum) or $\times$ (product), and leaves
(vertices of in-degree zero) are labeled by the variables $x_i$ or constants from $\F$. A special output gate (the root of the DAG) represents the polynomial
$P$. If the DAG happens to be a tree, such a resulting circuit is called an \emph{algebraic formula}.
The size of a circuit is the number of nodes in the DAG. We also consider the product-depth
of the circuit, which is the maximum number of product gates on a root-to-leaf path.
An algebraic circuit is therefore a computational model, which solves the computational
task of evaluating $P$ on a given input $(x_1,\ldots,x_N)$. The complexity of this model is measured
by the size of the circuit, which serves as an indicator of the
time complexity of computing the polynomial. The product-depth measures the degree to which this computation can be made parallel.
As an algebraic circuit is supposed to construct a formal polynomial $P$, it is a \emph{syntactic} model of
computation. This is unlike a Boolean circuit, which is only required to model specific
truth-table constraints. The problem of proving algebraic circuit lower
bounds is therefore widely considered to be easier than its Boolean counterpart. Indeed, we know that
proving VP $\neq$ VNP, the algebraic analog of the P vs.\ NP problem, would follow from the corresponding Boolean separation in the non-uniform setting (\cite{burg}). We refer the reader to \cite{Saptarishi-survey} for a much more elaborate survey of this topic.
\paragraph{The LST breakthrough.}
Much like in the Boolean setting, the problem of showing lower bounds for \emph{general} algebraic circuits (or even formulas) has remained elusive. However, some remarkable progress has been made very recently by Limaye, Srinivasan, and Tavenas (\cite{LST1}), who, in a spectacular breakthrough, showed the first super-polynomial lower bounds for algebraic circuits of \emph{all}
constant depths. Prior to their work, the best known lower bound (\cite{KayalST16}) even for product-depth 1 (or $\Sigma\Pi\Sigma$ circuits) was only almost-cubic. This is in stark contrast with the Boolean setting, in which we have known strong constant-depth lower bounds for many decades \cite{Ajtai83, FurstSS84,Yao85,Hastad86,Razborov1987LowerBO,Smolensky87}. Constant-depth circuits are critical to the study of algebraic complexity theory, as unlike the Boolean setting, strong enough bounds against them are known to yield VP $\neq$ VNP (\cite{AgrawalV08}). This helps put into perspective the importance of the work \cite{LST1}.
The crucial step in the proof of their result is to first establish super-polynomial lower bounds for a certain restricted class of (low-depth) algebraic circuits, namely \emph{set-multilinear} circuits which we now define along with other important circuit models. A polynomial is said to be homogeneous
if each monomial has the same total degree and \emph{multilinear} if every variable occurs
at most once in any monomial. Now, suppose that the underlying variable set is partitioned into $d$ sets
$X_1,\ldots, X_d$. Then the polynomial is said to be \emph{set-multilinear} with respect to this variable partition if each
monomial in $P$ has \emph{exactly} one variable from each set. We also define different models
of computation corresponding to these variants of polynomials classes. An algebraic formula (circuit) is set-multilinear with respect to a variable partition
$(X_1,\ldots, X_d)$ if each internal node in the formula (circuit) computes a set-multilinear polynomial. Multilinear/homogeneous circuits and formulas are defined analogously.
Several well-studied and interesting polynomials happen to be set-multilinear. For example, the Determinant
and the Permanent polynomials, the study of which is profoundly consequential to the field of algebraic complexity theory, are
set-multilinear (with respect to the column variables). Another well-studied polynomial, namely the Iterated
Matrix Multiplication polynomial, is also set-multilinear. The polynomial IMM$_{n,d}$ is defined on $N= dn^2$ variables, where the variables are partitioned into $d$ sets $X_1,\ldots, X_d$ of
size $n^2$, each of which is represented as an $n\times n$ matrix with distinct variable entries. The
polynomial IMM$_{n,d}$ is defined to be the polynomial that is the $(1, 1)$-th entry of the product
matrix $X_1 X_2\cdots X_d$. This polynomial has a simple divide-and-conquer-based set-multilinear formula of size
$n^{O(\log d)}$, and more generally for every $\Delta\leq \log d$, a set-multilinear formula of product-depth $\Delta$ and size $n^{O(\Delta d^{1/\Delta})}$, and circuit\footnote{In this paper, when speaking of constant-depth models of computation at a high level, we shall often use the terms circuit and formula interchangeably as a product-depth $\Delta$ circuit of size $s$ can be simulated by a product-depth $\Delta$ formula of size $s^{2\Delta}$.} of size $n^{O( d^{1/\Delta})}$. Even without the set-multilinearity constraint, no significantly better upper
bound is known. It is reasonable to conjecture that
this simple upper bound is tight up to the constant in the exponent.
The lower bounds in \cite{LST1}
for general constant-depth algebraic circuits are shown in the following sequence of steps:
\begin{enumerate}
\item It is shown that general low-depth algebraic circuits can be transformed to set-multilinear algebraic circuits of low depth, and without much of a blow-up in size (as long as the degree is small). More precisely, any product-depth $\Delta$ circuit of size $s$ computing a polynomial that is set-multilinear with respect to the partition $(X_1,\ldots,X_d)$ where each $|X_i|\leq n$, can be converted to a set-multilinear circuit\footnote{There is also an intermediate `homogenization' step which we skip describing here for the sake of brevity.} of product-depth $2\Delta$ and size $\poly(s)\cdot d^{O(d)}$. Such a `set-multilinearization' of general formulas of small degree was already shown before in \cite{Raz-Tensor} (which we describe soon in more detail); however, the main contribution of \cite{LST1} here is to prove this \emph{depth-preserving} version of it.
\item Strong lower bounds are then established for low-depth set-multilinear circuits (of small enough degree). More precisely, any set-multilinear circuit $C$ computing IMM$_{n,d}$ (where $d = O(\log n)$) of product-depth $\Delta$ must have size at least $n^{d^{\exp(-O(\Delta))}}$. This combined with the first step yields the desired lower bound for general constant-depth circuits.
\end{enumerate}
Given Raz's set-multilinearization of formulas of small degree that we alluded to, and this description of the set-multilinear formula lower bounds from \cite{LST1} when $d = O(\log n)$, it is evident that the `small degree' regime is inherently interesting to study, as it provides an avenue, via `hardness escalation', for tackling one of the grand challenges of algebraic complexity theory, namely proving super-polynomial lower bounds for general algebraic formulas. However, we shall now see that even the large degree regime can be equally consequential in this regard.
\paragraph{The large degree regime.}
Consider a polynomial $P$ that is set-multilinear with respect to the variable partition $(X_1,\ldots,X_d)$ where each $|X_i|\leq n$. The main focus of this paper is to study set-multilinear circuit complexity in the regime where $d$ and $n$ are \emph{polynomially} related (as opposed to say, the assumption $d = O(\log n)$ described above). We now provide some background and motivation for studying this regime.
In follow-up work \cite{LST2}, the same authors showed the first super-polynomial
lower bound against unbounded-depth set-multilinear formulas computing IMM$_{n,n}$\footnote{Note that for IMM$_{n,n}$, each $X_i$ has size $n^2$, not $n$. But the important thing for us here is that the degree, $n$, is polynomially related to this parameter.}. As is astutely described in \cite{LST2}, studying the set-multilinear formula complexity of IMM is extremely interesting and consequential even in the setting $d=n$ because of the following reasons:
\begin{itemize}
\item IMM$_{n,n}$ is a \emph{self-reducible} polynomial i.e., it is possible to construct formulas for IMM$_{n,n}$ by recursively using formulas for IMM$_{n,d}$ (for any
$d<n$). In particular, if we had formulas of size $n^{o(\log d)}$ for IMM$_{n,d}$ (for some $d<n$), this would imply formulas
of size $n^{o(\log n)}$ for IMM$_{n,n}$. In other words, an optimal $n^{\Omega(\log n)}$
lower bound for IMM$_{n,n}$ implies $n^{\omega_d(1)}$ lower bounds for IMM$_{n,d}$ for any $d< n$.
\item Raz in \cite{Raz-Tensor} showed that if an $N$-variate set-multilinear
polynomial of degree $d$ has an algebraic formula of size $s$, then it also has a set-multilinear
formula of size $\poly(s)\cdot (\log s)^{d}$. In particular, for a set-multilinear polynomial $P$ of degree
$d = O(\log N/ \log \log N)$, it follows that $P$ has a formula of size $\poly(N)$ if and only if $P$ has a
set-multilinear formula of size $\poly(N)$. Thus, having $N^{\omega_d(1)}$ set-multilinear
formula size lower bounds for such a low degree would imply super-polynomial lower bounds for general formulas.
\end{itemize}
In particular, proving the optimal $n^{\Omega(\log n)}$ set-multilinear formula size lower bound for IMM$_{n,n}$ would have dramatic consequences. To this end, the authors in \cite{LST2} are able to show a weaker bound of the form $(\log n)^{\Omega(\log n)}$ instead. Even though it is the case that `simply' improving the base of this exponent from $\log n$ to $n$ yields general formula lower bounds, it seems that we are still far from achieving it. Indeed, as is observed in \cite{LST2}, we do not even have the optimal $n^{\Omega(\sqrt{n})}$ lower bound\footnote{This is known for set-multilinear (and even multilinear) $\Sigma\Pi\Sigma\Pi$ circuits (see \cite{FournierLMS15,Kayal0T18}), but those are only special cases of general product-depth $2$ circuits, which are $\Sigma\Pi\Sigma\Pi\Sigma$.} when product-depth $\Delta = 2$.
Moreover, we do not know how to obtain a lower bound of the form $n^{\Omega(\sqrt{n})}$ for product-depth $2$ set-multilinear circuits for \emph{any} explicit polynomial of degree $n$ and in $\poly(n)$ variables. For product-depths $\Delta\leq \log n$, \cite{LST2} shows a set-multilinear formula size lower bound of $(\log n)^{\Omega(\Delta n^{1/\Delta})}$ for IMM$_{n,n}$, which is in fact the best set-multilinear lower bound we know for any polynomial of degree $n$ and in $\poly(n)$ variables, and for any $\Delta \geq 2$. As far as we know, the previous best lower bound of $\exp(\Omega(n^{1/\Delta}))$, also for IMM$_{n,n}$, followed from the work of Nisan and Wigderson (\cite{NisanW97}). It is therefore an interesting challenge to improve the base of this exponent from $\log n$ to $n$ i.e., establish a near-optimal $n^{\Omega( n^{1/\Delta})}$ lower bound in the constant (or low) depth setting.
\paragraph{Our Results.}
In this paper, we obtain these ``optimal'' lower bounds, albeit not for IMM$_{n,n}$, but rather for another explicit polynomial in VNP. We show the following:
\begin{theorem}\label{thm-intro:main-bd-depth}
Let $N$ be a growing parameter and $\Delta$ be an integer such that $1\leq \Delta \leq \log N/\log \log N$. There is an explicit polynomial $P_N$ defined over $N = n^2$ variables with degree $d = n$ that is set-multilinear with respect to the variable partition $X = (X_1,\ldots, X_d)$ where each $|X_i| = n$ and such that any set-multilinear
formula of product-depth $\Delta$ computing $P_N(X)$ must have size at least $N^{\Omega(d^{1/\Delta}/\Delta)}$.
\end{theorem}
Notice that obtaining this precise bound is interesting also when viewed through the lens of \emph{depth reduction}. Tavenas (\cite{Tavenas15}), building on several prior works (\cite{AgrawalV08, Koiran12}), showed that any algebraic circuit of $\poly(N)$ size computing a homogeneous $N$-variate polynomial of degree $d$ can be converted to a homogeneous circuit of product-depth\footnote{The result is stated in \cite{Tavenas15} for $\Sigma\Pi\Sigma\Pi$ circuits but the proof can be appropriately modified for larger product-depths.} $\Delta$ of size $(Nd)^{O(d^{1/\Delta})}$. It easily follows from the proof that this depth
reduction preserves syntactic restrictions. That is, if we start with a syntactically set-multilinear circuit, the resulting product-depth $\Delta$ circuit is also syntactically set-multilinear. Therefore, the precise bound in Theorem \ref{thm-intro:main-bd-depth} is \emph{sharp} in the sense that any asymptotic improvement in its exponent would imply super-polynomial set-multilinear circuit lower bounds, which would be quite a strong and interesting consequence.
Another very intriguing direction is to consider the problem of \emph{improved} depth reduction for set-multilinear circuits. If an asymptotic improvement in the exponent on the bound for general circuits from \cite{Tavenas15} could be shown to hold for set-multilinear circuits in the setting of Theorem \ref{thm-intro:main-bd-depth} (i.e., when $N= d^2$), this would again imply super-polynomial set-multilinear circuit lower bounds. There is some evidence towards this possibility, as \cite{KOS19} shows such an improvement in a certain regime of parameters for multilinear circuits (see the discussion in Section \ref{sec:open} for more details).
\begin{remark}\label{rem:true-bd}
The lower bound in Theorem \ref{thm-intro:main-bd-depth} is actually $d^{\Omega(d^{1/\Delta}/\Delta)}$, where $d$ is the degree of the underlying polynomial, and it holds as long as degree $d\leq n$ (the details are deferred to the proof of Theorem \ref{thm:main-bd-depth} in Section \ref{sec:main}). Observe that for constant $\Delta$ this bound already nearly matches the bound $(\log n)^{\Omega(\Delta d^{1/\Delta})}$ in \cite{LST2} (which was shown for IMM$_{n,d}$) when $d = (\log n)^{\Omega(1)}$ and exceeds it as soon as $d$ becomes super-polylogarithmic in $n$. Moreover for $d < \log n/ \log \log n$, both the bounds are trivial even for $\Delta =1$.
\end{remark}
We also remark that in several lower bounds for algebraic circuit classes in the past, the lower bound was initially shown for a polynomial in VNP and then with additional effort, was shown to also hold for a polynomial in VP (in particular, the IMM polynomial). A strong candidate for the choice of this polynomial family in VNP has been the Nisan-Wigderson (NW) design-based (\cite{NisanW94-hardness}) family of polynomials. For instance, \cite{KayalSS14} showed a lower bound of $n^{\Omega(\sqrt{n})}$ for the top fan-in of a $\Sigma\Pi^{[O(\sqrt{n})]}\Sigma\Pi^{[\sqrt{n}]}$ circuit computing the NW polynomial, which was subsequently shown for IMM by \cite{FournierLMS15}. Similarly, \cite{KayalLSS17} showed an $n^{\Omega(\sqrt{d})}$ size lower bound for homogeneous depth-4 algebraic
formulas for the NW polynomial, which was then shown for IMM later in \cite{KumarS17}.
Much like these examples, our hard polynomial family in Theorem \ref{thm-intro:main-bd-depth} is also indeed the NW polynomial family, as we shall see in Section \ref{sec:main}. Our motivation to study constant-depth set-multilinear formula complexity was to prove the optimal lower bounds for the IMM polynomial. Although we are presently able to show it only for the NW polynomial instead of IMM, we are hopeful that this is an important step in its direction.
In addition to our lower bound for bounded-depth set-multilinear formulas, we observe that the same proof technique also implies a lower bound of the form $n^{\Omega(\log n)}$ for unbounded-depth set-multilinear formulas. \cite{LST2} showed a weaker bound of the form $(\log n)^{\Omega(\log n)}$, albeit for IMM$_{n,n}$.
\begin{theorem}\label{thm-intro:main-gen-depth}
For a given integer $N$, there is an explicit polynomial $P_N$ defined over $N = n^2$ variables with degree $d = n$ that is set-multilinear with respect to the variable partition $X = (X_1,\ldots, X_d)$ where each $|X_i| = n$ such that any set-multilinear
formula computing $P_N(X)$ must have size at least $N^{\Omega(\log N)}$.
\end{theorem}
The hard polynomial in Theorem \ref{thm-intro:main-gen-depth} is also the NW polynomial, which if `improved' to IMM$_{n,n}$, then as discussed, would yield super-polynomial general formula lower bounds. However, we note that in this case, our result is in some sense subsumed by the result of Raz (\cite{Raz09}) who showed an $n^{\Omega(\log n)}$ lower bound for the $n\times n$ permanent (or determinant) polynomial for unbounded-depth multilinear formulas.
\paragraph{Other Related Work.}
In the bounded-depth setting, other than the works \cite{LST1,LST2,NisanW97} already mentioned, there have been several lower bounds for the class of low-depth \emph{multilinear} circuits (\cite{RazY09,ChillaraL019,ChillaraEL018,KayalNS20}). In the unbounded-depth setting, apart from the works \cite{LST2,Raz09} already mentioned for set-multilinear formulas, there have also been several strong lower bounds of the form $n^{\Omega(\log n)}$ against \emph{multilinear} formulas (\cite{DvirMPY12,HrubesY11,Kayal0T18}). However, in both settings of depth, several of these works are not even applicable to the set-multilinear setting as the corresponding hard polynomial does not happen to be set-multilinear.
\paragraph{Proof overview.}
Our overall proof techniques are similar to that of many known lower bounds. We work with a measure that we show to be small for all polynomials computed by small enough set-multilinear
formulas (appropriately so in the bounded and unbounded-depth settings) and large for the NW polynomial. These \emph{partial derivative measures} were introduced by
Nisan and Wigderson in \cite{NisanW97}, who used them to prove the constant-depth set-multilinear formula lower bounds we discussed earlier. \cite{LST1,LST2} use a particular variant of this measure and our measure is in turn inspired from these works.
Given a variable partition $(X_1,\ldots, X_d)$, we label each set of variables $X_i$ as `positive' or `negative' uniformly at random. Let $\calP$ and $\calN$ denote the set of positive and negative indices respectively, and let $\calM^\calP$ and $\calM^\calN$ denote the sets of all set-multilinear monomials over $\calP$ and $\calN$ respectively. For a polynomial that is set-multilinear over the given variable partition $(X_1,\ldots, X_d)$, our measure then is simply the rank of the `partial derivative matrix' whose rows are indexed by the elements
of $\calM^{\calP}$ and columns indexed by the elements of $\calM^{\calN}$, and the entry of this matrix corresponding to a
row $m_1$ and a column $m_2$ is the coefficient of the monomial $m_1\cdot m_2$ in the given polynomial.
In contrast, the measure used in \cite{LST1} is deterministic and moreover, it is \emph{asymmetric} with respect to the positive and negative variable sets, in the sense that while keeping the positive variable sets as is, it first reduces the size of the negative variable sets by arbitrarily setting a few of these variables to field constants, and then works with the resulting polynomial. On the other hand, \cite{LST2} does use a randomized measure, but one that is still asymmetric, relying on randomly setting a few of the variables inside each set to constants. The way they control the discrepancy between the sizes of the positive and negative variable sets (which is indeed crucial for obtaining the claimed lower bounds) is by imposing a Martingale-like distribution. The lower bound of \cite{NisanW97} also uses random restrictions to enable them to effectively ``simplify" the circuit and upper bound its complexity. Our symmetric, randomized measure avoids random restrictions altogether, and though it is inspired by the measure and the techniques from~\cite{LST1}, it is also reminiscent of the measures used in \cite{Raz09,RazY09} to prove multilinear formula lower bounds.
\section{Preliminaries}
We begin by defining the hard polynomial of our main result (Theorem \ref{thm-intro:main-bd-depth}). As is done in previous lower bounds using the NW polynomials (for example, see \cite{KayalSS14}), we will identify the set of the first $n$ integers as elements of $\F_n$ via an arbitrary correspondence $\phi : [n] \rightarrow \F_n$. If $f(z) \in \F_n[z]$ is a univariate polynomial, then we abuse notation to let $f(i)$ denote the evaluation
of $f$ at the $i$-th field element via the above correspondence i.e., $f(i)\coloneqq \phi^{-1}
(f(\phi(i)))$. To simplify the exposition, in the following definition, we will omit the correspondence $\phi$ and identify a variable
$x_{i,j}$ with the point $(\phi(i), \phi(j)) \in \F_n \times \F_n$.
\begin{definition}[Nisan-Wigderson Polynomials]\label{def:NW}
For a prime power $n$,
let $\F_n$ be a field of size $n$. For an integer $d\leq n$ and the set $X$ of $nd$ variables
$\{x_{i,j} : i\in[n], j \in [d]\}$, we define the degree $d$
homogeneous polynomial $NW_{n,d}$ over any field as
\[
NW_{n,d}(X) = \sum_{\substack{f(z)\in\F_n[z]
\\ \deg(f)<d/2}} \prod_{j\in[d]} x_{f(j),j}.
\]
\end{definition}
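For concreteness, here is a small illustrative sketch (ours). It takes $n$ \emph{prime}, so that arithmetic in $\F_n$ is just arithmetic modulo $n$ (sidestepping general prime-power fields), and enumerates the monomials of $NW_{n,d}$ as tuples $(f(1),\ldots,f(d))$ of row indices.
\begin{verbatim}
import itertools

def nw_monomials(n, d):
    # One monomial per univariate f over Z/nZ with deg(f) < d/2 (n prime).
    # A monomial is recorded as the tuple (f(1), ..., f(d)): entry j is
    # the row index i such that x_{i,j} appears in position j.
    k = (d + 1) // 2  # number of coefficients of f
    for coeffs in itertools.product(range(n), repeat=k):
        yield tuple(sum(c * pow(j, e, n) for e, c in enumerate(coeffs)) % n
                    for j in range(1, d + 1))

mons = list(nw_monomials(5, 4))
assert len(mons) == len(set(mons)) == 5 ** 2  # n^{d/2} distinct monomials
\end{verbatim}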
Next, we turn to the measure that we shall use to prove Theorems \ref{thm-intro:main-bd-depth} and \ref{thm-intro:main-gen-depth}. For the purpose of setting it up, we follow the notation of \cite{LST1} in the following definition. However, we do remark that we do not need it in its full generality as we will eventually work with a simpler, \emph{symmetric} notion that was alluded to in Section \ref{sec:intro}. Nevertheless, employing the same notation has the advantage that the reader is quite possibly already familiar with it in the context of proving set-multilinear circuit lower bounds.
\begin{definition}[Relative Rank Measure of \cite{LST1,LST2}]
Let $f$ be a polynomial that is set-multilinear with respect to the variable
partition $(X_1, X_2,\ldots, X_d)$ where each set is of size $n$. Let $w = (w_1, w_2,\ldots, w_d)$ be a tuple (or word) of non-zero real numbers such that $2^{|w_i|} \in [n]$ for all $i \in [d]$. For each $i \in [d]$, let $X_i(w)$ be the
variable set obtained by removing arbitrary variables from the set $X_i$ such that $|X_i(w)| = 2^{|w_i|}$, and let $\ol{X}(w)$ denote the tuple of sets of variables $(X_1(w),\ldots,X_d(w))$.
Corresponding to a word $w$, define $\calP_w \coloneqq \{i\ |\ w_i > 0\}$ and $\calN_w \coloneqq \{i\ |\ w_i < 0\}$. Let $\calM^{\calP}_{w}$ be the
set of all set-multilinear monomials over a subset of the variable sets $X_1(w), X_2(w),\ldots, X_d(w)$
indexed by $\calP_w$, and similarly let $\calM^{\calN}_{w}$ be the set of all set-multilinear monomials over these variable
sets indexed by $\calN_w$.
Define the ‘partial derivative matrix’ $\calM_w(f)$ whose rows are indexed by the elements
of $\calM^{\calP}_w$ and columns indexed by the elements of $\calM^{\calN}_w$ as follows: the entry of this matrix corresponding to a
row $m_1$ and a column $m_2$ is the coefficient of the monomial $m_1\cdot m_2$ in $f$. We define
\[
\rk_w(f) \coloneqq \frac{\mathrm{rank}(\calM_w(f))}{\sqrt{|\calM^{\calP}_w|\cdot |\calM^{\calN}_w|}} = \frac{\mathrm{rank}(\calM_w(f))}{2^{\frac{1}{2}\sum_{i\in[d]}|w_i|}}.
\]
\end{definition}
\begin{definition}
For any tuple $w = (w_1,\ldots, w_t)$ and a subset $S \subseteq [t]$, we shall refer to the
sum $\sum_{i\in S} w_i$ by $w_S$. And by $w|_S$, we will refer to the tuple obtained by considering only the
elements of $w$ that are indexed by $S$. We denote by $\Fsm[\calT]$ the set of set-multilinear polynomials over the tuple
of sets of variables $\calT$.
\end{definition}
The following is a simple result that establishes various useful properties of the relative rank measure.
\begin{claim}[\cite{LST1}]\label{clm:rk-props}
\begin{enumerate}
\item(Imbalance) Say $f \in \Fsm[\ol{X}(w)]$. Then, $\rk_w(f)\leq 2^{-|w_{[d]}|/2}$.
\item(Sub-additivity) If $f,g \in \Fsm[\ol{X}(w)]$, then $\rk_w(f+g)\leq \rk_w(f)+\rk_w(g)$.
\item(Multiplicativity) Say $f = f_1 f_2\cdots f_t $ and assume that for each $i\in [t]$, $f_i\in \Fsm[\ol{X}(w|_{S_i})]$, where $(S_1, \ldots, S_t)$ is a partition of $[d]$. Then
\[
\rk_w(f) = \prod_{i\in[t]} \rk_{w|_{S_i}}(f_i).
\]
\end{enumerate}
\end{claim}
\section{Main Result}\label{sec:main}
We are now ready to prove our main result. We start by showing that the \emph{symmetric} relative rank is large for the NW polynomial.
\begin{claim}\label{clm:NW-full-rk}
For an integer $n = 2^k$ and $d\leq n$, let $w \in \{k,-k\}^d$ with $w_{[d]} = 0$. Then $\rk_w(NW_{n,d}) = 1$ i.e., $\calM_w(NW_{n,d})$ has full rank.
\end{claim}
\begin{proof}
Fix $n = 2^k$ and $d$, so that we can also write $NW$ for $NW_{n,d}$, and let $n' = d/2$. The condition on $w$ implies that $|\calP_w| = |\calN_w| = n'$. Observe that $\calM_w(NW)$ is a square matrix of dimension $|\calM^{\calP}_{w}| = |\calM^{\calN}_{w}| = n^{n'}$. Consider a row of $\calM_w(NW)$ indexed by a monomial $m_1 = x_{i_1,j_1}\cdots x_{i_{n'},j_{n'}}\in \calM^{\calP}_{w}$. The monomial $m_1$ can be thought of as a map from $S = \{j_1,\ldots,j_{n'}\}$ to $\F_n$ which sends $j_\ell$ to $i_\ell$ for each $\ell \in [n']$. Next, by interpolating the pairs $(j_1,i_1),\ldots, (j_{n'},i_{n'})$, we know that there exists a unique polynomial $f(z)\in \F_n[z]$ of degree $<n'$ for which $f(j_\ell) = i_\ell$ for each $\ell\in [n']$. As a consequence, there is a unique `extension' of the monomial $x_{i_1,j_1}\cdots x_{i_{n'},j_{n'}}$ that appears as a term in $NW$, which is precisely $m_1\cdot \prod_{j\in \calN_w}x_{f(j),j}$. Therefore,
all but one of the entries in the row corresponding to $m_1$ must be zero, and the remaining entry must be $1$. Applying the same argument to the columns of $\calM_w(NW)$, we deduce that $\calM_w(NW)$ is a permutation matrix, and so has full rank.
\end{proof}
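The proof above is easy to confirm by brute force on a small instance. The following sketch (ours; it takes $n$ prime rather than a power of two, which does not affect the permutation structure) builds $\calM_w(NW_{n,d})$ for $n = 5$, $d = 4$ with $\calP_w = \{1,2\}$ and $\calN_w = \{3,4\}$, and checks that every row and every column contains exactly one nonzero entry.
\begin{verbatim}
import itertools
import numpy as np

n, d = 5, 4  # n prime, d even, d <= n
half = d // 2
P, N = range(half), range(half, d)  # positive / negative positions

M = np.zeros((n ** half, n ** half), dtype=int)
idx = {m: r for r, m in enumerate(itertools.product(range(n), repeat=half))}

for coeffs in itertools.product(range(n), repeat=half):  # all f, deg < d/2
    f = [sum(c * pow(j + 1, e, n) for e, c in enumerate(coeffs)) % n
         for j in range(d)]
    M[idx[tuple(f[j] for j in P)], idx[tuple(f[j] for j in N)]] += 1

# M_w(NW) is a permutation matrix: exactly one 1 in each row and column.
assert (M.sum(axis=0) == 1).all() and (M.sum(axis=1) == 1).all()
\end{verbatim}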
The following is a more precise and general version of Theorem \ref{thm-intro:main-bd-depth} that is stated in Section \ref{sec:intro}. We also incorporate Remark~\ref{rem:true-bd} here and show our lower bound for any degree $d\leq n$. Theorem \ref{thm-intro:main-bd-depth} follows from the special case $d = n$.
\begin{theorem}\label{thm:main-bd-depth}
For an integer $n = 2^k$,
let $\F_n$ be a field of size $n$. Let $d,\Delta$ be integers such that $d\leq n$ is large enough\footnote{We only need $d$ to be larger than some absolute constant.} and $\Delta \leq \log d/ \log \log d$. For $j\in [d]$, let $X_j$ denote the set of $n$ variables
$\{x_{i,j} : i \in [n]\}$ and let $X$ be the tuple $(X_1,\ldots, X_d)$. Then, any set-multilinear
formula family of product-depth $\Delta$ computing $NW_{n,d}(X)$ must have size at least $d^{\Omega(d^{1/\Delta}/\Delta)}$.
\end{theorem}
\begin{proof}
We show that the symmetric relative rank of low-depth set-multilinear formulas is small with high probability in the lemma below, and then combine it with Claim \ref{clm:NW-full-rk} above to prove the desired bound.
\begin{lem}\label{lem:rk-bd-depth}
Let $C$ be a set-multilinear formula of product-depth $1\leq \Delta \leq \log d/ \log \log d$ of size at most $s$ which computes a polynomial that is set-multilinear with respect to the partition $(X_1,\ldots, X_d)$ where each $|X_i| = n$. Let $w \in \{k,-k\}^d$ be chosen uniformly at random. Then, we have
\[
\rk_w(C)\leq s \cdot 2^{-\frac{kd^{1/\Delta}}{20}}
\]
with probability at least $1 - s\cdot d^{-\frac{d^{1/\Delta}}{12\Delta}}$.
\end{lem}
\begin{proof}
We prove the statement by induction on $\Delta$.
If $\Delta = 1$, then $C = C_1 + \cdots +C_t$ where each $C_i$ is a product of linear forms. So, for all $i\in [t]$, by Claim \ref{clm:rk-props},
\[
\rk_w(C_i) \leq \prod_{j=1}^d 2^{-\frac{1}{2}|w_j|} = 2^{-\frac{kd}{2}}
\]
where in the last step, we used the observation that regardless of the choice of $w$, $|w_j| = k$ for all $j\in [d]$. Hence, by the sub-additivity of $\rk_w$, with probability $1$, we have
\[\rk_w(C) \leq s\cdot 2^{-\frac{kd}{2}}\leq s\cdot 2^{-\frac{kd}{20}}.
\]
Next, we assume the statement is true for all formulas of product-depth $\leq \Delta$. Let $C$ be a formula of
product-depth $\Delta + 1$.
So, $C$ is of the form $C = C_1 + \cdots + C_t$. Following an overall proof strategy similar to the one in \cite{LST1}, we say that a sub-formula $C_i$ of size $s_i$ is of type 1 if one of its factors has
degree at least $T_\Delta = d^{\frac{\Delta}{\Delta+1}}$, otherwise we say it is of type 2.
Suppose $C_i = C_{i,1}\cdot \cdots \cdot C_{i,t_i}$ is of type 1 with, say, $C_{i,1}$ having degree at least $T_\Delta$. Let $w^{i,1}$ be the corresponding word i.e., $w^{i,1} = w|_{S_1}$ if $C_{i,1}$ is set-multilinear with respect to $S_1\subsetneq [d]$. If it has size $s_{i,1}$, then since it has product-depth at most $\Delta$, it follows by induction that
\[
\rk_w(C_i) \leq \rk_{w^{i,1}}(C_{i,1}) \leq s_{i,1}\cdot 2^{-\frac{kT_{\Delta}^{1/\Delta}}{20}} \leq s_{i}\cdot 2^{-\frac{kd^{1/(\Delta+1)}}{20}}
\]
with probability at least
\[
1- s_{i,1}\cdot T_\Delta^{-\frac{T_\Delta^{1/\Delta}}{12\Delta}} \geq 1- s_{i}\cdot d^{-\frac{d^{1/(\Delta+1)}}{12\Delta }\cdot \frac{\Delta}{\Delta + 1}} = 1- s_{i}\cdot d^{-\frac{d^{1/(\Delta+1)}}{12(\Delta+1)}}.
\]
Now suppose that $C_i= C_{i,1}\cdot \cdots \cdot C_{i,t_i}$ is of type 2 i.e., each factor $C_{i,j}$ has degree $<T_\Delta$. Note that this forces $t_i> d/T_\Delta = d^{ \frac{1}{\Delta + 1}}$. As the formula is set-multilinear, $(S_1, \ldots, S_{t_i})$ form a partition of $[d]$
where each $C_{i,j}$ is set-multilinear with respect to $(X_\ell)_{\ell\in S_j}$ and $C_i$ is set-multilinear with
respect to $(X_\ell)_{\ell\in S}$. Let $w^{i,1},\ldots, w^{i,t_i}$ be the corresponding decomposition, whose respective sums are denoted simply by $w_{S_1},\ldots,w_{S_{t_i}}$.
From the properties of $\rk_w$ (Claim~\ref{clm:rk-props}), we have
\[
\rk_w(C_i) = \prod_{j=1}^{t_i} \rk_{w^{i,j}}(C_{i,j}) \leq \prod_{j=1}^{t_i} 2^{-\frac12 |w_{S_j}|} = 2^{-\frac12\sum_{j=1}^{t_i}|w_{S_j}|},
\]
from which we observe that the task of upper bounding $\rk_w(C_i)$ can be reduced to the task of lower bounding the sum $\sum_{j=1}^{t_i}|w_{S_j}|$, which is established in the following claim. For the sake of convenience, the choice of the alphabet for $w$ below is scaled down to $\{-1,1\}$.
\begin{claim}\label{clm:sum-lb}
For large enough $d$, suppose $(S_1, \ldots, S_{\ell})$ is a partition of $[d]$ such that each $|S_j| < T_\Delta = d^{\frac{\Delta}{\Delta+1}}$. Then, we have
\[
\Pr_{w\sim \{-1,1\}^d}\left[\sum_{j=1}^{\ell}|w_{S_j}| < \frac{d^{1/(\Delta+1)}}{10}\right] \leq d^{-\frac{d^{1/(\Delta+1)}}{12}}.
\]
\end{claim}
\begin{proof}
We first show that without loss of generality, we may assume that each $S_j$ has size `roughly' $T_\Delta$. To see this, we apply the following \emph{clubbing} procedure to the sets in the partition $(S_1, \ldots, S_{\ell})$:
\begin{itemize}
\item Start with the given partition $(S_1, \ldots, S_{\ell})$. At each step in the procedure, we shall `club' two of the sets in the partition according to the following rule.
\item If there are two distinct sets $S'$ and $S''$ in the current partition each of size $< T_\Delta/2$, we remove both of them and add their union $S'\cup S''$ to the partition.
\item If the rule above is no longer applicable, then we have at most one set in the current partition of size $<T_\Delta/2$. If there is none, then we halt the procedure here. Otherwise, we union this set with any one of the other sets and then halt.
\end{itemize}
After the clubbing procedure, we are left with a partition $(S_1',\ldots,S_{\ell'}')$ of $[d]$ such that $\frac{T_\Delta}{2}\leq |S_j'|\leq \frac{3T_\Delta}{2}$ for each $j\in [\ell']$, also implying that $\frac{2d^{1/(\Delta+1)}}{3}\leq \ell'\leq 2d^{1/(\Delta+1)}$. Through a repeated use of the triangle inequality, we see that $\sum_{j=1}^{\ell'}|w_{S'_j}|\leq \sum_{j=1}^{\ell}|w_{S_j}|$, so the event that the original sum is small is contained in the event that the clubbed sum is small. Hence, it suffices to prove the statement of the claim under the assumption that $\frac{T_\Delta}{2}\leq |S_j|\leq \frac{3T_\Delta}{2}$ for each $j\in [\ell]$ (we henceforth drop the primed notation).
Now, in the event that the sum $\sum_{j=1}^{\ell}|w_{S_j}|$ is at most $\frac{d^{1/(\Delta+1)}}{10}$, since $\ell\geq \frac{2d^{1/(\Delta+1)}}{3}$, it follows that for at least half of the sets $S_j$, $w_{S_j} = 0$ (as $\frac{2}{3} - \frac{1}{10} = \frac{17}{30}>\frac12$). By Stirling's approximation, it follows that for a fixed $j$, the probability
\[
\Pr_{w\sim \{-1,1\}^d}\left[w_{S_j} = 0\right]\leq \sqrt{\frac{2}{\pi |S_j|}}\leq \sqrt{\frac{4}{\pi T_\Delta}} = \sqrt{\frac{4}{\pi}}\cdot \frac{1}{d^{\frac{\Delta}{2(\Delta+1)}}}< \frac{2}{d^{1/3}},
\]
where in the final step, we used $\Delta\geq 2$. Therefore, the probability that this happens for $\ell/2$ distinct $j$ is bounded by
\[\binom{\ell}{\ell/2} \cdot \left(\frac{2}{d^{1/3}}\right)^\frac{\ell}{2}< 2^\ell\cdot \left(\frac{2}{d^{1/3}}\right)^\frac{\ell}{2} = \left(\frac{2\sqrt{2}}{d^{1/6}}\right)^\ell \leq \left(\frac{2}{d^{1/9}}\right)^{{d^{1/(\Delta+1)}}}< d^{-\frac{d^{1/(\Delta+1)}}{12}},\]
where we used the bound $\ell\geq \frac{2d^{1/(\Delta+1)}}{3}$.
\end{proof}
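Both the clubbing procedure and the anti-concentration estimate are easy to sanity-check empirically. The Monte-Carlo sketch below (ours; all names are ad hoc) clubs a random partition exactly as described above and estimates the probability that $\sum_{j}|w_{S_j}|$ falls below $d^{1/(\Delta+1)}/10$; already for moderate $d$ the empirical failure rate should be zero or very close to it.
\begin{verbatim}
import random

def club(parts, T):
    # Merge blocks until at most one has size < T/2, as in the proof.
    small = [p for p in parts if len(p) < T / 2]
    big = [p for p in parts if len(p) >= T / 2]
    while len(small) >= 2:
        merged = small.pop() + small.pop()
        (small if len(merged) < T / 2 else big).append(merged)
    if small:            # one leftover small block: absorb it
        big[0] = big[0] + small.pop()
    return big

def trial(d, Delta, rng):
    T = d ** (Delta / (Delta + 1))
    xs = list(range(d))
    rng.shuffle(xs)
    parts, i = [], 0
    while i < d:         # random partition into blocks of size < T
        step = rng.randint(1, max(1, int(T) - 1))
        parts.append(xs[i:i + step])
        i += step
    parts = club(parts, T)
    w = [rng.choice((-1, 1)) for _ in range(d)]
    return sum(abs(sum(w[x] for x in p)) for p in parts)

rng = random.Random(0)
d, Delta = 4096, 2
thresh = d ** (1 / (Delta + 1)) / 10
bad = sum(trial(d, Delta, rng) < thresh for _ in range(1000))
print(bad, "/ 1000 trials below the threshold")
\end{verbatim}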
The claim above and the preceding calculation immediately imply that for a sub-formula $C_i$ of type 2,
\[
\rk_w(C_i) \leq s_{i}\cdot 2^{-\frac{kd^{1/(\Delta+1)}}{20}}
\]
with probability at least $1-d^{-\frac{d^{1/(\Delta+1)}}{12}}\geq 1 - s_i\cdot d^{-\frac{d^{1/(\Delta+1)}}{12(\Delta+1)}}$.
Next, by a union bound over $i\in [t]$ and the sub-additivity property of $\rk_w$, it follows that
\[
\rk_w(C) \leq \rk_w(C_1) +\cdots + \rk_w(C_t) \leq s_1 \cdot 2^{-\frac{kd^{1/(\Delta+1)}}{20}} + \cdots + s_t \cdot 2^{-\frac{kd^{1/(\Delta+1)}}{20}} = s \cdot 2^{-\frac{kd^{1/(\Delta+1)}}{20}}
\]
with probability at least $1 - s\cdot d^{-\frac{d^{1/(\Delta+1)}}{12(\Delta+1)}}$, which concludes the proof of the lemma.
\end{proof}
Returning to the proof of the theorem, let $C$ be a set-multilinear formula of product depth $\Delta$ of size $s$ computing $NW_{n,d}(X)$. Suppose $s < d^{\frac{d^{1/\Delta}}{24\Delta}}$. Then, by Lemma \ref{lem:rk-bd-depth}, with probability at least $1 - d^{-\frac{d^{1/\Delta}}{24\Delta}}$,
\[
\rk_w(C)\leq s \cdot 2^{-\frac{kd^{1/\Delta}}{20}}.
\]
But now, we can condition on the event that $w_{[d]} = 0$ (which occurs with probability $\Theta(\frac{1}{\sqrt{d}})$) to establish the existence of a word $w\in \{-k,k\}^d$ with $w_{[d]} = 0$ such that $w$ satisfies $
\rk_w(C)\leq s \cdot 2^{-\frac{kd^{1/\Delta}}{20}}$. This is because of the asymptotic bound $\frac{1}{\sqrt{d}} \gg d^{-\frac{d^{1/\Delta}}{24\Delta}}$, which follows from the given constraints on the parameters $d,\Delta$. Therefore, by Claim \ref{clm:NW-full-rk},
\[
s\geq 2^{\frac{kd^{1/\Delta}}{20}}\cdot \rk_w(C) = n^{\frac{d^{1/\Delta}}{20}}
\]
which contradicts the assumption that $s < d^{\frac{d^{1/\Delta}}{24\Delta}}$. Thus, we conclude that $s\geq d^{\frac{d^{1/\Delta}}{24\Delta}} = d^{\Omega(d^{1/\Delta}/\Delta)}$.
\end{proof}
Next, we show the supplementary result (Theorem \ref{thm-intro:main-gen-depth}) mentioned in Section \ref{sec:intro}, stated more precisely below.
\begin{theorem}\label{thm:main-gen-depth}
For an integer $n = 2^k$,
let $\F_n$ be a field of size $n$ and suppose $d\leq n$ is large enough. For $j\in [d]$, let $X_j$ denote the set of $n$ variables
$\{x_{i,j} : i \in [n]\}$ and let $X$ be the tuple $(X_1,\ldots, X_d)$. Then, any set-multilinear
formula family computing $NW_{n,d}(X)$ must have size at least $d^{\Omega(\log d)}$.
\end{theorem}
\begin{proof}
We first need the following structural result, whose proof can be immediately extrapolated from \cite{Saptarishi-survey} (see Lemma 13.3), where it is shown for multilinear and homogeneous formulas.
\begin{lem}[Product Lemma]\label{lem:prod-ub-depth}
Assume that $F$ is a formula with at most $s$ leaves, and is set-multilinear with respect to the set partition $(X_1,\ldots,X_d)$.
Then, we can write
\[
F = \sum_{i = 1}^s \prod_{j=1}^\ell F_{i,j}
\]
where $\ell \geq \log_3 d$ and for each $i\in [s]$, the product $F_i = \prod_{j=1}^\ell F_{i,j}$ is also set-multilinear. Furthermore, the degrees of $F_{i,j}$ satisfy the following geometric decay property:
\[
\left(\frac{1}{3}\right)^j d \leq \deg(F_{i,j})\leq \left(\frac{2}{3}\right)^j d, \text{ and } \deg(F_{i,\ell}) = 1.
\]
\end{lem}
\begin{lem}\label{lem:rk-gen-depth}
Let $F$ be a set-multilinear formula of size at most $s$ which computes a polynomial that is set-multilinear with respect to the partition $(X_1,\ldots, X_d)$ where each $|X_i| = n$. Let $w \in \{k,-k\}^d$ be chosen uniformly at random. Then, we have
\[
\rk_w(F)\leq s \cdot 2^{-\frac{k\log d}{20}}
\]
with probability at least $1 - s\cdot d^{-\frac{\log d}{60}}$.
\end{lem}
\begin{proof}
We begin by writing $F$ in the form given by Lemma \ref{lem:prod-ub-depth}. Now, because of the geometric decay of the degrees of the $F_{i,j}$, we observe that for each $i\in [s]$ and for at least the first $\frac{3\ell}{4}$ values of $j$, we have $\deg(F_{i,j})\geq d^{1/4}$. In other words, at least a \emph{constant} fraction of the $F_{i,j}$ have degree at least \emph{polynomially large} in $d$. This observation will be instrumental in establishing the following claim, which is akin to Claim \ref{clm:sum-lb} used in the proof of Lemma \ref{lem:rk-bd-depth}.
\begin{claim}\label{clm:sum-lb-easy}
For large enough $d$, suppose $(S_1, \ldots, S_{\ell})$ is a partition of $[d]$ such that $\left(\frac{1}{3}\right)^j d \leq |S_j|\leq \left(\frac{2}{3}\right)^j d$ for all $j\in[\ell]$, and $|S_\ell| = 1$. Then, we have
\[
\Pr_{w\sim \{-1,1\}^d}\left[\sum_{j=1}^{\ell}|w_{S_j}| < \frac{\log d}{10}\right] \leq d^{-\frac{\log d}{60}}.
\]
\end{claim}
\begin{proof}
Suppose the event in question occurs, i.e., $\sum_{j=1}^{\ell}|w_{S_j}| < \frac{\log d}{10}$. As $\ell \geq \frac{\log d}{\log 3} > \frac{5\log d}{8}$, it follows that for at least half of the sets $S_j$, $w_{S_j} = 0$ (since $\frac{5}{8}-\frac{1}{10} = \frac{21}{40}>\frac12$). By the observation above, it also follows that for at least $\frac{\ell}{4}$ of the \emph{first} $\frac{3\ell}{4}$ values of $j$, $w_{S_j} = 0$. But for a fixed such $j$, since $|S_j|\geq d^{1/4}$, the probability
\[
\Pr_{w\sim \{-1,1\}^d}\left[w_{S_j} = 0\right]\leq \sqrt{\frac{2}{\pi |S_j|}}< \frac{1}{\sqrt{|S_j|}} \leq \frac{1}{d^{1/8}}.
\]
Therefore, the probability that this happens for $\ell/4$ distinct $j$ amongst the first $\frac{3\ell}{4}$ values of $j$ is bounded by
\[\binom{3\ell/4}{\ell/4} \cdot \left(\frac{1}{d^{1/8}}\right)^\frac{\ell}{4}< 2^{3\ell/4}\cdot \left(\frac{1}{d^{1/8}}\right)^\frac{\ell}{4} < \left(\frac{2}{d^{1/32}}\right)^\ell < d^{-\frac{\log d}{60}}.\]
\end{proof}
By sub-additivity of $\rk_w$ (Claim~\ref{clm:rk-props}), we have
\begin{equation}\label{eqn:subadd}
\rk_w(F)\leq \rk_w(F_1)+\cdots + \rk_w(F_s).
\end{equation}
So, fix an $i\in [s]$. As the formula is set-multilinear, let $(S_1, \ldots, S_{\ell})$ be the partition of $[d]$
such that each $F_{i,j}$ is set-multilinear with respect to $(X_t)_{t\in S_j}$. Let $w^{i,1},\ldots, w^{i,\ell}$ be the corresponding decomposition, whose respective sums are denoted by $w_{S_1},\ldots,w_{S_{\ell}}$. Then, by Claim \ref{clm:sum-lb-easy},
\[
\rk_w(F_i) = \prod_{j=1}^{\ell} \rk_{w^{i,j}}(F_{i,j}) \leq \prod_{j=1}^{\ell} 2^{-\frac12 |w_{S_j}|} = 2^{-\frac12\sum_{j=1}^{\ell}|w_{S_j}|}\leq 2^{-\frac{k\log d}{20}}
\]
with probability at least $1- d^{-\frac{\log d}{60}}$. Therefore, by a union bound over $i\in[s]$ and (\ref{eqn:subadd}), we conclude that \[
\rk_w(F)\leq s \cdot 2^{-\frac{k\log d}{20}}
\]
with probability at least $1 - s\cdot d^{-\frac{\log d}{60}}$.
\end{proof}
Returning to the proof of the theorem, let $F$ be a set-multilinear formula of size $s$ computing $NW_{n,d}$. Suppose $s < d^{\frac{\log d}{120}}$. Then, by Lemma \ref{lem:rk-gen-depth}, with probability at least $1 - d^{-\frac{\log d}{120}}$,
\[
\rk_w(F)\leq s \cdot 2^{-\frac{k{\log d}}{20}}.
\]
But now, we can condition on the event that $w_{[d]} = 0$ (which occurs with probability $\Theta(\frac{1}{\sqrt{d}})$) to establish the existence of a word $w\in \{-k,k\}^d$ with $w_{[d]} = 0$ such that $w$ satisfies $
\rk_w(F)\leq s \cdot 2^{-\frac{k{\log d}}{20}}$. This is because of the trivial asymptotic bound $\frac{1}{\sqrt{d}} \gg d^{-\frac{\log d}{120}}$. Therefore, again by Claim \ref{clm:NW-full-rk},
\[
s\geq 2^{\frac{k{\log d}}{20}}\cdot \rk_w(F) = n^{\frac{\log d}{20}}
\]
which contradicts the assumption that $s < d^{\frac{\log d}{120}}$. Thus, we conclude that $s\geq d^{\frac{\log d}{120}} = d^{\Omega(\log d)}$.
\end{proof}
\section{Discussion and Open Problems}\label{sec:open}
We conclude by mentioning some interesting directions for future work.
\begin{itemize}
\item The most interesting and natural question is to replace the hard polynomial in our main result with IMM$_{n,n}$. This would imply super-polynomial algebraic formula lower bounds. As far as we know, it is conceivable that our complexity measure could be used to prove the lower bound for the IMM$_{n,n}$ polynomial. While the relative rank of IMM$_{n,n}$ itself is low, there might be a suitable ``restriction'' of it such that for a randomly chosen $w\in \{-k,k\}^n$, with reasonably high probability the restriction has large rank. This could then be used to prove the lower bound for IMM$_{n,n}$ (using Lemma \ref{lem:rk-bd-depth} or Lemma \ref{lem:rk-gen-depth}). The result from~\cite{LST1} also proved its lower bound for the IMM polynomial by first analyzing a suitable restriction of IMM (although unfortunately that very same restriction idea does not work for us; see the discussion in the appendix). Perhaps an intermediate question is to make the hard polynomial computationally simpler, for instance to find any hard polynomial that lies in VP.
\item Another interesting question is to prove an improved depth hierarchy theorem for constant-depth set-multilinear formulas. \cite{LST1} shows a depth hierarchy theorem for low-depth set-multilinear formulas. However, since their lower bounds only hold for small degrees, the depth hierarchy theorem in~\cite{LST1} only gives a quasi-polynomial separation of successive product-depths. It would be very interesting to obtain exponential separations (which for instance have been shown for low-depth multilinear circuits in \cite{ChillaraEL018}) using our measure.
\item Another interesting direction could be to obtain lower bounds for general set-multilinear circuits via improved depth reduction results.
The work of Kumar, Oliveira, and Saptharishi~\cite{KOS19} provides some insight in this context: it shows an improved depth reduction to product-depth $\Delta$ with a size blow-up of $N^{O(\Delta\cdot (N/\log N)^{1/\Delta})}$ for \emph{multilinear} circuits (regardless of degree). If a similar improvement (or any asymptotic improvement in the exponent) on the bound for general circuits from \cite{Tavenas15} could be shown to hold for set-multilinear circuits in the setting of Theorem \ref{thm-intro:main-bd-depth} or Theorem \ref{thm:main-bd-depth} (i.e., when $N\geq d^2$), then combined with our lower bounds, this would imply super-polynomial set-multilinear circuit lower bounds. We should note that \cite{FournierLMS15} rules out the possibility of obtaining a stronger reduction to depth-4, or $\Sigma\Pi\Sigma\Pi$ circuits, as it shows an $n^{\Omega(\sqrt{n})}$ size lower bound for set-multilinear depth-4 circuits computing IMM$_{n,n}$, which of course has small polynomial-sized set-multilinear circuits. Nevertheless, there is still the possibility of obtaining improved depth reduction statements for product-depths 2 (which as noted earlier, is $\Sigma\Pi\Sigma\Pi\Sigma$ and hence more general than depth-4) or higher, and combining it with our Theorem \ref{thm-intro:main-bd-depth} to obtain unbounded-depth set-multilinear circuit lower bounds. \cite{KumarS16} shows a quasi-polynomial separation between the strength of homogeneous $\Sigma\Pi\Sigma\Pi$ and $\Sigma\Pi\Sigma\Pi\Sigma$ circuits, which could be considered as some evidence towards the validity of this possibility.
\end{itemize}
\section*{Acknowledgments}
We would like to thank Swastik Kopparty, Mrinal Kumar, and Ben Rossman for several helpful discussions.
\bibliographystyle{alpha}
\section{Introduction} \label{section: intro}
\emph{Oblivious RAM (ORAM)} is a cryptographic primitive that enables users to
access a database on a server without revealing the access pattern to the
server.\footnotemark
\footnotetext{In the original paper, an ORAM is defined to be a random access
machine for which the memory access pattern is independent of the
input~\cite{Goldreich87}. Our use of the word ORAM follows the convention of
some subsequent work, e.g., \cite{Ren15,Wang15,Devadas16}.}
Although originally introduced in the context of software
protection~\cite{Goldreich87}, ORAM is directly relevant to the present cloud
computing scenarios.
In previous studies on ORAM, researchers focused mainly on reducing the
access bandwidth cost, a performance measure used as a proxy for the access
time.
This is because even the most advanced ORAM constructions incur a bandwidth
cost two to three orders of magnitude larger than that of ordinary
(non-secure) accesses.
However, in certain settings, the ORAM access is already rather efficient.
For example, Maas et al. proposed PHANTOM~\cite{Maas13}, an ORAM-based secure
processor, and reported that if PHANTOM is deployed on the server, SQLite
queries can be performed without revealing the access pattern at the cost of
1.2--6$\times$ slowdown compared to non-secure SQLite queries.
In such cases, it is reasonable to pay more attention to performance measures
other than the access speed.
In particular, the \emph{server space usage} is a very important performance
measure for big-data applications.
First, there are applications where the amount of data is virtually unbounded,
and thus the limit of the available space defines the limit of the analyses.
Second, due to the cache effect, small memory usage often leads to faster
computation.
Third, space costs money, especially in a cloud computing server.
The second and the third points are especially relevant if the data is meant to
be stored in the main memory (by default), which is exactly the case in ORAM
application scenarios such as PHANTOM.
In most modern ORAM constructions, if the size of the original database is $n$
bits, the amount of the space required by the server is $n+\Theta(n)$ bits.
In this paper, we investigate the possibility of ORAM constructions that need
only $n+o(n)$ bits of server space.
We call such ORAM constructions \emph{succinct}.
This space-efficiency formalization is widely used in the field of
\emph{succinct data structures} and has proved useful for designing
practically relevant space-efficient data structures in theoretically clean
ways.
The main difficulty in achieving succinctness is that most existing ORAM
construction approaches rely on a linear amount of ``dummy'' data.
The situation is similar to conventional hash tables, which need extra space
linear in the size of the stored keys.
Although it seems possible to reduce the constant factor of the extra space to
some extent, it is not at all trivial whether one can achieve sublinear extra
space while maintaining state-of-the-art performance in other aspects such as
access bandwidth and user space usage.
\paragraph{Results.}
\cref{table: theoretical performance} shows the performance comparison of the
proposed methods and the existing methods.
Our first construction takes
$n(1+\Theta(\frac{\log{n}}{B}+\frac{g(n)}{\sqrt{f_1(n)/\log{n}}}))$-bit server space
where $n$ is the database size, $f_1(\cdot)$ is an arbitrary function such that
$f_1(n)=\omega(\log{n})$ and $O(\log^2{n})$, $g(\cdot)$ is an arbitrary
function such that $g(n)=\omega(1)$ and $o(\sqrt{f_1(n)/\log{n}})$, and $B$ is
the size of a \emph{block}, the unit of communication between the user and the
server.
The bandwidth blowup is $O(\log^2{n})$ and the user space is $O(f_1(n))$
blocks.
Our second construction achieves
$n(1+\Theta(\frac{\log{n}}{B}+\frac{\log\log{n}}{f_2(n)}))$-bit server space,
$O(\log^2{n})$-bandwidth blowup and $O(f_2(n)+R(n))$-user space where
$f_2(\cdot)$ is an arbitrary function such that $f_2(n) = \omega(\log\log{n})$
and $O(\log^2{n})$, $R(\cdot)$ is an arbitrary function such that $R(n) =
\omega(\log{n})$.
For example, suppose $B = \lg^2{n}$, $R = \lg{n}\lg\lg{n}$, $f_1(n) = f_2(n) =
\lg{n}\lg\lg{n}$ and $g(n) = \lg\lg\lg{n}$.
Then, the user space of each of our first and second constructions is
$O(\log{n}\log\log{n})$ and the server space is
$n(1+\Theta(\frac{\log\log\log{n}}{\sqrt{\log\log{n}}}))$ (resp.
$n(1+\Theta(\frac{1}{\log{n}}))$) bits in the first (resp. second)
construction.
The second construction has better theoretical performance than the first one.
In practice, however, under some parameter settings, the first construction
performs comparably to the second, depending on which performance measure one
cares about (see \cref{section: non-asymptotic}).
The first construction is also the basis of the second construction.
If $B = \omega(\log{n})$, Goldreich's construction~\cite{Goldreich87} and our
constructions are succinct.
(Each of these methods works as long as $B \ge c\lg{n}$ for $c$ around 3.)
The assumption $B = \omega(\log{n})$ is justified as follows.
Stefanov et al.~\cite{Stefanov13} mentioned that the typical block size is
64--256KB (resp. 128B--4KB) in the cloud computing (resp. software protection)
scenario.
Even $B \ge \lg^{1.5}{n}$ holds if $n \le 2^{6501}$ (resp. $n \le 2^{101}$) in
the cloud computing (resp. software protection) scenario with a moderate block
size of 64KB (resp. 128B).
We achieved exponentially smaller bandwidth blowup compared to Goldreich's
construction~\cite{Goldreich87}, which is the only preceding non-trivial
succinct ORAM construction.
The bandwidth blowup of our constructions is smaller or equal to other
non-succinct constructions except the construction of Kushilevitz et
al.~\cite{Kushilevitz12}, the Onion ORAM~\cite{Devadas16} and the so called SSS
construction~\cite{Stefanov12}.
The construction of Kushilevitz et al. (and every other construction listed
above it in \cref{table: theoretical performance}) is based on a very
expensive procedure called oblivious sorting, and the constant factor hidden in
the asymptotic notation of the bandwidth cost is prohibitively large.
The Onion ORAM achieves $O(1)$-bandwidth blowup but it requires several
assumptions.
First, the Onion ORAM requires the server to perform some computation, e.g.,
homomorphic encryption evaluation.
(In every other construction in \cref{table: theoretical performance}, the
server suffices to respond to read/write requests.)
It also requires a computational assumption (decisional composite residuosity
assumption or learning with errors assumption), and larger block size
($B=\widetilde{\omega}(\log^2{n})$ to $\widetilde{\omega}(\log^6{n})$ depending
on the exact construction, where $\widetilde\omega(\cdot)$ hides a polyloglog
factor).
The SSS construction takes $cn$-bit user space where $c \ll 1$.
This method is effective for ordinary cloud computing setting but the user
space is too large for secure processor setting --- the PHANTOM-like
applications where server space efficiency is more important.
\begin{table}
\centering
\caption{Comparison of theoretical performances.
Bandwidth blowup is the number of blocks required to be communicated for
accessing one block of data.
User space includes the temporary space needed during access procedures.
$n$ is the database size in bits and $B$ is the block size in bits.
$B$ must satisfy $B \ge c_1\lg{n}$ and $B = O(n^{c_2})$ for constants
$c_1>1$, $0<c_2<1$.
Typically, $c_1$ is around $3$.
$f_1(\cdot)$ is an arbitrary function such that $f_1(n) = \omega(\log{n})$
and $O(\log^2{n})$.
$f_2(\cdot)$ is an arbitrary function such that $f_2(n) =
\omega(\log\log{n})$ and $O(\log^2{n})$.
$R(\cdot)$ is an arbitrary function such that $R(n) = \omega(\log{n})$.
$g(\cdot)$ is an arbitrary function such that $g(n) = \omega(1)$ and
$o(\sqrt{f_1(n)/\log{n}})$.
Bounds with $\dagger$ are amortized.
The method in~\cite{Devadas16} requires additional assumptions.
The user space bound of the method in~\cite{Stefanov12} has a constant factor
$\ll 1$.}
\label{table: theoretical performance}
\begin{tabular}{c | c c c}
& Server space (\#bits)
& \begin{tabular}[x]{@{}c@{}}Bandwidth\\blowup\end{tabular}
& \begin{tabular}[x]{@{}c@{}}User space\\(\#block)\end{tabular} \\
\hline
Goldreich~\cite{Goldreich87} &
$n\parens{1+\Theta\parens{\frac{\log{n}}{B}+\frac{1}{\sqrt{n}}}}$ &
$O(\sqrt{n}\log{n})^\dagger$ &
$O(1)$ \\
Ostrovsky~\cite{Ostrovsky90} &
$O(n\log{n})$ &
$O(\log^3{n})^\dagger$ &
$O(1)$ \\
Ostrovsky, Shoup~\cite{Ostrovsky97} &
$n(1+\Theta(1))$ &
$O(\sqrt{n}\log{n})$ &
$O(1)$ \\
Ostrovsky, Shoup~\cite{Ostrovsky97} &
$O(n\log{n})$ &
$O(\log^3{n})$ &
$O(1)$ \\
Goodrich, Mitzenmacher~\cite{Goodrich11} &
$n(1+\Theta(1))$ &
$O(\log^2{n})^\dagger$ &
$O(1)$ \\
Kushilevitz, Lu, Ostrovsky~\cite{Kushilevitz12} &
$n(1+\Theta(1))$ &
$O(\frac{\log^2{n}}{\log\log{n}})$ &
$O(1)$ \\
Stefanov, Shi, Song~\cite{Stefanov12} &
$n(1+\Theta(1))$ &
$O(\log{n})$ &
$O(n)$ \\
Stefanov et al.~\cite{Stefanov13} &
$n(1+\Theta(1))$ &
$O(\log^2{n})$ &
$O(R(n))$ \\
Devadas et al.~\cite{Devadas16} &
$n(1+\Theta(1))$ &
$O(1)$ &
$O(1)$ \\
\hline
Our result (\cref{main theorem}) &
$n\parens{1+\Theta\parens{\frac{\log{n}}{B}+\frac{g(n)}{\sqrt{f_1(n)/\log{n}}}}}$ &
$O(\log^2{n})$ &
$O(f_1(n))$ \\
Our result (\cref{second main theorem}) &
$n\parens{1+\Theta\parens{\frac{\log{n}}{B}+\frac{\log\log{n}}{f_2(n)}}}$ &
$O(\log^2{n})$ &
$O(f_2(n)+R(n))$
\end{tabular}
\end{table}
\paragraph{Possible applications.}
There are several ORAM application scenarios with different requirements.
Our methods are particularly relevant to \emph{secure processor} scenario.
In this scenario, it is assumed that a special processor under the control of
the user is available in a remote server and the adversary cannot observe the
activities inside the processor.
The cloud service user sends a piece of code to the trusted processor, which,
in turn, executes the code on the server.
The communication between the cloud service user and the secure processor is
protected by private key encryption.
ORAM is implemented inside of the trusted processor using FPGA and it hides the
processor's access pattern to the main memory on the server.
After executing the code, the secure processor may return the (encrypted)
output to the cloud service user.
One of the main advantages of this approach over the conventional ORAM
application, in which the cloud service user locally executes ORAM, is that
ORAM bandwidth blowup applies to the relatively cheap processor--memory
communication rather than the costly over-network communication.
Note that, with the ORAM user-server terminology, the secure processor (resp.
the main memory) is the user (resp. the server).
In secure processor scenario,
\begin{itemize}[noitemsep]
\item the user space is very limited, e.g., 6MB;
\item The server usually does not perform complex computation;
\item Simple ORAM algorithms are desirable for hardware implementation;
\item The server space is much larger than the user space but there is some
noticeable limit.
The server can use disks if needed but it greatly slows down accesses.
\end{itemize}
In most existing secure processor systems, the Path ORAM~\cite{Stefanov13} or
its close variants are used~\cite{Fletcher12,Maas13,Ren13,Fletcher15}.
Indeed, the Path ORAM satisfies the first three requirements above.
However, it does not capture the last point.
For example, suppose a 128GB database is stored in the Path ORAM.
If the block size is 128B, it takes about 10G blocks, i.e., 1.28TB (to ensure
rigorous security).
Then, each ORAM access procedure takes about 31$\mu$s assuming each memory
access takes 100ns.
If half of the 10G blocks are stored in the main memory and the other half is
stored on disk, then, due to the randomized access pattern of the Path ORAM,
almost every ORAM access procedure ends in a disk seek, which takes time on
the order of milliseconds.
In such cases, it is reasonable to use another ORAM construction that takes,
say, half the space of the Path ORAM even though it requires twice as many
memory accesses.
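The figures in this example follow from simple arithmetic; the sketch below
reproduces them (the bucket size $Z=5$ and the path length of 31 levels are
illustrative assumptions about the Path ORAM configuration, not values fixed by
this paper):
\begin{verbatim}
N = 128e9 / 128              # 1e9 data blocks: a 128GB database in 128B blocks
server_bytes = 10 * N * 128  # ~10x server-space blowup of the Path ORAM
print("server space: %.2f TB" % (server_bytes / 1e12))         # -> 1.28 TB

Z, levels, t_mem = 5, 31, 100e-9  # bucket size, ~lg(1e9)+1 levels, DRAM latency
print("per access: %.0f us" % (2 * Z * levels * t_mem * 1e6))  # -> 31 us
\end{verbatim}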
\paragraph{Tree-based ORAM.}
Our ORAM constructions are tree-based.
In a typical tree-based ORAM construction, $N$ blocks are stored in a complete
binary tree with $N$ leaves on the server.
Each node of the tree can store up to $Z$ blocks where $Z$ is a constant.
Each block is assigned a position label, which is an integer chosen uniformly
at random from $[N]$.
A block with position label $i$ must be stored at some node on the path from
the root to the $i$-th leaf.
This framework was introduced by Shi et al.~\cite{Shi11} and used in many
subsequent studies~\cite{Stefanov13, Gentry13, Ren13, Chung14, Devadas16}.
Consider a particular block $b$.
As the user continuously issues access requests, $b$ moves around the tree in
roughly the following manner.
First, when the user issues an access request to $b$, $b$ is picked out of the
tree and given a new uniformly random position label.
Then, $b$ is inserted into the tree from the root.
If the user issues an access request to another block, then, with some
probability, $b$ will move down the path to the leaf indicated by its position
label.
If the next node on the path is full, $b$ must wait for the blocks ``ahead'' to
move down.
If the pace at which the blocks move down the tree cannot keep up with the pace
at which blocks are picked out and reinserted from the root, then, some blocks
will not be able to reenter the tree.
If such ``congestion'' occurs, the user must maintain the overflowing blocks
locally.
Note that most space in the tree is wasted: there are $2N-1$ nodes in the tree,
each with capacity $Z$, whereas there are only $N$ blocks.
Thus, to save server space, it is desirable to make the tree more compact, for
example, by reducing $Z$.
However, to maintain a low probability of ``congestion'', it is desirable to
make the tree larger, for example, by increasing $Z$.
To construct a succinct tree-based ORAM, we need to satisfy these conflicting
demands simultaneously.
\paragraph{Our ideas.}
One of our key ideas is the following two-stage tree layout.
We first change the tree to a complete binary tree with $N/\lg^{1.4}{N}$ leaves
(assume this is a power of 2).
In addition, we set the capacity of each leaf node to $\lg^{1.4}{N} +
\lg^{1.3}{N}$ while keeping the capacity of each internal node at $Z$.
The total size of the leaf nodes is then $N + N/\lg^{0.1}N$, and the total size
of all tree nodes except the leaves is $\Theta(N/\lg^{1.4}{N})$.
Thus, the total size of the entire tree is $N+o(N)$.
We choose each position label from $[N/\lg^{1.4}{N}]$.
To see why blocks can flow around in this tree without much congestion, suppose
that the user inserts each block directly into the leaf node pointed to by the
block's position label.
Clearly, the loads of leaves in this hypothetical setting dominates the loads
of leaves in the real setting.
Then, the situation would exactly be the same as the ``balls-into-bins''
game~\cite{MitzenmacherUpfal} with $N$ balls and $N/\lg^{1.4}{N}$ bins.
In particular, the number of blocks stored in each leaf node is
$\lg^{1.4}{N}+\Theta(\lg^{1.2}{N})$ with high probability.
Thus, every leaf node has sufficient capacity to store all of its assigned
blocks.
Furthermore, the blocks in the internal nodes flow as smoothly as in the
original non-succinct ORAM construction since we did not modify that part.
Therefore, the blocks flow without much congestion throughout the tree.
This is the idea behind the first construction (\cref{main theorem}).
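A small simulation makes this balls-into-bins intuition concrete. The sketch
below (illustrative only; $N=2^{20}$ is an arbitrary sample size) throws $N$
balls into $N/\lg^{1.4}{N}$ bins and compares the maximum load with the leaf
capacity $\lg^{1.4}{N}+\lg^{1.3}{N}$:
\begin{verbatim}
import math, random

N = 2**20
lgN = math.log2(N)                      # = 20
bins = round(N / lgN**1.4)
loads = [0] * bins
for _ in range(N):                      # one choice: a uniform bin per ball
    loads[random.randrange(bins)] += 1
capacity = lgN**1.4 + lgN**1.3
print("mean load:", round(N / bins, 1))
print("max load :", max(loads), " capacity:", round(capacity, 1))
\end{verbatim}
In typical runs the maximum load lands around 100, comfortably below the
capacity of roughly 115.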
Another key idea follows naturally from the above argument, specifically from
the connection to the balls-into-bins game.
A remarkable phenomenon known as ``the power of two choices'' states that, in
the balls-into-bins game, if one chooses two bins uniformly and independently
for each ball, and throws the ball into the least loaded bin, the bin loads
will be distributed much more tightly around the mean than they are in the
one-choice game~\cite{Azar99,Berenbrink00,MitzenmacherUpfal}.
The maximum bin load corresponds to the leaf node size in tree-based ORAM
constructions.
Thus, the size of the tree can be further decreased by using the two-choice
strategy to assign the position labels.
This is the idea behind the second construction (\cref{second main theorem}).
We note that the current paper is the first to apply the power of two choices
to tree-based ORAM.
(Some non-tree-based constructions~\cite{Pinkas10, Goodrich11, Kushilevitz12}
use the two choices idea in the form of cuckoo hashing~\cite{Pagh04}.)
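The effect is easy to observe empirically; the following sketch (with an
arbitrary toy configuration of $2^{17}$ balls and mean bin load 64) compares
the maximum load of the one-choice and two-choice strategies:
\begin{verbatim}
import random

def max_load(n_balls, n_bins, choices):
    loads = [0] * n_bins
    for _ in range(n_balls):
        picks = [random.randrange(n_bins) for _ in range(choices)]
        best = min(picks, key=loads.__getitem__)  # least-loaded pick
        loads[best] += 1
    return max(loads)

n = 1 << 17
for c in (1, 2):
    print(c, "choice(s): max load =", max_load(n, n // 64, c))
\end{verbatim}
In typical runs the one-choice maximum load is noticeably larger than the
two-choice maximum, reflecting the tighter concentration described above.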
Moreover, the resulting algorithms keep the simplicity of the Path
ORAM~\cite{Stefanov13}, which is a highly valuable asset in the relevant
application scenario as mentioned above.
As for the analysis, the existing stash size analyses~\cite{Stefanov13,Ren13}
do not seem to work in the parameter regimes required for succinctness.
We give a different proof route (though it still borrows heavily
from~\cite{Stefanov13,Ren13}).
\paragraph{Our contributions.}
Our contributions in the current paper are as follows:
\begin{itemize}[noitemsep]
\item We introduce the notion of succinct oblivious RAM.
This is a promising first step to systematically design ORAM constructions
with small server space usage;
\item We propose two succinct ORAM constructions.
Not only being succinct, these constructions exhibit state-of-the-art
performance in terms of the bandwidth blowup.
The methods are simple and easy to implement;
\item We also give non-asymptotic bounds and simulation results which
indicate that the proposed methods are practically effective.
\end{itemize}
\subsection{Related Work}
In the field of succinct data structures~\cite{Jacobson88,Jacobson89}, the goal
is to represent an object such as a string~\cite{Munro01-1, Sadakane02,
Grossi03, Ferragina05-1, Golynski06, Sadakane06, Jansson07, Ferragina07, Hon14,
Navarro14-2} or a tree~\cite{Clark96, Munro01-2, Raman03, Benoit05,
Ferragina05-2, Navarro14} in such a way that a) only $OPT+o(OPT)$ bits are
required, and b) relevant queries such as random access or substring search are
efficiently supported.
Here, $OPT$ is the information theoretic optimum, i.e., the minimum number of
bits needed to represent the object.
The current study is related to succinct data structures in the following way.
Suppose a remote server hosts a database that is implemented by a succinct data
structure, and a user wishes to access the database without revealing the
access pattern to the server.
The user, of course, can apply any existing ORAM constructions.
However, if ORAM increases the database size by some constant factor, it
destroys the $OPT+o(OPT)$ bound guaranteed by the succinct data structure.
One can apply the succinct ORAM constructions proposed in this paper to hide
succinct data structure access pattern on a remote storage device without
harming the theoretical guarantee on the data structure size.
\subsection{Organization of the Paper}
In \cref{section: prelim}, we introduce basic notions that will be used in
later sections.
We describe our first succinct ORAM construction (encapsulated in \cref{main
theorem}) in \cref{section: succinct oram} and the second construction
(encapsulated in \cref{second main theorem}) in \cref{section: succincter
oram}.
Then, we present non-asymptotic analyses and simulation results in
\cref{section: non-asymptotic}.
We conclude the paper in \cref{section: conclusion}.
\section{Preliminaries} \label{section: prelim}
\subsection{Notations}
We denote the set $\{0, 1, \dots, n-1\}$ by $[n]$ for a non-negative integer
$n$.
We write $\lg{x}$ to denote the base-$2$ logarithm of $x$ and $\ln{x}$ to
denote the natural logarithm of $x$.
We write $\log{x}$ to denote the logarithm of $x$ in the context where the base
can be any positive constant.
We write $\mathrm{poly}(n)$ to denote $n^c$ for some constant $c>0$.
A negligible function of $n$ is defined to be a function that is asymptotically
smaller than $1/n^c$ for any constant $c>0$.
\subsection{Oblivious RAM} \label{subsec: ORAM}
\paragraph{Definition.}
Oblivious RAM is defined through the interaction between three parties: the
\emph{user}, the \emph{server} and the \emph{oblivious RAM (ORAM) simulator}.
The user wishes to perform random access to the database on the server without
revealing the ``access pattern'' to the server.
Roughly speaking, the ORAM simulator works as a mediator between the user and
the server.
It takes access requests to the database from the user and translates them to
``appropriate'' access requests to the server.
The database on the server can be maintained as some ``data structure'' instead
of the raw form on which the user intends to perform random access; thus, the
access requests to the server need not be (and are not) the same as the access
requests to the database.
The ORAM simulator then performs accesses to the server on behalf of the user
(using the translated requests), thereby making it impossible for the server to
infer the access pattern to the database even though the accesses to the
server are visible to the server.
Formally, let each of $B$ and $n$ be a positive integer and $N := \ceil{n/B}$.
The value $B$ models the unit of communication and $n$ models the database
size.
We call a chunk of $B$ bits a \emph{block}.
For brevity, we assume $n$ is a multiple of $B$ in the rest of the paper.
A \emph{logical (resp. physical) access request} is a triplet $(\mathrm{op},
\mathrm{addr}, \mathrm{val})$, where $\mathrm{op} \in
\{\mathrm{read},\mathrm{write}\}$, $\mathrm{addr} \in [N]$ (resp.
$\mathrm{addr} \in \mathbb{N}$), $\mathrm{val} \in \{0, 1\}^B$.
The user sends logical access requests to the ORAM simulator and receives a
block for each request.
The server receives physical access requests from the ORAM simulator and
returns a block for each request in the following way: for $(\mathrm{read}, i,
v)$, the server returns $v$ of the most recent request $(\mathrm{write}, i,
v)$.
The ORAM simulator takes a sequence of logical access requests from the user
and for each logical access request, it makes a sequence of physical access
requests to the server receiving a returned block for each of them, and returns
a block to the user.
The ORAM simulator is possibly stateful and probabilistic.
It must respond to logical access requests online and must satisfy the
following conditions:
\begin{description}
\item[Correctness] The ORAM simulator is correct if and only if, for a
logical access request with $\mathrm{addr}=i$, it returns $v$ of the previous
and most recent logical access request $(\mathrm{write},i,v)$;\footnotemark
\footnotetext{We use the convention that not only read but also write
requests have return values.}
\item[Security] The ORAM simulator is computationally (resp. information
theoretically) secure if and only if, for any logical access request
sequences of the same length, the distributions of the $\mathrm{addr}$ values
of the resulting physical access requests are computationally (resp.
information theoretically) indistinguishable.
\end{description}
An ORAM construction is an ORAM simulator implementation.
We have distinguished the user from the ORAM simulator for exposition but in
practice, an ORAM simulator is a program run by the user.
Thus, we do not distinguish them in the rest of the paper.
\paragraph{Encryption.}
In the ORAM constructions considered in this paper, the user holds a symmetric
cipher key and every block is encrypted when it is stored on the server.
Encryption can increase the database size.
Theoretically, we can bound the space overhead due to encryption to an $o(1)$
factor.
For example, one can encrypt a block $m$ as $(r,m \oplus F(r))$ where $r$ is a
random bit string of size $\omega(\log{n})$ and $o(B)$, $F$ is a pseudorandom
function (key omitted), and $\oplus$ denotes bitwise XOR.
Or, in practice, one can use the ``counter mode'' of a block cipher, i.e.,
encrypt a block $m$ as $(i,m \oplus F(z||i))$ where $F$ is AES, $i$ is the
number of blocks encrypted so far, and $z$ is a nonce.
Assuming that we allocate 128 bits to $i$ and the typical block sizes mentioned
in \cref{section: intro}, the additional space is 1/4096--1/16384 (resp.
1/8--1/256) factor of the original database size in cloud computing (resp.
software protection) scenario.
Since the space overhead due to encryption is rather small, we ignore it in the
rest of the paper.
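To illustrate the counter-mode idea, i.e., the $(i, m \oplus F(z||i))$ scheme
above, here is a toy Python sketch; it uses SHA-256 as a stand-in keystream
generator purely for illustration and is not a vetted implementation of the
AES-based scheme:
\begin{verbatim}
import hashlib, os

KEY = os.urandom(32)

def keystream(nonce, counter, length):
    # Toy PRF F(z || i), expanded block-by-block to `length` bytes.
    out, j = b"", 0
    while len(out) < length:
        out += hashlib.sha256(KEY + nonce + counter.to_bytes(16, "big")
                              + j.to_bytes(4, "big")).digest()
        j += 1
    return out[:length]

def encrypt(block, nonce, counter):
    ks = keystream(nonce, counter, len(block))
    return counter, bytes(a ^ b for a, b in zip(block, ks))

def decrypt(ciphertext, nonce):
    counter, body = ciphertext
    return bytes(a ^ b for a, b in
                 zip(body, keystream(nonce, counter, len(body))))

z = os.urandom(12)
ct = encrypt(b"one ORAM data block ...", z, 0)
assert decrypt(ct, z) == b"one ORAM data block ..."
\end{verbatim}
The stored pair $(i, m \oplus F(z||i))$ corresponds to the returned tuple; the
128-bit counter accounts for the extra space discussed above.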
\paragraph{Performance measures.}
The most popular ORAM performance measures include the amount of the space
required by the user/server and the amount of time required for each logical
access.
In most ORAM constructions, the user needs to maintain a small amount of
information locally.
In addition to this, in some constructions, the user temporarily needs to store
more information during the access procedure.
We refer to the amount of space the user temporarily needs during the access
procedure as \emph{temporary space usage} and the amount of space the user
needs even when no access is made as \emph{permanent space usage}.
In this paper, we pay special attention to the server space usage.
In particular, we use the following notion of \emph{succinctness} as a
criterion for ORAM server-space efficiency:
\begin{definition}
If the server space usage of an ORAM construction representing an $n$-bit
database is $n + o(n)$ bits, the ORAM construction is said to be succinct.
\end{definition}
As for the access efficiency, following the previous studies, we use the amount
of communication between the user and the server as a proxy for the access
time.
We define the \emph{bandwidth blowup} of an ORAM construction to be the number
of blocks that needs to be communicated between the user and the server per
logical access.
In other words, the bandwidth blowup is the ratio of the amount of
communication needed for secure access to that needed for ordinary (insecure)
access.\footnotemark
\footnotetext{The bandwidth blowup is a ratio and does not have a unit.}
\paragraph{Asymptotic behavior of parameters.}
Among the ORAM-related parameters, the original database size $n$ and block
size $B$ are outside of the user's control.
Other parameters, e.g., the metadata size, can be chosen by the user.
We assume that $B$ is a function of $n$ satisfying $B = \omega(\log{n})$.
(See \cref{section: intro} for the justification.)
Thus, after all, $n$ is the only free parameter on which the other parameters
depend.
In all asymptotic statements in this paper, the limit is taken as $n \to
\infty$.
\subsection{Sub-ORAM}
We use an ORAM construction encapsulated into the following proposition as a
blackbox.
Concretely, the Path ORAM~\cite{Stefanov13} suffices.
\begin{proposition} \label{prop: Path ORAM}
Let $n$ be the database size and $B$ be the block size, both in bits.
If $B \ge 3\lg{n}$ and $B = O(n^c)$ for some $0<c<1$, there exists an
information theoretically secure ORAM construction such that
i) the server's space usage is
\begin{equation*}
n \parens{10 + \Theta\parens{\frac{\log{n}}{B}}} \text{ bits;\footnotemark}
\end{equation*}
\footnotetext{The description of the original paper depends on the assumption
that $N$ is a power of two. If this assumption is not true and we pad the
database to make $N$ a power of two, the factor 10 in the server space bound
becomes 20.}
ii) the worst-case bandwidth blowup is $O(\log^2{n})$;
iii) the user's temporary space usage is $O(\log{n})$ blocks; and
iv) for any $R = \omega(\log{n})$, the probability that the user's permanent
space usage becomes larger than $R$ blocks during $\mathrm{poly}(n)$ logical accesses
is negligible.
\end{proposition}
\section{Succinct ORAM Construction} \label{section: succinct oram}
In this section, we prove the following theorem.
\begin{theorem} \label{main theorem}
Let $n$ be the database size and $B$ be the block size, both in bits.
If $B \ge 3\lg{n}$ and $B = O(n^c)$ for some constant $0<c<1$, then for any
$f:\mathbb{N}\to\mathbb{R}$ such that $f(n) = \omega(\log{n})$ and $f(n) = O(\log^2{n})$ and
any $g:\mathbb{N}\to\mathbb{R}$ such that $g(n) = \omega(1)$ and $g(n) =
o(\sqrt{f(n)/\log{n}})$, there exists an information theoretically secure
ORAM construction such that
i) the server's space usage is bounded by
\[
n \parens{1 + \Theta\parens{\frac{\log{n}}{B} +
\frac{g(n)}{\sqrt{f(n)/\log{n}}}}} \text{ bits;}
\]
ii) the worst case bandwidth blowup is $O(\log^2{n})$;
iii) the user's temporary space usage is $O(f(n))$ blocks; and
iv) for any $R = \omega(\log{n})$, the probability that the user's permanent
space usage becomes larger than $R$ blocks during $\mathrm{poly}(n)$ logical accesses
is negligible.
\end{theorem}
\begin{corollary}
If, in addition to the conditions of \cref{main theorem}, $B=\omega(\log{n})$,
then, the ORAM construction of \cref{main theorem} is succinct.
\end{corollary}
In the remainder of this section, $n, B, c, f(\cdot), g(\cdot)$ are as
described in \cref{main theorem}.
\subsection{Description}
For the clarity of explanation, we first describe a simplified ORAM construction
where the user needs to maintain a large amount of information locally.
Then, we obtain an ORAM construction with the claimed bounds by slightly
modifying the simplified construction.
As we mentioned in \cref{section: intro}, in a tree-based ORAM construction,
blocks on the server are stored in the nodes of a complete binary tree.
The key point of the method in this section is the choice of the tree height
$L$ and the leaf node capacity $M$.
Specifically, in the rest of this section, let
\[
L := \ceil{\lg{\frac{N}{f(n)}}} \quad\text{and}\quad
M := \ceil{\frac{N}{2^L} + g(n)\sqrt{\frac{NL}{2^L}}}
\]
where $N = n/B$.
We assume, for brevity, that each of $\lg{\frac{N}{f(n)}}$ and $\frac{N}{2^L} +
g(n)\sqrt{\frac{NL}{2^L}}$ is an integer.
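To make the parameter choice concrete, the sketch below evaluates $L$ and $M$
for one arbitrary sample setting ($f(n)=\lg^2{n}$ and $g(n)=2$ are sample
choices satisfying the constraints of \cref{main theorem}, not values
prescribed by the construction):
\begin{verbatim}
import math

n = 2**60                      # database size in bits (arbitrary sample)
B = round(math.log2(n) ** 2)   # block size: B = lg^2 n
N = n // B
f = math.log2(n) ** 2          # f(n) = lg^2 n
g = 2                          # g(n): any slowly growing choice works

L = math.ceil(math.log2(N / f))
M = math.ceil(N / 2**L + g * math.sqrt(N * L / 2**L))
print("N =", N, " L =", L, " M =", M)
print("leaf-level blowup:", M * 2**L / N)  # -> 1 + o(1) as n grows
\end{verbatim}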
\paragraph{Block usage.}
The ORAM is supposed to provide the user with an interface to access the
database as if it is stored in array $A$ of $B$-bit blocks (\cref{subsec:
ORAM}).
We use blocks as follows:
\begin{itemize}[noitemsep]
\item Each block is either a \emph{data block} or a \emph{metadata block};
\item Each data block is either a \emph{real block} or a \emph{dummy block}.
A real block contains an entry of $A$.
A dummy block does not contain any information on the database contents and
is used only to hide the access pattern;
\item Each real block is given a \emph{position label}, a value in $[2^L]$;
\item A metadata block contains the metadata of several data blocks.
For each data block, its metadata consists of
\begin{description}[align=right,labelwidth=0.7cm,noitemsep]
\item[\textmd{\textsf{type}:}] A flag indicating whether the block is real
or dummy;
\item[\textmd{\textsf{addr}:}] If the block is real and represents $A[i]$,
the value of \textsf{addr} is $i$.
If the block is a dummy, the value is arbitrary;
\item[\textmd{\textsf{pos}:}] If the block is real with position label $i$,
the value of \textsf{pos} is $i$.
If the block is a dummy, the value is arbitrary.
\end{description}
\end{itemize}
\paragraph{Data layout.}
The server maintains a tree containing data blocks, which we call \emph{data
tree}, and another tree containing metadata blocks, which we call
\emph{metadata tree}.
The data tree is used in such a way that at each point of time, it contains
most real blocks with high probability.
The user maintains \emph{stash}, which contains the real blocks that are not in
the data tree, and \emph{position table}, which contains the position labels of
all real blocks.
Below, we explain each of them more in detail.
The data tree is a complete binary tree with $2^L$ leaves.
Each node of the tree is a \emph{bucket}, which is a container that can
accommodate a certain number of blocks.
We call the buckets corresponding to the internal nodes \emph{internal
buckets} and the buckets corresponding to the leaf nodes \emph{leaf
buckets}.
The size of each internal bucket is $Z$ (blocks) while the size of each leaf
bucket is $M$ (blocks).
We will determine $Z$ to be 3 in \cref{subsec: succ oram user space} but for
now, we consider it as an arbitrary constant.
The data tree is represented as the bitstring derived by concatenating all
buckets in breadth first order.
As is well-known, with this representation, given an index of a node, the index
of the parent or left/right child can be derived by simple arithmetic.
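For concreteness, the index arithmetic under this layout reads as follows (a
minimal sketch; the 1-based breadth-first numbering and the offset convention
are the assumptions here):
\begin{verbatim}
# 1-based breadth-first numbering of a complete binary tree:
def parent(i): return i // 2
def left(i):   return 2 * i
def right(i):  return 2 * i + 1

# With Z-block internal buckets and M-block leaf buckets concatenated in
# breadth-first order, bucket i starts at the following block offset
# (internal buckets are 1 .. 2^L - 1; leaves are 2^L .. 2^(L+1) - 1):
def bucket_offset(i, L, Z, M):
    if i < 2**L:
        return (i - 1) * Z
    return (2**L - 1) * Z + (i - 2**L) * M

assert parent(left(5)) == 5 and parent(right(5)) == 5
\end{verbatim}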
The total space usage of the data tree is equal to the sum of the bucket sizes.
The metadata tree is also a complete binary tree with $2^L$ leaves.
Each node of the tree is the metadata of the data blocks in the corresponding
bucket of the data tree.
The metadata tree is represented similarly to the data tree but there is a
subtlety.
If the metadata of the blocks in a bucket has a size smaller than $B$, it is
wasteful to allocate one full block for them.
To avoid this waste, we represent metadata tree as the bitstring derived by
concatenating the metadata of all data blocks in the data tree in breadth first
order.
The space usage of the metadata tree is equal to the sum of all metadata of all
data blocks.
Each real block in the stash is maintained with its \textsf{addr} and
\textsf{pos}.
The stash can be any linear-space data structure that efficiently supports
insertion, deletion and range query by \textsf{pos}, e.g., a self balancing
binary search tree.
The position table stores the position label of the real block storing $A[i]$
in the $i$-th entry.
\paragraph{Access procedure.}
Access requests are processed in such a way that the following invariant
conditions are always satisfied:
\begin{itemize}[noitemsep]
\item Each real block is stored either in the data tree or in the stash;
\item If a real block with position label $\ell$ is stored in the data tree,
it is in the bucket on the path from the root to the $\ell$-th leaf.
\end{itemize}
\begin{table}
\centering
\caption{The notations for access procedure}
\label{table: notations}
\begin{tabular}{r | l}
$\mathrm{Pos}$
& the position table \\
$P(\ell)$
& the path from the root to the $\ell$-th leaf of the data tree \\
$P(\ell,i)$
& the depth $i$ bucket on $P(\ell)$ (the root is at depth 0) \\
$P(\ell,i,j)$
& the $j$-th block in $P(\ell,i)$ (counted from one) \\
$\mathrm{meta}[P(\ell,i)]$
& the metadata of the blocks in $P(\ell,i)$ \\
$|P(\ell,i)|$
& the number of blocks in $P(\ell,i)$ ($|P(\ell,i)|=Z$ for $i < L$ and $|P(\ell,L)|=M$) \\
$\mathrm{md}[j]$
& the $j$-th metadata in $\mathrm{md}$ (if $\mathrm{md} = \mathrm{meta}[P(\ell,i)]$, the metadata of $P(\ell,i,j)$) \\
$\textsc{Random}(b)$
& returns a uniformly random $b$-bit integer \\
$\textsc{BitReversal}(\ell)$
& returns the $L$-bit integer derived by reversing the bits of $L$-bit integer $\ell$ \\
$G$
& a persistent/global variable storing the number of \textsc{Access} called so far
\end{tabular}
\end{table}
The main routine of the access procedure is described in \cref{algorithm: main}
and the subroutines for \textsc{Access} are described in \cref{algorithm:
subroutine}.
The notations used in the access procedure are summarized in \cref{table:
notations}.\footnotemark
\footnotetext{We note that the pseudocode and notations borrow much from
existing work~\cite{Stefanov13, Ren15}.}
We use $\cdot$ to denote an arbitrary value.
For example, the metadata $(\mathrm{dummy},\cdot,\cdot)$ means any metadata
with $\mathsf{type} = \mathrm{dummy}$ (\textsf{addr} and \textsf{pos} are
arbitrary).
Though the encryption/decryption are omitted from the pseudocode for brevity,
everything on the server needs to be encrypted.
For example, in the step ``$\mathrm{md} \leftarrow \mathrm{meta}[P(\ell,i)]$'',
the user retrieves the ciphertext of $\mathrm{meta}[P(\ell,i)]$, decrypts it
and save it in the variable $\mathrm{md}$.
For brevity, we assume that every block is already initialized, i.e., each real
block is assigned a valid value with the metadata stored in the corresponding
node of the metadata tree and the position table contains the correct position
labels.
Let $b_a$ be the accessed block.
We first read the position label $\ell$ of $b_a$ from the position table and
update the position table entry to a number chosen uniformly at random from
$[2^L]$ (line 2--4), which will become the new position label of $b_a$ after
the access operation is finished.
By the invariant conditions above, $b_a$ is either in the stash or $P(\ell)$.
We scan $P(\ell)$ and retrieve $b_a$ if it is in $P(\ell)$ (\textsc{ReadPath}
operation in line 5).
If $b_a$ was not in $P(\ell)$, we retrieve it from the stash (line 6--9).
If the current request is a write request, we update the block contents to the
new value (line 11--12).
Then, we insert $b_a$ with the updated position label and the possibly updated
value into the stash (line 13).
After that, we perform \textsc{EvictPath} operation (line 14).
The purpose of this operation is a) to move back the blocks in the stash into
the tree and b) to move the real blocks in the tree downwards (far from the
root).
To do this, \textsc{EvictPath} retrieves all real blocks on the path
$P(\textsc{BitReversal}(G))$ (to be explained shortly) into the stash and then,
going up $P(\textsc{BitReversal}(G))$ from the leaf to the root, tries to move
as many blocks as possible from the stash into the buckets on the path.
If some blocks are left in the stash after \textsc{EvictPath}, the user keeps
them charging the permanent space usage.
Lastly, the value stored at $b_a$ is returned (line 15).
The function $\textsc{BitReversal}(\cdot)$ takes an $L$-bit integer $x$ and
returns the bit reversed version of $x$ while $G$ is the number of
\textsc{Access} operations called so far (modulo $2^L$).
Thus, if $L=3$ for example, $G$ cycles as $0,1,2,3,4,5,6,7,0,1,2,\dots$ as
$\textsc{Access}$ is called successively.
Then, $\textsc{BitReversal}(G)$ cycles as $0,4,2,6,1,5,3,7,0,4,\dots$.
The advantage of this \textsc{EvictPath} scheduling is that the eviction paths
(paths on which \textsc{EvictPath} is called) are distributed evenly, that is,
each of the $2^i$ nodes at depth $i$ is on the eviction path every $2^i$
$\textsc{Access}$ operations.
This \textsc{BitReversal}-based scheduling was first proposed by Gentry et
al.~\cite{Gentry13} and is advantageous to keep the stash size small (used
implicitly in Lemma~\ref{Ring ORAM Lemma 3}).
It also simplifies the stash size analysis.
For security, the important thing is that $G$ (and $\textsc{BitReversal}(G)$)
is independent of the accessed database locations.
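A direct implementation of $\textsc{BitReversal}$ (an illustrative sketch)
reproduces the eviction schedule above:
\begin{verbatim}
def bit_reversal(x, L):
    # Reverse the L-bit binary representation of x.
    r = 0
    for _ in range(L):
        r = (r << 1) | (x & 1)
        x >>= 1
    return r

L = 3
print([bit_reversal(g % 2**L, L) for g in range(10)])
# -> [0, 4, 2, 6, 1, 5, 3, 7, 0, 4]
\end{verbatim}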
\begin{algorithm}
\caption{Main routine}
\label{algorithm: main}
\begin{algorithmic}[1]
\Function{Access}{$a,\mathrm{op},v'$}
\State $\ell' \leftarrow \textsc{Random}(L)$
\State $\ell \leftarrow \mathrm{Pos}[a]$
\State $\mathrm{Pos}[a] \leftarrow \ell'$
\Statex
\State $v \leftarrow \textsc{ReadPath}(\ell,a)$
\If {$v = \bot$ }
\State find $(a,\ell,v'') \in \text{stash}$
\Comment{there exists $(a,\ell,v'') \in \text{stash}$}
\State $v \leftarrow v''$
\State $\text{stash} \leftarrow \text{stash} \setminus (a,\ell,v'')$
\EndIf
\State $ret \leftarrow v$
\If {$\mathrm{op} = \mathrm{write}$}
\State $v \leftarrow v'$
\EndIf
\State $\text{stash} \leftarrow \text{stash} \cup (a,\ell',v)$
\Statex
\State \textsc{EvictPath}()
\Statex
\State return $ret$
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Subroutines for \textsc{Access}}
\label{algorithm: subroutine}
\begin{algorithmic}[1]
\Function{ReadPath}{$\ell, a$}
\State $v \leftarrow \bot$
\For {$i \leftarrow 0$ to $L$}
\State $\mathrm{md} \leftarrow \mathrm{meta}[P(\ell,i)]$
\For {$j \leftarrow 1$ to $|P(\ell, i)|$}
\State $v' \leftarrow P(\ell, i, j)$
\If {$\mathrm{md}[j] = (\mathrm{real},a,\ell)$}
\State $v \leftarrow v'$
\State $\mathrm{md}[j] \leftarrow (\mathrm{dummy},\cdot,\cdot)$
\EndIf
\EndFor
\State $\mathrm{meta}[P(\ell,i)] \leftarrow \mathrm{md}$
\EndFor
\State return $v$
\EndFunction
\end{algorithmic}
\hfill
\begin{algorithmic}[1]
\Function{EvictPath}{ }
\State $\ell \leftarrow G \mod{2^L}$
\Comment{$G$ is global/persistent, and initially zero}
\State $G \leftarrow G+1$
\State $\ell' \leftarrow \textsc{BitReversal}(\ell)$
\For {$i \leftarrow 0$ to $L$}
\State $\text{stash} \leftarrow
\text{stash} \cup \textsc{ReadBucket}(P(\ell',i))$
\EndFor
\For {$i \leftarrow L$ to $0$}
\State $\textsc{WriteBucket}(P(\ell',i),\text{stash})$
\EndFor
\EndFunction
\end{algorithmic}
\hfill
\begin{algorithmic}[1]
\Function{ReadBucket}{$P(\ell,i)$}
\State $S \leftarrow \emptyset$
\State $\mathrm{md} \leftarrow \mathrm{meta}[P(\ell,i)]$
\For {$j \leftarrow 1$ to $|P(\ell,i)|$}
\State $v \leftarrow P(\ell,i,j)$
\If {$\mathrm{md}[j]= (\mathrm{real},a,\ell')$ for some $a$ and $\ell'$}
\State $S \leftarrow S \cup (a,\ell',v)$
\State $\mathrm{md}[j] \leftarrow (\mathrm{dummy},\cdot,\cdot)$
\EndIf
\EndFor
\State $\mathrm{meta}[P(\ell,i)] \leftarrow \mathrm{md}$
\State return $S$
\EndFunction
\end{algorithmic}
\hfill
\begin{algorithmic}[1]
\Function{WriteBucket}{$P(\ell,i)$,stash}
\State $S \leftarrow$ blocks in the stash whose labels have the same length-$i$ prefix as $\ell$
\For{$j \leftarrow 1$ to $|P(\ell,i)|$}
\If {$S \neq \emptyset$}
\State pick an arbitrary $(a,\ell',v) \in S$
\State $P(\ell,i,j) \leftarrow v$
\State $\mathrm{md}[j] \leftarrow (\mathrm{real},a,\ell')$
\State $S \leftarrow S \setminus (a,\ell',v)$
\State $\text{stash} \leftarrow \text{stash} \setminus (a,\ell',v)$
\Else
\State $P(\ell,i,j) \leftarrow \text{garbage}$
\State $\mathrm{md}[j] \leftarrow (\mathrm{dummy},\cdot,\cdot)$
\EndIf
\EndFor
\State $\mathrm{meta}[P(\ell,i)] \leftarrow \mathrm{md}$
\EndFunction
\end{algorithmic}
\end{algorithm}
\paragraph{Outsourcing position table.}
In the construction described so far, the user space usage is much larger than
the bound claimed in \cref{main theorem} since the user needs to maintain the
position table locally.
To obtain \cref{main theorem}, we modify the construction so that the position
table is stored on the server using the sub-ORAM in \cref{prop: Path ORAM},
e.g., the Path ORAM~\cite{Stefanov13}.
The access procedure is the same except that lines 3--4 of \textsc{Access}
($\ell \leftarrow \mathrm{Pos}[a]$ and $\mathrm{Pos}[a] \leftarrow \ell'$) are
replaced by a sub-ORAM write access.
\subsection{Security} \label{subsec: succ oram security}
Fix $t>0$.
Let $\mathbf{a}$ be a length-$t$ sequence of logical addresses to be accessed
and $\mathbf{a}'$ be the corresponding sequence of physical addresses (indices
of the server memory) to be accessed.
The sequence $\mathbf{a}'$ is determined by $\mathbf{a}$ and the randomness
used by the ORAM simulator.
To prove the information theoretic security, it suffices to show that
$\mathbf{a}'$ really does not depend on $\mathbf{a}$.
The sequence $\mathbf{a}'$ consists of $\mathbf{a}'_1$, the physical addresses
accessed in lines 3--4 of \textsc{Access}, and $\mathbf{a}'_2$, those accessed
in the remaining parts of \textsc{Access}.
The addresses $\mathbf{a}'_1$ are determined by the sub-ORAM access procedure
and are independent of $\mathbf{a}$ due to the information theoretic security
of the sub-ORAM.
The addresses $\mathbf{a}'_2$ consist of the addresses accessed by
$\textsc{ReadPath}(\ell,a)$ and $\textsc{EvictPath}()$.
$\textsc{ReadPath}(\ell,a)$ accesses the path $P(\ell)$, which is determined by
$\ell$, the position label of the accessed block.
Since the position labels are chosen independently and uniformly at random, the
$\textsc{ReadPath}$ accesses are independent of $\mathbf{a}$.
$\textsc{EvictPath}$ accesses $P(\textsc{BitReversal}(G))$, which is determined
by $G$, the number of times $\textsc{Access}$ was called (modulo $2^L$).
Thus, the accesses of $\textsc{EvictPath}$ are also independent of $\mathbf{a}$.
Therefore, $\mathbf{a}'$ is independent of $\mathbf{a}$.
\subsection{Server Space}
First, it is helpful to observe the following:
\begin{equation}
\log{N} = \Theta(\log{n}), \quad
L = \Theta(\log{n}), \quad
M = \Theta(f(n)) \label{useful bounds}.
\end{equation}
Remember that the server holds the data tree, the metadata tree and the
position table.
The total size of the internal (resp. leaf) buckets is $Z(2^L-1)$ (resp.
$M2^L$) blocks.
Since
\begin{align*}
& Z(2^L-1) < Z2^L = ZN/f(n),
& M2^L = N+g(n)\sqrt{NL2^L} = N\parens{1+\Theta\parens{\frac{g(n)}{\sqrt{f(n)/\log{n}}}}},
\end{align*}
the number of the blocks in the data tree is bounded by
\begin{equation*}
ZN/f(n) + N\parens{1+\Theta\parens{\frac{g(n)}{\sqrt{f(n)/\log{n}}}}}
= N\parens{1 + \Theta\parens{\frac{1}{f(n)} + \frac{g(n)}{\sqrt{f(n)/\log{n}}}}}.
\end{equation*}
The metadata for each data block takes 1 bit for \textsf{type}, $\ceil{\lg{N}}$
bits for \textsf{addr} and $L$ bits for \textsf{pos}.
The total is $\Theta(\log{n})$ bits, which is $\Theta(\frac{\log{n}}{B})$
blocks.
Thus, the number of bits in the data tree and the metadata tree combined is
\begin{equation*}
BN\parens{1 + \Theta\parens{\frac{1}{f(n)} + \frac{g(n)}{\sqrt{f(n)/\log{n}}}}} \parens{1 + \Theta\parens{\frac{\log{n}}{B}}}
= n\parens{1 + \Theta\parens{\frac{\log{n}}{B} + \frac{g(n)}{\sqrt{f(n)/\log{n}}}}}.
\end{equation*}
The position labels take $NL = n\frac{L}{B} \le n\frac{\lg{n}}{B}$ bits.
By \cref{prop: Path ORAM}, the sub-ORAM containing the position table takes
$\Theta(n\frac{\log{n}}{B})$ bits.
Thus, the server space is $n(1 + \Theta(\frac{\log{n}}{B} +
\frac{g(n)}{\sqrt{f(n)/\log{n}}}))$ bits.
\subsection{Bandwidth Blowup}
The bandwidth cost of each of \textsc{ReadPath} and \textsc{EvictPath} is
proportional to the sum of the numbers of the blocks in a root--leaf path in
the data tree and the metadata tree.
The number for the data tree is $ZL+M = O(\log{n}) + O(f(n)) = O(f(n))$.
The number for the metadata tree is around a $\frac{2\lg{N}+1}{B} = o(1)$
fraction of that for the data tree.
The bandwidth cost for accessing the position table is $O(\log^2{n})$ by
\cref{prop: Path ORAM}.
Therefore, the bandwidth blowup of \textsc{Access} is $O(\log^2{n})$.
\subsection{User Space} \label{subsec: succ oram user space}
The temporary user space usage is proportional to the sum of the numbers of
blocks in a root--leaf path in the data tree and the metadata tree.
As shown in the bandwidth analysis, this sum is bounded by $O(f(n))$.
In the rest of this subsection, we bound the permanent user space usage, i.e.,
the stash size.
First, we import some concepts and tools from~\cite{Stefanov13} and
\cite{Ren15}.
Fix a sequence of input logical access requests.
Later, we will specify a concrete request sequence that we use for the
analysis.
Let $\mathrm{ORAM}_Z$ be the real ORAM construction that we are analyzing and
$\mathrm{ORAM}_\infty$ be the hypothetical ORAM construction derived by
modifying the size of each bucket in $\mathrm{ORAM}_Z$ to $\infty$.
Let $S_Z$ (resp. $S_\infty$) be the $\mathrm{ORAM}_Z$ (resp.
$\mathrm{ORAM}_\infty$) after processing the access requests.
Note that $\mathrm{ORAM}_Z$ (resp. $\mathrm{ORAM}_\infty$) is an ORAM
construction while $S_Z$ (resp. $S_\infty$) is the state of the construction at
a particular point of time.
We write $G$ to denote a post-processing algorithm that takes $S_Z$ and
$S_\infty$ and modify $S_\infty$ in the following way.
The algorithm $G$ enumerates the buckets in $\mathrm{ORAM}_\infty$ in reverse
breadth first order.
We define $b^Z_1$ to be the root bucket of $\mathrm{ORAM}_Z$ and $b^Z_{2i}$
(resp. $b^Z_{2i+1}$) to be the left (resp. right) child of $b^Z_i$.
We define $b^Z_0$ to be the stash of $\mathrm{ORAM}_Z$.
For $i\in[2^{L+1}]$, $b^\infty_i$ is defined similarly.
For each $i$ from $2^{L+1}-1$ down to 1, $G$ processes each block $v\in b^\infty_i$
as follows: i) if $v \in b^Z_i$, $v$ is left as it is; ii) if, in $S_Z$, $v$ is
stored in some proper ancestor of $b^Z_i$, $G$ moves $v$ to
$b^\infty_{\floor{i/2}}$, i.e., $b^\infty_i$'s parent; iii) if, in $S_Z$, $v$
is not stored in any ancestor of $b^Z_i$, $G$ outputs an error. If the number
of blocks left in $b^\infty_i$ after this transportation is more than $Z$, $G$
also outputs an error.
We denote the output of $G$ with input $S_Z$ and $S_\infty$ as
$G_{S_Z}(S_\infty)$.
For $S\in\{S_Z,G_{S_Z}(S_\infty)\}$, we define $st(S)$ to be the number of
blocks in the stash of $S$.
Due to the following lemma, $st(S_Z)$ and $st(G_{S_Z}(S_\infty))$ are
equivalent as random variables.
\begin{lemma}[\cite{Ren15} Lemma 1]
If the randomness used in $\mathrm{ORAM}_Z$ is the same as
$\mathrm{ORAM}_\infty$, i.e., the position labels assigned to the accessed
blocks are the same in the two ORAM constructions, then $G$ does not output an
error and $S_Z = G_{S_Z}(S_\infty)$.
\end{lemma}
\noindent Let a subtree be a connected subgraph of the complete binary tree
with $2^L$ leaves that contains the root.
For a subtree $T$, let $C(T)$ be the number of blocks that can be stored in the
corresponding buckets of $\mathrm{ORAM}_Z$, and $X(T)$ be the number of blocks
that are stored in the corresponding buckets of $S_\infty$.
Note that $C(T)$ is a constant while $X(T)$ is a random variable.
Also, let $n(T)$ denote the number of nodes in $T$.
\begin{lemma}[\cite{Ren15} Lemma 2]
For any integer $R>0$, $st(G_{S_Z}(S_\infty)) > R$ iff there exists a subtree
$T$ such that $X(T) > C(T)+R$.
\end{lemma}
\noindent We refer to subtrees that contain only internal nodes (of the
enclosing complete binary tree) as internal subtrees.
\begin{lemma}[\cite{Ren15} Lemma 3] \label{Ring ORAM Lemma 3}
For any internal subtree $T$, $E[X(T)] \le n(T)/2$.
\end{lemma}
\noindent Let the working set of a sequence of access requests
$(\mathrm{op}_i,\mathrm{addr}_i,\mathrm{val}_i)_i$ be the set
$\{\mathrm{addr}_i\}_i$.
\begin{lemma}[\cite{Stefanov13} Lemma 3] \label{Path ORAM Lemma 3}
Among all access request sequences of working set size $t$, the probability
$\Pr[st(S_Z)>R]$ is maximized by the sequence that contains exactly one access
to each of the $t$ different addresses.
\end{lemma}
\noindent Because of \cref{Path ORAM Lemma 3}, we fix the input access request
sequence to $(\mathrm{op}_i,i,\mathrm{val}_i)_{i\in[N]}$ without loss of
generality.
($\mathrm{op}_i$ and $\mathrm{val}_i$ are arbitrary since they do not affect
the stash size.)
Now we prove
\begin{equation} \label{target bound}
\Pr[st(S_Z)>R] = n^{-\omega(1)}
\end{equation}
for $R = \omega(\log{n})$.
Remember that \eqref{target bound} is a bound on the stash size at a particular
point of time.
Given \eqref{target bound}, the probability that the stash size becomes larger
than $R$ at \emph{any} point in $\mathrm{poly}(n)$ logical accesses is also bounded
by $n^{-\omega(1)}$ by the union bound.
Let \textsf{G} be the event that no leaf bucket of $S_\infty$ contains more
than $M$ blocks.
Let \textsf{B} be the complement of \textsf{G}.
\begin{lemma} \label{balls-into-bins bound}
$\Pr[\mathsf{B}] = n^{-\omega(1)}$.
\end{lemma}
\begin{proof}
Consider $\mathrm{ORAM}_\infty$ just before post-processing.
For $i \in [2^L]$, let $\mathrm{load}_i$ be the number of real blocks in the
$i$-th leaf bucket and $\mathrm{ctr}_i$ be the number of real blocks with
position label $i$.
Since a real block can be stored in the $i$-th leaf bucket only if it has
position label $i$, $\mathrm{load}_i \le \mathrm{ctr}_i$.
For $i \in [2^L]$ and $j \in [N]$ let $\mathrm{ctr}_{i,j}$ be the indicator
random variable of the event that the $j$-th accessed real block is assigned
position label $i$.
Clearly, $\mathrm{ctr}_i = \sum_j \mathrm{ctr}_{i,j}$ for each $i \in [2^L]$,
and $E[\mathrm{ctr}_{i,j}] = \Pr[\mathrm{ctr}_{i,j}=1] = 1/2^L$ for each $i \in
[2^L]$ and $j \in [N]$.
Thus, $E[\mathrm{ctr}_i] = E[\sum_j \mathrm{ctr}_{i,j}] = \sum_j
E[\mathrm{ctr}_{i,j}] = N/2^L$ for each $i \in [2^L]$.
For each $i\in[2^L]$, $\{\mathrm{ctr}_{i,j}\}_{j\in[N]}$ are mutually
independent.
The lemma follows as
\begin{align*}
\Pr[\mathsf{B}]
& = \Pr[\cup_{i\in[2^L]} \mathrm{load}_i>M] \\
& \le \Pr[\cup_{i\in[2^L]} \mathrm{ctr}_i > M] \\
& \le \sum_{i\in[2^L]} \Pr[\mathrm{ctr}_i > M] \\
& = \sum_{i\in[2^L]} \Pr \brackets{\mathrm{ctr}_i > \frac{N}{2^L}\parens{1+g(n)\sqrt{\frac{L2^L}{N}}}} \\
& \le 2^L \exp\parens{-(1/3)g(n)^2 L} \\
& \le n \exp(-\omega(1)\ln{n}) \\
& = n^{-\omega(1)}.
\end{align*}
We used the Chernoff bound in the fifth step.
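For reference, this is the standard multiplicative Chernoff bound: for a sum $X$ of independent $0/1$ random variables with $E[X]=\mu$,
\begin{equation*}
\Pr[X > (1+\delta)\mu] \le \exp(-\delta^2\mu/3) \quad \text{for } 0 < \delta \le 1,
\end{equation*}
applied here with $\mu = N/2^L$ and $\delta = g(n)\sqrt{L2^L/N}$, for which $\delta^2\mu/3 = g(n)^2L/3$.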
\end{proof}
Let $\mathcal{T}$ be the set of all subtrees and $\mathcal{T}'$ be the set of
all internal subtrees.
Then,
\begin{align}
\Pr[st(S_Z)>R]
& = \Pr[st(G_{S_Z}(S_\infty))>R] \nonumber \\
& = \Pr[\cup_{T\in\mathcal{T}} X(T)>C(T)+R] \nonumber \\
& \le \Pr[\cup_{T\in\mathcal{T}} X(T)>C(T)+R | \mathsf{G}] + \Pr[\mathsf{B}] \nonumber \\
& = \Pr[\cup_{T\in\mathcal{T}'} X(T)>C(T)+R | \mathsf{G}] + \Pr[\mathsf{B}] \nonumber \\
& \le \sum_{T\in\mathcal{T}'} \Pr[X(T)>C(T)+R | \mathsf{G}] + \Pr[\mathsf{B}] \nonumber \\
&\le \sum_{m\ge 1} 4^m \max_{\substack{T\in\mathcal{T}' \\ n(T)=m}} \Pr[X(T)>C(T)+R|\mathsf{G}] + \Pr[\mathsf{B}]. \label{stash bound 1}
\end{align}
In the fourth step, we used that, conditioned on $\mathsf{G}$, every leaf bucket
of $S_\infty$ contains at most $M$ blocks, which equals the leaf capacity, so
pruning the leaves of a subtree cannot decrease $X(T)-C(T)$; in the last step,
we used the fact that the number of ordered binary trees with $m$ nodes is
bounded by $4^m$.
\begin{lemma} \label{stash bound 4}
For any internal subtree $T$ with $n(T) = m$,
\begin{equation*}
\Pr[X(T)>C(T)+R|\mathsf{G}]
\le (2Z)^{-R} \exp(-m(Z\ln{2Z} + 1/2 - Z)) \Pr[\mathsf{G}]^{-1}.
\end{equation*}
\end{lemma}
\begin{proof}
For any $t>0$,
\begin{align}
\Pr[X(T) >C(T)+R |\mathsf{G}]
& = \Pr[e^{tX(T)} > e^{t(C(T)+R)}| \mathsf{G}] \nonumber \\
& \le E[e^{tX(T)} | \mathsf{G}] e^{-t(C(T)+R)} \nonumber \\
& \le E[e^{tX(T)}] \Pr[\mathsf{G}]^{-1} e^{-t(C(T)+R)}. \label{stash bound 2}
\end{align}
For $j\in[N]$, let $X_j(T)$ be the indicator random variable of the event that,
in $S_\infty$, the $j$-th accessed real block is in $T$ and let $p_j :=
\Pr[X_j(T)=1]$.
Clearly, $\sum_j X_j(T) = X(T)$ and $E[X(T)]=E[\sum_j X_j(T)] = \sum_j
E[X_j(T)] = \sum_j p_j$.
The random variable $X_j(T)$ depends only on $j$ and the position label of the
$j$-th accessed real block.
Thus, $\{X_j(T)\}_{j\in[N]}$ are mutually independent.
Then,
\begin{align}
E[e^{tX(T)}]
& = E[e^{t\sum_{j\in[N]}X_j(T)}] \nonumber \\
& = E[\Pi_{j\in[N]}e^{tX_j(T)}] \nonumber \\
& = \Pi_{j\in[N]} E[e^{tX_j(T)}] \nonumber \\
& = \Pi_{j\in[N]} (p_j(e^t - 1)+1) \nonumber \\
& \le \Pi_{j\in[N]} \exp(p_j(e^t-1)) \nonumber \\
& = \exp((e^t-1)\sum_{j\in[N]}p_j) \nonumber \\
& = \exp((e^t-1)E[X(T)]). \label{stash bound 3}
\end{align}
We used the independence of $\{X_j(T)\}_{j\in[N]}$ in the third step.
Let $m := n(T)$.
From bounds \eqref{stash bound 2}, \eqref{stash bound 3} and \cref{Ring ORAM
Lemma 3}, $\Pr[X(T)>C(T)+R|\mathsf{G}]$ is bounded by
\begin{equation*}
\exp((e^t-1)m/2)e^{-t(mZ+R)}\Pr[\mathsf{G}]^{-1}
= \exp(-tR) \exp(-m(tZ-(1/2)(e^t-1))) \Pr[\mathsf{G}]^{-1}.
\end{equation*}
The lemma follows by setting $t = \ln{2Z}$.
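(This choice maximizes the coefficient of $m$ in the exponent: $\frac{d}{dt}\parens{tZ - \tfrac{1}{2}(e^t-1)} = Z - \tfrac{1}{2}e^t$ vanishes at $e^t = 2Z$, and substituting $t = \ln{2Z}$ gives $tZ - \tfrac{1}{2}(e^t-1) = Z\ln{2Z} + \tfrac{1}{2} - Z$ and $e^{-tR} = (2Z)^{-R}$.)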
\end{proof}
If $Z=3$, $q := Z\ln{2Z}+1/2-Z-\ln{4} = 1.4889 \dots > 0$.
By \eqref{stash bound 1} and \cref{stash bound 4},
$\Pr[st(S_Z)>R]$ is bounded by
\begin{equation*}
\sum_{m \ge 1} 4^m 6^{-R} \exp(-m(q+\ln{4})) \Pr[\mathsf{G}]^{-1} + \Pr[\mathsf{B}]
< \frac{(1/6)^R}{1-e^{-q}} \Pr[\mathsf{G}]^{-1} + \Pr[\mathsf{B}].
\end{equation*}
By \cref{balls-into-bins bound}, the bound above is $n^{-\omega(1)}$ if
$R=\omega(\log{n})$.
\section{Succincter ORAM Construction} \label{section: succincter oram}
In this section, we prove the following theorem.
\begin{theorem} \label{second main theorem}
Let $n$ be the database size and $B$ be the block size, both in bits.
If $B \ge 3\lg{n}$ and $B = O(n^c)$ for some $0 < c < 1$, then for any
$f:\mathbb{N}\to\mathbb{R}$ such that $f(n) = \omega(\log\log{n})$ and $f(n) = O(\log^2{n})$,
there exists an information theoretically secure ORAM construction for which
i) the server's space usage is bounded by
\[
n \parens{1 + \Theta\parens{\frac{\log{n}}{B} +
\frac{\log\log{n}}{f(n)}}} \text{ bits;}
\]
ii) the worst case bandwidth blowup is $O(\log^2{n})$;
iii) the user's temporary space usage is $O(\log{n}+f(n))$ blocks; and
iv) for any $R = \omega(\log{n})$, the probability that the user's permanent
space usage becomes larger than $R$ blocks during $\mathrm{poly}(n)$ logical accesses
is $n^{-\omega(1)}$.
\end{theorem}
\begin{corollary}
If, in addition to the conditions of \cref{second main theorem}, $B =
\omega(\log{n})$, then, the ORAM construction of \cref{second main theorem} is
succinct.
\end{corollary}
\cref{second main theorem} is stronger than \cref{main theorem}.
For example, if $B = \Omega(\log^2{n})$ and $f(n)=\Theta(\log{n}\log\log{n})$,
the server space bound of \cref{second main theorem} implies that the extra
server space is $\Theta(n/\log{n})$ and the user temporary space usage is
$\Theta(\log{n}\log\log{n})$.
In contrast, the extra server space bound of \cref{main theorem} is
$\omega(n/\sqrt{\log{n}})$ even if we allow the user's temporary space to
become $\Theta(\log^2{n})$.
In the rest of this section, $n, B, f(\cdot)$ are as described in the statement
of \cref{second main theorem}.
In the following exposition, we often refer to \cref{section: succinct oram} to
avoid repetition.
We recommend the readers to read \cref{section: succinct oram} beforehand.
\subsection{Description}
As in Section~\ref{section: succinct oram}, we first explain a simplified
version with a large user space usage, and construct the full version that
achieves the claimed bounds from the simplified version.
Let
\[
L := \ceil{\lg(N/f(n))} \quad\text{and}\quad
M := \ceil{N/2^L + (1+\varepsilon)\lg{L}}
\]
where $N = n/B$ and $\varepsilon>0$ is a constant.
We assume, for brevity, that each of $\lg(N/f(n))$ and $N/2^L + (1+\varepsilon)\lg{L}$
is an integer.
\paragraph{Block usage.}
The block usage is the same as the ORAM construction described in
\cref{section: succinct oram} except that each real block is given \emph{two}
position labels instead of one.
We call them the \emph{primary position label} and the \emph{secondary position
label}.
Only the primary position labels are stored in the metadata blocks.
\paragraph{Data layout.}
The data layout is basically the same as in \cref{section: succinct oram}.
We only explain the difference from \cref{section: succinct oram}.
First, the position table stores both the primary position labels and the
secondary position labels.
Second, the user maintains an additional table called the \emph{counter table}.
It is an array of size $2^L$ whose $i$-th entry is the number of real blocks with
primary position label $i$.
Last, since the value of each of $L$ and $M$ is different from that in
\cref{section: succinct oram}, the tree/bucket size is changed accordingly.
\paragraph{Access procedure.}
The same invariant conditions as in \cref{section: succinct oram} are maintained
except that the ``position label'' in the second condition is replaced by
``primary position label''.
The main routine is described in \cref{algorithm: main (two choices)}.
The array $\mathrm{Pos}$ and the subroutines \textsc{Random}, \textsc{ReadPath}
and \textsc{EvictPath} are the same as in \cref{section: succinct oram} while
$\mathrm{Ctr}$ is the counter table.
We let $P(\ell)$ denote the path from the root to the $\ell$-th leaf in the
data tree.
For brevity, we assume that every block is already initialized, i.e., each real
block is assigned a valid value with the metadata stored in the corresponding
node of the metadata tree and the position table and counter table contain the
correct values.
Let $b_a$ be the accessed block.
We first retrieve the two position labels $\ell_1$ and $\ell_2$ of $b_a$ from
the position table and update each of the two position table values to a number
chosen independently and uniformly at random from $[2^L]$, which will become the
new position labels of $b_a$ (lines 2--4).
One of $\ell_1$ and $\ell_2$ is the primary position label and the other is the
secondary position label but we do not know (and do not need to know) which is
which.
By the invariant conditions, $b_a$ is either in the stash or in $P(\ell_1)$ or
$P(\ell_2)$.
We scan $P(\ell_1)$ and $P(\ell_2)$ and retrieve $b_a$ from $P(\ell_i)$ if the
primary position label is $\ell_i$ and $b_a$ is in $P(\ell_i)$ (line 5).
If $b_a$ is not found in the paths, it must be in the stash and we retrieve it
from there (lines 11--13).
At this point, we know the primary position label $\ell$ of $b_a$ (since it is
written in the \textsf{pos} entry of the block), so we decrement the $\ell$-th
entry of the counter table, determine the new primary position label $\ell'_i$
and increment the $\ell'_i$-th entry of the counter table (lines 14--17).
After that, we update the block contents if it is a write request (lines
19--20), insert $b_a$ into the stash (line 21), call \textsc{EvictPath} (line
22) and return the retrieved block content (line 23), all in the same way as
Algorithm~\ref{algorithm: main}.
\paragraph{Outsourcing the position/counter table.}
In the full version of the construction, the position table and the counter
table are stored on the server using the sub-ORAM in \cref{prop: Path ORAM}.
Every access to each of these tables is done using the sub-ORAM access
procedure.
\begin{algorithm}
\caption{Main routine (two choices)}
\label{algorithm: main (two choices)}
\begin{algorithmic}[1]
\Function{Access}{$a,\mathrm{op},v'$}
\State $\ell'_1 \leftarrow \textsc{Random}(L), \ell'_2 \leftarrow \textsc{Random}(L)$
\State $(\ell_1,\ell_2) \leftarrow \mathrm{Pos}[a]$
\State $\mathrm{Pos}[a] \leftarrow (\ell'_1,\ell'_2)$
\Statex
\State $v_1 \leftarrow \textsc{ReadPath}(\ell_1,a), v_2 \leftarrow \textsc{ReadPath}(\ell_2,a)$
\If {$v_1 \neq \bot$}
\State $(v, \ell) \leftarrow (v_1, \ell_1)$
\ElsIf {$v_2 \neq \bot$}
\State $(v, \ell) \leftarrow (v_2, \ell_2)$
\Else
\State Find $(a,\ell'',v'') \in \text{stash}$
\Comment{There exists $(a,\ell'',v'') \in \text{stash}$}
\State $(v, \ell) \leftarrow (v'', \ell'')$
\State $\text{stash} \leftarrow \text{stash} \setminus (a,\ell'',v'')$
\EndIf
\Statex
\State $\mathrm{Ctr}[\ell] \leftarrow \mathrm{Ctr}[\ell]-1$
\State $c_1 \leftarrow \mathrm{Ctr}[\ell'_1], c_2 \leftarrow \mathrm{Ctr}[\ell'_2]$
\State $i \leftarrow \argmin_{j\in\{1,2\}} c_j$
\State $\mathrm{Ctr}[\ell'_i] \leftarrow c_i + 1$
\Statex
\State $ret \leftarrow v$
\If {$\mathrm{op} = \mathrm{write}$}
\State $v \leftarrow v'$
\EndIf
\State $\text{stash} \leftarrow \text{stash} \cup (a,\ell'_i,v)$
\Statex
\State \textsc{EvictPath}()
\Statex
\State return $ret$
\EndFunction
\end{algorithmic}
\end{algorithm}
\subsection{Security}
The security proof of the current ORAM construction is almost the same as in
\cref{subsec: succ oram security}.
The only difference is that the sequence of accessed addresses $\mathbf{a}'_2$
now depends on two position labels instead of one.
In any case, these position labels are distributed independently and uniformly at
random and are thus independent of $\mathbf{a}$.
\subsection{Server Space}
The bounds \eqref{useful bounds} still hold.
The number of blocks in the leaf buckets is
\begin{align*}
M2^L & = N\parens{1+(1+\varepsilon)\frac{\lg{L}}{f(n)}} \nonumber \\
& = N\parens{1+ \Theta\parens{\frac{\log\log{n}}{f(n)}}}.
\end{align*}
The number of blocks in the internal buckets is $Z(2^L-1) < ZN/f(n)$, i.e., an
$O(\frac{1}{f(n)})$ fraction of $N$, which is dominated by the
$\Theta(\frac{\log\log{n}}{f(n)})$ term above.
Thus, the data tree size is bounded by $N(1+ \Theta(\frac{\log\log{n}}{f(n)}))$
blocks.
As in \cref{section: succinct oram}, the metadata size of each data block is
$\Theta(\frac{\log{n}}{B})$ blocks.
Thus, the number of blocks in the data tree and the metadata tree combined is
at most $1+\Theta(\frac{\log{n}}{B})$ times larger than $N(1+
\Theta(\frac{\log\log{n}}{f(n)}))$, which is
\begin{equation*}
n \parens{1+ \Theta\parens{\frac{\log{n}}{B} + \frac{\log\log{n}}{f(n)}}} \text{ bits}.
\end{equation*}
Position labels take $2NL = 2nL/B \le 2n\frac{\log{n}}{B}$ bits while counter
table values take $2^L\ceil{\lg{N}} = N\ceil{\lg{N}}/f(n) \le N = n/B$ bits.
By \cref{prop: Path ORAM}, the sub-ORAM containing the position table (resp.
counter table) takes $\Theta(n\frac{\log{n}}{B})$ (resp. $\Theta(n/B)$) bits.
Therefore, the server space usage is bounded by $n (1+ \Theta(\frac{\log{n}}{B}
+ \frac{\log\log{n}}{f(n)}))$ bits.
\subsection{Bandwidth Blowup}
By the same argument as in the bandwidth analysis, the bandwidth cost of each
of \textsc{ReadPath} and \textsc{EvictPath} is proportional to $ZL+M =
O(\log{n}+f(n))$ (in blocks).
By \cref{prop: Path ORAM}, the bandwidth cost of access to each of the position
table and the counter table is $O(\log^2{n})$.
Thus, the bandwidth blowup is $O(\log^2{n})$.
\subsection{User Space}
By the same argument as in \cref{subsec: succ oram user space}, the temporary
user space is proportional to $ZL+M = O(\log{n}+f(n))$.
In the rest of the subsection, we bound the permanent user space, i.e., the
stash size.
Using the current ORAM construction, define $\mathrm{ORAM}_Z$,
$\mathrm{ORAM}_\infty$, $S_Z$ and $S_\infty$ analogously to \cref{subsec: succ
oram user space}.
Then, we prove $\Pr[st(S_Z)>R] = n^{-\omega(1)}$ for $R=\omega(\log{n})$.
Most arguments in \cref{subsec: succ oram user space} can be reused and we
focus on the differences.
First, \cref{balls-into-bins bound} still holds for the current
construction but the proof is different from \cref{subsec: succ oram user
space}.
\begin{proof}[Proof of \cref{balls-into-bins bound} for the two-choice
construction]
Define $\mathrm{load}_i$, $\mathrm{ctr}_i$ and $\mathrm{ctr}_{i,j}$ in the same
way as we did in the proof of \cref{balls-into-bins bound} except that the
primary position labels are used instead of the position labels.
By the same argument as the proof of \cref{balls-into-bins bound}, it
suffices to prove $\Pr[\cup_{i\in[2^L]} \mathrm{ctr}_i > M] = n^{-\omega(1)}$.
We apply an existing bound for the heavily loaded case of the
\emph{balls-into-bins game with two choices}.
In the balls-into-bins game with $m$ balls and $n$ bins (with one choice), each
of the $m$ balls is thrown into one of the $n$ bins chosen uniformly and
independently at random.
In the balls-into-bins game with two choices, for each ball, two bins are
chosen uniformly and independently at random.
Then, the ball is thrown into the least loaded bin.
The \emph{gap} of a balls-into-bins game with $m$ balls and $n$ bins is defined
to be the difference between the number of balls in the bin with the maximum
load and the average number of balls in a bin, i.e., $m/n$.
Berenbrink et al.~\cite{Berenbrink00} proved the following proposition.
\begin{proposition}
In the two-choice balls-into-bins game with $m$ balls and $n$ bins, for any
$c$, $\Pr[\text{gap} > \lg\lg{n} + \gamma(c)] < 1/n^c$, where $\gamma(c)$ is a
constant that depends only on $c$.
\end{proposition}
\begin{corollary} \label{balls-into-bins bound (two-choice)}
In the two-choice balls-into-bins game with $m$ balls and $n$ bins,
$\Pr[\text{gap} > (1+\varepsilon)\lg\lg{n}] = n^{-\omega(1)}$ for any $\varepsilon>0$.
\end{corollary}
\noindent After processing the access requests, the $2^L$ values in the counter
table are distributed in exactly the same way as the bin loads after the
balls-into-bins game with two choices with $N$ balls and $2^L$ bins.
(Remember that each of the $N$ logical addresses is accessed exactly once.)
Thus,
\begin{align*}
\Pr[\mathsf{B}]
& \le \Pr\brackets{\cup_{i\in[2^L]} \mathrm{ctr}_i > M} \\
& = \Pr\brackets{\cup_{i\in[2^L]} \mathrm{ctr}_i > N/2^L+(1+\varepsilon)\lg{L}} \\
& = (2^L)^{-\omega(1)} \\
& = n^{-\omega(1)}
\end{align*}
where we used \cref{balls-into-bins bound (two-choice)} in the third step.
\end{proof}
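As an empirical (non-rigorous) sanity check of this $\lg\lg$-type gap, the following self-contained Python sketch simulates the heavily loaded two-choice game:
\begin{verbatim}
import random

def max_gap_two_choice(m, n_bins, trials=20):
    # empirical maximum gap (max load minus average) over independent runs
    worst = 0.0
    for _ in range(trials):
        load = [0] * n_bins
        for _ in range(m):
            i = random.randrange(n_bins)
            j = random.randrange(n_bins)
            load[i if load[i] <= load[j] else j] += 1  # least loaded bin
        worst = max(worst, max(load) - m / n_bins)
    return worst

# e.g., max_gap_two_choice(1 << 20, 1 << 10) stays close to lg(lg(n_bins))
\end{verbatim}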
Next, we modify \cref{stash bound 4}.\footnotemark
\footnotetext{We need to do this since $\{X_j(T)\}_{j\in[N]}$ (defined
analogously in \cref{stash bound 4}) are not mutually independent due to
the two-choice strategy.}
\begin{lemma} \label{stash bound 4 (two-choice)}
For any internal subtree $T$ with $n(T)=m$,
\begin{equation*}
\Pr[X(T)>C(T)+R|\mathsf{G}]
\le (Z)^{-R} \exp(-m(Z\ln{Z} + 1 - Z)) \Pr[\mathsf{G}]^{-1}.
\end{equation*}
\end{lemma}
\begin{proof}
The part where the argument in \cref{subsec: succ oram user space} breaks down
is \eqref{stash bound 3}.
In \cref{subsec: succ oram user space}, we used the mutual independence of
$\{X_j\}_{j\in[N]}$ for the third step of \eqref{stash bound 3} but here,
$\{X_j\}_{j\in[N]}$ are not mutually independent.
We fix this problem as follows.
Let $\mathrm{ORAM}'_\infty$ be another hypothetical ORAM construction derived
by modifying $\mathrm{ORAM}_\infty$ so that every time a just accessed real
block $b$ is inserted into the stash, another block called $b$'s \emph{shadow}
is also inserted into the stash.
If $b$ is given the primary position label $\ell_1$ and the secondary position
label $\ell_2$ at the time $b$'s shadow $b'$ is inserted into the stash, $b'$
is given the primary position label $\ell_2$ and the secondary position label
$\ell_1$.
A shadow is evicted in the same way as a real block but it does not affect the
counter table.
Let $S'_\infty$ be $\mathrm{ORAM}'_\infty$ after processing the access
requests.
Since each of the $N$ real blocks is accessed exactly once, each real block in
$S'_\infty$ has one shadow.
For $j\in[N]$, let $Y_j(T)$ (resp. $Y'_j(T)$) be the indicator random variable
of the event that, in $S'_\infty$, the $j$-th accessed real block (resp. the
$j$-th accessed real block's shadow) is in subtree $T$.
Since shadows do not affect the movement of real blocks, $X_j$ and $Y_j$ are
equivalent random variables.
Also, since the primary and the secondary position label of each accessed block
are chosen independently and uniformly at random, $Y_j+Y'_j$ is distributed
identically to $U_j+U'_j$, where $U_j$ and $U'_j$ are independent random variables,
each distributed identically to the $X_j$ in the proof of \cref{stash bound 4}.
Thus, with $Y(T):=\sum_jY_j(T)$,
\begin{align}
E[e^{tX(T)}] & \le E[e^{t\sum_j(Y_j(T)+Y'_j(T))}] \nonumber \\
& = E[e^{t\sum_j(U_j+U'_j)}] \nonumber \\
& = E[e^{t\sum_j U_j}] E[e^{t\sum_j U'_j}] \nonumber \\
& = E[e^{t\sum_j U_j}]^2 \nonumber \\
& = \exp((e^t-1)2E[X(T)]) \label{stash bound 3 (two-choices)}
\end{align}
where we used~\eqref{stash bound 3} in the last step.
Then, from \eqref{stash bound 2}, \eqref{stash bound 3 (two-choices)} and
\cref{Ring ORAM Lemma 3}, $\Pr[X(T)>C(T)+R|\mathsf{G}]$ is bounded by
\begin{equation*}
\exp((e^t-1)m-t(mZ+R))\Pr[\mathsf{G}]^{-1}
= \exp(-tR) \exp(-m(tZ-(e^t-1))) \Pr[\mathsf{G}]^{-1}.
\end{equation*}
The lemma follows by setting $t = \ln{Z}$.
\end{proof}
If $Z=4$, $q := Z\ln{Z} + 1 - Z - \ln{4} = 1.15888 \dots > 0$.
By \eqref{stash bound 1} and \cref{stash bound 4 (two-choice)},
$\Pr[st(S_Z)>R]$ is bounded by
\begin{equation*}
\sum_{m \ge 1} 4^m 4^{-R} \exp(-m(q+\ln{4})) \Pr[\mathsf{G}]^{-1} + \Pr[\mathsf{B}]
< \frac{(1/4)^R}{1-e^{-q}} \Pr[\mathsf{G}]^{-1} + \Pr[\mathsf{B}].
\end{equation*}
By \cref{balls-into-bins bound}, the bound above is $n^{-\omega(1)}$ if
$R=\omega(\log{n})$.
\section{Practicality of the Proposed Methods} \label{section: non-asymptotic}
\cref{table: performance with concrete params} shows the performance of the
proposed methods, the Path ORAM~\cite{Stefanov13} and the Ring
ORAM~\cite{Ren15} with concrete parameters.
The Ring ORAM has asymptotically the same performance as the Path ORAM but it
achieves a constant-factor smaller bandwidth at the cost of larger server space.
It is easy to integrate the main technique of the Ring ORAM to the internal
nodes of the proposed methods\footnotemark and we also show the performance of
these variants.
\footnotetext{Specifically, we modify the access procedure to access only one
block per bucket (instead of all blocks in the bucket) by permuting the
blocks in each bucket. For this technique to work, we need to introduce, for
each bucket, additional space dedicated only to dummy blocks, which is why
we cannot apply this technique to the leaves while maintaining succinctness.}
The table contains ``rigorous'' and ``aggressive'' parameter settings.
Rigorous parameters were derived from theoretical analysis with additional care
for constant factors.
The aggressive parameters for existing methods were taken from the experiments
in the original papers.
We chose the aggressive parameters for the proposed methods by simulation: we
simulated database scan (accessing addresses $1, 2, \dots, N$) for 100 times
and found some parameters for which the stash size after every scan was zero.
(Such use of scans is standard in the literature since \cref{Path ORAM Lemma 3}
implies that a scan maximizes the stash size.)
We emphasize that constructions with aggressive parameters lack rigorous
security and they are not suitable for fair comparison.
Unfortunately, we could not derive rigorous bounds for the second construction
(Theorem~\ref{second main theorem}) for reasonable sizes of $N$ since the
balls-into-bins analysis of Berenbrink et al.~\cite{Berenbrink00}, used in the
stash size analysis, requires a very large number of bins.
However, the simulation results indicate that the second construction works for
reasonable sizes of $N$.
\begin{table}
\centering
\caption{Performance comparison with concrete parameters. The symbol
$\dagger$ means the integration of Ring ORAM techniques. $N=2^{20}$,
$B=2^{10}$. $A$ and $S$ are parameters for the Ring ORAM. ($A$ specifies the
infrequency of \textsf{EvictPath} and $S$ is the space in each bucket
reserved for dummy blocks.) The cost for recursive calls and metadata
handling are relatively minor and not included. The stash overflow
probability is $<2^{-80}$ for rigorous settings. Aggressive settings do not
have security guarantees (stash size bounds) and, in particular, are not
suitable for fair comparison.}
\label{table: performance with concrete params}
\begin{tabular}{c c c | c c c}
\multicolumn{2}{c}{} &
\begin{tabular}[x]{@{}c@{}}Parameters\\$Z,L,M,A,S$\end{tabular} &
\begin{tabular}[x]{@{}c@{}}Extra server\\space\end{tabular} &
Bandwidth &
Stash size \\
\hline
\multirow{4}{*}{\rotatebox{90}{Rigorous}} &
\cite{Stefanov13} & 5,20,--,--,-- & $9N$ & 210 & 114 \\
& \cite{Ren15} & 5,19,--,4,6 & $10N$ & 109 & 63 \\
& Th.~\ref{main theorem} & 3,15,112,--,-- & $2.59N$ & 471 & 32 \\
& Th.~\ref{main theorem}$^\dagger$ & 5,15,112,4,7 & $2.91N$ & 253 & 64 \\
\hline
\multirow{6}{*}{\rotatebox{90}{Aggressive}} &
\cite{Stefanov13} & 4,19,--,--,-- & $3N$ & 160 & \\
& \cite{Ren15} & 5,19,--,4,6 & $7N$ & 145 & \\
& Th.~\ref{main theorem} & 4,15,36,--,-- & $.25N$ & 288 & \\
& Th.~\ref{main theorem}$^\dagger$ & 5,15,36,4,6 & $.46875N$ & 163 & \\
& Th.~\ref{second main theorem} & 3,16,14,--,-- & $.0625N$ & 248 & \\
& Th.~\ref{second main theorem}$^\dagger$ & 5,15,28,4,7 & $.25N$ & 194 & \\
\end{tabular}
\end{table}
\section{Conclusion} \label{section: conclusion}
ORAM is a multifaceted problem and recently, researchers have been recognizing
the importance of rethinking the relevance of multiple aspects of ORAM using
modern standards~\cite{Stefanov12, Bindschaedler15}.
In this paper, we provided another point of view and insight for this
exploration by introducing the notion of succinctness to ORAM and proposing
succinct ORAM constructions.
We think our methods are particularly suitable for the secure processor setting.
It is interesting to consider succinct constructions optimized for other
settings.
As we already mentioned, we could not derive non-asymptotic bounds for
\cref{second main theorem} (for reasonable sizes of $N$) since the analysis of
Berenbrink et al.~\cite{Berenbrink00} for heavily loaded case of two-choice
balls-into-bins game, on which we rely, requires a large number of bins.
However, simulation results suggest that \cref{second main theorem} does have a
significant impact in practice even for reasonably small $N$ and it is
desirable to close this gap between theory and practice.
One obvious approach is to fine-tune the analysis of Berenbrink et al.\ to get a
better (non-asymptotic) bound, but this seems difficult because of the
complexity of the analysis.
Talwar and Wieder gave an alternative simpler analysis~\cite{Talwar14} but the
resulting bound is not as tight as that of Berenbrink et al.
\section*{Acknowledgement}
This work was supported by JSPS KAKENHI Grant Numbers 17H01693 and 17K20023, and
JST CREST Grant Number JPMJCR1402.
We thank Paul Sheridan for helpful discussion.
\bibliographystyle{plain}
\subsection{Error detecting algorithms} \label{sec:error_detection}
In this section, we focus on mechanisms to numerically detect errors that have not been detected by the underlying system or middleware.
We have identified several techniques that allow us to (likely) notice the occurrence of an error at several layers of numerical algorithms.
Table~\ref{tab:detection_chart} gives an overview of some detection techniques and the algorithmic components or numerical methods where they are applicable.
\begin{table}
\centering
\caption{Numerical error detection: Overview of error detection techniques and numerical ingredients and methods where they are applied.
Note that we mark a method as applicable only if it is or can be used in the respective algorithm itself, not only at lower level functionality, i.e., we do not mark checksums for multigrid as checksums are only used in the BLAS 2/3 kernels used as inner loops or in the GS/Jac/SOR smoothers.}
\label{tab:detection_chart}
\begin{tabular}{lcccccc}
&
\rotatebox[origin=l]{90}{exceptions} & \rotatebox[origin=l]{90}{checksum} & \rotatebox[origin=l]{90}{constraints} & \rotatebox[origin=l]{90}{tech error} & \rotatebox[origin=l]{90}{multi resolution} & \rotatebox[origin=l]{90}{redundancy} \\
\midrule
BLAS 2/3 & $\times$ & $\times$ &&&& $\times$ \\
Direct Solvers & $\times$ & $\times$ &&&& $\times$ \\
Krylov & $\times$ && $\times$ & $\times$ && $\times$ \\
Multilevel / Multigrid & $\times$ &&& $\times$ & $\times$ & $\times$ \\
Domain Decomposition & $\times$ &&&&& $\times$ \\
GS/Jac/SOR & $\times$ &&& $\times$ && $\times$ \\
Nonlinear Systems & $\times$ &&& $\times$ && $\times$ \\
Time Stepping (ODEs) & $\times$ &&& $\times$ & $(\times)$ & $\times$ \\
PDEs & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ \\
Quadrature & $\times$ && $\times$ & $\times$ & $\times$ & $\times$ \\
\end{tabular}
\end{table}
\subsubsection{Exceptions}
Exceptions are a way a program signals that something went wrong during execution. We consider the case where exceptions are caused by data corruption that can, for example, lead to division by zero or out-of-range access.
Most programming languages support a way of handling exceptions.
The algorithm programmer can register an exception handler that gets called whenever an exception occurs. If the error is recoverable, the exception handler will specify how best to continue afterwards.
If the error is not recoverable, the program will be aborted.
Exceptions are a straightforward way to detect certain types of errors and can be applied to all numerical algorithms.
However, they obviously only catch a small subset of all possible errors, and it is not trivial to decide when to use exception handlers in the light of the trade-off between correctness, robustness and runtime efficiency.
For example, \cite{Engwer:2018:AHL} describes how to wrap ULFM failure notifications into C++ exceptions; a similar approach is taken in \cite{Altenbernd:2020:hpcse}.
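As a minimal illustration (our own sketch, assuming NumPy and a user-supplied iteration \texttt{step}), floating-point anomalies can be surfaced as exceptions and handled at the solver level:
\begin{verbatim}
import numpy as np

def guarded_iteration(step, x):
    # run one solver iteration; turn FP anomalies into a recovery path
    try:
        with np.errstate(divide='raise', invalid='raise', over='raise'):
            return step(x)
    except FloatingPointError:
        # recoverable error detected: fall back, e.g., restart from x
        return x
\end{verbatim}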
\subsubsection{Checksums} \label{sec:detect_checksum}
Checksums could be used at the hardware or middleware layer to detect errors, but here we will discuss checksums
as employed on the algorithmic layer where we have a more detailed knowledge about the existence of numerical or algorithmic invariants.
Checksum techniques have been used in various numerical algorithms. We list some examples below.
\noindent
\textbf{BLAS 2/3:}
Checksum encoding of matrices, introduced by Huang and Abraham~\cite{Huang1984}, requires (i) adding redundant data in some form (encoding), (ii) redesigning the algorithm to operate on the respective data structures (processing), and (iii) checking the encoded data for errors (detection). We ignore the recovery phase here and refer to Section~\ref{sec:error_aware_algorithms}. Checksums are used in FT-ScaLAPACK~\cite{Wu2014} for dense matrix operations such as MM, LU and QR factorization and more recently in hierarchical matrix multiplication~\cite{Austin2015}. Wu et al.\ give a good survey of checksum deployment in dense linear algebra~\cite{PanGDBTLCC2016}.
\noindent
\textbf{Gauss-Seidel/Jacobi/SOR and multigrid:}
In~\cite{Mishra:2003:Algorithm-Based}, checksums are used to detect errors in the Jacobi smoother, the restriction and interpolation operators of a multigrid method solving a two-dimensional Poisson equation.
\noindent
\textbf{Krylov subspace methods:}
Tao et al.\ propose a new checksum scheme using multiple checksum vectors for sparse matrix-vector multiplication, which is shown to be generally effective for several preconditioned Krylov iterative algorithms~\cite{TaoSKWLZKC2016}.
Also~\cite{Shantharam2011, agullo:hal-02495301} use checksums for protection within the conjugate gradient (CG) algorithm.
\noindent
\textbf{FFT:}
Checksums can also be used in \acrfull{fft}s, similarly to matrix-vector multiplication. Liang et al.~\cite{LiangCTLWLOLSC2017} develop a new hierarchical checksum scheme by exploiting the special structure of the discrete Fourier transform matrix and employ special checksum vectors. Checksums are applicable to many important kernels such as matrix-matrix multiplication, but are costly.
In addition, it can be difficult to specify a suitable threshold for \lq equality\rq\ in the presence of round-off errors.
For many numerical calculations such as scalar products, checksums are not applicable at all.
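For kernels where checksums do apply, the encode/process/detect pattern is simple; the following NumPy sketch (our own illustration of the Huang--Abraham scheme, with an assumed relative tolerance \texttt{tol} to account for round-off) protects a matrix-matrix product with a checksum row and column:
\begin{verbatim}
import numpy as np

def checksum_matmul(A, B, tol=1e-8):
    # encode: append a column-checksum row to A and a row-checksum column to B
    Ac = np.vstack([A, A.sum(axis=0)])
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])
    Cf = Ac @ Br                      # process the encoded operands
    C = Cf[:-1, :-1]
    # detect: the checksum row/column of Cf must match the sums of C
    row_ok = np.allclose(Cf[-1, :-1], C.sum(axis=0), rtol=tol)
    col_ok = np.allclose(Cf[:-1, -1], C.sum(axis=1), rtol=tol)
    if not (row_ok and col_ok):
        raise RuntimeError("checksum mismatch: corruption detected")
    return C
\end{verbatim}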
\subsubsection{Constraints}
In some applications, constraints for different types of variables are known. Examples are positivity constraints, conservation laws for physical quantities or known bounds for internal numerical variables.
\noindent
\textbf{Krylov subspace methods:}
Resilience was already of importance in the early days of digital computers. In the original \acrshort{pcg} paper~\cite{cg:52}, Hestenes and Stiefel noticed that the step length $\alpha$ is bounded above (respectively, below) by the reciprocal of the smallest eigenvalue (respectively, the reciprocal of the largest eigenvalue) of the matrix. The inequality involving the largest eigenvalue (for which in practice it may be cheaper to get an approximation) was used to equip \acrshort{pcg} with error detection capabilities in~\cite{agullo:hal-02495301}.
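The following sketch (our own illustration; \texttt{lambda\_max\_est} denotes an assumed upper bound on the largest eigenvalue) shows how this constraint equips an unpreconditioned CG iteration with a cheap detection test; in exact arithmetic $1/\lambda_{\max} \le \alpha_k \le 1/\lambda_{\min}$, so a violation indicates an error (in practice, a small tolerance for round-off should be added):
\begin{verbatim}
import numpy as np

def cg_with_alpha_check(A, b, lambda_max_est, tol=1e-10, maxit=1000):
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        if alpha < 1.0 / lambda_max_est:   # impossible in exact arithmetic
            raise RuntimeError(f"possible soft error at iteration {k}")
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            return x
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
\end{verbatim}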
\noindent
\textbf{Partial differential equations:}
Checking for bounds can come at minimal or extremely high cost, depending on whether extra information has to be computed (such as eigenvalues of matrices) or not. Reliability is, in general, an issue as only those errors leading to violation of these constraints can be detected. An example of the use of problem-informed constraints can be found in \cite{mycek:2017sisc}. In this work, the authors derive a priori bounds for the discrete solution of second-order elliptic \acrshort{pde}s in a domain decomposition setting. Specifically, they show that the bounds take into account the boundary conditions, are cheap to compute, general enough to apply to a wide variety of numerical methods such as finite elements or finite differences, and provide an effective way to handle synthetically generated faulty solutions.
\subsubsection{Technical error information}
\label{sec:tec_err_info}
In many numerical large scale applications, the main computational task involves the approximate computation of integrals, algebraic systems, systems of \acrshort{ode}s or \acrshort{pde}s. For all these problems, various types of error information such as residuals, differences between iterations, round-off error estimates and discretization error estimates can be used as indicators of errors either by their size or by monotonicity criteria. We give several examples from literature for different classes of numerical algorithms.
\noindent
\textbf{Krylov subspace methods:}
Round-off error bounds can be used in Krylov subspace methods. They fit in the general framework of round-off error analysis~\cite{Higham96} and have been considered in the context of Krylov subspace methods in finite precision arithmetic~\cite{Strakos13,meurant2006lanczos}.
Van der Vorst and Ye proposed a residual gap bound~\cite{Vorst2000} (a bound for the norm of the gap between the true and the computed residuals) based on round-off error analysis that was later used as a criterion for actual error detection in~\cite{agullo:hal-02495301} when bit flips occur. The detection of errors in Krylov methods via violation of orthogonality is proposed in~\cite{CHen2013}.
\noindent
\textbf{Multigrid:}
Calhoun et al.~\cite{Calhoun:2015:Towards} apply a residual/energy norm-based error detection for algebraic multigrid. They use two criteria: (i) the reduction of the residual norm as a weak criterion and (ii) the reduction of the quadratic form \[ E(\bm{x}) = \langle A\bm{x}, \bm{x} \rangle - 2 \langle \bm{x}, \bm{b} \rangle, \] when solving the linear system $A \bm{x} = \bm{b}$ for symmetric positive definite matrices.
The quadratic form $E$ calculated at level $i$ during the down-pass of a V-cycle should be larger than the energy calculated at level $i$ during the down-pass of the next V-cycle.
When using the full approximation scheme, residual norm reductions can also be verified at each level in the hierarchy of a multigrid cycle.
The structure of the full approximation scheme additionally provides smart recovery techniques utilizing its lower resolution approximations~\cite{altenbernd2016fault}.
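A simplified variant of criterion (ii), monitored at the finest level only, can be sketched as follows (our own illustration; \texttt{vcycle} is an assumed user-supplied multigrid cycle):
\begin{verbatim}
import numpy as np

def energy(A, b, x):
    # the quadratic form E(x) = <Ax, x> - 2 <x, b>
    return x @ (A @ x) - 2.0 * (x @ b)

def solve_with_energy_check(A, b, vcycle, x, maxit=50):
    E_prev = energy(A, b, x)
    for _ in range(maxit):
        x = vcycle(A, b, x)   # one V-cycle
        E = energy(A, b, x)
        if E > E_prev:        # E decreases for a convergent method on SPD A
            raise RuntimeError("energy increased: possible soft error")
        E_prev = E
    return x
\end{verbatim}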
\noindent
\textbf{Time-stepping:}
For iterative time-stepping with spectral deferred corrections, monitoring the residual of the iteration can be used to detect errors in the solution vectors~\cite{grout2017achieving}. In the context of parallel-in-time integration with parareal, consecutive iterates are considered in~\cite{nielsen2016fault} to detect errors in the solution vector. In~\cite{benson2015silent}, an auxiliary checking scheme in contrast to the original base scheme is used to detect and correct errors during implicit and explicit time-integration. Estimating the local truncation error with two different methods is used in~\cite{guhur2016lightweight} to implement a resilient, high-order Runge-Kutta method. This ``Hot Rod'' approach is then also used for error correction.
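In the same spirit, a minimal step-doubling detector (our own sketch; \texttt{step} is an assumed one-step integrator) compares one step of size $h$ against two steps of size $h/2$ and flags discrepancies far beyond the expected local truncation error:
\begin{verbatim}
import numpy as np

def suspicious_step(f, t, y, h, step, tol):
    # Richardson-style comparison of one h-step against two h/2-steps
    y_full = step(f, t, y, h)
    y_half = step(f, t + h / 2, step(f, t, y, h / 2), h / 2)
    return np.linalg.norm(y_full - y_half) > tol
\end{verbatim}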
\subsubsection{Multi-resolution}
Multi-resolution means that information is available at different resolution levels, in terms of spatial discretization (\acrshort{pde}), time discretization (\acrshort{ode} and \acrshort{pde}), order of discretization (\acrshort{pde} in space and time), matrix dimensions (numerical linear algebra, multigrid), frequencies, and so on.
This leads to a certain redundancy -- not an artificially introduced, but an inherently available one.
This redundancy can be used to detect discrepancies or anomalies and, hence, errors that could not be detected by the system. There are numerous examples for the mentioned problem classes, we outline one example in more detail here.
\noindent
\textbf{Sparse grids / Combination technique:}
Sparse grids \cite{bungartz04sparseGrids} are one particular class of multi-resolution methods.
There, via the use of hierarchical bases, certain structures often seen in $d$-dimensional data can be exploited to alleviate the curse of dimensionality, without a significant loss of accuracy.
Sparse grids have been successfully used in a wide range of problem classes where spatial discretization plays a role, such as interpolation \cite{Jakeman2011}, quadrature \cite{GerstnerDimAdapt,gerstnerQuad, Bungartz2003}, solvers for PDEs \cite{heene2018exahd, harding2014robust}, or machine learning tasks \cite{Garcke2006, garcke2007dimension, PeherstorferSGDE} (e.g., classification, regression, clustering, or density estimation).
One particular incarnation of sparse grid methods is the so-called combination technique \cite{griebel92CombiTechnique}.
There, based on an extrapolation-style approach, a linear combination of a specific set of full, but very coarse-grid solutions is used to get a sparse fine-grid solution.
The various coarse grid solutions can be obtained in a completely independent way, using (parallel) standard solvers.
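For illustration, in its classical two-dimensional form (notation ours; see \cite{griebel92CombiTechnique} for the general setting) the combination technique assembles the coarse-grid solutions $u_{l_1,l_2}$ as
\[
u^{(c)}_n = \sum_{l_1+l_2=n} u_{l_1,l_2} - \sum_{l_1+l_2=n-1} u_{l_1,l_2},
\]
so that every component solution enters with weight $\pm 1$, and dropping or replacing a single $u_{l_1,l_2}$ after a fault only requires adapting these combination coefficients.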
This opens the way to (1) a natural two-level parallelization and to (2) an easy and cheap detection of system undetected errors: Since we actually compute solutions for the same problem on different (i.e., differently discretized) grids anyway, we can use these to detect anomalies -- just by comparing the available solutions.
And the detection leads immediately to a mitigation strategy (see Section~\ref{sec:error_aware_case_studies}), since we can easily exchange coarse grids in case of errors, just by changing the combination pattern \cite{obersteiner2017highly, ali-ft-gene-hpcs-2015, ali2016complex, heene2016massively, harding2015fault, parra16SDC}.
Therefore, this is an example for a smart algorithm that is able to do both detection and mitigation.
Further examples are mentioned in Section~\ref{sec:tec_err_info}, as multi-resolution typically comes with corresponding error estimates based on differences between solutions at different resolution levels: multigrid and parallel time stepping.
\subsubsection{Redundancy}
Redundancy is a strategy for error detection that can be applied to all of the numerical algorithms mentioned in Table~\ref{tab:detection_chart}.
It covers two approaches.
In the first approach, computational resources may be replicated twice or thrice. Such instances are called \acrshort{dmr} \cite{Weaver2001AFaultTolerant,Iyer2005recent} or \acrshort{tmr} \cite{vonNeumann56Proba,Scholzel2007Reduced}.
In the second approach, the computations are repeated twice or thrice on the same resource \cite{austin1999DIVA,Vijaykumar02transient-faultrecovery}.
An advantage of this approach is the flexibility at the application level.
Note that the first approach costs more in space or resources, while the second approach costs more in time.
The redundancy based error detection technique described in \cite{benoit2017replication} relies on in-depth analysis of application and platform dependent parameters (such as the number of processors and checkpointing time) to formalise the process of both resource and computation replication.
It provides a closed-form formula for optimal period size, resource usage and overall efficiency.
Ainsworth et al.~\cite{ainsworth2017multigrid} use replication of fault-prone components as an error detection technique in a multigrid method.
Also error detection in the time stepping methods from~\cite{benson2015silent} mentioned in Section~\ref{sec:tec_err_info} can be interpreted as redundancy based error detection.
The main disadvantage of replication is its cost in terms of performance,
although recomputing only some instructions instead of the whole application lowers the time redundancy overhead~\cite{Oh2002Error}.
However, redundancy in some calculations should in particular be considered as a possible strategy for error detection as in modern supercomputers the cost of arithmetic operations tends to decrease compared to communication time.
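As a toy illustration of time redundancy at the level of a single function call (our own sketch; outputs must be hashable and exactly reproducible, so floating-point results would require a tolerance-based comparison instead):
\begin{verbatim}
from collections import Counter

def with_redundancy(compute, *args, copies=3):
    # repeat a computation and majority-vote (TMR for copies=3);
    # copies=2 (DMR) can only detect a mismatch, not correct it
    results = [compute(*args) for _ in range(copies)]
    value, count = Counter(results).most_common(1)[0]
    if count <= copies // 2:
        raise RuntimeError("no majority: error detected, not correctable")
    return value
\end{verbatim}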
\subsection{Error aware algorithms} \label{sec:error_aware_algorithms}
In this section, we look at error correction techniques within an application.
We assume that the application has been notified that part of the algorithm's data is corrupted or lost.
In that context, mitigation or containment actions have to be undertaken at the algorithmic design level, where the appropriate actions depend on the data detection granularity and how the notification mechanism was activated.
It is possible to design both lossy and lossless mitigation procedures that are tailored to the numerical algorithms under consideration.
In Section~\ref{sec:numalgos:erroraware:relatedwork} we give a brief literature overview of ideas that can be used to complement numerical mitigation or containment procedures.
Then, in Section~\ref{sec:error_aware_case_studies}
we offer a more detailed discussion of some recent successful attempts by presenting a few case studies in the context of the solution of \acrfull{pde}.
\subsubsection{Error aware algorithms for the solution of linear systems}\label{sec:numalgos:erroraware:relatedwork}
A wealth of literature already exists on various, mostly isolated ideas and approaches that have appeared over time.
Checkpoint-restart methods are the most generic approaches towards resilience for a broad spectrum of applications, see Section~\ref{sec:multilevelCPR} for an introduction.
We first describe a general mental model to design resilient numerical algorithms independent of actual machine specifications that lead to what is nowadays referred to as \acrfull{lflr} techniques. Then we move to \lq classical\rq\ algorithm-based fault tolerance, which originally was developed to detect and correct single bit flips on systolic architectures devoted to basic matrix computations, see Section~\ref{sec:detect_checksum}. Finally, we discuss a range of ideas and techniques not covered by the case studies below.
\paragraph*{Local-failure local-recovery}
As far back as a decade ago, an abstract framework was developed to separate algorithm design from unclear machine specifications, see also Section~\ref{sec:logging}. The idea of a selective reliability model as introduced by Hoemmen~\cite{Hoemmen2011, bridges2012faulttolerant} is machine-oblivious and highly suitable for algorithm design for machines with different levels of (memory) reliability. It has led to the concept of \acrfull{lflr}~\cite{teranishi2014lflr}. This model provides application developers with the ability to recover locally and continue application execution when a process is lost. In ~\cite{teranishi2014lflr}, Teranishi and Heroux have implemented this framework on top of MPI-ULFM (Section~\ref{sec:ULFM}) and analyzed its performance when a failure occurs during the solution of a linear system of equations.
\paragraph*{Original algorithm-based fault tolerance with checksums}
The term \acrfull{abft} was originally coined in conjunction with protecting matrix operations with checksums to handle bit flips~\cite{huang:1984}, mostly assuming exact arithmetic calculation for detection and mitigation.
(See Section~\ref{sec:detect_checksum} for a more detailed discussion on checksums).
The main drawback of checksums is that only limited error patterns can be corrected and its robust practical implementation in finite precision arithmetic can be complicated to tune to account for round-off errors.
A second drawback is that the checksum encoding, detection and recovery methods are specific to a particular calculation.
A new scheme needs to be designed and proved mathematically for each new operation.
A further drawback is to tolerate more errors, more encoded data is needed, which may be costly both in memory and in computing time.
\acrshort{abft} concepts have been extended to process failures for a wide range of matrix operations both for detection and mitigation purposes \cite{kim:1996,du:2012,bosilca:2009,chen:2008,jia:2013} and general communication patterns \cite{kabir:2016}.
\acrshort{abft} has also recently been proposed for parallel stencil-based operations to accurately detect and correct silent data corruptions~\cite{Cavelan:2019a}.
In these scenarios the general strategy is a combination of checkpointing and replication of checksums.
In-memory checkpointing \cite{jia:2013} can be used to improve the performance.
The main advantage of these methods is their low overhead and high scalability.
In practice, the significance of a bit flip strongly depends on its location, i.e., which bit in the floating point representation is affected.
Classical \acrshort{abft} has been extended to take into account floating point effects in the fault detection (checksums in finite precision) as well as in the fault correction and to recover from undetected errors (bit flips) in all positions without additional overhead \cite{moldaschl.etal:2017}.
\paragraph*{Iterative linear solvers}
Iterative linear solvers based on fixed point iteration schemes are, in general, examples of error oblivious algorithms, as described in Section \ref{sec:error_oblivious}.
The convergence history of the scaled residual norm observed within the iterative scheme often resembles the curves displayed in Figure~\ref{fig:erroraware:typicalbehaviour}.
In this case the iterative scheme is a multigrid method, as in \cite{Goeddeke:2015:FTF, huber2016resilience}.
The peaks in the residual occur after data has been lost
and when the iterations are allowed to restart with
some form of replacement of the lost data.
In the simplest case, the lost data may just be re-initialized with
the value of zero,
and recovery techniques to obtain better solutions are discussed in
Section~\ref{sec:error_aware_case_studies}.
It can be seen that, depending on when in the course of the iteration a small portion of the approximate solution suffers from an error,
we observe a delay in convergence, which translates directly into an increase in runtime.
In the case where errors appear too often, the solver might not recover, and other mitigation actions might have to be considered.
\begin{figure}[htb]
\centering
\includegraphics[width=0.32\linewidth]{information_loss_early.pdf}\hfill
\includegraphics[width=0.32\linewidth]{information_loss_late.pdf}\hfill
\includegraphics[width=0.32\linewidth]{information_loss_multiple.pdf}
\caption{Convergence history of the residual norm as a function of the iteration count for three examples of information loss. From left to right: loss occurring early, late, and at multiple times.}
\label{fig:erroraware:typicalbehaviour}
\end{figure}
Explicit recovery at the algorithmic level from undetected errors has
been studied for iterative linear solvers~\cite{pachajoa2017resilience}.
In contrast to restarting, a number of algorithm based recovery strategies have been proposed,
including approximate or heuristic interpolation methods~\cite{agullo:hal-01323192}.
An approach of exactly recovering the state of the iterative solver before the node failure has been investigated for the \acrfull{pcg} and related methods~\cite{pachajoa.etal:2018, levonyak.etal:2019}.
This also includes studying scenarios with multiple simultaneous node failures~\cite{pachajoa.etal:2019} and scenarios where no replacement nodes are available~\cite{pachajoa:2019}.
\paragraph*{Approximated recovery and restart in sparse numerical linear algebra}
For matrix computations, eigensolvers or basic kernels such as iterative linear system solvers,
some recovery ideas rely on forming a small dimensional linear algebra problem where the inputs are the still valid data and the unknowns are the lost/corrupted ones.
The outcome of this procedure is subsequently used to replace the lost/corrupted data and the numerical algorithm is somehow started again from that meaningful initial guess.
The recovery procedure is tailored to the actual numerical algorithm.
As an example, consider a fixed point iteration scheme for a linear system and suppose the lost data are entries of the iterate vector, the most dynamically evolving data in this computational scheme.
Matrix entries of the iteration scheme related to the lost data, as well as some neighbouring entries,
serve to build the left-hand side of a linear problem (either a linear system or a least-square problem) while the right-hand side is built from valid data.
The solution of this small problem is then used to replace the corresponding lost entries of the iterate vector.
The complete, updated vector is taken as a new initial guess when restarting the fixed point iteration.
If the data is not corrupted too often the classical convergence theory still applies and because the new initial guess incorporates updates from the calculations performed before the error was detected, the global convergence rate is not strongly affected.
The method described in adaptive recovery techniques for extreme scale multigrid in Section~\ref{sec:error_aware_case_studies}
is an example application of this technique.
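A dense-matrix sketch of this recovery step (our own illustration; in practice $A$ is sparse and only the rows associated with the lost entries are touched) reads:
\begin{verbatim}
import numpy as np

def local_recovery(A, b, x, lost):
    # rebuild lost entries of the iterate x by a local solve:
    # A[I,I] x_I = b_I - A[I,J] x_J, with I the lost and J the valid indices
    I = np.asarray(lost)
    J = np.setdiff1d(np.arange(len(b)), I)
    x = x.copy()
    rhs = b[I] - A[np.ix_(I, J)] @ x[J]
    x[I] = np.linalg.solve(A[np.ix_(I, I)], rhs)
    return x  # new initial guess for restarting the fixed point iteration
\end{verbatim}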
For numerical schemes based on nested subspace search,
such as Krylov subspace methods, closely related techniques have been successfully applied both for eigensolvers and linear solvers that further exploit the sparsity structure of the matrices to reduce the computational cost associated with the recovery procedure.
At the cost of a light checkpoint performed once when starting the linear solver (mostly the matrix and the right-hand side vector in case of linear system solution) this mitigation approach has no overhead
if the data is not corrupted during the solution computation.
We refer to~\cite{Langou:2004:fault:recovery,agullo:hal-01323192,agullo:hal-01347793} for some illustrations on those numerical remedies in a parallel distributed memory framework and to~\cite{jcmA:15} where these ideas are exploited for a lower granularity of data loss in a task-based runtime system.
See Section~\ref{sec:infrastructure} for references relevant to task-based runtime systems.
We also note that these ideas can be extended to hybrid iterative/direct numerical schemes, that have a domain decomposition flavor, where the recovery procedure can be enriched with additional features of the parallel numerical scheme such as redundancy or properties of the preconditioners~\cite{agullo:hal-01256316}. They can also be extended to the time domain in the context of multilevel parallel-in-time integration techniques~\cite{speck2017toward}.
\subsubsection{Error aware algorithms for the solution of partial differential equations}\label{sec:error_aware_case_studies}
The ideas introduced above in Section~\ref{sec:numalgos:erroraware:relatedwork} are application agnostic but naturally apply to linear systems arising from the discretization of a \acrshort{pde}.
In that latter case, more information from the underlying \acrshort{pde} can be closely tailored to intrinsic features of solvers such as multigrid.
In this section we discuss some research works on mitigation and containment that exploit the properties of \acrshort{pde}s to aid the recovery techniques.
We also present some mitigation processes that are only relevant in the \acrshort{pde} setting.
\paragraph*{Adaptive recovery techniques for extreme scale multigrid}
Some of the most efficient solvers of \acrshort{pde}, such as
parallel geometric multigrid methods \cite{hulsemann2006parallel,gmeiner2015towards},
can be based on the exchange of ghost layers in a non-overlapping domain partitioning.
This automatically leads to a
redundancy in interface data
between subdomains that
in turn permits the design of an efficient two-step recovery strategy for iterative solvers.
This is of particular interest in large-scale parallel
computations.
When each subdomain is large, then the ratio between the data on its surface and
the volume data in its interior becomes small.
When a processor fails, the information within one or several subdomains is lost.
For the recovery and continued solution,
the redundant ghost layer information is used
in a first step, to
recover locally either
Dirichlet- or Neumann-type data for the subdomains.
The global problem can then be formulated in two partitions,
the outer healthy subdomain
and the inner faulty subdomain, where the recovery must
reconstruct the lost data.
Both subproblems must be bi-directionally
coupled via the interface and the corresponding ghost layers of unknowns.
After re-initialization, the corrupted and reinitialized data could
pollute the solution globally, meaning that the locally increased
error in the faulty domain can spread
globally and thus also affect the healthy subdomain.
In order to avoid this pollution,
the communication between the healthy and faulty sub-problems is interrupted during the second step of the recovery process.
In the second step, we continue with the original iterative solver restricted to the healthy sub-problem and select a suitable solver for the faulty one.
After some number of asynchronous iteration steps both sub-problems are reconnected, see \cite{huber2016resilience}, and the global iterative solver strategy is resumed.
Note that the reconnecting step is mandatory for the convergence of the iterative solver.
The tearing step separating the subdomains
is mandatory to preserve the accuracy of the dynamic data in the healthy sub-problem, and without this step the corrupted data from the faulty sub-domain pollutes the global solution.
Of critical importance for the performance of the method are the accuracy of the faulty sub-problem solver at re-connection time and the time spent in the recovery mode.
In the faulty domain, the lost data can be initialized with 0, or, alternatively,
compressed checkpointed data can be used as described in the following section on
compression techniques for checkpoint-restart.
Note, however, that with straightforward compression techniques, compressed checkpoint data will only be useful
to recover the low frequency components in the faulty domain.
If the local recovery is performed with multigrid, then the low frequencies are
in any case cheap to recover, as compared to the cost of recomputing the lost
high frequency components.
The accuracy within a multigrid strategy can be easily controlled by a hierarchical sum of weighted residuals \cite{rude1994error}.
The overhead cost for the a-posteriori error indicator is quite small compared to the overall solver cost. Having an estimate for the algebraic error in both sub-problems at hand, the re-connection step is determined automatically.
To speed up the time which is spent in the recovery, a so-called \lq superman strategy\rq\ is applied \cite{huber2016resilience}, see also Figure
\ref{fig:superman} for an illustration.
More resources compared to the situation before the fault are allocated to the faulty sub-problem.
A short recovery phase in combination with carefully selected re-coupling criteria then guarantees a highly efficient fault-tolerant solver.
Of special interest is a massively parallel multigrid method as the base solver.
In combination with the tearing and interconnecting approach for the recovery, it results in a hybrid approach.
In case of a Stokes-type system, yielding after discretization a saddle point problem, the strategy can either be applied on the positive definite Schur complement for the pressure or, as it was done in \cite{huber2019adaptive}, on the indefinite velocity-pressure system.
In that case an all-at-once multigrid method with an Uzawa-type smoother acting on both solution components turns out to be most efficient, see \cite{drzisga2018block}.
Numerical and algorithmic studies, including multiple faults and large-scale problems with more than $5 \cdot 10^{11}$ degrees of freedom on more than $245\,000$ cores,
have been presented in \cite{huber2016resilience,huber2019adaptive}.
The automatic re-coupling strategy is found to be robust with respect to fault location and size, and it also handles multiple faults.
In many scenarios a complete recovery can be achieved
with almost no increase in runtime and while maintaining
excellent parallel efficiency.
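To make the interplay of tearing, local recovery, and re-coupling concrete, the following Python sketch replays the two-step strategy on a 1D Poisson model problem, with plain Jacobi sweeps standing in for multigrid; all names, tolerances, and the fault model are illustrative assumptions and not the interface of \cite{huber2016resilience}.

\begin{verbatim}
import numpy as np

# Model problem: -u'' = f on [0,1], zero boundary; Jacobi stands in
# for the multigrid solver of the actual method.
n = 101
h = 1.0 / (n - 1)
f = np.ones(n)
u = np.zeros(n)
mid = n // 2                     # interface between the two subdomains

def jacobi_sweep(u, lo, hi):     # one sweep on interior indices lo..hi-1
    v = u.copy()
    v[lo:hi] = 0.5 * (u[lo-1:hi-1] + u[lo+1:hi+1] + h*h*f[lo:hi])
    return v

def local_residual(u, lo, hi):   # max-norm residual on lo..hi-1
    r = f[lo:hi] - (2*u[lo:hi] - u[lo-1:hi-1] - u[lo+1:hi+1]) / h**2
    return np.max(np.abs(r))

for it in range(5000):           # fault-free phase: global solver
    u = jacobi_sweep(u, 1, n - 1)

u_ghost = u[mid]                 # redundant ghost-layer copy survives
u[mid+1:] = 0.0                  # fault: right subdomain is lost

# Step 1: restore Dirichlet-type interface data from the ghost layer.
u[mid] = u_ghost

# Step 2: torn phase -- no communication; iterate only on the faulty
# part (where extra "superman" resources may be assigned) until a
# cheap error indicator permits re-coupling.
while local_residual(u, mid + 1, n - 1) > 1e-2:
    u = jacobi_sweep(u, mid + 1, n - 1)

for it in range(100):            # re-coupled: resume the global solver
    u = jacobi_sweep(u, 1, n - 1)
\end{verbatim}

In the actual method, the local solve is a multigrid iteration, and the re-coupling criterion compares hierarchical estimates of the algebraic error in both sub-problems rather than a plain residual threshold.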
\begin{figure}[tb]
\centering
\begin{minipage}{0.3\textwidth}
\includegraphics[width=\textwidth]{Felswand_02.jpg}
\end{minipage}
\hfill
\begin{minipage}{0.3\textwidth}
\includegraphics[width=\textwidth]{Felswand_03_A.jpg}
\end{minipage}
\hfill
\begin{minipage}{0.3\textwidth}
\includegraphics[width=\textwidth]{Felswand_04_A.jpg}
\end{minipage}
\caption{Illustration of the steps in the adaptive recovery technique for extreme scale multigrid.
Left: A detectable error occurred.
Middle: The communication between the healthy and faulty sub-domains is interrupted.
Right: The original iterative solver restricted to the healthy domain continues while another suitable solver is asynchronously used in the faulty domain.
Once the solution in the faulty domain reaches a certain accuracy, the communication between the domains is re-enabled.}
\label{fig:superman}
\end{figure}
\paragraph*{Adaptive mesh refinement, load balancing, and application level checkpointing}
\acrfull{amr} functionality and load balancing require data linearization and transfer functionality similar to what is needed for application level checkpointing.
This is exploited in the waLBerla framework \cite{schornbaum2016massively,schornbaum2018extreme,bauer2020walberla}
that features an object oriented design for composing coupled multiphysics simulation software.
waLBerla's load balancing is based on routines to transfer
simulation data between processors
so that functionality to serialize, pack, send, and unpack all relevant data is
already available as a by-product of the \acrshort{amr}
functionality.
Note that the waLBerla software architecture imposes this structure for Eulerian mesh based data
as well as for Lagrangian particle-based models and it canonically
extends to coupled Eulerian-Lagrangian multiphysics models.
For this to work transparently, the routines for migrating simulation data
must be suitably encapsulated.
Then this functionality can be used to write user level checkpoints
either on disk or in memory.
Note that writing checkpoints
will inevitably imply overheads in memory consumption and communication time,
but that restoring a checkpoint is cheap, since it initially only requires re-activating
the redundantly stored data.
This is especially true when in-memory checkpointing is used as explored and analyzed in~\cite{kohl2019scalable}.
The simple restoration of checkpointed data may of course lead to load imbalance,
but the functionality to redistribute load is also available as part of the
parallel \acrshort{amr} functionality.
In this sense, user-level checkpointing can be based in a natural,
efficient, and straightforward way on the functionality of parallel
\acrshort{amr} algorithms combined with load balancing functionality.
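The following Python sketch illustrates this reuse in miniature: the same pack/unpack (serialization) routines that block migration for load balancing requires double as the write/restore primitives of an in-memory checkpoint. The \texttt{Block} class and its interface are purely illustrative assumptions and do not reflect the waLBerla API.

\begin{verbatim}
import copy

class Block:                      # one subdomain block of simulation data
    def __init__(self, data):
        self.data = data
    def pack(self):               # same routine AMR uses to migrate blocks
        return copy.deepcopy(self.data)
    def unpack(self, payload):
        self.data = copy.deepcopy(payload)

class InMemoryCheckpoint:
    def __init__(self):
        self.store = {}
    def write(self, blocks):      # in a distributed run, the payloads
        self.store = {i: b.pack() # would be sent to buddy processes
                      for i, b in enumerate(blocks)}
    def restore(self, blocks):
        for i, b in enumerate(blocks):
            b.unpack(self.store[i])

blocks = [Block([1.0, 2.0]), Block([3.0, 4.0])]
ckpt = InMemoryCheckpoint()
ckpt.write(blocks)                # light: reuses the AMR serialization
blocks[1].data = None             # simulate data loss on one block
ckpt.restore(blocks)              # cheap: re-activate the stored payloads
\end{verbatim}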
\paragraph*{Compression techniques to accelerate checkpoint-restart for Krylov-MG solvers}
Compressed checkpointing is one possibility for improving the efficiency of classical checkpoint-restart schemes, both in terms of the overhead of generating the checkpoints and of recovering the data if an error occurs. The added efficiency mainly comes from a reduced memory footprint, which is beneficial for communication and storage.
It is particularly efficient if the compression method is tailored to the target application.
As an example, in-memory compressed checkpoints combined with \acrshort{lflr} (see Section~\ref{sec:numalgos:erroraware:relatedwork}) for iterative linear solvers, e.g., multigrid preconditioners in Krylov schemes, are described below.
\emph{Lossy Compression:} As already mentioned in Section~\ref{sec:error_aware_case_studies}, paragraph \lq Approximated recovery and restart\rq, initially only the dynamic data, i.e., the approximate solution, is protected.
Lossy compression allows the compression accuracy to be balanced
against the discretization error of the assembled system and the numerical error within the solver.
Specifically in~\cite{Altenbernd:2020:hpcse}, the SZ library~\cite{di:2016,tao:2017,liang:2018} is employed, which prefers, by construction, structured data ideally associated with a structured grid.
Another important feature is that the compression accuracy can be prescribed and adapted to the situation.
Unfortunately, a higher compression accuracy usually leads to a lower compression rate and higher compression time, which is crucial in terms of resilience overhead.
Note that multigrid can be interpreted as a lossy compression technique in itself, with a number of mathematical peculiarities that need consideration~\cite{Goeddeke:2015:FTF}.
Multigrid algorithms use a hierarchy of grids to solve linear systems in an asymptotically optimal way.
This hierarchy can be used to restrict, i.e., lossily interpolate, the iterate from fine to coarse grids.
Such a lower-resolution representation of the iterate can then be stored as a compressed checkpoint.
Conversely, the multigrid prolongation (coarse-to-fine grid interpolation) operator is used to decompress the data. With only small additional computations, the multigrid hierarchy can also be used for error detection.
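A minimal Python sketch of this idea on a 1D grid is given below: full-weighting restriction acts as the lossy compressor and linear interpolation as the decompressor. The transfer operators and the number of coarsening steps are illustrative choices, not those of \cite{Altenbernd:2020:hpcse}.

\begin{verbatim}
import numpy as np

def restrict(v):                  # full weighting: 2m+1 -> m+1 points
    return np.concatenate((
        [v[0]],
        0.25*v[1:-2:2] + 0.5*v[2:-1:2] + 0.25*v[3::2],
        [v[-1]]))

def prolong(v):                   # linear interpolation: m+1 -> 2m+1
    w = np.empty(2*len(v) - 1)
    w[::2] = v
    w[1::2] = 0.5*(v[:-1] + v[1:])
    return w

x = np.linspace(0, 1, 65)
u = np.sin(np.pi*x) + 0.01*np.sin(32*np.pi*x)  # smooth + rough content

ckpt = restrict(restrict(u))      # compressed checkpoint: 17 values
u_rec = prolong(prolong(ckpt))    # decompression after a fault

# Only the low-frequency content survives; the high frequencies must
# be recomputed -- which multigrid does cheaply anyway.
print(np.max(np.abs(u - u_rec)))
\end{verbatim}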
\emph{Recovery:} Several recovery techniques can be devised~\cite{Altenbernd:2020:hpcse}.
As a baseline approach checkpoint-restart is mimicked and the global iterate is simply replaced with its decompressed representation, independently of the compression strategy.
The second proposed approach follows the \acrshort{lflr} strategy and re-initializes only the local data that is lost on faulty computing nodes by using checkpoint data stored on neighbouring computing nodes.
Contrary to the first approach, this is mostly local and only needs minimal communication to receive a remotely stored backup.
In particular, the recovery procedure itself does not involve the participation of other processes except those sending the checkpointed data.
As a worst-case fallback when the backup data is not sufficient, a third recovery approach is established, which is still mostly local. Here, an auxiliary problem is solved iteratively with boundary data from the neighbouring computing nodes.
This is similar to the adaptive
recovery
techniques for extreme scale multigrid from above or the
approximated recovery and restart of Section~\ref{sec:numalgos:erroraware:relatedwork}.
An auxiliary problem is constructed, either by domain decomposition overlap or the operator structure, and solved with an initial guess based on the checkpoint data to accelerate the iterative recovery phase.
Experiments show that this approach can almost always restore the convergence speed of the fault-free scenario independently of the used backup technique, only the number of additional local recovery iterations varies.
For more details, we refer to \cite{Altenbernd:2020:hpcse}.
\paragraph*{Resilience with sparse grids}
Resilience can be added on various abstraction levels of the algorithm.
For PDE problems one traditionally adds resilience on the level of linear algebra operations, on the solver level for linear/non-linear equations, or on the time-stepping algorithm.
However, this may in some cases not be coarse-grained enough to minimize the overhead of resilience techniques, especially when errors occur rarely.
In \cite{obersteiner2017highly,heene2016massively, heene2018exahd, parra2015towards, parra16SDC, harding2015fault} the authors demonstrate a fault-tolerant framework for solving high-dimensional PDEs that applies fault tolerance on top of the individual PDE solver.
The framework boosts the scalability of black-box PDE solvers while making it simultaneously resilient to faults by applying the sparse grid combination technique.
In this technique the PDE simulation is distributed over many coarse grids, which can be processed in parallel.
At regular intervals the results of these grids are combined
to obtain the final sparse grid result.
In the presence of faults the affected grids can be neglected and an alternative combination scheme is calculated via an optimization routine.
If too many grids are lost, the last combination result serves as an in-memory checkpoint to recompute the required grids.
In \cite{obersteiner2017highly} it is shown that this lossy recovery provides very good results even with high error frequencies.
At the same time the parallel efficiency is only slightly affected.
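As a minimal sketch of this mechanism, the Python fragment below combines bilinear interpolants of a known function on anisotropic component grids -- a stand-in for the actual PDE solves -- and, when a component grid is lost, falls back to the next-lower combination that avoids it. A real framework instead re-optimizes the combination coefficients; all levels and the fault are illustrative assumptions.

\begin{verbatim}
import numpy as np

n = 4                             # target level of the combination
X = np.linspace(0, 1, 2**n + 1)   # common full grid

def solve_on_grid(l1, l2):        # fake "solve": sample and interpolate
    x = np.linspace(0, 1, 2**l1 + 1)
    y = np.linspace(0, 1, 2**l2 + 1)
    u = np.outer(np.sin(np.pi*x), np.sin(np.pi*y))
    ux = np.array([np.interp(X, x, col) for col in u.T]).T  # along x
    return np.array([np.interp(X, y, row) for row in ux])   # along y

# Classical 2D combination: grids with l1+l2 = n enter with +1,
# grids with l1+l2 = n-1 with -1.
grids = {(l, n-l): +1 for l in range(1, n)}
grids.update({(l, n-1-l): -1 for l in range(1, n-1)})

faulty = {(1, 3)}                 # component grids lost to a fault
if faulty & set(grids):
    # fall back to the next-lower combination, which avoids the lost grid
    grids = {(l, n-1-l): +1 for l in range(1, n-1)}
    grids.update({(l, n-2-l): -1 for l in range(1, n-2)})

u_c = sum(c * solve_on_grid(*g) for g, c in grids.items())
print(np.max(np.abs(u_c - solve_on_grid(n, n))))  # combination error
\end{verbatim}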
\paragraph*{Adaptive mesh refinement}
Adaptive refinement techniques in combination with finite element methods are well established for fault-free computations.
In terms of fault tolerance, this means that in addition to the assembled linear system, the geometric mesh structure must be protected.
This requires the reconstruction of the data structures containing the mesh hierarchy.
For the use of multigrid or multilevel methods, we also need to recover multiple levels of adaptive grid refinement after a fault has occurred.
The recovery process must take into account the intra-grid as well as the inter-grid data dependencies.
We refer to~\cite{stals2019} for a parallel adaptive multigrid method that uses sophisticated dynamic data structures to store a nested sequence of meshes and the iteratively evolving solution.
Stals demonstrates that it is possible to implement a fault recovery procedure that builds on the original parallel adaptive multigrid refinement algorithm~\cite{stals1995} in the case of a fail-stop fault.
It is assumed that a copy of the coarsest grid can always be accessed after a fault has occurred, i.e., it is stored off the processor.
The challenge in recovering an adaptively refined grid is that the mesh distribution changes during any load balancing procedures, i.e., the local information that was available during the original refinement process will have been modified or removed.
Nevertheless it is demonstrated that the neighbouring healthy processors contain enough intact information so that the necessary structure can be recovered to pass into the refinement routine.
In the case of uniform refinement, the original multilevel grid is recovered.
In the case of an adaptively refined grid, enough of the structure is recovered to re-establish the correct communication pattern allowing the solution process to run to completion, but potentially with reduced accuracy.
The neighbouring healthy processors will only contain enough information to guide the refinement around the edge of the recovered subgrid.
Further refinement within the interior of the recovered subgrid may be required to improve the accuracy of the solution.
These techniques were implemented with minimal disruption to the original code.
An example of one of the few necessary modifications is that in the original code, communication was used to ensure that the elements were refined in the appropriate order to avoid degenerate grids.
In the resilient version of the code that communication had to be removed, as the refinement was restricted to the faulty processor.
\subsection{Error oblivious algorithms}\label{sec:error_oblivious}
In this section, we give examples of algorithms that are error oblivious in the sense that they can recover, without assistance, from errors that do not occur too frequently.
For example, many fixed point iterative solvers are able to execute to completion if, e.g., a bit flip error occurs in the solution vector.
However, every error likely increases the execution time of the algorithm.
We thus define two quality criteria for error oblivious algorithms and use them to assess the examples in the remainder of this section: (i) correctness, and (ii) efficiency in terms of execution time.
Finding an algorithm that fulfills (i) and can also compete against error aware algorithms as described in Section~\ref{sec:error_aware_algorithms} remains an open problem.
Error oblivious usually means that an error slowly \lq leaves the system\rq\ during several iterative sweeps over the data. Error mitigation in error aware algorithms, on the other hand, requires specific measures to correct the error and can only be applied once the error has been detected on a hardware, middleware or algorithmic layer; in return, it removes the disturbance of the calculation immediately.
We do not expect error oblivious algorithms to be impervious to all types of errors. An iterative method may not be error oblivious if the error changes the matrix entries. This concept is defined as selective reliability, see Section~\ref{sec:numalgos:erroraware:relatedwork}.
\subsubsection{Gossip-based methods}
A potentially interesting alternative in large-scale parallel environments that does not require any explicit error detection mechanisms utilizes gossip-based methods and their inherent resilience properties.
Such algorithms by nature build up redundancy in the system and can thus efficiently recover automatically from various types of faults/errors without any need to explicitly detect them. In particular, Gansterer et al. have studied and extended the resilience of gossip-based aggregation and reduction methods \cite{casas.etal:2019, niederbrucker.gansterer:2013, gansterer.etal:2011}. Based on these building blocks, they have developed and analyzed several more complex resilient numerical algorithms, such as orthogonalization methods~\cite{gansterer.etal:2013, gansterer.etal:2011}, eigensolvers~\cite{strakova.etal:2013}, and least squares solvers~\cite{prikopa.etal:2020}.
While the strong resilience properties and execution-time robustness of these approaches are promising, there is a certain price in terms of basic runtime compared to classical deterministic numerical high performance algorithms. It remains to be investigated whether they can be competitive in a fault-prone, but otherwise classical system with global view and centralized control. Their competitiveness can be expected to increase significantly if some of these classical properties have to be weakened at the extreme scale.
\subsubsection{Fixed-point methods}
We view fixed-point methods as methods that converge globally when certain conditions are satisfied.
For example, the Jacobi iteration will converge for any initial guess if the matrix is diagonally dominant.
Fixed-point based iterative methods are by design resilient to bit flips. However, the convergence delay can be significant. Anzt et al.~\cite{10.1145/2832080.2832081,ANZT2019100583} propose techniques improving the cost-robustness with little overhead.
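The following self-contained Python sketch illustrates the basic effect on a diagonally dominant system: a large perturbation injected into the iterate -- a crude stand-in for a bit flip -- is contracted away by subsequent Jacobi sweeps, at the price of extra iterations. The matrix construction, injection site, and magnitude are arbitrary illustrations.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n))
A += np.diag(2.0 * np.abs(A).sum(axis=1))  # strict diagonal dominance
b = rng.standard_normal(n)
D = np.diag(A)

x = np.zeros(n)
for it in range(400):
    x = (b - (A @ x - D * x)) / D          # one Jacobi sweep
    if it == 100:
        x[7] += 1.0e6                      # "bit flip" in the iterate

print(np.linalg.norm(A @ x - b))           # small despite the error
\end{verbatim}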
A class of numerical algorithms that by design have properties attractive for resilience are asynchronous iterative methods~\cite{Baudet1978,spiteri1986,Bertsekas1983,bertsi1989,ebsmg1996,Szyld.98b,Frommer.Szyld.00,spiteri2020}.
In order to avoid misunderstandings, we point out that
this class of methods
is unrelated to the idea of asynchronous dynamic load balancing~\cite{doi:10.1002/nme.6237}
as addressed in Section~\ref{sec:runtime}.
Instead, asynchronous iterative methods, stemming from the concept of
chaotic iterations~\cite{chazan},
are fixed-point methods that seek the solution of a problem by independently updating subdomains -- which can be subdomains in the geometric sense, subsets, or individual components of the solution approximation -- according to some fixed-point linear or nonlinear iterative scheme.
A particularity of the asynchronous methods is that the independent updates neither adhere to a specific update order, nor synchronize in terms of a handshake with other updates, but still converge globally in the asymptotic sense.
In particular, these methods are robust with respect to some subdomains being updated at a much lower pace, as each update just uses the most recent non-local information available. In that sense, asynchronous solvers can perform well in unreliable environments where messages can be dropped or processes can become unresponsive for a limited time.
Also, in cases where messages are corrupted (and corruption can be detected),
an asynchronous solver can simply drop such a message.
In cases where processes remain unresponsive, a mechanism is still needed to recover that process and its state, but the remaining processes can continue computing unchanged.
Therefore, asynchronous methods are, to some extent, inherently error oblivious.
With the increasing cost of global synchronizations, and the attractive properties concerning fine-grained parallelization and resilience against communication delays and errors, asynchronous methods have gained attention in particular for numerical computations~\cite{Szyld.01}.
Chow et al.~\cite{DBLP:journals/siamsc/ChowP15,DBLP:conf/supercomputer/ChowAD15} developed an asynchronous algorithm for generating incomplete factorizations, Coleman et al.~\cite{DBLP:conf/springsim/ColemanSC17} further improved this algorithm by employing measures that reduce the runtime overhead when encountering errors.
More general is the idea of asynchronously updating subdomains in Schwarz decompositions.
In particular asynchronous restricted additive Schwarz methods and asynchronous optimized Schwarz methods have been identified to combine algorithm-inherent resilience with scalability on pre-exascale hardware architectures~\cite{DBLP:journals/pc/YamazakiCBD19,Magoules.Szyld.Venet.17,Glusa.etal.19,ElHaddad.Garay.Magoules.Szyld.20,Garay.Magoules.Szyld.18.3}.
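A toy Python version of such a scheme is sketched below: two \lq processes\rq\ own one block of unknowns each and exchange boundary information through a mailbox; updates occur in random order, and messages are randomly dropped (as they would be when corruption is detected), yet the block-Jacobi fixed point is still reached. The matrix, schedule, and drop rate are illustrative assumptions.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 40
A = rng.standard_normal((n, n))
A += np.diag(2.0 * np.abs(A).sum(axis=1))  # strict diagonal dominance
b = rng.standard_normal(n)
D = np.diag(A)
half = n // 2

x = np.zeros(n)
mailbox = {0: x[half:].copy(),   # process 0's view of process 1's block
           1: x[:half].copy()}   # process 1's view of process 0's block

for step in range(4000):
    p = int(rng.integers(2))     # whichever process happens to be ready
    if p == 0:
        xv = np.concatenate([x[:half], mailbox[0]])
        x[:half] = (b[:half] - (A[:half] @ xv - D[:half]*x[:half])) / D[:half]
    else:
        xv = np.concatenate([mailbox[1], x[half:]])
        x[half:] = (b[half:] - (A[half:] @ xv - D[half:]*x[half:])) / D[half:]
    # message exchange: dropped messages (e.g. detected corruption)
    # simply leave the most recent intact copy in place
    if rng.random() > 0.3:
        mailbox[0] = x[half:].copy()
        mailbox[1] = x[:half].copy()

print(np.linalg.norm(A @ x - b))  # converged despite drops and disorder
\end{verbatim}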
Independently, asynchronous multilevel methods have been proposed and analyzed
under the name Fully Adaptive Multigrid method \cite{rude1993fully}.
Here the multigrid smoothing process is executed asynchronously so that it
can be employed for concurrent operations on different levels of the mesh hierarchy.
The iteration is executed in a Southwell style \cite{Southwell1949}
and is controlled by efficient hierarchical error estimators \cite{rude1994error}.
The parallel implementation \cite{rude1993mathematical} will
automatically correct errors.
More recently, asynchronous methods have been proposed for nonlinear multi-splitting~\cite{Szyld.Xu.00} and eigenvalue computations like Google's Pagerank algorithm~\cite{Kollias.Gallopoulos.Szyld.06}.
The idea of asynchronously solving coarse-grid error correction equations has also been investigated, leading to an asynchronous multigrid algorithm~\cite{WolfsonPou2019AsynchronousMM}.
While case studies reveal attractive properties, these newly developed asynchronous iterative methods (such as asynchronous multigrid) are not fixed-point iterations, and developing a convergence theory for those algorithms remains a challenge.
\subsubsection{Krylov subspace solvers}
A comprehensive overview of the use of selective reliability with Krylov methods in the presence of bit flips is given in James Elliott's PhD thesis~\cite{Elliott2015ResilientIL}.
Elliott evaluates the CG and GMRES solvers with an algebraic multigrid preconditioner,
see also~\cite{Altenbernd:2020:hpcse} for a more recent study.
Coleman et al.~\cite{DBLP:conf/springsim/ColemanSC17} consider Krylov subspace solvers in combination with the incomplete factorization algorithm ParILUT.
In~\cite{6877347} Elliot et al.\ investigate the effect of bit flips on the convergence of GMRES and propose strategies for minimizing the numerical impact.
The authors of \cite{BenacchioEtAl2020} present a monotonicity-based fault detection and correction procedure for a Generalized Conjugate Gradient Krylov solve and perform tests with manual fault injection. While the solver manages to converge even with large amounts of corrupted data, the basic recovery procedure speeds up convergence with minimal detection and correction overhead.
In \cite{Sao2013} the authors use a slightly different terminology and call their method numerically self-stabilizing, a term which originates in the context of distributed systems~\cite{dijkstra1974self}.
They introduce two error oblivious iterative linear solvers: one for steepest descent and one for conjugate gradient. In the latter case, they consider necessary conditions for conjugate gradient to converge. Those conditions are borrowed from nonlinear conjugate gradient~\cite{zoutendijk1970nonlinear} and are maintained in a correction step (typically performed once every ten iterations). The correction step does not explicitly correct errors, but re-computes quantities such as the residual at regular intervals. Therefore, we classify these methods as error oblivious instead of error aware.
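The essence of such a correction step can be sketched in a few lines of Python; here steepest descent is used rather than conjugate gradient, and the recomputation period, fault, and matrix are illustrative assumptions rather than the setting of \cite{Sao2013}.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n = 100
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)       # symmetric positive definite
b = rng.standard_normal(n)

x = np.zeros(n)
r = b.copy()
for it in range(300):
    if it % 10 == 0:
        r = b - A @ x             # correction step: recompute residual
    Ar = A @ r
    alpha = (r @ r) / (r @ Ar)
    x += alpha * r
    r -= alpha * Ar               # recurrence a silent error may corrupt
    if it == 50:
        r[3] += 100.0             # inject such an error

print(np.linalg.norm(b - A @ x))  # converged: the recomputation healed it
\end{verbatim}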
\subsubsection{Domain decomposition}
In~\cite{griebel2020stochastic} Griebel and Oswald use probabilistic analysis to model the effect of errors on the convergence of the classical overlapping Schwarz algorithm.
They conclude that this method does indeed converge in the presence of errors.
Glusa et al.~\cite{DBLP:journals/corr/abs-1808-08172} mention that asynchronous domain decomposition methods are by definition fault-tolerant.
In \cite{Rizzi2016ftxs, RIZZI2018parcomp, Morris2016scala}, the authors discuss resiliency of a task-based domain decomposition preconditioner for elliptic PDEs.
By leveraging the domain decomposition approach, the problem is reformulated as a sampling problem, followed by a regression-based solution update. The regression is formulated and implemented such that it is oblivious to corrupted samples.
The authors combine this algorithmic approach with a server-client implementation based on ULFM, see Section~\ref{sec:ULFM}.
They show promising results of this approach in terms of resiliency to missing tasks, corrupted data and hardware failure.
\subsubsection{Time stepping}
In~\cite{grout2017achieving}, iterative time-stepping using spectral deferred corrections is shown to be error oblivious at the cost of more iterations for the affected time step. With error estimators in place, time-integration techniques like Runge-Kutta methods will repeat the calculation of a time step with a smaller step size if errors in the solution vectors are relevant~\cite{chen2013comprehensive}. This type of algorithm is resilient against errors in the solution vector of the new time step. Repeating the new time step with a reduced step size is not the optimal measure in the case of an error, where repeating the step with the same step size would be more efficient, but it does lead to correct results.
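A compact Python sketch of this behavior uses an explicit Euler/Heun pair with a standard step size controller: a one-off error injected into a stage value inflates the local error estimate, the step is rejected and repeated, and the integration recovers. The fault model, tolerance, and controller constants are illustrative. Note how, as remarked above, the controller also shrinks the step size unnecessarily after the rejection.

\begin{verbatim}
import numpy as np

def f(t, y):                        # test problem y' = -y
    return -y

def step(t, y, h, broken=False):
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    if broken:
        k2 = k2 + 1.0e3             # silent error in a stage value
    y_lo = y + h * k1               # order 1
    y_hi = y + 0.5 * h * (k1 + k2)  # order 2
    return y_hi, abs(y_hi - y_lo)   # solution and local error estimate

t, y, h, tol = 0.0, 1.0, 0.1, 1e-4
inject = True                       # inject a single fault after t = 2
while t < 5.0:
    broken = inject and t >= 2.0
    y_new, err = step(t, y, h, broken)
    if broken:
        inject = False              # one-off fault
    if err <= tol:
        t, y = t + h, y_new         # accept the step
    # else: reject and repeat (nothing to do; h is adapted below)
    h = min(0.1, 0.9 * h * (tol / max(err, 1e-16)) ** 0.5)

print(y, np.exp(-5.0))              # close to the exact solution
\end{verbatim}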
\subsection{Systems in support of resilient algorithms}\label{sec:future-systems}
We propose that resiliency will only be obtained by a multilayered approach incorporating operating systems, file systems, communication, programming models, algorithms, applications and education.
For the layers covered by infrastructure, the goal is to increase system scale and delivered performance while keeping the rate of detectable errors exposed to the upper, algorithm-based layers constant.
We refer the reader to the recently published report by Radojkovic et al.~\cite{EU_HPC} for an overview of the needs of the next generation \acrshort{hpc} systems.
\subsubsection{Error correcting codes}
Poulos et al.~\cite{PoulosFTXS2018} propose hardware \acrshort{ecc} assistance that can pass error syndrome information through to an application, which can use it to fix detected errors. When an \acrshort{ecc} hardware error occurs that results in a \acrfull{due}, the \acrshort{ecc} hardware generates a syndrome as a byproduct of the error detection. For many \acrshort{ecc} schemes, a syndrome that corresponds to a \acrshort{due} can be used to generate a list of possible corrections, one of which is the original uncorrupted data. In this work, the authors show that this set is relatively small, meaning that the set of potential values an application has to search for the correct answer (before corruption) is also small. They also study the error value distribution and show that for certain classes of problems it can be easy to identify obviously wrong answers. As an application study, \cite{PoulosFTXS2018} corrects a hydrodynamics code using conservation laws and averages of neighboring cells. This approach requires changes to the hardware error reporting techniques and modifications to the operating system to determine which application observed the \acrshort{due} and to pass it to an interrupt handler.
\subsubsection{Improving checkpoint/restart}
Independent of any additions, changes or new developments in the algorithmic or the system area, checkpoint/restart will remain a necessary component for any system. For one, no other technique can provide the needed resilience against full system outages; furthermore, checkpoint/restart is also needed by developers to deal with limited job execution times, possible migration between systems, and debugging at large scale.
\paragraph*{Improving classical checkpoint/restart for homogeneous systems}
Given the continued necessity of checkpoint/restart, it is critical to further optimize, enhance and support efficient checkpoint/restart mechanisms---even on classical, homogeneous systems---and to provide users with library-based solutions for core checkpoint/restart functionality. In particular, the following avenues should be pursued to optimize checkpoint/restart.
\begin{itemize}
\item Use additional algorithmic techniques to be able to reduce checkpoint frequency.
\item Reduce data to be written to disk by eliminating redundancy and possibly compressing checkpoint information.
Note that suitable data compression will typically require user-level knowledge;
hence, suitable interfaces must be provided.
\item Overlap/Offload checkpoint operations to allow for asynchronous checkpoint/restart operations.
\item Integrate checkpoint/restart with novel programming approaches to minimize checkpointable state.
\item Keep the restart requirements local to the neighbouring nodes of the failed node.
\item Localize checkpoint data on the node itself or on nearby nodes.
This could be supported by local non-volatile memory
serving as the target for checkpoint data.
While this has the potential to reduce communication, as it avoids remote data transfers, it may require additional hardware support to retrieve data from non-functional nodes, e.g., by accessing data through fast JTAG-like interfaces.
\item Use in-memory checkpointing.
\item Exploit user-level knowledge for serializing, packing, compressing data,
see e.g.~how existing \acrshort{amr} functionality
\cite{kohl2019scalable} can be exploited for efficient checkpointing in Section~\ref{sec:soa-correction-incremental}.
\end{itemize}
\paragraph*{Checkpoint/Restart for heterogeneous systems}
In addition to classical checkpoint/restart for homogeneous systems, node-local checkpoint/restart support for heterogeneous systems will help containing error and failure propagation. Such support may be provided transparently to the application by the underlying infrastructure, such as \acrshort{gpu} drivers or task-based environments, or exposed in the programming model, such as OpenMP Offload \cite{diaz2018evaluating}.
\subsubsection{Scheduler and resource management}
Support for resilience, especially at the workflow-level, has a direct impact on resource management in \acrshort{hpc} systems and hence requires new developments in this area as well.
\paragraph*{Node-level parallelism} With increasing node-level parallelism, the impact of \acrshort{os} noise (typically caused by unpredictable interrupts) becomes even more important. Therefore, dedicated node-level resources are needed to exclusively run the \acrshort{os} and minimize the impact of \acrshort{os} noise on the multi-threaded application running on the other cores.
\paragraph*{Adaptive system and application load balancing} The batch scheduler needs to adaptively balance the system load onto the available resources via seamless application migration, while the application needs to adapt to the capabilities of the newly allocated resources, if different from the original allocation, without incurring performance penalties. The former has typically been implemented via checkpointing and process migration~\cite{malleability:2007}. The latter has typically been implemented for
applications that can adjust their granularity, e.g. from finer to coarser, depending on resource availability either triggered by the application or the system~\cite{malleability-invasive:2015}. When exposing and expressing parallelism in applications, in addition to accounting for and matching the multiple levels of hardware parallelism (nodes, sockets, cores), the decomposition granularity needs to be flexible to support evolvability and malleability and allow for adaptive load balancing at the application and system levels.
\paragraph*{Adaptive resource management} The batch scheduler, in conjunction with the distributed runtime system employed by the application (e.g., MPI, Charm++, HPX), needs to support resource errors/failures and recover from them without terminating the applications in the process. This approach should work both with rigid and moldable applications as well as with evolving and malleable applications.
\subsection{Programming models with inherent resiliency support}
\label{sec:future-pms}
Certain applications and algorithms may naturally be resilient against errors. This may make them natural candidates for asynchronous parallel execution (via asynchronous many-task programming). While this mitigates the challenges associated with bulk synchronous parallel execution, asynchronous parallel execution may, in the presence of silent errors, influence the convergence rate of the numerical algorithms and might lead to incorrect results.
Programming model and runtime support for resilience can offer transparent handling of errors and failures or can assist the application in handling them. Consistent programming model support for resilience based on realistic error/failure models is needed to properly handle such events with low overhead. Higher-level abstractions for programming resilient applications are needed to help with error/failure handling complexities and to offer reuse of concepts and codes across applications.
\subsection{Future directions for the solution of partial differential equations}\label{sec:future-grid-based}
In this section, we focus on discretizations for linear and non-linear partial differential equations as well as solvers for the resulting discrete and sparse systems of equations. We introduce a list of algorithmic properties that we found contribute, or can contribute, to the resilience of the algorithms described in Section~\ref{sec:algorithms}.
Table~\ref{tab:grid-based-properties} lists these properties and indicates where we found relevant examples of how they can foster resilience for either linear or non-linear solvers or for spatial or time discretization.
In the following subsections we describe these examples in more detail and highlight the several (mutually related) properties that could be of interest in the context of resilient algorithms.
\begin{table}
\centering
\caption{Properties of numerical algorithms fostering or helping resilience}\label{tab:grid-based-properties}
\begin{tabular}{l|| cc}
categories & solvers & discretization \\
\hline \hline
redundancy & $\times$ & $\times$ \\
replication & $\times$ & \\ \hline
hierarchical methods & $\times$ & $\times$ \\
mixed precision & $\times$ & $\times$ \\ \hline
error control & $\times$ & $\times$ \\ \hline
locality-emphasizing schemes & & $\times$ \\
asynchronous methods & $\times$ & $\times$ \\
embarrassingly parallel & $\times$ & \\ \hline
stochastic > deterministic & & $\times$ \\ \hline
iterative vs direct solvers & $\times$ & \\ \hline
matrix-free / low memory footprint & $\times$ & $\times$ \\
\end{tabular}
\end{table}
\subsubsection{Redundancy and replication}
\label{sec:future-gb-redundancy}
A failure that is not fixed by the system (hardware and middleware) typically results in a loss or corruption of data.
To tackle this problem, redundancy techniques can be used to detect and recover from data corruption and data loss. The performance of these algorithms is usually measured by the memory and computational overhead they entail, the detection rate of errors, the rate of false positives they achieve, and the accuracy of the recovery. Optimizing these performance indicators should be a main concern for future algorithm design.

One existing class of algorithms that apply redundancy are multiresolution techniques such as multigrid and the sparse grid combination technique described in Section \ref{sec:error_aware_algorithms}. They inherently add redundancy through their hierarchical structure. Sparse grid combination techniques calculate the same solution on different anisotropic grids. The coefficients of the combination of the component grids can be recalculated if one or more nodes are lost due to faults. This redundancy of the component grids allows the algorithm to obtain an alternative approximation of the solution. However, if a component grid is distributed over too many nodes, then the approximation will fail if a fault occurs on any one of those nodes.

Another class of algorithms adds redundancy through recomputation with different models and configurations, as in ensemble or multifidelity techniques. A more straightforward approach is to directly add redundancy through replication of certain algorithmic paths, cf. the following subsection on recalculation techniques.
Depending on the underlying architecture, replication can be a competitive option to increase robustness against detected and undetected errors.
If computation speed significantly outpaces memory access and communication, each operation can be executed multiple times while the data is still accessible in the RAM. This can be used for redundancy-based sanity checks of low-level operations or even for checksum-like approaches.
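A classical instance of such a checksum-like sanity check is the Huang--Abraham-style verification of a matrix--vector product, sketched below in Python; the tolerance and the use of a single column-sum checksum are illustrative simplifications.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
colsum = A.sum(axis=0)            # checksum row e^T A, computed once

def checked_matvec(A, colsum, x, tol=1e-8):
    y = A @ x
    # in exact arithmetic sum(y) == (e^T A) x; a mismatch beyond
    # rounding signals corruption, so recompute (or roll back)
    if abs(y.sum() - colsum @ x) > tol * np.linalg.norm(y):
        raise RuntimeError("checksum mismatch: possible silent error")
    return y

x = rng.standard_normal(100)
y = checked_matvec(A, colsum, x)  # passes in the fault-free case
\end{verbatim}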
Overlapping data in parallel algorithms can serve as a starting point for mitigation, albeit not for detection. In the case studies explored in Section \ref{sec:error_aware_case_studies}, these ideas are applied to elliptic PDEs, though an extension to other models should be feasible. Furthermore, by increasing the ghost layer size and thereby adding extra redundancy, further reconstruction options become possible. This could already be taken into account during the domain partitioning process.
\subsubsection{Hierarchy and mixed precision}
\label{sec:future-gb-hierarchy}
Hierarchical discretizations have proven to be advantageous in various respects. Related notions are multi-resolution or multi-level discretizations, but also (recursive) sub-structuring in the engineering nomenclature of the \acrfull{fem}.
Built into the hierarchy are problem-inherent information and structures that are well-suited for modern hierarchy-based solvers.
In \acrshort{fem}, for example, hierarchical bases carry information about both location and frequency, which leads to a
special built-in redundancy that can be
exploited for error detection (see Section \ref{sec:error_detection}).
Therefore, from a resilience perspective, hierarchy should be a core paradigm for discretization design.
This applies irrespectively of whether the hierarchical bases are formulated in the spatial ($h$) or the order ($p$) sense.
From a solver perspective, multigrid methods for elliptic and parabolic \acrshort{pde} problems are relevant approaches towards resilient numerical algorithms. They inherently act on different granularities, representations, scales, and levels and can be used to quantify differences between these levels.
For local recovery, local multigrid methods are highly efficient, especially
when they can be accelerated with the superman strategy \cite{huber2016resilience}.
Additionally, the low-resolution duplicates can be used for approximate recovery or for a minimal rollback, such as re-applying the smoother on a specific level of a multigrid scheme.
Detection of errors within multigrid is often possible due to algebraic relations or on the basis of hierarchical, multigrid-inherent error estimates \cite{rude1994error, huber2019adaptive,altenbernd2016fault} that hold within such schemes. As stated in Section \ref{sec:future-gb-redundancy}, the inherent redundancy incorporated in these algorithms is also beneficial.
Mixed-precision arithmetic is typically used within the numerical solver to speed up computations. However, the discretization can also provide the flexibility to store data at varying precision. Examples are hierarchical approaches such as hierarchical bases, where a function value is stored only as a hierarchical surplus, or wavelets in multiresolution analysis. In both cases, contributions of higher levels typically require less accuracy, as only the most significant bits contribute to the overall point values.
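The following Python sketch illustrates the storage side of this idea in 1D: the hierarchical surpluses of a smooth function decay with the level, so the fine-level coefficients can be kept in half precision with little loss upon reconstruction. The hat-function hierarchization and the cut-off level are illustrative choices.

\begin{verbatim}
import numpy as np

def surpluses(u):
    """Hierarchical surpluses per level for nodal values u on a grid
    of 2^L + 1 points with zero boundary values."""
    L = int(np.log2(len(u) - 1))
    s = {}
    for lev in range(L, 0, -1):
        step = 2 ** (L - lev)
        idx = np.arange(step, len(u) - 1, 2 * step)  # points new on lev
        s[lev] = u[idx] - 0.5 * (u[idx - step] + u[idx + step])
    return s

def reconstruct(s, L):
    u = np.zeros(2 ** L + 1)
    for lev in range(1, L + 1):
        step = 2 ** (L - lev)
        idx = np.arange(step, len(u) - 1, 2 * step)
        u[idx] = 0.5 * (u[idx - step] + u[idx + step]) + s[lev]
    return u

L = 10
x = np.linspace(0, 1, 2 ** L + 1)
u = np.sin(np.pi * x)
s = surpluses(u)
for lev in range(6, L + 1):            # fine levels: reduced precision
    s[lev] = s[lev].astype(np.float16)
print(np.max(np.abs(u - reconstruct(s, L))))  # small reconstruction error
\end{verbatim}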
\subsubsection{Error control} \label{sec:future-gb-errorcontrol}
For many numerical methods, a wide range of classical a priori and a posteriori error estimation techniques are available, see among many others \cite{bank1993posteriori, rude1994error, quarteroni2008numerical, ainsworth2011posteriori, karniadakis2013spectral, lambert:1991,hairer:2008}, which constitute the basis of many adaptive numerical algorithms.
Adaptive time discretization methods are the state of the art for \acrshort{ode} solvers. Local time step adaptation is feasible in the framework of so-called local time stepping or multirate approaches, where different components of the system can have different time step sizes, see \cite{gear:1984, savcenco:2007, seny2014efficient, fok:2015, sandu2019class, delpopolo:2019, bonaventura:2020a}, which are however still far from mainstream for most applications. For \acrshort{pde} solvers, local spatial adaptivity techniques are also very common~\cite{bangerth2012algorithms,bangerth2013adaptive}, but their incorporation in operational applications is still a research topic, see e.g. \cite{piggott2009anisotropic, berger2011geoclaw, leveque2011tsunami, muller2013comparison, tumolo:2015} for developments concerning
oceanography and numerical weather forecasting.
The error estimates on which all these methods rely also constitute the basis of an error detection mechanism, since some undetected errors, like bit flips in significant floating point digits, will result in errors exceeding the allowed tolerances.
To some extent, these techniques are also examples of \acrshort{abft} or error oblivious approaches, since bit flips and other silent errors occurring during the computation of the solution at the next time step or on a refined mesh could be automatically corrected by the repeated computations triggered by the error threshold violation. Furthermore, silent errors in the data at the current time or mesh level could be identified by the failure of the time step or mesh refinement to correct the error.
Combined with other \acrshort{abft} strategies, adaptive discretization strategies based on error estimators can be a powerful and so far rather underrated tool for protecting a simulation from undetected errors in the solution vectors. On the other hand, error estimators should not be used as a black box for resiliency purposes. Indeed, errors can lead to severe over-resolution or, potentially, even under-resolution in space or time and the error estimators themselves could be affected by undetected errors.
As seen in Section \ref{sec:tec_err_info}, some iterative solvers for the solution of linear systems have invariants, such as monotonicity for Krylov solvers.
These properties can be put to good use in devising resilience strategies, for example activating an additional restart of the Arnoldi procedure as soon as an increase in the residual norm is observed.
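As a minimal illustration of such an invariant-based strategy, the Python sketch below uses the guaranteed energy decrease of gradient descent on a symmetric positive definite system -- a simpler stand-in for the residual monotonicity of the Krylov solvers discussed above -- to detect a fault and roll back to the last trusted iterate. The fault, thresholds, and rollback reaction are illustrative assumptions.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n = 80
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)         # symmetric positive definite
b = rng.standard_normal(n)

def energy(v):                      # monotone invariant of the iteration
    return 0.5 * v @ (A @ v) - b @ v

x = np.zeros(n)
trusted, e_old = x.copy(), energy(x)
for it in range(200):
    r = b - A @ x
    x = x + (r @ r) / (r @ (A @ r)) * r   # gradient descent step
    if it == 60:
        x[5] += 1.0e3                     # silent error in the iterate
    e = energy(x)
    if e > e_old:                   # invariant violated: error detected
        x, e = trusted.copy(), e_old      # react: roll back
    else:
        trusted, e_old = x.copy(), e      # remember the trusted state

print(np.linalg.norm(b - A @ x))
\end{verbatim}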
The idea of interval arithmetic is to compute bounds of intervals that always contain the exact result \cite{Alefeld1983,Kulisch2002}.
Probabilistic methods for rounding error estimation \cite{vignes2004DSA,parker2001Monte,Frechtling:2015,denis2016,Verrou:2018} require several executions of arithmetic operations with different perturbations or different rounding modes (for instance three executions for Discrete Stochastic Arithmetic \cite{Eberhart2015REC}).
With both approaches, the comparison of several computed results enables one to control rounding errors (or detect and mitigate actually wrong results).
\subsubsection{Locality, asynchronicity, and embarrassing parallelism}
\label{sec:future-gb-locality-async-embpar}
One important aspect of resilient algorithms is error confinement as global dependencies propagate errors to other processors and complicate recovery. Locality-emphasizing numerical algorithms achieve this by limiting dependencies to local areas or completely removing them. Consequently, error mitigation can be limited to a local subdomain. Typical examples for these schemes are domain decomposition, which splits the domain into several subareas, and classical discretization schemes such as finite elements, finite differences and finite volumes.
As mentioned in Section \ref{sec:numalgos:erroraware:relatedwork}, domain decomposition schemes such as additive Schwarz methods, substructuring-inspired FETI \cite{Farhatmethodfiniteelement1991},
or the fully adaptive multigrid method
\cite{rude1993fully} are naturally asynchronous and resilient to message loss.
In this context, we use the term asynchronous primarily in the sense of reducing the time synchronicity in parallel computations -- from communication-avoiding schemes via a reduction of synchronization points up to vastly decoupled schemes.
Using this inherent property, a failure in a subdomain would result in a message loss that does not hinder convergence in other subdomains, because a global wait for a message update and synchronization are not necessary.
In addition, asynchronous methods may better adapt to heterogeneous processors and networks than their synchronous counterparts as it has been shown in the context of Grid computing~\cite{myBCC05c,Chau2014}.
Both the localized and the asynchronous approaches achieve their impact through a decoupling of computations. Going further in this direction leads to embarrassingly or nearly embarrassingly parallel algorithms. These represent a group of algorithms where it is relatively easy to decouple subproblems in time or space. The subproblems can therefore be calculated completely independently, and errors do not propagate to other subproblems. Examples of such methods are Monte Carlo simulations and computations with the sparse grid combination technique. Since it is expected that only a few tasks will encounter errors and the scheduler automatically balances the load, the overall execution time does not suffer too much. Future algorithmic design should therefore aim at increasing asynchronicity and locality to move towards embarrassingly parallel problems.
\subsubsection{Stochastic}
\label{sec:future-gb-stochastic}
Stochastic methods can be superior to deterministic methods when it comes to resilience. Stochastic methods do not require the program to take a deterministic path; faulty parts can be neglected or easily exchanged for other results. A popular example are Monte Carlo methods, where we sample randomly in the computational domain and can simply neglect failed samples. Ensemble methods are examples where different instances or models of a concrete problem setting are computed. Even if one of these computations fails, the ensemble computation can still return a -- maybe slightly less accurate -- result. Stochastic elements can therefore help future algorithm design to reduce the dependence on specific results of the computation. These methods, however, need to be evaluated not just by highlighting their resilience properties, but also by taking into account the cost of a single run: if a single run is expensive to complete, simply discarding it might be impractical.
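The Monte Carlo case is easily made concrete. In the Python sketch below, samples whose (hypothetical) tasks failed are simply discarded, and the estimate of $\int_0^1 \sin(\pi x)\,dx = 2/\pi$ remains unbiased, with only a slightly increased variance; the i.i.d.\ task-failure model is an illustrative assumption.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
N = 100_000
xs = rng.random(N)
samples = np.sin(np.pi * xs)      # one integrand evaluation per task

ok = rng.random(N) > 0.02         # ~2% of the tasks failed
estimate = samples[ok].mean()     # average over the surviving samples
print(estimate, 2 / np.pi)        # close to the exact integral
\end{verbatim}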
\subsubsection{Iterative methods} \label{sec:future-gb-iterative}
Iterative solvers may be viewed as inherently more robust than direct solvers because they do not compute their solution using a pre-defined sequence of numerical operations as direct solvers typically do. Instead, by their nature, they perform a sequence of operations to update and improve their current approximation. If an error is encountered during the computation, the chance that the iteration removes this error, or at least its effect, is higher than in a direct solver. Especially fixed-point-based methods (domain decomposition, relaxation, \ldots) may be viewed as inherently resilient, as they converge to the correct solution independently of the initial state (global convergence). Some errors may have little influence on the convergence speed and can thus be safely ignored. In other cases, a restart -- optionally with recovery techniques -- may be employed to ensure both resilience and efficiency in terms of runtime.
\subsubsection{Low memory footprint -- matrix-free}
\label{sec:future-gb-matrix-free}
The classical approach of representing linear operators as sparse matrices
produces large amounts of static data which have to be restored upon failure.
Checkpoint-restart approaches then incur a high memory cost, naturally a multiple of the storage needed for the solution vector. Algorithmic alternatives to checkpoint-restart require possibly complicated or costly re-assembly.
Matrix-free methods do not represent the operators as static data in the first place.
Therefore, large sparse matrix data structures do not have to be restored upon failure as they are computed on the fly anyway.
Extreme-scale applications will benefit from matrix-free approaches due to their low memory footprint, both in terms of runtime (due to the high cost of memory access) and in terms of higher limits for the overall problem size \cite{bauer2017two,bauer2018stencil}.
In addition to saving memory and thereby reducing the risk of memory corruption, matrix-free methods can also be combined with automatic code generation \cite{Lengauer:2020:ExaStencils} in a stencil-based approach, i.e., for finite difference methods on uniform structured grids.
In such cases, the matrix entries may be \lq hard wired\rq\ into code, such as 5-point stencils for Laplace's equation.
Automatic code generation provides a means to increase resiliency in the code generator or domain-specific language and thus facilitates resilience-aware software development.
For finite element methods, one can use local assembly kernels
\cite{bauer2019large}.
Here, the trade-off between computation, storage, and, in the future, resilience is particularly relevant for higher-order elements.
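For illustration, a matrix-free application of the hard-wired 5-point stencil might look as follows in Python; the grid layout (a ghost frame holding zero Dirichlet data) is an illustrative convention.

\begin{verbatim}
import numpy as np

def apply_laplacian(u, h):
    """y = A u for the 5-point stencil; u has shape (n+2, n+2) with
    boundary values stored in the ghost frame."""
    y = np.zeros_like(u)
    y[1:-1, 1:-1] = (4.0 * u[1:-1, 1:-1]
                     - u[:-2, 1:-1] - u[2:, 1:-1]
                     - u[1:-1, :-2] - u[1:-1, 2:]) / h**2
    return y

n, h = 64, 1.0 / 65
u = np.zeros((n + 2, n + 2))
u[1:-1, 1:-1] = np.random.default_rng(6).random((n, n))
y = apply_laplacian(u, h)   # no sparse matrix stored -- or restored
\end{verbatim}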
\subsection{The final mile: towards a resilient ecosystem} \label{sec:future-ecosystem}
The future directions described above will provide critical enhancements towards providing resilient computation for numerical simulations. Alone, however, they are insufficient, as they must be embedded in the larger ecosystem and in the efforts to make that ecosystem support such novel resilience approaches. This requires another set of crucial developments.
\subsubsection{Tools to support resilience software development}
Developers will need the right tools to support their algorithmic efforts. These tools, as they exist today, are often designed without faults and errors in mind and, therefore, do not sufficiently support the development of resilient systems. In particular, we identified three areas in which enhanced tool support for resiliency is needed: a) introspection to help track errors and failures along with their root causes, b) validation through controlled fault scenarios to enable targeted testing of new error mitigation features, and c) transformation to transparently add error and failure checks into codes.
\paragraph*{Tools for introspection}
Introspection is critical to ensuring early error detection and the timely activation of correction and mitigation mechanisms throughout the various layers of the software ecosystem.
\textit{System Monitoring:}
Knowing the health state of a system requires monitoring it and understanding its behavior. Future work needs to focus on scalable system monitoring, real-time analysis of system monitoring data, and autonomous decision making on corrective actions for self-aware resilient systems. In order to gain a deeper understanding, the types of monitored data should be homogenized across systems and sites, and, if possible, sanitized logs should be made available to the community.
\textit{Application and Data Structures Monitoring:} Applications need to automatically monitor their performance and correctness with the use of tools. The tools can be developed at the abstraction level, at the compiler level, or at the runtime level.
\paragraph*{Tools for validation}
Currently, there are no standard tools to test the correctness and performance of resilient algorithms under undetected errors and faults. This is due to a lack of fault injection tools that reflect realistic situations. DeBardeleben et~al.~\cite{FSEFI} have developed a hardware error simulator tool to understand the behavior of numerical algorithms under faulty hardware with great accuracy, but this approach cannot evaluate the execution time of resilient algorithms at scale. Vendors provide fault injection
tools~\cite{4709308,DBLP:conf/ispass/HariTSKE17} with better execution efficiency, compromising the accuracy of the hardware behavior. Compiler approaches or other in-house error injections~\cite{calhoun14flipit,georgakoudis2017refine} allow the program to execute as efficiently as the original binary, but the correctness is further compromised. There are also tools that can analyze an application's vulnerability very quickly but do not actually produce the application's faulty output. One such technique, DiSCvar~\cite{discvar}, uses algorithmic differentiation to expose how changes to each variable impact output results. Such techniques are very fast, but since they do not produce the corrupted output itself, they may not be useful to developers looking to explore precisely how corruption changes their application. It is likely that a combination of the two approaches -- identifying the most critical regions of an application, coupled with fault injection at those locations -- may serve as a good compromise.
Any novel approaches that fill the gap between the accuracy and execution efficiency of error injections will facilitate the code development of resilient algorithms, and the new tools should be built with the existing continuous integration infrastructure. Such tools likely require hardware knowledge that is considered intellectual property by the semiconductor vendors. However, efforts which explore this space using open hardware technologies (RISC-V, Sparc, etc.) can shed light on this space but may be of varying usefulness when application developers look to understand how their applications will perform on hardware that has not been fault injected at the register transfer or microcode level.
\paragraph*{Tools for code transformation}
Compilers are able to generate binaries with resilience capabilities, as suggested in the work by \cite{reis2005swift}; the generated binary instruments redundant computations and register allocations to enable error detection and correction during program execution.
The recent work by Lin \cite{lin2017simd} leverages LLVM to generate SIMD instructions that perform redundant computation and verification. Source-to-source code transformation has been proposed to enable triple modular redundancy in loops \cite{lidman2012rose} and automatic instrumentation of checkpointing \cite{rodriguez2010compiler}. Similarly, this idea can be extended to redundant threading for error mitigation, facilitated by an OpenMP-like programming language extension \cite{hukerikar2014redundant}. These approaches automatically introduce resilience at some performance cost and prevent the user from selectively adapting resilience for performance optimization; however, the redundant computations benefit from the memory hierarchy, which prevents a doubling (or tripling) of the execution time.
In addition to such specific systems that support the addition of resilience to existing codes, automated generation of code, e.g., via \acrfull{dsl} can help with the transparent support of resilient computation. Examples for this can be stencil generators, as already discussed in Section~\ref{sec:future-gb-matrix-free}.
\subsubsection{User/Programmer education}
According to the system log study in~\cite{di2014lessons}, many application job failures are triggered by user mistakes such as script errors and program bugs, including excessive file and thread creation. This means that better software engineering practices and training of users should be pursued with efforts similar to those for the deployment of resilience strategies.
The Exascale Computing Project (ECP) of the US DOE has made a substantial investment in education on tools, software engineering and \acrshort{hpc} system usage for a variety of users. Additionally, the scientific and mathematical library teams in the ECP have introduced software engineering policies~\cite{xSDK} to improve the software quality, documentation and testing process for better interoperability and composability of multiple library packages. This activity, though not directly related to resilience, will gradually help to reduce application errors and failures on large scale \acrshort{hpc} systems.
\subsection{Detected and transparently corrected errors} \label{sec:soa-transparent-corrected}
A wide range of errors can be detected and immediately corrected by various layers in the system, i.e., these errors become masked or absorbed and higher level layers do not have to be involved. The detection/correction mechanisms have an extra cost in terms of storage, processing and energy consumption.
\paragraph*{Hardware reliability}
At the hardware level several techniques exist to detect and correct errors. Most common examples are \acrfull{ecc}
to detect and correct single bit-errors,
\acrfull{crc} error correction for network packets or RAID-1 (or higher) for I/O systems. A more comprehensive discussion of these features can be found in the report ``Towards Resilient EU HPC Systems: A Blueprint'' by Radojkovic et al.~\cite{EU_HPC}.
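As an illustration of such a detection mechanism, the following C sketch computes the widely used reflected CRC-32 checksum; production implementations are table-driven or hardware-accelerated, and a mismatch between the stored and recomputed checksum signals a corrupted packet or block, which is then corrected by retransmission or reconstruction.
\begin{verbatim}
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-32 (reflected form, polynomial 0xEDB88320).
 * A packet is accepted only if the recomputed checksum
 * matches the transmitted one. */
uint32_t crc32(const uint8_t *data, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}
\end{verbatim}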
\paragraph*{Operating system reliability}
\acrfull{os} have certain capabilities to interact with architectural resilience features, such as ECC and machine check exceptions. OSs are mostly concerned with resource management and error notification. However, some advanced OS resilience solutions exist such as Mini-ckpts~\cite{fiala16mini-ckpts}. It is a framework that enables application survival despite the occurrence of a fatal operating system failure or crash. It ensures that the critical data describing a process is preserved in persistent memory prior to the failure. Following the failure, the OS is rejuvenated via a warm reboot and the application continues execution effectively making the failure and restart transparent. The mini-ckpts rejuvenation and recovery process is measured to take \SIrange{3}{6}{\second} and has a failure-free overhead of \SIrange{3}{5}{\percent} for a number of key \acrshort{hpc} workloads.
\paragraph*{System-level checkpoint/restart}
\acrfull{blcr}~\cite{hargrove06berkeley} is a system-level checkpoint/restart solution that transparently saves and restores process state. In conjunction with a
\acrfull{mpi}~\cite{mpi} implementation, it can transparently save and restore the process states of an entire MPI application. An extension of \acrshort{blcr}~\cite{varma06scalable,wang07job,wang10hybrid2} includes enhancements in support of scalable group communication for MPI membership management, reuse of network connections, transparent coordinated checkpoint scheduling, a job pause feature, and full/incremental checkpointing. The transparent mechanism for job pause allows live nodes to remain active and roll back to the last checkpoint, while failed nodes are dynamically replaced by spares before resuming from the last checkpoint. A minimal overhead of 5.6\% is reported when migration takes place, while the regular checkpoint overhead remains unchanged.
The hybrid checkpointing technique~\cite{gioiosa2005transparent} alternates between full and incremental checkpoints: At incremental checkpoints, only data changed since the last checkpoint is captured. This results in significantly reduced checkpoint sizes and overheads with only moderate increases in restart overhead. After accounting for cost and savings, the benefits due to incremental checkpoints are an order of magnitude larger than the overheads on restarts.
\paragraph*{Silent Data Corruption (SDC) protection}
FlipSphere~\cite{fiala16flipsphere} is a tunable, transparent
\acrfull{sdc} detection and correction library for \acrshort{hpc} applications. It offers comprehensive SDC protection for application program memory using on-demand memory page integrity verification. Experimental benchmarks show that it can protect \SIrange{50}{80}{\percent} of program memory with time overheads of \SIrange{7}{55}{\percent}.
\paragraph*{Proactive fault tolerance using process or virtual machine migration}
Proactive fault tolerance~\cite{engelmann09proactive,wang12proactive,nagarajan07proactive} prevents compute node failures from impacting running applications by migrating parts of an application, i.e., tasks, processes, or virtual machines, away from nodes that are about to fail. Pre-fault indicators, such as a significant increase in temperature, can be used to avoid an imminent failure through anticipation and reconfiguration. As computation is migrated away, application failures are avoided, which is significantly more efficient than checkpoint/restart if the prediction is accurate enough. The proactive fault tolerance framework consists of process and virtual machine migration, scalable system monitoring and online/offline system health analysis. The process-level live migration supports continued execution of applications during much of process migration and is integrated into an MPI execution environment. Experiments indicate that \SIrange{1}{6.4}{\second} of prior warning are required to successfully trigger live process migration, while similar operating system virtualization mechanisms require \SIrange{13}{24}{\second}. This error oblivious approach complements checkpoint/restart by nearly cutting the number of checkpoints by half when 70\% of the faults are handled proactively.
\paragraph*{Resiliency using task-based runtime systems}
\label{task-based-runtime}
Task-based runtime systems have appealing intrinsic features for resiliency due to the fault isolation they provide by design, as they have a view of the task flow and dynamically schedule tasks on computing units (often to minimize the time to solution or energy consumption).
Once an error is detected and identified by the hardware or the algorithm, the runtime system can limit its propagation through the application by reasoning about the data dependencies among tasks~\cite{ocrhpec}.
For example, one can envision the scenario where an uncorrectable hardware error is detected triggering the runtime system to dynamically redistribute the tasks to the remaining resources available.
Task-based runtime systems can also limit the size of the state needed to be saved to enable restarting computations, when an error is encountered~\cite{thibault:tel-01959127,lion2020,lion:hal-02296118}.
In classical checkpoint-restart mechanisms, the size of the checkpoint can become very large for large-scale applications, and managing it can take up a significant portion of the overall execution.
A task-based runtime system simplifies the identification of points during the application execution when the state size is small, since only task boundaries need to be considered for saving the state.
Further, identification of idempotent tasks can greatly help task-based runtimes to further reduce the overheads by completely avoiding data backups specific to those tasks. Recent works on on-node task parallel programming models suggest that a simple extension of the existing task-based programming framework enables efficient localized recovery \cite{subasi2015nano,subasi2016runtime,paul2019hclib}.
The checkpointing itself can also be achieved completely asynchronously~\cite{thibault:tel-01959127,lion2020,lion:hal-02296118}. The runtime allows tasks to read data being saved, and only blocks those tasks that attempt to overwrite data being saved.
Since the runtime system knows which data will soon be overwritten by some tasks, it can prioritize the writing of the different pieces so as to have as little impact on the execution as possible.
At the restarting point, the runtime also has all information to be able to achieve a completely local recovery.
The replacement node can restart from the last valid checkpoint of the previously-failed node, while the surviving nodes can just replay the required data exchanges.
With the recent emergence of heterogeneous computing systems utilizing
\acrfull{gpu},
the task programming model is being used to offload computation from the
\acrfull{cpu}
to the GPU. VOCL-FT~\cite{7832845} offers checkpoint/restart for computation offloaded to GPU using OpenCL~\cite{opencl}.
It transparently intercepts the communication between the originating process and the local or remote GPU to automatically recover from \acrshort{ecc} errors experienced on the GPU during computation.
Another preliminary prototype design extends this concept in the context of OpenMP~\cite{openmp} using a novel concept for \acrfull{qos} and a corresponding \acrfull{api}~\cite{engelmann19concepts}.
While the programmer is specifying the resilience requirements for certain offloaded tasks, the underlying programming model runtime decides on how to meet them using a \acrshort{qos} contract, such as by employing task-based checkpoint-restart or redundancy.
\paragraph*{Resilience via complete redundancy}
The use of redundant MPI processes for error detection has been widely analyzed in the last decade~\cite{j137,thread-rep1,thread-rep2,thread-rep3}. Modular redundancy incurs high overhead, but offers excellent error detection accuracy and coverage with few to no false positives or false negatives.
Complete modular redundancy is typically too expensive for actual \acrshort{hpc} workloads. However, it can make sense for certain subsystems such as parts of a \acrshort{pfs}. The \acrfull{mds} of a networked \acrshort{pfs} is a critical single point of failure. An interruption of service typically results in the failure of currently running applications utilizing its file system. A loss of state requires repairing the entire file system, which could take days on large-scale systems, and may cause permanent loss of data. PFSs such as Lustre~\cite{lustre} often offer some type of active/standby fail-over mechanism for the MDS. A solution~\cite{he09symmetric} for the MDS of the \acrlong{pvfs} offers symmetric active/active replication using virtual synchrony with an internal replication implementation. In addition to providing high availability, this solution is taking advantage of the internal replication implementation by load balancing MDS read requests, improving performance over the non-replicated MDS.
\paragraph*{Resilience via partial redundancy}
Partial redundancy has been studied to decrease the overhead of complete redundancy~\cite{Elliott2012,ftxs12-rep,Subasi2015,Subasi2017}.
Adaptive partial redundancy has also been proposed wherein a subset of processes is dynamically selected for replication~\cite{George2012}.
Partial replication (using additional hardware) of selected MPI processes has been combined with \mbox{prediction-based} detection to achieve SDC protection levels comparable with those of full duplication~\cite{Berrocal16Exploring,Berrocal17Toward,FlipBack16}. A Selective Particle Replication
approach for meshfree particle-based codes protects the data of the entire application (as opposed to a subset) by selectively duplicating \SIrange{1}{10}{\percent} of the computations within processes incurring a \SIrange{1}{10}{\percent} overhead~\cite{Cavelan:2019}.
\paragraph*{Resilience via complete and/or partial redundancy}
RedMPI~\cite{fiala12detection2} enables a transparent redundant execution of MPI applications. It sits between the MPI library and the MPI application, utilizing the \acrfull{pmpi}
to intercept MPI calls from the application and to hide all redundancy-related mechanisms. A redundantly executed application runs with $r*m$ MPI processes, where $m$ is the number of MPI ranks visible to the application and $r$ is the replication degree. RedMPI supports partial replication, e.g., a degree of 2.5 instead of just 2 or 3, for tunable resilience.
It also supports a variety of message-based replication protocols with different consistency guarantees. Not counting the need for additional resources for redundancy, results show that the most efficient consistency protocol can successfully protect \acrshort{hpc} applications even from high \acrshort{sdc} rates with runtime overheads from \SIrange{0}{30}{\percent}, compared to unprotected applications without redundancy. Partial and full redundancy can also be combined with checkpoint/restart~\cite{Elliott2012}. Non-linear trade-offs between different levels of redundancy can be observed when additionally using checkpoint/restart, since computation on non or less redundant resources is significantly less reliable than computation on fully or more redundant resources.
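The interception mechanism itself is easy to sketch: every MPI function can be overridden while the original implementation remains reachable under its \acrshort{pmpi} name. The following C fragment indicates how a replication layer might fan a send out to all replicas of the destination rank; the rank mapping shown is purely illustrative and not RedMPI's actual scheme.
\begin{verbatim}
#include <mpi.h>

#define REPLICAS 2  /* illustrative replication degree */

/* PMPI interposition: override MPI_Send and forward each
 * message to every replica of the logical destination rank.
 * Assumes (hypothetically) that replica r of logical rank d
 * runs at physical rank d + r*m, with m logical ranks. */
int MPI_Send(const void *buf, int count, MPI_Datatype type,
             int dest, int tag, MPI_Comm comm) {
    int size, m, rc = MPI_SUCCESS;
    MPI_Comm_size(comm, &size);
    m = size / REPLICAS;
    for (int r = 0; r < REPLICAS && rc == MPI_SUCCESS; r++)
        rc = PMPI_Send(buf, count, type, dest + r * m, tag, comm);
    return rc;
}
\end{verbatim}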
\paragraph*{Interplay between resilience and dynamic load balancing}
\label{sec:runtime}
Scheduling of application jobs at the system level contributes to exploiting parallelism by placing and (dynamically) balancing the batch jobs on the local site resources.
The jobs within a batch are already heterogeneous; yet, current batch schedulers rarely co-allocate, and most often only allocate, computing resources (while network and storage continue to be used as shared resources).
Dynamic system-level parallelism can arise when certain nodes become unavailable (due to hard and permanent errors) or recover (following a repair operation).
This can be exploited during execution by increasing opportunities for system-level co-scheduling in close proximity of jobs that exhibit different characteristics (e.g., co-scheduling a classical compute-intensive job in close proximity to a data-intensive job) and by dynamic resource reallocation to jobs that have lost resources due to failures or to waiting jobs in the queue.
\subsection{Detected errors mitigated with assistance}
\label{sec:soa-assistance-corrected}
In this section we focus on correction methods that need assistance from the upper layers in order to achieve resilience and correctness. It is important to note that there are multiple methods that offer assisted fault tolerance, but some of them involve a few additional lines of code while others require rewriting the whole application using a specific programming model. Therefore, we divide this section into subsections depending on the programming and/or redesign effort that is required.
\subsubsection{Correction with incremental redesign}
\label{sec:soa-correction-incremental}
As explained in Section \ref{sec:soa-transparent-corrected},
it is possible to perform system-level checkpointing without any feedback from the application or the algorithm or any upper layer.
The issue with system-level checkpointing is that the size (and therefore the time and energy cost) of checkpointing is much larger than what is really required to perform a restart of the application.
Thus, application-level checkpointing is an attempt to reduce the size of checkpoints to the minimum required for the application to be able to restart.
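A minimal C sketch of this idea is shown below: only the variables required for a restart (here an iteration counter and the solution array) are serialized instead of the full process image; the names and file format are hypothetical.
\begin{verbatim}
#include <stdio.h>

/* Application-level checkpoint: write only the restart-
 * relevant state, not the whole process image. */
int write_checkpoint(const char *path, const double *u,
                     long n, long iter) {
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    int ok = fwrite(&iter, sizeof iter, 1, f) == 1
          && fwrite(&n,    sizeof n,    1, f) == 1
          && fwrite(u, sizeof *u, (size_t)n, f) == (size_t)n;
    return (fclose(f) == 0 && ok) ? 0 : -1;
}
\end{verbatim}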
\paragraph*{Performance modeling and optimization of checkpoint-restart methods}
Research on simulation tools assessing the performance of certain checkpoint-restart strategies is presented in various publications~\cite{levy2013using,
engelmann2013toward,
ashraf2018performance,
di2014optimization}.
Different theoretical approaches are used and tools are developed that either simulate a fictional software or wrap an actual application.
A lot of work has been done to examine and model the performance of multilevel checkpointing approaches~\cite{kohl2019scalable,zheng2012scalable,bautista2011fti,benoit2016optimal}.
Here, the parallel distribution of the snapshots as well as the target storage system are considered as objectives for performance optimization.
Asynchronous techniques are considered, such as non-blocking checkpointing,
where a subset of processes are dedicated to manage the creation and reconstruction of snapshots~\cite{coti2006blocking,sato2012design}.
As a measure for saving storage and speeding up I/O, data compression is another subject considered in the literature, e.g., by Di and Cappello~\cite{di2016fast},
and in one of the case studies in Section~\ref{sec:error_aware_case_studies}.
Resilient checkpointing has been considered with the help of nonvolatile memory,
as for instance implemented in PapyrusKV~\cite{10.1145/3126908.3126943}, a resilient key-value blob-storage.
Other resilient checkpointing techniques include the self-checkpoint technique~\cite{8170311}, which reduces common redundancies while writing checkpoints, or techniques reducing the amount of required memory through hierarchical checkpointing~\cite{5645453}, or differential checkpointing~\cite{DBLP:conf/ccgrid/KellerB19}.
\paragraph*{Message logging} \label{sec:logging}
Message logging is a mechanism to log communication messages in order to allow partial restart as for example examined by Cantwell et al.~\cite{cantwell:2019}.
While improving on basic checkpointing strategies, message logging-based approaches can themselves entail large overheads because of log sizes.
The checkpointing protocol developed by Ropars et al.~\cite{ropars:2013} does not require synchronization between replaying processes during recovery and limits the size of log messages.
Other approaches combine task-level checkpointing and message logging with system-wide checkpointing~\cite{subasi:2018}.
This protocol features local message logging and only requires the restart of failing tasks.
It is also possible to combine message logging with local rollback and \acrfull{ulfm} (Section~\ref{sec:ULFM}) to improve log size~\cite{losada:2019}.
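The core of sender-based message logging can be sketched in a few lines of C: each outgoing payload is copied into a local log before the send, so that a restarted receiver can request a replay without the sender rolling back. The data structures below are hypothetical, and real protocols additionally log message reception orders (determinants).
\begin{verbatim}
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sender-based message log. */
typedef struct { void *payload; int count, dest, tag; } log_entry_t;
static log_entry_t msg_log[65536];
static int log_len = 0;

int logged_send(const void *buf, int count, MPI_Datatype type,
                int dest, int tag, MPI_Comm comm) {
    int size;
    MPI_Type_size(type, &size);
    log_entry_t *e = &msg_log[log_len++];
    e->payload = malloc((size_t)count * size);
    memcpy(e->payload, buf, (size_t)count * size);
    e->count = count; e->dest = dest; e->tag = tag;
    return MPI_Send(buf, count, type, dest, tag, comm);
}
\end{verbatim}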
\paragraph*{Multilevel checkpointing libraries}\label{sec:multilevelCPR}
Current \acrshort{hpc} systems have deep storage hierarchies involving \acrlong{hbm},
\acrlong{dram}, \acrlong{nvm}, \acrlong{ssd} and the \acrshort{pfs}, among others.
Multilevel checkpointing libraries offer a way to leverage the different storage layers in the system through a simple interface.
The objective is to abstract the storage hierarchy to the user,
so that one does not need to manually take care of where the data is stored or the multiple data movements required between storage levels.
Each level of checkpointing provides a different trade-off between performance and resilience,
where usually lower levels use close storage that offers higher performance but limited resilience,
and higher levels rely on stable storage (e.g., \acrshort{pfs}),
which is more resilient but slower.
Mature examples of multilevel checkpoint libraries are SCR~\cite{SCR}, FTI~\cite{bautista2011fti}, CRAFT~\cite{shahzad2018craft} and VeloC~\cite{nicolae2019veloc}. Both SCR and FTI provide support via simple interfaces for storing application checkpoint data on multiple levels of storage, including RAM disk, burst buffers, and the parallel file system. Both SCR and FTI provide redundancy mechanisms to protect checkpoint data when it is located on unreliable storage and can asynchronously transfer checkpoint data to the parallel file system in the background while the application continues its execution. In addition, FTI also supports transparent GPU checkpointing. Finally, VeloC is a merge of the interfaces of both FTI and SCR. Note that some of these libraries offer the option of keeping multiple checkpoints so that the application can roll back to different points in the past if necessary.
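To give an impression of such an interface, the fragment below follows the style of FTI's published C API (\texttt{FTI\_Init}, \texttt{FTI\_Protect}, \texttt{FTI\_Snapshot}); it is indicative only, and the exact semantics should be taken from the library documentation.
\begin{verbatim}
#include <mpi.h>
#include <fti.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    FTI_Init("config.fti", MPI_COMM_WORLD); /* levels via config */

    long it = 0, nit = 100000;
    double u[1024] = {0};
    FTI_Protect(0, &it, 1, FTI_LONG);  /* register restart state */
    FTI_Protect(1, u, 1024, FTI_DBLE);

    for (; it < nit; it++) {
        FTI_Snapshot(); /* checkpoints or recovers as configured */
        /* ... one solver iteration on u; application
         * communication should use FTI_COMM_WORLD ... */
    }

    FTI_Finalize();
    MPI_Finalize();
    return 0;
}
\end{verbatim}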
\paragraph*{Containment Domains}
\acrfull{cds} provide a programming construct to facilitate the preservation-restoration model, including nesting control constructs, and durable storage~\cite{container}. The following features are attractive for large-scale parallel applications. First, \acrshort{cds} respect the deep machine and application hierarchies expected in exascale systems. Second, \acrshort{cds} allow software to preserve and restore states selectively within the storage hierarchy to support local recovery. This enables preservation to exploit locality of storage, rather than requiring every process to recover from an error, and limits the scope of recovery to only the affected processors. Third, since \acrshort{cds} nest, they are composable. Errors can be completely encapsulated, or escalated to calling routines through a well-defined interface. We can easily implement hybrid algorithms that combine both preservation-restoration and data encoding.
Use cases include an implementation of a parallel resilient hierarchical matrix multiplication algorithm using a combination of ABFT (for error detection) and \acrshort{cds} (for error recovery)~\cite{Austin2015}. It was demonstrated that the overhead for error checking and data preservation using the \acrshort{cds} library is exceptionally small and encourages the use of frequent, fine-grained error checking when using algorithm based fault tolerance.
\paragraph*{Application versioning}
\acrfull{gvr}~\cite{DBLP:journals/ijhpca/ChienBDFFIRZHLR17} accommodates APIs to enable multiple versioning of global arrays for the single program, multiple data programming model.
The core idea is that naive data redundancy approaches potentially store wrong application states due to the large latency associated with error detection and notification.
In addition to multiple versioning, GVR provides a signaling mechanism that triggers the correction of application states based on user-defined application error conditions.
Use cases include an implementation of resilient Krylov subspace solvers~\cite{DBLP:conf/vecpar/ZhengCT14}.
\paragraph*{Mitigating performance penalties due to resilience via dynamic load balancing} \label{sec:DLS4LB}
Detected and corrected errors induce variation in the execution progress of applications when compared to error-free executions.
This can manifest itself as load imbalance.
Many application-level load balancing solutions have been proposed over the years
and can help to address this problem.
We mention here a few available packages.
Available load balancing software includes
Zoltan~\cite{zoltan}
that requires users to describe the workload across processes as a
graph and offers an object oriented interface.
Further we mention \acrfull{dls4lb}~\cite{dls4l}, a recently developed library for MPI applications that contains a portfolio of self-scheduling based algorithms for load balancing.
StarPU~\cite{thibault:tel-01959127} proposes support for
asynchronous load-balancing~\cite{lion2020} for task-based applications.
The principle is to let the application submit only a part of its task graph, let some of it execute on the platform and observe the resulting computation balance.
A new workload distribution can then be computed and the application is allowed to submit more of the task graph, whose execution can be observed as well.
OmpSs~\cite{OMPSS} is an effort to extend OpenMP in order to support asynchronous execution of tasks including a transparent interface for hardware accelerators such as
\acrshort{gpu}s and \acrshort{fpga}s.
OmpSs is built on top of the Mercurium compiler~\cite{mercurium} and the nanos++ runtime system~\cite{nanos}.
HCLib~\cite{yan2009hierarchical} is a task-based programming model that implements locality-aware runtime and work-stealing.
It offers a C and C++ interface and can be coupled with inter-process
communication models, such as MPI.
Charm++~\cite{CHARM}
features an automatic hierarchical dynamic load balancing method that overcomes the scalability limitation of centralized load balancing as well as the poor performance of completely distributed systems. Such a technique can be triggered dynamically after a failure hits the system and the workload needs to be redistributed across workers.
\subsubsection{Correction with major redesign}
\label{sec:ULFM}
The correction of some detected errors might have a strong impact on the algorithm that has to implement the mitigation. The mitigation design can be made more affordable if some components of the software stack already provide features to handle such situations.
\paragraph*{Resilience support in the Message Passing Interface (MPI)}
Most MPI implementations by default are designed to terminate all processes when errors are detected. However, this termination occurs irrespective of the scope of the error, requiring global shut-down and restart even for local errors in a single process. This inherent scalability issue can be mitigated if MPI allows all surviving processes to continue and/or if restart overheads are reduced. The MPI community has proposed several recovery approaches, such as FA-MPI~\cite{hassani2014fampi} or MPI-ULFM~\cite{Bland:2012}, to enable alternatives to global shut-down, as well as better error handling extensions, like MPI\_Reinit~\cite{laguna2016mpi}, to reduce the overhead and impact of failures.
Among these approaches, MPI-ULFM is the most advanced and well known. It provides a flexible low-level \acrshort{api} that allows application specific recovery via new error handling approaches and dynamic MPI communicator modification under process failures, although with significant complexities for the application developer using the new \acrshort{api}s. Several approaches have been proposed to mitigate this complexity by creating another set of library \acrshort{api}s built atop of MPI-ULFM~\cite{cantwell:2019,gamell2014explore,Fenix,teranishi2014lflr,Shahzad2019craft}. However, as of now, in part due to its complexity when used on real-world applications and limited support in system software, MPI-ULFM as a whole has not been adopted in the MPI standard and hence is not readily usable for typical \acrshort{hpc} application programmers. Nevertheless, various aspects of \acrshort{ulfm} are in the process of standardization and will provide more mechanisms in MPI to build at least certain fault tolerant applications, starting with the upcoming MPI 4.0 standard.
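The canonical \acrshort{ulfm} recovery pattern can be sketched in C as follows; the \texttt{MPIX\_} names follow the ULFM proposal as prototyped in Open MPI, and the header name and error classes may differ between implementations.
\begin{verbatim}
#include <mpi.h>
#include <mpi-ext.h> /* ULFM extensions (MPIX_*), Open MPI layout */

/* Requires MPI_Comm_set_errhandler(*comm, MPI_ERRORS_RETURN).
 * On a process failure: revoke the communicator, then shrink
 * it to the survivors and continue with the new communicator. */
void resilient_allreduce(MPI_Comm *comm, double *x) {
    double sum;
    int rc = MPI_Allreduce(x, &sum, 1, MPI_DOUBLE, MPI_SUM, *comm);
    if (rc != MPI_SUCCESS) {
        int eclass;
        MPI_Error_class(rc, &eclass);
        if (eclass == MPIX_ERR_PROC_FAILED ||
            eclass == MPIX_ERR_REVOKED) {
            MPI_Comm shrunk;
            MPIX_Comm_revoke(*comm);          /* interrupt peers */
            MPIX_Comm_shrink(*comm, &shrunk); /* survivors only */
            MPI_Comm_free(comm);
            *comm = shrunk;
            /* application-specific: redistribute work, retry */
        }
    } else {
        *x = sum;
    }
}
\end{verbatim}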
\paragraph*{Resilience abstractions for data-parallel loops}
Data-parallel loops are widely encountered in $N$-body simulations, computational fluid dynamics, particle hydrodynamics, etc. Optimizing the execution and performance of such loops has been the focus of a large body of work involving dynamic scheduling and load balancing. Maintaining the performance of applications with data-parallel loops running in computing environments prone to errors and failures is a major challenge. Most self-scheduling approaches do not consider fault-tolerance or depend on error and failure detection and react by rescheduling failed loop iterations (also referred to as tasks). A study of resilience in self-scheduling of data-parallel loops has been performed using SimGrid-based simulations of highly unpredictable execution conditions involving various problem sizes, system sizes, and application and systemic characteristics (namely, permanent node failures), that result in load imbalance~\cite{sukhija:2015}. Upon detecting a failed node, re-execution is employed to reschedule the loop iterations assigned to the failed node.
A \acrfull{rdlb} approach has recently been proposed for the robust self-scheduling of independent tasks~\cite{Mohammed:2019}. The \acrshort{rdlb} approach proactively and selectively duplicates the execution of assigned chunks of loop iterations and does not depend on failure or perturbation detection. For exponentially distributed permanent node failures, a theoretical analysis shows that \acrshort{rdlb} is linearly scalable and its cost decreases quadratically with increasing system size. The reason is that increasing the number of processors increases the opportunities for selectively and proactively duplicating loop iterations to achieve resilience. \acrshort{rdlb} is integrated into a dynamic loop scheduling library (DLS4LB, see Section~\ref{sec:DLS4LB}) for MPI applications. \acrshort{rdlb} enables the tolerance of up to ($P-1$) process failures, where $P$ is the number of processes executing an application. For execution environments with performance-related fluctuations, \acrshort{rdlb} boosts the robustness of \acrfull{dls} techniques by a factor up to 30
and decreases application execution time by up to a factor of 7 compared to their counterparts without \acrshort{rdlb}.
\paragraph*{Resilience extension for performance portable programming abstractions} \label{sec:soa-correction-major}
With the increasing diversity of the node architecture of \acrshort{hpc} systems, performance portability has become an important property to support a variety of computing platforms with the same source code while achieving performance comparable to codes written with platform-specific programming models. Today, Kokkos~\cite{edwards2014kokkos} and Raja~\cite{Raja2019, RajaGithub} accommodate modern C++ \acrshort{api}s to permit an abstraction of data allocation and parallel loop execution for a variety of runtime software and node architectures. This idea can be extended to express the redundancy of data and computation to achieve resilience while hiding the details of the data persistence and redundant computation. Recently, a resilient version of Kokkos was proposed as a natural API extension of Kokkos' data (memory space) and parallel loop (execution space) abstractions to (1) enable resilience with minimal code refactoring for applications already written with Kokkos and (2) provide a common interface to call any external resilience library such as VeloC~\cite{nicolae2019veloc}. The new software will be released in a special branch in \url{https://github.com/kokkos/kokkos}.
The resilience abstraction idea has also been applied to task parallel programming models such as Charm++~\cite{CHARM}, HClib~\cite{yan2009hierarchical}, HPX~\cite{HPX}, OmpSs~\cite{OMPSS} and StarPU~\cite{thibault:tel-01959127} to integrate a variety of resilient task program execution options such as replay, replication, algorithm-based fault tolerance and task-based checkpointing.
Task-based programming models have a very rich view of the structure of the application's computation, and notably of its data, and have a lot of control over the computation execution, without any need for intervention from the application. Replaying a failed task consists of issuing it again with the same input, discarding the previous erroneous output, and replicating a task consists of issuing it several times with different output buffers and comparing the results. Dynamic runtime systems can then seamlessly introduce replay and replication heuristics, such as trying different implementations and/or computation units, without the application having to be involved beyond optionally providing different implementations to be tried for the same task.
The task graph view also allows for very optimized checkpointing~\cite{thibault:tel-01959127,lion2020,lion:hal-02296118}. In the task-based programming model, each checkpoint is a cut in the task graph, which can be expressed trivially within the task submission code, and only the data of the crossing edges need to be saved. Even better, the synchronization between the management of checkpoint data and application execution can be greatly relaxed. The transfer of the data to the checkpoint storage can indeed be started as soon as the data is produced within the task graph, and not only once all tasks before the checkpoint are complete. A checkpoint is then considered complete when all its pieces of data have been collected. It is possible that tasks occurring after the checkpoint may run to completion before the checkpoint itself is completed.
All in all, this allows for a lot more time for the data transfers to complete, and lessens the I/O bandwidth pressure.
\paragraph*{Software engineering approaches for resilience by design}
Resilience design patterns~\cite{hukerikar17rdp-12, hukerikar17pattern} offer an approach for improving resilience in extreme-scale \acrshort{hpc} systems. Frequently used in computer engineering, design patterns identify problems and provide generalized solutions through reusable templates. Reusable programming templates of these patterns can offer resilience portability across different \acrshort{hpc} system architectures and permit design space exploration and adaptation to different (performance, resilience, and power consumption) design trade-offs. An early prototype~\cite{ashraf18pattern-based} offers multi-resilience for detection, containment and mitigation of silent data corruption and MPI process failures.
\section{Introduction}
\label{sec:introduction}
\input{introduction.tex}
\section{System infrastructure techniques for resilience}
\label{sec:infrastructure}
\input{infrastructure.tex}
\section{Numerical algorithms for resilience}
\label{sec:algorithms}
\input{algorithms.tex}
\section{Future directions}
\label{sec:future-directions}
\input{future.tex}
\section{Conclusions}
\label{sec:conclusions}
This article presents a snapshot of current research on resilience for extreme scale computing.
It has grown out of the Dagstuhl seminar 20101 held March 1-6, 2020,
bringing experts from the field together on the topic
\emph{Resiliency in Numerical Algorithm Design for Extreme Scale Simulations}.
This seminar set out to develop a synthesis between the system perspective on resilience and the algorithmic perspective.
While resilience is undoubtedly an issue for extreme scale computing,
it is less clear what algorithms on the user or application level can contribute to mitigate faults.
The seminar provided ample room to discuss these topics and thus became
the starting point for this article.
Many diverse aspects were found to be relevant, requiring a holistic and multidisciplinary approach involving different and complementary scientific communities.
In particular, it clearly appeared that a fundamental distinction lies in whether faults are detected or not,
and if they are not automatically detected, whether they are detectable.
If they are, algorithms can often be developed to detect errors
and in a second stage to correct them.
It was found that some algorithms are naturally tolerant against faults
or have the intrinsic feature to be error oblivious.
They can thus be naturally applied on a system subject to errors.
Besides redundancy and checkpointing as classical techniques to mitigate faults,
new algorithm-based resilience techniques have been developed
for several classes of numerical algorithms.
This includes linear algebra and solvers for partial differential equations,
two classes of algorithms that are prominent in many scientific
workloads on supercomputers.
Some of these mitigation methods show remarkable success in the sense that
faults can be compensated algorithmically by recovery procedures
with only little extra cost in time or in silicon.
On the other hand it also becomes clear that integrating such techniques
in a computational infrastructure is still facing many obstacles.
This includes the still poorly defined interface between user-level
fault mitigation techniques and system level functionality,
as it is, e.g., necessary to reliably and
quickly detect a device (core, memory, ...) failure on a large parallel machine.
Despite its breadth, the article is far from being comprehensive.
The selection of topics is a subjective overview of
current research in the field of resilience for extreme scale computing
and it delivers an outlook into possible and promising future research topics and solutions.
\bibliographystyle{plainurl}
\input{ms.bbl}
\end{document}
\section{Introduction}
The optical spectra of metals in the infrared (IR) spectral region depend
sensitively on the interaction between electrons and phonons.
Deviations from the theoretical spectrum without any phonon contribution
stem from the so--called Holstein mechanism \cite{holstein} in which the
incident photon is absorbed in a second--order
process involving creation of both a phonon and an electron--hole pair.
The detailed description of this mechanism was given by Allen
\cite{allen}. Despite the fact that the physical mechanism of
electron--phonon coupling has been rather well understood for almost three decades,
little progress has been made in the systematic experimental study of
coupling effects in optical (IR) spectra \cite{motulevich}, nor has the
phenomenology been explored beyond the lowest order effects.
The lack of systematic experimental investigations appears to be related to the
following two reasons:
Firstly, measurements of optical spectra of metals were historically
limited to photon energies $\omega \gtrsim 0.05$\,eV \cite{motulevich}.
We shall argue in this paper that the effects of the
electron--phonon interaction on the optical spectra of ordinary metals are rather
weak at these energies. The optical conductivity in this spectral range
can usually be described in the framework of the Drude formula
\begin{equation}\label{drude}
\sigma(\omega)=\frac{\omega_{\rm pl}^{2}}{4\pi}\,\frac{1}{-{\rm i}\/\omega+1/\tau}
\end{equation}
where $1/\tau$ is the relaxation rate of electrons due to their interaction
with impurities and phonons. It can be calculated using the commonly
adopted Bloch--Gr\"uneisen--type formula.
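For orientation we quote the standard form of this relation in terms of
the transport spectral function $\alpha^{2}_{\rm tr}(\Omega)F(\Omega)$,
written in units $\hbar=k_{\rm B}=1$,
\begin{equation}
\frac{1}{\tau({\rm T})}=\frac{\pi}{{\rm T}}\,
\int\limits_{0}^{\infty}{\rm d}\/\Omega\:
\frac{\Omega\,\alpha^{2}_{\rm tr}(\Omega)F(\Omega)}
{\sinh^{2}\left(\Omega/2{\rm T}\right)}\;.
\end{equation}
For a Debye--like spectrum,
$\alpha^{2}_{\rm tr}(\Omega)F(\Omega)\propto\Omega^{4}$, this expression
reproduces the classic ${\rm T}^{5}$ law at low temperatures, while at
high temperatures it reduces to $1/\tau=2\pi\lambda_{\rm tr}{\rm T}$ with
$\lambda_{\rm tr}=2\int{\rm d}\/\Omega\:
\alpha^{2}_{\rm tr}(\Omega)F(\Omega)/\Omega$.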
Secondly, the measurements of the optical conductivity are complicated in
ordinary metals by the anomalous skin effect which is mostly present at low
temperatures and in the far--infrared (FIR) spectral region. It leads to serious
difficulties in the interpretation of reflectivity and absorptivity
measurements. Only a few observations \cite{joyce,bednorz} are known to us where Holstein
processes were identified in the normal state of ordinary metals.
The discovery of the high-T$_{\rm c}$ superconductors (HTSC) has dramatically
changed the experimental situation. First of all, the experimental methods were
improved radically by extending the accessible energy range down to
$\approx 10$ cm$^{-1}$ and by increasing the accuracy of the measurements.
Secondly, it appears that HTSC systems allow the observation of
the electron--phonon interaction in the optical spectra more easily and
more clearly than was possible in ordinary metals. Thirdly, there
is no anomalous skin effect in these systems for light with electric field
parallel to the Cu--O planes.
It is the purpose of this paper to review the theoretical situation.
We then extend the treatment of the Fr\"ohlich Hamiltonian to
include vertex corrections.
Theoretical predictions are then compared with experimental observations following earlier
attempts to connect the IR reflectivity and absorption spectra of
HTSCs with features of strongly interacting electron--phonon systems
\cite{shulga,oleg,holger}.
We shall show that most but not all observations can be described rather well
within such a scheme and identify some pertinent open questions.
\section{Derivation of the optical conductivity in the framework of
the Fr\"ohlich Hamiltonian}\label{formulae}
We start from the description of metals with electron--phonon interaction by
the standard Fr\"ohlich Hamiltonian
\begin{eqnarray}\label{froehlich}
H &=& \sum\limits_{{\bf k},i} \epsilon_{{\bf k},i}\,
c^{\dagger}_{{\bf k},i}\,c_{{\bf k},i} +
\sum\limits_{{\bf k},{\bf q},i,i^{\prime},\lambda}
g_{\bf k}({\bf q},i,i^{\prime},\lambda)\,c^{\dagger}_{{\bf k},i}\,
c_{{\bf k}+{\bf q},i^{\prime}} \times
\\\nonumber
&&\times
\left( b^{\dagger}_{{\bf q}\lambda} + b_{-{\bf q}\lambda}\right) +
\sum\limits_{{\bf q},\lambda} \omega_{{\bf q}\lambda}
b^{\dagger}_{{\bf q}\lambda}\,b_{{\bf q}\lambda}\;.
\end{eqnarray}
Here the first term is the kinetic energy of an electron with given momentum
${\bf k}$ and band index $i\/$, the last term is the energy of the phonon
with momentum ${\bf q}$ and mode $\lambda$.
The second term represents the electron--phonon interaction, where
$g_{\bf k}({\bf q},i,i^{\prime},\lambda)$ is the matrix element of this interaction.
The use of this Hamiltonian for the self--consistent calculation of the
electron and phonon Green's functions cannot be rigorously justified in the general
case. It was shown \cite{allen1,zhenya,rainer} that this Hamiltonian can
be used for the calculation of the influence of the electron--phonon interaction
on the electronic properties for systems where no low--energy collective excitations
of electrons are present. First--principle calculations
\cite{zhenya} of the physical properties of a number of ordinary
metals have shown that the Fr\"ohlich Hamiltonian is a very good
starting point for the analysis of all features related to
electron--phonon interactions.
In HTSC materials we cannot expect this Hamiltonian to describe the optical
response completely because additional low
energy excitations (e.\,g.\ spin and/or charge excitations) occur.
However,
these excitations contribute little to the general trend of
the ``anomalous'' optical properties of HTSC systems at high energies and
are simply ignored in this study.
The question we wish to answer here is the following: to what extent can
observed optical spectra in the normal state be understood in terms
of the most simple model of electron--phonon interaction as represented
by equation (\ref{froehlich}).
Many--body electron--phonon interaction effects are usually calculated by the
Green's function method \cite{abrikosov}.
Let us introduce the electron and phonon one--particle thermodynamic Green's
functions
\begin{equation}
G_{i}({\bf k},\tau)= -\langle {\rm T}_{\tau}\,
c^{\dagger}_{{\bf k},i}(\tau)\,c_{{\bf k},i}(0)
\rangle\;\,
\end{equation}
and
\begin{equation}
D_{\lambda}({\bf q},\tau)= -\langle {\rm T}_{\tau}\,
b^{\dagger}_{{\bf q}\lambda}(\tau)\,b_{{\bf q}\lambda}(0)
\rangle\;.
\end{equation}
The Wick operator ${\rm T}_{\tau}$ reorders the operators following it
in such a way that $\tau$ increases from right to left. For non--interacting
particles the Fourier components of the Green's functions,
$G_{i}({\bf k},{\rm i}\/\omega_n)$ and $D_{\lambda}({\bf q},{\rm i}\/\omega_\nu)$,
have the very simple form
\begin{equation}\label{freeelectron}
G_{i}^{0}({\bf k},{\rm i}\/\omega_n) = \frac{1}{{\rm i}\/\omega_n -
\epsilon_{{\bf k}, i}}
\end{equation}
and
\begin{equation}
D_{\lambda}^{0}({\bf q},{\rm i}\/\omega_\nu) =
\left(\frac{1}{{\rm i}\/\omega_\nu - \omega_{{\bf q}\lambda}}
- \frac{1}{{\rm i}\/\omega_\nu + \omega_{{\bf q}\lambda}}\right)\;.
\end{equation}
Here ${\rm i}\/\omega_n = (2n+1)\pi{\rm T}$
and ${\rm i}\/\omega_\nu = 2\nu\pi{\rm T}$
are the Matsubara frequencies for fermions
and bosons, respectively, and ${\rm T}$ is the
temperature.
The value
$\epsilon_{{\bf k}, i}$ is the electron band energy and, correspondingly,
$\omega_{{\bf q}\lambda}$ is the phonon energy for the $\lambda$th mode.
In the following we do not consider the renormalisation of the phonon
Green's function due to the electron--phonon interaction. For
convenience we present the phonon function
$D_{\lambda}({\bf q},{\rm i}\/\omega_\nu)$ in the spectral form
\begin{equation}
D_{\lambda}({\bf q},{\rm i}\/\omega_\nu) =
\frac{1}{\pi}\,\int\limits_{0}^{\infty}{\rm d}\Omega\: {\rm Im}\,
D_{\lambda}({\bf q},\Omega)
\left(\frac{1}{{\rm i}\/\omega_\nu - \Omega}
- \frac{1}{{\rm i}\/\omega_\nu + \Omega}\right)\;.
\end{equation}
For the non--interacting case the spectral density
${\rm Im}\,D_{\lambda}({\bf q},\Omega)$ has the form
\begin{equation}
{\rm Im}\,D^{0}_{\lambda}({\bf q},\Omega) = \pi\,\delta(\Omega -
\omega_{{\bf q}\lambda})\;.
\end{equation}
In the following we treat the spectral function
${\rm Im}\,D_{\lambda}({\bf q},\Omega)$ as an experimental quantity and
calculate the influence of the
electron--phonon interaction on the electronic properties.
These properties are reflected in
the one--particle electron Green's function
and the optical conductivity $\sigma_{\alpha\beta}(\omega)$.
The one--particle Green's function for electrons in
the presence of the electron--phonon interaction
\cite{migdal}
can be written as
\begin{equation}
G^{-1}({\bf k},{\rm i}\/\omega_n) = G^{-1}_0({\bf k},{\rm i}\/\omega_n) -
\Sigma({\bf k},{\rm i}\/\omega_n)
\end{equation}
where $\Sigma({\bf k},{\rm i}\/\omega_n)$ is the electron self--energy part.
One of the main results of Migdal \cite{migdal} is that the electron self--energy
can be calculated using the simplest first--order approximation
in the electron--phonon interaction, neglecting all vertex corrections
as being small of the order of $\omega_{\rm D}/\epsilon_{\rm F}$.
Here $\omega_{\rm D}$ is a characteristic phonon energy and $\epsilon_{\rm F}$
is the Fermi energy of the electrons. An analytical expression of
$\Sigma({\bf k},{\rm i}\/\omega_n)$ is
\begin{eqnarray}
\Sigma_i({\bf k},{\rm i}\/\omega_n) &=& -{\rm T}\,
\sum\limits_{\omega_\nu}
\sum\limits_{{\bf k}^{\prime},i^{\prime},\lambda}
\left| g_{{\bf k}}({\bf k} - {\bf k}^{\prime},
i, i^{\prime}, \lambda)\right|^2
D_{\lambda}({\bf k} - {\bf k}^{\prime},{\rm i}\/\omega_\nu)\,\times
\\\nonumber&&\times
G({\bf k}^{\prime},{\rm i}\/\omega_n-{\rm i}\/\omega_\nu)\;.
\end{eqnarray}
The summation over the momentum ${\bf k}^{\prime}$ is
represented in integral form
\begin{equation}
\sum\limits_{{\bf k},i} = \sum\limits_{i}\int\limits_{-\infty}^{\infty}
{\rm d}\/\epsilon\: \sum\limits_{\bf k}\delta(\epsilon - \epsilon_{{\bf k},i})
\end{equation}
so that
$\Sigma_i({\bf k},{\rm i}\/\omega_n)$ becomes
\begin{eqnarray}\label{phonongf}
\Sigma_i({\bf k},{\rm i}\/\omega_n) &=& -{\rm T}\,
\sum\limits_{\omega_\nu}\,
\int\limits_{-\infty}^{\infty}{\rm d}\/\epsilon\:
\sum\limits_{{\bf k}^{\prime},i^{\prime},\lambda}
\left| g_{{\bf k}}({\bf k} - {\bf k}^{\prime},
i, i^{\prime}, \lambda)\right|^2
\delta(\epsilon - \epsilon_{{\bf k}^{\prime},i^{\prime}})\times
\\\nonumber &&\times
\int\limits_{0}^{\infty}{\rm d}\/\Omega\:
{\rm Im}\,D_{\lambda}({\bf k}-{\bf k}^{\prime},\Omega)\,
G({\bf k}^{\prime},{\rm i}\/\omega_n-{\rm i}\/\omega_\nu)
\times\\\nonumber &&\times
\left(\frac{1}{{\rm i}\/\omega_\nu - \Omega}
- \frac{1}{{\rm i}\/\omega_\nu + \Omega}\right)\;.
\end{eqnarray}
The analysis of (\ref{phonongf}), as performed by Migdal,
shows that the essential values of ${\rm i}\/\omega_n$
and ${\rm i}\/\omega_\nu$ are of order $\omega_{\rm D}$.
This means that small frequencies of order $\omega_{\rm D}$ are
also significant for $\epsilon$. In this case $\epsilon$ can
be neglected in $\delta(\epsilon - \epsilon_{{\bf k},i})$.
The electron Green's function is used in the form given by Eq.\
(\ref{freeelectron}) leading to
\begin{eqnarray}\label{phonongf2}
\Sigma_i({\bf k},{\rm i}\/\omega_n) &=& -{\rm T}\,
\sum\limits_{\omega_\nu}
\sum\limits_{{\bf k}^{\prime}}
\delta(\epsilon_{{\bf k}^{\prime}})
\,\int\limits_{0}^{\infty}{\rm d}\/\Omega\:
\alpha^{2}_{i}({\bf k},{\bf k}^{\prime},\Omega)F(\Omega)
\left(\frac{1}{{\rm i}\/\omega_\nu - \Omega}
- \frac{1}{{\rm i}\/\omega_\nu + \Omega}\right)
\times\\\nonumber &&\times
\int\limits_{-\infty}^{\infty}{\rm d}\/\epsilon\:
\frac{1}{{\rm i}\/\omega_n -{\rm i}\/\omega_\nu -\epsilon}\;,
\end{eqnarray}
where the spectral function of the electron--phonon interaction
\begin{eqnarray}
\alpha^{2}_{i}({\bf k},{\bf k}^{\prime},\Omega)F(\Omega)
&=&
\sum\limits_{i^{\prime},\lambda}
\left| g_{{\bf k}}({\bf k} - {\bf k}^{\prime},
i, i^{\prime}, \lambda)\right|^2
{\rm Im}\,D_{\lambda}({\bf k}-{\bf k}^{\prime},\Omega)
\end{eqnarray}
was introduced.
The sum over the $\omega_\nu = 2\pi\nu\/{\rm T}$
can be easily performed with the result
\begin{eqnarray}\label{sumresult}
-{\rm T}\,\sum\limits_{\omega_\nu}
\left(\frac{1}{{\rm i}\/\omega_\nu - \Omega}
- \frac{1}{{\rm i}\/\omega_\nu + \Omega}\right)
\frac{1}{{\rm i}\/\omega_n -{\rm i}\/\omega_\nu -\epsilon}
&=&\hspace*{2cm}
\end{eqnarray}
\begin{eqnarray}\nonumber \hspace*{2cm}&=&
\frac{N(\Omega)+1-f(\epsilon)}{{\rm i}\/\omega_n-\Omega-\epsilon}
+
\frac{N(\Omega)+f(\epsilon)}{{\rm i}\/\omega_n+\Omega-\epsilon}
\;,
\end{eqnarray}
where $N(\Omega)$ and $f(\epsilon)$ are the Bose and Fermi function,
respectively. To obtain the one--particle Green's function describing the
electron excitation spectrum we analytically continue the self--energy
from the imaginary Matsubara frequencies to the real axis.
This can easily be done by changing ${\rm i}\/\omega_n$ in
Eq.\ (\ref{sumresult}) to $\omega\pm{\rm i}\/\delta$. Consequently, the final
expression for the self--energy reads
\begin{eqnarray}\label{selfenergy}
\Sigma_i^{{\rm R},{\rm A}}({\bf k},\omega) &=&
\int\limits_{0}^{\infty}{\rm d}\/\Omega\:
\sum\limits_{{\bf k}^{\prime}}
\delta(\epsilon_{{\bf k}^{\prime}})
\,\alpha^{2}_{i}({\bf k},{\bf k}^{\prime},\Omega)F(\Omega)
L(\omega\pm {\rm i}\/\delta,\Omega)
\;,
\end{eqnarray}
where
\begin{eqnarray}\label{l}
L(\omega\pm {\rm i}\/\delta,\Omega)
&=&
\int\limits_{-\infty}^{\infty}{\rm d}\/\epsilon\:
\left[
\frac{N(\Omega)+1-f(\epsilon)}{\omega-\Omega-\epsilon\pm {\rm i}\/\delta}
+
\frac{N(\Omega)+f(\epsilon)}{\omega+\Omega-\epsilon\pm {\rm i}\/\delta}
\right]
\;.
\end{eqnarray}
The function $\Sigma^{{\rm R}\,({\rm A})}({\bf k},\omega)$
denotes the retarded (advanced) self--energy which is an analytical
function of the variable $\omega$ in the upper (lower)
half of the complex plane.
The integral in (\ref{l}) can be evaluated analytically, yielding
\begin{eqnarray}\label{l2}
L(\omega,\Omega)
&=&
-2\pi\/{\rm i}\/\left[ N(\Omega)+\frac12 \right] +
\Psi\left( \frac12 + {\rm i}\/\,\frac{\Omega-\omega}{2\pi\/{\rm T}}\right)
-
\Psi\left( \frac12 - {\rm i}\/\,\frac{\Omega+\omega}{2\pi\/{\rm T}}\right)
\,,
\end{eqnarray}
where $\Psi(z)$ is the digamma function.
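Taking the imaginary part of Eq.\ (\ref{l}), with
${\rm Im}\,(x+{\rm i}\/\delta)^{-1}=-\pi\delta(x)$ and
$1-f(\omega-\Omega)=f(\Omega-\omega)$, gives the frequently quoted form
\begin{eqnarray}
{\rm Im}\,L(\omega+{\rm i}\/\delta,\Omega)
&=&
-\pi\left[2N(\Omega)+f(\Omega+\omega)+f(\Omega-\omega)\right]\;,
\end{eqnarray}
so that the scattering part of the self--energy reads
\begin{eqnarray}
{\rm Im}\,\Sigma^{\rm R}(\omega)
&=&
-\pi\,\int\limits_{0}^{\infty}{\rm d}\/\Omega\:
\alpha^{2}(\Omega)F(\Omega)
\left[2N(\Omega)+f(\Omega+\omega)+f(\Omega-\omega)\right]\;,
\end{eqnarray}
where $\alpha^{2}(\Omega)F(\Omega)$ denotes the appropriate
Fermi--surface average of the spectral function introduced above.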
The self--energy $\Sigma_i^{{\rm R},{\rm A}}({\bf k},\omega)$ expressed by
Eq.\ (\ref{selfenergy}) depends only on the direction of the momentum
${\bf k}$ on the Fermi surface. It is convenient to present
this dependence by expanding all functions involved
in this expression over the complete and orthonormal set of functions
introduced by Allen \cite{allen2}. These so--called ``Fermi surface harmonics'',
$\Psi_j({\bf k})$, satisfy the condition
\begin{eqnarray}\label{fortho1}
\sum\limits_{\bf k}\Psi_j({\bf k})\Psi_{j^{\prime}}({\bf k})\,
\delta(\epsilon_{\bf k} - \epsilon ) &=& \delta_{jj^{\prime}}
N(\epsilon)\;,
\end{eqnarray}
where
\begin{eqnarray}\label{fortho2}
N(\epsilon) &=& \sum\limits_{\bf k}\delta(\epsilon_{\bf k} - \epsilon )
\;.
\end{eqnarray}
In terms of this set we write
\begin{eqnarray}\label{sexpansion}
\Sigma_i^{{\rm R},{\rm A}}({\bf k},\omega)
&=&
\sum\limits_{j}\Sigma_{i,j}^{{\rm R},{\rm A}}(\omega)\Psi_j({\bf k})
\\\label{gexpansion}
G^{{\rm R},{\rm A}}_i({\bf k},\omega)
&=&
\sum\limits_{j}G_{i,j}^{{\rm R},{\rm A}}(\epsilon_{\bf k},\omega)
\Psi_j({\bf k})
\\\label{eliashbergfct}
\alpha^{2}_{j,j^{\prime}}({\bf k},\Omega)F(\Omega)
&=&
\sum\limits_{j,j^{\prime}}\sum\limits_{{\bf k}^{\prime},\lambda}
\delta(\epsilon_{{\bf k}^{\prime}})
\left\{
\left| g_{{\bf k}^{\prime}}({\bf k} - {\bf k}^{\prime},
i, i^{\prime}, \lambda)\right|^2
{\rm Im}\,D_{\lambda}({\bf k} - {\bf k}^{\prime},\omega)
\right\}_{jj^{\prime}}
\times\\\nonumber &&\times
\Psi_j({\bf k})\Psi_{j^{\prime}}({\bf k}^{\prime})\;.
\end{eqnarray}
It follows from Eqs.\ (\ref{sexpansion}-\ref{eliashbergfct})
that the self--energy coefficients $\Sigma_{i,j}^{{\rm R},{\rm A}}(\omega)$
are given by
\begin{eqnarray}
\Sigma_{i,j}^{{\rm R},{\rm A}}(\omega)
&=&
\sum\limits_{j^{\prime}}\,\int\limits_{0}^{\infty}{\rm d}\/\Omega\:
\alpha^2_{j,j^{\prime}}(\Omega)F(\Omega)
L(\omega\pm {\rm i}\/\delta,\Omega)
\;,
\end{eqnarray}
where
\begin{eqnarray}
\alpha^2_{j,j^{\prime}}(\Omega)F(\Omega)
&=&
\frac{1}{N(0)}\,
\sum\limits_{{\bf k}}
\sum\limits_{{\bf k}^{\prime}}
\delta(\epsilon_{{\bf k}})
\delta(\epsilon_{{\bf k}^{\prime}})
\alpha^2({\bf k},{\bf k}^{\prime},\Omega)F(\Omega)
\Psi_j({\bf k})\Psi_{j^{\prime}}({\bf k}^{\prime})
\;.
\end{eqnarray}
Here $N(0)$ denotes the density of electron
states on the Fermi surface.
The first Fermi harmonic is $\Psi_0({\bf k})=1$ and the function
\begin{eqnarray}
\alpha^2_{0,0}(\Omega)F(\Omega)
&=&
\frac{1}{N(0)}\,
\sum\limits_{{\bf k}}
\sum\limits_{{\bf k}^{\prime}}
\delta(\epsilon_{{\bf k}})
\delta(\epsilon_{{\bf k}^{\prime}})
\alpha^2({\bf k},{\bf k}^{\prime},\Omega)F(\Omega)
\end{eqnarray}
is obviously the well known Eliashberg spectral
function which determines the superconductivity of
metals in the simple $s$--pairing case.
Introducing the real and imaginary parts of the self--energy,
$\Sigma_{1}({\bf k},\omega)$ and $\Sigma_{2}({\bf k},\omega)$,
respectively, the Green's function becomes
\begin{eqnarray}\label{greensrealimag}
G^{-1}({\bf k},\omega+{\rm i}\/\delta)
&=&
\omega-\epsilon_{\bf k}-\Sigma_{1}({\bf k},\omega+{\rm i}\/\delta)
-{\rm i}\/\Sigma_{2}({\bf k},\omega+{\rm i}\/\delta)
\;.
\end{eqnarray}
The pole of the Green's function determines the spectrum of
one--particle excitations. At small energies Eq.\ (\ref{greensrealimag})
can be rewritten as
\begin{eqnarray}\label{greensrealimag2}
G^{-1}({\bf k},\omega+{\rm i}\/\delta)
&=&
\omega\left(1-
\left.
\frac{\partial \Sigma_{1}({\bf k},\omega)}{\partial \omega}
\right|_{\omega=0}\right)
-{\rm i}\/\/\Sigma_{2}({\bf k},\omega+{\rm i}\/\delta)
\;.
\end{eqnarray}
Then the pole of $G$ occurs at $\omega_0$, which is given by
\begin{eqnarray}
\omega_0
&=&
E_{\bf k}-\frac{\rm i}{2\tau_{\bf k}}
\;,\\
E_{\bf k}
&=&
\left(1-\frac{\partial \Sigma_{1}({\bf k},\omega)}{\partial \omega}\right)^{-1}
\epsilon_{\bf k}
\;,\\\label{oneparticlerelaxation}
\frac{1}{2\tau_{\bf k}}
&=&
-\left(1-\frac{\partial \Sigma_{1}({\bf k},\omega)}{\partial
\omega}\right)^{-1}
\Sigma_{2}({\bf k},E_{\bf k})
\;.
\end{eqnarray}
Here
\begin{eqnarray}
\lambda_{\bf k}
&=&
-\left. \frac{\partial \Sigma_{1}({\bf k},\omega)}{\partial
\omega}\right|_{\omega=0}
\end{eqnarray}
describes the change of the effective mass of
the electrons, while $\frac{1}{2\tau_{\bf k}}$ describes their
relaxation rate.
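At zero temperature the frequency derivative entering
$\lambda_{\bf k}$ reduces, after averaging over the Fermi surface
(the $j=j^{\prime}=0$ component
$\alpha^{2}_{0,0}(\Omega)F(\Omega)$ introduced above), to the familiar
mass--enhancement parameter
\begin{eqnarray}
\lambda
&=&
2\,\int\limits_{0}^{\infty}\frac{{\rm d}\/\Omega}{\Omega}\:
\alpha^{2}_{0,0}(\Omega)F(\Omega)\;,
\end{eqnarray}
a standard result of electron--phonon theory which we quote for
orientation.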
All functions can be rewritten in
terms of Fermi harmonics and the spectral function
$\alpha^2_{j,j^{\prime}}(\Omega)F(\Omega)$. We shall turn
to this problem later when we discuss the conductivity
of metals in the presence of electron--phonon interaction.
As usual we write the conductivity of a metal in the presence
of electron--phonon interaction in terms of the analytically
continued electromagnetic kernel $K_{\alpha\beta}(\omega)$,
\begin{eqnarray}\label{kernel}
\sigma_{\alpha\beta}(\omega) &=&
\frac{e^2K_{\alpha\beta}(\omega)}{4\pi\/{\rm i}\/\omega}\;,
\end{eqnarray}
which, in turn, is expressed through the one--particle
Green's function $G_{i}({\bf k},\omega)$ and corresponding vertex function
$\Gamma_{\beta}$. In the framework of thermodynamic perturbation
theory the expression for $K_{\alpha\beta}(\omega)$
is
\begin{eqnarray}\label{kernel2}
K_{\alpha\beta}(\/{\rm i}\/\omega_n)
&=&
{\rm T}\sum\limits_{\omega_{n^{\prime}}}
\sum\limits_{{\bf k}^{\prime}}\upsilon^{\alpha}_{{\bf k}^{\prime}}\,
G({\bf k}^{\prime},{\rm i}\/\omega_{n^{\prime}})
G({\bf k}^{\prime},{\rm i}\/\omega_{n^{\prime}}+{\rm i}\/\omega)
\times\\\nonumber &&\times
\Gamma_{\beta}({\bf k}^{\prime},{\rm i}\/\omega_{n^{\prime}},
{\rm i}\/\omega_{n^{\prime}}+{\rm i}\/\omega)\;,
\end{eqnarray}
where $\upsilon^{\alpha}_{{\bf k}^{\prime}}$ is the $\alpha$--component
of the electron velocity.
Using the ladder diagram approximation the equation for the vertex
function is written as \cite{holstein}
\begin{eqnarray}\label{vortex}
\Gamma_{\beta}({\bf k}^{\prime},{\rm i}\/\omega_{n^{\prime}},
{\rm i}\/\omega_{n^{\prime}}+{\rm i}\/\omega_n)
&=&
\upsilon^{\beta}_{{\bf k}^{\prime}}+
{\rm T}\sum\limits_{{\bf k}^{\prime\prime},n^{\prime\prime}}
\left| \left\langle
g_{{\bf k}^{\prime\prime}}({\bf k}^{\prime} - {\bf k}^{\prime\prime},
i, i^{\prime}, \lambda)
\right\rangle
\right|^2
\times\\\nonumber &&\times
D_{\lambda}({\bf k}^{\prime} - {\bf k}^{\prime\prime},
{\rm i}\/\omega_{n^{\prime}}-{\rm i}\/\omega_{n^{\prime\prime}})
G({\bf k}^{\prime\prime},{\rm i}\/\omega_{n^{\prime\prime}})
G({\bf k}^{\prime\prime},{\rm i}\/\omega_{n^{\prime\prime}}
+{\rm i}\/\omega_n)
\times\\\nonumber &&\times
\Gamma_{\beta}({\bf k}^{\prime\prime},{\rm i}\/\omega_{n^{\prime\prime}},
{\rm i}\/\omega_{n^{\prime\prime}}+{\rm i}\/\omega_n)
\;.
\end{eqnarray}
Before we solve Eqs.\ (\ref{kernel2}) and (\ref{vortex}) we
simplify them somewhat. Firstly, as the conductivity of
any metal in the absence of a magnetic field can be diagonalised
in the appropriate representation we omit in the following
the indices $\alpha$ and $\beta$ considering the conductivity as a
scalar.
This is done bearing in mind that the
absolute value of the conductivity in
non--cubic crystals is anisotropic and that the corresponding
functions determining the temperature and frequency
dependence of $\sigma(\omega,{\rm T})$ reflect this anisotropy. Secondly,
we omit the electron band indices, taking into account that interband
transitions can be calculated separately if needed. We also omit
the electron spin index, multiplying the sum over the electron
momentum ${\bf k}$ by two.
After that Eq.\ (\ref{vortex}) is rewritten in a simplified form
\begin{eqnarray}
\Gamma_x({\bf k}^{\prime},{\rm i}\/\omega_{n^{\prime}},
{\rm i}\/\omega_{n^{\prime}}+{\rm i}\/\omega_n)
&=&
\upsilon^{x}_{{\bf k}^{\prime}}+
2{\rm T}\sum\limits_{{\bf k}^{\prime\prime}}
\sum\limits_{n^{\prime\prime}}
\upsilon^{x}_{{\bf k}^{\prime\prime}}
\int\limits_{-\infty}^{\infty}{\rm d}\/\epsilon\:
\delta(\epsilon - \epsilon_{{\bf k}^{\prime\prime}})
\times\\\nonumber &&\times
\frac{1}{\pi}
\int\limits_{0}^{\infty}{\rm d}\/\Omega\:
\alpha^2({\bf k}^{\prime},{\bf k}^{\prime\prime})F(\Omega)
\times\\\nonumber &&\times
\left(
\frac{1}{{\rm i}\/\omega_{n^{\prime}} -
{\rm i}\/\omega_{n^{\prime\prime}} - \Omega}-
\frac{1}{{\rm i}\/\omega_{n^{\prime}} -
{\rm i}\/\omega_{n^{\prime\prime}} + \Omega}
\right)
\times\\\nonumber &&\times
G({\bf k}^{\prime\prime},{\rm i}\/\omega_{n^{\prime\prime}})
G({\bf k}^{\prime\prime},{\rm i}\/\omega_{n^{\prime\prime}}
+{\rm i}\/\omega_n)
\times\\\nonumber &&\times
\Gamma_{x}({\bf k}^{\prime\prime},{\rm i}\/\omega_{n^{\prime\prime}},
{\rm i}\/\omega_{n^{\prime\prime}}+{\rm i}\/\omega_n)
\;.
\end{eqnarray}
To establish some important steps in the derivation of
the general formula for $\sigma(\omega)$ we consider, as a first step,
the case where the vertex correction to the
bare vertex $\Gamma_{\bf k}^{x}$ can be neglected. Then the
expression for the electromagnetic response kernel
$K({\rm i}\/\omega_n)$ becomes
\begin{eqnarray}
K_{xx}({\rm i}\/\omega_n)&=&
2{\rm T}\sum\limits_{{\bf k}^{\prime}}
\sum\limits_{\omega_{n^{\prime}}}
\int\limits_{-\infty}^{\infty}{\rm d}\/\epsilon\:
\delta(\epsilon - \epsilon_{{\bf k}^{\prime}})
(\upsilon_{{\bf k}^{\prime}}^{x})^2
\times\\\nonumber &&\times
G({\bf k}^{\prime},{\rm i}\/\omega_{n^{\prime}})
G({\bf k}^{\prime},{\rm i}\/\omega_{n^{\prime}}
+{\rm i}\/\omega_n)
\;.
\end{eqnarray}
We can now use the Poisson summation formula
\begin{eqnarray}
\sum\limits_{n=-\infty}^{\infty}F({\rm i}\/\omega_n)
&=&
-\frac{1}{2\pi\/{\rm i}\/\/{\rm T}\/}\int\limits_{ C}{\rm d}\/\omega\:
\frac{F(\omega)}{{\rm e}^{\frac{\omega}{\/{\rm T}\/}}+1}\;,
\end{eqnarray}
where the contour ${ C}$ encircles the imaginary
$\omega$--axis. After that we expand the $\omega$--contour
to infinity, picking up contributions from the singularities
of our integrand at ${\rm i}\/\omega_{n^{\prime}}=\epsilon_{{\bf k}^{\prime}}$
and ${\rm i}\/\omega_{n^{\prime}}=\epsilon_{{\bf k}^{\prime}}-{\rm i}\/\omega_n$.
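As a quick numerical illustration of this summation technique (a minimal
Python sketch, not part of the original derivation; all numbers are
hypothetical and units are chosen so that $k_{\rm B}=\hbar=1$), the
symmetrically truncated fermionic sum
${\rm T}\sum_n({\rm i}\omega_n-\epsilon)^{-1}$ converges to
$f(\epsilon)-\frac{1}{2}$, which is exactly what the contour construction
encodes:
\begin{verbatim}
import numpy as np

T, eps = 1.0, 0.7              # hypothetical temperature and energy
N = 200000                     # Matsubara cutoff
n = np.arange(-N, N)
wn = (2*n + 1)*np.pi*T         # fermionic Matsubara frequencies
lhs = (T/(1j*wn - eps)).sum().real
rhs = 1.0/(np.exp(eps/T) + 1.0) - 0.5
print(lhs, rhs)                # both ~ -0.1682
\end{verbatim}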
As a result we find after rather lengthy but simple
calculation the analytically continued electromagnetic kernel
as
\begin{eqnarray}
K(\omega)&=&
2\sum\limits_{{\bf k}^{\prime}}(\upsilon_{{\bf k}^{\prime}}^{x})^2
\int\limits_{-\infty}^{\infty}{\rm d}\/\epsilon\:
\delta(\epsilon - \epsilon_{{\bf k}^{\prime}})
\int\limits_{-\infty}^{\infty}{\rm d}\/\omega^{\prime}\:
\times\\\nonumber &&\times
\left(
{\rm tanh}\left(\frac{\omega^{\prime}}{2\/{\rm T}}\right) -
{\rm tanh}\left(\frac{\omega^{\prime}+\omega}{2\/{\rm T}}\right)
\right)\times
\\\nonumber &&\times
\Pi_{0}^{\rm RA}({\bf k}^{\prime},\omega^{\prime},\omega)\;.
\end{eqnarray}
Here $\Pi_{0}^{\rm RA}({\bf k}^{\prime},\omega^{\prime},\omega)$ has
the form
\begin{eqnarray}
\Pi_{0}^{\rm RA}({\bf k}^{\prime},\omega^{\prime},\omega)&=&
G^{\rm R}({\bf k}^{\prime},\omega^\prime+\omega)
G^{\rm A}({\bf k}^{\prime},\omega^\prime)\;,
\end{eqnarray}
where $G^{\rm R}({\bf k}^{\prime},\omega^\prime+\omega)$ and
$G^{\rm A}({\bf k}^{\prime},\omega^\prime)$
are the retarded and advanced Green's function, respectively.
Their
self--energy parts are given by
Eqs.\ (\ref{selfenergy} - \ref{l2}).
Just as in the case of the one--particle Green's function, the
relevant values of $\omega$ are small in comparison to the
Fermi energy; hence the value of $\epsilon$ in $\delta(\epsilon
-\epsilon_{\rm k})$ can be neglected, and the integration over
$\epsilon$ can be carried out.
As a result we find for the conductivity
\begin{eqnarray}\label{s}
\sigma(\omega)&=&
\frac{e^2}{4\pi\/{\rm i}\/\omega}
\sum\limits_{{\bf k}}(\upsilon_{{\bf k}}^{x})^2
\delta(\epsilon_{\bf k})
\int\limits_{-\infty}^{\infty}{\rm d}\/\omega^{\prime}\:
\left(
{\rm tanh}\left(\frac{\omega^{\prime}+\omega}{2\/{\rm T}}\right) -
{\rm tanh}\left(\frac{\omega^{\prime}}{2\/{\rm T}}\right)
\right)\times
\\\nonumber &&\times
\Pi_{0}({\hat{\bf k}}_{\rm F},\omega^{\prime},\omega)\;,
\end{eqnarray}
where the function $\Pi_{0}({\hat{\bf k}}_{\rm F},\omega^{\prime},\omega)$
depends on the position of the momentum ${\hat{\bf k}}_{\rm F}$ on
the Fermi surface and is given by
\begin{eqnarray}\label{p0def}
\Pi_{0}({\hat{\bf k}}_{\rm F},\omega^{\prime},\omega)
&=&
\frac{1}{\omega+\Sigma^{\rm R}({\hat{\bf k}}_{\rm F},\omega+\omega^{\prime})
-\Sigma^{\rm A}({\hat{\bf k}}_{\rm F},\omega^{\prime})}\;.
\end{eqnarray}
For the quasi--isotropic case we get, of course, the well known result
for the optical conductivity \cite{allen2,fuenfzehn}
\begin{eqnarray}\label{s1}
\sigma(\omega)&=&
\frac{\omega_{\rm pl}^{2}}{4\pi\/{\rm i}\/\omega}\,
\int\limits_{-\infty}^{\infty}{\rm d}\/\omega^{\prime}\:
\left(
{\rm tanh}\left(\frac{\omega^{\prime}+\omega}{2\/{\rm T}}\right) -
{\rm tanh}\left(\frac{\omega^{\prime}}{2\/{\rm T}}\right)
\right)
\Pi_{0}(\omega,\omega^{\prime})\;,
\end{eqnarray}
where the plasma frequency of electrons $\omega_{\rm pl}$ is
\begin{eqnarray}
\omega_{\rm pl}^{2}
&=&
2e^2\sum\limits_{\rm k}(\upsilon_{{\bf k}}^{x})^2\delta(\epsilon_{\bf k})
\end{eqnarray}
and the function $\Pi_{0}(\omega,\omega^{\prime})$ is
\begin{eqnarray}\label{s3}
\Pi_{0}(\omega,\omega^{\prime})
&=&
\frac{1}{\omega+\Sigma^{\rm R}(\omega+\omega^{\prime})
-\Sigma^{\rm A}(\omega^{\prime})+
\frac{\rm i}{\tau_{\rm imp}}}\;.
\end{eqnarray}
Here we introduced the relaxation rate from
impurity scattering $\frac{1}{\tau_{\rm imp}}$.
For the anisotropic case we should expand all functions under the
integral in Eq.\ (\ref{s}) over the Fermi harmonics. The value
$(\upsilon_{{\bf k}}^{x})^2$ is just the square of the Fermi harmonic
of the order $N=1$ (for details see Ref.\ \cite{allen2}).
The expansion of the function
$\Pi_{0}({\hat{\bf k}}_{\rm F},\omega^{\prime},\omega)$
can be written as
\begin{eqnarray}
\Pi_{0}({\hat{\bf k}}_{\rm F},\omega^{\prime},\omega)
&=&
\sum\limits_{j}\Pi_{j}(\omega^{\prime},\omega)F_{j}({\hat{\bf k}}_{\rm F})
\;.
\end{eqnarray}
The non--zero result for the conductivity will arise from the first
harmonic, $F_{j}({\hat{\bf k}}_{\rm F}) =1$, and
from the harmonics with even order
$N\ge 2$. The first harmonic gives the same result as
found for the isotropic case. An example of a higher harmonic is the one which
transforms as the
representation $\Gamma_{12}$ of the crystal symmetry. It has the form
\begin{eqnarray}
\Psi_{j}^{\Gamma_{12}}
&=&
\frac{v_{x}^2-v_{y}^2}{\left\langle(v_x^2-v_y^2)^2\right\rangle^{1/2}}
\;.
\end{eqnarray}
At this point we are not certain to what extent the higher harmonics
are relevant in HTSC systems, but it is known that their influence is
small in normal metals.
This question should be considered in more detail
but is beyond the scope of this paper.
While the analytic continuation of Eqs. (\ref{kernel}), (\ref{kernel2})
in the zero--order approximation and absence of the
vertex function $\Gamma({\bf k},{\rm i}\/\omega_n,{\rm i}\/\omega_m)$
is straightforward, it becomes a non--trivial task in the presence
of $\Gamma({\bf k},{\rm i}\/\omega_n,{\rm i}\/\omega_m)$.
The difficulty arises from the existence of a multitude of functions
which can be obtained by analytical continuation in
one variable while the other is fixed. However, this difficulty can be
avoided by changing, as above, the sum over the Matsubara
frequencies to a contour integral. The contour consists of (three)
circuits around the imaginary axis of the variable $\omega^{\prime}$,
avoiding all
poles and branch cuts of the integrand. Using the methods developed
in Refs.\ \cite{holstein,sechzehn} we obtain
\begin{eqnarray}
\sigma(\omega)
&=&
\frac{2}{4\pi\/{\rm i}\/\omega}\,\sum\limits_{\bf k}
\int\limits_{-\infty}^{\infty}{\rm d}\/\epsilon\:
\delta(\epsilon-\epsilon_{\bf k})\upsilon_{{\bf k}}^{x}
\int\limits_{-\infty}^{\infty}
\frac{{\rm d}\/\omega^{\prime}}{2\pi\/{\rm i}}\:
\left(
{\rm tanh}\left(\frac{\omega^{\prime}+\omega}{2\/{\rm T}}\right) -
{\rm tanh}\left(\frac{\omega^{\prime}}{2\/{\rm T}}\right)
\right)
\times\\\nonumber &&\times
\Pi_{0}^{{\rm R}{\rm A}}({\bf k},\omega^{\prime},\omega)
\Gamma_{x}({\bf k},\omega^{\prime},\omega)
\end{eqnarray}
and
\begin{eqnarray}\label{gammaselfconsistent}
\Gamma_{x}({\bf k},\omega^{\prime},\omega)
&=&
\upsilon_{{\bf k}}^{x}+2\,\sum\limits_{{\bf k}^{\prime}}
\int\limits_{-\infty}^{\infty}{\rm d}\/\epsilon\:
\delta(\epsilon-\epsilon_{{\bf k}^{\prime}})
\int\limits_{-\infty}^{\infty}
\frac{{\rm d}\/\omega^{\prime\prime}}{2\pi\/{\rm i}}\:
\times\\\nonumber &&\times
\left[
{\rm tanh}\left(\frac{\omega^{\prime\prime}+\omega}{2\/{\rm T}}\right)
\lambda_{{\bf k}{\bf k}^{\prime}}(\omega^{\prime}-
\omega^{\prime\prime} +{\rm i}\/\delta)
-
{\rm tanh}\left(\frac{\omega^{\prime\prime}}{2\/{\rm T}}\right)
\lambda_{{\bf k}{\bf k}^{\prime}}(\omega^{\prime\prime}-
\omega^{\prime} -{\rm i}\/\delta)
+\right.
\\\nonumber&&
\left. +
{\rm coth}\left(\frac{\omega^{\prime}-\omega^{\prime\prime}}{
2\/{\rm T}}\right)\left(
\lambda_{{\bf k}{\bf k}^{\prime}}(\omega^{\prime\prime}-
\omega^{\prime} +{\rm i}\/\delta) -
\lambda_{{\bf k}{\bf k}^{\prime}}(\omega^{\prime}-
\omega^{\prime\prime} -{\rm i}\/\delta)
\right)\right]
\times\\\nonumber &&\times
\Pi_{0}^{{\rm R}{\rm A}}({\bf k}^{\prime},\omega^{\prime\prime},\omega)
\Gamma_{x}({\bf k}^{\prime},\omega^{\prime\prime},\omega)
\;,
\end{eqnarray}
where the function
\begin{eqnarray}
\lambda_{{\bf k}{\bf k}^{\prime}}(\omega)
&=&
\int\limits_{0}^{\infty}{\rm d}\/\Omega\:
\alpha^2({\bf k},{\bf k}^{\prime},\Omega)F(\Omega)
\left[
\frac{1}{\omega-\Omega}-\frac{1}{\omega+\Omega}
\right]
\end{eqnarray}
was introduced.
Taking into account that apart from
$\Pi_{0}({\bf k}^{\prime},\omega^{\prime\prime},\omega)$
all functions under the integral on the
right--hand side of Eq.\ (\ref{gammaselfconsistent})
depend only weakly on the variable $\epsilon_{{\bf k}^{\prime}}$
we can integrate over this variable and find
\begin{eqnarray}
\Gamma_{x}({\hat{\bf k}}_{\rm F},\omega^{\prime},\omega)
&=&
\upsilon_{{\bf k}}^{x}+\sum\limits_{{\bf k}^{\prime}}
\delta(\epsilon_{{\bf k}^{\prime}})
\int\limits_{-\infty}^{\infty}
{\rm d}\/\omega^{\prime\prime}\:
\Pi_{0}({\hat{\bf k}}^{\prime}_{\rm F},\omega^{\prime\prime},\omega)
\times\\\nonumber &&\times
\left[
I(\omega^{\prime}-{\rm i}\/\delta,\Omega,\omega^{\prime\prime})
-
I(\omega^{\prime}+\omega+{\rm i}\/\delta,\Omega,\omega^{\prime\prime})
\right]
\Gamma_{x}({\hat{\bf k}}^{\prime}_{\rm F},\omega^{\prime\prime},\omega)
\;.
\end{eqnarray}
Here $\Pi_{0}({\hat{\bf k}}^{\prime}_{\rm F},\omega^{\prime\prime},\omega)$
is defined by Eq.\ (\ref{p0def}) and $I(\omega,\Omega,\omega^{\prime})$
is
\begin{eqnarray}
I(\omega,\Omega,\omega^{\prime})
&=&
\frac{1-f(\omega^{\prime})+N(\Omega)}{\omega-\Omega-\omega^{\prime}}
+
\frac{f(\omega^{\prime})+N(\Omega)}{\omega+\Omega-\omega^{\prime}}
\;.
\end{eqnarray}
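Since the kernel $I(\omega,\Omega,\omega^{\prime})$ recurs in all the
transport equations below, we note its direct transcription into code (a
minimal Python sketch; we take $f$ and $N$ to be the Fermi and Bose
occupation factors, which is the standard reading of these combinations,
as they are not written out explicitly above):
\begin{verbatim}
import numpy as np

def fermi(w, T):
    return 1.0/(np.exp(w/T) + 1.0)

def bose(W, T):
    return 1.0/(np.exp(W/T) - 1.0)

def I(w, W, wp, T):
    # pass w with a small +/- 1j*delta to select the branch
    return ((1.0 - fermi(wp, T) + bose(W, T))/(w - W - wp)
            + (fermi(wp, T) + bose(W, T))/(w + W - wp))
\end{verbatim}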
At this stage it is useful to introduce a new function
$\gamma_x({\hat{\bf k}}_{\rm F},\omega^{\prime},\omega)$,
\begin{eqnarray}
\gamma_x({\hat{\bf k}}_{\rm F},\omega^{\prime},\omega)
&=&
\Pi_{0}({\hat{\bf k}}_{\rm F},\omega^{\prime},\omega)
\Gamma_{x}({\hat{\bf k}}_{\rm F},\omega^{\prime},\omega)
\;.
\end{eqnarray}
In terms of this function the conductivity can be expressed as
\begin{eqnarray}
\sigma(\omega)
&=&
\frac{2}{4\pi\/{\rm i}\/\omega}\,\sum\limits_{\bf k}
\delta(\epsilon_{\bf k})\upsilon_{{\bf k}}^{x}
\int\limits_{-\infty}^{\infty}
{\rm d}\/\omega^{\prime}\:
\left(
{\rm tanh}\left(\frac{\omega^{\prime}+\omega}{2\/{\rm T}}\right) -
{\rm tanh}\left(\frac{\omega^{\prime}}{2\/{\rm T}}\right)
\right)
\times\\\nonumber &&\times
\gamma_x({\hat{\bf k}}_{\rm F},\omega^{\prime},\omega)
\;.
\end{eqnarray}
For $\gamma_x({\hat{\bf k}}_{\rm F},\omega^{\prime},\omega)$
one can write the equation
\begin{eqnarray}\label{smallgamma}
\omega\gamma_x({\hat{\bf k}}_{\rm F},\omega^{\prime},\omega)
&=&
\upsilon_{{\bf k}}^{x}+\sum\limits_{{\bf k}^{\prime}}
\delta(\epsilon_{{\bf k}^{\prime}})
\int\limits_{0}^{\infty}{\rm d}\/\Omega\:
\alpha^2({\bf k},{\bf k}^{\prime},\Omega)F(\Omega)
\times\\\nonumber &&\times
\int\limits_{-\infty}^{\infty}{\rm d}\/\omega^{\prime\prime}
\left[
I(\omega^{\prime}-{\rm i}\/\delta,\Omega,\omega^{\prime\prime})-
I(\omega^{\prime}+\omega+{\rm i}\/\delta,\Omega,\omega^{\prime\prime})
\right]
\times\\\nonumber &&\times
\left(
\gamma_x({\hat{\bf k}}_{\rm F}^{\prime},\omega^{\prime\prime},\omega)
-
\gamma_x({\hat{\bf k}}_{\rm F},\omega^{\prime},\omega)\right)
\;.
\end{eqnarray}
Now we use the expansion of the functions in
Eq.\ (\ref{smallgamma}) in Fermi harmonics. This yields
\begin{eqnarray}\label{smallgamma2}
\omega\gamma_j(\omega^{\prime},\omega)
&=&
\left\langle\upsilon_{x}^2\right\rangle^{1/2}\delta_{jx}+
\sum\limits_{j^{\prime}}
\int\limits_{0}^{\infty}{\rm d}\/\Omega\:
\times\\\nonumber &&\times
\int\limits_{-\infty}^{\infty}{\rm d}\/\omega^{\prime\prime}
\left[
I(\omega^{\prime}-{\rm i}\/\delta,\Omega,\omega^{\prime\prime})-
I(\omega^{\prime}+\omega+{\rm i}\/\delta,\Omega,\omega^{\prime\prime})
\right]
\times\\\nonumber &&\times
\left(
\alpha^2_{jj^{\prime}}(\Omega)F(\Omega)
\gamma_{j^{\prime}}(\omega^{\prime\prime},\omega)
-\sum\limits_{j^{\prime\prime}} C_{jj^{\prime}j^{\prime\prime}}
\alpha^2_{j^{\prime\prime}0}(\Omega)F(\Omega)
\gamma_{j^{\prime}}(\omega^{\prime\prime},\omega)
\right)
\;,
\end{eqnarray}
where we used the Clebsch--Gordan coefficients
\begin{eqnarray}
C_{jj^{\prime}j^{\prime\prime}}
&=&
\frac{1}{N(0)}\sum\limits_{\bf k}\delta(\epsilon_{{\bf k}})
\Psi_j({\bf k})\Psi_{j^{\prime}}({\bf k})
\Psi_{j^{\prime\prime}}({\bf k})
\;.
\end{eqnarray}
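To make these Fermi--surface averages concrete, the following minimal
sketch evaluates a few $C_{jj^{\prime}j^{\prime\prime}}$ by Monte Carlo for
an idealized circular Fermi surface with normalized harmonics $\Psi_0=1$,
$\Psi_m=\sqrt{2}\cos(m\theta)$ (an illustrative toy model only, not one of
the HTSC band structures considered in this paper):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0*np.pi, 10**6)
Psi = {0: np.ones_like(theta),
       2: np.sqrt(2.0)*np.cos(2.0*theta),
       4: np.sqrt(2.0)*np.cos(4.0*theta)}

def C(j, jp, jpp):             # Fermi-surface average of a triple product
    return (Psi[j]*Psi[jp]*Psi[jpp]).mean()

print(C(2, 2, 0))              # -> 1 (normalization)
print(C(2, 2, 4))              # -> 1/sqrt(2), the only nontrivial coupling
\end{verbatim}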
All following calculations are greatly simplified if
\begin{eqnarray}
\alpha^2_{jj^{\prime}}(\Omega)F(\Omega)
&=&
\frac{1}{N(0)}\sum\limits_{\bf k}\sum\limits_{{\bf k}^{\prime}}
\delta(\epsilon_{{\bf k}})\delta(\epsilon_{{\bf k}^{\prime}})
\alpha^2({\bf k},{\bf k}^{\prime},\Omega)F(\Omega)
\Psi_j({\bf k})\Psi_{j^{\prime}}({\bf k}^{\prime})
\end{eqnarray}
has the diagonal form
\begin{eqnarray}
\alpha^2_{jj^{\prime}}(\Omega)F(\Omega)
&=&
\alpha^2_{j}(\Omega)F(\Omega)\delta_{jj^{\prime}}
\;.
\end{eqnarray}
This assumption is well satisfied in ordinary metals, where
$\alpha^2({\bf k},{\bf k}^{\prime},\Omega)F(\Omega)$ depends
mainly on the difference of the momenta ${\bf k}-{\bf k}^{\prime}$.
There are also some restrictions on the non--diagonal
matrix elements $\alpha^2_{jj^{\prime}}(\Omega)F(\Omega)$,
$j \not= j^{\prime}$, connected with the crystal symmetries
(see e.\,g.\ Ref.\ \cite{allen2}). Nevertheless, it is difficult
to say something definite about the accuracy of this assumption
in HTSC systems without concrete calculations of the functions
$\alpha^2({\bf k},{\bf k}^{\prime},\Omega)F(\Omega)$.
Such calculations have not been performed so far.
We use this approximation as a first step.
In addition, we restrict ourselves to the use of
the first two Fermi harmonics.
It is useful to search for the solution of Eq.\ (\ref{smallgamma2})
in the form
\begin{eqnarray}
\gamma_1(\omega^{\prime},\omega)
&=&
\frac{1}{\omega+\Sigma_{\rm tr}^{\rm R}(\omega^{\prime}+\omega,
\omega^{\prime})-
\Sigma_{\rm tr}^{\rm A}(\omega^{\prime},\omega)}
\;,
\end{eqnarray}
where, after lengthy but simple calculations,
the equations for the transport ``self--energies'' can be written in the
form
\begin{eqnarray}\label{srtr}
\Sigma_{\rm tr}^{\rm R}(\omega^{\prime}+\omega,\omega^{\prime})
&=&
\frac{1}{N(0)}
\sum\limits_{\bf k}\sum\limits_{{\bf k}^{\prime},\lambda}
\int\limits_{0}^{\infty}{\rm d}\/\Omega\:
\int\limits_{-\infty}^{\infty}{\rm d}\/\omega^{\prime\prime}\:
I(\omega^{\prime}+\omega+{\rm i}\/\delta,\Omega,\omega^{\prime\prime})
\left|g_{{\bf k}}({\bf k}^{\prime}-{\bf k},\lambda)\right|^2
\times\\\nonumber &&\times
{\rm Im}\/D_{\lambda}({\bf k}-{\bf k}^{\prime},\omega^{\prime})
\frac{\upsilon_{x}^2}{\left\langle\upsilon_{x}^2\right\rangle}
\left(
1-\frac{\upsilon_x^{\prime}}{\upsilon_{x}}
\frac{\omega+\Sigma_{\rm tr}^{\rm R}(\omega^{\prime}+\omega,
\omega^{\prime})-
\Sigma_{\rm tr}^{\rm A}(\omega^{\prime},\omega)}{\omega+
\Sigma_{\rm tr}^{\rm R}(\omega^{\prime\prime}+\omega,
\omega^{\prime})-
\Sigma_{\rm tr}^{\rm A}(\omega^{\prime\prime},\omega^{\prime})}
\right)
\end{eqnarray}
and
\begin{eqnarray}\label{satr}
\Sigma_{\rm tr}^{\rm A}(\omega^{\prime},\omega)
&=&
\frac{1}{N(0)}
\sum\limits_{\bf k}\sum\limits_{{\bf k}^{\prime},\lambda}
\int\limits_{0}^{\infty}{\rm d}\/\Omega\:
\int\limits_{-\infty}^{\infty}{\rm d}\/\omega^{\prime\prime}\:
I(\omega^{\prime}-{\rm i}\/\delta,\Omega,\omega^{\prime\prime})
\left|g_{{\bf k}}({\bf k}^{\prime}-{\bf k},\lambda)\right|^2
\times\\\nonumber &&\times
{\rm Im}\/D_{\lambda}({\bf k}-{\bf k}^{\prime},\omega^{\prime})
\frac{\upsilon_{x}^2}{\left\langle\upsilon_{x}^2\right\rangle}
\left(
1-\frac{\upsilon_x^{\prime}}{\upsilon_{x}}
\frac{\omega+\Sigma_{\rm tr}^{\rm R}(\omega^{\prime}+\omega,
\omega^{\prime})-
\Sigma_{\rm tr}^{\rm A}(\omega^{\prime},\omega)}{\omega+
\Sigma_{\rm tr}^{\rm R}(\omega^{\prime\prime}+\omega,
\omega^{\prime})-
\Sigma_{\rm tr}^{\rm A}(\omega^{\prime\prime},\omega^{\prime})}
\right)
\;.
\end{eqnarray}
Eqs.\ (\ref{srtr}) and (\ref{satr}) are still implicit
integral equations for $\Sigma_{\rm tr}^{{\rm R},{\rm A}}$.
However, the integrand on the right hand side of Eqs.\
(\ref{srtr}) and (\ref{satr}) depends only weakly on the
functions $\Sigma_{\rm tr}^{{\rm R},{\rm A}}$. The detailed numerical
analysis of these equations, given by Allen \cite{allen} for the
case ${\rm T} = 0$, has shown that the assumption
\begin{eqnarray}
\frac{\omega+\Sigma_{\rm tr}^{\rm R}(\omega^{\prime}+\omega,
\omega^{\prime})-
\Sigma_{\rm tr}^{\rm A}(\omega^{\prime},\omega)}{\omega+
\Sigma_{\rm tr}^{\rm R}(\omega^{\prime\prime}+\omega,
\omega^{\prime})-
\Sigma_{\rm tr}^{\rm A}(\omega^{\prime\prime},\omega^{\prime})}
&=&1
\end{eqnarray}
is satisfied with good accuracy.
The only difference in Eq.\ (\ref{srtr}) compared to
Eqs.\ (\ref{s1}) and (\ref{s3}) is the appearance of the
transport ``self--energies'' $\Sigma_{\rm tr}^{{\rm R},{\rm A}}$
instead of the one--particle self--energies.
Accordingly, the equations for $\Sigma_{\rm tr}^{{\rm R},{\rm A}}$ can be
written in a form which largely resembles the equations for the
one--particle self--energies, namely
\begin{eqnarray}\label{str1}
\Sigma_{\rm tr}^{\rm R}(\omega^{\prime}+\omega,\omega)
&=&
\int\limits_{0}^{\infty}{\rm d}\/\Omega\:
\alpha^2_{\rm tr}(\Omega)F(\Omega)
\int\limits_{-\infty}^{\infty}{\rm d}\/\omega^{\prime\prime}\:
I(\omega^{\prime}+\omega+{\rm i}\/\delta,\Omega,\omega^{\prime\prime})
\;,
\\\label{str2}
\Sigma_{\rm tr}^{\rm A}(\omega^{\prime},\omega)
&=&
\int\limits_{0}^{\infty}{\rm d}\/\Omega\:
\alpha^2_{\rm tr}(\Omega)F(\Omega)
\int\limits_{-\infty}^{\infty}{\rm d}\/\omega^{\prime\prime}\:
I(\omega^{\prime}-{\rm i}\/\delta,\Omega,\omega^{\prime\prime})
\;,
\end{eqnarray}
and
\begin{eqnarray}
\alpha^2_{\rm tr}(\Omega)F(\Omega)
&=&
\frac{1}{N(0)}
\sum\limits_{\bf k}\sum\limits_{{\bf k}^{\prime},\lambda}
\delta(\epsilon_{{\bf k}})\delta(\epsilon_{{\bf k}^{\prime}})
\int\limits_{-\infty}^{\infty}{\rm d}\/\omega^{\prime\prime}\:
\left|g_{{\bf k}}({\bf k}-{\bf k}^{\prime},\lambda)\right|^2
{\rm Im}\/D_{\lambda}(\Omega)
\times\\\nonumber &&\times
\frac{\upsilon_{{\bf k}}^2}{\left\langle\upsilon_{{\bf k}}^2\right\rangle}
\left(
1-\frac{\upsilon_{{\bf k}^{\prime}}}{\upsilon_{{\bf k}}}
\right)
\;.
\end{eqnarray}
The expression for the conductivity can then be written in a form
which is formally identical to Eq.\ (\ref{s1})
obtained without vertex corrections
\begin{eqnarray}\label{sv1}
\sigma(\omega)&=&
\frac{\omega_{\rm pl}^{2}}{4\pi\/{\rm i}\/\omega}\,
\int\limits_{-\infty}^{\infty}{\rm d}\/\omega^{\prime}\:
\left(
{\rm tanh}\left(\frac{\omega^{\prime}+\omega}{2\/{\rm T}}\right) -
{\rm tanh}\left(\frac{\omega^{\prime}}{2\/{\rm T}}\right)
\right)
\times\\\nonumber &&\times
\frac{1}{\omega+\Sigma_{\rm tr}^{\rm R}(\omega^{\prime}+\omega)-
\Sigma_{\rm tr}^{\rm A}(\omega^{\prime})+
\frac{{\rm i}}{\tau_{\rm imp}}}
\;.
\end{eqnarray}
Equation (\ref{sv1}) has just the form which is
normally used for the calculation of the transport properties
of metals (see e.\,g.\ \cite{zhenya,allen2}).
The expression (\ref{sv1}) for the conductivity can be simplified
even more in the case of weak electron--phonon interaction
\cite{oleg}, to yield the so--called ``extended''
Drude formula
\begin{eqnarray}\label{extended}
\sigma(\omega)
&=&
\frac{\omega_{\rm pl}^{2}}{4\pi}\,
\frac{1}{{\rm i}\/\omega-W(\omega)-\frac{1}{\tau_{\rm imp}}}
\;,
\end{eqnarray}
where
\begin{eqnarray}
W(\omega)
&=&
{\rm i}\/\omega\left(
1-\frac{m^*_{\rm tr}(\omega)}{m}\right)+\frac{1}{\tau_{\rm tr}(\omega)}
\;,
\\
W(\omega)
&=&
-2\/{\rm i}\/\int\limits_{0}^{\infty}{\rm d}\/\Omega\:
\alpha^2_{\rm tr}(\Omega)F(\Omega)\,
K\!\left(\frac{\omega}{2\pi\/{\rm T}},\frac{\Omega}{2\pi\/{\rm T}}\right)
\;,
\end{eqnarray}
and the function
$K\!\left(\frac{\omega}{2\pi\/{\rm T}},\frac{\Omega}{2\pi\/{\rm T}}\right)$
has the form
\begin{eqnarray}
K\left(x,y\right)
&=&
\frac{\rm i}{y}+\left\{\frac{y-x}{x}\left[
\Psi(1-{\rm i}\/x+{\rm i}\/y)-
\Psi(1+{\rm i}\/y)
\right]\right\} -
\left\{ y \longleftrightarrow -y \right\}
\;.
\end{eqnarray}
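The memory function $W(\omega)$ is straightforward to evaluate
numerically. The following minimal sketch (Python with mpmath, taking
$\Psi$ to be the digamma function, as is standard for this kernel) uses a
hypothetical Einstein-mode spectral function
$\alpha^2_{\rm tr}(\Omega)F(\Omega)=
\frac{\lambda_{\rm tr}\Omega_{\rm E}}{2}\,\delta(\Omega-\Omega_{\rm E})$
in place of the spectrum of Fig.\ \ref{fig0}; the
$\left\{y\longleftrightarrow -y\right\}$ substitution is read as acting on
the braced digamma term, a reading that reproduces the standard dc limit
of the transport relaxation rate:
\begin{verbatim}
import mpmath as mp

lam_tr, Om_E = 1.0, 400.0     # hypothetical coupling, phonon energy (cm^-1)
T = 0.695*300.0               # 300 K expressed in cm^-1

def K(x, y):
    B = lambda s: ((s - x)/x)*(mp.digamma(1 - 1j*x + 1j*s)
                               - mp.digamma(1 + 1j*s))
    return 1j/y + B(y) - B(-y)

def W(w):
    # the delta function collapses the Omega integral
    return -1j*lam_tr*Om_E*K(w/(2*mp.pi*T), Om_E/(2*mp.pi*T))

# Re W(w) plays the role of the transport relaxation rate 1/tau_tr(w)
print(mp.re(W(50.0)), mp.re(W(2000.0)))
\end{verbatim}
With $W(\omega)$ in hand, Eq.\ (\ref{extended}) yields $\sigma(\omega)$
directly; at low temperature the large-$\omega$ output approaches
$\pi\lambda_{\rm tr}\left\langle\omega\right\rangle$, consistent with the
saturation of $\frac{1}{\tau^*_{\rm tr}}$ discussed below.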
Firstly, we shall check the accuracy of the approximation of
the expression (\ref{sv1}) for $\sigma(\omega)$ by the ``extended''
Drude formula, Eq.\ (\ref{extended}). For that purpose we use the
transport spectral function $\alpha^2_{\rm tr}(\omega)F(\omega)$
of the form shown in Fig.\ \ref{fig0}.
A comparison between Eqs.\ (\ref{sv1}) and (\ref{extended})
is shown in Fig.\ \ref{fig1} for a coupling constant $\lambda_{\rm tr}=1$,
where
\begin{eqnarray}\label{alphatr}
\lambda_{\rm tr}&=&
2\,\int\limits_{0}^{\infty}\frac{{\rm d}\/\Omega}{\Omega}\;
\alpha^2_{\rm tr}(\Omega)F(\Omega)
\;.
\end{eqnarray}
There are some differences in the low energy regime, $\omega \lesssim 200$
cm$^{-1}$, shown in Fig.\ \ref{fig2}, especially at low temperatures.
One should be careful using
the ``extended'' Drude formula in this case. However, even in
this region the difference is rather small.
The difference between the conductivity calculated by the formulae
(\ref{sv1}) and (\ref{extended}) continues to be small even for
a coupling constant $\lambda \simeq 2$.
The transport relaxation rate
\begin{eqnarray}
\frac{1}{\tau^*_{\rm tr}(\omega)}
&=&
\frac{1}{\tau_{\rm tr}(\omega)}\,
\frac{m}{m^*(\omega)}
\end{eqnarray}
has a universal behaviour for metals with different values of
$\lambda$ and different phonon density of states.
Figure \ref{fig3} shows the frequency dependence of the
relaxation rate $\frac{1}{\tau^*_{\rm tr}(\omega)}$ in
dimensionless form. Here $\omega_{\rm m}$ is the value of the
maximum phonon frequency (i.\,e.\ the end of the phonon spectrum).
The six different curves plotted in Fig.\ \ref{fig3} are
indistinguishable as function of the dimensionless
energy $\omega/\omega_{\rm m}$.
The quantity $\frac{1}{\tau^*_{\rm tr}(\omega)}$ is also rather
universal as a function of the dimensionless
temperature ${\rm T}/\pi\omega_{\rm m}$. Figure \ref{fig4}
shows $\frac{1}{\tau^*_{\rm tr}(\omega)}$ for ${\rm T} = 10$ K
and $100$ K.
Firstly, we would like to emphasise the quasi--linear
$\omega$--dependence of $\frac{1}{\tau^*_{\rm tr}(\omega)}$ over a
large energy interval, $0.5 \le \omega \le (3-4)\,\omega_{\rm m}$.
The function $\frac{1}{\tau^*_{\rm tr}(\omega)}$ increases with
increasing energy $\omega$ up to very high values, $\omega
\simeq 10\,\omega_{\rm m}$. This contrasts strongly with the
behaviour of the one--particle relaxation rate
$\frac{1}{\tau(\omega)}$ defined by
Eq.\ (\ref{oneparticlerelaxation}) and shown in the inset of
Fig.\ \ref{fig4}. It is well known \cite{siebzehn}
that the latter rapidly increases with energy and becomes constant
for $\omega\ge \omega_{\rm m}$.
This difference in the behaviour of the functions
$\frac{1}{\tau^*_{\rm tr}(\omega)}$ and $\frac{1}{\tau(\omega)}$
was first discussed in Ref.\ \cite{shulga}.
As mentioned above, there are only a few investigations
\cite{joyce,bednorz} of the influence of the electron--phonon interaction
on the optical spectra of normal state metals where the frequency dependence of
these effects was observed.
Besides the reasons mentioned above there is another very important reason
for the small number of such investigations. Namely, the absolute value of
the frequency dependence of the effects discussed is determined
by the value of $\frac{1}{\tau^*_{\rm tr}(\omega)}$ at
$\omega\rightarrow\infty$ \cite{shulga}. This limit can be written as
\begin{eqnarray}
\lim\limits_{\omega\rightarrow\infty}\frac{1}{\tau^*_{\rm tr}(\omega)}
&=&
\pi\lambda\left\langle\omega\right\rangle
\;,
\end{eqnarray}
where
\begin{eqnarray}
\lambda\left\langle\omega\right\rangle
&=&
2\,\int\limits_{0}^{\infty}{\rm d}\/\omega\;
\alpha^2_{\rm tr}(\omega)F(\omega)
\;.
\end{eqnarray}
This value can be expressed as
\begin{eqnarray}
\frac{1}{\tau^*_{\rm tr}}
&\simeq &
(1-2)\,\lambda\omega_{\rm m}
\;,
\end{eqnarray}
as can be seen from Figs.\ \ref{fig3} and
\ref{fig4},
and it is very small for ordinary metals.
In lead, for example, one finds $\lambda\left\langle\omega\right\rangle
\approx 100$ cm$^{-1}$.
It is extremely difficult to observe this phenomenon in the usual reflection
spectra. Therefore, the observation of Holstein processes in lead was
made \cite{joyce,bednorz} using the light scattering inside
a cavity whose walls hold the sample material. The signal was averaged over
at least $100$ such reflections to increase the accuracy of the experiment.
It was shown \cite{oleg,achzehn} that a
far--infrared measurement at low temperature, containing phonon--induced
structure, can be inverted to give the spectral function of the
electron--phonon
interaction $\alpha_{\rm tr}^2(\omega)F(\omega)$.
This function was obtained in Ref.\ \cite{achzehn} for lead in very
good agreement with experimental data from tunnelling measurements.
We shall come back to discuss this observation in some more detail.
\section{Electron--phonon interaction and optical spectra
of HTSC systems}
A large amount of work has been devoted to the study of optical
spectra of HTSC systems
(see for example the reviews \cite{neunzehn,zwanzig}). Investigations
include normal and superconducting state properties, doping dependence,
the effect of impurities, etc.
Here, we shall restrict our analysis to optimally doped HTSC in the
normal state. In this case the optical spectra of all HTSC materials show
quite similar behaviour: their reflectivity drops nearly linearly
with energy from $R\simeq 1$ down to $R\simeq 0.1$ at the plasma edge
$\omega^*_{\rm p}$, where the values of $\omega^*_{\rm p}$ vary for different
HTSC materials, $1$ eV $\le\omega^*_{\rm p}\le1.8$ eV \cite{zwanzig}.
Measurements for different HTSC compounds further coincide in showing large
values for the conductivity in the energy interval
$2$ eV $< \omega < 15$ eV and well developed charge fluctuation spectra
(described by the energy loss function
$-{\rm Im}\left(\frac{1}{\epsilon(\omega)}\right)$) in the energy interval
$5$ eV $< \omega < 40$ eV \cite{einundzwanzig}.
The high energy part
of spectra above a few eV can be well described in terms of
the usual band structure calculations \cite{zwanzig,zhenya1}.
Moreover, the calculations \cite{zhenya1,dreiundzwanzig}
can also describe quite accurately some low energy interband transitions in
YBa$_2$Cu$_3$O$_7$ \cite{vierundzwanzig}.
The most unusual part of the optical spectra of HTSC systems is
connected with the strong decrease in reflectivity, almost linear in energy,
in the region
$0\le\omega\le\omega^*_{\rm p}\simeq 1$ eV \cite{fuenfundzwanzig}.
This behaviour certainly cannot be explained with the simple Drude approach.
Thomas {\it et al} \cite{sechsundzwanzig} proposed two different
ways of analysing such spectra. First, a one--component
Fermi--liquid approach, using the ``extended'' Drude formula with frequency
dependent mass $m^*(\omega)$ and relaxation rate $\frac{1}{\tau(\omega)}$.
This model successfully describes the heavy fermion systems
\cite{siebenundzwanzig}. Secondly, a two--component approach, where
the spectrum is decomposed in two components,
the Drude part and a mid--infrared (MIR) absorption band.
In the latter case there is no unique way of separating the two components.
As we have discussed above, including the electron--phonon interaction
in the consideration leads immediately to the representation of the
optical conductivity in terms of the ``extended'' Drude formula.
It has been shown earlier for some HTSC \cite{shulga,holger}
that the existence of strong electron--phonon coupling,
including some amount of MIR excitations, can indeed
explain their optical conductivity. We now consider this approach
in some more detail. To describe the optical spectra of HTSC systems
we use the formulae obtained in section \ref{formulae} using
a transport spectral function $\alpha^2_{\rm tr}(\omega)F(\omega)$
of the form shown in Fig.\ \ref{fig0}, multiplied by $\omega^2$.
This spectral function extends up to $\omega_{\rm m} = 735$ cm$^{-1}$ and
closely resembles
the phonon spectra of HTSC systems. Generally, the optical properties
do not depend on the actual shape of $\alpha^2_{\rm tr}(\omega)F(\omega)$
but on some moments of it. Thus, we use the constant of
electron--phonon coupling $\lambda_{\rm tr}$ as defined in Eq.\
(\ref{alphatr}) as fit parameter.
Figure \ref{fig6} shows the reflectivity of optimally doped
La$_{2-x}$Sr$_{x}$CuO$_4$ (LSCO) at room temperature. We have used the values
$\omega_{\rm p}=1.8$ eV and $\epsilon_{\infty}=4.6$ obtained by one of us
\cite{achtundzwanzig} from band structure calculations. The contribution of
the electron--phonon coupling was supposed to be $\lambda_{\rm tr}=2.5$.
The resulting reflectivity corresponds well to measurements
on good quality films \cite{fuenfundzwanzig}. The conductivity
calculated in Ref.\ \cite{achtundzwanzig} describes well the
optical spectra of LSCO at energies above $\approx 3$ eV, as was
confirmed by experiment \cite{neunundzwanzig}.
Taking all this into account,
we can conclude that the band structure approach joined
with strong electron--phonon interaction can explain the overall
behaviour of the optical spectrum of LSCO in the energy range
$0<\omega\le 40$ eV.
Moreover, this approach also explains the temperature dependence of
the optical spectrum of LSCO quite well. Figure \ref{fig7}
shows the reflectivity of LSCO in the FIR at temperatures
${\rm T}=100,\; 200,\; 300$ K. The agreement between experimental
\cite{dreisig} and calculated curves is good considering
that the only fit parameter used was the constant of coupling
$\lambda_{\rm tr}$. In Fig.\ \ref{fig8} we show the reflectivity
for LSCO up to $\omega\approx 1$ eV for the same temperatures.
The agreement between calculated and measured curves is still
quite good on this larger energy scale. Nevertheless,
we should point out a small discrepancy in the MIR region.
While this discrepancy is very small at ${\rm T}=100$ K, it
becomes more pronounced with increasing temperature.
Introducing a MIR band, this
can be seen as further evidence for the temperature dependence
of such a band, as discussed in Ref.\ \cite{holger}.
We now focus on the temperature dependence of the
optical reflectivity in YBa$_2$Cu$_3$O$_{7-\delta}$ (YBCO), using the
values $\omega_{\rm p}=3$ eV and $\epsilon_{\infty}=5.8$
obtained from band structure calculations \cite{zhenya1}.
Figure \ref{fig9} shows the reflectivity of YBCO from Ref.\
\cite{einundreisig} compared to our calculations for
${\rm T}=100,\; 200,\; 300$ K, where we have used an impurity scattering
rate $\frac{1}{\tau_{\rm imp}}=300$ cm$^{-1}$ in our calculation of the
relaxation rate. The agreement between experimental and theoretical curves
seems to be again quite good. The only fit parameter used here
was $\lambda_{\rm tr}=3$. Certainly, there are some discrepancies
between data and theoretical curves in Fig.\ \ref{fig9}. We should
point out that discrepancies of nearly the same order exist between
different types of monocrystals and films. Also, our approach is based on the
quasi--isotropic approximation for the electron--phonon interaction
which is clearly an oversimplification in these quasi--two dimensional systems.
The inset of Fig.\ \ref{fig9} shows the calculated reflectivity of YBCO up to
$1.8$ eV at room temperature. It can be seen that there is an upturn
in the reflectivity at MIR frequencies which is certainly much larger
than in LSCO. This confirms the observation made in Ref.\
\cite{zweiunddreisig} that the MIR band is more pronounced in YBCO
than in LSCO.
Furthermore, we know from band structure calculations \cite{zhenya1}
that YBCO systems show interband transitions with rather strong intensity
in the MIR region.
Figure \ref{fig10} shows the behaviour of the transport relaxation
rate $\frac{1}{\tau^*_{\rm tr}(\omega)}$ obtained for YBCO at temperatures
${\rm T}=100,\; 200,\; 300$ K.
It closely resembles the usual shape of such curves
in HTSC and might be compared to Fig.\ 2 in Ref.\ \cite{dreiundreisig},
where the relaxation rate for Bi$_2$Sr$_2$CaCu$_2$O$_8$ was derived
from the conductivity using the ``extended'' Drude formula.
It was also shown there that the behaviour of the relaxation rate
$\frac{1}{\tau^*_{\rm tr}(\omega)}$ cannot be explained in the
framework of marginal Fermi liquid theory \cite{vierundreisig}, since
$\frac{1}{\tau^*_{\rm tr}(\omega)}$ saturates at energies
$\omega>\omega_{c}\simeq 1500$ cm$^{-1}$ whereas according to
marginal Fermi liquid theory the relaxation rate should continue
in a straight line.
This argument was used in Ref.\ \cite{dreiundreisig} as evidence
against the one--component approach to the optical spectra
of HTSC systems.
However, the saturating behaviour for $\frac{1}{\tau^*_{\rm tr}(\omega)}$
can be clearly seen in Fig.\ \ref{fig10}, at nearly the same energy
as it was observed in Bi$_2$Sr$_2$CaCu$_2$O$_8$.
Figure \ref{fig11} shows the calculated resistivity for YBCO, demonstrating
clearly a linear temperature dependence over a large temperature interval.
To conclude these considerations, we would like to briefly discuss
the possibility of inverting reflectivity data to obtain
the transport spectral function $\alpha_{\rm tr}^2(\omega)F(\omega)$.
This has been done for lead at ${\rm T} = 0$ \cite{achzehn}, where
the infrared data was obtained by applying a magnetic field to drive the
system into the normal state. There, a spectral function with detailed
structure could be obtained.
In HTSC systems the situation is more complex.
The superconducting state persists up to the rather high critical
temperature ${\rm T}_{\rm c}$ and cannot be suppressed by magnetic fields
to allow IR measurements at ${\rm T} = 0$. At finite temperature the
inversion procedure is far more difficult and as a result such calculations
only yield the rough overall form of the transport spectral function.
Nevertheless, the resulting $\alpha_{\rm tr}^2(\omega)F(\omega)$
clearly resembles the phonon spectra of these systems \cite{oleg}.
\section{Conclusion}
The results obtained in this work demonstrate clearly
that strong electron--phonon interaction, combined
with band structure calculations, describes
the overall behaviour of the optical spectra and the
main part of the transport properties of HTSC in a
straightforward manner.
However, we do not claim that this simple quasi--isotropic
approach can explain all details of the behaviour of HTSC systems,
even in the normal state, and even at optimal doping. There are a number
of open problems concerning the behaviour of the Hall coefficient and the
NMR relaxation rate at the copper sites. We would like to point out
the existence of at least two relaxation rates in the HTSC systems,
namely the quasi--particle relaxation rate
$\frac{1}{\tau(\omega)}$ and the transport relaxation rate
$\frac{1}{\tau^*_{\rm tr}(\omega)}$,
which are very different over a large energy range. We cannot rule out
that the relaxation rate involved in the Hall current, due to
strong and possibly anisotropic electron--phonon interaction,
will be different from the transport relaxation rate in those
systems. This could lead to the observed temperature dependence of the
Hall coefficient.
Last but not least, we should point out that the simple approach presented
here does not work at low temperatures. It cannot properly describe the
anisotropy of the superconducting order parameter, although it
yields the correct order of magnitude for the value of ${\rm T}_{\rm c}$. There
are additional phenomena besides electron--phonon coupling
which become important at low temperatures.
There are a number of different models which combine
a strong electron--phonon interaction with interband
Coulomb interaction, with the existence of a Van Hove singularity
in the electron spectrum, etc., of which a detailed discussion
is beyond the scope of this work.
\section*{Acknowledgement}
The authors would like to thank O.\ Dolgov for many helpful discussions.
They are also grateful to S.\ Shulga for providing his program for these
calculations. E.\ G.\ M.\ would like to thank the Royal Society and
the Department of Earth Sciences, University of Cambridge, for their
support and kind hospitality during his visit to Cambridge. He
also acknowledges the RFBI for the financial support during the early
stages of this work.
|
1,116,691,500,246 | arxiv |
\section{Preliminaries}
\subsection{Dynamical systems}
A \textit{dynamical system} is a pair $(X,f)$ consisting of a compact Hausdorff space $X$ and a continuous function $f\colon X \to X$. We say that the \textit{orbit} of $x$ under $f$ is the set of points $\{x, f(x), f^2(x), \ldots\}$; we denote this set by $\Orb_f(x)$. For a point $x \in X$, we define the $\omega$\textit{-limit set} of $x$ under $f$, denoted $\omega(x)$, to be the set of limit points of its orbit sequence. Formally
\[\omega(x)= \bigcap_{N \in \mathbb{N}}\overline{\{f^n(x) \mid n >N\}}.\]
Note that as $X$ is compact $\omega(x) \neq \emptyset$ for any $x \in X$ by Cantor's intersection theorem.
For a dynamical system $(X,f)$, a subset $A \subseteq X$ is said to be \textit{positively invariant} (under $f$) if $f(A) \subseteq A$. The system is \textit{minimal} if there are no proper, nonempty, closed, positively-invariant subsets of $X$. Equivalently, a system is minimal if $\omega(x)=X$ for all $x \in X$.
If $X$ is a metric space, a sequence $(x_i)_{i \geq 0}$ in $X$ is called a $\delta$-pseudo-orbit if $d(f(x_i), x_{i+1})< \delta$ for all $i \geq 0$.
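For numerical experiments it is easy to generate such pseudo-orbits; the
following is a minimal Python sketch on the circle $[0,1)$ with metric
$d(x,y)=\min(|x-y|,1-|x-y|)$ (the helper names and the choice of the
doubling map are ours, purely for illustration):
\begin{verbatim}
import random

def pseudo_orbit(f, x0, delta, n):
    # a delta-pseudo-orbit: each step lands within delta of f(x_i)
    xs = [x0]
    for _ in range(n - 1):
        xs.append((f(xs[-1]) + random.uniform(-delta, delta)) % 1.0)
    return xs

doubling = lambda x: (2.0*x) % 1.0
orbit = pseudo_orbit(doubling, 0.1, 1e-3, 50)
\end{verbatim}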
\begin{definition}
Let $X$ be a metric space. The system $(X,f)$ has the \textit{orbital shadowing} property if for all $\epsilon>0$, there exists $\delta>0$ such that for any $\delta$-pseudo-orbit $( x_i)_{i \geq 0}$, there exists a point $z$ such that
\[d_H\left(\overline{\{x_i\}_{i\geq 0}}, \overline{\{f^i(z)\}_{i\geq 0}}\right) <\epsilon.\]
\end{definition}
Here $d_H$ denotes the Hausdorff metric, defined on the compact subsets of $X$, which is given by: \[d_H (A,A^\prime)= \inf \{\epsilon>0 \colon A \subseteq B_\epsilon (A^\prime) \text{ and } A^\prime \subseteq B_\epsilon (A)\}. \]
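For finite point sets the two infima can be computed directly; a minimal
Python sketch (illustrative helper names, using the circle metric from the
sketch above):
\begin{verbatim}
def hausdorff(A, B, d):
    # Hausdorff distance between finite nonempty sets A and B
    return max(max(min(d(a, b) for b in B) for a in A),
               max(min(d(a, b) for a in A) for b in B))

circle = lambda x, y: min(abs(x - y), 1.0 - abs(x - y))
print(hausdorff([0.0, 0.5], [0.25, 0.75], circle))   # -> 0.25
\end{verbatim}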
The following weakening of orbital shadowing was introduced in \cite{PiluginRodSakai2002}.
\begin{definition}
Let $X$ be a metric space. The system $(X,f)$ has the \textit{second weak shadowing} property if for all $\epsilon>0$, there exists
$\delta>0$ such that for any $\delta$-pseudo-orbit $(x_i)_{i\geq 0}$, there exists a point $z$ such that
\[\Orb(z) \subseteq B_\epsilon\left( \{x_i\}_{i\geq 0}\right).\]
\end{definition}
The following strengthening of orbital shadowing was introduced in \cite{GoodMeddaugh2016}. The authors demonstrate it to be distinct.
\begin{definition}
Let $X$ be a metric space. The system $(X,f)$ has the \textit{strong orbital shadowing} property if for all $\epsilon>0$, there exists $\delta>0$ such that for any $\delta$-pseudo-orbit $(x_i)_{i \geq 0}$, there exists a point $z$ such that, for all $N \in \mathbb{N}_0$,
\[d_H\left(\overline{\{x_{N+i}\}_{i\geq 0}}, \overline{\{f^{N+i}(z)\}_{i\geq 0}}\right) <\epsilon.\]
\end{definition}
\subsection{Uniform spaces}
Let $X$ be a nonempty set and $A \subseteq X \times X$. Let $A^{-1}=\{(y,x) \mid (x,y) \in A\}$; we call this the \textit{inverse} of $A$. The set $A$ is said to be \textit{symmetric} if $A=A^{-1}$. For any $A_1, A_2 \subseteq X \times X$ we define the composite $A_1 \circ A_2$ of $A_1$ and $A_2$ as
\[A_1 \circ A_2 = \{(x,z) \mid \exists y\in X : (x,y) \in A_1, (y,z) \in A_2\}.\]
For any $n \in \mathbb{N}$ and $A \subseteq X \times X$ we denote by $nA$ the $n$-fold composition of $A$ with itself, i.e.
\[nA=\underbrace{A\circ A\circ \cdots \circ A}_{n \text{ times}}.\]
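For finite relations both operations are one-liners; a minimal Python
sketch (helper names ours):
\begin{verbatim}
def compose(A1, A2):
    # A1 o A2 = {(x, z) : there is y with (x, y) in A1, (y, z) in A2}
    return {(x, z) for (x, y) in A1 for (yy, z) in A2 if y == yy}

def n_fold(A, n):
    R = set(A)
    for _ in range(n - 1):
        R = compose(R, A)
    return R

print(n_fold({(1, 2), (2, 3)}, 2))   # {(1, 3)}
\end{verbatim}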
The \textit{diagonal} of $X \times X$ is the set $\Delta= \{(x,x) \mid x \in X\}$. A subset $A \subseteq X\times X$ is called an \textit{entourage} if $A \supseteq \Delta$.
\begin{definition} A \textit{uniformity} $\mathscr{U}$ on a set $X$ is a collection of entourages of the diagonal such that the following conditions are satisfied.
\begin{enumerate}[label=\alph*.]
\item $E_1, E_2 \in \mathscr{U} \implies E_1 \cap E_2 \in \mathscr{U}$.
\item $E \in \mathscr{U}, E \subseteq D \implies D \in \mathscr{U}$.
\item $E \in \mathscr{U} \implies D \circ D \subseteq E$ for some $D \in \mathscr{U}$.
\item $E \in \mathscr{U} \implies D^{-1}\subseteq E$ for some $D \in \mathscr{U}$.
\end{enumerate}
\end{definition}
We call the pair $(X, \mathscr{U})$ a {\em uniform space}. We say $\mathscr{U}$ is {\em separating} if $\bigcap_{E \in \mathscr{U}} E = \Delta$; in this case we say $X$ is {\em separated}. A subcollection $\mathscr{V}$ of $\mathscr{U}$ is said to be a {\em base} for $\mathscr{U}$ if for any $E \in\mathscr{U}$ there exists $D \in \mathscr{V}$ such that $E \subseteq D$. Clearly any base $\mathscr{V}$ for a uniformity will have the following properties:
\begin{enumerate}
\item $E_1, E_2 \in \mathscr{U} \implies$ there exists $D \in \mathscr{V}$ such that $D \subseteq E_1 \cap E_2 $.
\item $E \in \mathscr{U} \implies D \circ D \subseteq E$ for some $D \in \mathscr{V}$.
\item $E \in \mathscr{U} \implies D^{-1}\subseteq E$ for some $D \in \mathscr{V}$.
\end{enumerate}
If $\mathscr{U}$ is separating then $\mathscr{V}$ will satisfy $\bigcap_{E \in \mathscr{V}} E = \Delta$.
\begin{remark}\label{RemarkSymFormBase} The symmetric entourages of a uniformity $\mathscr{U}$ form a base for said uniformity. By virtue of this, without loss of generality, we may assume that every entourage in the uniformity that we refer to is symmetric. This will be a standing assumption throughout this paper.
\end{remark}
For an entourage $E \in \mathscr{U}$ and a point $x \in X$ we define the set $B_E(x)= \{y \in X \mid (x,y) \in E\}$; we refer to this set as the $E$-\textit{ball about} $x$. This naturally extends to a subset $A \subseteq X$; $B_E(A)= \bigcup_{x \in A}B_E(x)$; in this case we refer to the set $B_E(A)$ as the $E$-\textit{ball about} $A$. We emphasise that (see \cite[Section 35.6]{Willard}):
\begin{itemize}
\item For all $x \in X$, the collection $\mathscr{B}_x \coloneqq \{ B_E(x) \mid E \in \mathscr{U} \}$ is a neighbourhood base at $x$, making $X$ a topological space. The same topology is produced if any base $\mathscr{V}$ of $\mathscr{U}$ is used in place of $\mathscr{U}$.
\item The topology is Hausdorff if and only if $\mathscr{U}$ is separating.
\end{itemize}
For a compact Hausdorff space $X$ there is a unique uniformity $\mathscr{U}$ which induces the topology and the space is metric if the uniformity has a countable base (see \cite[Chapter 8]{Engelking}). For a metric space, a natural base for the uniformity would be the $1/2^n$ neighbourhoods of the diagonal.
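For concreteness, in the metric case one checks directly that the sets
\[E_n=\{(x,y) \in X \times X \mid d(x,y)<2^{-n}\}, \qquad n \in \mathbb{N},\]
form such a countable base: each $E_n$ is symmetric, the triangle
inequality gives $E_{n+1} \circ E_{n+1} \subseteq E_n$ (if
$d(x,y)<2^{-(n+1)}$ and $d(y,z)<2^{-(n+1)}$ then $d(x,z)<2^{-n}$), and
$\bigcap_{n \in \mathbb{N}} E_n = \Delta$, so the resulting uniformity is
separating.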
We may use uniformities to give appropriate definitions of orbital shadowing, second weak shadowing and strong orbital shadowing in the more general setting of uniform spaces. First of all, given an entourage $D \in \mathscr{U}$, a sequence $(x_i)_{i \geq 0}$ in $X$ is called a $D$-pseudo-orbit if $(f(x_i), x_{i+1}) \in D$ for all $i \geq 0$.
\begin{definition}
Let $X$ be a uniform space. The system $(X,f)$ has the \textit{orbital shadowing} property if for all $E \in \mathscr{U}$, there exists $D \in \mathscr{U}$ such that for any $D$-pseudo-orbit $( x_i)_{i \geq 0}$, there exists a point $z \in X$ such that
\[\Orb(z) \subseteq B_E\left( \{x_i\}_{i\geq 0}\right)\]
and
\[\{x_i\}_{i\geq 0} \subseteq B_E\left(\Orb(z) \right).\]
\end{definition}
\begin{definition}
Let $X$ be a uniform space. The system $(X,f)$ has \textit{second weak shadowing} if for all $E \in \mathscr{U}$, there exists
$D \in \mathscr{U}$ such that for any $D$-pseudo-orbit $(x_i)_{i\geq 0}$, there exists a point $z$ such that
\[\Orb(z) \subseteq B_E\left( \{x_i\}_{i\geq 0}\right).\]
\end{definition}
\begin{definition}
Let $X$ be a uniform space. The system $(X,f)$ has the \textit{strong orbital shadowing} property if for all $E \in \mathscr{U}$, there exists $D \in \mathscr{U}$ such that for any $D$-pseudo-orbit $( x_i)_{i \geq 0}$, there exists a point $z \in X$ such that, for all $N \in \mathbb{N}_0$,
\[\{f^{N+i}(z)\}_{i\geq 0}\subseteq B_E\left( \{x_{N+i}\}_{i\geq 0}\right)\]
and
\[\{x_{N+i}\}_{i\geq 0} \subseteq B_E\left(\{f^{N+i}(z)\}_{i\geq 0} \right).\]
\end{definition}
When $X$ is a compact metric space these definitions coincide with the previously given metric versions.
\medskip
Throughout this paper, as $X$ is a compact Hausdorff space, we denote the unique uniformity associated with $X$ by $\mathscr{U}$.
\section{Main results}
\begin{lemma}\label{Every}
Let $(X,f)$ be a dynamical system where $X$ is a compact Hausdorff space. Then $(X,f)$ satisfies the following:
\[\forall E \in \mathscr{U} \, \forall x \in X \exists n \in \mathbb{N} \exists z \in X \text{ s.t. } \]
\[ \bigcup_{i=1} ^n B_{E}\left(f^i(x)\right) \supseteq \omega(z).\]
\end{lemma}
\begin{proof} Take $E \in \mathscr{U}$ and pick $x \in X$. Let $E_0 \in \mathscr{U}$ be such that $2E_0 \subseteq E$. Take a finite subcover of the open cover $\{B_{E_0}(y) \mid y \in \omega(x)\}$ of $\omega(x)$. For each element of this subcover there exists $n$ such that $f^n(x)$ lies inside it. Pick one such $n$ for each element and then take the largest. The result follows.
\end{proof}
\begin{lemma}\label{Every2}
Let $(X,f)$ be a dynamical system where $X$ is a compact Hausdorff space. Then $(X,f)$ satisfies the following:
\[\forall E \in \mathscr{U} \, \exists n \in \mathbb{N} \text{ s.t. } \forall x \in X \exists z \in X \text{ s.t. } \]
\[ \bigcup_{i=1} ^n B_{E}\left(f^i(x)\right) \supseteq \omega(z).\]
\end{lemma}
\begin{proof}
Fix $E \in \mathscr{U}$. Let $E_0 \in \mathscr{U}$ be such that $2E_0 \subseteq E$. For each $x \in X$ let $n_x \in \mathbb{N}$ be as in the condition in Lemma \ref{Every} for $E_0$ and let $D_x \in \mathscr{U}$ be such that, for any $y \in X$, if $(x,y) \in D_x$ then, for each $i \in \{0,\ldots, n_x\}$, $(f^i(x), f^i(y)) \in E_0$. The collection $\{B_{D_x}(x) \mid x \in X\}$ forms an open cover. Let
\[\left\{B_{D_{x_i}}(x_i)\mid i \in \{1,\ldots, k\}\right\},\]
be a finite subcover. Take $n = \max_{i \in \{1, \ldots, k\}} n_{x_i}$. Then, by composition, for any $x \in X$ there exists $z \in X$ such that
\[\bigcup _{i=1} ^n B_E
\left(f^i(x)\right) \supseteq \omega(z). \]
\end{proof}
\begin{theorem}\label{thmOmega}
Let $(X,f)$ be a dynamical system where $X$ is a compact Hausdorff space. Then for any $E \in \mathscr{U}$
there exist $n \in \mathbb{N}$ and $D \in \mathscr{U}$ such that given any $D$-pseudo-orbit
$(x_i)_{i \geq 0}$ there exists $z\in X$ such that
\[B_E\left(\{x_i\}_{i=0} ^n \right) \supseteq \omega(z).\]
In particular,
\[B_E\left(\{x_i\}_{i \geq 0}\right) \supseteq \omega(z).\]
\end{theorem}
\begin{proof}
Let $E \in \mathscr{U}$ be given and let $E_0 \in \mathscr{U}$ be such that $2E_0 \subseteq E$. Take $n \in \mathbb{N}$ as in the condition in Lemma \ref{Every2} with respect to $E_0$. By uniform continuity we can choose $D \in \mathscr{U}$ such that every $D$-pseudo-orbit $E_0$-shadows the first $n$ iterates of its origin. Explicitly: Let $D_1 \subseteq E_0$ be an entourage such that, for any $y, z \in X$, if $(y,z) \in D_1$ then $(f(y),f(z)) \in E_0$. For each $i \in \{2,\ldots, n\}$ let $D_i \in \mathscr{U}$ be such that $2D_i \subseteq f^{-1}(D_{i-1}) \cap D_{i-1}$.
Now take $D\coloneqq D_n$. Suppose $(x_i)_{i \geq 0}$ is a $D$-pseudo-orbit. Then $(f^i(x_0),x_i) \in E_0$ for all $i \in \{0, \ldots n\}$. By the given condition there exists $z \in X$ such that
\[ \bigcup_{i=1} ^n B_{E_0}\left(f^i(x_0)\right) \supseteq \omega(z).\]
Since, for each $i \in \{0, \ldots, n\}$, $(f^i(x_0),x_i) \in E_0$ it follows from entourage composition, and the fact that $2E_0 \subseteq E$, that
\[ B_E\left( \{x_i\}_{i=0} ^n\right) \supseteq \omega(z).\]
\end{proof}
The fact that all compact Hausdorff systems exhibit second weak shadowing now follows as a simple corollary to Theorem \ref{thmOmega}. Note that Corollary \ref{CorOrbTrap} is a generalisation of \cite[Theorem 3.1]{PiluginRodSakai2002}.
\begin{corollary}\label{CorOrbTrap}
Let $(X,f)$ be a dynamical system where $X$ is a compact Hausdorff space. Then the system has second weak shadowing.
\end{corollary}
\begin{proof}
Let $E \in \mathscr{U}$ be given and let $D \in \mathscr{U}$ correspond to this as in Theorem \ref{thmOmega}. Take a $D$-pseudo-orbit $(x_i)_{i \geq 0}$. By Theorem \ref{thmOmega} there exists $z \in X$ such that
\[ B_E\left( \{x_i\}_{i\geq 0}\right) \supseteq \omega(z).\]
Since $\omega$-limit sets are positively invariant it follows that for any $y \in \omega(z)$
\[\Orb(y) \subseteq B_E\left( \{x_i\}_{i\geq 0}\right).\]
It remains to note that $\omega(z) \neq \emptyset$ as $X$ is compact.
\end{proof}
\begin{theorem}\label{thmMinimalIFF}
Let $X$ be a compact Hausdorff space and $f \colon X \to X$ be a continuous function. The system $(X,f)$ is minimal if and only if for any $E \in \mathscr{U}$ there exist $D \in \mathscr{U}$ and $n \in \mathbb{N}$ such that for any two $D$-pseudo-orbits $(x_i)_{i \geq 0}$ and $(y_i)_{i \geq 0}$
\[\{y_i\}_{i =0} ^n \subseteq B_E\left( \{x_i\}_{i=0} ^n\right)\]
and
\[\{x_i\}_{i =0} ^n \subseteq B_E\left( \{y_i\}_{i=0} ^n\right).\]
\end{theorem}
\begin{proof}
First suppose the system is minimal. Let $E \in \mathscr{U}$ be given. Take $D \in \mathscr{U}$ and $n \in \mathbb{N}$ corresponding to $E$ as in Theorem \ref{thmOmega}. Now let $(x_i)_{i \geq 0}$ and $(y_i)_{i \geq 0}$ be two $D$-pseudo-orbits. By Theorem \ref{thmOmega} there exist $z_1, z_2 \in X$ such that
$B_E\left(\{x_i\}^n _{i=0} \right) \supseteq \omega(z_1)$
and
$B_E\left(\{y_i\}^n _{i=0} \right) \supseteq \omega(z_2).$
As $(X,f)$ is minimal $\omega(z_1)=\omega(z_2)=X$. It follows that
$B_E\left(\{x_i\}^n _{i=0} \right)=B_E\left(\{y_i\}^n _{i=0} \right)=X.$
Hence
\[\{y_i\}_{i =0} ^n \subseteq B_E\left( \{x_i\}_{i=0} ^n\right)\]
and
\[\{x_i\}_{i =0} ^n \subseteq B_E\left( \{y_i\}_{i=0} ^n\right).\]
Now suppose the system is not minimal. Then there exists $x \in X$ such that $\omega(x) \neq X$. Pick $y \in \omega(x)$ and let $z \in X \setminus \omega(x)$. Take $E \in \mathscr{U}$ such that $B_E(z) \cap \omega(x) = \emptyset$. As $E$ is symmetric by our standing assumption, $z \notin B_E(\omega(x))$. Consider the pseudo-orbits given by the orbit sequences of $y$ and $z$: these are $D$-pseudo-orbits for any $D \in \mathscr{U}$. As $\omega$-limit sets are positively invariant, $\Orb(y) \subseteq \omega(x)$. Since $z \notin B_E(\omega(x))$ it also follows that $z \notin B_E(\Orb(y))$. In particular $\Orb(z) \not\subseteq B_E\left(\Orb(y)\right)$.
\end{proof}
For the case when $X$ is a compact metric space Theorem \ref{thmMinimalIFF} may be formulated as follows: A dynamical system $(X,f)$ is minimal precisely when for any $\epsilon>0$ there exist $\delta>0$ and $n \in \mathbb{N}$ such that for any two $\delta$-pseudo-orbits $(x_i)_{i \geq 0}$ and $(y_i)_{i \geq 0}$
\[d_H(\{x_i\}_{i =0} ^n , \{y_i\}_{i=0} ^n) <\epsilon.\]
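This criterion is easy to probe numerically for a concrete minimal system;
the following minimal Python sketch takes the irrational rotation
$x \mapsto x+\sqrt{2}-1 \pmod 1$ (minimal on the circle) and compares the
first $n$ points of two noisy pseudo-orbits (all parameter values are
illustrative):
\begin{verbatim}
import math, random

d = lambda x, y: min(abs(x - y), 1.0 - abs(x - y))   # circle metric

def pseudo_orbit(f, x0, delta, n):
    xs = [x0]
    for _ in range(n - 1):
        xs.append((f(xs[-1]) + random.uniform(-delta, delta)) % 1.0)
    return xs

def hausdorff(A, B):
    return max(max(min(d(a, b) for b in B) for a in A),
               max(min(d(a, b) for a in A) for b in B))

rot = lambda x: (x + math.sqrt(2.0) - 1.0) % 1.0     # minimal rotation
X = pseudo_orbit(rot, 0.0, 1e-4, 1500)
Y = pseudo_orbit(rot, 0.5, 1e-4, 1500)
print(hausdorff(X, Y))  # small: both pseudo-orbits become epsilon-dense
\end{verbatim}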
\begin{corollary}
Let $X$ be a compact Hausdorff space and $f \colon X \to X$ be a continuous function. The system $(X,f)$ is minimal if and only if for any $E \in \mathscr{U}$ there exist $D \in \mathscr{U}$ and $n \in \mathbb{N}$ such that for any $D$-pseudo-orbit $(x_i)_{i \geq 0}$ we have $B_E\left(\{x_i\}_{i=0} ^n \right)=X$.
\end{corollary}
\begin{proof}
Immediate from the proof of Theorem \ref{thmMinimalIFF}.
\end{proof}
\begin{corollary}
Let $X$ be a compact Hausdorff space. If $(X,f)$ is a minimal dynamical system then it exhibits the strong orbital shadowing property.
\end{corollary}
\begin{proof}
Let $E \in \mathscr{U}$ be given. Take $D \in \mathscr{U}$ and $n \in \mathbb{N}$ corresponding to $E$ as in Theorem \ref{thmMinimalIFF}. Now let $(x_i)_{i \geq 0}$ be a $D$-pseudo-orbit and pick any $z \in X$. Since $(x_{N+i})_{i \geq 0}$ and $(f^{N+i}(z))_{i \geq 0}$ are $D$-pseudo-orbits for all $N \in \mathbb{N}_0$, by Theorem \ref{thmMinimalIFF},
\[\{f^{N+i}(z)\}_{i =0} ^n \subseteq B_E\left( \{x_i\}_{i=0} ^n\right)\]
and
\[\{x_{N+i}\}_{i =0} ^n \subseteq B_E\left( \{f^{N+i}(z)\}_{i=0} ^n\right).\]
\end{proof}
\begin{acknowledgements} The funding provided by EPSRC/University of Birmingham is
gratefully acknowledged. The author would also like to thank Chris Good for his support and guidance.
\end{acknowledgements} |
1,116,691,500,247 | arxiv |
\section{Introduction}
Photo- and electroproduction of mesons off baryons provide arguably the most
direct routes to information about hadronic structure. At high energies, where
multi-meson production abounds, such processes can be described economically in
terms of pomeron and Regge-trajectory exchanges~\cite{Collins,DDLN,Gribov}. At
lower energies, single-meson production provides a direct avenue for baryon
spectroscopy~\cite{KR2010}, with theoretical descriptions that attempt to model
the contributing mechanisms as detailed as possible in terms of Feynman-type
exchange processes.
The present work is concerned with an intermediate-energy transition region,
where one starts within the Feynman-type picture and replaces some exchanges by
Regge trajectories in an attempt to bring the economic features of the
high-energy Regge approach to bear in the more traditional Feynman framework.
Specifically, we will apply such a hybrid framework to the generic
electromagnetic production process depicted in Fig.~\ref{fig:Msutc} of a meson
($m$) off an initial baryon ($b$) going over into a final baryon ($b'$), i.e.,
\begin{equation}
\gamma(k)+b(p)\to m(q)+b'(p')~,
\label{eq:process}
\end{equation}
where arguments denote the corresponding four-momenta.
\begin{figure}[b!]\centering
\includegraphics[width=\columnwidth,clip=]{Msutc.eps}
\caption{\label{fig:Msutc}
Generic diagrams with external four-momenta of the photoproduction process of
Eq.~(\ref{eq:process}) satisfying $q+p'=k+p$. Labels $s$, $u$, and $t$ at the
hadronic $b\to m+b'$ vertices refer to Mandelstam variables of the respective
exchanged intermediate particles. Summations over all intermediate states
compatible with initial and final states are implied. The right-most diagram
depicts the contact-type interaction current. Time runs from right to left.}
\end{figure}
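Before proceeding, we note the elementary kinematics behind
Fig.~\ref{fig:Msutc}. A minimal Python sketch (hypothetical photon energy
and scattering angle; the masses are those of the strangeness-production
example $\gamma+p\to K^{+}+\Sigma^{*0}$ treated later in
Sec.~\ref{sec:example}) verifies four-momentum conservation and the
standard Mandelstam relation $s+t+u=\sum_i m_i^2$, the photon being
massless:
\begin{verbatim}
import numpy as np

def mdot(a, b):                       # Minkowski product, metric (+,-,-,-)
    return a[0]*b[0] - np.dot(a[1:], b[1:])

m_b, m_m, m_bp = 0.938, 0.494, 1.385  # GeV: p, K+, Sigma*(1385)
w = 2.0                               # hypothetical photon energy (GeV)
k = np.array([w, 0.0, 0.0,  w])       # real photon, k^2 = 0
p = np.array([np.hypot(m_b, w), 0.0, 0.0, -w])     # target, c.m. frame

s  = mdot(k + p, k + p)
qs = np.sqrt((s - (m_m + m_bp)**2)*(s - (m_m - m_bp)**2))/(2.0*np.sqrt(s))
th = 0.6                              # hypothetical c.m. angle
q  = np.array([np.hypot(m_m,  qs),  qs*np.sin(th), 0.0,  qs*np.cos(th)])
pp = np.array([np.hypot(m_bp, qs), -qs*np.sin(th), 0.0, -qs*np.cos(th)])

t, u = mdot(k - q, k - q), mdot(k - pp, k - pp)
print(np.allclose(k + p, q + pp))                  # True
print(s + t + u, m_b**2 + m_m**2 + m_bp**2)        # equal
\end{verbatim}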
For this single-meson production process, it is argued that replacing the
$t$-channel single-meson exchange (third diagram in Fig.~\ref{fig:Msutc}) by
the exchange of an entire Regge trajectory, would lead to a better, simpler
description of the dynamics of the process in question, in
particular, if it is dominated by small-momentum transfers
\cite{GLV97,VGL98,Corthals06,Corthals07,NK2010,He14,HeNP14,NY11,HC12,NG13,Wang14,Chiang,Titov,Hyodo,Toki}.
However, the good success of such hybrid approaches notwithstanding, it is well
known that this replacement destroys gauge invariance even if the underlying
Feynman formulation was gauge invariant to start with.
One widely used recipe for restoring gauge invariance is the method of
Ref.~\cite{GLV97} which basically uses the residual function of the base state
of the $t$-channel Regge trajectory as a common suppression function for all
terms of the tree-level current. Current conservation --- i.e., $k_\mu
M^\mu=0$, where $M^\mu$ denotes the current --- is achieved in this method
because one starts from a conserved tree-level current, and multiplication by a
common suppression function does not destroy this property. Even though the
method is quite successful in providing good descriptions of data in many
applications (see, for example,
Refs.~\cite{GLV97,VGL98,Corthals06,Corthals07,NK2010,He14,HeNP14,NY11,HC12,NG13,Wang14,Chiang}),
there is no dynamical foundation for it.
We point out in this context that the current-conservation condition, $k_\mu
M^\mu=0$, only implies \textit{global} gauge invariance which is little more
than charge conservation. \textit{Local} gauge invariance, i.e., the
requirement that the physical observables are invariant under local U(1)
transformations of the fields, on the other hand, implies the very existence of
the electromagnetic field~\cite{PStext}. Since global gauge invariance follows
from local gauge invariance, but not the other way around, imposing current
conservation by itself to find ways of repairing a current that was damaged by
approximations, therefore, does not imply that the damage done to the
underlying electromagnetic field is repaired as well.
We will show here how \textit{local} gauge invariance can be restored when the
$t$-channel single-meson exchange is replaced by the exchange of an entire
mesonic Regge trajectory. The method as such is not restricted to the
$t$-channel and could also be applied to a $u$-channel description in terms of
baryon Regge trajectories in a straightforward
manner.\footnote{\label{foot:schannel}
Technically, it could also be used for $s$-channel Reggeization, but since
the $s$-channel contribution for a given experiment is a constant, without
any angular dependence, it seems doubtful that there would be much point in
doing so, even if one ignores duality issues between $s$- and $t$-channel
processes~\cite{Collins,DDLN,Gribov}.}
The proposed mechanism is based on the necessary and sufficient conditions for
local gauge invariance formulated as generalized Ward-Takahashi identities for
the production current~\cite{Kazes1959,hh97}. These are \textit{off-shell}
conditions that automatically reduce to the familiar current-conservation
relation, $k_\mu M^\mu=0$, when taken on shell. The implementation of these
conditions results in contact-type interaction currents~\cite{hh98,hh06,hh11}
as minimal additions to a given current to restore local gauge invariance. The
method is well established within the usual Feynman picture and it has been
applied successfully to a variety of
photoprocesses~\cite{NH04,NH06,NOH06,HL08,yo08,NH09,NOH12,Huang12,HN12,HHN13,Roenchen13}.
The extension given here to include exchanges of Regge trajectories is
straightforward.
The paper is organized as follows. In the subsequent Sec.~\ref{sec:basics}, we
will recapitulate basic details of meson photoproduction within the general
field-theory approach of Haberzettl~\cite{hh97} and discuss, in particular, how
the set of generalized Ward-Takahashi identities ensures the local gauge
invariance of the production current. The Regge treatment of $t$-channel meson
exchanges considered in Sec.~\ref{sec:regge} is then immediately seen to
violate these conditions thus leading to a current that is not conserved. The
reason for this violation can be traced to the fact that higher-lying mass
states above the base state of the Regge trajectory have the wrong coupling to
the electromagnetic field. Using the residual function from the base state of
the Regge trajectory, we show then how to construct a contact current that
restores validity of the full set of generalized Ward-Takahashi identities and
therefore ensures local gauge invariance. As an illustration of the relevant
details, we treat in Sec.~\ref{sec:example} the example of the
strangeness-production reaction $\gamma+p\to K^+ + \Sigma^{*0}$. In
Sec.~\ref{sec:summary}, we will provide a summarizing discussion of the present
approach. Finally, in the Appendix, we write out the generic expressions
applicable to any single-meson production process that allow one to construct
the minimal contact currents necessary to maintain local gauge invariance.
\section{Photoproduction Basics}\label{sec:basics}
The following description is based on the field-theoretical approach of
Haberzettl~\cite{hh97} originally developed for pion photoproduction off the
nucleon. This formalism, however, is quite generic and can be readily applied
to meson-production processes off any baryon.
The basic topological structure of the single-pion production current $M^\mu$
was given a long time ago~\cite{GellMann54} as arising from how the photon can
couple to the underlying hadronic $\pi NN$ vertex. The resulting structure
depicted in Fig.~\ref{fig:Msutc} is generic and applies to all photo- and
electroproduction processes of a single meson off a baryon. The full current
$M^\mu$, therefore, can be written generically as
\begin{equation}
M^\mu = M^\mu_s+M^\mu_u+M^\mu_t +M^\mu_\IC~,
\label{eq:Msuti}
\end{equation}
as indicated in Fig.~\ref{fig:Msutc}, where the indices $s$, $u$, and $t$ here
refer to the Mandelstam variables of the respective exchanged intermediate
off-shell particle. This structure is based on topology alone and therefore
independent of the details of the individual current contributions. The first
three (polar) contributions are relatively simple; the real complication of the
problem lies in how complex the reaction mechanisms are that are taken into
account in the interaction current $\Mint^\mu$ because in principle $\Mint^\mu$
subsumes all mechanisms that do not have $s$-, $u$-, or $t$-channel poles, and
this comprises \textit{all} final-state interactions and therefore necessarily
all effects that arise from the coupling of various reaction
channels~\cite{hh97,hh11,Roenchen13}.
Here, we will ignore all of these reaction-dynamical complications and treat
the interaction current $M_\IC^\mu$ simply as a `black box' that must satisfy
certain four-divergence constraints~\cite{hh97}. If needed, one may add the
manifestly transverse contributions of the more complete
treatment~\cite{hh06,hh11} to the minimal explicit structure discussed here.
We emphasize that the particles explicitly entering all expressions here must
be physical particles. In other words, the Regge-specific implementation of the
formalism does not apply to bare particles. The corresponding propagators here
must describe physical particles, with poles at the respective physical masses,
but their structure is not limited otherwise: they may contain explicit
dressing functions, or they can be simple Feynman-type propagators with
physical masses, with the dressing mechanisms that generated those masses
hidden in form factors. Put differently, the diagrams of
Fig.~\ref{fig:Msutc} must be taken as representing the solution of the
meson-production problem and not as the Born-type bare input for a
Bethe-Salpeter- or Dyson-Schwinger-type reaction equation.
Also, for the purpose of gauge invariance, the only relevant intermediate
states in the $s$-, $u$-, and $t$-channel diagrams of Fig.~\ref{fig:Msutc} are
those where the photon does not initiate a transition (since transition
currents are transverse), i.e., where the states before and after the photon
interacts are the same particle with non-zero charge. Thus, for the present
purpose, without loss of generality, we may ignore all diagrams and
intermediate states that do not contribute to the four-divergence of the
production current $M^\mu$.
As a consequence, with this restriction, all three hadronic vertices in
Fig.~\ref{fig:Msutc} describe the same three-point vertex $b\to m+b'$, for
which we will use the notation $F(p_{b'},p_b)$, where the arguments here are
the incoming and outgoing baryon momenta, as depicted in
Fig.~\ref{fig:MBBvertex}. The vertex notation $F$ subsumes all coupling
operators and isospin dependence, etc., and depending on the specific reaction,
it may also carry Lorentz indices [see Eq.~(\ref{eq:modelvertex}) below, and
also the example in Sec.~\ref{sec:example}]. The three kinematic situations in
which this vertex appears in Fig.~\ref{fig:Msutc} are then uniquely identified
by the Mandelstam variables of the exchanged intermediate hadron,
\begin{subequations}\label{eq:Mandelstam}
\begin{align}
s&=(p+k)^2=(p'+q)^2~,
\\
u&=(p'-k)^2=(p-q)^2~,
\\
t&=(q-k)^2=(p-p')^2~,
\end{align}
\end{subequations}
and we will use
\begin{equation}
F_t=F(p',p)\,,~~F_u=F(p'-k,p)\,,~~F_s= F(p',p+k)
\end{equation}
to abbreviate the corresponding vertices, and generically write $F_x$ for
$x=s,u,t$.
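As a concrete numerical illustration (ours, not part of the formalism), the
following Python sketch builds on-shell center-of-mass four-momenta for the
process (\ref{eq:process}), checks the on-shell conditions, and verifies the
identity $s+t+u=M_b^2+M_m^2+M_{b'}^2$ implied by Eqs.~(\ref{eq:Mandelstam})
for a real photon ($k^2=0$); the masses and all helper names are our choices.
\begin{verbatim}
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])   # metric (+,-,-,-)

def mdot(a, b):
    """Minkowski scalar product of two four-vectors."""
    return a @ ETA @ b

def cm_momenta(sqrt_s, Mb, Mm, Mbp, cos_th):
    """On-shell CM four-momenta for gamma(k)+b(p) -> m(q)+b'(p')."""
    s = sqrt_s**2
    kk = (s - Mb**2)/(2*sqrt_s)          # photon momentum (k^2 = 0)
    qq = np.sqrt((s - (Mm + Mbp)**2)*(s - (Mm - Mbp)**2))/(2*sqrt_s)
    sin_th = np.sqrt(1.0 - cos_th**2)
    k = np.array([kk, 0.0, 0.0, kk])
    p = np.array([(s + Mb**2)/(2*sqrt_s), 0.0, 0.0, -kk])
    q = np.array([(s + Mm**2 - Mbp**2)/(2*sqrt_s),
                  qq*sin_th, 0.0, qq*cos_th])
    return k, p, q, k + p - q            # p' fixed by momentum conservation

# illustrative masses (GeV) for gamma p -> K+ Sigma*(1385)
Mb, Mm, Mbp = 0.938, 0.494, 1.385
k, p, q, pp = cm_momenta(2.1, Mb, Mm, Mbp, cos_th=0.3)
assert np.isclose(mdot(k, k), 0.0) and np.isclose(mdot(pp, pp), Mbp**2)
s, t, u = mdot(k + p, k + p), mdot(q - k, q - k), mdot(p - q, p - q)
assert np.isclose(s + t + u, Mb**2 + Mm**2 + Mbp**2)
\end{verbatim}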
\begin{figure}[t!]\centering
\includegraphics[width=.35\columnwidth,clip=]{MBBvertex.eps}
\caption{\label{fig:MBBvertex}
Generic vertex $F(p_{b'},p_b)$ for $b\to m+b'$ with associated momenta. The
meson momentum $q_m=p_b-p_{b'}$ is given by four-momentum conservation across
the vertex.}
\end{figure}
\subsection{Generalized Ward-Takahashi identities}
First, to set the stage for the Regge treatment, we will recapitulate how the
local gauge-invariance requirements differ from mere current conservation,
i.e., global gauge invariance.
To preserve \textit{local} gauge invariance for the photoprocess
(\ref{eq:process}) the following set of \textit{off-shell} four-divergence
relations need to be satisfied~\cite{hh97,hh06,hh11}. At the very base are the
Ward-Takahashi identities (WTI)~\cite{Ward,Takahashi} for the individual
electromagnetic currents $J^\mu$ of mesons (index $m$) and baryons (indices
$b'$ or $b$),
\begin{subequations}\label{eq:WTI}
\begin{align}
k_\mu J_m^\mu(q,q-k) &= \Delta_{m}^{-1}(q) Q_{m}- Q_{m} \Delta_{m}^{-1}(q-k)~,
\\
k_\mu J_{b'}^\mu(p',p'-k) &= S_{b'}^{-1}(p') Q_{b'}- Q_{b'} S_{b'}^{-1}(p'-k)~,
\\
k_\mu J_b^\mu(p+k,p) &= S_{b}^{-1}(p+k) Q_{b}- Q_{b} S_{b}^{-1}(p)~,
\end{align}
\end{subequations}
where $\Delta_m(q)$, $S_{b'}(p')$ and $S_{b}(p)$ are the respective propagators
for the meson and baryons, with arguments providing their four-momenta, and
$Q_m$, $Q_{b'}$, and $Q_b$ denoting their associated charge operators. The
photoproduction current $M^\mu$ of Eq.~(\ref{eq:Msuti}) must satisfy the
generalized WTI (gWTI)~\cite{Kazes1959,hh97},
\begin{align}
k_\mu M^\mu &= \Delta_{m}^{-1}(q) Q_{m} \Delta_{m}(q-k)\, F_t
\nonumber\\
&\mbox{}\qquad
+ S_{b'}^{-1}(p') Q_{b'} S_{b'}(p'-k)\, F_u
\nonumber\\
&\mbox{}\qquad\quad - F_s\, S_b(p+k) Q_b S_b^{-1}(p)~,
\label{eq:gWTI}
\end{align}
and, finally, the interaction current $\Mint^\mu$ needs to satisfy the
condition
\begin{align}
k_\mu \Mint^\mu &= Q_{m}\, F_t + Q_{b'}\, F_u
- F_s\,Q_b~.
\label{eq:gWTIint}
\end{align}
In view of the isospin dependence of the vertices, charge operators and
vertices do not commute. Note that the right-hand side vanishes here
identically if all vertices are replaced by simple coupling constants, for we
then have $F_x \to g \iso$, where $g$ is the coupling constant and $\iso$
generically denotes the isospin operator of the vertex, and hence $Q_m\iso
+Q_{b'}\iso - \iso Q_b\equiv 0$ provides charge conservation across the
photoprocess~\cite{hh97}. In a manner of speaking, therefore,
Eq.~(\ref{eq:gWTIint}) amounts to the formulation of the effective charge
difference across the reaction in the presence of hadronic vertices with
structure.
It is of paramount importance here that all three four-divergence equations are
off-shell relations, and that the off-shellness is a necessary requirement for
local gauge invariance~\cite{hh97} since it ensures that the (off-shell)
current $M^\mu$ provides the correct, consistent contributions to gauge
invariance even if it is embedded as an off-shell subprocess in a larger
process (for example, electromagnetic production of two or more
mesons~\cite{NOH06,FB20}).
With the off-shell WTIs (\ref{eq:WTI}) and (\ref{eq:gWTI}) given, global gauge
invariance follows trivially by taking the respective on-shell matrix elements,
with the inverse propagators in the four-divergences (\ref{eq:WTI}) and
(\ref{eq:gWTI}) then ensuring that the four-divergences vanish; in particular,
\begin{equation}
k_\mu M^\mu =0\qquad \text{(on shell)}~.
\label{eq:CurrConserved}
\end{equation}
To be sure, this is a necessary condition that the physical production current
needs to satisfy and that follows trivially from local gauge invariance;
however, this on-shell restriction by itself contains no information that
allows one to
meaningfully `guess' at a nontrivial structure for $M^\mu$. Thus, it should not
be used as a starting point for restoring gauge invariance destroyed by
approximations.
The proper starting point should be the set of off-shell equations
(\ref{eq:WTI}), (\ref{eq:gWTI}), and (\ref{eq:gWTIint}). One easily sees here
that only two --- any two --- of these conditions are necessary to ensure the
validity of the respective third equation.
For the practical purpose of restoring gauge invariance, it is easiest to work
with Eqs.~(\ref{eq:WTI}) and (\ref{eq:gWTIint}). In any microscopic formulation
of photoprocesses, the single-hadron WTIs (\ref{eq:WTI}) are a given from the
start. Therefore, to obtain the gWTI (\ref{eq:gWTI}) for the full production
current $M^\mu$ and thus ensure the preservation of local gauge invariance, one
needs to construct an interaction current $\Mint^\mu$ that satisfies
Eq.~(\ref{eq:gWTIint}). Note, in particular, that the structure of this
equation does not change even if the external hadrons are on shell, and thus
--- quite in contrast to the current-conservation condition (\ref{eq:CurrConserved})
--- \textit{even its on-shell limit provides a nontrivial constraint} that ensures
that the on-shell result (\ref{eq:CurrConserved}) is a consequence of
\textit{local} gauge invariance and not just mere global gauge invariance.
\subsubsection{$t$-Channel contribution}\label{sec:tCh}
To see what needs to be done to restore gauge invariance in the Regge case, let
us first look at how the usual $t$-channel term as depicted by the third
diagram in Fig.~\ref{fig:Msutc} contributes to upholding local gauge
invariance.
In terms of the momenta of the diagram, and stripped of all unnecessary
factors, it reads
\begin{equation}
M_t^\mu = J^\mu_m (q,q-k) \Delta_m(q-k) F_t~,
\label{eq:Feynman}
\end{equation}
and its four-divergence is given by
\begin{equation}
k_\mu M_t^\mu = \Delta_{m}^{-1}(q) Q_{m} \Delta_m(q-k) F_t
- Q_{m} F_t~.
\label{eq:kmuMt}
\end{equation}
The first term on the right hand side is precisely the first term appearing on
the right-hand side of the gWTI~(\ref{eq:gWTI}); the second term involving only
the vertex, but no propagator, is canceled by the first term on the right-hand
side of the interaction-current condition~(\ref{eq:gWTIint}). Similar
cancellations happen for the respective contributions from all three polar
current contributions and this cancellation mechanism ensures the validity of
the full gWTI --- and thus of local gauge invariance
--- once Eqs.~(\ref{eq:WTI}) and (\ref{eq:gWTIint}) are satisfied.
It is this cancellation mechanism that will be exploited in the subsequent
Regge treatment.
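The cancellation can also be checked symbolically. The following sketch (our
illustration) assumes the simplest case of a structureless pseudoscalar meson
with a scalar-QED-type current, $k_\mu J_m^\mu = Q_m(2q\cdot k - k^2)$, a bare
propagator $\Delta_m(q-k)=1/(t-M_m^2)$, and a commuting charge; it verifies
the meson WTI of Eqs.~(\ref{eq:WTI}) and the four-divergence
(\ref{eq:kmuMt}) at the level of Lorentz invariants.
\begin{verbatim}
import sympy as sp

# Lorentz invariants as commuting symbols (abelian charge Q_m)
q2, qk, k2, M2, Qm, Ft = sp.symbols('q2 qk k2 M2 Q_m F_t')

Dm_q  = 1/(q2 - M2)                    # Delta_m(q) with q^2 = q2
Dm_qk = 1/((q2 - 2*qk + k2) - M2)      # Delta_m(q-k), (q-k)^2 expanded

kJ = Qm*(2*qk - k2)                    # k.J for a pointlike scalar current

# meson WTI: k.J = Dm^{-1}(q) Q_m - Q_m Dm^{-1}(q-k)
assert sp.simplify(kJ - (Qm/Dm_q - Qm/Dm_qk)) == 0

# four-divergence of M_t: k.M_t = (k.J) Dm(q-k) F_t
kMt = kJ*Dm_qk*Ft
# must equal Dm^{-1}(q) Q_m Dm(q-k) F_t - Q_m F_t
assert sp.simplify(kMt - (Qm*Dm_qk*Ft/Dm_q - Qm*Ft)) == 0
\end{verbatim}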
\section{Gauge-invariant Regge treatment}\label{sec:regge}
First, let us write the hadronic $b\to m+b'$ vertex (see
Fig.~\ref{fig:MBBvertex}) as\footnote{For a more general description of the
vertex, see discussion in Appendix \ref{app:beyond}.}
\begin{equation}
F(p_{b'},p_b) = \hF(q_m)\iso\, f(q_m^2,p_{b'}^2,p_b^2)~,
\label{eq:modelvertex}
\end{equation}
where the outgoing meson four-momentum $q_m$ is given by $q_m = p_b-p_{b'}$ in
terms of the incoming and outgoing baryon momenta. The operator $\hF$
describing the coupling structure of the vertex subsumes all strength
parameters, masses, signs, etc.; in the simplest case it is just a constant,
but in more complicated cases it contains derivatives of the outgoing meson
field which lead to the $q_m$ dependence. The extended structure of the vertex
is given by the scalar form factor $f$ normalized as
\begin{equation}
f(M_m^2,M_{b'}^2,M_b^2) =1~,
\label{eq:norm}
\end{equation}
where the squared momenta of (\ref{eq:modelvertex}) sit on their respective
mass shells. The operator $\iso$ summarily describes the isospin dependence of
the vertex, with relevant indices suppressed. Combined with the respective
charge operators $Q$ for the three legs of the vertex, one obtains~\cite{hh97}
\begin{equation}
Q_m\iso =e_m~,\quad Q_{b'}\iso=e_{b'}~,\quad \iso Q_b=e_b~,
\end{equation}
where
\begin{equation}
e_m + e_{b'} -e_b =0
\label{eq:conscharge}
\end{equation}
provides charge conservation across the reaction. Taken in an appropriate
isospin basis, the charge-isospin operators $e_m$, $e_{b'}$, and $e_b$ are
equal to the respective charges of the individual legs.
We will need only on-shell kinematics here where all external hadron legs of
Fig.~\ref{fig:Msutc} sit on their respective mass shells. The form factors
associated with the vertices $F_x$, $x=s,u,t$, for these cases are
\begin{subequations}
\begin{align}
f_s(s) &= f(M_m^2,M_{b'}^2,s)~,
\\
f_u(u) &= f(M_m^2,u,M_b^2)~,
\\
f_t(t) &= f(t,M_{b'}^2,M_b^2)~,
\end{align}
\end{subequations}
where the Mandelstam variables (\ref{eq:Mandelstam}) are used. The $t$-channel
vertex, in particular, then reads
\begin{equation}
F_t= F(p',p) = \hF(q-k)\,\iso\,f_t(t)~,
\end{equation}
and for the corresponding meson-exchange propagator, we may write without loss
of generality
\begin{equation}
\Delta_m(q-k) = \frac{N_m(q-k)}{t-M_m^2}~,
\label{eq:tpole}
\end{equation}
where the pole at $t=M_m^2$ was pulled out explicitly and the residual
numerator $N_m(q-k)$ defined by this relation may describe dressing effects
and/or the coupling structure of the propagator. In the simplest cases, $N_m$
equals unity for pseudoscalar mesons and $-g^{\beta\alpha}$ for vector mesons
(in Feynman gauge), for example.
Standard Reggeization of the $t$-channel meson exchange corresponds to the
replacement~\cite{GLV97,VGL98,Chiang,Titov,Hyodo,Toki,Corthals06, Corthals07}
\begin{equation}
\frac{1}{t-M_m^2} f_t(t) \to \mP_m(t)~,
\label{eq:replaceRegge}
\end{equation}
where $\mP_m(t)$ is the Regge-trajectory propagator appropriate for this
particular meson exchange. By construction (see details in
Sec.~\ref{sec:resfunction}), it contains poles at higher-lying meson masses
along this particular trajectory, in addition to the primary pole at the base
of the trajectory at $t=M_m^2$ of Eq.~(\ref{eq:tpole}). Moreover, the residue at
this primary pole,
\begin{equation}
\lim_{t\to M_m^2} (t-M_m^2) \mP_m(t) =1~,
\label{eq:limP}
\end{equation}
is exactly the same as that of the left-hand side of (\ref{eq:replaceRegge}).
The residual function,\footnote{
It is this residual function that was used in Ref.~\cite{GLV97} as an overall
multiplicative factor for their gauge-invariance-restoring recipe. }
\begin{equation}
\mF_t(t)=(t-M_m^2)\,\mP_m(t)~,
\label{eq:residualfct}
\end{equation}
thus, is finite and normalized to unity at $t=M_m^2$, just like the usual
$t$-channel form factor $f_t(t)$. Details of $\mF_t$ will be given in the
subsequent Sec.~\ref{sec:resfunction}.
The Reggeized $t$-channel current now reads
\begin{equation}
M_t^\mu \to\mM_t^\mu = J^\mu_m (q,q-k)\,\Delta_m(q-k)\,\RF(p',p)~,
\label{eq:tReggeMod}
\end{equation}
where
\begin{equation}
\RF(p',p)= \hF(q-k)\,\iso\,\mF_t(t)
\end{equation}
describes the Reggeized vertex, with the corresponding four-divergence given by
\begin{align}
k_\mu \mM^\mu_t &=\Delta_m^{-1}(q)Q_m\,\,\Delta_m(q-k)\,\RF
-Q_m\, \RF~.
\label{eq:gaugeRviol}
\end{align}
The first term on the right-hand side, with the inverse meson propagator
depending on the external (outgoing) meson momentum, vanishes on shell and thus
provides an acceptable contribution to the gWTI, in analogy to Eq.~(\ref{eq:gWTI}).
The second term, however, has no counterpart in the four-divergence
(\ref{eq:gWTIint}) and thus violates local gauge invariance (and therefore
obviously also global gauge invariance).
This violation comes about because in the Regge treatment \textit{all}
particles on the trajectory are taken to couple to the photon with the
\textit{same} current $J^\mu_m$ as the primary base state. If one were instead
to incorporate these contributions via Feynman-type exchange mechanisms, each
of the higher-lying states would couple transversely to the photon, because
the corresponding currents are transition currents from the intermediate
higher-mass states to the lower-mass primary base state, which is the final
meson state of the reaction; such transverse transition currents would not
contribute to the four-divergence.
Clearly, to restore local gauge invariance, Regge treatment of the $t$-channel
by itself is not enough --- one \textit{must} also Reggeize the interaction
current $\Mint^\mu$ so that its four-divergence will provide the necessary
cancellation of the offending contribution in (\ref{eq:gaugeRviol}), thus in
essence restoring the transversality of these contributions with higher mass.
In other words, to preserve local gauge invariance, one must apply the
Reggeization process consistently across all elements of the production current
$M^\mu$. Since $t$-channel-type exchanges also contribute (as off-shell
processes) to the internal mechanisms of $\Mint^\mu$, an appropriate
Reggeization of such internal exchanges should provide the cancellation for the
offending term in Eq.~(\ref{eq:gaugeRviol}).
Obviously then, treating Regge consistently with local gauge invariance simply
entails consistently replacing the usual $t$-channel vertex $F_t$ by the
Reggeized vertex $\RF$ everywhere. In addition to the Reggeized $t$-channel
current (\ref{eq:tReggeMod}), this also requires modification of the contact
current,
\begin{equation}
F_t\to \RF\text{:}\qquad \Mint^\mu \to \mM_\IC^\mu~,
\end{equation}
such that the Reggeized contribution from the corresponding four-divergence,
\begin{equation}
k_\mu \mM_\IC^\mu = Q_m\, \RF + Q_{b'}\, F_u
- F_s\,Q_b~,
\label{eq:kMintRegge}
\end{equation}
now cancels the previously gauge-invariance-violating term from
(\ref{eq:gaugeRviol}).
The resulting Reggeized photoamplitude,
\begin{equation}
M^\mu \to \mM^\mu = M_s^\mu+M_u^\mu + \mM^\mu_t +\mM^\mu_\IC~,
\end{equation}
then, by construction, satisfies the appropriate gWTI,
\begin{align}
k_\mu \mM^\mu &= \Delta_m^{-1}(q) Q_m \Delta_m(q-k)\, \RF
\nonumber\\
&\mbox{}\qquad + S_{b'}^{-1}(p') Q_{b'} S_{b'}(p'-k)\, F_u
\nonumber\\
&\mbox{}\qquad\quad - F_s\, S_b(p+k) Q_b S_b^{-1}(p)~,
\end{align}
and thus is fully consistent with local gauge invariance.
The construction of the Reggeized contact current $\mM^\mu_\IC$ that produces
the correct four-divergence (\ref{eq:kMintRegge}) from the Reggeized vertex
$\RF$ follows exactly along the same lines as those given for un-Reggeized
contact currents $\Mint^\mu$~\cite{hh06}. The procedure is straightforward, and
we provide the corresponding generic expressions for the minimal interaction
current that restores local gauge invariance in the Appendix. However, to
understand how it works, it might be more illuminating to consider an example.
To this end, we discuss in Sec.~\ref{sec:example} a strangeness-production
process with a Kroll-Ruderman-type~\cite{Kroll1954} bare contact current.
\subsection{Regge residual function}\label{sec:resfunction}
To provide explicit expressions for the residual function
(\ref{eq:residualfct}), it is convenient to rewrite the standard expressions
for positive- and negative-signature Regge propagators given in
Refs.~\cite{Collins,GLV97} to obtain the unified form
\begin{align}
\mF_t(t)
= \left(\frac{s}{s_\scale}\right)^{\alpha_i(t)}
\frac{N\big[\alpha_i(t);\eta\big]}{\Gamma\big(1+\alpha_i(t)\big)}
\frac{\pi\alpha_i(t)}{\sin\big(\pi\alpha_i(t)\big)}~,
\label{eq:FRegge}
\end{align}
where the functions
\begin{equation}
\alpha_i(t) = \alpha_i'\,(t-M_i^2)~, \qquad \text{for}~~i=0,1~,
\end{equation}
are related to the usual Regge trajectories by
\begin{equation}
\alpha_\zeta(t) = \begin{cases}
\alpha_0(t)~, &\text{for}~~ \zeta=+1~,
\\
1+ \alpha_1(t)~, &\text{for}~~ \zeta=-1~.
\end{cases}
\end{equation}
Here, the signature for pseudoscalar mesons is $\zeta=+1$ (corresponding to
$i=0$) and $\zeta=-1$ (corresponding to $i=1$) for vector mesons. The masses
$M_i$ here are the lowest masses at the bases of the respective trajectories,
with their slopes given by $\alpha'_i$. For these base states, at $t=M_i^2$,
the residual function thus is given by a manageable $0/0$ situation.
Even though $\mF_t$ is also $s$-dependent analytically through the scale factor
$\left(s/s_\scale\right)^{\alpha_i(t)}$, this is irrelevant for our purposes
since for a given experiment, $s$ is fixed, and we may consider $\mF_t$ as a
function of $t$ for fixed $s$. The exponential scale factor suppresses the
Regge contribution for $s>s_\scale$ for (negative) physical values of $t$; the
scale parameter $s_\scale$ usually is chosen as $s_\scale=1\,\text{GeV}^2$.
The signature function $N$ appears here as
\begin{equation}
N[\alpha_i(t);\eta] = \eta + (1-\eta)e^{-i\pi\alpha_i(t)}~,
\label{eq:fitphase}
\end{equation}
where $\eta$ is a real parameter whose three standard values are
\begin{equation}
\eta =\begin{cases}
\frac{1}{2}~, & \text{pure-signature trajectories}~,
\\
0~, & \text{added trajectories: rotating phase}~,
\\
1~, & \text{subtracted trajectories: constant phase}~.
\end{cases}
\end{equation}
In the pure-signature case, for $\eta=1/2$, $N$ vanishes for every odd integer
value of $\alpha_i(t)$, thus leaving only the even integer values to produce
poles in (\ref{eq:FRegge}). This corresponds to even and odd angular momenta,
\begin{equation}
\alpha_+ = 0,2,4,\ldots
\quad\text{and}\quad
\alpha_- = 1,3,5,\ldots~,
\end{equation}
associated with the states along the respective positive- or negative-signature
trajectories. Equation (\ref{eq:fitphase}) also subsumes treatment of strongly
degenerate trajectories~\cite{Collins,GLV97}, where the rotating phase
($\eta=0$) results from adding degenerate trajectories and the constant phase
($\eta=1$) arises from subtracting them. Which case applies is largely
determined semi-phenomenologically by $G$-parity arguments~\cite{GLV97}.
Going beyond these standard cases, since the signature function is largely
phenomenological anyway, one may consider $\eta$ as a convenient interpolating
fit parameter for optimizing the description of data for the value range $0 \le
\eta \le 1$. Note that $\exp(-i\pi\alpha_i)$ in (\ref{eq:fitphase}) is $+1$ at
the poles of the primary trajectory and $-1$ at the poles of the added or
subtracted secondary (degenerate) trajectory. Hence, taking into account the
minus sign arising from the negative slope of the denominator sine function in
(\ref{eq:FRegge}) at those secondary poles, this effectively changes the
coupling strength for the latter exchange by the factor $(1-2\eta)$ that can
vary between $+1$ and $-1$; it is positive or negative depending on whether its
degeneracy effect is more additive or subtractive, respectively. The coupling
strength of the primary trajectory remains unchanged. Clearly, if the
strong-degeneracy hypothesis is warranted for a particular application, fitted
values of $\eta$ should come out close to either $0$ or $1$.
At the base of the trajectories, one has
\begin{equation}
N[\alpha_i(M_i^2);\eta]=1
\end{equation}
for any value of $\eta$, thus ensuring the validity of the necessary condition
\begin{equation}
\mF_t(M_i^2) =1~,
\label{eq:FReggenormgen}
\end{equation}
for both $i=0,1$ for the residue of the corresponding Regge propagators. The
fact that the Regge residue function $\mF_t$ thus preserves the normalization
of the standard form factor $f_t$ is crucial for the construction of the
gauge-invariance-preserving contact current, as will be seen explicitly in the
following example.
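For orientation, Eq.~(\ref{eq:FRegge}) is simple to evaluate numerically. The
sketch below (our illustration; the function name is not from the literature)
implements the residual function for a linear trajectory, handles the $0/0$
situation at the base by writing $\pi\alpha/\sin(\pi\alpha)=1/\mathrm{sinc}(\alpha)$,
checks the normalization (\ref{eq:FReggenormgen}), and lists the candidate
pole positions $t_n=M_i^2+n/\alpha_i'$ of the corresponding Regge propagator
(of which only the even-$n$ ones survive for pure signature, $\eta=1/2$).
\begin{verbatim}
import numpy as np
from scipy.special import gamma as Gamma

def F_regge(t, s, M_i, slope, eta, s_scale=1.0):
    """Residual function of Eq. (FRegge) for alpha(t) = slope*(t - M_i^2);
    eta interpolates the signature function N of Eq. (fitphase)."""
    alpha = slope*(t - M_i**2)
    N = eta + (1.0 - eta)*np.exp(-1j*np.pi*alpha)
    # pi*alpha/sin(pi*alpha) = 1/sinc(alpha); np.sinc(0) = 1 handles
    # the 0/0 situation at the base of the trajectory automatically
    return (s/s_scale)**alpha*N/Gamma(1.0 + alpha)/np.sinc(alpha)

# kaonic trajectory of the following example: M_K = 0.494 GeV, slope 0.7 GeV^-2
M_K, slope, s = 0.494, 0.7, 4.41
assert np.isclose(F_regge(M_K**2, s, M_K, slope, eta=0.5), 1.0)

for n in range(4):                  # candidate poles along the trajectory
    t_n = M_K**2 + n/slope
    print(f"n={n}: t_n = {t_n:.3f} GeV^2, mass = {np.sqrt(t_n):.3f} GeV")
\end{verbatim}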
\section{\boldmath Example: $\gamma +p \to K^+ + \Sigma^{*0}$}\label{sec:example}
In this reaction only the incoming proton and the outgoing kaon carry charge.
Hence, extracting the isospin operators from the respective vertices, the
relevant charge parameters are (in an appropriate isospin basis)
\begin{equation}
Q_{b'}\iso\to e_\Sigma =0~,\quad Q_m\iso\to e_K=e~,\quad \iso Q_b \to e_p = e~,
\end{equation}
where $e$ is the fundamental charge unit, and charge conservation obviously
reads
\begin{equation}
e_\Sigma+ e_K = e_p
\quad\text{or}\quad
e_K = e_p
~.
\end{equation}
Hence, as far as gauge invariance is concerned, only $s$- and $t$-channels and
a contact term contribute. It suffices to consider this as an on-shell process
if the corresponding un-Reggeized amplitude is already constructed such that it
obeys the appropriate gWTI. Moreover, we can ignore all possible resonance
contributions and other meson exchanges since they do not contribute to the
four-divergence (for a more complete discussion, see Ref.~\cite{He14}). The
only relevant exchange particles are the proton (with mass $M_N$) in the
$s$-channel and the kaon $K^+$ (with mass $M_K$) in the $t$-channel.
The $p\to K^+ \Sigma^{*0}$ vertices for the $s$- and $t$-channel terms are
given by~\cite{He14}
\begin{subequations}
\begin{align}
F_s \to F_s^\nu&=g \iso\,q^\nu f_s(s)~,
\\
F_t\to F_t^\nu &=g \iso\,(q-k)^\nu f_t(t)~,
\end{align}
\end{subequations}
with scalar form factors $f_x$ ($x=s,t$) normalized as
\begin{equation}
f_s(M_N^2)=1
\quad\text{and}\quad
f_t(M_K^2)=1~.
\label{eq:fnormalize}
\end{equation}
The constant $g$ subsumes all coupling constants, mass factors, signs, etc.,
$\iso$ generically describes the isospin dependence, and $q^\nu$ and
$(q-k)^\nu$ are the operators for $s$- and $t$-channel, respectively, providing
coupling to the spin-3/2 Rarita-Schwinger spinor of the outgoing $\Sigma^{*0}$
baryon.
The resulting current reads
\begin{equation}
M^{\nu\mu} = M_s^{\nu\mu} + M_t^{\nu\mu} + M^{\nu\mu}_\IC~,
\end{equation}
where the Lorentz indices $\mu$ and $\nu$ connect to the incoming photon state
and the outgoing Rarita-Schwinger spinor, respectively. Assuming validity of
the single-particle WTI for the proton and the kaon (which are trivially true),
$M^{\nu\mu}$ is locally gauge invariant, according to (\ref{eq:gWTIint}), if
the interaction current satisfies
\begin{align}
k_\mu M^{\nu\mu}_\IC
&=Q_K F^\nu_t - F^\nu_s Q_p
\nonumber\\
&=e_K g\, (q-k)^\nu f_t - e_p g\, q^\nu f_s~.
\label{eq:kMintSigma}
\end{align}
Then, explicitly writing out the $t$-channel contribution,
\begin{align}
M_t^{\nu\mu}
&= \frac{(2q-k)^\mu Q_K }{t-M_K^2} \, F^{\nu}_t
\nonumber\\
&= e_K g\, \,\frac{(2q-k)^\mu }{t-M_K^2} (q-k)^\nu f_t~,
\end{align}
we see that its (on-shell) four-divergence contribution,
\begin{equation}
k_\mu M_t^{\nu\mu} =-Q_K F^\nu_t= - e_K g\, (q-k)^\nu f_t~,
\label{eq:Mnumut}
\end{equation}
is canceled by the $t$-channel term in (\ref{eq:kMintSigma}). A similar finding
for the $s$-channel shows that the validity of (\ref{eq:kMintSigma}) is both
necessary and sufficient for making the current $M^{\nu\mu}$ locally gauge
invariant.
In the structureless limit, when all form factors are unity, the bare
interaction current $m^{\nu\mu}_c$ also must satisfy the analog of
(\ref{eq:kMintSigma}), i.e.,
\begin{align}
k_\mu m^{\nu\mu}_c &=e_K g\, (q-k)^\nu - e_p g\, q^\nu
\nonumber\\
&=k_\mu \left(-e_K g \,g^{\nu\mu}\right)~,
\end{align}
which shows that the minimal interaction current is given by
\begin{equation}
m^{\nu\mu}_c = -e_K g \,g^{\nu\mu}~.
\end{equation}
This is precisely the contact current resulting from the usual four-point
contact Lagrangian for the present process. This result is seen here to be an
immediate consequence of local gauge invariance.
To construct the corresponding minimal interaction current, we adapt the
generic expression (\ref{eq:MintGeneric}) provided in the Appendix to the
present case and obtain
\begin{equation}
M_\IC^{\nu\mu} = -e_Kg \, g^{\nu\mu} f_t + g\, q^\nu C^\mu~ .
\label{eq:Mintnumu}
\end{equation}
The auxiliary contact current,
\begin{align}
C^\mu &= -e_K (2q-k)^\mu \frac{f_t-1}{t-M_K^2}f_s - e_p (2p+k)^\mu \frac{f_s-1}{s-M_N^2}f_t
\nonumber\\
&\mbox{}\qquad
+\hat{A}(s,t) \left(1-f_t\right)\left(1-f_s\right)
\nonumber\\
&\mbox{}\qquad\qquad
\times\left[e_K\frac{(2q-k)^\mu}{t-M_K^2}+ e_p \frac{(2p+k)^\mu}{s-M_N^2}\right]~,
\label{eq:Caux}
\end{align}
follows from Eq.~(\ref{eq:CauxGeneric}). It was derived from imposing local
gauge-invariance requirements in the presence of vertices with
structure~\cite{hh97,hh06}. In view of the normalizations
(\ref{eq:fnormalize}), this current is manifestly nonsingular. The function
$\hat{A}(s,t)$ in front of the manifestly transverse term here is a
phenomenological (complex) function that must vanish at high energies, but
otherwise can be freely chosen to improve fits to the data.
It is now a trivial exercise to show that
\begin{equation}
k_\mu C^\mu = e_K f_t - e_p f_s
\end{equation}
and thus the interaction current (\ref{eq:Mintnumu}) indeed provides the
correct four-divergence (\ref{eq:kMintSigma}) to ensure local gauge invariance.
We emphasize in this context that the contact-type interaction current
constructed here provides only the \textit{minimal} structure necessary for
maintaining local gauge invariance. If the physics of the problem should make
it necessary to consider additional current contributions, they can only arise
from additional manifestly transverse currents and thus do not contribute when
taking the four-divergence of the current.
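Since the four-divergence of $C^\mu$ rests on several on-shell identities, a
direct numerical check may be reassuring. The following sketch (ours; the
form-factor values and the complex stand-in for $\hat{A}$ are arbitrary test
numbers) builds on-shell kinematics as in the earlier sketch and confirms that
$k_\mu C^\mu = e_K f_t - e_p f_s$, independently of $\hat{A}$.
\begin{verbatim}
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])
def mdot(a, b): return a @ ETA @ b

# on-shell CM kinematics for gamma p -> K+ Sigma*0
MN, MK, MS = 0.938, 0.494, 1.385
rs = 2.1; s = rs**2; ct = -0.4; st = np.sqrt(1.0 - ct**2)
kk = (s - MN**2)/(2*rs)
qq = np.sqrt((s - (MK + MS)**2)*(s - (MK - MS)**2))/(2*rs)
k = np.array([kk, 0.0, 0.0, kk])
p = np.array([(s + MN**2)/(2*rs), 0.0, 0.0, -kk])
q = np.array([(s + MK**2 - MS**2)/(2*rs), qq*st, 0.0, qq*ct])
t = mdot(q - k, q - k)

eK = eP = 1.0                        # charges in units of e
ft, fs, A = 0.37, 0.62, 2.5 - 1.1j   # arbitrary test values; A plays hat-A

C = (-eK*(2*q - k)*(ft - 1)/(t - MK**2)*fs
     - eP*(2*p + k)*(fs - 1)/(s - MN**2)*ft
     + A*(1 - ft)*(1 - fs)*(eK*(2*q - k)/(t - MK**2)
                            + eP*(2*p + k)/(s - MN**2)))

assert np.isclose(mdot(k, C), eK*ft - eP*fs)   # independent of A
\end{verbatim}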
\subsubsection{Regge-trajectory exchange}
To Reggeize the $K^+$ exchange of the present example, the explicit expression
in analogy to (\ref{eq:replaceRegge}) reads~\cite{GLV97}
\begin{align}
\frac{f_t}{t-M_K^2} \to \frac{\mF_t}{t-M_K^2}~,
\label{eq:KaonReggePole}
\end{align}
with the residual function given by (\ref{eq:FRegge}) for $i=0$, where
\begin{equation}
\alpha_0(t) = \frac{t-M_K^2}{\Delta t_K}
\end{equation}
is the kaonic Regge trajectory, with slope
\begin{equation}
\alpha'_0 = \frac{1}{\Delta t_K} = 0.7\,\text{GeV}^{-2}~,
\end{equation}
which puts the Regge states at
\begin{equation}
t\to t_n=M_K^2 +n \Delta t_K~,\quad\text{for}~n=0,1,2,\ldots
\end{equation}
For pure pseudoscalar signature ($\zeta=+1~\Rightarrow~\eta=1/2$), only even
values of $n$ are realized on the trajectory; for all other values of $\eta$, all
states contribute.
The Reggeized $t$-channel current reads now
\begin{equation}
M_t^{\nu\mu} \to \mM_t^{\nu\mu}
= e_K g \,\frac{(2q-k)^\mu}{t-M_K^2} (q-k)^\nu \mF_t~,
\end{equation}
with the associated modified interaction current
\begin{equation}
M_\IC^{\nu\mu} \to \mM_\IC^{\nu\mu} = -e_K g\, g^{\nu\mu} \mF_t +g\, q^\nu \mC^\mu
\end{equation}
and modified auxiliary current
\begin{align}
C^\mu \to \mC^\mu &= -e_K (2q-k)^\mu \frac{\mF_t-1}{t-M_K^2}f_s
\nonumber\\
&\mbox{}\quad - e_p (2p+k)^\mu \frac{f_s-1}{s-M_N^2}\mF_t
\nonumber\\
&\mbox{}\quad
+\hat{A}(s,t)\left(1-f_t\right)\left(1-f_s\right)
\nonumber\\
&\mbox{}\qquad
\times \left[e_K\frac{(2q-k)^\mu}{t-M_K^2}+ e_p \frac{(2p+k)^\mu}{s-M_N^2}\right]~.
\label{eq:Ccontact1}
\end{align}
Despite the Reggeization of the $t$-channel form factor, because of the limit
(\ref{eq:FReggenormgen}), this current is still nonsingular as far as the
primary propagator singularities here are concerned. Note in this respect that
there is no reason to replace $f_t$ by $\mF_t$ in the last term since this
current piece is manifestly transverse and does not contribute to the
four-divergence. However, no harm would result if one did replace it since the
difference can be absorbed in redefining $\hat{A}$.
The auxiliary current $\mC^\mu$ now does have higher-mass singularities at
unphysical $t>0$ from the Regge trajectory but those are necessary to
compensate the corresponding higher-mass contributions from the $t$-channel
exchange which have the wrong electromagnetic coupling that led to the
violation of gauge invariance.
It is obvious now that the Reggeized production current for this process,
\begin{equation}
M^{\nu\mu} \to \mM^{\nu\mu} = M_s^{\nu\mu} + \mM_t^{\nu\mu} + \mM^{\nu\mu}_\IC~,
\end{equation}
by construction does indeed satisfy the generalized Ward-Takahashi identity for
this process and thus provides a conserved current,
\begin{equation}
k_\mu \mM^{\nu\mu} = 0\qquad \text{(on shell)}~,
\end{equation}
as a matter of course.
\section{Summary and Discussion}\label{sec:summary}
We have considered here a mechanism to repair gauge invariance broken by
Reggeization of $t$-channel meson exchanges in single-meson photoproduction off
a baryon. Consistent with the underlying field-theoretical foundations of such
processes~\cite{hh97}, we have argued that this must be done by constructing
contact-type interaction currents whose four-divergence compensates for the
wrong coupling to the electromagnetic field of higher-mass contributions of the
Regge trajectory that is responsible for the violation of gauge invariance. The
construction principle was based on the underlying generalized Ward-Takahashi
identities whose validity ensures local gauge invariance.
We emphasize once more in this respect that mere (on-shell) current
conservation, $k_\mu M^\mu=0$, is not very helpful as a starting point for
repairing gauge-invariance violations. As argued, the goal of any repair
mechanism must be the construction of an interaction current $M^\mu_\IC$ that
satisfies the crucial four-divergence condition (\ref{eq:gWTIint}) for this
interaction current. The resulting local gauge-invariance property will then
automatically ensure a conserved on-shell current $M^\mu$.
The present way of maintaining local gauge invariance in terms of a Regge form
factor $\mF_t$ to replace the usual $t$-channel cutoff function $f_t$ shows
that when viewed from the Feynman perspective, the Regge approach basically can
be understood as a prescription for the functional form of the $t$-channel form
factor. Numerical tests show that at (negative) physical $t$ (and fixed $s$),
the main features of $\mF_t$ that survive are the exponential scale factor and
the phase function,
\begin{equation}
S_t(t)=\left(\frac{s}{s_\scale}\right)^{\alpha_i(t)}N\big[\alpha_i(t);\eta\big]~.
\label{eq:ReggeScale}
\end{equation}
This exponential function falls off faster than any power-law form factor and
thus compared to a conventional phenomenological form factor drastically cuts
out the high-$|t|$ (i.e., backward-angle) scattering contributions.
The onset of the `Regge regime' is oftentimes very much under debate in
practical applications, in particular, if Regge exchanges are employed at
intermediate-energy ranges within hybrid approaches, as discussed here, that mix
Regge with the traditional Feynman picture. In this situation, it seems natural
to consider mechanisms for smooth transitions into that regime~\cite{Toki}. An
interpolating mechanism like $\RFF=\mF_t\,R+f_t\,(1-R)$, for example, that
determines an effective $t$-channel form factor $\RFF$ somewhere between its
non-Regge ($f_t$) and Regge ($\mF_t$) limits in terms of an ($s$- and
$t$-dependent) interpolating function $R$ can be fine-tuned to the requirements
of particular applications~\cite{Toki,NK2010}. Hence, fitting the interpolation
parameters to experimental data lets the data `decide' whether Regge exchanges
should be necessary or not for a particular process at a particular photon
energy. Since this would take much of the contention out of the debate, we
strongly advocate employing such interpolation schemes. This may be especially
advisable for energy ranges where details of baryon-resonance structure may
still play a role. Clearly, the procedure outlined here is not affected by such
an interpolation scheme since $\RFF$ is normalized to unity by construction and
may thus be used for building a contact current, just like $f_t$ or $\mF_t$.
A similarly useful interpolation procedure is provided by the $\eta$-dependence
of the signature function $N\big[\alpha_i(t);\eta\big]$ of
Eq.~(\ref{eq:fitphase}) that allows for the smooth transition from the
pure-signature case to the two limiting cases of adding or subtracting
degenerate trajectories and thus, again, lets the data decide which description
is better suited for a given application.
One should point out that fixing local gauge invariance as presented here does
not imply that the resulting expressions will automatically provide good
results for the problem at hand. It merely means that whatever is missing for a
good description will not be due to violation of local gauge invariance. In
other words, anything that should be found lacking as far as the reproduction of
data is concerned would necessarily result from manifestly transverse
current mechanisms not relevant for local gauge invariance.
The locally gauge-invariant Reggeization procedure outlined here is currently
being applied to describe Jefferson Lab data~\cite{clas14} for $\gamma + n \to
K^+ +\Sigma^*(1385)^-$ at photon energies between 1.5 and 2.5 GeV. The
preliminary results are encouraging; the full report will be published
elsewhere~\cite{WHH2015}.
Finally, we mention once more that the procedure given here can also be used
for the Reggeization of the $u$-channel in terms of baryonic Regge
trajectories. With the details given here, it should be quite obvious how to
implement this for the $u$-channel in a locally gauge-invariant manner (see
also footnote \ref{foot:schannel}).
\section*{Acknowledgment}
One of the authors (H.H.) gratefully acknowledges discussions with Kanzo
Nakayama. J.H. acknowledges partial support by the Major State Basic Research
Development Program in China under grant 2014CB845405 and the National Natural
Science Foundation of China under grant 11275235.
\section{Introduction}
In quantum mechanics, there exist two parallel themes \cite{tanor}. One is about the static properties of a system, namely the eigenstates and eigenvalues of the Hamiltonian. The other is about the dynamics of the system, namely how the wave function or the expectation values of various physical quantities evolve. While for the former, there exist many theorems which give us a good picture of the wave functions in many cases; for the latter, the relevant mathematics is far less developed, and hence we often have little intuition. Actually, the dynamics of a system can be very surprising \cite{akulin}. This is the case even in the single-particle case, as demonstrated by the celebrated phenomena of dynamical localization \cite{dl} and coherent destruction of tunneling \cite{cdt}.
Here we report some unexpected dynamics in the setting of the one-dimensional tight binding model, which is arguably the simplest model in solid state physics. It is about a very simple scenario. Take a tight binding model with periodic boundary conditions and put a particle in some eigenstate, i.e., a Bloch state with some momentum. Then suddenly quench it by changing the potential of an arbitrary site. The rough picture is that the particle will be reflected by the newly introduced barrier and will then perform Rabi oscillation between the initial Bloch state and its time-reversed counterpart. However, exact numerical simulation reveals the unexpected fact that the curves of some physical quantities, such as the probability of finding the particle in the initial state, are structured. Specifically, they show \emph{cusps} periodically in time.
The cusps here are somehow similar to the cusps observed previously in Ref.~\cite{scienceopen}, which are also in the tight binding model setting (the cusps there were observed earlier in quantum optics settings \cite{parker, meystre} but were not fully accounted for). The only difference in the scenario is that there the defect potential is modulated sinusoidally instead of being held fixed. However, the crucial difference is that, there the cusps (called kinks) are a perturbative effect and survive only in the weak driving limit, while here they are a generic \emph{nonperturbative} effect and thus are very \emph{robust}.
In the following, we shall first describe the phenomenon by presenting the numerical observations. Then we will identify the essential features of the underlying Hamiltonian, from which we define an idealized model. The phenomenon is then accounted for by solving the dynamics of the ideal model analytically. Finally, we discuss its physical implications.
\section{Periodically appearing cusps}\label{phenomenon}
\begin{figure*}[tb]
\centering
\includegraphics[width= 0.99 \textwidth ]{pipf1.eps}
\caption{(Color online) Time evolution of the probability of finding the particle in the initial Bloch state $|k_i\rangle $ ($P_i$, solid lines) and in the momentum-reversed Bloch state $|-k_i\rangle $ ($P_r$, dotted lines). Note that $P_i+P_r \neq 1$ in general as other Bloch states are occupied too, but $P_i +P_r =1$ to a good accuracy when the cusps show up. The values of the parameters $(N, k_i, U)$ are $(401, 80, 1.5)$, $(301, 75, 2)$, and $ (201, 50, 12)$ in (a), (b), and (c), respectively.
\label{evidence}}
\end{figure*}
\begin{figure}[tb]
\centering
\includegraphics[width= 0.40 \textwidth ]{details.eps}
\caption{(Color online) Details of the cusps enclosed by the red box in Fig.~\ref{evidence}(b). The cusps are smoothed on a small time scale.
\label{details}}
\end{figure}
The Hamiltonian of an $N$-site tight binding model with the periodic boundary condition is ($\hbar = 1$ throughout this paper)
\begin{equation}\label{tbmh}
\hat{H}_0 = - \sum_{l=0}^{N-1} ( |l\rangle \langle l+1 | + |l+1 \rangle \langle l |) .
\end{equation}
Here $|l\rangle $ is the Wannier state on site $ l $. The eigenstates are the well-known Bloch states $|k \rangle $ defined as $\langle l |k\rangle = \exp(i q l )/\sqrt{N}$. Here $k$ is an integer defined up to an integral multiple of $ N $ and $q = 2\pi k / N$ is the associated wave vector.
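As a quick numerical sanity check (an illustration only), one can verify that
these Bloch states indeed diagonalize the Hamiltonian (\ref{tbmh}):
\begin{verbatim}
import numpy as np

N = 11
H0 = np.zeros((N, N))
for l in range(N):                       # hopping with periodic boundary
    H0[l, (l + 1) % N] = H0[(l + 1) % N, l] = -1.0

sites = np.arange(N)
for k in range(N):
    q = 2*np.pi*k/N
    bloch = np.exp(1j*q*sites)/np.sqrt(N)
    assert np.allclose(H0 @ bloch, -2*np.cos(q)*bloch)  # eps(q) = -2 cos q
\end{verbatim}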
Now consider such a scenario. Initially the particle is in some Bloch state $|k_i \rangle $. Then at time $t= 0 $, the potential on some site $j $ is suddenly changed to $U$. That is, we add the term $\hat{H}_1 = U |j \rangle \langle j | $ to the Hamiltonian (\ref{tbmh}). Because of the periodic boundary condition, we can assume $j = 0 $ without loss of generality. In the ensuing nontrivial dynamics, two quantities of particular interest are the survival probability and the reflection probability
\begin{eqnarray}
P_i (t) = |\langle +k_i | \Psi(t) \rangle |^2, \quad P_r (t)= |\langle -k_i | \Psi(t) \rangle |^2,
\end{eqnarray}
which are, respectively, the probability of finding the particle in the initial Bloch state and the momentum-reversed Bloch state. Both quantities can be easily calculated numerically as in Fig.~\ref{evidence}. There we show the numerical results of $P_i$ and $P_r$ as functions of time.
The most prominent feature of the curves is the cusps. In each panel of Fig.~\ref{evidence}, the cusps are equally spaced in time. They appear simultaneously in the curves of $P_i$ and $P_r$. Sometimes, the cusp in one of the two curves is not so clearly visible, but the corresponding one in the other curve is well shaped. Of the three panels, panel (b) is especially regular. Not only do the cusps appear periodically, but both curves are simply periodic. Moreover, when the cusps occur, $P_{i,r} = 0.5$ or $1$.
One might wonder whether the cusps are cusps in the mathematical sense, as the system is a finite one. Indeed, they are not. In Fig.~\ref{details}, the cusps enclosed by the red box in Fig.~\ref{evidence}(b) are magnified---they are quite smooth. Hence, the cusps appear to be cusps only on a relatively large time scale. However, as we shall see below, behind the rounded cusps here is an exactly solvable model consisting of infinitely many levels, where the cusps are cusps in the rigorous sense.
It is worth emphasizing the essential difference between the cusps here and those observed previously in Ref.~\cite{scienceopen}. There, it is a first-order perturbative effect. The cusps exist only in the weak driving limit, or specifically, only when the survival probability $P_i$ is close to unity, and between the cusps the survival probability is a linear function of time. In contrast, here the cusps apparently remain very sharp even when $P_i$ repeatedly drops to zero. Moreover, the functional form of the curves between the cusps is neither linear nor exponential, but as we shall see below, \emph{quadratic}.
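The numerical experiment behind Fig.~\ref{evidence} is straightforward to
reproduce; a minimal Python sketch (ours, with arbitrarily chosen step size
and evolution length) reads:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N, ki, U = 201, 50, 12.0                  # parameters of Fig. (evidence)(c)

H = np.zeros((N, N))
for l in range(N):                        # H_0 of Eq. (tbmh) ...
    H[l, (l + 1) % N] = H[(l + 1) % N, l] = -1.0
H[0, 0] = U                               # ... plus the quench H_1 = U|0><0|

sites = np.arange(N)
bloch = lambda k: np.exp(2j*np.pi*k*sites/N)/np.sqrt(N)
psi_i, psi_r = bloch(ki), bloch(-ki)      # |k_i> and |-k_i>

dt, nsteps = 0.5, 1200
Udt = expm(-1j*H*dt)                      # single-step propagator
psi, Pi, Pr = psi_i.copy(), [], []
for _ in range(nsteps):
    Pi.append(abs(psi_i.conj() @ psi)**2)   # survival probability P_i
    Pr.append(abs(psi_r.conj() @ psi)**2)   # reflection probability P_r
    psi = Udt @ psi
# plotting Pi and Pr versus dt*np.arange(nsteps) reproduces the
# periodic cusp pattern of Fig. (evidence)(c)
\end{verbatim}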
\section{Explanation by a truncated and linearized model}\label{solution}
To account for the cusps in Fig.~\ref{evidence}, we need to examine closely the structures of the unperturbed Hamiltonian $\hat{H}_0$ and the perturbation $\hat{H}_1$. Figure~\ref{plot} shows the dispersion relation, $\varepsilon (q) = -2 \cos q$, of $\hat{H}_0$. The perturbation $\hat{H}_1$ couples two arbitrary Bloch states with an equal amplitude
\begin{equation}\label{couple}
g = \langle k_1 | \hat{H}_1 | k_2 \rangle = U/N ,
\end{equation}
regardless of the values of $k_1$ and $ k_2$.
A crucial fact revealed by numerics is that in the evolution of the wave function, essentially only those few Bloch states with wave vectors $q \simeq \pm q_i$ contribute significantly. Now since locally the dispersion curve $\varepsilon (q)$ can be approximated by a straight line (it is especially the case at $q = \pm \pi/2 $ where $\varepsilon''(q) = 0 $), we are led to truncate and linearize the model.
\begin{figure}[tb]
\centering
\includegraphics[width= 0.4 \textwidth ]{plot.eps}
\caption{(Color online) Dispersion relation $\varepsilon( q)= -2 \cos q $ of the tight binding model (\ref{tbmh}). The parameter $q_i = 2\pi k_i/ N$ denotes the wave vector of the initial Bloch state. The dotted straight lines are local linear approximations to the dispersion curve. Only the Bloch states inside the circles participate significantly in the dynamics and thus are retained in the truncated Hamiltonian.
\label{plot}}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width= 0.3 \textwidth ]{trajectory.eps}
\caption{(Color online) A generic (red solid lines) trajectory of $\psi_0 $ on the complex plane according to Eq.~(\ref{final1}a). It is analogous to the bouncing of a classical ball inside a circular billiard. The green dashed closed trajectory corresponds to the case of $\theta = \pi/2 $.
\label{trajectory}}
\end{figure}
Of all the Bloch states, we retain only two groups centered at $|\pm k_i\rangle $. Each group consists of $2M+1$ (the conditions on $M$ will be discussed later) states with wave numbers symmetrically distributed around $k_i $ or $-k_i $. Let us now refer to them as $\{| R_n \rangle \}$ and $\{ | L_n \rangle \}$, where $R$ and $L$ mean right-going and left-going, respectively, and $n$ ranges from $-M$ to $M$. By choice, $|R_n\rangle \equiv |k_i +n \rangle$ and $|L_n \rangle \equiv | -k_i-n \rangle$. After linearizing the dispersion curve at $\pm q_i $, the energy of the degenerate states $|R_n\rangle $ and $|L_n\rangle $ is $ n \Delta$, with
\begin{eqnarray}\label{delta}
\Delta = ( 4 \pi \sin q_i ) /N .
\end{eqnarray}
Here we have chosen the energy of $|\pm k_i \rangle $ as the zero of energy. Again, the perturbation $\hat{H}_1$ couples two arbitrary states in the retained set of states with equal amplitude $g= U/N$.
This truncated and linearized model can be partially diagonalized by introducing a new basis as
$ |A_n^{\pm} \rangle = (|R_n \rangle \pm |L_n\rangle )/\sqrt{2} $.
Referring to the original Hamiltonian $\hat{H}_0 $, they are even- and odd-parity states with respect to the defected site, respectively. It is easy to see that $|A_n^-\rangle $, which are eigenstates of the original Hamiltonian $\hat{H}_0$ with eigenvalues $n \Delta$, are also eigenstates of the total Hamiltonian $\hat{H } = \hat{H}_0 + \hat{H}_1 $ with the same eigenvalues. In the yet to be diagonalized subspace of $\{ |A_n^+ \rangle \}$, the matrix elements of $\hat{H}_0$ and $\hat{H}_1 $ are
\begin{eqnarray}\label{idealmodel}
\quad \langle A_{n_1}^+ | \hat{H}_0 | A_{n_2}^+ \rangle= n_1 \Delta \delta_{n_1, n_2 } , \;
\langle A_{n_1}^+ | \hat{H}_1 | A_{n_2}^+ \rangle = 2 g ,
\end{eqnarray}
for arbitrary $n_{1,2}$.
Now the scenario is like this. Initially the system is in the state $ |\Psi(0)\rangle =|k_i \rangle = | R_0\rangle $. The problem is, how does the probability $P_i $ of finding the system in the initial level $| R_0\rangle $ evolve in time? We have the decomposition
$ |\Psi (0 ) \rangle = (|A_0^- \rangle + |A_0^+ \rangle )/\sqrt{2} $.
Since $|A_0^-\rangle $ is an eigenstate of $\hat{H}$, we see that at an arbitrary time later, the wave function has the form
\begin{equation}\label{form}
|\Psi(t)\rangle = \frac{1}{\sqrt{2}} |A_0^- \rangle + \frac{1}{\sqrt{2}} \sum_{n=-M}^M \psi_n (t ) |A_n^+ \rangle.
\end{equation}
By the initial condition, $\psi_n (0) = \delta_{n,0 }$. The probabilities $P_i $ and $P_r $, which are our primary concern, can be expressed as
\begin{eqnarray}\label{pipr2}
P_i = \frac{1}{4}\left | 1+ \psi_0 \right |^2 , \quad P_r = \frac{1}{4}\left | 1- \psi_0 \right |^2.
\end{eqnarray}
Hence, the aim is to calculate $\psi_0(t)$.
\begin{figure*}[tb]
\centering
\includegraphics[width= 0.425 \textwidth ]{traj_S.eps}
\includegraphics[width= 0.425 \textwidth ]{traj_psi0.eps}
\caption{(Color online) Time evolution (blue solid lines) of (a) the auxiliary quantity $S$ and (b) $\psi_0$ for $M = 10$ and $g/ \Delta = 0.125 $. In each panel, the red line indicates the analytical predictions of (\ref{final1}a) or (\ref{final1}b). Compare (b) with Fig.~\ref{trajectory}.
\label{trajS}}
\end{figure*}
Projecting the time-dependent Schr\"odinger equation $i \frac{\partial }{\partial t} |\Psi(t)\rangle = \hat{H} |\Psi(t)\rangle $ onto $|A_n^+\rangle $, we get $ i \frac{\partial }{\partial t } \psi_n = n \Delta \psi_n + 2g S $,
where $S$ is a collective, auxiliary quantity defined as
\begin{equation}\label{S}
S(t) \equiv \sum_{m=-M}^{M} \psi_m (t) .
\end{equation}
The point is that it is independent of $n $. Introducing the so-called Heisenberg time $T = 2\pi /\Delta $, we see that for fixed $M$, with respect to the reduced time $t/T$, the dynamics of the truncated model is determined by the ratio $g/\Delta $ alone.
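For finite $M$, these coupled equations are also easy to integrate
numerically; a sketch of this kind (ours) generates curves like those shown
below in Figs.~\ref{trajS} and \ref{check}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

M, Delta, g = 10, 1.0, 0.125              # g/Delta = 0.125 as in Fig. (trajS)
n = np.arange(-M, M + 1)

def rhs(t, psi):
    S = psi.sum()                          # auxiliary quantity, Eq. (S)
    return -1j*(n*Delta*psi + 2*g*S)

psi0 = (n == 0).astype(complex)            # psi_n(0) = delta_{n,0}
T = 2*np.pi/Delta                          # Heisenberg time
sol = solve_ivp(rhs, (0.0, 4*T), psi0, rtol=1e-8, atol=1e-10,
                t_eval=np.linspace(0.0, 4*T, 2000))
psi_0 = sol.y[M]                           # the n = 0 component
S = sol.y.sum(axis=0)
# the complex-plane trajectories of psi_0 and S reproduce Fig. (trajS)
\end{verbatim}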
The quantity $\psi_n $ can be solved formally as
\begin{equation}\label{solu2}
\psi_n (t) = e^{-i n \Delta t } \delta_{n,0} - i 2 g \int_0^t d \tau e^{-i n \Delta (t-\tau)} S(\tau) .
\end{equation}
Plugging this into (\ref{S}), we get an integral equation of $S$,
\begin{equation}\label{S2}
S (t) = 1 - i2 g \int_0^t d \tau \left( \sum_{n=-M}^{M } e^{-i n \Delta (t-\tau)} \right) S(\tau) .
\end{equation}
Here, to proceed further analytically, we use a fact verified by numerics (see Figs.~\ref{trajS} and \ref{check}). Numerically, it is observed that as $M \rightarrow \infty $, the trajectories of $\psi_n $ (in particular, $\psi_0$) converge quickly. Therefore, for sufficiently large $M $, it is legitimate to replace the finite summation in the bracket by an infinite one. That is,
\begin{equation}\label{S3}
S (t) \simeq 1 - i 2g \int_0^t d \tau \left( \sum_{n=-\infty}^{+\infty } e^{-i n\Delta (t-\tau)} \right) S(\tau) .
\end{equation}
Using the Poisson summation formula \cite{grafakos}, the kernel $ \sum_{n=-\infty}^{+\infty } e^{-i n \Delta (t-\tau)}$ can be rewritten as
$ T \sum_{n = -\infty }^{+\infty} \delta (t - \tau - n T ) $. Substituting this new form of the kernel into (\ref{S3}), we get
$ S (t) = 1 -i g T S(t) $,
for $0< t < T $, by noting that $\int_0^\infty dt \delta(t) = 1/2 $. We then solve for $0< t < T $,
\begin{equation}\label{S5}
S(t) = \frac{1}{1 + i g T } ,
\end{equation}
which is a constant. Substituting this into (\ref{solu2}), we get
\begin{equation}\label{firstperiod1}
\psi_{0}(t) = 1 - \frac{i 2g t}{1 + i g T } = \frac{1- i2g( t- T/2) }{1+ i g T } ,
\end{equation}
which is linear in $t$. We note that as $t\rightarrow T^- $,
\begin{equation}\label{lim1}
{\psi}_{0}(t) \rightarrow \frac{1- i gT }{1+ i g T } = e^{-i\theta }
\end{equation}
for some $\theta \in (-\pi, \pi) $. That is, after one period, $\psi_0 $ returns to its initial value, except for an accumulated phase. This complete revival means that $\psi_n (t+ T) = \psi_n (t) e^{-i\theta}$ for all $n$. We thus have for $ t= rT +s $, with $r$ being a nonnegative integer and $s \in [0, T) $,
\begin{subequations}\label{final1}
\begin{eqnarray}
{\psi}_{0}(t) &= & \frac{1- i2g( s- T/2 ) }{1+ i g T } e^{-ir\theta }, \\
S(t) &= &\frac{1}{1+ i g T} e^{-i r \theta } .
\end{eqnarray}
\end{subequations}
In Fig.~\ref{trajectory}, the trajectory of $\psi_0$ on the complex plane is illustrated. It bounces inside the unit circle elastically like a ball. Hence, its kinematics has the regularity of the irrational rotation \cite{irrational}. We see that $|\psi_0|^2$ is a periodic function of time $t$. At $t=r T$, it returns to unity and in-between it is a quadratic function of $t$. By (\ref{pipr2}), $P_i+ P_r = (1+|\psi_0|^2)/2$. Hence, generally $P_i+P_r < 1 $ as $|\psi_0 |^2<1$, but at $t =r T $, when $|\psi_0|^2 =1 $, we have $P_i+P_r =1$, which is satisfied to a good accuracy in Fig.~\ref{evidence}.
\begin{figure}[tb]
\centering
\includegraphics[width= 0.4 \textwidth ]{toy.eps}
\caption{(Color online) Time evolution of $|\psi_0|^2$ for finite values of $M$, with $g/ \Delta =0.5$. In each panel, the blue solid line indicates the numerical exact value while the green dashed line the analytical formula (\ref{final1}a), which corresponds to the $M\rightarrow \infty $ limit. In each period, the latter is a parabola.
\label{check}}
\end{figure}
\begin{figure*}[tb]
\centering
\includegraphics[width= 0.99 \textwidth ]{pipf2.eps}
\caption{(Color online) Comparison between the numerical exact values of $P_{i,r}$ and the analytical predictions. The panels correspond to those in Fig.~\ref{evidence} one to one and in order. The analytical curves are solid (respectively, dotted) if the corresponding numerical curves are dotted (respectively, solid). The dotted lines are hardly visible, which proves that the numerical and analytical results agree with each other very well.
\label{compare}}
\end{figure*}
A peculiar feature of (\ref{S5}) and (\ref{final1}b) is that $S$ is not continuous at $t= rT$. For example, by the definition (\ref{S}), $S(t=0 ) = \psi_0(t=0) = 1$, yet by (\ref{S5}), $S(t=0^+ ) \neq 1$. This should be an artifact of our treatment involving the $M\rightarrow \infty $ limit. To see how this difficulty is resolved for finite $M $, we demonstrate the typical time evolution of $S $ with $ M =10$ in Fig.~\ref{trajS}(a). We see that in the interval $rT < t <(r+1) T$, $S$ oscillates rapidly around the constant value predicted by (\ref{final1}b), and at about $t= rT $, the orbit of $S $ quickly transits from around one constant value to around the next. Along with the time evolution of $S$ in Fig.~\ref{trajS}(a), we show in Fig.~\ref{trajS}(b) the time evolution of $\psi_0 $. We see that the numerically exact value of $\psi_0 $ follows the analytical prediction of (\ref{final1}a) closely, with a much smaller oscillation amplitude than $S $. This is reasonable in view of (\ref{solu2}), where $S$ appears in the integral and thus its oscillation is averaged out.
Further evidence demonstrating that the simple formula (\ref{final1}a) is a good approximation for finite $M$ (after all, the original tight-binding model has only a finite number of levels) is presented in Fig.~\ref{check}. There we see that even for $M=5$, the formula (\ref{final1}a) captures the behavior of $|\psi_0 |^2$ on the scale of $T$ very well, and as $M$ increases, the curve converges to the predicted one very quickly.
Having verified that (\ref{final1}a) is reliable even for a finite number of levels, we now apply the theory to the original problem.
There we have $\Delta = 4\pi \sin q_i /N$ and $g = U/N$. Using (\ref{pipr2}) and (\ref{final1}a), we can calculate $P_{i,r}$ in Fig.~\ref{evidence} analytically. The results are presented in Fig.~\ref{compare} together with the numerical data of Fig.~\ref{evidence}. We see that the analytical approximation and the numerically exact results agree very well.
We can also understand the regularity of Fig.~\ref{evidence}(b) now. For $U=2$ and $q_i = \pi/2$, $gT =1$ (regardless of the value of $N$) and hence $\theta = \pi/2$ and the trajectory of $\psi_0$ is the closed one in Fig.~\ref{trajectory}. By (\ref{pipr2}), it results in the regular behavior of $P_{i,r}$ in Fig.~\ref{evidence}(b).
Another regularity in all panels of Fig.~\ref{evidence} and Fig.~\ref{compare} is that, by Fig.~\ref{trajectory}, the cusps of $P_{i,r}$ are located on the curves of $(1\pm \cos \omega t)/2$ respectively, with $\omega = \theta /T $. This fact is in accord with the rough picture that in the long term, the particle performs Rabi oscillation between the two Bloch states $| \pm k_i \rangle$. But since $\omega \neq 2 g $, we see that, because of coupling to other Bloch states, the oscillation frequency is not simply determined by the direct coupling $g$ between the two. From the point of view of quantum chaos, the system in question has a very regular dynamics.
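For reference, the renormalized frequency can be written in closed form. Taking the principal branch in (\ref{lim1}),
\begin{equation}
e^{-i\theta} = \frac{1- i gT }{1+ i g T} \quad \Longrightarrow \quad \theta = 2\arctan (gT), \qquad \omega = \frac{\theta}{T} = \frac{2\arctan (gT)}{T},
\end{equation}
which reduces to the bare value $2g$ only in the weak-coupling limit $gT \ll 1$.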
\begin{figure}[tb]
\centering
\includegraphics[width= 0.45 \textwidth ]{kinks.eps}
\caption{(Color online) (a) Population on the Bloch state $|k_i + 1\rangle $ and (b) total population on the right-moving Bloch states. In each panel there are actually two curves, the blue solid one for the numerically exact result and the green dotted one for the analytical approximation [Eqs.~(\ref{pn}) and (\ref{Pr3})], but they coincide well. The scenario is as in Fig.~\ref{evidence}(c). In (b), the tips of the cusps are located on the red dashed curve of $(1+ \cos \omega t)/ 2$ with $\omega = \theta /T $.
\label{kinks}}
\end{figure}
Finally, we are prepared to discuss the conditions on $M $. On the one hand, $M $ should be large enough that the $M\rightarrow \infty $ limit of the dynamics of $\psi_0 $ has almost been achieved. On the other hand, $M $ should be small enough that the linearization approximation is valid. For fixed values of $U$ and $q_i $, as the ratio $g/\Delta = U /4\pi \sin q_i $ is independent of $N$, the lower bound set by the first condition is $N$-independent. By contrast, the upper bound set by the second condition is proportional to $N $. Therefore, for arbitrary $U $ and $q_i $, there will be room for $M $ if $N $ is sufficiently large. When this is the case, the ideal model with $M=\infty $ is a good approximation of the realistic model, as far as the quantities $P_{i,r}$ are concerned.
\section{Populations on other Bloch states}\label{otherBS}
So far, we have focused on $\psi_0$, the quantity relevant for calculating the populations on the Bloch states $|\pm k_i\rangle $. But the exact solution above allows us also to calculate all other $\psi_n$, which are related to the populations on other Bloch states. By (\ref{solu2}), similar to (\ref{final1}a), we have ($t = r T + s$)
\begin{eqnarray}
\psi_n (t) = \frac{2g }{n \Delta (1+ i g T)}(e^{-i n\Delta s} -1) e^{-ir \theta },
\end{eqnarray}
for $n \neq 0 $. By (\ref{form}), the analytic prediction of the population on the Bloch states $|\pm (k_i + n )\rangle $ is
\begin{eqnarray}\label{pn}
P_n (t) = \frac{1}{4}|\psi_n (t )|^2 = \frac{4g^2 \sin^2 (n\Delta t/2)}{(1+g^2 T^2) n^2 \Delta^2}.
\end{eqnarray}
This is expected to be a good approximation for small $n$. Indeed, as shown in Fig.~\ref{kinks}(a), Eq.~(\ref{pn}) agrees very well with the numerically exact result for $P_1$ in the scenario of Fig.~\ref{evidence}(c). The crucial feature of (\ref{pn}) is that the amplitude of $P_n $ shrinks as $1/n^2 $. This is in line with the numerical finding that only those Bloch states in the vicinity of $|\pm k_i \rangle $ are significantly populated. It also explains why the $M = \infty $ limit is relevant for the original model, which has only a finite number of levels and a globally nonlinear spectrum: the large-$n$ states are only negligibly populated, both in the ideal model and in the realistic model.
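The modulus step in (\ref{pn}) uses only the elementary identity
\begin{equation}
\left| e^{-i n \Delta s} - 1 \right|^2 = 2 - 2\cos (n\Delta s) = 4 \sin^2 (n\Delta s /2),
\end{equation}
together with the fact that $\sin^2(n\Delta s/2)=\sin^2(n\Delta t/2)$, since $n\Delta T = 2\pi n$; the overall phase $e^{-ir\theta}$ drops out.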
Another quantity of interest is the total population on those Bloch states moving to the right, i.e.,
\begin{eqnarray}\label{Pr}
P_R (t ) = \sum_{k=1}^{[N/2]} |\langle k | \Psi(t) \rangle |^2.
\end{eqnarray}
Here the lower and upper bounds of the summation do not actually need to reach the band edges, as the contribution comes primarily from the vicinity of $k_i $. By the correspondence $|k_i + n\rangle \leftrightarrow |R_n \rangle $ for small $n $, $P_R $ can be approximated by
\begin{eqnarray}\label{Pr2}
\sum_{n\in \mathbb{Z}} |\langle R_n | \Psi (t)\rangle |^2 =\frac{1}{4} \bigg(|1+ \psi_0|^2 + \sum_{n\neq 0 } |\psi_n |^2 \bigg) .
\end{eqnarray}
Using (\ref{final1}a), (\ref{pn}), and the equality $ \sum_{n \in \mathbb{Z}} \sin^2 n \alpha/n^2 \alpha^2 = \pi/\alpha $, for $0<\alpha < \pi$ \cite{scienceopen,fermi}, it is straightforward to reduce (\ref{Pr2}) to
\begin{eqnarray}\label{Pr3}
\cos^2 \frac{r\theta }{2 }- \sin \left[ \left(r+ \frac{1}{2}\right)\theta \right] \frac{g s }{\sqrt{1+ g^2 T^2}}.
\end{eqnarray}
It is a \emph{piecewise linear} function of $t$. As shown in Fig.~\ref{kinks}(b), it is a good approximation of the numerically exact result. The regularity here is that the tips of the cusps are located on the curve of $(1+ \cos \omega t)/2$ with $\omega = \theta / T$.
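As a cross-check of this reduction, the truncated sum (\ref{Pr2}) can be evaluated directly with the analytic $\psi_n$ and compared with (\ref{Pr3}). A minimal sketch (parameter values are arbitrary illustrative choices; $\theta$ is taken from (\ref{lim1})):
\begin{verbatim}
import numpy as np

# Compare the truncated sum (Pr2), built from the analytic psi_0 and
# psi_n, with the closed piecewise-linear form (Pr3).
g, Delta = 0.3, 1.0
T = 2*np.pi/Delta
theta = 2*np.arctan(g*T)     # from e^{-i theta} = (1 - igT)/(1 + igT)

def PR_sum(r, s, nmax=200000):
    psi0 = (1 - 2j*g*(s - T/2))/(1 + 1j*g*T)*np.exp(-1j*r*theta)
    n = np.arange(1, nmax + 1)
    # |psi_n|^2 is even in n, so the n != 0 terms are summed twice
    pn2 = (2*g/(n*Delta))**2*np.abs(np.exp(-1j*n*Delta*s) - 1)**2
    return 0.25*(np.abs(1 + psi0)**2 + 2*pn2.sum()/(1 + (g*T)**2))

def PR_closed(r, s):
    return (np.cos(r*theta/2)**2
            - np.sin((r + 0.5)*theta)*g*s/np.sqrt(1 + (g*T)**2))

for r in (0, 1, 3):
    for s in (0.2*T, 0.5*T, 0.9*T):
        # should agree up to the small truncation tail of the n-sum
        print(r, PR_sum(r, s), PR_closed(r, s))
\end{verbatim}
The residual difference is the $O(1/n_{\max})$ tail of the truncated sum.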
\section{Conclusions and discussions}\label{conclusion}
In conclusion, we have found the reflection dynamics of a Bloch state against a site defect to be nonsmooth, in that many quantities show cusps periodically in time. The point is that there exists an ideal model with an infinite number of levels, whose dynamics can be solved exactly and shows cusps in the mathematical sense. The model is defined by (\ref{idealmodel}), with the two characteristics of equally spaced levels and equal coupling between two arbitrary levels. The realistic tight binding model with one defected site realizes the ideal model approximately, and its dynamics is guided by that of the latter.
Admittedly, our explanation of the cusps is primarily mathematical. Physically, we note that the period of the cusps, the Heisenberg time $T = 2\pi/ \Delta $, is exactly the time it takes a wave packet with wave vector $ \pm q_i $ to traverse the whole lattice once \cite{scienceopen}. That is, the sudden jump in the slope of $P_{i,r} $ occurs when the scattered wave packet comes back to the defect site. Therefore, the phenomenon should be an interference effect.
The cusps are reminiscent of the quantum dynamical phase transition discovered by Heyl \emph{et al}.~\cite{heyl}, but in a single-particle model. They differ on the one hand from those reported in Refs.~\cite{scienceopen, parker, meystre} in that they are deeply \emph{nonperturbative}, and on the other hand from those in Refs.~\cite{stey,ligare,zhou} in that the functions in between the cusps are not exponential but \emph{quadratic} or \emph{linear}. The different behaviors stem from the different Hamiltonian structures. In Refs.~\cite{stey,ligare,zhou}, the model has the level-band structure, namely, an extra level couples to a band, and there is no coupling between the levels inside the band. In contrast, here the ideal model consists of only a band, and any two levels in the band are coupled.
Although we do not believe the phenomenon reported here is universal, we do think it provides a good example demonstrating that the dynamics of a model, even the simplest one, can be very surprising. The point is that a thorough understanding of the static properties of a model (as we have for the model in question) does not imply a thorough understanding of its dynamical properties; in many cases there is a large gap between the two. In view of the intensive study of the nonequilibrium dynamics of many-body systems nowadays \cite{nonequilibrium}, it is worth emphasizing that the dynamics of even single-body or few-body systems is already complex enough and far from being fully understood.
\section{Acknowledgments}
This work is supported by the Fujian Provincial Science Foundation under grant number 2016J05004.
\section{Introduction}
Networks and solutions of semiflexible polymers arise in a variety of contexts and have a wide range of materials science and biological applications. In the biological realm, active cytoskeletal networks play important roles in cell function, such as cellular transport and organization\cite{2006A,0Cell}. Filamentous actin, microtubules, and other protein filaments make up the cytoskeletal network, which, activated by the motions of out-of-equilibrium molecular motors, is responsible for many of the mechanical functions of cells~\cite{2014Modeling}. The unusual material properties of such active biopolymer networks have stimulated the search for new synthetic active materials.
The design of active functional materials and systems capable of performing specific tasks in response to internal and external signals is an important objective of research in this area~\cite{2018Intelligent}. Synthetic polymer gels have been used to construct such active systems~\cite{2014Evol,2017Oscillatory,2018Pulsatile}, and smart polymeric materials that exhibit biomimetic behavior have been made. The chemomechanical coupling between nonlinear oscillating chemical reactions and the mechanical properties of gels has been exploited to construct self-oscillating gels that undergo spontaneous, homogeneous, periodic swelling and de-swelling in a closed solution under constant conditions, without the need of external stimuli~\cite{1998Self,2000In}. The mechanism that gives rise to the chemomechanical self-oscillation in hydrogels activated by the Belousov-Zhabotinsky reaction involves changes in the gel structure induced by the periodic redox changes in the oxidized and reduced states of the bound catalyst in this reaction~\cite{2003Mechanical,2006Modeling}. These gels have been proposed as analogs of nerve pulses, the rhythmic beating of cardiac cells, and deformable muscles in animals~\cite{Lin2016Retrograde}.
Active biological filament networks derive their activity from molecular motors that attach and detach from the biofilaments. Likewise, synthetic active motors can attach to filaments in a network and such attachment not only tames the detrimental effects of orientational Brownian motion but also changes the properties of the network~\cite{2020Active}. By contrast, here we consider active filament systems where the constituent filaments themselves possess active properties because they contain embedded synthetic nanomotors. Filaments with active elements have been made in the laboratory by joining chemically synthesized small colloidal or Janus particles\cite{ramirez2013polloidal,biswas2017linking,nishiguchi2018flagellar}. Theoretical investigations of freely-moving active filaments\cite{ghosh2014dynamics,isele2015self,winkler2017active} and clamped beating filaments with spontaneous oscillations\cite{laskar2013hydrodynamic,sarkar2017spontaneous} have been carried out. The coupling of thermal and active noise, hydrodynamic interactions and polymer conformational changes suggests that interesting structural and dynamical features may arise in networks of such active filaments.
The active filaments we consider are constructed by inserting chemically-powered nanomotors that move by a diffusiophoretic mechanism into a semiflexible polymer chain. The monomers that comprise the chain and the geometrical forms of the nanomotors can be of various types. In the diffusiophoretic mechanism, catalytic chemical reactions on the motor that take place under nonequilibrium conditions lead to diffusiophoretic forces that propel the motors in solution. Nonequilibrium constraints are achieved by controlling the supply of reactive species to the system. The aspect of this mechanism that we exploit is that, simply by changing the nonequilibrium conditions, the signs of the diffusiophoretic forces can be made to change so that the direction of motor motion is reversed. The changing diffusiophoretic forces act on the filament segments, leading to chemomechanical coupling that can alter the conformational state of the filament. This effect is illustrated in Fig.~\ref{fig:fm-bm_6seg}, which shows how the polymeric filaments can be stretched or contracted by active nanomotors. Thus, by changing the chemical concentrations one can control the conformational structure of the polymeric filament.
\begin{figure}[htbp]
\centering
\vspace{-0.3cm}
\resizebox{0.8\columnwidth}{!}{%
\includegraphics{./plot/fm_1f0.pdf} }
\resizebox{0.9\columnwidth}{!}{%
\includegraphics{./plot/bm_1f0.pdf}}
\vspace{-0.5cm}
\caption{\label{fig:fm-bm_6seg} (color online) Instantaneous configurations of a single filament with six chemically-powered dimer motor segments. The left three and the right three motors are oriented in opposite directions. Each motor segment consists of catalytic (red) and noncatalytic (blue) beads. (a) The top filament has forward-moving motors with $(k_{b+}=10^{-2},\; k_{b-}=10^{-3})$, (b) the bottom filament has backward-moving motors with $(k_{b+}=10^{-3},\; k_{b-}=10^{-2})$. There are $N_{fn}=44$ beads in the filament. }
\end{figure}
Section~\ref{sec:single filament} describes in detail how the active filaments are constructed, the diffusiophoretic mechanism, and how the conformational dynamics of a single active filament responds to constraints that change the signs of the diffusiophoretic forces the embedded motors exert on the filament. Section~\ref{sec:network} considers systems of many active filaments with embedded motors, and it is shown that the conformational system states are qualitatively different when the embedded motors tend to elongate or contract the constituent filaments. The response of many-filament systems to periodic variations in the concentration constraints is the topic of Sec.~\ref{sec:tuning} where oscillating gel-like dynamics is observed. The conclusions are given in Sec.~\ref{sec:conclusion}, followed by Sec.~\ref{sec:app} where additional details of the model construction and simulation algorithm are given.
\section{Conformational dynamics of active polymeric filaments}\label{sec:single filament}
The polymeric filaments are constructed from monomers (termed beads) that are connected by stiff harmonic springs, together with three-body potentials whose bending energy, characterized by $\kappa_b$, is used to set the stiffness of the filaments. To activate the filaments we embed sphere-dimer nanomotor segments made by linking catalytic $C$ and noncatalytic $N$ nano-colloidal spheres.~\cite{Ruckner2007} The nanomotor segments could be inserted at random positions and orientations in the chain or in specified arrangements. We consider the latter situation in this work. The filament beads with embedded motor segments interact with the fluid species through repulsive Lennard-Jones potentials. Interactions among fluid particles are described by multiparticle collision dynamics~\cite{Malevanets_Kapral_99,kapral:08,gompper:2009}, and the time evolution of the entire system is carried out using a method that combines molecular dynamics with multiparticle collision dynamics~\cite{Malevanets_Kapral_00}. Full details of the system parameters and the algorithm are given in Appendix~\ref{sec:app}.
The diffusiophoretic mechanism~\cite{Anderson_89,Golestanian2007,debuyl:13,kapral2013perspective} that underlies nanomotor propulsion involves the reversible interconversion of reactant $A$ and product $B$ species, $A+C \underset{k_-}{\stackrel{k_+}{\rightleftharpoons}} B+C$, on the catalytic motor bead. This reaction leads to an asymmetric distribution of reactants and products in the motor vicinity. The locally-produced concentration gradient, in conjunction with the different interaction potentials of the $A$ and $B$ species with the motor beads, gives rise to a body force on the motor. Due to momentum conservation a fluid flow is generated in the vicinity of the motor that causes it to move with a velocity, $\bm{V}_d =\bm{F}_d/\zeta$, directed along its bond unit vector $\hat{\bf u}$ pointing from the $N$ to $C$ beads. The diffusiophoretic force is $\bm{F}_d$ and $\zeta$ is the friction that the motor experiences~\cite{GK18,GK19}.
Active directed motion is only possible if the system is out of equilibrium, as seen in the form of the diffusiophoretic force, $\bm{F}_d= f_d \hat{\bf u} (k_+\bar{c}_A -k_- \bar{c}_B)$, where $f_d$ depends on geometrical factors and motor-fluid interaction potentials, and $\bar{c}_A$ and $\bar{c}_B$ are the fixed concentrations of the $A$ and $B$ species far from the motor~\cite{GK19}. At equilibrium $c^{\rm eq}_B/c^{\rm eq}_A=k_+/k_-$ and the diffusiophoretic force vanishes, while under nonequilibrium conditions where detailed balance is broken its sign depends on the relative values of $\bar{c}_A$ and $\bar{c}_B$. These concentrations can be controlled either by coupling to reservoirs of fixed concentrations of $A$ and $B$ species or, as in the present work, by nonequilibrium fluid phase reactions, $B \underset{k_{b-}}{\stackrel{k_{b+}}{\rightleftharpoons}} A $, that supply reactant and remove product. For fluid phase reactions $\bar{c}_A$ and $\bar{c}_B$ are the steady state values whose ratio is given by $\bar{c}_B/\bar{c}_A=k_{b-}/k_{b+}$, where the effective rate coefficients $k_{b\pm}$ depend on constant ``pool" chemical species and can serve as control parameters~\cite{HSGK18} (see also Sec.~\ref{sec:app}).
To see how changing nonequilibrium concentration constraints can lead to chemomechanical coupling, we consider filaments with six motor segments, where three motors have their catalytic heads in one direction along the filament while the remaining three motors have their catalytic heads in the opposite direction, as shown in Fig.~\ref{fig:fm-bm_6seg}. Provided $k_+/k_- \ne k_{b-}/k_{b+}$, detailed balance will be broken and active motion will be possible. We adopt the convention where motors that move in a direction with their catalytic sphere at the motor head ($F_d =\hat{\bm{u}}\cdot\bm{F}_d> 0$) are termed forward-moving, while those moving with their noncatalytic sphere at the motor head ($F_d < 0$) are backward-moving. Taking $k_+=k_-$, if $k_{b+} > k_{b-}$, then $F_d > 0$ and the motor will move in the forward direction, while it moves backward for $k_{b+} < k_{b-}$. Since $F_d > 0$ for forward-moving motors, the diffusiophoretic forces they exert will tend to compress the filament, while backward-moving motors with $F_d < 0$ will tend to stretch the filament (Fig.~\ref{fig:fm-bm_6seg}).
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=5.5cm,width=8.0cm]{./plot/1fil_3cases.pdf}
\caption{\label{fig:1fil_3figs}
(color online) Probability density $P(L_{ee})$ of the end-to-end distance. The plots are for a single filament with forward-moving (red), backward-moving (blue) and inactive (green) motors. The insets show instantaneous conformations of a single filament with six chemically-powered dimer motor segments after a long transient time, $t\sim10000$. From top to bottom, respectively, the filaments have backward-moving, inactive and forward-moving motors.}
\end{center}
\end{figure}
Figure~\ref{fig:1fil_3figs} shows the probability densities $P(L_{ee})$ of the filament end-to-end length $L_{ee}$ for both forward-moving ($(k_{b+}= 10^{-2},\; k_{b-}=10^{-3})$) and backward-moving ($(k_{b+}= 10^{-3},\; k_{b-}=10^{-2})$) embedded motors. This figure also compares active filaments with inactive filaments where $(k_{b+}= k_{b-}=5.5 \times 10^{-3})$ and the system satisfies detailed balance. One can see the distinct, strongly localized probability distributions for the three different constraint conditions.
The embedded nanomotors can also change the bending rigidity of the filaments. This can be seen in the structure of the tangent-tangent correlation function, $g(s)=\left \langle \mathbf{t}(\tau+s)\cdot \mathbf{t}(\tau)\right \rangle$, where $\mathbf{t}(s)$ is the unit tangent vector at arc length $s$ of the filament's contour. This function, which depends on the bending stiffness of the filament, is shown in Fig.~\ref{fig:ttcompare_1f_40f} for the different nonequilibrium constraints. The curve for forward-moving motors can be fit by $g(s)=\exp[-s/l_{p}]$ with $l_{p}=18.0$. The persistence length $l_{p}$ of a single filament is defined in terms of the competition of bending ($\kappa$) and thermal energies, $l_{p}=\kappa/k_{B}T$, which gives $\kappa=3.6$. This value is much smaller than $\kappa_{b}=5.0$ in the bending potential, indicating that filaments with embedded forward-moving motors are more pliable than filaments without such motors. The simple exponential form is not valid for stiffer filaments with backward-moving and inactive motor segments. These results again show how qualitatively different filament conformational dynamics can arise from the application of different nonequilibrium chemical constraints.
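For reference, the extraction of $l_p$ from bead configurations can be sketched as follows: compute the unit bond tangents, average $\mathbf{t}(\tau+s)\cdot \mathbf{t}(\tau)$ over the contour and over configurations, and fit the decay. The sketch below uses a synthetic discrete worm-like chain sampled from the bending energy $\kappa_b(1-\cos\theta)$ in place of the simulation trajectories, as an illustrative stand-in only:
\begin{verbatim}
import numpy as np

# Measure g(s) = <t(tau+s).t(tau)> and fit l_p for a passive discrete
# worm-like chain with bending energy kappa_b*(1 - cos theta); this
# stands in for the filament trajectory data for illustration only.
rng = np.random.default_rng(1)
kappa_b, kT, nbond, nchain, smax = 5.0, 0.2, 44, 2000, 20
kb = kappa_b/kT

def sample_cos(size):
    # inverse-CDF sampling of p(cos th) ~ exp(kb*cos th) on [-1, 1]
    u = rng.random(size)
    return 1.0 + np.log(u + (1 - u)*np.exp(-2*kb))/kb

def chain_tangents():
    t = np.zeros((nbond, 3)); t[0] = (0.0, 0.0, 1.0)
    for i in range(1, nbond):
        c = sample_cos(1)[0]; phi = 2*np.pi*rng.random()
        a = np.array([1.0, 0.0, 0.0]) if abs(t[i-1, 0]) < 0.9 \
            else np.array([0.0, 1.0, 0.0])
        e1 = np.cross(t[i-1], a); e1 /= np.linalg.norm(e1)
        e2 = np.cross(t[i-1], e1)
        st = np.sqrt(max(1.0 - c*c, 0.0))
        t[i] = c*t[i-1] + st*(np.cos(phi)*e1 + np.sin(phi)*e2)
    return t

gs = np.zeros(smax)
for _ in range(nchain):
    t = chain_tangents()
    for s in range(smax):
        gs[s] += np.mean(np.sum(t[:nbond - s]*t[s:], axis=1))
gs /= nchain
lp = -1.0/np.polyfit(np.arange(smax), np.log(gs), 1)[0]
print("fitted l_p:", lp)   # ~ kappa_b/kT = 25 for this passive chain
\end{verbatim}
A passive chain with $\kappa_b=5.0$ and $k_BT=0.2$ gives $l_p\approx \kappa_b/k_BT = 25$, to be contrasted with the fitted $l_p=18.0$ for the active filament, consistent with the softening by the forward-moving motors discussed above.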
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=6.0cm,width=8.5cm]{./plot/ttcompare_1f_40f.pdf}
\caption{\label{fig:ttcompare_1f_40f}
(color online) Tangent correlation functions $g(s)$ for a single filament under different nonequilibrium chemical constraints: embedded forward-moving motors (red circles), inactive motors (green squares), and backward-moving motors (blue stars). Results for a many-filament system ($N_{f}=40$) with embedded forward-moving motor segments (purple diamonds) are also shown. Fits to $g(s) \sim \exp(-s/l_{p})$ for filaments with embedded forward-moving motor segments: a single filament (red dashed line), $l_{p}=18.0$; a filament in a system with $N_{f}=40$ filaments (purple dashed line), $l_{p}=12.5$.}
\end{center}
\end{figure}
\section{Conformational dynamics of networks of interacting active filaments}\label{sec:network}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[height=6.5cm,width=18.5cm]{./plot/40f_3figs0.pdf}
\caption{\label{fig:40f_3figs}(color online) (a) Instantaneous configuration of a system with forty active filaments with embedded forward-moving dimer motor segments. One filament is marked by yellow arrows to show its ends, and the yellow dashed lines indicate the directions in which the filament bends. (b) The same as (a) but for active filaments with embedded backward-moving motor segments. (c) The average end-to-end distance $L_{ee}$ for filaments in the network. The plots are for filaments with embedded forward-moving (red), backward-moving (blue) and inactive motors (green). The error bars represent $\pm$ one standard deviation computed from averages over all the filaments in the system and over ten realizations of the dynamics.}
\end{center}
\end{figure*}
We now show how the conformational structure and dynamics of many interacting active filaments depend on the nonequilibrium chemical constraints that take the system out of equilibrium. To construct the systems we study, filaments of the same length as described in the previous section are randomly distributed in the simulation box with random orientations. There are no permanent cross links between the filaments, although physical interactions can lead to transient linking. The system evolves under specified concentration constraints as discussed above. The resulting system is an entangled collection of filaments that has features similar to those of polymer gels, with geometrical physical links rather than permanent links formed by chemical bonds. A configuration extracted from the dynamics of an ensemble of $N_{f}=40$ active filaments with embedded forward-moving dimer motor segments is shown in Fig.~\ref{fig:40f_3figs}(a). In this image one can see the complex entangled arrangement the filaments adopt, as well as its inhomogeneous structure, with regions where the filaments aggregate. One filament is marked to show that its conformation is similar to that of the isolated filaments discussed in the previous section.
If instead the active filaments have embedded backward-moving motor segments the chains are stretched and the corresponding interacting filament system has a different structure as shown in Fig.~\ref{fig:40f_3figs}(b). Now the system of filaments is much more homogeneous and, as can be seen in the marked filament, the individual filaments are indeed stretched.
In the previous section we showed that filaments with embedded forward-moving motors are pliable. In a system of entangled filaments the bending rigidity is reduced even further. This can be seen in the form of the average tangent correlation function $g(s)$ in Fig.~\ref{fig:ttcompare_1f_40f}. A fit of $g(s)$ for filaments in the many-filament system with $g(s) \sim \exp(-s/l_{p})$ yields $l_{p}=12.5$, which should be compared with $l_{p}=18.0$ for an isolated filament. Thus, the bending stiffness $\kappa=2.5$ is smaller in the many-filament system, due to the influence of strong interactions with other filaments in the system.
\begin{figure}[htbp]
\begin{center}
\resizebox{0.95\columnwidth}{!}{%
\includegraphics{./plot/gr2_compare_40fil.pdf}}
\caption{\label{fig:grcu_40fila}
(color online) The $NN$ radial distribution function $g_N(r)$ for (a) a collection of $240$ motors in solution, and (b) $240$ embedded motor segments in a system with forty filaments. The plots in these figures are for forward-moving (red circles and lines) and backward-moving (blue stars and lines) motors. Panel (b) also shows results for the many-filament system subject to the square-wave periodic forcing (see Sec.~\ref{sec:tuning}), where the fluid-phase reaction rate coefficients $k_{b\pm}(t)$ oscillate with period $\tau_b=5000$. The curves were obtained from averages over ten realizations of the dynamics and time averages over the intervals $t=6000-7000$ (cyan diamonds and lines) and $t=8000-9000$ (purple triangles and lines).}
\end{center}
\end{figure}
The inhomogeneous structure of the many-filament system with embedded forward-moving motor segments and the lack of such structure for filaments with embedded backward-moving motor segments
can be attributed to chemotactic interactions among the motors. The collective dynamics of chemically-powered nanomotors in solution has been investigated extensively.~\cite{Wang2015,Elgeti2015,Zoettl2016,Illien2017,CRRK18} It is known that collections of forward-moving dimer motors aggregate strongly in solution due to chemotactic attraction~\cite{Thakur2012,Qiao21,Wagner2017,Colberg2017}, while backward-moving motors show little or no cluster formation~\cite{Colberg2017}. To investigate this effect we compare the collective dynamics of $240$ dimer motors in solution with that in the forty-filament system, which also has $240$ dimer motors. Figures~\ref{fig:grcu_40fila} (a) and (b) show plots of the $NN$ steady-state radial distribution function, $g_N(r)$,
\begin{equation}\label{eq:gnn}
g_{N}(r)= \frac{V}{4 \pi r^2 N_{\rm M}}\Big\langle \sum_{j<i=1}^{N_{\rm M}}{\delta \left ( \left | ( \mathbf{r}_{N_{i}}-\mathbf{r}_{N_{j}}) \right | -r\right )} \Big\rangle,
\end{equation}
where $r$ is the magnitude of the distance between the motor $N$ spheres, $N_{\rm M}$ is the number of motors, and the angle brackets denote an average over time and realizations. In Fig.~\ref{fig:grcu_40fila}(a) the peak in $g_N(r)$ for forward-moving motors indicates cluster formation, while there is a much weaker tendency for backward-moving motors to form clusters. This clustering tendency is enhanced in the active many-filament system with forward-moving motors, as seen in Fig.~\ref{fig:grcu_40fila}(b), while there is no cluster formation for embedded backward-moving motors. The $g_N(r)$ results in this figure were obtained by counting only motors on other filaments, excluding those on the same filament, in order to remove the built-in correlations among motors embedded in the same filament.
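For completeness, the estimator (\ref{eq:gnn}) can be implemented along the following lines. In this sketch the motor positions are random placeholders for the $N$-sphere coordinates, the normalization is chosen so that an ideal gas gives $g_N\approx 1$, and the filament labels show how same-filament pairs are excluded:
\begin{verbatim}
import numpy as np

# Sketch of the g_N(r) estimator with periodic boundaries. Positions are
# random placeholders; fil_id records which filament each motor belongs
# to, so that same-filament pairs can be excluded as described above.
rng = np.random.default_rng(0)
L, n_mot, n_fil = 75.0, 240, 40
pos = rng.random((n_mot, 3))*L
fil_id = np.repeat(np.arange(n_fil), n_mot//n_fil)   # 6 motors/filament

dr = pos[:, None, :] - pos[None, :, :]
dr -= L*np.round(dr/L)                     # minimum-image convention
dist = np.linalg.norm(dr, axis=-1)

i, j = np.triu_indices(n_mot, k=1)
keep = fil_id[i] != fil_id[j]              # drop same-filament pairs
d = dist[i[keep], j[keep]]

nbins = 60
hist, edges = np.histogram(d, bins=nbins, range=(0.0, L/2))
r = 0.5*(edges[1:] + edges[:-1]); w = edges[1] - edges[0]
ideal = keep.sum()*4*np.pi*r**2*w/L**3     # ideal-gas shell counts
gN = hist/ideal
print(gN[:10])   # ~1 here; peaked near contact when motors cluster
\end{verbatim}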
\section{Oscillating active gel-like systems}\label{sec:tuning}
The active conformational states of the many-filament systems described in the previous section were shown to depend strongly on the nonequilibrium chemical constraints that are parameterized by the fluid-phase reaction rate constants, which are themselves controlled by the concentrations of the pool chemical species (see Sec.~\ref{sec:app}).
Here we show how an oscillating active gel-like state can be obtained by periodic variation of these rate constants. In particular, we take the fluid-phase reaction rate constants to be given by
\begin{equation}\label{eq:bulk-rates}
k_{b\pm}(t)=\bar{k}_b \pm \Delta_b \sin \Omega t,
\end{equation}
where $\Omega =2 \pi /\tau_b$ with $\tau_b$ the period of the oscillation.
If we take $\bar{k}_b= 5.5 \times 10^{-3}$, the rate coefficient value for inactive motors, and $\Delta_b=4.5 \times 10^{-3}$, the rate coefficients will oscillate between those for forward-moving and backward-moving motors studied in the previous sections. The diffusiophoretic forces that the embedded motors exert on the filaments will also oscillate, leading to changes in the conformational structures of the active gel-like states. Such oscillatory behavior in the mean filament end-to-end length is shown in Fig.~\ref{fig:40f_T5000_eed_bm}. The average end-to-end lengths never achieve the values seen for rate coefficients fixed at the forward and backward-moving cases (see Fig.~\ref{fig:1fil_3figs}) corresponding to the extrema of $k_{b\pm}(t)$. This is due to the much slower response of the filament conformational changes to driving by the periodically-varying diffusiophoretic forces that induce chemomechanical coupling.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=6.5cm,width=9.0cm]{./plot/40f_1fcos_t5k0.pdf}
\caption{\label{fig:40f_T5000_eed_bm}
(color online) (left ordinate, dashed line) Fluid-phase reaction rate coefficients $k_{b\pm}(t)$ versus time for sinusoidal oscillations [Eq.~(\ref{eq:bulk-rates})] with period $\tau_b=5000$, and (right ordinate, red line with error bars) the average end-to-end distance $L_{ee}$ of a filament in a forty-filament system. The results were obtained from averages over all filaments in the system and over ten realizations of the dynamics.}
\end{center}
\end{figure}
To examine this effect in more detail we consider a simple mean field description of the reaction kinetics,
\begin{eqnarray}
\frac{d}{dt} c_A(t) &=& -k_m n_c c_A(t) +k_m n_c c_B(t)\nonumber \\
&&-k_{b-}(t) c_A(t) + k_{b+}(t) c_B(t),
\end{eqnarray}
with a similar equation for $c_B$. Here $n_c$ is the number density of catalytic spheres in the system. Using the condition $c_A + c_B =c_0$, where $c_0$ is the constant total concentration of reactive species, we may write this equation as
\begin{eqnarray}
\frac{d}{dt} c_A(t) &=& -2(k_m n_c + \bar{k}_b) c_A(t) +(k_m n_c + \bar{k}_b)c_0\nonumber \\
&&+c_0 \Delta_b \; \sin \Omega t
\end{eqnarray}
whose solution is
\begin{eqnarray}
c_A(t)&=& e^{-t/\tau_{\rm ch}} c_A(0) +\frac{c_0}{2}\Big(1- e^{-t/\tau_{\rm ch}}\Big)
+c_0 \frac{\Delta_b \Omega}{\tau_{\rm ch}^{-2} + \Omega^2}e^{-t/\tau_{\rm ch}}\nonumber \\
&& + c_0 \frac{\Delta_b}{\tau_{\rm ch}^{-2} + \Omega^2} \Big(\tau_{\rm ch}^{-1}\sin \Omega t -\Omega \cos \Omega t \Big),
\end{eqnarray}
where the chemical relaxation time is $\tau_{\rm ch}=1/[2(k_m n_c + \bar{k}_b)]$, defined in terms of $k_m$, the reaction rate coefficient on a catalytic sphere. This rate coefficient can be written as $k_{m\pm}=k^0_\pm k_D/(k^0_+ +k^0_- +k_D)$, where $k^0_\pm=p_\pm R_c^2 (8 \pi k_BT/\mu)^{1/2}$ and $k_D=4 \pi D R_c$, with $R_c$ the effective radius of the catalytic sphere for interactions with the reactive species. Here we neglect any screening by the noncatalytic spheres, the dimers and other filament beads. Henceforth we take $p_+=p_-$ and $k_{m\pm}=k_m$. Using the system parameters given in the Appendix we find $\tau_{\rm ch} \approx 85$. This time is very short compared to the conformational relaxation time of the filaments, $\tau_f \approx 10000$ (see Fig.~\ref{fig:40f_3figs}). For times longer than this short chemical relaxation time, the chemical concentrations are given by $c_A(t) \approx c_0 \Big[\frac{1}{2}+ \tau_{\rm ch} \Delta_b \sin \Omega t \Big]$ and oscillate with period $\tau_b$. As a result, the filaments cannot fully reach the maximum elongation and minimum contraction limits for sufficiently rapid variations of the fluid-phase rate coefficients. This effect manifests itself in the smaller-than-maximum oscillation amplitudes seen in Fig.~\ref{fig:40f_T5000_eed_bm}.
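These estimates are straightforward to verify by integrating the mean-field kinetics directly. In the sketch below the value of $k_m n_c$ is an assumed one, tuned so that $\tau_{\rm ch}\approx 85$ (in the simulations it follows from $k_m$ and the catalyst density, with screening neglected):
\begin{verbatim}
import numpy as np

# Integrate the mean-field kinetics for c_A(t) with oscillating k_b(t)
# and compare with the adiabatic form c0*(1/2 + tau_ch*Delta_b*sin(W t)).
# km_nc is an assumed value chosen to reproduce tau_ch ~ 85.
c0, kb_bar, Db, km_nc = 1.0, 5.5e-3, 4.5e-3, 4.0e-4
tau_ch = 1.0/(2*(km_nc + kb_bar))
tau_b = 5000.0; W = 2*np.pi/tau_b

def dcA(t, cA):
    kbp = kb_bar + Db*np.sin(W*t)          # k_{b+}: B -> A
    kbm = kb_bar - Db*np.sin(W*t)          # k_{b-}: A -> B
    cB = c0 - cA
    return -km_nc*cA + km_nc*cB - kbm*cA + kbp*cB

dt, tmax = 0.5, 2*tau_b
t, cA = 0.0, 0.5*c0
ts, cs = [], []
while t < tmax:                            # fourth-order Runge-Kutta
    k1 = dcA(t, cA); k2 = dcA(t + dt/2, cA + dt*k1/2)
    k3 = dcA(t + dt/2, cA + dt*k2/2); k4 = dcA(t + dt, cA + dt*k3)
    cA += dt*(k1 + 2*k2 + 2*k3 + k4)/6; t += dt
    ts.append(t); cs.append(cA)

ts = np.array(ts); cs = np.array(cs)
last = ts > tmax - tau_b
approx = c0*(0.5 + tau_ch*Db*np.sin(W*ts[last]))
print("oscillation amplitude ~", tau_ch*Db*c0)
print("max |c_A - adiabatic| :", np.abs(cs[last] - approx).max())
# the residual, of order (W*tau_ch) times the amplitude, reflects the
# small phase lag neglected in the adiabatic formula
\end{verbatim}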
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=6.5cm,width=9.0cm]{./plot/40f_1flength_t5k_short.pdf}
\caption{\label{fig:40f_1flength_t5k}
(color online) Same as Fig.~\ref{fig:40f_T5000_eed_bm} except square-wave oscillatory variations of the fluid-phase reaction rate coefficients $k_{b\pm}(t)$ are applied.}
\end{center}
\end{figure}
The character of the dynamics depends on the form of the periodic oscillation. For example, if fluid-phase rate coefficients have a square-wave oscillatory form, one obtains the results shown in Fig.~\ref{fig:40f_1flength_t5k}. For this oscillatory dynamics, even for the same short oscillation period, the average conformational changes are somewhat larger. Provided the half-period of the oscillation is longer than the characteristic filament conformational relaxation time, the system will oscillate between the fully collapsed forward-moving motor case and the fully expanded backward-moving motor case.
The homogeneity of the gel-like network also changes under periodic variation of the concentration constraints. The $NN$ radial distribution function $g_N(r)$ for a system with square-wave oscillation is shown in Fig.~\ref{fig:grcu_40fila}(b). The curves were obtained from averages over ten realizations of the dynamics and time averages over the intervals $t=6000-7000$ (cyan diamonds and lines) and $t=8000-9000$ (purple triangles and lines). There is a prominent peak at $r=5.0$ in the $t=8000-9000$ data that corresponds to $NN$ clustering, while there is only very weak structural ordering for $t=6000-7000$. Similar variations are seen for sinusoidal forcing, but the clustering tendency is somewhat weaker. Thus, not only does the average end-to-end filament length change during the oscillation cycle, but so does the inhomogeneous structure of the filament system.
\section{Summary and Conclusion}\label{sec:conclusion}
Chemically-powered nanomotors embedded in polymeric filaments are not only the source of active conformational dynamics but also provide a means for the control of this activity. This control is achieved by making use of the basic microscopic reversibility of catalytic reactions on the motor surfaces, along with the fact that detailed balance is broken to promote active motion by chemical constraints on the system. Through such constraints, the diffusiophoretic forces that the embedded motors exert on the filaments can be changed in a prescribed manner. In systems containing many interacting filaments, different conformational states with stretched or partially collapsed filaments can be selectively accessed by such control of the constraints. By periodically varying the chemical constraints, cyclic expansion and contraction of the gel-like filament states was obtained, reminiscent of the similar states in oscillating hydrogels activated by the Belousov-Zhabotinsky reaction. If the fluid phase reactions that control the nonequilibrium state can sustain oscillations and couple to the motor reactions~\cite{Robertson2015}, then autonomous control of the filament oscillations could also be achieved.
Additional features of the collective active filament states arise because of the embedded filament motors. The diffusiophoretic motors in bulk solution undergo active self assembly if they are forward-moving but not when they are backward-moving. Consequently, the many-filament systems with partially-collapsed filaments are highly inhomogeneous because of the tendency of the motors to cluster, while those with elongated filaments are more homogeneous and do not cluster.
The results presented here suggest other possibilities for constructing active filament systems. If permanent cross links among the filaments are included, active filament networks with two- or three-dimensional geometries can be constructed for materials science applications. In addition to providing another mechanism for constructing oscillating gel-like states, some features specific to filaments with embedded motors, such as the tendency of the embedded motors to promote or prevent filament cluster formation, could be used to induce inhomogeneous strains in the network and produce specific distortions of the system.
In the examples presented in this paper, different chemical constraints were imposed by assuming that the pool chemical species involved in the fluid phase reactions could be varied. However, rather than using coupling to fluid phase reactions to maintain the system out of equilibrium, the nonequilibrium state of the reactive species concentrations could be controlled by reservoirs at the boundaries of the system. In addition, a specific arrangement and choice of orientations of the embedded motors was made to induce elongation or contraction of the filaments. Other choices will give rise to different types of collective filament states, and this feature could be useful in the design of active materials.
\section{Computational Details}\label{sec:app}
Simulations of the dynamics of the active filament systems are carried out in a $75 \times 75 \times 75$ box with periodic boundaries. All filament beads have the same radius, $\sigma_{\rm F}=1.5$, except those in the motor segments; the catalytic and noncatalytic motor beads have radii $\sigma_{\rm C}=1.0$ and $\sigma_{\rm N}=2.0$, respectively. The beads are linked by stiff harmonic springs if their separation is less than the sum of the radii of the two beads. For the regular (non-motor) filament beads, $V_{\rm bond}^{FF}=\frac{1}{2}k_{s}(r-r_{\rm eq}^{FF})^2$, the equilibrium bond length is $r_{\rm eq}^{FF} = 1.0$. The dimer motor segments have $V_{\rm bond}^{CF}=\frac{1}{2}k_{sd}(r-r_{\rm eq}^{CF})^2$, $V_{\rm bond}^{NF}=\frac{1}{2}k_{sd}(r-r_{\rm eq}^{NF})^2$, $V_{\rm bond}^{CN}=\frac{1}{2}k_{sd}(r-r_{\rm eq}^{CN})^2$, where the equilibrium bond lengths are $r_{\rm eq}^{CF} = 1.75$, $r_{\rm eq}^{NF} = 2.75$, $r_{\rm eq}^{CN} = 3.5$, and the spring constants are $k_{\rm s} = 50$ and $k_{\rm {sd}} = 100$. The bending stiffness of a filament is controlled by a three-body potential, $V_{bend}=\kappa_{b}[1-\cos \theta ]$, with $\kappa_{b}=5.0$ and $\cos \theta =\hat{r}_{i-1,i}\cdot \hat{r}_{i,i+1}$, where $\hat{r}_{i,j}=(\mathbf{r}_{i}-\mathbf{r}_{j})/\left | \mathbf{r}_{i}-\mathbf{r}_{j}\right |$. Also, it is necessary to have a strong enough repulsive LJ potential ($\epsilon_{D}=5.0$) for interactions between monomers in the same filament, to avoid overlap between filament beads in strongly bent configurations.
The fluid phase contains $N=N_{A}+N_{B}$ particles of species $A$ and $B$. With the exception of the harmonic spring potentials discussed above and used to construct the filaments, all other intermolecular interactions take place through repulsive Lennard-Jones (LJ) potentials of the form
\begin{equation}
V_{\alpha \alpha'} =4\epsilon_{\alpha \alpha'} \Big[\Big(\frac{\sigma_{\alpha \alpha'}}{r_{i j}}\Big)^{12}-\Big(\frac{\sigma_{\alpha \alpha'}}{r_{i j}}\Big)^{6}+ \frac{1}{4} \Big]\theta(r_c-r_{i j}),
\end{equation}
where $\theta(r)$ is the Heaviside function and the separation between a particle
$i$ of type $\alpha$ and a particle $j$ of type $\alpha'$ is $r_{i j} = |\bm{r}_i-\bm{r}_j|$. We let the symbols $\alpha,\alpha'=C,N$ denote the catalytic and noncatalytic monomers in the filament. The repulsive potential between two beads in different filaments has $\sigma_{\alpha \alpha}=2 \sigma_{\alpha}+\sigma$, $\sigma_{\alpha \alpha'}=\sigma_{\alpha' \alpha}= \sigma_{\alpha}+\sigma_{\alpha'}+\sigma$,
$\sigma_{\alpha F}=\sigma_{F\alpha}= \sigma_{\alpha}+\sigma_{F}+\sigma$, with $\sigma=1.0$. Filament beads interact with other beads in neighboring filaments with strength $\epsilon_{FF}=1.0$. The interaction strengths of the repulsive interactions between motor beads and the filament are $\epsilon_{\alpha \alpha}=\epsilon_{\alpha \alpha'}= \epsilon_{\alpha'\alpha}=\epsilon_{\alpha F}=\epsilon_{F\alpha}= 1.0$. The $A$ and $B$ fluid particles have identical effective radii $\sigma_{A}=\sigma_{B}=0.25$, and energy parameters $\epsilon _{AC}=\epsilon _{BC}=\epsilon _{NA}=1.0$, $\epsilon _{NB}=0.1$ and $\epsilon_{AF}=\epsilon_{BF}=0.1$ for their interactions with the motor and filament beads.
All solvent species have the same mass $m$, whereas the masses of the motor and filament beads are chosen to be $m_{\alpha} = (d_{\alpha}/d_{S})^{3}\, m$, where $d_{\alpha}$ and $d_{S}$ are the bead and solvent-particle diameters, so that they have the same mass density as a solvent particle. The average solvent density is $n_{0}=N/L^{3} \sim 9$.
The hybrid multiparticle collision dynamics--molecular dynamics simulation method consists of free streaming and collision steps.~\cite{Malevanets_Kapral_99,Malevanets_Kapral_00} In the streaming step, the dynamics of all the species is governed by molecular dynamics and propagated by Newton's equations of motion. In this step there are no direct forces among solvent particles; instead, the interactions among the solvent particles are described by multiparticle collision dynamics.\cite{kapral:08,gompper:2009}
In the collision step, at discrete times $\tau$, the system is divided into cubic cells $\xi$ of size $a_{0}=1$. A rotation operator $\hat{\omega}_{\alpha}$, describing a rotation by an angle of $\pi/2$, is assigned to each cell from a set of rotation operators. The post-collision velocity $\mathbf{v}_{i}(t+\tau)$ of each particle $i$ within the same cell is then obtained according to the rotation rule
$\mathbf{v}_{i}(t+\tau)=\mathbf{v}_{cm}(t)+\hat{\omega}_{\alpha}(\mathbf{v}_{i}(t)
-\mathbf{v}_{cm}(t))$, where the center-of-mass velocity $\mathbf{v}_{cm}$ of each cell $\xi$ is calculated as $\mathbf{v}_{cm}= \sum_{j=1}^{N_{c}}\mathbf{v} _j/N_{c}$, with $N_{c}$ the total number of particles in the cell. Grid shifting is employed to ensure Galilean invariance.~\cite{Ihle_Kroll_01}
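For concreteness, a single collision step can be sketched as follows. The box is taken smaller than in the actual simulations to keep the sketch light, the state variables are placeholders, and the common choice of a fixed rotation angle $\pi/2$ about a randomly oriented axis per cell is used:
\begin{verbatim}
import numpy as np

# One multiparticle collision step: shift the grid, sort particles into
# unit cells, and rotate the velocities relative to each cell's
# centre-of-mass velocity by pi/2 about a random axis (Rodrigues formula).
rng = np.random.default_rng(0)
L, a0, alpha = 20, 1.0, np.pi/2            # small box for the sketch
N = 9*L**3                                 # mean density n_0 ~ 9
pos = rng.random((N, 3))*L
vel = rng.normal(0.0, np.sqrt(0.2), (N, 3))   # k_B T = 0.2, m = 1

shift = rng.random(3)*a0                   # grid shifting (Galilean inv.)
cells = np.floor((pos + shift)/a0).astype(int) % L
cid = cells[:, 0] + L*(cells[:, 1] + L*cells[:, 2])

order = np.argsort(cid); cid = cid[order]; v = vel[order]
starts = np.searchsorted(cid, np.arange(L**3))
ends = np.searchsorted(cid, np.arange(L**3), side='right')

ca, sa = np.cos(alpha), np.sin(alpha)
for c in range(L**3):
    if ends[c] - starts[c] < 2:
        continue
    sl = slice(starts[c], ends[c])
    vcm = v[sl].mean(axis=0)
    n = rng.normal(size=3); n /= np.linalg.norm(n)
    dv = v[sl] - vcm
    v[sl] = vcm + ca*dv + sa*np.cross(n, dv) + (1 - ca)*np.outer(dv@n, n)
vel[order] = v            # momentum and kinetic energy conserved per cell
\end{verbatim}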
The fluid phase reactions are assumed to take place under nonequilibrium conditions through a mechanism $B +P_1 \rightleftharpoons A +P_2$, where the ``pool'' chemical species $P_1$ and $P_2$ are assumed to be in excess and the values of their concentrations are incorporated in the rate coefficients $k_{b\pm}$. The reactive version of multiparticle collision dynamics is used to carry out the reactive dynamics in the fluid phase~\cite{Rholf2008}.
The molecular dynamics time step is $\Delta t_{MD}=0.001$, and Newton's equations are solved using the velocity-Verlet algorithm; multiparticle collisions are applied at time intervals $\Delta t_{MPC}=0.5$. The system temperature is $k_{B}T=0.2$. The viscosity of the fluid is $\eta=1.282$ and the fluid-particle self-diffusion coefficients are given by $D_{A}=D_{B}=D_{0}=0.118$.
In our simulations, all quantities are reported in dimensionless units based on energy in units of $\epsilon$, mass in units of $m$ and distance in units of $\sigma$.
\section*{Conflicts of interest}
There are no conflicts of interest to declare.
\section*{Acknowledgements}
\label{sec:acknowledge}
We thank Mu-jie Huang of the University of Toronto for useful discussions.
This work was supported by the National Natural Science Foundation of China (12004086) and the Natural Sciences and Engineering Research Council of Canada. Computations were supported by Compute Canada and performed on the GPC supercomputer at the SciNet HPC Consortium.
\section{Introduction}
In general relativity spacetime geometry is fully described by the metric. That is to say, the metric does not only define distances, which is its primary role, but also defines parallel transport, as it is used to construct the Levi-Civita connection. However, in principle this does not have to be the case. The metric and the connection can be independent quantities. In this case one would need field equations that would determine the dynamics of both the metric and the connection.
How can one construct such a theory?
In some textbooks, see for example Refs.~\cite{grav,wald}, an independent variation with respect to the metric and the connection of what is formally the Einstein--Hilbert action is considered as an alternative way to arrive to Einstein's equations. This variation is called Palatini variation. Indeed the variation with respect to the connection leads to a non-dynamical equation fixing the latter to be equal to the Levi-Civita connection of the metric, and under this condition the field equations for the metric become Einstein's equations. However, there is a very crucial implicit assumption: that the matter action does not depend on the connection. This is equivalent to assuming that covariant derivatives of matter fields are defined with the Levi-Civita connection of the metric instead of the independent connection. Then the independent connection does not really carry the geometrical meaning described previously, see Refs.~\cite{sot2,T&S} for a discussion.
Note that if one allows the connection to enter the matter action, the resulting theory is no longer general relativity \cite{hehlkerl}.
Additionally, one can easily argue that within a metric-affine setting the Einstein--Hilbert form of the action is not necessarily well motivated anyway: under the assumption that the connection is the Christoffel symbol of the metric, the Einstein--Hilbert action is indeed the unique diffeomorphism invariant action which leads to second order field equations (modulo topological terms and total divergences). However, this is not the case if the connection is allowed to be independent and it is not assumed to be symmetric: in this case there are other invariants one should in principle include in the action, even with the same dimensions as the Ricci scalar.
The situation gets more complicated once one decides to consider the role of higher order terms. Again, such actions have been studied mostly under the simplifying but geometrically unappealing assumption that the connection does not enter the matter action. Such theories are dubbed Palatini theories of gravity. Particular attention has been paid to $f({\cal R})$ models, i.e.~to actions where the Lagrangian is some algebraic function of the Ricci scalar of the independent connection, ${\cal R}$. Such actions were introduced and initially studied by Buchdahl \cite{buchdahl}. They have recently attracted a lot of interest as possible infrared modifications of general relativity \cite{palgen} (see Refs.~\cite{T&V,deF,phd} for reviews). However, Palatini $f({\cal R})$ gravity models with infrared corrections with respect to GR have been shown to be non-viable for various reasons: they are in conflict with the standard model of particle physics \cite{flanagan,padilla} and they violate solar system tests, as their post-Newtonian metric has an algebraic dependence on the matter fields \cite{olmonewt,sotnewt}. Singularities have been shown to arise on the surface of well known spherically symmetric matter configurations \cite{Barausse:2007pn}, which render the theory at best incomplete and provide a very strong viability criterion. This criterion is almost independent of the functional form of the Lagrangian, the only exception being Lagrangians with corrections which become important only in the far ultraviolet (as in this case the singularities manifest at scales where non-classical effects take over) \cite{Olmo:2008pv}.
Generalized Palatini theories of gravity have also been considered. For example, Lagrangians of the form $f({\cal R}^{(\mu\nu)}{\cal R}_{(\mu\nu)})$ (parentheses indicating symmetrization) were studied in Ref.~\cite{Allemandi:2004wn} and Lagrangians of the form ${\cal R}+f({\cal R}^{(\mu\nu)}{\cal R}_{(\mu\nu)})$ were considered in Ref.~\cite{Li:2007xw}, with attention being focussed on cosmology. In Refs.~\cite{Olmo:2009xy,olmo2,new} Lagrangians of the more general form $f(R,R^{\mu\nu}R_{\mu\nu})$ were studied, see however Ref.~\cite{STV}.
Unlike the exceptional case of the Einstein--Hilbert action mentioned above, generalized Palatini theories of gravity are distinct from the theories one would get by starting from the same action (formally) and applying standard metric variation. One cannot say that their dynamics have been well understood in general. That is because the dynamics of the most well-studied class, $f({\cal R})$, are rather peculiar and not representative. Indeed, in Palatini $f({\cal R})$ gravity the independent connection does not carry any dynamics and can be algebraically eliminated in favour of the metric and the matter fields \cite{sotplb,Sotiriou:2007zu}. This result has recently been generalized to $f(R)$ theories with non-symmetric connections, {\em i.e.}~theories that allow for torsion \cite{Sotiriou:2009xt}. The lack of extra dynamics with respect to general relativity can also be seen from the fact that Palatini $f(R)$ gravity has been shown to be dynamically equivalent to Brans--Dicke theory with Brans--Dicke parameter $\omega_0=-3/2$ \cite{flanagan,olmonewt,sot1} (again, irrespectively of how general the connection is allowed to be \cite{Sotiriou:2009xt}). This is a particular theory within the Brans--Dicke class in which the scalar does not carry any dynamics and can be algebraically eliminated in favour of the matter fields.
The algebraic elimination of the connection (or the corresponding scalar field in the Brans--Dicke representation) will introduce extra matter interactions \cite{flanagan,padilla}, and Palatini $f({\cal R})$ theories will essentially be equivalent to general relativity with modified source terms. In fact, this property is what lies at the heart of all the viability issues mentioned earlier \cite{Barausse:2007pn}. However, this is not a generic property of generalized Palatini gravity, as has been recently demonstrated in Ref.~\cite{STV}, but just a peculiarity of $f({\cal R})$ actions. Generic higher order actions lead to extra dynamical degrees of freedom.
What we would like to understand here is what happens when one jumps from the Palatini approach, to the more general and better motivated metric-affine approach, where the independent connection is allowed to enter the matter action, define the covariant derivative, and, therefore, retain its geometrical significance. In particular, we would like to understand under which circumstances this connection becomes an auxiliary field, which can be algebraically eliminated, and when it actually does carry dynamics. Note that there are well known examples, such as Einstein--Cartan theory \cite{hehl} (which is a metric-affine theory with the additional constraint that the connection is metric, but not symmetric), where the independent connection can be eliminated algebraically, leading to general relativity with extra matter interactions. In this specific case, this is a four-fermion interaction. See also Ref.~\cite{yuri} for an example of a more general action with the same property. What happens for more general theories, however, and especially how the dynamics of the connection will be affected by considering higher order terms in the action, has not been systematically understood.
In order to address this issue we follow an approach motivated by effective field theory. We will consider the metric-affine action as an effective action, possibly arising from some more fundamental theory in an appropriate limit. We will then employ power counting in order to construct the most general action order by order. This will allow us to arrive at model-independent statements and avoid considering fine-tuned actions, which can lead to misleading results.
The paper is organized in the following way. Section \ref{general} is devoted to presenting our conventions and briefly reviewing the metric-affine set up. In section \ref{2order} we construct the most general second order action and show that it does not lead to a dynamical connection. In section \ref{higher} we move on to consider actions with higher order invariants and we show that, remarkably, the situation changes radically and degrees of freedom residing in the connection become excited. In section \ref{fofr} we consider $f({\cal R})$ actions as a particular example, as they have been extensively studied in the literature, even in the metric-affine setting \cite{T&S}. We do not consider them because we expect them to be representative examples of the dynamics of metric-affine gravity. On the contrary, our intention is to explicitly demonstrate that they are not. Section \ref{discuss} contains a discussion of our results and our conclusions.
\section{General setup for metric-affine theories}
\label{general}
We start by clarifying our notation and conventions. The covariant derivative defined by the connection $\Gamma_{\;\;\mu\nu}^{\rho}$, acting on a tensor, is
\begin{eqnarray}\label{def}
\nabla_\mu A^\nu_{\phantom{a}\sigma}=\partial_\mu A^\nu_{\phantom{a}\sigma}+\Gamma^\nu_{\phantom{a}\alpha\mu} A^\alpha_{\phantom{a}\sigma}-\Gamma^\alpha_{\phantom{a}\sigma\mu} A^\nu_{\phantom{a}\alpha}\,.
\end{eqnarray}
It is important to stress that the position of the indices must be taken into account very carefully, since the connection is not assumed to be symmetric. The antisymmetric part of the connection is commonly referred to as the Cartan torsion tensor
\begin{eqnarray}
\label{cartan}
S_{\mu\nu}^{\phantom{ab}\lambda}\equiv \Gamma^{\lambda}_{\phantom{a}[\mu\nu]}\,.
\end{eqnarray}
The failure of the connection to covariantly conserve the metric is measured by the non-metricity tensor
\begin{eqnarray}\label{nonmet}
Q_{\lambda\mu\nu}\equiv-\nabla_{\lambda}g_{\mu\nu}\,.
\end{eqnarray}
Using the connection one can construct the Riemann tensor
\begin{equation}
\label{riemann}
{\cal R}^\mu_{\phantom{a}\nu\sigma\lambda}=-\partial_\lambda\Gamma^\mu_{\phantom{a}\nu\sigma}+\partial_\sigma\Gamma^\mu_{\phantom{a}\nu\lambda}+\Gamma^\mu_{\phantom{a}\alpha\sigma}\Gamma^\alpha_{\phantom{a}\nu\lambda}-\Gamma^\mu_{\phantom{a}\alpha\lambda}\Gamma^\alpha_{\phantom{a}\nu\sigma}\, ,
\end{equation}
which has no dependence on the metric. Notice that the Riemann tensor here has only one obvious symmetry: it is antisymmetric in the last two indices. All the other symmetries one might be accustomed to from general relativity are not present for an arbitrary connection \cite{schro}.
Without any use of the metric we can also define as $\mathcal{R}_{\mu\nu}$ the Ricci tensor built with the connection $\Gamma_{\;\;\mu\nu}^{\rho}$
\begin{eqnarray}\label{ricci}
{\cal R}_{\mu\nu}\equiv {\cal R}^\lambda_{\phantom{a}\mu\lambda\nu}=\partial_{\lambda}\Gamma^{\lambda}_{\;\;\mu\nu}-\partial_{\nu}\Gamma^{\lambda}_{\;\;\mu\lambda}+\Gamma^{\lambda}_{\;\;\sigma\lambda}\Gamma^{\sigma}_{\;\;\mu\nu}-\Gamma^{\lambda}_{\;\;\sigma\nu}\Gamma^{\sigma}_{\;\;\mu\lambda}\,.
\end{eqnarray}
$\mathcal{R}=g^{\mu\nu} {\cal R}_{\mu\nu}$ is the corresponding Ricci scalar.
Note that there is an intrinsic ambiguity in the definition of the Ricci tensor in metric-affine theories, as the limited symmetries of the Riemann tensor now allow for the alternative definition
\begin{equation}
\hat{{\cal R}}_{\mu\nu}\equiv {\cal R}^\sigma_{\phantom{a}\sigma\mu\nu}=\,-\partial_\nu\Gamma^\sigma_{\phantom{a}\sigma\mu}+\partial_\mu\Gamma^\sigma_{\phantom{a}\sigma\nu}\,.
\end{equation}
This tensor is called the homothetic curvature; note that the terms quadratic in the connection cancel upon relabelling the dummy indices, so only the derivative terms survive this contraction.
For a symmetric connection it is equal to the antisymmetric part of ${\cal R}_{\mu\nu}$ and, therefore, it need not be considered separately. This is not the case for a non-symmetric connection. Note, however, that the homothetic curvature is fully antisymmetric and as such it leads to a vanishing scalar when contracted with the metric.\footnote{See Ref.~\cite{T&S} for a more detailed discussion about the ambiguities in the definition of the Ricci tensor. Note also that, unlike the usual Ricci tensor, the homothetic curvature tensor has a direct physical interpretation: it measures the change of the length of a vector when it is transported along a closed loop. When the homothetic curvature vanishes, the connection is volume preserving, {\em i.e.}~lengths and volumes do not change during parallel transport. We thank Yuri Obukhov for bringing this to our attention.}
As already mentioned in the Introduction, the key characteristic of metric-affine gravity is that the affine connection $\Gamma_{\;\;\mu\nu}^{\rho}$ is not assumed to have any {\em a priori} relation with the metric. On the other hand, it is assumed to define parallel transport and the covariant derivative of matter fields, so it inevitably enters the matter action, see Ref.~\cite{sot2} for a discussion. That is, in metric-affine gravity couplings between the connection and the matter fields are allowed. This is the main difference from (generalized) Palatini theories of gravity, as mentioned earlier. The action will, therefore, be of the following general form
\begin{equation}
S=S_G+S_M=\int d^4x\sqrt{-g} \left[ \mathcal{L}_G(g_{\mu\nu}, \Gamma_{\;\;\mu\nu}^{\rho})+\mathcal{L}_M\left(g_{\mu\nu}, \Gamma_{\;\;\mu\nu}^{\rho}, \psi\right)\right]\,,
\end{equation}
where $g$ is the determinant of the metric $g_{\mu\nu}$, $\psi$ collectively denotes the matter fields, and $S_M$ is the matter action. We have written the dependence of ${\cal L}_M$ on the various fields explicitly to avoid confusion here, but we will suppress it from now on and just use $S_M$ instead in order to lighten the notation. Of course, specific matter fields, such as a scalar field or a gauge field, will not couple to the connection. See Ref.~\cite{T&S} for a detailed discussion on such matters and on the general characteristics of metric-affine gravity.
One now needs to specify the exact form of the Lagrangian $\mathcal{L}_G$. In Ref.~\cite{hehlkerl} an action linear in ${\cal R}$ was considered and in Ref.~\cite{T&S} the most general $f({\cal R})$ family was studied extensively. Instead of an {\em ad hoc} choice inspired by some similarity with the Einstein--Hilbert action and its generalizations, we would like to follow here an effective field theory approach in order to consider the most general action possible at each order. To construct this action, we should carry out a power counting analysis which will reveal the whole set of appropriate terms order by order. We set $c=1$ and we can choose the engineering dimensions
\begin{equation}
[dx]=[dt]=[l]
\end{equation}
where $l$ is a placeholder symbol with dimensions of length. Then we have
\begin{eqnarray}
&&[g_{\mu\nu}]=[1]\,, \quad [\sqrt{-g} dx^4]=[l^4]\,, \quad[\Gamma^{\lambda}_{\phantom{a}\mu\nu}]=[l^{-1}]\,,\quad [{\cal R}_{\mu\nu}]=[l^{-2}]\,.
\end{eqnarray}
Now consider as a simple example the action
\begin{equation}
\label{action}
S_G=\frac{1}{l_p^2} \int dx^3 dt \sqrt{-g} {\cal R}\,.
\end{equation}
Requiring that this action be dimensionless implies that the coupling constant $l_p$ must have dimensions of a length and can be naturally associated with the Planck length. What we mean by the order of the gravitational theory is also clear now: we mean the highest order in $l_p^{-1}$ powers appearing in the Lagrangian (which, since one cannot choose the metric and the connection to be dimensionless at the same time, does not correspond to the order in its derivatives).
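As a quick consistency check of this counting, the integrand above indeed combines to a dimensionless quantity,
\begin{equation}
[l_p^{-2}]\,[\sqrt{-g}\, dx^3 dt]\,[{\cal R}]=[l^{-2}]\,[l^{4}]\,[l^{-2}]=[1]\,,
\end{equation}
and the same bookkeeping applies to any candidate invariant: a term of dimension $[l^{-n}]$ must enter the Lagrangian with a coupling of dimension $[l^{n-4}]$, {\em i.e.}~with $n-2$ more powers of $l_p$ than the term linear in ${\cal R}$.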
\section{Second order action}
\label{2order}
Clearly the action written above is not the most general one we could write in metric-affine gravity. It is just an example inspired by the analogy with standard GR and the Einstein--Hilbert action. To begin with, we could include a cosmological constant term, which is of lower order. But such a term would not play any important role in our arguments so we will omit it for simplicity. What other terms can we write at the second order? Under the assumption that the connection is torsionless and metric compatible (Levi-Civita), there exists no other term that respects diffeomorphism invariance, as is well known. But, in the more general metric-affine setting we are considering here, there are at least two more tensors one could imagine using in order to construct invariants:
\begin{itemize}
\item The aforementioned ``second Ricci'' tensor $\hat{{\cal R}}_{\mu\nu}$. However, this tensor has dimensions $[l^{-2}]$ and is antisymmetric, so there is no quantity one can construct out of it at second order;
\item the Cartan torsion tensor of eq.~(\ref{cartan}), which has the same dimensions as $\Gamma^{\lambda}_{\phantom{a}\mu\nu}$. Therefore, terms with one derivative of $S_{\mu\nu}^{\phantom{ab}\lambda}$ or terms quadratic in $S_{\mu\nu}^{\phantom{ab}\lambda}$ will be of the same order as ${\cal R}$.
\end{itemize}
Due to the symmetries of $S_{\mu\nu}^{\phantom{ab}\lambda}$ there is only a single term with a derivative we can write
\begin{equation}
\label{derterm}
g^{\mu\nu}\nabla_\mu S_{\nu\sigma}^{\phantom{ab}\sigma}\,.
\end{equation}
For the same reason, there are just three terms quadratic in $S_{\mu\nu}^{\phantom{ab}\lambda}$ one can write
\begin{equation}
\label{3terms}
g^{\mu\nu}S_{\mu\lambda}^{\phantom{ab}\lambda}S_{\nu\sigma}^{\phantom{ab}\sigma}\,, \qquad g^{\mu\nu}S_{\mu\lambda}^{\phantom{ab}\sigma}S_{\nu\sigma}^{\phantom{ab}\lambda}\,, \qquad g^{\mu\alpha}g^{\nu\beta}g_{\lambda\gamma}S_{\mu\nu}^{\phantom{ab}\lambda}S_{\alpha\beta}^{\phantom{ab}\gamma}\,.
\end{equation}
Note that the term in eq.~(\ref{derterm}) has been considered by Papapetrou and Stachel in \cite{papa}.
A subtle point is the following. The term in eq.~(\ref{derterm}) is not a total divergence, as $\nabla_\mu$ is not defined with the Levi-Civita connection of the metric. On the other hand, one can think of decomposing the connection as
\begin{equation}\label{decomp}
\Gamma^{\lambda}_{\phantom{a}\mu\nu}= \left\{^{\lambda}_{\phantom{a}\mu\nu}\right\}+C^{\lambda}_{\phantom{a}\mu\nu}\,,
\end{equation}
i.e.~into its Levi-Civita part and the rest. Now, using this decomposition we can split the covariant derivative in (\ref{derterm}) into a metric compatible part, which will lead to a total divergence, and the rest, which will lead to terms consisting of contractions between $C^{\lambda}_{\phantom{a}\mu\nu}$ and the Cartan torsion tensor. Since the non-metricity is not zero, these terms are different from the ones already considered above in eq.~(\ref{3terms}). Thus the term in eq.~(\ref{derterm}) is non-trivial.
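For later reference, it is useful to recall the familiar expression for the Levi-Civita part of the decomposition (\ref{decomp}),
\begin{equation}
\left\{^{\lambda}_{\phantom{a}\mu\nu}\right\}=\frac{1}{2}g^{\lambda\sigma}\left(\partial_\mu g_{\sigma\nu}+\partial_\nu g_{\sigma\mu}-\partial_\sigma g_{\mu\nu}\right)\,,
\end{equation}
which makes manifest that writing $C^{\lambda}_{\phantom{a}\mu\nu}=\Gamma^{\lambda}_{\phantom{a}\mu\nu}-\left\{^{\lambda}_{\phantom{a}\mu\nu}\right\}$ explicitly necessarily involves first derivatives of the metric.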
This brings us to another puzzle though: $C^{\lambda}_{\phantom{a}\mu\nu}$ is a tensor, so why not consider terms constructed with it as well? Actually, $C^{\lambda}_{\phantom{a}\mu\nu}$ can always be decomposed in terms of the torsion $S_{\mu\nu}^{\phantom{ab}\lambda}$ and the non-metricity $Q_{\lambda\mu\nu}$, so the question then reduces to whether we should also consider terms constructed with $Q_{\lambda\mu\nu}$ or not. From a power counting/field theory perspective nothing prevents us from doing so, and these would indeed be terms of the same order. However, from this perspective we should also consider, for instance, the Ricci tensor of the metric, $R_{\mu\nu}$. In fact, $Q_{\lambda\mu\nu}$ and $R_{\mu\nu}$ share a common characteristic which is crucial for our discussion: they cannot be expressed without using derivatives of the metric (even if one tries to define $Q_{\lambda\mu\nu}$ through the connection instead of via (\ref{nonmet}), the Levi-Civita connection would still be needed). Therefore, the puzzle reduces to whether or not we should be considering invariants constructed with derivatives of the metric.
Clearly, field theoretic considerations cannot give an answer to this question. Such terms should be considered unless we are willing to invoke some principle excluding them, along the lines of minimal coupling in general relativity. Such a principle has been discussed in Ref.~\cite{T&S}. In simple terms it would be the requirement that the metric be used only for raising and lowering indices. We choose to follow this prescription here, as it seems sensible from a geometrical perspective (the purpose of the metric being to measure distances) and it significantly reduces the number of terms one can consider.
Another way to reduce the number of terms without invoking a minimal coupling principle would be to require that the connection be metric compatible. This would force $Q_{\lambda\mu\nu}$ to vanish, without necessarily implying torsion has to vanish as well. We would then remain with exactly the same terms written above. However, in this case the term in eq.~(\ref{derterm}) would indeed differ from the first term in eq.~(\ref{3terms}) only by a total surface term and one would be able to omit it.
Let us now consider the most general second-order action we have just constructed in our setting
\begin{eqnarray}\label{2ndorder}
S=\frac{1}{16\,\pi\,l_p^2} \int dx^4 \sqrt{-g} &&\left(g^{\mu\nu}{\mathcal R}_{\mu\nu}+a_1g^{\mu\nu}\nabla_{\mu}S_{\nu\sigma}^{\;\;\;\;\sigma}+a_2g^{\mu\nu}S_{\mu\lambda}^{\;\;\;\;\lambda}S_{\nu\sigma}^{\;\;\;\;\sigma}\right.\\
&& \left.+a_3g^{\mu\nu}S_{\mu\lambda}^{\;\;\;\;\sigma}S_{\nu\sigma}^{\;\;\;\;\lambda}+a_4g^{\mu\alpha}g^{\nu\beta}g_{\lambda\gamma}S_{\mu\nu}^{\;\;\;\;\lambda}S_{\alpha\beta}^{\;\;\;\;\gamma}\right)+S_M\,,\nonumber
\end{eqnarray}
where the $a_i$'s represent the various coupling constants. Varying independently with respect to the metric and the connection yields
\begin{eqnarray}
\label{maeq1}
&&{\cal R}_{(\mu\nu)}-\frac{1}{2} g_{\mu\nu} \mathcal{R}+a_1\left\{\nabla_{(\mu}S_{\nu)}-\frac{1}{2} g_{\mu\nu}g^{\alpha\beta}\nabla_{\alpha}S_{\beta}\right\}\nonumber\\
&&\quad +a_2\left\{-\frac{1}{2} g_{\mu\nu} S_{\alpha}S^{\alpha}+S_{\mu}S_{\nu}\right\}+a_3\left\{-\frac{1}{2} g_{\mu\nu} g^{\alpha\beta}S_{\alpha\lambda}^{\;\;\;\;\sigma}S_{\beta\sigma}^{\;\;\;\;\lambda}+S_{\mu\lambda}^{\;\;\;\;\sigma}S_{\nu\sigma}^{\;\;\;\;\lambda}\right\}\nonumber\\
&&\quad +a_4\left\{-\frac{1}{2} g_{\mu\nu}S_{\rho\sigma\lambda} S^{\rho\sigma\lambda}+2 S_{\alpha\mu}^{\;\;\;\;\lambda}S^{\alpha}_{\phantom{a}\nu\lambda}-S_{\rho\sigma\mu}S^{\rho\sigma}_{\phantom{ab}\nu}\right\}=\kappa T_{\mu\nu}\,,\\
\label{vargamdef}
&&\frac{1}{\sqrt{-g}}\left[-\nabla_\lambda\left(\sqrt{-g}g^{\mu\nu}\right)+\nabla_\sigma\left(\sqrt{-g}g^{\sigma\mu}\right)\delta^{\nu}_{\;\;\;\lambda}-a_1\nabla_{\alpha}\left(\sqrt{-g} g^{\alpha[\mu}\right)\delta^{\nu]}_{\;\;\lambda}\right]\nonumber\\&&\quad+\left(2-a_1\right)g^{\mu\nu}S_{\lambda}-2S^{(\mu}\delta^{\nu)}_{\;\;\;\lambda}+2(a_1+a_2-1)S^{[\mu}\delta^{\nu]}_{\;\;\;\lambda}+2a_3g^{\alpha[\mu}S_{\alpha\lambda}^{\;\;\;\;\nu]}\nonumber\\
&&\quad +2a_4g^{\alpha[\mu}g^{\nu]\beta}g_{\lambda\gamma}S_{\alpha\beta}^{\;\;\;\;\gamma}=\kappa \Delta_{\lambda}^{\;\;\mu\nu}\,.
\end{eqnarray}
where $S_{\alpha}\equiv S_{\alpha\beta}^{\;\;\;\;\beta}$, $\kappa=8\,\pi\,l_p^2$ and
\begin{equation}
T_{\mu\nu}\equiv-\frac{2}{\sqrt{-g}}\frac{\delta S_M}{\delta g^{\mu\nu}}\,,\hspace{1.5cm}\Delta_{\lambda}^{\;\;\mu\nu}\equiv-\frac{2}{\sqrt{-g}}\frac{\delta S_M}{\delta\Gamma_{\;\;\mu\nu}^{\lambda}}\,.
\end{equation}
$\Delta_{\lambda}^{\;\;\mu\nu}$ is known as the \textit{hypermomentum} and, as already identified in \cite{hehl2}, it encapsulates all the information related to the spin angular momentum of matter, the intrinsic part of the dilation current, and the shear current. $T_{\mu\nu}$ on the other hand is sometimes referred to as the stress-energy tensor, in analogy with general relativity. However, it should be stressed that this terminology might be misleading within the metric-affine framework, as this tensor does not have the properties usually associated with the stress-energy tensor in general relativity. For instance, it is not necessarily divergence free, it does not reduce to the special relativistic stress-energy tensor in a suitable limit and, of course, it describes only some of the properties of matter, given the existence of $\Delta_{\lambda}^{\;\;\mu\nu}$ as well. In fact, it is best regarded as nothing more than a shorthand notation for the functional derivative of the matter action with respect to the metric.
Our present aim is to check whether it is possible to fully eliminate the connection from the field equations. Let us consider the contractions of the $\lambda$ index in (\ref{vargamdef}) with $\mu$ and $\nu$ respectively
\begin{eqnarray}\label{nablag}
\frac{3}{2}\frac{a_1}{\sqrt{-g}}\nabla_\mu\left(\sqrt{-g}g^{\mu\nu}\right)=\kappa\Delta_{\mu}^{\;\;\mu\nu}+(4a_1+3a_2+a_3+2a_4)S^{\nu}\,,
\end{eqnarray}
\begin{eqnarray}
\frac{6-3a_1}{2 \sqrt{-g}}\nabla_\nu\left(\sqrt{-g}g^{\mu\nu}\right)=\kappa\Delta_{\nu}^{\;\;\mu\nu}-(2a_1+3a_2+a_3+2a_4-6)S^{\mu}\,.
\end{eqnarray}
Combining these two equations gives $S^\nu$ and the trace $\nabla_\mu\left(\sqrt{-g}g^{\mu\nu}\right)$ as functions of the hypermomentum
\begin{eqnarray}
S^{\nu}=\frac{\kappa}{(1-a_1)a_1+3a_2+a_3+2a_4} \left[(a_1-1)\Delta^{\nu}-\tilde{\Delta}^{\nu}\right]\,,
\end{eqnarray}
\begin{eqnarray}\label{torsvec}
\frac{1}{\sqrt{-g}}\nabla_\mu\left(\sqrt{-g}g^{\mu\nu}\right)
&=&\frac{2}{3}\left\{\kappa\Delta^{\nu}+\frac{\kappa(a_1+3)\left[(a_1-1)\Delta^{\nu}-\tilde{\Delta}^{\nu}\right]}{(1-a_1)a_1+3a_2+a_3+2a_4} \right\}\,,
\end{eqnarray}
where we defined the two quantities $\Delta^{\mu}\equiv\Delta_{\alpha}^{\;\;(\alpha\mu)}$ and $\tilde{\Delta}^{\mu}\equiv\Delta_{\alpha}^{\;\;[\alpha\mu]}$. Eq. (\ref{torsvec}) can be inserted in (\ref{vargamdef}) to eliminate the second and the third term in order to get
\begin{eqnarray}\label{newf}
&&\frac{1}{\sqrt{-g}}\left[-\nabla_\lambda\left(\sqrt{-g}g^{\mu\nu}\right)\right]+2a_3g^{\alpha[\mu}S_{\alpha\lambda}^{\;\;\;\;\nu]}+2a_4g^{\alpha[\mu}g^{\nu]\beta}g_{\lambda\gamma}S_{\alpha\beta}^{\;\;\;\;\gamma}=\nonumber\\
&&=\kappa \Delta_{\lambda}^{\;\;\mu\nu}\!\!-\frac{2}{3}\left[\kappa \Delta^{\mu}+(a_1+3)S^{\mu}\right]\delta^{\nu}_{\;\;\;\lambda}\!+\frac{2}{3}a_1\left\{\kappa \Delta^{[\mu}+(a_1+3)S^{[\mu}\right\}\delta^{\nu]}_{\;\;\;\lambda}-\nonumber\\&&\qquad-\left(2-a_1\right)g^{\mu\nu}S_{\lambda}+2S^{(\mu}\delta^{\nu)}_{\;\;\;\lambda}-2(a_1+a_2-1)S^{[\mu}\delta^{\nu]}_{\;\;\;\lambda}\,,
\end{eqnarray}
while we will refrain from replacing $S_\nu$ for compactness.
Using the identities
\begin{equation}
\nabla_\mu{\sqrt{-g}}=\partial_\mu{\sqrt{-g}}-\Gamma^{\alpha}_{\;\;\alpha\mu}\sqrt{-g}
\end{equation}
and
\begin{equation}
g_{\mu\nu}\partial_{\lambda}(\sqrt{-g}g^{\mu\nu})=2\sqrt{-g}\,\partial_{\lambda}\ln\sqrt{-g}
\end{equation}
we can write the trace of eq.~(\ref{newf}) with the metric in the $\mu$ and $\nu$ indices as
\begin{eqnarray}
\frac{\partial_{\lambda}\sqrt{-g}}{\sqrt{-g}}=-\frac{1}{2}\kappa g_{\mu\nu}\Delta_{\lambda}^{\;\;\mu\nu}+\frac{1}{3}\kappa g_{\mu\lambda}\Delta^{\mu}+\left(4-\frac{5}{3}a_1\right)S_{\lambda}+\Gamma^{\alpha}_{\;\;\alpha\lambda}\,.
\end{eqnarray}
Eliminating the density-related terms and suitably lowering the indices, eq.~(\ref{newf}) can eventually be brought to the form
\begin{eqnarray}
&&\partial_{\lambda}g_{\sigma\rho}-\Gamma^{\mu}_{\;\;\rho\lambda}g_{\mu\sigma}-\Gamma^{\nu}_{\;\;\sigma\lambda}g_{\nu\rho}+2a_3S_{[\sigma|\lambda|\rho]}+2a_4S_{\sigma\rho\lambda}=\nonumber\\
&&=g_{\sigma\rho}\left(4-\frac{5}{3}a_1\right)S_{\lambda}+\frac{1}{3}\kappa g_{\sigma\rho}\Delta_{\lambda}-\frac{1}{2}\kappa g_{\sigma\rho}\Delta_{\lambda\;\;\mu}^{\;\;\mu}+\nonumber\\&&\quad+\kappa\Delta_{\lambda\sigma\rho}-\left(2-a_1\right)g_{\sigma\rho}S_{\lambda}+2S_{(\sigma}g_{\rho)\lambda}-2(a_1+a_2-1)S_{[\sigma}g_{\rho]\lambda}+\nonumber\\
&&\quad+\frac{2}{3}(a_1+3)\left(a_1S_{[\sigma}g_{\rho]\lambda}-S_{\sigma}g_{\rho\lambda}\right)+\frac{2}{3}\kappa\left(a_1\Delta_{[\sigma}g_{\rho]\lambda}-\Delta_{\sigma}g_{\rho\lambda}\right)\,.
\end{eqnarray}
We can now split this last expression into its antisymmetric and symmetric parts with respect to the two indices $\sigma$ and $\rho$
\begin{eqnarray}\label{tofindtorsion}
2a_3S_{[\sigma|\lambda|\rho]}+2a_4S_{\sigma\rho\lambda}&=&\Theta_{\lambda\sigma\rho}\,,\nonumber\\
\label{tofindsymcon}
\partial_{\lambda}g_{\sigma\rho}-\Gamma^{\mu}_{\;\;\rho\lambda}g_{\mu\sigma}-\Gamma^{\nu}_{\;\;\sigma\lambda}g_{\nu\rho}&=&\kappa\Delta_{\lambda(\sigma\rho)}-\frac{2}{3}\left[\kappa\Delta_{(\sigma}+a_1S_{(\sigma}\right]g_{\rho)\lambda}\nonumber\\
-g_{\sigma\rho}\Big[\left(\frac{2}{3}a_1-2\right)S_{\lambda}&+&\frac{1}{2}\kappa \Delta_{\lambda\;\;\mu}^{\;\;\mu}-\frac{1}{3}\kappa\Delta_{\lambda}\Big]\,,
\end{eqnarray}
where we have introduced the short hand notation
\begin{equation}
\Theta_{\lambda\sigma\rho}\equiv\kappa\Delta_{\lambda[\sigma\rho]}+\frac{2}{3}(a_1-1)\left[\kappa\Delta_{[\sigma}+\left(a_1-\frac{3a_2}{a_1-1}\right)S_{[\sigma}\right]g_{\rho]\lambda}\,.
\end{equation}
Adding suitable permutations of (\ref{tofindtorsion}) and (\ref{tofindsymcon}) we obtain
\begin{eqnarray}\label{torsion}
S_{\rho\nu\mu}\!\!\!&=\!\!\!&\frac{a_3}{2a_3(a_3+a_4)-4a_4^{\;2}}\left[\Theta_{\nu\rho\mu}-\Theta_{\rho\nu\mu}-\left(2\frac{a_4}{a_3}-1\right)\Theta_{\mu\rho\nu}\right]\,,\\
\label{gammafin}
\Gamma^{\xi}_{\;\;(\sigma\rho)}\!\!\!&=\!\!\!&\left\{^{\xi}_{\;\;\sigma\rho}\right\}+2S_{(\rho\;\;\sigma)}^{\;\;\;\xi}-\frac{1}{2}\kappa g^{\xi\lambda}(-\Delta_{\lambda(\sigma\rho)}+\Delta_{\rho(\sigma\lambda)}+\Delta_{\sigma(\rho\lambda)})\nonumber\\
\hspace{-1cm}&&-\frac{\kappa}{3}g^{\xi\lambda}\left(2\Delta_{(\sigma}g_{\rho)\lambda}-3\Delta_{\lambda}g_{\sigma\rho}\right)+g^{\xi\lambda}\left[S_{\lambda}g_{\sigma\rho}+\left(\frac{2}{3}a_1-2\right)S_{(\sigma}g_{\rho)\lambda}\right]\nonumber\\
\hspace{-1cm}&&-\frac{\kappa}{4}g^{\xi\lambda}\left[g_{\sigma\rho}\Delta_{\lambda\mu}^{\;\;\;\;\mu}-2g_{\lambda(\rho}\Delta_{\sigma)\mu}^{\;\;\;\;\;\;\mu}\right]\,.
\end{eqnarray}
Eqs.~(\ref{torsion}) and (\ref{gammafin}) give the antisymmetric and symmetric parts of the connection algebraically in terms of the hypermomentum and the metric. Under the condition that the matter action depends at most linearly on the connection, the above statement is equivalent to saying that we have algebraically expressed the connection in terms of the matter fields and the metric. This assumption is indeed satisfied for all common matter actions, such as scalar and gauge fields, for which the matter action does not depend on the connection, and fermions, for which the matter action is linear in the connection. This condition can be violated for some fields, for example massive vector fields, especially if non-trivial couplings between the connection and the matter are introduced. However, as long as the matter action contains only first order derivatives of the matter fields (in order for the matter fields to satisfy second order equations of motion), $\Delta_{\lambda}^{\;\;\mu\nu}$ will only depend algebraically on the connection. This implies that, even though some more complicated manipulations will be required, the connection can always be expressed algebraically in terms of the matter fields and the metric (at least at the component level).
This establishes that the independent connection in (up to) second order metric-affine actions does not carry any dynamics and can be algebraically eliminated.
Consider now using eqs.~(\ref{gammafin}) and (\ref{torsion}) to completely eliminate the connection in eq.~(\ref{maeq1}). One would then get an equation of the form
\begin{equation}
\label{grmodsource}
R_{\mu\nu}-\frac{1}{2} R\, g_{\mu\nu}=\kappa \mathcal{T}_{\mu\nu},
\end{equation}
where $R_{\mu\nu}$ and $R$ are the Ricci tensor and the Ricci scalar of the metric $g_{\mu\nu}$ respectively, and $\mathcal{T}_{\mu\nu}$ will be some second-rank tensor which depends on the metric, $\Delta_{\lambda}^{\;\;\mu\nu}$ and $T_{\mu\nu}$. The expression for $\mathcal{T}_{\mu\nu}$ in terms of these three quantities is rather lengthy and we will refrain from writing it here. However, it should already be clear that the theory described by eq.~(\ref{grmodsource}) is general relativity with modified matter interactions. For fields for which the hypermomentum vanishes, $\mathcal{T}_{\mu\nu}=T_{\mu\nu}$.
\section{Higher orders}
\label{higher}
We can now move on to higher orders. Since the connection has three indices and the derivative one index, there is no $[l^{-3}]$ scalar quantity one can construct out of them. Similarly, one cannot construct an $[l^{-3}]$ scalar quantity using curvature invariants. The next order is $[l^{-4}]$. The terms that could straightforwardly lead to invariants after (several) contractions with the metric are
\begin{eqnarray}
\label{list}
&& {\cal R}^{\alpha}_{\phantom{a}\beta\gamma\delta} {\cal R}^{\mu}_{\phantom{a}\nu\lambda\sigma}, \qquad \nabla_\mu \nabla_\nu {\cal R}^{\alpha}_{\phantom{a}\beta\gamma\delta},\qquad {\cal R}^{\alpha}_{\phantom{a}\beta\gamma\delta} S_{\mu\nu}^{\phantom{ab}\lambda} S_{\tau\omega}^{\phantom{ab}\rho},\qquad {\cal R}^{\alpha}_{\phantom{a}\beta\gamma\delta} \nabla_\rho S_{\mu\nu}^{\phantom{ab}\lambda}, \nonumber\\&& S_{\mu\nu}^{\phantom{ab}\lambda} \nabla_\rho {\cal R}^{\alpha}_{\phantom{a}\beta\gamma\delta},\qquad S_{\mu\nu}^{\phantom{ab}\lambda}S_{\alpha\beta}^{\phantom{ab}\sigma}S_{\gamma\delta}^{\phantom{ab}\kappa}S_{\tau\omega}^{\phantom{ab}\rho}, \qquad S_{\mu\nu}^{\phantom{ab}\lambda}S_{\alpha\beta}^{\phantom{ab}\sigma}\nabla_\rho S_{\gamma\delta}^{\phantom{ab}\kappa},\nonumber\\
&&S_{\mu\nu}^{\phantom{ab}\lambda}\nabla_\rho \nabla_\kappa S_{\alpha\beta}^{\phantom{ab}\sigma}, \qquad \nabla_\rho S_{\mu\nu}^{\phantom{ab}\lambda} \nabla_\kappa S_{\alpha\beta}^{\phantom{ab}\sigma}, \qquad \nabla_\mu \nabla_\nu \nabla_\rho S_{\alpha\beta}^{\phantom{ab}\sigma}\,.
\end{eqnarray}
Clearly each of these terms can lead to various invariants. It goes beyond the scope of the paper to list all possible terms.\footnote{An exhaustive list of all possible second and fourth order invariants one can construct in the more restrictive case where the non-metricity vanishes can be found in \cite{high.ord.terms}. Given that our minimal coupling assumption prevents us from using the non-metricity to construct invariants (see also below), this list should cover our case as well.} However, before going further, the following subtle points are worth mentioning:
\begin{enumerate}
\item Due to the symmetries (or lack thereof) of the Riemann tensor when constructed with an independent connection, there are more invariants than in the purely metric case. For example ${\cal R}_{\mu\nu}$ is not symmetric and hence ${\cal R}_{\mu\nu}{\cal R}_{\kappa\lambda}g^{\mu\lambda}g^{\nu\kappa}$ and ${\cal R}_{\mu\nu}{\cal R}_{\kappa\lambda} g^{\mu\kappa}g^{\nu\lambda}$ are not equal.
\item $\nabla_\mu$ is constructed with the independent connection and, hence, total divergences such as $\nabla_\mu u^\mu$ do not lead to pure surface terms and cannot be discarded.
\item Since the metric is not covariantly conserved by the independent connection, taking the covariant derivatives first and contracting, or contracting first and then taking a derivative, does not lead to the same result. For example, the terms $g^{\mu\nu} g^{\alpha\beta} \nabla_\mu \nabla_\nu {\cal R}_{\alpha\beta}$ and $g^{\mu\nu} \nabla_\mu \nabla_\nu {\cal R}$ differ.
\end{enumerate}
Regarding the second point, one could propose to split the covariant derivative into a metric covariant derivative part, which yields a surface term, and the rest, as in (\ref{decomp}). However, writing the rest explicitly would require the use of metric derivatives through the Levi-Civita connection, as discussed above. Something similar can be said about the third point. The two terms given as an example differ by a term including a covariant derivative of the metric.
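For the example given in the third point, the difference can be made explicit. Since the conventions of eq.~(\ref{nonmet}) imply $\nabla_\nu g^{\alpha\beta}=Q_{\nu}^{\phantom{a}\alpha\beta}$, a one-line computation gives
\begin{equation}
g^{\alpha\beta}\nabla_\nu{\cal R}_{\alpha\beta}=\nabla_\nu{\cal R}-Q_{\nu}^{\phantom{a}\alpha\beta}{\cal R}_{\alpha\beta}\,,
\end{equation}
so the two invariants indeed differ precisely by non-metricity terms.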
This raises the question of whether both of them should be considered. As mentioned earlier, whether terms including derivatives of the metric should be included is really a matter of choice that can be answered only by invoking some minimal coupling principle. If one wants to use the metric purely for contracting indices as suggested previously, then the terms including derivatives of the metric should be suppressed.
Let us now move on to consider the effect of the higher order terms on the dynamics of the connection. Considering the most general fourth order action is formidable due to the vast number of invariants one would have to include. However, carefully considering isolated terms of different types can still reveal the complete picture.
Clearly there are terms in eq.~(\ref{list}) that would not introduce new degrees of freedom if they were added to action (\ref{2ndorder}), as they do not contain extra derivatives, such as the $S^4$ term (indices suppressed). Such terms exist at all even orders, e.g.~$S^{2n}$ (again indices suppressed). On the other hand, $[l^{-4}]$ terms which contain two derivatives of the Cartan torsion tensor, such as $(\nabla S)^2$ (indices suppressed), would inevitably make the torsion dynamical.
What about fourth order curvature invariants? Let us for the moment set aside the term $\mathcal{R}^2$, since it belongs to the general $f(\mathcal{R})$ class, which we will discuss extensively later, and as we will see it constitutes a rather special case.
A much more characteristic example to consider, which is simple enough to keep calculations tractable and yet general enough to give us the bigger picture, is the following
\begin{equation}
\label{actionl4}
S=\frac{1}{16\,\pi\,l_p^2} \int dx^4 \sqrt{-g}\left[ {\cal R} +l_p^2 {\cal R}_{\mu\nu}{\cal R}_{\kappa\lambda}(a g^{\mu\kappa}g^{\nu\lambda}+b g^{\mu\lambda}g^{\nu\kappa})\right]+S_M
\end{equation}
As mentioned earlier, when $\mathcal{R}_{\mu\nu}$ is not symmetric, as in our case, the two terms in the parentheses will not lead to the same invariant. In fact, the action can be re-written as
\begin{equation}
S=\frac{1}{16\,\pi\,l_p^2} \int dx^4 \sqrt{-g}\left[ {\cal R} +l_p^2 c_1 {\cal R}_{(\mu\nu)}{\cal R}^{(\mu\nu)}+ l_p^2 c_2 {\cal R}_{[\mu\nu]}{\cal R}^{[\mu\nu]}\right]+S_M
\end{equation}
where $c_1=a+b$ and $c_2=a-b$.
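This is easily verified by decomposing the Ricci tensor into its symmetric and antisymmetric parts: the cross terms drop out of the contractions, so that
\begin{equation}
{\cal R}_{\mu\nu}{\cal R}_{\kappa\lambda}\left(a g^{\mu\kappa}g^{\nu\lambda}+b g^{\mu\lambda}g^{\nu\kappa}\right)=(a+b)\,{\cal R}_{(\mu\nu)}{\cal R}^{(\mu\nu)}+(a-b)\,{\cal R}_{[\mu\nu]}{\cal R}^{[\mu\nu]}\,.
\end{equation}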
This latter form of the action makes the variation easier. The field equations for the metric and the connection are respectively
\begin{eqnarray}
\label{geq}
&&{\cal R}_{(\mu\nu)}-\frac{1}{2} \left({\cal R}+l_p^2 c_1 {\cal R}_{(\alpha\beta)}{\cal R}^{(\alpha\beta)}+ l_p^2 c_2 {\cal R}_{[\alpha\beta]}{\cal R}^{[\alpha\beta]}\right) g_{\mu\nu}\nonumber \\&&\qquad\qquad+2 l_p^2 \,c_1 R_{(\alpha\mu)}R_{(\beta \nu)}g^{\alpha\beta}+2 l_p^2 \,c_2 R_{[\alpha\mu]}R_{[\beta \nu]}g^{\alpha\beta}=\kappa T_{\mu\nu}\,,\\
\label{geq2}
&&\frac{1}{\sqrt{-g}}\Big\{-\nabla_\lambda\left[\sqrt{-g} g^{\mu\nu}+2\sqrt{-g} \left(l_p^2 c_1{\cal R}^{(\mu\nu)}+l_p^2 c_2{\cal R}^{[\mu\nu]}\right) \right]+\nonumber\\
&&\qquad\qquad+\nabla_\sigma\left[\sqrt{-g}g^{\mu\sigma}+2 \sqrt{-g} \left(l_p^2 c_1{\cal R}^{(\mu\sigma)}+l_p^2 c_2{\cal R}^{[\mu\sigma]}\right)\right]\delta_{\;\;\lambda}^{\nu}\Big\}+\nonumber\\
&&\qquad\qquad+2S^{\;\;\;\;\sigma}_{\lambda\sigma}\left[g^{\mu\nu}+2\left(l_p^2 c_1{\cal R}^{(\mu\nu)}+l_p^2 c_2{\cal R}^{[\mu\nu]}\right)\right]\nonumber\\&&\qquad\qquad-2S^{\;\;\;\;\sigma}_{\alpha\sigma}\delta^{\nu}_{\;\;\lambda}\left[g^{\mu\alpha}+2\left(l_p^2 c_1{\cal R}^{(\mu\alpha)}+l_p^2 c_2{\cal R}^{[\mu\alpha]}\right)\right]+\nonumber\\&&\qquad\qquad
+4 \left(l_p^2 c_1{\cal R}^{(\mu\alpha)}+l_p^2c_2{\cal R}^{[\mu\alpha]}\right)S_{\alpha\lambda}^{\;\;\;\;\nu}=\kappa\Delta_{\lambda}^{\;\;\mu\nu}\,.
\end{eqnarray}
In the previous section we were able to use the field equation for the connection in order to algebraically express the latter in terms of the metric and the matter fields. Inspecting eq.~(\ref{geq2}), however, one easily realizes that, unlike eq.~(\ref{vargamdef}), it appears to include derivatives of the connection due to the presence of $\mathcal{R}_{\mu\nu}$. One could think of using eq.~(\ref{geq}) in order to algebraically express ${\cal R}_{\mu\nu}$ (at least at the component level) in terms of the metric and the matter fields (this idea is actually inspired by the specific case of $f({\cal R})$ actions in the more restricted setting of the Palatini formalism, where the connection does not couple to the matter --- this will be discussed below). If this were the case, one could eliminate ${\cal R}_{\mu\nu}$ from eq.~(\ref{geq2}) and turn it again into an algebraic equation for the connection.
However, this is not possible for generic values of $c_1$ and $c_2$, or $a$ and $b$ for the following simple reasons:
\begin{itemize}
\item ${\cal R}_{\mu\nu}$ is not necessarily symmetric and, therefore, has 16 independent components, whereas eq.~(\ref{geq}) provides only 10 component equations, as it is symmetric in $\mu$ and $\nu$. Therefore, it cannot be used to determine ${\cal R}_{\mu\nu}$ fully in terms of the metric and the components of $T_{\mu\nu}$.
\item $T_{\mu\nu}$ is not necessarily independent of the connection, as it may include covariant derivatives of certain matter fields. Therefore, even if one imposed conditions such that eq.~(\ref{geq}) could be solved algebraically to give ${\cal R}_{\mu\nu}$ in terms of the metric and $T_{\mu\nu}$, {e.g.}~the constraint ${\cal R}_{[\mu\nu]}=0$ imposed {\em a priori}, that would not actually help in algebraically expressing the connection as a function of the matter fields and the metric (at least for generic matter fields).
\end{itemize}
It should then be clear that the independent connection cannot be eliminated in metric-affine gravity once generic higher order curvature invariants have been added.
The same issue has been considered in Ref.~\cite{STV} for the simpler case of generalized Palatini gravity, {i.e.}~under the assumption that the connection does not enter the matter action. This would correspond to a vanishing $\Delta_{\lambda}^{\;\;\mu\nu}$. The first of the difficulties just discussed is still present in this case when trying to eliminate the connection algebraically by the procedure described above. However, since $T_{\mu\nu}$ is independent of the connection in generalized Palatini gravity, the second difficulty raised here is not really an issue. Hence, it is easier in this framework to write down exceptional Lagrangians for which the connection can be eliminated (it is just an auxiliary field). We refer the reader to Ref.~\cite{STV} for more details. We refrain here from discussing similar exceptions or special cases for metric-affine gravity, as this would require severe fine tuning and/or {\em a priori} constraints.
Also, we shall not consider explicitly the effect of the mixed terms which include both the Cartan torsion tensor and the Riemann or the Ricci tensor, as this would not add anything new to the qualitative understanding we presented so far. What should be clear by now is that the presence of terms including derivatives of the Cartan torsion tensor or higher order curvature invariants generically leads to a dynamical connection. Therefore, higher than second order actions generically lead to more dynamical degrees of freedom.
\section{Metric-affine $f(\mathcal{R})$ gravity as a special case}
\label{fofr}
Metric-affine $f({\cal R})$ theories of gravity have been extensively studied lately \cite{T&S}. They constitute a distinct class within higher order actions, in the sense that they allow one to treat terms of different and arbitrarily high order on the same footing. Therefore, even though within the metric-affine setup there is no reason to single out $f({\cal R})$ actions as better motivated ones --- on the contrary, restricting an action to be of this type requires fine tuning --- their simplicity is indeed a good argument for adopting them as toy-models from which to extract general lessons. On the other hand, exactly because they are so special, it is dubious whether $f({\cal R})$ actions can be considered as representative higher order metric-affine theories from the point of view of their dynamics. This is something that is worth exploring further, which is part of our motivation for considering them separately here.
The other part of our motivation comes from the observation that in the simpler setting of generalized Palatini gravity, where the connection does not enter the matter action, the whole $f({\cal R})$ class constitutes an exception for which the independent connection does not carry dynamics and can be algebraically eliminated \cite{sot2,Sotiriou:2009xt}. This is true even if the connection is not assumed to be symmetric \cite{Sotiriou:2009xt}. It is, hence, worth exploring in detail what happens in the more general metric-affine framework, in order to avoid confusion and misconceptions.
The action for $f({\cal R})$ theories reads
\begin{equation}
S=\frac{1}{16\,\pi\,l_p^{\;2}}\int d^4x\sqrt{-g} f(\mathcal{R})+S_M
\end{equation}
This action, as it stands, cannot lead to consistent field equations in the presence of matter, as the gravity part of the action has a symmetry that is not shared by the matter action. The Ricci scalar of the connection ${\cal R}$ remains invariant under the projective transformation
\begin{eqnarray}
\Gamma_{\;\;\mu\nu}^{\rho} \rightarrow\Gamma_{\;\;\mu\nu}^{\rho}+\delta_{\;\;\mu}^{\rho}\xi_{\nu}
\end{eqnarray}
($\xi_{\mu}$ being an arbitrary covariant vector field). Consequently any function $f({\cal R})$ and any action of the $f({\cal R})$ type will also be projective invariant. However, matter actions that depend on the connection will not be projective invariant. This has been discussed several times in the literature \cite{hehlkerl,schro, sand,T&S,phd}.
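This can be verified directly from the definition (\ref{ricci}): under the transformation above, the terms quadratic in $\xi_\mu$ and the mixed $\Gamma\xi$ terms cancel, leaving
\begin{equation}
{\cal R}_{\mu\nu}\rightarrow{\cal R}_{\mu\nu}+\partial_\mu\xi_\nu-\partial_\nu\xi_\mu\,,
\end{equation}
a purely antisymmetric shift, so that ${\cal R}=g^{\mu\nu}{\cal R}_{\mu\nu}$ is indeed left invariant.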
To resolve the inconsistency one needs to somehow break the projective invariance in the gravity sector. The only way to do that, given that we do not want to alter the form of the action, is to constrain the connection to some extent. The meaning of projective invariance is very similar to usual gauge invariance, in the sense that it implies that the connection can be determined only up to a projective transformation. So, to break the invariance we need a constraint that would act as ``gauge fixing''. Clearly, given the nature of the projective transformation, we essentially need to fix a vector. It has been argued in Refs.~\cite{T&S,phd} that the best choice for $f({\cal R})$ gravity is to set
\begin{eqnarray}
S_{\mu}\equiv S_{\alpha\mu}^{\;\;\;\;\alpha}=0
\end{eqnarray}
This constraint can be imposed implicitly, but also explicitly, by adding to the action the Lagrange multiplier term
\begin{eqnarray}
S_{LM}=\int d^4 x\sqrt{-g} B^{\mu}S_{\mu}\,.
\end{eqnarray}
Varying the total action with respect to the metric $g^{\mu\nu}$, the connection $\Gamma_{\;\;\mu\nu}^{\rho}$ and the Lagrange multiplier $B^{\mu}$ leads, after some simple manipulations \cite{T&V, T&S}, to the following set of field equations
\begin{equation}\label{fe1}
f'(\mathcal{R}) \mathcal{R}_{(\mu\nu)}-\frac{1}{2}f(\mathcal{R})g_{\mu\nu}=\kappa T_{\mu\nu}\,,\\
\end{equation}
\begin{eqnarray}\label{fe2}
-\nabla_{\lambda}(\sqrt{-g}f'(\mathcal{R})g^{\mu\nu})+\nabla_{\sigma}\left(\sqrt{-g}f'(\mathcal{R})g^{\sigma\mu}\right)\delta^{\nu}_{\;\;\lambda}
+2\sqrt{-g}f'(\mathcal{R})(g^{\mu\nu}S_{\lambda\sigma}^{\;\;\;\;\sigma}\nonumber\\
-g^{\mu\rho}\delta^{\nu}_{\;\;\lambda}S_{\rho\sigma}^{\;\;\;\;\sigma}+g^{\mu\sigma}S_{\sigma\lambda}^{\;\;\;\;\nu})=\kappa\sqrt{-g} \left(\Delta_{\lambda}^{\;\;\mu\nu}-\frac{2}{3}\Delta_{\sigma}^{\;\;\sigma[\nu}\delta^{\mu]}_{\;\;\lambda}\right)\!,
\end{eqnarray}
\begin{equation}
\label{fe3}
S_{\alpha\mu}^{\;\;\;\;\alpha}=0\,,
\end{equation}
where a prime denotes differentiation with respect to the argument.
We now check whether it is possible to eliminate the connection algebraically from the field equations. This can be done following a procedure similar to the one used in section \ref{2order}. A contraction of eq.~(\ref{fe2}) yields
\begin{eqnarray}
\nabla_{\sigma}\left(\sqrt{-g}f'(\mathcal{R})g^{\sigma\mu}\right)=\frac{2}{3}\kappa\sqrt{-g}\Delta_{\lambda}^{\;\;(\mu\lambda)}\,.
\end{eqnarray}
We can use this equation in order to eliminate the second term in (\ref{fe2}) to get
\begin{eqnarray}\label{field}
&&-\nabla_{\lambda}(\sqrt{-g}f'(\mathcal{R})g^{\mu\nu})+2\sqrt{-g}f'(\mathcal{R})g^{\mu\sigma}S_{\sigma\lambda}^{\;\;\;\;\nu}=\nonumber \\&&\qquad\qquad=\kappa\sqrt{-g} \left(\Delta_{\lambda}^{\;\;\mu\nu}-\frac{2}{3}\Delta_{\sigma}^{\;\;\sigma[\nu}\delta^{\mu]}_{\;\;\lambda}-\frac{2}{3}\Delta_{\sigma}^{\;\;(\mu\sigma)}\delta^{\nu}_{\;\;\lambda}\right)\,.
\end{eqnarray}
Using the identity
\begin{equation}
g_{\mu\nu}\partial_{\lambda}(\sqrt{-g}f'(\mathcal{R})g^{\mu\nu})=4\sqrt{-g}\partial_{\lambda}f'(\mathcal{R})+2f'(\mathcal{R})\sqrt{-g}\partial_{\lambda}\ln\sqrt{-g}\,,
\end{equation}
and after contracting eq.~(\ref{field}) with the metric in the $\mu$ and $\nu$ indices one gets
\begin{equation}\label{sqrt}
\partial_{\lambda}\ln\sqrt{-g}=\frac{1}{2}\left[-\frac{\kappa}{f'}\left(g_{\mu\nu}\Delta_{\lambda}^{\;\;\mu\nu}-\frac{2}{3}g_{\mu\lambda}\Delta_{\sigma}^{\;\;(\mu\sigma)}\right)-4\frac{\partial_{\lambda}f'}{f'}+2\Gamma^{\sigma}_{\;\;\sigma\lambda}\right]\,.
\end{equation}
Eliminating the density-related terms and lowering the indices as we did in the previous section, eq.~(\ref{field}) yields
\begin{eqnarray}\label{minchia}
&&-\partial_{\lambda}g_{\alpha\beta}-g_{\alpha\beta}\frac{\partial_{\lambda}f'}{f'}+\Gamma^{\mu}_{\;\;\beta\lambda}g_{\mu\alpha}+\Gamma^{\mu}_{\;\;\lambda\alpha}g_{\mu\beta}=\frac{\kappa}{f'}\Bigg[\frac{1}{2}g_{\alpha\beta}g_{\mu\nu}\Delta_{\lambda}^{\;\;\mu\nu}\\&&\qquad-\frac{1}{3}g_{\alpha\beta}g_{\mu\lambda}\Delta_{\sigma}^{\;\;(\mu\sigma)}-g_{\mu\alpha}g_{\nu\beta}\Delta_{\lambda}^{\;\;\mu\nu}+\frac{1}{3}\left(g_{\alpha\lambda}g_{\nu\beta}\Delta_{\sigma}^{\;\;\sigma\nu}+g_{\lambda\beta}g_{\mu\alpha}\Delta_{\sigma}^{\;\;\mu\sigma}\right)\Bigg]\,.\nonumber
\end{eqnarray}
Adding suitable permutations of eq.~(\ref{minchia}) one gets
\begin{eqnarray}\label{gamma}
\Gamma^{\rho}_{\;\;\alpha\beta}=\left\{^\rho_{\phantom{a}\alpha\beta}\right\}+\frac{1}{2f'}\left[\partial_{\alpha}f'\delta^{\rho}_{\;\;\beta}+\partial_{\beta}f'\delta^{\rho}_{\;\;\alpha}-g^{\rho\lambda}g_{\alpha\beta}\partial_{\lambda}f'\right]+\frac{\kappa}{f'}W_{\alpha\beta}^{\;\;\;\;\rho}\,,
\end{eqnarray}
where $\left\{^\rho_{\phantom{a}\alpha\beta}\right\}$ is the usual Levi-Civita connection associated with the metric $g_{\mu\nu}$ and $W_{\alpha\beta}^{\;\;\;\;\rho}$ is a tensor encompassing all the hypermomentum terms
\begin{eqnarray}
W_{\alpha\beta}^{\;\;\;\;\rho}=&&-\frac{1}{2}\left\{\frac{1}{2}g_{\alpha\beta}g_{\mu\nu}\Delta^{\rho\mu\nu}-g_{\mu\nu}\delta^{\rho}_{\;\;(\alpha}\Delta_{\beta)}^{\;\;\mu\nu}+\Delta_{\beta\;\;\alpha}^{\;\;\rho}+\Delta_{\alpha\beta}^{\;\;\;\;\rho}\right.\\&&-\Delta^{\rho}_{\;\;\alpha\beta}-g_{\alpha\beta}\Delta_{\sigma}^{\;\;(\rho\sigma)}+\frac{1}{3}\delta^{\rho}_{\;\;\alpha}g_{\mu\beta}\left(2\Delta_{\sigma}^{\;\;[\sigma\mu]}+\Delta_{\sigma}^{\;\;(\sigma\mu)}\right)\nonumber\\&&\left.+\frac{1}{3}\delta^{\rho}_{\;\;\beta}g_{\mu\alpha}\left(2\Delta_{\sigma}^{\;\;[\sigma\mu]}+\Delta_{\sigma}^{\;\;(\sigma\mu)}\right)\right\}\,.\nonumber
\end{eqnarray}
Eq.~(\ref{gamma}) provides an expression for the connection in terms of the metric and the hypermomentum, but also of ${\cal R}$, via the presence of $f'$. So, we essentially run into the same difficulties we faced in the previous section when trying to eliminate the connection. However, here ${\cal R}$ is just a scalar quantity.
Consider the trace of eq.~(\ref{fe1})
\begin{equation}\label{tracefe1}
\mathcal{R}f'(\mathcal{R})-2f(\mathcal{R})=\kappa T\,.
\end{equation}
For a given function $f$ this is an algebraic equation in ${\cal R}$.
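As a simple illustration, consider the (representative, not specially motivated) choice $f(\mathcal{R})=\mathcal{R}+\alpha\,l_p^{\;2}\mathcal{R}^2$ with $\alpha$ a dimensionless constant, for which eq.~(\ref{tracefe1}) becomes
\begin{equation}
\mathcal{R}\left(1+2\alpha\,l_p^{\;2}\mathcal{R}\right)-2\left(\mathcal{R}+\alpha\,l_p^{\;2}\mathcal{R}^2\right)=-\mathcal{R}=\kappa T\,,
\end{equation}
so that $\mathcal{R}=-\kappa T$ algebraically, irrespective of the value of $\alpha$.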
Setting aside pathological cases in which this equation has no root, and the exceptional case where $f(\mathcal{R})\propto\mathcal{R}^2$, which corresponds to a conformally invariant gravitational action (see Refs.~\cite{fer,sot1,T&S} for more details), eq.~(\ref{tracefe1}) can be used to express $\mathcal{R}$ as an algebraic function of $T$. This expression can in turn be used to eliminate ${\cal R}$ in favour of $T$ in the $f$ terms in eq.~(\ref{gamma}). Therefore, from now on we can think of eq.~(\ref{gamma}) as expressing the affine connection as a function of just the metric and its derivatives, $T_{\mu\nu}$, and the hypermomentum. This means we are clear of the first difficulty encountered for generic fourth order actions.
This is not the case for the second point we made previously though, {\em i.e.}~that $T_{\mu\nu}$, and hence $T$, can generically depend on the connection. Even though the requirement that the matter satisfies second order differential equations of motion essentially implies that the dependence of $T_{\mu\nu}$ on the connection will be algebraic (there can only be first covariant derivatives of the matter fields in $S_M$), the fact that there are first order derivatives of $f'$ in eq.~(\ref{gamma}) is enough to give derivatives of the connection. Therefore, in metric-affine $f({\cal R})$ gravity the connection satisfies a dynamical equation in general.
A remarkable observation is the following. Taking the antisymmetric part of (\ref{gamma}) in its lower indices we get
\begin{eqnarray}
\Gamma^{\rho}_{\;\;[\alpha\beta]}\equiv S_{\alpha\beta}^{\phantom{ab}\rho}&=&\Delta_{[\beta\;\;\alpha]}^{\;\;\;\rho}+\Delta_{[\alpha\beta]}^{\;\;\;\;\;\;\rho}-\Delta^{\rho}_{\;\;[\alpha\beta]}\\&=&g^{\rho\lambda}\left(\Delta_{\beta[\lambda\alpha]}+\Delta_{\alpha[\beta\lambda]}-\Delta_{\lambda[\alpha\beta]}\right)\,.\nonumber
\end{eqnarray}
This implies that the torsion is still non-dynamical and vanishes for matter fields with vanishing $\Delta_{\gamma}^{\;\;[\alpha\beta]}$. It is only the symmetric part of the connection that carries dynamics. As already stressed in \cite{T&S}, torsion is non-propagating in metric-affine $f({\cal R})$ gravity and it is introduced by matter fields having $\Delta_{\gamma}^{\;\;[\alpha\beta]}\neq0$.
The fact that the connection appears to satisfy a first order differential equation, at least if one assumes that $T$ does not include any derivatives of the connection, seems worrying. However, it is very difficult to tell if this is indeed a problem. Neither do we have the exact form of the equation, nor do we know which degrees of freedom hidden in the connection will actually be excited.
Of course, for matter fields which do not couple to the connection (scalar fields, gauge fields) or if one imposes that the independent connection does not enter the matter action $S_M$, $T_{\mu\nu}$ is independent of the connection as well and $\Delta_{\lambda}^{\;\;\mu\nu}=0$. In this case the connection can indeed be eliminated and one recovers the results of Palatini $f({\cal R})$ gravity \cite{Sotiriou:2009xt}. Another special case is the one where $f({\cal R})={\cal R}$, as in this case $f'=1$ and ${\cal R}$ is no longer present in eq.~(\ref{gamma}), which now takes the form
\begin{eqnarray}\label{gammaR}
\Gamma^{\rho}_{\;\;\alpha\beta}=\left\{^\rho_{\phantom{a}\alpha\beta}\right\}+\kappa\,W_{\alpha\beta}^{\;\;\;\;\rho}\,.
\end{eqnarray}
One can then write
\begin{eqnarray}
\label{ricciR}
\hspace{-0.5cm}{\cal R}_{(\alpha\beta)}&\equiv&\partial_{\rho}\Gamma_{\;\;(\alpha\beta)}^{\rho}-\partial_{(\beta}\Gamma_{\;\;\alpha)\rho}^{\rho}+\Gamma_{\;\;\sigma\rho}^{\rho}\Gamma_{\;\;(\alpha\beta)}^{\sigma}-\Gamma_{\;\;\sigma(\beta}^{\rho}\Gamma_{\;\;\alpha)\rho}^{\sigma}\\
\hspace{-0.5cm}&=&R_{\alpha\beta}+\kappa\left[\bar{\nabla}_\rho W_{(\alpha\beta)}^{\;\;\;\;\;\;\rho}-\bar{\nabla}_{(\beta}W_{\alpha)\rho}^{\;\;\;\;\;\;\rho}+W_{\sigma\rho}^{\;\;\;\;\;\;\rho}W_{(\alpha\beta)}^{\;\;\;\;\;\;\sigma}-W_{\sigma(\beta}^{\;\;\;\;\;\;\rho}W_{\alpha)\rho}^{\;\;\;\;\;\;\sigma}\right]\,,\nonumber
\end{eqnarray}
where $R_{\mu\nu}$ is the Ricci tensor of the metric $g_{\mu\nu}$ and $\bar{\nabla}_\mu$ is the covariant derivative defined with the Levi-Civita connection of the same metric. Contracting with the metric one gets
\begin{eqnarray}
\label{rsR}
{\cal R}&=&R+\kappa\left[2 \bar{\nabla}_{[\rho}W_{\phantom{a}\mu]}^{\mu\phantom{a}\rho}+W_{\sigma\rho}^{\phantom{ab}\rho}W_{\mu}^{\phantom{a}\mu\sigma}-W_{\sigma}^{\phantom{a}\mu\rho}W_{\mu\rho}^{\phantom{ab}\sigma}\right]\,.
\end{eqnarray}
We can now use eqs.~(\ref{gammaR}), (\ref{ricciR}) and (\ref{rsR}) in order to completely eliminate the connection and end up with the single field equation for the metric
\begin{eqnarray}
\label{eq:field}
G_{\alpha\beta}&&=\kappa\, T_{\alpha\beta}+\frac{\kappa}{2}g_{\alpha\beta}\left\{2 \bar{\nabla}_{[\rho}W_{\;\;\mu]}^{\mu\;\;\;\;\rho}+W_{\sigma\rho}^{\;\;\;\;\;\;\rho}W_{\mu}^{\;\;\mu\sigma}-W_{\sigma}^{\;\;\mu\rho}W_{\mu\rho}^{\;\;\;\;\;\;\sigma}\right\}\nonumber\\
&&-\kappa\left\{\bar{\nabla}_\rho W_{(\alpha\beta)}^{\;\;\;\;\;\;\rho}-\bar{\nabla}_{(\beta}W_{\alpha)\rho}^{\;\;\;\;\;\;\rho}+W_{\sigma\rho}^{\;\;\;\;\;\;\rho}W_{(\alpha\beta)}^{\;\;\;\;\;\;\sigma}-W_{\sigma(\beta}^{\;\;\;\;\;\;\rho}W_{\alpha)\rho}^{\;\;\;\;\;\;\sigma}\right\},
\end{eqnarray}
where, as usual,
\begin{equation}
G_{\alpha\beta}\equiv R_{\alpha\beta}-\frac{1}{2} R\, g_{\alpha\beta}\,,
\end{equation}
is the Einstein tensor of the metric $g_{\mu\nu}$. Therefore, $f({\cal R})={\cal R}$ metric-affine gravity reduces to general relativity with extra matter interactions. This is anyway clearly just a subcase of the most general second order action we examined in section \ref{2order} with vanishing $a_i$'s.
However, we have shown that for any other function $f({\cal R})$ the connection cannot be algebraically eliminated in the presence of matter fields that couple to it.
\section{Discussion and Conclusions}
\label{discuss}
We have studied the dynamics of theories of gravity in which the metric and the connection are independent quantities. Instead of restricting ourselves to a specific action, which would inevitably affect the generality of our conclusions, we chose to follow an approach inspired by effective field theory and attempted to understand how the dynamics of the theory are affected when increasing the order of the various invariants included in the action. To this end we first considered the most general action formed by second order invariants and then moved on to examine how these would be modified by including different types of higher order terms in the action. In both cases we imposed a generalized minimal coupling principle in order to reduce the number of terms to be considered, which excludes invariants constructed with the non-metricity or the metric curvature.
Our main conclusions are the following:
\begin{itemize}
\item Even for the most general action one can construct with second order invariants the connection does not carry any dynamics and can always be algebraically eliminated. That is, at this order, metric-affine gravity can always be written as general relativity with a modified source term or extra matter interactions. No extra degrees of freedom are excited.
\item Including higher order terms in the action changes the situation radically. The connection (or parts of it) becomes dynamical and so, it cannot be eliminated algebraically. The theory now propagates more degrees of freedom than general relativity. Thus, seen as an effective field theory, metric-affine gravity is rather peculiar and its dynamics can deceive: at the lowest order the extra degrees of freedom appear to lose their dynamics and become auxiliary fields, but once higher order terms are taken into account the extra degrees of freedom do propagate. To avoid exciting extra degrees of freedom significant fine tuning and extra {\em a priori} constraints are required.
\item $f(\mathcal{R})$ actions, which have been previously considered in metric-affine gravity, appear to constitute a distinct class with special properties. Even though the connection does carry dynamics in the presence of fields coupling to it --- unlike the simplified case of Palatini $f({\cal R})$ gravity --- torsion remains non-propagating. The propagating degrees of freedom reside only in the symmetric part of the connection. In this sense, $f({\cal R})$ actions cannot be considered representative examples of generic higher order metric-affine theories.
\end{itemize}
From an effective field theory perspective it seems that there are dynamical degrees of freedom in metric-affine gravity which appear to ``freeze" at low energies and can be eliminated in favour of extra matter interaction. This implies that a possible low energy manifestation of metric-affine gravity could be revealed in matter experiments in terms of such interactions, but the phenomenology of metric-affine theories is not limited to that. It is much richer and it includes extra propagating degrees of freedom, which can potentially be detected. A typical, but certainly not the only, example would be the presence of propagating torsion, whose consequences have been studied in a limiting setting \cite{carroll}.\footnote{See also Ref.~\cite{Shapiro:2010zq}, which appeared during the completion of this manuscript, and references therein.}
It goes beyond the scope of the current study to examine further the phenomenology of metric-affine gravity. It would be very interesting to understand in more detail how the extra degrees of freedom behave in the regime where they are dynamical and how exactly they modify matter interactions at low energies. It is also crucial to examine the predictions of such theories for energy conservation and violations of the various formulations of the equivalence principle. Such considerations would allow us to place constraints on metric-affine theories.
A possible point of concern is our use of the generalized minimal coupling principle. One could argue that it is not compatible with our effective field theory perspective, as radiative corrections would not respect such a principle. One could also feel uneasy treating non-metricity and torsion on a different footing. Indeed, the minimal coupling principle is used here mostly as a way to reduce the number of terms one has to take into consideration and it should not necessarily be considered as a fundamental principle. Abandoning it and considering the most general action possible is the next step.
As a closing remark, we would like to mention the following. Clearly, one might question how fundamental the geometrical interpretation of metric-affine gravity is. In fact, since for second order actions one can always eliminate the independent connection, the latter can be regarded as an auxiliary field. Even for actions with higher order terms, where degrees of freedom residing in the connection will be excited, one could have an equivalent representation without an independent affine connection (recall that an independent connection can always be written as the Levi-Civita connection plus a tensor). Indeed, which representation one chooses is a matter of preference, at least at the classical level, as the dynamical content of the theory is one and the same. On the other hand, it is worth pointing out that the choice of representation becomes a factor when constructing the action of the theory. It influences our choices regarding the presence of some terms by making some exclusion principles, such as minimal coupling and its generalizations, more or less appealing (see also Ref.~\cite{Sotiriou:2007zu} for a more general discussion on this issue). This is a subtle point that needs to be taken seriously into account.
\section*{Acknowledgments}
The authors would like to thank Roberto Percacci for drawing their attention to a field approach towards metric-affine gravity and Yuri Obukhov for discussions. TPS was supported in part by the STFC and in part by a Marie Curie Incoming International Fellowship.
The cutwidth of a graph is defined as the minimum possible {\em{width}} of a linear ordering of its vertices, where the width of an ordering $\sigma$ is the maximum, among all the prefixes of $\sigma$,
of the number of edges that have exactly one endpoint in the prefix.
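As a simple illustration, the cycle $C_n$ on $n\geq 3$ vertices has cutwidth exactly $2$: ordering the vertices along the cycle yields a layout in which every proper nonempty prefix cuts exactly two edges, while no ordering can do better, since every proper nonempty prefix contains some but not all vertices of the cycle and hence cuts at least two of its edges.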
Due to its natural definition, cutwidth has various applications in a range of practical fields of computer science:
whenever data is expected to be roughly linearly ordered and dependencies or connections are local, one can expect the cutwidth of the corresponding graph to be small.
These applications include circuit design, graph drawing, bioinformatics, and text information retrieval; we refer to the survey of layout parameters of Díaz, Petit, and Serna~\cite{DiazPS02} for a broader discussion.
As finding a layout of optimum width is NP-hard~\cite{garey1979computers}, the algorithmic and combinatorial aspects of cutwidth were intensively studied.
There is a broad range of polynomial-time algorithms for special graph classes~\cite{HeggernesLMP11,HeggernesHLN12,Yannakakis85}, approximation algorithms~\cite{LeightonR99}, and fixed-parameter algorithms~\cite{ThilikosSB05,ThilikosSB05a}.
In particular, Thilikos, Bodlaender, and Serna~\cite{ThilikosSB05,ThilikosSB05a} proposed a fixed-parameter algorithm for computing the cutwidth of a graph that runs\footnote{Thilikos, Bodlaender, and Serna~\cite{ThilikosSB05,ThilikosSB05a} do not
specify the parametric dependence of the running time of their algorithm. A careful analysis of their algorithm yields the above claimed running time bound.}
in time~$2^{\ensuremath{\mathcal{O}}(k^2)}\cdot n$,
where $k$ is the optimum width and $n$ is the number of vertices. Their approach is to first compute the pathwidth of the input graph, which is never larger than the cutwidth.
Then, the optimum layout can be constructed by an elaborate dynamic programming procedure on the obtained path decomposition.
To upper bound the number of relevant states, the authors had to understand how an optimum layout can look in a given path decomposition.
For this, they borrow the technique of {\em{typical sequences}} of Bodlaender and Kloks~\cite{BodlaenderK96}, which was introduced for a similar reason, but for pathwidth and treewidth instead of cutwidth.
Since the class of graphs of cutwidth at most $k$ is closed under immersions,
and the immersion order is a well-quasi ordering of graphs\footnote{All graphs considered in this paper may have parallel edges, but no loops.}~\cite{RobertsonS10},
it follows that for each $k$ there exists a {\sl finite} obstruction set $\mathcal{L}_k$ of graphs such that a graph has cutwidth at most $k$ if and only if it does not admit any graph from $\mathcal{L}_k$ as an immersion.
However, this existential result does not give any hint on how to generate, or at least estimate the sizes of the obstructions.
The sizes of obstructions are important for efficient treatment of graphs of small cutwidth; this applies also in practice, as indicated by Booth et al.~\cite{BoothGLR92} in the context of VLSI design.
The estimation of sizes of minimal obstructions for graph parameters like pathwidth, treewidth, or cutwidth, has been studied before.
For minor-closed parameters pathwidth and treewidth,
Lagergren~\cite{Lagergren98} showed that any minimal minor obstruction to admitting a path decomposition of width $k$ has size at most single-exponential in $\ensuremath{\mathcal{O}}(k^4)$,
whereas for tree decompositions he showed an upper bound double-exponential in $\ensuremath{\mathcal{O}}(k^5)$.
Less is known about immersion-closed parameters, like cutwidth.
Govindan and Ramachandramurthi~\cite{GOVINDAN2001189} showed that the number of minimal immersion obstructions for the class of graphs of cutwidth at most $k$ is at least $3^{k-7}+1$,
and their construction actually exemplifies minimal obstructions for cutwidth at most $k$ with ${(3^{k-5}-1)}/{2}$ vertices.
To the best of our knowledge, nothing was known about upper bounds for the cutwidth case.
\subsection{Results on obstructions.} Our main result concerns the sizes of obstructions for cutwidth.
\begin{theorem}\label{thm:main}
Suppose a graph $G$ has cutwidth larger than $k$, but every graph with fewer vertices or edges (strongly) immersed in $G$ has cutwidth at most $k$.
Then $G$ has at most $2^{\ensuremath{\mathcal{O}}(k^3\log k)}$ vertices and edges.
\end{theorem}
\noindent
The above result immediately gives the same upper bound on the sizes of graphs from the minimal obstruction sets $\mathcal{L}_k$
as they satisfy the prerequisites of Theorem~\ref{thm:main}.
This somewhat matches the $({3^{k-5}-1})/{2}$ lower bound of Govindan and Ramachandramurthi~\cite{GOVINDAN2001189}.
Our approach for Theorem~\ref{thm:main} follows the technique used by Lagergren~\cite{Lagergren98} to prove that minimal minor obstructions for pathwidth at most $k$ have sizes single-exponential in $\ensuremath{\mathcal{O}}(k^4)$.
Intuitively, the idea of Lagergren is to take an optimum decomposition for a minimal obstruction, which must have width $k+1$, and to assign to each prefix of the decomposition one of finitely many ``types'',
so that two prefixes with the same type ``behave'' in the same manner. If there were two prefixes, one being shorter than the other, with the same type, then one could replace one with the other, thus
obtaining a smaller obstruction. Hence, the upper bound on the number of types, being double-exponential in $\ensuremath{\mathcal{O}}(k^4)$, gives some upper bound on the size of a minimal obstruction.
This upper bound can be further improved to single-exponential by observing that types are ordered by a natural domination relation, and the shorter a prefix is, the weaker is its type.
An important detail is that one needs to make sure that the replacement can be modeled by minor operations.
For this, Lagergren uses the notion of {\em{linked path decompositions}} (a weaker variant of {\em{lean path decompositions}}; cf.~\cite{Thomas90,BellenbaumD02}).
To prove Theorem~\ref{thm:main}, we perform a similar analysis of prefixes of an optimum ordering of a minimal obstruction.
We show that prefixes can be categorized into a bounded number of types, each comprising prefixes that have the same ``behavior''.
Provided two prefixes with equally strong type appear one after the other, we can ``unpump'' the part of the graph in their difference.
To make sure that unpumping is modeled by taking an immersion, we define {\em{linked orderings}} for cutwidth and reprove the analogue of the result of Thomas~\cite{Thomas90} (see~\cite{BellenbaumD02} for simplified proofs):
there is always an optimum-width ordering that is linked.
We remark that this already follows from more general results on submodular functions: the same is true for parameters like \emph{linear rank-width}, as observed by Kant\'{e} and Kwon~\cite{KanteK14}, which in turn follows from the proof of an analogous theorem of Geelen et~al.~\cite{GeelenGW02} that applies to branch-decompositions, and thus, e.g., to parameters known as \emph{branch-width} and \emph{carving-width}.
The proof of the upper bound on the number of types essentially boils down to the following setting. We are given a graph $G$ and a subset $X$ of vertices, such that at most $\ell$ edges have exactly one endpoint in $X$.
The question is what $X$ can look like in an optimum-width ordering of $G$. We prove that there is always an ordering where $X$ is split into at most $\ensuremath{\mathcal{O}}(k\ell)$ blocks, where $k$ is the optimum width.
This allows us to store the relevant information on the whole $X$ in one of a constant number of types (called {\em{bucket interfaces}}).
The swapping argument used in this proof holds the essence of the typical sequences technique of Bodlaender and Kloks~\cite{BodlaenderK96}, while being, in our opinion, more natural and easier to understand.
As an interesting byproduct, we can also use our understanding to treat the problem of removing edges to get a graph of small cutwidth.
More precisely, for parameters $w,k$, we consider the class of all graphs $G$, such that $w$ edges can be removed from $G$ to obtain a graph of cutwidth at most $k$.
We prove that for every constant $k$, the minimal (strong) immersion obstructions for this class have sizes bounded {\em{linearly}} in $w$. Moreover, we give an exponential lower bound on the number of these obstructions. These results are presented in Section~\ref{sec:remddist}.
\subsection{Algorithmic results.} Consider the following ``compression'' problem: given a graph $G$ and its ordering $\sigma$ of width $\ell$, we would like to construct, if possible, a new ordering
of the vertices of $G$ of width at most $k$, where $k<\ell$.
Then the types defined above essentially match states that would be associated with prefixes of $\sigma$ in a dynamic programming algorithm solving this problem. Alternatively, one can think
of building an automaton that traverses the ordering $\sigma$ of width $\ell$ while constructing an ordering of $G$ of width at most $k$.
Hence, our upper bound on the number of types can be directly used to limit the state space in such a dynamic programming procedure/automaton, yielding an FPT algorithm for the above problem.
With this result in hand, it is not hard to design an exact FPT algorithm for cutwidth.
One could introduce vertices one by one to the graph, while maintaining an ordering of optimum width.
Each time a new vertex is introduced, we put it anywhere into the ordering, and it can be argued that the new ordering has width at most three times larger than the optimum.
Then, the dynamic programming algorithm sketched above can be used to ``compress'' this approximate ordering to an optimum one in linear FPT time.
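In outline, the quadratic variant of this scheme can be sketched as follows. The Python fragment below is an illustration only, with our own minimal graph encoding (a vertex list and a list of edge pairs, parallel edges repeated); the stand-in \texttt{compress\_stub} searches all orderings by brute force and is \emph{not} the dynamic programming routine developed in Section~\ref{sec:algo}.
\begin{verbatim}
from itertools import permutations

def width(edges, order):
    pos = {v: i for i, v in enumerate(order)}
    return max((sum(1 for a, b in edges if (pos[a] <= i) != (pos[b] <= i))
                for i in range(len(order) - 1)), default=0)

def compress_stub(vertices, edges, k, _approx_order):
    # Stand-in for the compression routine of Section 5; here we ignore
    # the approximate ordering and simply search all orderings.
    for p in permutations(vertices):
        if width(edges, p) <= k:
            return list(p)
    return None

def cutwidth_ordering(vertices, edges, k, compress=compress_stub):
    order = []
    for v in vertices:
        order.append(v)          # insert the new vertex at the end
        prefix = set(order)
        sub = [e for e in edges if e[0] in prefix and e[1] in prefix]
        order = compress(order, sub, k, order)
        if order is None:
            return None          # some prefix already has cutwidth > k
    return order

# K_3 has cutwidth 2:
V, E = [1, 2, 3], [(1, 2), (2, 3), (1, 3)]
assert cutwidth_ordering(V, E, 2) is not None
assert cutwidth_ordering(V, E, 1) is None
\end{verbatim}
\noindent
Returning failure as soon as some prefix exceeds width $k$ is sound, because an induced subgraph is in particular an immersion, and the class of graphs of cutwidth at most $k$ is immersion-closed.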
This iterative approach yields a quadratic algorithm. To match the optimal, linear running time,
we use a trick similar to the one used by Bodlaender in his
linear-time algorithm for computing the treewidth of a graph~\cite{Bodlaender96}. Namely, we show that instead of processing vertices one by one,
we can proceed recursively by removing a significant fraction of all the edges at each step, so that their reintroduction increases the width at most twice.
We then run the compression algorithm on the obtained 2-approximate ordering to get an optimum one.
The main point is that, since we remove a large portion of the graph at each step, the recursive equation on the running time solves to a linear function, instead of quadratic.
This gives the following.
\begin{theorem}\label{thm:algo}
There exists an algorithm that, given an $n$-vertex graph $G$ and an integer $k$, runs in time $2^{\ensuremath{\mathcal{O}}(k^2\log k)}\cdot n$ and either correctly concludes that the cutwidth of $G$ is larger than $k$,
or outputs an ordering of $G$ of width at most $k$.
\end{theorem}
The algorithm of Theorem~\ref{thm:algo} has running time slightly larger than that of Thilikos, Bodlaender, and Serna~\cite{ThilikosSB05,ThilikosSB05a}.
The difference is the $\log k$ factor in the exponent, the reason for which is that we use a simpler bucketing approach to bound the number of states, instead of the more entangled,
but finer, machinery of typical sequences.
We believe the main strength of our approach lies in its explanatory character.
Instead of relying on algorithms for computing tree or path decompositions, which are already difficult by themselves, and then designing a dynamic programming algorithm on a path decomposition,
we directly approach cutwidth ``via cutwidth'', and not ``via pathwidth''. That is, the dynamic programming procedure for computing the optimum cutwidth ordering on an approximate cutwidth ordering
is technically far simpler and conceptually more insightful than performing the same on a general path decomposition.
We also show that the ``reduction-by-a-large-fraction'' trick of Bodlaender~\cite{Bodlaender96} can also be performed in the cutwidth setting, yielding a self-contained, natural, and understandable algorithm.
\section{Preliminaries}\label{sec:prelims}
We denote the set of non-negative integers by $\ensuremath{\mathbb{N}}$ and the set of positive integers by $\ensuremath{\mathbb{N}}^+$.
For $r,s\in \ensuremath{\mathbb{N}}$ with $r\leq s$, we denote $[r]=\{1,\ldots,r\}$ and $\intv{r}{s}=\{r,\ldots,s\}$.
Notice that $[0]=\emptyset$.
\paragraph{Graphs.}
All graphs considered in this paper are undirected, without loops, and may have multiple edges.
The vertex and edge sets of a graph $G$ are denoted by $V(G)$ and $E(G)$, respectively.
For disjoint $X,Y\subseteq V(G)$, by $E_G(X,Y)$ we denote the set of edges of $G$ with one endpoint in $X$ and one in $Y$.
If $S\subseteq V(G)$, then we denote $\delta_{G}(S)=|E_G(S,V(G)\setminus S)|$.
We drop the subscript if it is clear from the context.
Every partition $(A,B)$ of $V(G)$ is called a {\em{cut of $G$}}; the {\em{size}} of the cut $(A,B)$ is $\delta(A)$.
\paragraph{Cutwidth.}
Let $G$ be a graph and $\sigma$ be an ordering of $V(G)$.
For $u,v\in V(G)$, we write $u<_{\sigma} v$ if $u$ appears before $v$ in $\sigma$.
Given two disjoint sequences $\sigma_{1}=\langle x_1,\ldots,x_{r_{1}}\rangle$
and $\sigma_{2}=\langle y_1,\ldots,y_{r_{2}}\rangle$ of vertices in $V(G)$, we define their {\em{concatenation}} as $\sigma_1\circ \sigma_{2}=\langle x_1,\ldots,x_{r_{1}},y_1,\ldots,y_{r_{2}}\rangle$.
For $X\subseteq V(G)$, let $\sigma_X$ be the ordering of $X$ induced by $\sigma$, i.e., the
ordering obtained from $\sigma$ by removing the vertices that do not belong to $X$.
For a vertex $v$ we denote by $V_{v}^{\sigma}$ the set $\{u\in V(G) \mid u \leq_{\sigma} v\}$.
A \emph{$\sigma$-cut} is any cut of the form $(V^\sigma_v,V(G) \setminus V^\sigma_v)$ for $v\in V(G)$.
The {\em cutwidth of an ordering $\sigma$ of $G$} is defined as $\ensuremath{\mathbf{cw}}\xspace_{\sigma}(G) = \max_{v \in V(G)} \delta(V^{\sigma}_{v})$. The {\em cutwidth of $G$}, $\ensuremath{\mathbf{cw}}\xspace(G)$, is the minimum of $\ensuremath{\mathbf{cw}}\xspace_{\sigma}(G)$ over
all possible orderings of $V(G)$.
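To make these definitions concrete, the following Python sketch (an illustration only, not part of the formal development) computes $\ensuremath{\mathbf{cw}}\xspace_{\sigma}(G)$ from an ordering, and the cutwidth of a tiny multigraph by exhaustive search; parallel edges are modelled as repeated pairs in the edge list.
\begin{verbatim}
from itertools import permutations

def width_of_ordering(edges, sigma):
    # Maximum size of a sigma-cut, i.e. of a cut between a prefix of
    # sigma and the remaining suffix.
    pos = {v: i for i, v in enumerate(sigma)}
    best = 0
    for i in range(len(sigma) - 1):   # cut between positions i and i+1
        best = max(best, sum(1 for u, v in edges
                             if (pos[u] <= i) != (pos[v] <= i)))
    return best

def cutwidth_bruteforce(vertices, edges):
    # cw(G): minimum width over all orderings (tiny graphs only).
    return min(width_of_ordering(edges, list(p))
               for p in permutations(vertices))

# The 4-cycle has cutwidth 2: every proper nonempty prefix cut has size
# at least 2, and the cyclic order achieves exactly 2.
V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4), (4, 1)]
assert cutwidth_bruteforce(V, E) == 2
\end{verbatim}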
\paragraph{Obstructions.}
Let $\leq $ be a partial order on graphs.
We say that $G'\lneqq G$ if $G'\leq G$ and $G'$ is not isomorphic to $G$.
A graph class ${\cal G}$ is {\em{closed under $\leq$}} if whenever $G'\leq G$ and $G\in {\cal G}$, we also have that $G'\in {\cal G}$.
Given a partial order $\leq$
and a graph class ${\cal G}$ closed under $\leq$, we define the {\em{(minimal) obstruction
set}} of ${\cal G}$ w.r.t. $\leq $, denoted by ${\bf obs}_{\leq}({\cal G})$, as the set containing all graphs
$G$ for which the following two conditions hold:
\begin{itemize}
\item[]\hspace{-.97cm} {\bf O1}: $G\not\in {\cal G}$, i.e., $G$ is not a member of ${\cal G}$, and
\item[]\hspace{-.97cm} {\bf O2}: for each $G'$ with $G'\lneqq G$, we have that $G'\in{\cal G}$.
\end{itemize}
We say that a set of graphs ${\cal H}$ is a {\em $\leq$-antichain} if it does not contain any pair of comparable elements w.r.t. $\leq$. By definition, for any class
${\cal G}$ closed under $\leq$, the set ${\bf obs}_{\leq}({\cal G})$ is an antichain.
\looseness=-1
\paragraph{Immersions.}
Let $H$ and $G$ be graphs. We say that $G$ contains $H$ as an
\emph{immersion} if there is a pair of functions $(\phi, \psi)$, called
an $H$-\emph{immersion model of $G$}, such that $\phi$ is an injection from $V(H)$ to $V(G)$ and
$\psi$ maps every edge $uv$ of $H$ to a path of $G$ between
$\phi(u)$ and $\phi(v)$ so that
different edges are mapped to edge-disjoint paths.
Every vertex in the image of $\phi$ is called a \emph{branch vertex}.
If we additionally demand that no internal vertex of a path in $\psi(E(H))$ is a branch vertex, then we say that $(\phi, \psi)$
is a {\em strong $H$-immersion model} and $H$ is a
{\em strong immersion} of $G$. We denote by $H\leq_{\rm i} G$ ($H\leq_{\rm si} G$) the fact that $H$ is an immersion (strong immersion) of $G$; these are partial orders.
Clearly, for any two graphs $H$ and $G$, if $H\leq_{\rm si}G$ then $H\leq_{\rm i}G$. This implies the following observation:
\begin{observation}
\label{osesimme}
If ${\cal G}$ is a graph class closed under $\leq_{\rm i}$,
then ${\bf obs}_{\leq_{\rm i}}({\cal G})\subseteq {\bf obs}_{\leq_{\rm si}}({\cal G})$.
\end{observation}
Robertson and Seymour proved in~\cite{RobertsonS10} that every
$\leq_{\rm i}$-antichain is finite and conjectured the same for $\leq_{\rm si}$.
It is well-known that for every $k\in \mathbb{N}$, the class ${\cal C}_{k}$ of graphs of cutwidth at most $k$ is closed under immersions.
It follows from the results of~\cite{RobertsonS10} that $\ensuremath{\mathbf{obs}}_{\leq_{\rm i}}(\ensuremath{\mathcal{C}}_{k})$ is finite; the goal of this paper is to provide good estimates on the sizes of graphs in $\ensuremath{\mathbf{obs}}_{\leq_{\rm si}}({\cal C}_{k})$.
As the cutwidth of a graph is the maximum cutwidth over its connected components, it follows that graphs in $\ensuremath{\mathbf{obs}}_{\leq_{\rm si}}(\ensuremath{\mathcal{C}}_{k})$ are connected.
Moreover, every graph in $\ensuremath{\mathbf{obs}}_{\leq_{\rm si}}(\ensuremath{\mathcal{C}}_{k})$ has cutwidth exactly $k+1$: its cutwidth exceeds $k$ by {\bf O1}, while removing any single edge yields a graph of cutwidth at most $k$ (by {\bf O2}) and decreases the cutwidth by at most one.
\section{Bucket interfaces}\label{sec:interfaces}
Let $G$ be a graph and $\sigma$ be an ordering of $V(G)$.
For a set $X\subseteq V(G)$, the {\em{$X$-blocks}} in $\sigma$ are the maximal subsequences of consecutive vertices of $\sigma$ that belong to $X$.
Suppose $(A,B)$ is a cut of $G$. Then we can write
$\sigma = b_1\circ\ldots\circ b_p,$
where $b_1,\ldots,b_p$ are the $A$- and $B$-blocks in $\sigma$; these will be called jointly {\em{$(A,B)$-blocks}}.
The next lemma is the cornerstone of our approach: we prove that given a graph $G$ and a cut $(A,B)$ of $G$,
there exists an optimum cutwidth ordering of $G$ where the number of blocks depends only on the cutwidth and the size of $(A,B)$.
\begin{lemma}\label{lem:num_blocks}
Let $\ell\in \mathbb{N}^+$ and $G$ be a graph. If $(A,B)$ is a cut of $G$ of size $\ell$,
then there is an optimum cutwidth ordering $\sigma$ of $V(G)$ with
at most $(2\ell+1) \cdot (2\ensuremath{\mathbf{cw}}\xspace(G)+3)+2\ell$ $(A,B)$-blocks.
\end{lemma}
\begin{proof}
Let $\sigma$ be an optimum cutwidth ordering such that, subject to the width being minimum, the number of $(A,B)$-blocks it defines is also minimized.
Let $\sigma = b_1 \circ b_2 \circ \dots \circ b_p$, where $b_1, b_2, \dots, b_p$ are the $(A,B)$-blocks of $\sigma$.
If $\sigma$ defines fewer than three blocks, then the claim already follows, so let us assume $p \geq 3$.
Consider any ordering $\sigma'$ obtained by swapping two blocks, i.e., $\sigma' = b_1 \circ \dots \circ b_{j-1} \circ b_{j+1} \circ b_j \circ b_{j+2} \circ \dots \circ b_p$, for some $j \in [p-1]$.
Observe that since the blocks $b_1, \dots, b_p$ alternate as $A$-blocks and $B$-blocks, the ordering $\sigma'$ has a strictly smaller number of blocks;
indeed, either $j-1 \geq 1$, in which case $b_{j-1} \circ b_{j+1}$ defines a single block of $\sigma'$, or $j=1$ and hence $j+2 \leq p$, in which case $b_j \circ b_{j+2}$ does.
Therefore, by the choice of $\sigma$, for each $j\in [p-1]$, swapping $b_j$ and $b_{j+1}$ in $\sigma$ must yield an ordering with strictly larger cutwidth.
We call a block \emph{free} if it does not contain any endpoint of the cut edges $E_G(A,B)$.
We now prove that any sequence of consecutive free blocks in $\sigma$ has at most $2\ensuremath{\mathbf{cw}}\xspace(G)+3$ blocks.
Since the cut $(A,B)$ has size $\ell$, there are at most $2\ell$ blocks that are not free.
This implies the claimed bound on the total number of all blocks in $\sigma$.
Suppose, to the contrary, that there exists a sequence of $q>2\ensuremath{\mathbf{cw}}\xspace(G)+3$ consecutive free blocks in $\sigma$. Let these blocks be
$b_r,b_{r+1},\ldots,b_{s}$, where $s-r+1=q$.
For $j \in [r,s-1]$, we define $\mu(j)$ to be the size of the cut between all vertices inside or preceding the vertices of block $b_j$ and all vertices inside or following the vertices
of block $b_{j+1}$ in $\sigma$; see Figure~\ref{fig:num-blocks}.
\begin{claim}\label{clm:clm1}
For all $j\in [r+1,s-2]$, we have that $\mu(j-1) > \mu(j)$ or $\mu(j) < \mu(j+1)$.
\end{claim}
\begin{proof}
Suppose that for some $j\in [r+1,s-2]$,
$\mu(j) \geq \max(\mu(j-1),\mu(j+1))$.
We will then show that the ordering $\sigma'$ obtained by swapping the blocks $b_{j}$ and $b_{j+1}$ still has optimum cutwidth, a contradiction to the choice of $\sigma$.
Notice that for every vertex $v$ preceding all vertices of $b_{j}$ or succeeding all vertices of $b_{j+1}$, $\delta(V^{\sigma'}_{v})=\delta(V^{\sigma}_{v})$.
Thus, it remains to show that for any vertex $v$ belonging to the block $b_{j}$ or to the block $b_{j+1}$, also $\delta(V^{\sigma'}_{v})\leq \delta(V^{\sigma}_{v})$.
Let $p_{j}$ be the number of edges of $G$ with one endpoint in the block $b_{j}$ and the other endpoint preceding (in $\sigma$) all vertices of $b_{j}$.
Let also $s_{j}$ be the number of edges of $G$ with one endpoint in $b_j$ and the other endpoint succeeding (in $\sigma$) all vertices of $b_{j}$ (and hence succeeding all vertices of block $b_{j+1}$, since both $b_{j}$ and $b_{j+1}$ are free).
Notice that $\mu(j)=\mu(j-1) - p_{j} + s_{j}$ and recall that $\mu(j)\geq\mu(j-1)$. This yields that $s_{j} \geq p_{j}.$
Similarly, let $p_{j+1}$ be the number of edges of $G$ with one endpoint in $b_{j+1}$ and the other endpoint preceding all vertices of the block $b_{j+1}$
(and, in particular, all vertices of block $b_{j}$).
Let also $s_{j+1}$ be the number of edges of $G$ with one endpoint in $b_{j+1}$ and the other endpoint succeeding all vertices of block $b_{j+1}$. Again, we have $\mu(j+1)=\mu(j) - p_{j+1} + s_{j+1}$ and $\mu(j)\geq\mu(j+1)$. This yields that $p_{j+1} \geq s_{j+1}.$
Let $v$ be a vertex of the block $b_{j}$.
Recall that the blocks $b_{j}$ and $b_{j+1}$ are free and thus, there are no edges between them.
Observe then that $\delta(V^{\sigma'}_{v})=\delta(V^{\sigma}_{v})+s_{j+1}-p_{j+1}\leq \delta(V^{\sigma}_{v})$.
Symmetrically, for any vertex $v$ in $b_{j+1}$, observe that $\delta(V^{\sigma'}_{v})=\delta(V^{\sigma}_{v})+p_{j}-s_{j}\leq \delta(V^{\sigma}_{v})$.
Thus, $\ensuremath{\mathbf{cw}}\xspace_{\sigma'}(G)\leq \ensuremath{\mathbf{cw}}\xspace_{\sigma}(G)=\ensuremath{\mathbf{cw}}\xspace(G)$, a contradiction.
\renewcommand{\qed}{\cqedsymbol}\end{proof}
\begin{figure}[t]
\centering
\begin{tikzpicture}[scale = 1]
\node at (0,0) {$\cdots$};
\node[A] (v1) at (1,0) {};
\draw (-0.2,-0.3) -- (1,-0.3);
\node[B] (v2) at (2,0) {};
\node[B] (v3) at (3,0) {};
\node[B] (v4) at (4,0) {};
\draw (2,-0.3) -- (4,-0.3);
\node[A] (v5) at (5,0) {};
\node[A] (v6) at (6,0) {};
\node[A] (v7) at (7,0) {};
\draw (5,-0.3) -- (7,-0.3);
\node[B] (v8) at (8,0) {};
\node[B] (v9) at (9,0) {};
\node[B] (v10) at (10,0) {};
\node[B] (v11) at (11,0) {};
\draw (8,-0.3) -- (11,-0.3);
\node[A] (v12) at (12,0) {};
\node[A] (v13) at (13,0) {};
\draw (12,-0.3) -- (13,-0.3);
\node at (14,0) {$\cdots$};
\draw (0,0.9) to [bend left=8] (14,1.0);
\draw (0,0.3) to[bend left] (v1);
\draw (v2) to[bend left] (v4);
\draw (v3) to[bend left=20] (v4);
\draw[yellow!35!black,ultra thick] (0,0.5) to [bend left] (v6);
\draw[yellow!35!black,ultra thick] (v1) to[bend left=25] (v5);
\draw[yellow!35!black,ultra thick] (v1) to[bend left=25] (v7);
\draw (v3) to[bend left=25] (v8);
\draw (v5) to [bend left=20] (v6);
\draw (v6) to [bend left=20] (v7);
\draw[very thick] (v6) to [bend left=20] (14, 0.8);
\draw (v2) to [bend left] (v10);
\draw (v8) to [bend left] (v10);
\draw (v9) to [bend left=20] (v10);
\draw (v9) to [bend left] (v11);
\draw (v8) to [bend left] (v11);
\draw (v9) to [bend left] (14,0.6);
\draw (v13) to [bend left=20] (14,0.2);
\draw (v12) to [bend left=20] (v13);
\draw (v12) to [bend left] (14,0.4);
\node at (3,-0.6) {block $j-1$};
\node at (4.5,1.8) {$\mu(j-1)$};
\draw[dashed] (4.5,1.6)--(4.5,-0.3);
\node at (6,-0.6) {block $j$};
\node at (7.5,1.8) {$\mu(j)$};
\draw[dashed] (7.5,1.6)--(7.5,-0.3);
\node at (9.5,-0.6) {block $j+1$};
\node at (11.5,1.8) {$\mu(j+1)$};
\draw[dashed] (11.5,1.6)--(11.5,-0.3);
\node at (12.5,-0.6) {~~~~~~~~block $j+2$};
\end{tikzpicture}
\caption{A cut $(A,B)$ is highlighted (blue, red), with the corresponding blocks underlined and cuts between them marked with dashed lines.
Edges counted as $p_{j}$ and $s_{j}$ are thickened.}
\label{fig:num-blocks}
\end{figure}
Claim~\ref{clm:clm1} shows that for all $j\in [r+1,s-2]$, we have $\mu(j-1) > \mu(j)$ or $\mu(j) < \mu(j+1)$. It follows that any non-decreasing pair $\mu(j-1)\leq \mu(j)$
must be followed by an increasing pair $\mu(j) < \mu(j+1)$. Hence, if $j_{\min}$ is the minimum index such that $\mu(j_{\min})\leq \mu(j_{\min}+1)$,
then the sequence $\mu(j)$ has to be strictly decreasing up to $j_{\min}$ and strictly increasing from $j_{\min}+1$ onward. Since $0\leq \mu(j) \leq \ensuremath{\mathbf{cw}}\xspace(G)$ for all $j$,
each of these two monotone parts consists of at most $\ensuremath{\mathbf{cw}}\xspace(G)+1$ terms, so the sequence $\mu(r),\ldots,\mu(s-1)$ has at most $2\ensuremath{\mathbf{cw}}\xspace(G)+2$ terms.
Hence $q=s-r+1\leq 2\ensuremath{\mathbf{cw}}\xspace(G)+3$, contradicting the assumption that $q>2\ensuremath{\mathbf{cw}}\xspace(G)+3$ and concluding the proof.
\end{proof}
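The bookkeeping in this proof is easy to make concrete. The following Python sketch (our own encoding, for illustration only) extracts the $(A,B)$-blocks of an ordering and swaps two adjacent blocks; as observed above, a swap merges neighbouring runs and so strictly decreases the number of blocks. Note that in the proof only adjacent \emph{free} blocks are swapped, since only for those the width provably does not increase; the toy example merely illustrates the block structure.
\begin{verbatim}
def blocks(order, A):
    # Maximal runs of consecutive vertices lying inside A or outside A.
    runs = [[order[0]]]
    for v in order[1:]:
        if (v in A) == (runs[-1][-1] in A):
            runs[-1].append(v)
        else:
            runs.append([v])
    return runs

def swap_blocks(order, A, j):
    # The ordering with the j-th and (j+1)-st (A,B)-blocks exchanged
    # (0-indexed).
    bs = blocks(order, A)
    bs[j], bs[j + 1] = bs[j + 1], bs[j]
    return [v for b in bs for v in b]

# On the path 1-2-3-4 with A = {1, 3}, the ordering <1,2,3,4> has four
# (A,B)-blocks; swapping the second and the third merges runs on both
# sides, leaving only two blocks.
sigma, A = [1, 2, 3, 4], {1, 3}
tau = swap_blocks(sigma, A, 1)          # -> [1, 3, 2, 4]
assert len(blocks(sigma, A)) == 4 and len(blocks(tau, A)) == 2
\end{verbatim}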
We use the above lemma to bound the number of ``types'' of prefixes in graph orderings.
To describe such a prefix, i.e., one side of a cut in a graph, we use the following definition.
\begin{definition}\label{def:boundaried-graph}
A \emph{$k$-boundaried graph} is a pair $\mathbf{G}=(G,\bar{x})$ where $G$ is a graph and $\bar{x}=(x_{1},\dots,x_k)$ is a $k$-tuple of the graph's {\em{boundary vertices}} (ordered, not necessarily distinct).
The \emph{extension} of $\mathbf{G}$ is the graph $G^{*}$ obtained from $G$ by adding $k$ new vertices $x_{1}',\dots,x_k'$ and edges $x_{1} x_{1}', \dots, x_k x_k'$.
The \emph{join} $\mathbf{A} \oplus \mathbf{B}$ of two $k$-boundaried graphs $\mathbf{A} = (A, \bar{x}), \mathbf{B}=(B,\bar{y})$ is the graph obtained from the disjoint union of $A$ and $B$ by adding an edge $x_{i} y_{i}$ for $i\in[k]$.
\end{definition}
From Lemma~\ref{lem:num_blocks} we derive that for any given cut $(A,B)$ of size $\ell$ of a graph $G$ with $\ensuremath{\mathbf{cw}}\xspace(G)\leq k$, there is an optimum cutwidth ordering in which
the vertices of $A$ occur in $\ensuremath{\mathcal{O}}(k\ell)$ blocks.
Our next goal is to show that the only information about $A$ that can affect the cutwidth of $G$ is the placement of the endpoints of each cut edge ($x_{i}$ and $x_{i}'$) into blocks, and
the width of each block (as an induced subgraph of $A$ or $A^{*}$).
Recall that for an ordering $\sigma$ of $V(G)$, \emph{$\sigma$-cuts} are cuts of the form $(V^\sigma_v, V(G)\setminus V^\sigma_v)$, for $v \in V(G)$.
\begin{definition}
Let $G$ be a graph and $\sigma$ be an ordering of its vertices. An \emph{$\ell$-bucketing} of $\sigma$ is a function
$T\colon V(G) \to [\ell]$ such that $T(u)\leq T(v)$ for any $u$ appearing before $v$ in $\sigma$.
For every $i\in [\ell]$, the set $T^{-1}(i)$ will be called a {\em bucket}; a bucket is naturally ordered by $\sigma$.
For every bucket $T^{-1}(i)$, $i \in [\ell]$, let $\mathtt{cuts}(G,\sigma,T,i)$ be the family of $\sigma$-cuts containing on one side all vertices of buckets appearing before
$i$ and a prefix (in $\sigma$) of the $i$-th bucket.
The \emph{width} of bucket $i$, $i\in[\ell]$, is the maximum size of a cut in the family $\mathtt{cuts}(G,\sigma,T,i)$.
Formally,
\begin{eqnarray*}
\mathtt{cuts}(G,\sigma,T,i) & = & \left\{ \left(T^{-1}([1,i-1])\ \cup\ L,\quad R\ \cup\ T^{-1}([i+1,\ell])\right) \colon \right. \\
& &~~\left. (L,R)\mbox{ is a }\sigma\mbox{-cut of }T^{-1}(i)\right\},\\
\mathtt{width}(G,\sigma,T,i) & = & \max \left\{\, \left|E_G(L,R)\right|\ \colon\ (L,R) \in \mathtt{cuts}(G,\sigma,T,i)\, \right\}.
\end{eqnarray*}
\end{definition}
\noindent
Notice that every $\sigma$-cut of $G$ is in $\mathtt{cuts}(G,\sigma,T,i)$ for at least one bucket $i \in [\ell]$;
since $\ensuremath{\mathbf{cw}}\xspace_\sigma(G)$ is the maximum of $\left|E_G(L,R)\right|$ over $\sigma$-cuts $(L,R)$, we have
\begin{equation}\label{eq:cwdthsgm}\ensuremath{\mathbf{cw}}\xspace_\sigma(G) = \max_{i\in[\ell]}\ \mathtt{width}(G,\sigma,T,i).\end{equation}
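The quantity $\mathtt{width}(G,\sigma,T,i)$ can be evaluated directly from the definition; the Python sketch below (illustrative, with buckets encoded as a dictionary $T$) does so. For simplicity it also admits the empty prefix of a bucket as a cut; this matches the convention used later for empty buckets and may only enlarge the family of cuts considered.
\begin{verbatim}
def crossing(edges, left):
    return sum(1 for a, b in edges if (a in left) != (b in left))

def bucket_width(edges, sigma, T, i):
    # Maximum size of a cut placing all buckets before i, plus a
    # sigma-prefix of bucket i, on the left-hand side.
    before = {v for v in sigma if T[v] < i}
    bucket = [v for v in sigma if T[v] == i]
    return max(crossing(edges, before | set(bucket[:s]))
               for s in range(len(bucket) + 1))

# Path 1-2-3-4 with sigma = <1,2,3,4> and buckets {1,2} -> 1,
# {3,4} -> 2; the maximum over buckets equals cw_sigma(G) = 1.
E = [(1, 2), (2, 3), (3, 4)]
sigma, T = [1, 2, 3, 4], {1: 1, 2: 1, 3: 2, 4: 2}
assert max(bucket_width(E, sigma, T, i) for i in (1, 2)) == 1
\end{verbatim}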
For two $k$-boundaried graphs $\mathbf{A}=(A,\bar{x}),\mathbf{B}=(B,\bar{y})$, we slightly abuse notation and understand the edges $x_{1}x_{1}',\dots,x_kx_k'$ in $A^{*}$ to be the same as $y_{1}'y_{1},\dots,y_k'y_k$ in $B^{*}$ and as $x_{1} y_{1},\dots, x_k y_k$ in $\mathbf{A}\oplus\mathbf{B}$.
That is, for an ordering $\sigma$ of $\mathbf{A}\oplus\mathbf{B}$ with $\ell$-bucketing $T$,
we define $T|_{A^{*}}(v)$ to be $T(v)$ for $v \in V(A)$ and $T(y_{i})$ for $v=x_{i}'$.
We define $\sigma|_{A^{*}}$ as an ordering that orders $x_{i}'$ just as $\sigma$ orders $y_{i}$, with the order between $x_{i}'$ and $x_j'$ chosen arbitrarily when $y_{i}=y_j$.
The following lemma shows that if an $\ell$-bucketing respects the sides of a cut, then the width of any bucket can be computed as the sum of contributions of the sides.
\begin{lemma}\label{lem:width_by_parts}
Let $k,\ell$ be positive integers and $\mathbf{A}=(A,\bar{x}),\mathbf{B}=(B,\bar{y})$ be two $k$-boundaried graphs.
Let also $\sigma$ be a vertex ordering of $\mathbf{A}\oplus\mathbf{B}$ with $\ell$-bucketing $T$.
If $T^{-1}(i)$ does not contain any vertex of $A$, for some $i\in [\ell]$,
that is, $T^{-1}(i) \cap V(A) = \emptyset$,
then it holds that
$\mathtt{width}(\mathbf{A}\oplus \mathbf{B}, \sigma, T, i) =
\mathtt{width}(A, \sigma|_{A}, T|_{A}, i) + \mathtt{width}(B^{*}, \sigma|_{B^{*}}, T|_{B^{*}}, i)$.
\end{lemma}
\begin{proof}
Write $G=\mathbf{A}\oplus\mathbf{B}$, and consider any cut $(L,R)$ in $\mathtt{cuts}(G,\sigma,T,i)$.
Observe that for every edge $e$ of $E_{\mathbf{A}\oplus \mathbf{B}}(L,R)$ one of the following holds:
\begin{enumerate}
\item $e\in E_{A}(L \cap V(A), R\cap V(A))$ or
\item $e\in E_{B}(L \cap V(B), R \cap V(B))$ or
\item $e\in E_{G}(L\cap V(A), R\cap V(B))$, or
\item $e\in E_{G}(R\cap V(A), L\cap V(B))$.
\end{enumerate}
Since we do not distinguish between the vertices $x_{i}$ and the vertices $y_{i}'$, we equivalently obtain that for every edge $e\in E_{\mathbf{A}\oplus \mathbf{B}}(L,R)$,
$e$ is either an edge in $E_A(L \cap V(A), R\cap V(A))$ or an edge in $E_{B^{*}}(L \cap V(B^{*}), R \cap V(B^{*}))$.
Observe that $(L \cap V(A), R\cap V(A))$ is a cut in $\mathtt{cuts}(A,\sigma|_{A},T|_{A},i)$ and
$(L \cap V(B^{*}), R\cap V(B^{*}))$ is a cut in $\mathtt{cuts}(B^{*},\sigma|_{B^{*}},T|_{B^{*}},i)$.
Therefore, the total number of edges crossing these cuts is at most $\mathtt{width}(A, \sigma|_{A}, T|_{A}, i) + \mathtt{width}(B^{*}, \sigma|_{B^{*}}, T|_{B^{*}}, i)$.
This proves that $$\mathtt{width}(\mathbf{A}\oplus \mathbf{B}, \sigma, T, i) \leq
\mathtt{width}(A, \sigma|_{A}, T|_{A}, i) + \mathtt{width}(B^{*}, \sigma|_{B^{*}}, T|_{B^{*}}, i).$$
For the converse inequality, observe that since the bucket $T^{-1}(i)$ does not contain any vertices of $A$, we have $T|_{A}^{-1}(i) = \emptyset$.
Hence there is exactly one cut in $\mathtt{cuts}(A,\sigma|_A,T|_A,i)$, namely $(L_A,R_A)$, where $L_A = T^{-1}(\{1,\dots,i-1\})\cap V(A)$ and $R_A=T^{-1}(\{i+1,\dots,\ell\}) \cap V(A)$.
Let $(L_B,R_B)$ be a cut in $\mathtt{cuts}(B^{*},\sigma|_{B^{*}},T|_{B^{*}},i)$ maximizing $|E_{B^{*}}(L_B,R_B)|$.
Then, since we assumed that $T^{-1}(i)$ does not contain any vertices of $A$ (and thus may only contain vertices of $B$), it follows that
$(L,R):=(L_A \cup L_B, R_A \cup R_B)$ is a cut in $\mathtt{cuts}(G,\sigma,T,i)$.
As above, every edge of $\mathbf{A}\oplus\mathbf{B}$ crossing this cut is either in $E_A(L_A,R_A)$ or in $E_{B^{*}}(L_B,R_B)$, where we again do not distinguish between
the vertices $x_{i}$ and $y_{i}'$.
Hence
\begin{eqnarray*}
\mathtt{width}(\mathbf{A}\oplus \mathbf{B}, \sigma, T, i) & \geq & |E_{\mathbf{A}\oplus \mathbf{B}}(L,R)| \\
& =& |E_A(L_A,R_A)| + |E_{B^{*}}(L_B,R_B)|\\
& = & \mathtt{width}(A,\sigma|_A,T|_A,i) + \mathtt{width}(B^{*}, \sigma|_{B^{*}}, T|_{B^{*}}, i).
\end{eqnarray*}
\end{proof}
Replacing the roles of $\mathbf{A}$ and $\mathbf{B}$ above, we obtain that if $T^{-1}(i)$ does not contain any vertex of $B$,
then $$\mathtt{width}(\mathbf{A}\oplus \mathbf{B}, \sigma, T, i) = \mathtt{width}(A^{*}, \sigma|_{A^{*}}, T|_{A^{*}}, i) + \mathtt{width}(B, \sigma|_{B}, T|_{B}, i).$$
Intuitively, this implies that the cutwidth of $\mathbf{A}\oplus\mathbf{B}$ depends on $A$ only through the widths of the blocks relative to $A$ and $A^{*}$ (in any bucketing whose
buckets are either $A$-blocks or $B$-blocks).
Therefore, replacing $\mathbf{A}$ with another boundaried graph whose extension has an ordering and bucketing with the same widths preserves cutwidth (as long as endpoints of the cut edges are placed in the same buckets too).
This is formalized in the next definition.
\begin{definition}\label{def:interface}
For $k,\ell\in\mathbb{N}$, a \emph{($k$,$\ell$)-bucket interface} consists of functions:
\begin{itemize}
\item $b,b'\colon [k] \to [\ell]$ identifying the buckets which contain $x_{i}$ and $x_{i}'$, respectively, and
\item $\mu,\mu^{*}\colon [\ell] \to \intv{0}{k}$ bounding the widths of the buckets.
\end{itemize}
A $k$-boundaried graph $\mathbf{G}$ \emph{conforms} with a $(k,\ell)$-bucket interface
if there exists an ordering $\sigma$ of the vertices of $G^{*}$ and an $\ell$-bucketing $T$ of $G^*$ such that:
\begin{itemize}
\item $T(v)$ is odd for $v\in V(G)$ and even for $v\in \{x_{1}',\dots,x_k'\}$,
\item $T(x_{i}) = b(i)$ and $T(x_{i}') = b'(i)$,\ for each $i\in [k]$,
\item $\mathtt{width}(G, \sigma|_{G}, T|_{G}, j) \leq \mu(j)$,\ for each $j\in [\ell]$,
\item $\mathtt{width}(G^{*}, \sigma, T, j) \leq \mu^{*}(j)$,\ for each $j\in [\ell]$.
\end{itemize}
\end{definition}
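Definition~\ref{def:interface} quantifies existentially over pairs $(\sigma,T)$. The following Python sketch (our own encoding, illustrative only) checks whether one given candidate pair witnesses conformance: primed vertices are represented as pairs \texttt{('p', i)}, the functions $b,b',\mu,\mu^{*}$ as $0$-indexed lists, and every vertex of $G$ is assumed to be incident to some edge. Conformance itself would additionally require a search over all such pairs, which is feasible only for tiny graphs.
\begin{verbatim}
def crossing(edges, left):
    return sum(1 for a, b in edges if (a in left) != (b in left))

def bucket_width(edges, sigma, T, i):
    before = {v for v in sigma if T[v] < i}
    bucket = [v for v in sigma if T[v] == i]
    return max(crossing(edges, before | set(bucket[:s]))
               for s in range(len(bucket) + 1))

def certifies(G_edges, x, sigma, T, b, bp, mu, mustar, ell):
    # Does (sigma, T) witness that (G, x) conforms with the interface
    # (b, bp, mu, mustar)?  sigma orders V(G*), T maps it to [ell].
    k = len(x)
    star = list(G_edges) + [(x[i], ('p', i)) for i in range(k)]
    inner = {v for e in G_edges for v in e}        # vertices of G
    if not all(T[v] % 2 == 1 for v in inner):
        return False                               # G in odd buckets
    if not all(T[('p', i)] % 2 == 0 for i in range(k)):
        return False                               # primed in even ones
    if not all(T[x[i]] == b[i] and T[('p', i)] == bp[i]
               for i in range(k)):
        return False                               # prescribed placement
    sigma_G = [v for v in sigma if v in inner]
    return all(bucket_width(G_edges, sigma_G, T, j) <= mu[j - 1] and
               bucket_width(star, sigma, T, j) <= mustar[j - 1]
               for j in range(1, ell + 1))
\end{verbatim}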
\begin{observation}\label{obs:bcketsze} For all $k,\ell\in\ensuremath{\mathbb{N}}^+$ there are at most
$2^{2(k\log\ell +\ell\log(k+1))}$ $(k,\ell)$-bucket interfaces.
\end{observation}
\noindent
Indeed, there are at most $\ell^{k}$ choices for each of the functions $b$ and $b'$, and at most $(k+1)^{\ell}$ choices for each of $\mu$ and $\mu^{*}$.
We call two $k$-boundaried graphs $\mathbf{G}_{1}, \mathbf{G}_{2}$ \emph{($k$,$\ell$)-similar} if the sets of $(k,\ell)$-bucket interfaces they conform with are equal.
The following theorem subsumes the above ideas; its proof combines Lemma~\ref{lem:width_by_parts} with the fact that $\ensuremath{\mathbf{cw}}\xspace_\sigma(G) = \max_{i\in[\ell]}\ \mathtt{width}(G,\sigma,T,i)$ (Eq.~\eqref{eq:cwdthsgm}).
\begin{theorem}\label{thm:myhill_nerode}
Let $k,r$ be two positive integers. Let also $\mathbf{A}_{1}$ and $\mathbf{A}_{2}$ be two $k$-boundaried graphs that are $(k,\ell)$-similar,
where $\ell\geq(2k+1) \cdot (2r+4)$.
Then for any $k$-boundaried graph $\mathbf{B}$ where $\ensuremath{\mathbf{cw}}\xspace(\mathbf{A}_{1}\oplus \mathbf{B})\leq r$, it holds that
$\ensuremath{\mathbf{cw}}\xspace(\mathbf{A}_{2}\oplus \mathbf{B})=\ensuremath{\mathbf{cw}}\xspace(\mathbf{A}_{1}\oplus \mathbf{B})$.
\end{theorem}
\begin{proof}
Let $\mathbf{A}_{i}=(A_{i},\bar{x}^i), \mathbf{B}=(B,\bar{y})$ and suppose that $\ensuremath{\mathbf{cw}}\xspace(\mathbf{A}_{1}\oplus \mathbf{B})\leq r$.
By Lemma~\ref{lem:num_blocks}, there is an optimum cutwidth ordering $\sigma_{1}$ of the vertices of $\mathbf{A}_{1}\oplus \mathbf{B}$ that has at most $\ell-1$ $(V(A_{1}),V(B))$-blocks.
In particular $\ensuremath{\mathbf{cw}}\xspace_{\sigma_{1}}(\mathbf{A}_{1}\oplus \mathbf{B})=\ensuremath{\mathbf{cw}}\xspace(\mathbf{A}_{1}\oplus \mathbf{B})\leq r$.
By adding an empty block at the front, if necessary, we may assume that the number of blocks is at most $\ell$, while odd-indexed blocks are $V(A_1)$-blocks and even-indexed blocks are $V(B)$-blocks.
Then, there is an $\ell$-bucketing $T_{1}$ of $\sigma_{1}$ such that
$T_{1}(v)$ is odd for $v \in A_{1}$ and even for $v \in B$.
Therefore $\sigma_{1}|_{A_{1}^{*}}$ and $T_{1}|_{A_{1}^{*}}$ certify that $\mathbf{A}_{1}$ conforms with the following $(k,\ell)$-bucket interface $(b,b',\mu,\mu^{*})$:
\begin{itemize}
\item $b(i) = T_{1}(x^{1}_{i})$ and $b'(i)=T_{1}|_{A_{1}^{*}}({x^{1}_{i}}')=T_{1}(y_{i})$ for $i\in[k]$,
\item $\mu(i) = \mathtt{width}(A_{1}, \sigma_{1}|_{A_{1}}, T_{1}|_{A_{1}}, i)$ and $\mu^{*}(i) = \mathtt{width}(A_{1}^{*}, \sigma_{1}|_{A_{1}^{*}}, T_{1}|_{A_{1}^{*}}, i)$ for $i \in [\ell]$.
\end{itemize}
By $(k,\ell)$-similarity there is an ordering $\sigma_{2}$ of $A_{2}^{*}$ and its $\ell$-bucketing $T_{2}$ such that:
\begin{itemize}
\item each bucket $T_{2}^{-1}(i)$ is contained in $A_{2}$ for odd $i\in[\ell]$ and in $\{{x^{2}_{1}}',\dots,{x^{2}_{k}}'\}$ for even $i\in[\ell]$
\item $b(i) = T_{2}(x^{2}_{i})$ and $b'(i)=T_{2}({x^{2}_{i}}')$ for $i\in[k]$,
\item $\mu(i) \geq \mathtt{width}(A_{2}, \sigma_{2}|_{A_{2}}, T_{2}|_{A_{2}}, i)$ and $\mu^{*}(i) \geq \mathtt{width}(A_{2}^{*}, \sigma_{2}|_{A_{2}^{*}}, T_{2}|_{A_{2}^{*}}, i)$ for $i \in [\ell]$.
\end{itemize}
Given this, we define an assignment of vertices into buckets $\Pi\colon V(\mathbf{A}_{2} \oplus \mathbf{B}) \to [\ell]$ as follows.
\begin{itemize}
\item $\Pi(v) = T_{1}(v)$ for $v \in V(B)$ and
\item $\Pi(v) = T_{2}(v)$ for $v \in V(A_{2})$.
\end{itemize}
Clearly,
\begin{align}
&\Pi|_B = T_{1}|_B \qquad \textrm{and} \label{eq:relakghraelgnawl}\\
&\Pi|_{A_{2}} = T_{2}|_{A_{2}}.\label{eq:ejkrsghearkgh}
\end{align}
We claim that $\Pi|_{A_{2}^{*}} = T_{2}|_{A_{2}^{*}}$ and $\Pi|_{B^{*}} = T_{1}|_{B^{*}}$ also hold.
Indeed,
\begin{align*}
\Pi|_{A_{2}^{*}}({x^{2}_{i}}') & = \Pi(y_{i}) && \mbox{(we consider ${x^{2}_{i}}'$ as $y_{i}$)}\\
&= T_{1}(y_{i}) && \mbox{(by definition)}\\
& = b'(i) && \mbox{($(k,\ell)$-bucket interface)}\\
& = T_{2}({x^{2}_{i}}') && \mbox{($(k,\ell)$-similarity)}
\end{align*}
\noindent and, similarly,
\begin{align*}
\Pi|_{B^{*}}(y_{i}') & = \Pi(x^{2}_{i}) && \mbox{(we consider $y_{i}'$ as $x^{2}_{i}$)}\\
& = T_{2}(x^{2}_{i}) && \mbox{(by definition)}\\
& = b(i) && \mbox{($(k,\ell)$-bucket interface)}\\
& = T_{1}(x^{1}_{i}) && \mbox{($(k,\ell)$-similarity)}\\
& = T_{1}|_{B^{*}}(y_{i}') && \mbox{(by definition).}
\end{align*}
Thus, we obtain that
\begin{eqnarray}
\Pi|_{A_{2}^{*}} & = & T_{2}|_{A_{2}^{*}},\label{eq:eskrlgjhaekj}\\
\Pi|_{B^{*}} & = & T_{1}|_{B^{*}}.\label{eq:ergerglkjwag}
\end{eqnarray}
Note also that vertices of $A_{2}$ are mapped to odd buckets and vertices of $B$ are mapped to even buckets.
We use $\Pi$ to define an ordering $\pi$ of the vertices of $\mathbf{A_{2}}\oplus \mathbf{B}$ as follows. Formally, we let $u<_{\pi} v$ if and only if one of the following conditions holds:
\begin{enumerate}
\item $\Pi(u) < \Pi(v)$,
\item $u <_{\sigma_{2}} v$ and $\Pi(u)=\Pi(v)$ is odd, or
\item $u <_{\sigma_{1}} v$ and $\Pi(u)=\Pi(v)$ is even.
\end{enumerate}
Note that this is a linear ordering as it first sorts the vertices according to the bucket they belong to and then according to the ordering induced in this bucket by the orderings $\sigma_{1}$
and $\sigma_{2}$. Note also that by definition $\Pi$ is an $\ell$-bucketing of $\pi$.
Recall that, from Eq.~\eqref{eq:eskrlgjhaekj}, $\Pi|_{A_{2}^{*}}=T_{2}|_{A_{2}^{*}}$. This, together with the observation that the vertices of $A_{2}$ are mapped to odd buckets of $\Pi$, implies that
\begin{align}
& \pi|_{A_{2}^{*}}=\sigma_{2}|_{A_{2}^{*}}\qquad \textrm{ and that}\label{eq:eqkdjgnsekjrga}\\
& \pi|_{A_{2}}=\sigma_{2}|_{A_{2}}.\label{eq:eqgaelrghera}
\end{align}
Moreover, recall that $\Pi|_{B^{*}} = T_{1}|_{B^{*}}$. This, together with the fact that the vertices of $B$ are mapped to even buckets of $\Pi$, implies that
\begin{align}
& \pi|_{B^{*}} = \sigma_{1}|_{B^{*}} \qquad \textrm{and that}\label{eq:eqsegjkrserger}\\
& \pi|_{B} = \sigma_{1}|_{B}.\label{eq:eqeqwrjkawefl}
\end{align}
We now bound the width of each bucket. Let $i\in[\ell]$. Notice that if $i$ is even, then by construction $\Pi^{-1}(i)$ contains only vertices from $B$.
Therefore,
\begin{eqnarray}
\mathtt{width}(\mathbf{A}_{2}\oplus\mathbf{B},\pi,\Pi,i) & = & \mathtt{width}(A_{2},\pi|_{A_{2}},\Pi|_{A_{2}},i) + \mathtt{width}(B^{*},\pi|_{B^{*}},\Pi|_{B^{*}},i)\nonumber \\
& = & \mathtt{width}(A_{2},\sigma_{2}|_{A_{2}},T_{2}|_{A_{2}},i) + \mathtt{width}(B^{*},\sigma_{1}|_{B^{*}},T_{1}|_{B^{*}},i)\nonumber\\
& \leq & \mu(i) + \mathtt{width}(B^{*},\sigma_{1}|_{B^{*}},T_{1}|_{B^{*}},i) \nonumber\\
& = & \mathtt{width}(A_{1},\sigma_{1}|_{A_{1}},T_{1}|_{A_{1}},i) + \mathtt{width}(B^{*},\sigma_{1}|_{B^{*}},T_{1}|_{B^{*}},i)\nonumber\\
& = & \mathtt{width}(\mathbf{A}_{1}\oplus\mathbf{B},\sigma_{1},T_{1},i),\label{eq:wethweahwea}
\end{eqnarray}
where the first and the last equalities follow from Lemma~\ref{lem:width_by_parts}, the second equality holds by
Eq.~\eqref{eq:ejkrsghearkgh},~\eqref{eq:eqgaelrghera},~\eqref{eq:eqsegjkrserger}, and~\eqref{eq:ergerglkjwag},
the inequality follows from the conformance of $\mathbf{A}_{2}$ with the $(k,\ell)$-bucket interface, and the fourth equality holds because $\mu(i)$ was defined as $\mathtt{width}(A_{1},\sigma_{1}|_{A_{1}},T_{1}|_{A_{1}},i)$.
We argue similarly for odd $i\in[\ell]$, using $\mu^{*}$ instead of $\mu$ and the fact that for odd $i$ the bucket $\Pi^{-1}(i)$ contains only vertices of $A_{2}$.
Namely,
\begin{eqnarray}
\mathtt{width}(\mathbf{A}_{2}\oplus\mathbf{B},\pi,\Pi,i) & = & \mathtt{width}(A_{2}^{*},\pi|_{A_{2}^{*}},\Pi|_{A_{2}^{*}},i) + \mathtt{width}(B,\pi|_{B},\Pi|_{B},i)\nonumber \\
& = & \mathtt{width}(A_{2}^{*},\sigma_{2}|_{A_{2}^{*}},T_{2}|_{A_{2}^{*}},i) + \mathtt{width}(B,\sigma_{1}|_{B},T_{1}|_{B},i)\nonumber\\
& \leq & \mu^{*}(i) + \mathtt{width}(B,\sigma_{1}|_{B},T_{1}|_{B},i) \nonumber\\
& = & \mathtt{width}(A_{1}^{*}, \sigma_{1}|_{A_{1}^{*}}, T_{1}|_{A_{1}^{*}}, i) + \mathtt{width}(B,\sigma_{1}|_{B},T_{1}|_{B},i)\nonumber\\
& = & \mathtt{width}(\mathbf{A}_{1}\oplus\mathbf{B},\sigma_{1},T_{1},i).\label{eq:eqdskjrgneskjrng}
\end{eqnarray}
Similarly to Eq.~\eqref{eq:wethweahwea}, the first and the last equalities follow from Lemma~\ref{lem:width_by_parts}, the second equality holds by
Eq.~\eqref{eq:eskrlgjhaekj},~\eqref{eq:eqkdjgnsekjrga},~\eqref{eq:relakghraelgnawl}, and~\eqref{eq:eqeqwrjkawefl},
the inequality follows from the conformance of $\mathbf{A}_{2}$ with the $(k,\ell)$-bucket interface, and the fourth equality holds because $\mu^{*}(i)$ was defined as $\mathtt{width}(A_{1}^{*},\sigma_{1}|_{A_{1}^{*}},T_{1}|_{A_{1}^{*}},i)$.
Therefore, from Eq.~\eqref{eq:wethweahwea} and~\eqref{eq:eqdskjrgneskjrng}
we obtain that
$$\ensuremath{\mathbf{cw}}\xspace_\pi(\mathbf{A}_{2}\oplus\mathbf{B}) = \max_{i\in[\ell]}\ \mathtt{width}(\mathbf{A}_{2}\oplus\mathbf{B},\pi,\Pi,i) \leq \max_{i\in[\ell]}\ \mathtt{width}(\mathbf{A}_{1}\oplus\mathbf{B},\sigma_{1},T_{1},i) = \ensuremath{\mathbf{cw}}\xspace_{\sigma_{1}}(\mathbf{A}_{1}\oplus\mathbf{B}).$$
Moreover, since $\ensuremath{\mathbf{cw}}\xspace(\mathbf{A}_{2}\oplus\mathbf{B})\leq \ensuremath{\mathbf{cw}}\xspace_\pi(\mathbf{A}_{2}\oplus\mathbf{B})$ and $\sigma_{1}$ is an optimum cutwidth ordering for $\mathbf{A}_{1}\oplus\mathbf{B}$,
it follows that $$\ensuremath{\mathbf{cw}}\xspace(\mathbf{A}_{2}\oplus\mathbf{B})\leq\ensuremath{\mathbf{cw}}\xspace(\mathbf{A}_{1}\oplus\mathbf{B})\leq r.$$
So in particular $\ensuremath{\mathbf{cw}}\xspace(\mathbf{A}_{2}\oplus\mathbf{B})\leq r$. By applying the same reasoning, but with the roles of $\mathbf{A}_1$ and $\mathbf{A}_2$ reversed, we obtain also the converse inequality $\ensuremath{\mathbf{cw}}\xspace(\mathbf{A}_{1}\oplus\mathbf{B})\leq\ensuremath{\mathbf{cw}}\xspace(\mathbf{A}_{2}\oplus\mathbf{B})$.
This proves that indeed $\ensuremath{\mathbf{cw}}\xspace(\mathbf{A}_{2}\oplus\mathbf{B})=\ensuremath{\mathbf{cw}}\xspace(\mathbf{A}_{1}\oplus\mathbf{B})$.
\end{proof}
\section{Obstruction sizes and linked orderings}\label{sec:linked}
In this section we establish the main result on sizes of obstructions for cutwidth.
We first introduce linked orderings and prove that there is always an optimum ordering that is linked.
\begin{definition}[linked ordering]
An ordering $\sigma$ of $V(G)$ is \emph{linked} if for any two vertices $u\leq_{\sigma} u'$, there exist $\min \{\delta(V^{\sigma}_{v}) \mid u\leq_{\sigma} v\leq_{\sigma} u' \}$ edge-disjoint paths between $V^{\sigma}_{u}$ and $V(G)\setminus V^{\sigma}_{u'}$ in $G$.
\end{definition}
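For small instances, linkedness can be verified directly from the definition, with Menger's theorem replacing the count of edge-disjoint paths by a unit-capacity maximum flow. The following self-contained Python sketch (illustrative only; it assumes that the auxiliary labels \texttt{'S'} and \texttt{'T'} are not vertex names) does exactly this.
\begin{verbatim}
from collections import defaultdict, deque

def max_edge_disjoint(edges, sources, sinks):
    # Maximum number of edge-disjoint paths between two disjoint vertex
    # sets in an undirected multigraph, via Edmonds-Karp max-flow on
    # antiparallel unit-capacity arcs.
    cap, adj = defaultdict(int), defaultdict(set)
    rename = lambda v: 'S' if v in sources else ('T' if v in sinks else v)
    for a, b in edges:
        a, b = rename(a), rename(b)
        if a != b:
            cap[(a, b)] += 1; cap[(b, a)] += 1
            adj[a].add(b); adj[b].add(a)
    flow = 0
    while True:
        parent, queue = {'S': None}, deque(['S'])
        while queue and 'T' not in parent:     # BFS for augmenting path
            u = queue.popleft()
            for w in adj[u]:
                if w not in parent and cap[(u, w)] > 0:
                    parent[w] = u
                    queue.append(w)
        if 'T' not in parent:
            return flow
        w = 'T'
        while parent[w] is not None:           # augment by one unit
            cap[(parent[w], w)] -= 1; cap[(w, parent[w])] += 1
            w = parent[w]
        flow += 1

def is_linked(edges, sigma):
    # Direct check of the definition; the pair where u' is the last
    # vertex is trivial (delta(V) = 0) and hence skipped.
    def delta(i):
        pre = set(sigma[:i + 1])
        return sum(1 for a, b in edges if (a in pre) != (b in pre))
    n = len(sigma)
    for i in range(n):
        for j in range(i, n - 1):
            need = min(delta(t) for t in range(i, j + 1))
            if max_edge_disjoint(edges, set(sigma[:i + 1]),
                                 set(sigma[j + 1:])) < need:
                return False
    return True

# The identity ordering of the path 1-2-3 is linked.
assert is_linked([(1, 2), (2, 3)], [1, 2, 3])
\end{verbatim}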
\begin{lemma}[\!\!\cite{GeelenGW02,KanteK14}]\label{lem:linked}
For each graph $G$, there is a linked ordering $\sigma$ of $V(G)$ with $\ensuremath{\mathbf{cw}}\xspace_{\sigma}(G)=\ensuremath{\mathbf{cw}}\xspace(G)$.
\end{lemma}
\begin{proof}
Without loss of generality, we may assume that the graph is connected.
Let $\sigma$ be an optimum cutwidth ordering of $V=V(G)$. Subject to the optimality of $\sigma$, we choose $\sigma$ so that $\sum_{v\in V} \delta(V^{\sigma}_v)$ is minimized.
We prove that $\sigma$ defined in this manner is in fact linked.
Assume the contrary. Then by Menger's theorem, there exist vertices $u <_\sigma u'$ in $V$ and $i\in \mathbb{N}$ such that
$\delta(V^{\sigma}_{v})> i$ for every $u\leq_{\sigma} v\leq_{\sigma} u'$, but a minimum cut $(A,B)$ of $G$ with $V^{\sigma}_{u}\subseteq A$ and $V\setminus V^{\sigma}_{u'}\subseteq B$ has size $\delta(A) \leq i$.
We partition $A$ into sets $A_{1}$ and $A_{2}$, where $A_{1}=V^{\sigma}_{u}$ and $A_2=A\setminus A_1$, and we partition $B$ into sets $B_{1}$ and $B_{2}$, where
$B_{2}=V\setminus V^{\sigma}_{u'}$ and $B_1=B\setminus B_2$ (see Figure~\ref{fig:linked-proof}).
Notice that $A_{2}=A\setminus V^{\sigma}_{u}=\{v\mid u<_{\sigma} v \leq_{\sigma}u'\}\cap A$ and that
$B_{1}=B\setminus (V\setminus V^{\sigma}_{u'})=\{v\mid u<_{\sigma} v \leq_{\sigma}u'\}\cap B$.
Let $\sigma'$ be the ordering of $V$ obtained by concatenating
$\sigma|_{A_{1}}$, $\sigma|_{A_{2}}$, $\sigma|_{B_{1}}$, and $\sigma|_{B_{2}}$.
We prove that $\delta(V^{\sigma'}_{v})\leq \delta(V^{\sigma}_{v})$, for every $v\in V$.
Observe first that for every vertex $v\in A_{1}\cup B_{2}$ it holds that $V^{\sigma'}_{v}=V^{\sigma}_{v}$ and thus, $\delta(V^{\sigma'}_{v})= \delta(V^{\sigma}_{v})$.
Let now $v\in A_{2}$. Then $V^{\sigma'}_{v}=V^{\sigma}_{v}\cap A$. By the submodularity of cuts it follows that
$\delta(V^{\sigma}_{v}\cup A)+\delta(V^{\sigma}_{v}\cap A)\leq \delta(A)+\delta(V^{\sigma}_{v})$.
Notice that
$(V^{\sigma}_{v}\cup A,V\setminus(V^{\sigma}_{v}\cup A))$ is also a cut separating $A_{1}=V^{\sigma}_{u}$ and $B_{2}=V\setminus V^{\sigma}_{u'}$. From the minimality of $(A,B)$ it follows that
$\delta(A)\leq \delta(V^{\sigma}_{v}\cup A)$. Therefore, $\delta(V^{\sigma}_{v}\cap A)\leq \delta(V^{\sigma}_{v})$. As $V^{\sigma'}_{v}=V^{\sigma}_{v}\cap A$,
we obtain that $\delta(V^{\sigma'}_{v})\leq \delta(V^{\sigma}_{v})$.
Symmetrically, let now $v\in B_{1}$. Then $V^{\sigma'}_{v}=V^{\sigma}_{v}\cup A$. By the submodularity of cuts we have
$\delta(V^{\sigma}_{v}\cup A)+\delta(V^{\sigma}_{v}\cap A)\leq \delta(A)+\delta(V^{\sigma}_{v})$.
Notice that $(V^{\sigma}_{v}\cap A,V\setminus(V^{\sigma}_{v}\cap A))$ is a cut separating $A_{1}$ and $B_{2}$. From the minimality of $(A,B)$ it follows that
$\delta(A)\leq \delta(V^{\sigma}_{v}\cap A)$. Therefore, $\delta(V^{\sigma}_{v}\cup A)\leq \delta(V^{\sigma}_{v})$. As $V^{\sigma'}_{v}=V^{\sigma}_{v}\cup A$,
we obtain that $\delta(V^{\sigma'}_{v})\leq \delta(V^{\sigma}_{v})$.
\begin{figure}[b]
\centering
\begin{tikzpicture}[scale =1]
\node[A] (v1) at (1,0) {};
\node[A] (v2) at (2,0) {};
\node[A] (v3) at (3,0) {};
\node[A,label=below:$u$] (v4) at (4,0) {};
\draw (4.5,-0.3)--(4.5,0.3);
\node[A] (v5) at (5,0) {};
\node[B] (v6) at (6,0) {};
\node[A] (v7) at (7,0) {};
\node[B] (v8) at (8,0) {};
\node[B] (v9) at (9,0) {};
\node[A,label=below:$u'$] (v10) at (10,0) {};
\draw (10.5,-0.3)--(10.5,0.3);
\node[B] (v11) at (11,0) {};
\node[B] (v12) at (12,0) {};
\node[B] (v13) at (13,0) {};
\node (A1) at (2.5,-0.6) {$A_1$};
\node (A2) at (7.5,-0.6) {$A_2 \cup B_1$};
\node (B2) at (12,-0.6) {$B_2$};
\begin{scope}[shift={(0,-1.5)}]
\node[A] (v1) at (1,0) {};
\node[A] (v2) at (2,0) {};
\node[A] (v3) at (3,0) {};
\node[A,label=below:$u$] (v4) at (4,0) {};
\draw (4.5,-0.3)--(4.5,0.3);
\node[A] (v5) at (5,0) {};
\node[B] (v6) at (8,0) {};
\node[A] (v7) at (6,0) {};
\node[B] (v8) at (9,0) {};
\node[B] (v9) at (10,0) {};
\node[A,label=below:$u'$] (v10) at (7,0) {};
\draw (10.5,-0.3)--(10.5,0.3);
\node[B] (v11) at (11,0) {};
\node[B] (v12) at (12,0) {};
\node[B] (v13) at (13,0) {};
\node (A1) at (2.5,-0.6) {$A_1$};
\node (A2) at (6,-0.7) {$A_2$};
\node (B1) at (9,-0.7) {$B_1$};
\node (B2) at (12,-0.6) {$B_2$};
\end{scope}
\end{tikzpicture}
\caption{An ordering of vertices with the minimum cut $(A,B)$ between $A_1$ and $B_2$ of size $i$ highlighted in blue and red. Below, the modified ordering, with cutwidth bounded using submodularity.}
\label{fig:linked-proof}
\end{figure}
Thus, $\delta(V^{\sigma'}_{v})\leq \delta(V^{\sigma}_{v})\leq\ensuremath{\mathbf{cw}}\xspace(G)$ for every $v\in V$, and hence $\ensuremath{\mathbf{cw}}\xspace_{\sigma'}(G)=\ensuremath{\mathbf{cw}}\xspace(G)$.
Finally, note that $\delta(V^{\sigma'}_{v}) = \delta(A) \leq i < \delta(V^{\sigma}_{v})$ for the last vertex $v$ of $A$ in $\sigma'$. Thus $\sum_v \delta(V^{\sigma'}_v)<\sum_v \delta(V^{\sigma}_v)$, contradicting the choice of $\sigma$.
Therefore, $\sigma$ is a linked ordering of $V$ with $\ensuremath{\mathbf{cw}}\xspace_{\sigma}(G)=\ensuremath{\mathbf{cw}}\xspace(G)$.
\end{proof}
The rest of Section~\ref{sec:linked} is devoted
to the proof of Theorem~\ref{thm:main}.
Before we proceed with this proof, we need a series of auxiliary lemmas.
For every $s,r\in \mathbb{N}^+$, we set $A_{s,r}=[s,s+r-1]$. We prove the following.
\begin{lemma}\label{lem:word_unpump}
Let $N$ be a positive integer.
For every $s,r\in \mathbb{N}^+$ and every word $w$ over $A_{s,r}$ of length at least $N^{r}$
there is a symbol $k\in A_{s,r}$ and a contiguous subword $u$ of $w$ such that (a) $u$ contains only numbers not smaller than $k$, and (b) $u$ contains the number $k$ at least $N$
times.
\end{lemma}
\begin{proof}
We prove the lemma by induction on $r$.
Notice that for $r=1$ we have $A_{s,r}=\{s\}$, so $w=s^{m}$ for some $m\geq N$; thus the lemma holds with $k=s$ and $u=w$.
We proceed to the inductive step for $r>1$.
Let now $s\in \mathbb{N}^+$ and let $w$ be a word over $A_{s,r}$ of length at least $N^{r}$. If $s$ occurs at least $N$ times, then again, the lemma holds with $k=s$ and $u=w$.
Thus, we may assume that $s$ occurs at most $N-1$ times. These occurrences split $w$ into at most $N$ maximal blocks avoiding $s$; since $w$ has length at least $N^{r}$, one of them is a
subword $w'$ of $w$ of length at least $(N^{r}-(N-1))/N>N^{r-1}-1$, hence of length at least $N^{r-1}$, over $A_{s,r}\setminus\{s\} = A_{s+1,r-1}$.
From the inductive hypothesis, there exists $k \in A_{s+1,r-1}\subseteq A_{s,r}$ and a
subword $u$ of $w'$ such that $k$ occurs at least $N$ times in $u$ and $u$ contains only numbers at least $k$. Since $w'$ is a subword of $w$, $u$ is also a
subword of $w$. This completes the inductive step and the proof of the lemma.
\end{proof}
We use Lemma~\ref{lem:word_unpump} only for $s=1$, giving the following corollary.
\begin{corollary}\label{lem:wordlngth}
Let $r,N$ be positive integers and let $w$ be a word of length at least $N^{r}$ over the alphabet $[r]$. Then there is a number $k\in [r]$ and a
contiguous subword $u$ of $w$ such that (a) $u$ contains only numbers not smaller than $k$, and (b) $u$ contains the number $k$ at least $N$ times.
\end{corollary}
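The induction in this proof is effective and translates into a short recursive procedure. The following Python sketch (illustrative only) returns a pair $(k,u)$ as in the lemma, assuming that the input word is long enough.
\begin{verbatim}
def unpump(word, N, s=1):
    # 'word' is a list of integers from {s, s+1, ...}; assuming it is
    # long enough (length >= N**r for an alphabet of r symbols), return
    # (k, u) with u a contiguous subword whose symbols are all >= k and
    # which contains k at least N times.
    if word.count(s) >= N:
        return s, word
    # fewer than N occurrences of s: cutting at them leaves a block of
    # length >= N**(r-1) avoiding s; recurse on the longest block
    pieces, cur = [], []
    for c in word:
        if c == s:
            pieces.append(cur)
            cur = []
        else:
            cur.append(c)
    pieces.append(cur)
    return unpump(max(pieces, key=len), N, s + 1)

# Over the alphabet {1, 2} with N = 3, length 9 = 3**2 suffices:
w = [2, 2, 1, 2, 2, 2, 1, 2, 2]
assert unpump(w, 3) == (2, [2, 2, 2])
\end{verbatim}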
We also need one additional statement about boundaried graphs and bucket interfaces.
\begin{lemma}\label{lem:subset_bucket}
Let $k,\ell\in\mathbb{N}$.
Suppose $\mathbf{A}=(A,\bar{x})$ and $\mathbf{B}=(B,\bar{y})$ are two $k$-boundaried graphs,
and suppose further that there is an immersion model $(\phi,\psi)$ of $A$ in $B$ such that $\phi(x_i)=y_i$, for all $i=1,2,\ldots,k$.
Then for every $(k,\ell)$-bucket interface $(b,b',\mu,\mu^*)$, if $\mathbf{B}$ conforms to $(b,b',\mu,\mu^*)$ then also $\mathbf{A}$ conforms to $(b,b',\mu,\mu^*)$.
\end{lemma}
\begin{proof}
First, we extend the immersion model $(\phi,\psi)$ to an immersion model $(\phi^*,\psi^*)$ of $A^*$ in $B^*$ by putting $\phi^*(x_i')=y_i'$ and $\psi^*(x_ix_i')=y_iy_i'$ for all $i\in [k]$.
Suppose that ordering $\sigma$ of $V(B^*)$ and its $\ell$-bucketing $T$ certify that $\mathbf{B}$ conforms to $(b,b',\mu,\mu^*)$.
We define ordering $\sigma'$ of $V(A^*)$ and its $\ell$-bucketing $T'$ as follows:
\begin{itemize}
\item For $u,v\in V(A^*)$, we put $u<_{\sigma'}v$ if and only if $\phi^*(u)<_{\sigma'}\phi^*(v)$.
\item For $u\in V(A^*)$, we put $T'(u)=T(\phi^*(u))$.
\end{itemize}
It is easy to see that $T'$ is an $\ell$-bucketing of $\sigma'$.
We now verify that $\sigma'$ and $T'$ certify that $\mathbf{A}$ conforms to $(b,b',\mu,\mu^*)$.
The first two conditions of conforming follow directly from the definition of $\sigma'$ and $T'$, so we are left with the third and the fourth condition.
For the third condition, take any $j\in [\ell]$. It suffices to show that for any cut $(L',R')\in \mathtt{cuts}(A,\sigma'|_A,T'|_A,j)$, we have that $|E_A(L',R')|\leq \mu(j)$.
By the construction of $(\sigma',T')$ it follows that there is a cut $(L,R)\in \mathtt{cuts}(B,\sigma|_B,T|_B,j)$ such that $\phi(L')\subseteq L$ and $\phi(R')\subseteq R$.
Since $(\sigma,T)$ certified that $\mathbf{B}$ conforms to $(b,b',\mu,\mu^*)$, we have that $|E_B(L,R)|\leq \mu(j)$.
Take any $uv\in E_A(L',R')$, and observe that $\psi(uv)$ is a path in $B$ leading from $\phi(u)\in L$ to $\phi(v)\in R$.
Consequently, one of the edges of this path must belong to $E_B(L,R)$.
Since paths $\psi(uv)$ are pairwise edge-disjoint for different edges $uv\in E_A(L',R')$, we infer that
$$|E_A(L',R')|\leq |E_B(L,R)|\leq \mu(j).$$
This establishes the third condition. The fourth condition follows by the same argument applied to graphs $A^*$ and $B^*$, instead of $A$ and $B$.
\end{proof}
The following theorem is the technical counterpart of Theorem~\ref{thm:main}. Its proof is based
on Theorem~\ref{thm:myhill_nerode}, Lemma~\ref{lem:linked}, Observation~\ref{obs:bcketsze} and the idea of ``unpumping'' repeating types, presented in the introduction.
The linkedness is used to make sure that within the unpumped segment of the ordering, one can find the maximum possible number of edge-disjoint paths between the parts of the graph on the left side and on the right side
of the segment. This ensures that the graph obtained from unpumping can be immersed in the original one.
\begin{theorem}\label{restated}
Let $k$ be a positive integer. If $G\in \ensuremath{\mathbf{obs}}_{\leq_{\rm si}}(\ensuremath{\mathcal{C}}_{k})$, then $|V(G)|\leq N^{k+1}$, where $N=2^{2((k+1)\log\ell +\ell\log(k+2))}+2$ and $\ell=(2k+3) \cdot (2k+6)$.
\end{theorem}
\begin{proof}
Take any $G\in \ensuremath{\mathbf{obs}}_{\leq_{si}}(\ensuremath{\mathcal{C}}_{k})$ and assume, towards a contradiction, that $|V(G)|> N^{k+1}$. Let
$\sigma=\langle v_{1},v_{2},\dots, v_{|V(G)|}\rangle$ be a linked optimum cutwidth ordering of $G$, which exists by Lemma~\ref{lem:linked}.
We define $c_{i}=\delta(V^{\sigma}_{v_{i}})$, that is, $c_{i}$ is the size of
the cut between the vertices of $G$ up to $v_{i}$ and the rest of the graph. Notice that since $G\in\ensuremath{\mathbf{obs}}_{\leq_{si}}(\ensuremath{\mathcal{C}}_{k})$, we have that $\ensuremath{\mathbf{cw}}\xspace(G)=k+1$ and $G$ is connected.
This implies that $c_{i}\in [k+1]$, for every $i\in [|V(G)|-1]$.
Observe that $c_{1}c_{2}\dots c_{|V(G)|-1}$ is a word of length at least $N^{k+1}$ over the
alphabet $[k+1]$. From Corollary~\ref{lem:wordlngth}, it follows that there exist $1\leq s\leq t< |V(G)|$ and $q\in [k+1]$ such that for every $s\leq i\leq t$ we have $c_{i}\geq q$,
and there also exist $N$ distinct indices $s\leq i_{1}<i_{2}<\dots<i_{N}\leq t$ such that $c_{i_j}=q$, for every $j\in [N]$. Without loss of generality
we may assume that $i_{1}=s$ and $i_{N}=t$.
For each $j\in [N]$, we can define a $q$-boundaried graph $\mathbf{G}_{j}=(G_{j},(z_{j}^{1},z_{j}^{2},\dots,z_{j}^{q}))$ in the following way.
First, by linkedness, we find edge-disjoint paths $P_{1}, \dots, P_{q}$ between $V^\sigma_{v_{i_{1}}}$ and $V\setminus V^\sigma_{v_{i_{N}}}$.
Notice that for each $j\in [N]$ the cut $E_G(V^{\sigma}_{v_{i_j}},V(G)\setminus V^{\sigma}_{v_{i_j}})$ has size exactly $q$ and each of the $q$ edge-disjoint paths $P_{1},\ldots,P_{q}$ crosses it; hence it contains exactly one edge of each path $P_{i}$. Denote this edge by $e_{j}^{i}$, for $i\in [q]$.
For $i\in [q]$, let $x_j^i$ be the endpoint of $e_{j}^{i}$ that belongs to $V^{\sigma}_{v_{i_j}}$, and let $y_j^i$ be the endpoint that does not belong to $V^{\sigma}_{v_{i_j}}$.
We construct $G_j$ by taking $G[V^{\sigma}_{v_{i_j}}]$, adding fresh boundary vertices $(z_{j}^{1},z_{j}^{2},\dots,z_{j}^{q})$, and adding one fresh edge $x_j^iz_j^i$ for each $i\in [q]$.
Now consider any pair of indices $1\leq j_1<j_2\leq N$. Observe that there exists an immersion model $(\phi,\psi)$ of $\mathbf{G}_{j_1}$ in $\mathbf{G}_{j_2}$ such that
$\phi(z_{j_1}^i)=z_{j_2}^i$ for each $i\in [q]$. Indeed, we can put $\phi(u)=u$ for each $u\in V(G_{j_1})$ and $\phi(z_{j_1}^i)=z_{j_2}^i$ for each $i\in [q]$.
Then $\psi$ can be defined by taking $\psi(e)=e$ for each $e\in E(G_{j_1})$ and mapping each edge $x_{j_1}^iz_{j_1}^i$ to an appropriate infix of the path $P_i$, extended by the edge
$x_{j_2}^iz_{j_2}^i$. Consequently, $\mathbf{G}_{j_1}$ and $\mathbf{G}_{j_2}$ satisfy the prerequisites of Lemma~\ref{lem:subset_bucket}.
We infer that if by $\zeta(j)$ we denote the set of $(q,\ell)$-bucket interfaces to which $\mathbf{G}_j$ conforms, then
$$\zeta(1)\supseteq \zeta(2)\supseteq \ldots\supseteq \zeta(N-1)\supseteq \zeta(N).$$
Observation~\ref{obs:bcketsze} implies that $N$ exceeds the total number of $(q,\ell)$-bucket interfaces by at least $2$; hence not all of the $N-1$ inclusions above can be strict.
It follows that there exists an index $j$, $1\leq j<N$, such that
$$\zeta(j)=\zeta(j+1).$$
In other words, the $q$-boundaried graphs $\mathbf{G}_j$ and $\mathbf{G}_{j+1}$ are $(q,\ell)$-similar.
Define a $q$-boundaried graph $\mathbf{G'}=(G',(y_{j+1}^1,\ldots,y_{j+1}^q))$ by taking $G'=G[V(G)\setminus V^\sigma_{v_{i_{j+1}}}]$.
It can now be seen that $\mathbf{G}_{j+1}\oplus\mathbf{G'}$ is exactly the graph $G$ with every edge of the cut $E_G(V^{\sigma}_{v_{i_{j+1}}},V(G)\setminus V^{\sigma}_{v_{i_{j+1}}})$ subdivided once.
Since subdividing edges does not change the cutwidth of the graph, we have that
\begin{equation}\label{eq:subdivide}
\ensuremath{\mathbf{cw}}\xspace(\mathbf{G}_{j+1}\oplus\mathbf{G'})=\ensuremath{\mathbf{cw}}\xspace(G)>k.
\end{equation}
On the other hand, the $q$-boundaried graphs $\mathbf{G}_j$ and $\mathbf{G}_{j+1}$ are $(q,\ell)$-similar.
Since $\ell=(2k+3)\cdot(2k+6)\geq (2q+1)\cdot(2(k+1)+4)$, by Theorem~\ref{thm:myhill_nerode} (applied with boundary size $q$ and $r=k+1$) we conclude that
\begin{equation}\label{eq:myhill}
\ensuremath{\mathbf{cw}}\xspace(\mathbf{G}_{j}\oplus \mathbf{G}')=\ensuremath{\mathbf{cw}}\xspace(\mathbf{G}_{j+1}\oplus \mathbf{G}').
\end{equation}
Examine the graph $\mathbf{G}_{j}\oplus \mathbf{G}'$. In the join operation, we added an edge $z_j^iy_{j+1}^i$ for each $i\in [q]$, which means each vertex $z_j^i$
has exactly two incident edges in $\mathbf{G}_{j}\oplus \mathbf{G}'$: one connecting it to $x_j^i$ and one connecting it to $y_{j+1}^i$.
Let $H$ be the graph obtained from $\mathbf{G}_{j}\oplus \mathbf{G}'$ by dissolving every vertex $z_j^i$, i.e., removing it and replacing the edges $x_{j}^iz_j^i$ and $z_j^iy_{j+1}^i$ with a fresh
edge $x_j^iy_{j+1}^i$. Since $\mathbf{G}_{j}\oplus \mathbf{G}'$ is precisely $H$ with each edge $x_j^iy_{j+1}^i$ subdivided once, and subdividing edges does not change the cutwidth of a graph, we obtain that:
\begin{equation}\label{eq:dissolve}
\ensuremath{\mathbf{cw}}\xspace(H)=\ensuremath{\mathbf{cw}}\xspace(\mathbf{G}_{j}\oplus \mathbf{G}')
\end{equation}
Finally, it is easy to see that $G$ admits $H$ as a strong immersion:
a strong immersion model of $H$ in $G$ can be constructed by mapping the vertices and edges of $G_j$ and $G'$ identically, and then mapping each of the remaining edges $x_j^iy_{j+1}^i$ to a corresponding infix of
the path $P_i$. Also, since $i_j<i_{j+1}$, the graph $H$ has strictly fewer vertices than $G$. However, from Eq.~\eqref{eq:subdivide},~\eqref{eq:myhill}, and~\eqref{eq:dissolve} we conclude that
$\ensuremath{\mathbf{cw}}\xspace(H)=\ensuremath{\mathbf{cw}}\xspace(G)>k$. This contradicts the assumption that $G\in\ensuremath{\mathbf{obs}}_{\leq_{si}}(\ensuremath{\mathcal{C}}_{k})$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:main}]
Theorem~\ref{restated} provides an upper bound on the number of vertices of a graph in $\ensuremath{\mathbf{obs}}_{\leq_{si}}(\ensuremath{\mathcal{C}}_{k})$.
Observe that since such a graph has cutwidth $k+1$, each of its vertices has degree at most $2(k+1)$.
It follows that any graph from $\ensuremath{\mathbf{obs}}_{\leq_{si}}(\ensuremath{\mathcal{C}}_{k})$ has $2^{\ensuremath{\mathcal{O}}(k^3\log k)}$ vertices and edges.
Finally, by Observation~\ref{osesimme} we have ${\bf obs}_{\leq_{\rm i}}(\ensuremath{\mathcal{C}}_{k})\subseteq {\bf obs}_{\leq_{\rm si}}(\ensuremath{\mathcal{C}}_{k})$, so
the same bound holds also for immersions instead of strong immersions. This concludes the proof of Theorem~\ref{thm:main}.
\end{proof}
\section{An algorithm for computing cutwidth}\label{sec:algo}
In this section we present an exact FPT algorithm for computing the cutwidth of a graph.
First, we need to give a dynamic programming algorithm that, given an approximate ordering $\sigma$ of width $r$ and a target value $k\leq r$, finds, if possible,
an ordering of width at most $k$.
Our algorithm takes advantage of the given ordering $\sigma$ and essentially computes, for each subgraph of $G$ induced by a prefix of $\sigma$, the $(r,\ell)$-bucket interfaces it conforms to.
More precisely, in Lemma~\ref{lem:dpalgcor} we show that if $G$ has
an optimum ordering of width $k$, then there is an optimum ordering where \emph{each} of these induced subgraphs
occupies at most $\ell = O(rk)$ buckets, allowing us to restrict our search to $(r,\ell)$-bucket profiles (a variant of bucket interfaces to be defined later, refined so as to consider border vertices more precisely).
The proof slightly strengthens that of Lemma~\ref{lem:num_blocks}.
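Before stating the lemma, it is convenient to recall how the width of a concrete ordering is evaluated. The following Python snippet is an illustration only and is not part of the formal development; it assumes the graph is given as a list of (possibly parallel) edges, and computes $\ensuremath{\mathbf{cw}}\xspace_\sigma(G)$ by sweeping over all $\sigma$-cuts.
\begin{verbatim}
def width_of_ordering(n, edges, sigma):
    # n: number of vertices; edges: list of pairs (u, v), where
    # parallel edges may repeat; sigma: the n vertices in layout order
    pos = {v: i for i, v in enumerate(sigma)}
    best = 0
    for i in range(n - 1):
        # size of the cut between positions 0..i and i+1..n-1
        cut = sum(1 for (u, v) in edges
                  if min(pos[u], pos[v]) <= i < max(pos[u], pos[v]))
        best = max(best, cut)
    return best

# A path on four vertices, laid out along the path, has width 1.
assert width_of_ordering(4, [(0, 1), (1, 2), (2, 3)], [0, 1, 2, 3]) == 1
\end{verbatim}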
\begin{lemma}\label{lem:dpalgcor}
Let $G$ be a graph with an ordering $\sigma$ of width $r$.
Then there exists also an ordering $\tau$ of optimum width, i.e., with $\ensuremath{\mathbf{cw}}\xspace_\tau(G)=\ensuremath{\mathbf{cw}}\xspace(G)$,
that has the following property: for every prefix $X$ of $\sigma$, the number of $X$-blocks in $\tau$ is at most $2r\cdot \ensuremath{\mathbf{cw}}\xspace(G)+\ensuremath{\mathbf{cw}}\xspace(G)+4r+2$.
\end{lemma}
\begin{proof}
Lemma~\ref{lem:num_blocks} asserts that for each cut $(A,B)$ of $G$ of size at most $r$, there exists an optimum-width ordering of $V(G)$ where the number of $(A,B)$-blocks is at most
$$(2r+1) \cdot (2\ensuremath{\mathbf{cw}}\xspace(G)+3)+2r=4r\cdot \ensuremath{\mathbf{cw}}\xspace(G)+2\ensuremath{\mathbf{cw}}\xspace(G)+8r+3.$$
As $A$-blocks and $B$-blocks appear alternately, at most half of the $(A,B)$-blocks, rounded up, can be $A$-blocks.
Hence, the number of $A$-blocks in such an optimum-width ordering is at most $2r\cdot \ensuremath{\mathbf{cw}}\xspace(G)+\ensuremath{\mathbf{cw}}\xspace(G)+4r+2$; we denote this quantity by $\lambda$.
The proof of Lemma~\ref{lem:num_blocks} in fact shows that for any ordering $\sigma$ of $V(G)$ and any cut $(A,B)$ of $G$ of size at most $r$, either $\sigma$ already has at most $2\lambda-1$ $(A,B)$-blocks, or an ordering $\sigma'$ can be obtained from $\sigma$ by swapping its $(A,B)$-blocks so that $\sigma'$ has strictly fewer $(A,B)$-blocks.
Therefore, by reordering $(A,B)$-blocks of $\sigma$, we eventually get a new ordering which has at most $2\lambda-1$ $(A,B)$-blocks, and hence at most $\lambda$ $A$-blocks.
For $i=1,2,\ldots,|V(G)|-1$, let $(A_i,B_i)$ be the cut of $G$, where $A_i$ is the prefix of $\sigma$ of length $i$, while $B_i$ is the suffix of $\sigma$ of length $|V(G)|-i$.
Let $\tau_0$ be any optimum-width ordering of $G$. We now inductively construct orderings $\tau_1,\tau_2,\ldots,\tau_{|V(G)|-1}$, as follows: once $\tau_i$ is constructed, we apply the above reordering procedure to $\tau_i$ and cut $(A_{i+1},B_{i+1})$.
This yields a new ordering $\tau_{i+1}$ of optimum width such that the number of $A_{i+1}$-blocks in $\tau_{i+1}$ is at most $\lambda$.
Furthermore, $\tau_{i+1}$ is obtained from $\tau_i$ by reordering $A_{i+1}$- and $B_{i+1}$-blocks in $\tau_i$.
Hence, whenever $X$ is a subset of $A_{i+1}$, then
any $X$-block in $\tau_i$ remains consecutive in $\tau_{i+1}$, as it is contained in one $A_{i+1}$-block in $\tau_i$ that is moved as a whole in the construction of $\tau_{i+1}$.
Consequently, if for all $j\leq i$ we had that the number of $A_j$-blocks in $\tau_i$ is at most $\lambda$, then this property is also satisfied in $\tau_{i+1}$.
It is now clear that a straightforward induction yields the following invariant: for each $j\leq i$, the number of $A_j$-blocks in $\tau_i$ is at most $\lambda$.
Therefore $\tau=\tau_{|V(G)|-1}$ gives an ordering with the claimed properties.
\end{proof}
\paragraph{Bucket profiles.}
We now define a refinement of the widths of the buckets of a bucket interface as well as a refinement of the notion
of bucket interfaces. They are used in the dynamic programming algorithm of Lemma~\ref{lem:dynamicCtw}.
\begin{definition}
Let $(G,\bar{x})$ be a $k$-boundaried graph and let $S = \{x_1,\dots,x_k,x_1',\dots,x_k'\}\subseteq V(G^*)$.
Let now $\sigma$ be an ordering of $V(G^*)$ and $T$ be an $\ell$-bucketing of $\sigma$.
For every bucket $T^{-1}(i)$, $i \in [\ell]$, let $T^{-1}(i)\cap S=\{v_{1},v_{2},\dots,v_{p}\}$ for some $v_{1}<_{\sigma} v_{2}<_{\sigma} \dots <_{\sigma} v_{p}$; we then define
$$T^{-1}_{j}(i) =
\begin{cases}
\{v\in T^{-1}(i)\colon v<_{\sigma} v_{1} \} & \mbox{for } j=0,\\
\{v\in T^{-1}(i)\colon v_{j}<_{\sigma} v<_{\sigma} v_{j+1} \} & \mbox{for } j\in [p-1],\\
\{v\in T^{-1}(i)\colon v_{p}<_{\sigma} v \} & \mbox{for } j=p.
\end{cases}$$
Let also $\mathtt{cuts}(G,\sigma,T,i,j)$ be the family of $\sigma$-cuts containing on one side all vertices
appearing in $\sigma$ up to $v_{j}$ (or, if $j=0$, all vertices of buckets appearing before
bucket $i$) together with a prefix (in $\sigma$) of $T^{-1}_{j}(i)$. For an ordering $\sigma$ of the vertices of a graph $G$,
define the \emph{width of the $j$-th segment} $T^{-1}_{j}(i)$ of the bucket $i$, $i\in[\ell]$, $j\in [0,p]$,
as the maximum width of any cut in the family $\mathtt{cuts}(G,\sigma,T,i,j)$.
Formally,
\begin{eqnarray*}
\mathtt{cuts}(G,\sigma,T,i,j) & = & \left\{ \left(T^{-1}(\{1,\dots,i-1\})\cup L, T^{-1}(\{i+1,\dots,\ell\})\cup R\right) \colon \right. \\
& &~~\left. (L,R)\mbox{ is a }\sigma\mbox{-cut of }T^{-1}(i)\mbox{ with }v_j\in L\mbox{ and }v_{j+1}\in R\right\},\\
\mathtt{width}(G,\sigma,T,i,j) & = & \max \left\{\, \left|E_G(L,R)\right|\ \colon\ (L,R) \in \mathtt{cuts}(G,\sigma,T,i,j)\, \right\}.
\end{eqnarray*}
\end{definition} %
\noindent We also need to refine the notion of a $(k,\ell)$-bucket interface.
\begin{definition}\label{def:profile}
For $k,\ell\in\mathbb{N}$, a \emph{($k$,$\ell$)-bucket profile} consists of functions:
\begin{itemize}
\item $b,b': [k] \to [\ell]$ identifying the buckets which contain $x_{i}$ and $x_{i}'$, respectively,
\item $p,p':[k]\to [k]$ encoding the relative order of the vertices $x_{i}$, respectively $x_{i}'$, inside a bucket,
\item $\nu: [\ell]\times [0,k] \to [0,k]$ recording the widths of the segments of the buckets defined by the vertices $x_{i}$ and $x_{i}'$.
\end{itemize}
A $k$-boundaried graph $\mathbf{G}$ \emph{conforms} with a $(k,\ell)$-bucket profile,
if there exists an ordering $\sigma$ of the vertices of $G^{*}$ and an $\ell$-bucketing $T$ such that:
%
\begin{itemize}
\item $T(v)$ is odd for $v\in V(G)$ and even for $v\in \{x_{1}',\dots,x_k'\}$,
\item $T(x_{i}) = b(i)$ and $T(x_{i}') = b'(i)$, for each $i\in [k]$,
\item $p(i)<p(j)$, if $b(i)=b(j)$ and $x_{i}<_{\sigma} x_{j}$, and $p'(i)<p'(j)$ if $b'(i)=b'(j)$ and $x_{i}'<_{\sigma} x_{j}'$,
\item $\mathtt{width}(G, \sigma|_{G}, T|_{G}, j,s) = \nu(j,s)$, for each $j\in [\ell]$ and $s\in [0,k]$.
\end{itemize}
\end{definition}
Since the $2k$ vertices $x_{1},\ldots,x_{k},x_{1}',\ldots,x_{k}'$ split the $\ell$ buckets defined by $T$ into at most $\ell+2k$ segments in total, it follows that:
\begin{observation}\label{obs:prflsze}
For any pair $(k,\ell)$ of positive integers, there is a set of at most $$2^{2k(\log\ell +\log k)+(\ell+2k)\log(k+1)}$$ $(k,\ell)$-bucket profiles that
a $k$-boundaried graph $\mathbf{G}$ can possibly conform with, and this set can be constructed in time polynomial in its size.
\end{observation}
The $(k,\ell)$-bucket profiles that Observation~\ref{obs:prflsze} refers to will be called {\em{valid}}.
By making use of these two notions we ensure that we will be able to update the widths of each bucket every time
a new vertex is processed by the dynamic programming algorithm. We are now ready to prove Lemma~\ref{lem:dynamicCtw}.
\begin{lemma}\label{lem:dynamicCtw} Let $r\in\ensuremath{\mathbb{N}}^+$.
Given a graph $G$ and an ordering $\sigma$ of its vertices with $\ensuremath{\mathbf{cw}}\xspace_{\sigma}(G)\leq r$,
an ordering $\tau$ of the vertices of $G$ with $\ensuremath{\mathbf{cw}}\xspace_{\tau}(G)=\ensuremath{\mathbf{cw}}\xspace(G)$ can be computed
in time $2^{\ensuremath{\mathcal{O}}(r^2 \log r)} \cdot |V(G)|$.
\end{lemma}
\begin{proof}
The algorithm attempts to compute an ordering of width $k$ for consecutive $k=0,1,2,\ldots$.
The first value of $k$ for which the algorithm succeeds is equal to the value of the cutwidth, and then the constructed ordering may be
returned. Since there is an ordering of width $r$, we will always eventually succeed for some $k\leq r$, which implies that we will make at most $r+1$ iterations.
Hence, from now on we may assume that we know the target width $k\leq r$ for which we try to construct an ordering.
Given a graph $G$ and an ordering $\sigma$ of its vertices with $\ensuremath{\mathbf{cw}}\xspace_{\sigma}(G)\leq r$ we
denote by $G_{w}$ the graph induced by the vertices of the prefix of $\sigma$ of length $w$.
Then we naturally define the boundaried graph $\mathbf{G}_{w}$, where we introduce a boundary vertex $x_i$
for each edge $e_i$ of the cut $E_G(V(G_{w}),V(G)\setminus V(G_{w}))$. Note that this cut has at most $r$ edges.
By Lemma~\ref{lem:dpalgcor}, we know that there is an optimum-width ordering $\tau$ such that every prefix $V(G_w)$ of $\sigma$ has at most $\ell$ blocks in $\tau$.
Our dynamic programming algorithm will simply inductively reconstruct all $(k,\ell)$-bucket profiles that may correspond to $V(G_w)$-blocks in $\tau$, for each consecutive $w$ in the ordering $\sigma$, eventually reconstructing $\tau$, if $\ensuremath{\mathbf{cw}}\xspace_\tau(G)\leq k$.
We now construct an auxiliary directed graph $D$ that will model states and transitions of our dynamic programming algorithm.
Let $\ell=4rk+2k+8r+4$.
First, for every $w\in [0,|V(G)|]$ and every valid $(k,\ell)$-bucket profile $P$, we add
a vertex $(w,P)$ to $D$.
Thus, by Observation~\ref{obs:prflsze}, the digraph $D$ has at most $$2^{2k(\log\ell+\log k)+(\ell+2k)\log(k+1)}\cdot (|V(G)|+1)=2^{\ensuremath{\mathcal{O}}(r^{2}\log r)}\cdot |V(G)|$$ vertices.
We add an edge $((w,P),(w+1,P'))$, whenever the $(k,\ell)$-bucket profile
$P$ can be {\em{expanded}} to the $(k,\ell)$-bucket profile $P'$ in the sense that we explain now.
We describe which bucket profiles $P'$ expand $P$ by guessing where the new vertex would land in the bucket profile $P$, assuming that $\mathbf{G}_w$ conforms to $P$.
After the guess is made, the updated profile $P$ becomes the expanded profile $P'$.
Different guesses lead to different profiles $P'$ which extend $P$; this corresponds to different ways in which the construction of the optimum ordering can continue.
As describing the details of this expansion relation is a routine task, we prefer to keep the description rather informal, and leave working out all the formal details to the reader.
Let $v_{w+1}$ be the $(w+1)$-st vertex in the ordering $\sigma$, that is, $v_{w+1}\in V(G_{w+1})\setminus V(G_{w})$.
We construct (by guessing) a $(k,\ell)$-bucket profile $P'$ from the $(k,\ell)$-bucket profile $P$ in the following
way.
First, we guess an even bucket of $P$ to place each one of the vertices in $V(G_{w+1}^{*})\setminus V(G_{w}^{*})$:
the new vertices of the extension that correspond to new edges of the cut $E_G(V(G_{w+1}),V(G)\setminus V(G_{w+1}))$ that are incident to $v_{w+1}$.
Notice that each bucket contains, at any moment, at most $r$ vertices. Therefore, we have at most $r+1$ possible choices
about where each vertex will land in each bucket (including the placing in the order, as indicated by the function $p'(\cdot)$).
Notice also that there are at most $r+1$ vertices in $V(G_{w+1}^{*})\setminus V(G_{w}^{*})$. Therefore we have at most
$(\ell (r+1))^{r+1}$ options for this guess.
Next, we choose the place $v_{w+1}$ is going to be put in.
If $v_{w+1}$ is an endpoint of an edge from the cut $E_G(V(G_w),V(G)\setminus V(G_w))$, then this place is already indicated by functions $b'(\cdot)$ and $p'(\cdot)$ in the bucket profile $P$;
if there are multiple edges in the cut $E_G(V(G_w),V(G)\setminus V(G_w))$ that have $v_{w+1}$ as an endpoint, then all of them must be placed next to each other in the same even bucket (otherwise $P$ has no extension).
Otherwise, if $v_{w+1}$ is not an endpoint of an edge from $E_G(V(G_w),V(G)\setminus V(G_w))$, we guess the placing of $v_{w+1}$ by guessing
an even bucket (one of at most $\ell+1$ options) together with a segment between two consecutive extension vertices in this bucket (one of at most $r+1$ options).
The placing of $v_{w+1}$ may lead to one of three different scenarios; we again guess which one applies.
First, $v_{w+1}$ can establish a new odd bucket and split the even bucket into which it was put into two new even buckets, one
on the left and one on the right of the new odd bucket containing $v_{w+1}$; the other extension vertices placed in this bucket are split accordingly.
Second, $v_{w+1}$ can be present at the leftmost or rightmost end of the even bucket it is placed in, so it gets merged into the neighboring odd bucket.
Finally, if the even bucket in which $v_{w+1}$ is placed did not contain any other extension vertices of $G_w^*$, then $v_{w+1}$ can be declared to be the last vertex placed in this bucket,
in which case we merge it together with both neighboring odd buckets.
In these scenarios, whenever the extended profile turns out to have more than $\ell$ buckets, we discard this option.
Having guessed how the placing of $v_{w+1}$ will affect the configuration of buckets, we proceed with updating the sizes of cuts, as indicated by the function $\nu(\cdot)$.
For this, we first examine all the edges of the cut $E_G(V(G_w),V(G)\setminus V(G_w))$ that have $v_{w+1}$ as an endpoint. These edges did not contribute to the values of $\nu(\cdot)$ in the bucket
profile $P$, but should contribute in $P'$. Note that given the placement of $v_{w+1}$, for each such edge we exactly see over which segments this edge ``flies over'', and therefore we can update the values
of $\nu(\cdot)$ for these segments by incrementing them by one. Finally, when $v_{w+1}$ gets merged into a neighboring odd bucket (or into two of them), we may also need to take into account one more cut in
the value of $\nu(\cdot)$ for the last/first segment of this bucket: the one between $v_{w+1}$ and the vertices placed in this bucket.
It is easy to see that from the value of $\nu(\cdot)$ for the segment
in which $v_{w+1}$ is placed, and the exact placement of the endpoints of all the boundary edges, we can deduce the exact size of this cut. Hence, the relevant value of $\nu(\cdot)$ can be efficiently updated by
taking the maximum of the old value and the deduced size of the cut. We update $\nu$ in a similar fashion when $v_{w+1}$ merges with both neighboring odd buckets.
If at any point any of the values of $\nu(\cdot)$ exceeds $k$, we discard this guess.
This concludes the definition of the extension. For every $(k,\ell)$-bucket profile $P$ and every $(k,\ell)$-bucket profile $P'$ that extends it, we add to $D$ an arc from $(w,P)$ to $(w+1,P')$.
It is easy to see from the description above that, given $P$ and $P'$, it can be verified in time polynomial in $r$ whether such an arc should be added.
Finally, in the graph $D$ we determine using, say, depth-first search, whether there is a directed path from node $(0,P_\emptyset)$ to node $(|V(G)|,P_{\rm full})$, where $P_\emptyset$ is an empty bucket profile
and $P_{\rm full}$ is a bucket profile containing just one odd bucket.
It is clear from the construction that if we find such a path, then by applying operations recorded along such a path we obtain an ordering of the vertices of $G$ of width at most $k$.
On the other hand, provided $k=\ensuremath{\mathbf{cw}}\xspace(G)$, by Lemma~\ref{lem:dpalgcor} we know that there is always an optimum-width ordering $\tau$ such that every prefix of $\sigma$ has at most $\ell$ blocks in $\tau$.
Then the $(k,\ell)$-bucket profiles naturally defined by the prefixes of $\sigma$ in $\tau$ define a path from $(0,P_\emptyset)$ to $(|V(G)|,P_{\rm full})$ in $D$.
The graph $D$ has $2^{\ensuremath{\mathcal{O}}(r^{2}\log r)}\cdot|V(G)|$ vertices and arcs, and the depth-first search runs in time linear in its size.
It is also trivial to reconstruct the optimum-width ordering of the vertices of $G$ from the obtained path in linear time.
This yields the promised running time bounds.
\end{proof}
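For concreteness, the search over the auxiliary digraph $D$ can be organized layer by layer, since every arc of $D$ goes from layer $w$ to layer $w+1$ (the depth-first search used above visits the same nodes). The following Python sketch treats the expansion relation as an abstract callback \texttt{expansions(w, P)} enumerating all profiles $P'$ with an arc from $(w,P)$ to $(w+1,P')$; profiles are assumed to be hashable objects, and the function names are ours.
\begin{verbatim}
def find_width_k_ordering(n, P_empty, P_full, expansions):
    # Searches for a path from (0, P_empty) to (n, P_full) in the
    # layered digraph D and returns the sequence of nodes on it.
    parent = {(0, P_empty): None}
    frontier = [P_empty]
    for w in range(n):
        next_frontier = []
        for P in frontier:
            for P2 in expansions(w, P):
                if (w + 1, P2) not in parent:
                    parent[(w + 1, P2)] = (w, P)
                    next_frontier.append(P2)
        frontier = next_frontier
    if (n, P_full) not in parent:
        return None  # no ordering of width at most k exists
    path, node = [], (n, P_full)
    while node is not None:  # walk the parent pointers back
        path.append(node)
        node = parent[node]
    return path[::-1]
\end{verbatim}
From the returned sequence of profiles, the ordering itself is recovered by replaying the recorded placement decisions, as described in the proof.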
Having the algorithm of Lemma~\ref{lem:dynamicCtw}, a standard application of the iterative compression technique immediately yields a $2^{\ensuremath{\mathcal{O}}(k^2 \log k)} \cdot n^2$ time
algorithm for computing cutwidth, as sketched in Section~\ref{sec:intro}.
Simply add the vertices of $G$ one by one, and apply the algorithm of Lemma~\ref{lem:dynamicCtw} at each step.
However, we can make the dependence on $n$ linear by adapting the approach of Bodlaender~\cite{Bodlaender96}; more precisely, we make bigger steps.
Such a big step consists of finding a graph $H$ that can be immersed in the input graph $G$, which is smaller by a constant fraction, but whose cutwidth is not much smaller.
This is formalised in Lemma~\ref{lem:reduce}. For its proof
we need the following definition and a known result about obstacles to small cutwidth.
\begin{definition}
A {\em perfect binary tree} is a rooted binary tree in which all interior nodes have two children and all leaves have the same distance from the root.
The {\em height} of a perfect binary tree is the distance between its root and one of its leaves.
\end{definition}
\begin{lemma}[\!\!\cite{Takahashi94-Mi,KORACH199397,GOVINDAN2001189}]\label{lem:ekjgherk}
If $T$ is a perfect binary tree of height $2k$, then $\ensuremath{\mathbf{cw}}\xspace(T)\geq k$.
\end{lemma}
\begin{lemma}\label{lem:reduce}
There is an algorithm that given a positive integer $k$ and a graph $G$, works in time $\ensuremath{\mathcal{O}}(k^2 \cdot |V(G)|)$ and either concludes that $\ensuremath{\mathbf{cw}}\xspace(G) > k$,
or finds a graph $H$ immersed in $G$ such that $|E(H)| \leq |E(G)| \cdot (1-1/(2k+1)^{4(k+1)+3})$ and $\ensuremath{\mathbf{cw}}\xspace(G) \leq 2\ensuremath{\mathbf{cw}}\xspace(H)$.
Furthermore, in the latter case, given an ordering $\sigma$ of the vertices of $H$, an ordering $\tau$ of the vertices of $G$ with $\ensuremath{\mathbf{cw}}\xspace_{\tau}(G)\leq 2\ensuremath{\mathbf{cw}}\xspace_{\sigma}(H)$
can be computed in $\ensuremath{\mathcal{O}}(|V(G)|)$ time.
\end{lemma}
\begin{proof}
Without loss of generality we assume that $G$ is connected, because we can apply the algorithm on the connected components of $G$ separately and then take the disjoint union of the results.
Observe first that we may assume that every vertex in $G$ is incident to at most $2k$ edges, as otherwise, we could immediately conclude that $\ensuremath{\mathbf{cw}}\xspace(G)>k$.
This also implies that every vertex in $G$ has at most $2k$ neighbors; by $N(v)$ we denote the set of neighbors of a vertex $v$,
and $N(X)=(\bigcup_{v\in X} N(v))\setminus X$ for a vertex subset $X$.
Let $G'$ be the graph obtained from $G$ by exhaustively dissolving any vertices of degree $2$ whose neighbors are different.
That is, having such a vertex $v$, we delete it from the graph and replace the two edges incident to it with a fresh edge between its neighbors, and we proceed doing this as long as there are
such vertices in the graph.
Clearly, the eventually obtained graph $G'$ can be immersed in $G$, we have $|E(G')|\leq |E(G)|$, the degree of every vertex in $G'$ is the same as its degree in $G$, and $\ensuremath{\mathbf{cw}}\xspace(G') \leq \ensuremath{\mathbf{cw}}\xspace(G)$.
However, observe that any ordering of the vertices of $G'$ can be turned into an ordering of the vertices of $G$ with the same width by placing each dissolved vertex in any place
between its two original neighbors. Thus, $\ensuremath{\mathbf{cw}}\xspace(G')=\ensuremath{\mathbf{cw}}\xspace(G)$.
Moreover, $G'$ can be constructed in linear time by inspecting, in any order, all the vertices that have degree $2$ in the original graph $G$.
It is also easy to see that, given an ordering of vertices of $G'$, one can reconstruct in linear time an ordering of $G$ of at most the same width.
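As an illustration, exhaustive dissolution can be implemented as follows; this is a Python sketch under the assumption that the multigraph is stored as a dictionary mapping each vertex to a \texttt{Counter} of neighbor multiplicities and contains no self-loops.
\begin{verbatim}
from collections import Counter, deque

def dissolve_degree_two(adj):
    # adj: dict mapping v -> Counter of neighbor multiplicities.
    # Exhaustively dissolves degree-2 vertices that have two
    # distinct neighbors; modifies adj in place and returns it.
    queue = deque(adj)
    while queue:
        v = queue.popleft()
        if v not in adj:
            continue  # v was already dissolved
        if sum(adj[v].values()) == 2 and len(adj[v]) == 2:
            a, b = adj[v]
            del adj[v]
            del adj[a][v]
            del adj[b][v]
            adj[a][b] += 1  # the fresh edge replacing a-v-b
            adj[b][a] += 1
            queue.append(a)  # conservatively re-examine a and b;
            queue.append(b)  # their degrees are unchanged
    return adj
\end{verbatim}
Every dissolution enqueues only two vertices and removes one, so the total work is linear, matching the discussion above.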
Altogether, it is now enough to either conclude that $\ensuremath{\mathbf{cw}}\xspace(G')>k$ or
find a graph $H$ immersed in $G'$ such that $$|E(H)| \leq |E(G')| \cdot (1-1/(2k+1)^{4(k+1)+3})$$ and $\ensuremath{\mathbf{cw}}\xspace(G') \leq 2\ensuremath{\mathbf{cw}}\xspace(H)$.
Therefore, from now on we may assume that if the graph $G'$ contains a vertex that is incident to two edges then this vertex is incident to an edge of multiplicity 2.
Let $V_{1}$ be the set of vertices of degree 1 in $G'$.
We consider two cases depending on the size of $V_{1}$.\\
\noindent {\bf Case 1.} $|V_{1}| \geq |E(G')|/(2k+1)^{4(k+1)+2}$.
Notice first that $V_{1}\subseteq N(N(V_{1}))$, and recall that every vertex in $G'$ is incident to at most $2k$
edges and therefore has at most $2k$ neighbors.
It follows then that $|V_{1}| \leq 2k \cdot |N(V_{1})|$ and hence $|N(V_{1})| \geq |E(G')|/(2k+1)^{4(k+1)+3}$.
Let $H$ be the graph obtained from $G'$ by removing, for each vertex in $N(V_{1})$, one of its neighbors in $V_1$.
Then $|E(H)| \leq |E(G')| \cdot (1 - 1/(2k+1)^{4(k+1)+3})$ and $H$ is immersed in $G'$ (as it is an induced subgraph). Hence, $H$ is also immersed in $G$.
Furthermore, let $\sigma$ be any ordering of the vertices of $H$. Then, we can obtain an ordering of the vertices of $G'$ by placing each deleted vertex next to its original neighbors.
Notice that this placement increases the width of the ordering by at most 1 in total, since each reintroduced edge crosses only the single cut between the deleted vertex and its neighbor; thus the width grows by a multiplicative factor of at most 2.
As we already showed how to obtain an ordering of $V(G)$ from a given ordering of $V(G')$, the lemma follows for the case where
$|V_{1}| \geq |E(G')|/(2k+1)^{4(k+1)+2}$.\\
\noindent {\bf Case 2.} $|V_{1}| < |E(G')|/(2k+1)^{4(k+1)+2}$.
For every $v\in V(G')$ and every positive integer $s$, we define
$B_{s}(v)$ to be the ball of radius $s$ around $v$, that is, the set of vertices at distance at most $s$ from $v$ in $G'$.
Recall that every vertex of $G'$ has at most $2k$ neighbors and observe then that $|B_{s}(v)| \leq (2k+1)^{s}$.
We construct a set of vertices $v_{1}, v_{2}, \dots, v_{\ell} \in V(G')$ whose pairwise distance is greater than $4(k+1)$ in the following greedy way.
Having chosen $v_1, \dots, v_i$, if $B_{4(k+1)}(v_1) \cup \dots \cup B_{4(k+1)}(v_{i})\neq V(G')$ then let $v_{i+1}$ be any vertex outside of
$B_{4(k+1)}(v_1) \cup \dots \cup B_{4(k+1)}(v_{i})$. If such a vertex does not exist, we stop by putting $\ell=i$ and consider the set $v_{1},v_{2},\dots,v_{\ell}$.
Observe here that we can calculate $B_{4(k+1)}(v_{i})$ by breadth-first search in $\ensuremath{\mathcal{O}}((2k+1)^{4(k+1)+1})$ time, by stopping the search at depth $4(k+1)$.
However, note that we do not need to revisit a previously visited vertex unless we reach it with fewer steps. That is, starting with $i=0$, we mark which vertices we have already visited (the set $B_{4(k+1)}(v_1) \cup \dots \cup B_{4(k+1)}(v_i)$) and remember the minimum distance from $\{v_1,\dots,v_i\}$ to each previously visited vertex. Considering vertices in any order, we let $v_{i+1}$ be the first one not yet visited. We then mark the new ball of radius $4(k+1)$ around it, but we explore a previously visited vertex only when its minimum distance strictly decreases after adding $v_{i+1}$. This way, we explore each vertex at most $4(k+1)$ times, as this is an upper bound on the minimum distance of any vertex when it is first visited. Hence the sequence $v_1,\dots,v_\ell$ can be computed in $\ensuremath{\mathcal{O}}(k^2 |V(G)|)$ time.
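This pruned search can be phrased as an incremental multi-source breadth-first search that maintains, for every vertex, its current distance to the set of chosen centers, truncated at the radius. The following Python sketch (our own naming; \texttt{adj} maps each vertex to an iterable of its neighbors) implements the greedy selection.
\begin{verbatim}
from collections import deque

def greedy_net(adj, radius):
    # Greedily picks centers whose pairwise distance exceeds
    # `radius`, so that the balls of radius `radius` around the
    # centers cover all vertices.  dist[v] is the distance from v
    # to the current centers, truncated at `radius`; a vertex is
    # re-explored only when this value strictly decreases, hence
    # it is processed at most `radius` + 1 times overall.
    INF = float("inf")
    dist = {v: INF for v in adj}
    centers = []
    for v in adj:
        if dist[v] < INF:
            continue  # v is already covered by some ball
        centers.append(v)
        dist[v] = 0
        queue = deque([v])
        while queue:
            u = queue.popleft()
            if dist[u] == radius:
                continue  # do not expand beyond the radius
            for w in adj[u]:
                if dist[u] + 1 < dist[w]:
                    dist[w] = dist[u] + 1
                    queue.append(w)
    return centers
\end{verbatim}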
We now estimate the length $\ell$ of the sequence.
Recall that for every $i\in[\ell]$, $|B_{4(k+1)}(v_i)|\leq (2k+1)^{4(k+1)}$ and that $V(G')=\bigcup_{i\in [\ell]} B_{4(k+1)}(v_{i})$.
From the above and the fact that $|E(G')|\leq 2k\cdot |V(G')|$ (as every vertex of $G'$ is incident to at most $2k$ edges of $G'$),
it follows that $$\ell\geq |V(G')|/(2k+1)^{4(k+1)} \geq |E(G')|/(2k+1)^{4(k+1)+1}.$$
By construction, the distance between $v_{i}$ and $v_{j}$ is greater than $4(k+1)$, for distinct $i,j \in [\ell]$.
Therefore, the balls $B_{2(k+1)}(v_{1}), \dots, B_{2(k+1)}(v_{\ell})$ are vertex-disjoint.
Moreover, since we have that $|V_{1}| < |E(G')|/(2k+1)^{4(k+1)+2}$, at most $|E(G')|/(2k+1)^{4(k+1)+2}$ of those balls contain a vertex of degree 1.
Therefore, the remaining $\ell - |E(G')|/(2k+1)^{4(k+1)+2}$ balls are disjoint with $V_{1}$.
Let $I\subseteq [\ell]$ be the set of indices for which the balls $B_{2(k+1)}(v_{i})$, $i\in I$, are disjoint from $V_{1}$.
Observe that
$$|I|\geq \ell - |E(G')|/(2k+1)^{4(k+1)+2} \geq |E(G')|/(2k+1)^{4(k+1)+2}.$$
\begin{claim} In time $\ensuremath{\mathcal{O}}(|E(G')|)$ we can either conclude that $\ensuremath{\mathbf{cw}}\xspace(G')>k$, or for each $i\in I$ find a cycle in $G'$ passing only through the vertices of the ball $B_{2(k+1)}(v_i)$.
\end{claim}
\begin{proof}
Suppose for some $i\in I$, $B_{2(k+1)}(v_i)$ does not contain a cycle.
We will prove that every vertex in $G'[B_{2(k+1)}(v_i)]$ has degree at least 3 in $G'$, and that every edge appears with multiplicity~1.
Notice first that every edge of the graph $G'[B_{2(k+1)}(v_{i})]$ has multiplicity 1, as otherwise an edge with multiplicity at least 2 would form a cycle,
a contradiction. Notice also that $B_{2(k+1)}(v_{i})$ does not have any vertex that has degree 2 in $G'$.
Indeed, recall that by the construction of the graph $G'$ any vertex of degree 2 is incident only to one edge of multiplicity 2, which is again a
contradiction. Moreover, by the choice of $i\in I$, we obtain that $B_{2(k+1)}(v_{i})\cap V_{1}=\emptyset$ and therefore, $G'[B_{2(k+1)}(v_{i})]$
does not have any vertex that has degree 1 in $G'$.
We conclude that every vertex in $G'[B_{2(k+1)}(v_i)]$ has degree at least 3 in $G'$, and every edge appears with multiplicity~1.
Recall that the subgraph of $G'$ induced by $B_{2(k+1)}(v_i)$ contains the full breadth-first search tree of vertices at distance at most $2(k+1)$ from $v_{i}$.
If $G'[B_{2(k+1)}(v_i)]$ did not contain any cycle, then it would be equal to this breadth-first search tree, and in this tree all vertices except possibly the last layer would have degrees at least 3.
Hence, $G'$ would contain as a subgraph a perfect binary tree of height $2(k+1)$.
From Lemma~\ref{lem:ekjgherk}, this tree has cutwidth at least $k+1$.
The algorithm can thus check (by breadth-first search) for a cycle in the subgraph induced by $B_{2(k+1)}(v_i)$.
If it does not find any such cycle it immediately concludes that $\ensuremath{\mathbf{cw}}\xspace(G)=\ensuremath{\mathbf{cw}}\xspace(G') > k$.
If for every $i\in I$, the breadth-first search in $G'[B_{2(k+1)}(v_{i})]$ finds a cycle, then the algorithm obtained, in total time $\ensuremath{\mathcal{O}}(|E(G')|)$, a set of at least
$|I|\geq |E(G')|/(2k+1)^{4(k+1)+2}$ vertex-disjoint (and hence edge-disjoint) cycles.
\renewcommand{\qed}{\cqedsymbol}\end{proof}
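The cycle test inside a single ball can be implemented as follows (again a Python sketch with \texttt{adj} mapping each vertex to a \texttt{Counter} of neighbor multiplicities and no self-loops). It exploits the fact that the subgraph induced by a ball is connected, so it contains a cycle if and only if it has a parallel edge or at least as many edges as vertices.
\begin{verbatim}
from collections import Counter, deque

def ball(adj, center, radius):
    # vertices at distance at most `radius` from `center`
    dist = {center: 0}
    queue = deque([center])
    while queue:
        u = queue.popleft()
        if dist[u] == radius:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

def has_cycle_in_ball(adj, center, radius):
    B = ball(adj, center, radius)
    # a parallel edge inside B is already a cycle of length 2
    if any(adj[u][w] >= 2 for u in B for w in adj[u] if w in B):
        return True
    # all remaining multiplicities inside B are 1; the induced
    # subgraph is connected, hence acyclic iff it is a tree
    m = sum(1 for u in B for w in adj[u] if w in B) // 2
    return m >= len(B)
\end{verbatim}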
Let us assume that the algorithm has now found a set ${\cal C}$ of at least $|E(G')|/(2k+1)^{4(k+1)+2}$ edge-disjoint cycles and
let $H$ be the subgraph obtained from $G'$ by removing one, arbitrarily chosen, edge $e_C$ from each cycle $C\in {\cal C}$.
Then $H$ can be immersed in $G'$ and $|E(H)| \leq |E(G')| \cdot (1-1/(2k+1)^{4(k+1)+2})$.
To complete the proof of the lemma we will prove that if $\sigma$ is any ordering of the vertices of $H$ then $\sigma$ is also an ordering of the vertices of $G'$ such that
$\ensuremath{\mathbf{cw}}\xspace_{\sigma}(G')\leq 2\ensuremath{\mathbf{cw}}\xspace_{\sigma}(H)$.
Notice that by reintroducing an edge $e_C$ of $G'$ to $H$ we increase the width of the $\sigma$-cuts separating its endpoints by exactly 1.
Observe also that since $e_C$ belongs to the cycle $C$, the rest of the cycle forms a path $P_C$ in $H$ that connects the endpoints of $e_C$.
Therefore, each of the $\sigma$-cuts separating the endpoints of $e_C$ has to contain at least one edge of $P_C$.
Since for different edges $e_C$, for $C\in \mathcal{C}$, the corresponding paths $P_C$ are pairwise edge-disjoint and they are present in $H$,
it follows that the size of each $\sigma$-cut in $G'$ is at most twice the size of this $\sigma$-cut in $H$.
Therefore $\ensuremath{\mathbf{cw}}\xspace_{\sigma}(G')\leq 2 \ensuremath{\mathbf{cw}}\xspace_{\sigma}(H)$.
Thus, $H$ can be returned, concluding the algorithm.
\end{proof}
We are now ready to put all the pieces together.
\begin{proof}[Proof of Theorem~\ref{thm:algo}]
Given an $n$-vertex graph $G$ and an integer $k$, one can in time $2^{\ensuremath{\mathcal{O}}(k^2\log k)}\cdot n$ either conclude that $\ensuremath{\mathbf{cw}}\xspace(G)>k$,
or output an ordering of $G$ of width at most $k$.
The proof follows the same recursive Reduction\&Compression scheme as the algorithm of Bodlaender~\cite{Bodlaender96}.
By applying Lemma~\ref{lem:reduce}, we obtain a significantly smaller immersion $H$, and we recurse on $H$. This recursive call either concludes that $\ensuremath{\mathbf{cw}}\xspace(H)>k$, which implies $\ensuremath{\mathbf{cw}}\xspace(G)>k$,
or it produces an ordering of $H$ of optimum width $\ensuremath{\mathbf{cw}}\xspace(H)\leq k$. This ordering can be lifted,
using Lemma~\ref{lem:reduce} again, to an ordering of $G$ of width $\leq 2k$. Given this ordering, we apply the dynamic programming procedure of Lemma~\ref{lem:dynamicCtw} to
construct an optimum ordering of $G$ in time $2^{\ensuremath{\mathcal{O}}(k^2\log k)}\cdot |V(G)|$.
Since at each recursion step the number of edges of the graph drops by a multiplicative factor of at least $1/(2k+1)^{4(k+1)+3}$, we see that the graph $G_i$ at level $i$ of the recursion will have at most
$(1-1/(2k+1)^{4(k+1)+3})^i\cdot |E(G)|$ edges. Hence, the total work used by the algorithm is bounded by the sum of a geometric series:
\begin{eqnarray}
\sum_{i=0}^{\infty}\, 2^{\ensuremath{\mathcal{O}}(k^2 \log k)} \cdot |E(G_{i})| & \leq & 2^{\ensuremath{\mathcal{O}}(k^2 \log k)} \cdot |E(G)| \cdot \sum_{i=0}^{\infty}\, (1-1/(2k+1)^{4k+7})^{i} \nonumber\\
& = & 2^{\ensuremath{\mathcal{O}}(k^2 \log k)} \cdot |E(G)| \cdot (2k+1)^{4k+7} \nonumber\\
& = & 2^{\ensuremath{\mathcal{O}}(k^2 \log k)} \cdot |E(G)|.\nonumber
\end{eqnarray}
\end{proof}
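Schematically, the whole algorithm is the following recursion (a Python sketch; \texttt{reduce\_step} and \texttt{dp\_step} stand for the algorithms of Lemma~\ref{lem:reduce} and Lemma~\ref{lem:dynamicCtw} and are treated here as black boxes, and the base-case threshold is arbitrary).
\begin{verbatim}
from itertools import permutations

def width(n, edges, sigma):
    pos = {v: i for i, v in enumerate(sigma)}
    return max((sum(1 for (u, v) in edges
                    if min(pos[u], pos[v]) <= i < max(pos[u], pos[v]))
                for i in range(n - 1)), default=0)

def cutwidth_ordering(n, edges, k, reduce_step, dp_step):
    # Returns an optimum-width ordering if cw(G) <= k, else None.
    if n <= 4:  # base case: brute force over all orderings
        best = min(permutations(range(n)),
                   key=lambda s: width(n, edges, list(s)))
        return list(best) if width(n, edges, list(best)) <= k else None
    res = reduce_step(n, edges, k)
    if res is None:
        return None                     # cw(G) > k
    n_H, edges_H, lift = res            # H immersed in G, plus lifting map
    tau_H = cutwidth_ordering(n_H, edges_H, k, reduce_step, dp_step)
    if tau_H is None:
        return None                     # cw(H) > k implies cw(G) > k
    sigma = lift(tau_H)                 # ordering of G of width <= 2k
    return dp_step(n, edges, sigma, k)  # compress to optimum width
\end{verbatim}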
\pagebreak
\section{Obstructions to edge-removal distance to cutwidth }\label{sec:remddist}
Throughout this section, by $\ensuremath{\mathcal{O}}_k(w)$ we mean a quantity bounded by $c_k\cdot w+d_k$, for some constants $c_k,d_k$ depending on $k$ only.
Given a graph $G$ and a $k\in\ensuremath{\mathbb{N}}$, we define
the parameter ${\bf dcw}_k(G)$
as the minimum number of edges that can be deleted from
$G$ so that the resulting graph has cutwidth at most $k$ (so ${\bf dcw}_k(G)$ fits in the wider category of ``graph modification parameters'').
In other words:
$${\bf dcw}_k(G)=\min\{ |F| \colon F\subseteq E(G) \mbox{~and~} \ensuremath{\mathbf{cw}}\xspace(G\setminus F)\leq k\}$$
Let ${\cal C}_{w,k}=\{G\mid {\bf dcw}_k(G)\leq w\}$. Notice that ${\cal C}_{k}={\cal C}_{0,k}$.
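For very small graphs, ${\bf dcw}_k$ can be evaluated directly from the definition, which may help in checking examples by hand. The following brute-force Python sketch is exponential in both the number of vertices and the number of edges and serves only as an illustration.
\begin{verbatim}
from itertools import combinations, permutations

def cutwidth(n, edges):
    # exact cutwidth by trying all orderings; tiny graphs only
    best = len(edges)
    for sigma in permutations(range(n)):
        pos = {v: i for i, v in enumerate(sigma)}
        w = max((sum(1 for (u, v) in edges
                     if min(pos[u], pos[v]) <= i < max(pos[u], pos[v]))
                 for i in range(n - 1)), default=0)
        best = min(best, w)
    return best

def dcw(n, edges, k):
    # minimum number of edge deletions to reach cutwidth <= k
    for w in range(len(edges) + 1):
        for deleted in combinations(range(len(edges)), w):
            rest = [e for i, e in enumerate(edges)
                    if i not in set(deleted)]
            if cutwidth(n, rest) <= k:
                return w
    return len(edges)

# A triangle has cutwidth 2; deleting any one edge leaves a path
# of cutwidth 1, so dcw_1 of the triangle equals 1.
assert dcw(3, [(0, 1), (1, 2), (2, 0)], 1) == 1
\end{verbatim}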
In this section, we provide bounds on the sizes of the obstruction sets
of the class of graphs $G$ with ${\bf dcw}_k(G)\leq w$, for each $k,w\in\ensuremath{\mathbb{N}}$.
Our results are the following.
\begin{theorem}
\label{mainlin}
For every $w,k\in\ensuremath{\mathbb{N}}$,
every graph in ${\bf obs}_{\leq_{\rm si}}({\cal C}_{w,k})$
has $\ensuremath{\mathcal{O}}_k(w)$ vertices.
\end{theorem}
\begin{theorem}
\label{maisnlilowern}
For every $k,w\in\ensuremath{\mathbb{N}}$ where $k\geq 7$,
the set ${\bf obs}_{\leq_{\rm i}}({\cal C}_{w,k})$
contains at least $\binom{3^{k-7} + w + 1}{w+1}$
non-isomorphic graphs.
\end{theorem}
From Observation~\ref{osesimme}, the bounds of
Theorems~\ref{mainlin} and~\ref{maisnlilowern} hold for both immersions and strong immersions.
Given a collection ${\cal H}$ of graphs, we define
the parameter ${\bf aic}_{\cal H}(G)$
as the minimum number of edges whose removal from $G$
creates an ${\cal H}$-immersion free graph, that is, a graph that does not admit any graph from ${\cal H}$ as an immersion.
In both subsections that follow, we need the following observation.
\begin{observation}
\label{obpot}
For every graph $G$ and every $k\in \ensuremath{\mathbb{N}}$, it holds that ${\bf dcw}_k(G)={\bf aic}_{{\bf obs}_{\leq_{\rm i}}({\cal C}_{k})}(G)$.
\end{observation}
We remark that, in recent work of the same set of authors, kernelization algorithms for edge removal problems to immersion-closed classes have been studied.
The following result has been obtained in~\cite{GiannopoulouPTRW2016}:
whenever a finite collection of graphs ${\cal H}$ contains at least one planar subcubic graph, and all graphs from ${\cal H}$ are connected, then the problem
of computing ${\bf aic}_{\cal H}(G)$, parameterized by the target value, admits a linear kernel.
These prerequisites are satisfied for ${\cal H}={\bf obs}_{\leq_{\rm i}}({\cal C}_{k})$, and hence the problem of computing ${\bf dcw}_k(G)$, parameterized by the target value $w$, admits a linear kernel.
The connections between kernelization procedures and upper bounds on minimal obstruction sizes have already been noticed in the literature; see e.g.~\cite{FominLMS12}.
Intuitively, whenever the kernelization rules apply only minor or immersion operations, the kernelization algorithm can be turned
into a proof of an upper bound on the sizes of minimal obstacles for the corresponding order.
Unfortunately, this is not the case for the results of~\cite{GiannopoulouPTRW2016}: the main problem is the lack of linked decompositions for the parameter tree-cut width, which plays the central role there.
Here, the situation is different, as we know that there are always linked orderings of optimum width.
We therefore showcase how to use the linkedness to obtain a linear upper bound on the sizes of obstructions for ${\cal C}_{w,k}$.
The arguments are somewhat similar to those in~\cite{GiannopoulouPTRW2016}: we use the idea of protrusions, adapted to the setting of edge cuts, and we try to replace protrusions with smaller ones having the same behavior.
The main point is that linkedness ensures that the replacement results in an immersion of the original graph.
\subsection{Upper bound on obstruction size}
A \emph{partial $q$-boundaried graph} is a pair $\mathbf{G}=(G,\bar{x})$ where $G$ is a graph and $\bar{x}=(x_{1},\dots,x_q)$
is a $q$-tuple whose entries are either vertices of $G$ or empty slots (indices that do not correspond to vertices of $G$). If $x_{i}$ is an empty slot, we denote it by $x_{i}=\diamond$.
The extension of such $\mathbf{G}$ is defined just as for $q$-boundaried graphs, but we put
$x'_{i}=\diamond'$ iff $x_{i}=\diamond$.
Intuitively, a partial $q$-boundaried graph extends the notion of a boundaried graph by allowing the boundary vertices to carry
indices from a set whose cardinality might be bigger than the size of the boundary.
Let $H$ be a graph and let $(X_{1},X_{2})$ be a cut of $H$ with $q=\delta(X_{1})$.
Let $E_H(X_1,X_2)=\{e_{1},\ldots,e_{q}\}$ where $e_{i}=\{x_{i}^{1},x_{i}^{2}\}$, $i\in[q]$, and such that
$x_i^j\in X_{j}$ for $(i,j)\in[q]\times[2]$.
For $j\in[2]$, we
say that the pair $(X_1,X_2)$ {\em{generates}}
the $q$-boundaried graph ${\bf A}_j=({A}_j,\overline{x}_{j})$ if $A_j=H[X_j]$
and $\overline{x}_{j}=(x_{1}^{j},\ldots,x_{q}^{j})$.
We denote by ${\cal B}_{q,h}$ the collection containing every $q$-boundaried graph that can be generated from some cut $(X_{1},X_{2})$ of some graph $H$ where $|V(H)|+|E(H)|\leq h$ and $q=\delta(X_{1})$.
Moreover, we denote by ${\cal M}_{q,h}$ the set of all partial $q$-boundaried graphs $\mathbf{F}'=(F',\bar{x}')$
such that, for some $\mathbf{F} =(F,\bar{x})$ of $\mathcal{B}_{q,h}$, $F'$ is a subgraph of $F$ and a vertex $x_i$ in $\bar{x}'$ is an empty slot iff
$x_{i}\in V(F)\setminus V(F')$.
In other words, ${\cal M}_{q,h}$ contains all partial $q$-boundaried graphs that can be
generated by a graph whose total number of vertices and edges does not exceed $h$.
We insist that if ${\bf H}=(H,\overline{x})\in{\cal M}_{q,h}$, then the
vertices of $H$ are taken from some fixed repository of $h$ vertices and
that an element $x_{i}$ of $\overline{x}$ is either an empty slot (i.e., $x_{i}=\diamond$) or
the $i$-th vertex of some predetermined ordering $(x_{1},\ldots,x_{q})$ of $q$ vertices
from this repository. This permits us to assume that $|{\cal M}_{q,h}|$ is bounded by some function that depends only on $q$ and $h$.
Let ${\bf G}=(G,\overline{x})$ be a $q$-boundaried graph
and ${\bf H}=(H,\overline{y})$ be a partial $q$-boundaried graph. Let also
$G^*$ and $H^*$ be the extensions of $\mathbf{G}$ and $\mathbf{H}$, respectively.
We also assume that, for all $i\in[q]$, either $y_i=x_i$ or $y_i=\diamond$. For an edge subset $R\subseteq E(G^*)$,
we say that
${\bf H}$ is an {\em $R$-avoiding strong immersion in ${\bf G}$} if there is an $H^*$-immersion model $(\phi,\psi)$ in $G^*\setminus R$ where, for every $i\in[q]$ such that $y_{i}\neq\diamond$,
it holds that $\phi(y_{i})=x_{i}$
and
$\phi(y_{i}^{\prime})=x_{i}'\neq\diamond$. We now define the {\em $R$-avoiding $(q,h)$-folio} of ${\bf G}$ as the set of all partial $q$-boundaried graphs in ${\cal M}_{q,h}$
that are $R$-avoiding strong immersions in ${\bf G}$ and we denote it by ${\bf folio}_{q,h,R}({\bf G})$.
We finally define
$${\cal F}_{q,h}({\bf G})=\{{\bf folio}_{q,h,R}({\bf G})\mid R\subseteq E(G^*)\ \text{and}\ |R|\leq q\}.$$
Given two $q$-boundaried graphs ${\bf G}_{1}$ and ${\bf G}_{2}$
we write ${\bf G}_{1}\sim_{q,h}{\bf G}_{2}$ in order to denote that
${\cal F}_{q,h}({\bf G}_{1})={\cal F}_{q,h}({\bf G}_{2})$.
As ${\cal F}_{q,h}$
maps each $q$-boundaried graph to a collection of subsets of ${\cal M}_{q,h}$ we have the following.
\begin{lemma}
\label{k8io90op}
There is some function $f_1:\ensuremath{\mathbb{N}}^2\rightarrow \ensuremath{\mathbb{N}}$
such that for every two non-negative integers $q$ and $h$, the
number of equivalence classes of $\sim_{q,h}$ is at most
$f_1(q,h)$.
\end{lemma}
The next lemma is a consequence of the definition of the function ${\cal F}_{q,h}$.
\begin{lemma}\label{euitq}
Let ${\cal H}$ be some set of connected graphs, each of at most $h$ vertices, and let ${\bf G}_i=(G_i,\overline{x}_{i})$, $i\in\{1,2\}$, be two $q$-boundaried graphs such that ${\bf G}_{1}\sim_{q,h}{\bf G}_{2}$
and such that both $G_{1},G_{2}$ are ${\cal H}$-immersion free.
Then, for every $q$-boundaried graph ${\bf F}=(F,\overline{y})$, it holds that ${\bf aic}_{\cal H}({\bf F}\oplus {\bf G}_{1})={\bf aic}_{\cal H}({\bf F}\oplus {\bf G}_{2})$.
\end{lemma}
The proof is omitted as it is very similar to the one in~\cite{Chatzidimitriou15} where a similar encoding
was defined in order to treat the topological minor relation.
To see the main idea, recall
that ${\cal F}_{q,h}({\bf G}_{i})$ registers all different ``partial occurrences'' of
graphs of $\leq h$ vertices (and therefore also of graphs of ${\cal H}$)
in ${\bf G}_i'$, for all possible ways to obtain ${\bf G}_i'$ from ${\bf G}_i$ after removing at most $q$ edges. This encoding is indeed sufficient to capture the behavior
of all possible edge sets whose removal from ${\bf F}\oplus{\bf G}_{i}$ creates an ${\cal H}$-free graph. Indeed, as both $G_{1}$ and $G_{2}$ are ${\cal H}$-immersion free, any such
set should have at most $q$ edges inside ${\bf G}_{i}$ as, if not, the $q$ boundary edges
between ${\bf F}$ and ${\bf G}_i$ would do the same job. A similar discussion is also present in~\cite{GiannopoulouPTRW2016}.
Given a graph $G$ and $X\subseteq V(G)$, we write ${\bf cw}_{\sigma}(G,X)=\delta_G(X)+{\bf cw}_{\sigma_{X}}(G[X])$. We require the following extension of the definition of linked orderings.
\begin{definition}[extended linked ordering]
Let $G$ be an $n$-vertex graph and $X\subseteq V(G)$.
An ordering $\sigma=\langle v_{1},\ldots,v_{n}\rangle$ of $G$ is $X$-\emph{linked} if $X=\{v_{n-|X|+1},\ldots,v_{n}\}$
and
for every $i,j\in[n-|X|,n]$ where $i<j$
there exist $\min \{\delta(\{{v_{1},\ldots,v_h}\}) \mid i\leq h\leq j \}$ edge-disjoint paths between $\{v_{1},\ldots,v_{i}\}$ and $\{v_{j},\ldots,v_{n}\}$ in $G$.
\end{definition}
The proof of the following result is very similar to the one of Lemma~\ref{lem:linked}. We just move $X$ to the end of the ordering, in the order given by $\sigma$,
and apply exhaustively the same refinement step based on submodularity, but only to the subordering induced by $X$.
\begin{lemma}
\label{linkedhelp}
For every graph $G$ and every subset $X$
of $V(G)$, if there exists an ordering $\sigma$ of $G$
such that ${\bf cw}_{\sigma}(G,X)\leq r$, then
there exists an $X$-linked ordering $\sigma'$ of $G$
such that ${\bf cw}_{\sigma'}(G,X)\leq r$.
\end{lemma}
Let $w_1,w_2\in\ensuremath{\mathbb{N}}$, $G$ be a graph, and $X\subseteq V(G)$. We say that $X$ is a {\em $(w_1,w_2)$-cutwidth-edge-protrusion} of $G$
if $\delta(X)\leq w_1$ and ${\bf cw}(G[X])\leq w_2$.
The next lemma uses an idea similar to the one of Lemma~\ref{restated}. Here $\sim_{q,h}$ plays the role of $(q,\ell)$-similarity.
\begin{lemma}
\label{wmopri9o}
There is a computable function $f_2:\ensuremath{\mathbb{N}}^2\rightarrow\ensuremath{\mathbb{N}}$ such that the following holds:
Let $k$ be a non-negative integer and let ${\cal H}$ be a
finite set of connected graphs, each having at most $h$ vertices and edges.
Let also $G$ be a graph and let $X$ be a $(2k,k)$-cutwidth-edge-protrusion of $G$.
If $|X|>f_2(k,h)$, then $G$ contains as a proper strong immersion
a graph $G'$ where ${\bf aic}_{\cal H}(G)={\bf aic}_{\cal H}(G')$.
\end{lemma}
\begin{proof}
We set $f_{2}(k,h)=(f_{1}(3k,h)+1)^{3k+1}-1$.
We have that $|X|>f_{2}(k,h)$ or, equivalently, $|X|\geq (f_{1}(3k,h)+1)^{3k+1}$. We set $\ell=|X|$.
Let $\sigma^*=\langle x_{1},\ldots,x_{\ell}\rangle$ be an ordering of the vertices in $X$
such that ${\bf cw}_{\sigma^*}(G[X])\leq k$.
Let $\sigma=\langle v_{1},\ldots,v_{n-\ell},v_{n-\ell+1},\ldots,v_{n}\rangle$
be any ordering of $V(G)$ such that $\sigma^*$ is a suffix of $\sigma$, i.e. $\langle x_{1},\ldots,x_{\ell}\rangle=\langle v_{n-\ell+1},\ldots,v_{n}\rangle$.
It follows that ${\bf cw}_{\sigma}(G,X)\leq {\bf cw}_{\sigma^*}(G[X])+\delta_{G}(X)\leq k+2k=3k$.
From Lemma~\ref{linkedhelp}, there is an $X$-linked ordering $\sigma'$ of $V(G)$, where ${\bf cw}_{\sigma'}(G,X)\leq 3k$.
We set $k_i=\delta_{G[X]}(\{x_{1},\ldots,x_{i-(n-\ell)}\})+\delta_{G}(X)$
and observe that $k_{i}\leq k+2k=3k$ for $i\in[n-\ell,n-1]$.
We set up the alphabet $\mathbb{A}=\{0,1,\ldots,3k\}$
and we see $w=k_{n-\ell},k_{n-\ell+1},\ldots,k_{n-1}$
as a word on $\mathbb{A}$. Let also $N=f_{1}(3k,h)$.
Notice that $|w|=|X|=\ell\geq (f_{1}(3k,h)+1)^{3k+1}=(N+1)^{|\mathbb{A}|}$.
From Corollary~\ref{lem:wordlngth}, if
$|w|\geq (N+1)^{|\mathbb{A}|}$,
there are $a,b\in [n-\ell,n-1], a<b$
and some $p\in\mathbb{A}$
such that $k_{a},k_{b}\geq p$
and $p$ appears in $\{k_{a},\ldots,k_{b}\}$
at least $N+1$ times.
Let these appearances be at indices $a\leq i_1<i_2<\ldots<i_{N+1}\leq b$.
By $X$-linkedness, there are $p$ edge-disjoint paths $P^i$, for $i\in [p]$, from $\{v_1,\ldots,v_{i_1}\}$ to $\{v_{i_{N+1}},\ldots,$
$v_{|V(G)|}\}$.
Observe that for each $j\in [N+1]$, each path $P^i$ must cross exactly one edge of the cut $\delta_G(\{v_1,\ldots,v_{i_j}\})$; let this edge be $z^i_jw^i_j$, where
$z^i_j\in \{v_1,\ldots,v_{i_j}\}$ and $w^i_j\notin \{v_1,\ldots,v_{i_j}\}$.
For each $j\in [N+1]$ we define $p$-boundaried graphs
${\bf F}_j=(F_j,(z^1_j,\ldots,z^p_j))$, where
$F_{j}=G[\{v_{1},\ldots,v_{i_j}\}]$, and
${\bf G}_{j}=(G_{j},(w^1_j,\ldots,w^p_j))$
where $G_{j}=G[\{v_{i_j+1},\ldots,v_{|V(G)|}\}]$.
As, from Lemma~\ref{k8io90op},
the equivalence relation $\sim_{3k,h}$ has at most $N$ equivalence classes,
there are $j_1,j_2\in[N+1]$ with $j_1<j_2$
such that
${\bf G}_{j_1}\sim_{3k,h} {\bf G}_{j_2}$. Let $G'={\bf F}_{j_1}\oplus {\bf G}_{j_2}$; it is easy to observe that $G'$ is a proper immersion of $G$, because the edges added when joining can be modeled using
appropriate infixes of the paths $P^i$.
From Lemma~\ref{euitq}, however, we have ${\bf aic}_{\cal H}({\bf F}_{j_1}\oplus {\bf G}_{j_1})={\bf aic}_{\cal H}({\bf F}_{j_1}\oplus {\bf G}_{j_2})$, and therefore
${\bf aic}_{\cal H}(G)={\bf aic}_{\cal H}(G')$.
\end{proof}
\begin{lemma}
\label{mroklp3}
Let $k,w,\ell\in\ensuremath{\mathbb{N}}$ and let $G$ be a graph. If ${\bf dcw}_k(G)\leq w$
and $|V(G)|\geq \ell\cdot (2w+1)+2w$,
then $G$ has a $(2k,k)$-cutwidth-edge-protrusion $X$
where $|X|\geq \ell$.
\end{lemma}
\begin{proof}
We denote $n=|V(G)|$.
Let $F$ be a set of edges of $G$ such that if $G'=G\setminus F$, then ${\bf cw}(G')\leq k$.
Let $\sigma=\langle v_{1},\ldots,v_{n}\rangle$ be an ordering of $V(G')$
such that ${\bf cw}_{\sigma}(G')\leq k$.
Let $I$ denote the indices in $\sigma$ of the endpoints of the edges
in~$F$ and notice that $|I| \leq 2w$. We consider the maximal
intervals of $[n]$ that do not intersect~$I$.
The set $[n] \setminus I$ has $n-|I|\geq n-2w$ elements, that are distributed
among at most $|I|+1\leq 2w+1$ such intervals. By the pigeonhole
principle, one interval $\{i, \dots, j\}$ has
at least $\frac{n - 2w}{2w+1} \geq \ell$ elements.
Consider now the set $X=\{v_{i},\ldots,v_{j}\}$, $|X|\geq \ell$.
Notice that if $\sigma'=\langle v_{i},\ldots,v_{j} \rangle$,
then ${\bf cw}_{\sigma'}(G[X])\leq k$. Moreover
there are at most
$\delta_{G'}(\{v_{1},\ldots,v_{i-1}\})+\delta_{G'}(\{v_{j+1},\ldots,v_{n}\})\leq
2k$ edges with one endpoint in $X$ and the other outside $X$. Therefore,
$\delta_{G}(X)\leq 2k$ and $X$ is
a $(2k,k)$-cutwidth-edge-protrusion of $G$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{mainlin}]
We set ${\cal H}={\bf obs}_{\leq_{\rm i}}({\cal C}_{k})$.
By Theorem~\ref{restated}, there is a function $f_{3}:\ensuremath{\mathbb{N}}\rightarrow\ensuremath{\mathbb{N}}$ such that graphs from ${\cal H}$ have at most $h=f_{3}(k)$ vertices.
Let $G\in{\bf obs}_{\leq_{\rm si}}({\cal C}_{w,k})$.
This means that ${\bf dcw}_k(G)=w+1$, while,
for every proper strong immersion $G'$ of $G$, it holds that
${\bf dcw}_k(G')\leq w$.
This, together with Observation~\ref{obpot} and Lemma~\ref{wmopri9o}, implies
that $G$ cannot have
a $(2k,k)$-cutwidth-edge-protrusion $X$ of more than $\ell=f_{2}(k,h)$ vertices. As ${\bf dcw}_k(G)=w+1$, Lemma~\ref{mroklp3} implies that $|V(G)|< (\ell+1)\cdot (2w+3)+2(w+1)=\ensuremath{\mathcal{O}}_{k}(w)$.
\end{proof}
Notice that Theorem~\ref{mainlin} can be seen as an application of Lemma~\ref{lem:linked} on the existence of linked orderings of optimum cutwidth (along with Lemma~\ref{linkedhelp}, which is an easy extension of it). We stress that this bound is constructive: by going through the proof, one can estimate the functions of $k$ hidden in the $\ensuremath{\mathcal{O}}_{k}$ notation. In future work we plan to further develop the above technique of proving such linear bounds for other edge modification problems. Some analogous work for vertex modification problems has been done in~\cite{FominLMS12}, where the parameter is the number of vertices that one should remove in order to transform a graph into
one of treewidth at most $k$. The corresponding bound
in~\cite{FominLMS12} is $w^{\ensuremath{\mathcal{O}}_{k}(1)}$, is non-constructive, and follows a distinct (more elaborate) approach.
\subsection{Lower bound on number of obstructions}
We now focus on the proof of Theorem~\ref{maisnlilowern}.
We need the following result.
\begin{theorem}[\!\!\cite{GOVINDAN2001189}]\label{t:gov}
For every $k \geq 7$, the number of non-isomorphic connected minimal obstructions in
${\bf obs}_{\leq_{\rm i}}({\cal C}_{k})$ is at least $3^{k-7} + 1$.
\end{theorem}
Recall that, given a graph class ${\cal H}$, we defined ${\bf aic}_{\cal H}(G)$ as the minimum number of edges
of $G$ whose removal yields an ${\cal H}$-immersion-free graph.
We set ${\cal C}_{w,{\cal H}}=\{G\mid {\bf aic}_{\cal H}(G)\leq w\}$. In particular ${\cal C}_{0,{\cal H}}$ is the class of all
${\cal H}$-immersion free graphs. If $G$ and $H$ are graphs, we denote by $G\uplus H$ the disjoint union of $G$ and $H$.
The following observations follow directly from the definition of ${\bf aic}_{\cal H}$.
\begin{observation}
\label{obskl}
If $G$ and $H$ are graphs, then $H\leq_{\rm i}G$ implies that ${\bf aic}_{\cal H}(H)\leq {\bf aic}_{\cal H}(G)$.
\end{observation}
\begin{observation}
\label{obskl33}
If $G$ and $H$ are graphs, then ${\bf aic}_{\cal H}(G\uplus H) = {\bf aic}_{\cal H}(G) + {\bf aic}_{\cal H}(H)$.
\end{observation}
\begin{observation}
\label{obs0o9}
If $G\in{\bf obs}_{\leq_{\rm i}}({\cal C}_{w,{\cal H}})$, then ${\bf aic}_{\cal H}(G)=w+1$.
\end{observation}
\begin{lemma}\label{c:zero}
Let ${\cal H}$ be some $\leq_{\rm i}$-antichain.
For every non-negative integer $w$, if $G_1, \dots,G_{w+1}$ are (not necessarily
distinct) members of ${\cal H}$, then $\biguplus_{i=1}^{w+1} G_i\in{\bf obs}_{\leq_{\rm i}}({\cal C}_{w,{\cal H}})$.
\end{lemma}
\begin{proof}
Let $G = \biguplus_{i=1}^{w+1} G_i$. To prove that
$G\in {\bf obs}_{\leq_{\rm i}}({\cal C}_{w,{\cal H}})$ we have to show that it satisfies {\bf O1} and {\bf O2}.
Notice that since $\mathcal{H}$ is a $\leq_{\rm i}$-antichain, ${\bf aic}_{\cal H}(H) = 1$ for every $H \in \mathcal{H}$.
By Observations~\ref{obskl33} and~\ref{obs0o9}, ${\bf aic}_{\cal H}(G) = \sum_{i=1}^{w+1}{\bf aic}_{\cal H}(G_i) = w+1$. Therefore, $G\not\in {\cal C}_{w,{\cal H}}$ and {\bf O1} holds.
Let now $G'$ be a proper immersion of $G$.
This means that $G'=\biguplus_{i=1}^{w+1}G'_{i}$,
where $G'_{i}\leq_{\rm i}G_{i}$ for each $i\in[w+1]$ and at least one $G_{i}'$ is different from the corresponding $G_{i}$. W.l.o.g.\ we assume that $G'_{w+1}\neq G_{w+1}$.
As ${\cal H}$ is a $\leq_{\rm i}$-antichain, $G'_{w+1}$
is not isomorphic to a graph of ${\cal H}$. Therefore ${\bf aic}_{\cal H}(G'_{w+1})=0$.
Then, by Observations~\ref{obskl} and~\ref{obskl33}, ${\bf aic}_{\cal H}(G')=\sum_{i=1}^{w}{\bf aic}_{\cal H}(G'_{i}) +{\bf aic}_{\cal H}(G'_{w+1})\leq \sum_{i=1}^{w}{\bf aic}_{\cal H}(G_{i}) +0=w$ and {\bf O2} holds.
\end{proof}
\begin{theorem}
\label{u7ui87u}
If $w$ is a non-negative integer and ${\cal H}$
is a $\leq_{\rm i}$-antichain that contains at least $q$ connected graphs, then $|{\bf obs}_{\leq_{\rm i}}({\cal C}_{w,{\cal H}})|\geq \binom{q+w}{w+1}$.
\end{theorem}
\begin{proof}
Let ${\cal H}'$ be some subset of ${\cal H}$ containing $q$ connected graphs.
Using Lemma~\ref{c:zero}, we observe that every multiset
of cardinality $w+1$ whose
elements belong to ${\cal H}'$ corresponds to a different
(i.e.\ non-isomorphic)
obstruction of ${\cal C}_{w,{\cal H}}$.
Therefore, $|{\bf obs}_{\leq_{\rm i}}({\cal C}_{w,{\cal H}})|$ is at least the number of multisets of cardinality
$w+1$ the elements of which are taken from a set of cardinality $q$, which is
known to be~$\binom{q+w}{w+1}$.
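For instance, for $q=2$ and $w=1$, with ${\cal H}'=\{H_1,H_2\}$, the three multisets $\{H_1,H_1\}$, $\{H_1,H_2\}$, and $\{H_2,H_2\}$ give three pairwise non-isomorphic obstructions, matching $\binom{q+w}{w+1}=\binom{3}{2}=3$.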
\end{proof}
\begin{proof}[Proof of Theorem~\ref{maisnlilowern}]
From Observation~\ref{obpot}, ${\cal C}_{w,k}={\cal C}_{w,{\cal H}_{k}}$, where ${\cal H}_{k}={\bf obs}_{\leq_{\rm i}}({\cal C}_{k})$.
This means that ${\bf obs}_{\leq_{\rm i}}({\cal C}_{w,k})={\bf obs}_{\leq_{\rm i}}({\cal C}_{w,{\cal H}_k}).$ The result follows from Theorems~\ref{t:gov} and \ref{u7ui87u}.
\end{proof}
\section{Conclusions}\label{sec:conc}
In this paper we have proved that the immersion obstructions for admitting a layout of cutwidth at most $k$ have sizes bounded by $2^{\ensuremath{\mathcal{O}}(k^3\log k)}$.
The core of the proof can be interpreted as bounding the number of different behavior types for a part of the graph that has only a small number of edges connecting it to the rest.
This, in turn, gives an upper bound on the number of states for a dynamic programming algorithm that computes the optimum cutwidth ordering on an approximate one.
This last result, complemented with an adaptation of the reduction scheme of Bodlaender~\cite{Bodlaender96} to the setting of cutwidth, yields a direct and self-contained FPT algorithm
for computing the cutwidth of a graph. In fact, we believe that our algorithm can be thought of as ``Bodlaender's algorithm for treewidth in a nutshell''. It consists of the same
two components, namely a recursive reduction scheme and dynamic programming on an approximate decomposition, but the less challenging setting of cutwidth makes both components
simpler, thus making the key ideas easier to understand.
For an alternative attempt at simplifying the
algorithm of Bodlaender and Kloks~\cite{BodlaenderK96}, applied to the case of pathwidth, see~\cite{Furer2016}.
In our proof of the upper bound on the number of types/states, we used a somewhat new bucketing approach. This approach holds the essence of the typical sequences of Bodlaender and Kloks~\cite{BodlaenderK96},
but we find it more natural and conceptually simpler. The drawback is that we lose a $\log k$ factor in the exponent. It is conceivable that we could refine
our results by removing this factor provided we applied typical sequences directly, but this is a price that we are willing to pay for the sake of simplicity and being self-contained.
An important ingredient of our approach is the observation that there is always an optimum cutwidth ordering that is linked:
the cutsizes along the ordering precisely govern the edge connectivity between prefixes and suffixes.
Recently, there has been a growing interest in parameters that are tree-like analogues of cutwidth: tree-cut width~\cite{Wollan15} and carving-width~\cite{SeymourT94}.
In future work, we aim to explore and use linkedness for tree-cut decompositions and carving decompositions in a similar manner as presented here.
\subsection{Acknowledgements.} The second author thanks Mikołaj Bojańczyk for the common work on understanding and reinterpreting the Bodlaender-Kloks dynamic programming algorithm~\cite{BodlaenderK96},
which influenced the bucketing approach presented in this paper.
We also thank O-joung Kwon for pointing us to~\cite{GeelenGW02,KanteK14}, as well as an anonymous referee for noting that the running time in Lemma~\ref{lem:reduce} can be reduced to polynomial by amortization.
\pagebreak
\bibliographystyle{plainurl}
\section{Introduction}
\label{sec:intro}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth, trim=110 00 120 0, clip]{figures/intro.pdf}
\caption{The proposed method uses a graph representation for detections and tracks. A neural message passing based architecture performs matching of detections and tracks and provides a learning based framework for track initialization, effectively replacing heuristics that are required in current approaches.}
\label{fig:method_overview}
\end{figure}
Autonomous systems require a comprehensive understanding of their environment for a safe and efficient operation. A task at the core of this problem is the capability to robustly track objects in 3D in an online setting, which enables further downstream tasks like path-planning and trajectory prediction~\cite{alahi_social_2016, Hong_2019_CVPR, zaech_action_2020}.
Nevertheless, tracking multiple objects in 3D in order to operate an autonomous system poses major challenges.
First, in the online setting, data association, track initialization, and termination need to be solved under additional uncertainty, as only past and current observations can be utilized. Furthermore, covering occlusions requires extrapolation with a predictive model rather than interpolation as in the offline case. Finally, when using LIDAR for data acquisition, no comprehensive appearance data is available and data association needs to primarily rely on object dynamics. This is further complicated by the presence of fast moving objects such as cars.
With the release of large scale datasets for 3D tracking~\cite{nuscenes2019,Argoverse,kesten_lyft_2019,sun_scalability_2020}, a considerable amount of work on 3D MOT has been initiated~\cite{chiu_probabilistic_2020,kim_eagermot_2020,weng_gnn3dmot_2020,Weng-2020-123397,yin_centerbased_2020}. Most of these works address the aforementioned challenges by either linking detections directly in a learning based manner or use comprehensive motion models together with handcrafted matching metrics. All of these methods require a large set of heuristics and, to the best of our knowledge, none of the methods approaches the aforementioned challenges jointly.
In contrast to this, recent work in 2D MOT \cite{braso_learning_2020, bergmann_tracking_2019} aims at reducing the amount of heuristics by modeling all tasks in a single learnable pipeline using graph neural networks. However, most of these approaches are limited to the offline setting and driven by appearance-based association that cannot be readily employed in the 3D counterpart.
To establish the missing link between learning based methods and powerful predictive models in 3D MOT, we propose a unified graph representation that merges tracks and their predictive models with object detections into a single graph. This learnable formulation effectively replaces heuristics that are required in current methods. A visualization of the graph is depicted in Figure \ref{fig:method_overview}.
Contrary to previous works, our learnable matching between tracks and detections is integrated into a closed-loop tracking pipeline, alleviating the need for handcrafted features. However, this raises the question of how to effectively train such a learnable system, as the generated tracks influence the data distribution seen during subsequent iterations. In this work, we propose a two-stage training procedure for semi-online training of the algorithm, where the data seen during training is generated by the model itself.
In summary, the contributions of our work are threefold:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item A unified graph representation for learnable online 3D MOT that jointly utilizes predictive models and object detection features.
\item A track-detection association method that explicitly utilizes relational information between detections to further improve track stability.
\item A training strategy that allows us to faithfully model online inference during learning itself.
\end{itemize}
We perform extensive experiments on the challenging nuScenes dataset. Our approach sets a new state-of-the-art, achieving an AMOTA score of 0.656 while reducing the number of ID-switches by 58\%.
\section{Related Work}
\label{sec:rel_work}
\paragraph*{2D MOT} is a well-investigated task, with the MOT challenge~\cite{dendorfer_mot20_2020,leal-taixe_motchallenge_2015,milan_mot16_2016} and its corresponding dataset as the current performance reference. The general goal in 2D MOT is to detect and track known objects of a single type or multiple types.
A widely adopted approach to MOT is tracking by detection, where detections are available from an independently trained detection module and data association is performed by the tracker. Due to the nature of the task, a wide range of approaches cast tracking as a graph problem~\cite{braso_learning_2020,li_global_2008,roshanzamir_gmcptracker_2012,tang_multiple_2017}.
Following the paradigm of combining detection and tracking into a single module, Tracktor~\cite{bergmann_tracking_2019} uses the box regression module of Faster R-CNN~\cite{ren_faster_2017} to propagate and refine object bounding boxes between frames.
A range of tracker extensions are commonly used in all approaches, including modules such as camera motion compensation~\cite{bergmann_tracking_2019} or object re-identification (ReID)~\cite{karthik_simple_2020,ma_customized_2019}.
In general, most of the 2D MOT methods profit from the high framerate available in videos~\cite{bergmann_tracking_2019}. Furthermore, state-of-the-art 2D object detectors achieve a high accuracy~\cite{he_mask_2017,ren_faster_2017,tan_efficientdet_2020}, such that the focus of tracking has shifted from the rejection of false positives towards a pure data assignment task~\cite{braso_learning_2020}.
Closest to our approach, NMPtrack~\cite{braso_learning_2020} introduces Neural Message Passing (NMP)
as a graph neural solver for offline 2D pedestrian tracking. Starting from a network flow formulation, the problem is transformed into a classification problem and data assignment is solved with NMP. Also using NMP as the network solver, we propose a graph representation that is capable of \textbf{online 3D tracking} and integrate a state filter for track representation. In contrast to~\cite{braso_learning_2020}, we do not require the complete sequence of frames to be available and do not assume that false positive detections are absent. Therefore, we are able to perform online tracking, while considering predictions in frame-gaps and taking imperfect object detectors into account.
\paragraph*{3D MOT} extends the challenge of MOT to tracking multiple objects in 3D~\cite{nuscenes2019, geiger_vision_2013}. With 3D MOT as a problem at the core of autonomous driving, a wide range of datasets that focus on tracking of objects in driving scenes is available~\cite{Argoverse,kesten_lyft_2019,nuscenes2019,sun_scalability_2020}. Due to the nature of the task, 3D MOT is usually performed online, which adds additional challenges and requires additional heuristics. For detecting objects, any 3D modality would be suitable; nevertheless, most datasets provide LIDAR scans, which are used in most methods, including ours. As 3D object detection from LIDAR is still an open research question and less robust than 2D detection, 3D MOT mostly follows the tracking by detection framework~\cite{chiu_probabilistic_2020,kim_eagermot_2020,weng_gnn3dmot_2020,Weng-2020-123397,yin_centerbased_2020}.
One line of work in 3D MOT establishes tracks directly from the output of an object detector and forms tracks by connecting detected objects between frames. These approaches can directly use the output of an object detector~\cite{yin_centerbased_2020} or more advanced features including 2D information for every detection~\cite{weng_gnn3dmot_2020, zhang_robust_2019}. In this framework, Weng \etal~\cite{weng_gnn3dmot_2020} are the first to use a graph neural network to estimate the affinity matrix, which is then solved using the Hungarian algorithm. Since this group of trackers does not establish a predictive model for each track, they cannot directly account for missed detections or occlusions and require heuristics for these cases.
Another group of trackers~\cite{chiu_probabilistic_2020,kim_eagermot_2020,Weng-2020-123397} resolves this issue by generating a separate representation of tracks and performs tracking by matching active tracks and detections at each timestep. AB3DMOT~\cite{Weng-2020-123397} uses a Kalman filter~\cite{kalman_new_1960} to represent the track state and matches tracks and detections based on intersection over union (IoU). Chiu \etal~\cite{chiu_probabilistic_2020} extend this approach by matching based on the Mahalanobis distance~\cite{mahalanobis1936generalized} to resolve the issue that object size, orientation and position are on different scales. EagerMOT~\cite{kim_eagermot_2020} uses tracks parameterized in 2D and 3D simultaneously to gain performance from multiple modalities. All of these approaches rely on heuristics to generate new tracks, as track initialization can hardly be learned in a purely offline training approach.
\section{Method}
\label{sec:method}
We model the online 3D MOT problem on a graph, where detections are nodes and the optimal sequences of edges that connect the same objects throughout time need to be found. The resulting core tasks are data association by matching of nodes, track initialization while rejecting false positive detections, interpolation of missed/occluded detections, and termination of old tracks.
Without access to future frames, due to the time-causal nature of the online setting, all of the aforementioned tasks become challenging. In the case of track initialization, for instance, a new detection in the current frame with no link to a track could be a false positive or the first detection of a new track. Similarly for track termination, an existing track that is not matched to any detection in the current frame may need to be terminated or may merely correspond to a missed or occluded object. While these dilemmas could often be resolved once future frames become available, online tracking performance is crucial for real-time decision systems since it directly influences the behavior of the system.
To jointly resolve these challenges in a learnable framework, we formulate a graph that merges tracks with their underlying dynamic model and detections into a single representation for online MOT. Based on the detections of the last $T$ frames and the active tracks, a graph is built that represents the possible connections between tracks and detections. Starting with local features at every node and edge, NMP is used to distribute information through the graph and to merge it with the local information at each edge and node during multiple iterations. Finally, edges and nodes are classified as active or inactive.
Based on the active edges that connect track and detection nodes, we formulate an optimization problem for data association. This jointly considers matches between tracks and detections and matches between detections at different timesteps to improve the track stability. Based on the connectivity of the remaining active detection nodes, tracks are initialized.
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\linewidth, trim=50 120 120 30, clip]{figures/graph.pdf}
\caption{The proposed tracking graph combines tracks, represented by a sequence of track nodes and detections in a single representation. During the NMP iterations, information is exchanged between nodes and edges, and thus, distributed globally throughout the graph.}
\label{fig:graph}
\end{figure}
\subsection{Graph Representation of Online MOT}
Approaching 3D MOT as tracking by detection can be formulated as finding the set of tracks $\mathcal{T} = \{{T_1}, \ldots, {T_m}\}$ that underlie the observed set of noisy detections $\mathcal{O} = \{\mathcal{O}_{t_0}, \ldots, \mathcal{O}_{t_T}\}$. We parameterize a track as the state of the underlying Kalman filter and a detection by its estimated parameters such as bounding box, class and velocity.
To find a robust and time-consistent solution, three tasks need to be solved:
\begin{enumerate}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item Assignment of detections to existing tracks.
\item Linking of detections across timesteps.
\item Classification of false positive detections.
\end{enumerate}
While either 1.\ and 2.\ would be sufficient on their own to perform tracking, finding a joint solution promotes stability of the tracks. Furthermore, utilizing a track model is beneficial, since it aggregates information of the complete sequence of matched observations which is required to interpolate missing detections.
The three tracking tasks can be naturally formulated as one joint classification problem on a tracking graph $G = (V_{D}, V_{T}, E_{DD}, E_{TD})$ that is built from detection nodes $V_{D}$ and track nodes $V_{T}$. Detection edges $E_{DD}$ connect pairs of detection nodes at different timesteps and track edges $E_{TD}$ connect track and detection nodes at the same timestep. The complete tracking graph is visualized in Figure \ref{fig:graph}.
Note that track nodes have sparser connections than detection nodes. They are only connected to the detections at the same timestep and to the neighboring timesteps of the same track. We chose this pattern since connected tracks and detections need to be temporally consistent and the relation between track nodes is determined by the Kalman prediction step. One additional characteristic of track nodes is that nodes that correspond to the same track form a track-subgraph called $G_{T,n}$, which is highlighted with a blue shaded area in Figure \ref{fig:graph}. These subgraphs are important since they share the same state that is linked with a dynamic model. Next, we discuss the types of nodes and edges used in our graph in more detail.
\paragraph{Notation:}
Symbols with subscript $D$ belong to detection nodes and symbols with subscript $T$ to track nodes. Symbols with subscript $DD$ belong to detection edges and symbols with subscript $TD$ to track edges.
Nodes are indexed with integer numbers from the set $\mathcal{I}$ for detection nodes and from $\mathcal{K}$ for track nodes. Edges are referred to by the indices of the connected nodes, \ie, $E_{TD,ki}$ describes a track edge from $V_{T,k}$ to $V_{D,i}$. As the graph is undirected, the notation also holds when the order of the indices is switched. To make our notation easy to read we always use the same index variables. More precisely, the index variables $i,j,m \in \mathcal{I}$ are used to refer to detection node indices and index variables $k,p,q \in \mathcal{K}$ refer to track node indices. The newest timeframe available to the algorithm during online tracking is denoted as $t$ and the timeframe of a specific node is referred to as $t_i$. Finally, tracks are indexed with their track ID $n$.
\paragraph{Detection nodes}
are generated from the detected objects $\mathcal{O}$ and are initialized from the feature $\vct{x}_{D,i}$ containing the position, size, velocity, orientation, one-hot encoded class, detection score, and the distance of the detected object relative to the acquisition vehicle. The position is given in a unified coordinate system which is centered at the mean of all the detections in the graph. The orientation, relative to the same unified coordinate system, is expressed by the angle's $\sin$ and $\cos$.
\paragraph{Track nodes}
represent the state of an active track, i.e., each track generates one track node at every timestep. This groups the track nodes into track-subgraphs. The feature $\vct{x}_{T,k}$ at every track node is defined by the position, size, orientation, and the one-hot encoded class of the tracked object.
The tracks are modeled by a Kalman filter with 11 states corresponding to the position, orientation, size, velocity and angular velocity. Parameters are learned from the training set as proposed by~\cite{chiu_probabilistic_2020}.
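As an illustration of the track model, the following minimal sketch (Python/NumPy; the state ordering and variable names are our assumptions for illustration, not taken from the released implementation of~\cite{chiu_probabilistic_2020}) shows the constant-velocity prediction step that rolls a track node forward to the next frame:
\begin{verbatim}
import numpy as np

# Illustrative 11-dim state: [x, y, z, yaw, l, w, h, vx, vy, vz, vyaw].
STATE_DIM = 11

def make_transition(dt):
    """Linear model: positions advance by velocities, yaw by its rate."""
    F = np.eye(STATE_DIM)
    F[0, 7] = F[1, 8] = F[2, 9] = dt  # x, y, z += v * dt
    F[3, 10] = dt                     # yaw += yaw rate * dt
    return F

def kalman_predict(mean, cov, Q, dt):
    """Standard Kalman prediction; Q is the process noise covariance."""
    F = make_transition(dt)
    return F @ mean, F @ cov @ F.T + Q
\end{verbatim}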
\paragraph{Detection edges}
refer to edges between a pair of detection nodes $V_{D,i}, V_{D,j}$ at two different frames $t_i \neq t_j$. They are parameterized by $\vct{x}_{DD,ij}$
containing the frame time difference, position difference, size difference, and the differences in the predicted position assuming constant velocity.
To reduce the connectivity of the tracking graph, detection edges are only established between detections of the same class and truncated with a threshold on the maximal distance between two nodes. This implicitly corresponds to a constraint on the maximum velocity an object can achieve. Graph truncation makes inference more efficient, track sampling more robust and helps to reduce the strong data imbalance between active and inactive edges.
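A minimal sketch of this graph truncation (plain Python/NumPy; the distance threshold value is an illustrative placeholder):
\begin{verbatim}
import itertools
import numpy as np

def candidate_detection_edges(positions, classes, times, max_dist=5.0):
    """Detection edges: different frames, same class, within max_dist.
    The threshold implicitly bounds the velocity an object may have."""
    edges = []
    for i, j in itertools.combinations(range(len(positions)), 2):
        if times[i] == times[j] or classes[i] != classes[j]:
            continue
        if np.linalg.norm(positions[i] - positions[j]) <= max_dist:
            edges.append((i, j))
    return edges
\end{verbatim}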
\paragraph{Track edges}
are connections between a track node $V_{T,k}$ and a detection node $V_{D,i}$ at the same timestep $t_k = t_i$. These edges are modeled with the feature $\vct{x}_{TD,ki}$, where the three entries are the differences in position, size and rotation, respectively.
\paragraph{Classification}
Given the unified tracking graph $G$, the tracking problem is transformed to the following classification tasks:
\begin{enumerate}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item Classification of active track edges $E_{TD}$.
\item Classification of active detection edges $E_{DD}$.
\item Classification of active detection nodes $V_{D}$.
\end{enumerate}
An approach to solving these tasks jointly is presented in the following.
\subsection{Neural Message Passing for Online Tracking}
\label{sec:nmp}
Given only the raw information described in the previous section, classifying edges as active is hard and error-prone.
To generate a good assignment, the network should have access to the global and local information present in the tracking graph.
To achieve this exchange of information within the graph, we rely on a graph-NMP network \cite{braso_learning_2020}. Our message passing network for data association in the unified tracking graph consists of four stages:
\paragraph{1) Feature embedding:}
The input to the NMP network are embeddings of the raw edge and node features. To generate the 128 dimensional embeddings, the raw features are normalized and subsequently processed with one of four different Multi-Layer Perceptrons (MLP), one for each type of node/edge. This results in the initial features $h_{D,i}^{(0)}, h_{T,k}^{(0)}, h_{DD,ij}^{(0)}, h_{TD,ki}^{(0)}$.
\paragraph{2) Neural message passing:}
Initially, all information contained in the embeddings is local and thus, not sufficient for directly solving the data assignment problem. Therefore, the initial embeddings are updated using multiple iterations of NMP that distribute information throughout the graph. An NMP iteration consists of two steps. First, the edges of the graph are updated based on the features of the connected nodes. In the second step, the features of the nodes are updated based on the features of the connected edges. The networks used to process messages in NMP are shared between all iterations $l = 1,...,L$ of the algorithm. Next, we will describe the NMP iteration for each node and edge type in detail.
\paragraph{Detection Edges} $E_{DD,ij}$ at iteration $l$ are updated with a single MLP $\mathcal{N}_{DD}$ that takes as an input the features of the two connected detection nodes $h_{D,i}^{(l-1)}, h_{D,j}^{(l-1)}$, the current feature of the edge $h_{DD,ij}^{(l-1)}$ and the initial feature $h_{DD,ij}^{(0)}$
\begin{equation}
h_{DD,ij}^{(l)} = \mathcal{N}_{DD}\left([h_{D,i}^{(l-1)}, h_{D,j}^{(l-1)}, h_{DD,ij}^{(l-1)}, h_{DD,ij}^{(0)}]\right).
\label{eq:det_edge_update}
\end{equation}
Adding the current and initial edge feature to the input vector corresponds to introducing a skip connection into the unrolled algorithm.
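A sketch of one such edge update in PyTorch (the two-layer MLP shape and hidden size are assumptions; the update rule above only fixes the inputs and output):
\begin{verbatim}
import torch
import torch.nn as nn

class EdgeUpdate(nn.Module):
    """Detection-edge update: an MLP over the two incident node
    embeddings plus the current and initial edge embeddings
    (the latter two act as skip connections)."""
    def __init__(self, dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(4 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, h_i, h_j, h_edge, h_edge0):
        return self.mlp(torch.cat([h_i, h_j, h_edge, h_edge0], dim=-1))
\end{verbatim}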
\paragraph{Track edges} $E_{TD,ki}$ are updated according to the same principle as detection edges, using information from connected nodes, but with a separately trained MLP $\mathcal{N}_{TD}$. The update rule is given as
\begin{equation}
h_{TD,ki}^{(l)} = \mathcal{N}_{TD}\left([h_{T,k}^{(l-1)}, h_{D,i}^{(l-1)}, h_{TD,ki}^{(l-1)}, h_{TD,ki}^{(0)}]\right).
\label{eq:track_edge_update}
\end{equation}
\paragraph{Detection nodes} are updated with a time-aware node model proposed by Bras\'o \etal~\cite{braso_learning_2020} that we extend with an additional input from connected track edges. Given a fixed detection node $V_{D,i}$, the following messages are generated for every detection edge $E_{DD,ij}$ and tracking edge $E_{TD,ki}$ connected to it,
\begin{align}
m_{D,ij}^{(l)}&=\mathcal{N}^{\text{past}}_D \left([h_{DD,ij}^{(l)}, h_{D,i}^{(l-1)}, h_{D,i}^{(0)}]\right)\; ,\; j\in N^{\text{past}}_i \nonumber\\
m_{D,ij}^{(l)}&=\mathcal{N}^{\text{fut}}_D \left([h_{DD,ij}^{(l)}, h_{D,i}^{(l-1)}, h_{D,i}^{(0)}]\right)\;\; ,\; j\in N^{\text{fut}}_i \label{eq:det_node_messages}\\
m_{D,ki}^{(l)}&=\mathcal{N}^{\text{track}}_D \left([h_{TD,ki}^{(l)}, h_{D,i}^{(l-1)}, h_{D,i}^{(0)}]\right) ,\; k\in N^{\text{track}}_i.\nonumber
\end{align}
The first two message types are time-aware detection messages that consider detection edges to past and future nodes and the third type processes the track edges. The three messages are computed with separate MLPs; $\mathcal{N}^{\text{past}}_D$ is applied to edges $E_{DD,ij}$ that are connected to detection nodes in a frame prior to the considered node. $\mathcal{N}^{\text{fut}}_D$ is the network used to process information on edges $E_{DD,ij}$ that are connected to detection nodes of future frames. Finally, $\mathcal{N}^{\text{track}}_D$ is the network used for track edges. All networks get the current and initial feature of node $V_{D,i}$ as an input to establish skip connections. Note that in the first and last time frame, where no past or future edges, respectively, are available, zero padding is used.
The messages formed at the incident nodes are aggregated separately for the three types of connections by a symmetric aggregation function $\Phi$
\begin{equation}
\begin{aligned}
&m_{D, i, \text{past}}^{(l)} \hspace{-0.4cm}&&= \Phi \left(\left\{ m_{D,ij}^{(l)}\right\}_{j\in N^{\text{past}}_i}\right)\\
&m_{D, i, \text{fut}}^{(l)} \hspace{-0.4cm}&&= \Phi \left(\left\{ m_{D,ij}^{(l)}\right\}_{j\in N^{\text{fut}}_i}\right)\\
&m_{D, i, \text{track}}^{(l)} \hspace{-0.4cm}&&= \Phi \left(\left\{ m_{D,ki}^{(l)}\right\}_{k\in N^{\text{track}}_i }\right).
\end{aligned}
\label{eq:det_node_agg_messages}
\end{equation}
Aggregation functions commonly used in NMP are summation, the mean or the maximum of all the inputs.
In our implementation, we choose the summation aggregation function.
The node feature is updated with the output of a linear layer, processing the aggregated messages as
\begin{equation}
h_{D,i}^{(l)} = \mathcal{N}_{D}\left(\left[m_{D, i, \text{past}}^{(l)}, m_{D, i, \text{fut}}^{(l)}, m_{D, i, \text{track}}^{(l)}\right]\right).
\label{eq:det_node_update}
\end{equation}
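In code, the time-aware update amounts to three independent summations followed by a single linear layer; a minimal PyTorch sketch (the stacked-tensor interface is an assumption for illustration):
\begin{verbatim}
import torch

def update_detection_node(msgs_past, msgs_fut, msgs_track, node_net):
    """msgs_*: (num_messages, dim) tensors already produced by the
    message MLPs; empty neighbour sets are passed as zero tensors.
    node_net is a single nn.Linear(3 * dim, dim)."""
    agg = [m.sum(dim=0) for m in (msgs_past, msgs_fut, msgs_track)]
    return node_net(torch.cat(agg, dim=-1))
\end{verbatim}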
At \textbf{track nodes} only track edges are incident and therefore no separate handling of edges is required. The messages sent from track edges are formed by
\begin{equation}
m_{T,ki}^{(l)}= \mathcal{N}_T\left([h_{TD,ki}^{(l)}, h_{T,k}^{(l-1)}, h_{T,k}^{(0)}]\right),
\label{eq:track_node_messages}
\end{equation}
and accumulated using the aggregation function $\Phi$ as before
\begin{equation}
m_{T, k}^{(l)}= \Phi \left(\left\{ m_{T,ki}^{(l)}\right\}_{i \in N_k}\right).
\end{equation}
Finally, the message is processed by a single linear layer
\begin{equation}
h_{T,k}^{(l)} = \mathcal{N}_{T}^{'}\left(m_{T, k}^{(l)}\right).
\label{eq:track_node_update}
\end{equation}
These NMP steps are performed for $L$ iterations, which generates a combination of local and global information at every node and edge of the graph.
\paragraph{3) Classification:}
The node and edge features available after performing NMP can be used to classify detection nodes, detection edges, and track edges as active or inactive. Detection nodes need to be classified as active if they are part of a track or initialize a new track and as inactive if they represent a false positive detection. Detection edges and track edges are classified as active if the adjacent nodes represent the same object. For each of the tasks, a separate MLP that takes the final features, $h_{D,i}^{(L)}, h_{DD,ij}^{(L)}$, and $h_{TD,ki}^{(L)}$, is used to estimate the labels
$y_{D,i}$,
$y_{DD,ij}$, and
$y_{TD,ki}$.
The result of the classification stage are three sets. First, the set of active detection node indices
\begin{equation}
\mathcal{A}_{D} = \left\{i \in \mathcal{I} \;|\; y_{D,i} \geq 0.5\right\}.
\end{equation}
Secondly, the set of active detection edge indices
\begin{equation}
\mathcal{A}_{DD} = \left\{i,j \in \mathcal{I} \times \mathcal{I} \;|\; y_{DD,ij} \geq 0.5\right\}.
\end{equation}
Finally, the set of active track edge indices
\begin{equation}
\mathcal{A}_{TD} = \left\{k,i \in \mathcal{K} \times \mathcal{I} \;| \; t_k = t_i \land y_{TD,ki} \geq 0.5\right\}.
\end{equation}
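These three thresholding steps transcribe directly into code (plain Python; the dictionary interface mapping indices to scores is an assumption):
\begin{verbatim}
def active_sets(y_det, y_dd, y_td, t_track, t_det):
    """y_det: {i: score}, y_dd: {(i, j): score}, y_td: {(k, i): score};
    t_track / t_det map node indices to their frame times."""
    A_D  = {i for i, y in y_det.items() if y >= 0.5}
    A_DD = {ij for ij, y in y_dd.items() if y >= 0.5}
    A_TD = {(k, i) for (k, i), y in y_td.items()
            if y >= 0.5 and t_track[k] == t_det[i]}
    return A_D, A_DD, A_TD
\end{verbatim}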
Note that during training, classification is not only performed on the final features $h^{(L)}$ but also during earlier NMP iterations.
This distributes the gradient information more evenly throughout the network and helps to reduce the risk of vanishing gradients.
\paragraph{4) Track update:}
In the last stage of our algorithm, we use the sets of active nodes and edges to update and terminate existing tracks, as well as to initialize new tracks. We achieve this with a greedy approach that maximizes the connectivity of the graph.
\paragraph{Updates} of tracks are performed by finding the matching detection nodes in the graph for each track and time step. An assignment is a set of detection node indices
\begin{equation}
\mathcal{F}_n \subset \mathcal{I} : ~|\mathcal{F}_n| \leq T \text{ and } \forall i,j \in \mathcal{F}_n: t_i \neq t_j \text{ if } i \neq j
\end{equation}
from different timesteps. We define the best assignment as the set of indices corresponding to detection nodes that are 1)~all connected to the track-subgraph $G_{T,n}$ and 2)~have the most active detection edges connecting them.
To find the best assignment for a track $n$, we start with the set of detection node indices that are connected to a track node $V_{T,k}$ through an active track edge.
\begin{equation}
\mathcal{C}_{D,k}^{node} = \left\{ i \in \mathcal{I} \;| ki \in \mathcal{A}_{TD}\right\}.
\end{equation}
By considering all track nodes of the track-subgraph $G_{T,n}$, the set of detection node indices connected to a track is defined as
\begin{equation}
\mathcal{C}_{D,n} = \bigcup_{k\in G_{T,n}} \mathcal{C}_{D,k}^{node}.
\end{equation}
Finally, the set of active detection edge indices between these nodes is derived as
\begin{equation}
\mathcal{C}_{DD,n} = \left\{ ij \in \mathcal{C}_{D,n} \times \mathcal{C}_{D,n} \;| ij \in \mathcal{A}_{DD}\right\}.
\end{equation}
The quality $\Gamma$ of an assignment, which defines the optimization problem, is the number of detection edges between the assigned nodes that are also present in $\mathcal{C}_{DD,n}$
\begin{equation}
\Gamma = \left|\left(\mathcal{F}_n \times \mathcal{F}_n\right) \cap \mathcal{C}_{DD,n}\right|.
\end{equation}
A solution for all tracks is searched with a greedy algorithm, while never assigning a detection node multiple times. As older tracks are more likely true positive tracks, updating proceeds in descending order of track age. If there are multiple solutions with the same cost, we employ the following tie-breaking rules. First, solutions with the lowest number of nodes are selected. If this does not make the problem unambiguous, the solution that maximizes the sum of 3D detection scores of the selected detection nodes is chosen. The complete algorithm is provided in the supplementary material and a visualization of this approach is shown in Figure \ref{fig:track_update}.
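For a single track, the scoring of candidate assignments can be sketched as follows (exhaustive over subsets, which is feasible for the small $T=3$ used here; variable names are illustrative):
\begin{verbatim}
import itertools

def best_assignment(candidates, A_DD, det_score, T=3):
    """candidates: set of (time, node) pairs reached by active track
    edges; returns the subset maximising the number of active detection
    edges, with the tie-breaking rules described in the text."""
    best_key, best = None, None
    for size in range(1, T + 1):
        for subset in itertools.combinations(candidates, size):
            if len({t for t, _ in subset}) < size:  # one node per frame
                continue
            nodes = [i for _, i in subset]
            gamma = sum((a, b) in A_DD or (b, a) in A_DD
                        for a, b in itertools.combinations(nodes, 2))
            key = (gamma, -size, sum(det_score[i] for i in nodes))
            if best_key is None or key > best_key:
                best_key, best = key, set(nodes)
    return best
\end{verbatim}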
\begin{figure}[tb]
\centering
\includegraphics[width=0.95\linewidth, trim=30 260 560 20, clip]{figures/track_update.pdf}
\caption{Visualization of different update scenarios, with only active edges in the graph. The graph represents a single track and two detections at each time step. a) Shows the ideal case where a track is matched to one node at every timestep and each detection node is connected with each other. b) Represents the case where a match at one timestep is dropped and the track is only matched to two detection nodes. c) Shows a situation, where the proposed approach is able to decide for the globally best solution, even though two detection nodes have been matched to the track in the last frame.}
\label{fig:track_update}
\end{figure}
\paragraph{Termination} of tracks is based on the time since the last update. If a track has not been updated for three timesteps or 1.5s, it is terminated.
\paragraph{Initialization} of tracks takes into account detection nodes and the corresponding detection edges.
Our approach consists of two steps, split over two consecutive frames.
First, all active detection nodes in the most recent frame that have not been used for a track update are labeled as preliminary tracks. In the next iteration of the complete algorithm, these nodes are in the second to last frame. A full track is generated for each of these nodes that are connected to an unused active detection node in the newest frame by an active detection edge. If multiple active detection edges exist, the edge that connects to the node with the highest detection score is chosen.
\subsection{Training Approach}
\label{sec:training_approach}
When training an online tracker, we face one fundamental challenge, which is the distribution mismatch of track nodes during training and inference. While the track nodes available during training are derived from the ground truth annotations in the dataset, the track nodes encountered during inference are generated by the algorithm itself in a closed loop.
\paragraph{Data augmentation:}
We use data augmentation to make the model more robust against changes in the distribution of tracks and detections as well as to simulate rare scenarios. Although the data naturally contain imperfections such as missed detections and noise on the physical properties of objects, we perform four additional data augmentation steps. Detections are dropped randomly from the graph to simulate missed or occluded detections. Noise is added to the position of the detected objects.
This allows us to counteract the well-known issue of detector overfitting~\cite{nuscenes2019}, where the detections used for training the tracking algorithm are considerably better than the detections available during inference, as the detector was trained on the same data as the tracker.
To model track termination, all detections assigned to randomly drawn tracks are removed.
Finally, track initialization is simulated by dropping a complete track while keeping the corresponding detection nodes. This ensures that the case of track initialization is encountered often during training.
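The four steps can be summarised in the following sketch (plain Python/NumPy; probabilities and the noise scale are illustrative placeholders, not the values used in our experiments):
\begin{verbatim}
import random
import numpy as np

def augment(detections, track_ids, p_det=0.1, p_track=0.05, sigma=0.2):
    """Drop detections, jitter positions, simulate termination (remove a
    track's detections) and initialization (drop track, keep detections).
    Each detection is a dict with "position" (np array) and "track_id"."""
    detections = [d for d in detections if random.random() > p_det]
    for d in detections:
        d["position"] = d["position"] + np.random.normal(0.0, sigma, 3)
    for tid in list(track_ids):
        r = random.random()
        if r < p_track:                       # simulate termination
            detections = [d for d in detections if d["track_id"] != tid]
            track_ids.remove(tid)
        elif r < 2 * p_track:                 # simulate initialization
            track_ids.remove(tid)
    return detections, track_ids
\end{verbatim}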
\paragraph{Two-stage training:}
\label{sec:two_stage}
Data augmentation helps to train a better data association model; however, even with data augmentation, the model does not learn to perform association decisions in a closed loop. To overcome this challenge one could train with fixed-length episodes where only the beginning is determined by the ground truth, and for the remaining part, the model's own data associations and tracks are used to train the model. However, such an approach comes with two issues. First, it is inherently hard to train due to potentially large errors and exploding gradients. Secondly, this approach is computationally costly on large datasets as no precomputed data can be used. Thus, we propose a two-stage training scheme as an alternative that approaches the same challenge. In this setting, a model is trained first on offline data with strong data augmentation. To do so, the results obtained from a LIDAR detection model~\cite{zhu_classbalanced_2019, yin_centerbased_2020} are matched with the annotation data available for the training and validation dataset. The detections matched to tracks are then processed with the Kalman filter model to generate track data for training.
After training the full model on the offline data with data augmentation, the model can be used for inference in an online setting. We run the tracker on the complete training dataset and generate tracks that show a distribution closer to the online-case. This results in a new dataset, which contains the same set of detections as before, but updated tracks.
By retraining the model on this second stage dataset, together with all data augmentation steps used before, considerable performance gains can be accomplished.
\paragraph{Training parameters:}
We train all models with the Adam~\cite{DBLP:journals/corr/KingmaB14} optimizer for four epochs with a batch size of 16 and a learning rate of $0.0005$. Focal loss~\cite{lin_focal_2017} with $\beta = 1$ is used for classification of edges and nodes, weight decay is set to 0.01 and weights are initialized randomly. In all experiments, graphs with $T=3$ timesteps are considered.
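For completeness, a minimal sketch of the binary focal loss used for the node and edge classifiers (PyTorch; the class-balancing weight of~\cite{lin_focal_2017} is omitted here for brevity):
\begin{verbatim}
import torch

def focal_loss(logits, targets, beta=1.0):
    """Down-weights easy examples to counter the active/inactive
    imbalance; the focusing exponent beta = 1 matches our setup."""
    p = torch.sigmoid(logits)
    pt = torch.where(targets > 0.5, p, 1.0 - p)  # prob. of true class
    return (-(1.0 - pt).pow(beta) * torch.log(pt.clamp_min(1e-8))).mean()
\end{verbatim}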
\section{Experiments and Results}
All experiments are performed on the publicly available nuScenes dataset~\cite{nuscenes2019} with LIDAR detections only. Scores on the test set are centrally evaluated and results on the validation set are computed with the official developer's kit. NuScenes is known to be more challenging than previous datasets~\cite{weng_gnn3dmot_2020}, thus, providing a suitable platform to test state-of-the-art detection and tracking approaches.
To demonstrate that our method generalizes across significantly different object detectors and provides the same advantages in all scenarios, we perform all experiments with two different object detectors.
\subsection{Detection Data}
To verify the performance of our method with multiple detectors, we choose the two state-of-the-art detectors CenterPoint~\cite{yin_centerbased_2020} and MEGVII~\cite{zhu_classbalanced_2019} that are based on very different techniques. While CenterPoint currently provides the best performance of all publicly available methods, MEGVII is used by many previous methods. We perform all experiments with both detectors and thus, allow for a fair comparison between approaches.
\subsection{MPN Baseline}
\label{sec:baseline}
To show the merit of an explicit graph representation, we implement our method without track nodes and track edges as a baseline. This corresponds to an adaptation of the tracker introduced in~\cite{braso_learning_2020} to the online and 3D MOT setting. In this case, tracks are modeled as a sequence of detections and matching is performed with the classified detection edges and nodes. This method is denoted as MPN-baseline in the following.
\begin{table*}[tbh]
\vspace{3pt}
\begin{center}
\small
\input{tables/results_test}
\end{center}
\caption{Results on the nuScenes test set. Methods marked with an asterisk use private detections and thus no direct comparison is possible.}
\label{table:results_test}
\end{table*}
\subsection{Tracking Results}
The results on the nuScenes test set are shown in Table \ref{table:results_test}. It depicts all competitive LIDAR-based methods which were benchmarked on nuScenes and have at least a preprint available. Our approach achieves an AMOTA score of 0.656, outperforming the state-of-the-art tracker CenterPoint~\cite{yin_centerbased_2020} by 1.8\% using the same set of detections. Compared to CenterPoint-Ensemble, which uses multiple models and an improved set of object detections that are not publicly available, we improve by 0.6\%. Finally, ID switches and track fragmentation are reduced by 58\% and 30\%, respectively. This improved track stability can be explained by the integration of the predictive track model into the learning framework.
Our algorithm runs with 12.3\;fps or 81.3\;ms latency on average on an Nvidia TitanXp GPU. As 57.8\;ms of this time is used for graph generation and post-processing and only 23.4\;ms is required for NMP and classification, major gains may be achieved with a more efficient implementation. Further details about the runtime are given in the supplementary material.
Table \ref{table:results_val} shows the results of the current state-of-the-art 3D trackers with two different sets of detections, making them comparable. In this scenario, our approach gains $2.8\%$ AMOTA score compared to CenterPoint~\cite{yin_centerbased_2020} on their own detection data and 3.3\% on the reference MEGVII~\cite{zhu_classbalanced_2019} detections. Again, the advantage of using a dedicated model for tracks becomes apparent in the number of ID-switches, which are reduced by 47\% and 43\% using our model on CenterPoint~\cite{yin_centerbased_2020} and MEGVII~\cite{zhu_classbalanced_2019}, respectively.
\begin{table}[tb]
\vspace{3pt}
\begin{center}
\small
\input{tables/results_val}
\end{center}
\caption{Results on the nuScenes validation set. MPNTrack$^{\dag}$ corresponds to the method in~\cite{braso_learning_2020} adapted to the online setting as described in Section~\ref{sec:baseline}.
\label{table:results_val}}
\end{table}
\subsection{Ablation Study}
We evaluate the modules of our tracker in an ablation study shown in Table \ref{table:ablation}. We perform the full study on both sets of detections and for the two training scenarios. The results labeled \textit{online} in Table \ref{table:ablation} refer to our two-stage training pipeline and results labeled \textit{offline} correspond to only training in the first stage of this approach, where no data is generated by the tracker itself. In all cases, inference is performed online.
The results indicate that all implemented modules benefit our method. The highest impact is achieved by propagating information globally using NMP. Next to this, removing information from edges impacts performance for both training approaches. Without node information, the performance drop depends on the dataset. While for Centerpoint the performance drop is severe, it is smaller in the case of MEGVII detections, especially in the offline training case. This may be explained by the quality of detections. While the position information is encoded on nodes and edges, information like object size is only contained on the nodes. Such information has only small variations between different objects and thus, it can only be used effectively if the detection quality is high, as given for CenterPoint.
To remove track nodes, we use the baseline implementation as introduced in Section \ref{sec:baseline}. As only detections are used, this approach does not suffer from a distribution mismatch and two-stage training is neither necessary nor possible. Therefore, while the impact for offline training seems reasonable, the overall impact in the full method is significant. To show the benefit of using detection and track edges jointly for the track update, a naive matching only using track edges in the latest frame is used. This approach performs worse than not using a separate track representation at all and supports our approach of using global information for matching. Finally, focal loss gives a small advantage in all settings and data augmentation helps, especially for offline training. This can be explained, as in the two-stage training, the data distribution is closer to the distribution encountered during inference and thus, less data augmentation is required.
\begin{table}[tb]
\vspace{3pt}
\begin{center}
\small
\input{tables/ablation}
\end{center}
\caption{Comparative ablation study performed with detections from CenterPoint~\cite{yin_centerbased_2020} and MEGVII~\cite{zhu_classbalanced_2019}. Online refers to the two-stage training introduced in Section \ref{sec:two_stage} and offline to the basic training approach not using self generated data.}
\label{table:ablation}
\end{table}
\section{Conclusion}
We proposed a unified tracking graph representation that combines detections and tracks in one graph, which improves tracking performance and replaces heuristics. We formulated the online tracking tasks as classification problems on the graph and solved them using NMP. To efficiently update tracks, we introduced a method that jointly utilizes matches between all types of nodes. For training, we proposed a semi-online training approach that allows us to efficiently train the network for the closed-loop tracking task. Finally, we performed exhaustive numerical studies showing state-of-the-art performance with a drastically reduced number of ID switches. As our proposed method provides a flexible learning based framework, it allows for a wide range of possible extensions and paves the way towards integrating fully learning based track state representations.
\textbf{Acknowledgements}: This work is funded by Toyota Motor Europe via the research project TRACE-Z\"urich.
\FloatBarrier
\clearpage
{\small
\bibliographystyle{ieee_fullname}
\input{bibliography.bbl}
}
\FloatBarrier
\clearpage
\section{Introduction}
\label{sec:intro}
\bigskip
Black holes are probably the most spectacular prediction of General Relativity. From a theoretical perspective, a crucial moment which lent credibility to the assumption of their existence in reality was Kerr's analytic construction of a stationary rotating black hole solution in Einstein gravity in four spacetime dimensions \cite{Kerr:1963ud}. With the development of string theory and other extra dimensions and/or higher derivative theories, it has become important to extend the Kerr solution to higher number of dimensions $D$ and/or to more general diffeomorphism covariant theories of gravity. The generalization to $D>4$, in Einstein gravity, was done by Myers and Perry in \cite{Myers:1986un}. Since then, a number of corresponding black hole solutions in different supergravity theories were constructed (for reviews see, e.g., \cite{Youm:1997hw,Maeda:2011sh}). However, despite a lot of effort, there is still not a single explicit analytic black hole solution in any generalized theory of gravity with higher curvature terms in the action. A related problem, important also on phenomenological grounds, is that one would like to have dynamical solutions, e.g., with in-falling matter, in which a Kerr black hole is created; however so far none has been found.
It is not hard to locate the roots for this failure of extending the Kerr solution in the abovementioned directions. The Kerr solution (and its Myers-Perry generalization) belongs to a special class of spacetimes for which the metric can be written in Kerr-Schild form with flat seed metric. This dramatically reduces the number of unknown functions from the start. The failure of attempts that used the Kerr-Schild ansatz in some higher-curvature theories of gravity
shows that the ansatz has limited use for black hole constructions, and that the Einstein action is somewhat special in this respect. Without some alternative simplifying property of the metric, the task of finding analytic stationary rotating black hole solutions in any $D>3$ theory seems to be hopeless. A possible strategy is to turn to different types of perturbative calculations, with the hope of extracting some information which could be useful for nonperturbative constructions.
In this paper we study asymptotically flat stationary rotating black hole solutions in theories with purely gravitational Chern-Simons terms \cite{CS} in the action in $D>3$ spacetime dimensions. One can name several reasons why these terms are interesting by themselves, including their special properties. Though they give diffeomorphism covariant contribution to the equations of motion \cite{Solodukhin:2005ns,Bonora:2011mf}, they are not manifestly diff-covariant. This leads to interesting consequences, e.g., for the black hole entropy \cite{Tachikawa:2006sz,Bonora:2011gz} and anomalies for the boundary theories (as in AdS constructions) \cite{Solodukhin:2005ns}. Topological considerations \cite{BCDPSp2} become relevant due to these terms, which moreover break parity in the purely gravitational sector. Gravitational Chern-Simons terms are present in some superstring/M theory low energy effective actions (depending on type and compactification), and though they appear more frequently in the form of mixed gauge-gravitational Chern-Simons Lagrangian terms,\footnote{The role of mixed gauge-gravitational Chern-Simons terms for black hole constructions in superstring effective theories is reviewed in
\cite{Mohaupt:2007mb,Sen:2007qy,Castro:2008ne,deWit:2009de,Prester:2010cw}. In some cases it was shown that all higher-derivative $\alpha'$-corrections to near-horizon properties of extremal black holes originate solely from such Chern-Simons terms, though low energy effective actions contain an infinite number of higher-derivative terms \cite{Prester:2008iu,Cvitan:2007hu}.} some compactifications to 7-dimensional spacetime may lead to purely gravitational Chern-Simons Lagrangian terms. It should be recalled that, despite the mentioned recent developments, there is much less understanding of the consequences of gravitational Chern-Simons terms in $D>3$ than in the simplest case of $D=3$
\cite{DJT1, DJT2} which has been thoroughly studied in the literature (for the reviews see \cite{Witten:2007kt,Kraus:2006wn,Sen:2007qy}). One of the aims of this paper is to try and fill some of these gaps.
The contribution to the equations of motion due to gravitational Chern-Simons Lagrangian terms is, at least apparently, terribly involved in $D>3$. Such terms exist only in $D=4k-1$, $k \in \mathbb N$, which implies that stationary rotating black holes are characterized by $2k-1$ angular momenta. However, due to their special properties, connected to parity violation, it is possible to obtain some exact results. For example, we show that if the solution for the metric has ``enough'' isometries (which, in the case of interest here, typically occurs when two or more angular momenta vanish) then adding a gravitational Chern-Simons term in the action does not change the black hole solutions. So, to find situations where a gravitational Chern-Simons contribution is nontrivial, one has to consider rotating black holes with at least $2k-2$ nonvanishing angular momenta. This is very complicated already in $D=7$. For this reason we have turned to perturbative calculations in a special case, that of a $D=7$ solution in which all angular momenta are equal. We have constructed the lowest order corrections to the Myers-Perry metric in an expansion in the Chern-Simons coupling constant and angular momentum, and we have showed that the gravitational Chern-Simons term affects all the black hole characteristics we have calculated -- horizon, ergoregion and black hole entropy (at least in this perturbative sense). Our perturbative solution does not allow expressing the metric in Kerr-Schild form with a flat seed metric. This implies that to find exact analytic solutions, if they exist, in such more general theories with gravitational Chern-Simons Lagrangian terms, one needs a new ansatz.
The outline of the paper is as follows. Section \ref{sec:eom} is devoted to establishing some general results. We show that a gravitational Chern-Simons Lagrangian term does not change stationary rotating black hole solutions and the corresponding black hole entropy if two or more angular momenta are zero. This is a consequence of the more general theorem derived in \cite{BCDPSp}. In Section \ref{sec:d7bh} we specialize to the particular theory in $D=7$ obtained by adding a gravitational Chern-Simons Lagrangian term to Einstein-Hilbert action. In Section \ref{sec:pertD7} we turn to the perturbative calculation in the Chern-Simons coupling constant, in the special case when all three angular momenta are equal. A few Appendices are devoted to details of calculations.
\vspace{20pt}
\section{A few general considerations}
\label{sec:eom}
\bigskip
We are interested in gravity theories in $D=2n-1$ dimensions ($n \in 2\mathbb{N}$) with
Lagrangians of the form
\begin{equation} \label{lagrgen}
\mathbf{L} = \mathbf{L}_0 + \lambda \, \mathbf{L}_{\mathrm{gCS}}
\end{equation}
where $\mathbf{L}_0$ is some general manifestly diffeomorphism-invariant Lagrangian density
and $\mathbf{L}_{\mathrm{gCS}}$ is the purely gravitational Chern-Simons (gCS) Lagrangian density given by
\begin{equation}
\label{LgCS}
\mathbf{L}_{\mathrm{gCS}} = n \int_0^1 dt \; \mathrm{str} (\mathbf{\Gamma} \, \mathbf{R}_t^{n-1})
\end{equation}
Here $\mathbf{R}_t = t \, d\mathbf{\Gamma} + t^2 \, \mathbf{\Gamma}\,\mathbf{\Gamma}$, $\mathbf{\Gamma}$ is the Levi--Civita connection and $\mathrm{str}$ denotes a symmetrized trace, which is an example of an invariant symmetric polynomial of the Lie algebra of the $SO(1,D-1)$ group. In (\ref{lagrgen}) $\lambda$ denotes the gCS coupling constant, which is dimensionless and may be quantized \cite{BCDPSp2,Witten:2007kt,Lu:2010sj}. Since the $n=2$ ($D=3$) case is studied in detail in the literature, we shall focus on $n\ge4$ cases.
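For orientation, in the lowest case $n=2$ the $t$-integration in (\ref{LgCS}) can be carried out explicitly and reproduces the familiar $D=3$ Chern-Simons 3-form,
\begin{equation}
\mathbf{L}_{\mathrm{gCS}} \Big|_{n=2}
= 2 \int_0^1 dt \; \mathrm{tr} \left( \mathbf{\Gamma} \left( t \, d\mathbf{\Gamma} + t^2 \, \mathbf{\Gamma}\,\mathbf{\Gamma} \right) \right)
= \mathrm{tr} \left( \mathbf{\Gamma} \, d\mathbf{\Gamma} + \frac{2}{3} \, \mathbf{\Gamma}^3 \right) \;\;.
\end{equation}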
Adding gravitational CS terms to the Lagrangian brings about additional terms in the equations of motion.
It was shown in \cite{Solodukhin:2005ns} that the equation for the metric tensor $g_{\alpha\beta}$
acquires an additional term $C^{\alpha\beta}$ which, for the gCS term (\ref{LgCS}), is of the form
\begin{equation}\label{gCSeom}
C^{\alpha\beta} = - \frac{n}{2^{n-1}} \ \epsilon^{\nu_1 \cdots \nu_{2n-2} (\alpha} \, \nabla_{\!\rho} \, \left( \tensor{R}{^{\beta)}_{\sigma_1}_{\nu_1}_{\nu_2}} \,
\tensor{R}{^{\sigma_1}_{ \sigma_2 }_{\nu_3}_{\nu_4}} \cdots
\tensor{R}{^{\sigma_{n-3}}_{ \sigma_{n-2}}_{\nu_{2n-5}}_{\nu_{2n-4}}}
\tensor{R}{^{\sigma_{n-2}}^{\rho}_{\nu_{2n-3}}_{\nu_{2n-2}}} \right)
\end{equation}
The tensor $C^{\alpha\beta}$ is symmetric, traceless and covariantly conserved
\begin{equation} \label{Cprop}
C^{\alpha\beta} = C^{\beta\alpha} \;\;, \qquad C^\alpha_\alpha = 0 \;\;, \qquad
\nabla_\alpha \, C^{\alpha\beta} = 0
\end{equation}
In $D=3$ the tensor $C^{\alpha\beta}$ is known as the Cotton tensor, and in higher dimensions it can be regarded as some sort of generalization thereof \cite{Solodukhin:2005ns}.
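Indeed, setting $n=2$ in (\ref{gCSeom}) and using the fact that in $D=3$ the Riemann tensor is completely determined by the Ricci tensor, one recovers, up to normalization conventions, the standard form of the Cotton tensor,
\begin{equation}
C^{\mu\nu} \propto \epsilon^{\mu\alpha\beta} \, \nabla_{\!\alpha} \left( \tensor{R}{^\nu_\beta} - \frac{1}{4} \, \delta^\nu_\beta \, R \right) \;\;.
\end{equation}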
The peculiar properties of gCS terms make them rather special. They have
a topological character (leading to quantization of their coupling constant), they are not manifestly diffeomorphism covariant but their contribution to equations of motion (\ref{gCSeom}) is diff-covariant, they are parity-odd, and conformally covariant \cite{CS,Solodukhin:2005ns}. We are interested in investigating how they affect black hole solutions found in theories where they are absent, once they are added to the theory. However, as we elaborated in \cite{BCDPSp,Bonora:2011mf}, it appears that
it is not easy to find physically interesting configurations for which the gCS contribution to the equations of motion (\ref{gCSeom}) is nonvanishing and are at the same time simple enough to be analytically tractable.\footnote{Notable exceptions are nontrivial analytically tractable solutions obtained in \cite{Lu:2010sj} by ``squashing'' maximally symmetric spaces. Such solutions may play a role in AdS/CFT constructions.} In \cite{BCDPSp} we proved a theorem for any metric in $D$ dimensions of the form
\begin{equation} \label{vaneom}
ds^2 = g_{\mu\nu}(x) \, dx^\mu dx^\nu = g_{ab}(y) \, dy^a dy^b + f(y) \, h_{ij}(z) \, dz^i dz^j
\end{equation}
where local coordinates are split as $x^\mu = (y^a, z^i)$, $\mu = 1,\ldots,D$, $a=1,\ldots,d$,
and $i=1,\ldots,p$ ($d+p=D$), and $g_{ab}(y)$ and $h_{ij}(z)$ are arbitrary tensors depending only on the
$\{y^a\}$ and $\{z^i\}$ coordinates, respectively. It turns out that if $d>1$ and $p>1$ the gCS contribution to the equations of motion vanishes, i.e.,
\begin{equation}
C^{\mu\nu}[g] = 0 \;\;.
\end{equation}
Due to the conformal covariance of the $C^{\mu\nu}$ tensor, the theorem extends to any metric which is conformally equivalent to (\ref{vaneom}).
As discussed in \cite{BCDPSp}, this theorem covers many classes of metrics usually discussed in the
literature. In particular, it also applies to all spacetimes with local $SO(k)$ isometry, with $k\ge 3$. It
appears that if we want to study gCS Lagrangian terms with nontrivial influence, stationary rotating asymptotically flat black hole solutions are the next simplest objects.
We shall be interested also in thermodynamics of black holes. It was shown in
\cite{Tachikawa:2006sz,Bonora:2011gz} that a gCS Lagrangian term (\ref{LgCS}) brings in an additional term in the black hole entropy formula. For a theory with Lagrangian (\ref{lagrgen}) the latter is given by
\begin{equation} \label{entgen}
S = S_0 + \lambda S_{\mathrm{gCS}} \;\;.
\end{equation}
$S_0$ is the Wald black hole entropy \cite{Iyer:1994ys} due to the Lagrangian $\mathbf{L}_0$. In coordinate systems of the type standardly used in the literature (like the generalized Boyer-Lindquist type of coordinates we
use in this paper) $S_{\mathrm{gCS}}$ can be calculated from
\begin{equation} \label{entgCS}
S_{\mathrm{gCS}}[g] = 4\pi n \int_\mathcal{B} \mathbf{\Gamma}_N \mathbf{R}_N^{n-2} \;\;,
\end{equation}
where $\mathcal{B}$ is the $(D-2)$-dimensional bifurcation surface of the black hole horizon
\cite{Bonora:2011gz}.
In a forthcoming paper, \cite{BCDPSp}, by using conformal invariance of (\ref{entgCS}), we shall prove a theorem according to which, for black hole metrics of the form (\ref{vaneom}), with $p \ge 1$ and coordinates $z$ tangential to the bifurcation surface of the horizon, the gCS entropy term (\ref{entgCS}) vanishes.
Using the just mentioned theorems, we can already state one general result. If, for a stationary rotating black hole, $p$ of the angular momenta $J_i$ are zero, then the spacetime usually has $SO(2p)$ isometry.
Let us restrict to the cases in which this is valid.\footnote{We restrict ourselves here to ``standard'' black holes with horizon topology given by a sphere $S^{D-2}$. In this case the above symmetry statement is valid when there is no matter outside the horizon. However, it can be violated if there is matter with symmetry breaking energy-momentum tensor (e.g., rigid matter which does not rotate in corresponding directions but with the shape which breaks the $SO(2p)$ isometry). Such systems are excluded in our analysis.} Then, if $p\ge2$ such spacetime falls under the class of the above theorems guaranteeing $C^{\mu\nu} = 0$ and
$S_{\mathrm{gCS}} = 0$. This leads us to the following clearcut statement:
\medskip
\noindent
\emph{If, in a theory with some arbitrary Lagrangian $\mathbf{L}_0$, a solution has two or more vanishing angular momenta $J_i$, then introducing a Lagrangian gCS term (as in (\ref{lagrgen})) changes neither the solution nor the corresponding black hole entropy.}
\medskip
If the black hole solution with only one vanishing angular momentum is also of the form
(\ref{vaneom}), then by the second theorem the gCS entropy term (\ref{entgCS}) again vanishes. However,
though this indeed applies to \emph{all known} stationary rotating black hole solutions (e.g., the Myers-Perry black holes we discuss in the next section), for the general Lagrangian (\ref{lagrgen}) there is no guarantee that solutions with only one angular momentum vanishing are of the form (\ref{vaneom}). Indeed, we shall show in the next section on an explicit example that, when only one angular momentum is vanishing, a gCS term, due to its parity-odd structure, forces the solution to depart from the form (\ref{vaneom}).
In conclusion, we see that if we want to study the problem in which gCS Lagrangian terms have non-trivial influence on stationary rotating black hole solutions, we cannot take more than one angular momentum to be zero, because in those cases both the solution and the entropy are unchanged when we ``switch on'' the coupling constant $\lambda$ in (\ref{lagrgen}). If only one angular momentum is zero, the solution is generally affected, but the first-order correction in the gCS coupling $\lambda$ to the gCS entropy term vanishes. So, to find a \emph{completely} non-trivial problem, in which all interesting ingredients are non-vanishing, we need to analyze black holes with all angular momenta nonvanishing. If we add to this that in $D=3$ dimensions it is known that a gCS term does not change rotating black hole solutions such as the BTZ black hole (though it contributes to horizon and asymptotic charges such as entropy, mass and angular momentum), it follows that we have to go to $D \ge 7$ dimensions.
\vspace{20pt}
\section{Stationary rotating black holes in $D=7$}
\label{sec:d7bh}
\bigskip
Following the conclusion of the previous section, from now on we specialize to the simplest non-trivial
case with action
\begin{equation} \label{lhecs}
\mathbf{L} = \mathbf{L}_0 + \lambda \, \mathbf{L}_{\mathrm{gCS}}
= \frac{1}{16 \pi G_N} \boldsymbol{\epsilon} R + \lambda \, \mathbf{L}_{\mathrm{gCS}}.
\end{equation}
Such a theory in $D=3$ is known as topologically massive
gravity and was first considered in \cite{DJT1, DJT2}. We are interested in finding stationary rotating asymptotically flat black hole solutions in $D=7$.
\vspace{10pt}
\subsection{Myers-Perry black holes}
\label{ssec:MPBH}
For $\lambda=0$ we have ordinary general relativity with Einstein-Hilbert Lagrangian for which
stationary rotating asymptotically flat black holes, with the horizon topology of the 5-sphere $S^5$, are described by Myers-Perry solutions (MP BH) \cite{Myers:1986un,Myers:2011yc}. Here we review the basic properties of Myers-Perry solutions we shall need in our calculations.
In generalized Boyer-Lindquist coordinates the MP metric in $D=7$ is given by
\begin{eqnarray} \label{mpbh}
ds_{\mathrm{MP}}^2 = -dt^2
+ \frac{\mu \, r^2}{\Pi \, F} \left( dt - \sum_{i=1}^3 a_i \mu_i^2 \, d\phi_i \right)^{\!2}
+ \frac{\Pi \, F}{\Pi - \mu \, r^2} dr^2 + \sum_{i=1}^3 (r^2 + a_i^2) (d\mu_i^2 + \mu_i^2 d\phi_i^2)
\end{eqnarray}
where
\begin{equation} \label{fpidef}
F = F(r,\vec{\mu}) = 1 - \sum_{i=1}^3 \frac{a_i^2 \, \mu_i^2}{r^2 + a_i^2} \; , \qquad \qquad
\Pi = \Pi(r) = \prod_{i=1}^3 (r^2 + a_i^2)
\end{equation}
and the coordinates $\mu_i$ are not all independent but satisfy
\begin{equation} \label{mucon}
\sum_{i=1}^3 \mu_i^2 = 1 \;\;.
\end{equation}
From the asymptotic behavior of the metric (\ref{mpbh}) it can be shown \cite{Myers:1986un} that
four free parameters $\mu$ and $a_i$ ($i=1,2,3$) determine the mass $M$ and angular momenta
$J_i$ with
\begin{eqnarray}
M &=& \frac{5 \, \pi^2}{16 \, G_N} \mu \label{MPmass} \;\;, \\
J_i &=& \frac{\pi^2}{8 \, G_N} \mu \, a_i = \frac{2}{5} M a_i \label{MPangm} \;\;.
\end{eqnarray}
We shall assume $\mu > 0$ from now on. The event horizon of the MP BH is located at $r = r_H$ where the horizon radius $r_H$ is the largest solution of the polynomial equation
\begin{equation} \label{rh0eq}
\Pi(r_H) - \mu \, r_H^2 = 0 \;\;.
\end{equation}
Eq. (\ref{rh0eq}) is a cubic equation in $r^2$, with three solutions which we denote
$r_{\mathrm{min}}^2$, $r_-^2$, and $r_{\mathrm{max}}^2 \equiv r_H^2$. The exact expressions for
the roots are rather awkward (see \cite{Doukas:2010be}) and we shall not use them. For later purposes we
note the obvious relation (obtained from one of Vieta's formulae)
\begin{equation} \label{rootsprod}
r_{\mathrm{min}}^2 \,r_-^2 \, r_H^2 = - (a_1 a_2 a_3)^2 \;\;.
\end{equation}
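To make the Vieta argument explicit, write $x \equiv r^2$; the left-hand side of (\ref{rh0eq}) expands to
\begin{equation}
\Pi(r) - \mu \, r^2 = x^3 + \Big( \sum_{i} a_i^2 \Big) x^2 + \Big( \sum_{i<j} a_i^2 \, a_j^2 - \mu \Big) x + (a_1 a_2 a_3)^2 \;\;,
\end{equation}
so the product of the three roots equals minus the constant term, which is precisely (\ref{rootsprod}).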
To keep our analysis simple we restrict to the case in which the largest solution satisfies
$r_{\mathrm{max}}^2 = r_H^2 > 0$.\footnote{For a discussion of the subtleties of extending spacetime
to the $r^2 < 0$ region see \cite{Myers:1986un} and a review \cite{Myers:2011yc}.} A necessary, but not sufficient, condition for this is $\mu > \sum_i \prod_{j \ne i} a_j^2$. In this case all the roots are real, and satisfy $r_{\mathrm{min}}^2 < 0 \le r_-^2 \le r_H^2$. The surface defined by $r=r_-$ is the inner horizon, which is hidden from the outside observer by the event horizon $r = r_H$.
Using (\ref{rh0eq}) and (\ref{mpbh}) one obtains that the horizon area is given by
\begin{equation}
A_H = \pi^3 \mu \, r_H \;\;.
\end{equation}
The ergosurface is an infinitely redshifted surface, located outside the event horizon, defined by the
condition $g_{tt} = 0$, which for MP BH metric (\ref{mpbh}) leads to an equation
\begin{equation} \label{MPergo}
\Pi(r) \, F(r,\vec{\mu}) = \mu \, r^2 \;\;.
\end{equation}
As we are interested in black hole thermodynamics, let us quote that the entropy $S$, temperature
$T$, and angular velocities $\Omega_i$ of the MP BH are given by
\begin{eqnarray} \label{MPent}
S &=& \frac{A_H}{4 \, G_N} = \frac{\pi^3}{4 \, G_N} \mu \, r_H \;\;, \\
T &=& \frac{\kappa}{2\pi} = \frac{\Pi'(r_H) - 2\mu \, r_H}{4\pi \mu \, r_H^2} \;\;,
\label{MPtemp} \\
\Omega_i &=& \frac{a_i}{r_H^2 + a_i^2} \;\;.
\label{MPomega}
\end{eqnarray}
MP black holes with coincident inner and outer horizon radii, $r_- = r_H$, obviously have $T=0$, which means that they are extremal black holes.
A general MP BH in $D=2m+1$ with a generic choice of parameters $\mu$ and $\vec{a}$ is quite
complicated to analyze. One reason is that for a generic choice of the parameters $\mu$ and $\vec{a}$ one
has a rather ``modest'' isometry group $\mathbf{R} \times U(1)^m$. There are two mechanisms by which one can enlarge the isometry group and/or simplify calculations:
\begin{itemize}
\item[(a)]
Taking $k$ of the angular momenta $J_i$ to vanish, which for a MP black hole means taking the corresponding $a_i$ to vanish. This enlarges the factor $U(1)^k$ to $SO(2k)$ in the isometry group.
\item[(b)]
Taking $k$ of the angular momenta $J_i$ to be equal, which for a MP black hole means taking the corresponding $a_i$ to be equal. This enhances the factor $U(1)^k$ to $U(k)$.
If all $J_i$ are equal, then we obtain cohomogeneity-1 metrics in which all ``angular'' dependence is determined, and the only freedom left is in a number of functions of the radial coordinate $r$.
\end{itemize}
In case (a), already if just one $a_j = 0$, a direct consequence is that the radius of the inner
horizon is $r_- = 0$, and the polynomial in (\ref{rh0eq}) is of one degree lower, which simplifies solving for the event horizon radius $r_H$. In the case of our main interest, $D=7$, by taking $a_3 = 0$ we obtain
\begin{equation}
r_H = \frac{1}{\sqrt{2}}\left( -(a_1^2 + a_2^2) + \sqrt{4 \mu + (a_1^2 - a_2^2)^2} \right)^{1/2}
\end{equation}
where a (necessary and sufficient) condition to have $r_H^2 > 0$ is $\mu > a_1^2 a_2^2$.
We can now make further simplifications either by applying (a) again, or (b). By taking also $a_2 = 0$
the isometry group is enlarged from $\mathbf{R} \times U(1)^3$ to $\mathbf{R} \times U(1) \times SO(4)$. If, on the other hand, we restrict to $a_1 = a_2 \equiv a$, then the symmetry is enlarged to $\mathbf{R} \times U(1) \times U(2)$ and we obtain a simple expression for $r_H$
\begin{equation}
r_H = \left( \sqrt{\mu} - a^2 \right)^{1/2}
\end{equation}
Another variant of possibility (b) in $D=7$ is to have all three parameters $a_i$ equal,
$a_1 = a_2 = a_3 \equiv a$, with isometry group $\mathbf{R} \times U(3)$. From (\ref{rh0eq}) and
$\mu > 0$ then it follows that $r_H^2 > 0$ requires $\mu > 27\, a^4/4$. From (\ref{fpidef}) and (\ref{mucon}) it follows that $F$ is a function of $r$ only
\begin{equation} \label{Faequal}
F = F(r) = 1 - \frac{a^2}{r^2 + a^2} = \frac{r^2}{r^2 + a^2}
\end{equation}
which, together with (\ref{MPergo}), yields an especially simple expression for the location of the ergosurface: $r = r_e$, where
\begin{equation}
r_e = \left( \sqrt{\mu} - a^2 \right)^{1/2}
\end{equation}
\vspace{10pt}
\subsection{Adding gCS Lagrangian terms: exact results}
\label{ssec:gCSexact}
We now turn our attention to the full Lagrangian (\ref{lhecs}) with $\lambda \ne 0$, for which we would like to find solutions describing stationary rotating black holes which we denote $\bar{g}_{\mu\nu}$. Equations
of motion now read
\begin{equation} \label{eomD7l}
R^{\alpha\beta} - \frac{1}{2} g^{\alpha\beta} R - 16\pi \, G_N \lambda \, C^{\alpha\beta} = 0
\end{equation}
where $C^{\alpha\beta}$ is the contribution of the gCS term, which in $D=7$ is obtained by putting
$n=4$ in (\ref{gCSeom})
\begin{equation} \label{cD7}
C^{\alpha\beta} = - \frac{1}{2} \ \epsilon^{\nu_1 \cdots \nu_6 (\alpha} \, \nabla_{\!\rho} \,
\left( \tensor{R}{^{\beta)}_{\sigma_1}_{\nu_1}_{\nu_2}} \,
\tensor{R}{^{\sigma_1}_{ \sigma_2 }_{\nu_3}_{\nu_4}} \,
\tensor{R}{^{\sigma_2}^{\rho}_{\nu_5}_{\nu_6}} \right)
\end{equation}
Contracting (\ref{eomD7l}) with $g_{\alpha\beta}$ and using the fact that $C^{\alpha\beta}$ is traceless,
(\ref{Cprop}), it follows that $R = 0$. Inserting this back in (\ref{eomD7l}) we obtain the equations of motion in simpler form
\begin{equation} \label{eomD7}
R^{\alpha\beta} - 16\pi \, G_N \lambda \, C^{\alpha\beta} = 0 \;\;.
\end{equation}
The entropy is given by
\begin{equation} \label{entD7}
S[\bar{g}] = S_0[\bar{g}] + \lambda S_{\mathrm{gCS}}[\bar{g}]
= \frac{A_H[\bar{g}]}{4 \, G_N} + 16\pi \lambda \int_\mathcal{B} \mathbf{\Gamma}_N[\bar{g}] \mathbf{R}_N[\bar{g}]^2 \;\;,
\end{equation}
where $A_H[\bar{g}]$ is the horizon area calculated from the metric $\bar{g}_{\mu\nu}$ which is a
solution to the full equations of motion (\ref{eomD7}). It is convenient for later discussions to write solutions of (\ref{eomD7}) in the following form
\begin{equation}
\bar{g}_{\alpha\beta} = g_{(0)\alpha\beta} + \delta g_{\alpha\beta} \;\;, \qquad \quad
g_{(0)\alpha\beta} = (g_{\mathrm{MP}})_{\alpha\beta}
\end{equation}
where $g_{\mathrm{MP}}$ is Myers-Perry black hole, which is a solution for $\lambda = 0$. For a
generic MP black hole metric we obtain (see Appendix \ref{app:tech:S1g0})
\begin{equation} \label{entgCSMP}
S_{\mathrm{gCS}}[g_{\mathrm{MP}}] = 128 \, \pi^4 \frac{\mu}{r_H} a_1 a_2 a_3
\left( \sum_{i=1}^3 \frac{1}{r_H^2 + a_i^2} \right)^{\!3}
\end{equation}
Observe that (\ref{entgCSMP}) automatically vanishes when one or more angular momentum
parameters $a_i$ vanish. The result (\ref{entgCSMP}) is especially interesting when
$\delta g_{\alpha\beta} = 0$, in which case it gives the full gCS contribution to the black hole entropy. In generic
cases, when $\delta g_{\alpha\beta} \ne 0$, it gives a part of the first-order correction to the black hole entropy in the perturbative expansion in $\lambda$ (the second part comes from $S_0[\bar{g}]$ term in (\ref{entD7}).)
There is little hope of finding exact solutions with generic angular momenta for such a highly involved field equation as (\ref{eomD7})-(\ref{cD7}). We now turn to the analysis of special cases with enhanced isometry group, and thereafter to perturbative calculations.
\subsubsection{$J_2 = J_3 = 0$, $J_1 \ne 0$}
\label{sssec:two0a}
Let us start with the most symmetric case involving rotating black holes in $D=7$. As noted in
Sec. \ref{ssec:MPBH}, when two angular momenta are zero, e.g., $J_2 = J_3 = 0$, the symmetry
enhances to $\mathbf{R} \times U(1) \times SO(4)$. From the general discussion in Sec. \ref{sec:eom} we
then know that both solution and the black hole entropy remain the same as in the $\lambda = 0$ case. This means
\begin{equation} \label{gtwo0a}
\bar{g}_{\alpha\beta} = g_{(0)\alpha\beta} = (g_{\mathrm{MP}})_{\alpha\beta}
\end{equation}
and
\begin{equation} \label{enttwo0a}
S[\bar{g}] = S_0[g_0] = \frac{A_H[g_0]}{4 \, G_N} \;\;.
\end{equation}
where $S_0$ is Bekenstein-Hawking entropy and $g_0$ is the MP black hole with $a_2 = a_3 = 0$.
As a check, we see that result (\ref{enttwo0a}) also follows directly from (\ref{entgCSMP}).
If we want to see nontrivial effects of the gCS Lagrangian term we have to go to less symmetric cases.
\subsubsection{$J_3 = 0$, $J_1 \ne 0$, $J_2 \ne 0$}
\label{sssec:one0a}
Now we take just one vanishing angular momentum, e.g., $J_3$ (so $J_3 = 0$, $J_1 \ne 0$,
$J_2 \ne 0$). In this case there is in general no important enhancement of the group of isometries. For the corresponding MP black hole we have established by explicit calculation that
\begin{equation} \label{cmnnv}
C^{\alpha\beta}[g_{\mathrm{MP}}] \ne 0 \;\;, \qquad
\textrm{when \ $J_3 = 0$ , $J_1 \ne 0$ , $J_2 \ne 0$}
\end{equation}
so the gCS contribution to the equations of motion is in this case nontrivial and MP black holes are no longer solutions, i.e.,
\begin{equation}
\bar{g}_{\alpha\beta} \ne (g_{\mathrm{MP}})_{\alpha\beta} \;\;, \qquad
\textrm{when \ $J_3 = 0$ , $J_1 \ne 0$ , $J_2 \ne 0$}
\end{equation}
The equations of motion still look too complicated to offer much hope for finding exact solutions. However, we
can get some information from a perturbative analysis. Direct calculation shows that nonvanishing
components in (\ref{cmnnv}) are $C^{t\phi_3}[g_{\mathrm{MP}}]$, $C^{\phi_1\phi_3}[g_{\mathrm{MP}}]$
and $C^{\phi_2\phi_3}[g_{\mathrm{MP}}]$, which shows that a perturbative solution (around $\lambda = 0$) is not of the form (\ref{vaneom}) when $\lambda \ne 0$.
Let us turn our attention to the black hole entropy. If we plug the MP metric with $J_3 =0$ (which means
$a_3 = 0$) into the gCS entropy term, from (\ref{entgCSMP}) we obtain
\begin{equation} \label{entMP1a0}
S_{\mathrm{gCS}}[g_{\mathrm{MP}}] = 0 \;\;, \qquad \textrm{(for a MP BH with } a_3 = 0)
\end{equation}
It follows that up to first order in a perturbative expansion in $\lambda$, the black hole entropy is given by
the Bekenstein-Hawking area formula. However, as the perturbed solution is not of the form (\ref{vaneom}), it
is possible that the gCS entropy term gives a nonvanishing contribution starting from second order in
$\lambda$.
\subsubsection{$J_i = J \ne 0$ for all $i=1,2,3$}
\label{sssec:allJequal}
The case in which all angular momenta are equal and nonvanishing deserves a special place. On the one
hand, it keeps all the non-trivial consequences of the most generic case. This means that all quantities (except charges defined at asymptotic infinity), both geometric and thermodynamic, are affected by the presence of the gCS Lagrangian term.\footnote{We shall show this explicitly in Sec. \ref{sec:pertD7}.}
On the other hand the symmetry group of isometries enhances to $\mathbf{R} \times U(3)$
which induces significant constraints on the metric. This combination makes this case an ideal laboratory for calculations, and we shall explore it in detail perturbatively in Sec. \ref{sec:pertD7}.
We have already shown how results in this case simplify for $\lambda = 0$, i.e., for MP black holes
with $a_i = a \ne 0$, $i=1,2,3$. Let us just note that the result (\ref{entgCSMP}) also simplifies and becomes
\begin{equation} \label{sCSa0}
S_{\mathrm{gCS}}[g_{\mathrm{MP}}] = 3456 \, \pi^4 \left( \frac{a}{r_H} \right)^{\!3}
\end{equation}
where $r_H$ is the horizon radius of the MP black hole.
\subsubsection{gCS terms and interior of black holes}
\label{sssec:genJ}
Here we pause for a moment to address an interesting issue raised in \cite{Solodukhin:2005ah} on the basis of a 3-dimensional analysis, which can be put as the question ``Do gravitational Chern-Simons terms see the interior of black holes?''. We shall argue here that in $D>3$ the answer is negative, and that the apparently positive answer in $D=3$ is probably a coincidence.
Let us first state the issue. It is known that in $D=3$ the Einstein-Hilbert action supplemented with a negative cosmological constant term leads to the BTZ solutions \cite{Banados:1992wn} describing stationary
rotating black holes. The difference with our problem, aside from the number of dimensions, is the presence of the negative cosmological constant term $\Lambda = - 1/\ell^2$ (which is necessary in $D=3$ if we want to have black hole solutions at all) implying that BTZ solutions are asymptotically AdS. Including a gCS Lagrangian term in $D=3$ does not affect stationary rotating black hole solutions (they are still BTZ) but does change the entropy, which
can be written in the form \cite{Solodukhin:2005ah} (in Appendix \ref{app:BTZ} one can find a short review of $D=3$ case)
\begin{equation} \label{entD3}
S = \frac{A_H}{4 \, G_N} - \mathrm{sign}(j) \frac{\beta}{\ell} \frac{A_-}{4\, G_N} \;\;, \qquad
\beta \equiv \, 32\pi G_N \lambda
\end{equation}
where $\ell$ is the standard parametrization of the cosmological constant, $A_-$ is the area of the
inner horizon, and $j$ is the angular momentum parameter. We see that the entropy is divided into two parts, one coming from
the Einstein-Hilbert action and depending only on a geometrical property (area) of the (outer) event horizon,
and one coming from the gCS Lagrangian term and depending only on a geometrical property (again area) of the inner horizon. In \cite{Solodukhin:2005ah} it was speculated that this may not be coincidental but indicates that a gCS term ``seems to see the interior of the black hole''.
We investigate here the same assertion in $D>3$. As a warm-up, let us see what happens when one or more angular momenta vanish. Applying the general analysis of Sec. \ref{sec:eom},
we know that when two or more angular momenta vanish, (i) the solution is unchanged, so it is given by the corresponding Myers-Perry metric (\ref{mpbh}) (with two or more parameters $a_i$ vanishing), (ii)
$S_{\mathrm{gCS}} = 0$. From (i) and (\ref{rootsprod}) it follows that the radius and area of the inner horizon vanish. This, combined with (ii), shows that (\ref{entD3}) is true in this case. However, we shall now show
that this is not true for generic angular momenta when $S_{\mathrm{gCS}} \ne 0$. We note
that in $D>3$ it is impossible to have $\delta g_{\alpha\beta} = 0$ while at the same time $S_{\mathrm{gCS}} \ne 0$, a situation present for $D=3$ BTZ black holes. It is this fact that makes the analysis more complicated, and in fact we are forced to turn to a perturbative expansion.
We treat the gCS coupling $\lambda$ as a perturbation parameter. In this
way a solution for the metric to (\ref{eomD7})-(\ref{cD7}) can be expanded as
\begin{equation} \label{gpert}
\bar{g}_{\alpha\beta} = \sum_{k=0}^\infty \lambda^k \, g^{(k)}_{\alpha\beta} \;\;, \qquad \qquad
g^{(0)}_{\alpha\beta} = (g_{\mathrm{MP}})_{\alpha\beta}
\end{equation}
where $(g_{\mathrm{MP}})_{\alpha\beta}$ is the (generic) MP metric. Using this in (\ref{entD7}) one can obtain a similar expansion in $\lambda$ for the black hole entropy. However, we prefer here to write this expansion in the following way
\begin{equation} \label{entpert}
S[\bar{g}] = \frac{A_H[\bar{g}]}{4 \, G_N} + \lambda \, S_{\mathrm{gCS}}[\bar{g}]
= \frac{A_H[\bar{g}]}{4 \, G_N} + \lambda \, S_{\mathrm{gCS}}[g^{(0)}] + O(\lambda^2)
\end{equation}
Comparing with (\ref{entD3}), we see that if $S_{\mathrm{gCS}}[\bar{g}]$ is to be some function of intrinsic geometric quantities connected to the inner horizon of the solution $\bar{g}_{\alpha\beta}$ (like, e.g.,
$A_-[\bar{g}]$), then $S_{\mathrm{gCS}}[g^{(0)}]$ should give the same for the MP metric
$g^{(0)}_{\alpha\beta}$.
We have already calculated this in $D=7$ and the result is presented in Eq. (\ref{entgCSMP}). We have not found any interpretation of this result in terms of geometric quantities linked to the inner horizon, or more generally, in terms of some other simple geometrical properties interior to event horizon $r_H$. This conclusion does not change if we generalize to (A)dS black holes (by introducing a cosmological constant $\Lambda$ in Lagrangian $\mathbf{L}_0$), at least not for generic values of $\Lambda$.\footnote{This follows simply from the fact that the limit $\Lambda \to 0$ is well-defined and smooth in $D>3$, so it leads to our asymptotically flat results and corresponding conclusions.}
Why and how does the area of the inner horizon appear in $D=3$ in (\ref{entD3})? For our argument it is enough to restrict our attention to the more symmetric case in which all angular momenta
are equal, which for MP black holes in $D = 2m+1$ dimensions ($m$ is odd integer) requires $a_i = a$, $i=1,\ldots,m$. Let us assume that the formula (\ref{sCSa0}) generalizes to
\begin{equation} \label{egCSDg}
S_{\mathrm{gCS}}[g^{(0)}] = c_m \left( \frac{a}{r_H} \right)^{\!m}
\end{equation}
where $c_m$ are some constants.\footnote{However simple (\ref{egCSDg}) may look, for $D>3$ ($m>1$) we again did not manage to find any interpretation for it purely in terms of intrinsic geometric properties
of the inner horizon $r = r_-$.}
In $D=3$ ($m=1$), $g^{(0)}$ is the BTZ black hole metric, for which one has
\begin{equation} \label{arhrm}
a = (r_H r_-)/\ell \;\;,
\end{equation}
where $r_-$ is again the radius of the inner horizon. Using (\ref{arhrm}) one
obtains
\begin{equation} \label{gCS3Dexp}
S_{\mathrm{gCS}}[g_{\mathrm{BTZ}}] = c_1 \frac{a}{r_H} = c_1 \frac{r_-}{\ell}
= \frac{c_1}{2\pi} \frac{A_-}{\ell}
\end{equation}
which is in fact how Eq. (\ref{entD3}) was originally obtained. However, the above mechanism is not possible in $D>3$, because generally
\begin{equation}
\prod_{i=1}^m |a_i| = \prod_{i=1}^m |r_i^2|^{1/2}
\end{equation}
in the asymptotically flat case ($\Lambda = 0$), and
\begin{equation}
\prod_{i=1}^m |a_i| = \frac{1}{\ell} \prod_{i=1}^{m+1} |r_i^2|^{1/2}
\end{equation}
in the asymptotically (A)dS case ($\Lambda = \pm 1/\ell^2$), where the $r_i^2$ are the complete set of
roots of the horizon-defining polynomial equation (Eq. (\ref{rh0eq}) in the $\Lambda=0$ case). Only in
$D=3$ ($m=1$) (where there are only AdS black holes) does one have $a = r_H r_-/\ell$, so that after dividing
by $r_H$ one is left with $r_-$ alone in (\ref{egCSDg}).\footnote{In general
$D=2m+1$, $m$ odd, dimensional analysis tells us there could be terms in $S_{\mathrm{gCS}}$ of the
form $a^m/(\ell^{m-1} r_H)$, in which $r_H$ cancels. However, only in $D=3$ is one left with just $r_-$.} The other roots, aside from $r_H$ and $r_-$, do not define additional inner horizons and are, as far as we know, devoid of any direct geometrical meaning. We now see that the fact that in $D=3$ one has
$S_{\mathrm{gCS}} \propto A_-$ is probably just a coincidental consequence of the more fundamental
relation (\ref{egCSDg}).
\vspace{20pt}
\section{Perturbative calculations in $D=7$: case $a_i = a$}
\label{sec:pertD7}
\bigskip
\subsection{Is perturbative expansion in $\lambda$ viable?}
\label{ssec:pertOK}
Searching for exact solutions to the equations of motion (\ref{eomD7})
\begin{equation} \label{eom2}
R_{\nu\sigma}[\bar{g}] = 16\pi G_N \lambda \, C_{\nu\sigma}[\bar{g}] \;\;,
\end{equation}
where $R_{\nu\sigma}$ is the Ricci tensor and $C_{\nu\sigma}$ is the contribution of the gCS Lagrangian term (\ref{gCSeom}), is probably futile. So we would like to turn to a perturbative analysis. But, of course, we have to be
sure that a perturbative expansion in the gCS coupling $\lambda$ makes sense at all. Due to topological reasons
it was argued in the literature \cite{Witten:2007kt,Lu:2010sj,BCDPSp2} that only for special discrete (``quantized'') values of $\lambda$, defined through some ``quantization condition'' of the form
\begin{equation} \label{quantCS}
\lambda_n = n \,\lambda_1 \;\;, \qquad\qquad n \in \mathrm{Z} \;\;,
\end{equation}
can one give unambiguous meaning to a gCS term in the action.\footnote{For $D=3$ it was argued in \cite{Witten:2007kt}, for $D=7$ in \cite{Lu:2010sj}, and for general case in \cite{BCDPSp2}. The argument is based on a standard application of path-integral quantization to gravity.}
The value of the constant $\lambda_1$ depends on what is exactly the space of allowed configurations.
Taken at face value, this quantization may invalidate perturbation theory in $\lambda$.
We would like to argue that even if (\ref{quantCS}) is correct\footnote{One way to counter (\ref{quantCS})
is by noting that the argument used in obtaining (\ref{quantCS}) is quantum mechanical, and assumes
that the ``naive'' path integral formulation of gravity in which one integrates over metrics (or connections and vielbeins) is meaningful in the nonperturbative regime. This is normally a standard quantization prescription, but gravity is hardly a ``normal'' theory, especially in $D>3$ where general relativity cannot be put in the
form of a gauge theory. Indeed, we know basically nothing for sure about quantum gravity, so
a skeptical view on the correctness of the quantization of gCS coupling constant is not unmotivated.}, perturbation theory in $\lambda$ can be made meaningful. One can achieve this by scaling additional parameters
of the theory, which for the stationary black holes are $G_N$, $\mu$ and $a_i$. As in this case there are
two independent dimensionless parameters, there are several ways one can do this. We present two
possibilities:\footnote{For the sake of clarity, we restrict ourselves here to the case where all
parameters $a_i$ are equal, $a_i = a$.}
\begin{itemize}
\item[(a)]
We take as two independent dimensionless parameters $c_{\lambda N} \equiv \lambda G_N / \mu^{5/4}$
and $(a/\mu^{1/4})$, and take $c_{\lambda N} \ll 1$ by making the scaling parameter
$G_N / \mu^{5/4}$ sufficiently small while keeping $a$ and $\mu$ fixed. It is obvious that an expansion in
$\lambda$ can be trivially written as an expansion in $c_{\lambda N}$. This is the well-known scenario in which one takes the Planck length $l_{\mathrm{Pl}} = G_N^{1/5}$ to be much smaller than the physical scales in the problem.
\item[(b)]
We define a dimensionless parameter $c_{\lambda a} \equiv \lambda \, G_N a / \mu^{3/2}$,
and take $c_{\lambda a} \ll 1$ by making the scaling parameter $(a / \mu^{1/4})$ sufficiently small while keeping $G_N / \mu^{5/4}$ fixed. This is meaningful because, as we show below, one can
write the expansion in $\lambda$ as an expansion in $c_{\lambda a}$ with good convergence properties for small $a/\mu^{1/4}$.
\end{itemize}
We are interested here in case (b). Let us first discuss two subtleties. In both cases, (a) and (b), we can formally treat the expansion in $\lambda$ independently of the expansions of other quantities which are small in the relevant scaling parameters ($G_N / \mu^{5/4}$ and $a / \mu^{1/4}$, respectively). This is because one can make the effective coupling $c_{\lambda_n} \ll 1$ for arbitrarily high $n$ in the quantization law (\ref{quantCS}) by making the relevant scaling parameter sufficiently small. However, for a specifically chosen $\lambda_n$, at the end of the calculation one should group all the terms with the same powers of the small scaling parameters ($G_N / \mu^{5/4}$ and $a / \mu^{1/4}$, respectively).
We would like to argue that the claim in (b) is sound. We start from the equations of motion
(\ref{eom2}) and consider a perturbative solution in $\lambda$ around Myers-Perry metric (\ref{mpbh}). It is obvious that a perturbative expansion for the metric can be written in the form
\begin{equation} \label{pertsol}
\bar{g}_{\nu\sigma} = \sum_{k=0}^\infty c_{\lambda N}^k \, g^{(k)}_{\nu\sigma}
\end{equation}
where $c_{\lambda N} = \lambda \, G_N / \mu^{5/4}$, $g^{(0)}_{\nu\sigma}$ is MP black hole solution with all parameters $a_i$ equal, $a_i = a$, and $g^{(k)}_{\nu\sigma}$ depend on $\mu$ and $a$ (but not on
$\lambda$ and $G_N$). We assume that $c_{\lambda N}$ is small enough so that expansion
(\ref{pertsol}) is convergent. If $\lambda$ is quantized, and so assumes finite value from the set
(\ref{quantCS}), one can make $c_{\lambda N}$ small as we like by appropriately tuning Newton's constant $G_N$.
Now we want to show that in the perturbative expansion every power of $\lambda$ is accompanied by a factor of $a$. Formally following a standard procedure, we insert (\ref{pertsol}) in (\ref{eom2}) and collect terms with the same power of $\lambda$. It is important to note that $g^{(0)}_{\nu\sigma}$ is analytic in
$a$ around $a=0$, as are all operators obtained by expanding both sides of (\ref{eom2}). This allows us to make Taylor expansions in $a$. At first order one gets (we show this
explicitly in Sec. \ref{ssec:pertD7a})
\begin{equation}
G'[g^{(0)}] \cdot g^{(1)} = C[g^{(0)}]
\end{equation}
where, for the sake of simplicity, we use an abstract notation (the symbol $G'[g^{(0)}]$ is in fact a linear differential operator acting on $g^{(1)}_{\nu\sigma}$). The key point is that the right-hand side (i.e., the gCS term) generates an extra
factor of $a^2$ (in view of its $a$-dependence), in such a way that every component of
$g^{(1)}_{\nu\sigma}$ has an extra factor of $a$ compared with $g^{(0)}_{\nu\sigma}$. This means that if we
make the redefinition $g^{(1)}_{\nu\sigma} \equiv a^2 h^{(1)}_{\nu\sigma}$ the expansion in (\ref{pertsol})
becomes\footnote{In fact, as we show in Sec. \ref{ssec:pertD7a}, $g^{(1)}_{t\phi_i}$ contains a multiplicative factor of $a^2$, while all other components of $g^{(1)}_{\nu\sigma}$ have a multiplicative factor $a^3$.}
\begin{equation} \label{pert1sol}
\bar{g}_{\nu\sigma} = g^{(0)}_{\nu\sigma} + (c_{\lambda N} a^2) h^{(1)}_{\nu\sigma} +
\sum_{k=2}^\infty c_{\lambda N}^k \, g^{(k)}_{\nu\sigma}
\end{equation}
At the second order we obtain a (differential) equation
\begin{equation} \label{1oreq}
G'[g^{(0)}] \cdot g^{(2)} = a^2 \, C'[g^{(0)}] \cdot h^{(1)} - a^4 \, G''[g^{(0)}] \cdot h^{(1)} \cdot h^{(1)}
\end{equation}
It can be shown that $C'[g^{(0)}] \cdot h^{(1)} \propto a^2$. It then follows from (\ref{1oreq}) that
$g^{(2)}_{\nu\sigma}$ has (at least) an extra multiplicative factor $a^4$ compared with $g^{(0)}_{\nu\sigma}$.
Defining $g^{(2)}_{\nu\sigma} \equiv a^4 h^{(2)}_{\nu\sigma}$ and using this in (\ref{pert1sol}) we get
\begin{equation} \label{pert2sol}
\bar{g}_{\nu\sigma} = g^{(0)}_{\nu\sigma} + (c_{\lambda N} a^2) h^{(1)}_{\nu\sigma} +
(c_{\lambda N} a^2)^2 h^{(2)}_{\nu\sigma} + \sum_{k=3}^\infty c_{\lambda N}^k \, g^{(k)}_{\nu\sigma}
\end{equation}
Repeating this procedure we finally get
\begin{equation} \label{pertsolinf}
\bar{g}_{\nu\sigma} = \sum_{k=0}^\infty (c_{\lambda N} a^2)^k h^{(k)}_{\nu\sigma}
\end{equation}
where $h^{(0)}_{\nu\sigma} \equiv g^{(0)}_{\nu\sigma}$ and all $h^{(k)}_{\nu\sigma}$ are analytic in
$a$ around $a=0$. We now see that our perturbative expansion is an effective expansion in $(\lambda a^2)$.
We can write (\ref{pertsolinf}) in the following form
\begin{equation} \label{psolinfga}
\bar{g}_{\nu\sigma} = \sum_{k=0}^\infty (c_{\lambda a})^k \, \tilde{h}^{(k)}_{\nu\sigma} \;\;, \qquad \qquad
\tilde{h}^{(k)}_{\nu\sigma} = (\mu^{1/4} a)^k h^{(k)}_{\nu\sigma}
\end{equation}
where $c_{\lambda a} = \lambda \, G_N a / \mu^{3/2}$ is a dimensionless parameter. What is interesting in this new parametrization is that $\tilde{h}^{(k)}_{\nu\sigma}$, besides being analytic in $a$, also satisfies
\begin{equation}
\lim_{a \to 0} \tilde{h}^{(k)}_{\nu\sigma} = 0
\end{equation}
We now see that (\ref{psolinfga}) is an expansion in $(\lambda a)$ with coefficients which become very small when $a/\mu^{1/4}$ is small, improving the convergence of the expansion in that regime. Comparing (\ref{psolinfga}) with the expansion (\ref{pertsol}), we conclude that (\ref{psolinfga}) can be made sensible even for finite $\lambda$ and $G_N/\mu^{5/4}$, if we take $a$ small enough. This is exactly our claim in (b).
\vspace{10pt}
\subsection{Perturbative expansion in $\lambda$: Equations of motion}
\label{ssec:pertD7a}
Our aim is to find perturbative stationary rotating asymptotically flat black hole solutions in $D=7$ in the theory with Lagrangian (\ref{lhecs}), to first order in the gCS coupling $\lambda$. For simplicity we specialize to the case when all angular momenta $J_i$, $i=1,2,3$, are equal. We perturb around MP black holes which are parametrized by two numbers ($\mu$, $a$), because in this case
$a_i = a$, $i=1,2,3$. As we discussed in Sec. \ref{sec:d7bh}, this case is rich enough to expect all relevant quantities to be perturbed by the gCS terms.
As in (\ref{pertsol}) we search for the perturbative solution
\begin{equation} \label{pertsol1}
\bar{g}_{\nu\sigma} = g^{(0)}_{\nu\sigma} + \alpha \, g^{(1)}_{\nu\sigma} + O(\alpha^2) \;,
\end{equation}
where for convenience we defined
\begin{equation} \label{alpha}
\alpha \equiv 16\pi \, G_N \lambda \;\;.
\end{equation}
Putting (\ref{pertsol1}) in the EOM (\ref{eom2}) and using the gauge conditions
\begin{equation} \label{gauge}
g^{(0)\nu\rho} g^{(1)}_{\nu\rho} = 0 \;\;, \qquad\qquad \nabla^{\nu} g^{(1)}_{\nu\rho} = 0,
\end{equation}
one obtains
\begin{equation} \label{eom1st}
-\frac{1}{2} \nabla^{\beta} \nabla_{\beta} \, g^{(1)}_{\nu\sigma}
+ R^{\beta}{}_{\nu\sigma}{}^{\rho} g^{(1)}_{\beta \rho} = C_{\nu\sigma}[g^{(0)}]
\end{equation}
In (\ref{gauge}) and (\ref{eom1st}) covariant derivative $\nabla_\nu$, Riemann tensor
$R_{\beta\nu\sigma\rho}$ and $C_{\nu\sigma}$ are constructed from the unperturbed metric
$g^{(0)}_{\nu\sigma}$, which is also used for raising and lowering indices. By solving (\ref{eom1st}) one obtains the first-order correction to metric $g^{(1)}_{\nu\sigma}$.
In our case $g^{(0)}_{\nu\sigma}$ is the MP black hole metric with all angular momenta equal, i.e.,
\begin{equation} \label{Jequal}
a_i = a \;\;, \qquad\qquad i=1,2,3 \;\;.
\end{equation}
From (\ref{mpbh}), (\ref{fpidef}) and (\ref{mucon}) one gets
\begin{eqnarray}
ds_{(0)}^2 &\equiv& g^{(0)}_{\nu\sigma} dx^\nu dx^\sigma
\nonumber \\
&=& -dt^2 + \frac{\mu \, r^2}{\Pi \, F} \left( dt - a \sum_{i=1}^3 \mu_i^2 \, d\phi_i \right)^{\! 2} \!
+ \frac{\Pi \, F}{\Pi - \mu \, r^2} dr^2 + (r^2 + a^2) \sum_{i=1}^3 (d\mu_i^2 + \mu_i^2 d\phi_i^2)
\label{mpbhJeq}
\end{eqnarray}
where now
\begin{equation} \label{fpidefJeq}
F = F(r) = 1 - \frac{a^2}{r^2 + a^2} = \frac{r^2}{r^2 + a^2} \; , \qquad \qquad
\Pi = \Pi(r) = (r^2 + a^2)^3
\end{equation}
Condition (\ref{Jequal}) substantially simplifies the MP metric. In fact, it can be shown that
the dependence on the coordinates $\vec{\mu}$ is completely fixed by the enhanced symmetries induced
by (\ref{Jequal}). We use this to write $g^{(1)}_{\nu\sigma}$ in the following form
\begin{eqnarray}
ds_{(1)}^2 &\equiv& g^{(1)}_{\nu\sigma} dx^\nu dx^\sigma
\nonumber \\
&=& f_{t}(r) \left(\mu - (a^2 + r^2)^2 \right) dt^2 + f_{r}(r) \frac{r^2 (a^2 + r^2)^2}{\Pi - \mu r^2} dr^2 +
h(r) (a^2+r^2) \sum_{i=1}^3 ( d\mu_i^2 +\mu_i^2 d\phi_i^2 )
\nonumber \\
&& - f_{t\phi}(r) \frac{a \, \mu}{(a^2+r^2)^2} \sum_{i=1}^3 \mu_i^2 \, dt \, d\phi_i
+ f_{\phi}(r) \frac{a^2\mu}{(a^2 + r^2)^2} \sum_{i,j=1}^3 \mu_i^2 \mu_j^2 \, d\phi_i \, d\phi_j
\label{g1Jeq}
\end{eqnarray}
where $f_t$, $f_r$, $h$, $f_{t\phi}$, and $f_\phi$ are five unknown functions of the coordinate $r$ alone, to be
found by solving the equations of motion. We see that in the special case (\ref{Jequal}), due to the enhancement of symmetry, the problem generally (i.e., not only in perturbation theory) boils down to solving a system of \emph{ordinary} differential equations, which is of immense help.
Inserting (\ref{mpbhJeq}) and (\ref{g1Jeq}) into the gauge conditions (\ref{gauge}) imposes two constraints on the
unknown functions, which we use to express $f_t(r)$ and $f_{t\phi}(r)$ in terms of the remaining three
functions. The result is
\begin{eqnarray}
g^{(1)}_{tt} &=& \frac{1}{3r(r^2 + a^2)^3} \left\{ 2r \left[\mu \, a^2 f_{\phi}(r)
+ \left( 4(r^2 + a^2)^3 - \mu (2 r^2 + a^2) \right) f_r(r) \right. \right.
\nonumber \\
&& \left. \left. + \left( 5(r^2 + a^2)^3 - 2 a^2 \mu \right) h(r) \right]
+ (r^2 + a^2) \left( (r^2 + a^2)^3 - r^2 \mu \right) f'_r(r) \right\}
\nonumber \\
g^{(1)}_{t\phi_i} &=& \frac{\mu_i^2}{6a\mu r(r^2 + a^2)^3} \left\{ \mu a^2 r \left(
\mu (3 r^2 + 5 a^2) - (r^2 + a^2)^3 \right) f_{\phi}(r) \right.
\nonumber \\
&& + r \left( 5(r^2 + a^2)^6 - \mu (r^2 - 6 a^2) (r^2 + a^2)^3 - 2 \mu^2 a^2 (2 r^2 + a^2) \right) f_r(r)
\nonumber \\
&& - r \left( 5(r^2 + a^2)^6 - 3 \mu (5 r^2 + 3 a^2) (r^2 + a^2)^3 + 4 \mu^2 a^4 \right) h(r)
\nonumber \\
&& \left. + (r^2 + a^2) \left( (r^2 + a^2)^6 - \mu (r^2 - a^2) (r^2 + a^2)^3 - r^2 a^2 \mu^2 \right) f'_r(r)
\right\}
\nonumber \\
g^{(1)}_{rr} &=& \frac{r^2 (r^2+a^2)^2}{(r^2+a^2)^3-r^2 \mu} f_r(r)
\nonumber \\
g^{(1)}_{\mu_1\mu_1} &=& \frac{(r^2+a^2) (1-\mu_2^2)}{1-\mu_1^2-\mu_2^2} h(r)
\nonumber \\
g^{(1)}_{\mu_1\mu_2} &=& \frac{(r^2+a^2) \mu_1 \mu_2}{1-\mu_1^2-\mu_2^2} h(r)
\nonumber \\
g^{(1)}_{\mu_2\mu_2} &=& \frac{(r^2+a^2) (1-\mu_1^2)}{1-\mu_1^2-\mu_2^2} h(r)
\nonumber \\
g^{(1)}_{\phi_i\phi_j} &=&
\frac{a^2 \mu \, \mu_i^2\mu_j^2}{(r^2+a^2)^2} f_\phi(r) + \delta_{ij} (r^2+a^2) \mu_i^2 \, h(r)
\label{g1tpi}
\end{eqnarray}
Inserting (\ref{mpbhJeq}) and (\ref{g1tpi}) in the EOM (\ref{eom1st}) we obtain the following system
of differential equations for the remaining unknown functions $f_r(r)$, $h(r)$, and $f_\phi(r)$
\begin{eqnarray}
f''_r(r) &=& \frac{\mu \, r^2 (7 r^2 + 3 a^2) - (15 r^2 - a^2)(r^2 + a^2)^3}{r (r^2 + a^2)
\left[ (r^2 + a^2)^3 - \mu \, r^2 \right]} f'_r(r)
- \frac{8 r^2 \left( 5 (r^2 + a^2)^3 - a^2 \mu \right)}{(r^2 + a^2)^2
\left[ (r^2 + a^2)^3 - \mu \, r^2 \right]} f_r(r)
\nonumber \\
&& + \frac{8 a^2 \mu \, r^2}{(r^2 + a^2)^2 \left[ (r^2 + a^2)^3 - \mu \, r^2 \right]} f_\phi(r)
+ \frac{8 r^2 \left( 5 (r^2 + a^2)^3 - 2 a^2 \mu \right)}{(r^2 + a^2)^2
\left[ (r^2 + a^2)^3 - \mu \, r^2 \right]} h(r)
\nonumber \\[1mm]
&& + \frac{3456 \, a^3 \mu^3 r^2 (7 r^2 - a^2)}{(r^2 + a^2)^{10}
\left[ (r^2 + a^2)^3 - \mu \, r^2 \right]}
\label{freq} \\[3mm]
h''(r) &=& \frac{2r}{r^2 + a^2} f'_r(r) + \frac{4 r^2 \left( 2 (r^2 + a^2)^3 - a^2 \mu \right)}{(r^2 + a^2)^2
\left[ (r^2 + a^2)^3 - \mu \, r^2 \right]} f_r(r)
\nonumber \\[1mm]
&& - \frac{4 a^2 \mu \, r^2}{(r^2 + a^2)^2 \left[ (r^2 + a^2)^3 - \mu \, r^2 \right]} f_\phi(r)
- \frac{(5 r^2 - a^2)(r^2 + a^2)^2 - \mu \, r^2}{r \left[ (r^2 + a^2)^3 - \mu \, r^2 \right]} h'(r)
\nonumber \\[1mm]
&& - \frac{8 r^2 \left( (r^2 + a^2)^3 - a^2 \mu \right)}{(r^2 + a^2)^2
\left[ (r^2 + a^2)^3 - \mu \, r^2 \right]} h(r)
+ \frac{1728 \, a^3 \mu^3 r^2}{(r^2 + a^2)^9 \left[ (r^2 + a^2)^3 - \mu \, r^2 \right]}
\label{fpeq}
\end{eqnarray}
\begin{eqnarray}
f''_\phi(r) &=& - \frac{2r \left( 5 (r^2 + a^2)^3 + 4 a^2 \mu \right)}{a^2 \mu (r^2 + a^2)} f'_r(r)
- \frac{4 r^2 \left( 10 (r^2 + a^2)^6 + 3 a^2 \mu (r^2 + a^2)^3 - 4 a^4 \mu^2 \right)}{a^2 \mu
(r^2 + a^2)^2 \left[ (r^2 + a^2)^3 - \mu \, r^2 \right]} f_r(r)
\nonumber \\[1mm]
&& + \frac{5 r^2 + a^2}{r (r^2 + a^2)} f'_\phi(r)
+ \frac{4 r^2 \left( 5 (r^2 + a^2)^3 + 4 a^2 \mu \right)}{(r^2 + a^2)^2
\left[ (r^2 + a^2)^3 - \mu \, r^2 \right]} f_\phi(r)
\nonumber \\[1mm]
&& - \frac{2 r \left( 5 (r^2 + a^2)^6 - 3 \mu (5 r^2 + a^2) (r^2 + a^2)^3 + 4 a^4 \mu^2 \right)}{a^2 \mu
(r^2 + a^2) \left[ (r^2 + a^2)^3 - \mu \, r^2 \right]} h'(r)
\nonumber \\[1mm]
&& + \frac{8 r^2 \left( 5 (r^2 + a^2)^6 - a^2 \mu (r^2 + a^2)^3 - 4 a^4 \mu^2 \right)}{a^2 \mu
(r^2 + a^2)^2 \left[ (r^2 + a^2)^3 - \mu \, r^2 \right]} h(r)
\nonumber \\[1mm]
&& - \frac{1728 \, a \, \mu^2 r^2 \left( (215 r^2 - 57 a^2) (r^2 + a^2)^3
- 2 \mu (105 r^4 - 33 a^2 r^2 - 2 a^4) \right)}{(r^2 + a^2)^{10}
\left[ (r^2 + a^2)^3 - \mu \, r^2 \right]}
\label{heq}
\end{eqnarray}
\vspace{10pt}
\subsection{Solving at lowest order in $a$}
\label{ssec:solalaw}
Equations (\ref{freq})-(\ref{heq}) still appear too nasty to be solved exactly, so we turn to slowly
rotating black holes, i.e., $a/\mu^{1/4} \ll 1$.\footnote{A similar double perturbative expansion was performed in \cite{Yunes:2009hc} for the case of a perturbation of Einstein gravity with a massless scalar field in $D=4$ by a mixed Chern-Simons Lagrangian term. In contrast to our case, in that theory the lowest-order correction does not capture changes in horizon properties like area and temperature.} In this regime, solutions of (\ref{freq})-(\ref{fpeq}), with the proper asymptotic behavior to describe asymptotically flat black holes, are given by
\begin{eqnarray}
f_r(r) &=& \frac{432}{5} \frac{a^3 \mu^3}{r^{16} \left(r^4-\mu \right)} + O(a^5)
\nonumber \\
f_\phi(r) &=& -1296 \, \frac{a \, \mu^2}{r^{14}} - \frac{5 \, r^6}{a^2 \mu} h(r) + O(a^3)
\nonumber \\
h(r) &=& \frac{2592}{5} \frac{a^3}{\mu^2}\, \tilde{h}(r^4/\mu) + O(a^5) \;\;,
\label{sola}
\end{eqnarray}
where the function $\tilde{h}(u)$ is given by
\begin{eqnarray}
\tilde{h}(u) &=& - Q_{1/2}(2u-1) \int_1^u \frac{dx}{x^5} P_{1/2}(2x-1)
\nonumber \\
&& + P_{1/2}(2u-1) \left( \int_\infty^u \frac{dx}{x^5} Q_{1/2}(2x-1)
- i \frac{\pi}{2} \int_1^\infty \frac{dx}{x^5} P_{1/2}(2x-1) \right)
\label{hhdef}
\end{eqnarray}
and $P_\nu$ and $Q_\nu$ are standard Legendre functions. Details of the calculation are presented in Appendix \ref{app:tech}. Using (\ref{sola}) and (\ref{hhdef}) in (\ref{g1tpi}) we obtain
$g^{(1)}_{\mu\nu}$ at the lowest order in $a$
\begin{subequations}
\allowdisplaybreaks
\begin{align}
g^{(1)}_{tt} &= - \frac{6048}{5} \frac{a^3 \mu^3}{r^{20}} + O(a^5)
\\
g^{(1)}_{t\phi_i} &= - \frac{72}{5} \frac{a^2 \mu^3 (43 \, r^4 - 45\, \mu)}{r^{18}(r^4 - \mu)} \mu_i^2
+ O(a^4)
\\
g^{(1)}_{rr} &= \frac{432}{5} \frac{a^3 \mu^3}{r^{12} \left(r^4-\mu \right)^2} + O(a^5)
\\
g^{(1)}_{\mu_1\mu_1} &= \frac{2592}{5} \frac{a^3}{\mu^2} r^2 \tilde{h}(r^4/\mu)
\frac{1-\mu_2^2}{1-\mu_1^2-\mu_2^2} + O(a^5)
\\
g^{(1)}_{\mu_1\mu_2} &= \frac{2592}{5} \frac{a^3}{\mu^2} r^2 \tilde{h}(r^4/\mu)
\frac{\mu_1 \mu_2}{1-\mu_1^2-\mu_2^2} + O(a^5)
\\
g^{(1)}_{\mu_2\mu_2} &= \frac{2592}{5} \frac{a^3}{\mu^2} r^2 \tilde{h}(r^4/\mu)
\frac{1-\mu_1^2}{1-\mu_1^2-\mu_2^2} + O(a^5)
\\
g^{(1)}_{\phi_i\phi_j} &= - 1296 \frac{a^3 \mu^3}{r^{18}}
\left[ 1 + 2 \frac{r^{20}}{\mu^5} \tilde{h}(r^4/\mu) \right] \mu_i^2 \mu_j^2
+ \delta_{ij} \frac{2592}{5} \frac{a^3}{\mu^2} r^2 \, \tilde{h}(r^4/\mu) \, \mu_i^2 + O(a^5)
\end{align}
\label{g1a1o}
\end{subequations}
where $i,j = 1,2,3$ and $\mu_3^2 = 1 - \mu_1^2 - \mu_2^2$. Note that the ``ugly'' part containing
$\tilde{h}$ cancels in $g^{(1)}_{tt}$ and $g^{(1)}_{t\phi_i}$ in the lowest order in $a$.
Let us check that our perturbed solution still describes an asymptotically flat black hole. We will do this by
checking the behavior of the perturbed metric in two limits - asymptotic infinity and near-horizon. For this
we need the corresponding behavior of the function $\tilde{h}(u)$ which we defined in (\ref{hhdef}).
The asymptotic behavior of the function $\tilde{h}(u)$ in the $u \to \infty$ limit is of the form
\begin{equation} \label{hhasymp}
\tilde{h}(u) = C u^{-3/2}
+ O(u^{-5/2}) \;\;,
\end{equation}
where the constant $C$ is
\begin{equation} \label{cval}
C = - \frac{\pi}{16} \int_1^\infty \frac{dx}{x^5} P_{1/2}(2x-1) \approx -0.0593 \ldots
\end{equation}
This means that $\tilde{h}(r^4/\mu) \propto 1/r^6$, so the asymptotic behavior of (\ref{g1a1o}) in
the limit $r \to \infty$ at the lowest order in $a$ is
\begin{equation} \label{falloff}
g^{(1)}_{tt} \sim O(r^{-20}) \;, \quad g^{(1)}_{t\phi_i} \sim O(r^{-18}) \;, \quad
g^{(1)}_{rr} \sim O(r^{-20}) \;, \quad g^{(1)}_{\mu_i\mu_j} \sim O(r^{-4}) \;, \quad
g^{(1)}_{\phi_i\phi_j} \sim O(r^{-4}) \;.
\end{equation}
We see explicitly that the perturbed solution is still asymptotically flat and that the fall-off conditions
(\ref{falloff}) guarantee that the metric perturbation (\ref{g1a1o}) does not change the relations between
asymptotic quantities (energy and angular momentum) and black hole parameters ($\mu$ and $a$).
However, we should ask what happens at higher orders in the perturbation parameter $a/\mu^{1/4}$. To
answer this we have performed a detailed analysis by perturbatively solving Eqs. (\ref{freq})-(\ref{fpeq})
in the regime $r \gg \mu^{1/4}$, using (\ref{g1a1o}) as a starting point, to all relevant orders in
$u = r^4/\mu$ and $a/\mu^{1/4}$. We have found that $g^{(1)}_{\mu\nu}$ has the following asymptotic
behavior at $r \to \infty$
\begin{equation} \label{fallofftrue}
g^{(1)}_{tt} \sim \frac{a^5}{r^{16}} \;, \quad g^{(1)}_{t\phi_i} \sim \frac{a^4}{r^{10}} \;, \quad
g^{(1)}_{rr} \sim \frac{a^5}{r^{12}} \;, \quad g^{(1)}_{\mu_i\mu_j} \sim \frac{a^3}{r^4} \;, \quad
g^{(1)}_{\phi_i\phi_j} \sim \frac{a^3}{r^4} \;.
\end{equation}
We see that after including \emph{all} orders of $a/\mu^{1/4}$ in $g^{(1)}_{\mu\nu}$, the asymptotics have changed, but not in a significant way - the conclusion is that the Myers-Perry relations
(\ref{MPmass}) and (\ref{MPangm}) are still valid.
In the limit $r \to \mu^{1/4}$ (i.e., $u \to 1$), the function $\tilde{h}$ has the following expansion
\begin{equation} \label{thexpu1}
\tilde{h}(u) = \tilde{h}(1) + \frac{1}{4} (77 + 23 \, \tilde{h}(1)) (u-1)
+ \frac{5}{64} (847 + 173 \, \tilde{h}(1)) (u-1)^2 + O\big( (u-1)^3 \big)
\end{equation}
where
\begin{equation} \label{thu1}
\tilde{h}(1) = - \int_1^\infty \frac{dx}{x^5} \left( Q_{1/2}(2x-1) + i \frac{\pi}{2} P_{1/2}(2x-1) \right)
\approx - 0.15336 \ldots
\end{equation}
This implies that the metric perturbation (\ref{g1a1o}) has the expected behavior for
the black hole in the vicinity of the horizon, which, at zeroth order in $a$, is located at
$r = \mu^{1/4}$. We shall see that the part of the expansion (\ref{thexpu1}) proportional to the ``ugly''
constant $\tilde{h}(1)$ does not contribute to near-horizon quantities (event horizon and ergosurface\footnote{Generically, the ergosurface is not in the near-horizon region. However, as we are doing a
perturbative calculation in $a$, for $|a|/\mu^{1/4} \ll 1$ the ergosurface is perturbatively close to the
horizon.} properties), as it cancels in the calculations.
Now we are ready to calculate corrections to various black hole parameters. Below we present
the main results while technical details of the calculations can be found in Appendix \ref{app:tech:pertq}.
\subsubsection{Event horizon}
\label{sssec:evhor}
We can find the location of the event horizon in standard fashion from
\begin{equation} \label{rHeq}
\bar{g}^{rr}(\bar{r}_H) = 0
\end{equation}
From (\ref{pertsol1}), (\ref{mpbhJeq}) and (\ref{g1a1o}) it follows that
\begin{equation}
\bar{g}^{rr}(r) = \frac{(r^2 + a^2)^3 - r^2 \mu}{r^2 (r^2 + a^2)^2}
- \alpha \left( \frac{432}{5} \frac{a^3 \mu^3}{r^{20}} + O(a^5) \right)
+ \sum_{k=2}^\infty \alpha^k \, O(a^{2k})
\end{equation}
which, plugged in (\ref{rHeq}), gives
\begin{equation} \label{rhoriz}
\bar{r}_H = r_{H0} + \alpha \left( \frac{108}{5} \frac{a^3}{\mu^{7/4}} + O(a^5) \right)
+ \sum_{k=2}^\infty \alpha^k \, O(a^{2k})
\end{equation}
where $r_{H0}$ is the horizon radius for a MP black hole with $a_i = a$. Taking into account
the possibility that the gCS coupling constant $\lambda$ (and hence, by (\ref{alpha}), also $\alpha$)
is quantized, we must eventually view the double expansion in (\ref{rhoriz}) (over $\alpha$ and $a$)
formally as a single expansion (over $a$).\footnote{We explained this in detail in Sec. \ref{ssec:pertOK}.}
The final result is
\begin{equation} \label{rHapert}
\bar{r}_H = \mu^{1/4} - \frac{3}{4} \frac{a^2}{\mu^{1/4}} + \frac{108}{5} \frac{\alpha \, a^3}{\mu^{7/4}}
+ O(a^4)
\end{equation}
We note that the same result for the event horizon is obtained from an analysis of circular orbits, which leads to the horizon condition
\begin{equation}
\left( \sum_i g_{t\phi_i} \right)^{\!\!2} - g_{tt} \sum_{i,j} g_{\phi_i \phi_j} = 0
\end{equation}
The details can be found in Appendix \ref{app:tech:pertq}.
The location of the horizon is a coordinate-dependent result, so by itself the result
(\ref{rhoriz})-(\ref{rHapert}) does not say much.\footnote{Note that (\ref{rhoriz})-(\ref{rHapert}) naively suggest that for $a>0$ the gCS term tends to ``enlarge'' the horizon (at lowest order of perturbation around $a=0$), but calculating the horizon area (\ref{areaH}) shows that it actually tends to ``shrink'' it.} We
have to calculate proper, coordinate-independent quantities connected with the event horizon. One obvious such quantity is the proper area of the horizon, which we also need to find the black hole entropy. In Appendix \ref{app:tech:area} we show that it is given by
\begin{equation} \label{areaH}
\bar{A}_H = A_H^{(0)}
- \alpha \left( 540 \pi^3 \frac{a^3}{\mu^{3/4}} + O(a^5) \right)
+ \sum_{k=2}^\infty \alpha^k \, O(a^{2k})
\end{equation}
where the first term on the right side is the horizon area of the Myers-Perry black hole
\begin{equation}
A_H^{(0)} = \pi^3 \mu \, r_{H0}
\end{equation}
By expanding $r_{H0}$ (horizon radius of the MP black hole) in $a$ we obtain
\begin{equation} \label{aHapert}
\bar{A}_H = \pi^3 \left( \mu^{5/4} - \frac{3}{4} a^2 \mu^{3/4}
- 540 \, \alpha \frac{a^3}{\mu^{3/4}} \right) + O(a^4)
\end{equation}
Now we see that the gCS Lagrangian term induces a real change in the geometry of black hole solutions.
\subsubsection{Ergosurface}
\label{sssec:ergos}
The location of the ergosurface is obtained from the infinite red-shift condition
\begin{equation} \label{rerdef}
\bar{g}_{tt}(\bar{r}_e) = 0 \;\;.
\end{equation}
From (\ref{pertsol1}), (\ref{mpbhJeq}) and (\ref{g1a1o}) it follows that
\begin{equation}
\bar{g}_{tt}(r) = -1 + \frac{\mu}{(r^2 + a^2)^2}
- \alpha \left( \frac{6048}{5} \frac{a^3 \mu^3}{r^{20}} + O(a^5) \right)
+ \sum_{k=2}^\infty \alpha^k \, O(a^{2k})
\end{equation}
By inserting this in (\ref{rerdef}) we obtain that the ergosurface is defined by the condition $r=\bar{r}_e$,
where
\begin{equation} \label{rergo}
\bar{r}_e = \sqrt{\mu^{1/2} - a^2} - \alpha \left( \frac{1512}{5} \frac{a^3}{\mu^{7/4}} + O(a^5) \right)
+ \sum_{k=2}^\infty \alpha^k \, O(a^{2k})
\end{equation}
By expanding the first term (which is the ergosurface radius of the MP black hole) and collecting powers of $a$ we obtain
\begin{equation}
\bar{r}_e = \mu^{1/4} - \frac{1}{2} \frac{a^2}{\mu^{1/4}} - \frac{1512}{5} \frac{\alpha \, a^3}{\mu^{7/4}}
+ O(a^4)
\end{equation}
\subsubsection{Angular velocity}
\label{sssec:angvel}
If we write the horizon generating null Killing vector $\bar\chi$ as
\begin{equation} \label{killgen}
\bar\chi = \frac{\partial}{\partial t} + \bar\Omega_H \sum_i \frac{\partial}{\partial \phi_i}
\end{equation}
then $\bar\Omega_H$ is the angular velocity of the horizon. We can obtain it from the null-condition
on the horizon
\begin{equation}
\bar{\chi}^2(\bar{r}_H) \equiv \left. \bar\chi^\mu \bar\chi^\nu \bar g_{\mu\nu}\right|_{r=\bar r_H} = 0
\end{equation}
From (\ref{killgen}) and the form of the metric it follows that
\begin{equation} \label{avform}
\bar{\Omega}_H
= - \left. \frac{\sum_i \bar g_{t\phi_i} }{\sum_{i,j} \bar g_{\phi_i\phi_j}} \right|_{r=\bar r_H}
\end{equation}
Putting (\ref{pertsol1}), (\ref{mpbhJeq}), (\ref{g1a1o}), and (\ref{rhoriz}) in (\ref{avform}) we obtain
\begin{equation} \label{avsol}
\bar{\Omega}_H = \frac{a}{r_{H0}^2 + a^2}
- \alpha \left( 648 \frac{a^2}{\mu^{2}} + O(a^4) \right)
+ \sum_{k=2}^\infty \alpha^k \, O(a^{2k})
\end{equation}
By expanding the first term (which is $\Omega_H$ of the MP black hole) and collecting powers of
$a$ we obtain
\begin{equation}
\bar{\Omega}_H = \frac{a}{\sqrt{\mu}} - 648 \, \alpha \frac{a^2}{\mu^{2}}
+ \frac{1}{2} \frac{a^3}{\mu} + O(a^4)
\end{equation}
\subsubsection{Surface gravity and black hole temperature}
\label{sssec:temper}
The surface gravity $\bar{\kappa}$ is defined by
\begin{equation}
\bar\chi^\mu \bar\nabla_\mu \bar\chi^\nu = \bar\kappa \bar\chi^\nu \qquad
\textrm{on the horizon $r = \bar r_H$}
\end{equation}
Using (\ref{killgen}), (\ref{avsol}), (\ref{pertsol1}), (\ref{mpbhJeq}), (\ref{g1a1o}), and (\ref{rhoriz})
we obtain
\begin{equation}
\bar\kappa = \frac{3 r_{H0}}{r_{H0}^2 + a^2}-\frac{1}{r_{H0}}
+ \alpha \left( 1944 \frac{a^3}{\mu^{9/4}} + O(a^5) \right)
+ \sum_{k=2}^\infty \alpha^k \, O(a^{2k})
\end{equation}
By expanding the first term (which is $\kappa$ of the MP black hole) and collecting powers of
$a$ we obtain
\begin{equation}
\bar\kappa = \frac{2}{\mu^{1/4}} - \frac{3}{2} \frac{a^2}{\mu^{3/4}}
+ 1944 \, \alpha \frac{a^3}{\mu^{9/4}} + O(a^4)
\end{equation}
The black hole temperature $\bar{T}_H$ is obtained from surface gravity via
\begin{equation}
\bar{T}_H = \frac{\bar\kappa}{2\pi}
\end{equation}
\subsubsection{Black hole entropy}
\label{sssec:entropy}
As discussed in Sec. \ref{sec:eom}, the black hole entropy in our case is given by
\begin{equation} \label{Stot}
\bar{S}_{\mathrm{bh}} = \bar{S}_{\mathrm{BH}} + \lambda \bar{S}_{\mathrm{gCS}}
\end{equation}
where $\bar{S}_{\mathrm{BH}}$ is the Bekenstein-Hawking entropy proportional to the proper horizon area
$\bar{A}_H$
\begin{equation}
\bar{S}_{\mathrm{BH}} = \frac{\bar{A}_H}{4 \, G_N}
\end{equation}
and $\bar{S}_{\mathrm{gCS}}$ is the contribution induced by the gCS Lagrangian term, given by
\cite{Tachikawa:2006sz,Bonora:2011gz}
\begin{equation}
S_{\mathrm{gCS}} = 16\pi \int_\mathcal{B} \mathbf{\Gamma}_N \mathbf{R}_N^2
\end{equation}
By using (\ref{pertsol1}), (\ref{mpbhJeq}), (\ref{g1a1o}), (\ref{rhoriz}) and (\ref{alpha}) we obtain
\begin{equation} \label{SBH}
\bar{S}_{\mathrm{BH}} = \frac{A_H^{(0)}}{4 \, G_N}
- \lambda \left( 2160 \pi^4 \frac{a^3}{\mu^{3/4}} + O(a^5) \right)
+ \sum_{k=2}^\infty \alpha^k \, O(a^{2k})
\end{equation}
where the first term is the entropy of the Myers-Perry black hole. In the same way we obtain
\begin{equation} \label{SgCS}
\bar{S}_{\mathrm{gCS}} = 3456 \, \pi^4 \left( \frac{a}{r_{H0}} \right)^{3}
+ \sum_{k=2}^\infty \alpha^k \, O(a^{2k})
\end{equation}
Interestingly, the simple result (\ref{SgCS}) is $a$-exact at lowest order in $\lambda$. Plugging
(\ref{SBH}) and (\ref{SgCS}) into (\ref{Stot}) gives us the black hole entropy
\begin{equation}
\bar{S}_{\mathrm{bh}} = \frac{A_H^{(0)}}{4 \, G_N}
+ \lambda \left( (6 \pi)^4 \frac{a^3}{\mu^{3/4}} + O(a^5) \right)
+ \sum_{k=2}^\infty \alpha^k \, O(a^{2k})
\end{equation}
or, written purely as expansion in $a$,
\begin{equation}
\bar{S}_{\mathrm{bh}} = \frac{\pi^3}{4 \, G_N} \left( \mu^{5/4} - \frac{3}{4} a^2 \mu^{3/4} \right)
+ (6 \pi)^4 \lambda \frac{a^3}{\mu^{3/4}} + O(a^4)
\end{equation}
\vspace{20pt}
\section{Conclusion}
\label{sec:concl}
\bigskip
We have investigated in some detail the influence of adding a purely gravitational Chern-Simons Lagrangian term to the action of a diffeomorphism covariant theory of gravity on asymptotically flat stationary rotating black hole solutions and the corresponding black hole entropy. We have shown that the structure of the Chern-Simons term, characterized by its parity violating properties, does not have any effect when two or more angular momenta vanish. Perturbative arguments indicate that, in
cases when at most one angular momentum is zero, the influence of gravitational Chern-Simons terms is instead nontrivial, both on the solutions and on the entropy.
In an attempt to find black hole solutions we have specialized to what seems to be the simplest nontrivial case, i.e.
Einstein gravity supplemented with a gravitational Chern-Simons Lagrangian term in $D=7$, and black holes with all angular momenta equal. We have calculated the first-order correction of the gravitational Chern-Simons entropy term and argued that it does not correspond to any geometric property of the interior of the black hole (like the inner horizon surface area), contrary to the conjecture made in
\cite{Solodukhin:2005ah} which was based on the analysis of rotating AdS black holes in $D=3$. Due to the complexity of the equations of motion,
we have not been able to find exact analytic solutions. We have turned to a double perturbative expansion, in the Chern-Simons coupling constant and the angular momentum, and constructed the first-order correction to the Myers-Perry solution. We have explicitly calculated corrections to the horizon area, ergoregion and black hole entropy, all of which are nonvanishing. The perturbative analysis shows that the influence of the gravitational Chern-Simons Lagrangian term is completely nontrivial: it changes the type of the metric - the perturbed metric does not seem to fall into the Kerr-Schild class with a flat seed metric. This is unfortunate because the Kerr-Schild ansatz was the crucial tool used for constructing the Kerr and Myers-Perry solutions. It remains to be seen whether our perturbative results can suggest some new ansatz which could be used in analytic constructions.
An obvious extension would be to include a cosmological constant and consider asymptotically (A)dS solutions. In Einstein gravity exact analytic solutions of this type were obtained in \cite{Gibbons:2004js}, so one could naively expect that the extension of our treatment to this case should be straightforward. However this is not the case - the introduction of a cosmological constant seriously complicates the calculations. This interesting problem is currently under investigation.
\vspace{30pt}
{\bf Acknowledgments}
\bigskip
\noindent
One of us (L.B.) would like to thank the Theoretical Physics Department, University of Zagreb, for hospitality and financial support during his visits there. M.C., P.D.P., S.P.\ and I.S.\ would like to thank SISSA for hospitality and financial support during visits there and would also like to acknowledge support by the Croatian Ministry of Science, Education and Sport under the contract no.~119-0982930-1016. The work of L.B. was supported in part by the MIUR-PRIN contract 2009-KHZKRX. Visits have been financially supported in part by the University of Zagreb Development Fund.
\vspace{30pt}
\section*{Appendix}
\section{Preliminary}
\label{sec:back}
\subsection{Existing Fault Study}
Researchers have investigated Android and Symbian OSes' failures~\cite{Maji10} and Windows Phone app crashes~\cite{Ravindranath14}.
As for the bugs of Android apps, a number of studies exist on different aspects: performance~\cite{Liu:ICSE2014}, energy~\cite{Banerjee16}, fragmentation~\cite{Wei16}, memory leak~\cite{ShahriarNM14,santhanakrishnan2016}, GUI failures~\cite{Adamsen15,AmalfitanoRPF18}, resource usage~\cite{LiuWXC16,Liu16}, API stability~\cite{McDonnell13}, security~\cite{Enck11,meng2017} and \hbox{\emph{etc}}\xspace.
However, none of them focus on \emph{functional bugs}, which are also critical to user loyalty and app success.
Our work addresses this gap.
One of the first attempts at classifying functional bugs is from Hu \hbox{\emph{et al.}}\xspace~\cite{Hu11}. They classify 8 bug types from 10 apps. Other efforts~\cite{Zaeem14,Coelho15}, however, have different goals: Coelho \hbox{\emph{et al.}}\xspace~\cite{Coelho15} analyze exceptions to investigate the bug hazards of exception-handling code (\hbox{\emph{e.g.}}\xspace, cross-type exception wrapping), Zaeem \hbox{\emph{et al.}}\xspace~\cite{Zaeem14} study bugs to generate testing oracles for a specific set of bug types.
None of them give a comprehensive analysis, and the validity of their conclusions is unclear.
Therefore, to our knowledge, we are the first to investigate Android app crashes and give an in-depth analysis.
Our study focuses on the framework-specific exceptions (\emph{framework exception} for short throughout the paper) that can crash apps, \hbox{\emph{i.e.}}\xspace, those exceptions thrown from methods defined in the Android framework due to an app's violation of constraints enforced by the framework. Note we do not consider framework exceptions caused by bugs of the framework itself. We do not analyze application exceptions (we leave this as future work) or library exceptions (since different apps may use different third-party libraries whose analysis requires other information).
\subsection{Exception Model in Android}
Android apps are implemented in Java, and thus inherit Java's exception model.
Java has three kinds of exceptions.
(1) \texttt{RuntimeException}, the exceptions that are thrown during the normal operation of the Java Virtual Machine when the program violates the semantic constraints (\hbox{\emph{e.g.}}\xspace, null-pointer references, divided-by-zero errors).
(2) \texttt{Error}, which represents serious problems that a reasonable application should not try to catch (\hbox{\emph{e.g.}}\xspace, \texttt{OutOfMemoryError}).
(3) \texttt{Checked Exception} (all exceptions except (1) and (2)); these exceptions are required to be declared in a method or constructor's \texttt{throws} clause (statically checked by compilers), and indicate conditions that a reasonable client program might want to catch. For \texttt{RuntimeException} and \texttt{Error}, the programmers themselves have to handle them at runtime.
Figure~\ref{fig:exception_trace_example} shows an example of \texttt{RuntimeException} trace. The bottom part represents the \emph{root exception}, \hbox{\emph{i.e.}}\xspace, \texttt{NumberFormatException},
which indicates the root cause of this exception.
Java uses \emph{exception wrapping}, \hbox{\emph{i.e.}}\xspace, one exception is caught and wrapped in another (in this case, the \texttt{RuntimeException} of the top part), to propagate exceptions. Note the root exception can be wrapped by multiple exceptions, and the flow from the bottom to the top denotes the order of exception wrappings.
An \emph{exception signaler} is the method (\texttt{invalidReal} in this case) that throws the exception, \hbox{\emph{i.e.}}\xspace, the first method call under the root exception declaration.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.45\textwidth]{pic/exception_trace_ex.pdf}
\end{center}
\vspace{-10pt}
\caption{An example of \texttt{RuntimeException} trace}
\label{fig:exception_trace_example}
\vspace{-10pt}
\end{figure}
\section{Conclusion}
\label{sec:con}
This paper conducts a large-scale analysis of framework exceptions in Android apps. We constructed a comprehensive dataset that contains 16,245 unique exception traces. After investigating 8,243 framework exceptions, we identified their characteristics, evaluated their manifestation via popular bug detection tools, and reviewed their fixes. Our findings enable several lines of follow-up research. We demonstrated the usefulness of our findings with two prototype tools.
\subsection{Discussion}
Through this study, we find:
(1) Besides trivial errors, developers are most likely to introduce Lifecycle, Memory/Hardware, Concurrency, and UI update errors, which require more fixing effort.
(2) Bug detection tools need further enhancement. Static analysis tools could integrate new rules, especially for UI update, Lifecycle, and Framework Constraint errors. Testing tools could integrate specific testing strategies to detect these errors.
(3) To counter framework exceptions, developers should gain a deeper understanding of the Android system; and supporting tools should be developed to reproduce exceptions for debugging, locate their root causes for fixing, and check API compatibility across different SDKs.
Linares-V\'{a}squez \hbox{\emph{et al.}}\xspace~\cite{Linares2017} also investigated Android app bugs very recently, but our study differs significantly from theirs.
We focus on framework exceptions and provide a comprehensive, in-depth analysis, covering exception manifestations, root causes,
the abilities of existing bug analysis tools, and fixing practices. In contrast, they focus on designing mutation operators from existing bugs, and their 38 operators cover only 8 of the 75 instances distilled by our study. We believe our results can further improve existing mutation operators.
The validity of our study may be subject to some threats.
(1) \emph{Representativeness of the subjects}. To counter this, we collected all the subjects (2,486 apps in total at the time of our study) from F-Droid, which is the largest database of open-source apps and covers diverse app categories. We believe these subjects are representative of real-world apps.
(2) \emph{Comprehensiveness of app exceptions}. To collect a comprehensive set of exception traces, we mined from Github and Google Code, and applied testing tools, which yielded 16,245 exceptions in total.
To our knowledge, this is the largest study of Android app exceptions to date.
(3) \emph{Completeness/Correctness of exception analysis}. For completeness, (i) we investigated 8,243 framework exceptions, and carefully inspected all common exception types. (ii) We surveyed previous work~\cite{Hu11,AmalfitanoFTCM12,MachiryTN13,MahmoodMM14,Zaeem14,HuYTY14,Bielik15,AmalfitanoFTTM15,Coelho15,MaoHJ16,lifecycle16,stoat17,model16} that reported exceptions, and observed that all exception types and patterns were covered by our study. For correctness, we cross-validated our analysis of each exception type, and also referred to developers' patches and Stack Overflow posts to validate our analysis. The whole dataset is also publicly available.
\section{Applications of Our Study}
\label{sec:appl}
This section discusses the follow-up research motivated by our findings, and demonstrates their usefulness with two prototype tools.
\subsection{Benchmarking Android App Exceptions}
Our study outputs a large and comprehensive dataset of Android app exceptions (especially framework exceptions), which includes 16,245 unique app exceptions and 8,243 unique framework exceptions in total. Each exception is accompanied by the buggy code version, exception trace, error category, and possible fixes.
We believe this dataset can (1) provide an effective and reliable basis for comparing dynamic/static analysis tools; (2) enable research on fault localization techniques by providing a large set of exceptions as benchmarks; and (3) enable patch generation by comparing exceptions and their fixes.
\subsection{Improving Exception Detection}
Dynamic testing and static analysis are the two main avenues for detecting faults.
However, both need improvement.
\noindent{\emph{\textbf{Dynamic Testing}}}. Enhancing testing tools to detect specific errors is very important. For example,
(1) \emph{Generate meaningful as well as corner-case inputs to reveal parameter errors} (illustrated by the sketch after this list). We find random strings with specific formats or characters are very likely to reveal unexpected crashes. For instance, Monkey detects more \texttt{SQLiteException}s than the other tools since it can generate strings with special characters like \quotes{"} and \quotes{\%} by randomly hitting the keyboard. When these strings are used in SQL statements, they can break SQL queries if not properly escaped.
(2) \emph{Enforce environment interplay to reveal lifecycle, concurrency and UI update errors}. We find some special actions, \hbox{\emph{e.g.}}\xspace, changing device orientation, starting an activity and quickly returning without waiting for it to finish, or putting the app in the background for a long time (by switching to another app) before returning to it, can affect an app's internal states and its component lifecycle. Therefore, these actions can be interleaved with normal UI actions to effectively check robustness.
(3) \emph{Consider different app and SDK versions to detect regression errors}. We find app updates may introduce unexpected errors. For example, as shown in Figure~\ref{fig:Atarashii}, a change of database schema can crash the new version if the developers have not carefully managed the database migration from the old version. (4) \emph{More advanced testing criteria}~\cite{Borges17,Su17} are desired.
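As a sketch of enhancement (1), a tester could seed its string generator with corner-case values; the seed set below is illustrative, not the exact one used by any tool:
\begin{verbatim}
import java.util.Random;

class InputSeeds {
  static final String[] SEEDS = {
    "", "null", "-1",       // format corner cases
    "\"", "%", "';--",      // SQL/format metacharacters
    new String(new char[4096]).replace('\0', 'a') // lengthy
  };
  static String pick(Random r) {
    return SEEDS[r.nextInt(SEEDS.length)];
  }
}
\end{verbatim}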
\noindent{\emph{\textbf{Static Analysis}}}.
Incorporating new checking rules into static analysis tools to enhance their abilities is highly valuable.
Through our study, we find it feasible to check for some framework exceptions statically, especially framework constraint, lifecycle and UI update errors.
For example, to warn about the potential crash in Figure~\ref{fig:Local-GSM-Backend}, static analysis can check whether the task running in the thread uses a \texttt{Handler} to dispatch messages; if so, \texttt{Looper\#prepare()} must be called at the beginning of \texttt{Thread\#run()}. To warn about the potential crash in Figure~\ref{fig:Bankdroid}, static analysis can check whether the activity state is appropriately checked before showing a dialog from a background thread. In fact, there is already some initial work~\cite{lifecycle16} that implements lifecycle checking on Lint.
\noindent{\emph{\textbf{Demonstration of Usefulness}}}.
We enhanced Stoat~\cite{stoat17} with two strategies: (1) randomly generate inputs with 5 specific formats (\hbox{\emph{e.g.}}\xspace, empty string, lengthy string, null) or characters (\hbox{\emph{e.g.}}\xspace, \quotes{"}, \quotes{\%}) to fill in \texttt{EditText}s or \texttt{Intent}'s fields; (2) randomly inject the 3 types of special actions mentioned above into normal UI actions. We applied Stoat to dozens of the most popular apps (\hbox{\emph{e.g.}}\xspace, Facebook, Gmail, Google+, WeChat) from Google Play, and successfully detected 3 previously unknown bugs in Gmail (one parameter error) and Google+ (one UI update error and one lifecycle error). All of these bugs were detected in the latest versions at the time of our study, and were reported to Google and confirmed. These bugs were not found by Monkey or Sapienz, while other testing tools, \hbox{\emph{e.g.}}\xspace, CrashScope~\cite{MoranVBVP17} and AppDoctor~\cite{HuYTY14}, consider only 2 and 3 of these 8 enhancement cases, respectively.
\subsection{Enabling Exception Localization}
We find developers usually take days to fix a framework exception. Thus, automatically locating faulty code and proposing possible fixes are highly desirable. Our study can shed light on this goal.
\noindent{\emph{\textbf{Demonstration of Usefulness}}}. We have built a framework exception localization tool, ExLocator, based on Soot~\cite{soot}, which takes as input an APK file and an exception trace, and outputs a report that explains the root cause of the exception. It currently supports 5 exception types from UI update, Lifecycle, Index, and Framework Constraint errors (as Figure~\ref{fig:fix_efforts} shows, these errors are more difficult to fix). In detail, it first extracts method call sequences and exception information from the exception trace, classifies the exception into one of our summarized fault categories, and then utilizes data-/control-flow analysis to locate the root cause. The report gives the lines or methods that cause the exception, a description of the root cause with possible fixing solutions, and closely related Stack Overflow posts.
We applied our tool to 27 randomly selected cases from Github; it correctly located 25 of the 27 exceptions (92\% precision), judged by comparison with the developers' patches. By incorporating additional context information from the Android framework (\hbox{\emph{e.g.}}\xspace, which framework classes use \texttt{Handler}), our tool successfully identified the root causes of the remaining two cases.
In contrast, previous fault localization work~\cite{Sinha09,Jiang10,Mirzaei15,Wu14} can only handle general exception types.
\section{Overview}
\label{sec:overview}
\begin{figure*}[htp]
\begin{center}
\includegraphics[width=0.95\textwidth]{pic/study_overview.pdf}
\end{center}
\vspace{-10pt}
\caption{Overview of our study and its applications}
\label{fig:study_overview}
\end{figure*}
Figure~\ref{fig:study_overview} shows the overview of our study. We select F-droid~\cite{fdroid} apps as our subjects (Section~\ref{sec:subjects}), and use two methods, \hbox{\emph{i.e.}}\xspace, mining issue repositories and applying testing tools, to collect exception traces (Section~\ref{sec:collection}). We investigate exception traces and other resources (\hbox{\emph{e.g.}}\xspace, Android documentation, app source code, Stack Overflow posts) to answer RQ1$\sim$RQ4 (Section~\ref{sec:emp}). This study enables several follow-up research detailed in Section~\ref{sec:appl}.
\subsection{App Subjects}
\label{sec:subjects}
We choose F-droid, the largest repository of open-source Android apps, as the source of our study subjects,
since it has three important characteristics: (1) F-droid contains a large set of apps. At the time of our study, it maintained 2,104 unique apps and 4,560 different app versions, together with their metadata (\hbox{\emph{e.g.}}\xspace, source code links, release versions).
(2) The apps have diverse categories (\hbox{\emph{e.g.}}\xspace, Internet, Personal, Tools) and cover different maturity levels of developers, making them representative of real-world apps.
(3) All apps are open-source and hosted on Github, Google Code, SourceForge, \hbox{\emph{etc}}\xspace, which makes it possible for us to access their source code and issue repositories for analysis.
\subsection{Data Collection}
\label{sec:collection}
Table~\ref{table:study_statistics} summarizes the statistics of the collected exception traces. We also collect other data for analysis from Stack Overflow and static analysis tools. The details are explained as follows.
\begin{table}[h]
\footnotesize
\centering
\caption{Statistics of collected crashes}
\vspace{-10pt}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\begin{tabular}{cccc} \hline
\textbf{Approach} &\textbf{\#Projects} &\textbf{\#Crashes} &\textbf{\#Unique Crashes} \\ \hline \hline
\tabincell{c}{Hosting Platforms\\\scriptsize{(Github/Google Code)}} &\tabincell{c}{ 2,174\\\scriptsize{(2,035/137)} } & \tabincell{c}{7,764\\\scriptsize{(7,660/104)}} & \tabincell{c}{6,588\\\scriptsize{(6,494/94)} } \\
\tabincell{c}{Testing Tools\\\scriptsize{(Monkey/Sapienz/Stoat)}} & \tabincell{c}{ 2,104\\\scriptsize{(4,560 versions)} } &\tabincell{c}{13,271\\\scriptsize{(3,758/4,691/4,822)}} & \tabincell{c}{9,722\\\scriptsize{(3,086/4,009/3,535)}} \\ \hline
Total &2,486 \scriptsize{(1,792 overlap)} &21,035 & 16,245\\
\hline
\end{tabular}
\label{table:study_statistics}
\vspace{-10pt}
\end{table}
\noindent{\emph{\textbf{Github and Google Code}}}.
We collected exception traces from Github and Google Code since they host over 85\% (2,174/2,549) of the F-droid apps.
To automate data collection, we implemented a web crawler to crawl the issue repositories of these apps and collect the issues that contain exception traces. In detail, the crawler visits each issue and its comments to extract valid exception traces. Additionally, it utilizes the Github and Google Code APIs to collect project information such as package name, issue id, number of comments, and open/closed time. The crawling took about two weeks; we successfully scanned 272,629 issues from 2,174 apps, and finally mined 7,764 valid exception traces (6,588 unique) from 583 apps.
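As an illustration of how exception traces can be recognized in free-form issue text, the sketch below uses two regular expressions; this is a simplification of our crawler's heuristics, and the patterns are illustrative:
\begin{verbatim}
import java.util.regex.*;

class TraceDetector {
  // an exception header, e.g.
  // "java.lang.NumberFormatException: invalid double"
  static final Pattern HEADER = Pattern.compile(
      "[\\w.$]+(Exception|Error)(:\\s.*)?");
  // a stack frame, e.g. "at a.b.C.m(C.java:42)"
  static final Pattern FRAME = Pattern.compile(
      "\\s*at\\s+[\\w.$]+\\([^)]*\\)");
  static boolean looksLikeTrace(String l1, String l2) {
    return HEADER.matcher(l1).find()
        && FRAME.matcher(l2).lookingAt();
  }
}
\end{verbatim}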
\noindent{\emph{\textbf{Automated Testing Tools.}}}
We set up the testing as follows: (1) We chose three state-of-the-art Android app testing tools with different testing techniques: Google Monkey~\cite{Monkey} (random testing), Sapienz (search-based testing), and Stoat (model-based testing).
(2) We selected all the recent release versions (4,560 versions of 2,104 apps in total; each app has 1$\sim$3 recent release versions) maintained by F-droid as testing subjects. Each tool is configured with its default setting and given 3 hours to thoroughly test each version on a single Android emulator. Each emulator runs Android OS 4.3.1 (API level 18). The evaluation is deployed on three physical machines (64-bit Ubuntu/Linux 14.04), each running 10 emulators in parallel.
(3) We collect coverage data via Emma~\cite{emma} or JaCoCo~\cite{jacoco}, which Sapienz and Stoat require.
The evaluation took four months, and finally detected 13,271 crashes in total (9,722 unique). In detail, Monkey detected 3,758 crashes (3,086 unique), Sapienz 4,691 (4,009 unique), and Stoat 4,822 (3,535 unique).
During testing, we record each exception trace together with the bug-triggering inputs, screenshots, detection time, \hbox{\emph{etc}}\xspace, to help our analysis. Further, we find the issue repositories of Github/Google Code record only 545 unique crashes for these recent versions, which accounts for only 5.6\% of those detected by the testing tools. This indicates the detected exception traces effectively complement the mined exceptions.
\noindent{\emph{\textbf{Stack Overflow.}}}
Based on the exception traces mined from the two sources above, we also collected the most relevant posts on Stack Overflow by searching for posts with the keyword ``Android'', exception types, and detailed descriptions. We record information such as creation time, number of answers, and question summary. We mined 15,678 posts on various exceptions in total.
\noindent{\emph{\textbf{Static Analysis Tools.}}}
We also collect data from four state-of-the-art static analysis tools (Lint, PMD, FindBugs, SonarQube), each of which either serves as an Android Studio plug-in or supports Android projects. We apply each tool to the apps to collect potential bugs, warnings, and code smells for in-depth analysis.
\section{Introduction}
\label{sec:intro}
Mobile applications have gained great popularity in recent years.
For example, Google Play, the most popular Android market, has over three million apps, and more than 50,000 new apps are published on it each month~\cite{appbrain}.
To ensure competitive edge, app developers and companies strive to deliver high-quality apps.
One of their primary concerns is to prevent fail-stop errors, such as app crashes, from occurring in release versions.
Despite the availability of off-the-shelf testing platforms (\hbox{\emph{e.g.}}\xspace, Robolectric~\cite{robolectric}, JUnit~\cite{junit}, Appium~\cite{appium}) and static checking tools (\hbox{\emph{e.g.}}\xspace, Lint~\cite{lint}, FindBugs~\cite{findbugs}, SonarQube~\cite{sonarqube})~\cite{KochharTNZL15,LinaresVasquez15}, many released apps still suffer from crashes --- two recent efforts~\cite{MaoHJ16,stoat17} discovered hundreds of previously unknown crashes in popular and well-tested commercial apps.
Moreover, researchers have contributed a line of work~\cite{AnandNHY12,vanderMerwe2012,MachiryTN13,AzimN13,YangPX13,ChoiNS13,MahmoodMM14,Hao2014,AmalfitanoFTTM15,MaoHJ16,Su16,stoat17,GuC0SDML17,Song2017} to detect app crashes, but none of them have investigated the root causes.
This leaves developers unaware of how to avoid and fix these bugs, and hinders the improvement of bug detection, fault localization~\cite{Sinha09,Jiang10,Mirzaei15,Wu14}, and fixing~\cite{GaoZWXZM15} techniques.
As observed in our investigation of 272,629 issues from 2,174 Android apps hosted on Github and Google Code, developers are unable to resolve nearly 40\% of reported crashes,\footnote{Filtered by the keywords \quotes{crash} or \quotes{exception} in their issue descriptions.} which greatly compromises app quality.
This situation underlines the importance of characterizing a large number of diverse real-world app crashes and investigating how to effectively detect and fix them.
However, such a study is difficult and yet to be carried out, which has motivated this work.
When an app crashes, the Android runtime system dumps an exception trace that provides clues about the issue (\hbox{\emph{e.g.}}\xspace, the exception type, message, and the invoked methods).
Each exception can be classified into one of three categories --- \emph{application exception}, \emph{framework exception}, and \emph{library exception} --- based on which architecture layer threw the exception.
In particular, our study focuses on framework exceptions, which account for the majority of app crashes (affecting over 75\% of the projects), as revealed by our investigation in Section~\ref{sec:rq1}.
We face two key challenges in carrying out the study. The first is the \emph{lack of a comprehensive dataset}. To enable crash analysis, we need a comprehensive set of crashes from a large number of real-world apps. Ideally, each crash comes with the exception trace, buggy source code, bug-triggering inputs, and the patch (if one exists). However, to the best of our knowledge, no such dataset is publicly available. Although open-source hosting platforms such as Github maintain issue repositories, our investigation reveals that only a small fraction of crash issues (16\%) are accompanied by exception traces. Among them, even if an issue is closed, it is not necessarily associated with the buggy code version.
The second concerns the \emph{difficulty of crash analysis}. Analyzing crashes requires understanding of the application logic as well as the Android framework (or libraries). It is also necessary to cross-validate the root causes (\hbox{\emph{e.g.}}\xspace, by reproducing crashes or consulting knowledge from developers). However, no reliable tool exists that can facilitate our analysis.
To overcome these challenges and conduct this study, we made substantial efforts.
We have collected 16,245 unique exception traces from 2,486 open-source Android apps by (1) mining their issue repositories hosted on Github and Google Code; and (2) applying state-of-the-art app testing tools (Monkey~\cite{Monkey}, Sapienz~\cite{MaoHJ16}, and Stoat~\cite{stoat17}) on their recent versions (corresponding to 4,560 executables) to complement the mined data. The whole data collection process took four months. We identified 8,243 unique framework exceptions, and spent nearly six person-months carefully investigating these crashes by examining the source code of apps and the Android framework, fixes from developers, bug reports from testing tools, and technical posts on Stack Overflow. We aim to answer the following research questions:
\begin{itemize}[leftmargin=*]
\item \textbf{\emph{{RQ1}}}: \emph{Compared with other exception categories, are framework exceptions recurring, and do they affect most Android apps?}
\item \textbf{\emph{{RQ2}}}: \emph{What are the common faults made by developers that cause framework exceptions? }
\item \textbf{\emph{{RQ3}}}: \emph{What is the current status of bug detection techniques in detecting framework exceptions? Are they effective?}
\item \textbf{\emph{{RQ4}}}: \emph{How do developers fix framework exceptions? Are there any common practices? What are the difficulties for fixing?}
\end{itemize}
Through answering the above questions, we aim to characterize Android app crashes (caused by framework exceptions in particular) and provide useful findings to developers as well as researchers.
For example, our investigation reveals framework exceptions are indeed recurring.
Moreover, they require more fixing effort (on average 4 days per exception) but have a lower issue closing rate (only 53\%) than application exceptions (67\%). Through careful inspection, we distilled 11 common faults that developers are most likely to make, yet which have not been well investigated by previous work~\cite{Hu11,Zaeem14,Coelho15}.
We further evaluate the detection abilities of current dynamic testing and static analysis techniques on framework exceptions.
We are surprised to find static analysis tools are almost completely ineffective (giving correct warnings on only 4 out of 75 exception instances), although there are plausible ways to improve them. Dynamic testing tools, as expected, can reveal framework exceptions, but are still far from effective on certain fault categories; their testing strategies have a big impact on detection ability. In addition, we find most exceptions can be fixed by four common practices with small patches (fewer than 20 lines of code), but developers still face several challenges during fixing.
Our findings enable several lines of follow-up research, \hbox{\emph{e.g.}}\xspace, bug detection, fault localization, and patch generation for Android apps.
To demonstrate the usefulness of our findings, we have optimized Stoat, a dynamic testing tool, and implemented ExLocator, an exception localization tool, for Android apps. The results are promising: Stoat quickly revealed 3 previously unknown bugs in Gmail and Google+; ExLocator is able to precisely localize the root causes of identified exceptions in real apps.
To summarize, this paper makes the following contributions:
\begin{itemize}
\item To our knowledge, we conducted the first large-scale study to characterize framework-specific exceptions in Android apps, and identified 11 categories of common faults that developers are most likely to make. The results provide useful insights for developers and researchers.
\item Our study evaluated the state-of-the-art exception detection techniques, and identified common fixing practices of framework exceptions. The findings shed light on proposing more effective bug detection and fixing techniques.
\item Our findings enable several lines of follow-up research, supported by a large-scale, reusable dataset~\cite{dataset} that contains 16,245 unique exception traces from 2,486 open-source apps. Our prototype tools also demonstrate the usefulness of our findings.
\end{itemize}
\section{Related Work}
\label{sec:rel}
\noindent{\emph{\textbf{Empirical Study}}}.
Researchers have conducted several empirical studies to investigate various aspects of Android, \hbox{\emph{e.g.}}\xspace, performance~\cite{Liu:ICSE2014}, fragmentation~\cite{Wei16}, resource usage~\cite{LiuWXC16,Liu16}, API stability~\cite{McDonnell13} and security~\cite{Enck11}.
Although there is some existing work~\cite{Hu11,Zaeem14,Coelho15} on analyzing Android bugs or exceptions, their goals are different: Coelho \hbox{\emph{et al.}}\xspace~\cite{Coelho15} analyzed exception traces to investigate the bug hazards of exception-handling code (\hbox{\emph{e.g.}}\xspace, cross-type exception wrapping), and Zaeem \hbox{\emph{et al.}}\xspace~\cite{Zaeem14} studied app bugs to generate testing oracles for a particular set of bug types.
Hu \hbox{\emph{et al.}}\xspace~\cite{Hu11} summarized some bug categories, but they only investigated 10 apps, so the generality and completeness of their conclusions are unclear.
Other researchers have studied the failures of the Android and Symbian OSes~\cite{Maji10} and Windows Phone app crashes~\cite{Ravindranath14}.
Therefore, to our knowledge, we are the first to investigate Android app crashes and give an in-depth analysis.
\noindent{\emph{\textbf{Crash Detection, Localization, and Fixing}}}.
Researchers have contributed a line of research~\cite{AnandNHY12,vanderMerwe2012,MachiryTN13,AzimN13,YangPX13,ChoiNS13,MahmoodMM14,Hao2014,
AmalfitanoFTTM15,MaoHJ16,stoat17} to detect app crashes, but have not investigated their root causes.
As for crash localization, Sinha \hbox{\emph{et al.}}\xspace~\cite{Sinha09} combine dynamic analysis and static backward data-flow analysis to locate a subclass of runtime exceptions in \texttt{Java}, \hbox{\emph{e.g.}}\xspace, \texttt{NullPointer} and \texttt{Arithmetic} exceptions.
Jiang \hbox{\emph{et al.}}\xspace~\cite{Jiang10} develop a similar approach.
Mirzaei \hbox{\emph{et al.}}\xspace~\cite{Mirzaei15} adopt a statistical fault localization technique~\cite{Jones05} to rank suspicious code that causes the crash, but its effectiveness is unclear.
CrashLocator~\cite{Wu14} uses control-flow analysis and slicing techniques to identify faulty functions by recovering the crash stack into a static call graph; however, it can only locate faults at the function level.
Gao \hbox{\emph{et al.}}\xspace~\cite{GaoZWXZM15} use stack trace information to query Stack Overflow posts, and generate promising fixes for Android apps.
In contrast, our exception localization tool identifies root causes based on the fault categories distilled in our study, which is more precise and effective.
\section{Empirical Study}
\label{sec:emp}
\subsection{RQ1: Exception Categories}
\label{sec:rq1}
\noindent{\textbf{Exception Categories}}.
To investigate app crashes, we group their exception traces into three different categories according to their exception signalers.
In detail, we refer to Android-18 API documentation~\cite{android_doc} and use the rules below (adopted by prior work~\cite{Coelho15}) to categorize exceptions:
(1) \emph{Application Exception}: the signaler is from the app itself (identified by the app's package name).
(2) \emph{Framework Exception}: the signaler is from the Android framework, \hbox{\emph{i.e.}}\xspace, from these packages: ``\texttt{android.*}'', ``\texttt{com.android.*}'', ``\texttt{java.*}'', and ``\texttt{javax.*}''.
(3) \emph{Library Exception}: the signaler is from the core libraries reused by Android (\hbox{\emph{e.g.}}\xspace, ``\texttt{org.apache.*}'', ``\texttt{org.json.*}'', ``\texttt{org.w3c.*}'', \hbox{\emph{etc}}\xspace) or third-party libraries used by the app.
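These rules amount to a prefix match on the signaler's class name; the Java sketch below is our simplification (the framework prefix list is abridged):
\begin{verbatim}
static String categorize(String signaler, String appPkg) {
  if (signaler.startsWith(appPkg)) return "Application";
  String[] fw = {"android.", "com.android.",
                 "java.", "javax."};
  for (String p : fw)
    if (signaler.startsWith(p)) return "Framework";
  return "Library"; // org.apache.*, org.json.*, etc.
}
\end{verbatim}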
\begin{table}[h]
\centering
\caption{Statistics of the exceptions from Github and Google Code grouped by their signalers (M: Median)}
\vspace{-10pt}
\footnotesize
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\begin{tabular}{lccccccc} \hline
\textbf{\tabincell{c}{Exception\\ Category}} &\textbf{\#Projects} &\textbf{Occurrences} &\textbf{\#Types} &\textbf{\tabincell{c}{Issue \\Duration\\{\tiny{M (Q1/Q3)}}}} & \textbf{\tabincell{c}{Fixing\\Rate}}\\ \hline \hline
App & 268 (45.8\%) & 1552 (23.6\%) &88 (34\%) & 2 (0/17) & 67\% \\
Framework & 441 (75.3\%) & 3350 (50.8\%) &127 (50\%) & 4 (1/30) & 53\% \\
Library & 253 (43.2\%) & 1686 (25.6\%) &132 (52\%) & 3 (1/16) & 57\% \\
\hline
\end{tabular}
\label{table:exception_sources}
\vspace{-10pt}
\end{table}
Table~\ref{table:exception_sources} classifies the exceptions from Github and Google Code according to the above rules, and shows
the number of affected projects, occurrences, number of exception types, issue durations (the number of days between an issue being opened and closed), and the issue fixing rate (the percentage of closed issues). From the table, we observe two important facts:
(1) \emph{Framework exceptions are more pervasive and recurring}. They affect 75.3\% of the projects and account for 50.8\% of the exceptions.
(2) \emph{Framework exceptions require more fixing effort}. On average, it takes twice as long (see column 5) to fix a framework exception as an application exception.
These facts are reasonable. First, most apps heavily use APIs provided by the \emph{Android Development Framework} (ADF) to achieve their functionalities.
The ADF enforces various constraints to ensure the correctness of apps; if these constraints are violated, apps may crash and throw exceptions.
Second, fixing application exceptions is relatively easy since developers are familiar with their own code logic. However, when fixing framework exceptions, they have to understand and locate the constraints they violate, which usually takes longer.
\subsection{RQ2: Taxonomy of Framework Exceptions}
\label{sec:rq3}
This section investigates framework exceptions. We classify them into different categories by their root causes. The \emph{root cause}, from the developers' perspective, is the initial cause of the exception.
\noindent\emph{\textbf{Exception Buckets}}.
Following common industrial practice, we group framework exceptions into different buckets, where each bucket contains the exceptions that share a similar root cause.
To achieve this, we use the exception type, message and signaler to approximate the root cause.
For example, the exception in Figure~\ref{fig:exception_trace_example} is labeled as (\texttt{NumberFormatException}, ``\emph{invalid double}'', \texttt{invalidReal}). We obtained 2,016 buckets in total, and find that the exceptions in the top 200 buckets account for over 80\% of all exceptions, while each of the remaining buckets contains only 5 or fewer exceptions.
Therefore, we focus on the exceptions of the top 200 buckets.
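Conceptually, the bucket key combines the three fields; the normalization below (collapsing digits and quoted values in the message) is our simplification, not the study's exact procedure:
\begin{verbatim}
static String bucketKey(String type, String msg,
                        String signaler) {
  String pattern = msg.replaceAll("\\d+", "N")
                      .replaceAll("\"[^\"]*\"", "S");
  return type + "|" + pattern + "|" + signaler;
}
\end{verbatim}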
\noindent\emph{\textbf{Analysis Methods}}. We randomly select a number of exceptions from each bucket, and
use three complementary resources to facilitate root cause analysis: (1) \emph{Exception-Fix Repository}. We set up a repository that contains pairs of exceptions and their fixes. In particular, (i) from 2,035 Android apps hosted on Github,
we mined 284 framework exception issues that were closed with corresponding patches. To set up this mapping, we checked each commit message for the keywords \quotes{fix}/\quotes{resolve}/\quotes{close} and the issue id. (ii) We also manually checked the remaining issues to include valid ones missed by the keyword rules. We finally obtained 194 valid issues, and investigated each exception trace and its patch to understand the root cause.
(2) \emph{Exception Instances Repository}. From the 9,722 exceptions detected by the testing tools, we filtered out the framework exceptions, and mapped each of them to its exception trace, source code version, bug-triggering inputs, and screenshots. When an exception type under analysis is not included or has very few instances in the exception-fix repository, we refer to this repository and use the available reproducing information to facilitate analysis.
(3) \emph{Technical Posts}. For each exception type, we referred to the Stack Overflow posts collected in Section~\ref{sec:collection} when we needed more information from developers, and cross-validated our understanding.
\noindent\emph{\textbf{Taxonomy}}.
We analyzed 86 exception types\footnote{After investigating a number of \texttt{NullPointerException}s, we found most of them are triggered by null object references, so we did not analyze them further.} (covering 84.6\% of all framework exceptions), and finally distilled 11 common faults that developers are most likely to make. Table~\ref{table:root_causes} lists them in descending order of closing rate. We explain them as follows.
\begin{figure}[]
\begin{center}
\includegraphics[width=0.4\textwidth]{pic/code_snippet_Bankdroid_lifecycle.pdf}
\end{center}
\vspace{-10pt}
\caption{An Example of Lifecycle Error}
\label{fig:Bankdroid}
\vspace{-10pt}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\textwidth]{pic/code_snippet_Nextcloud_concurreny.pdf}
\end{center}
\vspace{-10pt}
\caption{An Example of Concurrency Error}
\label{fig:Nextcloud}
\vspace{-5mm}
\end{figure}
\noindent \emph{\textbf{$\bigcdot$ Component Lifecycle Error}}. Android apps are composed of different components. Each component is required to follow a prescribed lifecycle paradigm, which defines how the component is created, used and destroyed~\cite{activity_lifecycle}. For example, \texttt{Activity} provides a core set of six callbacks to allow developers to know its current state.
If developers improperly handle these callbacks or omit state checks before certain tasks, the app can be fragile given the complex environment interplay (\hbox{\emph{e.g.}}\xspace, device rotation, network interrupts).
\emph{Bankdroid}~\cite{Bankdroid} (Figure~\ref{fig:Bankdroid}) is an app providing services for Swedish banks. The app uses a background thread, \texttt{DataRetrieverTask}, to perform data retrieval, and pops up a dialog when the task is finished. However, if the user edits and updates a bank from \texttt{BankEditActivity} (which starts \texttt{DataRetrieverTask}) and presses the back button before the task completes, the app crashes when the update finishes. The reason is that the developers fail to check \texttt{BankEditActivity}'s state (in this case, \emph{destroyed}) after the task is finished. The bug triggers a \texttt{BadTokenException} and was fixed in revision 8b31cd3~\cite{Bankdroid_revision}. Besides, \texttt{Fragment}~\cite{fragment}, a reusable class implementing a portion of an \texttt{Activity}, has a much more complex lifecycle. It provides a core set of 12 callbacks to manage its state transitions, which makes lifecycle management even more challenging, \hbox{\emph{e.g.}}\xspace, state loss of \texttt{Fragment}s or loss of attachment to the host activity.
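A minimal sketch of the missing check, assuming an \texttt{AsyncTask} holding a reference to its host activity (names are ours, not Bankdroid's actual code):
\begin{verbatim}
@Override
protected void onPostExecute(String result) {
  // guard: the activity may have been destroyed meanwhile
  if (activity.isFinishing() || activity.isDestroyed())
    return;
  new AlertDialog.Builder(activity)
      .setMessage(result).show();
}
\end{verbatim}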
\begin{figure}[]
\begin{center}
\includegraphics[width=0.4\textwidth]{pic/code_snippet_cgeo_ui_update.pdf}
\end{center}
\vspace{-10pt}
\caption{An Example of UI Update Error}
\label{fig:cgeo}
\vspace{-10pt}
\end{figure}
\noindent \emph{\textbf{$\bigcdot$ Concurrency Error}}. The Android system provides concurrent programming constructs such as \texttt{AsyncTask} and \texttt{Thread} to execute intensive tasks.
However, improper handling of concurrent tasks may introduce data races~\cite{Bielik15} or resource leaks~\cite{LiuWXC16}, and even crash apps.
\emph{Nextcloud Notes}~\cite{Nextcloud} (Figure~\ref{fig:Nextcloud}), a cloud-based note-taking app that automatically synchronizes local and remote notes, crashes when it attempts to re-open an already-closed database~\cite{Nextcloud_issue}. The exception can be reproduced by executing these two steps repeatedly: (1) open any note from the list view; (2) close the note as quickly as possible by pressing the back button. The app creates a new \texttt{NoteSyncTask} every time a note sync is requested, which connects with the remote server and updates the local database by calling \texttt{updateNote()}. However, when there are multiple update threads, the following interleaving can crash the app: \emph{Thread A} is executing the update, and \emph{Thread B} gets a reference to the database; \emph{Thread A} closes the database after its task is finished, and \emph{Thread B} tries to update the closed database. The developers fixed this exception by leaving the database unclosed (since \texttt{SQLiteDatabase} already implements a thread-safe database access mechanism) in revision aa1a972~\cite{Nextcloud_revision}.
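A sketch of the underlying fix idea --- share one open handle and rely on \texttt{SQLiteDatabase}'s internal locking instead of closing it eagerly (a simplification of the actual patch; names are ours):
\begin{verbatim}
class NotesDb {
  private static SQLiteDatabase db; // shared handle
  static synchronized SQLiteDatabase get(
      SQLiteOpenHelper helper) {
    if (db == null || !db.isOpen())
      db = helper.getWritableDatabase();
    return db; // never closed while tasks may run
  }
}
\end{verbatim}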
\noindent \emph{\textbf{$\bigcdot$ UI Update Error}}. Each Android app owns a UI thread, which is in charge of dispatching events and rendering the user interface. To ensure good responsiveness, apps should offload intensive tasks to background threads. However, many developers fail to keep in mind that \emph{the Android UI toolkit is not thread-safe and one should not manipulate the UI from a background thread}~\cite{thread}.
\emph{cgeo}~\cite{cego} (Figure~\ref{fig:cgeo}) is a popular full-featured client for geocaching. When refreshing \texttt{cacheList} (which is associated with a \texttt{ListView} via an \texttt{ArrayAdapter}), the developers query the database and substitute the list with the new results (via \texttt{clear()} and \texttt{addAll()}) in \texttt{doInBackground}. However, the app crashes when the list is refreshed. The reason is that \texttt{cacheList} is maintained by the UI thread, which internally checks the equality of item counts between the \texttt{ListView} and \texttt{cacheList}; when a background thread touches \texttt{cacheList}, this check can fail and an exception is thrown. The developers realized this, and fixed it by moving the refreshing operations into \texttt{onPostExecute}, which runs on the UI thread (in revision d6b4e4d~\cite{cego_revision}).
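A sketch of the fix pattern, assuming an \texttt{AsyncTask} whose result is the fresh data (type and field names are ours):
\begin{verbatim}
@Override
protected void onPostExecute(List<Geocache> fresh) {
  // runs on the UI thread, so mutating the
  // adapter-backed list is safe here
  cacheList.clear();
  cacheList.addAll(fresh);
  adapter.notifyDataSetChanged();
}
\end{verbatim}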
\begin{figure}[]
\begin{center}
\includegraphics[width=0.4\textwidth]{pic/code_snippet_Local-GSM-Backend_framework_constraint.pdf}
\end{center}
\vspace{-10pt}
\caption{An Example Violating Framework Constraints}
\label{fig:Local-GSM-Backend}
\vspace{-10pt}
\end{figure}
\begin{table*}[t]
\centering
\footnotesize
\caption{Statistics of 11 common fault categories, and the evaluation results of static analysis tools on them, sorted by \emph{closing rate} in descending order.}
\label{table:root_causes}
\begin{tabular}{lcc||c|cccc|l}
\hline
\multicolumn{1}{c}{\multirow{2}{*}{Category (Name for short)}} & \multirow{2}{*}{Occurrence} & \multirow{2}{*}{\#S.O. posts} & \multicolumn{1}{l|}{\multirow{2}{*}{\#Instance}} & \multicolumn{4}{c|}{Static Tools} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}} Closing\\ Rate\end{tabular}} \\ \cline{5-8}
\multicolumn{1}{c}{} & & & \multicolumn{1}{l|}{} & Lint & \multicolumn{1}{l}{FindBugs} & \multicolumn{1}{l}{PMD} & \multicolumn{1}{l|}{SonarQube} & \\ \hline
API Updates and Compatibility (\textbf{API}) & 68& 60 & 7 & - & - & - & - & 93.3\% \\
XML Layout Error (\textbf{XML}) & 122& 246 & 4 & 1 & - & - & - & 93.2\% \\
API Parameter Error (\textbf{Parameter}) & 820& 819 & 6 & - & - & - & - & 88.5\% \\
Framework Constraint Error (\textbf{Constraint}) & 383& 1726 & 12 & 3 & - & - & - & 87.7\% \\
Others (Java-specific errors) & 249& 4826 & 10 & - & - & - & - & 86.1\% \\
Index Error (\textbf{Index}) & 950& 218 & 4 & - & - & - & - & 84.1\% \\
Database Management Error (\textbf{Database}) & 128& 61 & 3 & - & - & - & - & 76.8\% \\
Resource-Not-Found Error (\textbf{Resource}) & 1303& 7178 & 5 & - & - & - & - & 75.3\% \\
UI Update Error (\textbf{UI}) & 327& 666 & 3 & - & - & - & - & 75.0\% \\
Concurrency Error (\textbf{Concurrency}) & 372& 263 & 7 & - & - & - & - & 73.5\% \\
Component Lifecycle Error (\textbf{Lifecycle}) & 608& 1065 & 11 & - & - & - & - & 58.8\% \\
Memory/Hardware Error (\textbf{Memory}) & 414& 792 & 3 & - & - & - & - & 51.6\% \\ \hline
\end{tabular}
\end{table*}
\noindent \emph{\textbf{$\bigcdot$ Framework Constraint Error}}. The Android framework enforces various constraints for app development.
For example, \texttt{Handler}, part of the Android framework for managing threads, allows one to send and process messages or runnable objects associated with a thread's message queue~\cite{handler}. \emph{Each \texttt{Handler} instance must be associated with a single thread and that thread's message queue\footnote{A thread by default is not associated with a message queue; to create one, \texttt{Looper\#prepare()} should be called in the thread~\cite{looper}.}}. Otherwise, a runtime exception is thrown.
\emph{Local-GSM-Backend}~\cite{LocalGSM} (Figure~\ref{fig:Local-GSM-Backend}), a popular cell-tower-based location lookup app, uses a thread \texttt{worker} to monitor changes of telephony states via \texttt{PhoneStateListener}. However, the developers were unaware that \texttt{PhoneStateListener} internally maintains a \texttt{Handler} instance to deliver messages~\cite{PhoneStateListener}, which requires setting up a message loop in \texttt{worker}. They later fixed it by calling \texttt{Looper\#prepare()} (in revision 07e4a759~\cite{LocalGSM_revision}).
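A sketch of the constraint and its fix: a thread that hosts a \texttt{Handler} must first own a message queue.
\begin{verbatim}
Thread worker = new Thread(new Runnable() {
  @Override public void run() {
    Looper.prepare();  // create this thread's queue
    // listeners registered here can now dispatch
    // their messages via an internal Handler
    Looper.loop();     // start processing messages
  }
});
worker.start();
\end{verbatim}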
Other constraints include performance considerations (avoid performing network operations in the main UI thread~\cite{Network}), permission considerations (dangerous permissions~\cite{permission} require run-time grants since Android 6.0, otherwise a \texttt{SecurityException} is thrown), \hbox{\emph{etc}}\xspace.
\begin{figure}[]
\begin{center}
\includegraphics[width=0.4\textwidth]{pic/code_snippet_Atarashii_database_management.pdf}
\end{center}
\vspace{-10pt}
\caption{An Example of Database Management Error}
\label{fig:Atarashii}
\vspace{-10pt}
\end{figure}
\noindent \emph{\textbf{$\bigcdot$ Database Management Error}}.
Improper manipulation of database columns/tables causes many exceptions. Improper data migration across version updates is another major cause.
\emph{Atarashii}~\cite{Atarashii} (Figure~\ref{fig:Atarashii}) is a popular app for managing the reading and watching of anime. When the user upgrades from v1.2 to v1.3, the app crashes on startup. The reason is that the callback \texttt{onCreate()} is only called if no database file from the old version exists, so the new database table \emph{friends} is not created during the upgrade. Instead, \texttt{onUpgrade()} is called, and the app crashes because the table \emph{friends} does not exist (fixed in revision b311ec3~\cite{Atarashii_revision}).
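A sketch of the fix idea --- make the upgrade path create missing tables instead of assuming \texttt{onCreate()} already ran (the schema is hypothetical, not Atarashii's actual one):
\begin{verbatim}
@Override
public void onUpgrade(SQLiteDatabase db,
                      int oldV, int newV) {
  // the table may not exist if onCreate() for the
  // new schema never ran on this device
  db.execSQL("CREATE TABLE IF NOT EXISTS friends ("
           + "id INTEGER PRIMARY KEY, name TEXT)");
}
\end{verbatim}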
\noindent \emph{\textbf{$\bigcdot$ API Updates and Compatibility}}. The Android system evolves fast. API updates and implementation changes (\hbox{\emph{e.g.}}\xspace, in SDKs and core libraries) can affect the robustness of apps. Device fragmentation~\cite{Wei16} aggravates this problem.
For example, \texttt{Service}s must be started explicitly since Android 5.0; and the change of the comparison contract of \texttt{Collections\#sort()}~\cite{collection_issue} since JDK 7 crashed several apps whose developers were unaware of it.
\noindent \emph{\textbf{$\bigcdot$ Memory/Hardware Error}}. Android devices have limited resources (\hbox{\emph{e.g.}}\xspace, memory). Improper use of resources may cause exceptions. For example, \texttt{OutOfMemoryError} occurs when loading overly large Bitmaps; a \texttt{RuntimeException} appears when \texttt{MediaRecorder\#stop()} is called but no valid audio/video data has been received.
\noindent \emph{\textbf{$\bigcdot$ XML Layout Error}}. Android supports UI design and resource configuration in the form of XML files. Although IDE tools provide much convenience, mistakes still occur, \hbox{\emph{e.g.}}\xspace, misspelling custom UI control names, forgetting to escape special characters (\hbox{\emph{e.g.}}\xspace, \quotes{\$}, \quotes{\%}) in string texts, and failing to specify correct resources in \texttt{colors.xml} and \texttt{strings.xml}.
\noindent \emph{\textbf{$\bigcdot$ API Parameter Error}}. Developers make this type of mistake when they fail to consider all possible input contents or formats, and pass malformed inputs as API parameters. For example, they tend to directly use the results from \texttt{SharedPreferences} or database queries without any checking.
\noindent \emph{\textbf{$\bigcdot$ Resource Not Found Error}}. Android apps heavily use external resources (\hbox{\emph{e.g.}}\xspace, databases, files, sockets, third-party apps and libraries) to accomplish tasks. Developers make this mistake when they fail to check the availability of these resources before use.
\noindent \emph{\textbf{$\bigcdot$ Index Error}}. An index error happens when developers access data, \hbox{\emph{e.g.}}\xspace, a \emph{database}, \emph{string}, or \emph{array}, with a wrong index value. One typical example is the \texttt{CursorIndexOutOfBounds} exception caused by accessing a database with an incorrect cursor index.
In Table~\ref{table:root_causes}, columns 2 and 3 show, respectively, the occurrences of each category and the number of Stack Overflow posts discussing these faults; column 4 shows the number of distinct exception instances in each category (75 in total). We find that (1) besides ``trivial'' errors such as Resource-Not-Found, Index, and API Parameter Errors, app developers are more likely to make Android-specific errors, \hbox{\emph{e.g.}}\xspace, Lifecycle, Memory/Hardware, and Framework Constraint Errors; and (2) developers also discuss Framework Constraint, Lifecycle, and API Parameter Errors more.
Additionally, we find existing mutation operators~\cite{DengOAM17,Linares2017} designed for detecting app bugs cover only a few of these 75 instances:
Deng \hbox{\emph{et al.}}\xspace's 11 operators~\cite{DengOAM17} can detect only 2 instances (the remaining operators target UI and event handling failures rather than fatal crashes); MDroid+~\cite{Linares2017} proposes 38 operators, but covers only 8 instances.
\vspace{2pt}
\noindent\fbox{
\parbox{0.95\linewidth}{
\textbf{Answer to RQ2:} \textit{We distilled 11 fault categories that explain why framework exceptions are recurring. Among them, developers make more mistakes on Lifecycle Error, Memory/Hardware Error and Android Framework Constraint Error. Existing mutation operators are inadequate for detecting these errors.}
}
}
\vspace{-2pt}
\subsection{RQ3: Auditing Bug Detection Tools}
\label{sec:rq4}
Dynamic testing and static analysis are the two main avenues for detecting software bugs.
This section investigates the detection abilities of these two techniques on framework exceptions (categorized in Section~\ref{sec:rq3}).
In particular, we select three state-of-the-art testing tools, \hbox{\emph{i.e.}}\xspace, Monkey, Sapienz, and Stoat,
and four static analysis tools widely used by Android developers~\cite{LinaresVasquez15}, \hbox{\emph{i.e.}}\xspace, Lint, FindBugs, PMD, and SonarQube.
Lint, developed by Google, detects structural code problems and scans for Android-specific bugs~\cite{lintchecks}.
PMD uses defect patterns to detect bad coding practices.
FindBugs, available as an Android Studio plugin, also enforces various checking rules and adopts control- and data-flow analysis to scan for potential bugs (\hbox{\emph{e.g.}}\xspace, null-pointer dereferences).
SonarQube is a continuous code quality platform that provides bug reports for suspicious code.
\noindent\emph{\textbf{Static Analysis Tools}}.
We randomly select 75 distinct exception instances (corresponding to column 4 in Table~\ref{table:root_causes}) from Github that cover all manifestations of the root faults, and check out the corresponding buggy code versions to investigate how many of them can be detected by static analysis tools.
Our investigation finds static tools specialize in detecting bad practices, code smells, and potential bugs that may lead to severe errors, but produce a mass of false alarms.
As shown in Table~\ref{table:root_causes}, FindBugs, PMD, and SonarQube fail to report any warnings on these bugs. Lint identifies only 4 out of the 75 bugs: one XML error (the resource file ``string.xml'' contains an illegal character ``\$'') and three framework constraint errors (duplicate resource ids within a layout file; a Fragment that cannot be instantiated; using the wrong AppCompat method).
In addition, although these tools claim to support Android projects, we have not found any Android-specific rules in FindBugs or SonarQube, and only three Android rules~\cite{pmdrules} in PMD. Lint defines 281 Android rules~\cite{lintchecks} but detects only a few bugs.
Therefore, current static analysis tools focus more on basic Java defects, and are much less effective at detecting framework exceptions in Android apps.
\noindent\emph{\textbf{Dynamic Testing Tools}}.
We apply the three testing tools to each app (2,104 in total) with the same configuration as in Section~\ref{sec:collection}.
As we observed, they can detect many framework exceptions. To understand their abilities, we use two metrics\footnote{We do not present results on trace length, since the three tools cannot dump the exact event trace that causes a crash; instead, they output the whole trace, which cannot reflect their detection abilities.}:
(1) \emph{Detection time} (the time to detect an exception); since one exception may be found multiple times, we use the time of its first occurrence. (2) \emph{Occurrences} (how many times an exception is detected during a specified duration).
Figure~\ref{fig:time} and Figure~\ref{dynamic:occurrence}, respectively, show the detection time and occurrences of exceptions by each tool grouped by the fault categories.
Figure~\ref{fig:time} shows concurrency errors are hard to detect for all three tools (they require a longer time). For other fault categories, the time varies across tools. For example,
Sapienz is better at database errors (it implements a strategy that fills strings into \texttt{EditText}s and then clicks ``OK'' instead of ``Cancel'' to maximize code coverage, which is more likely to trigger database operations); Monkey and Sapienz are better at lifecycle errors (both emit events very quickly without waiting for the previous ones to take effect, \hbox{\emph{e.g.}}\xspace, opening and quickly closing an activity before it finishes its task).
Figure~\ref{dynamic:occurrence} shows it is easy for the three tools to detect API compatibility, Resource-Not-Found and XML errors, since these errors occur much more frequently than the others.
For other categories, \hbox{\emph{e.g.}}\xspace, Concurrency, Lifecycle, Memory, and UI update errors, all three tools are far from effective regardless of their testing strategies. The main reason is that these errors involve non-determinism (they interact with threads).
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{pic/detection_time.pdf}
\vspace{-10pt}
\caption{Detection time of exceptions by each tool}
\label{fig:time}
\vspace{-10pt}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{pic/occurrence.pdf}
\vspace{-10pt}
\caption{Occurrences of exceptions by each tool}
\label{dynamic:occurrence}
\vspace{-10pt}
\end{figure}
After an in-depth inspection, we find that some Database errors are hard to trigger because the app has to construct an appropriate database state (\hbox{\emph{e.g.}}\xspace, create a table or insert a row, and fill in specific data) as the precondition of the bug, which may take a long time. As for Framework Constraint errors, some exceptions require special environment interplay. For example, the \texttt{InstantiationException} of a Fragment can only be triggered when a Fragment (without an empty constructor) is destroyed and recreated. To achieve this, a testing tool needs to change the device rotation at the right timing (when the target Fragment is on the screen), or pause and stop the app by switching to another one, stay there for a long time (letting the Android OS kill the app), and then return to the app. Concurrency bugs are hard to trigger since they usually require precise event timings.
\vspace{1mm}
\noindent\fbox{
\parbox{0.95\linewidth}{
\textbf{Answer to RQ3:} \textit{Existing static analysis tools are ineffective in detecting framework exceptions. Dynamic testing tools are still far from effective in detecting database, framework constraint and concurrency errors.}
}
}
\subsection{RQ4: Fixing Patterns and Efforts}
\label{sec:rq5}
This section uses the exception-fix repository constructed in RQ2 (194 instances) to investigate developers' common practices for fixing framework exceptions. We categorize their fixing strategies by (1) the types of code modifications (\hbox{\emph{e.g.}}\xspace, modify conditions, reorganize/move code, tweak implementations); and (2) the issue comments and patch descriptions. We finally summarized 4 common fix patterns, which resolve over 80\% of the issues.
\begin{figure}[]
\begin{center}
\includegraphics[width=0.4\textwidth]{pic/code_snippet_Local-qBittorrent-Controller_CursorIndexOutOfBound.pdf}
\end{center}
\vspace{-10pt}
\caption{Example fixes by adding conditional checks}
\label{fig:example_fixes_conditional_checks}
\vspace{-10pt}
\end{figure}
\begin{figure}[]
\begin{center}
\includegraphics[width=0.4\textwidth]{pic/code_snippet_MozStumbler_UI_update.pdf}
\end{center}
\vspace{-10pt}
\caption{Example fixes by moving code into correct thread}
\label{fig:example_fixes_correct_thread}
\vspace{-10pt}
\end{figure}
\noindent \emph{\textbf{$\bigcdot$ Refine Conditional Checks}}. Missing checks on API parameters, activity states, index values, database/SDK versions, or external resources can introduce unexpected exceptions. Developers usually fix them by adding appropriate conditional checks. For example, Figure~\ref{fig:example_fixes_conditional_checks}(a) checks the cursor index to fix \texttt{CursorIndexOutOfBound}, Figure~\ref{fig:example_fixes_conditional_checks}(b) checks the state of the activity to which a Fragment is attached to fix \texttt{IllegalState}, and Figure~\ref{fig:example_fixes_conditional_checks}(c) checks the input of an \texttt{EditText} to fix \texttt{NumberFormat}.
We find most exceptions from \emph{Parameter Error}, \emph{Index Error}, \emph{Resource Error}, \emph{Lifecycle Error}, and \emph{API Error} can be fixed by this strategy.
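A sketch of the check-before-use pattern for cursors (the column name is hypothetical):
\begin{verbatim}
String readName(Cursor c) {
  if (c != null && c.moveToFirst()) { // guard the index
    return c.getString(c.getColumnIndex("name"));
  }
  return null; // empty result set: no crash
}
\end{verbatim}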
\noindent \emph{\textbf{$\bigcdot$ Move Code into Correct Thread}}. Messing up UI and background threads may incur severe exceptions. The common practice to fix such problems is to move the related code into the correct thread. Figure~\ref{fig:example_fixes_correct_thread} fixes \texttt{CalledFromWrongThread} by moving the code that modifies UI widgets back to the UI thread (via \texttt{Activity\#runOnUiThread()}) that created them. Similar fixes include moving the display of \texttt{Toast}s or \texttt{AlertDialog}s onto the UI thread, since they can only be processed in the \texttt{Looper} of the UI thread~\cite{Toast_exception,AlertDiaglog_exception}. Additionally, moving intensive tasks (\hbox{\emph{e.g.}}\xspace, network access, database queries) into background threads resolves such performance exceptions as \texttt{NetworkOnMainThread} and ``Application Not Responding" (ANR)~\cite{ANR}.
\noindent \emph{\textbf{$\bigcdot$ Work in Right Callbacks}}. Inappropriate handling of the lifecycle callbacks of app components (\hbox{\emph{e.g.}}\xspace, \texttt{Activity}, \texttt{Fragment}, \texttt{Service}) can severely affect the robustness of apps. The common practice to fix such problems is to work in the right callback. For example, in an \texttt{Activity}, pairing \texttt{BroadcastReceiver}'s register and unregister calls in \texttt{onStart()} and \texttt{onStop()} \emph{or} \texttt{onResume()} and \texttt{onPause()} avoids \texttt{IllegalArgument}; and committing a \texttt{FragmentTransaction} before the activity's state has been saved (\hbox{\emph{i.e.}}\xspace, before the callback \texttt{onSaveInstanceState()}) avoids the state-loss exception~\cite{state_loss,Shan16}.
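A sketch of the paired-callback pattern (the receiver and action are assumed to be fields of the activity):
\begin{verbatim}
@Override protected void onStart() {
  super.onStart();
  registerReceiver(receiver, new IntentFilter(ACTION));
}
@Override protected void onStop() {
  unregisterReceiver(receiver); // paired with onStart()
  super.onStop();
}
\end{verbatim}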
\noindent \emph{\textbf{$\bigcdot$ Adjust Implementation Choices}}. To resolve other exceptions, developers have to adjust the implementation or refactor the code. For example, to fix \texttt{OutOfMemory} errors caused by loading Bitmaps, the common practice is to optimize memory usage by resizing the original bitmap~\cite{bitmap_memory}; to fix data race exceptions, the common practice is to adopt mutex locks (\hbox{\emph{e.g.}}\xspace, add \emph{synchronized} to allow the execution of only one active thread) or to back up the shared data~\cite{MPDroid_issue}.
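For instance, a common \texttt{OutOfMemory} mitigation decodes a downsampled bitmap rather than the full-resolution image (the sampling factor below is illustrative):
\begin{verbatim}
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inSampleSize = 4; // decode at 1/4 width and height
Bitmap small = BitmapFactory.decodeFile(path, opts);
\end{verbatim}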
To further understand the characteristics of developer fixes, we group these issues by their root causes, and compute three metrics:
(1) \emph{Issue Duration}, which indicates how long the developers took to fix the issue (Figure~\ref{fig:fix_efforts}(a)); (2) \emph{Number of Changed Code Lines}, \hbox{\emph{i.e.}}\xspace, the number of code lines\footnote{To reduce ``noise'', we exclude comment lines (\hbox{\emph{e.g.}}\xspace, ``//...''), annotation lines (\hbox{\emph{e.g.}}\xspace, ``@Override''), and unrelated code changes (\hbox{\emph{e.g.}}\xspace, ``import *.*'', code for new features).} the developers changed to fix the issue (Figure~\ref{fig:fix_efforts}(b)); and (3) \emph{Issue Closing Rate}, \hbox{\emph{i.e.}}\xspace, how many issues have been closed (the last column in Table~\ref{table:root_causes}).
We can see that fixes for \emph{Parameter Error}, \emph{Index Error}, \emph{Resource Error}, and \emph{Database Error} require fewer code changes (most patches are fewer than 20 lines), because most of them can be fixed by refining conditional checks. We also note that \emph{UI Error}, \emph{Concurrency Error}, and \emph{Memory/Hardware Error} require larger patches.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.48\textwidth]{pic/fix_effort.pdf}
\end{center}
\vspace{-10pt}
\caption{Fixing Effort}
\label{fig:fix_efforts}
\vspace{-10pt}
\end{figure}
Further, by investigating the discussions and comments of developers when fixing, we find three important reasons that reveal the difficulties they face.
\noindent \emph{\textbf{$\bigcdot$ Difficulty of Reproducing and Validation}}. One main difficulty is how to reproduce exceptions and validate the correctness of fixes~\cite{MoranVBVP16}. Most users do not report complete reproducing steps/inputs and other necessary information (\hbox{\emph{e.g.}}\xspace, exception trace, device model, code version) to developers. Even if the exception trace is provided, reproducing such exceptions as non-deterministic ones (\hbox{\emph{e.g.}}\xspace, concurrency errors) is rather difficult. In such cases, after fixing the issue, they choose to leave it for the app users to validate before closing the issue. As shown in Figure~\ref{fig:fix_efforts} and Table~\ref{table:root_causes}, concurrency errors have longer issue durations and lower fixing rate.
\noindent \emph{\textbf{$\bigcdot$ Inexperience with Android System}}. A good understanding of the Android system is essential to correctly fix exceptions. As the closing rates in Table~\ref{table:root_causes} indicate, developers are more confused by Memory/Hardware Error, Lifecycle Error, Concurrency Error, and UI Error. We find some developers use a simple \emph{try-catch} or compromising workarounds (\hbox{\emph{e.g.}}\xspace, using \texttt{commitAllowingStateLoss} to allow activity state loss). However, such fixes are often fragile.
\noindent \emph{\textbf{$\bigcdot$ Fast Evolving APIs and Features}}. Android is evolving fast. As reported, on average, 115 API updates occur each month~\cite{McDonnell13}. Moreover, feature changes are continuously introduced. However, these updates or changes may make apps fragile when the platform they are deployed on differs from the one they were built for; and developers are confused when such issues appear. For example, Android 6.0 introduces runtime permission granting: if an app uses dangerous permissions, developers have to request these permissions from users at runtime. However, we find several developers choose to delay the fix since they have not fully understood this new feature.
\vspace{2pt}
\noindent\fbox{
\parbox{0.95\linewidth}{
\textbf{Answer to RQ4:} \textit{Refining conditional checks, using correct thread types, working in the right callbacks, and adjusting implementation choices are the four common fix practices.
Memory/Hardware, Lifecycle, Concurrency, and UI Errors are more difficult to fix.}
}
}
\section{Acknowledgements}
We appreciate the anonymous reviewers for their valuable
feedback.
This work is partially supported by NSFC Grant 61502170, NTU Research Grant NGF-2017-03-033 and NRF Grant CRDCG2017-S04.
Lingling Fan is partly supported by ECNU Project of Funding Overseas Short-term Studies,
Ting Su is partially supported by NSFC Grants 61572197 and 61632005, and Geguang Pu is partially supported by
MOST NKTSP Project 2015BAG19B02 and STCSM Project 16DZ1100600. Zhendong Su is partially supported by
United States NSF Grants 1528133 and 1618158, and by a Google Faculty Research Award.
\clearpage
\bibliographystyle{ACM-Reference-Format}
\balance
|
1,116,691,500,255 | arxiv | \section{Introduction}
Energy functionals play an important role in differential geometry, and especially in the study of embedded extremal surfaces. For example, in biology, the minimization of such functionals associated with mechanical properties of cell membranes can be used to explain the shape of biological structures under different types of stress \cite{Lomholt_2006,https://doi.org/10.48550/arxiv.1709.04399}. In mathematics, the study of minimal surfaces and of extrema of isoperimetric inequalities or volume maximization plays an important historical role. In this sense, minimal-area surfaces appear ubiquitously when predicting the shapes of membranes with surface tension \cite{isenberg1978science,reilly1982mean}. In point of fact, it has been known since antiquity that the sphere and its lower-dimensional analogues are the forms that maximize volume for a fixed surface area.
Besides the interest of energy functionals and extremal surfaces \textit{per se}, in the context of quantum field theories and many-body systems, Entanglement Entropy (EE) has been regarded as a valuable computational resource \cite{2006PhDT........59E,Horodecki:2009zz,Amico:2007ag}. It has also been considered as an order parameter indicative of phase transitions in quantum systems, of the presence of short- or long-range correlations, of topological order, etc.\cite{Kitaev:2005dm,Klebanov:2007ws,Levin:2006zz}. It was therefore remarkable when, in the framework of the anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj}, a relation was found between the EE of a spatial region of a CFT and the area of a minimal codimension-2 surface in the dual bulk spacetime. Indeed, this formula is reminiscent of the Bekenstein-Hawking entropy formula for black holes in Einstein gravity \cite{Ryu:2006ef}, which points to a connection between information and geometry \cite{Maldacena:2013xja,Stanford:2014jda,Balasubramanian:2013lsa}.
In the saddle-point approximation of the duality, one considers that the on-shell action of the bulk gravity theory defines the generating functional of connected correlators of the corresponding dual CFT \cite{Witten:1998qj,Gubser:1998bc}. In this picture, from the near-boundary expansion of the bulk fields, one identifies both the external source and its canonical conjugate, i.e. the holographic response function of the respective CFT operator \cite{deHaro:2000vlm}. Then, the holographic EE (HEE) measures the amount of entanglement between a spatial region in the CFT --the entangling region-- and its exterior, and can be obtained holographically as a codimension-2 functional which depends on the bulk gravity action and is therefore a geometric object.
In particular, the HEE for Einstein-AdS gravity is well-known to be given by the area of the Ryu-Takayanagi (RT) minimal surface \cite{Ryu:2006ef}, which is anchored at the conformal boundary. This area is divergent due to the pole in the AdS metric. In order to isolate the universal part, standard holographic techniques in codimension-2 have been considered \cite{Taylor:2016aoi}, such that the renormalized area is inherited from the renormalized bulk gravity action.
In this work we present a prescription to derive a codimension-2 local conformal invariant, named $L_{\Sigma}$, directly from Conformal Gravity (CG) evaluated on the replica orbifold. This object can be applied to obtain the renormalized HEE for Einstein-AdS spacetimes, and also for deriving energy functionals such as Willmore energy \cite{willmore1996riemannian,Toda2017Willmore} and reduced Hawking mass \cite{Fischetti:2016fbh}.
Obtaining the energy functionals from $L_{\Sigma}$ is an example of Conformal Renormalization, where Einstein-AdS gravity is consistently embedded into CG in order to provide bulk renormalization, as discussed in the four- and six-dimensional cases in Ref.\cite{Anastasiou:2020mik}. This hints at a connection between conformal invariance and renormalization of bulk/codimension-2 functionals.
This paper is organized as follows. In section \ref{Section 2}, we consider the renormalized area formula of Ref.\cite{Alexakis:2010zz} and we study its properties under conformal transformation in order to emphasize that it is not a conformal invariant. Then, we restore conformal invariance by constructing $L_{\Sigma}$, that reduces to the renormalized area when evaluated on boundary-anchored minimal surfaces on Einstein-AdS ambient spacetimes. We also relate $L_{\Sigma}$ to the Willmore energy. In section \ref{CGconical}, we obtain $L_{\Sigma}$ directly from the evaluation of the CG action on a conically-singular manifold. In section \ref{ReducedHawking}, we use $L_{\Sigma}$ to derive the reduced Hawking mass \cite{Fischetti:2016fbh}, by evaluating the former on Einstein-AdS ambient spacetimes. Finally, in section \ref{Conclusions}, we conclude with a summary of our results.
\section{Obtaining $L_{\Sigma}$: the hard way}\label{Section 2}
The RT formula \cite{Ryu:2006ef} for the computation of the holographic entanglement entropy (HEE) opened a new window in the analysis of energy functionals and their properties. Most importantly, it was Lewkowycz and Maldacena \cite{Lewkowycz:2013nqa} who proved the conjectured RT formula and provided a systematic way to derive HEE functionals. There, the holographic application of the replica trick induces an orbifold structure in the bulk, in the presence of conical singularities due to the replica symmetry. The evaluation of the Einstein-Hilbert action on the orbifold makes manifest the area functional located at the cosmic brane where the conical singularities sit. Indeed, for the conical expansion of the Ricci scalar we get
\begin{equation}
\int\limits_{M^{\left(\alpha\right)}} d^{4}x \sqrt{g} R^{\left(\alpha\right)}=\int\limits_{M} d^{4}x\sqrt{g}R + 4 \pi \left(1-\alpha\right) \mathcal{A} \left[\Sigma\right]\,, \label{RicciFPS}
\end{equation}
where $\mathcal{A} \left[\Sigma\right]=\int\limits_{\Sigma} d^{2}y \sqrt{\gamma} $ is the area functional.
It becomes clear from the previous construction that the area can be naturally extracted out of the Ricci scalar in the presence of cones \cite{Fursaev:2013fta}. At this point, one should wonder whether the renormalized codimension-2 area arises out of the renormalized bulk EH action. Indeed, earlier works \cite{Taylor:2016aoi} indicate that this is the case, applying standard holographic renormalization techniques.
\subsection{Topological terms in four-dimensional AdS gravity}
Of particular relevance for the current discussion is the case of four dimensions, where the terms required for the background independent renormalization of the action, i.e. the holographic renormalization counterterms, have been extensively studied \cite{deHaro:2000vlm,Henningson:1998ey}. In this case, the introduction of the counterterms is equivalent to the addition of a boundary term with a fixed coupling constant and explicit dependence on both the intrinsic and extrinsic curvature, dubbed second Chern form $B_3$. The latter works as a resummation of the standard counterterms \cite{Miskovic:2009bm,Anastasiou:2020zwc} and the corresponding renormalized Einstein-AdS action reads
\begin{equation}
I_{\text{ren}}= \frac{1}{16 \pi G_{N}} \int\limits_{M} d^{4}x \sqrt{g} \left(R-2 \Lambda \right) + \frac{\ell^2}{64\pi G_{N}} \int\limits_{\partial M} d^{3}x\,B_{3} \,,
\label{Iren}
\end{equation}
with the cosmological constant defined in terms of the AdS radius as $\Lambda=-\frac{3}{\ell^2}$, and the surface terms given by
\begin{equation}
B_{3}= 4 \sqrt{h}\, \delta^{a_{1} a_{2} a_{3}}_{b_{1} b_{2} b_{3}} K^{b_{1}}_{a_{1}} \left(\frac{1}{2} \frak{R}^{b_{2} b_{3}}_{a_{2} a_{3}} \left(h\right) - \frac{1}{3} K^{b_{2}}_{a_{2}}K^{b_{3}}_{a_{3}}\right) \,.
\label{chernform}
\end{equation}
Here $g_{\mu \nu}$ and $h_{ab}$ are the bulk and the boundary metric, respectively, where the Greek indices denote bulk coordinates and the Latin indices $(a,b)$ correspond to boundary coordinates (see Appendix \ref{AppendixA} for conventions). In this respect, adding up the Chern form to the bulk Lagrangian provides a more geometric approach to the problem of renormalization of AdS gravity. Indeed, this term arises as the boundary correction to the Euler characteristic in the Gauss-Bonnet theorem for non-compact manifolds, i.e.,
\begin{equation}
\int\limits_{M} d^{4}x\,\mathcal{E}_{4} = 32 \pi^2 \chi \left[M\right] + \int\limits_{\partial M} d^{3}x\,B_{3} \,,
\label{Eulertheorem}
\end{equation}
where
\begin{equation}
\mathcal{E}_{4} = \sqrt{g} \left(Rie^2 - 4 Ric^2+ R^{2}\right) \,,
\label{GaussBonnet}
\end{equation}
and $\chi\left[M\right]$ is the Euler characteristic of the manifold $M$.
An immediate consequence of the above relation is the fact that the use of extrinsic counterterms is equivalent, up to the Euler characteristic, to the addition of a topological invariant of the Euler class. This implies
\begin{equation}
I_{\text{ren}} = \frac{1}{16 \pi G_{N}} \int\limits_{M} d^{4}x \sqrt{g} \left(R-2 \Lambda \right) + \frac{\ell^2}{64 \pi G_{N}} \int\limits_{M} d^{4}x\,\mathcal{E}_{4} -\frac{\pi \ell^2}{2 G_{N}} \chi \left[M\right] \,.
\label{IrenGB}
\end{equation}
The main advantage of this prescription is that one can interchange a boundary for a bulk term, since they are locally equivalent. It also unveils topological features of the corresponding manifold, captured by the topological number $\chi \left[M\right]$.
\subsection{Renormalized area from topological terms and conical defects}
The generalized form of gravitational entropy considers the evaluation of the corresponding bulk Lagrangian in a squashed conically singular manifold \cite{Lewkowycz:2013nqa}.
Along this line, Fursaev, Patrushev and Solodukhin (FPS) in Ref.\cite{Fursaev:2013fta} analyzed the behavior of quadratic terms in the curvature in the vicinity of a squashed cone with an angular deficit $2\pi(1-\alpha)$. This analysis implies that the square of the Riemann tensor is decomposed as
\begin{align}
&\int\limits_{M^{\left(\alpha\right)}} d^{4}x \sqrt{g}
\left(Rie^{\left(\alpha\right)}\right)^{2} =\int\limits_{M} d^{4}x\sqrt{g}\, Rie^{2} \nonumber\\
& + 8\pi \left(1-\alpha\right) \int\limits_{\Sigma} d^{2}y \sqrt{\gamma} \left(R_{ABAB} - \mathcal{K}^{\left(A\right)}_{ij} \mathcal{K}^{ij}_{\left(A\right)}\right) + \mathcal{O}\left(\left(1-\alpha\right)^2\right) \,, \label{Riemsquared}
\end{align}
while the Ricci tensor squared reads
\begin{align}
&\int\limits_{M^{\left(\alpha\right)}} d^{4}x \sqrt{g} \left(Ric^{\left(\alpha\right)}\right)^{2}= \int\limits_{M} d^{4}x \sqrt{g}\, Ric^{2} \nonumber \\
& + 4\pi \left(1-\alpha\right) \int\limits_{\Sigma} d^{2}y \sqrt{\gamma} \left(R_{AA} -\frac{1}{2} \left(\mathcal{K}^{\left(A\right)}\right)^{2}\right) +\mathcal{O}\left(\left(1-\alpha\right)^2\right) \,. \label{Ricsquared}
\end{align}
In turn, the square of the Ricci scalar splits as follows
\begin{align}
& \int\limits_{M^{\left(\alpha\right)}} d^{4}x \sqrt{g} \left(R^{\left(\alpha\right)}\right)^{2} = \int\limits_{M} d^{4}x \sqrt{g}\, R^{2} + 8\pi \left(1-\alpha\right) \int\limits_{\Sigma} d^{2}y \sqrt{\gamma}\,R + \mathcal{O}\left(\left(1-\alpha\right)^2\right) \,, \label{RicciScalaquared}
\end{align}
while the Euler characteristic is expanded as
\begin{equation}
\chi \left[M^{\left(\alpha\right)}\right] = \chi \left[M\right]+ \left(1-\alpha\right) \chi\left[\Sigma\right]\,.\label{EulercharFPS}
\end{equation}
Here $M^{\left(\alpha\right)}$ represents the four-dimensional orbifold, and $\Sigma$ is the codimension-2 manifold located at the tip of the conical singularity, which is described by the embedding function $x^{\mu}=x^{\mu} \left(y^{i}\right)$ and the induced metric $\gamma_{ij}$, where the Latin indices $(i,j)$ denote codimension-2 coordinates. The labels $\left(A,B\right)$ denote the directions orthogonal to $\Sigma$, and $\mathcal{K}^{\left(A\right)}$ is its extrinsic curvature along the normal direction $n^{\left(A\right)}$. The bulk curvature terms on the r.h.s. of the above relations correspond to their regular parts.
Taking into account Eqs.\eqref{Riemsquared}-\eqref{RicciScalaquared}, the Gauss-Bonnet term in the presence of squashed cones adopts the form
\begin{equation}
\int\limits_{M^{\left(\alpha\right)}} d^{4}x\, \mathcal{E}_{4}^{\left(\alpha\right)} = \int\limits_{M} d^{4}x\, \mathcal{E}_{4} + 8 \pi \left(1-\alpha\right) \int\limits_{\Sigma} d^{2}y\, \mathcal{E}_{2} \,, \label{GBFPS}
\end{equation}
where $\mathcal{E}_{2}=\sqrt{\gamma} \mathcal{R}$ is the corresponding topological term in two dimensions. The fact that extrinsic curvatures are not present in the codimension-2 functional is a remarkable feature of the addition of a topological term in the bulk. It also brings in topological contributions to quantum information theoretic measures, e.g., in the context of HEE.
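As a cross-check, combining the conical terms of Eqs.\eqref{Riemsquared}-\eqref{RicciScalaquared} with the relative weights of the Gauss-Bonnet density yields
\begin{equation}
8 \pi \left(1-\alpha\right) \int\limits_{\Sigma} d^{2}y \sqrt{\gamma} \left[R + R_{ABAB} - 2 R_{AA} +\left(\mathcal{K}^{\left(A\right)}\right)^{2} -\mathcal{K}^{\left(A\right)}_{ij} \mathcal{K}^{ij}_{\left(A\right)} \right] \,,
\end{equation}
where the bracket is nothing but the intrinsic Ricci scalar $\mathcal{R}$ by virtue of the Gauss-Codazzi relation (cf. Eq.\eqref{GaussCodazzi} below), in agreement with Eq.\eqref{GBFPS}.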
In sum, when the renormalized Einstein-AdS action in Eq.\eqref{IrenGB} is evaluated on the orbifold $M^{\left(\alpha\right)}$, we get
\begin{equation}
I_{\text{ren}}^{\left(\alpha\right)} = I_{\text{ren}} +\frac{\left(1-\alpha\right)}{4G_{N}} \left(\mathcal{A} \left[\Sigma\right] +\frac{\ell^2}{2} \int\limits_{\Sigma} d^{2}y\, \mathcal{E}_{2} - 2 \pi \ell^2 \chi \left[\Sigma\right]\right) \,.
\end{equation}
Note that the term that is linear in $\left(1-\alpha\right)$ can be identified with the renormalized area given by Alexakis and Mazzeo in Ref.\cite{Alexakis:2010zz}. In order to show this, we rewrite the conical contribution as
\begin{align}
\mathcal{A}_{ren}\left [\Sigma \right ]&= \int\limits_{\Sigma} d^{2}y \sqrt{\gamma} +\frac{\ell^2}{2} \int\limits_{\Sigma} d^{2}y\, \mathcal{E}_{2} - 2 \pi \ell^2 \chi \left[\Sigma\right] \nonumber \\
&=\frac{\ell^2}{2} \int\limits_{\Sigma}d^{2}y \sqrt{\gamma} \left(\mathcal{R} + \frac{2}{\ell^2}\right) - 2 \pi \ell^2 \chi \left[\Sigma\right] \nonumber \\
&= \frac{\ell ^{2}}{4}\int\limits_{\Sigma }d^{2}y\sqrt{\gamma}\delta _{ij}^{km}\left (\mathcal{R}_{km}^{ij} +\frac{1}{\ell ^{2}}\delta _{km}^{ij}\right ) -2\pi \ell ^{2}\chi \left [\Sigma \right ] \,.
\label{Aren}
\end{align}
Considering the Gauss-Codazzi relation, the latter can be cast as
\begin{align}
\mathcal{A}_{ren}\left [\Sigma \right ]
&= \frac{\ell ^{2}}{4}\int\limits _{\Sigma }d^{2}y\sqrt{\gamma}\delta _{ij}^{ms}\left (R_{ms}^{ij} +2\mathcal{K}_{ms}^{\left (A\right )}\mathcal{K}_{\left (A\right )}^{ij} +\frac{1}{\ell ^{2}}\delta _{ms}^{ij}\right ) -2\pi \ell ^{2}\chi \left [\Sigma \right ] \nonumber\\
&= \frac{\ell ^{2}}{4}\int \limits_{\Sigma }d^{2}y\sqrt{\gamma}\delta _{ij}^{ms}\left (W_{\left(\text{E}\right) ms}^{ij} +2\mathcal{K}_{ms}^{\left (A\right )}\mathcal{K}_{\left (A\right )}^{ij}\right )-2\pi \ell ^{2}\chi \left [\Sigma \right] \,,
\end{align}
where $W_{\left(\text{E}\right) \mu \nu}^{\alpha \beta}$ corresponds to the curvature of the AdS group for (pseudo)Riemannian manifolds without torsion. The same object can also be identified with the Weyl tensor for Einstein spacetimes
\begin{equation}
W_{\left(\text{E}\right) \mu \nu}^{\alpha \beta}=R_{\mu \nu}^{\alpha \beta}+\frac{1}{\ell^{2}}\delta_{ \mu \nu}^{\alpha \beta} \,,
\label{WeylEinstein}
\end{equation}
which comes from the generic form of the Weyl tensor
\begin{equation}
W_{\mu \nu}^{\alpha \beta}=R_{\mu \nu}^{\alpha \beta}- \left(S^{\alpha}_{\mu} \delta^{\beta}_{\nu} -S^{\beta}_{\mu} \delta^{\alpha}_{\nu}-S^{\alpha}_{\nu} \delta^{\beta}_{\mu} +S^{\beta}_{\nu} \delta^{\alpha}_{\mu} \right) \,, \label{Weyltensor}
\end{equation}
where the Einstein condition in the Schouten tensor, $S_{\mu \nu}= -\frac{1}{2 \ell^{2}} g_{\mu \nu}$, is considered.
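Indeed, substituting $S^{\alpha}_{\mu} = -\frac{1}{2\ell^{2}} \delta^{\alpha}_{\mu}$ into Eq.\eqref{Weyltensor} gives
\begin{equation}
W_{\mu \nu}^{\alpha \beta}=R_{\mu \nu}^{\alpha \beta}+\frac{1}{2\ell^{2}} \left(\delta^{\alpha}_{\mu} \delta^{\beta}_{\nu} -\delta^{\beta}_{\mu} \delta^{\alpha}_{\nu}-\delta^{\alpha}_{\nu} \delta^{\beta}_{\mu} +\delta^{\beta}_{\nu} \delta^{\alpha}_{\mu} \right)=R_{\mu \nu}^{\alpha \beta}+\frac{1}{\ell^{2}}\delta_{\mu \nu}^{\alpha \beta} \,,
\end{equation}
recovering Eq.\eqref{WeylEinstein}.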
Finally, in order to express the above formula in a more standard form, one may use the traceless part of the extrinsic curvature, defined as
\begin{equation}
P_{ij}^{\left (A\right )} =\mathcal{K}_{ij}^{\left (A\right )} -\frac{1}{2}\mathcal{K}^{\left (A\right )}\gamma _{ij}\,.
\end{equation}
In doing so, one obtains
\begin{equation}
\mathcal{A}_{ren}\left [\Sigma \right ]= \frac{\ell ^{2}}{2}\int\limits_{\Sigma }d^{2}y\sqrt{\gamma}\left [W_{\left(\text{E}\right)ij}^{ij} -P_{ij}^{\left (A\right )}P_{\left (A\right )}^{ij} +2\left (H^{\left (A\right )}\right )^{2}\right ]-2\pi \ell ^{2}\chi \left [\Sigma \right] \,,
\label{Arenweyl}
\end{equation}
where $H^{\left (A\right )}=\frac{1}{2}\mathcal{K}^{\left(A\right)}$ is the mean curvature of $\Sigma$.
This functional matches the corresponding formula for $\mathcal{A}_{ren}$ given in Ref.\cite{Alexakis:2010zz}.
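As a simple illustration, consider pure AdS$_{4}$ and take $\Sigma$ to be the Ryu-Takayanagi surface of a disk-shaped entangling region: a totally geodesic hemisphere, for which $P_{ij}^{\left (A\right )}=0$ and $H^{\left (A\right )}=0$, while $W_{\left(\text{E}\right)ij}^{ij}=0$ since the Weyl tensor vanishes in pure AdS. Since $\Sigma$ has disk topology, $\chi \left[\Sigma \right]=1$, and Eq.\eqref{Arenweyl} gives
\begin{equation}
\mathcal{A}_{\text{ren}}\left[\Sigma \right]=-2\pi \ell^{2} \,,
\end{equation}
recovering the well-known universal part of the HEE of a ball-shaped region, $\mathcal{A}_{\text{ren}}/4G_{N}=-\pi \ell^{2}/2G_{N}$.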
Hence, the renormalized area naturally arises as the conical contribution of the renormalized Einstein-AdS action in the presence of squashed conical singularities. In a compact notation, the renormalized action on the orbifold takes the form
\begin{equation}
I_{\text{ren}}^{\left(\alpha\right)} = I_{\text{ren}} + \frac{\left(1-\alpha\right)}{4 G_{N}}\mathcal{A}_{\text{ren}} \left[\Sigma\right] \,,
\end{equation}
reflecting the fact that $\mathcal{A}_{ren}$ is inherited from bulk renormalization.
Even though $ I_{\text{ren}}$ is finite for any four-dimensional Einstein spacetime which is asymptotically AdS (AAdS), there are certain subtleties in the class of hypersurfaces that can be renormalized by $\mathcal{A}_{\text{ren}}$. As it was pointed out in Ref.\cite{Alexakis:2010zz}, $\mathcal{A}_{\text{ren}}$ successfully renders finite the area of any minimal or non-minimal surface that is anchored orthogonally to the conformal boundary. However, when the intersection is not orthogonal, the corresponding $\mathcal{A}_{\text{ren}}$ has to be corrected \cite{Fischetti:2016fbh}. Minimal surfaces trivially satisfy the orthogonality condition, which makes this functional adequate for the calculation of the HEE.
\subsection{Renormalized area is not conformal invariant for arbitrary manifolds}
In the mathematical literature, conformal invariance plays a key role in the definition of Renormalized Volume for asymptotically hyperbolic spacetimes \cite{FG,Graham:1999jg,Albin:2005qka}. As a matter of fact, this bulk functional is expressed in terms of conformal invariants in four \cite{Anderson2000L2CA} and six \cite{Chang:2005ska} dimensions. It is expected that codimension-2 descendants of these structures would be conformally invariant as well. It is also reasonable to think that this property will give rise to energy functionals for surfaces which properly measure the deviation from extremality (e.g., sphericity, as in soap bubbles) irrespective of their size.
Taking the above argument as motivation, we will study the behavior of the renormalized area \eqref{Arenweyl} under conformal transformations of the ambient metric $g_{\mu \nu } =e^{2\phi }\hat{g}_{\mu \nu }$.
The completeness relation
\begin{equation}
g_{\mu \nu } =n^{\left (A\right )}_{\mu }n_{\left (A\right )\nu } +e^{i}_{\mu }e^{j}_{\nu }\gamma _{i j} \,,
\end{equation}
with $n_{\mu }^{\left (A\right )}$ and $e_{i}^{\mu }$ being the corresponding normal and frame vectors, respectively, dictates the transformation of the following objects under Weyl rescaling, i.e.,
\begin{equation}
n_{\mu }^{\left (A\right )} =e^{\phi }\hat{n}_{\mu }^{\left (A\right )} \,,\ \gamma _{ij} =e^{2 \phi }\hat{\gamma }_{ij} \,.
\label{weyltranformnormalgamma}
\end{equation}
Furthermore, the extrinsic curvature transforms as
\begin{equation}
\mathcal{K}_{ij}^{\left (A\right )} = e^{\phi} \left(\hat{\mathcal{K}}_{ij}^{\left (A\right )} +\hat{\gamma} _{ij} \hat{n}^{\left(A\right)} \partial \phi\right) \,,
\label{Kconformtrans}
\end{equation}
where the contracted indices of $\hat{n}^{\left(A\right)} \partial \phi$ have been omitted for simplicity. Then it is straightforward to show that its trace scales as
\begin{equation}
\mathcal{K}^{\left (A\right )} = e^{-\phi} \left(\hat{\mathcal{K}}^{\left (A\right )} +2 \hat{n}^{\left(A\right)} \partial \phi\right) \,,
\label{Ktrconf}
\end{equation}
leading, in turn, to a conformally covariant $P_{ij}^{\left (A\right )}$ of weight 1.
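Indeed, combining Eqs.\eqref{Kconformtrans} and \eqref{Ktrconf}, the inhomogeneous $\hat{n}^{\left (A\right )} \partial \phi$ terms cancel,
\begin{equation}
P_{ij}^{\left (A\right )} =\mathcal{K}_{ij}^{\left (A\right )} -\frac{1}{2}\mathcal{K}^{\left (A\right )}\gamma _{ij} = e^{\phi }\left (\hat{\mathcal{K}}_{ij}^{\left (A\right )} -\frac{1}{2}\hat{\mathcal{K}}^{\left (A\right )}\hat{\gamma }_{ij}\right ) = e^{\phi }\hat{P}_{ij}^{\left (A\right )} \,.
\end{equation}
Finally, $W_{\left(\text{E}\right)ij}^{ij}$ transforms as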
\begin{equation}
W_{\left(\text{E}\right)\mu \nu}^{\lambda\sigma} =e^{-2\phi} \left[\hat{W}_{\left(\text{E}\right)\mu \nu}^{\lambda \sigma}+\frac{1}{\ell^2} \delta_{\mu \nu}^{\lambda \sigma} \left(e^{2\phi}-1\right)-4\delta_{[\mu}^{[\lambda}\hat{T}_{\nu]}^{\sigma]}\right]\,,
\label{Weyleinstein}
\end{equation}
where
\begin{equation}
\hat{T}_{\nu}^{\lambda}=\hat{\nabla}^{\lambda}\hat{\nabla}_{\nu}\phi-\hat{\nabla}_{\nu}\phi\hat{\nabla}^{\lambda}\phi+\frac{1}{2}\delta_{\nu}^{\lambda}\hat{\nabla}_{\mu}\phi\hat{\nabla}^{\mu}\phi \,.
\end{equation}
Thus, the renormalized area functional \eqref{Arenweyl} transforms as
\begin{align}
\mathcal{A}_{\text{ren}}\left [\Sigma \right ]&= \frac{\ell ^{2}}{2}\int\limits_{\Sigma }d^{2}y\sqrt{\hat{\gamma }}\left[\hat{W}_{\left(\text{E}\right)ij}^{ij} -\hat{P}_{ij}^{\left (A\right ) }\hat{P}_{\left (A\right )}^{ij} +2\left (\hat{H}^{\left (A\right )}\right )^{2} +2\hat{\mathcal{K}}_{\left (A\right )}\hat{n}^{\left (A\right )} \partial \phi \right. \nonumber \\
& \left. +2\left (\hat{n}^{\left (A\right )} \partial \phi \right )^{2} +2 \left(\frac{e^{2\phi}-1}{\ell^2}-\hat{T}_{i}^{i}\right) \right] -2\pi \ell ^{2}\chi \left [\Sigma \right] \,.
\label{Arenconformalgen}
\end{align}
It is evident from this expression that $\mathcal{A}_{\text{ren}}$ is not invariant under local conformal transformations when a generic surface is embedded in an arbitrary ambient metric.
\subsection{Restoring conformal invariance in codimension 2}\label{Willmoreenergy}
Rendering Eq.\eqref{Arenweyl} conformally invariant for an arbitrary surface is a non-trivial task. Indeed, this is equivalent to finding the compensating terms which restore conformal invariance of the renormalized area functional, which is technically involved.
However, the problem gets simpler once specific conditions are imposed in Eq.\eqref{Arenconformalgen}. In particular, when the ambient metric is an Einstein spacetime both in the physical and in the conformal frame, then $W_{\left(\text{E}\right)ij}^{ij}$ becomes conformally covariant with weight $-2$, as can be seen from Eq.\eqref{Weyleinstein}. This constraint makes the last parenthesis in Eq.\eqref{Arenconformalgen} vanish.
Additionally, considering a minimal embedded surface $\Sigma$ corresponds to the vanishing of the trace of the extrinsic curvature, which can equivalently be expressed in the conformal frame as
\begin{equation}
\hat{\mathcal{K}}^{\left (A\right )} = -2 \hat{n}^{\left(A\right)} \partial \phi \,.
\label{minimalitycondition}
\end{equation}
Applying the aforementioned conditions in Eqs.\eqref{Arenweyl} and \eqref{Arenconformalgen}, the renormalized area can be rewritten as
\begin{equation}
\mathcal{A}_{\text{ren}}\left [\Sigma \right ]= \frac{\ell ^{2}}{2}\int\limits _{\Sigma }d^{2}y\sqrt{\gamma }\left(W_{\left(\text{E}\right)ij}^{ij} -P_{ij}^{\left (A\right )}P_{\left (A\right )}^{ij}\right) -2\pi \ell ^{2}\chi \left [\Sigma \right] \,.
\label{ArenWillmore}
\end{equation}
N.B. that Eq.\eqref{ArenWillmore} corresponds to the evaluation of a codimension-2 conformal invariant, $L_{\Sigma}$, given by
\begin{equation}
L_{\Sigma}= \frac{\ell ^{2}}{2}\int\limits _{\Sigma }d^{2}y\sqrt{\gamma }\left(W_{ij}^{ij} -P_{ij}^{\left (A\right )}P_{\left (A\right )}^{ij}\right) -2\pi \ell ^{2}\chi \left [\Sigma \right] \,,
\label{Lsigma}
\end{equation}
for minimal surfaces on Einstein-AdS spacetimes which are anchored at the boundary.
\subsection{Willmore energy for Einstein ambient spaces}
A characteristic example of a functional with manifest conformal symmetry is the Willmore energy. This is defined for a compact and orientable two-dimensional surface immersed in $\mathbb{R}^{3}$ \cite{Toda2017Willmore,willmore1996riemannian,marques2014willmore}. In particular, the Riemannian manifold $\mathbb{R}^{3}$ arises as the conformal frame of a constant-time slice $t=const$ of the pure $AdS_4$ metric $g_{\mu \nu}$ \cite{Fonda:2015nma}. In this case the Weyl tensor vanishes identically and the $\left(A\right)$ label can be dropped, since $\mathcal{K}^{\left(t\right)}=0$. As a consequence, Eq.\eqref{ArenWillmore} depends explicitly on the square of the traceless extrinsic curvature and the Euler characteristic. Nevertheless, taking into account that in general
\begin{equation}
P_{ij}^{\left (A\right )}P_{\left (A\right )}^{ij} =\mathcal{K}_{ij}^{\left (A\right )}\mathcal{K}_{\left (A\right )}^{ij} -\frac{1}{2}\left (\mathcal{K}^{\left (A\right )}\right )^{2} \,, \label{psquared}
\end{equation}
and considering the Gauss-Codazzi relations
\begin{align}
R_{ij}^{ij} &=R +R_{ABAB}-2R_{AA} \,, \nonumber \\
& =\mathcal{R} +\mathcal{K}_{ij}^{\left (A\right )}\mathcal{K}_{\left (A\right )}^{ij} -\left (\mathcal{K}^{\left (A\right )}\right )^{2} \,,
\label{GaussCodazzi}
\end{align}
we arrive at the following expression
\begin{equation}
\mathcal{A}_{\text{ren}}\left [\Sigma \right ] = -\frac{\ell ^{2}}{2}\int \limits_{\Sigma }d^{2}y\sqrt{\gamma}\left (R^{ij}_{ij}-\mathcal{R} +2 H^{2}\right)-2\pi \ell ^{2}\chi \left [\Sigma \right] \,.
\label{Arenwillmore2}
\end{equation}
The evaluation of the latter in $\mathbb{R}^{3}$ requires moving to the conformal frame $\hat{g}_{\mu \nu}$, where the Riemann curvature vanishes identically, leading to
\begin{equation}
\mathcal{A}_{\text{ren}}\left [\Sigma \right ] = \frac{\ell ^{2}}{2}\int \limits_{\Sigma }d^{2}y\sqrt{\hat{\gamma}}\left (\hat{\mathcal{R}} -2 \hat{H}^{2}\right) -2\pi \ell ^{2}\chi \left [\Sigma \right ] \,.
\label{Arenwillmore3}
\end{equation}
Note that, up to this point, we have not constrained the location of $\Sigma$. In our construction, this can be a compact submanifold deep in the bulk or a cosmic brane anchored at the conformal boundary.
For a compact codimension-2 surface $\Sigma$, the Euler theorem
\begin{equation}
\int \limits_{\Sigma_{\text{comp}} }d^{2}y\sqrt{\hat{\gamma}}\hat{\mathcal{R}} =4\pi \chi \left [\Sigma_{\text{comp}} \right ] \,,
\label{2DEulertheorem}
\end{equation}
simplifies Eq.\eqref{Arenwillmore3} significantly, which now reads
\begin{equation}
\mathcal{A}_{\text{ren}}\left [\Sigma_{\text{comp}} \right ]= -\ell^{2}\mathcal{W}\left [\Sigma_{\text{comp}} \right ] \,, \label{conformalWillmorecompact}
\end{equation}
where
\begin{equation}
\mathcal{W}\left [\Sigma \right ] =\int\limits_{\Sigma }d^{2}y\sqrt{\gamma} H^2 \,,
\end{equation}
is the Willmore energy functional.
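As a sanity check of the normalization, a round sphere of radius $R_{0}$ in $\mathbb{R}^{3}$ has $H=1/R_{0}$ (recall $H=\frac{1}{2}\mathcal{K}$), such that
\begin{equation}
\mathcal{W}\left[S^{2}\right] =\int\limits_{S^{2}} d^{2}y\sqrt{\gamma}\, \frac{1}{R_{0}^{2}} = 4\pi \,,
\end{equation}
independently of the radius, which is the minimum value of the Willmore energy among closed surfaces \cite{willmore1996riemannian}.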
In the case of a boundary-anchored non-compact $\Sigma$, one has to follow the prescription proposed in Refs.\cite{Alexakis:2010zz,Fonda:2015nma}. There, a closed surface $2\Sigma$ is constructed by doubling $\Sigma$, such that $2\Sigma = \Sigma \cup \tilde{\Sigma}$, with $\tilde{\Sigma}$ being a minimal surface embedded into the mirror manifold $\tilde{M}$ beyond the conformal boundary. Taking into account the Euler theorem \eqref{2DEulertheorem} for the compact surface $2\Sigma$ and considering that the Euler characteristic behaves as $\chi \left(2\Sigma\right)=2\chi \left(\Sigma\right)$, leads to
\begin{equation}
\mathcal{A}_{\text{ren}}\left [\Sigma_{\text{non-comp}} \right ]= -\frac{\ell^{2}}{2}\int \limits_{2\Sigma }d^{2}y\sqrt{\hat{\gamma}}\hat{H}^2 \,, \label{conformalWillmore}
\end{equation}
or in terms of the Willmore energy,
\begin{equation}
\mathcal{A}_{\text{ren}}\left [\Sigma_{\text{non-comp}} \right] = -\frac{\ell^{2}}{2}\mathcal{W}\left [2\Sigma \right ] \,.
\label{ArenWillmorenoncomp}
\end{equation}
Here, Eqs.\eqref{conformalWillmorecompact} and \eqref{ArenWillmorenoncomp} make manifest the conformal invariance of Willmore energy for conformally flat ambient metrics since both arise as special cases of \eqref{ArenWillmore}.
The previous analysis made explicit the fact that, in general, the renormalized area functional is not a local conformal invariant of the codimension-2 hypersurface. However, one may restore conformal invariance in particular cases, such as for minimal hypersurfaces embedded in an Einstein spacetime.
\section{Obtaining $L_{\Sigma}$: the easy way.} \label{CGconical}
It is well-known that Einstein spacetimes are Bach-flat spacetimes. In particular, Einstein spacetimes arise as solutions of CG. Maldacena in Ref.\cite{Maldacena:2011mk} showed the emergence of Einstein-AdS gravity from 4D CG at tree level when Neumann boundary conditions for the metric are considered. The equivalence between the action of Conformal Gravity evaluated on Einstein spaces and renormalized Einstein-AdS gravity was made explicit in Ref.\cite{Anastasiou:2016jix}. The resulting Einstein-AdS action is indeed free from IR divergences. Thus, in the Einstein sector, both theories describe the same physics, which can be extended to the corresponding boundary field theories when considering the gauge/gravity duality.
\subsection{Embedding Einstein-AdS gravity in CG}
Going deeper into the rabbit hole, one realizes that the MacDowell-Mansouri action \cite{MacDowell:1977jt} emerges from the renormalized Einstein-AdS action given in Eq.\eqref{IrenGB}, i.e., for that particular Gauss-Bonnet coupling. Indeed, this action takes the form
\begin{equation}
I_{\text{ren}} = \frac{\ell^2}{256 \pi G_{N}} \int\limits_{M} d^{4}x \sqrt{g} \delta^{\mu_{1} \mu_{2} \mu_{3} \mu_{4}}_{\nu_{1} \nu_{2} \nu_{3} \nu_{4}} \left(R^{\nu_{1} \nu_{2}}_{\mu_{1} \mu_{2}} + \frac{1}{\ell^2} \delta^{\nu_{1} \nu_{2} }_{\mu_{1} \mu_{2}}\right) \left(R^{\nu_{3} \nu_{4}}_{\mu_{3} \mu_{4}} + \frac{1}{\ell^2} \delta^{\nu_{3} \nu_{4} }_{\mu_{3} \mu_{4}}\right) - \frac{\pi \ell^2}{2 G_{N}} \chi \left[M\right] \,,
\label{MacDowellMansouri}
\end{equation}
which suggests a connection to CG, since it can be expressed as the square of the Weyl tensor for Einstein spaces, $W_{\left (E\right )\mu \nu }^{\alpha \beta }$, that is
\begin{equation}
I_{\text{ren}}
=\frac{\ell ^{2}}{64\pi G_{N}}\int\limits _{M}d^{4}x\sqrt{g}\,W_{\left (E\right )\mu \nu }^{\kappa \lambda }W_{\left (E\right )\kappa \lambda }^{\mu \nu } -\frac{\pi \ell ^{2}}{2G_{N}}\chi \left [M\right ] \,. \label{renEH}
\end{equation}
It is, therefore, an interesting possibility to consider the renormalized Einstein-AdS action as coming from CG, i.e.,
\begin{equation}
I_{\text{CG}} =\frac{\ell ^{2}}{64\pi G_{N}}\int \limits_{M}d^{4}x\sqrt{g}\,W_{\mu \nu }^{\kappa \lambda }W_{\kappa \lambda}^{\mu \nu } -\frac{\pi \ell ^{2}}{2G_{N}}\chi \left [M\right] \,. \label{IConformalGrav}
\end{equation}
This functional describes the dynamics of a higher-derivative gravity theory, as the corresponding field equations are of fourth order in derivatives. It has been shown that this action is finite for generic asymptotically AdS boundary conditions \cite{Grumiller:2013mxa}. It is then reasonable to think that, by a proper embedding of Einstein-AdS gravity in CG, the Einstein sector of the theory would inherit the cancellation of IR divergences in the radial, holographic coordinate.
\subsection{Conical contributions of CG}
In order to determine the conical contribution of CG when evaluated on the orbifold $M^{\left (\alpha\right )}$, we consider its expression in terms of the Riemann curvature given by
\begin{equation}
I_{\text{CG}}=\frac{\ell ^{2}}{64\pi G_{N}}\int \limits_{M}d^{4}x\sqrt{g}\left (Rie^{2} -2 Ric^{2} +\frac{1}{3}R^{2}\right ) -\frac{\pi \ell ^{2}}{2G_{N}}\chi \left [M\right ] \,.
\end{equation}
Then, we use the FPS expressions for the Euler characteristic and the quadratic terms in the curvature given in Eqs.\eqref{Riemsquared}-\eqref{RicciScalaquared}, obtaining
\begin{align}
I^{\left(\alpha\right)}_{CG} &=\frac{\ell ^{2}}{64\pi G_{N}}\int \limits_{M^{\left (\alpha \right )}}d^{4}x\sqrt{g}\left \vert W^{\left (\alpha \right )}\right \vert ^{2} -\frac{\pi \ell ^{2}}{2G_{N}}\chi \left [M^{\left (\alpha \right )}\right ] \,,
\end{align}
where
\begin{equation}
\int\limits_{M^{\left(\mathcal{\alpha}\right)}} d^{4}x \sqrt{g}\left \vert W^{\left (\alpha \right )}\right \vert ^{2} = \int\limits_{M} d^{4}x \sqrt{g} \left \vert W\right \vert ^{2} +8\pi \left (1 -\alpha \right ) \int\limits_{\Sigma} d^{2}y \sqrt{\gamma}K_{\Sigma } +\mathcal{O}\left (\left (1 -\alpha \right )^{2}\right ) \, \label{ICGconicalexpa}
\end{equation}
denotes the expansion of the Weyl squared term in the conical parameter.
In the previous expression,
\begin{equation}
K_{\Sigma } =R_{ABAB} -R_{AA} +\frac{1}{3}R +\frac{1}{2}\left (\mathcal{K}^{\left (A\right )}\right )^{2} -\mathcal{K}_{ij}^{\left (A\right )}\mathcal{K}_{\left (A\right )}^{ij} \label{Ksigma1}\,
\end{equation}
is a conformally covariant term on the 2D manifold $\Sigma $ endowed with the metric $\gamma _{ij}$ \cite{Solodukhin:2008dh}. Indeed, one can show that $K_{\Sigma }$ consists of the sum of two objects: i) the subtraces on $\Sigma $ of the bulk Weyl tensor, and ii) the square of the traceless part of the extrinsic curvature. In particular, the intrinsic curvature terms can be resummed as
\begin{equation}
W_{ij}^{ij} =R_{ABAB} -R_{AA} +\frac{1}{3}R \,, \label{wabab}
\end{equation}
whereas the extrinsic curvature terms are given in Eq.\eqref{psquared}
such that Eq.\eqref{Ksigma1} can equivalently be cast in the form
\begin{equation}
K_{\Sigma }=W_{ij}^{ij} -P_{ij}^{\left (A\right )}P_{\left (A\right )}^{ij} \,.
\label{KSigma}
\end{equation}
As has been shown in the previous section, this is a conformally covariant combination of weight $-2$. In this case, however, no restrictions have been imposed on the class of surfaces or spacetimes for which the last formula is valid.
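Explicitly, inserting the conical terms of Eqs.\eqref{Riemsquared}-\eqref{RicciScalaquared} into the combination $Rie^{2}-2Ric^{2}+\frac{1}{3}R^{2}$ produces
\begin{equation}
8\pi \left(1-\alpha\right) \int\limits_{\Sigma} d^{2}y \sqrt{\gamma} \left[R_{ABAB} -R_{AA} +\frac{1}{3}R +\frac{1}{2}\left(\mathcal{K}^{\left(A\right)}\right)^{2} -\mathcal{K}^{\left(A\right)}_{ij}\mathcal{K}^{ij}_{\left(A\right)}\right] \,,
\end{equation}
which is precisely $8\pi\left(1-\alpha\right)\int_{\Sigma}d^{2}y\sqrt{\gamma}\,K_{\Sigma}$, as stated in Eqs.\eqref{ICGconicalexpa} and \eqref{Ksigma1}.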
\begin{equation}
I_{\text{CG}}^{\left(\mathcal{\alpha}\right)} = I_{\text{CG}} + \frac{\left(1-\alpha\right)}{4 G_{N}} L_{\Sigma} +\mathcal{O}\left (\left (1 -\alpha \right )^{2}\right )
\end{equation}
where the first-order conical contribution takes the form
\begin{equation}
L_{\Sigma } = \frac{\ell^{2}}{2}\int \limits_{\Sigma }d^{2}y\sqrt{\gamma}K_{\Sigma } - 2\pi \ell^{2} \chi \left [\Sigma \right] \label{ICGconical} \,.
\end{equation}
Hence, it becomes clear that the manifest conformal symmetry of the bulk action is induced on the codimension-2 functional constructed out of the conical contributions. Note that neither the shape nor the compactness of $\Sigma$ is constrained. As a consequence, $L_{\Sigma}$ refers to an arbitrary two-dimensional surface immersed in a generic four-dimensional metric. When AdS asymptotics are considered and $\Sigma$ is a surface anchored at the boundary, $L_{\Sigma}$ is interpreted as the HEE for CFTs dual to Conformal Gravity.
Interestingly enough, the last expression \eqref{ICGconical} reduces to the conformally invariant form of the renormalized area \eqref{ArenWillmore} when evaluated on Einstein spacetimes. However, the former is valid for a generic surface $\Sigma$ whereas the latter applies only to minimal surfaces. The relation of the $L_{\Sigma}$ functional to the renormalized area along with its manifest conformal invariance is a feature of having considered conformal invariance in the bulk as the starting point. This codimension-2 conformal invariant will be further applied to derive other energy functionals in Section \ref{ReducedHawking}.
\subsection{Willmore energy}
The conformal invariance of the functional $L_{\Sigma}$ suggests that one should be able to recover the Willmore energy in the proper limit. Our starting point in this analysis is the Weyl contribution of Eq.\eqref{ICGconical}, which is decomposed as
\begin{equation}
W_{ij}^{ij} =R_{ij}^{ij} -2S_{i}^{i} \,.
\end{equation}
This form comes from the codimension-2 sub-trace of Eq.\eqref{Weyltensor}. Furthermore, taking into account the Gauss-Codazzi relation of Eq.\eqref{GaussCodazzi} along with Eq.\eqref{psquared}, the $L_{\Sigma}$ functional can be rewritten as
\begin{equation}
L_{\Sigma } =\frac{\ell^{2}}{2}\int \limits_{\Sigma }d^{2}y\sqrt{\gamma }\left [\mathcal{R} -2\left (H^{\left (A\right )}\right )^{2} -2 S_{i}^{i}\right ] -2\pi \ell^{2} \chi \left [ \Sigma \right ] \,.
\end{equation}
Making contact with Willmore energy requires the hypersurface $\Sigma$ to be compact, in order to simplify the integral of the intrinsic Ricci scalar (Gauss-Bonnet density) and the Euler characteristic using the Euler theorem of Eq.\eqref{2DEulertheorem}. This consideration yields
\begin{equation}
L_{\Sigma_{\text{comp}}} = - \ell^{2}\int \limits_{\Sigma }d^{2}y\sqrt{\gamma}\left [\left (H^{\left (A\right )}\right )^{2} +S_{i}^{i}\right ] \,. \label{ICG3}
\end{equation}
When the surface $\Sigma$ is embedded in an arbitrary Riemannian manifold $\mathcal{M}_3$, defined on a constant-time slice of the AAdS bulk, the above expression is interpreted as the Conformal Willmore energy \cite{Mondino_2018}. In the case of a pure AdS bulk, the latter reduces to the standard Willmore energy functional when embedded in $\mathbb{R}^3$. As has been discussed in the previous section, this is achieved by considering the unphysical conformal frame $\hat{g}$ where the metric is that of $\mathbb{R}^3$ and therefore the transformed Schouten tensor vanishes identically, leading to
\begin{align}
L_{\Sigma_{\text{comp}}}\left[\mathbb{R}^3\right] & = -\ell^{2}\int \limits_{\Sigma_{\text{comp}} }d^{2}y\sqrt{\hat{\gamma}} \hat{H}^{2} \nonumber \\
&=-\ell^{2} \mathcal{W}\left [\Sigma \right] \,.
\label{Lsigmacompact}
\end{align}
When non-compact surfaces anchored at the conformal boundary are considered, one can use the doubling construction (discussed in subsection \ref{Willmoreenergy}). In this case, it is straightforward to show that
\begin{equation}
L_{\Sigma_{\text{non-comp}}} \left[\mathbb{R}^3\right]=-\frac{\ell^2}{2} \mathcal{W}\left [2\Sigma \right] \,.
\label{Lsigmanoncompact}
\end{equation}
Note that the renormalized area $\mathcal{A}_{\text{ren}}$ and the Willmore energy $\mathcal{W}$ functionals can be obtained as particular cases of the conformal invariant $L_{\Sigma}$. In point of fact, $\mathcal{A}_{\text{ren}}$ is recovered when considering Einstein spacetimes and minimal boundary-anchored surfaces, and $\mathcal{W}$ is obtained for compact (or doubled) surfaces in pure AdS. This is reminiscent of the equivalence in the bulk between CG and Einstein-AdS gravity when the former is evaluated on Einstein spacetimes.
The conical contribution in Eq.\eqref{ICGconicalexpa} is general and it will be used in the following section to derive the reduced Hawking mass from CG.
\section{Reduced Hawking mass from $L_{\Sigma}$}\label{ReducedHawking}
The results of the previous section show how starting from the codimension-2 conformal invariant $L_{\Sigma}$, one can derive the renormalized area and the Willmore energy functionals. This is achieved by imposing certain restrictions on both the ambient spacetime and the codimension-2 surface under consideration.
Nonetheless, $L_{\Sigma}$ is defined for generic surfaces. Thus, it is expected that it could provide a generalization of renormalized area for non-extremal surfaces in the Einstein limit, such as surfaces that are not orthogonally anchored at the AdS boundary.
Our starting point is the conical contribution of the CG action, given in Eq.\eqref{ICGconical}. Evaluating this expression for Einstein spacetimes amounts to the replacement of the Weyl tensor with $W_{\left (E\right )\mu \nu }^{\alpha \beta }$. Thus, one gets that
\begin{equation}
L_{\Sigma }\left [E\right ] = \frac{\ell^{2}}{2}\int \limits_{\Sigma }d^{2}y\sqrt{\gamma}\left (W_{\left (E\right )ij}^{ij} -P_{ij}^{\left (A\right )}P_{\left (A\right )}^{ij}\right ) -2\pi \ell^{2} \chi \left [\Sigma \right] \,,
\label{LEinstein}
\end{equation}
which, taking into account Eqs.\eqref{psquared} and \eqref{GaussCodazzi}, can be cast in the form
\begin{equation}
L_{\Sigma }\left [E\right ] = \frac{\ell^{2}}{4}I_{H}\left [\Sigma \right ]-2\pi \ell^{2} \chi \left [\Sigma \right] \,, \label{ICGconical2}
\end{equation}
where
\begin{equation}
I_{H}\left [\Sigma \right ] =2\int \limits_{\Sigma }d^{2}y\sqrt{\gamma}\left [\mathcal{R} +\frac{2}{\ell ^{2}} -\frac{1}{2}\left (\mathcal{K}^{\left (A\right )}\right )^{2}\right ] \,, \label{hawkingmass}
\end{equation}
is the generalization of the Hawking mass for AAdS spaces, which is referred to as reduced Hawking mass in Ref.\cite{Fischetti:2016fbh}. The authors in that reference exhibit the finiteness of this object when an arbitrary boundary-anchored hypersurface $\Sigma $ in a constant time slice is considered.
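The intermediate step uses the 2D subtrace $W_{\left(\text{E}\right)ij}^{ij}=R_{ij}^{ij}+\frac{2}{\ell^{2}}$ together with Eqs.\eqref{psquared} and \eqref{GaussCodazzi}, such that
\begin{equation}
W_{\left(\text{E}\right)ij}^{ij}-P_{ij}^{\left(A\right)}P_{\left(A\right)}^{ij}=\mathcal{R}+\frac{2}{\ell^{2}}-\frac{1}{2}\left(\mathcal{K}^{\left(A\right)}\right)^{2} \,,
\end{equation}
which reproduces the integrand of Eq.\eqref{hawkingmass}.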
Further properties of the reduced Hawking mass can be worked out by rewriting Eq.\eqref{ICGconical2} as
\begin{equation}
L_{\Sigma }\left [E\right ] =\mathcal{A}_{\text{ren}}\left [\Sigma \right ] -\frac{\ell ^{2}}{4}\int \limits_{\Sigma }d^{2}y\sqrt{\gamma}\left (\mathcal{K}^{\left (A\right )}\right )^{2} \,, \label{ICGconical3}
\end{equation}
in terms of the renormalized area $\mathcal{A}_{\text{ren}}\left [\Sigma \right ]$ of an arbitrary two-dimensional hypersurface $\Sigma $, anchored orthogonally to the boundary \cite{Alexakis:2010zz}. It is then manifest that $L_{\Sigma }\left [E\right ]$ reduces to the renormalized area $\mathcal{A}_{\text{ren}}\left [\Sigma \right ]$ when $\Sigma $ is minimal.
Eq.\eqref{ICGconical3} indicates that $\mathcal{A}_{\text{ren}}$ diverges when $\Sigma $ is anchored to the boundary at an arbitrary angle. It is the functional $I_{H}\left [\Sigma \right ]$ that correctly cancels the divergences in the most general case.
As a consequence, the reduced Hawking mass generalizes the concept of renormalized area to non-minimal hypersurfaces.
The different energy functionals obtained from $L_{\Sigma}$, with the corresponding restrictions on the ambient space $M$ and on the codimension-2 surface $\Sigma$, are shown in Table \ref{Table 1}.
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|}\hline
\diagbox[width=10em]{$\Sigma$}{$M$}&
Einstein & pure AdS \\ \hline
min & $\mathcal{A}_{\text{ren}}$ & $\mathcal{W}$\\ \hline
non-min & $I_{H}$ & \\ \hline
\end{tabular}
\caption{Energy functionals from $L_{\Sigma}$}\label{Table 1}
\end{table}
\section{Conclusions}\label{Conclusions}
In this work, we have obtained a local conformally invariant object in codimension-2, $L_{\Sigma}$, from four-dimensional CG evaluated on the replica orbifold using the FPS relations \cite{Fursaev:2013fta}; it inherits its conformal symmetry from the bulk. $L_{\Sigma}$ reduces to energy functionals which generalize renormalized area, such as the reduced Hawking mass and the Willmore energy, for generic Einstein or pure AdS ambient spaces, respectively. Said functionals may be used for codimension-2 surfaces which are boundary-anchored at an arbitrary angle, whether they are minimal or not.
The relations between the functionals, both at the bulk and at the codimension-2 levels, as well as the fact that they are embedded in conformal invariant structures, are summarized in Figure \ref{Figure 1}.
\begin{figure}[h]
\includegraphics[width=\textwidth]{Diagram1}
\caption{Diagram showing the relations between the functionals discussed in the paper.}
\label{Figure 1}
\end{figure}
The presented procedure streamlines the derivation discussed in Ref.\cite{Anastasiou:2020smm}, where the conformal invariant was obtained starting from the renormalized area of minimal surfaces given in Ref.\cite{Alexakis:2010zz}.
Then, conformal invariance was restored by rewriting the expression in a manifestly invariant form, for restricted Einstein spacetimes, as shown in Section \ref{Section 2}.
We establish the fundamental role of the bulk conformal symmetry in the renormalization of geometrical structures residing in codimension-2 submanifolds. The new result is the construction of $L_{\Sigma}$ out of the conical contributions of CG. In this way, this functional has a manifest local conformal symmetry, acquired from the bulk action. Most importantly, when AAdS Einstein spacetimes are considered, and for a surface $\Sigma$ anchored at the boundary, we recover the reduced Hawking mass. This quantity is finite for an arbitrary boundary-anchored 2D hypersurface, and it is monotonic under an inverse mean curvature flow, as shown in Ref.\cite{Fischetti:2016fbh}. This fact highlights the role of conformal symmetry in the renormalization procedure. As the functional can be obtained from $L_{\Sigma}$, a possible reinterpretation of the monotonicity property is suggested.
The procedure here presented opens the possibility of the derivation of generalized energy functionals in higher dimensions, starting from conformal invariance in the bulk.
\section{Acknowledgments}
We thank Marika Taylor and Nicolas Boulanger for interesting discussions. We also thank Prof. Boulanger for his kind hospitality at U. Mons during the completion of this work. We appreciate the help of Andrés Argandoña in designing Figure 1. The work of GA is funded by ANID, Convocatoria Nacional Subvenci\'on a Instalaci\'on en la Academia Convocatoria A\~no 2021, Folio SA77210007. The work of IJA is funded by ANID, REC Convocatoria Nacional Subvenci\'on a Instalaci\'on en la Academia Convocatoria A\~no 2020, Folio PAI77200097. The work of RO is funded by ANID Grant N$^{\circ }$1190533 {\it Black holes and asymptotic symmetries}, and ANID/ACT210100 Anillo Grant {\it Holography and its applications to High Energy Physics, Quantum Gravity and Condensed Matter Systems}.
|
1,116,691,500,256 | arxiv | \section{Introduction} \label{sec:intro}
Exoplanet exploration was identified by the Decadal Survey on Astronomy and Astrophysics 2020 as one of the top scientific priorities; in particular, the identification and characterization of Earth-like planets will play a key role in the search for biochemical signatures of life in the universe \citep{NRC_2020Decadal}. The survey identified high-contrast imaging and spectroscopy as cornerstones for the future of exoplanet science, and prioritized a coronagraphic instrument on a flagship space mission, along with development of the US Extremely Large Telescope (ELT) program. It recommended that a large ($\sim 6$ m diameter) Infrared/Optical/Ultraviolet (IR/O/UV) space telescope with capabilities for coronagraphic spectroscopy be the first mission to enter the Great Observatories Mission and Technology Maturation Program \citep{NRC_2020Decadal}. Meanwhile, telescopes on the ground can characterize young giant planets, and have the potential to reach reflected light planets around M stars for the first time \citep{ruane_2019_spie}.
While conventional coronagraphs dramatically reduce the photon noise from the star, they are practically limited to angular separations greater than a few $\lambda/D$ (the size of a resolution element, where $\lambda$ is the wavelength and $D$ the telescope diameter). The ability to access closer-in exoplanets would greatly increase the expected yield of detectable planets, since yield scales approximately inversely with the inner working angle (IWA), with yield $\propto$ IWA $^{-0.98}$ \citep{Stark_2015}. Additionally, planets observable with coronagraphy in the visible and near-infrared regime may fall within the inaccessible inner working angle at longer wavelengths, where features of key biosignatures such as carbon monoxide and methane exist. Gaining access to closer separations at those longer wavelengths will thus enable better characterization of planets detected.
Meanwhile, techniques such as nonredundant masking interferometry \citep{nrm_tuthill} or cross-aperture nulling interferometry \citep{bracewell_1978,pfn} can access very small angular separations. However, these approaches result in lower efficiency than coronagraphy since only a small portion of the aperture is used. The Vortex Fiber Nuller (VFN) is an instrument concept that straddles the space between the two approaches, with a smaller IWA than coronagraphs while routing the planet light to a diffraction-limited spectrograph more efficiently than single-baseline cross-aperture interferometry \citep{Ruane2018_VFN}. This technique is capable of characterizing exoplanets within 1 $\lambda/D$, requires few optical elements, and is compatible with many coronagraph designs as a complementary characterization tool.
\begin{figure*}[t]
\begin{center}
\includegraphics[scale=0.25]{figures/vfn_focal_fig.jpeg}
\caption{(a)~Schematic of a focal-plane VFN with a single-mode fiber. The beam is focused onto a vortex mask, which imparts a different phase pattern on the star and planet point-spread-functions. The beam is then collimated and refocused onto a single-mode fiber. The on-axis starlight is rejected while the planet light is partially coupled. (b)~Coupling efficiency, $\eta$, or throughput, of a planet as a function of its angular separation from the star.}
\label{fig:fulldiagram}
\end{center}
\end{figure*}
\section{Concept}
\subsection{Vortex Fiber Nulling}
The Vortex Fiber Nuller is an instrument concept that enables spectroscopy of exoplanets within 1 $\lambda/D$, using a vortex mask to imprint a vortex phase pattern on the incoming beam \citep{Ruane2018_VFN}. Figure~\ref{fig:fulldiagram}(a) shows that when the beam is on-axis (such as light from a star), the resulting pattern is orthogonal to the fundamental mode of a single-mode fiber (SMF) and does not couple to it. This result can be demonstrated by calculating the overlap integral of a field $f(r,\theta)$ with the SMF mode $\psi_{01}(r)$, whose squared modulus gives the coupling efficiency:
\begin{equation} \label{eq:overlap_integral}
\int \psi_{01}(r) f(r,\theta) dA.
\end{equation}
For the field created by a vortex, the integral is separable, and the polar term is given by:
\begin{equation}
\int_0^{2\pi} \exp(il\theta) d\theta,
\end{equation}
where $l$ is an integer that denotes the vortex charge. This integral evaluates to 0 for $l \neq 0$, reflecting that the vortex field is orthogonal to the SMF mode.
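Explicitly, for a nonzero integer $l$,
\begin{equation}
\int_0^{2\pi} \exp(il\theta) d\theta = \frac{1}{il}\left[\exp(2\pi i l)-1\right] = 0 \,,
\end{equation}
so only a field with no net azimuthal phase winding has a nonvanishing overlap with the radially symmetric SMF mode.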
However, as shown in Fig.~\ref{fig:fulldiagram}(b), off-axis planet light from $\sim 0.5 \lambda/D$ to $\sim 1.3 \lambda/D$ can couple in, with a peak throughput of $19 \%$ at 0.9 $\lambda/D$. The coupled planet light can thus be directed to a spectrograph for immediate characterization, while the starlight is rejected. A focal-plane VFN is explored in this work, but \citet{ruane_2019_spie} showed that the vortex can also be placed in the pupil plane, resulting in a pupil-plane VFN that operates on the same principle of rejecting on-axis starlight with an imprinted vortex.
The range of angular separations probed by the VFN is smaller than the inner working angle of all classical coronagraphs, and is a region known to harbor potentially habitable exoplanets detected via radial velocity (RV) and transit methods. Additional advantages of the VFN compared to classical coronagraphs include its relative insensitivity to telescope aperture shape, polarization aberrations, and many wavefront aberration modes \citep{Ruane2018_VFN}. Since its conceptual development, the VFN concept has been tested in the lab, achieving azimuthally averaged peak coupling of $16\%$ (close to the theoretical limit) and starlight suppression of $6 \times 10^{-5}$, which can be attributed to the minor wavefront errors in the system \citep[Monochromatic; Broadband,][]{Echeverri_VFN,echeverri_spie_2019}.
While the original VFN design is already compelling, it has several drawbacks. The planet throughput is relatively low, with a theoretical limit of $\sim 20\%$, depending on the configuration. The measurement from a VFN also lacks spatial information: since the coupling map is circularly symmetric, there is no way to determine from the data the position angle of the planet, information that is (in the absence of other measurements) necessary for constraining the orbital parameters of the planet. Since there is only one flux measurement and the coupling into the SMF varies with the radial separation of the planet, there is also a degeneracy between the planet flux and its separation. Here, we present an augmentation to the VFN that enhances throughput and provides additional constraints on the orbit and flux of the planet, while retaining the functionality of the VFN concept. This new design relies on a device called the mode-selective photonic lantern.
\subsection{Mode-Selective Photonic Lanterns} \label{sec:mspl}
A photonic lantern is a photonic mode converter that adiabatically interfaces between a multi-mode port and several single-mode ports, where the distribution of flux in the single-mode outputs is related to the power in each mode at the multi-mode input \citep{LeonSaval_PL_2013}. Photonic lanterns have been proposed for use in astrophysics for spectrometer coupling \citep{lin_2021} and for focal-plane wavefront sensing, allowing for the measurement of the input wavefront while maintaining single-mode fiber outputs suited for injection into spectrographs for spectral characterization \citep{jovanovic-ESS-2016,corrigan2018,Norris_2020}. Each mode at the few-mode fiber (FMF) face of the lantern is mapped to a SMF output, such that light coupling to a given mode at the FMF side will result in flux in the corresponding SMF core. The device is bi-directional, so light injected into one of the SMF ports will propagate into the mode corresponding to that port at the FMF face.
While standard photonic lanterns have similar cores and are not designed with a particular mode structure in mind, mode-selective photonic lanterns \citep[MSPL,][]{LeonSaval_MSPL} utilize dissimilar cores that enable ports to be mapped into LP modes, defined in \citet{lp_mode_def} as ``the set of linearly polarized propagation modes of optical fibers with radially symmetric index profiles in the approximation of weak guidance.'' A partially mode-selective photonic lantern has one port corresponding to the LP 01 mode, while the rest of the ports exhibit an unspecified structure. In a fully mode-selective photonic lantern, all ports correspond to LP modes. Figure \ref{fig:mspl} shows a schematic of a six-port MSPL based on the design from \citet{LeonSaval_MSPL}, where each port corresponds to one of the first six LP modes.
To synergize the action of the VFN with symmetry properties of the LP modes, we propose to replace the single-mode fiber of the original VFN with a MSPL, resulting in a Photonic Lantern Nuller (PLN) instrument concept that improves upon the original design.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=0.4\textwidth]{figures/mspl_diagram_custom.png}
\includegraphics[scale = 0.65]{figures/lp_modes.pdf}
\caption{Left: Schematic of a 6-port mode-selective photonic lantern spatial-multiplexer fiber system. Each LP mode at the few-mode fiber (FMF) face is mapped to one of the six single-mode ports of the SMF face, such that light with an LP mode shape at the FMF side will result in flux in the corresponding SMF core. The device is bi-directional, so light injected into one of the SMF ports will propagate into the LP mode corresponding to that port at the FMF face. Right: The field amplitudes of the first six LP modes, corresponding to the ideal modes of a six-port MSPL.}
\label{fig:mspl}
\end{center}
\end{figure*}
\subsection{VFN with a Mode-Selective Photonic Lantern}
The PLN replaces the single-mode fiber of the VFN with an MSPL as described in Section \ref{sec:mspl}. Specifically, the light after the vortex mask is focused onto the FMF face of the MSPL and propagates through to the single-mode outputs. Each output port can then be coupled into individual SMFs and routed to photodetectors or spectrographs. The port corresponding to the LP 01 mode provides the same response as the VFN, where on-axis light is nulled while off-axis light can couple. Additionally, if we label the LP mode azimuthal order by $m'$ analogously to the Zernike polynomials, i.e. positive $m'$ indicating an azimuthal component of $\cos(m'\theta)$ and negative $m'$ indicating $\sin(m'\theta)$, then a photonic lantern port combined with an optical vortex of azimuthal charge $l$ will exhibit an on-axis null \textit{except} when $l\pm m'=0$. This result can be derived by extending Equation \ref{eq:overlap_integral} to an arbitrary fiber mode $\psi_{n'm'}$, and separating out the polar integral:
\begin{equation} \label{eq:lp_mode_overlap}
\begin{split}
\int_0^{2\pi} \exp(il\theta) \cos(m'\theta) d\theta, \quad m' & \geq 0, \quad \mathrm{or}\\
\int_0^{2\pi} \exp(il\theta) \sin(m'\theta) d\theta, \quad m' & < 0.
\end{split}
\end{equation}
Recalling the exponential trigonometric identities $\cos(x) = (e^{ix}+e^{-ix})/2$ and $\sin(x)=(e^{ix}-e^{-ix})/(2i)$, we find that these overlap integrals evaluate to 0 for $l \pm m' \neq 0$. Thus, on-axis nulls are created in multiple ports, from which planet spectra can be extracted. Additionally, the existence of ports with $m' \neq 0$ allows for a nuller configuration with no vortex at all, as the overlap integrals for the LP11ab and LP21ab ports evaluate to zero when $l=0$. This means that the photonic lantern can be used by itself as a nuller, as contemporaneously presented in \citet{Tuthill2022-NIH}.
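To make this selection rule concrete, the short sketch below (a minimal check using NumPy and SciPy; the function name \texttt{polar\_overlap} is ours) evaluates the azimuthal overlap integrals of Equation \ref{eq:lp_mode_overlap} numerically and reports which $(l, m')$ combinations couple on axis.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def polar_overlap(l, m_prime):
    # Azimuthal factor of the LP mode: cos(m' theta) for m' >= 0,
    # sin(m' theta) for m' < 0, matching the convention in the text.
    azim = (lambda t: np.cos(m_prime * t)) if m_prime >= 0 \
        else (lambda t: np.sin(m_prime * t))
    re, _ = quad(lambda t: np.cos(l * t) * azim(t), 0.0, 2.0 * np.pi)
    im, _ = quad(lambda t: np.sin(l * t) * azim(t), 0.0, 2.0 * np.pi)
    return complex(re, im)

for l in (0, 1, 2):
    for m_prime in (0, 1, -1, 2, -2):
        val = abs(polar_overlap(l, m_prime))
        print(f"l={l}, m'={m_prime:+d}: "
              f"{'couples' if val > 1e-8 else 'null'} (|integral|={val:.2e})")
\end{verbatim}
Running the loop reproduces the rule stated above: for instance, with $l=1$ only the $m'=\pm 1$ (LP 11ab) entries are nonzero, while all other ports are nulled.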
To demonstrate these properties, we simulate the PLN configurations using HCIPy \citep{por2018hcipy}. Our optical propagation model propagates the desired input wavefront through a circular pupil (with $\lambda/D$ chosen to equal 1), and then into a focal plane. For the configuration without a vortex, this becomes the final focal-plane electric field. For the configurations with a vortex, either a charge 1 or 2 vortex is applied in the focal plane. As with the VFN, a vortex with charge higher than 2 results in lower peak throughput and larger IWA, so we do not focus on them in this work.
The square of the overlap integral of the focal-plane electric field distribution with each LP mode gives the relative intensity coupled into the corresponding port. We explore using an MSPL with six LP modes and a V number, ``the normalized frequency parameter that determines the number of modes'' \citep{vnum_def}, equal to 4.71. Our simulations assume perfect mode shapes as well as perfect transitions, free from cross-coupling and losses. Characterizing the impact of these real-world imperfections, from realistic designs as well as from fabrication errors, is left for future work.
At a given wavelength, the optimal coupling into the lantern depends on the mode field diameter (MFD) of the lantern modes and the focal ratio $F\#$ \citep{ruane_2019_spie}. While the real MFDs of photonic lanterns are tunable within a small range (Leon-Saval, private communication), in practice, the coupling in a real system will be optimized by changing the focal ratio. However, since our simulations already set $\lambda/D=1$ and $F\#=1$, we optimize coupling by tuning the MFD (expressed in units of $\lambda/D$). Specifically, for each configuration (no vortex, $l=1$, $l=2$), we simulate a range of MFDs and find the value that maximizes the peak of the x-axis cross-section of the summed throughput of the nulled ports. Although Section \ref{ssec:detection} shows that summed throughput does not fully predict instrument performance, it is still a useful proxy for choosing the MFD, as optimizing directly for detection capability would require knowledge of the level and distribution of on-sky wavefront error, which is not predictable \textit{a priori}.
From our simulations, we find that the optimal MFD is 2.8 $\lambda/D$ for the no vortex and charge 1 cases, and 3.2 $\lambda/D$ for the charge 2 case. We present the results of our simulations using these diameters. Figure \ref{fig:coupling_donuts} shows the ideal spatial coupling efficiency for a point source as a function of angular separation from the optical axis, or coupling map, for every port (top panels) along with the line profile along the horizontal axis (bottom panels). We also plot the total flux collected across all ports (dashed pink lines) as well as the total flux collected from only the nulled ports, i.e., those satisfying $l\pm m'\neq0$ (solid black lines). The total nulled throughput curves demonstrate that the additional ports increase both the peak throughput as well as the field of view over which planet light couples.
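Although our quantitative results use HCIPy, the structure of the computation is simple enough to sketch in plain NumPy. In the stand-alone sketch below (our own variable and function names), a circular pupil is propagated to the focal plane with an FFT, a focal-plane vortex phase is applied, and the field is projected onto Laguerre-Gaussian profiles used as simplified stand-ins for the true LP modes; the width \texttt{w} plays the role of half the MFD. This reproduces the qualitative behavior of the coupling maps, e.g., the on-axis LP 01 null for charge 1.
\begin{verbatim}
import numpy as np

# Pupil grid: the aperture diameter spans N samples; zero-padding by `pad`
# makes the focal-plane sampling (1/pad) lambda/D per pixel.
N, pad = 64, 8
n = N * pad
u = (np.arange(n) - n // 2) / N                  # pupil coords in units of D
U, V = np.meshgrid(u, u)
pupil = ((U**2 + V**2) <= 0.25).astype(complex)  # circular aperture

fx = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / N))   # focal coords, lambda/D
FX, FY = np.meshgrid(fx, fx)
R, TH = np.hypot(FX, FY), np.arctan2(FY, FX)

def lp_like_modes(w):
    """Laguerre-Gaussian stand-ins for the first six LP modes (width w in
    lambda/D); only the azimuthal structure matters for the nulls."""
    g = np.exp(-(R / w)**2)
    return {"LP01": g,
            "LP11a": (R / w) * g * np.cos(TH),
            "LP11b": (R / w) * g * np.sin(TH),
            "LP21a": (R / w)**2 * g * np.cos(2 * TH),
            "LP21b": (R / w)**2 * g * np.sin(2 * TH),
            "LP02": (1.0 - 2.0 * (R / w)**2) * g}

def couplings(sep, charge, modes):
    """Fractional coupling of a point source offset by `sep` lambda/D along
    x into each mode, after a focal-plane vortex of the given charge."""
    wf = pupil * np.exp(2j * np.pi * sep * U)            # tilted input beam
    E = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(wf)))
    E = E * np.exp(1j * charge * TH)                     # vortex phase
    Etot = np.sum(np.abs(E)**2)
    return {name: np.abs(np.vdot(mode, E))**2
            / (np.sum(np.abs(mode)**2) * Etot)
            for name, mode in modes.items()}

modes = lp_like_modes(w=1.4)   # mode field diameter ~ 2w = 2.8 lambda/D
for sep in (0.0, 0.5, 1.0):
    eta = couplings(sep, charge=1, modes=modes)
    print(sep, {k: float(np.round(v, 4)) for k, v in eta.items()})
\end{verbatim}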
\begin{figure*}[t]
\begin{center}
No Vortex \hspace{120pt} Charge 1 \hspace{120pt} Charge 2
\fbox{\includegraphics[scale = 0.35]{figures/c0_donuts.pdf}}
\fbox{\includegraphics[scale = 0.35]{figures/c1_donuts.pdf}}
\fbox{\includegraphics[scale = 0.35]{figures/c2_donuts.pdf}}
\includegraphics[scale = 0.37]{figures/c0_lineprof.pdf}
\includegraphics[scale = 0.37]{figures/c1_lineprof.pdf}
\includegraphics[scale = 0.37]{figures/c2_lineprof.pdf}
\caption{\label{fig:coupling_donuts} Coupling maps for each port with no vortex (top left), and a charge 1 (top middle) and charge 2 (top right) vortex. The maps span -3 $\lambda/D$ to 3 $\lambda/D$ in each direction. Bottom left: Throughput line profiles with no vortex. The four nulled ports, those satisfying $l\pm m'\neq0$, are LP 11ab and LP 21ab. Bottom middle: Throughput line profiles with a charge 1 vortex. The four nulled ports satisfying $l\pm m'\neq0$ are LP 01, LP 21ab, and LP02. Bottom right: Throughput line profiles for each port with a charge 2 vortex. The four nulled ports satisfying $l\pm m'\neq0$ are LP 01, LP 11ab, and LP02. Although nulls in the LP 21ab ports are not guaranteed by symmetry, in this case, their central throughputs are spuriously low, and including them in the data analysis may provide some additional gains.
}
\end{center}
\end{figure*}
While MSPLs with more than six ports can in theory be fabricated, manufacturing MSPLs with large numbers of modes remains a practical challenge because the adiabaticity of the lantern transition becomes more difficult to achieve as the number of modes increases \citep{Velazquez-Benitez2018}. While larger port numbers may become available with the advancement of photonics technology, Figure \ref{fig:port_num_coupling} shows that increasing the total number of ports brings diminishing returns in throughput, especially at angular separations $<\lambda/D$. In addition, using fewer ports has the advantage of requiring fewer detector pixels, which are always at a cost premium. Considering these factors, we choose to focus our investigations on a PLN design with a six-port MSPL.
\begin{figure*}[t]
\begin{center}
\includegraphics[scale = 0.38]{figures/c0_portnum_lineprofs.pdf}
\includegraphics[scale = 0.38]{figures/c1_portnum_lineprofs.pdf}
\includegraphics[scale = 0.38]{figures/c2_portnum_lineprofs.pdf}
\caption{\label{fig:port_num_coupling} Line profiles for summed throughput of nulled ports for PLNs with no vortex (left), a charge 1 vortex (middle) and a charge 2 vortex (right), using MSPLs with varying numbers of output ports. As the number of ports increases, each additional port brings decreasing returns in additional throughput. The current limit of what can be practically manufactured is six ports. Thus, we choose to use a six-port MSPL in our PLN design, which balances the total throughput of the nulled ports with what is practically manufacturable. Note that a higher V number of 8.48 was necessary to generate up to 19 LP modes. Since we wish to compare the effect of port number independently of V number effects, we fix the V number at 8.48 for all port numbers. Thus, due to the difference in V number, the line profiles shown in this analysis have slightly different shapes from those in Figure \ref{fig:coupling_donuts}.}
\end{center}
\end{figure*}
\section{Sensitivity to Aberrations} \label{sec:aberrations}
\subsection{Zernike Aberrations}
One benefit of the original VFN was its insensitivity to many low-order Zernike wavefront error modes. If the charge of the vortex is denoted by $l$, and the Zernike aberrations are denoted by $Z_n^m(r,\theta)$, where $n$ is the radial order and $m$ indicates the azimuthal structure, i.e. $\cos(m\theta)$ for positive $m$ and $\sin(m\theta)$ for negative $m$, then only aberrations that cancel out the vortex charge ($l\pm m=0$) will couple. This can be demonstrated analogously to the case of LP modes, replacing the $m'$ of a given port in Equation \ref{eq:lp_mode_overlap} with the $m$ of a given Zernike mode.
\begin{figure*}[t]
\begin{center}
No Vortex
\includegraphics[scale = 0.28]{figures/c0_zsensitivities.pdf}
Charge 1
\includegraphics[scale = 0.28]{figures/c1_zsensitivities.pdf}
Charge 2
\includegraphics[scale = 0.28]{figures/c2_zsensitivities.pdf}
\caption{\label{fig:z_sensitivities} Stellar coupling rate as a function of individual Zernike polynomial amplitude, with no vortex (top), a charge 1 vortex (middle), and a charge 2 vortex (bottom). For the nulled ports, solid lines indicate modes predicted to couple (those satisfying $l \pm (m' + m) =0$), while dashed lines indicate modes that are not predicted to couple (to first order, though higher-order coupling effects can be seen). Values of $\eta_s$ falling below $10^{-6}$ are likely numerical noise, and are not shown. Lines that fall entirely below $10^{-6}$ are light grey in the legend.}
\end{center}
\end{figure*}
The additional photonic lantern ports obey a similar principle, but the structure of the LP mode and the Zernike mode will interact, and the polar overlap integral is now given by
\begin{equation} \label{eq:full_overlap_integral}
\begin{split}
& \int_0^{2\pi} \exp(il\theta) \cos(m'\theta) \cos(m\theta) d\theta, \quad m',m \geq 0, \quad \mathrm{or}\\
& \int_0^{2\pi} \exp(il\theta) \cos(m'\theta) \sin(m\theta) d\theta, \quad m' \geq 0, m < 0, \quad \mathrm{or}\\
& \int_0^{2\pi} \exp(il\theta) \sin(m'\theta) \cos(m\theta) d\theta, \quad m' < 0, m \geq 0, \quad \mathrm{or}\\
& \int_0^{2\pi} \exp(il\theta) \sin(m'\theta) \sin(m\theta) d\theta, \quad m',m < 0.
\end{split}
\end{equation}
Thus, for each port, only aberrations satisfying $l \pm (m' + m) =0$ will couple (to first order). Figure \ref{fig:z_sensitivities} shows the simulated stellar coupling, $\eta_s$, as a function of the input amplitude of the first ten Zernike aberrations. In this work, we compute coupling normalized to the summed intensity of the beam, such that the stellar coupling is equivalent to the null-depth. The fact that the LP 01 port is sensitive primarily to tip, tilt, and coma (for charge 1) and astigmatism followed by second-order responses to tip and tilt (for charge 2) is consistent with theoretical predictions as well as the numerical simulations presented in \citet{ruane_2019_spie}. The results for the other ports show that, as predicted by the azimuthal order conditions, each port is only sensitive to a few specific lower-order aberrations satisfying $l \pm (m' + m) =0$. For example, the LP 21ab ports with a charge 1 vortex and the LP 11ab ports with a charge 2 vortex are all insensitive to defocus ($m=0$) and astigmatism ($m=\pm2$). The LP 02 ports have the same azimuthal order as the corresponding LP 01 ports, and thus reject the same low-order aberrations.
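These integrals are again easy to check numerically. The sketch below (our own helper names; simple trapezoidal quadrature, which is effectively exact for these periodic integrands) evaluates the polar factor of Equation \ref{eq:full_overlap_integral} for the combinations cited above, confirming that the LP 21ab ports with a charge 1 vortex and the LP 11ab ports with a charge 2 vortex reject defocus and astigmatism.
\begin{verbatim}
import numpy as np

def azim(order):
    # cos(order * theta) for order >= 0, sin(order * theta) for order < 0,
    # matching the sign convention used for m and m' in the text.
    return (lambda t: np.cos(order * t)) if order >= 0 \
        else (lambda t: np.sin(order * t))

def polar_factor(l, m_prime, m, npts=4096):
    """Polar part of the first-order overlap among a charge-l vortex, an LP
    mode of azimuthal order m', and a Zernike mode of azimuthal order m."""
    t = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
    vals = np.exp(1j * l * t) * azim(m_prime)(t) * azim(m)(t)
    return abs(vals.sum()) * (2.0 * np.pi / npts)

# LP21ab with charge 1, and LP11ab with charge 2, versus defocus (m = 0)
# and astigmatism (m = +/-2): all of these integrals vanish.
for l, mp in [(1, 2), (1, -2), (2, 1), (2, -1)]:
    for m in (0, 2, -2):
        print(f"l={l}, m'={mp:+d}, m={m:+d}: {polar_factor(l, mp, m):.1e}")
\end{verbatim}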
\subsection{Tip-tilt Jitter}
\citet{ruane_2019_spie} predicted that for ground-based observatories, tip-tilt jitter (evolving much faster than the typical exposure times) will likely be a significant contribution to degradation of the VFN's null-depth. We thus present simulations of the average null-depth achieved ($\eta_s$) as a function of the standard deviation of tip-tilt jitter ($\sigma_{tt}$). For each data point, 100 independent realizations of tip-tilt are generated, with amplitude drawn from a normal distribution with standard deviation $\sigma_{tt}$ and position angle drawn uniformly between 0 and $2 \pi$. The 100 frames are then averaged to calculate an averaged $\eta_s$. The results are presented in Figure \ref{fig:jitter_coupling}. For example, to achieve a null depth of $10^{-3}$ in the LP11ab ports of the no vortex PLN, the standard deviation of tip-tilt jitter must be smaller than $\sim 0.1 \lambda/D$. To achieve a null depth of $10^{-3}$ in the LP01 port of the charge 1 and charge 2 configurations, the standard deviation of tip-tilt jitter must be smaller than $\sim 0.1 \lambda/D$ and $\sim 0.3 \lambda/D$, respectively. For context, the Keck Planet Imager and Characterizer (KPIC) instrument at the Keck II telescope, a fiber injection unit for high resolution spectroscopy that currently has a VFN mode as well as the capability to test a future PLN on-sky, typically achieves on-sky jitter standard deviations of 6--7 mas, corresponding to 0.14 $\lambda/D$ at 2.2 $\mu$m \citep{delorme_2021}.
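The jitter-averaging procedure itself is straightforward to sketch. In the snippet below, the propagation model is replaced by a hypothetical quadratic null-depth response (\texttt{eta\_s\_inst}, with an arbitrary illustrative coefficient); the real curves in Figure \ref{fig:jitter_coupling} come from propagating each realization through the full optical model.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def eta_s_inst(amp):
    """Hypothetical instantaneous null depth under a pure tip-tilt offset
    of amplitude `amp` (lambda/D); the quadratic form and the coefficient
    are illustrative stand-ins only."""
    return 0.1 * amp**2

for sigma_tt in (0.01, 0.03, 0.1, 0.3):
    amps = rng.normal(0.0, sigma_tt, size=100)    # per-frame amplitudes
    pa = rng.uniform(0.0, 2.0 * np.pi, size=100)  # position angles; unused
    # by this circularly symmetric stand-in, but relevant for ports with
    # azimuthal structure
    eta_avg = eta_s_inst(np.abs(amps)).mean()     # average over 100 frames
    print(f"sigma_tt = {sigma_tt:4.2f} lambda/D -> mean eta_s = {eta_avg:.1e}")
\end{verbatim}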
\begin{figure*}[t]
\begin{center}
\includegraphics[scale = 0.38]{figures/c0_ttjitter_coupling.pdf}
\includegraphics[scale = 0.38]{figures/c1_ttjitter_coupling.pdf}
\includegraphics[scale = 0.38]{figures/c2_ttjitter_coupling.pdf}
\caption{\label{fig:jitter_coupling} Stellar coupling rates as a function of tip-tilt jitter with no vortex (left), a charge 1 vortex (middle), and a charge 2 vortex (right). The standard deviation of the per-frame tip-tilt amplitude is given by $\sigma_{tt}$, with position angle drawn uniformly between 0 and $2 \pi$.}
\end{center}
\end{figure*}
\subsection{KPIC Atmospheric Residuals}
We also simulate the performance of the PLN under WFE conditions measured by the pyramid wavefront sensor (PyWFS) of KPIC. The atmospheric seeing on the night the data were taken was 0.6 arcsec, and the wavefront sensor achieved residuals of 150 nm RMS. It should be noted that the PyWFS does not see all of the errors in the optical system, as recent on-sky demonstrations of the VFN on KPIC (Echeverri et al, in prep) do not achieve the level of starlight suppression predicted by these residuals alone. Specifically, in the real KPIC instrument, there is additional tip-tilt error downstream of the PyWFS that is not captured in these simulations. Thus, these simulations should be interpreted as an optimistic limit, while the real performance will be impacted by additional errors invisible to the PyWFS.
For our simulation, we take 590 frames of measured wavefront error, expressed in the form of reconstructed Zernike coefficients. From each frame of coefficients, we generate a pupil-plane WFE map. As an intermediate diagnostic, we calculate the focal-plane PSF averaged over these frames and compare it to an ideal PSF with no WFE in Figure \ref{fig:psfs}.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[scale = 1.0]{figures/psf_simulation.pdf}
\caption{\label{fig:psfs} Left: Mean focal-plane PSF in the presence of WFE as measured by the KPIC PyWFS. Middle: Unaberrated focal-plane PSF. Right: Difference between the aberrated and ideal PSFs. Note that these are \textbf{not} simulations of the Keck PSF, but of the measured wavefront error residuals propagated through a system with an ideal circular aperture.}
\end{center}
\end{figure*}
We then propagate an on-axis beam with each frame of WFE through our PLN models to calculate the output null depths. We also propagate off-axis beams with each frame of WFE (at $0.84 \lambda/D$ for the no vortex and charge 1 configurations and $1.3 \lambda/D$ for the charge 2 configuration). The instantaneous coupling over time with these residuals may be found in Appendix \ref{app:timeseries}. Meanwhile, Figure \ref{fig:kpic_coupling} shows the mean coupling over all the frames. In the nulled ports of the PLN, the mean off-axis planet coupling over these frames (where it is expected based on the coupling maps) remains significantly higher than the stellar coupling in the presence of this WFE.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[scale = 0.45]{figures/summary_kpic_resids_1.pdf}
\caption{\label{fig:kpic_coupling} Mean coupling calculated over 590 frames of WFE residuals from the KPIC PyWFS, for the no vortex (left), charge 1 (middle), and charge 2 (right) configurations. The ports on the bottom axis are (from left to right): LP01, LP11a, LP11b, LP21a, LP21b, and LP02. Coupling values for ports that are not considered nulled are depicted in light grey. Off-axis planet coupling (where it is expected based on the coupling maps) remains higher than the stellar coupling in the presence of these WFE realizations.}
\end{center}
\end{figure*}
\section{Simulation of Exoplanet Characterization}
In this section, we demonstrate the exoplanet detection and characterization capabilities of a PLN and compare it to those of the VFN.
\subsection{Synthetic Data Generation} \label{sec:datagen}
We consider the outputs of the instrument to be the intensity at the single simulated wavelength in each port. In reality, the light in each port can be fed into a spectrograph, and spectral analysis can be used to increase detectability by orders of magnitude \citep{wang_ji_2017}. However, we neglect spectral information in this preliminary demonstration of the PLN performance relative to the VFN, and leave exploring the combination of a broadband PLN and spectral analysis to future work.
We assume that the integration time of an observation is significantly longer than the coherence time of atmospheric residuals, such that fluctuations in wavefront error will average out to the null depth. Consequently, we assume that the primary contribution to non-static noise is photon noise.
The following process was used to generate the synthetic data. We first average the 590 intensity frames from the simulation of KPIC PyWFS residuals in Section \ref{sec:aberrations} to obtain the average null depth. To generate realizations of photon noise, we calculate the stellar photon rate entering the instrument:
\begin{equation} \label{eq:pr}
\mathrm{PR} = f_0 \times 10^{-m/2.5} \times A \times \Delta \lambda \times \eta_t,
\end{equation}
where $f_0 = 9.56\times 10^9$ photons m$^{-2}$ s$^{-1}$ $\mu$m$^{-1}$ is the zero-point photon flux per unit wavelength of a magnitude-zero star in H band, $m$ is the stellar magnitude, $A$ the telescope area, $\Delta \lambda$ the bandwidth, and $\eta_t$ the throughput of the telescope before reaching the PLN instrument. We choose the stellar magnitude to be $m=5$ and use the Keck telescope area ($A=76$ m$^2$). We assume a bandwidth of $\Delta \lambda = 0.15 \mu$m and upstream telescope throughput of $\eta_t = 0.06$, a typical value for Keck.
For each port of the PLN, we multiply PR by its null depth to calculate the photon rate per port. We then multiply that photon rate by the assumed exposure time of 60 s to obtain the counts per exposure. We add normally-distributed noise with a variance equal to the number of counts, an approximation for Poisson-distributed photon noise that is valid at our high photon count rates. We assume that each dataset corresponds to 5 hours of integration time, and thus generate 300 exposures per dataset. We generate a total of 1000 such datasets for analysis.
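A minimal sketch of this generation procedure is given below; the per-port null depths are illustrative placeholders for the values obtained by averaging the simulated KPIC-residual frames.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Stellar photon rate entering the instrument, using the values in the text.
f0 = 9.56e9            # photons m^-2 s^-1 um^-1, H-band zero point
m_star, A = 5.0, 76.0  # stellar magnitude; Keck collecting area (m^2)
dlam, eta_t = 0.15, 0.06
PR = f0 * 10**(-m_star / 2.5) * A * dlam * eta_t     # photons per second

# Illustrative per-port null depths; in the actual pipeline these come from
# averaging the 590 simulated frames of KPIC residuals.
null_depth = np.array([1e-4, 3e-5, 3e-5, 5e-5, 5e-5, 8e-5])

t_exp, n_exp = 60.0, 300                 # 60 s exposures, 5 h per dataset
counts = PR * t_exp * null_depth         # expected stellar counts per port
# Gaussian noise with variance equal to the counts approximates Poisson
# photon noise at these high count rates.
data = counts + rng.normal(0.0, np.sqrt(counts), size=(n_exp, counts.size))

# A companion would be injected by adding (flux ratio) x (off-axis
# coupling) x PR x t_exp to each exposure.
print("recovered null depths:", data.mean(axis=0) / (PR * t_exp))
\end{verbatim}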
We also generate off-axis point-spread-functions (PSFs) that can be injected as astrophysical signal. The off-axis PSFs do not include WFE, since the simulations show that, at the WFE amplitudes of interest in our work, the planet coupling at separations of interest is not significantly impacted. In order to create data with an injected companion, the off-axis PSF at the desired separation is scaled appropriately based on the desired flux ratio, then added to each exposure of the simulated intensity of the on-axis source.
\begin{figure*}[t]
\begin{center}
\includegraphics[scale = 0.55]{figures/photon_noise_rocs/roc_fr7e-07_sep0.75.pdf}
\includegraphics[scale = 0.55]{figures/photon_noise_rocs/roc_fr7e-07_sep1.0.pdf}
\includegraphics[scale = 0.55]{figures/photon_noise_rocs/roc_fr8e-07_sep1.25.pdf}
\includegraphics[scale = 0.55]{figures/photon_noise_rocs/roc_fr1e-06_sep1.5.pdf}
\caption{\label{fig:roc_curves} Example ROC curves at different separations in the presence of photon noise, assuming wavefront error averages to a baseline null depth. For both vortex charges, the inclusion of other ports of the PLN provides detection gains relative to the VFN. The grey areas indicate false positive rates that are not well sampled, as they involve fewer than 3 datasets with false detections.}
\end{center}
\end{figure*}
\subsection{Detection} \label{ssec:detection}
In this section, we characterize the detectability of planets, comparing the performance of the VFN and the PLN. For each dataset generated in Section \ref{sec:datagen}, we first take the mean of the 300 exposures and subtract off the nominal on-axis signal with no WFE. We then perform detection testing on the resulting data, using a total energy test statistic:
\begin{equation}
\label{eqn:energy_statistic}
\epsilon = \sum_i y_i^2,
\end{equation}
where $i$ is the port index of the PLN and $y_i$ the signal in the port. The test statistic $\epsilon$ is calculated from the data and compared to a threshold $\xi$, which is chosen to provide a desired false-alarm rate. A detection is claimed if $\epsilon \geq \xi$, and a lack of detection is claimed otherwise.
There are four possible outcomes when comparing the test statistic calculated from a dataset to the value of the test statistic set as the detection threshold. The first is a true positive, in that a real companion in the data is detected; the fraction of real companions detected is the true positive rate ($\mathrm{TPR}$). A second possible outcome is that a real companion is \textit{not} detected, occurring at a rate of $1-\mathrm{TPR}$. A third outcome is that there is no companion in the data, but the detection test incorrectly claims a detection. The rate at which this occurs is the false positive rate ($\mathrm{FPR}$). The fourth and last outcome is that there is no companion, and a detection is correctly not claimed, occurring at a rate of $1-\mathrm{FPR}$.
Choosing a threshold for the test statistic is a balancing act between the $\mathrm{TPR}$ and $\mathrm{FPR}$: as the threshold is decreased, detecting real companions becomes more likely, but false detections also become more likely. This dependency can be characterized by examining the possible values of the test statistic and calculating the $\mathrm{TPR}$ and $\mathrm{FPR}$ \textit{if} that value were the detection threshold. Plotting the $\mathrm{TPR}$ as a function of the $\mathrm{FPR}$ results in a receiver operating characteristic (ROC) curve, which characterizes the performance of a detection scheme and can be used in the determination of flux ratio detection limits.
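The construction of an ROC curve from the energy statistic can be sketched as follows, with hypothetical unit-variance Gaussian port signals standing in for the simulated datasets; restricting the statistic to the LP 01 column alone mimics the single-measurement VFN.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n_datasets, n_ports = 1000, 4

# Hypothetical debiased signals (mean over exposures, on-axis term
# subtracted), in units of the per-port noise standard deviation: one set
# of noise-only datasets and one set with a faint injected companion.
noise_only = rng.normal(0.0, 1.0, size=(n_datasets, n_ports))
planet = np.array([0.9, 0.5, 0.3, 0.2])      # illustrative per-port signal
with_planet = rng.normal(0.0, 1.0, size=(n_datasets, n_ports)) + planet

def energy(y, ports=slice(None)):
    # Total energy test statistic; passing ports=[0] would mimic the VFN.
    return np.sum(y[:, ports]**2, axis=-1)

eps0, eps1 = energy(noise_only), energy(with_planet)
thresholds = np.sort(np.concatenate([eps0, eps1]))
fpr = np.array([(eps0 >= xi).mean() for xi in thresholds])
tpr = np.array([(eps1 >= xi).mean() for xi in thresholds])
# (fpr, tpr) traces out the ROC curve; e.g., the TPR achievable at a 1%
# false positive rate:
print("TPR at FPR <= 0.01:", tpr[fpr <= 0.01].max())
\end{verbatim}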
Figure \ref{fig:roc_curves} shows ROC curves from the distribution of $\epsilon$ over the 1000 datasets. The VFN corresponds to the case where only the LP 01 port is used, while with the PLN, all four nulled ports are used. The simulations show that for both charges, the inclusion of the other nulled ports of the PLN provides detection gains relative to the VFN. For a given rate of false positives, the PLN can achieve a higher true positive rate than the VFN. At close-in separations $\leq 1 \lambda/D$, the charge 1 PLN achieves the best performance. At separations greater than $\approx 1.25 \lambda/D$, the charge 2 PLN starts to perform better. Despite having higher throughput, the photonic lantern without a vortex does not outperform both the charge 1 and the charge 2 PLNs at any separation, emphasizing that the distribution of flux relative to the achievable null-depths matters more than sheer throughput. However, the no vortex PLN has the advantage of not requiring an additional optic in a pupil or focal plane, and can thus be realized with a simpler optical system. Additionally, the relative performance of the different configurations will ultimately depend on the distribution of WFE, as the ports in each configuration are sensitive to different subsets of modes.
\subsection{Model-Fitting}
Data from the VFN consist of only one measurement, which contains no information on position angle and cannot discriminate between the effects of flux ratio and separation. Unlike the VFN, the spatial structures of the PLN modes allow for the retrieval of the planet's location, albeit with a degeneracy in the position angle as a result of their symmetry.
To illustrate this capability, we attempt to fit models to one of the simulated datasets of the charge 2 configuration from Section \ref{sec:datagen}, where a planet with a flux ratio of $2\times 10^{-6} $ is injected at ($X=$ 1.25 $\lambda/D$, $Y=$ 0 $\lambda/D$). We believe that a configuration that slightly breaks the symmetry would be a better strategy for localization than any of the configurations presented here; determining how to do this effectively is left for future work. Since our primary aim here is to show that this localization capability exists in this architecture, we choose to focus on just one configuration.
First, we assume that the average null-depth can be estimated, such as by observing a reference star. This assumes telescope conditions are reasonably stable between observations of the reference and target stars, as the accuracy of the null-depth estimation will be impacted by quasi-static aberrations as well as differential alignment onto the vortex or lantern centers, which would lead to differences between the reference and target observations.
The estimated null-depth is subtracted from the average of the measurement frames. This step is necessary to debias the data, since if only the nominal on-axis signal (without any wavefront error) is subtracted, the WFE that sets the null-depth will contribute to the apparent flux of the planet. We then fit a model to the data through Chi-squared ($\chi^2$) minimization, using only data from the LP 01 port for the VFN, and data from all six ports for the PLN.
The three model parameters for a planet are its location coordinates $(X,Y)$ and its flux ratio (FR). We first generate a grid of parameter values, choosing $X$ to span from 0 $\lambda/D$ to 3 $\lambda/D$ and $Y$ to span from -3 $\lambda/D$ to 3 $\lambda/D$. This spans the spatial half-plane, which is enough for our purposes, as the symmetry of the modes means the position angle can at best be localized with a $180^\circ$ degeneracy. The flux ratios are chosen to range logarithmically from $10^{-7}$ to $10^{-5}$.
A planet corresponding to each set of parameters from the grid is simulated with the instrument model. The $\chi^2$ of the difference between the model and the data is calculated using $\chi^2 = \sum_i (y_i - x_i)^2/\sigma_i^2$, where $y_i$ is the measured data in port $i$, $x_i$ is the model, and $\sigma_i$ is the standard deviation of the noise across the 300 frames. The probability distribution is then calculated by taking $\mathrm{P}(X,Y,\mathrm{FR}) \propto \exp{-\chi^2/2}$, and normalizing such that the total probability over the entire explored parameter space is 1.
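The grid evaluation can be sketched as follows; the function \texttt{model\_ports} is a toy stand-in for the instrument response (the real analysis uses the optical simulation), and the noise level and grid resolutions are illustrative. Note that the toy response is even in $Y$, so the sketch exhibits the same position-angle degeneracy discussed above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def model_ports(X, Y, FR):
    """Toy stand-in for the instrument response: expected (debiased)
    signals in six ports for a planet at (X, Y) lambda/D with flux
    ratio FR."""
    r2 = X**2 + Y**2
    env = np.exp(-r2 / 4.0)
    return FR * np.array([np.sin(np.sqrt(r2))**2,
                          X**2 * env, Y**2 * env,
                          (X**2 - Y**2)**2 * env / 4.0,
                          (X * Y)**2 * env,
                          0.3 * np.sin(0.5 * np.sqrt(r2))**2])

truth = (1.25, 0.0, 2e-6)
sigma = np.full(6, 2e-8)                       # per-port noise level
data = model_ports(*truth) + rng.normal(0.0, sigma)

Xg, Yg = np.linspace(0, 3, 31), np.linspace(-3, 3, 61)
FRg = np.logspace(-7, -5, 21)
logp = np.empty((Xg.size, Yg.size, FRg.size))
for i, X in enumerate(Xg):
    for j, Y in enumerate(Yg):
        for k, FR in enumerate(FRg):
            chi2 = np.sum((data - model_ports(X, Y, FR))**2 / sigma**2)
            logp[i, j, k] = -0.5 * chi2
prob = np.exp(logp - logp.max())
prob /= prob.sum()                             # P(X, Y, FR) on the grid
fr_marginal = prob.sum(axis=(0, 1))            # marginalized over (X, Y)
print("MAP flux ratio:", FRg[fr_marginal.argmax()])
\end{verbatim}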
Figure \ref{fig:probmaps} depicts the three spatial cross-sections of the resulting probability distributions for the charge 2 VFN and PLN, corresponding to the flux ratio values from the grid closest to the injected value of $2\times 10^{-6}$. The parameter set in the grid closest to that of the injected planet is marked with an orange star. Also shown is the probability distribution of the flux ratio, marginalized over the spatial dimensions. As expected, it is largely unconstrained by the VFN, which cannot distinguish between the competing effects of flux ratio and separation. However, with the spatial information provided by the PLN, the retrieved probability distribution of the flux ratio peaks at the correct value of $2\times10^{-6}$. Given the best-fit flux ratio from the PLN, fitting a Gaussian curve to the y-axis cross-section of the spatial probability distribution reveals that the position angle can be localized to $\sim 1~\lambda/D$ with the PLN, while it is completely unconstrained by the VFN. These simulation results show that compared to the VFN, the PLN can provide better constraints on the planet's location and flux ratio.
The response of the PLN to off-axis signal is not rotationally symmetric. We thus explore injecting and recovering a planet signal at varying position angles. Figure \ref{fig:pa_scan} shows that, given the correct flux ratio, the localization response varies as a function of position angle. At position angles other than 0 and $\pi/2$, additional solutions exist beyond the two guaranteed by the instrumental symmetry. However, an observing strategy that involves taking data with multiple rotations of the instrument relative to the sky will reduce the number of best fit position angle solutions to the fundamental two. Finding the most efficient observational strategy to best constrain the position angle given an unknown random initial orientation, and exploring the possibility of introducing slight asymmetries to break this degeneracy, are topics left for future work.
\begin{figure*}[t]
\begin{center}
\fbox{\includegraphics[scale = 0.35]{figures/smf_1p25_2e-06_probmap_3panel.pdf}}
\fbox{\includegraphics[scale = 0.35]{figures/pl_1p25_2e-06_probmap_3panel.pdf}}
\includegraphics[width=0.34\textwidth,height=120pt]{figures/fr_pdf_1p25_2e-06.pdf}
\caption{\label{fig:probmaps} Left: Select spatial probability distribution cross-sections, using a charge 2 VFN. The three panels are plotted on the same color scale. Middle: Select spatial probability distribution cross-sections, using a charge 2 PLN. The three panels are plotted on the same color scale. The parameters closest to that of the injected planet are marked with orange stars. Right: Probability distributions of the flux ratio, marginalized over the spatial dimensions. The flux ratio of the injected planet is marked by the red line. The model-fitting shows that the PLN can provide better constraints on planet model parameters compared to the VFN.}
\end{center}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\includegraphics[scale = 0.5]{figures/pa_scan.pdf}
\caption{\label{fig:pa_scan} Spatial probability distributions given the correctly identified flux ratio of $2.15\times10^{-6}$ (the panels are plotted on the same color scale). Planets at a separation of 1.25 $\lambda/D$ are injected at a variety of injected position angles (marked by the orange stars). At position angles other than 0 and $\pi/2$, additional solutions exist beyond the two guaranteed by the instrumental symmetry. However, an observing strategy that involves taking data with multiple rotations of the instrument relative to the sky will reduce the number of best fit position angle solutions to the fundamental two.
}
\end{center}
\end{figure*}
\newpage
\section{Conclusions}
This work presents a proof-of-concept study of the Photonic Lantern Vortex Fiber Nuller. The advantage the MSPL offers over the SMF is two-fold. First, a photonic lantern, regardless of modal selectivity, accepts more input modes than the SMF, increasing the overall amount of light that can couple in. This improves the overall field of view and total planet coupling provided by the VFN. Second, the symmetries resulting from modal selectivity interact with the vortex field to create not just on-axis nulls, but also ports insensitive to low-order aberrations that do not meet a specific azimuthal order condition. Together, these properties of the PLN result in an instrument that rejects starlight while maintaining a substantial amount of planet light in the regions of interest. Additionally, while the PLN is meant for integration with spectrographs, motivated by the science that can be done in the spectral domain, the ports with different modal structures capture some spatial information, enabling planet localization that is not possible with the VFN. However, the instrumental symmetries that provide starlight and wavefront error rejection currently also cause degeneracies in the spatial information captured. Future work will explore whether introducing slight asymmetries into the instrument can lift the spatial degeneracies with minimal impact to the achievable null depth.
This work simulates the PLN's ideal behavior at a single wavelength. However, the modes of a realistic mode-selective photonic lantern will deviate from the ideal LP modes. Furthermore, its modes will actually vary with wavelength. Finite-difference beam propagation simulations are needed to simulate the behavior of a realistic photonic lantern design across different wavelengths, since its modes will no longer correspond to perfect LP modes, and there will be modal cross-coupling due to imperfections in the design as well as the fabrication process. Additional performance simulations will be conducted to characterize the impact of this non-ideal, wavelength-dependent behavior on science results. This future work includes simulating the PLN with synthetic planetary spectra and investigating methods to analyze the data, building upon current practices in exoplanet spectral analysis \citep{Wang_2021}. We will identify best practices to account for the wavelength-dependent mode structure and throughput and the optimal method for combining data from the different ports, including the possibility of obtaining concurrent stellar spectra in the non-nulled ports to be used for calibration and analysis. We will investigate if multiple sets of spectroscopic data can be used to cross-calibrate systematic errors. The single-mode outputs are ideal for downstream spectroscopy using photonic spectrographs \citep{gatkine2019astrophotonic}.
We will thus investigate strategies for optimal integration of the PLN with an on-chip photonic spectrograph on each of the single-mode outputs (nulled or otherwise) to measure the spectra of the planet/companion and star, as well as for cross-calibration.
Future work also includes verifying the behavior of a PLN in the lab: both the characterization of the photonic lantern device itself and of the full instrument after integration with a vortex. We intend to characterize the PLN with different levels of wavefront error, as well as investigate the possibility of performing wavefront control to achieve better nulls, potentially compensating for defects such as residual optical surface error or even non-ideal photonic lantern modes. If the laboratory characterization validates the performance of the PLN, an on-sky demonstration will be attempted.
This work on the PLN also naturally ties in to several related topics, such as the development of wavefront sensing algorithms through photonic lanterns \citep{Lin_PLWFS1,Norris_2020}, or the leveraging of the photonic lantern design paradigm to push towards the theoretical limits of optical signal separation.
\acknowledgments
We thank the anonymous reviewer for their careful consideration and feedback. This work is supported in part by the National Science Foundation Graduate Research Fellowship under Grant No. 1122374. Additional effort has been supported by the National Science Foundation under Grant No. 2109231. This research was carried out in part at the California Institute of Technology and the Jet Propulsion Laboratory under a contract with the National Aeronautics and Space Administration (NASA).
\vspace{5mm}
\software{This research made use of Astropy \citep{astropy:2013, astropy:2018}; NumPy \citep{oliphant2006guide_numpy}; SciPy \citep{2020SciPy-NMeth}; and Matplotlib \citep{Hunter:2007_matplotlib}.}
\pagebreak
\section{Introduction}
\label{sec:intro}
Kernel mixtures are a powerful tool for modeling a variety of data sets, especially in the presence of a natural clustering structure \citep{escobar:1995,maceachern:1998}. A good portion of the rapidly expanding literature on Bayesian nonparametrics is aimed at building effective mixture models. A recent focus of the literature is on how to jointly model in a hierarchical manner data samples that are similar or otherwise related, the main objective being effective borrowing of strength across samples, thereby substantially enhancing inference on the underlying data generative mechanisms as well as prediction. This is particularly important for complex data sets, for which each individual sample may only contain very limited information regarding the underlying probability distribution. Among many notable efforts in this direction, \cite{lopes2003bayesian} proposed a hierarchical model for multiple finite mixtures. \cite{muller2004method} proposed a nonparametric extension of \cite{lopes2003bayesian}'s model by replacing finite mixtures with Dirichlet process (DP) mixtures. In a different vein, \cite{cron2013hierarchical} proposed to use the hierarchical DP, or HDP, \citep{teh2006hierarchical} as the mixing distribution to characterize variation across multiple mixture distributions. \cite{rodriguez2008nested} proposed the nested DP (NDP) mixture, which is an infinite mixture of DP mixtures that induces an additional level of clustering among multiple mixture distributions themselves (to be distinguished from the clustering within each mixture distribution).
While applicable to a variety of mixture modeling contexts, our work is motivated during our attempt to apply existing hierarchical mixture models to the analysis of data collected from flow cytometry experiments. Flow cytometry is a laser-based technology that measures biomarkers on a large number of cells, so each cell is an observation from a distribution in $\mathbb{R}^p$, where $p$ is the number of biomarkers measured. The cell population typically comes from a blood sample in immunological studies, and it consists of cells of various subtypes---e.g., T cells, B cells, etc.---with each subtype forming a ``cluster'' in the sample space. Because each cell subtype has a specific function in the immune system, inference on the abundance of the various subtypes across blood samples of a patient under different stimulating conditions, for instance, is of interest. Mixture models are natural tools for characterizing such data as the data is indeed a mixture of various cell types \citep{chan2008statistical}, and because a typical flow cytometry study will involve multiple samples collected under different conditions, the need for jointly modeling to achieve effective borrowing of strength also naturally arises \citep{cron2013hierarchical}.
During the analysis of flow cytometry experiments using mixtures, we encountered a number of important challenges that we believe are present in numerous (if not most) other applications involving mixture modeling of related samples (not only with location-scale kernels but beyond). Below we summarize the three main data features/challenges that motivate the current work:
\begin{itemize}
\item[I.] {\em Samples often share clusters but with differing weights.} Related samples tend to share some (even most) of their clusters, and these common clusters vary across related samples in their weights. In flow cytometry, for instance, data samples often share a vast majority of the cell subtypes, and the most common type of variation across samples is the differences in the relative sizes of the subtypes.
\item[II.] {\em Only some, not all, clusters vary.} Often, only a fraction, not all, of the clusters vary across samples. In flow cytometry, not all cell subtypes are affected by the experimental conditions of interest. Very often only one or two cell types are affected and thus vary across the samples while the rest do not.
\item[III.] {\em Misalignment across samples in shared clusters.} Even the same cluster shared among samples is often not perfectly aligned across samples, either due to actual systematic differences across the samples, or, very often, due to the presence of extraneous, uncontrolled additional sources of variation, i.e., some ``random'' effect. This is easily seen in mixtures of location-scale families, where the location and spread of some shared clusters differ to various extent across samples. Such misalignment is ubiquitous in flow cytometry data, with numerous potential causes. For example, even tiny differences in the chemical concentrations applied in the experimental protocol across experiments can cause noticeable ``perturbations'' in the cell subtypes.
\end{itemize}
As far as we know, none of the existing hierarchical approaches satisfactorily address all of these issues in a single coherent framework. Table~\ref{tab:comparison_of_models} provides a summary of these data features and the extent to which some of the state-of-the-art methods (along with the method we propose herein) address each of them.
\begin{table}[h]
\begin{center}
\resizebox{\textwidth}{!}{%
\begin{tabular}{ c | c|c|c }
\hline\hline
& Shared clusters & Only a subset & Misalignment\\
& with varying weights & of clusters differ& in kernels\\
\hline
\cite{lopes2003bayesian,muller2004method} & Not allowed & Allowed &Not allowed\\
\cite{teh2006hierarchical,cron2013hierarchical} & Allowed & Not allowed &Not allowed\\
\cite{rodriguez2008nested} & Not allowed & Not allowed &Not allowed\\
This work & Allowed & Allowed & Allowed\\
\hline\hline
\end{tabular}}
\caption{Comparison of hierarchical mixture models in terms of how they cope with the three common data features/challenges in modeling multiple related data samples.}
\label{tab:comparison_of_models}
\end{center}
\end{table}
Specifically, the existing approaches exploit some aspects of these features but do not fully take them into account. By introducing a cluster-specific hierarchical relationship among the samples, \cite{lopes2003bayesian} and \cite{muller2004method} allow some clusters to be shared among the samples. However, their models require that the kernel parameters and the mixture weight for each cluster be either both shared across samples or both different, without the option to decouple these two different types of variations. In particular, no clusters are allowed to have only one type of variation---e.g., mixing weights---under these models. In the context of flow cytometry, for instance, this would mean that cell subtypes cannot change just in abundance across the samples but not in their location and spread, clearly an unrealistic assumption. On the other hand, by using the hierarchical DP \citep{teh2006hierarchical} as the mixing distribution, \cite{cron2013hierarchical} does allow variations to exist in weights alone, but enforces the constraint that all clusters vary across samples, excluding the common situation in applications such as flow cytometry that only some clusters (e.g., subtypes) vary while others remain unchanged across conditions. Finally, under the nested DP mixture \citep{rodriguez2008nested}, the clusters in each sample must either be completely identical to those in another sample if they fall into the same model-level cluster or all be completely different, in both weights and kernel parameters, if they belong to different model-level clusters.
New hierarchical modeling techniques are needed to address these limitations. To meet this need, we design two new modeling devices that can be embedded into a single hierarchical mixture modeling framework---the first for the mixing weights and the other for the kernel parameters. For the weights, we introduce a new stick breaking process that induces shared weights on some clusters (those that do not change in abundance) through breaking a ``shared'' stick across all samples while inducing different weights on the other clusters through breaking an ``idiosyncratic'' stick for each sample. This technique will allow us to address challenges~I and II. For the mixture kernels, we utilize a {\em hierarchical} kernel to induce local perturbations in the kernel parameters across samples, which mimics the effect on the kernels due to uncontrolled confounding. By decoupling the hierarchical relationship among the mixing weights from that among the kernel parameters, our approach offers the needed additional flexibility and thus achieves substantially higher efficiency in modeling related mixtures, as will be demonstrated through numerical examples.
The rest of the paper is organized as follows. We start in Section~\ref{sec:background} with a brief review of the relevant background regarding nonparametric mixture modeling and stick breaking, and then in Section~\ref{sec:techniques} introduce the two techniques in turn. In Section~\ref{sec:computation} we provide a recipe for posterior inference based on Markov chain Monte Carlo (MCMC) sampling. In Section \ref{sec:examples} we compare our method to current methods through simulation studies that cover prediction/estimation, cross-sample calibration, and testing multi-sample differences, and finally use it to analyze two flow cytometry data sets.
\section{Method}\label{sec:method}
\subsection{Background: Dirichlet process mixtures and stick breaking}
\label{sec:background}
While our techniques can be embedded into mixture models with various weight generating mechanisms and kernel families, we shall introduce and illustrate them in the context of DP mixtures of Gaussians, which is the most widely adopted nonparametric mixture model.
Suppose $n$ observations $\vect{y}=(y_1,y_2,\ldots,y_n)$ are from a mixture model:
\begin{align*}
y_i \stackrel{\mathrm{iid}}{\sim} F, \quad i=1, \ldots, n, \quad \text{and} \quad f(\cdot) = \sum_{k \in \mathcal{K}} \pi_k \, g( \cdot| \lambda_k)
\end{align*}
where $f$ denotes the probability density function of $F$, $g(\cdot | \lambda)$ is a kernel distribution parametrized by $\lambda$, $\pi_k$ the associated (mixture) weight, and $\mathcal{K}$ the countable (possibly infinite) index set of the mixture components (or clusters). Location-scale families are commonly adopted as the kernel distribution, in which case $\lambda_k$ specifies the location and spread of the $k$th cluster. By definition the weights satisfy $\pi_k \geq 0$ and $\sum_k \pi_k = 1$. An alternative and computationally attractive formulation utilizes a latent cluster membership label $Z_i\in\mathcal{K}$ for each observation, such that
\[
y_i\,|\,Z_i = k \sim g(\cdot|\lambda_k) \quad \text{and} \quad \Pr(Z_i = k) = \pi_{k} \quad \text{for $i=1,2,\ldots,n$ and $k\in\mathcal{K}$} .
\]
Bayesian inference under mixture models can proceed after specifying prior distributions on the weights and the
kernel parameters $\{(\pi_{k},\lambda_{k}):k\in\mathcal{K}\}$ \citep{marin:2005}.
A flexible and convenient choice of prior for the mixing weights is a generative procedure called the stick breaking process (SBP) \citep{ayaram1994constructive,ishwaran2001gibbs}. The general scheme of the SBP starts with drawing a sequence of independent random variables $v_1,v_2,\ldots$ supported on $(0,1)$. Then the weight for the $k$th cluster is given as
\[ \pi_k = v_k\prod_{l=1}^{k-1} (1-v_l).\]
A popular two-parameter specification is the Poisson-Dirichlet process \citep{kingman1975random,pitman1997two}, corresponding to $v_k\sim {\rm Beta}(1-\gamma,\alpha+k\gamma)$ for some parameters $\alpha$ and $\gamma$. In particular, when $\gamma=0$, this boils down to the weight generative mechanism of a Dirichlet process \citep{ferguson1973dp,ayaram1994constructive}, which we shall refer to as the SBP($\alpha$) process.
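For concreteness, a truncated draw of the SBP($\alpha$) weights takes only a few lines (a minimal NumPy sketch; the truncation level $K$ is for illustration only):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sbp_weights(alpha, K):
    """First K weights of an SBP(alpha) draw via truncated stick breaking,
    with v_k ~ Beta(1, alpha)."""
    v = rng.beta(1.0, alpha, size=K)
    # pi_k = v_k * prod_{l<k} (1 - v_l)
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))

w = sbp_weights(alpha=1.0, K=20)
print(np.round(w, 3), "leftover mass:", 1.0 - w.sum())
\end{verbatim}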
By adopting the SBP($\alpha$) prior on the weights, along with a prior $H$ on the kernel parameters, we obtain a Dirichlet process mixture (DPM) model:
\begin{align*}
\vect{\pi} = (\pi_k : k \in \mathcal{K}) & \sim \text{SBP}(\alpha) \quad \text{and} \quad \lambda_k \stackrel{\mathrm{iid}}{\sim} H, \quad k \in \mathcal{K}.
\end{align*}
The most commonly adopted kernel distributions
are location-scale families such as the (multivariate) Gaussian family, i.e., $g(\cdot| \lambda_k) =
N(\cdot | \mu_k, \Sigma_k)$. In this case, $H$ is often chosen to be the corresponding conjugate prior such as a normal-inverse-Wishart (NIW) prior on ($\mu_k,\Sigma_k$).
\subsection{Two techniques for hierarchically modeling related samples}
\label{sec:techniques}
Now assume $J$ samples of observations $\vect{y}_j = (y_{1,j}, \ldots,
y_{n_j,j})$ for $j=1, \ldots, J$ have been collected, and the observations in each sample are modeled by a mixture:
\begin{align*}
y_{i,j} & \stackrel{\mathrm{ind}}{\sim} F_j, \quad
i=1,
\ldots n_j\quad \text{and} \quad j=1, \ldots, J \\
f_j(\cdot) & = \sum_{k \in \mathcal{K}} \pi_{j,k}\, g( \cdot| \lambda_{j,k}), \quad j = 1, \ldots, J,
\end{align*}
where $f_j$ is the probability density function of $F_j$, and $\lambda_{j,k}$ represents the kernel parameter for the $k$th cluster in the $j$th sample. To characterize potential relationships across the samples, let us assume that the $k$th component under each sample represents the same cluster (e.g., cell subtype). Note that this does not exclude the possibility of having novel clusters that appear in only one or some of the samples, in which case the weight $\pi_{j,k}=0$ if cluster $k$ is absent in the $j$th sample. Again we let $\mathcal{K}$ be the collection of all cluster indices over all the samples. Let $Z_{i,j}$ be a latent variable indicating the cluster $k \in \mathcal{K}$ to which the data point $y_{i,j}$ belongs. Then the model can be equivalently written as
\begin{align*}
[ y_{i,j} | Z_{i,j} = k, \mu_{j,k}, \Sigma_{k} ] & \stackrel{\mathrm{ind}}{\sim} N(y_{i,j} |
\mu_{j,k}, \Sigma_{k}) \quad \text{and} \quad \Pr(Z_{i,j} = k) = \pi_{j,k} \text{ for $k\in\mathcal{K}$.}
\end{align*}
We next introduce techniques for prior choices on the weights and on the kernel parameters by extending the stick breaking prior and the kernel respectively, which will address the three data features and challenges described in the Introduction.
\paragraph{$\psi$-stick breaking for weights}
We consider a generative stick breaking procedure called ``$\psi$-stick breaking'' (for reasons to be explained below), which breaks $J$ sticks of unit length---one for each sample---in a dependent manner to generate the mixing weights $\{\pi_{j,k}:k=1,2,\ldots\}$ for $j=1,2,\ldots,J$. We start by observing that each cluster falls into one of two categories $\mathcal{K}_0$ and $\mathcal{K}_1$, that is $\mathcal{K}=\mathcal{K}_0\cup \mathcal{K}_1$ with $\mathcal{K}_0\cap\mathcal{K}_1 = \emptyset$: those in $\mathcal{K}_0$ have weights that do not vary across the $J$ samples (e.g., cell types whose abundance is constant across experimental conditions), i.e.,
$ \pi_{j,k} = \pi_{j', k}$ for all $j,j'=1, \ldots, J$ and $k \in \mathcal{K}_0$, whereas those in $\mathcal{K}_1$ have varying weights across samples.
The generative process proceeds in two steps and is illustrated in Figure~\ref{fig:weights}. In the first step, we break the $J$ sticks at exactly the same spot into two pieces of length $\rho$ and $1-\rho$ respectively, where $\rho\in (0,1)$ is drawn as a Beta random variable.
Then in the second step, we use the $J$ pieces of length $\rho$ to generate the weights for the components in $\mathcal{K}_0$, and the $J$ pieces of length $1-\rho$ for the clusters in $\mathcal{K}_1$.
Hence the parameter $\rho$ is interpreted as the overall proportion of the clusters with constant weights across samples.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{sticks.pdf}
\caption{Illustration of the $\psi$-stick breaking procedure with the $s$-stick (left) and the $i$-sticks (right).}
\label{fig:weights}
\end{figure}
Specifically, one can imagine that we {\em tie} the $J$ sticks of length $\rho$ together and break them using a single SBP as if they were a single stick---always at the same locations. For this reason, we shall refer to the common stick formed by tying the $J$ sticks of length $\rho$ as the ``shared'' stick, or the $s$-stick. Let $\{w_{0,k}: k\in\mathcal{K}_0\}$ with $\sum_{k\in\mathcal{K}_0} w_{0,k}=1$ be the randomly generated {\em relative} sizes of the components in $\mathcal{K}_0$ in terms of the proportions of the $s$-stick. So the absolute size of each cluster that does not change across samples is given by $\pi_{j,k}=\rho w_{0,k}$ for all $j=1,2,\ldots,J$ and $k\in\mathcal{K}_0$.
On the other hand, we break the $J$ sticks of length $1-\rho$ {\em independently} using separate independent SBPs, each generating the weights for one of the $J$ samples, corresponding to the sizes of clusters that vary across samples. For this reason, we shall refer to the $J$ sticks of length $1-\rho$ as the ``idiosyncratic'' sticks, or the $i$-sticks. We let $\{w_{j,k}:k\in\mathcal{K}_1\}$ for $j=1,2,\ldots,J$ with $\sum_{k\in\mathcal{K}_1}w_{j,k}=1$ be the randomly generated lengths of the components as proportions of the corresponding $i$-stick. So for the $k$th cluster, its weight in the $j$th sample is given by $\pi_{j,k}=(1-\rho)w_{j,k}$.
Using SBP$(\alpha)$ processes for breaking each of the $s$- and $i$-sticks, we arrive at a joint generative model for the weights in all of the $J$ samples, which we call ``shared/idiosyncratic'' (si or $\psi$) stick breaking. Specifically, with a Beta prior on the length of the shared stick, the hierarchical model for the weights is
\begin{align}\label{eq:infinite_prior_on_weights}
\pi_{j,k} & =
\left\{
\begin{array}{ll}
\rho w_{0,k} & j=1, \ldots, J \text{ and } k \in \mathcal{K}_0 \\
(1 - \rho ) w_{j,k} & j=1, \ldots, J \text{ and } k \in \mathcal{K}_1
\end{array}
\right. \\
\rho & \sim \text{Beta}(a_{\rho}, b_{\rho}) \nonumber \\
(w_{0,k} : k \in \mathcal{K}_0) & \sim \text{SBP}(\alpha) \nonumber \\
(w_{j,k} : k \in \mathcal{K}_1) & \stackrel{\mathrm{iid}}{\sim} \text{SBP}(\alpha),
\quad j=1, \ldots, J. \nonumber
\end{align}
See Figure \ref{fig:weights} for a visualization of
the hierarchical prior on the mixture weights.
The hyperparameter $\alpha$ specifies the size of the clusters as well as the number of clusters (in $\mathcal{K}_0$ and $\mathcal{K}_1$ respectively), with a smaller $\alpha$ corresponding to a small number of large clusters and a larger $\alpha$ corresponding to a large number of small clusters. We infer on $\alpha$ in a hierarchical Bayesian paradigm by placing Gamma hyperprior on it:
$ \alpha \sim \text{Gamma}(\tau_{\alpha,1},\tau_{\alpha,2} ) $.
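A truncated draw from the $\psi$-stick breaking prior is equally short to sketch (the function names, hyperparameter values, and truncation levels $K_0$ and $K_1$ below are ours, for illustration):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sbp_weights(alpha, K):
    v = rng.beta(1.0, alpha, size=K)
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))

def psi_stick_weights(J, alpha=1.0, a_rho=2.0, b_rho=2.0, K0=10, K1=10):
    """Truncated draw from the psi-stick breaking prior: the first K0
    shared-stick weights plus, for each of the J samples, the first K1
    idiosyncratic weights."""
    rho = rng.beta(a_rho, b_rho)           # length of the s-stick
    w0 = sbp_weights(alpha, K0)            # one SBP for the tied s-stick
    pis = [np.concatenate((rho * w0, (1.0 - rho) * sbp_weights(alpha, K1)))
           for _ in range(J)]              # independent SBPs for i-sticks
    return np.array(pis)                   # J x (K0 + K1) weight matrix

pi = psi_stick_weights(J=3)
print(pi[:, :3])   # the first K0 columns are identical across samples
\end{verbatim}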
\paragraph{Local kernel perturbation}
We utilize a hierarchical setup to incorporate local perturbation in the kernel parameters, thereby adjusting for the misalignment and allowing more effective borrowing of information across the samples on each cluster. Specifically, we model the kernel parameters $\{\lambda_{j,k}\}$ as follows
\begin{align*}
\lambda_{0,k} &\stackrel{\mathrm{iid}}{\sim} H_0(\cdot\,|\, \phi_0) \quad \text{for $k\in\mathcal{K}$}\\
\lambda_{j,k} &\stackrel{\mathrm{iid}}{\sim} H(\cdot\,|\,\lambda_{0,k},\epsilon) \quad \text{for $j=1,2,\ldots,J$}
\end{align*}
where $\lambda_{0,k}$ represents the cross-sample ``centroid'' kernel parameter for the $k$th cluster, with a hyperprior $H_0$ specified by hyperparameter $\phi_0$. Given $\lambda_{0,k}$, the sample-specific kernel parameters $\lambda_{j,k}$ for the $k$th cluster are drawn from $H$ with additional hyperparameter $\epsilon$, which specifies the dispersion of cluster $k$ among the samples around the ``centroid''.
The above specification enforces that each cluster $k$ will have misalignment. More generally, in some problems misalignment may exist in only a subset of the clusters. To allow for such cases, we again appeal to a ``spike-and-slab'' setup by introducing an additional Bernoulli latent indicator $S_{k}$ for each cluster, such that $S_k=1$ if there is misalignment in cluster $k$ and $S_k=0$ otherwise. That is,
\[
\lambda_{j,k} \stackrel{\mathrm{ind}}{\sim} \begin{cases} \delta_{\lambda_{0,k}} & \text{if $S_{k}=0$}\\
H(\cdot|\lambda_{0,k},\epsilon) & \text{if $S_{k}=1$}
\end{cases} \qquad \text{and} \qquad S_{k}\stackrel{\mathrm{iid}}{\sim} {\rm Bernoulli}(\varphi)
\]
where $\delta_{\cdot}$ represents a point mass.
Putting the pieces together in the context of Gaussian kernels, we arrive at the following spike-and-slab version of the locally perturbed kernel model:
\begin{align*}
\Sigma_k^{-1} & \stackrel{\mathrm{iid}}{\sim} \text{Wishart}(\Psi_1, \nu_1) \\
[\mu_{j,k}| \mu_{0,k}, \Sigma_k, S_{k}] & \stackrel{\mathrm{ind}}{\sim}
\delta_{\mu_{0,k}} 1_{\{ S_{k}=0\}}
+ \text{Normal}(\mu_{0,k},
\epsilon \Sigma_{k}) 1_{\{ S_{k}=1\}} \\
[\mu_{0,k}| \Sigma_{k} ] & \stackrel{\mathrm{ind}}{\sim} \text{Normal}(m_1, \Sigma_{k}/ k_0) \\
S_{k} &\stackrel{\mathrm{iid}}{\sim} {\rm Bernoulli}(\varphi).
\end{align*}
This model is illustrated in Figure~\ref{fig:perturbation}. The hyperparameter $\epsilon$ specifies the
total amount of local variation between the means of each group $\mu_{j,k}$ and
the grand mean $\mu_{0,k}$, and $\varphi$ specifies the proportion of clusters that have misalignment.
The hyperparameters $m_1$, $\Psi_1$, $k_0$, $\epsilon$, and $\varphi$ are all characterizing ``global'' features of the data that pertain to all of the clusters and samples. We can reliably infer them by pooling information through hierarchical Bayes. In particular, in our numerical examples we adopt the following hyperpriors:
$\epsilon \sim
\text{Uniform}(a_{\epsilon},
b_{\epsilon} )$,
$ m_1 \sim \text{Normal}( m_2 ,S_2)$,
$ \Psi_1 \sim \text{Inverse-Wishart}(\Psi_2 ,\nu_2 )$,
$ k_0 \sim \text{Gamma}( \tau_1/2, \tau_2/2 )$, and $\varphi \sim \text{Beta}(a_{\varphi}, b_{\varphi})$.
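For illustration, a single generative draw from this locally perturbed kernel
model may be sketched as follows (the function name, array shapes, and
hyperparameter handling are placeholders of our choosing):
\begin{verbatim}
import numpy as np
from scipy.stats import wishart

def draw_perturbed_kernels(J, K, eps, varphi, m1, k0, Psi1, nu1, rng):
    # One draw of the spike-and-slab locally perturbed Gaussian kernels:
    # a shared precision per cluster, a centroid mean, and sample-specific
    # means that are perturbed only when S_k = 1.
    m1 = np.asarray(m1)
    p = m1.shape[0]
    mus = np.empty((J, K, p))
    Sigmas = np.empty((K, p, p))
    for k in range(K):
        Lam = wishart.rvs(df=nu1, scale=Psi1, random_state=rng)
        Sigma = np.linalg.inv(Lam)                     # cluster covariance
        mu0 = rng.multivariate_normal(m1, Sigma / k0)  # centroid mean
        S_k = rng.random() < varphi                    # misalignment flag
        for j in range(J):
            mus[j, k] = (rng.multivariate_normal(mu0, eps * Sigma)
                         if S_k else mu0)
        Sigmas[k] = Sigma
    return mus, Sigmas
\end{verbatim}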
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{perturbation}
\vspace{-1em}
\caption{A locally perturbed Gaussian kernel with a spike-and-slab setup.
When $S_k=0$, all kernels for the $k$th cluster are identical across samples. When $S_k=1$, the kernels are centered around a common mean but are not identical. }
\label{fig:perturbation}
\end{figure}
\subsection{Posterior inference based on MCMC sampling}
\label{sec:computation}
Posterior inference can be carried out through Markov Chain Monte Carlo (MCMC).
One option is to use the standard P\'olya urn scheme of \cite{muller2004method}. A benefit of this sampling scheme is that all the random weights
are integrated out. However it can be computationally inefficient for large
datasets such as in flow cytometry experiments.
Alternatively, one can approximate the nonparametric model with a finite model
and use a blocked Gibbs sampler \citep{ishwaran2001gibbs}, which is more
efficient in terms of mixing and computational speed, and hence is what we recommend.
To this end, two different finite approximation strategies are commonly adopted for DPMs and other stick-breaking mixtures: (i) truncating the stick breaking at some maximum number of components and (ii) using a finite-dimensional
symmetric Dirichlet distribution.
These two approximations might look very different at first, but the
main difference between the two is in the induced stochastic ordering of the weights, which is
irrelevant in mixture models.
In fact, as \cite{kurihara2007collapsed} points out, one can apply a size-biased permutation to the
order of the weights of a finite symmetric Dirichlet distribution and obtain a
distribution which is practically identical to the truncated SBP. However, the two strategies are not computationally equivalent for mixture models. The weights under the symmetric finite-Dirichlet approximation are
exchangeable, which
results in substantially improved mixing over truncating the SBP. Therefore we opt for the symmetric finite Dirichlet approximation in our implementation.
This approximation has been studied and used by many authors in a
variety of contexts. See \cite{neal2000markov},
\cite{green2001modelling} and \cite{ishwaran2002exact}, among others.
Specifically, under this approximation, the infinite sequences of mixture weights in Eq.~\eqref{eq:infinite_prior_on_weights} are replaced by:
\begin{align*}
(w_{0,k} : k \in \mathcal{K}_0) & \sim \text{Dirichlet}(\alpha/K_0,\alpha/K_0,\ldots,\alpha/K_0) \\
(w_{j,k} : k \in \mathcal{K}_1) & \stackrel{\mathrm{iid}}{\sim}\text{Dirichlet}(\alpha/K_1,\alpha/K_1,\ldots,\alpha/K_1),
\quad \text{for } j=1, \ldots, J,
\end{align*}
where $K_0$ and $K_1$ represent the numbers of mixture components that are
shared and idiosyncratic across the samples, respectively.
In the nonparametric case, both $\mathcal{K}_0$ and $\mathcal{K}_1$ are infinite, while in the finite
approximation we need to choose $K_0$ and $K_1$.
A simple choice is to set $K_0 = K_1 = K $ for some large $K$ which
represents an upper bound on the a priori expected number of mixture components.
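With this choice, one draw of the approximate weights reduces to two calls to
\texttt{numpy} (a sketch; the truncation level and seed are ours):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)
K0 = K1 = 30; alpha = 1.0; J = 3
w0 = rng.dirichlet(np.full(K0, alpha / K0))          # shared weights
w = rng.dirichlet(np.full(K1, alpha / K1), size=J)   # one row per sample
\end{verbatim}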
With this specification, next we give the details on the MCMC sampler for the joint posterior in terms of the full conditionals:
\begin{enumerate}
\item Latent assignments for $ i = 1, \ldots, n_j $ and $ j=1, \ldots, J$ (a code sketch of this step follows the list):
$$
\Pr(Z_{i,j} = k | \ldots ) \propto
\pi_{j,k} \text{Normal}(y_{i,j}|\mu_{j,k},
\Sigma_{k}), \quad k \in
\mathcal{K}.
$$
\item Mixture weights:
\begin{align*}
[ w_{0,1}, \ldots, w_{0, K_0} | \ldots ] & \sim
\text{Dirichlet}( n_{0,1} + \alpha/K_0, \ldots, n_{0,K_0 } +
\alpha/K_0) \\
[ w_{j,1}, \ldots, w_{j, K_1} | \ldots ] & \stackrel{\mathrm{ind}}{\sim}
\text{Dirichlet}( n_{j,1} + \alpha/K_1, \ldots, n_{j,K_1 } +
\alpha/K_1),
\end{align*}
where $n_{0,k} = |\{(i,j) : Z_{i,j} = k,\; i=1, \ldots, n_j,\; j = 1,
\ldots, J\}|$ for $k \in \mathcal{K}_0$, and
$n_{j,k} = |\{i : Z_{i,j} = k,\; i=1, \ldots, n_j\}|$ for
$j=1, \ldots, J$ and $k \in \mathcal{K}_1$.
\item Latent perturbation state variables for $k \in \mathcal{K}$:
$$
\Pr( S_k = 1 | \ldots) = \bigg( 1 +
\dfrac{1-\varphi}{\varphi} \cdot
{\rm BF}_k \bigg)^{-1},
$$
where
\begin{align*}
{\rm BF}_k & = \bigg( \dfrac{|\Psi_{1,k}^{(0)}| }{|\Psi_{1,k}^{(1)}| }
\bigg)^{(\nu_1+ \sum_j n_{j,k})/2} \prod_j ( \epsilon n_{j,k} +
1 )^{p/2} \\
\Psi_{1,k}^{(1)} & = \bigg\{ \Psi_{1}^{-1} + \sum_j \bigg[ SS_{j,k} +
\big(\epsilon + \dfrac{1}{n_{j,k}}\big)^{-1}
(\bar{Y}_{j,k} - \mu_k) (\bar{Y}_{j,k} - \mu_k)' \bigg] \bigg\}^{-1} \\
\Psi_{1,k}^{(0)} &= [ \Psi_1^{-1} + SS_k + \sum_j n_{j,k} (\bar{Y}_k - \mu_k)
(\bar{Y}_k -
\mu_k)' ]^{-1},
\end{align*}
for $\bar{Y}_{j,k} = \sum_{i:Z_{i,j}=k} Y_{i,j} / n_{j,k}$,
$\bar{Y}_k = (\sum_{i,j:Z_{i,j}=k} Y_{i,j}) / (\sum_j n_{j,k})$, \\
$SS_{j,k} = \sum_{
\{i : Z_{i,j}=k \}} (Y_{i,j} - \bar{Y}_{j,k})(Y_{i,j} -
\bar{Y}_{j,k})'$
and
$SS_k =\sum_{ \{i,j : Z_{i,j}=k \}} (Y_{i,j} - \bar{Y}_k)(Y_{i,j} -
\bar{Y}_k)' $.
\item Precision matrices for $k \in \mathcal{K}$:
$$
[\Sigma_k^{-1} | \ldots ] \sim
\text{Wishart}\big(\Psi_{1,k}^{( S_k )},
\nu_1 + \sum_j{n_{j,k}} \big)
$$
\item Grand means for $k \in \mathcal{K}$:
$$
[ \mu_k | \ldots ] \sim
\text{Normal}\bigg(
m_{1,k}^{(S_k)}, \Sigma_k /(\sum_j (\epsilon S_k
+ 1/n_{j,k})^{-1} + k_0 ) \bigg),
$$
\item Group means for $j=1, \ldots, J$ and $k \in \mathcal{K}$:
\begin{align*}
[ \mu_{j,k}| S_k = 0, \ldots ] & \sim \delta_{\mu_k} \\
[ \mu_{j,k}| S_k = 1, \ldots ] & \sim
\text{Normal}\bigg( \dfrac{n_{j,k}
\bar{Y}_{j,k} +
\mu_k/\epsilon}{n_{j,k} + 1/\epsilon} , \Sigma_k/(n_{j,k}+1/\epsilon)
\bigg).
\end{align*}
\item A Metropolis step to explore different modes of the posterior distribution by swapping an index from $\mathcal{K}_0$ with an index from $\mathcal{K}_1$.
The proposal distribution is defined as follows. An initial index $k'$ is drawn
with probability proportional to $ \sqrt{ n_{k}}$ for $k \in \mathcal{K}$, where
$n_{k} = |\{ (i,j) : Z_{i,j} = k\}|$, and
a second index $k''$ is drawn uniformly from $\mathcal{K}_0$ if $k' \in
\mathcal{K}_1$
and
uniformly from $\mathcal{K}_1$ if $k' \in \mathcal{K}_0$. Since the proposal is
symmetric, the swap is accepted with probability:
$$
\min \bigg( \dfrac{{\rm E}_{ w,\rho}( \prod_{j,k} \pi_{j,k}^{n_{j,k}}|
\vect{Z}_{\text{new}} )}{{\rm E}_{ w,\rho}( \prod_{j,k} \pi_{j,k}^{n_{j,k}}|
\vect{Z})}, 1 \bigg),
$$
where $\vect{Z}$ and $\vect{Z}_{\text{new}}$ represent the vectors of the latent assignments
before and after the swap. Since the mixture components are exchangeable within
$\mathcal{K}_0$ and $\mathcal{K}_1$, the acceptance probability
depends only on the swapped indices. Similar strategies to improve the exploration of the
sample space have been proposed by \cite{porteous2012gibbs} and
\cite{papaspiliopoulos2008retrospective}.
\item The Dirichlet pseudo-count parameter $\alpha$ is updated using a Metropolis-Hastings step with the following proposal:
$$
\alpha^* | \alpha \sim \text{Gamma}(\alpha ^ 2 \cdot a, \alpha \cdot a),
$$
where $a$ is a tuning parameter calibrated during the burn-in.
\item Mean shrinkage parameter
$$[k_0| \ldots] \sim \text{Gamma}( (\tau_1 + p \cdot K) / 2, (\tau_2 + \sum_k(
\mu_{0,k} - m_1)' \Sigma_k^{-1} (
\mu_{0,k} - m_1) ) / 2)$$
\item Variance parameter $[\Psi_1^{-1}| \ldots] \sim \text{Wishart}( (\Psi_2 + \sum_k \Sigma_k^{-1})^{-1}, K \cdot \nu_1 + \nu_2)$.
\item Centroid mean parameter $[m_1| \ldots] \sim {\rm Normal}(Vm, V)$,
where
$$
m = S_2^{-1} m_2 + k_0 \sum_k \Sigma_k^{-1} \mu_{0,k}
$$
and
$$
V = ( S_2^{-1} + k_0\sum_k \Sigma_k^{-1} )^{-1}.
$$
\item The perturbation parameter $\epsilon$ is updated using a Metropolis step with the following proposal:
$$
\text{Uniform}(a_{\epsilon},b_{\epsilon})
$$
\item The proportion of clusters with kernel misalignment $[\varphi | \ldots] \sim \text{Beta}(a_{\varphi} + s_1, b_{\varphi} + s_0)$,
where
$s_i = |\{k : S_k = i,\; k=1, \ldots, K\}|.$
\item The ``length'' of the shared stick $[\rho | \ldots] \sim \text{Beta}(a_{\rho} + n_0, b_{\rho} + \sum_j n_j)$,
where
$n_0 = \sum_{k\in\mathcal{K}_0} n_{0,k}$ and $n_j = \sum_{k\in\mathcal{K}_1} n_{j,k}.$
\end{enumerate}
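As a concrete illustration of step 1 above, the latent assignments can be
resampled in a vectorized way as in the following sketch (array shapes and
names are our own bookkeeping; the remaining steps update their blocks
analogously):
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def update_assignments(Y, pi, mus, Sigmas, rng):
    # One Gibbs sweep over Z. Y[j] is the (n_j, p) data matrix of sample
    # j; pi[j, k], mus[j, k] and Sigmas[k] are the current parameters.
    Z = []
    K = pi.shape[1]
    for j, Yj in enumerate(Y):
        logp = np.empty((Yj.shape[0], K))
        for k in range(K):
            logp[:, k] = (np.log(pi[j, k] + 1e-300) +
                          multivariate_normal.logpdf(Yj, mus[j, k],
                                                     Sigmas[k]))
        logp -= logp.max(axis=1, keepdims=True)   # stabilize exponentials
        prob = np.exp(logp)
        prob /= prob.sum(axis=1, keepdims=True)
        u = rng.random((Yj.shape[0], 1))          # inverse-CDF sampling
        Z.append((prob.cumsum(axis=1) > u).argmax(axis=1))
    return Z
\end{verbatim}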
\section{Numerical examples}\label{sec:examples}
In this section we provide four numerical examples. In the first example
data are simulated under different mixture distributions, and we compare the goodness-of-fit of our method with respect to competing approaches.
In the second example we illustrate through a simulated dataset how our model can be used to remove small distributional shifts across related mixture distributions. In the third example
we compare the performance
of our model to other competing methods in testing and identifying differences across distributions.
In the fourth example we analyze two real flow cytometry datasets. In all of the examples, we shall refer to our Dirichlet process mixtures of Gaussians with $\psi$-stick breaking and kernel perturbation as CREMID, as it models Closely RElated MIxture Distributions.
\subsection{Example 1: Estimation and predictive performance}
In this first example, we investigate how CREMID helps achieve more effective borrowing of information across samples, thereby enhancing predictive performance. To this end, we consider four simulation scenarios, representative of a vast variety of real applications. We use the sum of $L_1$ distances of the estimated univariate predictive densities from the true densities as a measure of goodness of fit. (We use this metric instead of the more natural log predictive score or the $L_1$ distance between the multivariate predictive density and the true density because, at the time of writing, the available software for the competitor HDPM provides the marginal predictive densities but not the other two metrics.)
We consider the following multi-sample scenarios in
$\mathbb{R}^4$. In each scenario, there are three data samples ($j=1,2,3$) and the sample size for each is 100.
Below we outline the four different scenarios; a simulation sketch for the
first scenario follows the list. Some of the parameters are
omitted here, but provided in the Appendix.
\begin{enumerate}
\item Local shift:
$$
y_{i,j}| \vect{\mu}, \vect{\Sigma}, \vect{\pi} \sim
\pi_1
N(y_{i,j}| \mu_{1} + \delta_j, \Sigma_{1}) + \sum_{k=2}^4 \pi_k
N(y_{i,j}| \mu_{k}, \Sigma_{k}),
$$
where $\delta_j = ( j/2, 0, 0, 0)$ and $\mu_k \sim U(0,
10)$ for
$k=1, \ldots, 4$.
\item Global shifts:
$$
y_{i,j}| \vect{\mu}, \vect{\Sigma}, \vect{\pi} \sim
\sum_{k=1}^4 \pi_k
N(y_{i,j}| \mu_{k} + \dfrac{j}{10} \mathbbm{1}_4, \Sigma_{k}),
$$
where $\mu_k \sim U(0, 10)$ for
$k=1, \ldots, 4$.
\item Local weight difference:
\begin{multline}
y_{i,j}| \vect{\mu}, \vect{\Sigma}, \vect{\pi} \sim
(\pi_{1} - 0.04(j-1)) N(y_{i,j}| \mu_{1}, \Sigma_{1}) \\
+ (\pi_{2} + 0.04(j-1)) N(y_{i,j}| \mu_{2}, \Sigma_{2})
+ \sum_{k=3}^4 \pi_k
N(y_{i,j}| \mu_{k}, \Sigma_{k}),
\end{multline}
where $\vect{\pi} = (0.09, 0.01, 0.8, 0.1)$ and $\mu_k \sim U(0, 10)$ for
$k=1, \ldots, 4$.
\item Global weight differences:
\begin{align*}
y_{i,j}| \vect{\mu}, \vect{\Sigma}, \vect{\pi} & \sim
\sum_{k=1}^8 \pi_{j,k}
N(y_{i,j}| \mu_{k}, \Sigma_{k}) \\
\pi_{j} & \propto \exp(m_j) \\
m_{j} & \sim N(0, S),
\end{align*}
where $\mu_k \sim U(0, 10)$ for
$k=1, \ldots, 8$.
\end{enumerate}
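As a concrete illustration, the first scenario can be simulated as in the
following sketch, using the covariance matrices listed in the Appendix
(function name and seed are ours):
\begin{verbatim}
import numpy as np

def simulate_local_shift(n=100, J=3, seed=0):
    # Scenario 1: component 1 is shifted by delta_j in sample j.
    rng = np.random.default_rng(seed)
    p = 4
    pi = np.array([0.3, 0.3, 0.2, 0.2])
    mus = rng.uniform(0, 10, size=(4, p))
    Sig = [np.full((p, p), 0.9) + 0.2 * np.eye(p),   # Sigma_1
           np.full((p, p), 1.0) + 1.0 * np.eye(p),   # Sigma_2
           np.full((p, p), -0.1) + 0.5 * np.eye(p),  # Sigma_3
           0.1 * np.eye(p)]                          # Sigma_4
    data = []
    for j in range(1, J + 1):
        z = rng.choice(4, size=n, p=pi)
        delta = np.array([j / 2, 0, 0, 0])
        y = np.stack([rng.multivariate_normal(
            mus[k] + (delta if k == 0 else 0.0), Sig[k]) for k in z])
        data.append(y)
    return data
\end{verbatim}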
We compare our method to
the hierarchical Dirichlet process mixture (HDPM) method of \cite{muller2004method}. We use the R package \texttt{DPpackage} \citep{jara2011dppackage} for fitting
HDPM. In addition, we compare both methods to independent finite mixtures of Gaussians fitted separately to each of the three samples using Mclust \citep{fraley:2002}, available in the R package \texttt{mclust}.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{distance}
\caption{Box-plots of the sum of $L_1$ distances of the estimated univariate predictive densities from the true densities for three methods.}
\label{fig:distance}
\end{figure}
In Figure \ref{fig:distance} we show the sum of $L_1$ distances of the estimated univariate predictive densities from the true densities for the three methods. CREMID is the most accurate method in the two location-shift scenarios as well as in the local weight-change scenario. In the global weight-change scenario, both our method and HDPM underperform Mclust: because the samples differ in all cluster weights, we pay a price for assuming that some cluster weights are shared.
\subsection{Example 2: Correcting for cross-sample misalignment}
A common problem in studies involving data collected from multiple labs or centers is the misalignment of the same clusters across samples due to external confounders, which is what motivated our hierarchical locally perturbed kernel construction. In flow cytometry, for example, misalignment across cell subpopulations can be substantial. An important preprocessing step is cross-sample calibration---that is, to estimate and correct for the misalignment across samples and thereby produce ``standardized'' data sets for follow-up studies. (This is akin to the registration problem in functional data analysis.) To this end, we note that for each observation $y_{i,j}$, if $Z_{i,j}=k$, that is, the observation belongs to cluster $k$, then we can compute a corrected value by adjusting for the shift in the cluster center across the samples:
\[ \tilde{y}_{i,j} = \mu_{0,k} +(y_{i,j} - \mu_{j,k}) = y_{i,j} - \Delta_{j,k}\]
where $\Delta_{j,k}=\mu_{j,k}-\mu_{0,k}$ is the displacement of cluster $k$ in sample $j$ relative to the centroid. Because $Z_{i,j}$ is unobserved, we can appeal to Bayesian model averaging (BMA) by computing the posterior mean of $\tilde{y}_{i,j}$
\[
{\rm E}(\tilde{y}_{i,j}\,|\,\vect{y})
= y_{i,j} - {\rm E}(\Delta_{j,Z_{i,j}}\,|\,\vect{y})\approx y_{i,j} - \dfrac{1}{B} \sum_{b=1}^B \Delta_{j,Z_{i,j}^{(b)}}^{(b)},
\]
where $\Delta_{j,Z_{i,j}^{(b)}}^{(b)}$ is the $b$th posterior draw on the displacement $\Delta_{j,Z_{i,j}^{(b)}}^{(b)}=\mu_{j,Z_{i,j}^{(b)}}^{(b)}-\mu_{0,Z_{i,j}^{(b)}}^{(b)}$.
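In terms of the MCMC output, this calibration is a one-pass average over the
retained draws, as in the following sketch (the draw containers and their
shapes are our assumed bookkeeping):
\begin{verbatim}
import numpy as np

def calibrate_sample(Yj, Z_draws, mu_draws, mu0_draws, j):
    # Posterior-mean calibration of the (n_j, p) data matrix Yj.
    # Z_draws[b] holds the draws of Z_{i,j}; mu_draws[b] is the
    # (J, K, p) array of mu_{j,k}; mu0_draws[b] the (K, p) centroids.
    B = len(Z_draws)
    delta = np.zeros(Yj.shape)
    for b in range(B):
        k = Z_draws[b]                                # cluster of each point
        delta += mu_draws[b][j, k] - mu0_draws[b][k]  # displacement draw
    return Yj - delta / B
\end{verbatim}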
Let us consider a numerical example based on
a mixture of normals in $\mathbb{R}^4$ to illustrate how one can remove cross-sample misalignment.
The data are generated as follows:
\begin{align*}
y_{i,1} & \sim 0.16 N(\mu_{1,1}, I) + 0.80 N(\mu_{2}, 2I) +
0.02
N(\mu_{3},0.2 I) + 0.02 N(\mu_{1,4}, 0.1 I) \\
y_{i,2} & \sim 0.09 N(\mu_{2,1}, I) + 0.80 N(\mu_{2}, 2I) +
0.09
N(\mu_{3},0.2 I) + 0.02 N(\mu_{2,4}, 0.1 I) \\
y_{i,3} & \sim 0.02 N(\mu_{3,1}, I) + 0.80 N(\mu_{2}, 2I) +
0.16
N(\mu_{3},0.2 I) + 0.02 N(\mu_{3,4}, 0.1 I),
\end{align*}
where $i=1, \ldots, 1000$, $\mu_{j,1} = (1,10-j,1,9)$, $\mu_2 = (8,8,8,8)$,
$\mu_3=(1,1,1,1)$ and
$\mu_{j,4} = (6+j,j,7,1)$.
The three plots in the first row of Figure \ref{fig:2} show the data projected
along the first two dimensions for each of the three distributions. Most of the data ($80\%$) belong to a mixture component which is identical across the three distributions. The remaining $20\%$ of the data belong to three mixture components which are different across the three distributions. The means of two mixture components are shifted across the three distributions, while two mixture components have different abundance across the three distributions.
The dashed lines in the plots help the reader identify the across-sample shift in the means.
In the second row of Figure \ref{fig:2} the three plots show the calibrated data, i.e., after removing the estimated kernel perturbations. The model is able to correctly remove the local distributional shifts across the samples.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{calibration}
\caption{ The three plots in the first row show the data from Example 2
projected along the first two dimensions for each of the three samples.
In the second row the three plots show the calibrated data, i.e., after removing the estimated kernel perturbations.}
\label{fig:2}
\end{figure}
\subsection{Example 3: Testing cross-sample differences in cluster weights}
\label{ex:cross_sample}
We consider the same multi-sample scenarios in
$\mathbb{R}^4$ used in Example~1. For each dataset we define a corresponding \emph{null} data set by permuting the labels of the three samples. In Figure \ref{fig:1} we compare the ROC curves of our method and HDPM
for testing the hypothesis that the three distributions are identical. Our method is substantially more powerful than HDPM in all four scenarios.
In these simulations, for our method we use ${\rm E}( \rho (1-\varphi) |\vect{y})$ as the test statistic.
This quantity goes to zero when there are differences in the mixture weights or in the mixture kernels across samples, and it goes to one when the distributions are identical across samples. One can adopt different test statistics under our method depending on the inference objective. For instance, if one is interested in testing just the presence of differences in weights then a suitable test statistic is ${\rm E}(\rho |\vect{y})$.
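Given the MCMC output, these statistics are simple averages over the retained
draws, as in the following sketch:
\begin{verbatim}
import numpy as np

def test_statistics(rho_draws, varphi_draws):
    # Near 1: no evidence of cross-sample differences; near 0: strong
    # evidence of differences in weights and/or kernels.
    rho = np.asarray(rho_draws)
    varphi = np.asarray(varphi_draws)
    overall = np.mean(rho * (1.0 - varphi))   # E{rho(1 - varphi) | y}
    weights_only = rho.mean()                 # E(rho | y)
    return overall, weights_only
\end{verbatim}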
We compare our method only to HDPM since Mclust does not provide a way to test for differences across samples.
In HDPM each $F_j$ is defined as a mixture of two
components: $F_j = \epsilon H_0 + (1-\epsilon) H_j$ for
$j=1, \ldots, J$. The distribution $H_0$
represents the common part, and $H_j$ represents the idiosyncratic part.
The hyperparameter $\epsilon$ (not to be confused with our kernel perturbation parameter $\epsilon$) controlling the
``degree of similarity'' across the $F_j$'s has a beta hyperprior. We use
${\rm E}(\epsilon|\vect{y})$ as the test statistic.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{roc}
\caption{
ROC curves for two methods in Example~\ref{ex:cross_sample}: HDPM \citep{muller2004method} in black solid, our method in red dashed.}
\label{fig:1}
\end{figure}
\subsection{Application: flow cytometry}
In flow cytometry experiments, biomarkers are measured on a large
number of blood cells. Different cell subtypes, i.e., groups of cells sharing
similar biomarker levels, have distinct functions in the human immune system.
Identifying variations in the abundance of subtypes across multiple samples
is an important immunological question.
Additionally, the location of
a given subtype can shift slightly across samples due to both experimental
variability and other uncontrolled ``random effects''.
We analyze two datasets where each one contains three samples of 5,000 blood
cells, and for each cell six biomarkers have been measured.
\subsubsection{A control study}
The blood from a given patient was split into three samples, and each
sample went through a separate experimental procedure to generate the data. Since the three samples are essentially biologically identical, one expects
no variations in the abundance of the different subtypes or large location shifts of the cell types. Small perturbations of the cell types are likely due to additional variations in the experimental procedures.
In Figure \ref{fig:2a_1} we plot the posterior distributions of $\rho$ and $\epsilon$ for this data set under our proposed model.
The parameter $\rho$ reflects the total mass
assigned to mixture components where the mixture weights are identical across
groups.
In this dataset {\em a posteriori} this parameter concentrates around one, indicating that there is no
evidence of a difference in the mixture
weights across the three replicates.
The parameter $\epsilon$ controls the expected amount of shift in the location of each kernel across samples.
Its posterior does not
concentrate around zero, indicating the presence of small misalignment among the replicate samples due to uncontrolled sources of variation.
It is the decoupling of these two sources
of variation that allows us to correctly infer the absence of variations in the
mixture weights across the distributions of the three samples.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{flow_null_fixed}
\caption{Histograms of the posterior of $\rho$ and $\epsilon$ for the flow cytometry control study. }
\label{fig:2a_1}
\end{figure}
\subsubsection{Samples under different stimulation conditions}
In another data set, three blood samples from an individual underwent different stimulation treatments. One sample was left unstimulated, while the two remaining samples were stimulated with CEF and CMV pp65, respectively. The samples underwent separate experimental procedures in data generation. In Figure \ref{fig:3a_1} we plot the posterior distributions of $\rho$
and $\epsilon$. The parameter $\rho$ concentrates
around 0.6, indicating that there are differences in some of the mixture
weights across the three samples.
The parameter $\epsilon$ concentrates around $0.2$, either due to effects of the experiment conditions on the locations of the kernels, which is also a systematic cross-sample difference, or substantial additional variations in the experimental procedures in comparison to the control study.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{flow_alt_fixed}
\caption{Histograms of the posterior distributions of $\rho$ and $\epsilon$. }
\label{fig:3a_1}
\end{figure}
To judge the goodness-of-fit, we also compare the predictive performance of our model with Mclust, evaluated by the log predictive likelihood of a held-out ``test'' sample. We randomly select 1,000 data points from the whole data set as the ``test'' sample, while using 5,000 observations as the ``training'' sample. We had hoped to compare our method to other methods such as \cite{muller2004method}, but at the time of writing, the existing software in {\tt R} (the {\tt HDPMdensity} function in {\tt DPpackage}) crashes for these data sets, most probably due to the large sample sizes, and it does not output predictive scores.
\begin{table}[h]
\begin{center}
\begin{tabular}{lcc}
\hline\hline
&\multicolumn{2}{c}{Method} \\
\cmidrule{2-3}
Data set & CREMID & MClust \\
\hline
Control study&-15456.34&-16310.93\\
Different stimulation conditions &-14649.47&-15408.23\\
\hline
\hline
\end{tabular}
\end{center}
\caption{Log predictive score comparison for CREMID versus MClust. Larger values indicate better fit to the data.}
\label{tab:log_predictive_flow}
\end{table}
\section{Conclusion}
\label{sec:conc}
In this work we have introduced two useful techniques for modeling related data sets using mixture models---the shared/idiosyncratic stick breaking and the locally perturbed kernel. When used together, they incorporate three common data features observed in real applications---(i) samples often share the same clusters with different weights; (ii) only some clusters vary across samples; (iii) misalignment in the clusters due to extraneous causes. We have derived a Bayesian inference recipe through MCMC sampling and carried out extensive numerical studies to illustrate the gains in inferential efficiency in estimation, prediction, and hypothesis testing.
Finally, we note that while the two techniques are introduced and demonstrated in the context of mixtures of location-scale families, they are generally applicable to modeling related mixtures of other forms of kernels as well, such as mixtures of generalized linear models and mixtures of factor models. The computational details will vary but the general ideas remain the same.
\section*{Software}
R code for the proposed MCMC sampler and code for the numerical examples are available at \url{https://github.com/jacsor/cremid/} and
\url{https://github.com/jacsor/MPG-examples/}, respectively.
\section*{Acknowledgment}
The authors are very grateful to Cliburn Chan for helpful discussions. The flow cytometry data set was provided by EQAPOL (HHSN272201000045C), an
NIH/NIAID/DAIDS-sponsored, international resource that supports the development, implementation, and oversight of quality assurance programs (Sanchez PMC4138253).
\section*{Appendix}
\subsection*{Numerical Examples}
\begin{enumerate}
\item Local and global shift scenarios:
\begin{align*}
\Sigma_1(i,i) & = 1.1 \quad \text{for } i=1, \ldots, 4, \quad \Sigma_1(i,j)
= 0.9 \quad \text{for } i\neq j \text{ and } i,j=1, \ldots, 4; \\
\Sigma_2(i,i) & = 2.0 \quad \text{for } i=1, \ldots, 4, \quad \Sigma_2(i,j)
= 1.0 \quad \text{for } i\neq j \text{ and } i,j=1, \ldots, 4; \\
\Sigma_3(i,i) & = 0.4 \quad \text{for } i=1, \ldots, 4, \quad \Sigma_3(i,j)
= -0.1 \quad \text{for } i\neq j \text{ and } i,j=1, \ldots, 4; \\
\Sigma_4(i,i) & = 0.1 \quad \text{for } i=1, \ldots, 4, \quad \Sigma_4(i,j)
= 0.0 \quad \text{for } i\neq j \text{ and } i,j=1, \ldots, 4; \\
\vect{\pi} & = (0.3, 0.3, 0.2, 0.2).
\end{align*}
\item Local weight difference:
$\Sigma_k$ for $k=1, \ldots, 4$ are identical to those in the local shift
and global shift scenarios.
\item Global weight differences:
\begin{align*}
\Sigma_1 & = \text{diag}(1,1,1,1); \\
\Sigma_2 & = \text{diag}(2,2,2,2); \\
\Sigma_3 & = \text{diag}(0.2,0.2,0.2,0.2); \\
\Sigma_k & = \text{diag}(0.1,0.1,0.1,0.1) \quad \text{for } k = 4, \ldots,
8.
\end{align*}
\end{enumerate}
\bibliographystyle{Chicago}
\section{Introduction}
\subsection{Previous Results}
In a series of papers, we have begun to develop non-equilibrium thermodynamics
starting from the second law and ensuring the additivity of entropy as a state
function
\cite{Gujrati-Non-Equilibrium-I,Gujrati-Non-Equilibrium-II,Gujrati-Non-Equilibrium-III}.
The central idea in this approach is that of \emph{internal equilibrium}
within a macroscopic system $\Sigma$ surrounded by an extremely large medium
$\widetilde{\Sigma}$; the two form an isolated system $\Sigma_{0}$ as shown in
Fig. \ref{Fig_Systems}. While the entropy $S(t)$ and the general
non-equilibrium thermodynamic potential $\Omega(t)$, such as the
non-equilibrium Gibbs free energy $G(t)$ of the system (see
\cite{Gujrati-Non-Equilibrium-II} for more details), \emph{exist} even when
the system is not in internal equilibrium, the Gibbs fundamental relation
exists only when the system is in internal equilibrium:
\begin{equation}
dS(t)=\mathbf{y}(t)\cdot d\mathbf{X}(t)+\mathbf{a}(t)\cdot d\mathbf{I}(t),
\label{Gibbs_Fundamental_relation_0}
\end{equation}
where $\mathbf{X}(t)$ and $\mathbf{I}(t)$ represent the set of
observables and the set of internal variables, respectively, collectively
denoted by $\mathbf{Z}(t)$. The entropy $S(\mathbf{Z}(t),t)$ away
from equilibrium, no matter how far from equilibrium, is normally a function
of $\mathbf{Z}(t)$ and $t$. However, when the system is in internal
equilibrium, where Eq. (\ref{Gibbs_Fundamental_relation_0}) remains valid,
$S(t)$ has \emph{no} explicit $t$-dependence; the temporal evolution of the
entropy in this case comes from the time-dependence in $\mathbf{Z}(t)$, with
$\mathbf{X}(t)$ and $\mathbf{I}(t)$ still independent of each other.
The coefficients $\mathbf{y}(t)$ and $\mathbf{a}(t)$ represent the
derivatives of the entropy and are normally called the internal field and the
internal affinity, respectively. The energy $E$, volume $V$ and the number of
particles $N$ play a very special role among the observables, and the
corresponding internal fields are given by
\begin{figure}[ptb]
\begin{center}
\includegraphics[height=2.5322in,width=5.2243in]{System_Modified_1.eps}
\caption{Schematic representation of a system $\Sigma$ and the medium
$\widetilde{\Sigma}$ surrounding it to form an isolated system $\Sigma_{0}$.
The medium is described by its fields $T_{0},P_{0},$ etc. while the system, if
in internal equilibrium (see text), is characterized by $T(t),P(t),$ etc.}
\label{Fig_Systems}
\end{center}
\end{figure}
\begin{equation}
\frac{1}{T(t)}=\left( \frac{\partial S(t)}{\partial E(t)}\right)
_{\mathbf{Z}^{\prime}(t)},\ \ \frac{P(t)}{T(t)}=\left( \frac{\partial
S(t)}{\partial V(t)}\right) _{\mathbf{Z}^{\prime}(t)},\ \ \frac{\mu(t)}
{T(t)}=-\left( \frac{\partial S(t)}{\partial N(t)}\right) _{\mathbf{Z}^{\prime}(t)},
\label{Field_Variables_0}
\end{equation}
where $\mathbf{Z}^{\prime}(t)$ denotes all other elements of $\mathbf{Z}(t)$
except the one used in the derivative. Thus, internal temperature, pressure,
etc. have a meaning only when the system comes into internal equilibrium. In
general, the internal field $\mathbf{y}(t)$ and affinity $\mathbf{a}(t)$ are
given by
\begin{equation}
\mathbf{y}(t)\equiv\frac{\mathbf{Y}(t)}{T(t)}\equiv\left( \frac{\partial
S(t)}{\partial\mathbf{X}(t)}\right) _{\mathbf{Z}^{\prime}(t)},\ \ \mathbf{a}(t)\equiv\frac{\mathbf{A}(t)}{T(t)}\equiv\left( \frac{\partial S(t)}
{\partial\mathbf{I}(t)}\right) _{\mathbf{Z}^{\prime}(t)}.
\label{Field_Variables_1}
\end{equation}
The fields of the medium $T_{0},P_{0},\mu_{0}$, etc., which we collectively
denote by $\mathbf{Y}_{0}$, are different from the internal fields of the
system unless the latter comes to equilibrium with the medium. The same is
also true of the affinity, except that the affinity vector $\mathbf{A}_{0}=0$
for the medium; see II.
From now on, we will only consider the case when the system is in internal
equilibrium. The heat transfer is given by
\begin{equation}
dQ=T(t)dS(t)=T_{0}d_{\text{e}}S(t), \label{Heat_Transfer}
\end{equation}
where $d_{\text{e}}S(t)$ is the entropy exchange with the medium. The
irreversible entropy generation $d_{\text{i}}S(t)$ within the system is given
by
\[
d_{\text{i}}S(t)\equiv dS(t)-d_{\text{e}}S(t)\geq0.
\]
Thus, as long as the system is not in equilibrium, $T(t)\neq T_{0}$;
accordingly, $d_{\text{i}}S(t)>0$ in accordance with the second law. There is
irreversible entropy production even when the system is in internal
equilibrium; the latter only allows us to introduce the internal fields and
affinities via Eqs. (\ref{Field_Variables_0}) and (\ref{Field_Variables_1}).
In the absence of any internal variables, the Gibbs fundamental relation is
given by
\begin{equation}
T(t)dS(t)=dE(t)+P(t)dV(t)-\mu(t)dN(t)
\label{Gibbs_Fundamental_relation_Internal}
\end{equation}
for the special case when $\mathbf{X}(t)$ only contains $E(t),V(t)$ and
$N(t)$. For a fixed number of particles, the last term would be absent. As
said above, the temperature, pressure, etc. of the medium and the system are
usually different when the system is out of equilibrium with the medium. Only
in equilibrium do they become equal, in which case, the Gibbs fundamental
relation in Eq. (\ref{Gibbs_Fundamental_relation_Internal}) reduces to the
standard form
\begin{equation}
T_{0}dS_{\text{eq}}=dE_{\text{eq}}+P_{0}dV_{\text{eq}}-\mu_{0}dN_{\text{eq}},
\label{Gibbs_Fundamental_relation_Equilibrium}
\end{equation}
in which none of the quantities has any time-dependence; the extensive
quantities represent the equilibrium values and are denoted by the additional
suffix. One normally considers a system with fixed number of particles, in
which case, the last term is absent in Eq.
(\ref{Gibbs_Fundamental_relation_Equilibrium}). In the following, we will not
explicitly show the additional suffix unless clarity is needed. The following
Maxwell relations, which follow from the Gibbs fundamental relation, Eq.
(\ref{Gibbs_Fundamental_relation_Equilibrium}), are well known and can be
found in any good textbook on thermodynamics such as \cite{Landau}:
\begin{align}
\left( \frac{\partial T_{0}}{\partial V}\right) _{S,N} & =-\left(
\frac{\partial P_{0}}{\partial S}\right) _{V,N},\ \ \ \left( \frac{\partial
T_{0}}{\partial P_{0}}\right) _{S,N}=\left( \frac{\partial V}{\partial
S}\right) _{P_{0},N},\nonumber\\
\left( \frac{\partial P_{0}}{\partial T_{0}}\right) _{V,N} & =\left(
\frac{\partial S}{\partial V}\right) _{T_{0},N},\ \ \ \left( \frac{\partial
S}{\partial P_{0}}\right) _{T_{0},N}=-\left( \frac{\partial V}{\partial
T_{0}}\right) _{P_{0},N}. \label{Maxwell_Relations}
\end{align}
In equilibrium, there is no explicit $t$-dependence in $\mathbf{Z}$; moreover,
the internal variable $\mathbf{I}$ is no longer independent of $\mathbf{X}$.
The equilibrium field and affinity of the system become equal to those of the
medium ($\mathbf{Y}_{0}$ and $\mathbf{A}_{0}=0$); see
\cite{Gujrati-Non-Equilibrium-II}. Thus, the Gibbs fundamental relation
reduces to
\begin{equation}
dS=\mathbf{y}_{0}\cdot d\mathbf{X},
\label{Gibbs_Fundamental_relation_Equilibrium_0}
\end{equation}
compare with Eq. (\ref{Gibbs_Fundamental_relation_Equilibrium}). The
equilibrium value of the internal variable can be expressed as a function of
the equilibrium value of $\mathbf{X}$
\[
\mathbf{I}=\mathbf{I}_{\text{eq}}(\mathbf{X}_{\text{eq}}).
\]
We now observe the similarity between the Gibbs fundamental relations in Eqs.
(\ref{Gibbs_Fundamental_relation_Equilibrium}) and
(\ref{Gibbs_Fundamental_relation_Equilibrium_0}). This strongly suggests that
there may also exist analogs of the Maxwell relations or other important
relations that are based on Eq.
(\ref{Gibbs_Fundamental_relation_Equilibrium_0}) for a system that, although
not in equilibrium with the medium, is in internal equilibrium. In this sequel
to the earlier papers
\cite{Gujrati-Non-Equilibrium-I,Gujrati-Non-Equilibrium-II,Gujrati-Non-Equilibrium-III},
which we denote by I, II, and III, respectively, we develop the consequences
of this internal equilibrium thermodynamics for important relations such as
Maxwell relations, the Clausius-Clapeyron equation, etc. These extensions will
play an important role in non-equilibrium systems that are nonetheless in
internal equilibrium.
The time-variation of the internal temperature $T$ of a non-equilibrium system
such as a glass is due to the time dependence of the observables $\mathbf{X}(t)$
such as $E(t),V(t)$, etc. and of the internal variable $\mathbf{I}(t)$.
For example, at fixed $T_{0}$, the internal temperature will continue to
change during structural relaxation. The internal temperature will also change
if the temperature of the medium changes. Thus,
\[
dT=\left( \frac{\partial T}{\partial\mathbf{X}}\right) \cdot d\mathbf{X}
+\left( \frac{\partial T}{\partial\mathbf{I}}\right) \cdot d\mathbf{I}.
\]
The rate of change of the internal temperature can be expressed in terms of
the rate of change $r=dT_{0}/dt$:
\begin{equation}
\frac{dT}{dt}=\left( \frac{\partial T}{\partial\mathbf{X}}\right) \cdot
\frac{d\mathbf{X}}{dt}+\left( \frac{\partial T}{\partial\mathbf{I}}\right)
\cdot\frac{d\mathbf{I}}{dt}. \label{temperature-time_derivative}
\end{equation}
Similarly,
\begin{equation}
\frac{dT}{dT_{0}}=\left( \frac{\partial T}{\partial\mathbf{X}}\right)
\cdot\frac{d\mathbf{X}}{dT_{0}}+\left( \frac{\partial T}{\partial\mathbf{I}}
\right) \cdot\frac{d\mathbf{I}}{dT_{0}}. \label{T_T0_Derivative0}
\end{equation}
The same analysis can be carried out for other internal fields.
\subsection{Present Goal}
Our aim in this work is to follow the consequences of internal equilibrium in
a non-equilibrium system to find the generalization of Maxwell's relations,
the Clausius-Clapeyron relation, and the relations between response functions
to non-equilibrium states. We will also be interested in glasses in this
work; they are traditionally treated as non-equilibrium states. Therefore, we
begin with a discussion of what is customarily called a glass and the
associated glass transition in Sect. \ref{Marker_Glass_Transitions}. A careful
discussion shows that the term does not refer to one single transition;
rather, it can refer to different kinds of transitions, some of which appear
similar to the conventional transitions in equilibrium, but the others refer to
apparent transitions where the Gibbs free energy cannot be continuous. There
are some well-known approximate approaches to glasses. We will briefly discuss
them. We then turn to our main goal to extend the Maxwell's relations, where
Jacobians are found to be quite useful. Therefore, we introduce Jacobians and
their various important properties in Sect. \ref{Marker_Jacobians}. This is
technical section, but we provide most of the required details so that the
clarity of presentation is not compromised. An important part of this section
is to show that the Jacobians can be manipulated in a straight forward manner
even in a subspace of the variables. This is important as the observations
require manipulating the observables and not the internal variables. Thus, the
experimental space refers to a subspace (Sect. \ref{Marker_Subspace}) of the
space where non-equilibrium thermodynamics is developed. Thermodynamic
potentials for non-equilibrium states are formulated in Sect.
\ref{Marker_Potentials}. We develop the generalization of the Maxwell's
relations in Sect. \ref{Marker_Maxwell_Relations}. We discuss generalization
of the Clausius-Clapeyron relation in Sect.
\ref{Marker_Clausius_Clapeyron_Relation}, where we also discuss the conditions
for phase transitions in non-equilibrium states. The response functions such
as the heat capacities, compressibilities and the expansion coefficients and
various relations among them are developed for non-equilibrium states in Sect.
\ref{Marker_Response_Functions}. The Prigogine-Defay ratio for glasses
is evaluated at various possible glass transitions in Sect.
\ref{Marker_PD_Ratio}. We compare our approach with some of the existing
approaches in determining the ratio in this section. The last section contains
a brief summary of our results.
\section{Glass Transitions and Apparent Glass
Transitions\label{Marker_Glass_Transitions}}
An example of non-equilibrium systems under investigation here is a glass
\cite{Gujrati-book}; see Figs. \ref{Fig_GlassTransition_V} and
\ref{Fig_GlassTransition_G}. A supercooled liquid L is a \emph{stationary}
(time-independent) metastable state \cite{Gujrati-book}, which for our
purpose, represents an equilibrium state (by not allowing the crystalline
state into consideration), and is shown by the curve ABF under isobaric
condition at a fixed pressure $P_{0}$ of the medium. We will refer to the
equilibrium liquid always as L in the following. In contrast, a
non-equilibrium liquid state will be designated gL here, and represents a
\emph{time-dependent} metastable state \cite{Gujrati-book}. The choice of gL
is to remind us that it is a precursor to the eventual glass GL at a lower
temperature. The equilibrium liquid L is obtained by cooling the liquid L and
waiting long enough at each temperature for it to come to equilibrium with the medium.
However, if it is obtained at a fixed cooling rate $r$, then at some
temperature $T_{0\text{g}}(P_{0}),$ L cannot come to equilibrium and turns
into gL; the resulting curve BD leaves ABF tangentially at B, and gradually
turns into an isobaric glass GL represented by the segment DE at D, when the
viscosity becomes so large ($\sim10^{13}$ poises) that it appears as a solid.
At B, the transition is from an equilibrium liquid L to a non-equilibrium
liquid form gL, and will be called the L-gL transition. In the literature, it
is commonly known as a transition from an ergodic state (L) to a non-ergodic
state (gL). In our opinion, this is a misnomer, as the concept of
\emph{ergodicity} refers to the long-time, indeed the infinite-time,
behavior. In this limit, there will be no gL, only L. Therefore, we will refer
to this transition at $T=T_{0\text{g}}(P_{0})$ as a L-gL transition or a
\emph{precursory} glass transition. The true glass transition at D is not a
transition from L to GL, but a transition from gL to GL. We will refer to the
glass transition at the lower temperature $T_{0\text{G}}(P_{0})$ at D as the
\emph{actual} glass transition, or simply the glass transition. The transition
region BD represents a time-dependent metastable supercooled liquid (to be
distinguished from the stationary metastable supercooled liquid L denoted by
ABF), which turns into a glass at D. The expansion coefficient in the glass is
almost identical to that of the corresponding crystal below D. The glass
continuously emerges out of gL at D, whose location is also determined by the
rate $r$ of cooling. The relaxation time $\tau$ of the system (the supercooled
liquid) becomes equal (really comparable) to the observation time
$\tau_{\text{obs}}$ at B. As seen in Fig. \ref{Fig_GlassTransition_V}, the
volume remains continuous at B and D at the two glass transitions. The same is
also true of the entropy. Indeed, the state of the system changes continuously
at B and D, which is highly reproducible for a given cooling rate $r$ or the
observation time $\tau_{\text{obs}}$. Thus, the points B and D can be taken as
well-defined and unique glass transition temperatures $T_{0\text{g}}(P_{0})$
and $T_{0\text{G}}(P_{0})$ associated with the points B and D, respectively, in
both figures. Both transitions represent a non-equilibrium version of a
continuous transition (See Sect. \ref{Marker_Clausius_Clapeyron_Relation} for
elaboration on this point), where not only the Gibbs free energy, see Fig.
\ref{Fig_GlassTransition_G}, but also its derivatives are continuous. The
non-equilibrium nature of the transition appears in the dependence of the
value of $T_{0\text{g}}(P_{0})$ and $T_{0\text{G}}(P_{0})$ on the rate of
cooling. The continuity of the Gibbs free energy at B and D makes them as
genuine candidates as (glass) transition points, a requirement of a transition
in equilibrium thermodynamics. Therefore, both these transitions will be
collectively called \emph{conventional transitions} in this work.
\begin{figure}[ptb]
\begin{center}
\includegraphics[trim=0.595673in 0.000000in -0.099027in 0.700200in,
height=3.2093in,width=4.9338in]{GlassTransition_V.eps}
\caption{Schematic form of isobaric $V$ as a function of $T_{0}$ for a given
cooling rate. The pressure is fixed at $P_{0}$. The supercooled liquid turns
gradually into a glass through the glass transition region. The transition
temperature $T_{0\text{g}}(P_{0})$ is identified as the temperature at B,
where the actual volume begins to deviate from the extrapolated supercooled
liquid volume BC. On the other hand, the apparent glass transition temperature
$T_{0\text{g}}^{(\text{A})}(P_{0})$ is the temperature where the extrapolated
glass volume DC meets the extrapolated supercooled liquid volume BC as
indicated in the figure; this temperature lies in the glass transition
region.}
\label{Fig_GlassTransition_V}
\end{center}
\end{figure}
Unfortunately, the idea of a glass transition was formulated as a transition
between L and GL. Thus, neither of the above two glass transitions represent
the glass transition in the original sense. As the glass is considered a
frozen state, it is common to assume that over the region DE, the glass has
its internal variables, denoted by $\mathbf{I}$, frozen at their value
$\mathbf{I}_{\text{G}}$ at D, even though its observables denoted by
$\mathbf{X}$ continue to change. On the other hand, the internal variables and
the observables continue to change over BD from their values at B to their
values at D. Consequently,
\begin{figure}[ptb]
\begin{center}
\includegraphics[trim=0.499262in 0.000000in 0.000000in 0.599710in,
height=3.4705in,width=5.041in]{GlassTransition_GibbsFreeEnergy.eps}
\caption{Schematic form of the isobaric Gibbs free energy $G$ shown by the
continuous curve ABDE as a function of the medium temperature $T_{0}$ at a
fixed pressure $P_{0}$. The extrapolation of the glassy portion (GL) along DCG
and the supercooled liquid (L) portion ABC$_{0}$F do not meet; the glassy
Gibbs free energy at the apparent glass transition point C, where
$T_{0}=T_{0\text{g}}^{(\text{A})}(P_{0})$, is higher than that at
C$_{0}$ on the continuous curve L at the same temperature
$T_{0\text{g}}^{(\text{A})}(P_{0})$, showing that the extrapolation results in
a more unstable state at the apparent glass transition C than the physical
state C$_{0}$ on the continuous curve. The Gibbs free energies match at
the glass transition temperature $T_{0\text{g}}(P_{0})$ at B.}
\label{Fig_GlassTransition_G}
\end{center}
\end{figure}
the properties such as the volume of gL, which is shown schematically in Fig.
\ref{Fig_GlassTransition_V}, gradually change to those of the glass at lower
temperatures. Thus, the glass transition from AB\ to DE is not a sharp
transition. It can be argued, as we have done above, that B and D should be
taken as the glass transition points. However, the practice in the field is to
take a point between BD as a transition point obtained by electing some
well-defined rule of selection; see for example \cite{Gutzow} for a good
discussion of various ways of identifying the glass transition temperature.
One such rule commonly used is to consider the volume of the system and
introduce an \emph{apparent} glass transition temperature
$T_{0\text{g}}^{(\text{A})}(P_{0})$ by the \emph{equilibrium continuation} of the volume
BCF of AB and by the \emph{extrapolation} of the volume DCG of DE to find
their crossing point C. The state of the glass following Tool and
Narayanaswamy \cite{Tool,Narayanaswamy} is then \emph{customarily} identified
by the point C on DC. However, there is no reason to take the state at C to
represent any real glass, as the extrapolation does not have to satisfy
non-equilibrium thermodynamics; the latter is valid only along the physical
path DB for the \emph{given history }of preparation such as determined by the
fixed rate $r$ of cooling during vitrification. The glass at
$T_{0\text{g}}^{(\text{A})}(P_{0})$ must be described by the point on DB corresponding to
$T_{0\text{g}}^{(\text{A})}(P_{0})$ if we wish to employ non-equilibrium
thermodynamics. To be sure, one can find a slower cooling rate than the
one used to obtain gL at B so that the point B actually coincides with the
point C on ABF, as the latter represents L. However, the gL that will emerge
at C for the slower cooling rate has nothing to do with the extrapolated state
C on DCG. Because of the continuity of the state, the gL at the slower rate at
C will have its $A=0$ and $\xi=\xi_{\text{eq}}$, and will have its Gibbs free
energy continuous. Moreover, the new gL will follow a curve that will be
strictly below BDE. These aspects make the new gL different from the
extrapolated GL at C. Taking the point C on CD to represent the glass will be
an approximation, which we will avoid in this work, as our interest is to
apply thermodynamics in the study of glasses. Therefore, we will use the
extrapolation to only determine $T_{0\text{g}}^{(\text{A})}(P_{0})$, but the
real glass and the real liquid states are determined by the curves BD and BCF,
respectively, where our non-equilibrium thermodynamics should be applicable.
The location of this temperature $T_{0\text{g}}^{(\text{A})}(P_{0})$ depends
on the property being extrapolated. We can use the entropy of the system to
locate the apparent glass transition temperature, which would invariably give
a different value for the apparent glass transition temperature. To call one
of these temperatures a transition temperature is a misnomer for another
reason. None of these temperatures represents a ``non-equilibrium'' thermodynamic
transition for the simple reason that the two branches DCG and BCF do not have
a common Gibbs free energy at $T_{0\text{g}}^{(\text{A})}(P_{0})$ as is
clearly seen in Fig. \ref{Fig_GlassTransition_G}. The branch ABC$_{0}$F
represents the Gibbs free energy of the equilibrium supercooled liquid, while
the segment DE represents the Gibbs free energy of the glass, with the segment
BD denoting the Gibbs free energy of the system during the transition region.
The extrapolation DCG in Fig. \ref{Fig_GlassTransition_V} to determine the
glass transition temperature $T_{0\text{g}}^{(\text{A})}(P_{0})$ corresponds
to the extrapolated segment DCG in Fig. \ref{Fig_GlassTransition_G}. The Gibbs
free energy of the glass in this extrapolation is given by the point C, while
the Gibbs free energy of the supercooled liquid is determined by the point
C$_{0}$. Evidently, the two free energies are very different, with that of the
glass higher than that of the supercooled liquid, as expected from the
non-equilibrium nature of the glassy state.
The above discussion of the apparent glass transition also applies to
comparing the glass at D with the corresponding L at $T_{0\text{G}}(P_{0})$,
which will represent yet another apparent glass transition temperature. This
apparent glass transition has the same problem regarding the Gibbs free energy
as the previous one at $T_{0\text{g}}^{(\text{A})}(P_{0})$. However, this
transition differs from the apparent glass transition at
$T_{0\text{g}}^{(\text{A})}(P_{0})$ in that the ``glass'' at $T_{0\text{g}}^{(\text{A})}(P_{0})$
is not a frozen state, while the glass at D is a ``frozen'' glass to
some extent (as it also undergoes structural relaxation in time). It should
also be remarked that whether we consider the apparent glass transition at
$T_{0\text{G}}(P_{0})$ or $T_{0\text{g}}^{(\text{A})}(P_{0})$, the transition
is an example of a discontinuity in the Gibbs free energy of the two states.
This is different from the precursory glass transition and the actual glass
transition at B and D, respectively, where the Gibbs free energy is
continuous. Because of the discontinuity in the Gibbs free energies in the
apparent glass transitions at $T_{0\text{G}}(P_{0})$ and
$T_{0\text{g}}^{(\text{A})}(P_{0})$, we will refer to these transitions as \emph{apparent
transitions} in this work. Indeed, one can think of these transitions as an
analog of a \emph{zeroth-order} transition because of the discontinuity in the
Gibbs free energy. However, it should be remarked that the apparent
transitions do not represent any transition in the system; those transitions
are the two conventional transitions discussed above. The apparent transitions
represent our desire to compare two distinct states. This is like comparing
the supercooled liquid with the crystal at the same temperature and pressure.
Therefore, a discontinuity in the Gibbs free energy is not a violation of the
principle of continuity discussed in \cite{Gujrati-Non-Equilibrium-III}.
We will consider all of the above glass transitions later when we discuss the
evaluation of the Prigogine-Defay ratio
\cite{Prigogine-Defay,Davies,Goldstein,Gupta,DiMarzio} in Sect.
\ref{Marker_PD_Ratio}. In this ratio, a non-equilibrium state is compared with
the equilibrium supercooled liquid state along ABF. In the classic approach
adopted by Simon \cite{Simon,Gutzow}, the temperature range
$(T_{0\text{G}}(P_{0}),T_{0\text{g}}(P_{0}))$ is shrunk to a point, either by considering
the apparent glass transition at $T_{0\text{g}}^{(\text{A})}(P_{0})$, or by
comparing the glass state at D with the supercooled liquid L at B. The latter
amounts to neglecting the segment BD from consideration. We will avoid this
ad-hoc approach in this work. The only possible scenario, where Simon's
approach is meaningful is that of the \emph{ideal glass transition }\cite[and
references thererin]{Gujrati-book}, in the limit the cooling rate
$r\rightarrow0$. In this limiting case, the crossover region BD disappears and
the ideal glass IGL emerges directly out of the L at the ideal glass
transition temperature $T_{0\text{IG}}$. This is a conventional continuous
transition between the two stationary states IGL and L, both of which remain
in equilibrium with the medium at $T_{0},P_{0}$. There is no need to invoke
any internal variable $\mathbf{I}$ to describe the ideal glass; the observable
$\mathbf{X}$ is sufficient for the investigation of the ideal glass
transition. We will revisit this point later in Sect. \ref{Marker_PD_Ratio}.
\section{Some Useful Mathematical Tools}
\subsection{Jacobian method\label{Marker_Jacobians}}
Jacobians \cite{Courant} will be found extremely useful in this work just as
they are found useful in equilibrium thermodynamics \cite{Landau}; see also
\cite{Shaw,Crawford,Pinkerton}. The $n$-th order Jacobian of
$u_{1},u_{2},\cdots u_{n}$ with respect to $x_{1},x_{2},\cdots x_{n}$ is the
$n\times n$ determinant of the matrix formed by $\partial u_{k}/\partial
x_{l}$:
\[
\frac{\partial(u_{1},u_{2},\cdots u_{n})}{\partial(x_{1},x_{2},\cdots x_{n})}
\equiv\left\vert
\begin{array}
[c]{ccccc}
\partial u_{1}/\partial x_{1} & \partial u_{1}/\partial x_{2} & . & . &
\partial u_{1}/\partial x_{n}\\
\partial u_{2}/\partial x_{1} & \partial u_{2}/\partial x_{2} & . & . &
\partial u_{2}/\partial x_{n}\\
. & . & . & . & .\\
. & . & . & . & .\\
\partial u_{n}/\partial x_{1} & \partial u_{n}/\partial x_{2} & . & . &
\partial u_{n}/\partial x_{n}
\end{array}
\end{array}
\right\vert .
\]
It is clear from the properties of the determinant that
\begin{enumerate}
\item The Jacobian vanishes if any two $u$'s are identical:
\[
\frac{\partial(u_{1},u_{2},\cdots u_{i},u_{i}\cdots u_{n})}{\partial
(x_{1},x_{2},\cdots x_{i},x_{i+1}\cdots x_{n})}=0.
\]
\item If $u_{i}$ and $u_{i+1}$ interchange their order, the Jacobian changes
its sign:
\[
\frac{\partial(u_{1},u_{2},\cdots u_{i+1},u_{i}\cdots u_{n})}{\partial
(x_{1},x_{2},\cdots x_{i},x_{i+1}\cdots x_{n})}=-\frac{\partial
(u_{1},u_{2},\cdots u_{i},u_{i+1}\cdots u_{n})}{\partial(x_{1},x_{2},\cdots
x_{i},x_{i+1}\cdots x_{n})}.
\]
\item If any $u_{i}$ is equal to $x_{i}$, the $n$-th order Jacobian reduces to
an $(n-1)$-th order Jacobian formed by derivatives at fixed $x_{i}$. For
example, for $n=2$, we have
\[
\frac{\partial(u_{1},x_{2})}{\partial(x_{1},x_{2})}=\left( \frac{\partial
u_{1}}{\partial x_{1}}\right) _{x_{2}}.
\]
\end{enumerate}
When we consider compound transformations $\left( x_{1},x_{2},\cdots
x_{n}\right) \rightarrow\left( u_{1},u_{2},\cdots u_{n}\right)
\rightarrow\left( v_{1},v_{2},\cdots v_{n}\right) $, the resulting Jacobian
is the product of the two Jacobians:
\[
\frac{\partial(v_{1},v_{2},\cdots v_{n})}{\partial(u_{1},u_{2},\cdots u_{n})}
\cdot\frac{\partial(u_{1},u_{2},\cdots u_{n})}{\partial(x_{1},x_{2},\cdots
x_{n})}=\frac{\partial(v_{1},v_{2},\cdots v_{n})}{\partial(x_{1},x_{2},\cdots
x_{n})}.
\]
The definition of a Jacobian can lead to some interesting permutation rules as
the following examples illustrate. Consider a second order Jacobian
$\partial(u_{1},u_{2})/\partial(x_{1},x_{2})=\left( \partial u_{1}/\partial
x_{1}\right) \left( \partial u_{2}/\partial x_{2}\right) -\left( \partial
u_{1}/\partial x_{2}\right) \left( \partial u_{2}/\partial x_{1}\right) $,
which can be rearranged as
\[
\frac{\partial(u_{1},u_{2})}{\partial(x_{1},x_{2})}\frac{\partial(x_{1},x_{2})}{\partial(x_{1},x_{2})}+\frac{\partial(u_{2},x_{1})}{\partial
(x_{1},x_{2})}\frac{\partial(u_{1},x_{2})}{\partial(x_{1},x_{2})}
+\frac{\partial(x_{1},u_{1})}{\partial(x_{1},x_{2})}\frac{\partial(u_{2},x_{2})}{\partial(x_{1},x_{2})}=0.
\]
This can be symbolically written as
\begin{equation}
\partial(u_{1},u_{2})\partial(x_{1},x_{2})+\partial(u_{2},x_{1})\partial(u_{1},x_{2})+\partial(x_{1},u_{1})\partial(u_{2},x_{2})=0
\label{Permutation_Property}
\end{equation}
by suppressing the common denominator in each term. The result expresses the
cyclic permutation of $u_{1},u_{2},x_{1}$ in the three terms with the
remaining variable $x_{2}$ in the same place in all terms. As a second
example, consider some quantity $u$ as a function of three variables $x,y,$
and $z$ and consider the following relation between the partial derivatives
\begin{equation}
\left( \frac{\partial u}{\partial x}\right) _{y}=\left( \frac{\partial
u}{\partial x}\right) _{y,z}+\left( \frac{\partial u}{\partial z}\right)
_{x,y}\left( \frac{\partial z}{\partial x}\right) _{y}.
\label{Partial_Derivatives_Relation}
\end{equation}
In terms of Jacobians, it can be written as
\begin{equation}
\frac{\partial(u,y)}{\partial(x,y)}=\frac{\partial(u,y,z)}{\partial
(x,y,z)}+\frac{\partial(u,x,y)}{\partial(z,x,y)}\frac{\partial(z,y)}
{\partial(x,y)}, \label{Partial_Derivatives_Relation_1}
\end{equation}
which simplifies to
\begin{equation}
\partial(x,y,z)\partial(u,y)=\partial(y,z,u)\partial(x,y)+\partial
(z,u,x)\partial(y,y)+\partial(u,x,y)\partial(z,y),
\label{Permutation_Property_1}
\end{equation}
where we have added a vanishing second term on the right because
$\partial(y,y)=0$. This relation is easily constructed by considering the
cyclic permutation of
\[
x,y,z,u
\]
by taking three consecutive terms at a time for the $3$-Jacobians, with the
remaining variable yielding the $2$-Jacobians in which the second entry is the
variable $y$, the variable that is held fixed in all derivatives in Eq.
(\ref{Partial_Derivatives_Relation}). The ordering $x,y,z$ in $x,y,z,u$ is
determined by the denominator $3$-Jacobian in the first term on the right in
Eq. (\ref{Partial_Derivatives_Relation_1}). By writing all the $3$-Jacobians
in the non-vanishing terms in Eq. (\ref{Permutation_Property_1}) so that $y$
is the second entry, and then suppressing the second entry, we obtain the
following relation:
\[
\partial(x,z)\partial(u,y)+\partial(z,u)\partial(x,y)+\partial(u,x)\partial
(z,y)=0,
\]
which is identical to the relation in Eq. (\ref{Permutation_Property}) if we
identify $u_{1}$ with $x$, $u_{2}$ with $z$, $x_{1}$ with $u$ and $x_{2}$ with
$y$.
We will use the Jacobians and their properties to first re-express the Maxwell
relations as follows:
\begin{align}
\frac{\partial(T_{0},S,N)}{\partial(V,S,N)} & =\frac{\partial(P_{0}
,V,N)}{\partial(V,S,N)},\ \ \ \frac{\partial(T_{0},S,N)}{\partial(P_{0}
,S,N)}=\frac{\partial(P_{0},V,N)}{\partial(P_{0},S,N)},\nonumber\\
\frac{\partial(P_{0},V,N)}{\partial(T_{0},V,N)} & =\frac{\partial
(T_{0},S,N)}{\partial(T_{0},V,N)},\ \ \ \frac{\partial(T_{0},S,N)}
{\partial(P_{0},T_{0},N)}=\frac{\partial(P_{0},V,N)}{\partial(P_{0},T_{0},N)}.
\label{Maxwell_Jacobians}
\end{align}
We now see a very important consequence of the use of the Jacobians. All four
Maxwell relations use the same numerators $\partial(T_{0},S,N)$ and
$\partial(P_{0},V,N)$. They use different denominators. Thus, they can all be
combined into one compact relation that can be simply written as
\begin{equation}
\partial(T_{0},S,N)\equiv\partial(P_{0},V,N). \label{Maxwell_Compact}
\end{equation}
Here, the relation only has a meaning when both sides are divided by one of
the possible denominators $\partial(V,S,N),\partial(P_{0},S,N),\partial
(T_{0},V,N)$ and $\partial(P_{0},T_{0},N)$.
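For example, dividing both sides of Eq. (\ref{Maxwell_Compact}) by
$\partial(V,S,N)$ recovers the first relation in Eq. (\ref{Maxwell_Jacobians}),
which in derivative form reads
\[
\left( \frac{\partial T_{0}}{\partial V}\right) _{S,N}=-\left(
\frac{\partial P_{0}}{\partial S}\right) _{V,N},
\]
the minus sign arising from the interchange $\partial(P_{0},V,N)=-\partial
(V,P_{0},N)$.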
\subsection{Considerations in a Subspace\label{Marker_Subspace}}
It is very common to consider a function $F(x,y,z)$ in a subspace consisting
of $x,y$. This requires manipulating a $3$-Jacobian to construct a
$2$-Jacobian in a subset of its arguments. Thus, we may consider the $2$-Jacobian
\[
\frac{\partial(F,y)}{\partial(x,y)},
\]
even though $F$ also depends on $z$. We can manipulate such Jacobians in the
normal way. For example, we can express it as
\begin{equation}
\left( \frac{\partial F}{\partial x}\right) _{y}=\frac{\partial
(F,y)}{\partial(x,y)}=-\frac{\partial(F,y)}{\partial(K,x)}\frac{\partial
(x,K)}{\partial(x,y)}=-\left( \frac{\partial K}{\partial y}\right) _{x}
\frac{\partial(F,y)}{\partial(K,x)}, \label{Subspace_Reduction}
\end{equation}
where $K(x,y,z)$ is another function. The derivation is tedious and has been
supplied in the Appendix. The situation can be generalized to many variables
$z_{1},z_{2},\cdots$ without much complication. We will not do this here.
\subsection{Some Transformation Rules \label{Marker_transformation rules}}
Let us consider a derivative of some quantity $R$ either with respect to $T$
or $P$ in case A below or at fixed $T$ or $P$ in case B below, which we wish
to express as a derivative involving $T_{0},P_{0}$ that are manipulated by the observer.
\begin{enumerate}
\item[A.] The derivative is at fixed $\mathbf{U}$, where $\mathbf{U}$ has any
two different elements from $E,V,S,\xi,P_{0}$ and $T_{0}$.
\end{enumerate}
We write the derivative as
\begin{equation}
\left( \frac{\partial R}{\partial T}\right) _{U_{1},U_{2}}\equiv
\frac{\partial(R,U_{1},U_{2})}{\partial(T,U_{1},U_{2})}=\frac{\partial
(R,U_{1},U_{2})}{\partial(T_{0},U_{1},U_{2})}\frac{\partial(T_{0},U_{1}
,U_{2})}{\partial(T,U_{1},U_{2})}=\left( \frac{\partial R}{\partial T_{0}
}\right) _{U_{1},U_{2}}/\left( \frac{\partial T}{\partial T_{0}}\right)
_{U_{1},U_{2}}. \label{TR_TP0}
\end{equation}
Similarly, we have
\begin{equation}
\left( \frac{\partial R}{\partial P}\right) _{U_{1},U_{2}}\equiv
\frac{\partial(R,U_{1},U_{2})}{\partial(P,U_{1},U_{2})}=\frac{\partial
(R,U_{1},U_{2})}{\partial(P_{0},U_{1},U_{2})}\frac{\partial(P_{0},U_{1}
,U_{2})}{\partial(P,U_{1},U_{2})}=\left( \frac{\partial R}{\partial P_{0}
}\right) _{U_{1},U_{2}}/\left( \frac{\partial P}{\partial P_{0}}\right)
_{U_{1},U_{2}}. \label{TR_PT0}
\end{equation}
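As an illustration of these rules, take $R=S$ and $\mathbf{U}=(V,\xi)$ in
Eq. (\ref{TR_TP0}); this gives
\[
\left( \frac{\partial S}{\partial T}\right) _{V,\xi}=\left( \frac{\partial
S}{\partial T_{0}}\right) _{V,\xi}/\left( \frac{\partial T}{\partial T_{0}
}\right) _{V,\xi},
\]
a special case that reappears as Eq. (\ref{S_T_V_derivative}) below.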
\begin{enumerate}
\item[B.] Let us consider a derivative with respect to $T_{0}$ at fixed
$U_{2}=P$ or $T$ ($U_{20}=P_{0}$ or $T_{0}$, as the case may be), but $U_{1}$
is any element from $E,V,S,\xi,P_{0}$ and $T_{0}$:
\begin{equation}
\left( \frac{\partial R}{\partial T_{0}}\right) _{U_{1},U_{2}}\equiv
\frac{\partial(R,U_{1},U_{2})}{\partial(T_{0},U_{1},U_{20})}\frac
{\partial(T_{0},U_{1},U_{20})}{\partial(T_{0},U_{1},U_{2})}=\frac
{\partial(R,U_{2},U_{1})}{\partial(T_{0},U_{20},U_{1})}/\left( \frac{\partial
U_{2}}{\partial U_{20}}\right) _{T_{0},U_{1}}. \label{TR_T0P}
\end{equation}
Similarly,
\begin{equation}
\left( \frac{\partial R}{\partial P_{0}}\right) _{U_{1},U_{2}}\equiv
\frac{\partial(R,U_{1},U_{2})}{\partial(P_{0},U_{1},U_{20})}\frac
{\partial(P_{0},U_{1},U_{20})}{\partial(P_{0},U_{1},U_{2})}=\frac
{\partial(R,U_{2},U_{1})}{\partial(P_{0},U_{20},U_{1})}/\left( \frac{\partial
U_{2}}{\partial U_{20}}\right) _{P_{0},U_{1}}. \label{TR_P0T}
\end{equation}
Let us now consider a derivative with respect to $T$ at fixed $P$ or with
respect to $P$ at fixed $T$; the derivative is at fixed $U_{1}$, where $U_{1}$
is any element from $E,V,S,\xi,P_{0}$ and $T_{0}$:
\begin{equation}
\left( \frac{\partial R}{\partial T}\right) _{U_{1},P}\equiv\frac
{\partial(R,U_{1},P)}{\partial(T,U_{1},P)}=\frac{\partial(R,U_{1},P)}
{\partial(T_{0},U_{1},P_{0})}/\frac{\partial(T,U_{1},P)}{\partial(T_{0}
,U_{1},P_{0})}. \label{TR_TP}
\end{equation}
Similarly
\begin{equation}
\left( \frac{\partial R}{\partial P}\right) _{U_{1},T}\equiv\frac
{\partial(R,U_{1},T)}{\partial(P,U_{1},T)}=\frac{\partial(R,U_{1}
,T)}{\partial(T_{0},U_{1},P_{0})}/\frac{\partial(P,U_{1},T)}{\partial
(T_{0},U_{1},P_{0})}. \label{TR_PT}
\end{equation}
\end{enumerate}
\section{Thermodynamic Potentials and Differentials\label{Marker_Potentials}}
\subsection{Equilibrium}
The forms of most useful thermodynamic potentials such as the enthalpy $H$,
the Helmholtz free energy $F$, and the Gibbs free energy $G$ of a system
$\Sigma$ in equilibrium are well known and are given in terms of the energy
$E(S,V,N)$ as
\begin{equation}
H=E+P_{0}V,\ F=E-T_{0}S,\ G=E-T_{0}S+P_{0}V, \label{Thermodynamic_Potentials}
\end{equation}
where $T_{0},P_{0}$ are the temperature and pressure of the system; they are
also the temperature and/or pressure of the medium, depending on the medium
$\widetilde{\Sigma}$. Here, we are considering a system with fixed number of
particles. For the enthalpy, the medium $\widetilde{\Sigma}(P_{0})$ containing
the system exerts a fixed pressure $P_{0}$. For the Helmholtz free energy, the
medium $\widetilde{\Sigma}(T_{0})$ containing the system creates a fixed
temperature $T_{0}$. For the Gibbs free energy, the medium $\widetilde{\Sigma
}(T_{0},P_{0})$ containing the system exerts a fixed pressure $P_{0}$ and
creates a fixed temperature $T_{0}$. The potentials are Legendre transforms in
that the potentials are functions of the fields ($T_{0},P_{0}$) rather than
the observables ($E,V$) as the case may be. These potentials have the desired
property that they attain their minimum when the system is in equilibrium, as
discussed in I.
\subsection{Internal Equilibrium}
When the system is in internal equilibrium, we find from the Gibbs fundamental
relation for fixed $N$, which is obtained from setting $dN=0$ in Eq.
(\ref{Gibbs_Fundamental_relation_Internal}):
\begin{equation}
dE=TdS-PdV-Ad\xi, \label{Energy_relation_Internal/n}
\end{equation}
where we have also introduced a single internal variable $\xi$ to allow us to
discuss non-equilibrium systems that are not in equilibrium with their medium
but are in internal equilibrium. The consideration of many internal variables
simply amounts to the replacement
\[
Ad\xi\rightarrow\mathbf{A\cdot}d\mathbf{\xi},
\]
and will not cause any extra complication. Thus, we will mostly consider a
single internal variable, but the extension to many internal variables is trivial.
We are no longer going to exhibit the time-dependence in these variables for
the sake of notational simplicity. Let us return to Eq.
(\ref{Energy_relation_Internal/n}). It should be compared with Eq.
(\ref{Gibbs_Fundamental_relation_Equilibrium}) which contains $T_{0},P_{0}$.
We rewrite Eq. (\ref{Energy_relation_Internal/n}) to show the non-equilibrium
contribution explicitly
\begin{equation}
dE=T_{0}dS-P_{0}dV+(T-T_{0})dS-(P-P_{0})dV-Ad\xi.
\label{Energy_relation_Internal_Equilibrium/n}
\end{equation}
The last two terms are due to the non-equilibrium nature of the system in
internal equilibrium. It is now easy to see that
\begin{align}
dH & =T_{0}dS+VdP_{0}+(T-T_{0})dS-(P-P_{0})dV-Ad\xi,\nonumber\\
dF & =-SdT_{0}-P_{0}dV+(T-T_{0})dS-(P-P_{0})dV-Ad\xi
,\label{Thermodynamic_Differential_Internal/n}\\
dG & =-SdT_{0}+VdP_{0}+(T-T_{0})dS-(P-P_{0})dV-Ad\xi.\nonumber
\end{align}
These potentials correspond to $\xi$ as an independent variable of the
potential. One can make a transformation of these potentials to potentials in
which the conjugate field $A_{0}$ of the medium is the independent variable by
adding $A_{0}\xi$. The resulting potentials will be denoted by a superscript A
on the potentials:
\[
E^{\text{A}}=E+A_{0}\xi,\ H^{\text{A}}=H+A_{0}\xi,\ F^{\text{A}}=F+A_{0}
\xi,\ G^{\text{A}}=G+A_{0}\xi.
\]
However, as discussed in II, $A_{0}=0$. Thus, there is no difference in the
values of the two potentials and the transformation is of no use. In
equilibrium, the internal fields $T,P$ attain their equilibrium values
$T_{0},P_{0}$ of the medium, and the affinity $A$ vanishes identically because
of $A_{0}=0$.
\section{Maxwell Relations For Systems in Internal
Equilibrium\label{Marker_Maxwell_Relations}}
From now on, we will always consider the case of a constant $N$. Therefore, we
will no longer exhibit it. The Maxwell relation in Eq.
(\ref{Maxwell_Compact}) will then be denoted simply as $\partial
(T_{0},S)\equiv\partial(P_{0},V).$ The field parameters that appear in the
Maxwell relation are the parameters $T_{0},P_{0}$ of the medium, which because
of the existence of equilibrium also represent the field parameters of the
system. The Maxwell relation is a relation between the pairs $T_{0},S$ and
$P_{0},V$, each pair formed by the extensive variable and its conjugate field.
We will call these pairs conjugate pairs in this work. For a system described
by only two conjugate pairs, there is only one possible Maxwell relation. For
a system described by three conjugate pairs, there will be three different
Maxwell relations between them. For a system described by $k$ conjugate pairs,
there will be $k(k-1)/2$ different Maxwell relations.
As the system in internal equilibrium is very similar in many respects to an
equilibrium system as discussed in I and II, there may be analogs of the
Maxwell relations for systems in internal equilibrium. The question then
arises as to the field parameters that must appear in the Maxwell relations
when the system is not in equilibrium, but only in internal equilibrium. We
now turn to answer this question. Because of the absence of equilibrium, we
must now also include the internal variable $\xi$ in the discussion. Thus, we
expect three different Maxwell relations between the conjugate pairs $T,S$;
$P,V$; and $A,\xi$, which we derive in turn.
\subsection{Maxwell relation $\partial(T,S,\xi)\equiv\partial(P,V,\xi)$ at
fixed $\xi$}
We start with Eq. (\ref{Energy_relation_Internal/n}) and observe that
\begin{equation}
\left( \frac{\partial E}{\partial S}\right) _{V,\xi}=T,\left(
\frac{\partial E}{\partial V}\right) _{S,\xi}=-P,\left( \frac{\partial
E}{\partial\xi}\right) _{S,V}=-A. \label{Energy_Fields/n}
\end{equation}
Using the first two derivatives at fixed $\xi$, we find that
\[
\left( \frac{\partial^{2}E}{\partial V\partial S}\right) _{\xi
}=\left( \frac{\partial T}{\partial V}\right) _{S,\xi},\ \left(
\frac{\partial^{2}E}{\partial S\partial V}\right) _{\xi}=-\left(
\frac{\partial P}{\partial S}\right) _{V,\xi}.
\]
As we are allowed to interchange the order of differentiation in the above
cross derivatives, we have
\[
\left( \frac{\partial T}{\partial V}\right) _{S,\xi}=-\left( \frac{\partial
P}{\partial S}\right) _{V,\xi},
\]
which can be written using Jacobians as
\[
\frac{\partial(T,S,\xi)}{\partial(S,V,\xi)}=\frac{\partial(P,V,\xi)}
{\partial(S,V,\xi)}.
\]
This suggests the existence of the Maxwell relation $\partial(T,S,\xi
)=\partial(P,V,\xi)$ between the conjugate pairs $T,S$ and $P,V$ at fixed
$\xi$. To check its validity for other potentials with $\xi$ as an independent
variable, we consider the differential $dG$ in Eq.
(\ref{Thermodynamic_Differential_Internal/n}) and note that
\begin{align*}
\left( \frac{\partial G}{\partial T_{0}}\right) _{P_{0},\xi} &
=-S+(T-T_{0})\left( \frac{\partial S}{\partial T_{0}}\right) _{P_{0},\xi
}+(P_{0}-P)\left( \frac{\partial V}{\partial T_{0}}\right) _{P_{0},\xi},\\
\left( \frac{\partial G}{\partial P_{0}}\right) _{T_{0},\xi} &
=V+(T-T_{0})\left( \frac{\partial S}{\partial P_{0}}\right) _{T_{0},\xi
}+(P_{0}-P)\left( \frac{\partial V}{\partial P_{0}}\right) _{T_{0},\xi}.
\end{align*}
We use these derivatives to evaluate the cross derivative $\left(
\partial^{2}G/\partial P_{0}\partial T_{0}\right) _{\xi}$ to conclude
that
\begin{align*}
& -\left( \frac{\partial S}{\partial P_{0}}\right) _{T_{0},\xi}
+(T-T_{0})\left( \frac{\partial^{2}S}{\partial T_{0}\partial P_{0}}\right)
_{\xi}+\left( \frac{\partial T}{\partial P_{0}}\right) _{T_{0},\xi}\left(
\frac{\partial S}{\partial T_{0}}\right) _{P_{0},\xi}\\
& +(P_{0}-P)\left( \frac{\partial^{2}V}{\partial P_{0}\partial T_{0}}
\right) _{\xi}+\left( \frac{\partial V}{\partial T_{0}}\right) _{P_{0}
,\xi}-\left( \frac{\partial P}{\partial P_{0}}\right) _{T_{0},\xi}\left(
\frac{\partial V}{\partial T_{0}}\right) _{P_{0},\xi}\\
& =\left( \frac{\partial V}{\partial T_{0}}\right) _{P_{0},\xi}
+(T-T_{0})\left( \frac{\partial^{2}S}{\partial P_{0}\partial T_{0}}\right)
_{\xi}+\left( \frac{\partial T}{\partial T_{0}}\right) _{P_{0},\xi}\left(
\frac{\partial S}{\partial P_{0}}\right) _{T_{0},\xi}\\
& +(P_{0}-P)\left( \frac{\partial^{2}V}{\partial P_{0}\partial T_{0}}
\right) _{\xi}-\left( \frac{\partial S}{\partial P_{0}}\right) _{T_{0}
,\xi}-\left( \frac{\partial P}{\partial T_{0}}\right) _{P_{0},\xi}\left(
\frac{\partial V}{\partial P_{0}}\right) _{T_{0},\xi}.
\end{align*}
This is simplified to yield
\[
\left( \frac{\partial T}{\partial P_{0}}\right) _{T_{0},\xi}\left(
\frac{\partial S}{\partial T_{0}}\right) _{P_{0},\xi}-\left( \frac{\partial
T}{\partial T_{0}}\right) _{P_{0},\xi}\left( \frac{\partial S}{\partial
P_{0}}\right) _{T_{0},\xi}=\left( \frac{\partial P}{\partial P_{0}}\right)
_{T_{0},\xi}\left( \frac{\partial V}{\partial T_{0}}\right) _{P_{0},\xi
}-\left( \frac{\partial P}{\partial T_{0}}\right) _{P_{0},\xi}\left(
\frac{\partial V}{\partial P_{0}}\right) _{T_{0},\xi}.
\]
In terms of the Jacobians, this can be written as
\[
\frac{\partial(T,S,\xi)}{\partial(T_{0},P_{0},\xi)}=\frac{\partial(P,V,\xi
)}{\partial(T_{0},P_{0},\xi)},
\]
thus justifying the Maxwell relation
\begin{equation}
\partial(T,S,\xi)=\partial(P,V,\xi) \label{Maxwell_Relation_TSPV/N}
\end{equation}
at fixed $\xi$. This relation must be satisfied at every point on the curve
ABDE that describes the vitrification process. This Maxwell relation turns
into the identity
\[
\frac{\partial(T,S,\xi)}{\partial(P_{0},S,\xi)}=\frac{\partial(P,V,\xi
)}{\partial(P_{0},S,\xi)}
\]
for the enthalpy and
\begin{equation}
\frac{\partial(T,S,\xi)}{\partial(T_{0},V,\xi)}=\frac{\partial(P,V,\xi
)}{\partial(T_{0},V,\xi)} \label{Maxwell_Helmholtz/n}
\end{equation}
for the Helmholtz free energy; both are Eq. (\ref{Maxwell_Relation_TSPV/N})
divided by $\partial(P_{0},S,\xi)$ and $\partial(T_{0},V,\xi)$, respectively,
and are easily verified.
\subsection{Maxwell relation $\partial(T,S,V)\equiv\partial(A,\xi,V)$ at fixed
$V$}
We again start with Eqs. (\ref{Energy_relation_Internal/n}) and
(\ref{Energy_Fields/n}), and evaluate the cross derivative $\left(
\partial^{2}E/\partial S\partial\xi\right) _{V}$ to obtain
\[
\left( \frac{\partial^{2}E}{\partial\xi\partial S}\right)
_{V}=\left( \frac{\partial T}{\partial\xi}\right) _{S,V},\left(
\frac{\partial^{2}E}{\partial S\partial\xi}\right) _{V}=-\left(
\frac{\partial A}{\partial S}\right) _{V,\xi}.
\]
We thus have
\[
\left( \frac{\partial T}{\partial\xi}\right) _{S,V}=-\left( \frac{\partial
A}{\partial S}\right) _{V,\xi},
\]
which can be written using Jacobians as
\[
\frac{\partial(T,S,V)}{\partial(\xi,S,V)}=\frac{\partial(A,\xi,V)}
{\partial(\xi,S,V)}.
\]
This suggests the existence of the Maxwell relation $\partial(T,S,V)=\partial
(A,\xi,V)$ between the conjugate pairs $T,S$ and $A,\xi$ at fixed $V$. To
check its validity for other potentials with $V$ as an independent variable,
we consider the differential $dF$ in Eq.
(\ref{Thermodynamic_Differential_Internal/n}) and note that
\begin{align*}
\left( \frac{\partial F}{\partial T_{0}}\right) _{V,\xi} & =-S+(T-T_{0}
)\left( \frac{\partial S}{\partial T_{0}}\right) _{V,\xi}\\
\left( \frac{\partial F}{\partial\xi}\right) _{V,T_{0}} & =-A+(T-T_{0}
)\left( \frac{\partial S}{\partial\xi}\right) _{V,T_{0}}.
\end{align*}
We now evaluate the cross derivative $\left( \partial^{2}F/\partial
\xi\partial T_{0}\right) _{V}$ and obtain the equality
\begin{align*}
& -\left( \frac{\partial S}{\partial\xi}\right) _{T_{0},V}+\left(
\frac{\partial S}{\partial T_{0}}\right) _{V,\xi}\left( \frac{\partial
T}{\partial\xi}\right) _{T_{0},V}+(T-T_{0})\left( \frac{\partial^{2}
S}{\partial\xi\partial T_{0}}\right) _{V}\\
& =-\left( \frac{\partial A}{\partial T_{0}}\right) _{V,\xi}+\left[
\left( \frac{\partial T}{\partial T_{0}}\right) _{V,\xi}-1\right] \left(
\frac{\partial S}{\partial\xi}\right) _{T_{0},V}+(T-T_{0})\left(
\frac{\partial^{2}S}{\partial\xi\partial T_{0}}\right) _{V},
\end{align*}
which leads to the relation
\[
\frac{\partial(A,\xi,V)}{\partial(T_{0},\xi,V)}=\frac{\partial(T,S,V)}
{\partial(T_{0},\xi,V)}.
\]
This confirms that the Maxwell relation between the conjugate pairs $T,S$ and
$A,\xi$ at fixed $V$ is the following:
\begin{equation}
\partial(T,S,V)=\partial(A,\xi,V). \label{Maxwell_Relation_TSA/N}
\end{equation}
\subsection{Maxwell Relation $\partial(P,V,S)\equiv\partial(A,\xi,S)$ at fixed
$S$}
We again start with Eqs. (\ref{Energy_relation_Internal/n}) and
(\ref{Energy_Fields/n}), and evaluate the cross derivative $\left(
\partial^{2}E/\partial V\partial\xi\right) _{S}$ to obtain
\[
\left( \frac{\partial^{2}E}{\partial\xi\partial V}\right)
_{S}=-\left( \frac{\partial P}{\partial\xi}\right) _{S,V},\left(
\frac{\partial^{2}E}{\partial V\partial\xi}\right) _{S}=-\left(
\frac{\partial A}{\partial V}\right) _{S,\xi}.
\]
We thus have
\[
\left( \frac{\partial P}{\partial\xi}\right) _{S,V}=\left( \frac{\partial
A}{\partial V}\right) _{S,\xi},
\]
which can be written using Jacobians as
\[
\frac{\partial(P,V,S)}{\partial(\xi,V,S)}=-\frac{\partial(A,\xi,S)}
{\partial(\xi,V,S)}.
\]
This suggests the existence of the Maxwell relation $\partial(P,V,S)=-\partial
(A,\xi,S)$ between the conjugate pairs $P,V$ and $A,\xi$ at fixed $S$. To
check its validity for other potentials with $S$ as an independent variable,
we consider the differential $dH$ in Eq.
(\ref{Thermodynamic_Differential_Internal/n}) and note that
\begin{align*}
\left( \frac{\partial H}{\partial P_{0}}\right) _{S,\xi} & =V-(P-P_{0}
)\left( \frac{\partial V}{\partial P_{0}}\right) _{S,\xi}\\
\left( \frac{\partial H}{\partial\xi}\right) _{S,P_{0}} & =-A-(P-P_{0}
)\left( \frac{\partial V}{\partial\xi}\right) _{S,P_{0}}.
\end{align*}
We now evaluate the cross derivative $\left( \partial^{2}H/\partial
\xi\partial P_{0}\right) _{S}$ and obtain the equality
\begin{align*}
& \left( \frac{\partial V}{\partial\xi}\right) _{P_{0},S}-\left(
\frac{\partial V}{\partial P_{0}}\right) _{S,\xi}\left( \frac{\partial
P}{\partial\xi}\right) _{P_{0},S}-(P-P_{0})\left( \frac{\partial^{2}
V}{\partial\xi\partial P_{0}}\right) _{S}\\
& =-\left( \frac{\partial A}{\partial P_{0}}\right) _{S,\xi}-\left[
\left( \frac{\partial P}{\partial P_{0}}\right) _{S,\xi}-1\right] \left(
\frac{\partial V}{\partial\xi}\right) _{P_{0},S}-(P-P_{0})\left(
\frac{\partial^{2}V}{\partial\xi\partial P_{0}}\right) _{S},
\end{align*}
which leads to the relation
\[
\frac{\partial(A,\xi,S)}{\partial(P_{0},\xi,S)}=-\frac{\partial(P,V,S)}
{\partial(P_{0},\xi,S)}.
\]
This confirms that the Maxwell relation between the conjugate pairs $P,V$ and
$A,\xi$ at fixed $S$ is the following:
\begin{equation}
\partial(P,V,S)=-\partial(A,\xi,S). \label{Maxwell_Relation_PVA/N}
\end{equation}
One can easily check that this Maxwell relation also works with other
thermodynamic potentials like $F$ and $G$. We will satisfy ourselves by giving
the demonstration for $F$ only. The natural variables for $F$ are $T_{0},\xi$
and $V$; however, instead of using $V$ as the independent variable, we will
use $S$ as the independent variable so that it can be held fixed. For constant
$S$, the differential $dF$ from Eq.
(\ref{Thermodynamic_Differential_Internal/n}) reduces to
\[
\left. dF\right\vert _{S}=-SdT_{0}-PdV-Ad\xi,
\]
so that
\begin{align*}
\left( \frac{\partial F}{\partial T_{0}}\right) _{S,\xi} & =-S-P\left(
\frac{\partial V}{\partial T_{0}}\right) _{S,\xi}\\
\left( \frac{\partial F}{\partial\xi}\right) _{S,T_{0}} & =-A-P\left(
\frac{\partial V}{\partial\xi}\right) _{S,T_{0}}.
\end{align*}
Now evaluating the cross derivative\ $\left( \partial^{2}F/\partial
\xi\partial T_{0}\right) _{S}$, we find that
\begin{align*}
& -\left( \frac{\partial A}{\partial T_{0}}\right) _{S,\xi}-\left(
\frac{\partial P}{\partial T_{0}}\right) _{S,\xi}\left( \frac{\partial
V}{\partial\xi}\right) _{T_{0},S}-P\left( \frac{\partial^{2}V}{\partial
\xi\partial T_{0}}\right) _{S}\\
& =-\left( \frac{\partial V}{\partial T_{0}}\right) _{S,\xi}\left(
\frac{\partial P}{\partial\xi}\right) _{T_{0},S}-P\left( \frac{\partial
^{2}V}{\partial\xi\partial T_{0}}\right) _{S}.
\end{align*}
This now immediately leads to
\begin{equation}
\frac{\partial(A,\xi,S)}{\partial(T_{0},\xi,S)}=-\frac{\partial(P,V,S)}
{\partial(T_{0},\xi,S)}, \label{Maxwell_Relation_F/S/n}
\end{equation}
and confirms our claim that the Maxwell relation is given by Eq.
(\ref{Maxwell_Relation_PVA/N}).
\subsection{General Maxwell Relations with system variables only}
We wish to emphasize that the Maxwell relation in
Eq.\ (\ref{Maxwell_Relation_F/S/n}) requires keeping $S$ fixed so that we must
divide Eq. (\ref{Maxwell_Relation_PVA/N}) by $\partial(T_{0},\xi,S)$ on both
sides. We must not use the independent variables $T_{0},\xi$ and $V$ of $F$
for the division and keep $T_{0}$ fixed, as this will not give a Maxwell
relation. We demonstrate this explicitly by evaluating $\left( \partial
^{2}F/\partial\xi\partial V\right) _{T_{0}}$ two different ways and equating
the results. A simple calculation yields
\begin{align*}
& -\left( \frac{\partial P}{\partial\xi}\right) _{V,T_{0}}+\left(
\frac{\partial S}{\partial V}\right) _{T_{0},\xi}\left( \frac{\partial
T}{\partial\xi}\right) _{V,T_{0}}+(T-T_{0})\left( \frac{\partial^{2}
S}{\partial V\partial\xi}\right) _{T_{0}}\\
& =-\left( \frac{\partial A}{\partial V}\right) _{T_{0},\xi}+\left(
\frac{\partial S}{\partial\xi}\right) _{V,T_{0}}\left( \frac{\partial
T}{\partial V}\right) _{T_{0},\xi}+(T-T_{0})\left( \frac{\partial^{2}
S}{\partial V\partial\xi}\right) _{T_{0}}.
\end{align*}
In terms of Jacobians, the above equation can be rewritten as
\begin{equation}
\frac{\partial(A,\xi,T_{0})}{\partial(V,\xi,T_{0})}=-\frac{\partial
(P,V,T_{0})}{\partial(V,\xi,T_{0})}+\frac{\partial(T,S,T_{0})}{\partial
(V,\xi,T_{0})}. \label{Maxwell_Relation_F/T0/n}
\end{equation}
This relation from the cross derivative requires keeping $T_{0}$ fixed.
However, $T_{0}$ characterizes the medium and only indirectly characterizes
the system in internal equilibrium. In a similar way, using the cross
derivatives of the Gibbs free energy at fixed $T_{0}$, and at fixed $P_{0}$,
we find the following relations
\begin{equation}
\frac{\partial(A,\xi,P_{0})}{\partial(T_{0},P_{0},\xi)}=\frac{\partial
(T,S,P_{0})}{\partial(T_{0},P_{0},\xi)}+\frac{\partial(P,V,P_{0})}
{\partial(T_{0},P_{0},\xi)},\ \ \ \frac{\partial(A,\xi,T_{0})}{\partial
(T_{0},P_{0},\xi)}=\frac{\partial(P,V,T_{0})}{\partial(T_{0},P_{0},\xi)}
-\frac{\partial(T,S,T_{0})}{\partial(T_{0},P_{0},\xi)}.
\label{Maxwell_Relation_G/T0_P0/n}
\end{equation}
We now wish to observe that the Maxwell relations appear only when we keep the
quantities of the system $T,P,S,V,A,$ or $\xi$ fixed. We have already seen the
Maxwell relations with fixed $S,V$, and $\xi$. We will now consider keeping
$T$ fixed to demonstrate our point. For fixed $T$, we obtain the following
Maxwell relation:
\begin{equation}
\frac{\partial(A,\xi,T)}{\partial(T_{0},\xi,T)}=-\frac{\partial(P,V,T)}
{\partial(T_{0},\xi,T)}, \label{Maxwell_Relation_F/T/n}
\end{equation}
as can easily be checked by evaluating the cross derivative $\left(
\partial^{2}F/\partial\xi\partial T_{0}\right) _{T}$ at fixed $T$. The
calculation is identical to that carried out in obtaining Eq.
(\ref{Maxwell_Relation_F/S/n}). One can easily check that keeping $P$ or $A$
fixed also gives us new Maxwell relations:
\[
\frac{\partial(A,\xi,P)}{\partial(T_{0},\xi,P)}=\frac{\partial(T,S,P)}
{\partial(T_{0},\xi,P)},\ \ \ \frac{\partial(T,S,A)}{\partial(\xi,T_{0},A)}
=-\frac{\partial(P,V,A)}{\partial(T_{0},\xi,A)}.
\]
\section{Clausius-Clapeyron Relation\label{Marker_Clausius_Clapeyron_Relation}}
As a system in internal equilibrium is not very different from that in
equilibrium, except that its Gibbs free energy $G(t)$ continuously decreases
until it reaches equilibrium with the medium, it is possible for the system to
exist in two distinct phases that have the same Gibbs free energy at some
instant. Such a non-equilibrium phase transition situation will arise, for
example, when an isotropic supercooled liquid can turn into a liquid crystal
phase. This is not a novel idea as there are several attempts in the
literature \cite[and references therein]{Onuki,Sugar,Allahverdyan,Arndt} where
such non-equilibrium phase transitions have been investigated. Therefore, let
us now consider the possibility of the system being in two different phases at
some time. As experiments are carried out by controlling observables only and
not the internal variables, it is important to consider thermodynamic
quantities as a function of $\mathbf{X}$ only, and not of $\mathbf{X,I}$ in
all cases. Restricting ourselves to a single internal variable $\xi$, and to
$E$ and $V$, we will treat thermodynamic quantities not only as a function of
three independent variables, but will also have the need to consider them as a
function of observables or associated fields $T_{0},P_{0}$. In particular, the
Clausius-Clapeyron relation is obtained in the $T_{0}$-$P_{0}$ plane, a
subspace; see Sect. \ref{Marker_Subspace}.
Let us consider the two phases, which we denote by $1$ and $2$, in the system.
We will use subscripts $1$ and $2$ to refer to the quantities in the two
phases. In internal equilibrium, the entropy $S$ of the system is a function
of the averages $\mathbf{X(}t)\mathbf{,I(}t\mathbf{)}$ along with the fixed
number of particles $N$. It is important to include $N$ in our consideration
as the two phases will contain numbers of particles $N_{1}$ and $N_{2}$ that
are not constant, except in equilibrium. Obviously,
\[
\mathbf{X(}t)=\mathbf{X}_{1}\mathbf{(}t)+\mathbf{X}_{2}\mathbf{(}
t),\mathbf{I(}t)=\mathbf{I}_{1}\mathbf{(}t)+\mathbf{I}_{2}\mathbf{(}
t),N=N_{1}(t)+N_{2}(t).
\]
Then, we can express the entropy of the system as a sum over the two phases
\[
S(\mathbf{X(}t),\mathbf{I(}t),N)=S_{1}(\mathbf{X}_{1}\mathbf{(}t),\mathbf{I}
_{1}\mathbf{(}t),N_{1}(t))+S_{2}(\mathbf{X}_{2}\mathbf{(}t),\mathbf{I}
_{2}\mathbf{(}t),N_{2}(t)),
\]
which takes its maximum possible value for given $\mathbf{X(}t),\mathbf{I(}
t),N$ in internal equilibrium. Thus,
\[
dS(\mathbf{X(}t),\mathbf{I(}t),N)=dS_{1}(\mathbf{X}_{1}\mathbf{(}
t),\mathbf{I}_{1}\mathbf{(}t),N_{1}(t))+dS_{2}(\mathbf{X}_{2}\mathbf{(}
t),\mathbf{I}_{2}\mathbf{(}t),N_{2}(t))=0
\]
in internal equilibrium. This can only happen if
\[
\mathbf{y}_{1}(t)=\mathbf{y}_{2}(t),\ \ \mathbf{a}_{1}\mathbf{(}
t)=\mathbf{a}_{2}\mathbf{(}t),\ \ \mu_{1}(t)/T_{1}(t)=\mu
_{2}(t)/T_{2}(t);
\]
see Eqs. (\ref{Gibbs_Fundamental_relation_0}), (\ref{Field_Variables_1}) and
(\ref{Field_Variables_0}).
For the restricted case under consideration, this results in the equality
\[
T_{1}(t)=T_{2}(t),\ \ P_{1}(t)=P_{2}(t),\ \ \mu_{1}(t)=\mu_{2}(t),\ \ A_{1}
(t)=A_{2}(t)
\]
for the internal fields and affinity along the coexistence of the two phases.
It also follows from the continuity of the Gibbs free energy
\cite{Gujrati-Non-Equilibrium-I} that the Gibbs free energies of the two pure
phases ($N_{1}=N$ and $N_{2}=N$) must be equal at the coexistence. We will
only consider the two pure phases below, and not a mixture of the two. As the
numbers of particles in the two pure phases are constant, we will no longer
consider them in the discussion.
We now consider the $T_{0}$-$P_{0}$ plane, relevant for the observation of
coexistence. Since the Gibbs free energy is continuous along the transition line,
\[
\Delta G(T_{0},P_{0}(T_{0}))=0
\]
where $P_{0}(T_{0})$ is the pressure along the transition line. Thus,
\[
d\Delta G=\Delta\left( \frac{\partial G}{\partial T_{0}}\right) _{P_{0}
}dT_{0}+\Delta\left( \frac{\partial G}{\partial P_{0}}\right) _{T_{0}}
dP_{0}.
\]
Using $d\Delta G=0$ yields
\begin{equation}
\left. \frac{dT_{0}}{dP_{0}}\right\vert _{\text{coex}}\Delta\left(
\frac{\partial G}{\partial T_{0}}\right) _{P_{0}}+\Delta\left(
\frac{\partial G}{\partial P_{0}}\right) _{T_{0}}=0 \label{dG_Coexistence}
\end{equation}
along the coexistence. Using $dG$ from Eq.
(\ref{Thermodynamic_Differential_Internal/n}) gives us
\begin{equation}
\Delta\left( \frac{\partial G}{\partial T_{0}}\right) _{P_{0}}=-\Delta
S+(T-T_{0})\Delta\left( \frac{\partial S}{\partial T_{0}}\right) _{P_{0}
}-(P-P_{0})\Delta\left( \frac{\partial V}{\partial T_{0}}\right) _{P_{0}
}-A\Delta\left( \frac{\partial\xi}{\partial T_{0}}\right) _{P_{0}},
\label{Delta_dG_T0}
\end{equation}
\begin{equation}
\Delta\left( \frac{\partial G}{\partial P_{0}}\right) _{T_{0}}=\Delta
V+(T-T_{0})\Delta\left( \frac{\partial S}{\partial P_{0}}\right) _{T_{0}
}-(P-P_{0})\Delta\left( \frac{\partial V}{\partial P_{0}}\right) _{T_{0}
}-A\Delta\left( \frac{\partial\xi}{\partial P_{0}}\right) _{T_{0}}.
\label{Delta_dG_P0}
\end{equation}
Putting the above two equations in Eq. (\ref{dG_Coexistence}), we get the
following Clausius-Clapeyron equation for coexistence of phases in internal
equilibrium:
\begin{equation}
\left. \frac{dT_{0}}{dP_{0}}\right\vert _{\text{coex}}=\frac{\Delta
V+(T-T_{0})\Delta\left( \partial S/\partial P_{0}\right) _{T_{0}}
-(P-P_{0})\Delta\left( \partial V/\partial P_{0}\right) _{T_{0}}
-A\Delta\left( \partial\xi/\partial P_{0}\right) _{T_{0}}}{\Delta
S-(T-T_{0})\Delta\left( \partial S/\partial T_{0}\right) _{P_{0}}
+(P-P_{0})\Delta\left( \partial V/\partial T_{0}\right) _{P_{0}}
+A\Delta\left( \partial\xi/\partial T_{0}\right) _{P_{0}}}.
\label{Clausius-Claperon_Eq/n}
\end{equation}
We now express $\left( \partial S/\partial P_{0}\right) _{T_{0}}$ in terms
of $\left( \partial V/\partial T_{0}\right) _{P_{0}}$ by using the Maxwell
relation $\partial(P,V)=\partial(T,S)$ and by using Eq.
(\ref{Subspace_Reduction}) ($F\rightarrow S,K\rightarrow V,x\rightarrow
P_{0},$ and $y\rightarrow T_{0})$ as follows
\[
\frac{\partial(S,T_{0})}{\partial(P_{0},T_{0})}=-\frac{\partial(S,T_{0}
)}{\partial(S,T)}\frac{\partial(P,V)}{\partial(P_{0},V)}\frac{\partial
(P_{0},V)}{\partial(P_{0},T_{0})},
\]
which immediately gives
\begin{equation}
\left( \frac{\partial S}{\partial P_{0}}\right) _{T_{0}}=-\frac{\left(
\partial P/\partial P_{0}\right) _{V}}{\left( \partial T/\partial
T_{0}\right) _{S}}\left( \frac{\partial V}{\partial T_{0}}\right) _{P_{0}},
\label{S_P0_V_T0_relation}
\end{equation}
which can now be used in the Clausius-Clapeyron equation to express it in
terms of measurable quantities assuming that $P,T$ can be measured. In
equilibrium, $T=T_{0},P=P_{0}$ and $A=0$, so that the above equation reduces
to the well-known version
\begin{equation}
\left. \frac{dT_{0}}{dP_{0}}\right\vert _{\text{coex}}^{\text{(eq)}}
=\frac{\Delta V}{\Delta S}, \label{Clausius-Claperon_Eq}
\end{equation}
as expected.
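Equation (\ref{S_P0_V_T0_relation}) passes the same test: in equilibrium the
factor $\left( \partial P/\partial P_{0}\right) _{V}/\left( \partial
T/\partial T_{0}\right) _{S}$ reduces to unity, and we recover the
equilibrium Maxwell relation
\[
\left( \frac{\partial S}{\partial P_{0}}\right) _{T_{0}}=-\left(
\frac{\partial V}{\partial T_{0}}\right) _{P_{0}}.
\]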
\section{\textbf{Response functions in Internal Equilibrium}
\label{Marker_Response_Functions}}
\subsection{$\bar{C}_{P}$ and $\bar{C}_{V}$}
The heat capacities with respect to the internal temperature at fixed $P$ or
$V$ are
\begin{align*}
\bar{C}_{P,\xi} & =T\left( \frac{\partial S}{\partial T}\right) _{P,\xi
},\ \ \bar{C}_{V,\xi}=T\left( \frac{\partial S}{\partial T}\right) _{V,\xi
},\\
\bar{C}_{P} & =T\left( \frac{\partial S}{\partial T}\right) _{P}
,\ \ \bar{C}_{V}=T\left( \frac{\partial S}{\partial T}\right) _{V}.
\end{align*}
We again start from the fundamental relation in Eq.
(\ref{Fundamental relation0}) and evaluate the derivative
\[
T\left( \frac{\partial S}{\partial T}\right) _{P,\xi}=T\left(
\frac{\partial S}{\partial T}\right) _{V,\xi}+T\left( \frac{\partial
S}{\partial V}\right) _{T,\xi}\left( \frac{\partial V}{\partial T}\right)
_{P,\xi},
\]
which can be rewritten in two equivalent forms
\begin{equation}
\overline{C}_{P,\xi}=\overline{C}_{V,\xi}+T\left[ \left( \frac{\partial
S}{\partial P}\right) _{T,\xi}\left( \frac{\partial V}{\partial T}\right)
_{P,\xi}\right] /\left( \frac{\partial V}{\partial P}\right) _{T,\xi}
\label{Internal_Heat_Capacities_Relation}
\end{equation}
or
\begin{equation}
\overline{C}_{P,\xi}=\overline{C}_{V,\xi}+T\left( \frac{\partial P}{\partial
T}\right) _{V,\xi}\left( \frac{\partial V}{\partial T}\right) _{P,\xi},
\label{Internal_Heat_Capacities_Relation_0}
\end{equation}
where we have used the Maxwell relation in Eq. (\ref{Maxwell_Relation_TSPV/N})
after we divide it by $\partial(V,T,\xi)$. As $\left( \partial S/\partial
P\right) _{T,\xi}$ is not directly measurable, the identity in Eq.
(\ref{Internal_Heat_Capacities_Relation_0}) is more useful from a practical
point of view. However, we need to transform the various derivatives in it to
the derivatives with respect to $T_{0}$ at fixed $P_{0}$ or $V$ by using the
transformation rules in Sect. \ref{Marker_transformation rules}, as it is the
pair $T_{0},P_{0}$ that can be manipulated by the observer. However, the
identities still contain $\bar{C}_{P,\xi}$ and $\bar{C}_{V,\xi}$, which are
defined with respect to $T$, and not with respect to $T_{0}$. Therefore, we
now turn to heat capacities obtained as a derivative with respect to $T_{0}$.
\subsection{$C_{P}$ and $C_{V}$}
From Eq. (\ref{Heat_Transfer}), we have
\begin{align}
C_{P} & \equiv\left( \frac{\partial Q}{\partial T_{0}}\right) _{P_{0}
}\equiv T\left( \frac{\partial S}{\partial T_{0}}\right) _{P_{0}}
,C_{V}\equiv\left( \frac{\partial Q}{\partial T_{0}}\right) _{V}\equiv
T\left( \frac{\partial S}{\partial T_{0}}\right) _{V}, \label{Heat_Capacity}
\\
C_{P,\xi} & \equiv\left( \frac{\partial Q}{\partial T_{0}}\right)
_{P_{0},\xi}\equiv T\left( \frac{\partial S}{\partial T_{0}}\right)
_{P_{0},\xi},C_{V,\xi}\equiv\left( \frac{\partial Q}{\partial T_{0}}\right)
_{V,\xi}\equiv T\left( \frac{\partial S}{\partial T_{0}}\right) _{V,\xi}.
\label{Heat_Capacity_Xi}
\end{align}
It would have been more appropriate to express the capacities $C_{P}$ and
$C_{P,\xi}$ as $C_{P_{0}}$ and $C_{P_{0},\xi}$, but we will use the simpler
notation. This should cause no confusion. Introducing the expansion
coefficients
\begin{equation}
\alpha_{P}\equiv\frac{1}{V}\left( \frac{\partial V}{\partial T_{0}}\right)
_{P_{0}},\alpha_{P,\xi}\equiv\frac{1}{V}\left( \frac{\partial V}{\partial
T_{0}}\right) _{P_{0},\xi}, \label{Expansion_Coefficients}
\end{equation}
we find that
\[
\frac{C_{P}}{\alpha_{P}}=TV\frac{\partial(S,P_{0})/\partial(T_{0},P_{0}
)}{\partial(V,P_{0})/\partial(T_{0},P_{0})}=TV\frac{\partial(S,P_{0}
)}{\partial(V,P_{0})}=TV\left( \frac{\partial S}{\partial V}\right) _{P_{0}
}.
\]
The same discussion can be applied to $C_{P,\xi}$ and $\alpha_{P,\xi}$ with a
similar result:
\[
\frac{C_{P,\xi}}{\alpha_{P,\xi}}=TV\frac{\partial(S,P_{0},\xi)/\partial
(T_{0},P_{0},\xi)}{\partial(V,P_{0},\xi)/\partial(T_{0},P_{0},\xi)}
=TV\frac{\partial(S,P_{0},\xi)}{\partial(V,P_{0},\xi)}=TV\left(
\frac{\partial S}{\partial V}\right) _{P_{0},\xi}.
\]
Let us now consider the relation between $C_{P,\xi}$ and $C_{V,\xi}$ and
between $C_{P}$ and $C_{V}$, for which we consider $S$ as a function of
$T,V$ and $\xi$, which follows from Eq. (\ref{Energy_relation_Internal/n}),
so that
\begin{equation}
dS=\frac{\partial S}{\partial T}dT+\frac{\partial S}{\partial V}
dV+\frac{\partial S}{\partial\xi}d\xi. \label{Fundamental relation0}
\end{equation}
Therefore,
\[
\left( \frac{\partial S}{\partial T_{0}}\right) _{P_{0},\xi}=\left(
\frac{\partial S}{\partial T}\right) _{V,\xi}\left( \frac{\partial
T}{\partial T_{0}}\right) _{P_{0},\xi}+\left( \frac{\partial S}{\partial
V}\right) _{T,\xi}\left( \frac{\partial V}{\partial T_{0}}\right)
_{P_{0},\xi}.
\]
Now, using Eq. (\ref{TR_TP0}), we have
\begin{equation}
\left( \frac{\partial S}{\partial T}\right) _{V,\xi}=\left( \frac{\partial
S}{\partial T_{0}}\right) _{V,\xi}/\left( \frac{\partial T}{\partial T_{0}
}\right) _{V,\xi}. \label{S_T_V_derivative}
\end{equation}
Similarly, using the Maxwell relation in Eq. (\ref{Maxwell_Helmholtz/n}) we
have
\begin{equation}
\left( \frac{\partial S}{\partial V}\right) _{T,\xi}=\frac{\partial
(V,P,\xi)}{\partial(V,T_{0},\xi)}\frac{\partial(V,T_{0},\xi)}{\partial
(V,T,\xi)}=\left( \frac{\partial P}{\partial T_{0}}\right) _{V,\xi}/\left(
\frac{\partial T}{\partial T_{0}}\right) _{V,\xi}. \label{S_V_T_derivative}
\end{equation}
We thus finally obtain
\[
\left( \frac{\partial S}{\partial T_{0}}\right) _{P_{0},\xi}\left(
\frac{\partial T}{\partial T_{0}}\right) _{V,\xi}=\left( \frac{\partial
S}{\partial T_{0}}\right) _{V,\xi}\left( \frac{\partial T}{\partial T_{0}
}\right) _{P_{0},\xi}+\left( \frac{\partial P}{\partial T_{0}}\right)
_{V,\xi}\left( \frac{\partial V}{\partial T_{0}}\right) _{P_{0},\xi}.
\]
After multiplying by $T$ on both sides, we obtain the desired relation between
$C_{P,\xi}$ and $C_{V,\xi}$ for the non-equilibrium case:
\begin{equation}
C_{P,\xi}\left( \frac{\partial T}{\partial T_{0}}\right) _{V,\xi}=C_{V,\xi
}\left( \frac{\partial T}{\partial T_{0}}\right) _{P_{0},\xi}+T\left(
\frac{\partial P}{\partial T_{0}}\right) _{V,\xi}\left( \frac{\partial
V}{\partial T_{0}}\right) _{P_{0},\xi}.
\label{Internal_Heat_Capacities_T0_Relation}
\end{equation}
This relation generalizes the following standard equilibrium relation
\[
C_{P}^{\text{eq}}=C_{V}^{\text{eq}}+T_{0}\left( \frac{\partial P_{0}
}{\partial T_{0}}\right) _{V}\left( \frac{\partial V}{\partial T_{0}
}\right) _{P_{0}},
\]
obtained by setting
\[
\left( \frac{\partial T}{\partial T_{0}}\right) _{V,\xi}=\left(
\frac{\partial T}{\partial T_{0}}\right) _{P_{0},\xi}=1.
\]
We can obtain a standard form of the above heat capacity relations as in Eq.
(\ref{Internal_Heat_Capacities_Relation}) as follows
\[
C_{V,\xi}=T\frac{\partial(S,V,\xi)}{\partial(T_{0},V,\xi)}=T\frac
{\partial(S,V,\xi)/\partial(T_{0},P_{0},\xi)}{\partial(T_{0},V,\xi
)/\partial(T_{0},P_{0},\xi)}.
\]
We thus finally have
\begin{equation}
C_{P,\xi}=C_{V,\xi}+T\frac{(\partial S/\partial P_{0})_{T_{0},\xi}(\partial
V/\partial T_{0})_{P_{0},\xi}}{(\partial V/\partial P_{0})_{T_{0},\xi}}
=C_{V,\xi}+T\left( \frac{\partial S}{\partial V}\right) _{T_{0},\xi}\left(
\frac{\partial V}{\partial T_{0}}\right) _{P_{0},\xi},
\label{Internal_Heat_Capacities_T0_Relation00}
\end{equation}
which is an extension of Eq. (\ref{Internal_Heat_Capacities_Relation}).
Although tedious, it is straightforward to show that this relation is
identical to the above relation. One needs to evaluate $\left( \partial
S/\partial V\right) _{T_{0},\xi}$ as follows
\begin{align*}
\left( \frac{\partial S}{\partial V}\right) _{T_{0},\xi} & =\frac
{\partial(S,T_{0},\xi)}{\partial(V,T,\xi)}\frac{\partial(V,T,\xi)}
{\partial(V,T_{0},\xi)}\\
& =\left( \frac{\partial T}{\partial T_{0}}\right) _{V,\xi}\left[ \left(
\frac{\partial S}{\partial V}\right) _{T,\xi}\left( \frac{\partial T_{0}
}{\partial T}\right) _{V,\xi}-\left( \frac{\partial S}{\partial T}\right)
_{V,\xi}\left( \frac{\partial T_{0}}{\partial V}\right) _{T,\xi}\right] ,
\end{align*}
where we must now use Eqs. (\ref{S_V_T_derivative}) and
(\ref{S_T_V_derivative}). We finally obtain
\begin{equation}
\left( \frac{\partial S}{\partial V}\right) _{T_{0},\xi}=\left(
\frac{\partial T}{\partial T_{0}}\right) _{V,\xi}\left[ \left(
\frac{\partial P}{\partial T_{0}}\right) _{V,\xi}\left( \frac{\partial
T_{0}}{\partial T}\right) _{V,\xi}\left( \frac{\partial T_{0}}{\partial
T}\right) _{V,\xi}-\left( \frac{\partial S}{\partial T_{0}}\right) _{V,\xi
}\left( \frac{\partial T_{0}}{\partial T}\right) _{V,\xi}\left(
\frac{\partial T_{0}}{\partial V}\right) _{T,\xi}\right] .
\label{S_V_T0_derivative}
\end{equation}
The equivalence is now established by the use of the permutation property
given in Eq. (\ref{Permutation_Property}). In a similar fashion, we find that
\begin{equation}
C_{P}=C_{V}+T\left( \frac{\partial S}{\partial V}\right) _{T_{0}}\left(
\frac{\partial V}{\partial T_{0}}\right) _{P_{0}},
\label{CP_CV_relation_full}
\end{equation}
where we must use, see Sect. \ref{Marker_Subspace},
\begin{equation}
\left( \frac{\partial S}{\partial V}\right) _{T_{0}}=\left( \frac{\partial
T}{\partial T_{0}}\right) _{V}\left[ \left( \frac{\partial P}{\partial
T_{0}}\right) _{V}\left( \frac{\partial T_{0}}{\partial T}\right)
_{V}\left( \frac{\partial T_{0}}{\partial T}\right) _{V}-\left(
\frac{\partial S}{\partial T_{0}}\right) _{V}\left( \frac{\partial T_{0}
}{\partial T}\right) _{V}\left( \frac{\partial T_{0}}{\partial V}\right)
_{T}\right] \label{S_V_T0_only_derivative}
\end{equation}
obtained in a similar fashion as Eq. (\ref{S_V_T0_derivative}).
It is important at this point to relate $C_{P}$ with $C_{P,\xi}$ and $C_{V}$
with $C_{V,\xi}.$ For this, it is convenient to consider the differential $dS$
by treating $S$ as a function of $T_{0},P_{0}$ and $\xi$. We find that
\begin{equation}
C_{P}=C_{P,\xi}+T\left( \frac{\partial S}{\partial\xi}\right) _{T_{0},P_{0}
}\left( \frac{\partial\xi}{\partial T_{0}}\right) _{P_{0}},\ \ \ C_{V}
=C_{V,\xi}+T\left( \frac{\partial S}{\partial\xi}\right) _{P_{0},V}\left(
\frac{\partial\xi}{\partial T_{0}}\right) _{V}. \label{C_C_Xi_Relation}
\end{equation}
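The first of these follows, for example, by writing
\[
dS=\left( \frac{\partial S}{\partial T_{0}}\right) _{P_{0},\xi}dT_{0}
+\left( \frac{\partial S}{\partial P_{0}}\right) _{T_{0},\xi}dP_{0}
+\left( \frac{\partial S}{\partial\xi}\right) _{T_{0},P_{0}}d\xi,
\]
dividing by $dT_{0}$ at fixed $P_{0}$, and multiplying through by $T$; the
second relation follows in the same manner by treating $S$ as a function of
an appropriate set of variables containing $V$ and $\xi$.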
\subsection{Compressibilities $K_{T}$ and $K_{S}$}
The two important isothermal compressibilities are
\[
K_{T}\equiv-\frac{1}{V}\left( \frac{\partial V}{\partial P_{0}}\right)
_{T_{0}},\ \ K_{T,\xi}\equiv-\frac{1}{V}\left( \frac{\partial V}{\partial
P_{0}}\right) _{T_{0},\xi},
\]
which we need to relate to the corresponding adiabatic compressibilities
\[
K_{S}\equiv-\frac{1}{V}\left( \frac{\partial V}{\partial P_{0}}\right)
_{S},\ \ K_{S,\xi}\equiv-\frac{1}{V}\left( \frac{\partial V}{\partial P_{0}
}\right) _{S,\xi}.
\]
However, we first consider the relation between the compressibility and the
expansion coefficient. We find that
\[
\frac{K_{T}}{\alpha_{P}}=\frac{\partial(V,T_{0})/\partial(T_{0},P_{0}
)}{\partial(V,P_{0})/\partial(T_{0},P_{0})}=\frac{\partial(V,T_{0})}
{\partial(V,P_{0})}=\left( \frac{\partial T_{0}}{\partial P_{0}}\right)
_{V}.
\]
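In derivative form this is the familiar cyclic (triple-product) rule among
$T_{0},P_{0}$ and $V$,
\[
\left( \frac{\partial V}{\partial P_{0}}\right) _{T_{0}}\left(
\frac{\partial P_{0}}{\partial T_{0}}\right) _{V}\left( \frac{\partial
T_{0}}{\partial V}\right) _{P_{0}}=-1,
\]
here obtained directly from the Jacobian manipulations.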
The same discussion can be applied to $K_{T,\xi}$ and $\alpha_{P,\xi}$ with a
similar result:
\[
\frac{K_{T,\xi}}{\alpha_{P,\xi}}=\left( \frac{\partial T_{0}}{\partial P_{0}
}\right) _{V,\xi}.
\]
The relations between $K_{T}$ and $K_{T,\xi}$ and between $K_{S}$ and
$K_{S,\xi}$ are obtained by treating $V$ as a function of $T_{0},P_{0}$ and
$\xi$ and of $S,P_{0}$ and $\xi$, respectively. Using
\begin{align}
dV & =\left( \frac{\partial V}{\partial T_{0}}\right) _{P_{0},\xi}
dT_{0}+\left( \frac{\partial V}{\partial P_{0}}\right) _{T_{0},\xi}
dP_{0}+\left( \frac{\partial V}{\partial\xi}\right) _{T_{0},P_{0}}
d\xi,\label{V_T0_P0_Relation}\\
dV & =\left( \frac{\partial V}{\partial S}\right) _{P_{0},\xi}dS+\left(
\frac{\partial V}{\partial P_{0}}\right) _{S,\xi}dP_{0}+\left(
\frac{\partial V}{\partial\xi}\right) _{S,P_{0}}d\xi, \label{V_S_P0_Relation}
\end{align}
we find that
\begin{equation}
K_{T}=K_{T,\xi}-\frac{1}{V}\left( \frac{\partial V}{\partial\xi}\right)
_{T_{0},P_{0}}\left( \frac{\partial\xi}{\partial P_{0}}\right) _{T_{0}
},\ \ K_{S}=K_{S,\xi}-\frac{1}{V}\left( \frac{\partial V}{\partial\xi
}\right) _{S,P_{0}}\left( \frac{\partial\xi}{\partial P_{0}}\right) _{S},
\label{K_K_Xi_Relation}
\end{equation}
which is similar to the corresponding relations for the heat capacities in Eq.
(\ref{C_C_Xi_Relation}). We similarly find that
\begin{equation}
\alpha_{P}=\alpha_{P,\xi}-\frac{1}{V}\left( \frac{\partial V}{\partial\xi
}\right) _{T_{0},P_{0}}\left( \frac{\partial\xi}{\partial T_{0}}\right)
_{P_{0}}. \label{Alpha_Alpha_Xi_Relation}
\end{equation}
Let us consider $\left( \partial V/\partial P_{0}\right) _{T_{0},\xi}$:
\[
\left( \frac{\partial V}{\partial P_{0}}\right) _{T_{0},\xi}=\frac
{\partial(V,T_{0},\xi)}{\partial(P_{0},T_{0},\xi)}=\frac{\partial(V,S,\xi
)}{\partial(P_{0},S,\xi)}\frac{\partial(P_{0},S,\xi)}{\partial(P_{0},T_{0}
,\xi)}\frac{\partial(V,T_{0},\xi)}{\partial(V,S,\xi)}=\left( \frac{\partial
V}{\partial P_{0}}\right) _{S,\xi}\frac{C_{P,\xi}}{C_{V,\xi}}.
\]
Similarly, we find that, see Sect. \ref{Marker_Subspace},
\[
\left( \frac{\partial V}{\partial P_{0}}\right) _{T_{0}}=\left(
\frac{\partial V}{\partial P_{0}}\right) _{S}\frac{C_{P}}{C_{V}}.
\]
Thus, we have the standard identity for both kinds of compressibility
\begin{equation}
\frac{C_{P,\xi}}{C_{V,\xi}}\equiv\frac{K_{T,\xi}}{K_{S,\xi}},\ \ \frac{C_{P}
}{C_{V}}\equiv\frac{K_{T}}{K_{S}}. \label{Response_Ratios}
\end{equation}
Let us again consider $K_{S,\xi}$. Rewriting
\[
\left( \frac{\partial V}{\partial P_{0}}\right) _{S,\xi}=\frac
{\partial(V,S,\xi)/\partial(P_{0},T_{0},\xi)}{\partial(P_{0},S,\xi
)/\partial(P_{0},T_{0},\xi)}=\left( \frac{\partial V}{\partial P_{0}}\right)
_{T_{0},\xi}-\frac{(\partial S/\partial P_{0})_{T_{0},\xi}(\partial V/\partial
T_{0})_{P_{0},\xi}}{(\partial S/\partial T_{0})_{P_{0},\xi}},
\]
we find that
\[
K_{T,\xi}\equiv K_{S,\xi}-\frac{(\partial S/\partial P_{0})_{T_{0},\xi
}(\partial V/\partial T_{0})_{P_{0},\xi}}{V(\partial S/\partial T_{0}
)_{P_{0},\xi}}=K_{S,\xi}-\frac{1}{V}(\partial S/\partial P_{0})_{T_{0},\xi
}(\partial V/\partial S)_{P_{0},\xi}.
\]
\section{\textbf{Prigogine-Defay Ratio}\label{Marker_PD_Ratio}}
Let us consider Figs. \ref{Fig_GlassTransition_V} and
\ref{Fig_GlassTransition_G} again that describe various kinds of glass
transitions: the apparent transitions at $T_{0\text{G}}$\ (point D) and
$T_{0\text{g}}^{(\text{A})}$\ (point C) and the conventional transitions at
$T_{0\text{G}}$\ (point D) and $T_{0\text{g}}$\ (point B). From the discussion
in Sect. \ref{Marker_Glass_Transitions}, we know that the Gibbs free energies
have a discontinuity between the two states involved at the apparent
transitions. Even the volumes and the entropies exhibit discontinuities at
these transitions. On the other hand, the Gibbs free energies, volumes and
entropies have no discontinuities at the conventional transitions at
$T_{0\text{G}}$ and $T_{0\text{g}}$ due to the continuity of the state. Let us
introduce the difference
\begin{equation}
\Delta q\equiv q_{\text{I}}-q_{\text{II}} \label{G_L_Difference}
\end{equation}
for any quantity $q$ at a given $T_{0},P_{0}$ in the two possible states I and
II. For the apparent glass transition at $T_{0\text{G}}$,
$q_{\text{I}},q_{\text{II}}$ are the values of $q$ in GL and L, respectively,
at $T_{0\text{G}}$; for the apparent glass transition at
$T_{0\text{g}}^{(\text{A})}$, $q_{\text{I}},q_{\text{II}}$ are the values of
$q$ in gL and L, respectively, at $T_{0\text{g}}^{(\text{A})}$. For the
conventional glass transition at $T_{0\text{G}}$, $q_{\text{I}},q_{\text{II}}$
are the values of $q$ in the glass GL and gL, respectively, at
$T_{0\text{G}}$; for the precursory glass transition at $T_{0\text{g}}$,
$q_{\text{I}},q_{\text{II}}$ are the values of $q$ in gL and L, respectively,
at $T_{0\text{g}}$. These states are summarized in the Table below:
\[
\begin{tabular}
[c]{lcccc}
\multicolumn{5}{c}{Table: Various States}\\
& Apparent $T_{0\text{G}}$ & Apparent $T_{0\text{g}}^{(\text{A})}$ &
Conventional $T_{0\text{g}}$ & Conventional $T_{0\text{G}}$\\
I & GL & gL & gL & GL\\
II & L & L & L & gL
\end{tabular}
\]
In terms of the discontinuities $\Delta C_{P},\Delta K_{T}$ and $\Delta
\alpha_{P}$, the Prigogine-Defay ratio \cite{Prigogine-Defay} is traditionally
defined as
\cite{Prigogine-Defay,Davies,Goldstein,DiMarzio,Gupta,Gutzow-Pi,Nemilov,Garden}
\[
\Pi^{\text{trad}}\equiv\frac{\Delta C_{P}\Delta K_{T}}{VT_{0}(\Delta\alpha
_{P})^{2}},
\]
where it is assumed that the volume is the same in both states at $T_{0}
,P_{0}$, as is evident from earlier work. As we will see below, the volume is
normally not continuous at the apparent glass transitions, used in most
experimental and theoretical analyses of the glass transition. To allow for
this possibility, we will consider the following equivalent definition of the
Prigogine-Defay ratio in this work:
\begin{equation}
\Pi\equiv\frac{\Delta C_{P}\Delta K_{T}}{T_{0}(\Delta V\alpha_{P}
)(\Delta\alpha_{P})}, \label{P-D_Ratio}
\end{equation}
where we have absorbed $V$ in one of the $\Delta\alpha_{P}$-factors. It is
clear that $\Pi$ is not different from $\Pi^{\text{trad}}$ when the volume is
the same, as happens for conventional transitions.
As the experimentalists have no control over the internal variables, and can
only manipulate the observables $\mathbf{X}$ by controlling the fields
$\mathbf{y}_{0}$ of the medium, we will discuss the evaluation of the
Prigogine-Defay ratio in the subspace of $\mathbf{y}_{0}$ of the complete
thermodynamic space of $\mathbf{y}_{0},\mathbf{a}_{0}$. We will consider the
simplest possible case in which the subspace reduces to the $T_{0}$-$P_{0}$
plane. Therefore, we will restrict ourselves to this plane in the following,
knowing very well that the GL and gL are also determined by the set
$\boldsymbol{\xi}$ of internal variables; see Sect. \ref{Marker_Subspace}. We
will consider the general case of several internal variables $\xi_{k}$,
$k=1,2,\cdots,n$.
\subsection{Conventional Transitions at $T_{0\text{g}}$ and $T_{0\text{G}}$}
We will first consider the Prigogine-Defay ratio $\Pi_{\text{g}}$ at the
conventional transitions at points B and D (see Figs.
\ref{Fig_GlassTransition_V} and \ref{Fig_GlassTransition_G}). The continuity
of the state across B and D means that $E,V$ and $S$ remain continuous across
the conventional transitions at B and D. This is consistent with the
continuity of the Gibbs free energy. Let us first consider the transition at
B, where the relaxation time $\tau$ of the system becomes equal to the
observation time-scale $\tau_{\text{obs}}$, so that both states gL and L
remain in equilibrium with the medium. Thus, $T=T_{0},P=P_{0}$, and
$\mathbf{A}=\mathbf{A}_{0}=0$ for both states at B. Therefore, there is no
need to consider the internal variables in the Gibbs free energy, as they are
not independent variables. Moreover, $V=(\partial G/\partial P_{0})_{T_{0}}$
and $S=-(\partial G/\partial T_{0})_{P_{0}}$. Thus, the Gibbs free energy and
its derivatives with respect to $T_{0},P_{0}$ are continuous at B; the second
derivatives need not be. It is clear that B represents a point that resembles
a continuous transition in equilibrium; it turns into a glass transition curve
$T_{0\text{g}}(P_{0})$ of continuous transitions in the $T_{0}$ -$P_{0}$ plane.
For the transition at D, we have a glass GL on the low-temperature side, and
gL at the high temperature side; both states are out of equilibrium and have
the same temperature $T(t)$ and pressure $P(t)$, different from $T_{0},P_{0}$,
respectively at the transition. Similarly, $A(t)\neq0$ is the same in both
states. The important characteristics of the conventional transitions are the
continuity of $E,V$ and $S$ at B and D. We now follow the consequences of
these continuities.
\subsubsection{Continuity of Volume}
From the continuity of the volume, we have
\begin{equation}
d\Delta_{\text{g}}\ln V=\Delta_{\text{g}}\left( \frac{\partial\ln V}{\partial
T_{0}}\right) _{P_{0}}dT_{0}+\Delta_{\text{g}}\left( \frac{\partial\ln
V}{\partial P_{0}}\right) _{T_{0}}dP_{0}=0, \label{Volume_Continuity}
\end{equation}
where $\Delta_{\text{g}}q$ denotes the difference in Eq.
(\ref{G_L_Difference}) at the conventional glass transitions, and the
derivatives are also
evaluated at the transition points. This equation can be written in terms of
the compressibilities and the expansion coefficients in the two states at the
glass transition temperature $T_{0\text{g}}$ or $T_{0\text{G}}$:
\begin{equation}
\left. \frac{dT_{0}}{dP_{0}}\right\vert _{\text{tr}}=\frac{\Delta_{\text{g}
}K_{T}}{\Delta_{\text{g}}\alpha_{P}}; \label{Transition_slope_V_1}
\end{equation}
the isothermal compressibility $K_{T}$ and the isobaric expansion coefficient
$\alpha_{P}$\ are given in Eqs. (\ref{Heat_Capacity}) and
(\ref{Expansion_Coefficients}), respectively, and can be expressed in terms of
the derivatives of the internal variable $\boldsymbol{\xi}$, such as given in
Eqs. (\ref{K_K_Xi_Relation}) and (\ref{Alpha_Alpha_Xi_Relation}) for a single
internal variable $\xi$. We make no assumption about these $\xi$-derivatives,
such as their vanishing or any assumption about freezing of $\xi$ at its value
at B; indeed, we expect $\xi$ to change continuously over BC. Of course, we
must remember that $\xi$ is an independent thermodynamic variable in gL and GL
states only, and not in the L state \cite{Gujrati-Non-Equilibrium-II}. The
slope equation (\ref{Transition_slope_V_1}) determines the variation of
$T_{0\text{g}}$ or $T_{0\text{G}}$ with the medium pressure $P_{0}$ along the
glass transition curve $T_{0\text{g,G}}(P_{0})$ in the $T_{0}$ -$P_{0}$ plane,
regardless of $\boldsymbol{\xi}.$ The form of the above equation does not
depend on the number of internal variables, provided we use the proper
definitions of $K_{T}$ and $\alpha_{P}$ as given in Eqs. (\ref{Heat_Capacity})
and (\ref{Expansion_Coefficients}), respectively. Its form follows from the
continuity of the volume at the conventional glass transition.
\subsubsection{Continuity of Entropy}
From the continuity of the entropy at $T_{0\text{g}}$, we similarly have
\begin{equation}
d\Delta_{\text{g}}S=\Delta_{\text{g}}\left( \frac{\partial S}{\partial T_{0}
}\right) _{P_{0}}dT_{0}+\Delta_{\text{g}}\left( \frac{\partial S}{\partial
P_{0}}\right) _{T_{0}}dP_{0}=0, \label{Entropy_Continuity}
\end{equation}
from which we obtain, at the precursory glass transition at B,
\begin{equation}
\left. \frac{dT_{0\text{g}}}{dP_{0}}\right\vert _{\text{tr}}=\frac
{T_{0}\Delta_{\text{g}}(V\alpha_{P})}{\Delta_{\text{g}}C_{P}}=\frac
{V_{\text{g}}T_{0}\Delta_{\text{g}}\alpha_{P}}{\Delta_{\text{g}}C_{P}},
\label{Transition_slope_S_1}
\end{equation}
where we have used the equilibrium Maxwell relation $(\partial S/\partial
P_{0})_{T_{0}}=-(\partial V/\partial T_{0})_{P_{0}}=-V\alpha_{P}$; see Eq.
(\ref{Maxwell_Relations}) or Eq. (\ref{S_P0_V_T0_relation}) applied to this
case. Here $V_{\text{g}}$ is the common volume of gL and L at B and has been
taken out of $\Delta_{\text{g}}(V\alpha_{P})$. Again, this relation for the
slope is quite general, independent of the number of internal variables in gL
state at lower temperatures $T_{0}<T_{0\text{g}}$. Accordingly,
\begin{equation}
\Pi_{\text{g}}\equiv\frac{\Delta_{\text{g}}C_{P}\Delta_{\text{g}}K_{T}
}{V_{\text{g}}T_{0}(\Delta_{\text{g}}\alpha_{P})^{2}}=1,
\label{P-D_Ratio_Glass}
\end{equation}
as expected for equilibrium states. It is a consequence of the glass
transition being a continuous transition between equilibrium states at B. As
we will see below, it is not merely a consequence of the continuity of volume
and entropy simultaneously.
Let us now consider the glass transition at $T_{0\text{G}}$. It follows from
Eq. (\ref{Heat_Capacity}) that
\[
\Delta_{\text{g}}\left( \frac{\partial S}{\partial T_{0}}\right) _{P_{0}
}=\frac{\Delta_{\text{g}}C_{P}}{T}.
\]
In conjunction with Eq. (\ref{S_P0_V_T0_relation}), we find that
\[
\left. \frac{dT_{0\text{G}}}{dP_{0}}\right\vert _{\text{tr}}=\frac
{V_{\text{G}}T\Delta_{\text{g}}\alpha_{P}}{\Delta_{\text{g}}C_{P}}\frac{\left( \partial P/\partial P_{0}\right) _{V}}{\left( \partial
T/\partial T_{0}\right) _{S}},
\]
where $V_{\text{G}}$ is the common volume of gL and GL at D and has been taken
out of $\Delta_{\text{g}}(V\alpha_{P})$. We finally obtain
\begin{equation}
\Pi_{\text{G}}\equiv\frac{\Delta_{\text{g}}C_{P}\Delta_{\text{g}}K_{T}}{V_{\text{G}}T_{0}(\Delta_{\text{g}}\alpha_{P})^{2}}=\frac{T}{T_{0}}\frac{\left( \partial P/\partial P_{0}\right) _{V}}{\left( \partial
T/\partial T_{0}\right) _{S}}\neq1 \label{P-D_Ratio_Glass_Genuine}
\end{equation}
for the conventional glass transition at D. The deviation of $\Pi_{\text{G}}$
from unity is independent of the number of internal variables. It will be
different from unity even if we have no internal variables.
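The result (\ref{P-D_Ratio_Glass_Genuine}) follows directly by equating the
volume-continuity slope, Eq. (\ref{Transition_slope_V_1}), evaluated at
$T_{0\text{G}}$, with the entropy-continuity slope given above:
\[
\frac{\Delta_{\text{g}}K_{T}}{\Delta_{\text{g}}\alpha_{P}}=\frac{V_{\text{G}}T\Delta_{\text{g}}\alpha_{P}}{\Delta_{\text{g}}C_{P}}\frac{\left( \partial P/\partial P_{0}\right) _{V}}{\left( \partial T/\partial T_{0}\right) _{S}}\quad\Longrightarrow\quad\Pi_{\text{G}}=\frac{T}{T_{0}}\frac{\left( \partial P/\partial P_{0}\right) _{V}}{\left( \partial T/\partial T_{0}\right) _{S}}.
\]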
\subsection{Apparent Glass Transitions at $T_{0\text{g}}^{(\text{A})}$ and
$T_{0\text{G}}$}
Unfortunately, it is not a common practice to determine the Prigogine-Defay
ratio at the conventional transitions at temperatures $T_{0\text{g}}(P_{0})$
or $T_{0\text{G}}(P_{0})$, which resemble a continuous transition in that the
volume and entropy are continuous, along with the Gibbs free energy. In
experiments, one determines the ratio at apparent glass transitions either at
D or at $T_{0\text{g}}^{(\text{A})}(P_{0})$ in the glass transition region BD;
see Figs. \ref{Fig_GlassTransition_V} and \ref{Fig_GlassTransition_G}. In
these transitions, there are discontinuities in $G$, $E$, $V$, and $S$. The
extrapolated point C (see Fig. \ref{Fig_GlassTransition_V}) identifies the
apparent glass transition temperature $T_{0\text{g}}^{(\text{A})}(P_{0})$,
which cannot be treated as a transition temperature because the Gibbs free
energies in the two states (gL and L) are not equal, as is clearly seen in Fig.
\ref{Fig_GlassTransition_G}. It is clear from Fig. \ref{Fig_GlassTransition_V}
that the volume is also different in gL and L at the apparent glass transition
$T_{0\text{g}}^{(\text{A})}(P_{0})$. The discontinuity of the volume should
not be confused with the continuity of the \emph{extrapolated} volumes used to
determine the location of the phenomenological glass transition
$T_{0\text{g}}^{(\text{A})}(P_{0})$. The extrapolated glass volume does not represent the
physical volume of the glass at $T_{0\text{g}}^{(\text{A})}(P_{0})$ given by
the point on the curve BD in Fig. \ref{Fig_GlassTransition_V}. The
discontinuity is between the physical volumes of gL and L at
$T_{0\text{g}}^{(\text{A})}(P_{0})$. We already know that both the entropy and the enthalpy
of the glass continue to decrease during vitrification as the system relaxes
\cite{Gujrati-Non-Equilibrium-I}. Indeed, the volume of the glass or gL also
relaxes towards that of the supercooled liquid L. This will also be true at
$T_{0\text{g}}^{(\text{A})}(P_{0})$ so that the volume and the entropy of gL
are higher than their values in the supercooled liquid at
$T_{0\text{g}}^{(\text{A})}(P_{0})$ in a vitrification process. The same sort of
discontinuities also occur at D. In the following, we will take into account
these discontinuities in the volume and entropy in determining the
Prigogine-Defay ratio at the apparent glass transitions at
$T_{0\text{G}}(P_{0})$ and $T_{0\text{g}}^{(\text{A})}(P_{0})$. The discontinuity of volume
$\Delta_{\text{g}}^{(\text{A})}V$ ($\neq0$) causes a modification of Eq.
(\ref{Volume_Continuity}) at these transitions:
\begin{equation}
\left. \frac{dT_{0}}{dP_{0}}\right\vert _{\text{tr}}=\frac{\delta\ln
V_{P}^{(\text{A})}+\Delta_{\text{g}}^{(\text{A})}K_{T}}{\Delta_{\text{g}}^{(\text{A})}\alpha_{P}}=\frac{\Delta_{\text{g}}^{(\text{A})}K_{T}}{\Delta_{\text{g}}^{(\text{A})}\alpha_{P}}(1+\delta_{\text{g}}^{(\text{A})}V_{P}) \label{Transition_slope_V_A}
\end{equation}
in terms of
\begin{equation}
\delta\ln V_{P}^{(\text{A})}\equiv\left. d\Delta_{\text{g}}^{(\text{A})}\ln
V/dP_{0}\right\vert _{\text{tr}} \label{lnV_Discontinuity}
\end{equation}
at $T_{0\text{g}}^{(\text{A})}$ or $T_{0\text{G}}$, as the case may be; the
three $\Delta_{\text{g}}^{(\text{A})}$'s are the difference $\Delta$\ in Eq.
(\ref{G_L_Difference}) evaluated at $T_{0\text{g}}^{(\text{A})}$ or
$T_{0\text{G}}$, and the new quantity $\delta_{\text{g}}^{(\text{A})}V_{P}$
has an obvious definition
\begin{equation}
\delta_{\text{g}}^{(\text{A})}V_{P}=\frac{\delta\ln V_{P}^{(\text{A})}}{\Delta_{\text{g}}^{(\text{A})}K_{T}} \label{Volume_Correction_term}
\end{equation}
at the appropriate temperature. This contribution would vanish under the
approximation $\Delta_{\text{g}}^{(\text{A})}\ln V\simeq0$, or $\delta\ln
V_{P}^{(\text{A})}\simeq0$. The slope equation (\ref{Transition_slope_V_A})
must always be satisfied at the apparent glass transition temperature. The
quantity $\delta\ln V_{P}^{(\text{A})}$\ in it represents the variation of the
discontinuity
\[
\Delta_{\text{g}}^{(\text{A})}\ln V=\ln V_{\text{I}}(T_{0},P_{0})-\ln
V_{\text{II}}(T_{0},P_{0})
\]
with pressure along the apparent glass transition curve
$T_{0\text{g}}^{(\text{A})}(P_{0})$ or $T_{0\text{G}}(P_{0})$, and can also be found
experimentally. Indeed, we can treat $\Delta_{\text{g}}^{(\text{A})}\ln V$ as
a function of $P_{0\text{g}}\equiv P_{0}(T_{0\text{g}}^{(\text{A})})$ along
the transition curves. Then the contribution from the volume discontinuity
is given by
\begin{equation}
\delta\ln V_{P}^{(\text{A})}=\frac{1}{V_{\text{I}}}\left. \frac{dV_{\text{I}}(P_{0})}{dP_{0}}\right\vert _{\text{tr}}-\frac{1}{V_{\text{II}}}\left.
\frac{dV_{\text{II}}(P_{0})}{dP_{0}}\right\vert _{\text{tr}}.
\label{lnV_derivative_difference}
\end{equation}
We can use Eqs. (\ref{K_K_Xi_Relation}) and (\ref{Alpha_Alpha_Xi_Relation}) to
express the slope in terms of $\Delta_{\text{g}}K_{T,\xi}$ and\ $\Delta
_{\text{g}}\alpha_{P,\xi}$
\begin{equation}
\left. \frac{dT_{0}}{dP_{0}}\right\vert _{\text{tr}}=\frac{\delta\ln
V_{P}^{(\text{A})}+\Delta_{\text{g}}^{(\text{A})}K_{T,\xi}-V_{\xi,\text{I}}\left. \partial\xi/\partial P_{0}\right\vert _{\text{tr}}/V_{\text{I}}}{\Delta_{\text{g}}^{(\text{A})}\alpha_{P,\xi}-V_{\xi,\text{I}}\left.
\partial\xi/\partial T_{0}\right\vert _{\text{tr}}/V_{\text{I}}},
\label{Transition_slope_V_0}
\end{equation}
where $V_{\xi,\text{I}}$ represents the derivative $\left( \partial
V_{\text{I}}/\partial\xi\right) _{T_{0},P_{0}}$, and $V_{\text{I}}$ is the GL
volume at $T_{0\text{G}}$ or the gL volume at $T_{0\text{g}}^{(\text{A})}$.
The $\xi$-contribution from the L state is absent due to the vanishing of the
affinity $\mathbf{A}_{0}(=0)$ in the L state.
Let us now consider the differential of the entropy difference at the apparent
glass transition in the $T_{0}$-$P_{0}$ plane
\[
d\Delta_{\text{g}}^{(\text{A})}S=\Delta_{\text{g}}^{(\text{A})}\left(
\frac{\partial S}{\partial T_{0}}\right) _{P_{0}}dT_{0}+\Delta_{\text{g}}^{(\text{A})}\left( \frac{\partial S}{\partial P_{0}}\right) _{T_{0}}dP_{0},
\]
from which we find that
\begin{equation}
\left. \frac{dT_{0}}{dP_{0}}\right\vert _{\text{tr}}=\frac{\delta
S_{P}^{(\text{A})}-\Delta_{\text{g}}^{(\text{A})}\left( \partial S/\partial
P_{0}\right) _{T_{0}}}{\Delta_{\text{g}}^{(\text{A})}\left( \partial
S/\partial T_{0}\right) _{P_{0}}}, \label{Transition_slope_S_A}
\end{equation}
with
\begin{equation}
\delta S_{P}^{(\text{A})}\equiv\left. d\Delta_{\text{g}}^{(\text{A})}S/dP_{0}\right\vert _{\text{tr}}; \label{S_Discontinuity}
\end{equation}
it represents the rate of variation of the entropy discontinuity
\[
\Delta_{\text{g}}^{(\text{A})}S=S_{\text{I}}(T_{0},P_{0})-S_{\text{II}}(T_{0},P_{0})
\]
along the apparent glass transition curves. Following the steps in deriving
Eq. (\ref{lnV_derivative_difference}), we find that the contribution from the
entropy discontinuity is given by
\begin{equation}
\delta S_{P}^{(\text{A})}=\left. \frac{dS_{\text{I}}(P_{0})}{dP_{0}}\right\vert _{\text{tr}}-\left. \frac{dS_{\text{II}}(P_{0})}{dP_{0}}\right\vert _{\text{tr}}. \label{S_derivative_difference}
\end{equation}
The derivative $\left( \partial S_{\text{I}}/\partial P_{0}\right) _{T_{0}}$
in the second term in the numerator in Eq. (\ref{Transition_slope_S_A}) can be
manipulated as in Eq. (\ref{S_P0_V_T0_relation}):
\[
\frac{\partial(S,T_{0})}{\partial(P_{0},T_{0})}=-\frac{\partial(S,T_{0})}{\partial(V,P_{0})}\frac{\partial(P_{0},V)}{\partial(P_{0},T_{0})}=-\left(
\frac{\partial V}{\partial T_{0}}\right) _{P_{0}}\frac{\partial(S,T_{0})}{\partial(V,P_{0})},
\]
in which the last Jacobian reduces to unity under equilibrium by the use of
the Maxwell relation $\partial(V,P=P_{0})=$ $\partial(S,T=T_{0})$. We,
therefore, write
\[
\frac{\partial(S,T_{0})}{\partial(V,P_{0})}=\frac{(\partial P/\partial
P_{0})_{V}}{(\partial T/\partial T_{0})_{S}}=1+\delta S_{VS}^{\text{I}}
\]
for the glassy state; this equation also defines the modification $\delta
S_{VS}^{\text{I}}$ given by
\begin{equation}
\delta S_{VS}^{\text{I}}=\frac{(\partial P/\partial P_{0})_{V}}{(\partial
T/\partial T_{0})_{S}}-1 \label{Entropy_Correction_term}
\end{equation}
for the glassy state, where $T$ and $P$ are the internal temperature and pressure
of the glassy state. It vanishes under the approximation $T=T_{0}$ and $P=P_{0}$.
We now have
\[
\left( \frac{\partial S_{\text{I}}}{\partial P_{0}}\right) _{T_{0}}=-\left(
\frac{\partial V_{\text{I}}}{\partial T_{0}}\right) _{P_{0}}(1+\delta
S_{VS}^{\text{I}})=-V_{\text{I}}(1+\delta S_{VS}^{\text{I}})\alpha
_{P}^{\text{I}}.
\]
For the supercooled liquid, which represents an equilibrium state, we
evidently have ($S_{\text{II}}\equiv S_{\text{L}}$)
\[
\left( \frac{\partial S_{\text{L}}}{\partial P_{0}}\right) _{T_{0}}=-\left(
\frac{\partial V_{\text{L}}}{\partial T_{0}}\right) _{P_{0}}=-V_{\text{L}}\alpha_{P}^{\text{L}},
\]
so that
\[
\Delta_{\text{g}}^{(\text{A})}\left( \partial S/\partial P_{0}\right)
_{T_{0}}=-\Delta_{\text{g}}^{(\text{A})}(V\alpha_{P})-V_{\text{I}}\alpha
_{P}^{\text{I}}\delta S_{VS}^{\text{I}}.
\]
We now turn to the denominator in Eq. (\ref{Transition_slope_S_A}). For the
supercooled liquid state, whose temperature is $T_{0}$, we have
\[
\left( \frac{\partial S_{\text{L}}}{\partial T_{0}}\right) _{P_{0}}=\frac{C_{P}^{\text{L}}}{T_{0}};
\]
we must use $T_{0\text{g}}^{(\text{A})}$ or $T_{0\text{G}}$\ for $T_{0}$ to
evaluate this slope at the appropriate apparent glass transition. For the
glass, whose internal temperature is $T$, we have
\[
\left( \frac{\partial S_{\text{I}}}{\partial T_{0}}\right) _{P_{0}}=\frac{C_{P}^{\text{I}}}{T}\equiv(1+\delta T^{\text{I}})\frac{C_{P}^{\text{I}}}{T_{0}},
\]
where we have introduced a correction parameter
\begin{equation}
\delta T^{\text{I}}\equiv T_{0\text{g,G}}^{(\text{A})}/T-1,
\label{Temperature_Correction_term}
\end{equation}
with $T_{0\text{g,G}}^{(\text{A})}$ denoting $T_{0\text{g}}^{(\text{A})}$ or
$T_{0\text{G}}$\ as the case may be. Again, this modification term vanishes
under the approximation $T=T_{0}$. We thus find that
\[
T_{0}\Delta_{\text{g}}^{(\text{A})}\left( \partial S/\partial T_{0}\right)
_{P_{0}}=\Delta_{\text{g}}^{(\text{A})}C_{P}+C_{P}^{\text{I}}\delta
T^{\text{I}}.
\]
Equating the two different versions of the slope in Eqs.
(\ref{Transition_slope_V_A}) and (\ref{Transition_slope_S_A}), we have
\begin{align*}
\frac{\Delta_{\text{g}}^{(\text{A})}K_{T}}{\Delta_{\text{g}}^{(\text{A})}\alpha_{P}}(1+\delta_{\text{g}}^{(\text{A})}V_{P}) & =T_{0\text{g,G}}^{(\text{A})}\frac{\delta S_{P}^{(\text{A})}+\Delta_{\text{g}}^{(\text{A})}(V\alpha_{P})+V_{\text{I}}\alpha_{P}^{\text{I}}\delta S_{VS}^{\text{I}}}{\Delta_{\text{g}}^{(\text{A})}C_{P}+C_{P}^{\text{I}}\delta T^{\text{I}}}\\
& \equiv\frac{T_{0\text{g,G}}^{(\text{A})}\Delta_{\text{g}}^{(\text{A})}(V\alpha_{P})}{\Delta_{\text{g}}^{(\text{A})}C_{P}}(1+\delta^{\prime}\Pi_{\text{gA}}),
\end{align*}
where we have introduced a new quantity $\delta^{\prime}\Pi_{\text{gA}}$,
whose definition is obvious from the equality.
We finally find that the Prigogine-Defay ratio is given by
\begin{equation}
\Pi_{\text{gA}}\equiv\frac{\Delta_{\text{g}}^{(\text{A})}C_{P}\Delta
_{\text{g}}^{(\text{A})}K_{T}}{T_{0\text{g,G}}^{(\text{A})}\Delta_{\text{g}}^{(\text{A})}\alpha_{P}\Delta_{\text{g}}^{(\text{A})}(V\alpha_{P})}\equiv1+\delta\Pi_{\text{gA}}\equiv\frac{1+\delta^{\prime}\Pi_{\text{gA}}}{1+\delta_{\text{g}}^{(\text{A})}V_{P}} \label{P-D_Ratio_Glass_Apparent}
\end{equation}
at the apparent glass transition. Its complete form is given by
\begin{equation}
\Pi_{\text{gA}}=\frac{1+\left( V_{\text{I}}\alpha_{P}^{\text{I}}\delta
S_{VS}^{\text{I}}+\delta S_{P}^{(\text{A})}\right) /\Delta_{\text{g}}^{(\text{A})}(V\alpha_{P})}{(1+C_{P}^{\text{I}}\delta T^{\text{I}}/\Delta_{\text{g}}^{(\text{A})}C_{P})(1+\delta_{\text{g}}^{(\text{A})}V_{P})}.
\label{P-D_Ratio_Glass_Apparent_0}
\end{equation}
It should be obvious that the Prigogine-Defay ratio is itself a function of
time as it depends on time-dependent quantities such as
$\Delta_{\text{g}}^{(\text{A})}S$, $\delta T^{\text{I}}$, etc.
\subsubsection{Approximation A}
Let us assume that the discontinuities in the volume and entropy are
negligible or that the contributions $\delta\ln V_{P}^{(\text{A})}$ and
$\delta S_{P}^{(\text{A})}$\ are negligible. In that case, the Prigogine-Defay
ratio reduces to
\[
\Pi_{\text{gA}}\simeq\frac{1+V_{\text{I}}\alpha_{P}^{\text{I}}\delta
S_{VS}^{\text{I}}/\Delta_{\text{g}}^{(\text{A})}(V\alpha_{P})}{1+C_{P}^{\text{I}}\delta T^{\text{I}}/\Delta_{\text{g}}^{(\text{A})}C_{P}},
\]
and will have a value different from $1$. Thus, the continuity of volume and
entropy alone is not sufficient to yield $\Pi_{\text{gA}}=1$, as noted above.
If we further approximate $T\simeq T_{0}$ and $P\simeq P_{0}$, then $\delta
S_{VS}^{\text{I}}\simeq0$ and $\delta T^{\text{I}}\simeq0$, and we obtain
$\Pi_{\text{gA}}\simeq1$. This is expected as the approximations change the
apparent glass transition into a continuous transition. If, however, we only
assume $P\simeq P_{0}$, but allow $T$ to be different from $T_{0}$, then
\[
\delta S_{VS}^{\text{I}}\simeq\frac{1}{(\partial T/\partial T_{0})_{S_{\text{I}}}}-1,
\]
and we still have $\Pi_{\text{gA}}\neq1$.
\subsubsection{Approximation B}
We make no assumption about $\delta\ln V_{P}^{(\text{A})}$ and $\delta
S_{P}^{(\text{A})}$, but approximate $T\simeq T_{0}$ and $P\simeq P_{0}$. In
this case, $\delta S_{VS}^{\text{I}}\simeq0$ and $\delta T^{\text{I}}\simeq0$,
and we obtain
\[
\Pi_{\text{gA}}\simeq\frac{1+\delta S_{P}^{(\text{A})}/\Delta_{\text{g}}^{(\text{A})}(V\alpha_{P})}{1+\delta_{\text{g}}^{(\text{A})}V_{P}}.
\]
If, however, the approximation $T\simeq T_{0}$ is not valid, we have
\[
\Pi_{\text{gA}}\simeq\frac{1+\delta S_{P}^{(\text{A})}/\Delta_{\text{g}}^{(\text{A})}(V\alpha_{P})}{(1+C_{P}^{\text{I}}\delta T^{\text{I}}/\Delta_{\text{g}}^{(\text{A})}C_{P})(1+\delta_{\text{g}}^{(\text{A})}V_{P})}.
\]
In both cases, $\Pi_{\text{gA}}\neq1$.
\subsection{Comparison with Other Attempts for $\Pi$}
As far as we know, almost all previous attempts
\cite{Prigogine-Defay,Davies,Goldstein,DiMarzio,Gupta,Gutzow-Pi,Nemilov,Garden}
in the evaluation of $\Pi$ are based on treating the glass transition as a
\emph{direct} transition from L to GL; the structure is supposed to be almost
frozen in the latter. As we see from Figs. \ref{Fig_GlassTransition_V} and
\ref{Fig_GlassTransition_G}, this can only occur at C between L and the
extrapolated branch DC. At C, there will be a discontinuity between the values
of the internal variable $\xi$; it will take the equilibrium value
$\xi_{\text{C}}^{\text{eq}}$ in L, but will take a non-equilibrium value
$\xi_{\text{C}}^{\text{extra}}\neq\xi_{\text{C}}^{\text{eq}}$ obtained along
DC. Similarly, $A=A_{0}=0$ in L at C, while $A=A_{\text{C}}\neq0$ in the
extrapolated GL at C. As C is obtained by matching the volumes, the volume
remains continuous, but there is no reason to believe that the entropy will
remain continuous in this transition. The Gibbs free energy obviously remains
discontinuous in this transition.
However, we have been careful in not treating this transition as an apparent
transition above for the simple reason that there is no guarantee that the
branch DC can be described by vitrification thermodynamics at the constant
cooling rate $r$. To see it most easily, we observe that as the cooling rate
is gradually taken to be slower and slower, the transition point B gradually
moves towards C along BF. However, the analog of BD will most certainly not be
identical to DC for the simple reason that the state of L will continuously
change to gL so that the values of $\xi$ and $A$ in gL at C will be identical
to their values $\xi=$ $\xi_{\text{C}}^{\text{eq}}$\ and $A=0$ in L at C.
Moreover, there is no guarantee that the extrapolated branch DC can even
satisfy thermodynamics with known controllable parameters $T_{0},P_{0}$ and
$r$. To treat this ``transition'' as a glass transition requires some
approximation, which we have avoided.
\section{Conclusions}
We have followed the consequences of internal equilibrium to derive
generalizations of equilibrium thermodynamic relations, such as the Maxwell
relations, the Clausius-Clapeyron relation, and relations between response functions
(heat capacities, compressibilities, etc.) to non-equilibrium systems.
Non-equilibrium states are described not only by internal fields (temperature,
pressure, etc.) that are different from those of the medium, but also by
internal variables, which cannot be controlled from outside by the observer.
The observer can only control the observables. Thus, in this work, we have
also discussed how the thermodynamics should be described in the subspace of
the observables only. As glasses are a prime example of non-equilibrium
states, we have reviewed the notion of the glass transition. The frozen
structure known as the glass (GL) does not emerge directly out of the
equilibrium supercooled liquid (L). There is an intermediate non-equilibrium
state (gL) that is not yet frozen when it emerges continuously out of the
equilibrium liquid L. At a lower temperature, this state continuously turns
into GL. Because of this, we find that there is no one unique non-equilibrium
transition. We introduce four of the most conceptually useful transitions. At
two of them, which we term conventional glass transitions, the Gibbs free
energies and the states are continuous. Thus, they are the non-equilibrium
analog of the conventional continuous or second order transition between
equilibrium states. At the other two glass transitions, which we term apparent
glass transitions, not only the states but also the Gibbs free energies are
discontinuous. Because of this, these transitions are examples of a zeroth
order transition where the free energy is discontinuous. But there is no
transition in the system itself at the apparent glass transition as discussed
in Sect. \ref{Marker_Glass_Transitions}.
We briefly review the use of Jacobians which are found extremely useful in
obtaining the generalization of the Maxwell relations. There are many more
Maxwell relations than reported here; they can be easily constructed. We then
discuss various response functions and obtain relationships between them in
non-equilibrium states. Surprisingly, many of these relations look similar in
form to those found in equilibrium thermodynamics.
We finally evaluate the Prigogine-Defay ratio at the four possible glass
transitions. We find that the ratio is normally different from $1$, except at
the conventional glass transition at the highest temperature, where it is
always equal to $1$, regardless of the number of internal variables. We also
find that the continuity of volume and entropy is not a guarantee for $\Pi=1$.
We compare our analysis of $\Pi$ with those carried out by other workers.
\begin{acknowledgement}
P.P. Aung was supported by a summer internship from NSF through the University
of Akron REU\ site for Polymer Science.
\end{acknowledgement}
|
1,116,691,500,259 | arxiv | \section{Charginos and Neutralinos in Trileptons}
Supersymmetry has been a popular extension to the standard
model for several decades and numerous searches for
evidence of any of the superpartners have been carried out.
At hadron colliders, a unique signature for supersymmetry
comes in the trilepton (``three lepton'') final states. If
the masses lie in the correct region, proton-antiproton
collisions can produce charginos and neutralinos in
association:
\begin{equation}
p\bar{p} \rightarrow \tilde{\chi}_{1}^\pm \tilde{\chi}_2^0
\end{equation}
\noindent
with decay modes
\begin{equation}
\tilde{\chi}_{1}^\pm \rightarrow \ell^\pm \nu \tilde{\chi}_1^0 \hspace{1cm}
\tilde{\chi}_2^0 \rightarrow \ell^\pm \ell^\mp \tilde{\chi}_1^0
\end{equation}
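Combining the production and decay chains, the full final state is
\[
p\bar{p} \rightarrow \tilde{\chi}_{1}^\pm \tilde{\chi}_2^0 \rightarrow 3\ell + \nu + 2\tilde{\chi}_1^0 ,
\]
where the neutrino and the two lightest neutralinos escape detection, so the
experimental signature is three charged leptons plus missing transverse energy.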
In the standard model, trilepton final states are only produced
by rare processes (such as di-boson production), which means these searches
will naturally have small backgrounds. The challenge lies in
the inefficiency to uniquely identify all three leptons. The
solution is to use three search techniques: (1) observe
all three leptons; (2) observe two leptons and a third isolated
track; (3) observe two same-signed leptons. By combining all
three search methods and combinations of electrons, muons and
isolated tracks, the experiments improve the sensitivity to
discovery.
CDF has performed searches in 14 different channels~\cite{bib:cdftrilepton}
with luminosities ranging from 0.7 to 1.1 fb$^{-1}$.
While a slight excess of data
vs. background is observed, there is no strong evidence of
supersymmetry. Therefore limits on the production
cross-section times branching ratio can be set. CDF interprets
this within three different mSUGRA inspired models with
$m_0 = 60$ GeV: (A) mSUGRA;
(B) MSSM without slepton mixing;
(C) MSSM with lepton branching ratio set to same as W/Z.
For model (B) a limit on the $\tilde{\chi}_1^\pm$ mass greater than
130 GeV is set (Fig.~\ref{fig:trileptons}(a)).
D\O\ has performed four searches~\cite{bib:d0trilepton}
with luminosities of 1.0-1.1 fb$^{-1}$; no excess of data is observed.
Three mSUGRA-inspired models (with no slepton mixing)
are explored: (1) large $m_0$ where W/Z decays dominate;
(2) 3$\ell$-max with slepton mass slightly larger than the
$\tilde{\chi}_2^0$ mass;
(3) heavy squarks where scalar mass unification is relaxed
(Fig.~\ref{fig:trileptons}(b)).
For the 3$\ell$-max model, a limit of
$M(\tilde{\chi}_1^\pm)$ $>$ 141 GeV is found.
\begin{figure}
\unitlength1cm
\begin{picture}(15.0,7.0)
\put(0.0,0.2){\psfig{figure=excl_new_nomix.eps,height=6.25cm}}
\put(6.2,0){\psfig{figure=N52F6.eps,height=6.25cm}}
\put(4.5,2.5){\Large (a)}
\put(13.5,2.5){\Large (b)}
\end{picture}
\caption{Limits on SUSY production of charginos and neutralinos.
(a) CDF limit using a model of MSSM without slepton mixing and
$m_0 = 60$ GeV;
(b) D0 limit using three mSUGRA-inspired models.
\label{fig:trileptons}}
\end{figure}
\section{$W^\prime$}
Some extensions of the standard model predict the existence
of additional, heavy, gauge bosons. D0 has performed a search
for a $W^\prime$ decaying to an electron and
neutrino~\cite{bib:d0wprime} using a dataset of 0.9 fb$^{-1}$. Data
selection requires a high energy electron ($E_T > 30$ GeV),
large missing transverse energy (MET $>$ 30 GeV) and large
transverse mass ($M_T > 150$ GeV). Figure~\ref{fig:wprime}(a)
shows the transverse mass distribution (without $M_T$ cut)
for data, background and signal. D0 observes 630 events with
an expected background of $623 \pm 18^{+83}_{-75}$ events.
Therefore, a limit on a $W^\prime$ mass $>$ 965 GeV is set
assuming standard model couplings
(Fig.~\ref{fig:wprime}(b)).
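For reference, the transverse mass used in this selection is conventionally
built from the electron transverse energy, the missing transverse energy and
their azimuthal separation,
\[
M_T = \sqrt{2\, E_T^{e}\, E_T^{\rm miss} \left( 1 - \cos\Delta\phi_{e\nu} \right)} ,
\]
a standard definition quoted here for completeness rather than taken from the
analysis itself.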
\begin{figure}
\unitlength1cm
\begin{picture}(15.0,7.0)
\put(0.5,0){\psfig{figure=N48F03_3.eps,height=6.5cm}}
\put(8.5,0){\psfig{figure=N48F04.eps,height=6.5cm}}
\put(3.5,3.5){\bf \Large (a)}
\put(10.5,2.5){\bf \Large (b)}
\end{picture}
\caption{(a) Distribution of transverse mass without $M_T$
mass cut. Data is shown as points with error bars, background
is the solid histogram, while a sample signal with
$M_{W^\prime}$ = 500 GeV is represented by the open
histogram. (b) D0 limit on the $W^\prime \rightarrow e\nu$
cross-section times branching ratio as a function of the
$W^\prime$ mass.
\label{fig:wprime}}
\end{figure}
\section{$Z^\prime$}
CDF has performed a model-independent search for narrow
resonances decaying to an electron and a
positron~\cite{bib:cdfzprime} using 1.3~fb$^{-1}$ \ of data. They
scan the mass region 150-900 GeV in 4 GeV mass bins looking
for an excess of data over predicted background
(Fig~\ref{fig:zprime}(a)). The small
excesses seen are consistent with
statistical fluctuations. This is interpreted to exclude
a standard model type $Z^\prime$ with mass below
923 GeV. Additional models are shown in Fig.~\ref{fig:zprime}(b).
\begin{figure}
\unitlength1cm
\begin{picture}(15.0,7.0)
\put(0.5,0){\psfig{figure=mass1288pbCCCP.eps,height=6.5cm}}
\put(8.5,0){\psfig{figure=spin1SysLimits.eps,height=6.5cm}}
\put(5.5,2.5){\bf \Large (a)}
\put(13.5,2.5){\bf \Large (b)}
\end{picture}
\caption{(a) Distribution of di-electron invariant mass with
data shown as points with errors and background as the
histograms. (b) CDF limit on the cross-section times
branching ratio for a spin 1 object along with various
models of $Z^\prime$ production.
\label{fig:zprime}}
\end{figure}
\section{Randall-Sundrum Gravitons}
Both CDF and D0 have combined searches in di-electron
final states with similar searches in di-photon to explore
models of extra dimensions involving Randall-Sundrum
gravitons~\cite{bib:cdfzprime,bib:d0rsgrav}. Models of
extra dimensions attempt to address the hierarchy problem
between the strength of the weak force and gravity.
At hadron colliders,
RS gravitons may be observed in the invariant mass or
angular distributions of electron and/or photon pairs.
Both experiments observe data in agreement with background
predictions and exclude large regions in the graviton mass
vs. k/$\bar{M}_{pl}$ parameter
space (Fig.~\ref{fig:rsgravitons}). At
k/$\bar{M}_{pl}$=0.1 CDF(D0) exclude gravitons with
masses below 889(865) GeV.
\begin{figure}
\unitlength1cm
\begin{picture}(15.0,7.0)
\put(0.5,0){\psfig{figure=kMplVsM_1288pb.eps,height=6.5cm}}
\put(8.9,0){\psfig{figure=N49F04.eps,height=6.5cm}}
\put(6,2.0){\bf \Large (a)}
\put(13.5,1.5){\bf \Large (b)}
\end{picture}
\caption{Limits on extra dimensions using the Randall-Sundrum
model from (a) CDF and (b) D0. Limits are set on the
parameters $k/\bar{M}_{Pl}$ and the graviton mass.
\label{fig:rsgravitons}}
\end{figure}
\section{Excited Electrons}
Some models predict that quarks and leptons are composite
particles composed of smaller pieces. These models allow
for excited quark/lepton states. D0 has carried out a
search for excited electrons ($e^*$) from the process
$p\bar{p} \rightarrow e e^* \rightarrow e e \gamma$. After
selecting events with $p_T(e_1,e_2,\gamma) > 25,15,15$ GeV,
259 events are observed with an expected background of
232 $\pm$ 3 $\pm$ 29 events. From this, limits are set on
the mass of the excited electron and the compositeness
scale (Fig.~\ref{fig:d0excitedelectron}).
For $\Lambda = 1$, the limit is $m_{e^*} > 756$ GeV.
If decays via contact interaction are neglected, D0 finds
a limit of $m_{e^*} > 946$ GeV for $\Lambda = m_{e^*}$.
\begin{figure}
\unitlength1cm
\begin{picture}(15.0,7.0)
\put(0.5,0){\psfig{figure=N51F06.eps,height=6.5cm}}
\put(8.5,0){\psfig{figure=N51F07.eps,height=6.5cm}}
\put(1.8,1.0){\bf \Large (a)}
\put(13.5,3.5){\bf \Large (b)}
\end{picture}
\caption{Limits on excited electrons from D0. (a) shows
the limit on the cross-section times branching ratio
as a function of the mass of the excited electron.
(b) shows the limit on the compositeness scale vs.
the mass of the excited electron.
\label{fig:d0excitedelectron}}
\end{figure}
\section{Neutral, Long-lived Particles}
D0 has performed a search for neutral, long-lived particles
decaying to two muons after traveling at least 5 cm from
the production point. A sample of pair production
of neutralinos with R-parity violating decays and long
lifetime is used to model the signal. Background is
estimated from data to be 0.75 $\pm$ 1.1 $\pm$ 1.1 events.
No events are observed with a decay length in the transverse
plane of 5-20 cm. Limits are set on the production
cross-section times branching ratio as well as a comparison
with a previous result from NuTeV~\cite{bib:nutev3events}
using the same model (Fig.~\ref{fig:d0nllp}).
This comparison limits the possible interpretations of
NuTeV's result.
\begin{figure}
\centerline{\psfig{figure=N06GF03color.eps,height=6.5cm}}
\caption{Limits on the cross-section times branching
ratio for neutral, long-lived particles decaying to
two muons as a function of the lifetime. The area
above the (red) line is excluded at the 95\% CL. The
dark blue shaded region is a 99\% CL from D0. The
yellow region shows the limit from NuTeV converted to
$p\bar{p}$ collisions at $\sqrt{s} = 1.96$ TeV. The
light blue region shows the area favored by a signal
interpretation of NuTeV's result.
\label{fig:d0nllp}}
\end{figure}
\section{Summary}
The CDF and D0 collaborations have performed numerous searches
for new phenomena using leptonic final states. Recent results
place limits on associated chargino and neutralino production,
extra gauge bosons, Randall-Sundrum gravitons, excited
electrons and neutral, long-lived particles. Most of these
are the world's best limits.
\section*{Acknowledgments}
I would like to acknowledge the members of the CDF and D0 collaborations
without whom these results would not exist. I would like to
specifically thank Christopher Hays, Yuri Gershtein,
Jean-Fran\c{c}ois Grivaz and Jane Nachtman for help in preparing the
talk and proceedings.
Finally, I thank the organizers for their efforts which resulted in such
a wonderful conference.
\section*{References}
|
1,116,691,500,260 | arxiv | \section{Introduction}
Self-assembled InAs quantum dots are ideal model systems to study the energetic structure and dynamics of fully quantized few-carrier systems \cite{Bimbergbuch,Petroff01,Reimann02}. When incorporated into a suitable diode or transistor structure, the coupling to a free electron or hole reservoir opens up new possibilities for tuning the charge and energy of the dots \cite{Drexler94}. It also makes it possible to study the quantum mechanical properties in great detail.
When investigating the non-equilibrium transport between a reservoir and the dot system, the charging and discharging dynamics are given by the tunneling matrix element, which gives access to, e.g., wave function mapping \cite{Vdovin00,Maltezopoulos03,Wibbelhoff05} and manipulation \cite{Rontani11,Lei10,Patane10}.
As we will show in the following, also the multiplicity/degeneracy of the quantum dot states has a profound influence on the tunneling dynamics between the reservoir and the dots \cite{Reckermann10,Cockins10}. Starting from the observation that \emph{charging and discharging of the dots are governed by different relaxation times}, we develop a non-equilibrium transport model based on a master equation. The comparison between the model and the experimental data allows us to determine the details of the degeneracy of the electronic $p$-shell. These details are usually hidden by the unavoidable inhomogeneous ensemble broadening of the energy structure, but can be resolved by studying the charging and discharging dynamics.
\begin{figure}[b]
\includegraphics[width=0.9\columnwidth]{Fig1-final.pdf}
\caption{Device (a) and layer schematics (b) of the investigated sample:
A high electron mobility transistor, grown with an embedded layer of quantum dots, serving as a floating gate.
The tunneling current into the dots is monitored via a time-resolved conductance measurement of the 2DEG.
}
\label{Fig1}
\end{figure}
\section{Experiment}
The measurements are performed on an inverted AlGaAs/GaAs high electron mobility transistor structure as sketched in Fig. \ref{Fig1}(a) with an embedded layer of InAs quantum dots.
The layer sequence, grown by molecular beam epitaxy, is schematically shown in Fig. \ref{Fig1}(b).
The active part of the structure starts with a 300\,nm thick Al$_{0.34}$Ga$_{0.66}$As layer, a silicon $\delta$-doping sheet ($3\cdot 10^{12}$\,cm$^{-2}$) and an AlGaAs spacer layer of 16\,nm thickness.
Subsequently, a 15\,nm thick GaAs layer, which contains a two-dimensional electron gas, a 10\,nm thick AlGaAs tunneling layer, a 5\,nm thick GaAs spacer layer and the InAs quantum dots are deposited.
The dot formation takes place after evaporating the equivalent of 1.9 monolayers of InAs at 525$^\circ$C.
This results in a dot density of $n_{\rm QD}\approx 8\cdot 10^9$ cm$^{-2}$.
The dots are covered by 150\,nm of GaAs and a 116\,nm thick blocking layer of alternating AlAs/GaAs layers (3\,nm and 1\,nm, respectively).
The structure is capped by a protective, 5\,nm thick GaAs film.
Using standard lithographic techniques, the samples are patterned into a 60\,$\mu$m long and 50\,$\mu$m wide strip with source / drain contacts on either side.
The central region is covered by a 50\,nm thick gold layer, which serves as a gate electrode.
The application of a gate voltage $V_{\rm G}$ will shift the energetic position of the states in the quantum dots, which are embedded in the dielectric between the gate electrode and the two-dimensional electron gas (2DEG) \cite{Rus06PRB}.
This way, the number of electrons per quantum dot can be adjusted between 0 and 6 \cite{Drexler94,Petroff01,Rus06PRB}.
More specifically, each time the energy difference $\epsilon_m := E_{m} - E_{m-1}$ of the $m$-electron ground-state energies $E_m$ of the quantum dot is in resonance with the electro-chemical potential $\mu_{\rm F}$ of the two-dimensional reservoir, electrons can tunnel between 2DEG and quantum dots.
To monitor the tunneling dynamics between the dot ensemble and the 2DEG, we use a recently developed transconductance spectroscopy technique \cite{Marquardt09,Marquardt11,Beckel12}.
At a time $t=0$, a voltage pulse is applied to the gate, and the time-resolved response of the 2DEG conductivity $\sigma(t)$ is recorded.
For a positive pulse (upward step in $V_{\rm G}$) the energy of the quantum dot states are shifted downward, so that electrons can tunnel from the 2DEG into unoccupied states in the dot layer.
Therefore, an exponential decrease of $\sigma$ is observed, because mobile charges from the 2DEG will become localized when they are transferred into the dots.
For the reverse process (when switching back to the original voltage), charges are transferred back out of the dots into the 2DEG, so that its conductance will increase again \cite{Rus06PRB,Marquardt09}.
In this way, the conductivity of the 2DEG is a direct measure of quantum-dot charge and the conductance traces as shown in Fig. \ref{Fig2} allow us to directly compare the charging with the discharging process.
Taking the geometric distance between the 2DEG and the dot layer, $d_{\rm tunn}$, as well as the distance between the 2DEG and the gate, $d_{\rm tot}$, the energy shift $\Delta E$ caused by the voltage step $\Delta V_{\rm G}$ is easily calculated as $\Delta E = e \frac{d_{\rm tunn}}{d_{\rm tot}} \Delta V_{\rm G} = \frac{e}{\lambda} \Delta V_{\rm G}$.
Here we chose the simple but well established \cite{MedeirosRibeiro1997,Warburton98} energy conversion based on the geometric lever arm \cite{Note1} $\lambda = \frac{d_{\rm tot}}{d_{\rm tunn}}=7$.
For small excitation voltages $\Delta V_{\rm G}$ this allows us to derive the density of states in the dot layer $D(E)$ from the measured total change in conductivity $\Delta \sigma=\left| \sigma(0)-\sigma(\infty)\right|$ from:
\begin{equation}
\label{DOS}
\frac{\Delta \sigma}{\Delta V_{\rm G}} = \frac{\Delta n e \mu}{\frac{\Delta E}{e \lambda}} = \lambda e^2 \mu \frac{\Delta n}{\Delta E} = \lambda e^2 \mu D(E) ,
\end{equation}
where $\mu$ is the mobility of the 2DEG, and $\Delta n$ is the change in the 2DEG carrier density, caused by the tunneling electrons \cite{Note2}.
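In practice, Eq. (\ref{DOS}) is simply inverted to extract the density of states from the measured conductance change:
\[
D(E) = \frac{1}{\lambda e^{2} \mu} \frac{\Delta \sigma}{\Delta V_{\rm G}} .
\]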
Figure \ref{Fig3}(b) shows the thus obtained density of states in the dot layer.
We observe two clearly distinct maxima, corresponding to the charging of the two $s$-states around $V_{\rm G} = -0.6$ V and a broader distribution, corresponding to the charging of the four $p$-states in the range between $-0.3$ and $0.3$ V.
The peaks are broadened because of the size distribution of the self-assembled quantum dots.
On samples with even better size homogeneity, the four $p$-states can also be clearly resolved \cite{Rus06PRB}.
\begin{figure}[bt]
\includegraphics[width=0.85\columnwidth]{Fig2-final.pdf}
\caption{Conductance change $\Delta \sigma$ while charging ($0\rightarrow 1$) and discharging ($1\rightarrow 0$) the first electron ($V_{\rm G}=-0.67$~V).
The time constants are determined to $\tau_{0\rightarrow 1}$=2.3~ms and $\tau_{1\rightarrow 0}$=3.2~ms respectively by fitting a stretched exponential (solid lines) to the transients.
The inset shows the temperature dependence of the averaged time ratio $\bar{\nu}=(\nu_1+1/\nu_2)/2$ (see Eq.\ref{ratio}).
For sufficiently low temperatures we find a maximum ratio of $\approx 1.4$ while $\bar\nu\rightarrow 1$ for high temperatures.
The solid line shows the temperature dependence, calculated from a master equation (see text).
}
\label{Fig2}
\end{figure}
Turning to the dynamics of the tunneling process, we find a significant difference for the charging compared to the discharging process as shown in Fig.~\ref{Fig2}.
Here, a small energy shift of $\Delta E\sim 1.4\,{\rm meV}$ in the chemical potential at a gate voltage $V_{\rm G} = -0.67\,{\rm V}$ allows a fraction of the dot ensemble \cite{Note3} to become charged ($0\rightarrow 1$) or discharged ($1\rightarrow 0$) with a single electron.
From a stretched exponential fit \cite{Note4} we obtain time constants of $\tau_{0\rightarrow 1}=2.3$~ms and $\tau_{1\rightarrow 0}=3.2$~ms, which clearly differ from each other.
As shown in the inset of Fig.~\ref{Fig2}, the ratio between the charging and discharging relaxation rate decreases with increasing temperature.
This raises two questions: 1) What is the physical origin of this asymmetry and 2) how can this asymmetry be used to gain insight into the internal structure of the quantum dots?
To answer both questions we model the charge relaxation after the voltage pulse within a master-equation approach.
\section{Theory}
At first glance, an asymmetry between charging and discharging the dots may appear counterintuitive.
After all, Fermi's Golden Rule \cite{Dirac27,Fermi50} $\Gamma_{i \rightarrow f} = \frac{2 \pi}{\hbar} \left| \left< f \right|H' \left| i \right>\right|^2 \rho_f$ for the transition rate from an initial state $i$ to a (fixed) final state $f$ is symmetric, $\Gamma_{i \rightarrow f} = \Gamma_{f \rightarrow i}\equiv \Gamma(\epsilon)$, as a consequence of the hermiticity of the tunneling Hamiltonian $H'$ and energy conservation which ensures that the (many-body) density of states for the initial and the final state are equal to each other (here practically given by the density of states of the 2DEG).
The dependence of $\Gamma(\epsilon)$ on the quantum-dot energy level $\epsilon$ reflects the energy dependence of the tunnel amplitudes (density of states of the 2DEG is practically energy independent).
We describe the charge dynamics by a master equation \cite{Beenakker} $\dot{p}_m = \sum_{m' \neq m} \Gamma_{m' \rightarrow m} p_{m'} - \sum_{m' \neq m} \Gamma_{m \rightarrow m'} p_m$ in terms of the quantum-dot charge $m$ (and its probability $p_m$), which contains only partial information of the initial and final many-body states.
The 2DEG degrees of freedom and the $d_m$-fold (spin and/or orbital) degeneracy of the quantum-dot state with $m$ electrons are integrated out.
Averaging the Fermi-Golden-Rule expression $\Gamma(\epsilon)$ over all initial and summing over all final states with the given quantum-dot charge yields the transition rates
\begin{eqnarray}
\label{Gm+}
\Gamma_m^+ &\equiv& \Gamma_{m-1\rightarrow m} = k_{m-1\rightarrow m} \Gamma(\epsilon_m) f(\epsilon_m)
\\
\label{Gm-}
\Gamma_m^- &\equiv& \Gamma_{m\rightarrow m-1} = k_{m\rightarrow m-1} \Gamma(\epsilon_m) [1 - f(\epsilon_m)] \, ,
\end{eqnarray}
where the Fermi function $f(\epsilon)$ stems from the average over the 2DEG occupation. Only transitions with $m \leftrightarrow m-1$ need to be taken into account, because the electron-electron interaction energy (Coulomb blockade) is much larger than both the thermal energy and the excitation energy induced by the voltage pulse.
The integer $k_{m\rightarrow m'}$ counts how many quantum-dot states with charge $m'$ can be reached from each of the states with charge $m$.
Due to selection rules, $k_{m\rightarrow m'}$ may be smaller than the degeneracy $d_{m'}$.
Nevertheless, their ratios are equal,
\begin{equation}
\xi_m = \frac{k_{m-1\rightarrow m}}{k_{m\rightarrow m-1}} = \frac{d_{m}}{d_{m-1}} \, .
\end{equation}
Let us consider the $m$-th charge transition for an individual quantum dot.
From the master equations for the probabilities $p_{m-1}=1-p_{m}$, we obtain the kinetic equation for the average charge $N= \sum_m m p_m$,
\begin{equation}
\dot N(t) = m \Gamma_m^+ + (m-1) \Gamma_m^- - (\Gamma_m^+ + \Gamma_m^-) N(t)
\end{equation}
which is solved by
\begin{equation}
\label{singleoccupation}
\Delta N(t) \equiv N(t) - N_{\rm eq} = \left( N_0 - N_{\rm eq} \right) \exp ( -t/\tau)
\end{equation}
with the relaxation time $\tau$ given by
\begin{equation}
\label{time}
\frac{1}{\tau} = \tilde \Gamma_m \left[ 1 + (\xi_m -1) f(\epsilon_m) \right] \, ,
\end{equation}
where $\tilde \Gamma_m = k_{m\rightarrow m-1} \Gamma (\epsilon_m)$,
and the equilibrium occupation
\begin{equation}
\label{final}
N_{\rm eq} = m-1 + \frac{\xi_m f(\epsilon_m)}{1 + (\xi_m -1) f(\epsilon_m)}
\, .
\end{equation}
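For completeness, we note that the relaxation rate in Eq. (\ref{time}) is simply the sum of the two transition rates of Eqs. (\ref{Gm+}) and (\ref{Gm-}),
\[
\Gamma_m^+ + \Gamma_m^- = k_{m\rightarrow m-1} \Gamma(\epsilon_m) \left[ \xi_m f(\epsilon_m) + 1 - f(\epsilon_m) \right] = \tilde \Gamma_m \left[ 1 + (\xi_m -1) f(\epsilon_m) \right],
\]
while setting $\dot N = 0$ in the kinetic equation directly yields the equilibrium occupation of Eq. (\ref{final}).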
We would like to mention that for a given (fixed) final state energy $\epsilon_m$, the relaxation time $\tau$ does {\it not} distinguish between charging and discharging, i.e., whether the initial charge $N_0$ was larger or smaller than $N_{\rm eq}$.
Experimentally, on the other hand, the applied gate voltage pulse changes the energy $\epsilon_m$ of the quantum dot by a small amount $\Delta E$. For small voltage pulses, the energy dependence of the tunnel barrier can be neglected, i.e., $\Gamma(\epsilon) \approx \Gamma = const.$. However, as seen from Eq.~(\ref{time}) even a small change in $\epsilon_m$ can have a strong influence on the tunneling time when two conditions are fulfilled: (1) the temperature is small ($k_{\rm B}T < \Delta E$), so that the Fermi function $f$ has a steep slope near $\epsilon_m$ and (2) the degeneracies of the charging states $m$ and $m-1$ are different, so that $\xi_m \neq 1$.
For example, at the transition $m=1$ with $d_0=1$ and $d_1=2$, the relaxation rate is $\tau^{-1} = \Gamma [ 1+f(\epsilon_1) ]$.
At this point, we should emphasize again the importance of the charging energy.
For negligible charging energy, the charging and discharging of each quantum dot level (with orbital and spin degree of freedom) is independent of the other levels.
Therefore, degeneracies would not play any role, $\xi=1$, and the relaxation time would be energy independent.
The finite asymmetry is, therefore, a clear signature of Coulomb interaction.
A similar conclusion has been drawn from measurements of the width of tunneling resonances in quantum dots that are asymmetrically coupled to source and drain electrodes \cite{Koenemann}.
There the dependence of the width on the polarity of the applied bias voltage could also be traced back to the energy dependence of $\Gamma[1+f(\epsilon)]$.
On the other hand, identical relaxation times for charging and discharging have been recently observed on an electrostatically-defined quantum dot, coupled to a large top-gate capacitance (such that, the charging energy is negligible) \cite{Feve}.
For an individual quantum dot or an ensemble with a sharp distribution of the quantum-dot resonances, asymmetric charge-relaxation times can only be observed when the final gate voltage after the voltage pulse for charging is {\it different} from the one for discharging, such that the corresponding quantum-dot level positions are separated by at least $k_{\rm B}T$. In our sample, however, the opposite limit of a rather broad distribution is realized.
To describe this case, we integrate over all energies for the quantum-dot levels in the ensemble and obtain an expression that is independent of the gate voltage after the pulse,
\begin{equation}
\label{occ}
\Delta N (t) \propto \int d \epsilon \left[ N_{\rm eq} (\epsilon\pm \Delta E) - N_{\rm eq} (\epsilon)\right] e^{- t/\tau(\epsilon)} \, .
\end{equation}
Here, $N_{\rm eq} (\epsilon\pm \Delta E)$ and $N_{\rm eq} (\epsilon)$ are the equilibrium occupation of the quantum dot before and after the voltage pulse, respectively.
The upper (lower) sign corresponds to charging (discharging).
The pre-exponential factor in the integrand selects only those quantum dots which change their occupation after the voltage pulse.
An asymmetry of the relaxation times now appears because relative to the Fermi energy, the dot energies lie by $\Delta E$ lower for charging than for discharging.
Due to the energy integral, the charge relaxation is no longer governed by a single exponential decay.
To characterize the relaxation by a single time constant, we numerically perform an exponential fit of Eq.~(\ref{occ}).
We quantify the asymmetry in the charge relaxation time by the ratio
\begin{equation}
\label{ratio}
\nu_m = \frac{\tau_{m\rightarrow m-1}}{\tau_{m-1\rightarrow m}} \, .
\end{equation}
For the reasons discussed above, $\nu_m$ is a function of temperature. It ranges from $\xi_m$ for $k_{\rm B}T \ll \Delta E$ to $1$ for $k_{\rm B}T \gg \Delta E$.
\section{Results and discussion}
Let us now consider the first two transitions $m=1$ and $m=2$ for filling the $s$-shell with the first and the second electron, respectively.
Spin degeneracy implies $d_0=1$, $d_1=2$ and $d_2=1$, which yields $\xi_1=2$ and $\xi_2=1/2$.
Due to finite temperature, $\nu_1$ and $\nu_2$ should be closer to $1$ than $\xi_1$ and $\xi_2$.
Indeed, we measure $\nu_1= 1.4$ and $\nu_2=0.85$ at $T=4\,{\rm K}$.
Deviations from the expected relation $\nu_1=1/\nu_2$ may be attributed to an energy dependence of the tunneling barrier $\Gamma(\epsilon)$.
This effect can be accounted for by averaging $\nu_1$ and $1/\nu_2$ to $\bar\nu=1.3$.
A temperature-dependent comparison between measured and calculated values of $\bar\nu$ is shown in the inset of Fig.~\ref{Fig2}.
We find qualitative agreement: $\bar \nu$ ranges between $2$ and $1$ and decreases with temperature, where the crossover temperature is given by $k_{\rm B}T \sim \Delta E$.
Quantitatively, the measured values of $\bar \nu$ are somewhat smaller than the calculated ones.
A better agreement can be achieved by assuming a higher electron temperature, caused by Ohmic heating of the 2DEG through the measurement current and the voltage pulse.
Also, fluctuations of the distance between quantum dots and 2DEG, i.e., variations of $\Gamma$ within the dot ensemble, may contribute to the discrepancy.
\begin{figure}[bt]
\includegraphics[width=1.0\columnwidth]{Fig3-final.pdf}
\caption{(a) Measured (dots) and calculated (lines) tunneling ratios using $\nu_m$ from table \ref{tab:nutable} for all transitions weighted with the average occupation of the ensemble.
(b) Conductance change (i.e. density of states) due to charge transfer into the QDs.
Shaded areas show fits to the measured density, which has been used for the calculation shown in (a).}
\label{Fig3}
\end{figure}
So far, we have only looked at transitions involving the $s$-shell.
Now, we turn to the filling sequence up to the sixth electron occupying the $p$-shell of the quantum dots.
Even though for electrons, the $s, p, d\,\dots$ shell filling sequence has been verified repeatedly \cite{Bimbergbuch,Drexler94,Fricke96,Marquardt11,Cockins10}, it was not quite clear whether Hund's rule applies to the filling of the $p$-shell \cite{Tarucha96,Warburton98} or whether it is lifted by an anisotropy of the confinement potential in the dot \cite{Fricke96,Wibbelhoff05,Lei10}.
Our time-resolved transconductance spectroscopy provides an excellent tool to clarify which scenario is realized.
In Fig.~\ref{Fig3}(a), the data points show the experimental ratios $\nu$ as a function of gate voltage for a temperature of 2.5~K. To compare with our model, we consider three different scenarios: (i) a circular dot with non-interacting charge carriers, which gives degeneracies in the $p$-shell $\lbrace d_3, d_4, d_5, d_6\rbrace = \lbrace 4, 6, 4, 1\rbrace$, (ii) a circular dot, but taking Hund's rule into account, leading to degeneracies $\lbrace 4, 2, 4, 1\rbrace$, and (iii) an elongated dot with degeneracies $\lbrace 2, 1, 2, 1\rbrace$.
The resulting $\xi_m$ and the calculated $\nu_m$ are listed in table \ref{tab:nutable}.
Since the separation of the charging states in the $p$-shell is comparable to the inhomogeneous width of the QD ensemble, different transitions may occur during a single switch in energy.
To account for this overlap, we use the decomposition of the density of states shown as shaded areas in Fig. \ref{Fig3}(b) to weight the processes with the percentage of dots at a certain occupation $m$. In Fig. \ref{Fig3}(a), the thus obtained $\nu(V_{\rm G})$ are shown as dash-dotted, dashed, and solid lines for the models (i) -- (iii), respectively.
Without any adjustable parameters, we find very good agreement with the model for an elongated dot, where all but the spin degeneracies have been lifted by the asymmetric potential. The other models are incompatible with the data.
This finding is in agreement with wave-function mapping experiments \cite{Wibbelhoff05,Vdovin07,Beckel12}.
It should be pointed out that the distribution of quantum-dot energy levels is much broader than the splitting $\delta E_p$ of the $p$-orbitals.
As a consequence, it is not possible to resolve $\delta E_p$ in the equilibrium density of states as shown in Fig.~\ref{Fig3}(b).
Nevertheless, from the time-resolved measurement, Fig.~\ref{Fig3}(a), we can unambiguously conclude that there is a splitting $\delta E_p$, which is larger than the energy shift $\Delta E$ caused by the voltage pulse.
Our calculations show that it may be possible to quantitatively determine an energy splitting $\delta E$ with our method, even when it is masked by inhomogeneous broadening:
For voltage pulses large enough such that $\Delta E \gtrsim \delta E$ one could map out the crossover from $k_{\rm B}T \gg \delta E$ for which the splitting can be neglected (larger degeneracy) to $k_{\rm B}T \ll \delta E$ for which the split levels are filled separately (smaller degeneracy).
In the crossover regime, $k_{\rm B}T \sim \delta E$, charge and (for Zeeman splitting) spin dynamics are coupled to each other \cite{Splettstoesser,Contreras}.
\begin{table}[htb]
\caption{Degeneracy ratios $\xi_m$ and relaxation time ratios $\nu_m$, calculated for different models of shell filling at $T=2.5$~K. Also shown are measured tunneling ratios $\nu$, determined by fits to the charging and discharging data (cf. Fig.~\ref{Fig2}) at the charging voltages of the $m^{\rm th}$ electron. Best agreement is found for the model of an elongated dot. }
\begin{tabular}{| r || c c | c c || c c | c c | c c | c c |}
\hline
Model & $\xi_1$ & $\nu_1$ & $\xi_2$ & $\nu_2$ & $\xi_3$ & $\nu_3$ & $\xi_4$ & $\nu_4$ & $\xi_5$ & $\nu_5$ & $\xi_6$ & $\nu_6$ \\
\emph{degenerate} & 2 & 1.6 & $\frac{1}{2}$ & 0.6 & 4 & 2.6 & $\frac{3}{2}$ & 1.3 & $\frac{2}{3}$ & 0.8 & $\frac{1}{4}$ & 0.4 \\
\emph{Hund's rule} & 2 & 1.6 & $\frac{1}{2}$ & 0.6 & 4 & 2.6 & $\frac{1}{2}$ & 0.6 & 2 & 1.6 & $\frac{1}{4}$ & 0.4 \\
\emph{elongated} & 2 & 1.6 & $\frac{1}{2}$ & 0.6 & 2 & 1.6 & $\frac{1}{2}$ & 0.6 & 2 & 1.6 & $\frac{1}{2}$ & 0.6 \\
\hline
measured &{}& 1.5 &{}& 0.7 &{}& 1.6 &{}& 0.7 &{}& 1.5 &{}& 0.7 \\
\hline
\end{tabular}
\label{tab:nutable}
\end{table}
In conclusion, we propose time-resolved transconductance spectroscopy of quantum dots coupled to a 2DEG as a useful tool to determine the degeneracies of the quantum-dot levels with a much better resolution than the inhomogeneous width of the QD ensemble.
As a consequence of Coulomb interaction, the ratio of the charge relaxation times for charging and discharging is, in general, different from 1 and depends both on the level degeneracies and on temperature.
Our measurements can be qualitatively explained within a master-equation approach and they unambiguously show the existence of a $p$-orbital splitting.
\section*{Acknowledgement}
We acknowledge financial support by the Mercator Research Center Ruhr (MERCUR) of Stiftung Mercator,
the DFG (Contract No. GE 2141/1-1) in the framework of the NanoSci-E+ project QD2D of the European Commission and the project QuaHL-Rep 16BQ1035 as well as the project 'Hochfunktionale Speicher' (HOFUS) within the VIP program of the BMBF.
|
1,116,691,500,261 | arxiv |
\section{Background and Motivation}
\begin{figure}[t]
\centering
\includegraphics[width=0.40\textwidth]{figures/fig-fl-arch.pdf}
\vspace{-1.5ex}
\caption{In a centralized FL architecture, an aggregator sends a global model to clients (step 1). Each client trains the model on local data (step 2) and sends the locally trained model back to the server (step 3). The server aggregates all models to form a new global model (step 4).}
\label{fig:fl_base}
\vspace{-4ex}
\end{figure}
\subsection{Federated Learning}
\label{background}
In Federated Learning, multiple clients locally train a model on their data and share it with a central server (also called an aggregator) to construct a global model. During this collaborative training, clients' training data never leaves their premises \cite{kairouz2021advances}.
Figure~\ref{fig:fl_base} shows an FL setting where multiple hospitals collaboratively train a global model on their local labeled medical imaging data.
\begin{enumerate}
\item In the first step, the aggregator sends copies of the current global model, \emph{i.e.,}\xspace the global model weights, and hyperparameters (\emph{e.g.,}\xspace learning rate and epochs) to participating clients (Step 1 of Figure~\ref{fig:fl_base}).
\item Using the global model as initial parameters, each client trains a model on its local data similar to traditional ML training (Step 2 of Figure~\ref{fig:fl_base}).
\item Once trained, each client sends its local model, in the form of updated weights, back to the aggregator as shown in Step 3 of Figure~\ref{fig:fl_base}. Additionally, clients share performance metrics such as training loss and quality/quantity of training data with the central aggregator.
\item After receiving model updates, the server aggregates the updated weights from all clients using established aggregation methods such as FedAvg~\cite{mcmahan2017communication} to form a new global model (Step 4 in Figure~\ref{fig:fl_base}); a minimal sketch of this aggregation step is given after this list.
\end{enumerate}
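To make step 4 concrete, here is a minimal sketch of FedAvg-style aggregation in Python. It assumes equally weighted clients whose models are given as lists of per-layer {\tt numpy} arrays; the actual FedAvg weights each client's contribution by its local dataset size.
\begin{verbatim}
import numpy as np

def fedavg(client_weights):
    # client_weights: one entry per client, each a list of
    # per-layer weight arrays with identical shapes.
    # Average layer-by-layer across all clients to obtain
    # the new global model.
    return [np.mean(np.stack(layers), axis=0)
            for layers in zip(*client_weights)]
\end{verbatim}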
The four steps are repeated for a fixed number of {\em rounds} or until the global model meets some convergence criteria, for example, when training loss is close to zero. Note that not every client participates in every round. There are additional variants of federated learning (FL) such as vertical FL~\cite{liu2019communication}, FL with differential privacy~\cite{wei2020federated}, decentralized FL \cite{mugunthan2020blockflow}, and personalized FL \cite{t2020personalized}. Our work mainly focuses on the standard FL, where the goal is to train a single global model.
\subsection{Motivating Scenario}
\label{motivating}
Suppose that an FL application developer trains a global neural network model, ResNet \cite{he2015deep}, on chest X-ray images from hospitals across the country to diagnose respiratory diseases (\emph{e.g.,}\xspace Covid-19). We use the term {\em developer} to refer to a person who writes, deploys, and monitors the FL application at the central server as shown in Figure~\ref{fig:fl_base}. Every participating hospital collects X-rays of patients labelled by radiologists and trains a local ResNet model on that data. Hospitals periodically share their locally trained models with a central server. The central server then aggregates these shared models into one global model using FedAvg~\cite{mcmahan2017communication}. After aggregation, the central server sends the updated global model to each hospital to incorporate in local training in the next round, as shown in Figure~\ref{fig:fl_base}.
The developer observes that multiple hospitals are reporting a high training loss from their preceding training rounds. One plausible reason is that one of the hospitals performed training on noisy, mislabelled data produced by inexperienced staff~\cite{chen2020focus, jiang2018mentornet, li2019learning, li2020dividemix} and continuously impacted the global model during aggregation. Thus, when the global model is shared back with the other hospitals, it influences their training.
\textbf{Limitations for FL Debugging.} After noticing an increase in {\em training loss}, the developer must investigate the root cause, as misdiagnosis can lead to ill-treatment. To debug the FL application at this scale, the developer starts manually inspecting the logs collected at the central server, such as the global model weights, the local models shared by hospitals, and each hospital's response/training time.
Due to patient privacy, the hospitals refrain from sharing their labelled training data, which is critical for correctly evaluating the quality of a model and thus essential for localizing the faulty round and model. Even if she can find the problematic round, she cannot isolate the hospital(s) responsible for affecting the global model without test data. One option is cross-validating each client's model by requesting that other clients test the model on their local data. This is prohibited in practice, as it adds computational burden on clients (\emph{e.g.,}\xspace edge devices) and can potentially cause data privacy violations. Lastly, statically inspecting hospitals' models does not provide any meaningful information. Without any debugging techniques at her disposal, she resorts to guesswork to identify the hospital with noisy labels (\emph{i.e.,}\xspace the faulty client).
\textbf{FedDebug.} The developer decides to use \textsc{FedDebug}\xspace to investigate the root cause behind the high training loss. When enabled, \textsc{FedDebug}\xspace allows a developer to set a {\em breakpoint} at any round with high training loss. This breakpoint separately invokes a debugging session, a simulation of the original FL service, without stopping the live training.
In the debugging session, the developer uses \textsc{FedDebug}\xspace's {\em step-back} and {\em step-next} constructs to move between rounds, inspecting the global and local models of hospitals recorded by \textsc{FedDebug}\xspace.
Upon inspecting the training rounds, she finds the specific round, \emph{e.g.,}\xspace~{\tt Round-8}, where the performance starts to deteriorate. This round can be different from the breakpoint-enabled round, as performance issues can manifest in earlier rounds but surface later.
During this inspection, \textsc{FedDebug}\xspace also reports the list of hospitals that participated in that round.
Next, she invokes \textsc{FedDebug}\xspace's fault localization algorithm to precisely identify the hospital responsible for deteriorating the global model, leading to lower performance. After finding the hospital with noisy labels, the developer removes it from the problematic round (\emph{i.e.,}\xspace~{\tt Round-8}) and onwards. \textsc{FedDebug}\xspace's {\em fix and replay} starts retraining from round {\tt Round-8} to the current round, and then replaces the impacted global model with the retrained global model and continues the original FL application.
\section{Conclusion}
\label{conclusion}
Federated learning promotes accurate and collaborative model training across millions of clients--a type of learning that was previously impossible due to privacy concerns related to user data. However, FL poses unprecedented challenges in debugging a faulty client responsible for deteriorating global training. With minimal information about the training process and non-existent debugging techniques, such issues are often left untreated. \textsc{FedDebug}\xspace enables interactive and automated fault localization in FL. It adapts conventional debugging practices to FL with its {\em breakpoint} and {\em fix and replay} features. It offers a novel differential testing technique to automatically identify precise faulty clients. We demonstrate that \textsc{FedDebug}\xspace identifies a faulty client with 100\% accuracy within 2.1\% of a round's training time, advocating for \textsc{FedDebug}\xspace's efficacy and efficiency. With \textsc{FedDebug}\xspace, we pave the way for advanced software debugging techniques to be adapted in the emerging area of federated learning and the broader community of machine learning practitioners.
\section{\textsc{FedDebug}\xspace's Debugging Constructs}
\label{debugging-primitives}
The goal of \textsc{FedDebug}\xspace is to facilitate an FL application developer in isolating a faulty client responsible for deteriorating the global training. Recent studies emphasize the need for debugging techniques in FL applications and the challenges associated with providing debugging support in FL frameworks~\cite{kairouz2021advances}. To this end, we must overcome the following major challenges in designing \textsc{FedDebug}\xspace. First, the privacy concerns of FL put restrictions on any client-side interference. Second, the unpredictable and ephemeral nature of clients in FL threatens reproducibility, which is critical for debugging a live system. Third, the distributed nature with hundreds of participating clients makes traditional breakpoint debugging ineffective in FL. Pausing the entire FL application at this scale will be prohibitively expensive. Therefore, traditional debugging approaches, such as {\tt gdb}, are not suited for the FL systems' scale and architecture.
In \textsc{FedDebug}\xspace, we address the above challenges and advance systematic FL application debugging. We enable realtime, interactive debugging on a simulation of the live FL application. To do so, \textsc{FedDebug}\xspace continuously collects and stores concise telemetry data from a live FL application. Whenever a debugging need arises, the developer can interact with \textsc{FedDebug}\xspace's debugging interface, which uses the telemetry data to regenerate an FL application state.
\begin{figure}[t]
\includegraphics[width=0.48\textwidth]{figures/breakpoint.pdf}
\vspace{-2ex}
\caption{Using \textsc{FedDebug}\xspace, a developer can set a {\em breakpoint} at round {\tt R20}. When the FL application finishes round {\tt R20}, \textsc{FedDebug}\xspace launches a Debugging Interface, reflected on the right. {\em Step next} (\ding{203}) takes the developer to the next step (round or client). {\em Step-in} increases the granularity of computation, \emph{e.g.,}\xspace round to client level. {\em Resume} (\ding{204}) will re-join the current execution status of the FL application if no intrusive actions are taken. At a given round, \textsc{FedDebug}\xspace can automatically localize the faulty client (\ding{205}) and then resume (\ding{206}) upon which the global model will be recomputed without the faulty client. This model will replace the corresponding round\textquotesingle s model, and \textsc{FedDebug}\xspace will start retraining from that round, {\tt R22}, in the FL interface.}
\label{fig:breakpoint}
\vspace{-4ex}
\end{figure}
\subsection{Selective Telemetry}
\textsc{FedDebug}\xspace collects critical FL execution metrics to reproduce an FL application\textquotesingle s state for the developer to interact with it while investigating the root cause of a problem. Existing FL frameworks are carefully architected to refrain from revealing private data. As a result, most debugging data is private and cannot be investigated.
\textsc{FedDebug}\xspace's debugging approach takes inspiration from replay debugging. As with any other replay debugging, it is essential that \textsc{FedDebug}\xspace stores the necessary runtime metrics to reproduce an FL application\textquotesingle s state if requested by the developer. We design a highly selective FL event telemetry technique that records the concise execution data available at the central aggregator that is vital for generating any prior FL application state.
\textsc{FedDebug}\xspace is different from traditional replay debugging as it only tracks the information needed to recreate an {\em observable} event and does not log the information unavailable to the developer in a live application. This design reduces the size of continuously growing telemetry data and minimizes the likelihood of information leakage. \textsc{FedDebug}\xspace mainly stores the information available after step 3 of Figure~\ref{fig:fl_base}, which includes clients' models, their reported metrics such as response time, training loss, validation loss, performance metric (\emph{e.g.,}\xspace F1 score), hyperparameters (\emph{e.g.,}\xspace learning rates, epochs, weight decay), and round ID. Note that the FL application, including client-side training, continues uninterrupted in the background with \textsc{FedDebug}\xspace's telemetry module continuously collecting execution traces.
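As a rough illustration, a per-round telemetry record could be organized as follows; the class and field names are hypothetical and merely mirror the aggregator-visible artifacts listed above.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class RoundTelemetry:
    # Only aggregator-visible data is recorded; no client training data.
    round_id: int
    client_models: dict   # client id -> locally trained model weights
    client_metrics: dict  # client id -> losses, response time, F1 score
    hyperparams: dict     # learning rate, epochs, weight decay, ...
\end{verbatim}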
\subsection{Interactive Replay Debugging}
To start the interactive debugging process, a developer can invoke \textsc{FedDebug}\xspace's debugging constructs, which let the developer leverage the telemetry data to investigate the root cause. Breakpoint debugging is the de-facto method of debugging a program. It pauses the program when the execution reaches it. At that point, a developer can inspect the values assigned to different variables, both local and global, and examine the method stack. Such debugging features are not applicable in FL. The traditional breakpoint will pause the distributed training (i.e., none of the clients will be able to train their models), resulting in unnecessary idling. Additionally, since the state of a round is not saved,
it is currently impossible for the developer to inspect previous rounds. For instance, a developer may want to debug a latent issue that was introduced by a client five rounds ago but surfaced in the current round when the same client participated in training again.
We make the following observation about FL frameworks. An FL application only reveals aggregator\textquotesingle s events to a developer. In contrast, events on the client\textquotesingle s side are entirely hidden from the developer except the ones relayed to the aggregator by the client. Building on this observation and the telemetry data captured by \textsc{FedDebug}\xspace, our insight is that instead of debugging a system in real-time, we can recreate its observable behavior in a simulated environment, giving an illusion of debugging an FL application in real-time. By doing so, inspections with \textsc{FedDebug}\xspace are side-effect free, \emph{i.e.,}\xspace they will not interfere or interrupt the live FL application, thus eliminating the need to pause client-side training or halt aggregator execution.
\textbf{\textit{Breakpoint.}} To this end, \textsc{FedDebug}\xspace offers a {\em breakpoint} that can help a developer inspect intermediate states of an FL application in real-time without stopping the training process. \textsc{FedDebug}\xspace's breakpoint operates on computation units of {\em rounds} and {\em clients}. A developer can set a breakpoint on either a round (\emph{e.g.,}\xspace round {\tt R20}) or on both a round and a client (\emph{e.g.,}\xspace round {\tt R20}, client {\tt C5}) to inspect the state of FL training using metrics such as training loss, clients' participation, and response time. When the live FL application arrives at a breakpoint, \textsc{FedDebug}\xspace spawns a new debugging interface on the aggregator side, as shown in \ding{202} in Figure~\ref{fig:breakpoint}, while continuing the live FL training in the background.
\textbf{\textit{Step in/Step out.}} While at a breakpoint in a debugging session, a developer can use its {\em step-in} and {\em step-out} actions to switch between different granularities of computational units. Traditionally, these two actions are used to go one-level deeper in the stack (\emph{e.g.,}\xspace inside a function call) and move one level up in the stack (\emph{e.g.,}\xspace outside the function call), respectively. Based on this convention, we define a round as a coarse-grained unit of computation that can be decomposed into a subset of clients participating in that round.
Suppose the current breakpoint is at round {\tt R20}. In that case, step-in will take the developer to the client-level granularity (\ding{203} in Figure~\ref{fig:breakpoint}) where trained models from clients are being aggregated incrementally. Step-out will take the developer back to the round level, where they can inspect the trained global model at the granularity of a round. Inspecting a state at client-level granularity entails evaluating the performance of a partially-aggregated global model. For example, in Figure~\ref{fig:breakpoint}, step-in at \ding{203} will take the execution between {\tt C1} and {\tt C3}, where the global model has yet to incorporate the local models of client {\tt C3} and onwards.
\textbf{\textit{Step Next/Step Back.}} Similar to step-in/out, {\em step next} and {\em step back} help a developer transition from one state to another. For instance, if the current breakpoint is at round {\tt R20}, step next will take the execution to round {\tt R21} in the debugging interface, showing all the information corresponding to that round only. Similarly, if the current breakpoint is at client {\tt C5}, step back will take the execution state to a partial global model after aggregating models from clients {\tt C1} and {\tt C3} only ({\tt Step back} in Figure~\ref{fig:breakpoint}).
\textbf{\textit{Resume.}} Unlike resume in {\tt gdb}, \textsc{FedDebug}\xspace's resume does not resume any paused execution. Instead, resume gives the illusion to the developer that execution is being continued from where it left off. \textsc{FedDebug}\xspace creates this environment by replaying the telemetry data that was captured while the FL application was being inspected using breakpoints, in case the developer does not find any faults in the round under inspection. Once the sequence of events in telemetry catches up with the live execution of the FL application, \textsc{FedDebug}\xspace switches to the FL interface and shuts down the debugging interface. This three-step process is nearly indistinguishable from an FL application with \textsc{FedDebug}\xspace disabled, giving the impression of debugging a real-time FL application interactively. {\em Resume} is also illustrated in Figure~\ref{fig:breakpoint} - \ding{204}.
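Putting these constructs together, a debugging session might look like the following sketch; all names here are illustrative placeholders, not \textsc{FedDebug}\xspace's actual interface.
\begin{verbatim}
# Hypothetical session; API names are illustrative only.
session = feddebug.breakpoint(round_id=20)  # spawn debugging interface
state = session.inspect()                   # metrics of round R20
session.step_in()                           # client-level granularity
session.step_next()                         # next partial aggregate
session.step_back()                         # previous partial aggregate
session.resume()                            # replay telemetry, rejoin live FL
\end{verbatim}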
\subsection{Fix and Replay}
\label{fix-and-rep}
When the developer successfully identifies a faulty client in any round, \textsc{FedDebug}\xspace offers {\em Fix and Replay} to allow a developer to roll back the training and provide a retrained global model (the one without the faulty client). We describe the technique to identify a faulty client in Section \ref{section:fault_localization}. A faulty client may have a compound effect on the global model, as it may have begun to share its noisy model updates latently several rounds ago, which only later becomes noticeable. In such cases, it is important to rectify the impact of the faulty client\textquotesingle s inclusion in prior training rounds by removing its contributions. Client-side retraining over multiple rounds is not possible, as clients may not store the data used for training in prior rounds; instead, \textsc{FedDebug}\xspace recomputes the aggregation from the client models recorded in its telemetry. Figure~\ref{fig:breakpoint}-\ding{205} shows the removal of a faulty client ({\tt C5}) in round {\tt R21}. \textsc{FedDebug}\xspace recomputes the global model in the debugging interface and then replaces the actual global model in round {\tt R22} with the newly recomputed global model, after the fix and replay action in Figure \ref{fig:breakpoint}-\ding{206}. By default, \textsc{FedDebug}\xspace forbids the faulty client from participating in the FL training. However, it is up to the developer to weigh the benefits of including the faulty client in future rounds.
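One plausible reading of fix and replay, under the assumption that per-round client models are available from telemetry, is sketched below; \texttt{fed\_avg} and the record fields reuse the illustrative names assumed earlier.
\begin{verbatim}
def fix_and_replay(telemetry, faulty_ids, start_round):
    # Recompute each stored global model without the faulty
    # clients' contributions, then resume the live FL run.
    for record in telemetry.rounds_from(start_round):
        kept = {cid: w for cid, w in record.client_models.items()
                if cid not in faulty_ids}
        sizes = [record.client_sizes[cid] for cid in kept]
        record.global_model = fed_avg(list(kept.values()), sizes)
\end{verbatim}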
\section{Evaluation}
\label{evaluation}
We evaluate \textsc{FedDebug}\xspace on (1) runtime performance overhead, (2) debugging time, (3) fault localizability, and (4) scalability. Our evaluation aims to answer the following research questions:
\begin{itemize}
\item {\bf RQ1.} What impact does \textsc{FedDebug}\xspace have on the baseline FL framework's performance?
\item {\bf RQ2.} How accurate is \textsc{FedDebug}\xspace in identifying a faulty client?
\item {\bf RQ3.} Can \textsc{FedDebug}\xspace identify multiple faulty clients?
\item {\bf RQ4.} Can \textsc{FedDebug}\xspace scale to a large number of clients and find a faulty client efficiently?
\end{itemize}
\textbf{\textit{Datasets, Model, \& FL Framework.}} We evaluate \textsc{FedDebug}\xspace on CIFAR-10 and FEMNIST. Both are considered gold standards for evaluating FL frameworks~\cite{shamsian2021personalized, collins2021exploiting, li2021ditto, vahidian2021personalized, burkhalter2021rofl} and deep learning testing techniques~\cite{pei2017deepxplore, xie2019diffchaser, usman2021neurospf, guo2018dlfuzz, xie2019deephunter}. FEMNIST is a modified version of MNIST presented in the FL LEAF Benchmark~\cite{caldas2018leaf} and the Non-IID Bench~\cite{li2022federated}. The FEMNIST dataset contains over 340K training and over 40K testing grayscale, 28x28 images spanning ten different classes. CIFAR-10 contains 50K 32x32 RGB training images spanning ten different classes and 10K instances for testing.
We adopt popular CNN models \emph{i.e.,}\xspace ResNet, VGG, and DenseNet architectures \cite{simonyan2014very, he2015deep, huang2017densely}.
We set the learning rate between 0.0001 and 0.001, the number of epochs between 10 and 25, the batch size from 512 to 2048, and the weight decay to 0.0001. We realize \textsc{FedDebug}\xspace's design in the IBMFL library~\cite{ibmfl2020ibm} due to its ease-of-use, open documentation, and publicly available codebase. Our techniques should be equally applicable to other FL frameworks.
\textbf{\textit{Evaluation Environment Specifications.}} We run our experiments on an AMD 16-core processor, with 128 GB RAM and an NVIDIA Tesla T4 GPU. To measure the performance of \textsc{FedDebug}\xspace in terms of runtime and debugging overhead, we simulate IBMFL framework deployment on a MacBook Pro with Quad-core Intel Core i5 processor and 16 GB RAM.
\textbf{\textit{Federated Learning Experimental Settings.}} Prior FL literature~\cite{caldas2018leaf, li2022federated} establishes two data distribution strategies among FL clients: IID (independent and identically distributed data) and Non-IID (non-independent and identically distributed data)~\cite{li2022federated}. For Non-IID, we use quantity-based imbalance~\cite{li2022federated}, where clients have an unequal quantity of data and the class distribution is random. In IID, the clients receive the same quantity of data. None of the clients share the same data points in either setting. We simulate FL with varying numbers of clients, ranging from 10 to 400.
\textbf{\textit{Fault Injection.}} Since there is no existing FL benchmark with faulty clients, \textsc{FedDebug}\xspace adopts a standard noisy-labels approach from prior machine learning literature to inject a faulty client into experiments~\cite{yao2018deep, jiang2020beyond, li2021learning, frenay2013classification, hendrycks2018using}. Similar to prior work~\cite{ma2018dimensionality, ghosh2017robust, li2020dividemix}, we arbitrarily add noise by changing training data labels (\emph{e.g.,}\xspace changing the label \say{bird} to \say{cat}). When such a client's model is merged with the global model, the global model's performance (\emph{e.g.,}\xspace accuracy) deteriorates. We define different strengths of noise with a {\em noise rate} that controls the number of labels modified in a faulty client. The noise rate is defined as the ratio between changed labels and original labels ($changed\ labels/original\ labels$).
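A minimal sketch of this label-noise injection, written as our own illustration of the standard noisy-labels approach, is shown below.
\begin{verbatim}
import random

def inject_label_noise(labels, noise_rate, num_classes=10):
    # Flip a `noise_rate` fraction of labels to a random other class.
    noisy = list(labels)
    k = int(noise_rate * len(noisy))
    for i in random.sample(range(len(noisy)), k):
        noisy[i] = random.choice(
            [c for c in range(num_classes) if c != noisy[i]])
    return noisy
\end{verbatim}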
Figure~\ref{fig:flipped_labels_global_model_performance} shows the impact of different noise rates on the global model's accuracy, with one faulty client and nine benign clients.
Low noise rates, ranging from 0.2 to 0.7, barely affect the global model performance. With a 0.7 noise rate, the accuracy is lowered by 4.8\% and 5.5\% in CIFAR-10 and FEMNIST, respectively. A noise rate of 0.9 incurs a 16.2\% and 9.9\% reduction in the global model accuracy in both settings. Thus, to have a measurable impact on the global model's performance, we select a noise rate of one for a faulty client.
\begin{figure}[!ht]
\vspace{-2ex}
\centering
\scalebox{0.4}
{\input{graphs/tikz_global_acc}}
\caption{Global model (ResNet-34) prediction accuracy in the presence of a faulty client with different noise rates. Lower noise rates hardly degrade global model performance.}
\label{fig:flipped_labels_global_model_performance}
\vspace{-2ex}
\end{figure}
\textbf{\textit{Neuron Activation Threshold.}}
We adopt the method from Harel-Canada et al.~\cite{harel2020neuron} to profile neuron activations. We empirically find 0.003 to be the optimal value for the default activation threshold. A neuron is considered active when its value crosses this threshold.
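For illustration, neuron activations can be profiled with PyTorch forward hooks as sketched below; the restriction to ReLU layers and the global indexing of neurons are simplifying assumptions on our part.
\begin{verbatim}
import torch

def activated_neuron_ids(model, x, threshold=0.003):
    outputs = []
    hooks = [m.register_forward_hook(
                 lambda mod, inp, out:
                     outputs.append(out.detach().flatten()))
             for m in model.modules()
             if isinstance(m, torch.nn.ReLU)]
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    acts = torch.cat(outputs)  # one global index per neuron
    return set((acts > threshold).nonzero().flatten().tolist())
\end{verbatim}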
\textbf{\textit{Faulty Client Localization Accuracy.}} We calculate faulty client localization accuracy as the ratio between (a) the number of test inputs on which faulty clients are correctly identified and (b) the total number of test inputs. For instance, if \textsc{FedDebug}\xspace identifies the correct set of faulty clients on four out of ten test inputs generated by Algorithm~\ref{algo:inputs}, we report 40\% fault localization accuracy.
\subsection{\textsc{FedDebug}\xspace's Performance}
Capturing telemetry data in realtime may slow down the performance of the FL application's aggregator. In this subsection, we present our evaluation results of \textsc{FedDebug}\xspace's runtime overhead as well as the fault localization time. These experiment settings employ ResNet-18 with CIFAR-10.
\noindent {\bf Runtime Overhead (RQ1).}
To evaluate the impact on the FL application's performance, we measure the slowdown in running time that \textsc{FedDebug}\xspace incurs. We compare the cumulative processing time of the vanilla IBMFL aggregator (baseline) against that of the \textsc{FedDebug}\xspace-enabled aggregator on a variety of client settings, \emph{i.e.,}\xspace from 5 to 100 clients, simulating a real-world FL deployment. The aggregation time varies with the model's architecture and the number of clients participating in a round, but it is completely independent of the models' quality. Therefore, we create up to 100 pre-trained ResNet-18 models and perform the FL aggregation.
Figure~\ref{plot:runtime-overhead} compares the baseline's aggregation time with the \textsc{FedDebug}\xspace enabled aggregation time.
\begin{figure}[t]
\centering
\scalebox{0.8}
{\input{graphs/runtime-overhead}}
\vspace{-2ex}
\caption{\textsc{FedDebug}\xspace's runtime overhead: a comparison of the vanilla FL framework's aggregation time with \textsc{FedDebug}\xspace-enabled FL aggregation.}
\label{plot:runtime-overhead}
\end{figure}
The X-axis represents the number of clients ranging from 5 to 100 clients, and the Y-axis represents the average time across two FL rounds. For instance, with 30 clients, \textsc{FedDebug}\xspace takes 3.9 seconds compared to the 2.5 seconds for the baseline to aggregate 30 trained models into a global model. Overall, \textsc{FedDebug}\xspace takes approximately 48\% additional aggregation time across all experiments. However, in an end-to-end round, the training phase on the clients' end occupies the majority (up to 97.8\% in our experiments) of the round's time. Compared to the training time of a round, the aggregation time is almost negligible, as low as 1.2\% in our experiments.
\begin{tcolorbox}[left=0mm, right=0mm, top=0mm, bottom=0mm]
\textbf{Summary:} Considering the training and aggregation time of each FL round, \textsc{FedDebug}\xspace's runtime overhead is a very small fraction, 1.2\%, of the training time. Hence, capturing telemetry data for replay debugging does not impede the FL application's runtime performance.
\end{tcolorbox}
\noindent {\bf Debugging Time (RQ1).}
To assess the cost of debugging with \textsc{FedDebug}\xspace, we design experiments to measure \textsc{FedDebug}\xspace's \emph{debugging time}, the time it takes to localize a faulty client. We then compare this time with the training time of that round. Since there is no comparable approach to localize a faulty client, we use training time as a baseline to provide a meaningful scale for the cost of debugging.
Figure~\ref{plot:debugging-overhead} shows the results of these experiments. The X-axis represents the number of clients, and the Y-axis shows the debugging time in seconds on a logarithmic scale. For 30 clients, \textsc{FedDebug}\xspace's input generation and selection takes 0.2 seconds to find high-quality test inputs, and its fault localization takes approximately 0.5 seconds to localize a faulty client. In a ten-client setting, input selection takes more time due to the stricter constraint ($\eta = 4$) of criterion 1 in Figure~\ref{fig:fault_localization}, \emph{i.e.,}\xspace at least four previously unseen clients should predict the same label on a newly selected test input.
\begin{figure}[t]
\vspace{-2ex}
\centering
\scalebox{0.85}
{\input{graphs/tikz_Algo1_Algo2_Train.tex}}
\caption{\textsc{FedDebug}\xspace's debugging time, comprising input generation time and faulty client detection time, compared against a round's training time.}
\label{plot:debugging-overhead}
\vspace{-3ex}
\end{figure}
Overall, our results show an increasing debugging time when the number of clients increases, which is expected as increasing the number of clients increases the search space. Note that the debugging time is still in the order of seconds, even for 50 clients. This is because 1) for {\tt n} clients, the search space has at most {\tt n} possible combinations of potentially benign {\tt n-1} clients, representing linear complexity, and 2) on a given input, \textsc{FedDebug}\xspace only profiles neuron activations once while iterating over the {\tt n} combinations.
\begin{tcolorbox}[left=0mm, right=0mm, top=0mm, bottom=0mm]
\textbf{Summary:} On average, \textsc{FedDebug}\xspace can efficiently identify a faulty client in 2.1\% of the total training time of a round.
\end{tcolorbox}
\subsection{Localization of Faulty Client}
To answer RQ2, we measure how accurate \textsc{FedDebug}\xspace is in localizing a faulty client. We automatically inject a faulty client that is representative of a real-world scenario and can cause a measurable change in the global model's performance. By varying the number of clients, datasets, models, and data distributions (IID and Non-IID), we create 36 different FL configurations for \textsc{FedDebug}\xspace's evaluation.
Columns 4 and 5 of Table~\ref{table:flip_all_labels} show the accuracy of \textsc{FedDebug}\xspace in the IID and Non-IID settings, respectively. We repeat each experiment on 100 generated test inputs and take the average of each metric to generalize the results. \textsc{FedDebug}\xspace correctly identifies a faulty client with 100\% accuracy in both IID and Non-IID settings.
\begin{table}[t]
{
\vspace{-2ex}
\centering
\scalebox{.85}{
\input{tables/table_flip_10}
}
\caption{\textsc{FedDebug}\xspace's debugging time and accuracy when localizing a faulty client in 36 different FL settings with 100 test inputs.}
\label{table:flip_all_labels}
\vspace{-4ex}
}
\end{table}
\noindent\textbf{Varying Noise Rate.} Figure~\ref{fig:flipped_labels_global_model_performance} shows the impact of different noise rates on the global model's prediction accuracy. We observe that a faulty client has a measurable impact on the global model with a noise rate of $> 0.8$.
The global model's accuracy merely drops from 73.8\% to 71.1\% when the faulty client has a 0.6 noise rate, and drops to 57\% when the noise rate is close to one.
\textsc{FedDebug}\xspace accurately localizes a faulty client even at low noise rates, showing its robustness. Figure~\ref{fig:faults-2-9} shows the evaluations on varying noise rates in a 10-client FL setting with ResNet and DenseNet architectures. The X-axis shows the faulty client's noise rate, and the Y-axis represents the average fault localization accuracy on the CIFAR-10 and FEMNIST datasets. The results, as seen in Figure~\ref{fig:faults-2-9}, indicate that \textsc{FedDebug}\xspace can identify low-noise faults--it successfully localizes a faulty client with a 0.4 noise rate with approximately 58\% and 87.5\% accuracy in the DenseNet and ResNet settings, respectively.
\begin{figure}[t]
\centering
\scalebox{0.4}
{\input{graphs/tikz_noise_rate_fault_detection.tex}}
\caption{\textsc{FedDebug}\xspace localization performance when a faulty client has varying fault strength (\emph{i.e.,}\xspace low noise rate). }
\label{fig:faults-2-9}
\vspace{-4ex}
\end{figure}
\begin{tcolorbox}[left=0mm, right=0mm, top=0mm, bottom=0mm]
\textbf{Summary:} \textsc{FedDebug}\xspace achieves 100\% fault localization accuracy on average on a total of 3600 test inputs, when the faulty client significantly deteriorates the global model performance in both IID and Non-IID settings.
\end{tcolorbox}
\noindent {\bf Detecting Multiple Faulty Clients (RQ3).}
We evaluate \textsc{FedDebug}\xspace's ability to identify multiple faulty clients in an FL application. To this end, we inject up to seven faulty clients in the following experiment settings. We train ResNet-50 and DenseNet-121 on the CIFAR-10 and FEMNIST datasets in 30- and 50-client FL settings. Each setting is evaluated on 10 test inputs. By default, \textsc{FedDebug}\xspace's fault localization technique finds a single faulty client. We apply \textsc{FedDebug}\xspace iteratively to find multiple faulty clients, removing one faulty client in each iteration, similar to the traditional bug repair process, where one bug is fixed before the next one is investigated.
Table~\ref{table:corrupt} presents the results of finding multiple faulty clients in 32 FL configurations. For instance, when 7 out of 30 clients are faulty and the model is ResNet-50, \textsc{FedDebug}\xspace finds all seven faulty clients with 100\% accuracy on CIFAR-10 and 97.1\% accuracy on FEMNIST. Compared to ResNet, \textsc{FedDebug}\xspace performs relatively better with DenseNet. This behavior is expected because, compared to ResNet, DenseNet learns better features due to dense concatenation among its layers, resulting in better performance~\cite{zhang2021resnet}.
Thus, \textsc{FedDebug}\xspace performs well in localizing multiple faults with DenseNet with an average accuracy of 99.7\% on both datasets compared to ResNet's 80.8\%.
Table~\ref{table:corrupt} also reveals that, generally, \textsc{FedDebug}\xspace's localization performance is positively correlated with the number of training data points per client. Large, high-quality training data promotes better feature learning among neurons and thus yields better performance. Since the number of data points in FEMNIST (340K) is large compared to CIFAR-10 (40K), clients in FEMNIST have significantly larger training data than clients in CIFAR-10. As a result, \textsc{FedDebug}\xspace's average localization accuracy is 78.5\% in the ResNet-CIFAR experiment, while it has 83.1\% localization accuracy in the ResNet-FEMNIST experiment. \textsc{FedDebug}\xspace finds multiple faults with linear time complexity, as shown in Figure~\ref{fig:multi_faulty_cleint_overhead} with 50 clients. The input generation time is almost constant, as the number of clients is fixed. However, the localization time increases as we increase the number of faults from 2 to 7. For instance, it localizes two faulty clients in 3.6 seconds and five faulty clients in 4 seconds.
\begin{table}[t]
\caption {\textsc{FedDebug}\xspace's fault localization in 32 FL configurations with multiple faulty clients, ranging from two to seven.}
{
\centering
\scalebox{.99}{
\input{tables/table_corrupt}
}
\label{table:corrupt}
\vspace{-2ex}
}
\end{table}
\begin{figure}[t]
\centering
\scalebox{0.4}
{\input{graphs/tikz_multi_fault_overhead.tex}}
\caption{\textsc{FedDebug}\xspace finds multiple faulty clients in linear time. There are 50 clients in each graph.}
\label{fig:multi_faulty_cleint_overhead}
\vspace{-4ex}
\end{figure}
\noindent{\textbf{Scalability (RQ4):}} Our findings also show that \textsc{FedDebug}\xspace is scalable to larger datasets and an increasing number of clients in FL. Figure~\ref{plot:scalability} summarizes the impact on \textsc{FedDebug}\xspace's ability to identify a faulty client when the number of clients changes from 25 to 400 and the training data size per client changes. We perform this experiment with two faulty clients in the FEMNIST-DenseNet configuration. Figure~\ref{plot:scalability}-(a) verifies that \textsc{FedDebug}\xspace's fault localization accuracy only reduces to 75\% even when the number of clients increases to 400. \textsc{FedDebug}\xspace's debugging time increases linearly as the number of clients increases, consistent with the scale-up properties of general distributed systems, as shown in Figure~\ref{plot:scalability}-(b). When the number of clients increases, less data is used to train each client's model, which may reduce the accuracy of clients' models. Figure~\ref{plot:scalability}-(c) also shows that \textsc{FedDebug}\xspace's fault localizability increases when the number of data points per client increases, and that it is robust against low-performing client models. For instance, when the number of data points per client increases from 850 to 1700, \textsc{FedDebug}\xspace's localization accuracy increases from 75\% to 85\%.
\begin{figure}[h]
\centering
\scalebox{0.4}
{\input{graphs/tikz_scalability_400.tex}}
\caption{\textsc{FedDebug}\xspace retains scalability on a large number of clients.}
\label{plot:scalability}
\end{figure}
\begin{tcolorbox}[left=0mm, right=0mm, top=0mm, bottom=0mm]
\textbf{Summary:} Our experiment results provide concrete evidence that \textsc{FedDebug}\xspace preserves scalability properties both in terms of time overhead and in the presence of multiple faults. It successfully identifies multiple faulty clients in 32 different FL configurations with an average accuracy of 90.3\%.
\end{tcolorbox}
\subsection{Neuron Activation Threshold}
There is no standard threshold for neuron activations~\cite{pei2017deepxplore}, and prior work uses experiential values for different use cases~\cite{harel2020neuron}. We evaluate the impact of different activation thresholds on \textsc{FedDebug}\xspace's faulty client localizability. We take 30 clients, including five faulty clients, and train ResNet-50 and DenseNet-121 on both the CIFAR-10 and FEMNIST datasets. We repeat each experiment on 10 different inputs generated by Algorithm~\ref{algo:inputs}.
Figure~\ref{fig:thresholds} shows the result of these experiments. The X-axis represents the neuron activation thresholds, ranging from 0 to 0.9. The Y-axis shows \textsc{FedDebug}\xspace's localization accuracy in a given experiment setting. For instance, at the 0.003 threshold, the average localization accuracy across the four settings is 100\%. On the other hand, at the 0.5 threshold, the average accuracy decreases significantly to 73.5\% across these configurations. Specifically, for the DenseNet-121 and FEMNIST experiment in Figure~\ref{fig:thresholds}-(d), the localization accuracy drops to 64\% at the 0.5 neuron activation threshold. We observe that \textsc{FedDebug}\xspace performs better at lower thresholds ($<$ 0.01) across different models and datasets. This behavior is expected because lower thresholds increase the sensitivity of \textsc{FedDebug}\xspace's localization approach: it monitors the majority of the neurons, compared to a higher threshold, where \textsc{FedDebug}\xspace profiles only the few neurons crossing the threshold.
\begin{figure}[t]
\centering
\scalebox{0.42}
{\input{graphs/tikz_nc_thresholds.tex}}
\caption{\textsc{FedDebug}\xspace's performance across neuron activation thresholds on 30 clients, including five faulty clients.}
\label{fig:thresholds}
\vspace{-2.5ex}
\end{figure}
\section{Faulty Client Localization}
\label{section:fault_localization}
Faults in a client's model can arise due to measurement errors, human labeling errors, data poisoning, communication problems, or subjective biases of labellers~\cite{frenay2013classification, li2016data, steinhardt2017certified, ghosh2017robust}. To achieve optimal performance of the global model, it is critical to correctly identify a faulty client and potentially restrict its participation. Manually identifying faulty clients is neither scalable nor effective due to the large number of participating clients in FL and their uninterpretable models, \emph{i.e.,}\xspace model parameters do not provide any meaningful debugging information. To automate faulty client localization, we must define a feedback mechanism to guide our search for faulty clients efficiently. Automated debugging tools~\cite{zeller2002simplifying, le2011genprog} for regular software address this problem by relying on multiple test {\em inputs} and a test {\em oracle}. For example, unit tests can guide the search toward a concise input leading to incorrect program output~\cite{zeller2002simplifying}. In FL, the two (\emph{i.e.,}\xspace inputs and oracle) translate into diverse test data and the corresponding accurate labels, both of which are unavailable in FL applications.
\textsc{FedDebug}\xspace addresses the challenges of automated fault localization with a two-pronged approach. First, it generates a pool of random test inputs and applies a novel inference-guided test input selection to construct a suite of test inputs, as shown in Figure~\ref{fig:fault_localization}-A.
Since the test inputs are autonomously generated, they are not accompanied by ground-truth labels; hence, metrics such as F1 score or accuracy cannot be used as oracle feedback to find a faulty client.
Instead, \textsc{FedDebug}\xspace performs differential testing of clients' models to measure similarities and differences among models' behaviors on selected inputs (Figure~\ref{fig:fault_localization}-B). \textsc{FedDebug}\xspace fingerprints a neural network's behavior on an input by profiling the internal neurons' contributions towards a prediction of the model. Subsequently, it accurately recognizes a client as faulty if its behavior deviates from the norm, \emph{i.e.,}\xspace the majority of the clients' behavior. Our insight is that a faulty client's model will show a noticeable difference in its internal neuron values compared to benign clients' models, based on the principle that faulty executions are intrinsically different from correct ones. The same principle is behind popular fault localization techniques, such as Spectrum-Based Fault Localization~\cite{Jone2002sbfl} and Delta Debugging~\cite{zeller2002simplifying}.
\begin{figure}[t]
\centering
\includegraphics[width=0.50\textwidth]{figures/fault-localization.pdf}
\vspace{-3ex}
\caption{An overview of \textsc{FedDebug}\xspace's fault localization approach. Firstly, it selects a random input that invokes diverse model behavior (A). Secondly, it applies differential execution on clients' models to localize a faulty client (B).}
\label{fig:fault_localization}
\vspace{-2.5ex}
\end{figure}
\begin{algorithm}[t]
{\scriptsize
\SetKwInput{KwInput}{Input}
\SetKwInput{KwOutput}{Output}
\DontPrintSemicolon
\KwInput{$shape$: dimension of the random input to be generated.}
\KwInput{$\kappa$: number of inputs to be generated.}
\KwInput{$\eta $: minimum number of clients for same prediction.}
\KwOutput{$\overline{X}$: a list containing auto-generated test inputs.}
\SetKwProg{Fn}{Function}{:}{}
{
$rand\_inputs = lazilyGenerateRandInputs(shape)$ \;
$\overline{X} = list( )$\tcp{a list for inference guided test inputs}
$seen\_clients\_sequences = list( )$
{
\While {$length(\overline{X}) < \kappa $}{
$ r\_input = pop(rand\_inputs)$ \;
$clients\_preds = getPredictions(clients, r\_input)$\;
\For{$label \in class\_labels$}{
$clients\_seq = samePredClients(clients\_preds, label)$ \;
\If{$clients\_seq \not \in seen\_clients\_sequences$ \textbf{and} $ length(clients\_seq)$ $\geq$ $\eta $ }{
$seen\_clients\_sequences.append(clients\_seq)$ \;
$\overline{X}.append(r\_input)$ \tcp{ valid test input}
$\textbf{break}$
}
}
\If {$length(rand\_inputs) < 1 $}{
$rand\_inputs = lazilyGenerateRandInputs(shape)$ \;
}
}
\KwRet $\overline{X}$\;
}
}
\caption {Inference-Guided Test Input Selection}
\label{algo:inputs}
}
\vspace{-0.5ex}
\end{algorithm}
\textbf{\textit{Inference-Guided Test Input Selection.}} As shown in Figure~\ref{fig:fault_localization}-A, \textsc{FedDebug}\xspace first lazily generates a pool of random test inputs (\emph{e.g.,}\xspace 32x32 images constructed from random values within the RGB scale) using Kaiming Initialization~\cite{he2015delving}. It then automatically selects only those inputs that lead to a consensus on predictions among a {\em unique} subset of clients. \textsc{FedDebug}\xspace selects up to $\kappa$ test inputs (default is $\kappa=10$) among the pool of 1000 random inputs. The goal is to minimize any overlapping behavior between clients while inferring unique class labels on selected test inputs. This is similar to achieving maximum code coverage in regular software with minimum tests.
Algorithm~\ref{algo:inputs} selects a test input (line 5) if at least $\eta$ clients (by default $\eta = 5$) predict the same label and that subset of clients has not been seen on a previously selected input (lines 6-11).
On the next random input, if a previously observed subset of clients (\emph{i.e.,}\xspace $clients\_seq \in seen\_clients\_sequences$) predicts the same class label, we discard the input. If a previously unseen combination of clients agrees on a label, we include the input in the test suite. This process is repeated until we collect a user-defined number, $\kappa$, of test inputs.
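A possible realization of the lazy input generator used in Algorithm~\ref{algo:inputs} is sketched below with PyTorch's Kaiming initializer; the generator form is our assumption.
\begin{verbatim}
import torch

def lazily_generate_rand_inputs(shape=(1, 3, 32, 32)):
    # Yield random images drawn via Kaiming (He) initialization.
    while True:
        x = torch.empty(shape)
        torch.nn.init.kaiming_uniform_(x)
        yield x
\end{verbatim}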
\textbf{\textit{Differential Execution of Clients Models.}}
In the absence of correct labels of generated test inputs, \textsc{FedDebug}\xspace adapts differential testing to find behavioral differences and similarities among clients' models, as shown in Figure~\ref{fig:fault_localization}-B. \textsc{FedDebug}\xspace profiles the contributions of individual neurons during model inference on an input and uses it to identify models with common behavior. Note that clients' models in FL are comparable due to having a similar architecture. Algorithm~\ref{algo:detection} describes the faulty client localization process. For a selected test input,
\textsc{FedDebug}\xspace exhaustively iterates over all possible combinations of potentially non-faulty clients (\emph{i.e.,}\xspace the $n$ leave-one-out combinations of $n-1$ clients). For each combination, it performs model inference on the test input and captures its neuron profiles. It aims to find the one combination of clients that has the highest overlap in behavior, representing the true $n-1$ benign clients and consequently isolating the precise faulty client. This is a lightweight process due to the negligible model inference time and the iterations' linear time ($O(n)$) complexity.
Our insight is that among all possible combinations of clients, only one represents true benign clients' subset. The remaining combinations contain the faulty client with abnormal neuron activations, reducing the model behavior overlap within that set. In summary, at a given ill-performing round in FL, \textsc{FedDebug}\xspace takes in all participating clients' models as the only input. It automatically generates test inputs and employs differential testing on clients' models to monitor abnormal behavior to precisely identify a faulty client.
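The core of this leave-one-out search can be summarized by the following Python sketch, assuming each client's activated-neuron set on a test input has already been profiled; variable names are illustrative.
\begin{verbatim}
from itertools import combinations

def localize_faulty_client(clients, neuron_sets):
    # neuron_sets: client id -> set of activated neuron ids.
    best_overlap, benign = -1, set()
    for subset in combinations(clients, len(clients) - 1):
        common = set.intersection(*(neuron_sets[c] for c in subset))
        if len(common) > best_overlap:
            best_overlap, benign = len(common), set(subset)
    return set(clients) - benign  # left-out, suspected faulty client
\end{verbatim}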
\begin{algorithm}[t]
{\scriptsize
\SetKwInput{KwInput}{Input}
\SetKwInput{KwOutput}{Output}
\DontPrintSemicolon
\KwInput{$clients$: a list of clients participated in the given FL round.}
\KwInput{$x$: a random input belongs to $\overline{X}$.}
\KwInput{$na\_t$: a threshold to profile neuron activations.}
\KwOutput{$faulty\_client$: the faulty client who has abnormal behaviour.}
\SetKwProg{Fn}{Function}{:}{}
{
{
$all\_clients\_combinations = nChooseK(clients, length(clients)-1)$ \tcp{leave-one-out subsets}
$benign\_clients = set()$\;
$max\_common\_activations = -1$ \;
\For{$t\_clients \in all\_clients\_combinations$}{
$neuron\_ids = ActivatedNeurons(t\_clients, x, na\_t)$ \;
$t\_clients\_common\_neurons = intersection(neuron\_ids)$ \;
$temp\_n = length(t\_clients\_common\_neurons)$ \;
\If{$temp\_n > max\_common\_activations$ }{
$max\_common\_activations = temp\_n$ \;
$benign\_clients = t\_clients$ \;
}
}
}
$faulty\_client = clients - benign\_clients $ \;
\KwRet $faulty\_client$ \;
}
}
\caption {Faulty Client Localization using Differential Testing}
\label{algo:detection}
\end{algorithm}
\section{Introduction}
Many machine learning applications today require private user information to train accurate models. However, users are naturally reluctant to share such data to minimize the risk of privacy violation.
To address the above needs, Federated Learning (FL) \cite{mcmahan2017communication} enables individual participating clients (\emph{e.g.,}\xspace smart-home edge devices) to train a machine learning (ML) model on their local data in a privacy-preserving environment, and then send the trained model (\emph{e.g.,}\xspace~the weights of the neural network) to a central aggregator to build a global model. FL trains highly accurate models without ever accessing user data, keeping clients' data privacy intact~\cite{li2014scaling}. With the advent of frameworks like Fedml~\cite{Fedml} and IBMFL~\cite{ibmfl2020ibm}, FL is actively used in solving real-world problems~\cite{jiang2020federated, rieke2020future, long2020federated, zheng2021applications}.
\noindent{\bf Problems.} The support for collaborative yet privacy-preserving training on FL frameworks comes at the cost of transparency and comprehension, making debugging prohibitively complicated. For instance, a faulty client can send an inaccurate model to the aggregator either due to noisy labels \cite{natarajan2013learning, hendrycks2018using, lee2018cleannet, zhang2018generalized, jiang2018mentornet, li2019learning, li2020dividemix} in the training data or malicious intent to deteriorate the global model's performance \cite{biggio2012poisoning, chen2017targeted, bagdasaryan2020backdoor, bhagoji2019analyzing}. Finding such a faulty client is challenging due to the large number of unpredictable clients that may not participate in every round because of a poor network connection or low battery power~\cite{xu2019exploring, tang2021battery}. The FL training process also spans numerous rounds, which significantly increases the search space for identifying the true culprit round. None of the existing FL frameworks provide debugging and testing support to assist the developers building FL applications~\cite{kairouz2021advances}.
These developers rely on guesswork and expensive trial-and-error debugging to find a fault-inducing client.
\noindent{\bf Challenges.} FL poses two fundamental challenges when designing a debugging technique. First, in FL deployments, train and test data are kept private and strictly reside with clients. Access to such data could allow developers to evaluate individual clients' models sent to the aggregator and identify the lowest performing model as the culprit, similar to traditional ML model testing. Neither test data nor labels are available to an FL application developer and, therefore, existing ML debugging approaches~\cite{pei2017deepxplore, xie2019diffchaser, guo2018dlfuzz, tian2018deeptest, zhang2018deeproad, xie2019deephunter, sun2019deepconcolic, odena2019tensorfuzz, braiek2019deepevolution, wardat2021deeplocalize, usman2021neurospf} are also inapplicable.
Second, due to the unpredictability of clients' participation in a round and the ephemeral nature of their contributions to the global model, reproducing a fault (\emph{i.e.,}\xspace a faulty client) and then debugging it is not feasible. Traditional breakpoint debugging would pause the entire training process in FL across all clients, causing severe side-effects such as data loss, as clients may not have persistent storage. Live postmortem or trial-and-error debugging may lead to a new set of clients for each round, based on client availability and quorum, thus making debugging even more ineffective. Considering the above limitations and challenges, we must design a debugging approach that does not rely on clients' data, can debug a live FL application without any interference, and can localize a faulty client precisely.
\noindent{\bf Contributions.} We take inspiration from traditional debuggers, such as {\tt gdb}, and redesign traditional debugging constructs that are tailored to the needs of an FL application developer. Our approach, \textsc{FedDebug}\xspace, selectively records an FL application's telemetry data to enable realtime interactive debugging on a simulation that mirrors a live FL application. With \textsc{FedDebug}\xspace's {\em breakpoint}, a developer can spawn a simulation of a live FL application and inspect the current state containing information such as clients' models and their reported metrics (\emph{e.g.,}\xspace their training loss or hyperparameters).
It also allows a seamless transition between the rounds and clients at a given breakpoint, enabling a fine-grained step-by-step inspection of the application's state. When a developer finds a suspicious state (\emph{e.g.,}\xspace multiple clients report high training loss), \textsc{FedDebug}\xspace's novel automated fault localization approach precisely identifies the faulty client without ever needing any test data or labels. Once a faulty client is identified, \textsc{FedDebug}\xspace's {\em fix and replay} repairs the global training by retroactively removing the client and resumes the live FL training.
\noindent{\bf Key Insights.} \textsc{FedDebug}\xspace leverages several insights to enable systematic FL debugging while preserving clients' privacy. We observe that instead of debugging a live FL application, we can record a set of runtime metrics essential to regenerate a given state in an FL application. Thus, \textsc{FedDebug}\xspace performs debugging on a regenerated simulated state equivalent to a live state.
To have a measurable impact on the global model, a faulty client's model must behave differently from the regular clients' models. Every client in an FL application has the same model architecture, so their internal behaviors are comparable. Based on this insight, \textsc{FedDebug}\xspace proposes an inference-guided test selection method to select high-quality and diverse test data from a pool of randomly generated input images using Kaiming Initialization~\cite{he2015delving}.
However, auto-generated data does not include class labels, \emph{i.e.,}\xspace an oracle. To address the {\em oracle} problem with such data, \textsc{FedDebug}\xspace adapts differential testing to the FL domain. It captures differences in the models' execution via neuron activations instead of output labels to identify the \emph{diverging} behavior of a faulty client.
\noindent {\bf Evaluations.} We perform a large-scale, extensive evaluation of \textsc{FedDebug}\xspace on popular models, two large-scale datasets, two well-established FL data distributions, and a real-world fault-injection technique in a total of 68 different FL configurations.
We evaluate \textsc{FedDebug}\xspace on fault localizability, debugging time, performance overhead over a vanilla FL framework (IBMFL), and scalability. \textsc{FedDebug}\xspace shows remarkable success in identifying faulty clients. It can localize the real-world faulty client with 100\% accuracy within 2.1\% of a round's training time. When faced with multiple faulty clients, \textsc{FedDebug}\xspace retains the high fault localization accuracy \emph{i.e.,}\xspace 90.3\%. \textsc{FedDebug}\xspace's debugging constructs incur an overhead of 48\% of the aggregation time to record telemetry data for state regeneration. Surprisingly, this time is only 1.2\% of a single round's training time in our experiments. Through our evaluation, we demonstrate that \textsc{FedDebug}\xspace effectively conducts interactive debugging and efficiently automates fault localization without incurring high runtime costs. \textsc{FedDebug}\xspace augments the IBMFL framework, but its underlying insights can be adapted for other FL frameworks.
We summarize \textsc{FedDebug}\xspace's contributions below:
\begin{itemize}
\item \textbf{Originality:} To the best of our knowledge, \textsc{FedDebug}\xspace is the first general-purpose debugging framework for federated learning applications that is not limited by access to clients' data. It addresses open debugging challenges in FL \cite{kairouz2021advances}.
\item \textbf{Approach:} Traditional ML trains a single model, whereas FL involves distributed training across hundreds of clients over multiple rounds. Thus, existing ML debugging approaches are inapplicable to FL. \textsc{FedDebug}\xspace's novelty lies in its observations about FL and the exploitation of insights on reproducibility, inference-guided test generation, and differential testing that do not impede performance or violate FL privacy principles.
\item \textbf{Benchmark:} We evaluate \textsc{FedDebug}\xspace in 68 FL configurations derived from well-established datasets, models, varying clients, data distribution, and fault-injections. We package our experiment environment into a public benchmark for future research use.
\item \textbf{Usefulness:} Our extensive experiments demonstrate that \textsc{FedDebug}\xspace successfully locates faulty client(s) without impeding the FL workflow. On a wide range of experiments, \textsc{FedDebug}\xspace exhibits robust results against multiple faulty clients, challenging data distributions, and a large number of clients.
\end{itemize}
\section{Related Work}
\label{related-work}
Debugging ML models has been extensively explored in recent works~\cite{pei2017deepxplore, xie2019diffchaser, guo2018dlfuzz, usman2021neurospf, wardat2021deeplocalize, odena2019tensorfuzz, braiek2019deepevolution}. The primary objectives of these approaches are interpretability and generating new test cases, by carefully perturbing real-world training inputs, to improve performance and find bugs and corner cases in a given model. These approaches require access to the training and testing data, and some are limited to testing a single neural network; hence, such approaches cannot be directly applied to FL. Lack of access to client data and resources in FL settings makes testing and debugging FL more challenging. If applied to FL, these testing approaches would find every client's model defective. Clients' models are architecturally similar but trained on local clients' data; thus, their models are semantically different from each other. Identifying defects in an isolated model is not practical either: every client's model has weaknesses that will surface on carefully selected test data. \textsc{FedDebug}\xspace overcomes these problems by focusing on the commonality of models instead of their differences.
The work most relevant to \textsc{FedDebug}\xspace primarily focuses on finding clients' contributions to a global model without exposing the private data to a central server~\cite{zeng2021comprehensive}. In practice, individual clients report information about training, such as dataset size and performance metrics, to the central aggregator \cite{zhang2020hierarchically, kang2019incentive, goetz2019active, sarikaya2019motivating, zeng2020fmore, ye2020federated, le2021incentive}. Existing approaches use prior information, \emph{e.g.,}\xspace previous task performance and data quality obtained via third-party services, to evaluate clients' models \cite{ur2021trustfed}. Some approaches recommend cross-validating clients' models on another client's local dataset~\cite{lyu2020towards}. Another alternative is to maintain a validation dataset at the central server to evaluate clients' models~\cite{lyu2020collaborative, chen2020focus}. A major limitation of the above FL-related approaches is that the aggregator is entirely dependent on the client\textquotesingle s reported information or test data to evaluate clients' models. These approaches also assume that all clients are trustworthy about their performance, which invites adversarial clients to exploit FL in order to retrieve clients' private data. Cross-validation is also prohibitive due to the limited computing resources of edge devices such as smart home sensors. \textsc{FedDebug}\xspace overcomes the limitations of debugging faulty clients with interactive and automated approaches that are privacy-preserving.
\subsection{Threats to Validity}
\label{section:threats}
To alleviate threats to external validity, we use established state-of-the-art FL experimental models (ResNet-18, ResNet-34, ResNet-50, DenseNet-121, and VGG-16), two standardized datasets from FL benchmarks, two real-world data distributions, and an industrial-scale FL framework. Similarly, we remove bias in fault injection by using the standard noisy-labels technique from the ML literature, making the faults reflective of real-world scenarios. We also experiment with varying noise rates for better evaluation, transparency, and fairness. Another source of external threats to validity is randomness in \textsc{FedDebug}\xspace's input selection method. We minimize such randomness by evaluating each configuration on 10 or 100 test inputs and reporting the average results.
|
1,116,691,500,262 | arxiv | \section{Introduction}
Although the observations of non-zero neutrino mass and large leptonic mixing
have been confirmed by several neutrino experiments in the last two decades
\cite{PDG, kamland08, T2K, chooz, daya, reno, minos}, three important issues
related to neutrino physics are yet not settled. They are namely, (a) nature of
neutrinos: Dirac or Majorana, (b) mass hierarchy of neutrinos: normal $(m_3 >
m_2 > m_1)$ or inverted $(m_2 > m_1 > m_3)$ and (c) leptonic CP violation. The
present status of different neutrino parameters can be found in the latest
global fit analysis \cite{schwetz16, valle17}. While neutrino oscillation experiments are insensitive to the
nature of neutrinos, experiments looking for lepton number violating signatures
can probe the Majorana nature of neutrinos. Neutrinoless double beta decay
$(0\nu\beta\beta)$ is one such lepton number violating process which has been
searched for at several experiments without any positive result so far but
giving stricter bounds on the effective neutrino mass. Cosmology experiments
are also giving tight constraints on the lightest neutrino mass from the
measurement
of the sum of absolute neutrino masses $\sum m_i \leq 0.17$ eV
\cite{Planck15}, disfavouring the quasi-degenerate regime of light neutrino
masses.
Although negative results at $0\nu \beta \beta$ experiments do not prove that the light neutrinos are of Dirac
nature, it is nevertheless suggestive enough to come up with scenarios
predicting Dirac neutrinos with correct mass and mixing. There have been several
proposals already that can generate tiny Dirac neutrino masses \cite{babuhe,
diracmass, diracmass1, ma1, ma2, ma3, ma4, db1, db2, db3, db4,
CentellesChulia:2017koy, type2dirac, A4dirac}. While most of these scenarios
explain the origin of tiny Dirac mass through some type of seesaw mechanisms at
tree or loop level, there are some scenarios \cite{diracmass1, A4dirac} which
consider an additional scalar doublet apart from the standard model (SM) one
which acquire a tiny vacuum expectation value (vev) naturally due to the
presence of a softly broken global symmetry. These Dirac neutrino mass models
also incorporate additional symmetries like $U(1)_{B-L}, Z_N, A_4$ in order to
generate a tiny neutrino mass
of purely Dirac type with specific mixing patterns. These symmetries play a
crucial role either in forbidding a tree level Dirac mass term between left
handed lepton doublet and right handed
neutrino singlet or a Majorana mass term of right handed neutrino singlet. In
this work, we particularly look at the possibility of a flavour symmetric
scenario for Dirac neutrinos within the well-motivated $A_4$ flavour symmetry
group. The details of this non-abelian discrete group are given in Appendix
\ref{appen1} and can also be found in several review articles
\cite{discreteRev}. Although there are many $A_4$ realisations of seesaw
mechanisms for Majorana neutrinos (see~\cite{Karmakar:2014dva} and
references therein), there are few studies in the
context of Dirac neutrinos. Recently there have been some attempts in this
direction, specially for type I seesaw \cite{CentellesChulia:2017koy}, type II
seesaw \cite{type2dirac} and neutrinophilic two Higgs doublet model
\cite{A4dirac} for Dirac neutrinos.
In the present work, we propose two different seesaw scenarios for Dirac
neutrinos namely, type I and inverse seesaw within the framework of $A_4$
flavour symmetry. Type I seesaw for Dirac neutrinos with $A_4$ flavour symmetry
was also proposed recently by the authors of \cite{CentellesChulia:2017koy}
along with its correlation to dark matter stability. In this work, we propose a
more minimal version of type I seesaw as we do not incorporate dark matter into
account. We also incorporate additional $Z_N$ discrete symmetries in such a way
that naturally explains the hierarchy of different terms in the neutrino mass
matrix. Here we note that type I seesaw for Majorana neutrinos were proposed
long back \cite{ti}. We then propose an inverse seesaw realisation of Dirac
neutrinos within the $A_4$ flavour symmetric framework. For earlier works on this seesaw
mechanism for Majorana neutrinos, one may refer to \cite{inverse}. Unlike
canonical seesaw models, the inverse seesaw can be a low scale framework where
the singlet heavy neutrinos can be at or below the TeV scale without any fine
tuning of Yukawa couplings. In the Majorana neutrino scenario, this is possible
due to softly broken global lepton number symmetry by the singlet mass term. In
the present case, we however, have a conserved lepton number global symmetry
due to the purely Dirac nature of light neutrinos. Therefore, it is no longer
possible to use soft $U(1)_L$ global symmetry breaking argument to generate a
tiny singlet mass term. In spite of that, we generate a tiny singlet neutrino
mass term at next to leading order by appropriately choosing $Z_N$ discrete
symmetries. Such discrete symmetries ensure that such a term does not arise at
leading order, so that its smallness can be naturally explained from higher order
terms.
Similar to the type I seesaw case, here also we can naturally explain the
hierarchy of different terms present in the inverse seesaw mass matrix. In both
of these models, the
antisymmetric term arising out of the products of two $A_4$ triplets plays a
non-trivial role in generating the correct neutrino mixing. We can obtain the
tribimaximal (TBM) mixing from the symmetric contribution of the product of the
two triplet flavons while nonzero $\theta_{13}$ is generated from the
anti-symmetric contribution \cite{A4dirac}. Such anti-symmetric contribution
from $A_4$ triplet products can play a non-trivial role in generating nonzero
$\theta_{13}$ in Majorana neutrino scenarios (through Dirac Yukawa
coupling appearing in type I seesaw) as well \cite{A4Majorana}. The Dirac
neutrino mass matrix can completely dictate the observed neutrino mixing in this
construction in case where the charged lepton mass matrix is diagonal. However,
in some cases, the charged lepton mass matrix can be non-trivial and has an
important contribution to lepton mixing.
Both of the discrete flavour symmetric constructions for type I and inverse
seesaw mechanisms show highly predictive nature of the models for generic
choices of flavon alignments. The anti-symmetric contribution arising from the
Dirac nature of neutrinos not only generates nonzero $\theta_{13}$ but also produces
deviations from the maximal value of the atmospheric mixing angle, as favoured by the
latest global fit data \cite{schwetz16, valle17}. Interestingly, $\theta_{23}$
is found to be in the lower octant in our models. Moreover, due to the particular
flavour structures of the models, only a normal hierarchy of the neutrino mass
spectrum is allowed, another interesting prediction of the models. In addition to
this, we also constrain the absolute neutrino masses and Dirac CP phase, that
can be probed at ongoing and future experiments. The model can also be falsified
by any future observation of $0\nu \beta \beta$.
This letter is organised as follows. In Section \ref{sec:models}, we present
complete $A_4$ flavour symmetric models for type-I and inverse seesaw scenario
respectively. Complete phenomenology of the associated models and their
predictions are also presented in this section. We then conclude in Section
\ref{sec:conc} and include a short note on the $A_4$ multiplication rules involved
in our analysis in Appendix \ref{appen1}.
\section{$A_4$ Flavour Model with Dirac
Neutrinos}\label{sec:models}
\subsection{Dirac Type I seesaw}
Unlike in the canonical seesaw mechanism for Majorana neutrinos \cite{ti} where
we incorporate the presence of three (at least two) Majorana heavy neutrinos,
here we introduce two copies of Weyl fermions $N_{L}$ and $N_{R}$ per
generation, which are charged under discrete $Z_4 \times Z_3$ symmetry as given
in Table \ref{tab:t22}. Here $N_{L, R}$ can also be considered to be part of a
heavy Dirac fermion whose mass can arise either as a bare mass term or from
flavons depending upon their transformations under the flavour symmetries. In
Table \ref{tab:t22}, we also show the relevant SM fields, required flavon fields
as well as their transformations under the flavour symmetry. It can be seen from
the symmetry transformations that a Dirac mass term for light neutrinos cannot
be written at tree level. However, we can write down mass term for heavy
neutrinos as well as coupling between light and heavy neutrinos, so that the
effective light neutrino Dirac mass can be generated from a seesaw mechanism.
\begin{table}[h]
\centering
\resizebox{12cm}{!}{%
\begin{tabular}{|c|cccccc|cccc|}
\hline
Fields & $L$ & $e_{\mbox{\tiny$R$}}, \mu_{\mbox{\tiny$R$}}, \tau_{\mbox{\tiny$R$}}$ & $H$ & $\nu_{\mbox{\tiny$R$}}$& $N_L$ &
$N_{\mbox{\tiny$R$}}$ & $\phi_{\mbox{\tiny$S$}}$
& $\phi_{\mbox{\tiny$T$}}$ & $\xi$ &$\chi$\\
\hline
$A_{4}$ & 3 & 1,$1''$,$1'$ & 1 & 3 &3 &3& 3 & 3 & 1 &1\\
\hline
$Z_{4}$ & $i$ &-$i$& 1& $1$ & -1& -1&$1$&$1$ &1&-$i$ \\
\hline
$Z_3$ & $\omega$ & $\omega$ & 1& $\omega^2$ & $\omega^2$ & $\omega$&
$\omega$& 1 & $\omega$ & 1 \\
\hline
\end{tabular}
}\
\caption{\label{tab:t22} Field content and transformation properties under
$A_4 \times Z_4 \times Z_3$ symmetry. }
\end{table}
The relevant Lagrangian for charged lepton sector can be written as
\begin{equation}\label{Lag:cl2}
\mathcal{L}_l = \frac{y_e}{\Lambda}(\bar{L}\phi_{\mbox{\tiny$T$}})H e_{\mbox{\tiny$R$}}
+\frac{y_{\mu}}{\Lambda}(\bar{L}\phi_{\mbox{\tiny$T$}})_{1'}H\mu_{\mbox{\tiny$R$}}+
\frac{y_{\tau}}{\Lambda}(\bar{L}\phi_{\mbox{\tiny$T$}})_{1''}H\tau_{\mbox{\tiny$R$}}.
\end{equation}
For generic flavon vev alignment $\langle \phi_T
\rangle=(v_T,v_T,v_T)$ the corresponding mass matrix is given by
\begin{eqnarray}\label{mCL2}
m_{l} =\frac{vv_{\mbox{\tiny$T$}}}{\Lambda} \left(
\begin{array}{ccc}
y_e & y_{\mu} & y_{\tau}\\
y_e & \omega y_{\mu} & \omega^2 y_{\tau} \\
y_e & \omega^2 y_{\mu} & \omega y_{\tau}
\end{array}
\right),
\end{eqnarray}
where $\Lambda$ is the cut-off scale of the theory and $y_e, y_{\mu},
y_{\tau}$ are respective coupling constants.
This matrix can be diagonalised by using the magic matrix $U_{\omega}$, given by
\begin{eqnarray}\label{eq:omega}
U_{\omega} =\frac{1}{\sqrt{3}}\left(
\begin{array}{ccc}
1 & 1 & 1\\
1 & \omega & \omega^2\\
1 & \omega^2 & \omega
\end{array}
\right).
\end{eqnarray}
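As a quick numerical cross-check (ours, not part of the original construction), the
following Python sketch verifies that $U_{\omega}$ indeed diagonalises $m_l$ for
arbitrary charged lepton Yukawas; the values of $y_e$, $y_\mu$, $y_\tau$ are
illustrative assumptions, and the overall factor $vv_{\mbox{\tiny$T$}}/\Lambda$ is set
to unity since it does not affect the mixing:
\begin{verbatim}
import numpy as np

w = np.exp(2j * np.pi / 3)            # omega, the cube root of unity
ye, ymu, ytau = 0.1, 0.3, 0.7         # illustrative Yukawa couplings
ml = np.array([[ye,        ymu,         ytau],
               [ye, w    * ymu, w**2 * ytau],
               [ye, w**2 * ymu, w    * ytau]])
Uw = np.array([[1, 1,    1   ],
               [1, w,    w**2],
               [1, w**2, w   ]]) / np.sqrt(3)

# U_omega^dagger m_l is diagonal: sqrt(3) * diag(ye, ymu, ytau)
print(np.round(Uw.conj().T @ ml, 12))
\end{verbatim}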
Now, the Lagrangian for neutrino sector can be written as
\begin{eqnarray}
\mathcal{L}=y_{\mbox{\tiny$D$}}\chi\bar{L}\tilde{H}N_R /\Lambda + y_{\mbox{\tiny$D$}'} \chi^2 \bar{N_L}
\nu_{\mbox{\tiny$R$}}/\Lambda +y_\xi \xi \bar{N_L}N_{R}+ y_s \phi_{\mbox{\tiny$S$}} (\bar{N_L}N_{R})_{3s}
+ y_a \phi_{\mbox{\tiny$S$}} (\bar{N_L}N_{R})_{3a}+\text{h.c.}
\end{eqnarray}
where the subscripts $3s, 3a$ correspond to symmetric and anti-symmetric parts
of triplet products in the $S$ diagonal $A_4$ basis, given in Appendix
\ref{appen1}. From these contributions, we obtain the mass matrices in $(\nu_L,
N_R), (N_L, \nu_R), (N_L, N_R)$ basis as
\begin{eqnarray}\label{xxx}
M_{D}=\frac{y_{\mbox{\tiny$D$}} vv_{\chi}}{\Lambda}\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1\\
\end{array}
\right), M'_{D}= \frac{y_{\mbox{\tiny$D$} '} v_{\chi}^2}{\Lambda}\left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1\\
\end{array}
\right)~~ {\rm and}
\end{eqnarray}
\begin{eqnarray}\label{xx}
M&=&\left(
\begin{array}{ccc}
x & 0 & s+a \\
0 & x & 0\\
s - a & 0 & x
\end{array}
\right)~~{\rm with}~~\langle \chi
\rangle=v_{\chi},\langle \xi
\rangle=v_{\xi}, ~\langle \phi_S
\rangle=(0,v_S,0)
\end{eqnarray}
respectively. Such a vev alignment for one of the $A_4$ triplets, $\phi_S$ in
this case, is widely used in the $S$ diagonal basis of $A_4$ and can be realised
in a natural way by minimisation of the scalar potential~\cite{He:2006dk, Branco:2009by,
A4dirac, Lin:2008aj, Rodejohann:2015hka}. In the $T$ diagonal basis another
possible vev alignment (e.g. one where only the first component of the triplet gets a vev)
is adopted~\cite{Altarelli:2005yx}. Here it is worth mentioning that in
the present set-up other possible vev alignments like $\langle \phi_S
\rangle=(v_S,0,0)$ or $\langle \phi_S \rangle=(0,0,v_S)$ are unable to
reproduce correct neutrino mixing as observed by the experiments. The vev of
the SM Higgs is denoted by $v$. Here we have defined
$x=y_\xi v_{\xi}$, $s=y_s v_{\mbox{\tiny$S$}}$, $a=y_a v_{\mbox{\tiny$S$}}$, and $y_{\mbox{\tiny$D$}}$, $y_{\mbox{\tiny$D$}'}$,
$y_{\xi}$, $y_{s}$, $y_a$ are the respective coupling constants involved in the
neutrino Lagrangian. Note that $s$ and $a$ are the symmetric and
antisymmetric contributions originated from $A_4$ multiplication, mentioned earlier.
This antisymmetric part only contributes to the mass matrix if neutrinos are
Dirac particles \cite{A4dirac} or through the Dirac neutrino mass matrix used in
the canonical seesaw mechanism for Majorana light neutrinos \cite{A4Majorana}. On
the other hand, only the symmetric part contributes to a Majorana neutrino mass
matrix as the anti-symmetric part identically vanishes. Here we will find that
this antisymmetric part,
originated due to the Dirac nature of neutrinos, plays an instrumental role in
the rest of the analysis and crucially dictates the neutrino masses and mixing.
Now, the light Dirac neutrino mass matrix in this type I seesaw like scenario can be
written as
\begin{eqnarray}
m_{\nu}&=&-M'_{D}M^{-1}M_{D}\\
&=& -\frac{y_{\mbox{\tiny$D$}}y_{\mbox{\tiny$D$}'}vv_{\chi}^3}{\Lambda^2}M^{-1}\\
&=&-\lambda \left(
\begin{array}{ccc}
x & 0 & -(a+s) \\
0 & \frac{a^2-s^2+x^2}{x} & 0\\
a-s & 0 & x
\end{array}
\right),
\end{eqnarray}
where $\lambda=\frac{y_{\mbox{\tiny$D$}}y_{\mbox{\tiny$D$}'}vv_{\chi}^3}{\Lambda^2(a^2-s^2+x^2)}$ is a
dimensionless quantity. It should be noted that the simple type I seesaw
formula written above for light Dirac neutrinos is obtained under the assumption
$M_D, M'_{D} \ll M$ which is justified as the latter is generated at leading
order whereas $M_D, M'_{D}$ arise at dimension five level only due to the chosen
particle content and their symmetry transformations. Now we define a Hermitian
matrix as
\begin{align}
\mathcal{M}&=m_{\nu}m_{\nu}^{\dagger}\\
&=|\lambda|^2 \left(
\begin{array}{ccc}
|x|^2+|s+a|^2 & 0 & x(a-s)^*-x^*(a+s) \\
0 & \frac{a^2-s^2+x^2}{x}\frac{(a^2-s^2+x^2)^*}{x^*}& 0\\
x^*(a-s)-x(a+s)^*& 0 & |x|^2+|a-s|^2
\end{array}
\right).
\end{align}
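The seesaw inversion above is easy to verify numerically. The following sketch
(with assumed, illustrative values for $x$, $s$, $a$ and for the common scales of
$M_D$ and $M'_D$, which are proportional to the identity) checks that
$-M'_D M^{-1} M_D$ reproduces the quoted closed form of $m_\nu$:
\begin{verbatim}
import numpy as np

x = 1.0 * np.exp(0.3j)     # x = y_xi v_xi (illustrative complex value)
s = 0.8 * np.exp(0.5j)     # symmetric triplet contribution
a = 0.6 * np.exp(-0.2j)    # antisymmetric triplet contribution
mD, mDp = 0.01, 0.02       # common scales of M_D and M'_D

M = np.array([[x,     0, s + a],
              [0,     x, 0    ],
              [s - a, 0, x    ]])
mnu_seesaw = -mDp * mD * np.linalg.inv(M)

lam = mD * mDp / (a**2 - s**2 + x**2)
mnu_closed = -lam * np.array([[x,     0,                        -(a + s)],
                              [0,     (a**2 - s**2 + x**2) / x, 0       ],
                              [a - s, 0,                        x       ]])
print(np.allclose(mnu_seesaw, mnu_closed))   # True
\end{verbatim}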
The matrix $\mathcal{M}$ can be diagonalised by a unitary matrix $U_{13}$, given by
\begin{eqnarray}\label{u13}
U_{13}=\left(
\begin{array}{ccc}
\cos\theta & 0 & \sin\theta{e^{-i\psi}} \\
0 & 1 & 0 \\
-\sin\theta{e^{i\psi}} & 0 & \cos\theta
\end{array}
\right),
\end{eqnarray}
through the relation $U_{13}^{\dagger}\mathcal{M}U_{13}={\rm diag}
(m_1^2,m_2^2,m_3^2)$. Here we find the mass eigenvalues ($m_1^2,m_2^2, m_3^2$)
to be
\begin{eqnarray}
m_1^2&=&\kappa^2\left[1+\alpha^2+\beta^2-\sqrt{(2\alpha\beta\cos(\phi_{
ax}-\phi_{ sx}))^2+4(\alpha^2\sin^2\phi_{ax}+\beta^2\cos^2\phi_{sx}) }
\right],\label{eq:tm1}\\
m_2^2&=&\kappa^2\left[1+\alpha^4+\beta^4+2\alpha^2\cos 2 \phi_{ax}
-2\beta^2\cos 2 \phi_{sx}-2\alpha^2\beta^2\cos 2 (\phi_{sx} - \phi_{ax}
)\right],\label{eq:tm2}\\
m_3^2&=&\kappa^2\left[1+\alpha^2+\beta^2+\sqrt{(2\alpha\beta\cos(\phi_{
ax}-\phi_{ sx}))^2+4(\alpha^2\sin^2\phi_{ax}+\beta^2\cos^2\phi_{sx}) }
\right],\label{eq:tm3}
\end{eqnarray}
where we have defined $\kappa^2=|\lambda|^2|x|^2$, $\alpha=|a|/|x|$,
$\beta=|s|/|x|$, $\phi_{sx}=\phi_s -
\phi_x $, $\phi_{ax}=\phi_a - \phi_x$ with $s=|s|e^{i\phi_s}$,
$a=|a|e^{i\phi_a}$ and $x=|x|e^{i\phi_x}$ respectively. From these definitions
it is clear that $\alpha$ is associated with the antisymmetric
contribution whereas $\beta$ is related to the symmetric contribution in the
Dirac neutrino mass matrix. Now, we obtain the rotation angle and phase involved in $U_{13}$ as
\begin{eqnarray}\label{eq:th2}
\tan2\theta=\frac{\beta\cos\phi_{sx}\cos\psi-\alpha\sin\phi_{ax}\sin\psi}
{\alpha\beta\cos(\phi_{sx}-\phi_{ax}) }
\end{eqnarray}
and
\begin{eqnarray}\label{eq:tanpsit1}
\tan\psi=-\frac{\alpha\sin\phi_{ax}}
{\beta\cos\phi_{sx}}
\end{eqnarray}
respectively.
Now the final lepton mixing matrix is given by
\begin{eqnarray}
U&=&U^{\dagger}_{\omega}U_{13},
\end{eqnarray}
and the $U_{e3}$ element of the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) leptonic
mixing matrix is given by $\frac{1}{\sqrt{3}}(\cos\theta+\sin\theta
e^{-i\psi})$. The PMNS mixing matrix is parametrised as
\begin{equation}
U_{\text{PMNS}}=\left(\begin{array}{ccc}
c_{12}c_{13}& s_{12}c_{13}& s_{13}e^{-i\delta}\\
-s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta}& c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta} & s_{23}c_{13} \\
s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta} & -c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta}& c_{23}c_{13}
\end{array}\right).
\label{PMNS}
\end{equation}
Comparing $U_{e3}$ from the model with the one in the standard PMNS leptonic mixing matrix $U_{\text{PMNS}}$, we obtain
\begin{eqnarray}
\sin\theta_{13} e^{-i\delta}=\frac{1}{\sqrt{3}}(\cos\theta+\sin\theta
e^{-i\psi}).
\end{eqnarray}
Now, $\sin\theta_{13}$ and $\delta$ can be parametrised in terms of $\theta$
and $\psi$ as
\begin{eqnarray}\label{eq:s13}
\sin^2\theta_{13}=\frac{1}{3}(1+\sin 2\theta\cos\psi)~~{\rm and}~~
\tan\delta=\frac{\sin\theta\sin\psi}{\cos\theta+\sin\theta\cos\psi}.
\end{eqnarray}
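For the reader's convenience, all of these predictions can be evaluated numerically
at a given parameter point. The following sketch (with assumed, illustrative values
of $\alpha$, $\beta$, $\phi_{ax}$, $\phi_{sx}$ and $\kappa$) implements equations
(\ref{eq:tm1})-(\ref{eq:tm3}), (\ref{eq:th2}), (\ref{eq:tanpsit1}) and (\ref{eq:s13}):
\begin{verbatim}
import numpy as np

alpha, beta = 1.0, 1.2        # |a|/|x| and |s|/|x| (illustrative)
pax, psx = 0.4, 2.8           # phi_ax, phi_sx in radians (illustrative)
kappa = 0.02                  # overall mass scale in eV (illustrative)

psi = np.arctan2(-alpha * np.sin(pax), beta * np.cos(psx))
theta = 0.5 * np.arctan2(
    beta * np.cos(psx) * np.cos(psi) - alpha * np.sin(pax) * np.sin(psi),
    alpha * beta * np.cos(psx - pax))

root = np.sqrt((2 * alpha * beta * np.cos(pax - psx))**2
               + 4 * (alpha**2 * np.sin(pax)**2 + beta**2 * np.cos(psx)**2))
m1 = kappa * np.sqrt(1 + alpha**2 + beta**2 - root)
m3 = kappa * np.sqrt(1 + alpha**2 + beta**2 + root)
m2 = kappa * np.sqrt(1 + alpha**4 + beta**4
                     + 2 * alpha**2 * np.cos(2 * pax)
                     - 2 * beta**2 * np.cos(2 * psx)
                     - 2 * alpha**2 * beta**2 * np.cos(2 * (psx - pax)))

s13sq = (1 + np.sin(2 * theta) * np.cos(psi)) / 3
delta = np.arctan2(np.sin(theta) * np.sin(psi),
                   np.cos(theta) + np.sin(theta) * np.cos(psi))
print(m1, m2, m3, s13sq, delta)
\end{verbatim}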
\begin{figure}[h]
$$
\includegraphics[height=5cm]{rabt1.png}~~~~~~
\includegraphics[height=4.8cm]{rsxaxt1.png}
$$
\caption{Allowed regions of $\beta$-$\alpha$ (left panel) and
$\phi_{ax}$-$\phi_{sx}$ (right panel) planes from the 3$\sigma$ global fit
values of $\theta_{13}$,
$\theta_{12}$ and $\theta_{23}$~\cite{valle17} represented by the blue dots.
Red dots in each plot also satisfy $3\sigma$ allowed range for the ratio
($r$) of solar to atmospheric mass squared differences~\cite{valle17}.}
\label{fig:abpp}
\end{figure}
Such correlation between $\sin\theta_{13}$
($i.e.$ $U_{e3}$) and the model parameters can easily be obtained and can also be found
in~\cite{A4dirac, Grimus:2008tt,Albright:2008rp,Albright:2010ap,He:2011gb}
for other scenarios. From equations (\ref{eq:th2})-(\ref{eq:s13}) it is clear
that all
the mixing angles ($\theta_{13}, \theta_{12}, \theta_{23}$) and Dirac CP phase
$(\delta)$ involved in the lepton mixing matrix $U_{\text{PMNS}}$ are
functions of four parameters namely, $\alpha$, $\beta$, $\phi_{ax}$ and
$\phi_{sx}$. Now, using 3$\sigma$ allowed range~\cite{valle17} of the three
mixing angles ($\theta_{13}, \theta_{12}, \theta_{23}$), in figure \ref{fig:abpp}
we have shown the constrained range of $\alpha$, $\beta$, $\phi_{ax}$ and
$\phi_{sx}$. In figure \ref{fig:abpp}, the blue dots represent
the allowed points in the $\alpha$-$\beta$ plane (left panel) and
$\phi_{ax}$-$\phi_{sx}$ (right panel)
plane respectively. In addition to the bounds obtained from the mixing angles,
the parameter space can be further constrained in order to satisfy the ratio of
solar to atmospheric mass squared differences, defined as
\begin{eqnarray}\label{eq:r}
r=\frac{\Delta{m}_{\odot}^{2}}{|\Delta{m}_{A}^{2}|}=
\frac{\Delta{m_{21}^{2}}}{|\Delta{m}^2_{31}|}.
\end{eqnarray}
From equations (\ref{eq:tm1})-(\ref{eq:tm3}) and equation (\ref{eq:r}) it is evident that
this ratio $r$ is also a function of $\alpha$, $\beta$, $\phi_{sx}$ and $\phi_{ax}$ (which
appear in the expressions for the mixing angles). Once again using the
\begin{figure}[h]
$$
\includegraphics[height=5.0cm]{kt1.png}~~~~~
\includegraphics[height=4.8cm]{mast1.png}
$$
\caption{Left panel: Estimation for $\kappa$ (in eV) as a function
of $\alpha$. Right panel: Prediction for absolute neutrino masses (orange, blue,
brown and red for $m_1$, $m_2$, $m_3$ and $\sum m_i$
respectively) as a function of $\alpha$. In both cases the parameter space
simultaneously satisfies 3$\sigma$ allowed range of $\theta_{13}$,
$\theta_{12}$, $\theta_{23}$ and $r$~\cite{valle17} as shown in figure
\ref{fig:abpp}.}
\label{fig:k-m-t1}
\end{figure}
$3\sigma$ range of the neutrino mass squared differences we find the
allowed ranges for $\alpha$, $\beta$, $\phi_{sx}$ and $\phi_{ax}$ given by the
red dots in both panels of figure \ref{fig:abpp}. Therefore, these
red dots represent the regions of model parameters that satisfy the complete
neutrino oscillation data~\cite{valle17}. This reveals that the allowed range
of $\alpha \approx$ 0.6-1.6 corresponds to $\beta \approx$ 0.4-2.0,
as evident from the left panel of figure \ref{fig:abpp}. On the
other hand, the right panel
of figure \ref{fig:abpp} shows that a few disconnected regions are allowed in the
$\phi_{sx}$-$\phi_{ax}$ parameter space.
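A scan of this kind can be reproduced along the following lines. The sketch below
(ours, with illustrative $3\sigma$ cuts; the precise global-fit intervals should be
taken from the cited reference) builds the full mixing matrix
$U=U_\omega^\dagger U_{13}$, extracts the standard mixing angles and the ratio $r$,
and keeps random points passing the cuts:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
w = np.exp(2j * np.pi / 3)
Uw = np.array([[1, 1, 1], [1, w, w**2], [1, w**2, w]]) / np.sqrt(3)

def observables(alpha, beta, pax, psx):
    psi = np.arctan2(-alpha * np.sin(pax), beta * np.cos(psx))
    th = 0.5 * np.arctan2(
        beta * np.cos(psx) * np.cos(psi) - alpha * np.sin(pax) * np.sin(psi),
        alpha * beta * np.cos(psx - pax))
    U13 = np.array([[np.cos(th), 0, np.sin(th) * np.exp(-1j * psi)],
                    [0, 1, 0],
                    [-np.sin(th) * np.exp(1j * psi), 0, np.cos(th)]])
    U = Uw.conj().T @ U13
    s13 = abs(U[0, 2])**2
    s12 = abs(U[0, 1])**2 / (1 - s13)
    s23 = abs(U[1, 2])**2 / (1 - s13)
    root = np.sqrt((2 * alpha * beta * np.cos(pax - psx))**2
                   + 4 * (alpha**2 * np.sin(pax)**2
                          + beta**2 * np.cos(psx)**2))
    m1sq = 1 + alpha**2 + beta**2 - root
    m3sq = 1 + alpha**2 + beta**2 + root
    m2sq = (1 + alpha**4 + beta**4 + 2 * alpha**2 * np.cos(2 * pax)
            - 2 * beta**2 * np.cos(2 * psx)
            - 2 * alpha**2 * beta**2 * np.cos(2 * (psx - pax)))
    return s13, s12, s23, (m2sq - m1sq) / (m3sq - m1sq)

kept = 0
for _ in range(100000):
    p = rng.uniform([0, 0, -np.pi, -np.pi], [2, 2.5, np.pi, np.pi])
    s13, s12, s23, r = observables(*p)
    if (0.0196 < s13 < 0.0241 and 0.27 < s12 < 0.34
            and 0.38 < s23 < 0.63 and 0.026 < r < 0.035):
        kept += 1
print(kept, "points pass the illustrative cuts")
\end{verbatim}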
\begin{figure}[h]
$$
\includegraphics[height=5cm]{dax.png}~
\includegraphics[height=5cm]{dsx.png}
$$
$$
\includegraphics[height=5cm]{ordat1.png}
$$
\caption{Predictions for Dirac CP phase $\delta$ (in radian) as a function of
$\phi_{ax}$, $\phi_{sx}$ and $\alpha$ for 3$\sigma$ allowed range of
$\theta_{13}$, $\theta_{12}$, $\theta_{23}$ and $r$~\cite{valle17} as evaluated
in figure \ref{fig:abpp}.}
\label{fig:delt1}
\end{figure}
Now, using these allowed values (obtained from figure \ref{fig:abpp}) for the
parameters ($\alpha$, $\beta$, $\phi_{sx}$ and $\phi_{ax}$) and the best fit
value for the solar mass squared difference $\Delta{m}_{\odot}^{2}=
7.5\times10^{-5}~{\rm eV}^2$~\cite{valle17}, we can find the common factor
$\kappa$ appearing in the absolute
neutrino mass eigenvalues using the relation
\[
\kappa=
\sqrt{
\begin{aligned}
\Delta{m}_{\odot}^{2}/&\{\left[1+\alpha^4+\beta^4+2\alpha^2\cos
2
\phi_{ax}
-2\beta^2\cos 2 \phi_{sx}-2\alpha^2\beta^2\cos 2 (\phi_{sx} - \phi_{ax}
)\right] \\
& -[1+\alpha^2+\beta^2-\sqrt{(2\alpha\beta\cos(\phi_{
ax}-\phi_{ sx}))^2+4(\alpha^2\sin^2\phi_{ax}+\beta^2\cos^2\phi_{sx}) }
]\}
\end{aligned}
}.
\]
Here we have used equations (\ref{eq:tm1}) and \eqref{eq:tm2} to deduce the above
correlation. In left panel of figure \ref{fig:k-m-t1} we have plotted the allowed
values for $\kappa$ (in eV) as a function of $\alpha$, where we also find that
the allowed range $\alpha\approx$ 0.6-1.6 restricts $\kappa$ to fall in the range
0.012-0.03 eV. Now using the estimation for $\kappa$ as in the left panel of figure
\ref{fig:k-m-t1}, we can find the absolute neutrino masses using equations
(\ref{eq:tm1})-\eqref{eq:tm3}. In the right panel of figure \ref{fig:k-m-t1} we
have plotted the individual absolute neutrino masses (where orange, blue, brown
dots stand for $m_1$, $m_2$ and $m_3$ respectively) as well as their sum
($\sum m_i$ denoted by the red dots) as a function of $\alpha$. Here we find
that the allowed ranges for the absolute neutrino masses (obeying normal
hierarchy) are given by
$m_1\approx0.0060-0.0023$ eV, $m_2\approx 0.0105-0.0090$ eV, $m_3\approx
0.0547-0.0481$ eV and $\sum m_i\approx 0.0707-0.0596$ eV when $\alpha$ is in
the range 0.6-1.6. In the present setup, an inverted hierarchy of
light neutrino mass spectrum however can not be accommodated, an interesting
prediction that will undergo tests in several ongoing and near future
experiments.
\begin{figure}[h]
$$
\includegraphics[height=5cm]{d23t1.png}~
\includegraphics[height=5cm]{m1t23t1.png}
$$
$$
\includegraphics[height=5cm]{m1-mall-t1.png}
\includegraphics[height=5cm]{dmallt1.png}
$$
\caption{Correlations between different light neutrino parameters for 3$\sigma$ allowed range of $\theta_{13}$,
$\theta_{12}$, $\theta_{23}$ and $r$~\cite{valle17}.}
\label{fig:colt1}
\end{figure}
Now, from equations (\ref{eq:tanpsit1}) and (\ref{eq:s13}), we find that the Dirac CP
phase can be evaluated once we find the allowed parameter space of the present model.
Therefore using the allowed regions for $\alpha$, $\beta$, $\phi_{sx}$ and
$\phi_{ax}$ as obtained in figure \ref{fig:abpp}, we can find the predictions for
the Dirac CP phase $\delta$ within this framework. In figure \ref{fig:delt1}
we have shown the prediction for $\delta$ as a function of $\phi_{ax}$,
$\phi_{sx}$, $\alpha$ and it is clear that the model predicts $\delta$ to be in the range
-$\pi/3\lesssim \delta \lesssim \pi/3$.
Next, to understand the correlation between associated observables in the
present type I seesaw framework we present few schematics in figure
\ref{fig:colt1}. Here the upper left panel represents the correlation between
$\delta$ and $\theta_{23}$. This figure shows that in our setup $\theta_{23}$
falls in the lower octant when $\delta$ lies between -$\pi/3$ and $\pi/3$.
This is in good agreement with all three global fit analysis~\cite{schwetz16,valle17,
Capozzi:2016rtj} where the best fit value for $\theta_{23}$ prefers to be in
the lower octant (although for 3$\sigma$ range both octants are possible) for
the normal hierarchy of light neutrino masses. In our case, only the normal
hierarchy of the neutrino mass spectrum is allowed. Now, in the other two panels of
figure \ref{fig:colt1} we have plotted our allowed parameter space in
$\sin^2\theta_{23}$-$m_1$ and $\delta$-$\sum m_i$ plane respectively. From
$\sin^2\theta_{23}$ versus $m_1$ plot it is clear that as $m_1$ approaches
its maximum, $\theta_{23}$ also tends towards its maximal value. In
the lower left panel of figure \ref{fig:colt1} we plot the sum of the absolute neutrino
masses $\sum m_i$ as a function of the lightest neutrino mass $m_1$ and find that it
falls well below the Planck upper limit \cite{Planck15} (as shown by the shaded
region). In this figure the splitting in the sum of absolute mass is due to
3$\sigma$ uncertainties in the solar and atmospheric mass squared
differences~\cite{valle17}. Now, on the other hand, $\delta$ versus $\sum m_i$
plot in the lower right panel of figure \ref{fig:colt1} shows that for
$\delta$ within the range -$\pi/3$ to $\pi/3$, $\sum m_i$ ranges between
0.0596 eV and 0.0707 eV, indicating that higher values of $\sum m_i$ are allowed only
when $\delta \neq 0$. It is interesting to note that the predicted values of
$\sum m_i$ lie well below the cosmological upper bound $\sum m_i \leq 0.17$
eV \cite{Planck15}.
\subsection{Dirac Inverse seesaw}
In the usual inverse seesaw model, the complete neutral fermion mass matrix is
$9\times9$, with the following structure in the $(\nu_L, N_R, S_R)$ basis:
\begin{equation}\label{eq:2}
M_{\nu}= \left(\begin{array}{ccc}
0 & m^{T}_{D} & 0 \\
m_{D}& 0 & M^{T}\\
0 & M & \mu
\end{array}\right)
\end{equation}
where $m_D$ is the usual Dirac neutrino mass. The lepton number violation
occurs only through the $3 \times 3$ block denoted by $\mu$ so that this term
can be naturally small. Block diagonalisation of the above mass matrix results
in the effective light neutrino mass matrix
\begin{equation}\label{eq:iss1}
m_{\nu} = m_{D}^{T}(M^{T})^{-1} \mu M^{-1}m_{D}
\end{equation}
Unlike canonical seesaw where the light neutrino mass is inversely proportional
to the lepton number violating Majorana mass term of singlet neutrinos, here
the light neutrino mass is directly proportional to the singlet mass term
$\mu$. The heavy neutrino masses are proportional to $M$. Here, even if $M \sim
1$ TeV, correct neutrino masses can be generated for $m_D \sim 10$ GeV, say if
$\mu \sim 1$ keV. Such small $\mu$ term is natural as $\mu \rightarrow 0$ helps
in recovering the global lepton number symmetry $U(1)_L$ of the model. Thus,
inverse seesaw is a natural TeV scale seesaw model where the heavy neutrinos
can remain as light as a TeV and Dirac mass can be as large as the charged
lepton masses and can still be consistent with sub-eV light neutrino masses.
In this section, we wish to construct a similar mass matrix for Dirac neutrinos
so that the smallness of light Dirac neutrino mass can be generated naturally
by a TeV scale seesaw. Since lepton number is conserved for Dirac neutrinos, we
consider it as a conserved global symmetry of the model, similar to the type I
seesaw discussed above. The field content of the proposed model is given in
table \ref{tab:inverse}. The $A_4$ symmetry is augmented by $Z_4 \times Z_3
\times Z_2$ discrete symmetries in order to make sure that the desired strengths
of different elements of the inverse seesaw mass matrix are naturally obtained.
\begin{table}[h]
\centering
\resizebox{14cm}{!}{%
\begin{tabular}{|c|cccccccc|cccccc|}
\hline
Fields & $L$ & $e_{\mbox{\tiny$R$}}, \mu_{\mbox{\tiny$R$}}, \tau_{\mbox{\tiny$R$}}$ & $H$ & $\nu_{\mbox{\tiny$R$}}$& $N_L$ &
$N_{\mbox{\tiny$R$}}$ & $S_L$ & $S_R$ & $\phi_{\mbox{\tiny$S$}}$ & $\phi_{\mbox{\tiny$T$}}$ & $\xi$ & $\zeta$ &$\eta$
& $\phi^{\prime}$ \\
\hline
$A_{4}$ & 3 & 1,$1''$,$1'$ & 1 & 3 &3 &3& 3 & 3 & 3 & 3 & 1 &1 &1 & 1\\
\hline
$Z_{4}$ & $1$ &1 & 1 & $i$& -$i$ & 1& $1$ & -$i$ & 1 & 1 & 1&1 & -1 & $i$
\\
\hline
$Z_3$ & $\omega$ & $\omega$ & 1 & 1& 1& 1 & 1 & 1 & $\omega$ & 1&
$\omega$&1 & 1 & $\omega$ \\
\hline
$Z_2$ & 1 & 1 & 1 & -1& 1& -1 & 1 & -1 & -1 & 1 & -1&-1 & 1 & -1 \\
\hline
$U(1)_L$ & 1 & 1& 0 &1 &1 &1 &1 &1 & 0 & 0 & 0 & 0& 0 & 0 \\
\hline
\end{tabular}
}\
\caption{\label{tab:inverse} Fields content and transformation properties under
$A_4 \times Z_4 \times Z_3 \times Z_2$ symmetry for inverse seesaw. }
\end{table}
The Lagrangian for the above field content can be written as
\begin{align}\label{eq:isslag}
\mathcal{L}_Y &= \frac{y_e}{\Lambda}(\bar{L}\phi_{\mbox{\tiny$T$}})H e_{\mbox{\tiny$R$}}
+\frac{y_{\mu}}{\Lambda}(\bar{L}\phi_{\mbox{\tiny$T$}})_{1'}H\mu_{\mbox{\tiny$R$}}+
\frac{y_{\tau}}{\Lambda}(\bar{L}\phi_{\mbox{\tiny$T$}})_{1''}H\tau_{\mbox{\tiny$R$}}+\frac{\bar{L}\tilde{H
}N_R}{\Lambda}
\left(y_\xi\xi+ y_s\phi_S+y_a\phi_S\right) \nonumber \\
& + \frac{Y_{RN}}{\Lambda} \bar{\nu_{\mbox{\tiny$R$}}}{N_L}\eta \zeta +
Y_{NS} \bar{S_R} N_L \zeta+Y^{\prime}_{NS} \bar{S_L} N_R \zeta +
\frac{Y_{S}}{\Lambda^2} \bar{S_L} S_R \phi^{\prime 3}.
\end{align}
We consider the vev alignment (similar to the one present in the
previous subsection) of the flavons as
\begin{eqnarray}
\langle \phi_T \rangle= (v_T, v_T, v_T), \langle \phi_S \rangle=(0,
v_S,0), \langle \xi \rangle=v_{\xi}, \langle \zeta \rangle=v_{\zeta},
\langle \eta \rangle=v_{\eta}, \langle \phi' \rangle= v_{\phi'}.
\end{eqnarray}
The effective light neutrino mass matrix in this scenario can be written as
\begin{equation}\label{eq:nuiss}
m_{\nu} = M_{RN}(M^{\prime}_{NS})^{-1} M_S M^{-1}_{NS} M_{\nu N}.
\end{equation}
From the Lagrangian presented in equation (\ref{eq:isslag}), we can find the
mass matrices involved in the neutrino sector after symmetry breaking ($A_4$ as
well as electroweak) as
$$ M_{RN} = \frac{Y_{RN}}{\Lambda}
v_\eta v_S \mathbf{I},
M_{NS} = Y_{NS} v_\zeta \mathbf{I}, M^{\prime}_{NS} = Y^{\prime}_{NS}
v_{\zeta}\mathbf{I}, M_S = \frac{Y_{S}}{\Lambda^2}
v_{\phi^{\prime}}^3\mathbf{I}
$$
\begin{eqnarray}\label{eq:MnuN}
M_{\nu N} = \frac{v}{\Lambda}\left(
\begin{array}{ccc}
x & 0 & s+a \\
0 & x & 0\\
s - a & 0 & x
\end{array}
\right).
\end{eqnarray}
Here, $x=y_\xi v_{\xi}$, $s=y_s v_{\mbox{\tiny$S$}}$ and $a=y_a
v_{\mbox{\tiny$S$}}$ respectively where $s$ and $a$ stands for symmetric and antisymmetric
contributions originated from $A_4$ multiplication similar to the type I seesaw
case discussed before. The couplings $Y_{RN}, Y_{NS}, Y^{\prime}_{NS}, Y_{S},
y_\xi, y_s, y_a$ are the Yukawa couplings given in the above Lagrangian and
$\Lambda$ is the cut-off scale. Again, here we emphasise that the antisymmetric
part of $A_4$ triplet products particularly contribute to any Dirac type mass
matrix involved in the neutrino seesaw formula and the associated phenomenology
crucially depends on this contribution. Since the construction of the charged
lepton sector is exactly identical with type I seesaw scenario, it can again be
diagonalised by the magic matrix $U_{\omega}$ given in equation
(\ref{eq:omega}). To diagonalise the neutrino mass matrix let us define the
Hermitian mass matrix as before
\begin{eqnarray}\label{eq:mmdiss}
\mathcal{M}=m_{\nu}m_{\nu}^{\dagger}=|\lambda|^2
\left(
\begin{array}{ccc}
|x|^2+|s + a|^2 & 0 & x(s - a)^*+x^*(s+a) \\
0& |x|^2 & 0\\
x^*(s - a)+x(s+a)^*& 0 & |x|^2+|s - a|^2
\end{array}
\right),
\end{eqnarray}
where $\lambda=\frac{Y_{RN} Y_{S}}{Y_{NS}Y'_{NS}}\frac{vf^3}{\Lambda^4}$. Here
we have assumed the vevs of all the scalar flavons (except the SM Higgs) to be the same,
denoted by $f$, $i.e.$, $v_{\mbox{\tiny$S$}}=v_{\xi}=v_{\zeta}=v_{\eta}=v_{\phi'}=f$.
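As in the type I case, this structure can be verified numerically. The following
sketch (with assumed, illustrative values of $x$, $s$, $a$ and $\lambda$, the latter
collecting the identity-proportional blocks) builds $m_\nu$ and checks that
$m_\nu m_\nu^\dagger$ reproduces equation (\ref{eq:mmdiss}):
\begin{verbatim}
import numpy as np

x = 1.0 * np.exp(0.3j)         # illustrative complex parameters
s = 0.8 * np.exp(0.5j)
a = 0.6 * np.exp(-0.2j)
lam = 0.05 + 0.02j             # collects the identity-proportional blocks

mnu = lam * np.array([[x,     0, s + a],
                      [0,     x, 0    ],
                      [s - a, 0, x    ]])
Mherm = mnu @ mnu.conj().T

c = np.conj
expected = abs(lam)**2 * np.array(
    [[abs(x)**2 + abs(s + a)**2, 0, x * c(s - a) + c(x) * (s + a)],
     [0, abs(x)**2, 0],
     [c(x) * (s - a) + x * c(s + a), 0, abs(x)**2 + abs(s - a)**2]])
print(np.allclose(Mherm, expected))   # True
\end{verbatim}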
The Hermitian matrix $\mathcal{M}$ can be diagonalised by a unitary matrix
$U_{13}$ as given in equation (\ref{u13}), obeying $U_{13}^{\dagger}\mathcal{M}U_{13}={\rm diag}
(m_1^2,m_2^2,m_3^2)$,
where the two parameters $\theta$ and $\psi$ appearing in $U_{13}$ are found to
be
\begin{eqnarray}\label{eq:angiss}
\tan 2\theta=\frac{\alpha\sin\phi_{ax}\sin\psi-\beta\cos\phi_{sx}\cos\psi}
{\alpha\beta\cos(\phi_{sx}-\phi_{ax})}~~~ {\rm and}~~~
\tan\psi=-\frac{\alpha\sin\phi_{ax}}
{\beta\cos\phi_{sx}}.
\end{eqnarray}
Here, $\alpha=|a|/|x|$, $\beta=|s|/|x|$, $\phi_{sx}=\phi_s -
\phi_x $, $\phi_{ax}=\phi_a - \phi_x$ with $s=|s|e^{i\phi_s}$,
$a=|a|e^{i\phi_a}$ and $x=|x|e^{i\phi_x}$ respectively.
Hence $\alpha$ is basically associated with the antisymmetric
contribution whereas $\beta$ is related to the symmetric contribution in the
Dirac neutrino mass matrix. The final lepton
mixing matrix in this case is also governed by the mixing matrix,
$U=U^{\dagger}_{\omega}U_{13}$ involving contributions from both charged
lepton and neutrino sector. Therefore, the correlation of $\theta_{13}$ (and
$\delta$) with $\theta$ (and $\psi$) in this case is similar to the one
presented in the type I seesaw case as given by equation \ref{eq:s13}.
\begin{figure}[h]
$$
\includegraphics[height=5cm]{rabissbp.png}~~~~~~
\includegraphics[height=4.8cm]{rsxaxissbp.png}
$$
\caption{Allowed regions of $\beta$ vs $\alpha$ (left panel) and $\phi_{ax}$ vs
$\phi_{sx}$ (right panel) for 3$\sigma$ allowed range of $\theta_{13}$,
$\theta_{12}$ and $\theta_{23}$ represented by the blue dots. Red dots in each
plot also satisfy the $3\sigma$ allowed range for the solar to atmospheric
mass-squared ratio $r$ along with the upper limit on the sum of the three light
neutrino masses $\sum m_i \leq 0.17$ eV~\cite{Planck15}, representing the actual
allowed parameter space.}
\label{fig:abpiss}
\end{figure}
After diagonalisation of the Hermitian matrix as given in equation
(\ref{eq:mmdiss}), the real, positive squared mass eigenvalues are obtained as
\begin{eqnarray}
m_1^2&=&\kappa^2\left[1+\alpha^2+\beta^2-\sqrt{(2\alpha\beta\cos(\phi_{
ax}-\phi_{ sx}))^2+4(\alpha^2\sin^2\phi_{ax}+\beta^2\cos^2\phi_{sx}) }
\right],\label{eq:issm1}\\
m_2^2&=&\kappa^2,\label{eq:issm2}\\
m_3^2&=&\kappa^2\left[1+\alpha^2+\beta^2+\sqrt{(2\alpha\beta\cos(\phi_{
ax}-\phi_{ sx}))^2+4(\alpha^2\sin^2\phi_{ax}+\beta^2\cos^2\phi_{sx}) }
\right],\label{eq:issm3}
\end{eqnarray}
where we have defined $\kappa^2=|\lambda|^2|x|^2$. Here we find that
both the neutrino mixing angles and masses are functions of the parameters $\alpha$,
$\beta$, $\phi_{ax}$ and $\phi_{sx}$, as evident from equations (\ref{eq:angiss})
and (\ref{eq:issm1})-(\ref{eq:issm3}) respectively.
again try to constrain the involved parameter space ($\alpha$, $\beta$,
$\phi_{ax}$ and $\phi_{sx}$) as illustrated in figure \ref{fig:abpiss}. The blue
dots in both the left (in the $\alpha$-$\beta$ plane) and right (in the
$\phi_{sx}$-$\phi_{ax}$ plane) panels satisfy the 3$\sigma$ allowed ranges for the
neutrino mixing angles~\cite{valle17}. Then we impose the constraints
(varying within $3\sigma$ range) coming from the ratio of the two mass squared
differences as defined in equation (\ref{eq:r}). The red dots in both the panels of
figure \ref{fig:abpiss} show the allowed ranges of the parameter space, after
taking both these constraints (mixing angles and the mass squared difference ratio)
into account. In the left panel of figure \ref{fig:abpiss} we find that,
corresponding to $\alpha$ in
the range 0.2 to 1.7, $\beta$ is restricted to values below 2.5. The
right panel of the same plot reveals that a few disconnected regions in the
$\phi_{sx}$-$\phi_{ax}$ plane are allowed. Note that here we have also used the
\begin{figure}[h]
$$
\includegraphics[height=5cm]{kissbp.png}~~~~~
\includegraphics[height=5cm]{masissbp.png}
$$
\caption{Left panel: Estimation for $\kappa$ (in eV) as a function
of $\alpha$. Right panel: Prediction for absolute neutrino masses (
orange, blue,
brown and red for $m_1$, $m_2$, $m_3$ and $\sum m_i$ respectively.) and Dirac
CP phase $\delta$ (right panel) for 3$\sigma$ allowed range of $\theta_{13}$,
$\theta_{12}$, $\theta_{23}$, $r$~\cite{valle17} along with with upper limit on
sum of the thee light
neutrinos $\sum m_i \leq 0.17$ eV~\cite{Planck15}
}.
\label{fig:k-m-iss}
\end{figure}
recent upper bound on the sum of the three light neutrino masses $\sum m_i \leq 0.17$
eV~\cite{Planck15} to constrain the parameter space and afterwards we analyse
only those regions which satisfy this limit. Now, the common factor
($\kappa$) appearing in the neutrino mass eigenvalues shown in equations
(\ref{eq:issm1}-\ref{eq:issm3}) can be evaluated using
\begin{eqnarray}
\kappa=
\sqrt{\Delta{m}_{\odot}^{2}/\{1-[1+\alpha^2+\beta^2-\sqrt{
(2\alpha\beta\cos(\phi_{
ax}-\phi_{ sx}))^2+4(\alpha^2\sin^2\phi_{ax}+\beta^2\cos^2\phi_{sx}) }
]\}}.
\label{kappa34}
\end{eqnarray}
In the left panel of figure \ref{fig:k-m-iss} we show the estimates of $\kappa$ (in eV)
as a function of $\alpha$. Also, it is worth mentioning that, due to the particular
flavour structure of this inverse seesaw scenario, $m_2$ coincides with $\kappa$, as
given in equation (\ref{eq:issm2}). Our predictions for the absolute neutrino masses
(with orange, blue, brown
dots representing $m_1$, $m_2$ and $m_3$ respectively)
and their sum ($\sum m_i$ denoted by the red dots) are given in the
right panel of figure \ref{fig:k-m-iss}. Here we find
that the allowed ranges for the absolute neutrino masses (obeying normal
hierarchy) are given by
$m_1\approx 0.050-0.007$ eV, $m_2\approx 0.051-0.010$ eV, $m_3\approx
0.072-0.049$ eV and $\sum m_i\approx 0.17-0.067$ eV when $\alpha$ is in
the range 0.2-1.7. In this inverse seesaw scenario, inverted mass
hierarchy is not possible since $\Delta
m^2_{23}+\Delta m^2_{21}=-2\kappa^2(\alpha^2+\beta^2)<0$.
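This sign argument can be checked directly; a minimal sketch, with assumed
parameter values:
\begin{verbatim}
import numpy as np

kappa, alpha, beta = 0.03, 0.9, 1.1   # illustrative values
pax, psx = 0.7, 2.1
root = np.sqrt((2 * alpha * beta * np.cos(pax - psx))**2
               + 4 * (alpha**2 * np.sin(pax)**2 + beta**2 * np.cos(psx)**2))
m1sq = kappa**2 * (1 + alpha**2 + beta**2 - root)
m2sq = kappa**2
m3sq = kappa**2 * (1 + alpha**2 + beta**2 + root)

lhs = (m2sq - m3sq) + (m2sq - m1sq)   # Delta m^2_23 + Delta m^2_21
print(np.isclose(lhs, -2 * kappa**2 * (alpha**2 + beta**2)))   # True
\end{verbatim}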
\begin{figure}[h]
$$
\includegraphics[height=5cm]{daxisp.png}~
\includegraphics[height=5cm]{dsxisp.png}
$$
$$
\includegraphics[height=5cm]{irdap.png}
$$
\caption{Predictions for Dirac CP phase $\delta$ (in radian) as a function of
$\phi_{ax}$, $\phi_{sx}$ and $\alpha$ for 3$\sigma$ allowed range of
$\theta_{13}$,
$\theta_{12}$, $\theta_{23}$ and $r$~\cite{valle17}, along with the limit on
the sum of the three light
neutrino masses $\sum m_i\leq 0.17$ eV~\cite{Planck15}.}
\label{fig:deliss}
\end{figure}
Now, to illustrate the prediction for Dirac CP phase and its dependence on
the parameters of the model, in figure \ref{fig:deliss} we present the allowed
regions for $\delta$ as a function of $\phi_{ax}$ (upper left panel),
$\phi_{sx}$ (upper right panel) and $\alpha$ (bottom panel) respectively. From
these plots it turns out that in this inverse seesaw scenario, the allowed value
for $\delta$ lies approximately in the range $-\pi/3$ to $+\pi/3$, similar to what
we found for the type I seesaw case before.
\begin{figure}[h]
$$
\includegraphics[height=5cm]{d23issp.png}~
\includegraphics[height=5cm]{m1t23issp.png}
$$
$$
\includegraphics[height=5cm]{m1-mall-iss.png}
\includegraphics[height=5cm]{dmallissp.png}
$$
\caption{Correlations between different light neutrino parameters for 3$\sigma$
allowed range of $\theta_{13}$, $\theta_{12}$, $\theta_{23}$, $r$~\cite{valle17}
along with
the upper limit on the sum of the three light neutrino masses $\sum m_i \leq 0.17$
eV~\cite{Planck15}.}
\label{fig:coliss}
\end{figure}
Finally, to understand the correlation between observables associated with
neutrino masses and mixings in this inverse seesaw framework, we refer to figure
\ref{fig:coliss}. In the upper left panel of this figure a correlation between
$\delta$ and $\theta_{23}$ is presented and we find that for $\delta$ in the
range -$\pi/3$ to $\pi/3$, $\theta_{23}$ always falls in the lower octant.
As mentioned earlier, this is in good agreement with all three global
analysis~\cite{schwetz16, valle17, Capozzi:2016rtj} where the best fit value for
$\theta_{23}$ prefers to be in the lower octant (although for 3$\sigma$ range
both the octants are possible) for the normal hierarchy of light neutrino masses.
Here we remind ourselves that only the normal hierarchy of neutrino masses is allowed
in the present scenario. In the upper right panel of figure \ref{fig:coliss},
we have plotted the allowed parameter space in $\sin^2\theta_{23}$-$m_1$ plane
whereas the bottom left panel
represents the allowed region in the $\delta$-$\sum m_i$ plane. From
the $\sin^2\theta_{23}$ versus $m_1$ plot it is clear that the smaller the lightest
neutrino mass $m_1$, the more likely is the deviation of $\theta_{23}$ from its
maximal value. In the bottom left panel of figure \ref{fig:coliss}, the
purple dots show the model predictions for $\sum m_i$ corresponding to the
lightest light neutrino mass $m_1$, representing a high mass regime for the
light neutrinos. Here, the region bounded by the solid lines represent
3$\sigma$
uncertainty in the mass squared differences and the shaded region stands for the
disallowed
region by the Planck upper limit~\cite{Planck15}.
Finally, the $\delta$ vs $\sum m_i$ plot in the bottom right panel shows that all
values of $\delta$ (between -$\pi/3$ and $\pi/3$) are allowed, with $\sum m_i$
ranging between 0.067 eV and 0.17 eV, indicating that higher values of $\sum m_i$
are possible only when $\delta \neq 0$. Such high values of $\sum m_i$ can
saturate the cosmological upper bound $\sum m_i \leq 0.17$ eV
\cite{Planck15} which can indirectly constrain the Dirac CP phase as well.
It is observed that the allowed range of the lightest neutrino
mass is different in the inverse seesaw case compared to what is obtained for type I
seesaw. This is evident from the right panels of figure \ref{fig:k-m-t1} and
figure \ref{fig:k-m-iss} for type I and inverse seesaw respectively. This can be
explained from the difference in light neutrino mass eigenvalue expressions
given in equations \eqref{eq:tm1}, \eqref{eq:tm2}, \eqref{eq:tm3} for type I
seesaw and equations \eqref{eq:issm1}, \eqref{eq:issm2}, \eqref{eq:issm3} for
inverse seesaw. As can be seen from these expressions, the second mass
eigenvalue ($m_2$) expression is very different in the two cases due to the
$A_4$ flavour symmetric construction and the governing
seesaw mechanism. Due to this difference, constraint coming from the ratio of
solar to atmospheric mass squared differences ($r$) in these two scenarios are
such that the inverse seesaw scenario permits a relatively larger allowed
parameter space (for $\alpha$ and $\beta$) satisfying neutrino oscillation
data. This is evident from the left panel of figure \ref{fig:abpp} (for type I seesaw) and figure \ref{fig:abpiss}
(for inverse seesaw) respectively, where red dots represents allowed parameter
space and one can find that relatively smaller values for $\alpha$ and $\beta$
are allowed for inverse seesaw compared to the type-I seesaw scenario. These
smaller values of $\alpha$ and $\beta$ in the inverse seesaw case yield a
larger value for the common factor $\kappa$ (evaluated using equation
\eqref{kappa34}) appearing
in the absolute light neutrino masses and hence generate larger
neutrino masses compared to the type I case.
\section{Conclusion}\label{sec:conc}
We have studied two different seesaw scenarios for light Dirac neutrinos namely,
type I and inverse seesaw within the framework of $A_4$ flavour symmetry to
explain lepton masses and mixing. In both the cases, the $A_4$
symmetry is augmented by additional discrete symmetries in order to make sure
that the correct hierarchy between different terms appearing in the complete
neutral fermion mass matrix is naturally obtained without making any ad hoc
assumptions. This is done by generating relatively smaller terms at next to
leading order compared to the large terms in the seesaw matrix. Since lepton
number is a global conserved symmetry in both the cases, all the mass matrices
involved are of Dirac type and hence the $A_4$ triplet products contain the
anti-symmetric component. This anti-symmetric part plays a crucial role in
generating the correct neutrino phenomenology by explicitly breaking the $\mu-\tau$
symmetry which would otherwise give rise to a vanishing reactor mixing angle. Since we use the
$S$ diagonal basis of $A_4$ for the Dirac neutrino case, the charged lepton mass
matrix is also non-trivial in our scenarios and hence can contribute to the
leptonic mixing matrix.
For generic choices of $A_4$ flavon alignments, we find that both the models
are very predictive regarding the light neutrino mass spectrum and
hierarchy, the leptonic CP phase as well as the octant of the atmospheric mixing angle.
While both of them predict a normal hierarchical pattern of light neutrino masses
with the atmospheric mixing angle lying in the lower octant, in agreement with
the latest global fit neutrino oscillation data, they also predict the leptonic
Dirac CP phase to lie in the specific range -$\pi/3$ to $\pi/3$. While the type I
seesaw predicts the sum of light neutrino masses to be small, the inverse seesaw
scenario predicts it to be high and can saturate the cosmological upper bound
$\sum m_i \leq 0.17$ eV. Apart from this, the models also predict
interesting correlation between neutrino observables like Dirac CP phase,
atmospheric mixing angle, light neutrino masses so that measuring one can shed
light on the other. Both models also predict the absence of lepton
number violation and hence cannot be tested in ongoing and future neutrinoless
double beta decay experiments. Also, the inverse seesaw model naturally
predicts a lighter heavy neutrino spectrum compared to type I seesaw and hence can
have other phenomenological consequences. Such a detailed analysis is left for
future investigations.
Apart from different predictions for light neutrino parameters,
the two seesaw scenarios discussed here can also be distinguished by observing
different phenomena they give rise to. Since the light neutrino mass in inverse
seesaw mechanism is primarily governed by the smallness of the $\mu$ term in
\eqref{eq:iss1}, the right handed neutrinos can have masses near the TeV scale
and at the same time can have sizeable Yukawa couplings with the light
neutrinos, giving rise to interesting possibilities at collider experiments
\cite{Antusch:2016ejd}. This interesting feature makes it different from
ordinary type I seesaw, where TeV scale right handed neutrino mass has to be
compensated by tiny Yukawa couplings. We may also get distinguishable features
in terms of predictions of these two models, if we also incorporate the quark
sector mixing \cite{He:2006dk}. For simplicity, we have considered the quark
sector particles to be singlet under the $A_4$ symmetry and leave a more general
study including quarks and leptons to future studies.
\section*{Acknowledgements}
The authors would like to thank the organisers of XV Workshop on High Energy
Physics Phenomenology during 14-23 December, 2017 at Indian Institute of Science
Education and Research Bhopal, India where part of this work was completed.
|
1,116,691,500,263 | arxiv | \section{Introduction}
Though the origin of Gamma-ray Bursts (GRBs) remains elusive,
the abundant data collected by BATSE on {\it Compton} Gamma-ray
Observatory (CGRO) have provided many constraints on the physical
modeling of these events. Thanks to BATSE's high temporal resolution
and high signal-to-noise ratio, temporal analyses of bursts have begun
to bear fruit. Notable results include the bi-modal distribution of the burst durations
(\cite{kle92};\cite{k93}); the claimed time-dilation of peak width
between strong and weak bursts (\cite{n94} although see
\cite{m94}); and the finding that the peak width narrows
with energy following a power-law (\cite{fen95}). Though
the exact physics that would satisfy all these observations is yet unknown,
they are important probes into the burst emission processes.
Here in this {\it Letter}, we propose a new, simple but powerful
peak-finding-algorithm (PFA) that can identify the ``peaks'' (or
non-statistical variations) within a GRB. Using this PFA,
we uncover that the distributions of peak fluences and intervals within
each burst are log-normal and several tests are performed to confirm our
findings.
\section{The Peak-Finding Algorithm}
\label{pca-sec}
GRB time histories show a vast diversity and Figure \ref{2grb-fig}
shows portions of two complex bursts: trigger 678 and 1606.
Norris et al. (1996) have used a pulse-fitting
algorithm based on $\chi^2$-fitting of individual
pulses. However, their method fails to converge when the burst is
very complex, particularly for the two bursts shown here.
The detailed procedure to find all the peaks and valleys in a burst
is as follows: (a) First, we fit the burst background using a
linear function $B(t)$ to the pre- and post-burst regions.
(b) During the burst, every count bin that has more counts
than the neighboring bins (both sides) is a candidate peak with
count $C_p$ at time $t_p$.
(c) We then search on both sides of each candidate peak for counts
$C_1$ (at $t_1$) and $C_2$ (at $t_2$) so that the conditions
$C_p - C_{1,2} \geq N_{\rm var} \sqrt{C_p}$ are satisfied.
(d) The search will stop either when both $C_1$ and $C_2$ are found,
in which case $C_p$ becomes a ``true'' peak, or when counts higher
than $C_p$ (on either side of $t_p$) is encountered, in which case
$C_p$ is not a true peak and discarded. After this step, all the
peaks ($N_k$ in total) should have been identified.
(e) We then locate the minima between two successive peaks as valleys.
Note that $C_1$ and $C_2$ are not necessarily the valleys.
The valley at the beginning (end) of a burst is chosen
from the location where counts start deviating from (dimming into)
the background.
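For concreteness, a minimal implementation of steps (b)-(e) is sketched below in
Python; the background fit of step (a) is assumed to be available as an array $B$,
the burst-edge valleys are handled in a simplified way, and all variable names are
ours rather than from any released code:
\begin{verbatim}
import numpy as np

def find_peaks(C, n_var=5.0):
    """Return indices of 'true' peaks and of the valleys between them."""
    n = len(C)
    candidates = [i for i in range(1, n - 1)
                  if C[i] > C[i - 1] and C[i] > C[i + 1]]
    peaks = []
    for p in candidates:
        thresh = C[p] - n_var * np.sqrt(C[p])
        accepted = True
        for step in (-1, +1):          # search both sides of t_p
            i, found = p + step, False
            while 0 <= i < n:
                if C[i] > C[p]:        # taller count first: discard C_p
                    accepted = False
                    break
                if C[i] <= thresh:     # C_p - C_i >= N_var * sqrt(C_p)
                    found = True
                    break
                i += step
            accepted = accepted and found
            if not accepted:
                break
        if accepted:
            peaks.append(p)
    valleys = [pk + int(np.argmin(C[pk:nxt + 1]))
               for pk, nxt in zip(peaks[:-1], peaks[1:])]
    return peaks, valleys

def intervals_and_fluences(C, B, peaks, valleys):
    dt = np.diff(peaks)                     # waiting times, in 64 ms bins
    bounds = [0] + valleys + [len(C) - 1]   # simplified burst edges
    S = [np.sum(C[bounds[i]:bounds[i + 1] + 1]
                - B[bounds[i]:bounds[i + 1] + 1])
         for i in range(len(peaks))]
    return dt, S
\end{verbatim}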
The above procedure has been implemented and
in Figure \ref{2grb-fig}, we present the selected peaks (up-triangles)
and valleys (down-triangles) for burst 678 and 1606. Note that only
a small portion of the time histories is shown for each burst and
there 58 and 35 peaks in total (with $N_{\rm var}=5$) for
burst 678 and 1606 respectively.
Assume that the peak is $C_p(t_p)$ and two neighboring valleys
are $C_1(t_1)$ and $C_2(t_2)$ after applying the
peak-finding algorithm, as schematically shown in Figure \ref{pfa-fig}
along with the background level $B(t)$.
The interval between adjacent peaks, or the waiting time
between successive peaks, is $\delta_i = t_{p_{i+1}} - t_{p_i}$ with
$i = 1, \dots, N_k -1$; and the count fluence within the $i$th peak
is defined as $S_i = \sum_{t_1}^{t_2} \left [ C(t) - B(t) \right]$.
Note that we use count fluence rather than photon energy fluence,
though they are simply related by the mean photon energy
if it remains approximately the same throughout the burst.
So, there are $N_k-1$ intervals but $N_k$ peak fluences in each burst.
The number and position of peaks within a GRB, as one might suspect,
depends somewhat on the exact value of $N_{\rm var}$, but the effect
is small when $3 \leq N_{\rm var} \leq 5$.
In this {\it Letter}, we choose $N_{\rm var}$ to be 5.
Tests using various $N_{\rm var}$ show that our
conclusions are not affected by this choice.
\section{Log-Normal Distributions of Peak Fluences and Intervals}
A two-parameter log-normal distribution is represented
as follows (\cite{ab57}; \cite{cs88}):
\begin{equation}
\label{lnd-eq}
f(x)= \left \{ \begin{array}{ll}
{1 \over {\sqrt{2\pi} \sigma}}~
\exp^{-(\log x-\mu)^2/2\sigma^{2}} & x > 0 \\
0 & x \leq 0
\end{array}
\right.
\end{equation}
\noindent where $f(x)$ is the probability density function for
$\log x$, $\mu$ and $\sigma^2$ are the two parameters corresponding to
the sample mean and variance of $\{\log x_i\}$.
The cumulative log-normal distribution is ${1\over 2}~{\rm erfc}(y)$,
where ${\rm erfc}(y)$ is the complementary error function and
$y = (\mu - \log x_i)/\sqrt{2\sigma^2}$.
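In practice, equation (\ref{lnd-eq}) and its cumulative form can be evaluated as
below; the sketch assumes natural logarithms, so $\mu$ and $\sigma$ must be defined
with the same base:
\begin{verbatim}
import numpy as np
from scipy.special import erfc

def pdf_of_logx(x, mu, sigma):
    """f(x) of Eq. (1): the probability density function for log x."""
    return (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2))
            / (np.sqrt(2 * np.pi) * sigma))

def cdf(x, mu, sigma):
    """Cumulative log-normal: 0.5*erfc(y), y = (mu - log x)/sqrt(2 sigma^2)."""
    return 0.5 * erfc((mu - np.log(x)) / np.sqrt(2 * sigma**2))

x = np.logspace(-2, 2, 5)
print(cdf(x, mu=0.0, sigma=1.0))   # rises monotonically from ~0 to ~1
\end{verbatim}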
Utilizing the BATSE DISCSC data (64 ms temporal resolution and
4-channel spectral resolution), we analyze most of the bursts in
the third BATSE catalog (\cite{mee96}) and identify their peaks
using our PFA with $N_{\rm var} = 5$.
Since we require that each burst must have more than 20 peaks
so that a distribution can be built, and all the peaks must
be 5 sigma above background, 32 bursts are selected that meet the
criteria. The requirement of more than 20 peaks in each burst is somewhat
arbitrary and it is mainly due to the concern that the statistical
significance of a distribution built from fewer than 20 points is,
in general, questionable.
We have done tests by relaxing this requirement and found
that our conclusions remain intact.
All 32 selected bursts tend to be bright, long, complex and,
on average, each of them has $\sim 35$ peaks. In the following,
we concentrate on the peak fluences $S_i$ and peak intervals
$\delta_i$ distributions. For a burst with $N$ peaks, we define
two sample means for fluences: $\mu_{ln} = \sum^{N}_1 \log(S_{i}) / N$ and
$\mu_{n} = \sum^{N}_1 S_{i} / N$; and two sample variances:
$\sigma^2_{ln} = \sum^{N}_1 (\log(S_{i}) - \mu_{ln})^2 / (N-1)$ and
$\sigma^2_{N} = \sum^{N}_1 (S_{i} - \mu_{n})^2 / (N-1)$. Here,
the subscript $ln$ represents $\log$ and $n$ for linear.
Sample means and variances for $\{\delta_i\}$ can be defined similarly.
We find that, for all 32 bursts,
the distributions of the peak fluences $\{S_i\}$ and the peak
intervals $\{\delta_i\}$ in {\it individual bursts} are consistent
with log-normal distributions (i.e. the differential
distributions of $\{\log(S_i)\}$ and
$\{\log(\delta_i)\}$ are consistent with Gaussian).
We reach this conclusion by performing $\chi^2$-fitting.
The peak fluences and intervals from each burst are binned (5 bins)
and fitted with log-normal distributions with two parameters
(mean and variance). In Figure \ref{chi2prob-fig}$a$ and $b$, we
give the $\chi^2$-fitting probabilities for
$\{S_i\}$ and $\{\delta_i\}$ for each burst, respectively.
Note that the probabilities are reasonably uniformly distributed
between 0 and 1, implying that log-normal is, in general, an
acceptable hypothesis. However, we caution that $\chi^2$-fitting
might not be an accurate measure of statistics here due to the small
number of points in each bin though they do satisfy the limit
given by Lindgren (1976) that the sample size should be roughly
4 or 5 times the number of bins.
Perhaps the peak fluence distribution is similar to log-normal,
but instead follows some other distribution, such as a truncated
power-law. We have performed fitting to peak fluences assuming
they are distributed as power-laws (again with two parameters,
the slope of the power-law and the lower cutoff). Their probabilities are
shown in Figure \ref{chi2prob-fig}$c$ (notice that probability is now
in logarithmic scale). More than half of the bursts
have probabilities less than $10^{-3}$ and the acceptable
probabilities for some bursts may just be due to their small number of
peaks (note the trend for probability becoming exceedingly smaller
when peak number increases).
Even though peak fluence and peak interval distributions
in individual bursts are consistent with log-normal distributions,
the confidence of these fits is nevertheless limited by the small number of peaks.
A more stringent test on peak fluence distribution can be performed
as follows: for each burst, we rescale each $S_i$ so that
the new $\mu_{ln} = \mu_{n} = 0$ and $\sigma^2_{ln} = \sigma^2_{n}
= 1$. As a result, all 32 bursts now have the same $\mu$($=0$)
and $\sigma^2$($=1$) in the peak fluence distribution.
We then put all the peaks from all 32 bursts together and sort them in
ascending order of the {\it scaled} $S_i$.
This combination of all peaks from all bursts after
the rescaling (1107 peaks total) removes the uncertainty in small
number of peaks. In Figure \ref{sumks-fig}$a$ and $b$,
we present the cumulative distribution of rescaled
$\log S_i$ and $S_i$ as compared to the cumulative
Gaussian distribution with mean $=0$ and variance
$=(N_{\rm pk} - N_{\rm bur}) /(N_{\rm pk} - 1) \approx 0.97$,
where $N_{\rm pk}=1107, N_{\rm bur}=32$ are the total number
of peaks and bursts, respectively.
This extremely good fit in Figure \ref{sumks-fig}$a$ strongly
indicates that the peak fluences in GRBs are distributed
log-normally rather than normally (Figure \ref{sumks-fig}$b$).
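Schematically, the rescaling and pooling can be implemented as follows; since the
BATSE peak lists are not reproduced here, the sketch generates synthetic log-normal
fluences as a stand-in, with per-burst means, widths and peak numbers chosen
arbitrarily:
\begin{verbatim}
import numpy as np
from scipy.special import erfc

def rescale_log(values):
    L = np.log(values)
    return (L - L.mean()) / L.std(ddof=1)   # zero mean, unit variance

rng = np.random.default_rng(0)
bursts = [np.exp(rng.normal(m, s, n))       # synthetic stand-in data
          for m, s, n in zip(rng.uniform(1, 4, 32),
                             rng.uniform(0.5, 1.5, 32),
                             rng.integers(20, 60, 32))]

pooled = np.sort(np.concatenate([rescale_log(b) for b in bursts]))
n_pk, n_bur = len(pooled), len(bursts)
var = (n_pk - n_bur) / (n_pk - 1)           # variance of the pooled Gaussian

emp = (np.arange(1, n_pk + 1) - 0.5) / n_pk        # empirical CDF
model = 0.5 * erfc(-pooled / np.sqrt(2 * var))     # Gaussian CDF
print("max |ECDF - Gaussian| =", np.abs(emp - model).max())
\end{verbatim}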
The same procedure can be applied to the peak interval distribution
and the results for $\log\delta_i$ are shown in Figure
\ref{sumks-fig}$c$, it is well fitted by the log-normal distribution.
Special attention should be taken, however, in making the
comparison with the hypothesis that peak intervals are distributed
{\it randomly}. The probability for having another peak after time
$\delta_i$ is $e^{-\delta_i / \langle\delta_i\rangle}$ (i.e. the Poisson probability
for zero events), and the corresponding cumulative distribution
is $1 - e^{-x}$, where $0 \leq x < \infty$. Therefore, we rescale the
peak intervals in each burst by its mean $\langle\delta_i\rangle$ and put all
the peaks from all bursts together and plot it against
$1 - e^{-x}$. This is shown in Figure \ref{sumks-fig}$d$.
The inconsistency is obvious; therefore, the hypothesis that
peaks appear randomly can be ruled out.
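The corresponding test against the random (Poisson) hypothesis is,
in the same schematic form:
\begin{verbatim}
# Rescale the intervals by each burst's mean, pool all bursts, and
# compare the empirical CDF with 1 - exp(-x).
import numpy as np

def interval_vs_poisson(bursts_dt):
    # bursts_dt: list of arrays of peak intervals delta_i, one per burst
    x = np.sort(np.concatenate([dt / dt.mean() for dt in bursts_dt]))
    ecdf = np.arange(1, len(x) + 1) / len(x)
    return x, ecdf, 1.0 - np.exp(-x)
\end{verbatim}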
Though the fitting to log-normal distribution is extremely good,
small deviations in Figure \ref{sumks-fig}$a$ and
\ref{sumks-fig}$c$ are still visible. Possible explanations include
the following: For peak fluence, our
definition of $S_i$ (the area enclosed by two successive valleys)
may be a slight underestimation, especially for large peaks due to
the fluences lost in the tails which extend into the neighboring
peaks. However, this effect is mutual for adjacent peaks, so the net
difference is estimated to be very small (a few percent at most);
this is nevertheless enough to account for the tiny asymmetry (at the
bright end) of the peak fluence distribution.
The small asymmetry in peak interval distribution, we believe,
is caused by the limited temporal resolution (64ms) of the data.
In fact, the peak interval has to be larger than a few bins
(by definition) whereas equation \ref{lnd-eq} assumes $x$ can be
arbitrarily close to 0. More precisely, a three-parameter log-normal
distribution (\cite{ab57}) can be employed but we omit any
detailed comparison here as the real correction can only come from
analyzing the high temporal resolution data (such as Time-Tagged Events
(TTE) data of BATSE).
McBreen et al. (1994) have also suggested that the intervals between
peaks in GRBs may be distributed log-normally, based on their analyses
of 34 long, intense KONUS bursts with 0.25 s time resolution
(1 s resolution after 34 s in a burst).
Their analyses and results, however, are very different from ours
in several aspects. By using a rigorous peak selection process,
we have identified individual entities within the GRB time history that
are distinct, and found that each {\em individual} burst has a
log-normal distribution.
In contrast to our $\sim 35$ peaks per burst, McBreen et al. (1994) had
only 4-5 ``peaks'' on average. More questionably, peak intervals from
different bursts with different brightnesses were put together to form one
distribution with no normalization.
We find that great caution has to be taken
when putting together quantities from different bursts due to the bias
that fewer peaks on average will be selected from weaker bursts.
No alternative, non-lognormal distributions were considered in
their analyses either.
\section{Summary}
Our findings that peak fluences and peak intervals in individual
GRBs are distributed log-normally probably have far-reaching
implications for the production of GRBs and consequently their origin,
though a full exploration is beyond the scope of this {\it Letter}.
The non-random appearance of peaks seems to point directly
to a central engine, and interestingly, similar behavior has
also been found in the interval (waiting time) distribution for the
successive events from the soft gamma-ray repeaters (SGRs)
(\cite{hur94}), which have been suggested to come
from the neutron star quakes (\cite{cheng96}).
Further analyses of the statistical properties of GRB time
histories, such as detailed comparisons with those
found in SGRs, will be presented in future publications.
\acknowledgments
Useful discussions with B. Cheng, S. Colgate and R. Epstein
are appreciated. H.L. gratefully acknowledges the support
by the Director's Postdoc Fellowship at Los Alamos National Lab.
This work is partially supported by the GRO Guest investigator program.
\section{Introduction}
\label{section:intro}
The physics of star formation and
molecular gas in galaxies depend on the
properties of supersonically
turbulent clouds. Observed line widths
indicate the presence of supersonic,
random
bulk motions within interstellar clouds,
and a combination of collisional heating
and radiative cooling
keeps their gas roughly isothermal
despite any strong shocks
that develop.
Protostars can condense
from gravitationally bound regions within
cold molecular gas, and the supersonic
isothermal
turbulence within the bound clouds will persist
as long as collapse occurs faster than
the largest turbulent eddies turn over
\citep{robertson2012a,murray2015a,murray2017a},
until magnetic fields become important \citep[e.g.,][]{hennebelle2008b,chen2014a},
the gas becomes optically thick to its
own cooling radiation,
or the conversion
of gravitational potential or nuclear energy
into kinetic energy disperses the cloud.
The properties
of dense turbulent clouds therefore
set the initial conditions of the
star formation process on smaller scales,
and a deeper understanding of the
physics of dense regions in turbulence
will enable a more complete picture for
how interstellar clouds transform into
stars.
To this end, this paper develops a new theoretical
model for dense regions in supersonic
isothermal turbulence that explains their
internal structure and time evolution. Using
a combination of hydrodynamical simulation
and new analysis methods, we identify the
population of dense regions, measure their
physical structure, and characterize their
features. Our
work connects the properties of individual
dense regions to the statistical properties of the
supersonically turbulent fluid, and
provides a new view for how gravitational collapse initiates.
The increasingly rich set of observations of
molecular gas clouds acquired over the last
forty years provides a strong empirical
motivation for modeling interstellar medium
(ISM) clouds as turbulent fluids. The
velocity-size relations of molecular clouds
\citep{larson1981a,myers1983a,solomon1987a,goodman1998a,bolatto2008a,heyer2004a,heyer2009a}
find an analog in the velocity structure
function of turbulent motions
\citep{elmegreen2004a}.
Other observed properties of molecular clouds,
such as their filamentary morphology in the radio
\citep{schneider2011a,kirk2013a} and in {\it Herschel} infrared data
\citep{andre2010a,menshchikov2010a,miville-deschenes2010a,arzoumanian2011a,hennemann2012a,schneider2012a,konyves2015a},
or their approximately fractal
character \citep{stutzki1998a,roman-duval2010a},
suggest they contain supersonically turbulent
gas. Indeed, maps of molecular clouds resemble
the projected density fields of simulated
turbulent fluids \citep[e.g.,][]{federrath2010a,smith2014a},
with both possessing
large spatial inhomogeneities \citep[e.g.,][]{falgarone1992a}
and dense, filamentary features.
Simple analytical and dimensional
arguments provide deep reaching physical
descriptions of the properties of
incompressible turbulence \citep{kolmogorov1941a},
and subsonic magnetohydrodynamical turbulence has a well-developed
analytical theory for how dissipation proceeds \citep[e.g.,][]{goldreich1995a,goldreich1997a}.
However, the shock-ridden
structure of supersonic turbulence
limits analytical models from providing a complete
picture.
In contrast to the roughly local (in
$k$-space) interactions between vortices that
describe the energy cascade in
incompressible turbulence \citep{kraichnan1959a},
the nonlocal
interactions between large-scale bulk motions
and dissipation occurring on small scales near
shocks
have mostly stymied rigorous analytical
modeling.
For instance,
the velocity power spectrum of supersonic
turbulence is intermediate between the Kolmogorov
spectrum and the Burgers spectrum for pure shock turbulence,
and may require a density weighting to be described
approximately through analytical means \citep{kritsuk2007a,federrath2013b}.
This challenge has motivated the engineering
of sophisticated numerical simulations of
the properties of supersonic turbulence, through
which much of the current intuition about the
role of turbulence in molecular clouds has
been built. Simulations have verified that
random motions in supersonic turbulence dissipate
roughly on the Mach crossing time of the fluid,
with or without the presence of magnetic fields
\citep[e.g.,][]{stone1998a,maclow1998a,maclow1999a,ostriker2001a,cho2003a,beresnyak2011a}.
This finding suggests that turbulence in
real molecular clouds must be regularly driven or
the interior structure of the cloud will evolve
on a short time scale.
The velocity structure function of supersonic
turbulence shows a steep relation between velocity
differences and scale \citep{ballesteros-paredes2006a,kritsuk2007a},
similar to the size-line width relation
for molecular clouds, which may indicate that
clouds of different sizes have similar turbulent
properties.
Connections drawn between models for supersonic turbulence
and the theory of star formation often involve
the statistical properties of the turbulent
density field \citep[for reviews, see][]{maclow2004a,mckee2007a,krumholz2014a}. Supersonic isothermal turbulence
displays a volumetric density probability density
function (PDF) close to lognormal for solenoidally-driven
turbulence
\citep{vazquez-semadeni1994a,padoan2002a,kritsuk2007a}.
The shape of the PDF has been ascribed to the statistics
of random, overlapping density modes \citep{vazquez-semadeni1994a,padoan2002a},
which exemplifies
the largely statistical picture used to understand
astrophysical turbulence to date.
The width of the PDF depends on the turbulent
Mach number, such that the density contrasts
increase as the bulk motions become more
supersonic \citep[e.g.,][]{lemaster2008a}.
The morphology of density
inhomogeneities and the corresponding shape of
the density PDF also depend on whether the turbulent
forcing field is primarily solenoidal or
compressive \citep[e.g.,][]{federrath2008a,federrath2010a},
suggesting that the observed properties of
molecular clouds may encode the nature of the
driving mechanism \citep[e.g.,][]{ginsburg2013a}.
In star-forming clouds the line-of-sight extinction
and inferred column density PDFs develop a power-law behavior
at high densities \citep[e.g.,][]{kainulainen2009a,arzoumanian2011a,schneider2012a},
a feature which has been reproduced by turbulence simulations that include self-gravity
\citep[e.g.,][]{kritsuk2011a,ballesteros-paredes2011a,lee2015a,burkhart2017a}.
These statistical properties of the turbulent
density field provide the elements for a relatively
simple picture of star formation in molecular clouds.
Supersonic turbulence within a cloud is generated
by a driving field, setting the velocity structure
and density inhomogeneities of the gas. The combination of
the velocity-size scaling relation with observed
correlations involving the cloud mass indicates that
gravitational potential and kinetic energies
of molecular clouds lie close to virial balance
\citep{larson1981a,solomon1987a,bertoldi1992a,krumholz2005a}.
Given the strength of gravity, virial balance sets
the largest scale on which the cloud is marginally
bound. The density PDF then indicates what fraction
of the gas lies at densities above some Jeans-like
instability criterion, which sets the fraction
of gas that collapses via self-gravity \citep{krumholz2005a}.
The average efficiency of star formation in molecular clouds
is low \citep{krumholz2007a}, with typically a few percent of
the cloud mass converted to stars on a free-fall time scale.
Observationally, star formation rates
scale with the abundance of molecular gas \citep{gao2004a,bigiel2008a,kennicutt2012a}
or the fraction of dense molecular gas \citep{lada2010a,lada2013a,evans2014a,lada2017a},
but there has been some disagreement about how that connection arises
physically \citep{lada2012a,krumholz2012a}.
By choosing
the threshold for star formation appropriately and accounting for other
relevant properties of the turbulence (e.g., magnetic
field strength), the low star formation
efficiencies of molecular clouds can be reproduced
\citep[e.g.,][]{krumholz2005a,padoan2011a,federrath2012a,kainulainen2014a,padoan2017a}.
Detailed simulations of
star-forming clouds use similar criteria to determine
the regions that ultimately collapse into stars,
often by placing sink particles in potential
minima with converging velocity fields subject to
constraints on the proximity of infalling regions.
These models for star formation in molecular clouds
enjoy considerable success in matching the observations
of star-forming regions and the resulting population
of dense cores and stars
\citep{klessen1998a,klessen2000a,klessen2001a,bate2003a,bonnell2003a,bonnell2006a,glover2007a,glover2007b,krumholz2007a,offner2008a,girichidis2011a,federrath2012a,federrath2013a,federrath2015a,liptai2017a,haugbolle2017a}, although the relative importance
of driving mechanisms, feedback, initial cloud structure, magnetic fields, or
other physics remains unclear.
Despite the successes of these models, some
important puzzles still remain in relating
isothermal, supersonically-turbulent fluid to
a real star-forming cloud. If the cloud persists
over long time scales \citep[e.g.,][]{blitz1980a},
the large-scale forcing of the cloud turbulence must
operate repeatedly on time scales less than the
Mach crossing time.
For simulations
where the turbulence has reached a steady statistical
state, the forcing has typically
been applied many times over.
If turbulent
motions marginally support the cloud
against self-gravity on large scales,
as the apparent virial balance may imply,
then
the bulk of the cloud
might survive as long as a source of
regular driving remains available.
Under such conditions,
the density structure of the turbulence
within the cloud will give rise to regions
that will nonetheless collapse on time scales substantially
shorter than the Mach crossing time of the whole
cloud. These dense interior regions will form
stars once they collapse, and several outcomes
are possible. If the gravitational potential
or nuclear energy can be converted into kinetic
energy through the star formation process (i.e.,
feedback), then the
star formation itself could in principle drive
the cloud turbulence \citep{maclow2004a,federrath2015a}.
However, to prevent the
collapse of the whole cloud the forcing has to
be applied on large scales and coupled to gas
throughout \citep[e.g.,][]{vazquez-semadeni2003a,brunt2009a}. If the feedback can
be efficiently coupled to the gas, then substantial mass
from the marginally bound cloud could be
freed.
If the feedback cannot sustain the turbulence
but does not dissipate the cloud, then a persistent
cloud would again require continuous external driving
and perhaps a steady inflow of gas to balance its
star formation rate.
Otherwise, the star-formation efficiency becomes
time-variable and increases as the molecular clouds
disrupt \citep{murray2011a}.
The difficulties in arranging a long-lived
turbulent cloud with successive generations of
star formation have motivated models beyond
the simple turbulent box picture. Converging
flows can drive turbulence and lead to realistic
molecular clouds \citep[e.g.,][]{ballesteros-paredes1999a,heitsch2005a,vazquez-semadeni2006a,heitsch2008a,heitsch2009a,heitsch2011a,chen2014a,kortgen2015a,kortgen2017a,inoue2017a}. Clouds can
be continually formed during the time scale of
the converging flow, but their turbulence will
decay on a Mach crossing time once the
large-scale convergence ends. Unless the
convergence is somehow permanent or another large-scale
driving mechanism is created (see above), the clouds
will eventually undergo a rapid end where dense
bound regions will convert to stars and the cloud
will dissipate on large scales, perhaps owing to
feedback.
In a picture where star-forming molecular clouds experience
short lifetimes comparable to or less than their Mach crossing
times, the original formation of the cloud would
need to generate its interior turbulent structure.
Once regions within the cloud become overdense enough
to become gravitationally bound, the evolution of the
cloud proceeds quickly. Bound regions form stars, and
the short-lived massive stars provide feedback energy
to the surrounding gas that may affect the overall
cloud star formation efficiency but does not supply
effective large-scale driving to sustain the cloud
turbulence over the long term. The cloud may be
dispersed owing to the star formation feedback
as the turbulence decays, the
kinetic energy in bulk motions dissipates, and the
density inhomogeneities diminish. Star formation on
large scales within a galaxy would be connected to
the rate at which molecular clouds form, through
converging flows \citep[e.g.,][]{hartmann2001a,dobbs2008a}, large scale gravitational instability,
or other means, and the processes that set the
star formation efficiency of the clouds as regions
within them collapse \citep[e.g.,][]{braun2015a,semenov2016a}.
A model for long-lived molecular clouds could assert
that the observed cloud velocity-size relations
result from all clouds maintaining a
marginal virial balance, sustained by a persistent
driving mechanism.
Short-lived molecular cloud models still must
reproduce the observed cloud scaling relations, but
cannot rely on replenishment of the turbulent
motions from large-scale driving.
The nature of the
gravitational collapse itself has to maintain the
observed scaling relations by driving turbulence
\citep[e.g.,][]{scalo1982a,ballesteros-paredes2011a,ibanez-mejia2016a}.
In \citet{robertson2012a},
we identified how collapsing regions
undergo ``adiabatic heating'' of the turbulence
if the collapse occurs quickly
compared to the initial Mach crossing time. We showed
how eventually
the collapse rate and the large-scale eddy turnover
rate in the cloud will synchronize, leading to a
connection between the turbulence within the cloud
and its gravitational collapse, and suggested
the size-dispersion relation for clouds reflected
this connection. \citet{murray2015a} showed that
adiabatic heating during gravitational
collapse can explain changes in the size-line width
relation in massive star-forming regions \citep{fuller1992a,caselli1995a,plume1997a}. They showed that, in the presence of adiabatic
heating, within the sphere of
influence of a collapsing region the turbulent velocity
increases with decreasing radius. This feature contrasts
with earlier models of collapse where the
character of turbulent velocities during infall did not
change \citep{mckee2003a}.
In simulations of
turbulent self-gravitating gas \citet{murray2017a} showed that the
turbulent velocities increase with decreasing radius during
the gravitational collapse as $\varv\propto r^{-0.5}$, as we
speculated in \citet{robertson2012a}.
Other recent simulations of star-formation in turbulent
gas show consistency with the \citet{murray2015a}
model \citep[e.g.,][]{mocz2017a,ibanez-mejia2017a,li2017a} for the structure
of self-gravitating regions shaped by adiabatic heating.
If the gravitational collapse of turbulent clouds
proceeds in a manner that can reproduce the size-line width
relations, then the picture forwarded by \citet{murray2015a}
of molecular
clouds as a collapsing turbulent flow appears viable.
A remaining issue for
this model is how the star formation efficiency
connects with the internal structure of the cloud.
Therefore, understanding the properties of dense
regions in supersonic turbulence including
their density profiles, turbulent lifetimes and
structural evolution, spatial clustering,
connection with the gravitational potential,
and relation to the statistical properties of
the turbulent medium is
of interest.
Below, we present the
results of supersonic isothermal turbulence
simulations where we have characterized in
detail the properties of dense regions.
Section \ref{section:simulation}
describes how our hydrodynamical turbulence
simulations were performed.
Section \ref{section:dense_regions} presents
our method for identifying dense regions
and measurements of their individual properties.
We develop an analytical model for their
internal density structure based on
exponential isothermal atmospheres in
Section \ref{section:exponential_atmospheres}.
The time-dependent properties of the dense
regions are studied in Section \ref{section:time_dependence},
including a measurement of the typical lifetimes
of the densest regions in Section \ref{section:shock_lifetimes}.
The spreading of shocked regions in response to
deceleration from on-coming ram pressure is examined in
Section \ref{section:spreading}.
We compute the
collective properties of
the population of dense regions in
Section \ref{section:shock_populations},
including spatial clustering (Section \ref{section:clustering})
and their contributions to the density
PDF (Section \ref{section:dense_pdf}).
We then consider how the gravitational
potential of the turbulent cloud might
affect the dense regions in Section \ref{section:gravity}.
A discussion of our results is presented
in Section \ref{section:discussion}, along
with our conclusions in Section \ref{section:conclusions}.
A host of analysis methods
were engineered for studying the properties of
dense regions in turbulence, and these methods
are described in more detail in a set of Appendices.
Throughout the paper, we will refer to the dense fluid structures bounded by shock discontinuities as ``shocked regions''. The terms ``pre-shock'' and
``post-shock'' will indicate areas ahead of and behind a shock, respectively.
\section{Turbulence Simulations}
\label{section:simulation}
\begin{figure*}[ht!]
\begin{center}
\includegraphics[width=7.05in]{full_box.pdf}
\caption{Hydrodynamical simulation of supersonic isothermal turbulence. Shown are a
logarithmic projection of the average turbulent fluid density integrated through the $N=512^3$ grid (left panel; $0.25<\langle\rho\rangle<4.5$) colorized by the
vertical velocity ($0<\varv_z<10$ in red; $-10<\varv_z<0$ in blue), and a logarithmic projection of the
number of tracer particles evolved with the fluid (right panel).
\label{fig:box}}
\end{center}
\end{figure*}
To study dense regions in turbulent clouds,
we perform
simulations of supersonic isothermal turbulence using a modified
version of the hydrodynamics
code {\it Athena} \citep{stone2008a}. The simulations follow
the calculations presented in \citet{robertson2012a},
with a few modifications. The calculations simulate
an isothermal fluid (with sound speed $\cs=1$) in a unit box (side length $L=1$)
with mean density $\bar{\rho}=1$, evolved
on either $N=512^3$ or $N=1024^3$ grids
using linear reconstruction and a
constrained transport upwind integrator \citep[see][]{colella1990a,gardiner2008a}.
Following \citet{kritsuk2007a}, an
acceleration field generated with a flat spectrum with power only in the
first two $k$-modes drives the fluid. The driving field is constrained
to be solenoidal
by performing a Helmholtz decomposition in Fourier space on a generic field produced
from an appropriate transfer function applied to white noise, using the method described
by \citet{bertschinger2001a}.
The forcing field is applied ten times per crossing
time $t_{\mathrm{cross}} = L/(2\Mbar \cs)$ with an amplitude chosen to maintain a root-mean-squared (RMS)
Mach number of $\Mbar\approx5$.
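The construction of such a driving field can be summarized with the
following schematic (Python with {\tt numpy}; an illustrative sketch of
the approach, not the driver implemented in {\it Athena}):
\begin{verbatim}
# Build a solenoidal acceleration field with power restricted to the
# first two k-modes via a Helmholtz projection in Fourier space.
import numpy as np

def solenoidal_forcing(n, kmax=2, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(3, n, n, n))      # white-noise vector field
    ak = np.fft.fftn(a, axes=(1, 2, 3))
    k = np.fft.fftfreq(n) * n
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    ak *= (np.sqrt(k2) <= kmax)            # flat spectrum, |k| <= kmax
    k2[0, 0, 0] = 1.0                      # avoid division by zero
    # Helmholtz projection: remove the compressive (parallel) part
    div = (kx*ak[0] + ky*ak[1] + kz*ak[2]) / k2
    ak[0] -= kx*div; ak[1] -= ky*div; ak[2] -= kz*div
    ak[:, 0, 0, 0] = 0.0                   # remove any mean acceleration
    return np.real(np.fft.ifftn(ak, axes=(1, 2, 3)))
\end{verbatim}
The realized field would then be rescaled at each application so that the
RMS Mach number stays near the target value.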
The $N=512^3$ simulation is run for fifty crossing times, and the
conserved quantities from the simulation grid are recorded ten times per crossing time.
After twenty-five crossing times, the simulation is output at a rate of
5000 snapshots per crossing time over a brief duration of one tenth of a crossing time.
Afterward, the simulation data are
again saved ten times
per crossing time. During the last twenty crossing times, the forcing is turned off
and the turbulence is allowed to decay. The resulting $\sim1300$ snapshots provide a wealth of
information on the time-dependent properties of supersonic turbulence.
For the $N=1024^3$ simulation, we drive the turbulence continuously to
maintain an RMS Mach number of $\Mbar\approx5$ and perform our analysis
on a single snapshot output after four crossing times. While this higher
resolution simulation is driven by different realizations of the forcing
field than is the $N=512^3$
simulation, we have checked that the statistical
properties of both simulations are consistent. We use the results of the
$N=1024^3$ simulation to verify that our conclusions are insensitive to
resolution, as discussed in Section 4 below.
The left panel of Figure \ref{fig:box} shows a visualization of the entire
$N=512^3$ simulation volume
after twenty turbulent crossing times. The
image intensity is scaled with a logarithmic
projection of the density through the simulation, while the coloration reflects whether the
average projected fluid velocity in the vertical direction is positive (red) or negative (blue).
The classic features of supersonic turbulence are apparent, with
large density inhomogeneities in the fluid spanning $\sim6$ orders of magnitude
in the range $10^{-3} \lesssim \rho/\bar{\rho} \lesssim 10^3$. The main foci of this paper are
the structural properties and evolution of the dense regions, which appear bright white in
Figure \ref{fig:box}.
To assist in our analysis of dense regions in turbulence, we have implemented a new tracer
particle scheme into {\it Athena}. The details of this numerical scheme are presented in
Appendix \ref{section:tracers}. The tracer particles are initially distributed with the
grid, but move in response to the fluid velocity interpolated from the grid. Throughout the
paper, we use the tracer particles to define dense regions, track their evolution with
time, measure the statistical properties of the population of dense regions,
and connect the dense regions to the gravitational potential that the turbulent
gas would generate given its density structure.
Figure \ref{fig:box} shows the number of tracer particles projected through the
$N=512^3$ simulation volume, scaled logarithmically (right panel). Very similar density inhomogeneities
are apparent in both the fluid simulated on the grid and
the tracer particles. The tracer particles do not represent Lagrangian mass elements \citep[e.g.,][]{genel2013a},
but do provide convenient locations for measuring approximate fluid properties interpolated
from the grid. The particle interpolation methods are discussed in detail in Appendix \ref{section:reinterpolation}.
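As a rough illustration of the particle update (the production scheme and
its interpolation options are detailed in the Appendices), a minimal
sketch assuming a periodic unit box and cloud-in-cell weights is:
\begin{verbatim}
# Advect tracer particles with the CIC-interpolated grid velocity.
import numpy as np

def advect_tracers(pos, vel_grid, dt):
    # pos: (Np, 3) positions in [0, 1); vel_grid: (3, N, N, N)
    n = vel_grid.shape[1]
    u = pos * n - 0.5                  # cell-centered coordinates
    i0 = np.floor(u).astype(int)
    f = u - i0                         # fractional offsets
    v = np.zeros_like(pos)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.abs(1 - dx - f[:, 0]) *
                     np.abs(1 - dy - f[:, 1]) *
                     np.abs(1 - dz - f[:, 2]))
                ix = (i0[:, 0] + dx) % n
                iy = (i0[:, 1] + dy) % n
                iz = (i0[:, 2] + dz) % n
                v += w[:, None] * vel_grid[:, ix, iy, iz].T
    return (pos + v * dt) % 1.0        # periodic wrap
\end{verbatim}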
\section{Dense Regions in Turbulence}
\label{section:dense_regions}
\begin{figure*}[ht]
\includegraphics[width=7.1in]{mv_fractions.pdf}
\caption{Volume (left) and mass (right) fractions
of supersonically turbulent isothermal fluid below and above a given density, respectively. Shown
are the volume and mass fractions, defined in Equations \ref{eqn:volume_fraction} and
\ref{eqn:mass_fraction} for density thresholds $\rho/\bar{\rho} = [1/\Mbar,1,\Mbar,\Mbar^2,\Mbar^3]$. Most of the volume in supersonic isothermal turbulence lies at densities
$1/\Mbar\lesssim\rho/\bar{\rho}\lesssim 1$, while
most of the mass resides in regions with
densities $1\lesssim\rho/\bar{\rho}\lesssim\Mbar$.
\label{fig:mass_fractions}}
\end{figure*}
The simulations described in Section \ref{section:simulation} reproduce the well-known
phenomenologies of supersonic isothermal turbulence
studied extensively in the literature \citep[e.g.,][]{kritsuk2007a,federrath2010a}. The velocity power
spectrum is steeper than Kolmogorov, with the
high-frequency power-law behaving as $P(k)\propto k^{-\alpha}$ with $\alpha\approx 1.7-1.9$, varying
somewhat in time. In agreement with previous
work, the volumetric PDF of density $\rho$
for our solenoidally-driven simulation
is close to a lognormal of
the form
\begin{equation}
\label{eqn:pdf}
p(x|\Mbar)dx = \frac{1}{\sqrt{2\pi\sigma^2(\Mbar)}}\exp\left[-\frac{(x-\mu)^2}{2\sigma^2(\Mbar)}\right]dx,
\end{equation}
\noindent
with $x\equiv \ln \rho/\bar{\rho}$,
a dispersion $\sigma$,
and the constraint
$\mu = -\sigma^2(\Mbar) / 2$. Previous authors have
found that the dispersion scales with the RMS
turbulent Mach number
$\Mbar$ as
\begin{equation}
\sigma^2(\Mbar) = \log\left(1 + b^2 \Mbar^2\right),
\end{equation}
\noindent
where the constant $b\sim0.2-0.5$
\citep[e.g.,][]{padoan1997a,passot1998a,li2003a,kritsuk2007a,lemaster2008a,federrath2010a,price2011a,konstandin2012a,molina2012a}.
In what follows, we will distinguish between
the RMS Mach number $\Mbar$ that describes the
typical bulk random velocity of fluid in the
turbulence, and the Mach number $\Mach$ of
individual shocks.
Some implications of the density PDF for the
formation and
evolution of dense regions in turbulence can be foreseen from integrals of Equation \ref{eqn:pdf}, as shown in Figure \ref{fig:mass_fractions}. Displayed are the volume integrals
\begin{equation}
\label{eqn:volume_fraction}
P_V(<\rho | \Mbar) = \int_{-\infty}^{\ln \rho/\bar{\rho}} p(x|\Mbar) dx
\end{equation}
\noindent
and the mass integrals
\begin{equation}
\label{eqn:mass_fraction}
P_M(>\rho | \Mbar) = \int_{\ln \rho/\bar{\rho}}^{\infty} p(x|\Mbar) e^x dx
\end{equation}
\noindent
indicating the fraction of the
volume below and the mass above densities
$\rho/\bar{\rho}$, as a function of the RMS Mach
number $\Mbar$. The volume-filling densities lie at
$1/\Mbar\lesssim \rho/\bar{\rho} \lesssim 1$, while most
of the mass has densities $1\lesssim\rho/\bar{\rho}\lesssim \Mbar$. These fractions are only weakly dependent on $\Mbar$, and a rough rule of thumb is
that for solenoidally-driven turbulence
the mass fractions $P_M(\rho>\Mbar^\alpha \bar{\rho} | \Mbar) \lesssim \Mbar^{-\alpha}$. Turbulence driven
with compressional modes deviates from the lognormal
PDF, and can have somewhat higher
volume- and mass-fractions in dense regions \citep[e.g.,][]{federrath2008a}.
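For reference, when the lognormal form of Equation \ref{eqn:pdf} holds
exactly (with $\mu = -\sigma^2/2$), both integrals reduce to error
functions,
\begin{equation}
P_V(<\rho | \Mbar) = \frac{1}{2}\left[1 + {\rm erf}\left(\frac{\ln \rho/\bar{\rho} + \sigma^2/2}{\sqrt{2}\,\sigma}\right)\right],
\end{equation}
\begin{equation}
P_M(>\rho | \Mbar) = \frac{1}{2}\,{\rm erfc}\left(\frac{\ln \rho/\bar{\rho} - \sigma^2/2}{\sqrt{2}\,\sigma}\right).
\end{equation}
\noindent
For example, taking $\Mbar=5$ and $b=0.4$ gives $\sigma^2 = \ln 5 \approx 1.6$
and $P_M(>\Mbar^2\bar{\rho}\,|\,\Mbar)\approx0.03$, consistent with the
few-percent mass fraction in dense regions noted below.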
The rough factor of $\Mbar^2$ between the volume-filling density and the mass-occupying
density is not accidental, and arises from the
compression factor $\Mach^2$ for isothermal
shocks.
By design, most of the volume and
mass of fluid in the simulation move with
relative velocities $\varv\sim\Mbar\cs$. This connection
gives rise to the concept of a {\it first-generation}
shocked region
in turbulence, generated by encounters
between regions with the volume-filling density
$\rho\sim \bar{\rho}/\Mbar$ at the typical relative
velocity $\varv\sim\Mbar\cs$.
Regions with densities $\rho\gg\Mbar\bar{\rho}$
occupy very small volumes ($\lesssim$1 percent)
and comprise a
small fraction of the total mass of the
fluid (a $\sim$few percent) in supersonic
isothermal turbulence. Often, the vast majority of
computational effort in these simulations
is therefore spent elsewhere, on either the
regions with volume-filling or mass-occupying
densities. The statistical measures typically
applied to turbulence simulations, such as the
velocity power spectrum, are volume-weighted and
therefore largely ignore the densest regions
in turbulence.
Since dense regions occupy such a small volume,
chance encounters between dense regions are
relatively rare. If it survives long enough,
a given region with $\rho_1\gg\bar{\rho}$ could
travel across a significant fraction of the simulation
volume without colliding with another region with
$\rho_1>\rho_2\gg\bar{\rho}$ if the densities are
comparable (e.g., $\rho_1/\rho_2<\Mbar$).
This fact bears on whether very dense regions
are produced as {\it higher-generation}
shocked regions,
meaning they are produced
through generations of collisions between
shocks
traveling at velocities $\varv\sim\Mbar\cs$,
or whether they are {\it high-velocity}
shocked regions
where a large relative velocity between the
pre- and post-shock regions gives rise to a
very large density contrast. We discuss this
issue in more detail below.
\subsection{Measuring the Properties of Dense Regions}
\label{subsection:dense_regions}
Dense regions occupy small fractions
of the volume and mass of a turbulent fluid.
The three-dimensional structure of turbulence
is famously complex, and identifying and
characterizing the properties of the
densest regions requires additional analysis
effort beyond performing the simulation itself.
Figure \ref{fig:box} illustrates the complexity
of identifying distinct dense regions in turbulence,
as dense structures, which appear as filaments
in projection, seemingly overlap and do not
clearly exist as individual ``objects'' \citep[e.g.,][]{smith2016a}.
This complexity
arises in part because dense regions are
bounded by shocks,
and
are generated in the interaction of waves in the fluid
that have a wide extent in frequency space. The
projection of the density field also implies connections
between regions along the line of sight,
but in many cases these regions can be separated by
surrounding regions of substantially lower densities.
Nonetheless, the density field appears complex and
some methodology for identifying individual
shocked regions
must be engineered.
The problem of identifying individual dense structures in
supersonic turbulence is not unlike the task of
cataloging dark matter halos in cosmological N-body
simulations \citep[see, e.g.,][]{knebe2011a}, with
some notable differences. The complexity of the
density field in turbulence leads to the ``cloud-in-cloud''
problems encountered when identifying substructure
during halo finding, except with actual clouds.
In the absence of self-gravity, turbulence does not
have a virial condition to define the extent of
regions of interest. Further, in the absence of
self-gravity, regions in turbulence are not
Lagrangian features. Indeed, the densest regions
in turbulence are shocks, and material may
pass quickly from the pre-shock region ahead of the shock
to the post-shock region behind it. The intermittency
of turbulence suggests that the properties of
dense regions may themselves change on relatively
short time scales
\citep[e.g.,][]{klessen2000a,vazquez-semadeni2005a,glover2007a}, and further complicates
the analysis of dense regions in turbulence.
To study dense regions, we therefore
require methodologies
for identifying, measuring, and following them
over time.
We have engineered some new techniques
for accomplishing these tasks, and present those
methods in Appendices \ref{section:group_finding},
\ref{section:shock_orientation},
and \ref{section:group_tracking}. The key issues
in developing these algorithms include separating
distinct regions in the density field, defining
a natural frame-of-reference for dense regions
that often involve velocity shifts and rotations
from the simulation frame and coordinates, and
the time-tracking of non-Lagrangian regions whose
particle content can evolve over short time scales.
These issues do not have unique solutions, but
our methods resolve them satisfactorily for the
purposes of this work. We refer the interested
reader to the Appendices for more detail. Depending
on the time step, we typically identify several
thousand independent regions with densities
$\rho\gtrsim \Mbar^2\bar{\rho} \sim 25\bar{\rho}$. For
simulations with $N=512^3$ tracer particles, the
dense regions contain $10-10,000$ particles
at $\rho\ge25\bar{\rho}$ depending on each region's
peak density $\rho_0\approx (25-300)\bar{\rho}$.
We now turn to applying the techniques
we have engineered for measuring the properties
and time-evolution of these dense regions in
supersonic turbulence.
\section{Shocked Region Profiles}
\label{section:shock_profiles}
A prominent feature of isothermal shocks is the $\Mach^2$-contrast in pre- and post-shock densities,
and for normal shocks of infinite extent this relation inferred from the Rankine-Hugoniot conditions provides
a complete description of the density structure of the flow near the shock
\citep[e.g.,][]{shu1991a}.
The post-shock structure behind real isothermal
shocks
is
not solely specified by the jump condition, and as is apparent from Figure \ref{fig:box} the
individual shocked regions
are quite thin with large negative density gradients behind the shock. Using our
methods for identifying and measuring the properties of dense regions, we can determine the structure
of individual shocked regions
and develop a physical model for their density profiles.
Figure \ref{fig:example_shock} shows the density and velocity field near an example shock with
peak density $\rho_0 \approx 230 \bar{\rho}$. The one-dimensional profiles are centered about the local
peak in the density field and oriented using information from
the moment of inertia tensor and the velocity field in the region.
Piecewise parabolic interpolation (PPI) and Gaussian
process interpolation (GPI) profiles of the fluid properties are shown. The
``0'' subscript denotes the coordinate system of the simulation, and
the $x$-direction denotes the primary direction of travel of the
shock.
This example
shocked region
is oriented near the $z_0$-axis of the simulation volume, such that the bulk
velocity of the shocked region
is nearly aligned with the $z_0$-direction.
In this example, the pre-shock density is close to the mean density and the
large density contrast relative to the mean
is primarily driven by the $\Mach^2$ change in the $x$-velocity
across the shock. This example is therefore a ``high-velocity''
shocked region.
The
post-shock density profile eventually declines to near the mean density.
As is highlighted by the log-linear
scale shown in Figure \ref{fig:example_shock}, the post-shock
density profile appears roughly exponential.
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=7.1in]{example_shock.pdf}
\caption{\label{fig:example_shock} Density (left) and
velocity (right) profiles for an example shocked region
identified in a
supersonic isothermal turbulence simulation. The shocked region
was
identified from a peak among the tracer particles in the simulation,
and the tracers were used to identify the orientation and direction of
travel in the simulation volume. The $x$-direction indicates the direction
of travel that is primarily orthogonal to the shock front, while the
$0$-subscripts indicate coordinates aligned with the simulation volume
reference frame.
In both panels, Gaussian process (solid lines) and piecewise parabolic (dotted
lines) interpolations through the simulation volume are shown. The
large density
contrast of the shock
(left panel) relative to the mean density $\bar{\rho}$ results
from its high Mach number ($\Mach = \varv_x/c_s \approx 15$), as the pre-shock
density is only $\rho\approx\bar{\rho}$. The post-shock density profile appears roughly
exponential. This example shocked region
is nearly aligned
with the $z_0$-direction of the simulation volume, and has a primary
direction velocity $\varv_x\approx \varv_{z,0}$ (solid line, right panel).
The velocities of the shocked region
in the simulation box reference frame are shown as dashed lines in the right panel. The abscissae in both panels are scaled relative to the simulation box size $L$.
}
\end{center}
\end{figure*}
\begin{figure*}[ht]
\epsscale{1.1}
\plottwo{density_profiles.png}{density_profiles_1024.png}
\caption{\label{fig:density_profiles}
Post-shock density profiles of shocked regions
in simulations of supersonic isothermal
turbulence.
The left panel shows the individual density profiles of hundreds of
shocked regions with peak densities $\rho_0>25\bar{\rho}$
identified in the $N=512^3$
simulation using the tracer particles to separate distinct density enhancements. A path through
each shocked region
is determined by information from its moment of inertia tensor and velocity field.
The post-shock region of the density profile measured along this
path is fit with an exponential $\rho\propto\exp(-x/h)$, where positive $x$ corresponds to
a post-shock distance from the peak density.
Each profile is then rescaled by the scale length $h$ of the exponential, normalized by the maximum of
the exponential fit, and then plotted (gray lines). At each position $x/h$ along the axis the
median (solid blue line) and inner 68\% spread (dashed blue lines) of
the Gaussian process interpolation (GPI) profiles can be measured, and
compared with an exponential function (dashed black line). The corresponding median profile determined
from piecewise parabolic interpolation (PPI) profiles is shown for comparison (thin red line), using the exponential scale length and amplitude
determined by fitting to the GPI profiles to rescale each PPI profile.
The comparison demonstrates that the median post-shock
density profile is close to exponential out to a distance of at least $x/h\approx3$, even
as individual shocked regions
can show substantial deviations and the density profiles of some
shocked regions
are poorly resolved.
The right panel shows the individual shock profiles for $\sim15,000$ shocked
regions identified in the $N=1024^3$ simulation (gray lines) with peak densities
$\rho_0>10\bar{\rho}$. These regions also show exponential profiles (median
and inner 68\% spread shown as blue lines),
but encounter the surrounding background density at smaller $x/h$ than the
higher peak density regions shown in the left panel. In the
$N=1024^3$ simulation, a
larger number of the post-shock regions
are well-resolved ($h/\Delta x>5$; red dotted line).
These well-resolved regions typically
have peak densities of $\rho_0\sim10\bar{\rho}$,
and encounter the background after $x/h\approx1.5-2$. In contrast, the
densest peaks with $\rho_0>100\bar{\rho}$ (thick red line)
show exponential profiles to $x/h\approx3$
before encountering the background density.
A physical model for the origin of the exponential profiles shown in
both panels is discussed in
Section \ref{section:exponential_atmospheres}.
}
\end{figure*}
\subsection{Average Density Profiles}
\label{section:density_profiles}
Given our method for identifying dense regions from the tracer particle distribution,
repeating the measurement illustrated in Figure \ref{fig:example_shock}
for each shocked region
identified in the simulation is straightforward. Information from
the moment of inertia tensor defined by the tracer particles associated with each
shocked region
and their nearby velocity fields can be used to determine the
shocked regions'
spatial
orientations. The trajectory of each
shocked region
defines a skewer through the simulation
volume oriented roughly perpendicular to the associated shock
face, and the properties
of the simulated fluid can be interpolated along this skewer using the same
interpolation scheme used for assigning properties to the tracer particles.
Motivated by the roughly exponential post-shock density profile apparent in the
example shocked region
shown in Figure \ref{fig:example_shock}, we can fit exponentials to
the post-shock density profiles of each shocked region
and rescale them by their best-fit
amplitudes and scale lengths to place them on the same graph.
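The fit itself is simple; a schematic version (not the production
pipeline) is:
\begin{verbatim}
# Least-squares fit of log(density) vs. post-shock distance, yielding
# the exponential amplitude rho_0 and scale length h for each profile.
import numpy as np

def fit_exponential(x, rho):
    # x: post-shock distances (x > 0); rho: interpolated densities
    slope, intercept = np.polyfit(x, np.log(rho), 1)
    h = -1.0 / slope                 # log rho = log rho_0 - x/h
    rho0 = np.exp(intercept)
    return rho0, h

# each profile is then plotted as rho(x)/rho0 against x/h
\end{verbatim}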
The left panel of Figure \ref{fig:density_profiles} shows
the ensemble of density profiles behind the five
hundred densest shocked regions
identified in a snapshot of
the $N=512^3$ simulation. Each shocked region
profile is rescaled by its fitted scale length $h$ and normalized by
the peak of the exponential fit, then plotted as a gray line.
At each location $x/h$, the distribution of density profiles
can be measured. The median (solid blue line) and inner 68\% variation (dashed blue line) of the
GPI density profile distribution is plotted in Figure \ref{fig:density_profiles}, along with
an exponential function (dotted black line). The median of the PPI density profiles is shown
for comparison as a thin red line, and is rescaled by
the GPI profile exponential fit amplitude and scale length parameters.
We find that the
median post-shock profile of these dense regions is very close to exponential out to at least $x/h\approx 3$.
The inferred scale lengths vary widely, from poorly- ($h\sim\Delta x$) to well-resolved ($h\gtrsim6\Delta x$).
Individual shocked regions
do show substantial variations from the exponential profile. Some
shocked regions
are clearly unresolved, and resemble an early solution to the isothermal two-shock Riemann problem with little difference between the pre- and post-shock profile shape (e.g., sharp discontinuities on both sides). Other
shocked regions
can show exponential post-shock behavior out to roughly five scale lengths. More typically, shocked regions
in the simulations follow roughly exponential behavior in their post-shock density profiles for a few scale lengths and then show more complicated
density (and velocity) structure well behind the
shock as the density profile approaches the average background density.
To further illustrate the exponential density profiles in shocked regions,
we can use the $N=1024^3$ simulation to study post-shock structures. The
right panel of Figure \ref{fig:density_profiles} shows $15,000$ density profiles
of shocked regions with peak densities $\rho_0>10\bar{\rho}$ identified
in the higher resolution simulation (gray lines). The previous fitting
procedure is
repeated, with the resulting median and inner 68\% spread in the GPI
density profiles shown as blue solid and dashed lines, respectively. These
lower density shocked regions show exponential behavior out to $x/h\approx1.5$,
at which point the profiles begin to encounter the background density
of the surrounding fluid. Restricting to shocked regions with density
profiles resolved with $h/\Delta x>5$ (thin red line) selects out shocked
regions with peak densities of $\rho_0\sim 10\bar{\rho}$, which typically
encounter the background density by $x/h\approx1.5-2$.
This measurement demonstrates that
restricting the analysis to well-resolved shocked regions
does not substantially change the median exponential behavior.
Restricting to the densest shocked regions with $\rho_0>100\bar{\rho}$
(thick red line) extends the exponential behavior to $x/h\approx 3$,
similar to the densest regions examined in the $N=512^3$ simulation
(Figure \ref{fig:density_profiles}, left panel). The $N=512^3$ and
$N=1024^3$ simulations therefore find good agreement for the
typical density profiles of shocked regions.
\subsection{Exponential Atmosphere Model for Isothermal Shocked Regions}
\label{section:exponential_atmospheres}
The post-shock density profiles of shocked regions
measured in Section \ref{section:shock_profiles}
typically show a roughly exponential decline. This rapid fall-off of the density distribution
can be modeled using a physical picture for the formation and evolution of the isothermal
shocked regions
forming in the turbulence. In what follows, we present a physical model to explain
the general features of dense shocked regions
in isothermal supersonic turbulence based on exponential
atmospheres.
In turbulence simulations like those studied here, low frequency velocity perturbations are
introduced to drive large scale motions of the fluid and resupply energy into the turbulent
cascade. These perturbations can lead to substantial velocity variations in the fluid that
are compressive on small scales. Large compressive velocities between regions of typical densities
can result in high Mach-number shocks.
Initially, these shocked regions
can be extremely thin and display
sharp density contrasts (unresolved discontinuities) on either side of the density peak. Such regions
resemble the initial stages of a two-shock
isothermal Riemann problem, where the shock
conditions
would enforce a $\Mach^2$ density jump relative to the pre- and post-shock regions (with roughly constant densities and velocities) that comprise the local flow.
If the local flow were one-dimensional, this shock
structure would persist and the width of the dense
region
would simply increase as the forward and reverse shocks
moved into the pre- and post-shock regions.
However, given the complexity of the turbulent flow, the pre- and post-shock regions will have density and velocity structure such that the initial pressure balances generating the discontinuities
on either side of the dense region
will be upset. The density distribution in the post-shock region will re-adjust to
accommodate the pressure imbalance, with adjustments occurring over a sound-crossing time
across the narrow region. Provided that the original Mach number of the shock
is large, material
from the pre-shock region with density $\rho_w$
will still be encountered at a high relative velocity $\varv_w \approx \Mach \cs$.
The post-shock density profile of this region will necessarily adjust to provide a pressure
gradient $\nabla p$ that can reach
hydrostatic balance with the decelerating force $\rho g$ owing to the ram pressure $\rho_w \varv_w^2$
exerted by this
on-coming material. We can
describe this scenario mathematically by balancing the pressure
gradient behind the shock
(of density $\rho$) with the ram
pressure applied to the shocked region,
and write
\begin{equation}
\nabla p = - \rho g = - \rho \frac{\rho_w \varv_w^{2}}{\Sigma}
\end{equation}
\noindent
where $\Sigma = \int \rho dx$ is the mass per unit area of the
shocked region
measured along the
$x$-direction of travel. Writing $p = \rho \cs^{2}$ we have that
\begin{equation}
\label{eqn:hydro_equil}
\frac{d\rho}{dx} = -\rho \frac{\rho_w \varv_w^{2}}{\cs^{2} \Sigma},
\end{equation}
which gives the exponential solution $\rho(x) = \rho_{0} \exp \left( - x/h\right)$ with
\begin{equation}
\label{eqn:scale_length}
h \equiv \frac{\cs^2}{g} = \frac{\Sigma}{\rho_{w} \Mach^{2}}
\end{equation}
\noindent
where $\Mach$ is the Mach number of the shock.
In this picture, the density structure in the post-shock region provides the
pressure gradient needed to counterbalance the incoming ram pressure of the
pre-shock material. In a steady state converging flow with constant pre- and
post-shock density and velocity, this additional pressure support would be
unnecessary and the shocked region
would simply behave as a two-shock Riemann problem.
The spatial and temporal variations in the turbulent flow that the
shock
moves
through result in the development of a density gradient in the post-shock region.
For an isothermal fluid, the corresponding density profile can be roughly exponential.
Variations in the velocity and density field, and the non-zero pressure support from
the converging flow behind the shock,
can lead to deviations from this exponential
form, but we expect that the general idea holds. Fluids with different equations of
state, or other sources of pressure support like magnetic fields, could display
other primary post-shock solutions. We will discuss these possibilities in more
detail in Section \ref{section:discussion}.
\subsection{Time-Dependent Exponential Waves}
\label{section:time_dependence}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=7.1in]{exp_wave.pdf}
\caption{\label{fig:exp_waves}
Simulations of exponential waves traveling through a background medium. Shown are thin slices through
the
density distributions for exponential waves initially traveling to the right with Mach numbers of
$\Mach_{0}=5$ (upper row) and $\Mach_{0}=25$ (lower row), at times $t=[0,1/4\Mach_0,3/4\Mach_0,1/\Mach_0]$ (left to
right). The logarithmic color map spans the range $\rho=[1/\Mach_0,\Mach_0]$.
The initial peak density for
each wave is set to $\rho_{\star}=\Mach_0\bar{\rho}$, while the background medium has density
$\rho=\bar{\rho}/\Mach_0$. The initial surface density of each wave was set to
$\Sigma_0=0.125\bar{\rho} L$,
with corresponding exponential scale lengths of $h_0=0.025L$
($\Mach_0=5$) and $h_0=0.005L$ ($\Mach_0=25$).
Numerical details of the simulation are discussed in Section \ref{section:idealized_model}.
The simulations demonstrate that the deceleration associated with the ram pressure from the
on-coming pre-shock material causes the shocked regions
to spread behind the shock and
decline in peak density.
}
\end{center}
\end{figure*}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=7.1in]{exp_model.pdf}
\caption{\label{fig:wave_model}
Time evolution of properties of exponential waves traveling through a background medium.
Shown are the surface density (left column), Mach number (middle column), and exponential
scale length (right column) with time of the waves from the
simulations visualized in Figure \ref{fig:exp_waves}.
The properties of the traveling
waves measured in the simulations are shown as blue lines in each panel, while
the results from the
analytical model described in Section \ref{section:time_dependence} are
shown as red lines. The analytical model for the time dependent surface
density, Mach number, and scale lengths of the exponential waves works well,
and much of the variation owes to uncertainties in separating the
tails of exponential waves
from the background medium or relaxation from the initial conditions.
}
\end{center}
\end{figure*}
Motivated by the typical post-shock shape of shocked regions
in our turbulence simulation,
in Section \ref{section:exponential_atmospheres}
we considered an exponential atmosphere model for
isothermal shocked regions
traveling through a background medium.
As the
exponential shocked region
moves through the background medium,
the region could be decelerated by ram pressure from the
pre-shock material or an increase in its surface density.
The density contrast
between the pre-shock material and the density peak will decline
as the shocked region
decelerates, and as the Mach number of the shock
decreases.
To get some sense of the time-dependence of an isothermal
shocked region
traveling through a background medium,
we can extend the
exponential atmosphere model to account for the effects associated with
the region's
deceleration.
To do so, we will still approximate the wave as in pressure equilibrium with
ram pressure from the background. The region will therefore still have an
exponential atmosphere behind the shock,
but the scale length of the atmosphere
will increase as the mass of the wave grows and the wave decelerates.
First, we can model the time-dependent growth of the surface density
of the shocked region.
As the shock
plows through the surrounding background
material, any new material accrued into the shocked region
will depend on the background
density $\rho_w$ and the velocity $\varv=\Mach \cs$ of the shock.
With the
ansatz that this material is deposited with some efficiency $\epsilon$, we
can write the time rate of change of the surface density as
\begin{equation}
\frac{d\Sigma}{dt} = \epsilon \rho_w \Mach \cs.
\end{equation}
\noindent
The value of $\epsilon\ne1$ in general, as not all of the background material
that encounters the shocked region
will become permanently entrained.
The Mach number of the shock
will change with time, but we can still implicitly
calculate the time-dependent surface density as
\begin{equation}
\label{eqn:surface_density}
\Sigma(t) = \Sigma(t=0) + \int \epsilon \rho_w \Mach \cs dt.
\end{equation}
\noindent
The surface density of the shocked region
increases according to the time-integral
of the surface density flux of pre-shock material the shock
encounters,
moderated by some efficiency parameter $\epsilon$.
The deposition of this material will be accompanied by the deposition of relative
momentum into the shocked region,
and in the case where the background medium is uniform
in density and momentum we approximate
the rate of this momentum deposition as proportional
to the relative velocity of the background medium with respect to the
shock
times
the mass accretion rate into the shocked region.
This momentum deposition will reduce the
relative velocity of the shock
and background. We can balance the rate at which
the relative momentum from the background medium is added to the
shocked region
and the
corresponding rate at which the shock
decelerates. We can then write
\begin{equation}
\label{eqn:dmdt}
\Sigma \cs \frac{d\Mach}{dt} = -\eta \cs \Mach \frac{d\Sigma}{dt}
\end{equation}
\noindent
where $\eta$ describes the efficiency of depositing momentum from the
background material into the traveling shocked region.
Again, $\eta\ne1$ in general
and the efficiency of mass and momentum deposition do not have to be equal (i.e.,
we have no clear reason to require $\epsilon=\eta$).
For constant mass and momentum deposition efficiencies,
the solution to Equation \ref{eqn:dmdt} is a power-law relation between the Mach number
and the surface density,
\begin{equation}
\label{eqn:mach_number}
\Mach(t) = \Mach(t=0) \left( \frac{\Sigma}{\Sigma(t=0)}\right)^{-\eta}.
\end{equation}
\noindent
If we assume that the density distribution behind the shock
maintains
instantaneous hydrostatic equilibrium, then the pressure gradient behind
the shock
will be balanced by the deceleration from the instantaneous
ram pressure of the
background material. We are making the same assumptions that lead to Equations
\ref{eqn:hydro_equil} and \ref{eqn:scale_length} above, but now allow
for the surface density of the wave to change with time according to Equation
\ref{eqn:surface_density}.
The time-dependent scale length can then be modeled as
\begin{equation}
\label{eqn:scale_length_t}
h(t) = \frac{\Sigma(t)}{\rho_w \Mach^{2}(t)}.
\end{equation}
\noindent
As the surface density of the shocked region
increases and the Mach number of the
shock decreases,
the scale length of the post-shock density distribution
increases. The
material of associated with the shock
spreads through the post-shock
region. The isothermal jump conditions between the peak density $\rho_0$
and the pre-shock
density $\rho_w$ are maintained, since $\Sigma = \rho_0 h$ for an
exponential density profile.
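To make the coupled relations of Equations
\ref{eqn:surface_density}--\ref{eqn:scale_length_t} concrete, the model can be
integrated numerically. The following minimal Python sketch is illustrative
only; the function name, step count, and explicit Euler update are assumptions
rather than the analysis code used elsewhere in this work:
\begin{verbatim}
import numpy as np

def evolve_wave(sig0, mach0, rho_w, cs, eps, eta,
                t_end, n_steps=10000):
    # Integrate d(Sigma)/dt = eps*rho_w*Mach*cs with the
    # closure Mach = Mach0*(Sigma/Sigma0)**(-eta).
    dt = t_end / n_steps
    sig = np.empty(n_steps + 1)
    sig[0] = sig0
    for i in range(n_steps):
        mach = mach0 * (sig[i] / sig0)**(-eta)
        sig[i + 1] = sig[i] + eps * rho_w * mach * cs * dt
    mach = mach0 * (sig / sig0)**(-eta)
    h = sig / (rho_w * mach**2)   # Equation (scale_length_t)
    return sig, mach, h
\end{verbatim}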
\subsection{Idealized Simulations of Exponential Waves}
\label{section:idealized_model}
Testing the above model of shocked regions
in the context of the turbulence simulations is
difficult because of the complexities of the turbulent flow. Each
shocked region
encounters differing, time-dependent pre-shock conditions and variations
in their locally-convergent velocity field. Instead, we have tried to
test the model for shocked regions via controlled simulations of the motion of exponential waves through
a background medium. To do this, we use the hydrodynamics code
{\it Athena} \citep{stone2008a} to model an exponential wave with
initial scale length $h_0$ and surface density $\Sigma_0$ traveling through
a background medium $\rho_w$ with an initial relative velocity $\varv = \Mach_0 \cs$. The
fluid is treated as isothermal with a sound speed $\cs = 1$. We perform
two such simulations, with $\Mach_0=5$ and $\Mach_0=25$. In both cases, in terms of
a characteristic density $\bar{\rho} = 1$ we set
$\Sigma_0/h_0 = \Mach_0 \bar{\rho}$ and $\rho_w = \bar{\rho} /\Mach_0$. In terms of a
characteristic scale $L=1$, for the $\Mach_0=5$
simulation we set the initial exponential scale length to be $h_0=0.025L$.
For the $\Mach_0=25$ simulation, we set $h_0=0.005L$.
The simulations are performed on a three dimensional grid of size $1024\times512\times512$
with periodic boundary conditions. For the $\Mach_0=5$ simulation, the spatial resolution of
the simulation is set by the cubic cell size $\Delta x=1/256$. For the $\Mach_0=25$ simulation, we
use a resolution of $\Delta x=1/1024$ along the shocked region
and $\Delta y= \Delta z =1/256$ in the plane of the shock.
The exponential shocked regions
are initialized as cylinders oriented along the $x$-axis with
a diameter of $0.75L$ in the $y$-$z$ plane. We evolve each system until the wave interacts with
its own wake after it traverses the volume.
Figure \ref{fig:exp_waves} shows the logarithmically scaled map of the density in the $x$-$y$
plane for the $\Mach_0=5$ (upper row) and $\Mach_0=25$ (lower row) simulations, plotted at
times $t=0$ (far left column), $t = 1/(4\Mach_0)$ (inner left), $t=3/(4\Mach_0)$ (inner right), and
$t=1/\Mach_0$ (far right). The exponential waves are traveling to the right at an initial
velocity of $\varv(t=0) = \Mach_0 \cs$ relative to the background medium. The
image frames travel at a constant velocity of $\varv = \Mach_0 \cs$, initially centered on the shock
front. The apparent motion of the shocked regions
from right to left reflects their deceleration
relative to the background medium (which is traveling through the image frames from right
to left with constant relative velocity $\varv = -\Mach_0\cs$). In addition to the deceleration,
the decrease in the peak density of the shocked regions
and the increase in the post-shock
exponential
scale lengths are apparent from the density distribution.
The density distributions of both shocked regions
remain close to exponential for the duration of
the simulations.
At the
edges of the exponential waves bow-like shocks
develop
\citep{vishniac1994a}, and their relative size appears larger for the slower
shocked region
since the vertical spreading of the fluid is limited by the sound speed and the
absolute time scale in the $\Mach_0=5$ simulation is prolonged relative to the $\Mach_0=25$ simulation in
the bottom panels.
The qualitative evolution of the exponential shocked regions
shown in Figure \ref{fig:exp_waves}
can be quantified from the simulations and compared with the model presented in Section
\ref{section:time_dependence}. We estimate the maximum density $\rho_\mathrm{max}(t)$ and exponential
scale lengths $h(t)$ of the post-shock regions,
and their velocity relative to the background medium,
at 100 time steps evenly spaced over the time span $t=[0,1/\Mach_0]$. The time-dependent
surface densities of the shocked regions
are estimated as $\Sigma(t) = \rho_\mathrm{max} h$. Figure \ref{fig:wave_model}
shows the surface density (left panels), Mach number (center panels), and exponential
scale lengths (right panels) estimated for the shocked regions
in the $\Mach_0=5$ (upper row) and $\Mach_0=25$
(lower row) simulations, normalized to their initial values. These quantities estimated from
the simulation data are shown as blue lines. We then use Equations \ref{eqn:surface_density}-\ref{eqn:scale_length_t}
as fitting functions to
model the time-dependence of the simulation data (red lines). The mass
accretion efficiency is taken as $\epsilon=0.88$, while we use momentum
efficiencies of $\eta\approx0.79$ for $\Mach_0=5$ and $\eta\approx1.0$
for $\Mach_0=25$. Relative to Equation \ref{eqn:scale_length_t}, we
allow
for a mildly nonlinear time dependence in the scale length of $\bar{h}(t) \propto h(t)^{\alpha}$ with $\alpha=0.9$ for $\Mach_0=5$ and $\alpha=0.8$ for $\Mach_0=25$.
The early variation apparent in the surface density and scale lengths owes to relaxation from the approximate initial conditions that model the entire
exponential waves as traveling with the same initial group velocity, and to
inaccuracies in separating the wave from the background medium. These lead to
$\sim10\%$ uncertainties in the measured shocked region
surface density and scale lengths,
and we account for these errors when computing the
time-dependent models shown in Figure \ref{fig:wave_model}.
As Figure \ref{fig:wave_model}
demonstrates, the model presented in Section \ref{section:time_dependence} roughly
recovers the time-dependence of the surface density, velocity, and scale lengths of the
exponential shocked regions
as they are decelerated by the background medium. Physically, this
model succeeds because
the exponential atmosphere behind the shock
responds quickly to the changing ram pressure
from the medium ahead of the shock.
\section{Shocked Region Lifetimes}
\label{section:shock_lifetimes}
\begin{figure}[t]
\begin{center}
\includegraphics[width=3.3in]{density_vs_time.pdf}
\caption{\label{fig:density_vs_time}
Time-dependence of the peak density of shocked regions
in isothermal
supersonic turbulence. Shown are the rise and fall of the
density in very dense shocked regions
($\rho>25\bar{\rho}$ at their peak)
in our turbulence simulation (gray lines), measured in between
drivings. The ordinate and abscissa are rescaled by the peak
density and by their rise and fall times determined by a skew Gaussian
fit (red line, see Equation \ref{eqn:skew_gaussian}). Dense
shocked regions
in supersonic isothermal turbulence spread very quickly
and rapidly decline in maximum density.
}
\end{center}
\end{figure}
The comparison presented in Section \ref{section:time_dependence} between
the simulated exponential waves and the time-dependent exponential atmosphere
model suggests that the deceleration of shocked regions
in supersonic isothermal turbulence will lead to a
rapid decline of their peak densities with time. Using the method for tracking shocked regions
described in Appendix \ref{section:group_tracking}, the time dependence of the peak density of simulated
shocked regions
can be measured.
During the turbulence simulation described in Section \ref{section:simulation} the
simulation output frequency is increased dramatically after twenty-five turbulent
crossing times, such that five hundred outputs are recorded in between applications of
the driving field. We identify dense regions from tracers with interpolated densities
$\rho\geq 25\bar{\rho} \approx \Mbar^2\bar{\rho}$
in these simulation outputs according to
the method described in Appendix \ref{section:group_finding}. Using the population
of dense regions identified half-way through this high output-frequency period of
the simulation, we track the shocked regions
forward and backward with time following the
method described in Appendix \ref{section:group_tracking}. We then have time
trajectories of each shocked region's
properties over a short period where the simulation
output frequency enables us to follow them reliably. We identify the time at
which each shocked region
reaches its maximum density over this window, and can then
analyze the formation and dispersal of the shocked regions
with time.
As anticipated from the model presented in Section \ref{section:time_dependence},
the individual shocked regions
with high peak densities evolve very quickly. Figure \ref{fig:density_vs_time} shows the rise and fall of shocked regions
with peak densities $\rho_0 \geq 25\bar{\rho}$ (gray lines).
For each shocked region,
we fit a skew Gaussian of the form
\begin{equation}
\label{eqn:skew_gaussian}
\rho(t) = \rho_{\mathrm{max}} \exp\left[ - \frac{(t-t_\mathrm{max})^2}{2 \sigma_{t, \pm}^2} \right],
\end{equation}
\noindent
where $t_{\mathrm{max}}$ is the time of maximum density $\rho_{\mathrm{max}}$. The quantity
$\sigma_{t,\pm}$ equals the rise time $\sigma_{t,-}$ when $t-t_{\mathrm{max}}<0$ and
the fall time $\sigma_{t,+}$ when $t-t_{\mathrm{max}}>0$. The rise and fall times are
fit separately. We find typical fall times of $\sigma_{t,+} \approx 2-4\times10^{-3} L/\cs$.
The distribution of fall to rise times has a mean of $\sigma_{t,+}/\sigma_{t,-} \approx 1.5$, with
a tail extending to $\sigma_{t,+}/\sigma_{t,-} \approx 5$.
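As an illustration of this fitting procedure, a minimal Python sketch using
{\tt scipy} might take the following form (the joint parameterization shown
here is an assumption; as described above, the rise and fall widths are fit
separately in our analysis):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def skew_gaussian(t, rho_max, t_max, sig_minus, sig_plus):
    # Equation (skew_gaussian): separate rise/fall widths
    sig = np.where(t < t_max, sig_minus, sig_plus)
    return rho_max * np.exp(-(t - t_max)**2 / (2.0 * sig**2))

# example call on a measured trajectory rho(t):
# p0 = [rho.max(), t[np.argmax(rho)], 1e-3, 2e-3]
# params, cov = curve_fit(skew_gaussian, t, rho, p0=p0)
\end{verbatim}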
The lifetimes of dense shocked regions
in the supersonic isothermal turbulence simulation are
quite short, comparable to the sound crossing time across the thickness of the post-shock
region. The portion of their existence when their density is rising is very short, comparable
to the sound crossing time across the cells needed to resolve the density discontinuity at the
shock
interface with the pre-shock material. The time over which their density declines is
only slightly more extended, but substantially longer than the time material takes to flow
from the pre- to post-shock regions.
The short lifetimes of these shocked regions
may bear on
models of gravitational collapse in turbulent fluids, and we revisit this measurement in that
context in Section \ref{section:gravity} below.
\section{Deceleration and Spreading of Shocked Regions}
\label{section:spreading}
The previous sections have outlined an exponential atmosphere
model for shocked regions, where the
exponential scale length adjusts to the deceleration of the
shocked region owing to the on-coming ram pressure of the
pre-shock material. This deceleration causes the peak density
of shocked regions to decline as the exponential atmosphere
spreads behind the traveling shock.
The idealized simulations of exponential waves presented in
Section \ref{section:idealized_model} illustrate this deceleration
and spreading of shocked regions, but demonstrating this effect
for shocked regions in supersonic turbulence requires more
effort. To this end, we have selected a shocked region tracked over the
high-frequency output portion of the $N=512^{3}$ simulation and
measured $\sim2,200$ individual trajectories of the subset of
tracer particles continuously
associated with the shocked region. As the shocked region travels
and spreads, material in the exponential atmosphere is decelerated
and gradually lags behind the shock. Relative to the density peak just
behind the shock, each parcel of material in the exponential atmosphere will
reside at a time-dependent distance $x$ behind the peak. As the material
spreads, the distance $x$ of each parcel will typically
increase from an initial separation $x_0$ to a larger distance after some
time $t$.
Figure \ref{fig:spreading} shows the time-dependent separation
between the tracked tracer particles and the moving density peak, $(x-x_0)/h_0$,
in units of the initial scale length $h_0$ of the atmosphere (gray lines).
In Figure \ref{fig:spreading},
the coordinate $x$ corresponds to the direction of travel of the shock
and increases in the post-shock direction, and time $t$ is scaled by
the sound crossing time across the initial scale length $h_0$.
The tracers spread at a range of rates as they respond to the deceleration
of the shocked region,
which owes both to their initial distribution of $x_0$ throughout
the post-shock flow and to the interpolation scheme
used to compute the particle velocities. The mean separation of the
tracers and the moving peak of the shocked region is shown as a solid blue
line, and the inner 68\% of the distribution of separations is shown with
dashed blue lines.
To verify that the region spreads in response to the deceleration of the
shocked region, we must estimate the expected rate of spreading.
For an exponential atmosphere extending to zero density,
the density-weighted average distance of material from the
peak is equal to the scale length $h$. For a finite atmosphere the mean distance
from the peak is less than $h$; for this region, which extends for $\sim 2 h$
before reaching the background density, the mean distance is $\sim0.7h$. The
rate of change of the mean distance from the peak should be proportional to
$dh/dt$.
If the
scale length of the region $h\sim \Sigma/\rho_0$, where $\Sigma$ is the
surface density of the region and $\rho_0$ is the peak density, then
we can write
\begin{equation}
\label{eqn:spreading}
\frac{dh}{dt} = \frac{1}{\rho_0}\frac{d\Sigma}{dt} - \frac{\Sigma}{\rho_0^2}\frac{d\rho_0}{dt} = \frac{1}{\rho_0}\frac{d\Sigma}{dt} - \frac{\Sigma}{\rho_0}\frac{d\log \rho_0}{dt}.
\end{equation}
\noindent
We can integrate this equation to find the expected spreading of the region
$(h-h_0)/h_0$
relative to the initial scale length $h_0$ (dotted black line). Equation \ref{eqn:spreading} accounts for how changes in the surface density of the region
affect its deceleration in addition to the response to the on-coming
ram pressure.
As Figure \ref{fig:spreading} demonstrates,
we find reasonable agreement between the spreading rate
measured
from the tracer particle positions and the estimate computed from the
expected time-dependence of the scale length. We present this measurement
as supporting evidence that the exponential atmosphere model provides a
useful description of the post-shock flows in shocked regions. We caution
that this smooth behavior of spreading only occurs when a region travels
through a pre-shock region with relatively constant density and velocity.
If instead the region is further compressed to form a higher density sheet, the
spreading will cease at least momentarily. Nonetheless, when the pre-shock
conditions allow for the exponential atmosphere to develop it spreads at
a rate comparable to expectations based on the region's deceleration.
\begin{figure}[t]
\begin{center}
\includegraphics[width=3.3in]{spreading.pdf}
\caption{\label{fig:spreading}
Spreading of post-shock material in a shocked region with time.
As shocked regions decelerate owing to on-coming ram pressure from
the pre-shock material, their exponential atmospheres spread through
the post-shock region. The gray lines show the time-dependent separation $x$
between $\sim2,200$
tracer particles identified in a shocked region and the location
of the region's peak density, as measured along the direction defined by the
associated shock's trajectory.
Plotted are the post-shock locations $(x-x_0)/h_0$ relative
to the initial separation $x_0$ from the peak, in units of the initial
exponential scale length $h_0$
of the region, as a function of time $t$ in units of the sound crossing time
of the initial region scale length $h_0/c_s$.
The blue solid line indicates the average time-dependent
separation, while the dashed blue lines indicate the inner 68\% spread in
separations. Given the deceleration of the region, the expected growth of the
scale length $(h-h_0)/h_0$
and the mean separation of material from the density peak
can be estimated from the time-dependent surface density and
on-coming ram pressure (dotted black line). The rate of
spreading of material in the
post-shock region is consistent with expectations computed from the deceleration
of the region.}
\end{center}
\end{figure}
\section{Properties of Shock Populations}
\label{section:shock_populations}
The preceding analysis has explored the properties of individual
shocked regions
and the average shocked region
properties determined from the population of
dense regions identified in the simulation volume. We now turn to the
properties of the population of shocked regions
as a whole. In analogy with
treating dense regions of a cosmological density field in the context
of a ``halo model'', we will measure some important properties of
the dense regions of turbulence in the context of a ``shock model''
for the population of shocks and associated shocked regions
present in the fluid.
Dense regions of the turbulent fluid are assigned to individual
shocks
identified using the method described in Appendix \ref{section:group_finding}.
The locations of shocked regions are taken to coincide with
their density maxima. Their interior density profiles are
assumed to follow the interpolated density at the locations of
the tracer particles assigned to them.
\subsection{Spatial Clustering of Dense Regions}
\label{section:clustering}
\begin{figure}[t]
\begin{center}
\includegraphics[width=3.3in]{correlation_function.pdf}
\caption{\label{fig:correlation_function}
Two-point correlation function of dense shocked regions
in supersonic
isothermal turbulence. Shown is the correlation function
of the density maxima of shocked regions
with peak densities $\rho_0\gtrsim\Mbar^2\bar{\rho}$,
computed using the \citet{landy1993a} estimator, constructed by
counting pairs of dense regions
in bins of radial separation and comparing them
with the corresponding clustering of randomly distributed locations. The
correlation function is measured for five statistically-independent times
during the turbulence simulation, and then averaged. The error bars indicate
the relative uncertainty per radial bin, scaling with the square root
of the number of pairs in each. On small scales, the shocked regions
are strongly clustered
(blue points), with the correlation function scaling approximately
as $\xi\propto(r/L)^{-1.5}$ (black line). At scales comparable to the driving
scale (gray shaded region), the dense regions
become anti-correlated with $\xi<0$ ($|\xi|$ is shown
with maroon points).
}
\end{center}
\end{figure}
The density field shown in Figure \ref{fig:box} illustrates some important
features of the shocked region
population in isothermal turbulence. First, the densest regions are
spatially clustered. In projection, the sheet-like structures associated with
shocks
appear filamentary. For turbulence driven at low spatial frequencies,
the densest shocked regions
occur near the intersections of
large-scale velocity perturbations. Second, there are large interior regions in the
turbulent fluid, comparable
to the driving scale, that are nearly devoid of dense shocked regions.
As discussed in
Section \ref{section:dense_regions}, these regions have volume filling densities
$\bar{\rho}/\Mbar \lesssim \rho \lesssim \bar{\rho}$ and represent mild rarefactions owing to the
large-scale driving modes. Once the shocked regions
are identified using the method
described in Appendix \ref{section:group_finding}, the statistical properties
of their spatial distribution should reflect these features.
A useful statistic familiar from cosmology is the two-point correlation function
$\xi(r)$ that describes the excess probability for two points pulled from
a spatial distribution to be separated by a distance $r$ relative to two points
pulled from a uniform random distribution. A convenient estimator for $\xi(r)$
was provided by \citet{landy1993a} in the context of galaxy surveys, written as
\begin{equation}
\label{eqn:landy-szalay}
\xi(r) \approx \frac{(DD - 2DR + RR)}{RR},
\end{equation}
\noindent
where $D$ represents the locations of data points for which $\xi(r)$ is desired,
and $R$ represents the locations of randomly distributed points with the same
average number density. The quantities $DD$, $DR$, and $RR$ correspond to
data-data, data-random, and random-random point pairs separated by a distance
$r$.
To compute $\xi(r)$ for shocked regions
with peak densities $\rho_0\gtrsim\Mbar^2\bar{\rho}$
in our turbulence simulation, we identify the locations of maximum density for each
region to generate our data sample $D$. We then generate a uniform random
point distribution of the same size to populate $R$. The point populations are
loaded into k-D trees, allowing for fast neighbor searching to find point pairs
separated by a distance $r$. The $DD$, $DR$, and $RR$ pairs are computed, and Equation
\ref{eqn:landy-szalay} used to estimate $\xi(r)$. To build signal and to average over
high time-frequency variations in the correlation function, the process is repeated for
five statistically independent times during the simulation and the measurements averaged.
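For concreteness, a minimal Python sketch of this pair-counting procedure is
given below. It is illustrative only; it assumes a unit periodic box,
{\tt numpy}/{\tt scipy}, and radial bin edges {\tt r\_bins}, and is not the
production analysis code:
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def landy_szalay(data, rand, r_bins):
    # periodic unit box matches the simulation domain
    t_d = cKDTree(data, boxsize=1.0)
    t_r = cKDTree(rand, boxsize=1.0)
    # cumulative pair counts; np.diff gives counts per bin
    # (auto terms double-count unordered pairs, hence /2)
    dd = np.diff(t_d.count_neighbors(t_d, r_bins)) / 2.0
    rr = np.diff(t_r.count_neighbors(t_r, r_bins)) / 2.0
    dr = np.diff(t_d.count_neighbors(t_r, r_bins))
    nd, nr = len(data), len(rand)
    dd /= nd * (nd - 1) / 2.0   # normalize to pair totals
    rr /= nr * (nr - 1) / 2.0
    dr /= nd * nr
    return (dd - 2.0 * dr + rr) / rr
\end{verbatim}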
Figure \ref{fig:correlation_function}
shows the resulting two-point correlation function for dense regions
in the
turbulence simulation. The correlation increases to small scales, behaving
as a rough power law with $\xi(r)\propto (r/L)^{-1.5}$. On scales of a few cells,
the correlation weakens somewhat, but this slight turn-down may owe to our
shocked region
identification method or to resolution effects near the grid scale. On
scales comparable to the driving scale of the turbulence the dense regions
become
anti-correlated, reflecting the presence of large, underdense voids in the
density distribution.
Further analogies with the spatial correlations of dark matter halos may provide
additional insight. The densest shocked regions
in supersonic turbulence
are clearly more strongly clustered than
the density field, which is itself spatially clustered. The analog in cosmology
is the concept of halo bias, where the ratio of the halo and matter correlation
functions is $b>1$ for strongly-clustered dark matter halos. The base analytical
picture for understanding halo bias is the peak-background split model \citep[e.g.,][]{mo1996a,sheth1999a,tinker2010a},
where the excess abundance of halos in regions of enhanced background density
can be used to estimate their clustering bias relative to the matter field.
For turbulence it may be tempting to imagine a
``peak density function''
$dn/d\rho_0$ describing the differential number density of shocked regions
as a function
of their peak density, or a mass function in analogy to the halo mass function $dn/dM$,
which then could be used to estimate the expected
bias relative to the turbulent density field. Indeed, similar
ideas have been explored before in the context of driven and decaying
turbulence \citep[e.g.,][]{smith2000a,smith2000b}.
The excursion set formalism
model of \citet{hopkins2013a} can be used to compute an analytical model for
the clustering of dense regions in turbulence \citep{hopkins2013b} and predicts
that $\xi(r)\propto r^{-1}$ on small scales, steepening to $\xi(r)\propto r^{-2}$
on large scales. Our findings appear roughly
consistent with these predictions,
but our group finding algorithm, which prevents the identification of distinct
regions near the grid scale, does not enable us to confirm robustly the origin of
the flattening of $\xi(r)$ on small scales.
We leave additional comparisons for future work.
\subsection{Dense Regions and the Density PDF}
\label{section:dense_pdf}
\begin{figure}[t]
\begin{center}
\includegraphics[width=3.5in]{cumulative_density_distribution.pdf}
\caption{\label{fig:density_cdf}
Cumulative density distribution function for supersonic isothermal turbulence,
reconstructed from the combined density distributions of individual
shocked regions.
The volumetric density probability distribution function of supersonic isothermal
turbulence is close to lognormal (e.g., Equation \ref{eqn:pdf}), and the cumulative density distribution function in turbulence corresponds to the fraction of volume in the fluid above
a given density. The density CDF measured from the turbulence simulation (blue) can be compared
with the density CDF constructed by summing the individual density distributions (gray lines) of all of the dense regions identified using tracer particles in the simulation (red line).
The slight excess of the reconstructed CDF over the simulation CDF results from the allowed
overlap of the Voronoi tessellations used to estimate the density PDFs of individual shocked regions.
Overall, the excellent agreement demonstrates that the dense tail of the
turbulent density PDF corresponds to a population of distinct structures.
}
\end{center}
\end{figure}
The origin and shape of the density probability distribution function (PDF)
of supersonic isothermal turbulence influences the star formation process.
The roughly lognormal shape of the PDF has been cited as evidence for a
statistical origin \citep[e.g.,][]{vazquez-semadeni1994a,padoan2002a}.
However the character of the forcing field influences the shape of the PDF
on the high-density tail \citep[e.g.,][]{federrath2008a,federrath2010a,hopkins2013c},
with compressive
modes leading to more high-density material. This result
implies that the statistics of the turbulence at high densities retains
some memory of the properties of large-scale driving modes, which may
argue against the density PDF arising simply from central limit theorem
statistical arguments.
In the context of this work, where we have identified individual
dense regions in supersonic turbulence using the method described in Appendix \ref{section:group_finding} and tracked their time evolution following Appendix \ref{section:group_tracking},
a clear test for our model of dense regions in supersonic turbulence as a collection of distinct traveling waves is whether the density PDF can be reconstructed from their properties.
The turbulence simulation performed on a Cartesian mesh described in Section \ref{section:simulation} does display a nearly lognormal density PDF with a width appropriate for its root-mean-squared Mach number. For convenience, we will work with the density cumulative distribution
function (CDF)
\begin{equation}
\label{eqn:cdf}
V\left(>\rho\right) = \int_{\rho}^{\infty} \frac{dp}{d\rho'} d\rho',
\end{equation}
\noindent
where the density PDF $dp/d\rho$ is normalized to integrate to unity over all densities $\rho>0$.
Figure \ref{fig:density_cdf} shows the density CDF for our turbulence simulation (blue line),
computed by summing the volume in cells above a given density. We examine only the tail
of the PDF at densities $\rho>25\bar{\rho}$, corresponding to the threshold density for the
tracer particles we associate into groups.
Recovering the density CDF from the individual dense regions in our catalogue constructed
from tracer particles is more involved. By design, the density field interpolated at the tracer
particle locations varies on scales less than the cell width $\Delta x$. To
compute the density CDF from the tracer particles therefore requires us to assign a
volume to each tracer particle, and then sum the volume occupied by tracers in our catalogue
above a given density. For each group in our catalog we use the {\it Voro++} library
\citep{rycroft2009a} to construct
a Voronoi tessellation about the tracer particle positions, accounting for the presence of
nearby low-density tracers that surround each group. Individual tracer particle
groups then have their own density CDF $V_{i}(>\rho)$, shown as gray lines in
Figure \ref{fig:density_cdf} for the one hundred identified groups with the highest
peak densities. The total density CDF of the tracer particle groups then corresponds to the
sum of the individual group CDFs, e.g., $V(>\rho) = \sum_i V_{i}(>\rho)$. The resulting
total density CDF reconstructed from our group catalogue is shown as a red line in Figure
\ref{fig:density_cdf}. The agreement between the simulation and reconstructed density CDFs appears quite good, and this result is nontrivial. The slight excess in the reconstructed
CDF owes to a combination of our interpolation scheme (here, PPI is used) and permitting
overlap of the volumes assigned to the individual groups (tessellating about all tracer
particles in the simulation simultaneously would avoid this). Note that in general interpolation
schemes that smooth the density field near maxima will not lead to an accurate reconstructed
density CDF, as the highest density tail of the CDF will be suppressed.
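Schematically, once each tracer carries a density and a Voronoi volume, the
per-group and total CDFs follow from a cumulative sum. A minimal sketch
(array names are assumptions, not the analysis code) is:
\begin{verbatim}
import numpy as np

def group_cdf(rho_tracer, vol_tracer, rho_grid):
    # V_i(>rho): cumulative Voronoi volume of tracers
    # above each threshold density in rho_grid
    order = np.argsort(rho_tracer)[::-1]   # densest first
    vol_cum = np.cumsum(vol_tracer[order])
    # interpolate onto the requested density grid;
    # zero volume above the densest tracer
    return np.interp(-rho_grid, -rho_tracer[order],
                     vol_cum, left=0.0)

# total reconstructed CDF: V(>rho) = sum_i V_i(>rho)
# V_tot = sum(group_cdf(r, v, rho_grid) for r, v in groups)
\end{verbatim}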
This correspondence between the simulated and reconstructed density CDFs demonstrates that this statistical property of the turbulence arises from the
internal structure of distinct regions \citep[for some related analytical models, see][]{fischera2014a,fischera2014b,myers2015a,veltchev2016a,donkov2017a}.
Projections of multiple physically distinct regions along the line-of-sight will then comprise
the filaments that produce the observed column density PDF \citep{moeckel2015a,chen2017a}.
Each of the individual regions shown in Figure \ref{fig:density_cdf} has been tracked with time during a portion of the simulation, and we
have checked that time variations in the simulation density CDF correspond to the
evolution of the density structures of individual groups as described in
Section \ref{section:shock_lifetimes}.
We can confirm that the rapid evolution in the peak density of
the densest regions discussed in Section \ref{section:shock_lifetimes} indeed corresponds to
the time variation in the high-density tail of the PDF (and CDF), as suggested
by the models of \citet{hopkins2013c}.
Indeed, conceptually the reconstructed density CDF
can be considered as an integral over a
peak density function $dn/d\rho_0$ times
the internal density PDF of an individual shocked region
with peak density $\rho_0$. However, the
group-to-group variations apparent in Figures \ref{fig:density_profiles} and
\ref{fig:density_cdf} may suggest a more complex picture. We speculate that variations
in the density PDF in turbulence with differing driving mechanisms, or with differing
physics (e.g., magnetic fields, adiabatic equations-of-state), will correspond conceptually
to changes in the number density and the
typical density profile of regions with a given peak density. In self-gravitating regions,
this connection appears as the development of a power-law tail in the PDF that reflects
the density profiles of collapsing regions \citep{kritsuk2011a,ballesteros-paredes2011a,lee2015a,burkhart2017a,murray2017a}.
\section{Gravity and the Fates of Dense Regions}
\label{section:gravity}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=7.1in]{group_locations.png}
\caption{\label{fig:group_locations}
Gravitational potential minima in supersonic isothermal turbulence, and their relation to the
density field.
Shown are projections of the gravitational potential (left panel; linear projection) and density field (center panel; logarithmic projection) through the entire
simulation volume shown in Figure \ref{fig:box}. The white points indicate local minima in the gravitational potential, as determined from the potential interpolated at the position
of tracer particles. Density maxima correspond to local potential minima, but the global potential minimum in turbulence does not occur at the maximum density (the densest regions
occupy very small volumes and contain small mass, see Figure \ref{fig:mass_fractions}).
Assuming the turbulence
is marginally self-bound to set the value of the
gravitational constant, regions in the turbulent
fluid that are gravitationally bound to any potential minimum can be identified (right panel; same scale as center panel). Typical gravitationally-bound regions have densities of a few
times the mean density, corresponding to the largest density regions that
contain substantial mass.
}
\end{center}
\end{figure*}
The correspondence between the simulated density distribution of the turbulent fluid and the
density CDF reconstructed from individual regions suggests that the bulk properties of
high-density volumes in turbulence are tightly connected with the detailed properties of distinct
shocked regions.
This picture may therefore have
important ramifications for models of star formation that
involve the turbulent density PDF, such as models that use the density PDF to set the star-formation efficiency of a molecular cloud, the stellar initial mass function, or the core
mass function \citep[e.g.,][]{krumholz2005a,padoan2011a,hennebelle2008a,hennebelle2011a,federrath2012a,hopkins2013a}.
As our analysis has illustrated, dense
regions are far from static cores and at any given density the density PDF is comprised from differing regions within distinct structures with a range of peak densities.
However, to say much more we need to develop some
expectations for the evolution of the turbulent gas under the influence of self-gravity.
To compute the gravitational potential of the simulated fluid, we solve the Poisson
equation using standard Fourier methods. The Fourier transform of the density
field is computed in the three-dimensional volume using the NVIDIA {\tt cuFFT} library.
The potential is calculated by multiplying the density transform by factors of the wave
number, and then taking the inverse transform. This process provides the potential
$\Phi$ in
units of the gravitational constant (i.e., $\Phi/G$). We can then interpolate the
potential at the locations of tracer particles to aid in our analysis.
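The operation is standard; a minimal {\tt numpy} sketch of the same
calculation (our implementation uses {\tt cuFFT} on GPUs, so the code below is
illustrative only) is:
\begin{verbatim}
import numpy as np

def potential_over_G(rho, L=1.0):
    # Solve grad^2 Phi = 4 pi G rho on a periodic cube;
    # returns Phi/G, i.e., the potential in units of G.
    n = rho.shape[0]
    rho_k = np.fft.fftn(rho - rho.mean())  # zero-mean source
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0            # avoid 0/0 for the mean mode
    phi_k = -4.0 * np.pi * rho_k / k2
    phi_k[0, 0, 0] = 0.0
    return np.real(np.fft.ifftn(phi_k))
\end{verbatim}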
A projection of the resulting gravitational potential is shown in the left panel of
Figure \ref{fig:group_locations}, along with the density field generating the
potential (Figure \ref{fig:group_locations}, center panel). The morphology
of the potential $\Phi$ follows large overdensities in the turbulent fluid, with
broad minima of the potential corresponding to regions with typical density of
a few times the background density. This correspondence results from the
density structure of the turbulence, since regions with $\bar{\rho} \lesssim \rho \lesssim \Mbar\bar{\rho}$
contain a plurality of the fluid mass.
While density maxima do lie at local
minima in the gravitational potential
(shown as white points in Figure \ref{fig:group_locations}), the global minimum of the potential does not
correspond to a prominent maximum of the density field. The densest regions in the
turbulence carry very little mass, and do not dominate the global structure of the
gravitational potential sourced by the fluid.
To compute gravitationally-bound regions, the value of the gravitational
constant $G$ must be chosen.
To set the value of $G$ we
assume that the gravitational potential energy in the simulation
volume approximately equals the kinetic energy in the
turbulent motions, such that the entire box is marginally
self-bound. We then have that
\begin{equation}
\label{eqn:virial}
G\bar{\rho}^2 L^5 \approx \frac{1}{2} \bar{\rho} L^3 \Mbar^2 \cs^2,
\end{equation}
\noindent
or, solving for $G$, we have
\begin{equation}
G = \frac{\Mbar^2 \cs^2}{2 \bar{\rho} L^2}.
\end{equation}
\noindent
Choosing different geometrical factors of order unity in
Equation \ref{eqn:virial} would not change the results of
our analysis.
With the gravitational constant selected, the tracer particles
associated with any potential minima are identified by using
a friends-of-friends algorithm similar to that described in
Appendix \ref{section:group_finding} with a linking length set
to the cell width $\Delta x$. The relative potential between
the minimum and the maximum potential at the edge of the FOF
groups are computed, and then compared with the relative kinetic
energy of each particle with respect to the potential minima.
Tracer particles with a negative relative total energy are
considered to be bound. This process is analogous to that
commonly performed when identifying substructure in simulated
dark matter halos \citep[e.g.,][]{knebe2011a}, except that
the geometry in turbulence is more complicated.
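In outline, the energy criterion applied to each friends-of-friends group
reduces to a comparison of kinetic energy with the available potential depth.
The following is a schematic Python sketch; the array names and the choice of
the potential minimum's velocity as the reference frame are assumptions:
\begin{verbatim}
import numpy as np

def bound_tracers(phi_over_G, vel, G):
    # phi_over_G: Phi/G at tracer positions in one FOF group
    # vel: tracer velocities; bound if total energy < 0
    phi = G * phi_over_G
    phi_edge = phi.max()          # potential at the group edge
    v_ref = vel[np.argmin(phi)]   # velocity at the minimum
    ke = 0.5 * np.sum((vel - v_ref)**2, axis=1)
    return ke + (phi - phi_edge) < 0.0
\end{verbatim}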
A logarithmic density projection of the bound regions in the turbulence
simulation is shown in the right panel of Figure \ref{fig:group_locations}.
Most of the mass in bound regions resides in broad potential minima and
at typical densities of a few times the mean. The bound regions
collectively comprise about $50\%$ of the mass of the entire
cloud.
\subsection{Time Evolution of the Potential Field vs. Dynamical Timescales}
\label{section:potential_evolution}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=7.1in]{potential_evolution.png}
\caption{\label{fig:potential_evolution}
Time evolution of the large-scale potential field in supersonic
isothermal turbulence. The gravitational potential (top row; linear projection),
sourced by the turbulent density field (bottom row; logarithmic projection),
has broad minima that correspond to regions of moderate density that carry a
significant fraction of the mass of the fluid. The simulation is evolved forward in time (left to right, in units of the mean Mach crossing time across the simulation volume $t_{\mathrm{cross}} \approx L/\Mbar \cs$) without the effects of self-gravity to estimate the lifetime of prominent potential minima. Absent
the effects of self-gravity, these potential minima would survive only a few
Mach crossing times of the dense region sourcing the potential minimum. After
this time, the overdensities that source the potential minima disperse (and
others reform).
}
\end{center}
\end{figure*}
Dense regions in supersonic isothermal turbulence evolve quickly, as discussed in Section \ref{section:shock_lifetimes} above and elsewhere \citep[e.g.,][]{klessen2000a,vazquez-semadeni2005a,glover2007a}. For dense regions to collapse,
their gravitational free-fall time must be shorter than their expansion timescale.
In our turbulence simulations where we identified and tracked individual dense
regions, the expansion time scale is of order the sound crossing time from the pre-
to post-shock regions about the density maxima and is comparable to a few thousandths to one hundredth of the sound crossing time of the whole simulation volume. Dense regions that do collapse and become bound may initially have falling densities or be newly forming
during the collapse of a bound region on larger scales.
The evolution of the gravitational potential on the scales of the largest bound
regions sets the time scale over which denser interior regions must become bound and
collapse. Since we found in Section \ref{section:gravity} above that large bound regions in turbulence characteristically have
intermediate densities $\bar{\rho} \lesssim \rho \lesssim \Mbar \bar{\rho}$ that contain
substantial mass, we need to monitor the density and potential fields on scales of
the simulation volume over time to determine their typical lifetimes.
Figure \ref{fig:potential_evolution} shows the time evolution of the potential
(upper panels; linear projection) and density (lower panels; logarithmic projection) fields in our simulation volume. The potential field is computed
from the density field following the method described in the previous Section \ref{section:gravity}, but the resulting gravitational acceleration is not
applied to the fluid with the goal of monitoring the typical lifetime of
moderate density regions and their resulting potential minima. Shown are
four separate times during the simulation at $\Delta t = [0, 0.275, 0.525, 0.65]$
times the Mach crossing time $t_{\Mach} = L/\Mbar \cs$. The density enhancement
in the upper left quadrant at time $t=0$ contains an average interior density
close to $\rho \sim \Mbar\bar{\rho}$ and sources the broad potential minimum
apparent in the corresponding image of the potential field. The initial
size of this region is $l \sim L/3$. As the time sequence in
Figure \ref{fig:potential_evolution} shows, the region disperses over a time
$\Delta t \sim 0.5-0.6\, L/\Mbar\cs$ that corresponds to $1-2$ local Mach crossing
times $\sim l/\Mbar\cs$ across the region.
The mass-bearing structures with intermediate densities and their
corresponding potential minima have lifetimes that are a few times shorter than the Mach crossing
time across the entire box size $L$ but considerably longer than the typical
lifetime of the densest turbulent structures. In a real molecular cloud
transitioning from low to high mean densities and subsequently undergoing star
formation through the gravitational collapse of its dense interior regions, the
lifetime of the intermediate density regions will influence how star formation
proceeds. For a cloud in rough virial balance, bound regions with
intermediate densities $\bar{\rho}\lesssim\rho\lesssim \Mbar \bar{\rho}$ will have a
collapse time scale of $t_{\mathrm{col}} \lesssim \Mbar^{-1} L/\cs \approx t_{\mathrm{cross}}$. If the
entire mass of such a region were to collapse to form stars, the star formation
efficiency in the entire cloud would be $>10\%$ rather than
$\sim1\%$ \citep[e.g.,][]{krumholz2007a}. Significant density enhancements within the
intermediate density region will collapse on much faster time scales, and if the
star formation process of these regions supplies the energy that eventually regulates
the cloud's star formation efficiency such mechanisms must therefore operate on time scales
shorter than $t_{\mathrm{col}} \sim \Mbar^{-1} L/\cs$.
\section{Discussion}
\label{section:discussion}
This work presents a model where the dense regions of supersonic
isothermal turbulence have density profiles that develop
approximately exponential atmospheres (Sections \ref{section:density_profiles} and \ref{section:exponential_atmospheres}).
We present some idealized simulations of exponential waves and an analytical
model that shows the time-evolution of dense regions may be understood
by accounting for the interaction of the traveling wave with the pre-shock
medium (Section \ref{section:time_dependence}).
Observationally, filaments are seen to display a wide range of profiles that
are generally modeled with power-laws \citep[e.g.,][]{arzoumanian2011a,kirk2013a}. We note that
outside of the smallest scales that may be affected by the beam shape, the
density profiles are not far from exponential.
In simulations, filamentary profiles have often been
modeled with power-law and Gaussian profiles
\citep{gomez2014a,smith2014a,smith2016a,federrath2016a}. We expect that exponential profiles
will provide comparable quality fits, and benefit from a physical model for their origin. We will examine this issue in future work.
We note
that filamentary profiles in simulations and observations are frequently treated as a radial
profile, whereas our model describes the density profile perpendicular to the shock.
The dense
regions in our simulations are asymmetrical, with much steeper density profiles (e.g., jumps)
ahead of the shock
than in the post-shock region and are oriented mostly along the velocity
field. Symmetrical fitting of the filament profiles does not account for these features.
We note that the power-law radial profile behavior seen in self-gravitating regions of
turbulence arises from physics we do not model \citep{kritsuk2011a,fischera2012a,heitsch2013a,heitsch2013b,federrath2016a,murray2017a,mocz2017a,li2017a}.
\subsection{Caveats to the Exponential Atmosphere Model}
A model that approximately describes the properties of
dense regions in supersonic turbulence can provide a
useful conceptual picture for the formation and
evolution of dense shocked regions
that may collapse to form
self-bound regions. The applicability of the model
depends on a host of approximations that we have
employed to make the problem tractable, and our
numerical simulations have limitations.
We now examine
some of these assumptions and limitations, and attempt to evaluate, at
least qualitatively, how they might affect the realism
of the physical picture presented in this work.
\subsubsection{Equation of State}
An immediate concern is the applicability of isothermality
to dense regions of observed molecular clouds. The
radiative efficiency of shocks
in dense gas motivates the
isothermal assumption, but the gas does not have to
remain perfectly isothermal. If the adiabatic index
$\gamma>1$, then the additional pressure support of the
fluid during compression will resist the large amplification
of the density possible in isothermal shocks.
According
to the Rankine-Hugoniot conditions, the density amplification in individual
adiabatic shocks is limited to a factor of $\sim4$, which may
have a variety of implications for the model. In terms
of the volumetric density PDF, regions of high density
can still develop through (relatively larger numbers of)
successive generations of shocked regions
\citep[][]{scalo1998a,federrath2012a}. But how will
the structure of individual shocked regions
change? The alteration
will depend on whether the shocked regions
remain thin enough
that the time for their interior structure to adjust
to the ram pressure force applied by on-coming material
remains shorter than the time for them to travel an
appreciable distance or their mass to change
significantly. The primary influence on this thickness
will be the value of $\gamma$, but if the adiabatic
index is low enough to allow a fast response to
ram pressure variations at the shock
front then the
shocked region
structure will reflect the pressure gradient
required under the stiffer equation of state to balance
the ram pressure force.
Indeed, previous simulations of turbulence that include radiative
cooling suggest that the post-shock region will remain close to
isothermal behind a radiative shock front \citep[e.g.,][]{pavlovski2006a}.
\subsubsection{Magnetic Fields}
We do not attempt to generalize
our results to MHD turbulence. Models for the
statistical properties of strong, incompressible
MHD turbulence have
been developed \citep{goldreich1995a} and examined
in the context of large scale numerical simulations
\citep[][see also \citealt{perez2012a}]{beresnyak2011a,beresnyak2014a}.
The shock compression
of regions threaded by weak magnetic
fields will lead to an amplification of the field and
an increase in the magnetic pressure support within the
fluid, thereby changing the balance between internal
pressure and exterior ram pressure. The density PDF
of MHD turbulence does display an approximately
lognormal distribution \citep[e.g.,][]{ostriker2001a}, which
suggests large density inhomogeneities that would allow
for dense shocked regions
to encounter a low density background
a short time after their formation. However, their pressure
support and internal structure could differ significantly
from isothermal hydrodynamic shocks.
Analysis of filaments
in MHD simulations show that the central density contrast
of the filaments are reduced, but that the overall profile
of the filaments may not change substantially \citep{federrath2016a}.
\subsubsection{Limitations of the Simulations}
The numerical
simulations of supersonic turbulence we present
have limitations.
Given the reconstruction scheme
used, the first scale length of the exponential atmosphere
of the densest shocked regions
($\rho\gtrsim50\bar{\rho}$) will not be well-resolved.
As Figure \ref{fig:density_profiles} indicates, our ability
to study the density profile very near the shocked region
peaks
(e.g., $x\lesssim0.5h$) will be affected by resolution.
However, the exponential profile appears to be consistent
with the actual measured density profile of shocked regions
out to
typically $x\sim3h$ where even dense shocked region
atmospheres have been
captured by $\gtrsim10$ cells and simulations
in the literature do not show strong variations in filament
profiles with resolution \citep[e.g,][]{federrath2016a}.
We also note that the inertial
range of $N=512^3-1024^3$ simulations of $\Mach\approx5$ turbulence is known
to be limited \citep[e.g.,][]{kritsuk2007a,federrath2010a}.
Increasing the numerical resolution of the simulations can
obviate or mitigate these limitations.
\subsection{Outstanding Issues}
In this work, we have used supersonic isothermal turbulence
to generate a set of dense regions whose properties we have
measured and modeled physically. The methodology for generating the
turbulence and using it as a model for molecular clouds is
well-established, and we follow the approach of previous
works discussed in Section \ref{section:intro}. Once
self-gravity begins to operate, we expect the subsequent
collapse to follow the picture laid out by \citet{murray2015a}
where the adiabatic heating mechanism we identified in
\citet{robertson2012a} changes the nature of the turbulence
in the clouds as they condense. Adiabatic heating
moderates the collapse, and gives rise to the density and
velocity structure measured in simulations that include
self-gravity and follow the collapse of individual
regions to small scales
\citep[e.g.,][]{kritsuk2011a,murray2017a,mocz2017a,li2017a}.
Once star formation commences, the input of feedback in
various forms \citep{matzner2000a,maclow2004a,krumholz2007a,federrath2015a,raskutti2016a,rosen2016a,li2017a} may influence the overall star formation
efficiency by dispersing low density material.
Although the
collapse to high densities occurs on short (e.g., several free-fall)
time scales, the cloud itself could
persist much longer if its material is replenished.
An important
issue then is to determine how the persistent turbulent flow
in star-forming molecular clouds originates or is organized
on large scales.
A possible avenue is the density enhancement of gas passing
through potential perturbations as it orbits in the galactic
disk, such as in spiral arms \citep{shu1973a}, that leads to
a convergent flow \citep[e.g.,][]{hartmann2001a}.
Various
mechanisms have been envisioned for using the response of the
gas in spiral features to promote the formation of molecular
clouds, such as agglomeration, cooling and
thermal instability, large-scale
Jeans instability, and cloud-cloud collisions
\citep{kim2008a,dobbs2008a,dobbs2008b,dobbs2008c,tasker2009a,dobbs2015a},
and to drive turbulence \citep[e.g.,][]{kim2006a}.
Using the response of gas to galactic potential
perturbations to understand the initial conditions for
persistent turbulent flows of molecular clouds may be promising
if it can explain how the size--line width relation
originates on large scales and determine what sets the
maximum size of clouds. Once molecular clouds are organized
from more diffuse galactic disk material and placed on the
size--line width relation on the largest scales, the
properties of
the resulting
turbulent flow, adiabatic heating in interior regions
undergoing gravitational
collapse, and possible input from feedback
may combine to provide a complete picture for star formation
on large scales. We leave analysis of this speculation for
future work.
\section{Conclusions}
\label{section:conclusions}
Astrophysical fluids often display turbulent motions,
owing to the large Reynolds numbers and low viscosities
typical of conditions observed in molecular clouds, neutral
hydrogen gas, and even the ionized interstellar medium.
The properties of supersonic isothermal turbulence
especially influence dense molecular clouds with large radiative
cooling efficiencies and bulk motions with velocities in excess of
the thermal value. As a result, the statistical properties of
supersonic turbulence have provided the core of analytical theories
of the formation rate of populations of stars. Owing in part
to the complexity of turbulence, these models have largely
emphasized the
statistical properties of the fluid rather than the character of
individual dense regions that may collapse to form stars.
Our work adopts a new approach that aims to describe the properties
of the dense regions in supersonic isothermal turbulence that
serve as the sites of star formation. Our model is grounded by
a distinguishing feature of
supersonic isothermal turbulence, that the origin of large
density contrasts in the turbulent fluid is from
strong shocks with Mach number $\Mach$
where the density in post-shock regions
is enhanced by a factor $\sim \Mach^{2}$ relative
to the pre-shock material. The volume of supersonic isothermal
turbulence occupied by dense fluid is small, and as a result
after their formation these shocked regions
rapidly encounter lower density
material. The low density external medium applies a
ram pressure to the shocked regions,
and the thinness of the dense shocked regions
implies that the internal structure of the isothermal shocked regions
will quickly
adjust such that its pressure (and density) gradient balances
the force applied by the ram pressure. Using this local force
balance at the front of the shocked region,
we model the structure of the
fluid behind the shock
as an exponential atmosphere whose rapidly
declining density reflects the pressure gradient necessary to
counter the force from the ram pressure applied across the face of
the shocked region.
We use simulations of supersonic isothermal turbulence to compare
our analytical model with detailed hydrodynamical calculations
of the fluid structure.
We study the density structure of shocked regions,
and find consistency
between their density distributions and our exponential atmosphere
model.
We then use idealized simulations to study the evolution of exponential
shocked regions
traveling through a low-density background, and find that we can
describe the time-dependent structure of the exponential shocked regions
in terms
of the integrated deceleration of each region
by the ram pressure force
applied by the low density material, the corresponding hydrostatic balance
of this force and the pressure gradient behind the shock,
and
the mass of the background fluid accreted into the shocked region.
After determining that the exponential atmosphere model appears consistent
with the structures of isothermal shocked regions
in our idealized and supersonic
turbulence simulations, we use the results of the model to infer some
additional extensions. Using the catalog of shocked regions
constructed from our
tracer particle distributions, we measure the correlation function of
dense regions and find that it scales as a power-law $\xi(r)\propto r^{-1.5}$
over scales from a few times the grid size to the driving scale. Dense
regions become anti-correlated at about the driving scale, possibly owing to the
large scale rarefactions in the fluid on these scales.
Computing the volumes associated with each dense region from cell tessellations
about the tracer particle locations allows us to demonstrate that the
dense tail of the cumulative density function can be entirely accounted
for through the sum of individual shocked region
structures.
We compute the gravitational potential of the turbulent fluid, and determine
the characteristic density and size of bound regions for a turbulent cloud marginally bound
on a scale comparable to the box size. The global minimum of the gravitational
potential of the turbulent fluid corresponds to a region of moderate density, with
a volume large enough to contain substantial mass (a few tens of percent of the
total cloud). The densest regions in the turbulence correspond to local (not global) potential
minima but do not contain substantial mass as distinct regions.
Without the influence of gravity the larger overdensities will dissipate on their
local Mach crossing time, which we find to be of order $t_{\mathrm{col}}\sim \Mbar^{-1}L/\cs$.
As this model for the properties of individual shocked regions
in supersonic
turbulence is developed, extended, and modified, the
time-dependent evolution of the structure of dense regions in turbulence
can be utilized to describe the observed properties of molecular clouds.
Descriptions of the
statistical properties of turbulence and the structural properties of
individual dense regions can be employed together to improve our
physical picture of star formation in supersonically turbulent gas.
\acknowledgments
We thank the anonymous referee for helpful suggestions that improved our manuscript.
Some of these calculations made use of the {\it Hyades} supercomputer
at UCSC, supported by grant NSF AST-1229745. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562; see \citet{towns2014a}
for more details.
\software{ {\it Athena} \citep{stone2008a}, {\it FLASH} \citep{fryxell2000a},
and {\it Voro++} \citep{rycroft2009a}.}
\section{Introduction}
There has been a burst of activity and research on codes over finite rings in the last decade.
In particular, codes over ${\ZZ}_{p^{s}}$ and ${\ZZ}_{4}$ have received much attention \cite{bdho99, bsc95, bsbm97,
cs93, dhs99, dgh99, gh98, hkcss94, ha98, Z4-shadow, Kerdock, rs98}.
The covering radius of binary linear codes is a widely studied parameter \cite{ckms85,chll97}.
Recently the covering radius of codes over ${\ZZ}_4$ has been investigated with respect to the
Lee and Euclidean distances \cite{aghos99}, and several upper and lower bounds on the covering
radius have been obtained. In this paper we investigate the covering radius of
codes over ${\ZZ}_{2^{s}}$. In particular, some bounds
of \cite{aghos99} are generalized to codes over ${\ZZ}_{2^s}$. We also investigate the covering
radii of the ${\ZZ}_4$ simplex codes (both types) and their duals, the MacDonald codes, and the repetition codes.
A {\em linear code} ${\cal C},$ of length $n$, over $\;{\ZZ}_{p^{s}}$ is an additive subgroup of $\;{\ZZ}_{p^{s}}^{n}$.
An element of ${\cal C}$ is called a {\em codeword of} ${\cal C}$ and a {\em generator matrix} of ${\cal C}$ is a
matrix whose rows generate ${\cal C}$.
The {\em Hamming weight} $w_H({\bf x})$ of a vector ${\bf x}$ in $\mbox{\msbm Z}_{p^{s}}^n$
is the number of non-zero components.
The {\em Homogeneous weight} $w_{HW}({\bf x})$ \cite{Chh96} of a vector ${\bf x}=(x_1,x_2,\ldots,x_n) \in {\ZZ}^n_{2^{s}}$
is given by $\sum_{i=1}^n w_{HW}(x_i)$ where
\begin{equation}\label{glw}
w_{HW}(x_i)=\left\{\begin{array}{cc}2^{s-2},& x_i \neq 2^{s-1}\\
2^{s-1},& x_i=2^{s-1}.\end{array}\right.
\end{equation}
In particular, for $s=2$, the Homogeneous weight $w_{HW}({\bf x})$ reduces to the {\em Lee weight} $w_L({\bf x})$ given by
$\sum_{i=1}^n \min\{|x_i|,|4-x_i|\}$.
The {\em Euclidean weight} $w_E({\bf x})$ of a vector ${\bf x} \in {\ZZ}^n_{2^{s}}$ is
$\sum_{i=1}^n \min\{x_i^2,(2^s-x_i)^2\}$.
The Euclidean weight is useful in connection with lattice
constructions.
The Hamming, Homogeneous / Lee and Euclidean distances $d_H({\bf x},{\bf y})$, $d_{HW}({\bf x},{\bf y}) / d_L({\bf x},{\bf y})$ and $d_E({\bf x},{\bf y})$
between two vectors ${\bf x}$ and ${\bf y}$ are $w_H({\bf x}-{\bf y})$, $w_{HW}({\bf x}-{\bf y}) / w_L({\bf x}-{\bf y})$ and $w_E({\bf x}-{\bf y})$, respectively.
The minimum Hamming, Homogeneous / Lee and Euclidean weights, $d_H, d_{HW} / d_L$ and $d_E$, of ${\cal C}$ are the smallest Hamming,
Homogeneous / Lee and Euclidean weights among all non-zero codewords of ${\cal C},$ respectively.
One can define an isometry (called the {\em Generalized Gray map} \cite{carlet98jul}) from $\left( {\ZZ}_{2^s}, w_{HW} \right) \rightarrow \left( {{\ZZ}^{2^{s-1}}_2}, w_H \right) $
which maps a linear code of length $n$ over ${\ZZ}_{2^{s}}$ to a binary code of length $2^{s-1}n$ whose minimum Hamming weight equals the minimum Homogeneous
weight of the pre-image code over ${\ZZ}_{2^s}$. In particular, the {\em Gray map} $\phi : \;{\ZZ}_4^{n} \rightarrow \;{\ZZ}_2^{2n}$
is the coordinate-wise extension of the function from $\;{\ZZ}_4$ to $\;{\ZZ}_2^{2}$ defined by
$0 \rightarrow (0,0), 1 \rightarrow (0,1), 2 \rightarrow (1,1), 3 \rightarrow (1,0)$.
The image $\phi({\cal C})$, of a linear code ${\cal C}$ over $\;{\ZZ}_4$ of length $n$ by the Gray map, is
a binary code of length $2n$ \cite{hkcss94}.
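For illustration, the following Python sketch (our own, with hypothetical helper names; it is not part of the analysis below) implements the Gray map and checks the isometry property, namely that the Lee weight of a vector over $\;{\ZZ}_4$ equals the Hamming weight of its Gray image:
\begin{verbatim}
from itertools import product

GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def lee_weight(x):
    # Lee weight over Z_4: sum of min(x_i, 4 - x_i)
    return sum(min(c, 4 - c) for c in x)

def gray_image(x):
    # coordinate-wise Gray map, Z_4^n -> Z_2^(2n)
    return tuple(b for c in x for b in GRAY[c])

def hamming_weight(x):
    return sum(1 for c in x if c != 0)

# exhaustive check of the isometry for n = 3
assert all(lee_weight(x) == hamming_weight(gray_image(x))
           for x in product(range(4), repeat=3))
\end{verbatim}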
The {\em dual code} ${\cal C}^{\perp}$ of ${\cal C}$ is defined as
$\{ {\bf x} \in {\mbox{\msbm Z}}_{2^{s}}^n \mid {\bf x} \cdot {\bf y} = 0 \;\mbox{for all}\; {\bf y} \in {\cal C}\}$
where ${\bf x} \cdot {\bf y}$ is the standard inner product of ${\bf x}$ and ${\bf y}$.
${\cal C}$ is {\em self-orthogonal} if ${\cal C} \subseteq {\cal C}^\perp$ and ${\cal C}$ is
{\em self-dual} if ${\cal C}={\cal C}^\perp$.
Two codes are said to be {\em equivalent} if one can be obtained from the
other by permuting the coordinates and (if necessary) changing the signs of
certain coordinates.
Codes differing by only a permutation of coordinates are called
{\em permutation-equivalent}.
In this paper we define the covering radius of codes over ${\ZZ}_{2^s}$ with respect to
different distances and in particular study the covering radius of $\;{\ZZ}_4$-simplex codes
of type $\;\alpha$ and $\beta$ namely, $S_k^{\alpha}$ and $S_k^{\beta}$ and their duals, MacDonald codes and repetition codes.
Section $2$ contains some preliminaries and notations.
Basic results for the covering radius of codes over ${\ZZ}_{2^s}$ are given in Section $3$.
Section $4$ determines the covering radii of different ${\ZZ}_4$ repetition codes. Section $5$ determines the covering radius of the ${\ZZ}_4$
simplex codes and their duals, and finally Section $6$ gives bounds on the covering radius of the ${\ZZ}_4$ MacDonald codes.
\section{Preliminaries and Notations}
Any linear code ${\cal C}$ over $\mbox{\msbm Z}_{p^{s}}$ is permutation-equivalent to a code with generator
matrix $G$ (the rows of $G$ generate ${\cal C}$) of the form
\begin{equation}\label{eqn:1}
G=\left[\begin{array}{cccccc}I_{k_0}&A_{01}&A_{02}& \cdots&A_{0s-1}&A_{0s}\\
{\bf 0}&pI_{k_1}&pA_{12}& \cdots &pA_{1s-1}& pA_{1s}\\
{\bf 0}&{\bf 0}&p^{2}I_{k_2}& \cdots & p^{2}A_{2s-1} & p^{2}A_{2s}\\
\vdots& \vdots & \vdots & \ddots & \vdots & \vdots \\
{\bf 0}&{\bf 0}&{\bf 0}& \cdots & p^{s-1}I_{k_{s-1}} & p^{s-1}A_{s-1s}
\end{array}\right],
\end{equation}
\noindent where $A_{ij}$ are matrices over $\;{\ZZ}_{p^{s}}$ and the columns are
grouped into blocks of sizes $k_0, \; k_1, \; \cdots, \; k_{s-1}, \; k_{s},$ respectively.
Let $k=\sum_{i=0}^{s-1} (s-i)k_i$. Then $|{\cal C}|= p^{k}$.
For $s=2,p=2$, two binary codes (residue and torsion) obtained from a code over $\mbox{\msbm Z}_{4}$ are well studied.
For each $a \in {\ZZ}_4 $ let $\bar{a}$ be the reduction of $ a$
modulo $2$; then the code
$${\cal C}^{(1)}= \left\{(\bar{c_1}, \bar{c_2}, \ldots, \bar{c_n} ) \mid
(c_1, c_2, \ldots, c_n) \in {\cal C} \right\}$$
is a binary linear code called the {\em
residue code} of ${\cal C}.$ Another binary linear code associated with ${\cal C}$ is the
{\em torsion code} ${\cal C}^{(2)}$ which is defined by
$${\cal C}^{(2)} = \left\{ {\bf c} \in {\ZZ}_2^{n} \mid 2 {\bf c} \in {\cal C} \right\}.$$
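As a concrete illustration (a brute-force sketch with hypothetical helper names, feasible only for small parameters), the residue and torsion codes can be computed directly from a generator matrix:
\begin{verbatim}
from itertools import product

def span_z4(G):
    # all Z_4-linear combinations of the rows of G (a list of tuples)
    n = len(G[0])
    return {tuple(sum(a * g[i] for a, g in zip(coeffs, G)) % 4
                  for i in range(n))
            for coeffs in product(range(4), repeat=len(G))}

def residue_code(code):
    return {tuple(c % 2 for c in cw) for cw in code}

def torsion_code(code):
    # c in Z_2^n with 2c in C: halve the codewords with entries in {0, 2}
    return {tuple(c // 2 for c in cw) for cw in code
            if all(c in (0, 2) for c in cw)}

C = span_z4([(1, 1, 1, 1)])      # unit repetition code of length 4
print(sorted(residue_code(C)))   # the binary repetition code
print(sorted(torsion_code(C)))   # here also the binary repetition code
\end{verbatim}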
A vector ${\bf v} \in {\ZZ}^{n}_{p^{s}}$ is a {\em $p$-linear combination} of the vectors ${\bf v}_1, {\bf v}_2, \ldots,
{\bf v}_k \in {\ZZ}^{n}_{p^{s}}$ if $ {\bf v} = \l_1 {\bf v}_1 + \ldots + \l_k {\bf v}_k$ with $\l_i \in {\ZZ}_p$ for $ 1 \leq i
\leq k.$ A subset $ S = \{ {\bf v}_1,{\bf v}_2, ...,{\bf v}_k \}$ of ${\cal C}$ is called a {\em $p$-basis}
for ${\cal C}$ if for each $i= 1,2,...,k-1, \; p {\bf v}_i$ is a $p$-linear combination of $
{\bf v}_{i+1},..., {\bf v}_k$, $ p {\bf v}_k = 0, \; {\cal C}$ is the $p$-linear span of $S$ and $S$ is
$p$-linearly independent \cite{vsr96}. The number of elements in a $p$-basis for
${\cal C}$ is called the {\em $p$-dimension} of ${\cal C}.$
It is easy to verify that the rows of the matrix
\begin{equation}\label{eqn:2}
\cal{B} = \left[\begin{array}{llllll}
I_{k_0}&A_{01}&A_{02}& \cdots&A_{0s-1}&A_{0s}\\ \hline
pI_{k_0}&pA_{01}&pA_{02}& \cdots&pA_{0s-1}&pA_{0s}\\
{\bf 0}&pI_{k_1}&pA_{12}& \cdots &pA_{1s-1}& pA_{1s}\\ \hline
\vdots & \vdots & \vdots & & \vdots & \vdots \\
\vdots & \vdots & \vdots & & \vdots & \vdots \\
\hline
p^{s-1}I_{k_0}&p^{s-1}A_{01}&p^{s-1}A_{02}& \cdots&p^{s-1}A_{0s-1}&
p^{s-1}A_{0s}\\
{\bf 0}&p^{s-1}I_{k_1}&p^{s-1}A_{12}& \cdots &p^{s-1}A_{1s-1}&
p^{s-1}A_{1s}\\
{\bf 0}&{\bf 0}&p^{s-1}I_{k_2}& \cdots & p^{s-1} A_{2 s-1} & p^{s-1}A_{2s}\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
{\bf 0}&{\bf 0}&{\bf 0}& \cdots & p^{s-1}I_{k_{s-1}} & p^{s-1}A_{s-1s}
\end{array}\right].
\end{equation}
form a $p$-basis for the code ${\cal C}$ generated by $G$ given in (\ref{eqn:1}).
Thus $p\!-\!\dim({\cal C})= k =\sum_{i=0}^{s-1} (s-i)k_i.$ From now on we restrict to the case
of $p=2$.
A linear code ${\cal C}$ over $\;{\ZZ}_{2^s}$ (over $\;{\ZZ}_2$) of length $n$,
$2$-dimension $k$, and minimum distances $d_H, d_{HW}$ and $d_E$ is called an
$\left[ n,k,d_H,d_{HW},d_E \right]$ $\left([n,k,d_H]\right)$ or simply
an $\left[ n,k \right]$ code.
\section{Covering Radius of Codes}
In this section, we describe some properties of the covering radius of codes over ${\ZZ}_{2^s}$
after giving the definition of the covering radius for codes over ${\ZZ}_{2^s}$. Since
several distances are possible for codes over ${\ZZ}_{2^s}$, we give a definition of the covering
radius for a general distance, which could be any of the possible distances. Let $d$ denote
such a general distance (Hamming, Lee, homogeneous or Euclidean). The {\em covering radius} of a code
${\cal C}$ over ${\ZZ}_{2^s}$ with respect to the general distance $d$ is given by
$$r_d({\cal C})= \max_{{\bf u} \in {\ZZ}_{2^s}^{n}}\left\{\min_{{\bf c} \in {\cal C}}d({\bf u},{\bf c})\right\}.$$
It is easy to see that $r_d({\cal C})$ is the minimum value $r_d$ such that
$${\ZZ}_{2^s}^{n}= \cup_{{\bf c} \in {\cal C}} S_{r_d}({\bf c})$$ where
$$S_{r_d}({\bf u})=\left\{{\bf v} \in {\ZZ}_{2^s}^{n} \mid d({\bf u},{\bf v}) \leq r_d \right\}$$ for
any element ${\bf u} \in {\ZZ}_{2^s}^{n}$.
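Since the maximum runs over all of ${\ZZ}_{2^s}^{n}$, the covering radius can be computed by exhaustive search only for very small parameters. The following Python sketch (illustrative, for $s=2$; the function names are ours) instantiates the definition for the Lee distance and will be reused below:
\begin{verbatim}
from itertools import product

def lee_dist(u, v):
    return sum(min((a - b) % 4, (b - a) % 4) for a, b in zip(u, v))

def covering_radius(code, n, dist):
    # r_d(C) = max over u in Z_4^n of min over c in C of d(u, c)
    return max(min(dist(u, c) for c in code)
               for u in product(range(4), repeat=n))
\end{verbatim}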
The translate ${\bf u} + {\cal C} = \left\{{\bf u} + {\bf c} \mid {\bf c} \in {\cal C} \right\}$ is called
the coset of ${\cal C}$ where ${\bf u}$ is a vector of ${\ZZ}_{2^s}^{n}$. A vector of minimum
weight in a coset is called a {\em coset leader}. The following proposition is a
straightforward generalization of a proposition in \cite{aghos99}.
\begin{proposition}
The covering radius of ${\cal C}$ with respect to the general distance $d$ is the
largest minimum weight among all cosets.
\end{proposition}
Also the following proposition is straightforward~\cite{aghos99}.
\begin{proposition}
Let ${\cal C}$ be a code over ${\ZZ}_{2^s}$ and $\phi({\cal C})$ the generalized Gray map image of ${\cal C}$.
Then $r_{HW}({\cal C})=r_H(\phi({\cal C}))$.
\end{proposition}
Now we give several lower and upper bounds on the covering radius of codes over ${\ZZ}_{2^s}$ with respect
to the homogeneous weight. The proofs of Proposition $3$ and Theorem $1$, being similar to the case of ${\ZZ}_4$ \cite{aghos99}, are omitted.
\begin{proposition}{\bf(Sphere-Covering Bound)}
For any code ${\cal C}$ of length $n$ over ${\ZZ}_{2^s}$,
$$\frac{2^{2^{s-1} n}}{|{\cal C}|} \leq \sum_{i=0}^{r_{HW}({\cal C})} {2^{s-1} n \choose i}.$$
\end{proposition}
Now we consider the two upper bounds on the covering radius of a code over ${\ZZ}_{2^s}$
with respect to the homogeneous weight. Let ${\cal C}$ be a code over ${\ZZ}_{2^s}$ and let
$$s({\cal C}^{\perp})=|\left\{i \mid A_i({\cal C}^{\perp}) \neq 0, i \neq 0 \right\}|$$
where $A_i({\cal C}^{\perp})$ is the number of codewords of homogeneous weight $i$
in ${\cal C}^{\perp}$.
\begin{theorem}{\bf (Delsarte Bound)}
Let ${\cal C}$ be a code over ${\ZZ}_{2^s}$ then $r_{HW}({\cal C}) \leq s({\cal C}^{\perp})$.
\end{theorem}
The following result of Mattson \cite{ckms85}, which generalizes easily from codes over finite fields
to codes over rings, is useful for computing covering radii.
\begin{proposition} {\bf (Mattson)}\label{mattson}
If ${\cal C}_0$ and ${\cal C}_1$ are codes over ${\ZZ}_{2^s}$ generated by matrices $G_0$ and $G_1$ respectively and
if ${\cal C}$ is the code generated by
\[
G = \left( \begin{array}{c|c}
0 & G_1 \\\hline
G_0 & A
\end{array}\right),
\]
then $r_d({\cal C}) \leq r_d({\cal C}_0) + r_d({\cal C}_1)$, and the covering radius of ${\cal D}$ (the concatenation of ${\cal C}_0$ and ${\cal C}_1$)
satisfies
\[
r_d({\cal D}) \geq r_d({\cal C}_0) + r_d({\cal C}_1),
\]
for all distances $d$ over ${\ZZ}_{2^s}$.
\end{proposition}
\section{Repetition Codes}
A $q$-ary repetition code ${\cal C}$ over a finite field $\mbox{\msbm F}_q= \{\alpha_0 = 0,\alpha_1 =1, \alpha_2, \alpha_3, \ldots, \alpha_{q-1} \}$ is
an $[n,1,n]$ code ${\cal C} = \{ \bar{\alpha} \mid \alpha \in \mbox{\msbm F}_q \},\;\mbox{where}\; \bar{\alpha} = (\alpha, \alpha, \ldots, \alpha)$. The covering
radius of ${\cal C}$ is $\lceil \frac{n(q-1)}{q} \rceil$ \cite{duraithesis96}. Using this, it is easily seen
that the covering radius of the block (of size $n$) repetition code $[n(q-1),1,n(q-1)]$ generated by
$G= [ \overbrace{ 11 \ldots 1}^{n} \overbrace{\alpha_2 \alpha_2 \ldots \alpha_2}^{n} \ldots \overbrace{\alpha_{q-1} \alpha_{q-1} \ldots \alpha_{q-1}}^{n} ]$ is $\lceil \frac{n(q-1)^2}{q} \rceil$
(since it is equivalent to a repetition code of length $(q-1)n$).
Consider the repetition codes over ${\ZZ}_4$. There are two
types of them of length $n$, viz. the unit repetition code ${\cal C}_{\beta}: [n,2,n,n]$ generated by $G_{\beta}=[\overbrace{1 1 \ldots 1}^{n}]$ and the zero-divisor repetition code ${\cal C}_{\alpha}: [n,1,n,2n]$ generated by $G_{\alpha}=[\overbrace{2 2 \ldots 2}^{n}]$. The following result determines the covering radii of both.
\begin{theorem}
$r_L({\cal C}_{\alpha})=n, r_E({\cal C}_{\alpha})=2n, r_L({\cal C}_{\beta})=n\; \mbox{and}\; r_E({\cal C}_{\beta})=\frac{3n}{2}.$
\end{theorem}
\begin{proof}
Note that $\phi({\cal C}_{\alpha})$ is a binary repetition code of length $2n$ hence $r_L({\cal C}_{\alpha})=\frac{2n}{2}=n$.
Now by definition $r_E({\cal C}_{\alpha}) = \max_{{\bf x} \in {\ZZ}^n_4} \{ d_E({\bf x},{\cal C}_{\alpha}) \}$. Taking $n$ even for simplicity, let ${\bf x} =\overbrace{2 2 2 \ldots 2}^{\frac{n}{2}}\overbrace{0 0 0 \ldots 0}^{\frac{n}{2}} \in {\ZZ}^n_4$; then
$d_E({\bf x},\bar{0})=d_E({\bf x},\bar{2}) = 2n$. Thus $ r_E({\cal C}_{\alpha}) \geq 2n$. On the other hand if ${\bf x} \in {\ZZ}^n_4$ has a composition $(\omega_0, \omega_1, \omega_2, \omega_3)$, where
$\sum_{i=0}^{3} \omega_i = n$ then $d_E({\bf x},\bar{0})=n-\omega_0+3 \omega_2$ and $d_E({\bf x},\bar{2})=n-\omega_2+3 \omega_0$. Thus $d_E({\bf x},{\cal C}_{\alpha}) = \min \{ n-\omega_0+3 \omega_2, n-\omega_2+3 \omega_0 \} \leq n+\omega_0 + \omega_2 \leq n+n = 2n$. Hence $r_E({\cal C}_{\alpha})=2n.$ Similar arguments can be used to show that $r_E({\cal C}_{\beta}) \leq \frac{3n}{2}.$ To show that
$r_E({\cal C}_{\beta}) \geq \frac{3n}{2}, $ let ${\bf x}=\overbrace{0 0 0 \ldots 0}^{t}\overbrace{1 1 1 \ldots 1}^{t}\overbrace{2 2 2 \ldots 2}^{t}\overbrace{3 3 3 \ldots 3}^{n-3t} \in {\ZZ}^n_4$, where $t=\lfloor \frac{n}{4} \rfloor$,
then $d_E({\bf x},\bar{0})=n+2t, d_E({\bf x},\bar{1})=4n-10t, d_E({\bf x},\bar{2})=n+2t$ and $d_E({\bf x},\bar{3}) =6t$. Thus $ r_E({\cal C}_{\beta}) \geq \min \{4n-10t,n+2t,6t \} \geq \frac{3n}{2}$. Thus $r_E({\cal C}_{\beta})=\frac{3n}{2}$.
The proof of $r_L({\cal C}_{\beta})=n$ is simple so we omit it.
\end{proof}
\hfill $\Box$ \\
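As a numerical sanity check (hypothetical, reusing \texttt{covering\_radius} and \texttt{lee\_dist} from the sketch in Section $3$; we take $n=4$, divisible by $4$ as in the choices made in the proof above):
\begin{verbatim}
def euclid_dist(u, v):
    w = {0: 0, 1: 1, 2: 4, 3: 1}     # min(x^2, (4 - x)^2) for x in Z_4
    return sum(w[(a - b) % 4] for a, b in zip(u, v))

n = 4
C_alpha = [(0,) * n, (2,) * n]
C_beta = [tuple([a] * n) for a in range(4)]
print(covering_radius(C_alpha, n, lee_dist))     # n = 4
print(covering_radius(C_alpha, n, euclid_dist))  # 2n = 8
print(covering_radius(C_beta, n, lee_dist))      # n = 4
print(covering_radius(C_beta, n, euclid_dist))   # 3n/2 = 6
\end{verbatim}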
In order to determine the covering radii of the simplex and MacDonald codes over ${\ZZ}_4$, we need to define a few block repetition codes over ${\ZZ}_4$ and find their covering radii.
To determine the covering radius of the ${\ZZ}_4$ block (three blocks, each of size $n$) repetition code $BRep^{3n}_{\alpha}: [3n,2,2n,4n,6n]$ generated by
$G= [ \overbrace{11 \ldots 1}^{n}\overbrace{2 2 \ldots 2}^{n} \overbrace{3 3 \ldots 3}^{n} ]$, note
that the code has constant Lee weight $4n$. Thus for ${\bf x}= 11 \ldots 1 \in {{\ZZ}^{3n}_4}$, we have
$d_{L}({\bf x},BRep^{3n}_{\alpha}) = 3n$. Hence by definition, $r_L({BRep^{3n}_{\alpha}}) \geq 3n$. On the other hand, its Gray image $\phi(BRep^{3n}_{\alpha})$ is equivalent to the binary linear code $[6n,2,4n]$ with generator matrix
\[
\left(\begin{array}{c|c|c}
\overbrace{1 1 \ldots 1}^{2n} &\overbrace{1 1 \ldots 1}^{2n}& \overbrace{0 0 \ldots 0}^{2n} \\
\underbrace{1 1 \ldots 1}_{2n}& 0 0 \ldots 0 & 1 1 \ldots 1 \\
\end{array} \right) .
\]
Thus the covering radius satisfies $r_L({BRep^{3n}_{\alpha}}) \leq \frac{4n}{2} + \frac{2n}{2} = 3n$. This completes the proof of the first part of Theorem~\ref{brep3n} below. For the second part, note that
$r_E({BRep^{3n}_{\alpha}}) \geq \frac{3n}{2} + 2n + \frac{3n}{2} = 5n.$ To find an upper bound, let ${\bf x}= ({\bf u} | {\bf v} | {\bf w}) \in {{\ZZ}^{3n}_4}$, where ${\bf u}, {\bf v}$ and ${\bf w}$ have compositions $(r_0,r_1,r_2,r_3)$, $(s_0,s_1,s_2,s_3)$ and
$(t_0,t_1,t_2,t_3)$ respectively, each summing to $n$. Then $d_E({\bf x},\bar{0})= 3n-r_0+3r_3-s_0-3s_3-t_0+3t_3, d_E({\bf x},{\bf c}_1) = 3n -r_1+3r_0-s_2+3s_1-t_3+3t_2,
d_E({\bf x},{\bf c}_2) = 3n-r_2+3r_1-s_0+3s_3-t_2+3t_1$ and $d_E({\bf x},{\bf c}_3)= 3n-r_3+3r_2-s_2+3s_1-t_1+3t_0$. Thus $d_E({\bf x},{BRep^{3n}_{\alpha}}) \leq 3n + \min \{ 3r_3+3s_3+3t_3-r_0-s_0-t_0, 3r_0+3s_2+3t_2-r_1-s_2-t_3, 3r_1+3s_3+3t_1-r_2-s_0-t_2, 3r_2+3s_1+3t_0-r_3-s_2-t_1 \} \leq 3n + \frac{1}{2} \{ n+4s_1+4s_3 \} \leq \frac{11n}{2} $.
\begin{theorem}\label{brep3n}
$r_L({BRep^{3n}_{\alpha}}) = 3n$ and $5n \leq r_E({BRep^{3n}_{\alpha}}) \leq \frac{11n}{2}.$
\end{theorem}
One can also define a ${\ZZ}_4$ block (two blocks each of size $n$) repetition code $BRep^{2n}_{\alpha}: [2n,2,n,2n,4n]$ generated by
$G= [ \overbrace{11 \ldots 1}^{n}\overbrace{2 2 \ldots 2}^{n}]$. We have the following theorem; since its proof is similar to that of Theorem \ref{brep3n}, we omit it.
\begin{theorem}\label{brep2n}
$r_L({BRep^{2n}_{\alpha}}) = 2n$ and $r_E({BRep^{2n}_{\alpha}}) = \frac{7n}{2}.$
\end{theorem}
Block code $BRep^{2n}_{\alpha}$ can be generalized to a block repetition code (two blocks of size $m$ and $n$ respectively) $BRep^{m+n}: [m+n,2,m, \min \{2m,m+2n\}, \min \{4m, m+4n\}]$
generated by $G= [ \overbrace{11 \ldots 1}^{m}\overbrace{2 2 \ldots 2}^{n}]$. Using similar arguments, Theorem \ref{brep2n} generalizes easily to the following.
\begin{theorem}\label{brep_mn}
$r_L({BRep^{m+n}}) = m+n$ and $ r_E({BRep^{m+n}}) = 2n+\frac{3m}{2} .$
\end{theorem}
\section{Quaternary Simplex Codes of Type $\alpha$ and $\beta$}
Quaternary simplex codes of type $\alpha$ and $\beta$ have been recently studied
in \cite{bgl99}. The type $\alpha$ simplex code $S_k^{\alpha}$ is a linear code over
${\ZZ}_4$ with parameters $\left[2^{2k},2k,2^{2k-1},2^{2k},3 \cdot 2^{2k-1}\right]$
and an inductive generator matrix given by
\be \label{skalpha}
G_k^{\alpha} = \left[\begin{array}{c|c|c|c} 0\; 0 \cdots 0 & 1\; 1 \cdots 1 &
2\; 2 \cdots 2 & 3\; 3 \cdots 3 \\\hline
G_{k-1}^{\alpha} & G_{k-1}^{\alpha} & G_{k-1}^{\alpha}&G_{k-1}^{\alpha}
\end{array}\right]\ee
with $G_1^{\alpha}$ =$[ 0\; 1\; 2\; 3 ]$.
The dual code of $S_k^{\alpha}$ is a $\left[2^{2k},2^{2k+1}-2k\right]$ code.
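The inductive construction (\ref{skalpha}) is straightforward to implement. The following sketch (illustrative; \texttt{simplex\_alpha} is our own name) builds the $k$ rows of $G_k^{\alpha}$ and verifies by exhaustive enumeration, for $k=2$, that every non-zero codeword of $S_k^{\alpha}$ has Lee weight $2^{2k}$, a fact used in the proof of Theorem $4$ below:
\begin{verbatim}
from itertools import product

def simplex_alpha(k):
    # k rows of length 4^k; the 2-dimension of the code is 2k
    if k == 1:
        return [[0, 1, 2, 3]]
    G = simplex_alpha(k - 1)
    n = len(G[0])
    top = [a for a in range(4) for _ in range(n)]  # 0..0 1..1 2..2 3..3
    return [top] + [row * 4 for row in G]

def lee_w(x):
    return sum(min(c, 4 - c) for c in x)

k = 2
G = simplex_alpha(k)
words = {tuple(sum(a * g[i] for a, g in zip(cf, G)) % 4
               for i in range(len(G[0])))
         for cf in product(range(4), repeat=k)}
assert all(lee_w(w) == 2 ** (2 * k) for w in words if any(w))
\end{verbatim}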
The type $\beta$ simplex code $S_k^{\beta}$ is a punctured version of $S_k^{\alpha}$
with parameters $$\left[2^{k-1}(2^k-1),2k,2^{2(k-1)},2^{k-1}(2^k-1),2^k(3 \cdot 2^{k-2}-1)\right]$$
and an inductive generator matrix given by
\be \label{skbeta2}
G_2^{\beta} = \left[\begin{array}{c|c|c} 1\; 1\; 1\; 1 & 0 & 2 \\\hline
0\; 1\; 2\; 3 & 1 & 1\end{array}\right], \ee
and for $k > 2$
\be \label{skbetak}
G_k^{\beta} = \left[\begin{array}{c|c|c} 1\; 1 \cdots 1 & 0\; 0 \cdots 0
& 2\; 2 \cdots 2 \\\hline
G_{k-1}^{\alpha} & G_{k-1}^{\beta} & G_{k-1}^{\beta}\end{array}\right], \ee
where $G_{k-1}^{\alpha}$ is the generator matrix of $S_{k-1}^{\alpha}$. For details the
reader is referred to \cite{bgl99}.
The dual code of $S_k^{\beta}$ is a $\left[2^{k-1}(2^k-1),2^{2k}-2^{k}-2k\right]$ type
$\alpha$ code with minimum Lee weight $d_L=3$.
\begin{theorem}
$r_L({S_k^{\alpha}})=2^{2k}\;\mbox{and}\;r_E({S_k^{\alpha}}) \leq \frac{11(4^k-1)+9}{6}.$
\end{theorem}
\begin{proof}
Let ${\bf x}= 11 \ldots 1 \in {\ZZ}^{n}_4$. Since $S_k^{\alpha}$ is a constant Lee weight $(=2^{2k})$ code, we have
$d_{L}({\bf x},S_k^{\alpha}) = 2^{2k}$. Hence by definition, $r_L({S_k^{\alpha}}) \geq 2^{2k}$. On the other hand by equation (\ref{skalpha}),
the result of Mattson (see Proposition \ref{mattson}) for finite rings and using Theorem \ref{brep3n}, we get
\[
\begin{array}{ccc}
r_L({S_k^{\alpha}}) & \leq & r_L({S_{k-1}^{\alpha}}) + r_L( < \overbrace{1 1 \ldots 1}^{2^{2(k-1)}}\overbrace{2 2 \ldots 2}^{2^{2(k-1)}} \overbrace{ 3 3 \ldots 3}^{2^{2(k-1)}} >)\\
& = & r_L({S_{k-1}^{\alpha}}) + 3\cdot 2^{2(k-1)} \\
& \leq & 3\cdot 2^{2(k-1)} + 3\cdot 2^{2(k-2)} + 3\cdot 2^{2(k-3)} + \ldots + 3\cdot 2^{2\cdot 1} + r_L({S_{1}^{\alpha}}) \\
& \leq & 3 (4^{k-1}+4^{k-2}+ \ldots+4) +4 \quad (\mbox{since}\; r_L({S_{1}^{\alpha}})=4)\\
&=& 2^{2k}.
\end{array}
\]
Thus $r_L({S_k^{\alpha}})=2^{2k}$. Similar arguments can be used to show that (using Theorem \ref{brep3n})
\[
\begin{array}{ccc}
r_E({S_k^{\alpha}}) & \leq & \frac{11}{2}\left( 4^{(k-1)} + 4^{(k-2)} + 4^{(k-3)} + \ldots + 4^{1} +1\right)-\frac{11}{2}+ r_E({S_{1}^{\alpha}}) \\
& \leq & \frac{11}{6} (4^k-1) -\frac{11}{2}+7\; (\;\mbox{since}\; r_E({S_{1}^{\alpha}}) \leq 7)\\
& = & \frac{11(4^k-1)+9}{6}.
\end{array}
\]
\end{proof}
\hfill $\Box$ \\
Similar arguments yield the covering radii of the simplex codes of type $\beta$. We provide an outline of the proof.
\begin{theorem}
$r_L({S_k^{\beta}}) \leq 2^{k-1}(2^k-1)-2\;\mbox{and}\;r_E({S_k^{\beta}}) \leq 2^k(2^{k+1}-1)+\frac{1}{3}(4^k-1)-\frac{147}{2}.$
\end{theorem}
\begin{proof}
By equation (\ref{skbetak}), Proposition \ref{mattson} and Theorem \ref{brep_mn}, we get
\[
\begin{array}{ccc}
r_L({S_k^{\beta}}) & \leq & r_L({S_{k-1}^{\beta}}) + r_L( < \overbrace{1 1 1 \ldots 1}^{4^{(k-1)}} \overbrace{2 2 2 \ldots 2}^{2^{(2k-3)}-2^{(k-2)}} >)\\
& = & r_L({S_{k-1}^{\beta}}) + 2^{(2k-2)}+2^{(2k-3)}-2^{(k-2)} \\
& \leq & (2^{(2k-2)}+2^{(2k-3)} +\ldots +2^6+2^5+2^4+2^3) - (2^{(k-3)}+2^{(k-4)} + \ldots +2^2+2)+ r_L({S_{2}^{\beta}}) \\
& \leq & (2^{(2k-1)}-1)-(2^2+2+1)-(2^{(k-1)}-1)-1+ 6 (\mbox{since}\; r_L({S_{2}^{\beta}}) \leq 7)\\
&=& 2^{k-1}(2^k-1)-2.
\end{array}
\]
Thus $r_L({S_k^{\beta}}) \leq 2^{k-1}(2^k-1)-2$. Similar arguments can be used to show that (using Theorem \ref{brep3n})
\[
\begin{array}{ccc}
r_E({S_k^{\beta}}) & \leq & 2^{(k-1)}(2^{(k-1)}-1)+2^{(k-2)}(2^{(k-2)}-1)+ \ldots +2^3(2^3-1)+2^2(2^2-1)\\
& &+3 (2^{(2k-1)}+2^{(2k-3)}+\ldots + 2^7+2^5)+ r_E({S_{2}^{\beta}}) \\
& \leq&2^{2k+1}+ \frac{1}{3} (4^k-1)-(2^k-1)-4^3-4^2-4 + \frac{19}{2} \; (\;\mbox{since}\; r_E({S_{2}^{\beta}}) \leq \frac{19}{2})\\
& = & 2^k(2^{k+1}-1)+\frac{1}{3}(4^k-1)-\frac{147}{2}.
\end{array}
\]
\end{proof}
\hfill $\Box$ \\
\begin{theorem}
$r_L({S_k^{\alpha}}^{\perp})=1$, $r_L({S_k^{\beta}}^{\perp}) = 2,$ $r_E({S_k^{\alpha}}^{\perp}) \leq 4$ and $ r_E({S_k^{\beta}}^{\perp}) \leq 4$.
\end{theorem}
\begin{proof}
By Delsarte bound, $r_L({S_k^{\alpha}}^{\perp}) \leq 1$ and $r_L({S_k^{\beta}}^{\perp}) \leq 2$.
Thus equality follows in the first case. For the second case, note that $r_L({S_k^{\beta}}^{\perp}) \neq 1$ by the sphere-covering bound. The results for
the Euclidean distance follow from the Delsarte bound.
\end{proof}
\hfill $\Box$ \\
\section{Quaternary MacDonald Codes of Type $\alpha$ and $\beta$}
The $q$-ary MacDonald code ${\cal M}_{k,u}(q)$
over the finite field $\mbox{\msbm F}_q$ is a unique $\left[\frac{q^{k}-q^{u}}{q-1},k,
q^{k-1}-q^{u-1}\right]$ code in which every nonzero codeword has
weight either $q^{k-1}$ or $q^{k-1}-q^{u-1}$ \cite{dodsim98}.
In \cite{cg03}, the authors defined the MacDonald codes over ${\ZZ}_4$ using the generator matrices of
simplex codes. For $1 \leq u \leq k-1,$ let
$G_{k,u}^{\alpha}\left(G_{k,u}^{\beta}\right)$ be the matrix
obtained from $G_{k}^{\alpha}\left(G_{k}^{\beta}\right)$ by
deleting columns corresponding to the columns of
$G_{u}^{\alpha}\left(G_{u}^{\beta}\right)$, i.e., \be
\label{macalpha}
G_{k,u}^{\alpha}=\left[\begin{array}{cc}G_k^{\alpha}& \backslash\;
\frac{\bf{0}}{G_u^{\alpha}}
\end{array} \right],
\ee
and\\
\be \label{macbeta}
G_{k,u}^{\beta}=\left[\begin{array}{cc}G_k^{\beta}&
\backslash\; \frac{\bf{0}}{G_u^{\beta}}
\end{array} \right],
\ee
where $[A \backslash B]$ denotes the matrix obtained
from the matrix $A$ by deleting
the columns of the matrix $B$ and ${\bf 0}$ in
$(\ref{macalpha})\left(\;\mbox{resp.}(\ref{macbeta})\right)$
is a $(k-u) \times 2^{2u}\left(\;\mbox{resp.}\;(k-u)
\times 2^{u-1}(2^{u}-1)\right)$ zero matrix.
The code ${\cal M}_{k,u}^{\alpha}:[2^{2k}-2^{2u},2k] \left({\cal M}_{k,u}^{\beta}: [(2^{k-1}-2^{u-1})(2^k+2^u-1),2k]\right)$
generated by the matrix
$G_{k,u}^{\alpha}\left(G_{k,u}^{\beta}\right)$ is the punctured
code of $S_k^{\alpha}\left(S_k^{\beta}\right)$ and is called a
{\em MacDonald code} of type $\alpha\; ( \beta)$.
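Since the columns of $G_k^{\alpha}$ run exactly once through ${\ZZ}_4^k$ (an easy induction from (\ref{skalpha})), the deletion in (\ref{macalpha}) amounts to removing the $2^{2u}$ columns whose top $k-u$ entries all vanish. A hypothetical sketch, reusing \texttt{simplex\_alpha} from Section $5$:
\begin{verbatim}
def macdonald_alpha(k, u):
    # delete from G_k^alpha the columns matching [0 ; G_u^alpha]
    Gk = simplex_alpha(k)
    ncols = len(Gk[0])
    keep = [j for j in range(ncols)
            if any(Gk[r][j] for r in range(k - u))]
    return [[Gk[r][j] for j in keep] for r in range(k)]

G = macdonald_alpha(3, 1)
assert len(G[0]) == 4 ** 3 - 4 ** 1   # length 2^{2k} - 2^{2u}
\end{verbatim}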
The next theorems provide basic bounds on the covering radii of the MacDonald codes.
\begin{theorem}
\[
\begin{array}{ccc}
r_L({\cal M}_{k,u}^{\alpha}) & \leq & 4^k-4^r + r_L({\cal M}_{r,u}^{\alpha})\;\mbox{for}\; u < r \leq k,\\
r_E({\cal M}_{k,u}^{\alpha}) &\leq & \frac{11}{6} (4^k-4^{r})+ r_E({\cal M}_{r,u}^{\alpha})\;\mbox{for}\; u < r \leq k.
\end{array}
\]
\end{theorem}
\begin{proof}
By Theorem \ref{brep3n},
\[
\begin{array}{ccc}
r_L({\cal M}_{k,u}^{\alpha}) &\leq & 3\cdot 2^{(2k-2)} + r_L({\cal M}_{k-1,u}^{\alpha})\\
& \leq & 3\cdot 2^{(2k-2)} + 3\cdot 2^{(2k-4)} + \ldots + 3\cdot 2^{2r} + r_L({\cal M}_{r,u}^{\alpha}), \quad k \geq r > u\\
& = &4^k-4^r + r_L({\cal M}_{r,u}^{\alpha}).
\end{array}
\]
Similar arguments hold for $r_E({\cal M}_{k,u}^{\alpha})$.
\end{proof}
\hfill $\Box$ \\
Similarly, using equation (\ref{macbeta}), Proposition \ref{mattson} and Theorem \ref{brep_mn}, the following bounds can be obtained for the type $\beta$ MacDonald codes.
\begin{theorem}
\[
\begin{array}{ccc}
r_L({\cal M}_{k,u}^{\beta}) & \leq & 2^{k-1}(2^k-1) - 2^{r-1}(2^r-1) + r_L({\cal M}_{r,u}^{\beta})\;\mbox{for}\; u < r \leq k,\\
r_E({\cal M}_{k,u}^{\beta}) &\leq & \frac{2^{2r-1}}{3}(4^{k-r+1}-1)+4^{r-1}(4^{k-r}-1)- 3. 2^{r-2}(2^{k-r}-1)+ r_E({\cal M}_{r,u}^{\beta})\;\mbox{for}\; u < r \leq k.
\end{array}
\]
\end{theorem}
\section{Conclusion}
We have computed bounds on the covering radii of the simplex and MacDonald codes over ${\ZZ}_4$ and have also provided exact values in some cases. It would be
an interesting future task to determine the exact covering radii of many of these codes and to generalize the results to codes over ${\ZZ}_{2^s}.$
\bigskip
\noindent
{\bf Acknowledgement.}
The authors would like to thank Patrick Sol\'e for reading the first draft of the paper and pointing out an error in it.
\section{Introduction}
The aim of this article is to develop a general framework for spatial discretisations of the parabolic stochastic PDEs of the form
\begin{equ}
\partial_t u = A u + F(u, \xi)\;,
\end{equ}
where $A$ is an elliptic differential operator, $\xi$ is a rough noise, and $F$ is a non-linear function in $u$
which is affine in $\xi$. The class of spatial discretisations we work with are of the form
\begin{equ}
\partial_t u^\varepsilon = A^\varepsilon u^\varepsilon + F^\varepsilon(u^\varepsilon, \xi^\varepsilon)\;,
\end{equ}
with the spatial variable taking values in the dyadic grid with mesh size $\varepsilon > 0$,
where $A^\varepsilon$, $\xi^\varepsilon$ and $F^\varepsilon$ are discrete approximations of $A$, $\xi$ and $F$
respectively.
A particular example prototypical for the class of equations we are interested in
is the dynamical $\Phi^4$ model in dimension $3$,
which can be formally described by the equation
\begin{equs}[e:Phi]
\partial_t \Phi = \Delta \Phi + \bigl(\infty - a\bigr) \Phi - \lambda \Phi^3 + \xi\;, \qquad \Phi(0, \cdot) = \Phi_0(\cdot)\;, \tag{$\Phi^4_3$}
\end{equs}
on the torus $\mathbf{T}^3 \stackrel{\mathclap{\mbox{\tiny def}}}{=} (\mathbf{R} / \mathbf{Z})^3$ and for $t \geq 0$, where $\Delta$ is the Laplace operator on $\mathbf{T}^3$, $a \in \mathbf{R}$ is a fixed constant, $\lambda > 0$ is a ``coupling constant'', $\Phi_0$ is some initial data, and $\xi$ is the space-time white noise over $L^2(\mathbf{R} \times \mathbf{T}^3)$, see \cite{PZ14}.
Here, $\infty$ denotes an ``infinite constant'': \eqref{e:Phi} should be interpreted
as the limit of solutions to the equation obtained by mollifying $\xi$ and replacing
$\infty$
by a constant which diverges in a suitable way as the mollifier tends to the identity.
It was shown in \cite{Hai14} that this limit exists and is independent of the choice
of mollifier. The reason for the appearance of this infinite constant is that
solutions are random Schwartz distributions (this is already the case for the linear
equation, see \cite{PZ14}), so that their third power is undefined.
The above notation also correctly suggests that
solutions to \eqref{e:Phi} still depend on one parameter, namely the ``finite part'' of the
infinite constant, but this will not be relevant here and we consider this as being fixed from
now on.
In two spatial dimensions, a solution theory for \eqref{e:Phi} was given in
\cite{AR91,DPD03}, see also \cite{MR815192} for earlier work on a closely related model.
In three dimensions, alternative approaches to \eqref{e:Phi} were recently obtained
in \cite{CC13} (via paracontrolled
distributions, see \cite{GIP12} for the development of that approach),
and in \cite{Antti} (via renormalisation group techniques
\`a la Wilson).
It is natural to consider finite difference approximations to \eqref{e:Phi} for a number of reasons.
Our main motivation goes back to the seminal article \cite{BFS83}, where the authors provide a
very clean and relatively compact argument showing that lattice approximations $\mu_\varepsilon$ to the $\Phi^4_3$
measure are tight as the mesh size goes to $0$. These measures are given on the dyadic grid $\mathbf{T}^3_\varepsilon \subset \mathbf{T}^3$ with the mesh size $\varepsilon > 0$ by
\begin{equ}[e:mu_eps]
\mu_\varepsilon(\Phi^\varepsilon) \stackrel{\mathclap{\mbox{\tiny def}}}{=} e^{-S_\varepsilon(\Phi^\varepsilon)} \prod_{x \in \mathbf{T}^3_\varepsilon} d \Phi^\varepsilon(x) / Z_\varepsilon\;,
\end{equ}
for every function $\Phi^\varepsilon$ on $\mathbf{T}^3_\varepsilon$, where $Z_\varepsilon$ is a normalisation factor and
\begin{equ}[e:DAction]
S_\varepsilon(\Phi^\varepsilon) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \varepsilon \sum_{x \sim y} \bigl(\Phi^\varepsilon(x) - \Phi^\varepsilon(y)\bigr)^2 - \frac{(C_\lambda^{(\varepsilon)} - a) \varepsilon^3}{2} \sum_{x \in \mathbf{T}^3_\varepsilon} \Phi^\varepsilon(x)^2 + \frac{\lambda \varepsilon^3}{4} \sum_{x \in \mathbf{T}^3_\varepsilon} \Phi^\varepsilon(x)^4\;,
\end{equ}
with $C_\lambda^{(\varepsilon)}$ being a ``renormalisation constant'' and with the first sum running over all the nearest neighbours on the grid $\mathbf{T}^3_\varepsilon$, where each pair $x, y$ is counted twice. Then the $\Phi^4_3$ measure $\mu$ can be heuristically written as
\begin{equ}[e:PhiMeasure]
\mu(\Phi) \sim e^{-S(\Phi)} \prod_{x \in \mathbf{T}^3} d \Phi(x)\;,
\end{equ}
for $\Phi \in \mathcal{S}'$ and for $S$ being a limit of its finite difference approximations \eqref{e:DAction}:
\begin{equ}
S(\Phi) = \int_{\mathbf{T}^3} \left(\frac{1}{2} \bigl(\nabla \Phi(x)\bigr)^2 - \frac{\infty - a}{2} \Phi(x)^2 + \frac{\lambda}{4} \Phi(x)^4\right) dx\;.
\end{equ}
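For concreteness, the action \eqref{e:DAction} is elementary to evaluate on the periodic grid; the following Python sketch (illustrative only; the renormalisation constant enters as a given number \texttt{C\_eps}) counts each nearest-neighbour pair twice by shifting the field in both directions along each axis:
\begin{verbatim}
import numpy as np

def lattice_action(phi, eps, a, lam, C_eps):
    # phi is an (M, M, M) array on the periodic grid, with M = 1 / eps
    grad2 = sum(np.sum((np.roll(phi, s, ax) - phi) ** 2)
                for ax in range(3) for s in (-1, 1))
    return (eps * grad2
            - 0.5 * (C_eps - a) * eps ** 3 * np.sum(phi ** 2)
            + 0.25 * lam * eps ** 3 * np.sum(phi ** 4))
\end{verbatim}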
Since the measures $\mu_\varepsilon$ with a sufficiently small coupling constant are invariant for the natural
finite difference approximations of \eqref{e:Phi}, showing that these converge to \eqref{e:Phi}
straightforwardly implies that any accumulation point of $\mu_\varepsilon$ is invariant for the
solutions of \eqref{e:Phi}. These accumulation points are known to coincide with the $\Phi^4_3$ measure $\mu$
\cite[Thm.~2.1]{Park}, thus showing that $\mu$ is indeed invariant for \eqref{e:Phi}, as one might expect.
Another reason why discretisations of \eqref{e:Phi} are interesting is because they can be related to
the behaviour of Ising-type models under Glauber dynamics near their critical temperature, see \cite{Simon1,Simon2}.
See also the related result \cite{MW14} where the dynamical $\Phi^4_2$ model
is obtained from the Glauber dynamic for a Kac-Ising model
in a more direct way, without going through lattice approximations.
Similar results are expected to hold in three spatial dimensions, see e.g.\ the
review article \cite{GLP99}.
We will henceforth consider discretisations of \eqref{e:Phi} of the form
\begin{equs}[e:DPhiRenorm]
\frac{d}{dt} \Phi^\varepsilon = \Delta^\varepsilon \Phi^\varepsilon + \bigl( C_\lambda^{(\varepsilon)} - a\bigr) \Phi^\varepsilon - \lambda \bigl(\Phi^\varepsilon\bigr)^3 + \xi^\varepsilon\;, \quad \Phi^\varepsilon(0, \cdot) = \Phi^\varepsilon_0(\cdot)\;, \tag{$\Phi^{4}_{3,\varepsilon}$}
\end{equs}
on the dyadic discretisation $\mathbf{T}^3_\varepsilon$ of $\mathbf{T}^3$ with mesh size $\varepsilon = 2^{-N}$ for $N \in \mathbf{N}$, where $\Phi^\varepsilon_0 \in \mathbf{R}^{\mathbf{T}^3_\varepsilon}$, $\Delta^\varepsilon$ is the nearest-neighbour approximation of the Laplacian $\Delta$, $\xi^\varepsilon$ is a spatial discretisation of $\xi$, and $C_\lambda^{(\varepsilon)}$ is a sequence of renormalisation constants, depending on $\lambda$ and diverging as $\varepsilon \to 0$. We construct these discretisations on a common probability space by setting
\begin{equ}[e:SimpleDNoise]
\xi^\varepsilon(t,x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \varepsilon^{-3} \langle \xi(t, \cdot), \mathbf{1}_{|\cdot - x| \leq \varepsilon / 2} \rangle\;, \qquad (t,x) \in \mathbf{R} \times \mathbf{T}^3_\varepsilon\;,
\end{equ}
where $| x|$ denotes the supremum norm of $x \in \mathbf{R}^3$.
Our results are however flexible enough to easily accommodate a variety of different approximations to the noise and the Laplacian.
Existence and uniqueness of global solutions to \eqref{e:DPhiRenorm} for any fixed $\varepsilon > 0$ follows
immediately from standard results for SDEs \cite{Khasminkii,IW89}.
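For fixed $\varepsilon$, \eqref{e:DPhiRenorm} can also be simulated directly. The following sketch (an explicit Euler step in time, for illustration only: it is not the object of our convergence analysis, which keeps time continuous, and the value of \texttt{C\_eps} below is a mere placeholder for $C^{(\varepsilon)}_\lambda$) uses the noise scaling dictated by \eqref{e:SimpleDNoise}:
\begin{verbatim}
import numpy as np

def euler_step(phi, dt, eps, a, lam, C_eps, rng):
    # nearest-neighbour Laplacian on the periodic grid T^3_eps
    lap = sum(np.roll(phi, s, ax) for ax in range(3) for s in (-1, 1))
    lap = (lap - 6.0 * phi) / eps ** 2
    # cell averages of space-time white noise: the increment of the
    # xi^eps term over [t, t + dt] has variance dt / eps^3 per site
    noise = rng.standard_normal(phi.shape) * np.sqrt(dt / eps ** 3)
    return phi + dt * (lap + (C_eps - a) * phi - lam * phi ** 3) + noise

rng = np.random.default_rng(0)
M = 16; eps = 1.0 / M; dt = 1e-5   # dt below the stability bound eps^2 / 6
phi = np.zeros((M, M, M))
for _ in range(100):
    phi = euler_step(phi, dt, eps, a=1.0, lam=0.1, C_eps=0.1 / eps, rng=rng)
\end{verbatim}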
Our main approximation result is the following, where we take the initial conditions
$\Phi^\varepsilon_0$ to be random variables defined on a common probability space,
independent of the noise $\xi$. (We could of course simply take them deterministic, but
this formulation is the one that will then
be used in our proof of existence of global solutions.)
\begin{theorem}\label{t:Phi}
Let $\xi$ be a space-time white noise over $L^2(\mathbf{R} \times \mathbf{T}^3)$ on a probability space $(\Omega, \mathscr{F}, \mathbf{P})$, let $\Phi_0 \in \mathcal{C}^\eta(\mathbf{R}^{3})$ almost surely,
for some $\eta > -\frac{2}{3}$, and let $\Phi$ be the unique maximal solution of \eqref{e:Phi} on $[0, T_\star]$ with fixed constants $a \in \mathbf{R}$ and $\lambda > 0$. Let furthermore $\Delta^\varepsilon$ be the nearest-neighbour approximation of $\Delta$, let $\Phi^\varepsilon_0 \in \mathbf{R}^{\mathbf{T}^3_\varepsilon}$ be a random variable on the same probability space, let $\xi^\varepsilon$ be given by \eqref{e:SimpleDNoise}, and let $\Phi^\varepsilon$ be the unique global solution of \eqref{e:DPhiRenorm}. If the initial data satisfy almost surely
\begin{equ}
\lim_{\varepsilon \to 0} \Vert \Phi_0; \Phi^\varepsilon_0 \Vert^{(\varepsilon)}_{\mathcal{C}^\eta} = 0\;,
\end{equ}
then for every $\alpha < -\frac{1}{2}$ there is a sequence of renormalisation constants
\begin{equ}[e:RenormIntro]
C^{(\varepsilon)}_\lambda \sim \frac{\lambda}{\varepsilon} - \lambda^2 \log \varepsilon
\end{equ}
in \eqref{e:DPhiRenorm} and a sequence of stopping times $T_\varepsilon$ (which also depend on $\lambda$ and $a$) satisfying $\lim_{\varepsilon \to 0} T_\varepsilon = T_\star$ in probability such that, for every $\bar{\eta} < \eta \wedge \alpha$, and for any $\delta > 0$ small enough, one has the limit in probability
\begin{equ}[e:PhiConvergence]
\lim_{\varepsilon \to 0} \Vert \Phi; \Phi^\varepsilon \Vert^{(\varepsilon)}_{\mathcal{C}^{\delta, \alpha}_{\bar{\eta}, T_\varepsilon}} = 0\;.
\end{equ}
\end{theorem}
\begin{remark}
By writing \eqref{e:RenormIntro} we mean that $C^{(\varepsilon)}_\lambda$ is a sum of two terms proportional to $\lambda$ and $- \lambda^2$ respectively, whose asymptotic divergence speeds are $\varepsilon^{-1}$ and $\log \varepsilon$ as $\varepsilon \to 0$.
\end{remark}
As a corollary of this convergence result and an argument along the lines of
\cite{Bourgain}, we have the following result, where we denote by $\mu$ the $\Phi^4_3$ measure
on the torus with a coupling constant $\lambda > 0$ and mass $m_0 > 0$, see \cite{BFS83} for a definition.
\begin{corollary}\label{c:Phi}
If $a = m_0^2 > 0$ and if the coupling constant $\lambda > 0$ in \eqref{e:Phi} is sufficiently small, then for $\mu$-almost every initial condition $\Phi_0$ and for every $T > 0$, the solution of \eqref{e:Phi} constructed in \cite{Hai14} belongs to $\mathcal{C}^{\delta, \alpha}_{\bar \eta}\bigl([0,T], \mathbf{T}^3\bigr)$,
for $\delta$, $\alpha$ and $\bar \eta$ as in \eqref{e:PhiConvergence}. In particular, this
yields a reversible Markov process on $\mathcal{C}^{\alpha}\bigl(\mathbf{T}^3\bigr)$ with an
invariant measure $\mu$.
\end{corollary}
In order to prove this result, we will use regularity structures, as
introduced in \cite{Hai14}, to obtain uniform bounds (in $\varepsilon$) on solutions to
\eqref{e:DPhiRenorm} by describing the right hand side via a type of generalised
``Taylor expansion'' in the neighbourhood of any space-time point. The problem of
obtaining uniform bounds is then split into the problem of on the one hand
obtaining uniform bounds on the objects playing the role of Taylor monomials
(these require subtle stochastic cancellations, but are given by explicit formulae),
and on the other hand obtaining uniform regularity estimates on the ``Taylor coefficients''
(these are described implicitly as solutions to a fixed point problem but can be
controlled by standard Banach fixed point arguments).
In order to treat the discretised equation \eqref{e:DPhiRenorm}, we introduce a discrete
analogue to the concept of ``model'' introduced in \cite{Hai14} and we show that the
corresponding ``reconstruction map'' satisfies uniform bounds analogous to the ones
available in the continuous case.
One technical difficulty we encounter with this approach
is that the set-up is somewhat asymmetric since
time is continuous while space is discrete. Instead of considering a fixed model
as in \cite{Hai14}, we will consider a family of models indexed by the time parameter
and satisfying a suitable regularity property. This idea requires some modification of the original theory, in particular of the ``abstract integration'' operation
\cite[Sec.~5]{Hai14} and of the corresponding Schauder-type estimates.
As this article was nearing its completion, Zhu and Zhu \cite{Twins}
independently obtained the convergence of
solutions to \eqref{e:DPhiRenorm} to those of \eqref{e:Phi} using different methods.
Additionally, Gubinelli and Perkowski \cite{Reloaded} recently obtained a similar
result for the KPZ equation.
One advantage of the approach pursued here is that it is quite systematic and that
many of our intermediate
results do not specifically refer to the $\Phi^4_3$ model.
This lays the foundations of a systematic approximation theory which
can in principle be applied to many other singular SPDEs, e.g.\
stochastic Burgers-type equations \cite{Hai11a,HMW12, HM14}, the KPZ
equation \cite{PhysRevLett.56.889,MR1462228,Hai13},
or the continuous parabolic Anderson model \cite{Hai14, HL15}.
\subsection*{Structure of the article}
In Section~\ref{s:RegStruct} we introduce regularity structures and inhomogeneous models
(i.e.\ models which are functions in the time variable). Furthermore, we prove here the
key results of the theory in our present framework, namely the reconstruction theorem
and the Schauder estimates. In Section~\ref{s:PDEs} we provide a solution theory for
a general parabolic stochastic PDE, whose solution is a function in time.
Section~\ref{s:DModels} is devoted to the development of a discrete analogue of
inhomogeneous models, which we use in Section~\ref{s:DPDEs} to analyse solutions
of discretised stochastic equations. In Section~\ref{s:GaussModels} we analyse models,
built from a Gaussian noise. Finally, in Section~\ref{s:DPhi}, we prove Theorem~\ref{t:Phi}
and Corollary~\ref{c:Phi}.
\subsection*{Notations and conventions}
Throughout this article, we will work in $\mathbf{R}^{d+1}$ where $d$ is the dimension of space and $1$ is the dimension of time. Moreover, we consider the time-space scaling $\mathfrak{s} = (\mathfrak{s}_0, 1, \ldots, 1)$ of $\mathbf{R}^{d+1}$, where $\mathfrak{s}_0 > 0$ is an integer time scaling and $\mathfrak{s}_i = 1$, for $i=1, \ldots, d$, is the scaling in each spatial direction. We set $|\mathfrak{s}| \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sum_{i=0}^d \mathfrak{s}_i$, denote by $| x|$ the $\ell^\infty$-norm of a point $x \in \mathbf{R}^d$, and define $\snorm{z} \stackrel{\mathclap{\mbox{\tiny def}}}{=} |t|^{1/\mathfrak{s}_0} \vee | x|$ to be the $\mathfrak{s}$-scaled $\ell^\infty$-norm of $z=(t,x) \in \mathbf{R}^{d+1}$. For a multiindex $k \in \mathbf{N}^{d+1}$ we define $\sabs{k} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sum_{i=0}^{d} \mathfrak{s}_i k_i$, and for $k \in \mathbf{N}^d$ with the scaling $(1, \ldots, 1)$ we denote the respective norm by $|k|$. (Our natural numbers $\mathbf{N}$ include $0$.)
For $r > 0$, we denote by $\mathcal{C}^r(\mathbf{R}^d)$ the usual H\"{o}lder space on $\mathbf{R}^d$, by $\mathcal{C}^r_0(\mathbf{R}^d)$ we denote the space of compactly supported $\mathcal{C}^r$-functions and by $\mathcal{B}^r_0(\mathbf{R}^d)$ we denote the set of $\mathcal{C}^r$-functions, compactly supported in $B(0,1)$ (the unit ball centered at the origin) and with the $\mathcal{C}^r$-norm bounded by $1$.
For $\varphi \in \mathcal{B}^r_0(\mathbf{R}^d)$, $\lambda > 0$ and $x, y \in \mathbf{R}^{d}$ we define $\varphi_x^\lambda(y) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \lambda^{-d} \varphi(\lambda^{-1}(y-x))$. For $\alpha < 0$, we define the space $\mathcal{C}^\alpha(\mathbf{R}^d)$ to consist of $\zeta \in \mathcal{S}'(\mathbf{R}^d)$, belonging to the dual space of the space of $\mathcal{C}^{r}_0$-functions, with $r > -\lfloor \alpha \rfloor$, and such that
\begin{equ}[e:AlphaNorm]
\Vert \zeta \Vert_{\mathcal{C}^\alpha} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sup_{\varphi \in \mathcal{B}^r_0} \sup_{x \in \mathbf{R}^d} \sup_{\lambda \in (0,1]} \lambda^{-\alpha} |\langle \zeta, \varphi_x^\lambda \rangle| < \infty\;.
\end{equ}
Furthermore, for a function $\mathbf{R} \ni t \mapsto \zeta_t$ we define the operator $\delta^{s, t}$ by
\begin{equ}[e:deltaTime]
\delta^{s, t} \zeta \stackrel{\mathclap{\mbox{\tiny def}}}{=} \zeta_t - \zeta_s\;,
\end{equ}
and for $\delta > 0$, $\eta \leq 0$ and $T > 0$, we define the space $\mathcal{C}^{\delta, \alpha}_{\eta}\bigl([0,T], \mathbf{R}^d\bigr)$ to consist of the functions $(0, T] \ni t \mapsto \zeta_t \in \mathcal{C}^{\alpha}(\mathbf{R}^d)$, such that the following norm is finite
\begin{equ}[e:SpaceTimeNorm]
\Vert \zeta \Vert_{\mathcal{C}^{\delta, \alpha}_{\eta, T}} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sup_{t \in (0, T]} \onorm{t}^{-\eta} \Vert \zeta_t \Vert_{\mathcal{C}^\alpha} + \sup_{s \neq t \in (0, T]} \onorm{t, s}^{-\eta} \frac{\Vert \delta^{s, t} \zeta \Vert_{\mathcal{C}^{\alpha - \delta}}}{|t-s|^{\delta/\mathfrak{s}_0}}\;,
\end{equ}
where $\onorm{t} \stackrel{\mathclap{\mbox{\tiny def}}}{=} |t|^{1/\mathfrak{s}_0} \wedge 1$ and $\onorm{t, s} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \onorm{t} \wedge \onorm{s}$. The space $\mathcal{C}^{0, \alpha}_{\eta}\bigl([0,T], \mathbf{R}^d\bigr)$ contains the functions $\zeta$ as above which are continuous in time, and is equipped with the norm defined by the first term in \eqref{e:SpaceTimeNorm}.
Sometimes we will need to work with space-time distributions with scaling $\mathfrak{s}$. In order to describe their regularities, we define, for a test function $\varphi$ on $\mathbf{R}^{d+1}$, for $\lambda > 0$ and $z, \bar z \in \mathbf{R}^{d+1}$,
\begin{equ}[e:ScaleEta]
\varphi_z^{\lambda, \mathfrak{s}}(\bar z) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \lambda^{-|\mathfrak{s}|} \varphi\bigl(\lambda^{-s_0}(\bar z_0- z_0), \lambda^{-1}(\bar z_1- z_1), \ldots, \lambda^{-1}(\bar z_d- z_d)\bigr)\;,
\end{equ}
and we define the space $\mathcal{C}_{\mathfrak{s}}^\alpha(\mathbf{R}^{d+1})$ similarly to $\mathcal{C}^\alpha(\mathbf{R}^{d})$, but using the scaled functions \eqref{e:ScaleEta} in \eqref{e:AlphaNorm}.
In this article we will also work with discrete functions $\zeta^\varepsilon \in \mathbf{R}^{\Lambda_\varepsilon^d}$ on the dyadic grid $\Lambda_\varepsilon^d \subset \mathbf{R}^d$ with the mesh size $\varepsilon = 2^{-N}$ for $N \in \mathbf{N}$. In order to compare them with their continuous counterparts $\zeta \in \mathcal{C}^\alpha(\mathbf{R}^d)$ with $\alpha \leq 0$, we introduce the following ``distance''
\begin{equ}
\Vert \zeta; \zeta^\varepsilon \Vert^{(\varepsilon)}_{\mathcal{C}^\alpha} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sup_{\varphi \in \mathcal{B}^r_0} \sup_{x \in \Lambda_\varepsilon^d} \sup_{\lambda \in [\varepsilon,1]} \lambda^{-\alpha} |\langle \zeta, \varphi_x^\lambda \rangle - \langle \zeta^\varepsilon, \varphi_x^\lambda \rangle_\varepsilon|\;,
\end{equ}
where $\langle \cdot, \cdot \rangle_\varepsilon$ is the discrete analogue of the duality pairing on the grid, i.e.
\begin{equ}[e:DPairing]
\langle \zeta^\varepsilon, \varphi_x^\lambda \rangle_\varepsilon \stackrel{\mathclap{\mbox{\tiny def}}}{=} \int_{\Lambda_\varepsilon^d} \zeta^\varepsilon(y) \varphi_x^\lambda(y)\, dy \stackrel{\mathclap{\mbox{\tiny def}}}{=} \varepsilon^d \sum_{y \in \Lambda_\varepsilon^d} \zeta^\varepsilon(y) \varphi_x^\lambda(y)\;.
\end{equ}
For space-time distributions / functions $\zeta$ and $\zeta^\varepsilon$, for $\delta > 0$ and $\eta \leq 0$, we define
\begin{equ}[e:DHolderDist]
\Vert \zeta; \zeta^\varepsilon \Vert^{(\varepsilon)}_{\mathcal{C}^{\delta, \alpha}_{\eta, T}} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sup_{t \in (0, T]} \onorm{t}^{-\eta} \Vert \zeta_t; \zeta_t^\varepsilon \Vert^{(\varepsilon)}_{\mathcal{C}^\alpha} + \sup_{s \neq t \in (0, T]} \onorm{s, t}^{-\eta} \frac{\Vert \delta^{s, t} \zeta; \delta^{s, t}\zeta^\varepsilon \Vert^{(\varepsilon)}_{\mathcal{C}^{\alpha - \delta}}}{\bigl(|t-s|^{1/\mathfrak{s}_0} \vee \varepsilon\bigr)^{\delta}}.
\end{equ}
Furthermore, we define the norm $\Vert \zeta^\varepsilon \Vert^{(\varepsilon)}_{\mathcal{C}^{\delta, \alpha}_{\eta, T}}$ in the same way as in \eqref{e:AlphaNorm} and \eqref{e:SpaceTimeNorm}, but using the discrete pairing \eqref{e:DPairing}, the quantities $\enorm{t} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \onorm{t} \vee \varepsilon$ and $\enorm{s, t} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \enorm{s} \wedge \enorm{t}$ instead of $\onorm{t}$ and $\onorm{s, t}$ respectively, and $|t-s|^{1/\mathfrak{s}_0} \vee \varepsilon$ instead of $|t-s|^{1/\mathfrak{s}_0}$.
Finally, we denote by $\star$ and $\star_\varepsilon$ the convolutions on $\mathbf{R}^{d+1}$ and $\mathbf{R} \times \Lambda_\varepsilon^d$ respectively, and by $x \lesssim y$ we mean that there exists a constant $C$ independent of the relevant quantities such that $x \leq C y$.
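As an illustration of the discrete pairing \eqref{e:DPairing}, the following sketch (in dimension $d = 1$ on the unit torus, with a standard bump function; the helper names are ours) evaluates $\langle \zeta^\varepsilon, \varphi_x^\lambda \rangle_\varepsilon$ as a Riemann sum with weight $\varepsilon^d$:
\begin{verbatim}
import numpy as np

def bump(y):
    # a smooth bump supported in (-1, 1), evaluated only on its support
    out = np.zeros_like(y)
    inside = np.abs(y) < 1
    out[inside] = np.exp(-1.0 / (1.0 - y[inside] ** 2))
    return out

eps = 2.0 ** -8
grid = np.arange(0.0, 1.0, eps)          # Lambda_eps^1 on the unit torus
zeta_eps = np.sin(2 * np.pi * grid)

def discrete_pairing(zeta, x, lam):
    # <zeta^eps, phi_x^lam>_eps = eps * sum_y zeta^eps(y) phi_x^lam(y)
    phi = bump((grid - x) / lam) / lam   # phi_x^lambda with d = 1
    return eps * np.sum(zeta * phi)

print(discrete_pairing(zeta_eps, 0.5, 0.25))
\end{verbatim}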
\subsection*{Acknowledgements}
The authors would like to thank Hendrik Weber for valuable discussions of this and related problems,
Dirk Erhard for pointing out an inaccuracy in the formulation of a previous version
of Theorem~\ref{t:ModelsConvergence}, as well as Rongchan Zhu and Xiangchan Zhu for pointing out
that the results of \cite{Fel74,Park,BFS83} rely on the smallness of the coupling constant.
MH would like to gratefully acknowledge support by the ERC and the Leverhulme Foundation through
a consolidator grant (number 615897) and a leadership award respectively.
\section{Regularity structures}
\label{s:RegStruct}
In this section we recall the definition of a regularity structure and we introduce the inhomogeneous models
used in this article,
which are maps from $\mathbf{R}$ (the time coordinate) to the usual space of models as in \cite[Def. 2.17]{Hai14},
endowed with a norm enforcing some amount of time regularity.
Furthermore, we define inhomogeneous modelled distributions and prove the respective
reconstruction theorem and Schauder estimates.
Throughout this section, we work with the scaling $\mathfrak{s}=(\mathfrak{s}_0, 1, \ldots, 1)$ of $\mathbf{R}^{d+1}$,
but all our results can easily be generalised to any non-Euclidean scaling in space, similarly to \cite{Hai14}.
\subsection{Regularity structures and inhomogeneous models}
\label{ss:Models}
The purpose of regularity structures, introduced in \cite{Hai14} and motivated by
\cite{Lyo98,Gub04}, is to generalise Taylor expansions using essentially arbitrary
functions/distributions instead of polynomials. The precise definition is as follows.
\begin{definition}
A {\it regularity structure} $\mathscr{T} = (\mathcal{T}, \mathcal{G})$ consists of two objects:
\begin{itemize}
\item A {\it model space} $\mathcal{T}$, which is a graded vector space $\mathcal{T} = \bigoplus_{\alpha \in \mathcal{A}} \mathcal{T}_\alpha$, where each $\mathcal{T}_\alpha$ is a (finite dimensional in our case) Banach space and $\mathcal{A} \subset \mathbf{R}$ is a finite set of
``homogeneities''.
\item A {\it structure group} $\mathcal{G}$ of linear transformations of $\mathcal{T}$, such that for every $\Gamma~\in~\mathcal{G}$, every $\alpha \in \mathcal{A}$ and every $\tau \in \mathcal{T}_\alpha$ one has
$\Gamma \tau - \tau \in \mathcal{T}_{< \alpha}$, with $\mathcal{T}_{< \alpha} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \bigoplus_{\beta < \alpha} \mathcal{T}_\beta$.
\end{itemize}
\end{definition}
In \cite[Def. 2.1]{Hai14}, the set $\mathcal{A}$ was only assumed to be locally finite and bounded from below.
Our assumption is more strict, but does not influence anything in the analysis of the equations we consider.
In addition, our definition rules out the ambiguity of topologies on $\mathcal{T}$.
\begin{remark}\label{ex:Poly}
One of the simplest non-trivial examples of a regularity structure is given by the ``abstract polynomials'' in $d+1$
indeterminates $X_i$, with $i = 0, \ldots, d$. The set $\mathcal{A}$ in this case consists of the values
$\alpha \in \mathbf{N}$ such that $\alpha \leq r$, for some $r < \infty$ and, for each $\alpha \in \mathcal{A}$, the space
$\mathcal{T}_\alpha$ contains all monomials in the $X_i$ of scaled degree $\alpha$.
The structure group $\poly\mathcal{G}$ is then simply the group of translations in $\mathbf{R}^{d+1}$
acting on $X^k$ by $h \mapsto (X-h)^k$.
We now fix $r > 0$ to be sufficiently large and denote by $\poly{\mathcal{T}}$ the
space of such polynomials of scaled degree $r$ and by $\poly{\mathcal{F}}$
the set $\{X^k\,:\, \sabs{k} \leq r\}$.
We will only ever consider regularity structures containing
$\poly\mathcal{T}$ as a subspace. In particular, we always assume that there is a natural morphism
$\mathcal{G} \to \poly\mathcal{G}$ compatible with the action of $\poly\mathcal{G}$ on $\poly\mathcal{T} \hookrightarrow \mathcal{T}$.
\end{remark}
\begin{remark}\label{r:QOperators}
For $\tau \in \mathcal{T}$ we will write $\mathcal{Q}_\alpha \tau$ for its canonical projection onto $\mathcal{T}_\alpha$, and define $\Vert \tau \Vert_{\alpha} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \Vert \mathcal{Q}_\alpha \tau \Vert$. We also write $\mathcal{Q}_{<\alpha}$
for the projection onto $\mathcal{T}_{< \alpha}$, etc.
\end{remark}
Another object in the theory of regularity structures is a model. Given an abstract expansion, the model converts it into a concrete distribution describing its local behaviour around every point. We modify the original definition of model in \cite{Hai14}, in order to be able to describe time-dependent distributions.
\begin{definition}\label{d:Model}
Given a regularity structure $\mathscr{T} = (\mathcal{T}, \mathcal{G})$, an {\it inhomogeneous model} $(\Pi, \Gamma, \Sigma)$ consists of the following three elements:
\begin{itemize}
\item A collection of maps $\Gamma^t : \mathbf{R}^{d} \times \mathbf{R}^{d} \to \mathcal{G}$, parametrised by $t \in \mathbf{R}$, such that
\begin{equ}[e:GammaDef]
\Gamma^t_{x x}=1\;, \qquad \Gamma^t_{x y} \Gamma^t_{y z} = \Gamma^t_{x z}\;,
\end{equ}
for any $x, y, z \in \mathbf{R}^{d}$ and $t \in \mathbf{R}$, and the action of $\Gamma^t_{x y}$ on polynomials is given as in Remark~\ref{ex:Poly} with $h = (0, y-x)$.
\item A collection of maps $\Sigma_x : \mathbf{R} \times \mathbf{R} \to \mathcal{G}$, parametrized by $x \in \mathbf{R}^d$, such that, for any $x \in \mathbf{R}^{d}$ and $s, r, t \in \mathbf{R}$, one has
\begin{equ}[e:SigmaDef]
\Sigma^{t t}_{x}=1\;, \qquad \Sigma^{s r}_{x} \Sigma^{r t}_{x} = \Sigma^{s t}_{x}\;, \qquad \Sigma^{s t}_{x} \Gamma^{t}_{x y} = \Gamma^{s}_{x y} \Sigma^{s t}_{y}\;,
\end{equ}
and the action of $\Sigma^{s t}_{x}$ on polynomials is given as in Remark~\ref{ex:Poly} with $h = (t-s, 0)$.
\item A collection of linear maps $\Pi^t_x: \mathcal{T} \to \mathcal{S}'(\mathbf{R}^{d})$, such that
\begin{equ}[e:PiDef]
\Pi^t_{y} = \Pi^t_x \Gamma^t_{x y}\;, \quad \bigl(\Pi_x^t X^{(0, \bar k)}\bigr)(y) = (y-x)^{\bar k}\;, \quad \bigl(\Pi_x^t X^{(k_0, \bar k)}\bigr)(y) = 0\;,
\end{equ}
for all $x, y \in \mathbf{R}^{d}$, $t \in \mathbf{R}$, $\bar{k} \in \mathbf{N}^{d}$, $k_0 \in \mathbf{N}$ such that $k_0 > 0$.
\end{itemize}
Moreover, for any $\gamma > 0$ and every $T > 0$, there is a constant $C$ for which the analytic bounds
\minilab{Model}
\begin{equs}\label{e:PiGammaBound}
| \langle \Pi^t_{x} \tau, \varphi_{x}^\lambda \rangle| \leq C \Vert \tau \Vert \lambda^{l} &\;, \qquad \Vert \Gamma^t_{x y} \tau \Vert_{m} \leq C \Vert \tau \Vert | x-y|^{l - m}\;,\\
\Vert \Sigma^{s t}_{x} \tau \Vert_{m} &\leq C \Vert \tau \Vert |t - s|^{(l - m)/\mathfrak{s}_0}\;,\label{e:SigmaBound}
\end{equs}
hold uniformly over all $\tau \in \mathcal{T}_l$, with $l \in \mathcal{A}$ and $l < \gamma$, all $m \in \mathcal{A}$ such that $m < l$, all $\lambda \in (0,1]$, all $\varphi \in \mathcal{B}^r_0(\mathbf{R}^d)$ with $r > -\lfloor\min \mathcal{A}\rfloor$, and all $t, s \in [-T, T]$ and $x, y \in \mathbf{R}^d$ such that $|t - s| \leq 1$ and $|x-y| \leq 1$.
In addition, we say that the map $\Pi$ has time regularity $\delta > 0$, if the bound
\begin{equs}
\label{e:PiTimeBound}
| \langle \bigl(\Pi^t_{x} - \Pi^{s}_{x}\bigr) \tau, \varphi_{x}^\lambda \rangle| \leq C \Vert \tau \Vert |t-s|^{\delta/\mathfrak{s}_0} \lambda^{l - \delta}\;,
\end{equs}
holds for all $\tau \in \mathcal{T}_l$ and the other parameters as before.
\end{definition}
\begin{remark}\label{r:ModelNorm}
For a model $Z = (\Pi, \Gamma, \Sigma)$, we denote by $\Vert \Pi \Vert_{\gamma; T}$, $\Vert \Gamma \Vert_{\gamma;T}$ and $\Vert \Sigma \Vert_{\gamma;T}$ the smallest constants $C$ such that the bounds on $\Pi$, $\Gamma$ and $\Sigma$ in \eqref{e:PiGammaBound} and \eqref{e:SigmaBound} hold. Furthermore, we define
\begin{equ}
\vert\!\vert\!\vert Z \vert\!\vert\!\vert_{\gamma; T} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \Vert \Pi \Vert_{\gamma; T} + \Vert \Gamma \Vert_{\gamma; T} + \Vert \Sigma \Vert_{\gamma; T}\;.
\end{equ}
If $\bar{Z} = (\bar{\Pi}, \bar{\Gamma}, \bar{\Sigma})$ is another model, then we also define the ``distance'' between two models
\begin{equ}[e:ModelsDist]
\vert\!\vert\!\vert Z; \bar{Z} \vert\!\vert\!\vert_{\gamma; T} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \Vert \Pi - \bar{\Pi} \Vert_{\gamma; T} + \Vert \Gamma - \bar{\Gamma} \Vert_{\gamma; T} + \Vert \Sigma - \bar{\Sigma} \Vert_{\gamma; T}\;.
\end{equ}
We note that the norms on the right-hand side still make sense with $\Gamma$ and $\Sigma$ viewed
as linear maps on $\mathcal{T}$. We also set $\Vert \Pi \Vert_{\delta, \gamma; T} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \Vert \Pi \Vert_{\gamma; T} + C$, where $C$ is the smallest constant such that the bound \eqref{e:PiTimeBound} holds, and we define
\begin{equ}
\vert\!\vert\!\vert Z \vert\!\vert\!\vert_{\delta, \gamma; T} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \Vert \Pi \Vert_{\delta, \gamma; T} + \Vert \Gamma \Vert_{\gamma; T} + \Vert \Sigma \Vert_{\gamma; T}\;.
\end{equ}
Finally, we define the ``distance'' $\vert\!\vert\!\vert Z; \bar{Z} \vert\!\vert\!\vert_{\delta, \gamma; T}$ as in \eqref{e:ModelsDist}.
\end{remark}
\begin{remark}
In \cite[Def. 2.17]{Hai14} the analytic bounds on a model were assumed to hold locally uniformly.
In the problems which we aim to consider, the models are periodic in space, which allows us to
require the bounds to hold globally.
\end{remark}
\begin{remark} \label{r:OriginalModel}
For a given model $(\Pi, \Gamma, \Sigma)$ we can define the following two objects
\begin{equ}[e:ModelTilde]
\bigl(\tilde{\Pi}_{(t,x)} \tau\bigr) (s, y) = \bigl(\Pi_x^s \Sigma_x^{s t} \tau\bigr) (y)\;, \qquad \tilde{\Gamma}_{(t,x), (s,y)} = \Gamma^{t}_{xy} \Sigma_y^{t s} = \Sigma_x^{t s} \Gamma^{s}_{xy} \;,
\end{equ}
for $\tau \in \mathcal{T}$. Of course, in general we cannot fix the spatial point $y$ in the definition of $\tilde{\Pi}$, and we should really write $\bigl(\bigl(\tilde{\Pi}_{(t,x)} \tau\bigr) (s, \cdot)\bigr)(\varphi) = \bigl(\Pi_x^s \Sigma_x^{s t} \tau\bigr) (\varphi)$ instead, for any test function $\varphi$, but the notation \eqref{e:ModelTilde} is more suggestive. One can then easily verify that the pair $(\tilde{\Pi}, \tilde{\Gamma})$ is a model in the original sense of \cite[Def.~2.17]{Hai14}.
\end{remark}
\subsection{Inhomogeneous modelled distributions}
\label{ss:ModelledDistr}
Modelled distributions represent abstract expansions in the basis of a regularity structure. In order to be able to describe the singularity coming from the behaviour of our
solutions near time $0$, we introduce inhomogeneous modelled distributions which admit
a certain blow-up as time goes to zero.
Given a regularity structure $\mathscr{T} = (\mathcal{T}, \mathcal{G})$ with a model $Z=(\Pi, \Gamma, \Sigma)$, values
$\gamma, \eta \in \mathbf{R}$ and a final time $T > 0$, we consider maps
$H : (0, T] \times \mathbf{R}^{d} \to \mathcal{T}_{<\gamma}$ and define
\begin{equs}[e:ModelledDistributionNormAbs]
\Vert H \Vert_{\gamma, \eta; T} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sup_{t \in (0,T]} &\sup_{x \in \mathbf{R}^d} \sup_{l < \gamma} \onorm{t}^{(l - \eta) \vee 0} \Vert H_t(x) \Vert_l\\
&+ \sup_{t \in (0,T]} \sup_{\substack{x \neq y \in \mathbf{R}^d \\ | x - y | \leq 1}} \sup_{l < \gamma} \frac{\Vert H_t(x) - \Gamma^{t}_{x y} H_t(y) \Vert_l}{\onorm{t}^{\eta - \gamma} | x - y |^{\gamma - l}}\;,
\end{equs}
where $l \in \mathcal{A}$ in the third supremum. Then the space $\mathcal{D}^{\gamma, \eta}_T$ consists of all such functions $H$, for which one has
\begin{equ}[e:ModelledDistributionNorm]
\vert\!\vert\!\vert H \vert\!\vert\!\vert_{\gamma, \eta; T} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \Vert H \Vert_{\gamma, \eta; T} + \sup_{\substack{s \neq t \in (0,T] \\ | t - s | \leq \onorm{t, s}^{\mathfrak{s}_0}}} \sup_{x \in \mathbf{R}^d} \sup_{l < \gamma} \frac{\Vert H_t(x) - \Sigma_x^{t s} H_{s}(x) \Vert_l}{\onorm{t, s}^{\eta - \gamma} |t - s|^{(\gamma - l)/\mathfrak{s}_0}} < \infty\;.
\end{equ}
The quantities $\onorm{t}$ and $\onorm{t, s}$ used in these definitions were introduced in \eqref{e:SpaceTimeNorm}. Elements of these spaces will be called {\it inhomogeneous modelled distributions}.
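\begin{remark}
As a simple illustration (a sketch, not needed in the sequel), consider the polynomial part of the structure with the canonical model, for which $\Gamma^t_{x y}$ re-expands polynomials around a new base point and $\Sigma^{s t}_x$ is the identity. If $\zeta : (0, T] \times \mathbf{R}^d \to \mathbf{R}$ has bounded spatial derivatives up to order $\gamma$ which are H\"older continuous in time, then its truncated Taylor lift
\begin{equ}
H_t(x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sum_{\sabs{k} < \gamma} \frac{X^k}{k!} \bigl(D^k \zeta_t\bigr)(x)\;, \qquad k \in \mathbf{N}^d\;,
\end{equ}
belongs to $\mathcal{D}^{\gamma, \gamma}_T$: the first bound in \eqref{e:ModelledDistributionNormAbs} follows from the boundedness of the derivatives, the second one is the classical Taylor remainder estimate, and the additional bound in \eqref{e:ModelledDistributionNorm} encodes the H\"older continuity in time of the spatial derivatives.
\end{remark}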
\begin{remark}
The norm in \eqref{e:ModelledDistributionNorm} depends on $\Gamma$ and $\Sigma$, but
does {\it not} depend on $\Pi$; this fact will be crucial in the sequel.
When we want to stress the dependency on the model, we will also write $\mathcal{D}^{\gamma, \eta}_T(Z)$.
\end{remark}
\begin{remark}\label{r:ModelledDistrib}
In contrast to the singular modelled distributions from \cite[Def.~6.2]{Hai14}, we do not require the restriction $|x - y| \leq \onorm{t, s}$ in the second term in \eqref{e:ModelledDistributionNormAbs}. This is due to the fact that we consider the space and time variables separately (see the proof of Theorem~\ref{t:Integration}, where this fact is used).
\end{remark}
\begin{remark}\label{r:DistrMult}
Since our spaces $\mathcal{D}^{\gamma, \eta}_T$ are almost identical to those of \cite[Def.~6.2]{Hai14}, the multiplication and differentiation results from \cite[Sec.~6]{Hai14} hold also for our definition.
\end{remark}
To be able to compare two modelled distributions $H \in \mathcal{D}^{\gamma, \eta}_T(Z)$ and $\bar{H} \in \mathcal{D}^{\gamma, \eta}_T(\bar{Z})$, which in general live in different spaces since the spaces $\mathcal{D}^{\gamma, \eta}_T$ depend on the models, we define the quantities
\minilab{ModelledNorms}
\begin{equs}
\Vert H; \bar{H} \Vert_{\gamma, \eta; T} &\stackrel{\mathclap{\mbox{\tiny def}}}{=} \sup_{t \in (0,T]} \sup_{x \in \mathbf{R}^d} \sup_{l < \gamma} \onorm{t}^{(l - \eta) \vee 0} \Vert H_t(x) - \bar{H}_t(x) \Vert_l \\
& + \sup_{t \in (0,T]} \sup_{\substack{x \neq y \in \mathbf{R}^d \\ | x - y | \leq 1}} \sup_{l < \gamma} \frac{\Vert H_t(x) - \Gamma^{t}_{x y} H_t(y) - \bar{H}_t(x) + \bar{\Gamma}^{t}_{x y} \bar{H}_t(y) \Vert_l}{\onorm{t}^{\eta - \gamma} | x - y |^{\gamma - l}}\;,\\
\vert\!\vert\!\vert H; \bar{H} \vert\!\vert\!\vert_{\gamma, \eta; T} &\stackrel{\mathclap{\mbox{\tiny def}}}{=} \Vert H; \bar{H} \Vert_{\gamma, \eta; T}\\
& + \sup_{\substack{s \neq t \in (0,T] \\ | t - s | \leq \onorm{t, s}^{\mathfrak{s}_0}}} \sup_{x \in \mathbf{R}^d} \sup_{l < \gamma} \frac{\Vert H_t(x) - \Sigma_x^{t s} H_{s}(x) - \bar{H}_t(x) + \bar{\Sigma}_x^{t s} \bar{H}_{s}(x) \Vert_l}{\onorm{t, s}^{\eta - \gamma} |t - s|^{(\gamma - l)/\mathfrak{s}_0}}\;.
\end{equs}
The ``reconstruction theorem'' is one of the key results of the theory of regularity structures.
Here is its statement in our current framework.
\begin{theorem}\label{t:Reconstruction}
Let $\mathscr{T} = (\mathcal{T}, \mathcal{G})$ be a regularity structure with $\alpha \stackrel{\mathclap{\mbox{\tiny def}}}{=} \min \mathcal{A} < 0$ and $Z = (\Pi, \Gamma, \Sigma)$ be a model. Then, for every $\eta \in \mathbf{R}$, $\gamma > 0$ and $T > 0$, there is a unique family of linear
operators $\mathcal{R}_t : \mathcal{D}_T^{\gamma, \eta}(Z) \to \mathcal{C}^{\alpha}(\mathbf{R}^d)$, parametrised by $t \in (0,T]$,
such that the bound
\begin{equ}[e:Reconstruction]
|\langle \mathcal{R}_t H_t - \Pi^t_x H_t(x), \varphi_x^\lambda \rangle| \lesssim \lambda^\gamma \onorm{t}^{\eta - \gamma} \Vert H \Vert_{\gamma, \eta; T} \Vert \Pi \Vert_{\gamma; T}\;,
\end{equ}
holds uniformly in $H \in \mathcal{D}^{\gamma, \eta}_T(Z)$, $t \in (0,T]$, $x \in \mathbf{R}^d$, $\lambda \in (0,1]$ and $\varphi \in \mathcal{B}^r_0(\mathbf{R}^d)$ with $r > -\lfloor \alpha\rfloor$.
If furthermore the map $\Pi$ has time regularity $\delta > 0$, then, for any $\tilde{\delta} \in (0, \delta]$ such that $\tilde{\delta} \leq (m - \zeta)$ for all $\zeta, m \in \left((-\infty, \gamma) \cap \mathcal{A}\right) \cup \{\gamma\}$ such that $\zeta < m$, the function $t \mapsto \mathcal{R}_t H_t$ satisfies
\begin{equ}[e:ReconstructBound]
\Vert \mathcal{R} H \Vert_{\mathcal{C}^{\tilde{\delta}, \alpha}_{\eta - \gamma, T}} \lesssim \Vert \Pi \Vert_{\delta, \gamma; T} \bigl(1 + \Vert \Sigma \Vert_{\gamma; T} \bigr) \vert\!\vert\!\vert H \vert\!\vert\!\vert_{\gamma, \eta; T}\;.
\end{equ}
Let $\bar{Z} = (\bar{\Pi}, \bar{\Gamma}, \bar{\Sigma})$ be another model for the same regularity structure, and let $\bar{\mathcal{R}}_t$ be the operator as above, but for the model $\bar Z$. Moreover, let the maps $\Pi$ and $\bar \Pi$ have time regularities $\delta > 0$. Then, for every $H \in \mathcal{D}^{\gamma, \eta}_T(Z)$ and $\bar{H} \in \mathcal{D}^{\gamma, \eta}_T(\bar{Z})$, the maps $t \mapsto \mathcal{R}_t H_t$ and $t \mapsto \bar \mathcal{R}_t \bar H_t$ satisfy
\begin{equ}[e:ReconstructTime]
\Vert \mathcal{R} H - \bar \mathcal{R} \bar H \Vert_{\mathcal{C}^{\tilde{\delta}, \alpha}_{\eta - \gamma, T}} \lesssim \vert\!\vert\!\vert H; \bar H \vert\!\vert\!\vert_{\gamma, \eta; T} + \vert\!\vert\!\vert Z; \bar{Z} \vert\!\vert\!\vert_{\delta, \gamma; T}\;,
\end{equ}
for any $\tilde{\delta}$ as above, and where the proportionality constant depends on $\vert\!\vert\!\vert H \vert\!\vert\!\vert_{\gamma, \eta; T}$, $\vert\!\vert\!\vert \bar H \vert\!\vert\!\vert_{\gamma, \eta; T}$, $\vert\!\vert\!\vert Z \vert\!\vert\!\vert_{\delta, \gamma; T}$ and $\vert\!\vert\!\vert \bar Z \vert\!\vert\!\vert_{\delta, \gamma; T}$.
\end{theorem}
\begin{proof}
Existence and uniqueness of the maps $\mathcal{R}_t$, as well as the bound \eqref{e:Reconstruction},
follow from \cite[Thm.~3.10]{Hai14}. The uniformity in time in \eqref{e:Reconstruction} follows from the uniformity of the corresponding bounds in \cite[Thm.~3.10]{Hai14}.
To prove that $t \mapsto \mathcal{R}_t H_t$ belongs to $\mathcal{C}^{\tilde{\delta}, \alpha}_{\eta - \gamma}\bigl([0,T], \mathbf{R}^d\bigr)$, we will first bound $\langle \mathcal{R}_t H_t, \varrho_x^\lambda\rangle$, for $\lambda \in (0,1]$, $x \in \mathbf{R}^d$ and $\varrho \in \mathcal{B}^r_0(\mathbf{R}^d)$. Using \eqref{e:Reconstruction} and properties of $\Pi$ and $H$ we get
\begin{equs}[e:ReconstructMapBound]
|\langle\mathcal{R}_t H_t, \varrho_x^\lambda\rangle| &\leq |\langle\mathcal{R}_t H_t - \Pi^t_{x} H_t(x), \varrho_x^\lambda\rangle | + |\langle\Pi^t_{x} H_t(x), \varrho_x^\lambda\rangle|\\
&\lesssim \lambda^{\gamma} \onorm{t}^{\eta - \gamma} + \sum_{\zeta \in [\alpha, \gamma) \cap \mathcal{A}} \lambda^{\zeta} \onorm{t}^{(\eta - \zeta) \wedge 0} \lesssim \lambda^{\alpha} \onorm{t}^{\eta - \gamma}\;,
\end{equs}
where the proportionality constant is affine in $\Vert H \Vert_{\gamma, \eta; T} \Vert \Pi \Vert_{\gamma; T}$, and $\alpha$ is the minimal homogeneity in $\mathcal{A}$.
In order to obtain the time regularity of $t \mapsto \mathcal{R}_t H_t$, we show that the distribution $\zeta_x^{s t} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \Pi^t_x H_t(x) - \Pi^s_x H_s(x)$ satisfies the bound
\begin{equs}[e:RecosntructTimeProof]
| \langle \zeta_x^{s t} - \zeta_y^{s t}, \varrho_x^{\lambda} \rangle | \lesssim |t-s|^{\tilde{\delta}/\mathfrak{s}_0} \onorm{s,t}^{\eta - \gamma} |x-y|^{\gamma - \tilde{\delta} - \alpha} \lambda^\alpha\;,
\end{equs}
uniformly over all $x, y \in \mathbf{R}^d$ such that $\lambda \leq |x-y| \leq 1$, all $s, t \in \mathbf{R}$, and for any value of $\tilde{\delta}$ as in the statement of the theorem. To this end, we consider two regimes: $|x-y| \leq |t-s|^{1/\mathfrak{s}_0}$ and $|x-y| > |t-s|^{1/\mathfrak{s}_0}$.
In the first case, when $|x-y| \leq |t-s|^{1/\mathfrak{s}_0}$, we write, using Definition~\ref{d:Model},
\begin{equ}[e:RecosntructTimeProofFirstCase]
\zeta_x^{s t} - \zeta_y^{s t} = \Pi^t_x \left( H_t(x) - \Gamma^t_{x y} H_t(y)\right) - \Pi^s_x \left( H_s(x) - \Gamma^s_{x y} H_s(y)\right),
\end{equ}
and bound these two terms separately. From the properties \eqref{e:PiGammaBound} and \eqref{e:ModelledDistributionNorm} we get
\begin{equs}
| \langle \Pi^t_x &\big( H_t(x) - \Gamma^t_{x y} H_t(y)\big), \varrho_x^{\lambda} \rangle | \lesssim \sum_{\zeta \in [\alpha, \gamma) \cap \mathcal{A}} \lambda^\zeta \Vert H_t(x) - \Gamma^t_{x y} H_t(y) \Vert_\zeta \\
&\lesssim \sum_{\zeta \in [\alpha, \gamma) \cap \mathcal{A}} \lambda^\zeta | x - y |^{\gamma - \zeta} \onorm{t}^{\eta - \gamma} \lesssim \lambda^\alpha |x-y|^{\gamma - \alpha} \onorm{t}^{\eta - \gamma}\;,
\label{e:RecosntructTimeProofFirstCaseBound}
\end{equs}
where we have exploited the condition $|x-y| \geq \lambda$. Recalling now the case we consider, we can bound the last expression by the right-hand side of \eqref{e:RecosntructTimeProof}. The same estimate holds for the second term in \eqref{e:RecosntructTimeProofFirstCase}.
Next, we consider the case $|x-y| > |t-s|^{1/\mathfrak{s}_0}$. In this regime we use the definition of a model and write
\begin{equs}
\zeta_x^{s t} - \zeta_y^{s t} &= \big(\Pi^t_x - \Pi^s_x\big) \big(H_t(x) - \Gamma^t_{x y} H_t(y)\big) + \Pi^s_x \big(1 - \Sigma_x^{s t}\big) \big(H_t(x) - \Gamma^t_{x y} H_t(y)\big)\\
&\qquad - \Pi^s_x \big(H_s(x) - \Sigma^{s t}_{x} H_t(x)\big) + \Pi^s_y \big(H_s(y) - \Sigma^{s t}_{y} H_t(y)\big)\;.
\label{e:RecosntructTimeProofSecondCase}
\end{equs}
The first term can be bounded exactly as \eqref{e:RecosntructTimeProofFirstCaseBound}, but using this time \eqref{e:PiTimeBound}, i.e.
\begin{equs}
| \langle \big(\Pi^t_x - \Pi^s_x\big) \big( H_t(x) - \Gamma^t_{x y} H_t(y)\big), \varrho_x^{\lambda} \rangle | \lesssim \lambda^{\alpha - \delta} |x-y|^{\gamma - \alpha} \onorm{t}^{\eta - \gamma} |t-s|^{\delta/\mathfrak{s}_0}\;.
\end{equs}
In order to estimate the second term in \eqref{e:RecosntructTimeProofSecondCase}, we first notice that from \eqref{e:SigmaBound} and \eqref{e:ModelledDistributionNorm} we get
\begin{equs}[e:RecosntructTimeProofSecondCaseSigma]
\Vert \big(1 &- \Sigma_x^{s t}\big) \big(H_t(x) - \Gamma^t_{x y} H_t(y)\big) \Vert_{\zeta} \lesssim \sum_{\zeta < m < \gamma} |t - s|^{(m - \zeta) / \mathfrak{s}_0} \Vert H_t(x) - \Gamma^t_{x y} H_t(y) \Vert_{m}\\
&\lesssim \sum_{\zeta < m < \gamma} |t - s|^{(m - \zeta) / \mathfrak{s}_0} | x - y |^{\gamma - m} \onorm{t}^{\eta - \gamma} \lesssim |t - s|^{\tilde{\delta}/ \mathfrak{s}_0} | x - y |^{\gamma - \tilde{\delta} - \zeta} \onorm{t}^{\eta - \gamma}\;,
\end{equs}
for any $\tilde{\delta} \leq \min_{m > \zeta \in \mathcal{A}} (m - \zeta)$, where we have used the assumption on the time variables. Hence, for the second term in \eqref{e:RecosntructTimeProofSecondCase} we have
\begin{equs}
| \langle \Pi^s_x \big(1 - \Sigma_x^{s t}\big) \big(H_t(x) &- \Gamma^t_{x y} H_t(y)\big), \varrho_x^{\lambda} \rangle | \\
&\lesssim |t - s|^{\tilde{\delta}/\mathfrak{s}_0} \onorm{t}^{\eta - \gamma} \sum_{\zeta < \gamma} \lambda^\zeta | x - y |^{\gamma - \tilde{\delta} - \zeta}\;.
\end{equs}
Since $|x-y| \geq \lambda$ and $\zeta \geq \alpha$, the estimate \eqref{e:RecosntructTimeProof} holds for this expression.
The third term in \eqref{e:RecosntructTimeProofSecondCase} we bound using the properties \eqref{e:PiGammaBound} and \eqref{e:ModelledDistributionNorm} by
\begin{equs}[e:RecosntructTimeProofSecondCaseBound]
| \langle \Pi^s_x (H_s(x) - \Sigma^{s t}_{x} H_t(x)), \varrho_x^{\lambda} \rangle | &\lesssim \sum_{\zeta < \gamma} \lambda^\zeta \Vert H_s(x) - \Sigma^{s t}_{x} H_t(x) \Vert_\zeta \\
&\lesssim \sum_{\zeta < \gamma} \lambda^\zeta |t - s|^{(\gamma - \zeta)/\mathfrak{s}_0} \onorm{t, s}^{\eta - \gamma}\;.
\end{equs}
It follows from $|x-y| \geq \lambda$, $|x-y| > |t-s|^{1/\mathfrak{s}_0}$ and $\zeta \geq \alpha$, that the latter can be estimated as in \eqref{e:RecosntructTimeProof}, when $\tilde{\delta} \leq \min\{\gamma - \zeta : \zeta \in \mathcal{A},\, \zeta < \gamma\}$. The same bound holds for the last term in \eqref{e:RecosntructTimeProofSecondCase}, and this finishes the proof of \eqref{e:RecosntructTimeProof}.
In view of the bound \eqref{e:RecosntructTimeProof} and \cite[Prop.~3.25]{Hai14}, we conclude that
\begin{equs}[e:ReconstructFullTime]
|\langle \mathcal{R}_t H_t - \mathcal{R}_s H_s - \zeta_x^{s t}, \varrho_x^\lambda \rangle| \lesssim |t-s|^{\tilde{\delta}/\mathfrak{s}_0} \lambda^{\gamma - \tilde{\delta}} \onorm{s,t}^{\eta - \gamma}\;,
\end{equs}
uniformly over $s, t \in \mathbf{R}$ and the other parameters as in \eqref{e:Reconstruction}. Thus, we can write
\begin{equs}
\langle \mathcal{R}_t H_t - \mathcal{R}_s H_s, \varrho_x^\lambda \rangle = \langle \mathcal{R}_t H_t - \mathcal{R}_s H_s - \zeta_x^{s t}, \varrho_x^\lambda \rangle + \langle \zeta_x^{s t}, \varrho_x^\lambda \rangle\;,
\end{equs}
where the first term is bounded in \eqref{e:ReconstructFullTime}. The second term we can write as
\begin{equs}
\langle \zeta_x^{s t}, \varrho_x^\lambda \rangle = \langle \big(\Pi^t_x - \Pi^s_x\big) H_t(x), \varrho_x^\lambda \rangle &+ \langle \Pi^s_x \big(H_t(x) - \Sigma_x^{t s} H_s(x)\big), \varrho_x^\lambda \rangle\\
&+ \langle \Pi^s_x \big(\Sigma_x^{t s}-1\big) H_s(x), \varrho_x^\lambda \rangle\;,
\end{equs}
which can be bounded by $|t-s|^{\tilde{\delta}/\mathfrak{s}_0} \lambda^{\alpha - \tilde{\delta}} \onorm{s,t}^{\eta - \gamma}$, using \eqref{e:PiTimeBound}, \eqref{e:RecosntructTimeProofSecondCaseBound} and \eqref{e:SigmaBound}. Here, in order to estimate the last term, we act similarly to \eqref{e:RecosntructTimeProofSecondCaseSigma}. Combining all these bounds together, we conclude that
\begin{equ}[e:ReconstructTimeReg]
|\langle \mathcal{R}_t H_t - \mathcal{R}_s H_s, \varrho_x^\lambda \rangle| \lesssim |t-s|^{\tilde{\delta}/\mathfrak{s}_0} \lambda^{\alpha - \tilde{\delta}} \onorm{s,t}^{\eta - \gamma}\;,
\end{equ}
which finishes the proof of the claim.
The bound \eqref{e:ReconstructTime} can be shown in a similar way. More precisely, similarly to \eqref{e:ReconstructMapBound} and using \cite[Eq.~3.4]{Hai14}, we can show that
\begin{equs}
|\langle\mathcal{R}_t H_t - \bar{\mathcal{R}}_t \bar{H}_t, \varrho_x^\lambda\rangle| \lesssim \lambda^{\alpha} \onorm{t}^{\eta - \gamma}\bigl(\Vert \Pi \Vert_{\gamma; T} \vert\!\vert\!\vert H; \bar H \vert\!\vert\!\vert_{\gamma, \eta; T} + \Vert \Pi - \bar \Pi \Vert_{\gamma; T} \vert\!\vert\!\vert \bar H \vert\!\vert\!\vert_{\gamma, \eta; T}\bigr).
\end{equs}
Denoting $\bar{\zeta}_x^{s t} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \bar \Pi^t_x \bar H_t(x) - \bar \Pi^s_x \bar H_s(x)$ and acting as above, we can prove an analogue of \eqref{e:ReconstructFullTime}:
\begin{equs}
| \langle \mathcal{R}_t H_t - \bar{\mathcal{R}}_t \bar{H}_t &- \mathcal{R}_s H_s + \bar{\mathcal{R}}_s \bar{H}_s - \zeta_x^{s t} + \bar{\zeta}_x^{s t}, \varrho_x^\lambda \rangle| \\
&\lesssim |t-s|^{\tilde{\delta}/\mathfrak{s}_0} \lambda^{\gamma - \tilde{\delta}} \onorm{s,t}^{\eta - \gamma} \bigl(\vert\!\vert\!\vert H; \bar H \vert\!\vert\!\vert_{\gamma, \eta; T} + \vert\!\vert\!\vert Z; \bar{Z} \vert\!\vert\!\vert_{\delta, \gamma; T}\bigr)\;,
\end{equs}
with the values of $\tilde{\delta}$ as before. Finally, similarly to \eqref{e:ReconstructTimeReg} we get
\begin{equs}
| \langle \mathcal{R}_t H_t - \bar{\mathcal{R}}_t \bar{H}_t - \mathcal{R}_s H_s + \bar{\mathcal{R}}_s \bar{H}_s&, \varrho_x^\lambda \rangle| \lesssim |t-s|^{\tilde{\delta}/\mathfrak{s}_0} \lambda^{\alpha - \tilde{\delta}} \onorm{s,t}^{\eta - \gamma} \\
&\times \bigl(\vert\!\vert\!\vert H; \bar H \vert\!\vert\!\vert_{\gamma, \eta; T} + \vert\!\vert\!\vert Z; \bar{Z} \vert\!\vert\!\vert_{\delta, \gamma; T}\bigr)\;,
\end{equs}
which finishes the proof.
\end{proof}
\begin{definition}\label{d:Reconstruct}
We will call the map $\mathcal{R}$, introduced in Theorem~\ref{t:Reconstruction}, the {\it reconstruction operator}, and we will always postulate in what follows that $\mathcal{R}_t = 0$, for $t \leq 0$.
\end{definition}
\begin{remark}\label{r:OriginalReconstruct}
One can see that the map $\tilde{\mathcal{R}}(t,\cdot) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \mathcal{R}_t(\cdot)$ is the reconstruction operator for the model \eqref{e:ModelTilde} in the sense of \cite[Thm.~3.10]{Hai14}.
\end{remark}
\subsection{Convolutions with singular kernels}
Mild solutions to parabolic stochastic PDEs are defined via convolutions with singular kernels, for which Schauder estimates play a key role. To describe this on the abstract level, we introduce the abstract integration map.
\begin{definition}\label{d:AbstractIntegration}
Given a regularity structure $\mathscr{T}=(\mathcal{T}, \mathcal{G})$, a linear map $\mathcal{I} : \mathcal{T} \to \mathcal{T}$ is said to be
an {\it abstract integration map} of order $\beta > 0$ if it satisfies the following properties:
\begin{itemize}
\item One has $\mathcal{I} : \mathcal{T}_m \to \mathcal{T}_{m + \beta}$, for every $m \in \mathcal{A}$ such that $m + \beta \in \mathcal{A}$.
\item For every $\tau \in \poly{\mathcal{T}}$, one has $\mathcal{I} \tau = 0$, where $\poly{\mathcal{T}} \subset \mathcal{T}$ contains the polynomial part of $\mathcal{T}$ and was introduced in Remark~\ref{ex:Poly}.
\item One has $\mathcal{I} \Gamma \tau - \Gamma \mathcal{I} \tau \in \poly{\mathcal{T}}$, for every $\tau \in \mathcal{T}$ and $\Gamma \in \mathcal{G}$.
\end{itemize}
\end{definition}
\begin{remark}
The second and third properties are dictated by the special role played by polynomials in the Taylor expansion.
One can find a more detailed motivation for this definition in \cite[Sec. 5]{Hai14}.
In general, we also allow for the situation where $\mathcal{I}$ has a domain which is not all of $\mathcal{T}$.
\end{remark}
Now we define the singular kernels whose convolutions we are going to describe.
\begin{definition}\label{d:Kernel}
A function $K : \mathbf{R}^{d + 1} \setminus \{0\} \to \mathbf{R}$ is said to be {\it regularising of order $\beta > 0$} if there is a constant $r > 0$ such that we can decompose
\begin{equ}[e:KernelExpansion]
K = \sum_{n \geq 0} K^{(n)}\;,
\end{equ}
in such a way that each term $K^{(n)}$
is supported in $\{z \in \mathbf{R}^{d+1} : \snorm{z} \leq c 2^{-n} \}$ for some $c > 0$,
satisfies
\begin{equs}[e:KernelBound]
|D^k K^{(n)}(z) | \lesssim 2^{(|\mathfrak{s}| - \beta + \sabs{k})n}\;,
\end{equs}
for every multiindex $k$ with $\sabs{k} \leq r$, and annihilates every polynomial
of scaled degree at most $r$, i.e. for every $k \in \mathbf{N}^{d+1}$ such that $\sabs{k} \leq r$ it satisfies
\begin{equ}[e:PolyKill]
\int_{\mathbf{R}^{d+1}} z^k K^{(n)}(z)\, dz = 0\;.
\end{equ}
\end{definition}
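\begin{remark}
Such a decomposition is typically obtained as follows (a sketch; see \cite[Sec.~5]{Hai14} for the complete construction). One fixes a smooth partition of unity $\sum_{n \geq 0} \chi_n = 1$ on a punctured neighbourhood of the origin, with $\chi_n$ supported in the dyadic annulus $\{z : c 2^{-n-1} \leq \snorm{z} \leq c 2^{-n}\}$, and sets $K^{(n)} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \chi_n K$. If $K$ behaves like a Green's function of order $\beta$ under the scaling $\mathfrak{s}$, the bounds \eqref{e:KernelBound} then follow by rescaling, and the moment condition \eqref{e:PolyKill} can be enforced by subtracting suitable smooth, compactly supported functions from the $K^{(n)}$, which only modifies the kernel by a smooth remainder.
\end{remark}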
Next, we describe how a model acts on the abstract integration map. When it is convenient, we will write $K_t(x) = K(z)$, for $z=(t,x)$.
\begin{definition}
Let $\mathcal{I}$ be an abstract integration map of order $\beta$ for a regularity structure $\mathscr{T}=(\mathcal{T}, \mathcal{G})$, let $Z = (\Pi, \Gamma, \Sigma)$ be a model and let $K$ be regularising of
order $\beta$ with $r > -\lfloor\min \mathcal{A}\rfloor$. We say that $Z$ realises $K$ for $\mathcal{I}$, if
for every $\alpha \in \mathcal{A}$ and every $\tau \in \mathcal{T}_\alpha$ one has the identity
\begin{equ}[e:PiIntegral]
\Pi^t_{x} \left(\mathcal{I} \tau + \mathcal{J}_{t, x} \tau \right)(y) = \int_{\mathbf{R}} \langle \Pi^{s}_{x} \Sigma_x^{s t} \tau, K_{t-s}(y - \cdot)\rangle\, ds\;,
\end{equ}
where the polynomial $\mathcal{J}_{t, x} \tau$ is defined by
\begin{equs}
\label{e:JDef}
\mathcal{J}_{t, x} \tau \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sum_{\sabs{k} < \alpha + \beta} \frac{X^k}{k!} \int_{\mathbf{R}} \langle\Pi^{s}_{x} \Sigma_x^{s t} \tau, D^k K_{t-s}(x - \cdot)\rangle\, ds\;,
\end{equs}
with $k \in \mathbf{N}^{d+1}$ and the derivative $D^k$ in time-space. Moreover, we require that
\begin{equs}[e:GammaSigmaIntegral]
\Gamma_{x y}^t \big(\mathcal{I} + \mathcal{J}_{t, y}\big) &= \big(\mathcal{I} + \mathcal{J}_{t, x}\big) \Gamma_{x y}^t\;,\\
\Sigma_x^{st} \big(\mathcal{I} + \mathcal{J}_{t, x}\big) &= \big(\mathcal{I} + \mathcal{J}_{s, x}\big) \Sigma_x^{st}\;,
\end{equs}
for all $s, t \in \mathbf{R}$ and $x, y \in \mathbf{R}^d$.
\end{definition}
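\begin{remark}
For example, if $\tau \in \mathcal{T}_\alpha$ with $\alpha + \beta \leq 0$, then the sum in \eqref{e:JDef} is empty, so that $\mathcal{J}_{t, x} \tau = 0$ and \eqref{e:PiIntegral} reduces to
\begin{equ}
\bigl(\Pi^t_{x} \mathcal{I} \tau\bigr) (y) = \int_{\mathbf{R}} \langle \Pi^{s}_{x} \Sigma_x^{s t} \tau, K_{t-s}(y - \cdot)\rangle\, ds\;.
\end{equ}
For symbols with $|\tau| + \beta > 0$, the polynomial $\mathcal{J}_{t, x} \tau$ subtracts the truncated Taylor expansion of this convolution at the base point, which is what makes it possible for the analytical bound \eqref{e:PiGammaBound} to hold for $\mathcal{I} \tau$ at homogeneity $|\tau| + \beta$.
\end{remark}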
\begin{remark}
We define the integrals in \eqref{e:PiIntegral} and \eqref{e:JDef} as sums of the same integrals, but using the functions $K^{(n)}$ from the expansion \eqref{e:KernelExpansion}. Since these integrals coincide with those from \cite{Hai14} for the model \eqref{e:ModelTilde}, it follows from \cite[Lem.~5.19]{Hai14} that these sums converge absolutely, and hence the expressions in \eqref{e:PiIntegral} and \eqref{e:JDef} are well defined.
\end{remark}
\begin{remark}
The identities \eqref{e:GammaSigmaIntegral} should be viewed as defining
$\Gamma_{xy}^t \mathcal{I}\tau$ and $\Sigma_x^{st} \mathcal{I} \tau$ in terms of $\Gamma_{xy}^t \tau$,
$\Sigma_x^{st} \tau$, and \eqref{e:JDef}.
\end{remark}
With all these notations at hand, we introduce the following operator acting on modelled distributions
$H \in \mathcal{D}^{\gamma, \eta}_T(Z)$ with $\gamma + \beta > 0$:
\begin{equ}[e:KDef]
\bigl(\mathcal{K}_{\gamma} H\bigr)_t(x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \mathcal{I} H_t(x) + \mathcal{J}_{t, x} H_t(x) + \bigl(\mathcal{N}_{\gamma} H\bigr)_t(x)\;.
\end{equ}
Here, the last term is $\poly{\mathcal{T}}$-valued and is given by
\begin{equ}[e:NDef]
\bigl(\mathcal{N}_{\gamma} H\bigr)_t(x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sum_{\sabs{k} < \gamma + \beta} \frac{X^k}{k!} \int_{\mathbf{R}} \langle\mathcal{R}_s H_s - \Pi^{s}_{x} \Sigma_x^{s t} H_t(x), D^k K_{t-s}(x - \cdot)\rangle\, ds\;,
\end{equ}
where as before $k \in \mathbf{N}^{d+1}$ and the derivative $D^k$ is in time-space, see Definition~\ref{d:Reconstruct} for consistency of notation.
\begin{remark}
It follows from Remark~\ref{r:OriginalReconstruct} and the proof of \cite[Thm.~5.12]{Hai14} that the integral in \eqref{e:NDef} is well defined if we express it as a sum of the respective integrals with the functions $K^{(n)}$ in place of $K$. (See also the definition of the operator $\bf{R}^+$ in \cite[Sec. 7.1]{Hai14}.)
\end{remark}
The modelled distribution $\mathcal{K}_{\gamma} H$ represents the space-time convolution of $H$ with $K$, and the following result shows that this action ``improves'' regularity by $\beta$.
\begin{theorem}\label{t:Integration}
Let $\mathscr{T}=(\mathcal{T}, \mathcal{G})$ be a regularity structure with minimal homogeneity $\alpha$, let $\mathcal{I}$ be an abstract integration map of integer order $\beta > 0$, let $K$ be a kernel regularising of order $\beta$, and let $Z = (\Pi, \Gamma, \Sigma)$ be a model which realises $K$ for $\mathcal{I}$. Furthermore, let $\gamma > 0$, $\eta < \gamma$, $\eta > -\mathfrak{s}_0$, $\gamma < \eta + \mathfrak{s}_0$, $\gamma + \beta \notin \mathbf{N}$, $\alpha + \beta > 0$, and let $r > -\lfloor \alpha \rfloor$, $r > \gamma + \beta$ in Definition~\ref{d:Kernel}.
Then $\mathcal{K}_\gamma$ maps $\mathcal{D}^{\gamma, \eta}_T(Z)$ into $\mathcal{D}^{\bar{\gamma}, \bar{\eta}}_T(Z)$, where $\bar{\gamma} = \gamma + \beta$, $\bar{\eta} = \eta \wedge \alpha + \beta$, and for any $H \in \mathcal{D}^{\gamma, \eta}_T(Z)$ the following bound holds
\begin{equ}[e:Integration]
\vert\!\vert\!\vert \mathcal{K}_{\gamma} H \vert\!\vert\!\vert_{\bar{\gamma}, \bar{\eta}; T} \lesssim \vert\!\vert\!\vert H \vert\!\vert\!\vert_{\gamma, \eta; T} \Vert \Pi \Vert_{\gamma; T} \Vert \Sigma \Vert_{\gamma; T} \bigl(1 + \Vert \Gamma \Vert_{\bar{\gamma}; T} + \Vert \Sigma \Vert_{\bar{\gamma}; T}\bigr)\;.
\end{equ}
Furthermore, for every $t \in (0, T]$, one has the identity
\begin{equ}[e:IntegralIdentity]
\mathcal{R}_t \bigl(\mathcal{K}_{\gamma} H\bigr)_t(x) = \int_{0}^t \langle\mathcal{R}_s H_s, K_{t-s}(x - \cdot)\rangle\, ds\;.
\end{equ}
Let $\bar{Z} = (\bar{\Pi}, \bar{\Gamma}, \bar{\Sigma})$ be another model realising $K$ for $\mathcal{I}$, which satisfies the same assumptions, and let $\bar{\mathcal{K}}_{\gamma}$ be defined by \eqref{e:KDef} for this model. Then one has
\begin{equ}[e:IntegrationDistance]
\vert\!\vert\!\vert \mathcal{K}_{\gamma} H; \bar{\mathcal{K}}_{\gamma} \bar{H} \vert\!\vert\!\vert_{\bar{\gamma}, \bar{\eta}; T} \lesssim \vert\!\vert\!\vert H; \bar{H} \vert\!\vert\!\vert_{\gamma, \eta; T} + \vert\!\vert\!\vert Z; \bar{Z} \vert\!\vert\!\vert_{\bar{\gamma}; T}\;,
\end{equ}
for all $H \in \mathcal{D}^{\gamma, \eta}_T(Z)$ and $\bar{H} \in \mathcal{D}^{\gamma, \eta}_T(\bar{Z})$. Here, the proportionality constant depends on $\vert\!\vert\!\vert H \vert\!\vert\!\vert_{\gamma, \eta; T}$, $\vert\!\vert\!\vert \bar{H} \vert\!\vert\!\vert_{\gamma, \eta; T}$ and the norms on the models $Z$ and $\bar{Z}$ involved in the estimate \eqref{e:Integration}.
\end{theorem}
\begin{proof}
In view of Remarks \ref{r:OriginalModel} and \ref{r:OriginalReconstruct}, the required bounds on the components of $(\mathcal{K}_\gamma H)_t(x)$ and $(\mathcal{K}_\gamma H)_{t}(x) - \Sigma^{t s}_x (\mathcal{K}_\gamma H)_{s}(x)$, as well as on the components of $(\mathcal{K}_{\gamma} H)_t(y) - \Gamma_{y x}^t (\mathcal{K}_{\gamma} H)_t(x)$ with non-integer homogeneities, can be obtained in exactly the same way as in \cite[Prop.~6.16]{Hai14}. (See the definition of the operator $\bf{R}^+$ in \cite[Sec.~7.1]{Hai14}.)
In order to get the required bounds on the components of $(\mathcal{K}_\gamma H)_{t}(x) - \Gamma^{t}_{x y} (\mathcal{K}_\gamma H)_{t}(y)$ with integer homogeneities, we need to modify the proof of \cite[Prop.~6.16]{Hai14}. The reason is that our definition of modelled distributions is slightly different from the one in \cite[Def.~6.2]{Hai14} (see Remark~\ref{r:ModelledDistrib}). For this reason we only have to consider the two regimes $c 2^{-n + 1} \leq |x - y|$ and $c 2^{-n + 1} > |x - y|$ in the proof of \cite[Prop.~6.16]{Hai14}, where $c$ is the constant from Definition~\ref{d:Kernel}. The only place in the proof which requires special treatment is the derivation of the estimate
\begin{equ}
\Big|\int_{\mathbf{R}} \langle\mathcal{R}_s H_s - \Pi^{s}_{x} H_s(x), D^k K^{(n)}_{t-s}(x - \cdot)\rangle\, ds\Big| \lesssim 2^{(\sabs{k} - \gamma - \beta)n} \onorm{t}^{\eta - \gamma}\;,
\end{equ}
which in our case follows immediately from Theorem~\ref{t:Reconstruction} and Definition~\ref{d:Kernel}. It is at this point that we need $\gamma - \eta < \mathfrak{s}_0$, in order for the singularity in time to be integrable. Moreover, we use the same argument as in the proof of \cite[Thm.~7.1]{Hai14} to make sure that the time interval does not increase.
With the respective modifications of the proof of \cite[Prop.~6.16]{Hai14}, one can also show that \eqref{e:IntegralIdentity} and \eqref{e:IntegrationDistance} hold.
\end{proof}
\section{Solutions to parabolic stochastic PDEs}
\label{s:PDEs}
We consider a general parabolic stochastic PDE of the form
\begin{equ}[e:SPDE]
\partial_t u = A u + F(u, \xi)\;, \qquad u(0, \cdot) = u_0(\cdot)\;,
\end{equ}
on $\mathbf{R}_+ \times \mathbf{R}^d$, where $u_0$ is the initial data, $\xi$ is a rough noise, $F$ is a function of $u$ and $\xi$, which in general depends on the space-time point $z$ and is affine in $\xi$, and $A$ is a differential operator such that $\partial_t - A$ has a Green's function $G$, i.e. $G$ is the distributional solution of $(\partial_t - A) G = \delta_0$. Then we require the following assumption to be satisfied.
\begin{assumption}\label{a:Operator}
The operator $A$ is given by $Q(\nabla)$, for $Q$ a homogeneous polynomial on $\mathbf{R}^d$ of some even degree $\beta > 0$. Its Green's function $G : \mathbf{R}^{d+1} \setminus \{0\} \to \mathbf{R}$ is smooth, non-anticipative, i.e. $G_{t} = 0$ for $t \leq 0$, and for $\lambda > 0$ satisfies the scaling relation
\begin{equ}
\lambda^{d} G_{\lambda^{\beta} t}(\lambda x) = G_{t}(x)\;.
\end{equ}
\end{assumption}
\begin{remark}
One can find in \cite{Hor55} precise conditions on $Q$ such that $G$ satisfies Assumption~\ref{a:Operator}.
\end{remark}
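\begin{remark}
The simplest example, included here only as an illustration, is $A = \Delta$, i.e. $Q(\nabla) = \sum_{i=1}^d \partial_i^2$ with $\beta = 2$. The corresponding Green's function is the heat kernel
\begin{equ}
G_t(x) = \frac{\mathbf{1}_{t > 0}}{(4 \pi t)^{d/2}}\, e^{- |x|^2 / 4 t}\;,
\end{equ}
which is smooth away from the origin, non-anticipative and satisfies $\lambda^d G_{\lambda^2 t}(\lambda x) = G_t(x)$, so that Assumption~\ref{a:Operator} holds with the parabolic scaling $\mathfrak{s} = (2, 1, \ldots, 1)$.
\end{remark}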
In order to apply the abstract integration developed in the previous section, we would like the localised singular part of $G$ to have the properties from Definition~\ref{d:Kernel}. The following result, which is a consequence of \cite[Lem.~7.7]{Hai14}, shows that this is indeed the case.
\begin{lemma}\label{l:GreenDecomposition}
Let us consider functions $u$ supported in $\mathbf{R}_+ \times \mathbf{R}^d$ and periodic in the spatial variable with some fixed period. If Assumption~\ref{a:Operator} is satisfied with some $\beta > 0$, then we can write $G = K + R$, in such a way that the identity
\begin{equ}
\bigl(G \star u\bigr)(z) = \bigl(K \star u\bigr)(z) + \bigl(R \star u\bigr)(z)\;,
\end{equ}
holds for every such function $u$ and every $z \in (-\infty, 1] \times \mathbf{R}^{d}$, where $\star$ is the space-time convolution. Furthermore, $K$ has the properties from Definition~\ref{d:Kernel} with the parameters $\beta$ and some arbitrary (but fixed) value $r$, and the scaling $\mathfrak{s} = (\beta, 1, \ldots, 1)$. The function $R$ is smooth, non-anticipative and compactly supported.
\end{lemma}
In particular, it follows from Lemma~\ref{l:GreenDecomposition} that for any $\gamma > 0$ and any periodic $\zeta_t \in \mathcal{C}^{\alpha}\big(\mathbf{R}^d\big)$, with $t \in \mathbf{R}$, which is allowed to have an integrable singularity at $t = 0$, we can define
\begin{equ}[e:RDef]
\left(R_{\gamma} \zeta\right)_t(x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sum_{\sabs{k} < \gamma} \frac{X^k}{k!} \int_\mathbf{R} \langle \zeta_s, D^k R_{t-s}(x - \cdot) \rangle\, ds\;,
\end{equ}
where $k \in \mathbf{N}^{d+1}$ and $D^k$ is taken in time-space.
\subsection{Regularity structures for locally subcritical stochastic PDEs}
\label{ss:RegStruct}
In this section we provide conditions on the equation \eqref{e:SPDE}, under which one can build a regularity structure for it. More precisely, we consider the mild form of equation \eqref{e:SPDE}:
\begin{equ}[e:MildGeneral]
u = G \star F(u, \xi) + S u_0\;,
\end{equ}
where $\star$ is the space-time convolution, $S$ is the semigroup generated by $A$ and
$G$ is its fundamental solution. We will always assume that we are in a subcritical
setting, as defined in \cite[Sec.~8]{Hai14}.
It was shown in \cite[Sec.~8.1]{Hai14} that it is possible to build a regularity structure $\mathscr{T} = (\mathcal{T}, \mathcal{G})$ for a locally subcritical equation and to reformulate it as a fixed point problem in an associated space of modelled distributions. We do not want to give a precise description of this regularity structure, see for example
\cite{Hai14,CDM} for details in the case of $\Phi^4_3$.
Let us just mention that we can recursively build two sets of symbols, $\mathcal{F}$ and $\mathcal{U}$.
The set $\mathcal{F}$ contains $\Xi$, $\mathbf{1}$, $X_i$, as well as some of the symbols that can
be built recursively from these basic building blocks by the operations
\begin{equ}[e:basicOps]
\tau \mapsto \mathcal{I}(\tau)\;,\qquad (\tau,\bar \tau) \mapsto \tau\bar \tau\;,
\end{equ}
subject to the equivalences $\tau \bar \tau = \bar \tau \tau$, $\mathbf{1} \tau = \tau$,
and $\mathcal{I}(X^k) = 0$.
These symbols are involved in the description of the right-hand side of \eqref{e:SPDE}. The set $\mathcal{U} \subset \mathcal{F}$, on the other hand, contains only those symbols which
are used in the description of the solution itself, which are either of the form
$X^k$ or of the form $\mathcal{I}(\tau)$ with $\tau \in \mathcal{F}$. The
model space $\mathcal{T}$ is then defined as $\mathrm{span}\{\tau \in \mathcal{F}\,:\, |\tau| \le r\}$ for a sufficiently large $r > 0$, the set of all (real) linear combinations of symbols in $\mathcal{F}$ of homogeneity $|\tau| \le r$, where
$\tau \mapsto |\tau|$ is given by
\begin{equ}[e:defDegree]
|\mathbf{1}| = 0\;,\quad |X_i| = \mathfrak{s}_i\;,\quad
|\Xi| = \alpha, \quad|\mathcal{I} (\tau)| = |\tau| + \beta\;,\quad
|\tau \bar \tau| = |\tau| + |\bar \tau|\;.
\end{equ}
In the situation of interest, namely the $\Phi^4_3$ model, one chooses $\beta = 2$
and $\alpha = -{5\over 2}-\kappa$ for some $\kappa > 0$ sufficiently small.
Subcriticality then guarantees that $\mathcal{T}$ is finite-dimensional.
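For instance, with these values one obtains from \eqref{e:defDegree} the homogeneities
\begin{equ}
|\mathcal{I}(\Xi)| = -\frac{1}{2} - \kappa\;, \qquad |\mathcal{I}(\Xi)^3| = -\frac{3}{2} - 3\kappa\;, \qquad \bigl|\mathcal{I}\bigl(\mathcal{I}(\Xi)^3\bigr)\bigr| = \frac{1}{2} - 3\kappa\;.
\end{equ}
Each application of $\mathcal{I}$ increases the homogeneity by $2$, while each multiplication by $\mathcal{I}(\Xi)$ decreases it only by $\frac{1}{2} + \kappa$, which is the mechanism behind subcriticality in this example.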
We will also write $\mathcal{T}_\mathcal{U}$ for the linear span of $\mathcal{U}$ in $\mathcal{T}$.
One can also build a structure group $\mathcal{G}$ acting on $\mathcal{T}$ in such a way that
the operation $\mathcal{I}$ satisfies the assumptions of Definition~\ref{d:AbstractIntegration}
(corresponding to the convolution operation with the kernel $K$), and such that
it acts on $\poly{\mathcal{T}}$ by translations as required.
Let now $Z$ be a model realising $K$ for $\mathcal{I}$, we denote by $\mathcal{R}$, $\mathcal{K}_{\bar{\gamma}}$ and $R_\gamma$ the reconstruction operator, and the corresponding operators \eqref{e:KDef} and \eqref{e:RDef}.
We also use the notation $\mathcal{P} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \mathcal{K}_{\bar \gamma} + R_\gamma \mathcal{R}$
for the operator representing convolution with the heat kernel.
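Combining \eqref{e:IntegralIdentity} with \eqref{e:RDef}, and using the fact that the reconstruction of a $\poly{\mathcal{T}}$-valued modelled distribution is simply its $\mathbf{1}$-coefficient, one obtains, at least formally,
\begin{equ}
\bigl(\mathcal{R}_t \bigl(\mathcal{P} H\bigr)_t\bigr)(x) = \int_{0}^{t} \langle \mathcal{R}_s H_s, G_{t-s}(x - \cdot)\rangle\, ds\;,
\end{equ}
so that $\mathcal{P}$ indeed represents space-time convolution with the full Green's function $G = K + R$.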
With these notations at hand, it was shown in \cite{Hai14} that
one can associate to \eqref{e:MildGeneral} the fixed point problem
in $\mathcal{D}^{\gamma, \eta}_T(Z)$ given by
\begin{equ}[e:AbstractEquationBasic]
U = \mathcal{P} F(U) + Su_0\;,
\end{equ}
for a suitable function (which we call again $F$)
which ``represents'' the nonlinearity of the SPDE
in the sense of \cite[Sec.~8]{Hai14} and which is such that
$\mathcal{I} F(\tau) \in \mathcal{T}$ for every $\tau \in \mathcal{T}_\mathcal{U}$.
In our running example, we would take
\begin{equ}[e:ourF]
F(\tau) = - \mathcal{Q}_{\le 0} \bigl(a \tau + \lambda \tau^3\bigr) + \Xi\;,
\end{equ}
where $\mathcal{Q}_{\le 0}$ denotes the canonical projection onto $\mathcal{T}_{\le 0}$ defined in Remark~\ref{r:QOperators}%
\footnote{The reason for adding this projection is to guarantee that $\mathcal{I} F$
maps $\mathcal{T}_\mathcal{U}$ into $\mathcal{T}$, since we truncated $\mathcal{T}$ at homogeneity $r$. Note also that the presence
of this projection does not affect the outcome of the reconstruction operator when applied to $F(U)$.} and $a$ and $\lambda$ are the constants from \eqref{e:Phi}.
The problem we encounter is that since we impose that our models are
functions of time, there exists no model for which $\Pi^t_x \Xi = \xi$ with
$\xi$ a typical realisation of space-time white noise.
We would like to replace \eqref{e:AbstractEquationBasic} by an equivalent fixed
point problem that circumvents this problem, and this is the content of the
next two subsections.
\subsection{Truncation of regularity structures}
\label{sec:trunc}
As just discussed, we cannot in general define a suitable inhomogeneous model for the
regularity structure $\mathscr{T} = (\mathcal{T}, \mathcal{G})$, so we introduce the
following truncation procedure, which amounts to simply removing the problematic
symbols.
\begin{definition}\label{d:TruncSets}
Consider a set of {\it generators} $\gen{\mathcal{F}} \subset \mathcal{F}$ such that $\poly{\mathcal{F}} \subset \gen{\mathcal{F}}$
and such that $\gen{\mathcal{T}} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \mathrm{span}\{\tau \in \gen{\mathcal{F}}\,:\, |\tau| \le r\} \subset \mathcal{T}$ is closed under the action of $\mathcal{G}$.
We then define the corresponding
{\it generating regularity structure} $\gen{\mathscr{T}} = (\gen{\mathcal{T}}, \mathcal{G})$.
Moreover, we define $\hat{\mathcal{F}}$ as the subset of $\mathcal{F}$ generated by $\gen{\mathcal{F}}$ via
the two operations \eqref{e:basicOps}, and we assume that $\gen{\mathcal{F}}$ was chosen
in such a way that $\mathcal{U} \subset \hat{\mathcal{F}}$, with $\mathcal{U}$ as in the previous section.
Finally, we define the {\it truncated regularity structure} $\hat{\mathscr{T}} = (\hat{\mathcal{T}}, \mathcal{G})$
with $\hat{\mathcal{T}} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \mathrm{span}\{\tau \in \hat{\mathcal{F}}\,:\, |\tau| \le r\} \subset \mathcal{T}$.
\end{definition}
\begin{remark}
Note that $\hat{\mathscr{T}}$ is indeed a regularity structure since
$\hat{\mathcal{T}}$ is automatically closed under $\mathcal{G}$. This can easily be verified by induction
using the definition of $\mathcal{G}$ given in \cite{Hai14}.
A set $\gen{\mathcal{F}}$ with these properties always exists, because one can take either
$\gen{\mathcal{F}} = \mathcal{F}$ or $\gen\mathcal{F} = \{\Xi\} \cup \poly\mathcal{F}$. In both of these examples,
one simply has
$\hat{\mathcal{F}} = \mathcal{F}$, but in the case of \eqref{e:Phi}, it turns out to be convenient
to make a choice for which this is not the case (see Section~\ref{s:DPhi} below).
\end{remark}
\subsection{A general fixed point map}
\label{ss:FixedPointMap}
We now reformulate \eqref{e:SPDE}, with the operator $A$ such that Assumption~\ref{a:Operator} is satisfied, using the regularity structure from the previous section, and show that the
corresponding fixed point problem admits local solutions.
For an initial condition $u_0$ in \eqref{e:SPDE} with ``sufficiently nice'' behaviour at infinity, we can define the function $S_t u_0 : \mathbf{R}^d \to \mathbf{R}$, which has a singularity at $t=0$, where as before $S_t$ is the semigroup generated by $A$. In particular, the following result, proved in \cite[Lem.~7.5]{Hai14}, gives a precise description of this singularity:
\begin{lemma}\label{l:InitialData}
For some $\eta < 0$, let $u_0 \in \mathcal{C}^\eta(\mathbf{R}^d)$ be periodic.
Then, for every $\gamma > 0$
and every $T > 0$, the map $(t,x) \mapsto S_t u_0(x)$ can be lifted to
$\mathcal{D}^{\gamma, \eta}_T$ via its Taylor expansion. Furthermore, one has the bound
\begin{equ}[e:InitBound]
\vert\!\vert\!\vert S u_0 \vert\!\vert\!\vert_{\gamma, \eta; T} \lesssim \Vert u_0 \Vert_{\mathcal{C}^\eta}\;.
\end{equ}
\end{lemma}
Before reformulating \eqref{e:SPDE}, we make some assumptions on its nonlinear term $F$.
For a regularity structure $\mathscr{T} = (\mathcal{T}, \mathcal{G})$, let $\hat{\mathscr{T}} = (\hat{\mathcal{T}}, \mathcal{G})$ be as in Definition~\ref{d:TruncSets} for a suitable set $\gen{\mathcal{F}}$. In what follows, we consider models on $\hat{\mathscr{T}}$ and denote by $\mathcal{D}^{\gamma, \eta}_{T}$ the respective spaces of modelled distributions.
We also assume that we are given a function $F\colon \mathcal{T}_\mathcal{U} \to \mathcal{T}$ as above (for example
\eqref{e:ourF}),
and we make the following assumption on $F$.
For some fixed $\bar \gamma > 0$, $\eta \in \mathbf{R}$ we choose, for
any model $Z$ on $\hat \mathscr{T}$,
elements $F_0(Z), I_0(Z) \in \mathcal{D}^{\bar \gamma, \eta}_T(Z)$ such that, for
every $z$, $I_0(z) \in \hat{\mathcal{T}}$,
$I_0(z) - \mathcal{I} F_0(z) \in \poly{\mathcal{T}}$ and such that, setting
\begin{equ}[e:NonlinAs]
\hat{F}(z, \tau) \stackrel{\mathclap{\mbox{\tiny def}}}{=} F(z, \tau) - F_0(z) \;,
\end{equ}
$\hat F(z,\cdot)$ maps $\{I_0(z) + \tau : \tau \in \hat{\mathcal{T}} \cap \mathcal{T}_\mathcal{U}\}$ into $\hat{\mathcal{T}}$.
Here we suppressed the argument $Z$ for conciseness by writing
for example $I_0(z)$ instead of $I_0(Z)(z)$.
\begin{remark}
Since it is the \textit{same} structure group $\mathcal{G}$ acting on both $\mathcal{T}$
and $\hat{\mathcal{T}}$, the condition $F_0 \in \mathcal{D}^{\bar \gamma, \eta}_T$ makes sense for
a given model on $\hat\mathscr{T}$, even
though $F_0(z)$ takes values in all of $\mathcal{T}$ rather than just $\hat{\mathcal{T}}$.
\end{remark}
Given such a choice of $I_0$ and $F_0$ and given $H : \mathbf{R}^{d+1} \to \hat{\mathcal{T}} \cap \mathcal{T}_\mathcal{U}$, we denote by
$\hat F(H)$ the function
\begin{equ}[e:NonlinearTerm]
\bigl(\hat F(H)\bigr)_t(x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \hat F\bigl((t, x), H_t(x)\bigr)\;.
\end{equ}
With this notation, we replace the problem \eqref{e:AbstractEquationBasic} by the problem
\begin{equ}[e:AbstractEquation]
U = \mathcal{P} \hat F(U) + Su_0 + I_0\;.
\end{equ}
This suggests that one should really think of $I_0$ as being given by
$I_0 = \mathcal{P} F_0$ since, at least formally, this would then turn \eqref{e:AbstractEquation}
into \eqref{e:AbstractEquationBasic}.
The advantage of \eqref{e:AbstractEquation} is that it makes sense for any model on
$\hat\mathscr{T}$ and does not require a model on all of $\mathscr{T}$.
We then assume that $\hat F$, $I_0$ and $F_0$ satisfy the following conditions.
\begin{assumption}\label{a:Nonlin}
In the above context, we assume that there exists $\gamma \ge \bar \gamma$ such
that, for every $B > 0$ there exists a constant $C > 0$ such that the bounds
\begin{equs}\label{e:Lipschitz}
\vert\!\vert\!\vert \hat F(H); \hat F(\bar H) \vert\!\vert\!\vert_{\bar{\gamma}, \bar{\eta}; T} &\leq C \left( \vert\!\vert\!\vert H; \bar H \vert\!\vert\!\vert_{\gamma, \eta; T} + \vert\!\vert\!\vert Z; \bar{Z} \vert\!\vert\!\vert_{\gamma; T} \right),\\
\vert\!\vert\!\vert I_0(Z); I_0(\bar Z) \vert\!\vert\!\vert_{\bar{\gamma}, \bar{\eta}; T} \leq C \vert\!\vert\!\vert Z; \bar{Z}& \vert\!\vert\!\vert_{\gamma; T}\;,\quad
\vert\!\vert\!\vert F_0(Z); F_0(\bar Z) \vert\!\vert\!\vert_{\bar{\gamma}, \bar{\eta}; T} \leq C \vert\!\vert\!\vert Z; \bar{Z} \vert\!\vert\!\vert_{\gamma; T}\;,
\end{equs}
hold for any two models $Z$, $\bar{Z}$ with $\vert\!\vert\!\vert Z \vert\!\vert\!\vert_{\gamma; T} + \vert\!\vert\!\vert \bar{Z} \vert\!\vert\!\vert_{\gamma; T} \leq B$, and for $H \in \mathcal{D}^{\gamma, \eta}_T(Z)$, $\bar H \in \mathcal{D}^{\gamma, \eta}_T(\bar{Z})$ such that $\vert\!\vert\!\vert H \vert\!\vert\!\vert_{\gamma, \eta; T} + \vert\!\vert\!\vert \bar H \vert\!\vert\!\vert_{\gamma, \eta; T} \leq B$.
\end{assumption}
\begin{remark}
The bounds in Assumption~\ref{a:Nonlin} can usually be checked easily for a polynomial nonlinearity $F$ in \eqref{e:MildGeneral}. See Lemma~\ref{l:PhiLipschiz} below for the corresponding proof in the case when $F$ is given by \eqref{e:ourF}.
\end{remark}
The following theorem provides existence and uniqueness of a local solution to this equation.
\begin{theorem}\label{t:FixedMap}
In the described context, let $\alpha \stackrel{\mathclap{\mbox{\tiny def}}}{=} \min \hat{\mathcal{A}}$, where $\hat{\mathcal{A}}$ is the set of homogeneities of $\hat{\mathcal{T}}$, and let the abstract integration map $\mathcal{I}$ be of order $\beta > -\alpha$. Furthermore, let the values $\gamma \geq \bar{\gamma} > 0$ and $\eta, \bar{\eta} \in \mathbf{R}$ from Assumption~\ref{a:Nonlin} satisfy $\eta < \bar{\eta} \wedge \alpha + \beta$, $\gamma < \bar{\gamma} + \beta$ and $\bar{\eta} > -\beta$.
Then, for every model $Z$ as above, and for every periodic $u_0 \in \mathcal{C}^\eta(\mathbf{R}^d)$, there exists a time $T_\star \in (0, +\infty]$ such that, for every $T < T_\star$ the equation \eqref{e:AbstractEquation} admits a unique solution $U \in \mathcal{D}^{\gamma, \eta}_T(Z)$.
Furthermore, if $T_\star < \infty$, then
\begin{equ}
\lim_{T \to T_\star} \Vert \mathcal{R}_T \mathcal{S}_{T} (u_0, Z)_T \Vert_{\mathcal{C}^\eta} = \infty\;,
\end{equ}
where $\mathcal{S}_T : (u_0, Z) \mapsto U$ is the solution map. Finally, for every $T < T_\star$, the solution map $\mathcal{S}_T$ is jointly Lipschitz continuous in a neighbourhood around $(u_0, Z)$ in the sense that, for any $B > 0$ there is $C > 0$ such that, if $\bar{U} = \mathcal{S}_T(\bar{u}_0, \bar{Z})$ for some initial data $(\bar{u}_0, \bar{Z})$, then one has the bound $\vert\!\vert\!\vert U; \bar{U} \vert\!\vert\!\vert_{\gamma, \eta; T} \leq C \delta$, provided $\Vert u_0 - \bar{u}_0 \Vert_{\mathcal{C}^{\eta}} + \vert\!\vert\!\vert Z; \bar{Z} \vert\!\vert\!\vert_{\gamma; T} \leq \delta$, for any $\delta \in (0,B]$.
\end{theorem}
\begin{proof}
See \cite[Thm.~7.8]{Hai14}, combined with \cite[Prop.~7.11]{Hai14}.
Note that since we consider inhomogeneous models, we have no problems
in evaluating $\mathcal{R}_t U_t$.
\end{proof}
\begin{definition}\label{d:SPDESolution}
In the setting of Theorem~\ref{t:FixedMap}, let $U$ be the unique solution to the equation~\eqref{e:AbstractEquation} on $[0, T_\star)$. Then for $t < T_\star$ we define the solution to \eqref{e:SPDE} by
\begin{equ}[e:SPDESolution]
u_t(x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \bigl(\mathcal{R}_t U_t\bigr)(x)\;.
\end{equ}
\end{definition}
\begin{remark}\label{r:ClassicalSolution}
If the noise $\xi$ in \eqref{e:SPDE} is smooth, so that this equation can be solved in the classical sense, one can see that the reconstruction operator satisfies
\begin{equ}
\bigl(\mathcal{R}_t U_t\bigr)(x) = \bigl(\Pi_x^t U_t(x)\bigr)(x)\;,
\end{equ}
and the solution \eqref{e:SPDESolution} coincides with the classical solution.
\end{remark}
\section{Discrete models and modelled distributions}
\label{s:DModels}
In order to be able to consider discretisations of the equations whose solutions were provided in Section~\ref{s:PDEs}, we introduce the discrete counterparts of inhomogeneous models and modelled distributions. In this section we use the following notation: for $N \in \mathbf{N}$, we denote by $\varepsilon \stackrel{\mathclap{\mbox{\tiny def}}}{=} 2^{-N}$ the mesh size of the grid $\Lambda_\varepsilon^d \stackrel{\mathclap{\mbox{\tiny def}}}{=} \bigl(\varepsilon \mathbf{Z}\bigr)^d$, and we fix some scaling $\mathfrak{s}=(\mathfrak{s}_0, 1, \ldots, 1)$ of $\mathbf{R}^{d+1}$ with an integer $\mathfrak{s}_0 > 0$.
\subsection{Definitions and the reconstruction theorem}
Now we define discrete analogues of the objects from Sections~\ref{ss:Models} and \ref{ss:ModelledDistr}.
\begin{definition}\label{d:DModel}
Given a regularity structure $\mathscr{T}$ and $\varepsilon>0$, a {\it discrete model} $(\Pi^\varepsilon, \Gamma^\varepsilon, \Sigma^\varepsilon)$ consists of the collections of maps
\begin{equ}
\Pi_x^{\varepsilon, t}: \mathcal{T} \to \mathbf{R}^{\Lambda_\varepsilon^d}\;, \qquad \Gamma^{\varepsilon, t} : \Lambda^d_\varepsilon \times \Lambda^d_\varepsilon \to \mathcal{G}\;, \qquad \Sigma^{\varepsilon}_x : \mathbf{R} \times \mathbf{R} \to \mathcal{G}\;,
\end{equ}
parametrised by $t \in \mathbf{R}$ and $x \in \Lambda_\varepsilon^d$, which have all the algebraic properties of their continuous counterparts in Definition~\ref{d:Model}, with the spatial variables restricted to the grid. Additionally, we require $\bigl(\Pi^{\varepsilon, t}_x \tau\bigr) (x) = 0$, for all $\tau \in \mathcal{T}_l$ with $l > 0$, and all $x \in \Lambda^d_\varepsilon$ and $t \in \mathbf{R}$.
\end{definition}
We define the quantities $\Vert \Pi^\varepsilon \Vert^{(\varepsilon)}_{\gamma; T}$ and $\Vert \Gamma^\varepsilon \Vert_{\gamma;T}^{(\varepsilon)}$ to be the smallest constants $C$ such that the bounds \eqref{e:PiGammaBound} hold uniformly in $x, y \in \Lambda^d_\varepsilon$, $t \in \mathbf{R}$, $\lambda \in [\varepsilon,1]$ and with the discrete pairing \eqref{e:DPairing} in place of the standard one. The quantity $\Vert \Sigma^\varepsilon \Vert_{\gamma;T}^{(\varepsilon)}$ is defined as the smallest constant $C$ such that the bounds
\begin{equ}[e:DSigmaBound]
\Vert \Sigma^{\varepsilon, s t}_{x} \tau \Vert_{m} \leq C \Vert \tau \Vert \bigl(|t - s|^{1/\mathfrak{s}_0} \vee \varepsilon\bigr)^{l - m}\;,
\end{equ}
hold uniformly in $x \in \Lambda_\varepsilon^d$ and the other parameters as in \eqref{e:SigmaBound}.
We measure the time regularity of $\Pi^\varepsilon$ as in \eqref{e:PiTimeBound}, replacing the continuous objects by their discrete analogues, and using $|t - s|^{1/\mathfrak{s}_0} \vee \varepsilon$ instead of $|t - s|^{1/\mathfrak{s}_0}$ on the right-hand side. All the other quantities $\Vert \cdot \Vert^{(\varepsilon)}$, $\vert\!\vert\!\vert \cdot \vert\!\vert\!\vert^{(\varepsilon)}$, etc.\ are defined by analogy with Remark~\ref{r:ModelNorm}.
\begin{remark}
The fact that $\bigl(\Pi^{\varepsilon, t}_x \tau\bigr) (x) = 0$ if $|\tau| > 0$ does not follow
automatically from the discrete analogue of \eqref{e:PiGammaBound} since these
are only assumed to hold for test functions at scale $\lambda \ge \varepsilon$. We use this
property in the proof of \eqref{e:DIntegralIdentity}.
\end{remark}
\begin{remark}
The weakening of the continuity property of $\Sigma^{\varepsilon,st}_x$ given by \eqref{e:DSigmaBound}
will be used in the analysis of the ``discrete abstract integration'' in Section~\ref{ss:DConvols}.
It allows us to deal with the fact that the discrete heat kernel is discontinuous at $t=0$,
so we simply use uniform bounds on very small time scales
(see \cite[Lem.~6.7]{HMW12} for a simple explanation in a related context).
\end{remark}
For $\gamma, \eta \in \mathbf{R}$ and $T > 0$, for a discrete model $Z^\varepsilon=(\Pi^\varepsilon, \Gamma^\varepsilon, \Sigma^\varepsilon)$ on a regularity structure $\mathscr{T} = (\mathcal{T}, \mathcal{G})$, and for a function $H^\varepsilon : (0, T] \times \Lambda_\varepsilon^d \to \mathcal{T}_{<\gamma}$, we define
\begin{equs}[e:DModelledDistributionNormAbs]
\Vert H^\varepsilon \Vert^{(\varepsilon)}_{\gamma, \eta; T} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sup_{t \in (0,T]} &\sup_{x \in \Lambda_\varepsilon^d} \sup_{l < \gamma} \enorm{t}^{(l - \eta) \vee 0} \Vert H^\varepsilon_t(x) \Vert_l\\
&+ \sup_{t \in (0,T]} \sup_{\substack{x \neq y \in \Lambda^d_\varepsilon \\ | x - y | \leq 1}} \sup_{l < \gamma} \frac{\Vert H^\varepsilon_t(x) - \Gamma^{\varepsilon, t}_{x y} H^\varepsilon_t(y) \Vert_l}{\enorm{t}^{\eta - \gamma} | x - y |^{\gamma - l}}\;,
\end{equs}
where $l \in \mathcal{A}$. Furthermore, we define the norm
\begin{equ}[e:DModelledDistributionNorm]
\vert\!\vert\!\vert H^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\gamma, \eta; T} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \Vert H^\varepsilon \Vert^{(\varepsilon)}_{\gamma, \eta; T} + \sup_{\substack{s \neq t \in (0,T] \\ | t - s | \leq \onorm{t, s}^{\mathfrak{s}_0}}} \sup_{x \in \Lambda_\varepsilon^d} \sup_{l < \gamma} \frac{\Vert H^\varepsilon_t(x) - \Sigma_x^{\varepsilon, t s} H^\varepsilon_{s}(x) \Vert_l}{\enorm{t, s}^{\eta - \gamma} \bigl(|t - s|^{1/\mathfrak{s}_0} \vee \varepsilon\bigr)^{\gamma - l}}\;,
\end{equ}
where the quantities $\enorm{t}$ and $\enorm{t, s}$ are defined below \eqref{e:DHolderDist}. We will call such functions $H^\varepsilon$ {\it discrete modelled distributions}.
\begin{remark}\label{r:DDistrMult}
It is easy to see that the properties of multiplication of modelled distributions from \cite[Sec.~6.2]{Hai14} can be translated mutatis mutandis to the discrete case.
\end{remark}
In contrast to the continuous case, a reconstruction operator for discrete modelled distributions can be defined in a simple way.
\begin{definition}\label{d:DReconstruct}
Given a discrete model $Z^\varepsilon = (\Pi^\varepsilon, \Gamma^\varepsilon, \Sigma^\varepsilon)$ and a discrete modelled distribution $H^\varepsilon$ we define the {\it discrete reconstruction map} $\mathcal{R}^{\varepsilon}$ by $\mathcal{R}^{\varepsilon}_t = 0$ for $t \leq 0$, and
\begin{equ}[e:DReconstructDef]
\big(\mathcal{R}^{\varepsilon}_t H^\varepsilon_t\big)(x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \big(\Pi_x^{\varepsilon, t} H^\varepsilon_t(x) \big)(x)\;, \qquad (t, x) \in (0, T] \times \Lambda_\varepsilon^d\;.
\end{equ}
\end{definition}
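\begin{remark}
As a simple sanity check, assume that the polynomial part is realised canonically, i.e. $\bigl(\Pi^{\varepsilon, t}_x X^k\bigr)(y) = (y - x)^k$ for $y \in \Lambda^d_\varepsilon$. If $H^\varepsilon$ takes values in $\poly{\mathcal{T}}$, say $H^\varepsilon_t(x) = \sum_{\sabs{k} < \gamma} h_k(t, x) X^k$, then
\begin{equ}
\big(\mathcal{R}^{\varepsilon}_t H^\varepsilon_t\big)(x) = \sum_{\sabs{k} < \gamma} h_k(t, x)\, (x - x)^k = h_0(t, x)\;,
\end{equ}
i.e. the discrete reconstruction simply reads off the coefficient of $\mathbf{1}$, exactly as in the continuous case.
\end{remark}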
Recalling the definition of the norms from \eqref{e:DHolderDist}, the
following result is a discrete analogue of Theorem~\ref{t:Reconstruction}.
\begin{theorem}
\label{t:DReconstruct}
Let $\mathscr{T}$ be a regularity structure with $\alpha \stackrel{\mathclap{\mbox{\tiny def}}}{=} \min \mathcal{A} < 0$ and $Z^\varepsilon = (\Pi^\varepsilon, \Gamma^\varepsilon, \Sigma^\varepsilon)$ be a discrete model. Then the bound
\begin{equ}
|\langle \mathcal{R}^\varepsilon_t H^\varepsilon_t - \Pi^{\varepsilon, t}_x H^\varepsilon_t(x), \varrho_x^\lambda \rangle_\varepsilon| \lesssim \lambda^\gamma \enorm{t}^{\eta - \gamma} \Vert H^\varepsilon \Vert^{(\varepsilon)}_{\gamma, \eta; T} \Vert \Pi^\varepsilon \Vert^{(\varepsilon)}_{\gamma; T}\;,
\end{equ}
holds uniformly in $\varepsilon$ (see Remark~\ref{r:Uniformity} below) for all discrete modelled distributions $H^\varepsilon$, all $t \in (0,T]$, $x \in \Lambda^d_\varepsilon$, $\varrho \in \mathcal{B}^r_0(\mathbf{R}^d)$ with $r > -\lfloor \alpha \rfloor$, all $\lambda \in [\varepsilon, 1]$.
Let furthermore $\bar{Z}^\varepsilon = (\bar{\Pi}^\varepsilon, \bar{\Gamma}^\varepsilon, \bar{\Sigma}^\varepsilon)$ be another model for $\mathscr{T}$ with the reconstruction operator $\bar{\mathcal{R}}^\varepsilon_t$, and let the maps $\Pi^\varepsilon$ and $\bar \Pi^\varepsilon$ have time regularities $\delta > 0$. Then, for any two discrete modelled distributions $H^\varepsilon$ and $\bar{H}^\varepsilon$, the maps $t \mapsto \mathcal{R}^\varepsilon_t H^\varepsilon_t$ and $t \mapsto \bar \mathcal{R}^\varepsilon_t \bar H^\varepsilon_t$ satisfy
\minilab{e:DReconstructBounds}
\begin{equs}
\Vert \mathcal{R}^\varepsilon H^\varepsilon \Vert^{(\varepsilon)}_{\mathcal{C}^{\tilde{\delta}, \alpha}_{\eta - \gamma, T}} &\lesssim \Vert \Pi^\varepsilon \Vert^{(\varepsilon)}_{\delta, \gamma; T} \bigl(1 + \Vert \Sigma^\varepsilon \Vert^{(\varepsilon)}_{\gamma; T} \bigr) \vert\!\vert\!\vert H^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\gamma, \eta; T}\;,\label{e:DReconstructSpace}\\
\Vert \mathcal{R}^\varepsilon H^\varepsilon - \bar \mathcal{R}^\varepsilon \bar H^\varepsilon \Vert^{(\varepsilon)}_{\mathcal{C}^{\tilde{\delta}, \alpha}_{\eta - \gamma, T}} &\lesssim \vert\!\vert\!\vert H^\varepsilon; \bar H^{\varepsilon} \vert\!\vert\!\vert^{(\varepsilon)}_{\gamma, \eta; T} + \vert\!\vert\!\vert Z^\varepsilon; \bar Z^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\delta, \gamma; T}\;,\label{e:DReconstructTime}
\end{equs}
for any $\tilde{\delta}$ as in Theorem~\ref{t:Reconstruction}. Here, the norms of $H^\varepsilon$ and $\bar H^\varepsilon$ are defined via the models $Z^\varepsilon$ and $\bar Z^\varepsilon$ respectively, and the proportionality constants depend on $\varepsilon$ only via $\vert\!\vert\!\vert H^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\gamma, \eta; T}$, $\vert\!\vert\!\vert \bar H^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\gamma, \eta; T}$, $\vert\!\vert\!\vert Z^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\delta, \gamma; T}$ and $\vert\!\vert\!\vert \bar Z^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\delta, \gamma; T}$.
\end{theorem}
\begin{remark}\label{r:Uniformity}
In the statement of Theorem~\ref{t:DReconstruct} and the following results we actually consider a sequence of discrete models and modelled distributions parametrised by $\varepsilon = 2^{-N}$ with $N \in \mathbf{N}$. By ``uniformity in $\varepsilon$'' we then mean that the estimates hold for all values of $\varepsilon$ with a proportionality constant independent of $\varepsilon$.
\end{remark}
\begin{remark}
To compare a discrete model $Z^\varepsilon = (\Pi^\varepsilon, \Gamma^\varepsilon, \Sigma^\varepsilon)$ to a continuous model $Z = (\Pi, \Gamma, \Sigma)$, we can define
\begin{equs}
\Vert \Pi&; \Pi^\varepsilon \Vert^{(\varepsilon)}_{\delta, \gamma; T} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sup_{\varphi, x, \lambda, l, \tau} \sup_{t \in [-T, T]} \lambda^{-l} | \langle \Pi^t_{x} \tau, \varphi_{x}^\lambda \rangle - \langle \Pi^{\varepsilon, t}_{x} \tau, \varphi_{x}^\lambda \rangle_\varepsilon|\\
&+ \sup_{\varphi, x, \lambda, l, \tau} \sup_{\substack{s \neq t \in [-T, T] \\ |t-s| \leq 1}} \lambda^{-l + \delta} \frac{| \langle \bigl(\Pi^t_{x} - \Pi^s_{x}\bigr) \tau, \varphi_{x}^\lambda \rangle - \langle \bigl(\Pi^{\varepsilon, t}_{x} - \Pi^{\varepsilon, s}_{x}\bigr) \tau, \varphi_{x}^\lambda \rangle_\varepsilon|}{\bigl(|t-s|^{1/\mathfrak{s}_0} \vee \varepsilon\bigr)^{\delta}}\;,
\end{equs}
where the supremum is taken over $\varphi \in \mathcal{B}^r_0$, $x \in \Lambda_\varepsilon^d$, $\lambda \in [\varepsilon, 1]$, $l < \gamma$ and $\tau \in \mathcal{T}_l$ with $\Vert \tau \Vert = 1$. In order to compare discrete and continuous modelled distributions, we use the quantities as in \eqref{ModelledNorms}, but with the respective modifications as in \eqref{e:DModelledDistributionNorm}.
Then one can show, similarly to \eqref{e:ReconstructTime}, that for $H \in \mathcal{D}^{\gamma, \eta}_T(Z)$ and a discrete modelled distribution $H^\varepsilon$ the maps $t \mapsto \mathcal{R}_t H_t$ and $t \mapsto \mathcal{R}^\varepsilon_t H^\varepsilon_t$ satisfy the estimate
\begin{equ}
\Vert \mathcal{R} H; \mathcal{R}^\varepsilon H^\varepsilon \Vert^{(\varepsilon)}_{\mathcal{C}^{\tilde{\delta}, \alpha}_{\eta - \gamma, T}} \lesssim \vert\!\vert\!\vert H; H^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\gamma, \eta; T} + \vert\!\vert\!\vert Z; Z^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\delta, \gamma; T} + \varepsilon^\theta\;,
\end{equ}
for $\tilde \delta > 0$ and $\theta > 0$ small enough. We will however not make use of this
in the present article.
\end{remark}
In order to prove Theorem~\ref{t:DReconstruct}, we need to introduce a multiresolution analysis and its discrete analogue.
\subsubsection{Elements of multiresolution analysis}
\label{ss:MultiresolutionAnalysis}
In this section we provide only the very basics of the multiresolution analysis, which are used in the sequel. For a more detailed introduction and for the proofs of the provided results we refer to \cite{Dau92} and \cite{Mey92}.
One of the remarkable results of \cite{Dau88} is that for every $r > 0$ there exists a compactly supported function $\varphi \in \mathcal{C}^r(\mathbf{R})$ (called {\it scaling function}) such that
\begin{equ}[e:ScaleFunctionsOrthonormality]
\int_{\mathbf{R}} \varphi(x)\, dx = 1\;, \qquad \int_{\mathbf{R}} \varphi(x) \varphi(x + k)\,dx = \delta_{0, k}, \quad k \in \mathbf{Z}\;,
\end{equ}
where $\delta_{\cdot, \cdot}$ is Kronecker's delta on $\mathbf{Z}$. Furthermore, if for $n \in \mathbf{N}$ we define the grid $\Lambda_n \stackrel{\mathclap{\mbox{\tiny def}}}{=} \{2^{-n} k : k \in \mathbf{Z}\}$ and the family of functions
\begin{equ}[e:FatherScale]
\varphi_x^n(\cdot) \stackrel{\mathclap{\mbox{\tiny def}}}{=} 2^{n / 2} \varphi\bigl(2^{n} (\cdot - x)\bigr)\;, \qquad x \in \Lambda_n\;,
\end{equ}
then there is a finite collection of vectors $\mathcal{K} \subset \Lambda_1$ and a collection of structure constants $\{ a_k : k \in \mathcal{K} \}$ such that the {\it refinement equation}
\begin{equ}[e:FatherRelation]
\varphi_x^n = \sum_{k \in \mathcal{K}} a_k \varphi_{x + 2^{-n}k}^{n+1}
\end{equ}
holds. Note that the multiplier in \eqref{e:FatherScale} preserves the $L^2$-norm of the scaled functions
rather than their $L^1$-norm. It follows immediately from \eqref{e:ScaleFunctionsOrthonormality} and \eqref{e:FatherRelation} that one has the identities (stated for a general dimension $d$, anticipating the extension to $\mathbf{R}^d$ below; in the present setting $d = 1$)
\begin{equ}[e:StructRelations]
\sum_{k \in \mathcal{K}} a_k = 2^{d/2}\;, \qquad \sum_{k \in \mathcal{K}} a_k a_{k + m} = \delta_{0, m}\;, \quad m \in \mathbf{Z}^d\;.
\end{equ}
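\begin{remark}
Although no specific choice of $\varphi$ will ever be used, the identities \eqref{e:StructRelations} are easily checked numerically for a concrete filter. The following Python snippet is a minimal sanity check of this kind, using the Daubechies-4 structure constants on $\mathbf{R}$ (so $d = 1$ and the first identity reads $\sum_k a_k = 2^{1/2}$); this particular choice is for illustration only and plays no role in the sequel.
\begin{verbatim}
import numpy as np

# Daubechies-4 low-pass filter, normalised so that it sums to 2^{1/2};
# these numbers play the role of the structure constants a_k.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))

def corr(m):
    # sum_j h_j h_{j+2m}: an integer shift m on the half-integer grid
    # Lambda_1 corresponds to a shift by 2m of the integer filter index
    return sum(h[j] * h[j + 2 * m]
               for j in range(len(h)) if 0 <= j + 2 * m < len(h))

assert np.isclose(h.sum(), np.sqrt(2.0))        # first identity (d = 1)
assert all(np.isclose(corr(m), float(m == 0))
           for m in range(-3, 4))               # second identity
\end{verbatim}
\end{remark}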
For a fixed scaling function $\varphi$, we denote by $V_n \subset L^2(\mathbf{R})$ the subspace spanned by $\{ \varphi_x^n : x \in \Lambda_n \}$. Then the relation \eqref{e:FatherRelation} ensures the inclusion $V_n \subset V_{n+1}$ for every $n$. It turns out that there is a compactly supported function $\psi \in \mathcal{C}^r(\mathbf{R})$ (called {\it wavelet function}) such that the space $V_n^\perp$, which is the orthogonal complement of $V_n$ in $V_{n+1}$, is given by
\begin{equ}
V_n^\perp = \mathrm{span} \{ \psi_x^n : x \in \Lambda_n \}\;,
\end{equ}
where $\psi_x^n$ is defined by rescaling $\psi$ exactly as in \eqref{e:FatherScale}. Moreover, there are constants $\{b_k : k \in \mathcal{K}\}$ such that the {\it wavelet equation} holds:
\begin{equ}[e:MotherRelation]
\psi_x^n = \sum_{k \in \mathcal{K}} b_k \varphi_{x + 2^{-n}k}^{n+1}\;.
\end{equ}
One more useful property of the wavelet function is that it has vanishing moments, in the sense that the identity
\begin{equ}[e:MotherKiller]
\int_{\mathbf{R}} \psi(x)\, x^m dx = 0
\end{equ}
holds for all $m \in \mathbf{N}$ such that $m \leq r$.
There is a standard generalization of scaling and wavelet functions to $\mathbf{R}^d$, namely for $n \geq 0$ and $x = (x_1, \ldots, x_d) \in \Lambda^d_n$ we define
\begin{equ}
\varphi_x^n(y) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \varphi^n_{x_1}(y_1) \cdots \varphi^n_{x_d}(y_d)\;, \qquad y = (y_1, \ldots, y_d) \in \mathbf{R}^d\;.
\end{equ}
For these scaling functions we also define $V_n$ as the closed subspace in $L^2$ spanned
by $\{ \varphi_x^n : x \in \Lambda^d_n \}$. Then there is a finite set ${\boldsymbol{\Psi}}$ of functions on $\mathbf{R}^d$ such that the orthogonal complement $V_n^\perp$ of $V_n$ in $V_{n+1}$ is the span of $\{ \psi_x^n : \psi \in {\boldsymbol{\Psi}},\, x \in \Lambda^d_n \}$, where we define the scaled function $\psi_x^n$ by
\begin{equ}
\psi_x^n(y) \stackrel{\mathclap{\mbox{\tiny def}}}{=} 2^{n d/2} \psi\bigl(2^n (y_1 - x_1), \ldots, 2^n (y_d - x_d)\bigr)\;.
\end{equ}
All the results mentioned above carry over verbatim from $\mathbf{R}$ to $\mathbf{R}^d$, but of course with $\mathcal{K} \subset \Lambda_1^d$ and with different structure constants $\{a_k : k \in \mathcal{K}\}$ and $\{b_k : k \in \mathcal{K}\}$.
\subsubsection{An analogue of the multiresolution analysis on the grid}
\label{ss:DMultiresolutionAnalysis}
In this section we will develop an analogue of the multiresolution analysis which will be useful for working with functions defined on a dyadic grid. Our construction agrees with
the standard discrete wavelets on gridpoints, but also extends off the grid.
To this end, we use the notation of Section~\ref{ss:MultiresolutionAnalysis}. We recall furthermore that we use $\varepsilon = 2^{-N}$ for a fixed $N \in \mathbf{N}$.
Let us fix a scaling function $\varphi \in \mathcal{C}^r_0(\mathbf{R})$, for some integer $r > 0$, as in Section~\ref{ss:MultiresolutionAnalysis}, and denote again by $\varphi^n_x$, $x \in \Lambda^d_n$, the corresponding tensor-product scaling functions on $\mathbf{R}^d$. For integers $0 \leq n \leq N$ we define the functions
\begin{equ}[e:DFather]
\varphi^{N,n}_x(\cdot) \stackrel{\mathclap{\mbox{\tiny def}}}{=} 2^{N d / 2} \langle \varphi^N_{\cdot}, \varphi^{n}_x \rangle\;, \qquad x \in \Lambda^d_n\;.
\end{equ}
The function $\varphi^{N,n}_x$ belongs to $\mathcal{C}^r(\mathbf{R}^d)$, is supported in a ball of radius $\mathcal{O}(2^{-n})$ centred at $x$, has the same scaling properties as $\varphi^{n}_x$, and satisfies
\begin{equ}[e:DFatherProperty]
\varphi_x^{N,N}(y) = 2^{N d / 2} \delta_{x,y}\;, \qquad \quad x, y \in \Lambda^d_N\;,
\end{equ}
where $\delta_{\cdot, \cdot}$ is Kronecker's delta on $\Lambda^d_N$. The last property follows from \eqref{e:ScaleFunctionsOrthonormality}. Furthermore, it follows from \eqref{e:FatherRelation} that for $n < N$ these functions satisfy the refinement identity
\begin{equ}[e:DFatherRelation]
\varphi^{N,n}_x = \sum_{k \in \mathcal{K}} a_k\, \varphi^{N,n+1}_{x + 2^{-n}k}\;,
\end{equ}
with the same structure constants $\{ a_k : k \in \mathcal{K} \}$ as for the functions $\varphi^{n}_x$. One more consequence of \eqref{e:ScaleFunctionsOrthonormality} is
\begin{equ}
2^{-Nd} \sum_{y \in \Lambda^d_N} \varphi_x^{N,n}(y) = 2^{- n d / 2}\;,
\end{equ}
which obviously holds for $n = N$, and for $n < N$ it can be proved by induction, using \eqref{e:DFatherRelation} and \eqref{e:StructRelations}.
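Indeed, assuming the identity for $n+1$, the refinement relation \eqref{e:DFatherRelation} yields
\begin{equ}
2^{-Nd} \sum_{y \in \Lambda^d_N} \varphi_x^{N,n}(y) = \sum_{k \in \mathcal{K}} a_k\, 2^{-Nd} \sum_{y \in \Lambda^d_N} \varphi_{x + 2^{-n}k}^{N,n+1}(y) = 2^{-(n+1) d / 2} \sum_{k \in \mathcal{K}} a_k = 2^{-n d / 2}\;,
\end{equ}
where the last equality is the first identity in \eqref{e:StructRelations}.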
The functions $\varphi_x^{N,n}$ inherit many of the crucial properties of the functions $\varphi^{n}_x$, which allows us to use them in the multiresolution analysis. In particular, for $n < N$ and $\psi \in {\boldsymbol{\Psi}}$ (the set of wavelet functions, introduced in Section~\ref{ss:MultiresolutionAnalysis}), we can define the functions
\begin{equ}
\psi^{N,n}_x(\cdot) \stackrel{\mathclap{\mbox{\tiny def}}}{=} 2^{N d / 2} \langle \varphi^N_{\cdot}, \psi^n_x \rangle\;, \qquad x \in \Lambda^d_n\;,
\end{equ}
whose properties are similar to those of $\psi^n_x$. For example, $\psi^{N,n}_x \in \mathcal{C}^r(\mathbf{R}^d)$, and it has the same scaling and support properties as $\psi^{n}_x$. Furthermore, it follows from \eqref{e:MotherRelation} that for $n < N$ the following identity holds
\begin{equ}[e:DMotherRelation]
\psi_x^{N,n} = \sum_{k \in \mathcal{K}} b_k \varphi_{x + 2^{-n}k}^{N, n+1}\;,
\end{equ}
with the same constants $\{b_k : k \in \mathcal{K}\}$. It is easy to see that the functions just introduced are not $L^2$-orthogonal, but still, using \eqref{e:StructRelations}, one can go by induction from $N$ to any $n < N$ and prove the following result:
\begin{proposition}
In the context just described, for every integer $n \in [0, N)$, the set
\begin{equ}
\{ \varphi_x^{N, n} : x \in \Lambda_n \} \cup \{ \psi_x^{N, m} : m \in [n, N), \,x \in \Lambda_m \}\;,
\end{equ}
forms an orthonormal basis of $\ell^2(\Lambda_\varepsilon)$ equipped with the inner product $\langle \cdot, \cdot \rangle_\varepsilon$.
\end{proposition}
A generalisation of this discrete analogue of the wavelet analysis to higher dimensions can be done by analogy with the continuous case in Section~\ref{ss:MultiresolutionAnalysis}.
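\begin{remark}
As a purely illustrative check (nowhere used in the proofs), the above proposition can be verified numerically in dimension $d = 1$ for the Haar scaling function. Haar is only of regularity $r = 0$ and is therefore not admissible below, but the functions $\varphi^{N,n}_x$ and $\psi^{N,n}_x$ can then be computed in closed form, since they are constant on dyadic blocks. The following Python snippet performs the check with $N = 4$ and $n = 0$ on the portion of the grid lying in $[0,1)$.
\begin{verbatim}
import numpy as np

N = 4
eps = 2.0 ** -N
grid = np.arange(2 ** N) * eps        # Lambda_N restricted to [0, 1)

def phi(n, x):
    # discrete Haar scaling function phi^{N,n}_x on the grid
    return np.where((grid >= x) & (grid < x + 2.0 ** -n),
                    2.0 ** (n / 2), 0.0)

def psi(n, x):
    # discrete Haar wavelet psi^{N,n}_x on the grid
    mid = x + 2.0 ** -(n + 1)
    return (phi(n + 1, x) - phi(n + 1, mid)) / np.sqrt(2.0)

basis = [phi(0, 0.0)]
for m in range(N):
    basis += [psi(m, k * 2.0 ** -m) for k in range(2 ** m)]
B = np.stack(basis)                   # 16 functions at 16 grid points

gram = eps * B @ B.T                  # Gram matrix for <., .>_eps
assert np.allclose(gram, np.eye(2 ** N))
\end{verbatim}
\end{remark}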
\subsubsection{Proof of the discrete reconstruction theorem}
With the help of the discrete analogue of the multiresolution analysis introduced in the previous section we are ready to prove Theorem~\ref{t:DReconstruct}.
\begin{proof}[Proof of Theorem~\ref{t:DReconstruct}]
We take a compactly supported scaling function $\varphi \in \mathcal{C}^r(\mathbf{R}^d)$ of regularity $r > -\lfloor \alpha\rfloor$, where $\alpha$ is as in the statement of the theorem, and build the functions $\varphi^{N,n}_x$ as in \eqref{e:DFather}. Furthermore, we define the discrete functions $\zeta_{x}^{\varepsilon, t} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \Pi^{\varepsilon, t}_{x} H^\varepsilon_t(x)$ and $\zeta_{x y}^{\varepsilon, t} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \zeta_{y}^{\varepsilon, t} - \zeta_{x}^{\varepsilon, t}$. Then from Definition~\ref{d:DModel} we obtain
\begin{equs}
\bigl|\langle \zeta_{x y}^{\varepsilon, t}, \varphi_y^{N,n} \rangle_{\varepsilon}\bigr| &\lesssim \Vert \Pi^\varepsilon \Vert^{(\varepsilon)}_{\gamma; T} \sum_{l \in [\alpha, \gamma) \cap \mathcal{A}} 2^{-n d / 2 - l n} \Vert H^\varepsilon_t(y) - \Gamma_{yx}^{\varepsilon, t} H^\varepsilon_t(x) \Vert_{l} \\
&\lesssim \Vert \Pi^\varepsilon \Vert^{(\varepsilon)}_{\gamma; T} \Vert H^\varepsilon \Vert^{(\varepsilon)}_{\gamma, \eta; T} \enorm{t}^{\eta - \gamma} \sum_{l \in [\alpha, \gamma) \cap \mathcal{A}} 2^{-n d / 2 - l n} |y - x|^{\gamma - l}\\
&\lesssim \Vert \Pi^\varepsilon \Vert^{(\varepsilon)}_{\gamma; T} \Vert H^\varepsilon \Vert^{(\varepsilon)}_{\gamma, \eta; T} \enorm{t}^{\eta - \gamma} 2^{- n d / 2 - \alpha n} |y - x|^{\gamma - \alpha}\;,\label{e:ReconstructIntermBound}
\end{equs}
which holds as soon as $|x -y| \geq 2^{-n}$. Moreover, we define
\begin{equ}
\mathcal{R}_t^{\varepsilon,n} H^\varepsilon_t \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sum_{y \in \Lambda^d_n} \langle \zeta_{y}^{\varepsilon, t}, \varphi_y^{N,n} \rangle_\varepsilon\, \varphi_y^{N,n}\;.
\end{equ}
It follows from the property \eqref{e:DFatherProperty} that $\mathcal{R}_t^{\varepsilon} H^\varepsilon_t = \mathcal{R}_t^{\varepsilon,N} H^\varepsilon_t$ and $\Pi^{\varepsilon, t}_{x} H^\varepsilon_t(x) = \mathcal{P}_{\varepsilon,N} (\zeta_{x}^{\varepsilon,t})$ (recall that $\varepsilon = 2^{-N}$), where the operator $\mathcal{P}_{\varepsilon,n}$ is defined by
\begin{equ}
\mathcal{P}_{\varepsilon,n} (\zeta) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sum_{y \in \Lambda^d_n} \langle \zeta, \varphi_y^{N,n} \rangle_\varepsilon\, \varphi_y^{N,n}\;.
\end{equ}
This allows us to choose $n_0 \geq 0$ to be the smallest integer such that $2^{-n_0} \leq \lambda$ and rewrite
\begin{equs}
\mathcal{R}_t^{\varepsilon} H^\varepsilon_t &- \Pi^{\varepsilon, t}_{x} H^\varepsilon_t(x) = \left( \mathcal{R}^{\varepsilon,n_0}_t H^\varepsilon_t - \mathcal{P}_{\varepsilon,n_0} (\zeta_{x}^{\varepsilon, t}) \right) \label{e:DReconstructExpansion}\\
&+ \sum_{n = n_0}^{N-1} \left( \mathcal{R}_t^{\varepsilon,n+1} H^\varepsilon_t - \mathcal{P}_{\varepsilon,n+1} (\zeta_{x}^{\varepsilon,t}) - \mathcal{R}_t^{\varepsilon,n} H^\varepsilon_t + \mathcal{P}_{\varepsilon,n} (\zeta_{x}^{\varepsilon, t}) \right).
\end{equs}
The first term on the right hand side yields
\begin{equ}[e:DReconstructExpansionFirst]
\langle \mathcal{R}^{\varepsilon,n_0}_t H^\varepsilon_t - \mathcal{P}_{\varepsilon,n_0} (\zeta_{x}^{\varepsilon, t}), \varrho_x^\lambda \rangle_\varepsilon = \sum_{y \in \Lambda^d_{n_0}} \langle \zeta_{x y}^{\varepsilon, t}, \varphi_y^{N,n_0}\rangle_\varepsilon\, \langle \varphi_y^{N,n_0}, \varrho_x^\lambda \rangle_\varepsilon\;.
\end{equ}
Using \eqref{e:ReconstructIntermBound} and the bound $|\langle \varphi_y^{N,n_0}, \varrho_x^\lambda \rangle_\varepsilon| \lesssim 2^{n_0 d/2}$, we obtain
\begin{equs}
\bigl| \langle \mathcal{R}^{\varepsilon,n_0}_t H^\varepsilon_t - \mathcal{P}_{\varepsilon,n_0} (\zeta_{x}^{\varepsilon, t}), \varrho_x^\lambda\rangle_\varepsilon\bigr| \lesssim \Vert \Pi^\varepsilon \Vert^{(\varepsilon)}_{\gamma; T} \Vert H^\varepsilon \Vert^{(\varepsilon)}_{\gamma, \eta; T} \enorm{t}^{\eta - \gamma} 2^{ - \gamma n_0}\;.
\end{equs}
Here, we have also used $|x-y| \lesssim 2^{-n_0}$ in the sum in \eqref{e:DReconstructExpansionFirst}, and the fact that only a finite number of points $y \in \Lambda^d_{n_0}$ contribute to this sum.
Now we will bound each term in the sum in \eqref{e:DReconstructExpansion}. Using \eqref{e:DFatherRelation} and \eqref{e:DMotherRelation}, we can write
\begin{equ}
\mathcal{R}_t^{\varepsilon,n+1} H^\varepsilon_t - \mathcal{P}_{\varepsilon,n+1} (\zeta_{x}^{\varepsilon,t}) - \mathcal{R}_t^{\varepsilon,n} H^\varepsilon_t + \mathcal{P}_{\varepsilon,n} (\zeta_{x}^{\varepsilon, t}) = g^\varepsilon_{t, n} + h^\varepsilon_{t,n}\;,
\end{equ}
where $g^\varepsilon_{t, n}$ is defined by
\begin{equ}
g^\varepsilon_{t, n} = \sum_{y \in \Lambda_n^d} \sum_{k \in \mathcal{K}} a_k \langle \zeta^{\varepsilon, t}_{y, y+2^{-n} k}, \varphi_{y+2^{-n} k}^{N, n+1}\rangle_\varepsilon\, \varphi_y^{N, n}
\end{equ}
and the constants $\{a_k : k \in \mathcal{K}\}$ are from \eqref{e:DFatherRelation}. For $h^\varepsilon_{t,n}$ we have the identity
\begin{equs}[e:deltaFExpansion]
h^\varepsilon_{t,n} = \sum_{y \in \Lambda_{n+1}^d} \sum_{k \in \mathcal{K}} \sum_{\psi \in {\boldsymbol{\Psi}}} b_k \langle \zeta^{\varepsilon, t}_{x y} , \varphi_{y}^{N, n+1}\rangle_\varepsilon\, \psi_{y - 2^{-n} k}^{N,n}\;.
\end{equs}
Moreover, the following bounds, for $n \in [n_0, N]$, follow from the properties of the functions $\varphi^n_x$ and $\psi^n_x$:
\begin{equ}
|\langle \varphi_y^{N,n}, \varrho_x^\lambda \rangle_\varepsilon| \lesssim 2^{n_0 d/2} 2^{-(n-n_0) d/2}\;, \quad |\langle \psi_y^{N,n}, \varrho_x^\lambda \rangle_\varepsilon| \lesssim 2^{n_0 d/2} 2^{-(n-n_0) (r + d/2)}\;.
\end{equ}
Using them and \eqref{e:ReconstructIntermBound}, we obtain a bound on $g^\varepsilon_{t, n}$:
\begin{equs}
|\langle g^\varepsilon_{t, n}, \varrho_x^\lambda \rangle_\varepsilon| &\lesssim \sum_{y \in \Lambda_n^d} \sum_{k \in \mathcal{K}} |\langle \zeta^{\varepsilon, t}_{y, y+2^{-n} k} , \varphi_{y+2^{-n} k}^{N, n+1}\rangle_\varepsilon|\, |\langle \varphi_y^{N,n}, \varrho_x^\lambda \rangle_\varepsilon|\\
&\lesssim \Vert \Pi^\varepsilon \Vert^{(\varepsilon)}_{\gamma; T} \Vert H^\varepsilon \Vert^{(\varepsilon)}_{\gamma, \eta; T} \enorm{t}^{\eta - \gamma} 2^{ - \gamma n}\;,
\end{equs}
where we have used $|x-y| \lesssim 2^{-n}$ in the sum. Summing these bounds over $n \in [n_0, N]$, we obtain a bound of the required order. Similarly, we obtain the following bound on \eqref{e:deltaFExpansion}:
\begin{equs}
|\langle h^\varepsilon_{t, n}, \varrho_x^\lambda \rangle_\varepsilon| \lesssim \Vert \Pi^\varepsilon \Vert^{(\varepsilon)}_{\gamma; T} \Vert H^\varepsilon \Vert^{(\varepsilon)}_{\gamma, \eta; T} \enorm{t}^{\eta - \gamma} 2^{ - \gamma n_0} 2^{-(n-n_0)(r + \alpha)}\;,
\end{equs}
which gives the required bound after summing over $n \in [n_0, N]$. In this estimate we have used the fact that $|y-x| \lesssim 2^{-n_0}$ in the sum in \eqref{e:deltaFExpansion}.
The bounds \eqref{e:DReconstructBounds} can be shown similarly to \eqref{e:ReconstructBound} and \eqref{e:ReconstructTime}.
\end{proof}
\subsection{Convolutions with discrete kernels}
\label{ss:DConvols}
In this section we describe, at an abstract level, convolutions with discrete kernels. We start with a definition of the kernels we will work with.
\begin{definition}\label{d:DKernel}
We say that a function $K^{\varepsilon} : \mathbf{R} \times \Lambda^d_\varepsilon \to \mathbf{R}$ is regularising of order $\beta > 0$,
if one can find functions $K^{(\varepsilon, n)} : \mathbf{R}^{d+1} \to \mathbf{R}$ and $\mathring{K}^{\varepsilon} : \mathbf{R} \times \Lambda^d_\varepsilon \to \mathbf{R}$ such that
\begin{equ}[e:DEpsExpansion]
K^{\varepsilon} = \sum_{n = 0}^{N-1} K^{(\varepsilon, n)} + \mathring{K}^{\varepsilon} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \bar K^\varepsilon + \mathring{K}^{\varepsilon}\;,
\end{equ}
where the function $K^{(\varepsilon, n)}$ has the same support and bounds as the function $K^{(n)}$ in Definition~\ref{d:Kernel}, for some $c, r > 0$, and furthermore, for $k \in \mathbf{N}^{d+1}$ such that $\sabs{k} \leq r$, it satisfies
\begin{equ}[e:DPolyKill]
\int_{\mathbf{R} \times \Lambda_\varepsilon^d} z^k K^{(\varepsilon, n)}(z)\, dz = 0\;.
\end{equ}
The function $\mathring{K}^{\varepsilon}$ is supported in $\{z \in \mathbf{R} \times \Lambda_\varepsilon^d : \snorm{z} \leq c \varepsilon \}$ and satisfies \eqref{e:DPolyKill} with $k = 0$ and
\begin{equs}[e:KZeroBounds]
\sup_{z \in \mathbf{R} \times \Lambda^d_\varepsilon}|\mathring{K}^{\varepsilon}(z) | \leq C \varepsilon^{-|\mathfrak{s}| + \beta}\;.
\end{equs}
\end{definition}
Now, we will define how a discrete model acts on an abstract integration map.
\begin{definition}\label{d:DIntegralModel}
Let $\mathcal{I}$ be an abstract integration map of order $\beta$ as in Definition~\ref{d:AbstractIntegration} for a regularity structure $\mathscr{T}=(\mathcal{T}, \mathcal{G})$, let $Z^\varepsilon=(\Pi^\varepsilon, \Gamma^\varepsilon, \Sigma^\varepsilon)$ be a discrete model, and let $K^\varepsilon$ be regularising of order $\beta$ with $r > -\lfloor \min \mathcal{A} \rfloor$. Let furthermore $\bar{K}^\varepsilon$ and $\mathring{K}^\varepsilon$ be as in~\eqref{e:DEpsExpansion}. We define $\bar{\mathcal{J}}^\varepsilon$ on the grid in the same way as its continuous analogue in~\eqref{e:JDef}, but using $\bar{K}^\varepsilon$ instead of $K$ and using the discrete objects instead of their continuous counterparts. Moreover, we define
\begin{equs}
\mathring{\mathcal{J}}^\varepsilon_{t,x} \tau \stackrel{\mathclap{\mbox{\tiny def}}}{=} \mathbf{1} \int_{\mathbf{R}} \langle \Pi^{\varepsilon, s}_{x} \Sigma_x^{\varepsilon, s t} \tau, \mathring{K}^\varepsilon_{t-s}(x - \cdot) \rangle_\varepsilon\, ds\;,
\end{equs}
and $\mathcal{J}^\varepsilon_{t,x} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \bar{\mathcal{J}}^\varepsilon_{t,x} + \mathring{\mathcal{J}}^\varepsilon_{t,x}$. We say that $Z^\varepsilon$ realises $K^\varepsilon$ for $\mathcal{I}$ if the identities \eqref{e:PiIntegral} and \eqref{e:GammaSigmaIntegral}
hold for the corresponding discrete objects.
As before, these two identities should be thought of as providing the definitions
of $\Gamma_{xy}^{\varepsilon,t} \mathcal{I} \tau$ and $\Sigma_x^{\varepsilon,st} \mathcal{I} \tau$ via $\Gamma_{xy}^{\varepsilon,t} \tau$ and $\Sigma_x^{\varepsilon,st} \tau$.
\end{definition}
For a discrete modelled distribution $H^\varepsilon$, we define $\bar{\mathcal{N}}^\varepsilon_{\gamma} H^\varepsilon$ as in \eqref{e:NDef}, but using the discrete objects instead of the continuous ones, and using the kernel $\bar{K}^\varepsilon$ instead of $K$. Furthermore, we define the term containing $\mathring{K}^\varepsilon$ by
\begin{equ}[e:NZeroDef]
\bigl(\mathring{\mathcal{N}}^\varepsilon_{\gamma} H^\varepsilon\bigr)_t(x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \mathbf{1} \int_{\mathbf{R}} \langle \mathcal{R}^\varepsilon_s H^\varepsilon_s - \Pi^{\varepsilon, s}_{x} \Sigma_x^{\varepsilon, s t} H^\varepsilon_t(x), \mathring{K}^\varepsilon_{t-s}(x - \cdot) \rangle_\varepsilon\, ds\;,
\end{equ}
and we set $\mathcal{N}^\varepsilon_{\gamma} H^\varepsilon \stackrel{\mathclap{\mbox{\tiny def}}}{=} \bar{\mathcal{N}}^\varepsilon_{\gamma} H^\varepsilon + \mathring{\mathcal{N}}^\varepsilon_{\gamma} H^\varepsilon$. Finally, we define the discrete analogue of \eqref{e:KDef} by
\begin{equ}[e:KEpsDef]
\bigl(\mathcal{K}^\varepsilon_{\gamma} H^\varepsilon\bigr)_t(x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \mathcal{I} H^\varepsilon_t(x) + \mathcal{J}^\varepsilon_{t, x} H^\varepsilon_t(x) + \bigl(\mathcal{N}^\varepsilon_{\gamma} H^\varepsilon\bigr)_t(x)\;.
\end{equ}
Our definition is consistent thanks to the following two lemmas.
\begin{lemma}\label{l:DPiIntegralBound}
In the setting of Definition~\ref{d:DIntegralModel}, let $\min\mathcal{A} + \beta > 0$. Then all the algebraic relations of Definition~\ref{d:DModel} hold for the symbol $\mathcal{I} \tau$. Moreover, for $\delta > 0$ sufficiently small and for any $l \in \mathcal{A}$ and $\tau \in \mathcal{T}_l$ such that $l + \beta \notin \mathbf{N}$ and $\Vert \tau \Vert = 1$, one has the bounds
\begin{equs}
| \langle \Pi^{\varepsilon, t}_{x} \mathcal{I} \tau, \varphi_{x}^\lambda \rangle_\varepsilon| &\lesssim \lambda^{l + \beta} \Vert \Pi^\varepsilon \Vert_{l; T}^{(\varepsilon)} \Vert \Sigma^\varepsilon \Vert^{(\varepsilon)}_{l; T} \bigl( 1 + \Vert \Gamma^\varepsilon \Vert^{(\varepsilon)}_{l; T} \bigr)\;,\label{e:PiIntegralBound}\\
\frac{| \langle \bigl(\Pi^{\varepsilon, t}_{x} - \Pi^{\varepsilon, s}_{x}\bigr) \mathcal{I} \tau, \varphi_{x}^\lambda \rangle_\varepsilon|}{\bigl(|t-s|^{1/\mathfrak{s}_0} \vee \varepsilon\bigr)^{\delta}} &\lesssim \lambda^{l + \beta - \delta} \Vert \Pi^\varepsilon \Vert_{\delta, l; T}^{(\varepsilon)} \Vert \Sigma^\varepsilon \Vert^{(\varepsilon)}_{l; T} \bigl( 1 + \Vert \Gamma^\varepsilon \Vert^{(\varepsilon)}_{l; T} \bigr)\;,\label{e:PiIntegralTimeBound}
\end{equs}
uniformly over $\varepsilon$ (see Remark~\ref{r:Uniformity}), $x \in \Lambda_\varepsilon^d$, $s, t \in [-T, T]$, $\lambda \in [\varepsilon, 1]$ and $\varphi \in \mathcal{B}_0^r(\mathbf{R}^d)$.
\end{lemma}
\begin{proof}
The algebraic properties of the models for the symbol $\mathcal{I} \tau$ follow easily from Definition~\ref{d:DIntegralModel}. In order to prove \eqref{e:PiIntegralBound}, we will consider the terms in \eqref{e:PiIntegral} containing $\mathring{K}^\varepsilon$ separately from the others. To this end, we define
\begin{equs}
\bigl(\mathring{\Pi}^{\varepsilon,t}_{x} \mathcal{I} \tau \bigr)(y) &\stackrel{\mathclap{\mbox{\tiny def}}}{=} \int_{\mathbf{R}} \langle \Pi^{\varepsilon, s}_{x} \Sigma_x^{\varepsilon, s t} \tau, \mathring{K}^{\varepsilon}_{t-s}(y - \cdot) - \mathring{K}^{\varepsilon}_{t-s}(x - \cdot)\rangle_\varepsilon\, ds\;,\label{e:PiRingDef}\\
\bigl(\bar{\Pi}^{\varepsilon,t}_{x} \mathcal{I} \tau \bigr)(y) &\stackrel{\mathclap{\mbox{\tiny def}}}{=} \bigl(\Pi^{\varepsilon,t}_{x} - \mathring{\Pi}^{\varepsilon,t}_{x} \bigr) \bigl(\mathcal{I} \tau \bigr)(y)\;.
\end{equs}
Furthermore, for $x, y \in \Lambda_\varepsilon^d$ we use the convention $0^0 \stackrel{\mathclap{\mbox{\tiny def}}}{=} 1$ and set
\begin{equ}
T^l_{x y} K^{(\varepsilon, n)}_{t}(\cdot) \stackrel{\mathclap{\mbox{\tiny def}}}{=} K^{(\varepsilon, n)}_{t}(y - \cdot) - \sum_{\sabs{k} < l + \beta} \frac{(0, y-x)^k}{k!} D^k K^{(\varepsilon, n)}_{t}(x - \cdot)\;.
\end{equ}
Using Definitions~\ref{d:DModel} and \ref{d:DKernel} and proceeding as in the proof of \cite[Lem.~5.19]{Hai14}, we can obtain the following analogues of the bounds \cite[Eq.~5.33]{Hai14}:
\begin{equs}[e:DPiInterBounds]
| \langle \Pi^{\varepsilon, r}_{x} \Sigma^{\varepsilon, r t}_x \tau, T^l_{x y} K^{(\varepsilon, n)}_{t-r} \rangle_\varepsilon| &\lesssim \sum_{\zeta > 0} |y-x|^{l + \beta + \zeta} 2^{(\mathfrak{s}_0 + \zeta) n} \mathbf{1}_{|t-r| \lesssim 2^{-\mathfrak{s}_0 n}}\;,\\
\Big|\int_{\Lambda_\varepsilon^d} \langle \Pi^{\varepsilon, r}_{x} \Sigma^{\varepsilon, r t}_x \tau, T^l_{x y} K^{(\varepsilon, n)}_{t-r} \rangle_\varepsilon&\, \varphi^\lambda_{x}(y) \, dy\Big| \lesssim \sum_{\zeta > 0} \lambda^{l + \beta - \zeta} 2^{(\mathfrak{s}_0 - \zeta) n} \mathbf{1}_{|t-r| \lesssim 2^{-\mathfrak{s}_0 n}}\;,
\end{equs}
for $\varepsilon \leq |y-x| \leq 1$, $\lambda \in [\varepsilon, 1]$, with $\zeta$ taking a finite number of values and with the proportionality constants as in \eqref{e:PiIntegralBound}. Integrating these bounds in the time variable $r$ and using the first bound in \eqref{e:DPiInterBounds} in the case $|y-x| \leq 2^{-n}$ and the second bound in the case $2^{-n} \leq \lambda$, we obtain the required estimate on $\langle \bar \Pi^{\varepsilon, t}_{x} \mathcal{I} \tau, \varphi_{x}^\lambda \rangle_\varepsilon$.
In order to bound $\bigl(\bar \Pi^{\varepsilon, t}_{x} - \bar \Pi^{\varepsilon, s}_{x}\bigr) \mathcal{I} \tau$, we consider two cases $|t-s| \geq 2^{-\mathfrak{s}_0 n}$ and $|t-s| < 2^{-\mathfrak{s}_0 n}$. In the first case we estimate $\bar \Pi^{\varepsilon, t}_{x} \mathcal{I} \tau$ and $\bar \Pi^{\varepsilon, s}_{x} \mathcal{I} \tau$ separately using \eqref{e:DPiInterBounds}, and obtain the required bound, if $\delta > 0$ is sufficiently small. In the case $|t-s| < 2^{-\mathfrak{s}_0 n}$ we write
\begin{equs}
\langle& \Pi^{\varepsilon, r}_{x} \Sigma^{\varepsilon, r t}_x \tau, T^l_{x y} K^{(\varepsilon, n)}_{t-r} \rangle_\varepsilon - \langle \Pi^{\varepsilon, r}_{x} \Sigma^{\varepsilon, r s}_x \tau, T^l_{x y} K^{(\varepsilon, n)}_{s-r} \rangle_\varepsilon\\
&= \langle \Pi^{\varepsilon, r}_{x} \Sigma^{\varepsilon, r s}_x \bigl(\Sigma^{\varepsilon, s t}_x - 1\bigr) \tau, T^l_{x y} K^{(\varepsilon, n)}_{t-r} \rangle_\varepsilon + \langle \Pi^{\varepsilon, r}_{x} \Sigma^{\varepsilon, r s}_x \tau, T^l_{x y} \bigl(K^{(\varepsilon, n)}_{t-r} - K^{(\varepsilon, n)}_{s-r}\bigr) \rangle_\varepsilon\;,
\end{equs}
and estimate each of these terms similarly to \eqref{e:DPiInterBounds}, which gives the required bound for sufficiently small $\delta > 0$.
It remains to prove the required bounds for $\mathring{\Pi}^{\varepsilon,t}_{x} \bigl(\mathcal{I} \tau \bigr)$. It follows immediately from Definition~\ref{d:DModel} that $|\bigl(\Pi^{\varepsilon, t}_x a\bigr)(x)| \lesssim \Vert a \Vert \varepsilon^{\zeta}$, for $a \in \mathcal{T}_\zeta$. Hence, using the properties \eqref{e:SigmaDef} and \eqref{e:PiDef} we obtain
\begin{equs}
\int_{\mathbf{R}} \bigl|\langle \Pi^{\varepsilon, s}_{x} \Sigma_x^{\varepsilon, s t} \tau, \mathring{K}^{\varepsilon}_{t-s}(y - \cdot)\rangle_\varepsilon\bigr|\, ds &= \int_{\mathbf{R}} \bigl|\langle \Pi^{\varepsilon, s}_{y} \Sigma_y^{\varepsilon, s t} \Gamma_{y x}^{\varepsilon, t} \tau, \mathring{K}^{\varepsilon}_{t-s}(y - \cdot)\rangle_\varepsilon\bigr|\, ds\\
&\lesssim \sum_{\zeta \leq l} \varepsilon^{\zeta + \beta} |y-x|^{l - \zeta}\;,\label{e:KRingBound}
\end{equs}
where $\zeta \in \mathcal{A}$. Similarly, the second term in \eqref{e:PiRingDef}
is bounded by $\varepsilon^{l +\beta}$, implying that if $\lambda \geq \varepsilon$ and $\min\mathcal{A} + \beta > 0$, then one has
\begin{equs}[e:PiRingIntegralBound]
| \langle \mathring{\Pi}^{\varepsilon, t}_{x} \mathcal{I} \tau, \varphi_{x}^\lambda \rangle_\varepsilon| \lesssim \sum_{\zeta \leq l} \varepsilon^{\zeta + \beta} \lambda^{l - \zeta} \lesssim \lambda^{l + \beta}\;,
\end{equs}
which finishes the proof of \eqref{e:PiIntegralBound}. In order to complete the proof of \eqref{e:PiIntegralTimeBound}, we use \eqref{e:KRingBound} and brutally bound
\begin{equs}
| \langle \bigl(\mathring{\Pi}^{\varepsilon, t}_{x} &- \mathring{\Pi}^{\varepsilon, s}_{x}\bigr) \mathcal{I} \tau, \varphi_{x}^\lambda \rangle_\varepsilon| \leq | \langle \mathring{\Pi}^{\varepsilon, t}_{x} \mathcal{I} \tau, \varphi_{x}^\lambda \rangle_\varepsilon| + | \langle \mathring{\Pi}^{\varepsilon, s}_{x} \mathcal{I} \tau, \varphi_{x}^\lambda \rangle_\varepsilon| \\
&\lesssim \sum_{\zeta \leq l} \varepsilon^{\zeta + \beta} \lambda^{l - \zeta} \lesssim \bigl(|t-s|^{1/\mathfrak{s}_0} \vee \varepsilon\bigr)^{\delta} \sum_{\zeta \leq l} \varepsilon^{\zeta + \beta - \delta} \lambda^{l - \zeta}\;,
\end{equs}
from which we obtain the required bound in the same way as before, as soon as $\delta \in (0, \min \mathcal{A} + \beta)$.
\end{proof}
The following lemma provides a relation between $\mathcal{J}^\varepsilon$ and the operators $\Gamma^\varepsilon$, $\Sigma^\varepsilon$.
\begin{lemma}\label{l:DGammaIntegralBound}
In the setting of Lemma~\ref{l:DPiIntegralBound}, the operators
\begin{equs}[e:DSigmaDiff]
\mathcal{J}^{\varepsilon, t}_{x y} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \mathcal{J}^\varepsilon_{t, x} \Gamma^{\varepsilon, t}_{x y} - \Gamma^{\varepsilon, t}_{x y} \mathcal{J}^\varepsilon_{t, y}\;,\qquad \mathcal{J}^{\varepsilon, s t}_{x} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \mathcal{J}^\varepsilon_{s, x} \Sigma^{\varepsilon, s t}_{x} - \Sigma^{\varepsilon, s t}_{x} \mathcal{J}^\varepsilon_{t, x}\;,
\end{equs}
with $s, t \in \mathbf{R}$ and $x, y \in \Lambda_\varepsilon^d$, satisfy the following bounds:
\begin{equs}
\bigl| \bigl(\mathcal{J}^{\varepsilon, t}_{x y} \tau\bigr)_k \bigr| &\lesssim \Vert \Pi^\varepsilon \Vert_{l; T}^{(\varepsilon)} \Vert \Sigma^\varepsilon \Vert^{(\varepsilon)}_{l; T} \bigl( 1 + \Vert \Gamma^\varepsilon \Vert^{(\varepsilon)}_{l; T} \bigr) |x - y|^{l + \beta - |k|_{\mathfrak{s}}}\;,\\
\bigl| \bigl(\mathcal{J}^{\varepsilon, s t}_{x} \tau\bigr)_k \bigr| &\lesssim \Vert \Pi^\varepsilon \Vert_{l; T}^{(\varepsilon)} \Vert \Sigma^\varepsilon \Vert^{(\varepsilon)}_{l; T} \bigl( 1 + \Vert \Gamma^\varepsilon \Vert^{(\varepsilon)}_{l; T} \bigr) \bigl(|t - s|^{1/\mathfrak{s}_0} \vee \varepsilon\bigr)^{l + \beta - |k|_{\mathfrak{s}}}\;,\label{e:DSigmaDiffBound}
\end{equs}
uniformly in $\varepsilon$ (see Remark~\ref{r:Uniformity}), for $\tau$ as in Lemma~\ref{l:DPiIntegralBound} and for any $k \in \mathbf{N}^{d+1}$ such that $|k|_{\mathfrak{s}} < l + \beta$, where $(\cdot)_k$ denotes the coefficient of $X^k$. In particular, the required bounds on $\Gamma^\varepsilon \mathcal{I}\tau$ and $\Sigma^\varepsilon \mathcal{I}\tau$ from Definition~\ref{d:DModel} hold.
\end{lemma}
\begin{proof}
The bounds on the parts of $\mathcal{J}^{\varepsilon, t}_{x y} \tau$ and $\mathcal{J}^{\varepsilon, s t}_{x} \tau$ not containing $\mathring{K}^\varepsilon$ can be obtained as in \cite[Lem.~5.21]{Hai14}, where the bound on the right-hand side of \eqref{e:DSigmaDiffBound} comes from the fact that the scaling of the kernels $K^{(\varepsilon, n)}$ in \eqref{e:DEpsExpansion} does not go below $\varepsilon$. The contributions to \eqref{e:DSigmaDiff} from the kernel $\mathring{K}^\varepsilon$ come via the terms $\mathring{\mathcal{J}}^\varepsilon_{t, x} \Gamma^{\varepsilon, t}_{x y}$, $\mathring{\mathcal{J}}^\varepsilon_{t, y}$, $\mathring{\mathcal{J}}^\varepsilon_{s, x} \Sigma^{\varepsilon, s t}_{x}$ and $\mathring{\mathcal{J}}^\varepsilon_{t, x}$. We can bound all of them separately, similarly to \eqref{e:KRingBound}, and use $|x-y| \geq \varepsilon$ and $|t - s|^{1/\mathfrak{s}_0} \vee \varepsilon \geq \varepsilon$ to estimate the powers of $\varepsilon$. Since all of these powers are positive by assumption, this yields the required bounds.
Now, we will prove the bound on $\Gamma^\varepsilon \mathcal{I}\tau$ required by Definition~\ref{d:DModel}. For $m < l + \beta$ such that $m \notin \mathbf{N}$, \eqref{e:GammaSigmaIntegral} yields
\begin{equ}
\Vert \Gamma_{x y}^{\varepsilon, t} \mathcal{I} \tau \Vert_m = \Vert \mathcal{I} \bigl(\Gamma_{x y}^{\varepsilon, t} \tau\bigr) \Vert_m \leq \Vert \Gamma_{x y}^{\varepsilon, t} \tau \Vert_{m - \beta} \lesssim |y-x|^{l + \beta - m}\;,
\end{equ}
where we have used the properties of $\mathcal{I}$. Similarly, we can bound $\Vert \Sigma_{x}^{\varepsilon, s t} \mathcal{I} \tau \Vert_m$. Furthermore, since the map $\mathcal{I}$ does not produce elements of integer homogeneity, we have for $m \in \mathbf{N}$,
\begin{equs}
\Vert \Gamma_{x y}^{\varepsilon, t} \mathcal{I} \tau \Vert_m = \Vert \mathcal{J}^{\varepsilon, t}_{x y} \tau \Vert_m \lesssim |y-x|^{l + \beta - m}\;,
\end{equs}
where the last bound was proved above. In the same way we can obtain the required bound on $\Vert \Sigma_{x}^{\varepsilon, s t} \mathcal{I} \tau \Vert_m$.
\end{proof}
\begin{remark}\label{r:ModelLift}
If $(\Pi^{\varepsilon}, \Gamma^{\varepsilon}, \Sigma^{\varepsilon})$ is a discrete model on $\gen{\mathscr{T}}$, which is introduced in Definition~\ref{d:TruncSets}, then there is a canonical way to extend it to a discrete model on $\hat{\mathscr{T}}$. Since the symbols from $\hat{\mathcal{F}}$ are ``generated'' by $\gen{\mathcal{F}}$, we only have to define the actions of $\Pi^{\varepsilon}$, $\Gamma^{\varepsilon}$ and $\Sigma^{\varepsilon}$ on the symbols $\tau \bar{\tau}$ and $\mathcal{I}\tau \in \hat{\mathcal{F}} \setminus \gen{\mathcal{F}}$ with $\tau, \bar{\tau} \in \hat{\mathcal{F}}$, so that the extension of the model to $\hat{\mathscr{T}}$ will follow by induction. For the product $\tau \bar{\tau}$, we set
\minilab{CanonicalProduct}
\begin{equs}
\big(\Pi^{\varepsilon,t}_{x} \tau \bar{\tau}\big) (y) = \bigl(\Pi^{\varepsilon,t}_{x} \tau\bigr) (y)\,& \big(\Pi^{\varepsilon,t}_{x} \bar{\tau}\big) (y)\;,\label{e:CanonicalPiProduct}\\
\Sigma_x^{\varepsilon, s t} \tau \bar{\tau} = \big(\Sigma_x^{\varepsilon, s t} \tau\big)\, \big(\Sigma_x^{\varepsilon, s t} \bar{\tau}\big)\;,\qquad \Gamma_{x y}^{\varepsilon, t}& \tau \bar{\tau} = \big(\Gamma_{x y}^{\varepsilon, t} \tau\big)\, \big(\Gamma_{x y}^{\varepsilon, t} \bar{\tau}\big)\;.\label{e:CanonicalGammaSigmaProduct}
\end{equs}
For the symbol $\mathcal{I} \tau$ we define the actions of the maps $(\Pi^{\varepsilon}, \Gamma^{\varepsilon}, \Sigma^{\varepsilon})$ by the identities \eqref{e:PiIntegral} and \eqref{e:GammaSigmaIntegral}.
However, even if the family of models satisfies analytic bounds uniformly in $\varepsilon$ on $\gen{\mathscr{T}}$, this is not necessarily true for its extension to $\hat{\mathscr{T}}$.
\end{remark}
The structure of the canonical extension of a discrete model will be important for us. That is why we make the following definition.
\begin{definition}
We call a discrete model $Z^\varepsilon = (\Pi^{\varepsilon}, \Gamma^{\varepsilon}, \Sigma^{\varepsilon})$ defined on $\hat{\mathscr{T}}$ {\it admissible}, if it satisfies the identities \eqref{e:CanonicalGammaSigmaProduct} and furthermore
realises $K^\varepsilon$ for $\mathcal{I}$.
\end{definition}
\begin{remark}\label{r:RenormModel}
If $M \in \mathfrak{R}$ is a renormalisation map as mentioned in Section~\ref{ss:RegStruct}, such that $M \hat{\mathcal{T}} \subset \hat{\mathcal{T}}$, where $\hat{\mathcal{T}}$ is introduced in Definition~\ref{d:TruncSets}, and if $Z^\varepsilon = (\Pi^{\varepsilon}, \Gamma^{\varepsilon}, \Sigma^{\varepsilon})$ is an admissible model, then we can define a renormalised discrete model $\hat Z^\varepsilon$ as in \cite[Sec.~8.3]{Hai14}, which is also admissible.
\end{remark}
The following result is a discrete analogue of Theorem~\ref{t:Integration}.
\begin{theorem}
For a regularity structure $\mathscr{T} = (\mathcal{T}, \mathcal{G})$ with the minimal homogeneity $\alpha$, let $\beta$, $\gamma$, $\eta$, $\bar \gamma$, $\bar \eta$ and $r$ be as in Theorem~\ref{t:Integration} and let $Z^\varepsilon = (\Pi^\varepsilon, \Gamma^\varepsilon, \Sigma^\varepsilon)$ be a discrete model which realises $K^\varepsilon$ for $\mathcal{I}$. Then for any discrete modelled distribution $H^\varepsilon$ the following bound holds
\begin{equ}[e:DIntegralBound]
\vert\!\vert\!\vert \mathcal{K}^\varepsilon_{\gamma} H^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\bar{\gamma}, \bar{\eta}; T} \lesssim \vert\!\vert\!\vert H^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\gamma, \eta; T} \Vert \Pi^\varepsilon \Vert_{\gamma; T}^{(\varepsilon)} \Vert \Sigma^\varepsilon \Vert_{\gamma; T}^{(\varepsilon)} \bigl(1 + \Vert \Gamma^\varepsilon \Vert^{(\varepsilon)}_{\bar{\gamma}; T} + \Vert \Sigma^\varepsilon \Vert^{(\varepsilon)}_{\bar{\gamma}; T}\bigr)\;,
\end{equ}
and one has the identity
\begin{equ}[e:DIntegralIdentity]
\mathcal{R}^\varepsilon_t \bigl(\mathcal{K}^\varepsilon_{\gamma} H^\varepsilon\bigr)_t(x) = \int_{0}^t \langle \mathcal{R}^\varepsilon_s H^\varepsilon_s, K^\varepsilon_{t-s}(x - \cdot)\rangle_\varepsilon\, ds\;.
\end{equ}
Moreover, if $\bar{Z}^\varepsilon = (\bar{\Pi}^\varepsilon, \bar{\Gamma}^\varepsilon, \bar{\Sigma}^\varepsilon)$ is another discrete model realising $K^\varepsilon$ for $\mathcal{I}$, and if $\bar{\mathcal{K}}^\varepsilon_{\gamma}$ is defined as in \eqref{e:KEpsDef} for this model, then one has the bound
\begin{equ}[e:DIntegrationDistance]
\vert\!\vert\!\vert \mathcal{K}^\varepsilon_{\gamma} H^\varepsilon; \bar{\mathcal{K}}^\varepsilon_{\gamma} \bar{H}^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\bar{\gamma}, \bar{\eta}; T} \lesssim \vert\!\vert\!\vert H^\varepsilon; \bar{H}^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\gamma, \eta; T} + \vert\!\vert\!\vert Z^\varepsilon; \bar{Z}^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\bar{\gamma}; T}\;,
\end{equ}
for all discrete modelled distributions $H^\varepsilon$ and $\bar{H}^\varepsilon$, where the norms on $H^\varepsilon$ and $\bar H^\varepsilon$ are defined via the models $Z^\varepsilon$ and $\bar Z^\varepsilon$ respectively, and the proportionality constant depends on $\varepsilon$ only via the same norms of the discrete objects as in \eqref{e:IntegrationDistance}.
\end{theorem}
\begin{proof}
The proof of the bound \eqref{e:DIntegralBound} for the components of $\mathcal{K}^\varepsilon_{\gamma} H^\varepsilon$ not containing $\mathring{K}^\varepsilon$ is almost identical to that of \eqref{e:Integration}, and we only need to bound the terms $\mathring{\mathcal{J}}^\varepsilon H^\varepsilon$ and $\mathring{\mathcal{N}}^\varepsilon_{\gamma} H^\varepsilon$. The estimates on $\mathring{\mathcal{J}}^\varepsilon H^\varepsilon$ were obtained in the proof of Lemma~\ref{l:DGammaIntegralBound}. To bound $\mathring{\mathcal{N}}^\varepsilon_{\gamma} H^\varepsilon$, for $x, y \in \Lambda_\varepsilon^d$, we write
\begin{equs}
\bigl(\mathcal{R}^\varepsilon_s H^\varepsilon_s - \Pi^{\varepsilon, s}_{x} \Sigma_x^{\varepsilon, s t} H^\varepsilon_t(x)\bigr)(y) &= \Pi^{\varepsilon, s}_{y} \bigl(H^\varepsilon_s(y) - \Gamma_{y x}^{\varepsilon, s} H^\varepsilon_s(x)\bigr)(y) \\
&\qquad + \Pi^{\varepsilon, s}_{y} \Gamma_{y x}^{\varepsilon, s} \bigl(H^\varepsilon_s(x) - \Sigma_x^{\varepsilon, s t} H^\varepsilon_t(x)\bigr)(y)\;,
\end{equs}
where we made use of Definitions~\ref{d:DReconstruct} and \ref{d:DModel}. Estimating this expression similarly to~\eqref{e:KRingBound}, but using~\eqref{e:DModelledDistributionNorm} this time, we obtain
\begin{equs}[e:NZeroBound]
\Vert \bigl(\mathring{\mathcal{N}}^{\varepsilon}_{\gamma} H^\varepsilon\bigr)_t(x) \Vert_0 \lesssim \enorm{t}^{\eta - \gamma} \varepsilon^{\gamma + \beta} \lesssim \enorm{t}^{\eta + \beta}\;,
\end{equs}
where we have used $\gamma + \beta > 0$.
Furthermore, the operator $\Gamma^{\varepsilon, t}_{y x}$ leaves $\mathbf{1}$ invariant, and we have
\begin{equ}
\Gamma^{\varepsilon, t}_{y x} \bigl(\mathring{\mathcal{N}}^{\varepsilon}_{\gamma} H^\varepsilon\bigr)_t(x) = \bigl(\mathring{\mathcal{N}}^{\varepsilon}_{\gamma} H^\varepsilon\bigr)_t(x)\;.
\end{equ}
Thus, estimating $\bigl(\mathring{\mathcal{N}}^{\varepsilon}_{\gamma} H^\varepsilon\bigr)_t(y)$ and $\bigl(\mathring{\mathcal{N}}^{\varepsilon}_{\gamma} H^\varepsilon\bigr)_t(x)$ separately by the intermediate bound in \eqref{e:NZeroBound} and using $|x-y| \geq \varepsilon$, yields the required bound. In the same way we obtain the required estimate on $\Sigma^{\varepsilon, s t}_{x} \bigl(\mathring{\mathcal{N}}^{\varepsilon}_{\gamma} H^\varepsilon\bigr)_t(x) - \bigl(\mathring{\mathcal{N}}^{\varepsilon}_{\gamma} H^\varepsilon\bigr)_s(x)$.
The bound \eqref{e:DIntegrationDistance} can be shown similarly to \eqref{e:IntegrationDistance}, using the above approach. In order to show that the identity \eqref{e:DIntegralIdentity} holds, we notice that
\begin{equ}
\bigl(\mathcal{K}^\varepsilon_{\gamma} H^\varepsilon\bigr)_t(x) \in \poly{\mathcal{T}} + \mathcal{T}_{\geq \alpha + \beta}\;,
\end{equ}
where $\poly{\mathcal{T}}$ contains only the abstract polynomials and $\alpha + \beta > 0$ by assumption. It hence follows from Definitions \ref{d:DModel} and \ref{d:DReconstruct} that
\begin{equ}
\mathcal{R}^\varepsilon_t \bigl(\mathcal{K}^\varepsilon_{\gamma} H^\varepsilon\bigr)_t(x) = \langle \mathbf{1}, \bigl(\mathcal{K}^\varepsilon_{\gamma} H^\varepsilon\bigr)_t(x) \rangle\;,
\end{equ}
which is equal to the right-hand side of \eqref{e:DIntegralIdentity}.
\end{proof}
\section{Analysis of discrete stochastic PDEs}
\label{s:DPDEs}
We consider the following spatial discretisation of equation \eqref{e:SPDE} on $\mathbf{R}_+ \times \Lambda_\varepsilon^d$:
\begin{equ}[e:DSPDE]
\partial_t u^{\varepsilon} = A^{\varepsilon} u^{\varepsilon} + F^{\varepsilon}(u^{\varepsilon}, \xi^\varepsilon)\;, \qquad u^{\varepsilon}(0, \cdot) = u^{\varepsilon}_0(\cdot)\;,
\end{equ}
where $u^{\varepsilon}_0 \in \mathbf{R}^{\Lambda_\varepsilon^d}$, $\xi^\varepsilon$ is a spatial discretisation of $\xi$, $F^{\varepsilon}$ is a discrete approximation of $F$, and $A^\varepsilon : \ell^\infty(\Lambda_\varepsilon^d) \to \ell^\infty(\Lambda_\varepsilon^d)$ is a bounded linear operator satisfying the following assumption.
\begin{assumption}\label{a:DOperator}
There exists an operator $A$ given by a Fourier multiplier $a : \mathbf{R}^d \to \mathbf{R}$ satisfying Assumption~\ref{a:Operator} with an even integer parameter $\beta > 0$ and a measure $\mu$ on $\mathbf{Z}^d$
with finite support such that
\begin{equ}[e:DOperatorExtension]
\bigl(A^{\varepsilon} \varphi\bigr) (x) = \varepsilon^{-\beta} \int_{\mathbf{R}^d} \varphi(x - \varepsilon y)\, \mu(dy)\;, \qquad x \in \Lambda_\varepsilon^d\;,
\end{equ}
for every $\varphi \in \mathcal{C}(\mathbf{R}^d)$, and such that the identity
\begin{equ}[e:DOperatorPoly]
\int_{\mathbf{R}^d} P(x - y)\, \mu(dy) = (A P)(x)\;, \qquad x \in \mathbf{R}^d\;,
\end{equ}
holds for every polynomial $P$ on $\mathbf{R}^d$ with $\deg P \leq \beta$.
Furthermore, the Fourier transform of $\mu$ only vanishes on $\mathbf{Z}^d$.
\end{assumption}
\begin{example}\label{ex:Laplacian}
A common example of the operator $A$ is the Laplacian $\Delta$, with its nearest-neighbour discrete approximation $\Delta^{\varepsilon}$, defined by \eqref{e:DOperatorExtension} with the measure $\mu$ given by
\begin{equ}[e:DLaplacian]
\mu ( \varphi) = \sum_{x \in \mathbf{Z}^d : \Vert x \Vert = 1} \bigl(\varphi(x) - \varphi(0)\bigr)\;,
\end{equ}
for every $\varphi \in \ell^\infty(\mathbf{Z}^d)$, and where $\Vert x \Vert$ is the Euclidean norm. In this case, the Fourier multiplier of $\Delta$ is $a(\zeta) = - 4 \pi^2 \Vert \zeta \Vert^2$ and
\begin{equ}
\bigl(\mathscr{F} \mu\bigr)(\zeta) = - 4 \sum_{i=1}^d \sin^2 \bigl(\pi \zeta_i\bigr)\;, \qquad \zeta \in \mathbf{R}^d\;,
\end{equ}
where $\mathscr{F}$ is the Fourier transform. One can see that Assumption~\ref{a:DOperator} is satisfied with $\beta = 2$.
\end{example}
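\begin{remark}
The statements of this example are straightforward to confirm numerically. The following Python snippet (a sanity check only, here with $d = 2$) verifies that $\mathscr{F}\mu$ agrees with the multiplier $a$ to leading order at the origin (consistently with $\beta = 2$), vanishes on $\mathbf{Z}^d$, and is strictly negative off the lattice.
\begin{verbatim}
import numpy as np

d = 2
rng = np.random.default_rng(0)

def F_mu(zeta):
    # Fourier transform of mu for the nearest-neighbour Laplacian
    return -4.0 * np.sum(np.sin(np.pi * zeta) ** 2, axis=-1)

def a(zeta):
    # Fourier multiplier of the continuous Laplacian
    return -4.0 * np.pi ** 2 * np.sum(zeta ** 2, axis=-1)

# (F mu)(zeta) ~ a(zeta) near the origin
z = 1e-4 * rng.standard_normal((10, d))
assert np.allclose(F_mu(z), a(z))

# F mu vanishes on Z^d and is strictly negative off the lattice
assert np.isclose(F_mu(np.array([3.0, -2.0])), 0.0)
assert np.all(F_mu(rng.uniform(0.05, 0.95, (1000, d))) < 0.0)
\end{verbatim}
\end{remark}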
The following section is devoted to the analysis of discrete operators.
\subsection{Analysis of discrete operators}
We assume that the operator $A^\varepsilon : \ell^\infty(\Lambda_\varepsilon^d) \to \ell^\infty(\Lambda_\varepsilon^d)$ satisfies Assumption~\ref{a:DOperator} and we define the Green's function of $\partial_t - A^\varepsilon$ by
\begin{equ}[e:DGreenDef]
G^\varepsilon_t(x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \varepsilon^{-d} \mathbf{1}_{t \geq 0} \bigl(e^{t A^\varepsilon} \delta_{0, \cdot}\bigr)(x)\;, \qquad (t, x) \in \mathbf{R} \times \Lambda_\varepsilon^d\;,
\end{equ}
where $\delta_{\cdot, \cdot}$ is Kronecker's delta.
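\begin{remark}
On a finite, periodised grid the definition \eqref{e:DGreenDef} can be implemented directly. The following Python sketch is purely illustrative: it uses the nearest-neighbour Laplacian of Example~\ref{ex:Laplacian} with $d = 1$ and periodic boundary conditions (an assumption made only for this snippet), computes $G^\varepsilon_t$ via the matrix exponential, and checks that the discrete heat kernel has unit mass, which holds here since the columns of $A^\varepsilon$ sum to zero.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Periodised grid with M = 2^N points, d = 1, beta = 2.
N = 6
M = 2 ** N
eps = 2.0 ** -N

# Nearest-neighbour Laplacian as an M x M circulant matrix:
# (A f)(x) = eps^{-2} * (f(x + eps) - 2 f(x) + f(x - eps)).
I = np.eye(M)
A = (np.roll(I, 1, axis=1) - 2 * I + np.roll(I, -1, axis=1)) / eps ** 2

t = 0.01
G = expm(t * A)[:, 0] / eps      # G^eps_t = eps^{-d} e^{t A} delta_0

# e^{tA} preserves total mass, so eps * sum_x G^eps_t(x) = 1.
assert np.isclose(eps * G.sum(), 1.0)
\end{verbatim}
\end{remark}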
In order to build an extension of $G^\varepsilon$ off the grid, we first choose a function $\varphi \in \mathcal{S}(\mathbf{R}^d)$ whose values coincide with $\delta_{0, \cdot}$ on $\mathbf{Z}^d$, and such that $\bigl(\mathscr{F} \varphi\bigr)(\zeta) = 0$ for $|\zeta|_\infty \geq 3/4$, say, where $\mathscr{F}$ is the Fourier transform.
To build such a function, write $\tilde \varphi \in \mathcal{C}^\infty(\mathbf{R}^d)$ for the Dirichlet kernel
$\tilde \varphi(x) = \prod_{i=1}^d \frac{\sin(\pi x_i)}{\pi x_i}$,
whose values coincide with $\delta_{0, x}$ for $x \in \mathbf{Z}^d$, and whose Fourier transform is supported in $\{\zeta : |\zeta|_\infty \leq \frac{1}{2}\}$. Choosing any function $\psi \in \mathcal{C}^\infty(\mathbf{R}^d)$
supported in the ball of radius $1/4$ around the origin and integrating to $1$, it then suffices to set $\mathscr{F} \varphi = \bigl(\mathscr{F} \tilde \varphi\bigr) * \psi$.
Furthermore, we define the bounded operator $\tilde{A}^\varepsilon : \mathcal{C}_b(\mathbf{R}^d) \to \mathcal{C}_b(\mathbf{R}^d)$ by the right-hand side of \eqref{e:DOperatorExtension}, where $\mathcal{C}_b(\mathbf{R}^d)$ is the space of bounded continuous functions on $\mathbf{R}^d$ equipped with the supremum norm. Then, denoting as usual by $\varphi^{\varepsilon}$ the rescaled version of $\varphi$, we have for $G^\varepsilon$ the representation
\begin{equ}[e:DGreenExt]
G^\varepsilon_t(x) = \mathbf{1}_{t \geq 0} \bigl(e^{t \tilde{A}^\varepsilon} \varphi^{\varepsilon}\bigr)(x)\;, \qquad (t, x) \in \mathbf{R} \times \Lambda_\varepsilon^d\;.
\end{equ}
By setting $x \in \mathbf{R}^{d}$ in \eqref{e:DGreenExt}, we obtain an extension of $G^\varepsilon$ to $\mathbf{R}^{d+1}$, which we again denote by $G^\varepsilon$.
Unfortunately, the function $G^\varepsilon_t(x)$ is discontinuous at $t = 0$, and our next aim is to modify it in such a way that it becomes differentiable at least for sufficiently large values of $|x|$. Since $\tilde{A}^\varepsilon$ generates a strongly continuous semigroup, for every $m \in \mathbf{N}$ we have the uniform limit
\begin{equ}[e:DGreenTimeDiff]
\lim_{t \downarrow 0} \partial_t^m G^\varepsilon_t = \bigl(\tilde{A}^\varepsilon \bigr)^m \varphi^{\varepsilon}\;.
\end{equ}
This gives us the terms which we have to subtract from $G^\varepsilon$ to make it continuously differentiable at $t = 0$. For this, we take a function $\varrho : \mathbf{R} \to \mathbf{R}$ such that $\varrho(t) = 1$ for $t \in \bigl[0, \frac{1}{2}\bigr]$, $\varrho(t) = 0$ for $t \in (-\infty, 0) \cup [1, +\infty)$, and $\varrho(t)$ is smooth on $t > 0$. Then, for $r > 0$, we define
\begin{equ}[e:TimeOperator]
T^{\varepsilon, r}(t, x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \varrho \bigl(t / \varepsilon^\beta\bigr) \sum_{m \leq r / \beta} \frac{t^m}{m!} \bigl(\tilde A^\varepsilon\bigr)^m \varphi^\varepsilon (x)\;, \qquad (t, x) \in \mathbf{R}^{d+1}\;.
\end{equ}
The role of the function $\varrho$ is to have $T^{\varepsilon, r}$ compactly supported in $t$. Then we have the following result.
\begin{lemma}\label{l:DGreensBound}
In the described context, let Assumption~\ref{a:DOperator} be satisfied. Then for every fixed value $r > 0$ there exist constants $c, C > 0$ such that the bound
\begin{equ}[e:GHatBound]
\bigl| D^k \bigl(G^\varepsilon - T^{\varepsilon, r}\bigr)(z) \bigr| \leq C\snorm{z}^{-d - \sabs{k}}\;,
\end{equ}
holds uniformly over $z \in \mathbf{R}^{d+1}$ with $\snorm{z} \geq c \varepsilon$, for all $k \in \mathbf{N}^{d+1}$ with $|k|_{\mathfrak{s}} \leq r$, for $D^k$ being a space-time derivative and for the space-time scaling $\mathfrak{s} = (\beta, 1, \ldots, 1)$.
Moreover, for $|t|_\varepsilon \stackrel{\mathclap{\mbox{\tiny def}}}{=} |t|^{1/\beta} \vee \varepsilon$, the function $\bar G^\varepsilon_t(x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} |t|_{\varepsilon}^d G^\varepsilon_{t} \bigl(|t|_{\varepsilon} x\bigr)$ is Schwartz in $x$, i.e. for every $m \in \mathbf{N}$ and $\bar k \in \mathbf{N}^d$ there is a constant $\bar C$ such that the bound
\begin{equ}[e:GHatSchwartz]
\bigl| D_x^{\bar k} \bar G^\varepsilon_t (x) \bigr| \leq \bar C \bigl(1 + |x|\bigr)^{-m}\;,
\end{equ}
holds uniformly over $(t,x) \in \mathbf{R}^{d+1}$.
\end{lemma}
\begin{proof}
The function $G^\varepsilon - T^{\varepsilon, r}$ is of class $\mathcal{C}^r_\mathfrak{s}$ on $\mathbf{R}^{d+1}$. Indeed, spatial regularity follows immediately from the regularity of $\varphi$ and commutation of $\tilde A^\varepsilon$ with the differential operator. Continuous differentiability at $t = 0$ follows from \eqref{e:DGreenTimeDiff}. Furthermore, since $G^\varepsilon$ vanishes on $t \leq 0$, we only need to consider $t > 0$.
Next, we notice that the bound \eqref{e:GHatBound} follows from \eqref{e:GHatSchwartz}.
Let $\hat r>0$ be such that the measure $\mu$ in Assumption~\ref{a:DOperator} is supported in the ball
of radius $\hat r$. Then, for $k = (k_0, \bar k) \in \mathbf{N}^{d+1}$ with $k_0 \in \mathbf{N}$ and $|k|_{\mathfrak{s}} \leq r$ we use \eqref{e:DGreenExt} and the identities \eqref{e:DOperatorPoly}, combined with Taylor's formula, to get
\begin{equs}[e:DGreenInterBound]
\bigl| D^k G^\varepsilon_t (x) \bigr| = \bigl| \bigl(\tilde A^\varepsilon\bigr)^{k_0} D^{\bar k}_x G^\varepsilon_t(x) \bigr| \lesssim \sup_{y : |y-x| \leq k_0 \hat r \varepsilon} \sup_{l : |l| = \beta k_0} \bigl| D^{\bar k + l}_y G^\varepsilon_t (y) \bigr|\;,
\end{equs}
where $y \in \mathbf{R}^d$, $l \in \mathbf{N}^d$. For $\snorm{t, x} \geq c \varepsilon$, in the case $|t|^{1/\beta} \geq |x|$, we bound the right-hand side of \eqref{e:DGreenInterBound} using \eqref{e:GHatSchwartz} with $m = 0$, which gives an estimate of order $|t|^{-(d + |k|_{\mathfrak{s}})/\beta}$. In the case $|t|^{1/\beta} < |x|$, we use \eqref{e:GHatSchwartz} with $m = d + |k|_{\mathfrak{s}}$, and we get a bound of order $|x|^{-d - |k|_{\mathfrak{s}}}$, if we take $c \geq 2 r \hat r / \beta$. Furthermore, the required bound on $T^{\varepsilon, r}$ follows easily from the properties of the functions $\varphi$ and $\varrho$. Hence, we only need to prove the bound \eqref{e:GHatSchwartz}.
Denoting by $\mathscr{F}$ the Fourier transform, we get from \eqref{e:DGreenExt} and Assumption~\ref{a:DOperator}:
\begin{equs}[e:DGreenBound]
\bigl( \mathscr{F} \bar G^{\varepsilon}_t\bigr)(\zeta) = \bigl( \mathscr{F} \varphi\bigr) \bigl(\varepsilon |t|_{\varepsilon}^{-1} \zeta\bigr)\, e^{t |t|_{\varepsilon}^{-1} a(\zeta) f( \varepsilon |t|_{\varepsilon}^{-1} \zeta)}\;,
\end{equs}
where we have used the scaling property $\lambda^\beta a(\zeta) = a(\lambda \zeta)$, and where $f \stackrel{\mathclap{\mbox{\tiny def}}}{=} (\mathscr{F} \mu) / a$.
We start by considering the case $t \geq \varepsilon^\beta$. It follows from the last part of
Assumption~\ref{a:DOperator} that there exists $\bar c>0$ such that $f(\zeta) \ge \bar c$ for
$|\zeta|_\infty \le 3/4$. Since
$\varepsilon |t|_{\varepsilon}^{-1} \leq 1$, we conclude that
\begin{equ}
\bigl|D^{\bar k}_\zeta e^{a(\zeta) f( \varepsilon |t|_{\varepsilon}^{-1} \zeta)} \bigr| \lesssim |\zeta|^{\beta |\bar k|} e^{a(\zeta) \bar c} \lesssim \bigl(1 + |\zeta| \bigr)^{-m}\;,
\end{equ}
for $|\zeta|_\infty < 3 / \bigl(4\varepsilon |t|_{\varepsilon}^{-1}\bigr)$, for every $m \geq 0$ and for a proportionality constant dependent on $m$ and $\bar k$. Here, we have used $a(\zeta) < 0$ and polynomial growth of $|a(\zeta)|$. Since $\bigl( \mathscr{F} \varphi\bigr) \bigl(\varepsilon |t|_{\varepsilon}^{-1} \zeta\bigr)$ vanishes for $|\zeta|_\infty \geq 3 / \bigl(4\varepsilon |t|_{\varepsilon}^{-1}\bigr)$, we conclude that
\begin{equ}
\bigl|D^{\bar k}_\zeta \bigl( \mathscr{F} \bar G^{\varepsilon}_t\bigr)(\zeta)\bigr| \lesssim \bigl(1 + |\zeta| \bigr)^{-m}\;,
\end{equ}
uniformly in $t$ and $\varepsilon$ (provided that $t \geq \varepsilon^\beta$),
and for every $m \in \mathbf{N}$ and $\bar k \in \mathbf{N}^d$.
In the case $t < \varepsilon^\beta$, we can bound the exponent in \eqref{e:DGreenBound} by $1$, and the polynomial decay comes from the factor $\bigl( \mathscr{F} \varphi\bigr) \bigl(\zeta\bigr)$, because $\varphi \in \mathcal{S}(\mathbf{R}^d)$. Since the Fourier transform is continuous on Schwartz space, this implies that $\bar G^{\varepsilon}_t$ is a Schwartz function, with bounds uniform in $\varepsilon $ and $t$, which is exactly the claim.
\end{proof}
The following result is an analogue of Lemma~\ref{l:GreenDecomposition} for $G^\varepsilon$.
\begin{lemma} \label{l:DGreenDecomposition}
Let Assumption~\ref{a:DOperator} be satisfied. Then, the function $G^\varepsilon$ defined in \eqref{e:DGreenExt} can be written as $G^\varepsilon = K^\varepsilon + R^\varepsilon$ in such a way that the identity
\begin{equ}[e:sumGreen]
\bigl(G^\varepsilon \star_\varepsilon u\bigr)(z) = \bigl(K^\varepsilon \star_\varepsilon u\bigr)(z) + \bigl(R^\varepsilon \star_\varepsilon u\bigr)(z)\;,
\end{equ}
holds for all $z \in (-\infty, 1] \times \Lambda_\varepsilon^{d}$ and all functions $u$ on $\mathbf{R}_+ \times \Lambda_\varepsilon^d$, periodic in the spatial variable with some fixed period. Furthermore, $K^\varepsilon$ is regularising of order $\beta$ in the sense of Definition~\ref{d:DKernel}, for arbitrary (but fixed) $r$ and with the scaling $\mathfrak{s} = (\beta, 1, \ldots, 1)$. The function $R^{\varepsilon}$ is compactly supported, non-anticipative and the norm $\Vert R^\varepsilon \Vert_{\mathcal{C}^r}$ is bounded uniformly in $\varepsilon$.
\end{lemma}
\begin{proof}
Let $M : \mathbf{R}^{d+1} \to \mathbf{R}_+$ be a smooth norm for the scaling $\mathfrak{s}$ (see for example \cite[Rem.~2.13]{Hai14}). Furthermore, let $\bar \varrho : \mathbf{R}_+ \to [0,1]$ be a smooth ``cutoff function'' such that $\bar \varrho(s) = 0$ if $s \notin [1/2, 2]$, and such that $\sum_{n \in \mathbf{Z}} \bar \varrho(2^n s) = 1$ for all $s > 0$ (see the construction of the partition of unity in \cite{BCD11}). For integers $n \in [0, N)$ we set the functions
\begin{equ}
\bar \varrho_n(z) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \bar \varrho(2^{n} M(z))\;, \qquad \bar \varrho_{< 0} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sum_{n < 0} \bar \varrho_n\;, \qquad \bar \varrho_{\ge N} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sum_{n \ge N} \bar \varrho_n\;,
\end{equ}
as well as
\begin{equs}
\bar{K}^{(\varepsilon, n)}(z) = \bar \varrho_n(z) &\bigl(G^\varepsilon - T^{\varepsilon, r}\bigr)(z)\;, \qquad \bar{R}^\varepsilon(z) = \bar \varrho_{< 0}(z) \bigl(G^\varepsilon - T^{\varepsilon, r}\bigr)(z)\;,\\
\tilde{K}^{\varepsilon}(z) &= \bar \varrho_{\ge N}(z) \bigl(G^\varepsilon - T^{\varepsilon, r}\bigr)(z) + T^{\varepsilon, r}(z)\;.\label{e:TildeKDef}
\end{equs}
Then it follows immediately from the properties of $\bar \varrho$ that
\begin{equ}
G^\varepsilon = \sum_{n = 0}^{N-1} \bar K^{(\varepsilon, n)} + \tilde{K}^{\varepsilon} + \bar R^\varepsilon\;.
\end{equ}
Since $\bar \varrho_{< 0}$ is supported away from the origin, we use \eqref{e:GHatBound} and Assumption~\ref{a:DOperator} to conclude that $\Vert \bar{R}^\varepsilon \Vert_{\mathcal{C}^r}$ is bounded uniformly in $\varepsilon$.
(Actually, its value and derivatives even decay faster than any power.)
Furthermore, the function $\bar{K}^{(\varepsilon, n)}$ is supported in the ball of radius $c 2^{-n}$, for $c$ as
in Lemma~\ref{l:DGreensBound}, provided that the norm $M$ was chosen such that $M(z) \geq 2 c \snorm{z}$.
For the same reason, the first term in \eqref{e:TildeKDef} is supported in the ball of radius $c \varepsilon$.
Moreover, the support property of the measure $\mu$ and the properties of the functions $\varrho$
and $\varphi^\varepsilon$ in \eqref{e:TimeOperator} yield that the restriction of $T^{\varepsilon, r}$ to
the grid $\Lambda_\varepsilon^d$ in space is supported in the ball of radius $c \varepsilon$, as soon as $c \geq 2 r \hat r / \beta$, where $\hat r$ is the support radius of the measure $\mu$ from Assumption~\ref{a:DOperator}.
As a consequence of \eqref{e:DOperatorExtension}, \eqref{e:DGreenExt} and \eqref{e:TimeOperator},
we get for $0 \leq n < N$ the exact scaling properties
\begin{equs}
\bar{K}^{(\varepsilon, n)}(z) = 2^{n d} \bar{K}^{(\varepsilon 2^n, 0)}\bigl(2^{\mathfrak{s} n} z\bigr)\;,\qquad \tilde{K}^{\varepsilon}(z) = \varepsilon^{-d} \tilde{K}^{1}\bigl(\varepsilon^{-\mathfrak{s}} z\bigr)\;,
\end{equs}
and \eqref{e:KernelBound} and \eqref{e:KZeroBounds} follow immediately from \eqref{e:GHatBound} and \eqref{e:TimeOperator}.
It remains to modify these functions in such a way that they ``kill'' polynomials in the sense of \eqref{e:DPolyKill}. To this end, we take a smooth function $P^{(N)}$ on $\mathbf{R}^{d+1}$, whose support coincides with the support of $\tilde{K}^\varepsilon$, which satisfies $|P^{(N)}(z)| \lesssim \varepsilon^{-d}$, for every $z \in \mathbf{R}^{d+1}$, and such that one has
\begin{equs}[e:PKillerDef]
\int_{\mathbf{R} \times \Lambda_\varepsilon^d} \bigl(\tilde{K}^\varepsilon - P^{(N)} \bigr)(z)\, dz = 0\;.
\end{equs}
Then we define $\mathring{K}^{\varepsilon}$ to be the restriction of $\tilde{K}^{\varepsilon} - P^{(N)}$ to the grid $\Lambda_\varepsilon^d$ in space. Clearly, the function $\mathring{K}^{\varepsilon}$ has the same scaling and support properties as $\tilde{K}^{\varepsilon}$, and it follows from \eqref{e:PKillerDef} that it satisfies \eqref{e:DPolyKill} with $k=0$.
Moreover, we can recursively build a sequence of smooth functions $P^{(n)}$, for integers $n \in [0, N)$, such that $P^{(n)}$ is supported in the ball of radius $c 2^{-n}$, satisfies the bounds in \eqref{e:KernelBound}, and for every $k \in \mathbf{N}^{d+1}$ with $\sabs{k} \leq r$ one has
\begin{equ}[e:PKillerTwoDef]
\int_{\mathbf{R} \times \Lambda_\varepsilon^d} z^{k} \left(\bar K^{(\varepsilon, n)} - P^{(n)} + P^{(n + 1)} \right)(z)\, dz = 0\;.
\end{equ}
Then, for such values of $n$, we define
\begin{equs}
K^{(\varepsilon, n)} = \bar{K}^{(\varepsilon, n)} - P^{(n)} + P^{(n+1)}\;, \qquad R^\varepsilon \stackrel{\mathclap{\mbox{\tiny def}}}{=} \bar R^\varepsilon + P^{(0)}\;.
\end{equs}
It follows from the properties of the functions $P^{(n)}$ that $K^{(\varepsilon, n)}$ has all the required
properties. The function $R^\varepsilon$ also has the required properties, and
the decompositions \eqref{e:DEpsExpansion} and \eqref{e:sumGreen} hold by construction.
Finally, using \eqref{e:GHatSchwartz}, we can make the function $R^\varepsilon$ compactly supported in the same way as in \cite[Lem.~7.7]{Hai14}.
\end{proof}
\begin{remark}\label{r:KernelDiff}
One can see from the proof of Lemma~\ref{l:DGreenDecomposition} that the function $\mathring{K}^\varepsilon$ is $(r / \mathfrak{s}_0)$-times continuously differentiable in the time variable for $t \neq 0$ and has a discontinuity at $t = 0$.
\end{remark}
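To make the decomposition in the proof of Lemma~\ref{l:DGreenDecomposition} concrete, the following numerical sketch telescopes a generic singular kernel into dyadic pieces via a partition of unity of the form $\bar\varrho_n(z) = \bar\varrho(2^n M(z))$. Everything below is an illustration only: the cutoff \texttt{chi}, the choice $M(z) = |z|$, and the kernel $1/|z|$ standing in for $G^\varepsilon - T^{\varepsilon, r}$ are assumptions of the sketch, not objects from the proof.
\begin{verbatim}
import numpy as np

def chi(s):
    """Cutoff: 1 for s <= 1, 0 for s >= 2 (an arbitrary smooth-ish choice)."""
    t = np.clip(s - 1.0, 0.0, 1.0)
    return 1.0 - t * t * (3 - 2 * t)

def rho_bar(s):
    """rho_bar(s) = chi(s) - chi(2 s), supported in [1/2, 2]."""
    return chi(s) - chi(2 * s)

# Generic singular kernel standing in for G^eps - T^{eps,r} (here 1/|z|).
z = np.linspace(1e-3, 1.0, 2000)
G = 1.0 / np.abs(z)

# Dyadic pieces (the analogues of bar K^{(eps,n)}) plus the two
# remainders (the analogues of the first term of K~^eps and of R^eps).
N = 12
pieces = [rho_bar(2**n * np.abs(z)) * G for n in range(N)]
small_scales = chi(2**N * np.abs(z)) * G   # supported in |z| <~ 2^{-N}
large_scales = (1 - chi(np.abs(z))) * G    # supported away from the origin

recon = sum(pieces) + small_scales + large_scales
assert np.allclose(recon, G)               # telescoping sum reconstructs G
\end{verbatim}
The telescoping structure makes the reconstruction exact, which is precisely the mechanism behind \eqref{e:sumGreen}.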
By analogy with \eqref{e:RDef}, we use the function $R^\varepsilon$ from Lemma~\ref{l:DGreenDecomposition} to define for periodic $\zeta_t \in \mathbf{R}^{\Lambda_\varepsilon^d}$, $t \in \mathbf{R}$, the abstract polynomial
\begin{equ}[e:DRDef]
\bigl(R^\varepsilon_{\gamma} \zeta\bigr)_t(x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sum_{\sabs{k} < \gamma} \frac{X^k}{k!} \int_{\mathbf{R}} \langle \zeta_s, D^k R^\varepsilon_{t-s}(x - \cdot) \rangle_\varepsilon\, ds\;,
\end{equ}
where as before $k \in \mathbf{N}^{d+1}$ and the mixed derivative $D^k$ is in space-time.
\subsection{Properties of the discrete equations}
In this section we show that a discrete analogue of Theorem~\ref{t:FixedMap} holds for the solution map of the equation \eqref{e:DSPDE} with an operator $A^\varepsilon$ satisfying Assumption~\ref{a:DOperator}.
Similarly to \cite[Lem.~7.5]{Hai14}, but using the properties of $G^\varepsilon$ proved in the previous section, we can show that for every periodic $u^\varepsilon_0 \in \mathbf{R}^{\Lambda^d_\varepsilon}$, we have a discrete analogue of Lemma~\ref{l:InitialData} for the map $(t,x) \mapsto S^\varepsilon_t u^\varepsilon_0(x)$, where $S^\varepsilon$ is the semigroup generated by $A^\varepsilon$.
For the regularity structure $\mathscr{T}$ from Section~\ref{ss:RegStruct}, we take a truncated regularity structure $\hat{\mathscr{T}}=(\hat{\mathcal{T}}, \mathcal{G})$ and make the following assumption on the nonlinearity $F^\varepsilon$.
\begin{assumption}\label{a:DNonlin}
For some $0 < \bar\gamma \leq \gamma$, $\eta \in \mathbf{R}$, every $\varepsilon > 0$ and every discrete model $Z^\varepsilon$ on $\hat{\mathscr{T}}$, there exist discrete modeled distributions $F_0^\varepsilon(Z^\varepsilon)$ and $I_0^\varepsilon(Z^\varepsilon)$ with exactly the same properties on the grid as $F_0$ and $I_0$ in Assumption~\ref{a:Nonlin}. Furthermore, we define $\hat{F}^\varepsilon$ as in \eqref{e:NonlinAs}, but via $F^\varepsilon$ and $F_0^\varepsilon$, and we define $\hat{F}^{\varepsilon}(H)$ for $H : \mathbf{R}_+ \times \Lambda^d_\varepsilon \to \mathcal{T}_{< \gamma}$ as in \eqref{e:NonlinearTerm}. Finally, we assume that the discrete analogue of the Lipschitz condition \eqref{e:Lipschitz} holds for $\hat{F}^{\varepsilon}$, with the constant $C$ independent of $\varepsilon$.
\end{assumption}
Similarly to \eqref{e:AbstractEquation}, but using the discrete operators \eqref{e:DReconstructDef}, \eqref{e:DRDef} and \eqref{e:KEpsDef}, we reformulate the equation \eqref{e:DSPDE} as
\begin{equ}[e:DAbstractEquation]
U^\varepsilon = \mathcal{P}^\varepsilon \hat{F}^\varepsilon(U^\varepsilon) + S^\varepsilon u^\varepsilon_0 + I_0^\varepsilon\;,
\end{equ}
where $\mathcal{P}^\varepsilon \stackrel{\mathclap{\mbox{\tiny def}}}{=} \mathcal{K}^\varepsilon_{\bar{\gamma}} + R^\varepsilon_\gamma \mathcal{R}^\varepsilon$ and $U^\varepsilon$ is a discrete modeled distribution.
\begin{remark}\label{r:DClassicalSolution}
If $Z^\varepsilon$ is a canonical discrete model, then it follows from \eqref{e:DIntegralIdentity}, \eqref{e:DRDef}, \eqref{e:DReconstructDef}, Definition~\ref{d:DModel} and Assumption~\ref{a:DNonlin} that
\begin{equ}[e:DSPDESolution]
u^\varepsilon_t(x) = \bigl(\mathcal{R}^\varepsilon_t U^\varepsilon_t\bigr)(x)\;, \qquad (t, x) \in \mathbf{R}_+ \times \Lambda_\varepsilon^d\;,
\end{equ}
is a solution of the equation \eqref{e:DSPDE}.
\end{remark}
The following result can be proven in the same way as Theorem~\ref{t:FixedMap}.
\begin{theorem}\label{t:DSolutions}
Let $Z^\varepsilon$ be a sequence of models and let $u^\varepsilon_0$ be a sequence of periodic functions on $\Lambda_\varepsilon^d$. Let furthermore the assumptions of Theorem~\ref{t:FixedMap} and Assumption~\ref{a:DNonlin} be satisfied. Then there exists $T_{\star} \in (0, +\infty]$ such that for every $T < T_{\star}$ the sequence of solution maps $\mathcal{S}^\varepsilon_T : (u^\varepsilon_0, Z^\varepsilon) \mapsto U^\varepsilon$ of the equation \eqref{e:DAbstractEquation} is jointly Lipschitz continuous (uniformly in $\varepsilon$!) in the sense of Theorem~\ref{t:FixedMap}, but for the discrete objects.
\end{theorem}
\begin{remark}
Since we require uniformity in $\varepsilon$ in Theorem~\ref{t:DSolutions}, the solution of equation \eqref{e:DAbstractEquation} is considered only up to some time point $T_{\star}$.
\end{remark}
\section{Inhomogeneous Gaussian models}
\label{s:GaussModels}
In this section we analyse discrete and continuous models which are built from Gaussian noises. In the discrete case, we will work as usual on the grid $\Lambda_\varepsilon^d$, with $\varepsilon = 2^{-N}$ and $N \in \mathbf{N}$, and with the time-space scaling $\mathfrak{s} = (\mathfrak{s}_0, 1, \ldots, 1)$.
We assume that we are given a probability space $(\Omega, \mathcal{F}, \P)$, together with a white noise $\xi$ over the Hilbert space $H \stackrel{\mathclap{\mbox{\tiny def}}}{=} L^2(D)$ (see \cite{Nua06}), where $D \stackrel{\mathclap{\mbox{\tiny def}}}{=} \mathbf{R} \times \mathbf{T}^d$ and $\mathbf{T} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \mathbf{R} / \mathbf{Z}$ is the unit circle. In the sequel, we will always identify $\xi$ with its periodic extension to $\mathbf{R}^{d+1}$.
In order to build a spatial discretisation of $\xi$, we take a compactly supported function $\varrho : \mathbf{R}^d \to \mathbf{R}$, such that for every $y \in \mathbf{Z}^d$ one has
\begin{equ}
\int_{\mathbf{R}^d} \varrho(x) \varrho(x - y)\, dx = \delta_{0, y}\;,
\end{equ}
where $\delta_{\cdot, \cdot}$ is the Kronecker delta. Then, for $x \in \Lambda_\varepsilon^d$, we define the scaled function $\varrho^\varepsilon_x(y) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \varepsilon^{-d} \varrho((y-x)/\varepsilon)$ and
\begin{equ}[e:DNoise]
\xi^\varepsilon(t,x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \xi(t, \varrho^\varepsilon_x)\;, \qquad (t,x) \in \mathbf{R} \times \Lambda_\varepsilon^d\;.
\end{equ}
One can see that $\xi^\varepsilon$ is a white noise on the Hilbert space $H_\varepsilon \stackrel{\mathclap{\mbox{\tiny def}}}{=} L^2(\mathbf{R}) \otimes \ell^2(\mathbf{T}_\varepsilon^d)$, where $\mathbf{T}_\varepsilon \stackrel{\mathclap{\mbox{\tiny def}}}{=} (\varepsilon \mathbf{Z}) / \mathbf{Z}$ and $\ell^2(\mathbf{T}_\varepsilon^d)$ is equipped with the inner product $\langle \cdot, \cdot \rangle_\varepsilon$.
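For instance (a sketch, not used elsewhere): taking $\varrho = \mathbf{1}_{[0,1)^d}$, which satisfies the orthonormality condition above, one has $\Vert \varrho^\varepsilon_x \Vert_{L^2}^2 = \varepsilon^{-d}$, so that $\xi^\varepsilon$, further averaged over time steps of length $\delta t$, is a field of independent centred Gaussians of variance $\varepsilon^{-d}/\delta t$. The parameters below (dimension, mesh, parabolic time step) are assumptions of the sketch.
\begin{verbatim}
import numpy as np

d, N = 3, 4
eps = 2.0**-N              # spatial mesh eps = 2^-N
n_space = int(1 / eps)     # grid points per dimension on the torus
dt = eps**2                # time step (parabolic scaling, s_0 = 2)
n_time = 100

rng = np.random.default_rng(0)
# xi^eps(t, x) = xi(t, rho^eps_x), averaged over a time step of length
# dt: i.i.d. centred Gaussians of variance eps^{-d} / dt.
xi_eps = rng.normal(scale=np.sqrt(eps**-d / dt),
                    size=(n_time,) + (n_space,) * d)
print(xi_eps.var(), eps**-d / dt)   # empirical vs. exact variance
\end{verbatim}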
In the setting of Section~\ref{sec:trunc}, we assume that
$Z^\varepsilon = (\Pi^\varepsilon, \Gamma^\varepsilon, \Sigma^\varepsilon)$ is a discrete model on $\hat{\mathcal{T}}$ such that, for each $\tau \in \hat{\mathcal{F}}$ and each test function $\varphi$, the maps $\langle\Pi^{\varepsilon, t}_x \tau, \varphi\rangle_\varepsilon$, $\Gamma^{\varepsilon, t}_{xy} \tau$ and $\Sigma^{\varepsilon, s t}_{x} \tau$ belong to the inhomogeneous
Wiener chaos of order $\vert\!\vert\!\vert \tau \vert\!\vert\!\vert$ (the number of occurrences of $\Xi$ in $\tau$) with respect to $\xi^\varepsilon$. Moreover, we assume that the distributions of the functions $(t,x) \mapsto \langle \Pi^{\varepsilon, t}_x \tau, \varphi_x\rangle_\varepsilon$, $(t,x) \mapsto \Gamma^{\varepsilon, t}_{x, x + h_1} \tau$ and $(t,x) \mapsto \Sigma^{\varepsilon, t, t+h_2}_{x} \tau$ are stationary, for all $h_1 \in \Lambda_\varepsilon^d$ and $h_2 \in \mathbf{R}$. In what follows, we will call the discrete models with these properties {\it stationary Gaussian discrete models}.
The following result provides a criterion for such a model to be bounded uniformly in $\varepsilon$. In its statement we use the following set:
\begin{equ}[e:TMinus]
\hat{\mathcal{F}}^{-} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \left(\{\tau \in \hat{\mathcal{F}}: |\tau| < 0\} \cup \gen{\mathcal{F}}\right) \setminus \poly{\mathcal{F}}\;.
\end{equ}
\begin{theorem}\label{t:ModelsConvergence}
In the described context, let $\hat{\mathscr{T}} = (\hat{\mathcal{T}}, \mathcal{G})$ be a truncated regularity structure and let $Z^\varepsilon = (\Pi^\varepsilon, \Gamma^\varepsilon, \Sigma^\varepsilon)$ be an admissible stationary Gaussian discrete model on it. Let furthermore the bounds
\begin{equ}[e:GaussModelBoundSigmaGamma]
\mathbf{E} \Big[\Vert \Gamma^\varepsilon \Vert_{\gamma; T}^{(\varepsilon)}\Big]^p \lesssim 1\;, \qquad \mathbf{E}\Big[\Vert \Sigma^\varepsilon \Vert_{\gamma; T}^{(\varepsilon)}\Big]^p \lesssim 1\;,
\end{equ}
hold uniformly in $\varepsilon$ (see Remark~\ref{r:Uniformity}) on the respective generating regularity structure $\gen{\mathscr{T}} = (\gen{\mathcal{T}}, \mathcal{G})$, for every $p \geq 1$, for every $\gamma > 0$ and for some $T \geq c$, where $c > 0$ is from Definition~\ref{d:DKernel} and where the proportionality constants can depend on $p$. Let finally $\Pi^\varepsilon$ be such that for some $\delta > 0$, some $\kappa > 0$ and for each $\tau \in \hat{\mathcal{F}}^{-}$ the bounds
\begin{equs}[e:GaussModelBound]
\mathbf{E}\Big[ |\langle \Pi^{\varepsilon, t}_x \tau, \varphi_x^\lambda\rangle_\varepsilon|^2\Big] &\lesssim \lambda^{2 |\tau| + \kappa}\;, \\
\mathbf{E}\Big[ |\langle \bigl(\Pi^{\varepsilon, t}_x - \Pi^{\varepsilon, s}_x\bigr) \tau, \varphi_x^\lambda\rangle_\varepsilon|^2 \Big] &\lesssim \lambda^{2 (|\tau| - \delta) + \kappa} |t-s|^{2 \delta /\mathfrak{s}_0}\;,
\end{equs}
hold uniformly in $\varepsilon$, all $\lambda \in [\varepsilon,1]$, all $x \in \Lambda_\varepsilon^d$, all $s \neq t \in [-T, T]$ and all $\varphi \in \mathcal{B}^r_0(\mathbf{R}^d)$ with $r > - \lfloor\min \hat{\mathcal{A}}\rfloor$. Then, for every $\gamma > 0$, $p \geq 1$ and $\bar \delta \in [0, \delta)$, one has the following bound on $\hat{\mathscr{T}}$ uniformly in $\varepsilon$:
\begin{equ}[e:GaussianModelBound]
\mathbf{E}\left[\vert\!\vert\!\vert Z^\varepsilon \vert\!\vert\!\vert_{\bar \delta, \gamma; T}^{(\varepsilon)}\right]^p \lesssim 1\;.
\end{equ}
Finally, let $\bar Z^\varepsilon = (\bar \Pi^\varepsilon, \bar \Gamma^\varepsilon, \bar\Sigma^\varepsilon)$ be another admissible stationary Gaussian discrete model on $\hat{\mathcal{T}}$, such that for some $\theta > 0$ and some $\bar{\varepsilon} > 0$ the maps $\Gamma^\varepsilon - \bar{\Gamma}^\varepsilon$, $\Sigma^\varepsilon - \bar{\Sigma}^\varepsilon$ and $\Pi^{\varepsilon} - \bar{\Pi}^\varepsilon$ satisfy the bounds \eqref{e:GaussModelBoundSigmaGamma} and \eqref{e:GaussModelBound} respectively with proportionality constants of order $\bar{\varepsilon}^{2\theta}$. Then, for every $\gamma > 0$, $p \geq 1$ and $\bar \delta \in [0, \delta)$, the models $Z^\varepsilon$ and $\bar Z^\varepsilon$ satisfy on $\hat{\mathscr{T}}$ the bound
\begin{equ}[e:GaussianModelsBound]
\mathbf{E}\left[\vert\!\vert\!\vert Z^\varepsilon; \bar{Z}^\varepsilon \vert\!\vert\!\vert_{\bar \delta, \gamma; T}^{(\varepsilon)}\right]^p \lesssim \bar{\varepsilon}^{\theta p}\;,
\end{equ}
uniformly in $\varepsilon \in (0,1]$.
\end{theorem}
\begin{proof}
Since by assumption $\langle\Pi^{\varepsilon, t}_x \tau, \varphi\rangle_\varepsilon$ belongs to a fixed inhomogeneous Wiener chaos, the equivalence of moments \cite{Nel73} and the bounds \eqref{e:GaussModelBound} yield the respective bounds on the $p$-th moments, for any $p \geq 1$. In particular, the Kolmogorov continuity criterion implies that for such $p$ the bounds
\begin{equs}[e:GaussModelBoundKolmogorov]
\mathbf{E}\bigg[ \sup_{t \in [-T, T]} |\langle \Pi^{\varepsilon, t}_x \tau, \varphi_x^\lambda\rangle_\varepsilon|\bigg]^p &\lesssim \lambda^{p |\tau| + \bar \kappa}\;, \\
\mathbf{E}\bigg[ \sup_{s \neq t \in [-T, T]} \frac{|\langle \bigl(\Pi^{\varepsilon, t}_x - \Pi^{\varepsilon, s}_x\bigr) \tau, \varphi_x^\lambda \rangle_\varepsilon|}{|t-s|^{\bar \delta /\mathfrak{s}_0}} \bigg]^p &\lesssim \lambda^{p (|\tau| - \delta) + \bar \kappa}\;,
\end{equs}
hold uniformly over $x$, $\varphi$ and $\lambda$ as in \eqref{e:GaussModelBound} and for some $\bar \kappa > 0$ depending on $p$. Going now by induction from the elements of $\gen{\mathcal{T}}$ to the elements of $\hat{\mathcal{T}}$, using Lemmas~\ref{l:DPiIntegralBound} and~\ref{l:DGammaIntegralBound} and the discrete multiresolution analysis defined in Section~\ref{ss:DMultiresolutionAnalysis}, we can obtain~\eqref{e:GaussianModelBound} in the same way as in the proof of \cite[Thm.~10.7]{Hai14}. The bound \eqref{e:GaussianModelsBound} can be proved similarly.
\end{proof}
The conditions \eqref{e:GaussModelBound} can be checked quite easily if the maps $\Pi^\varepsilon \tau$ have certain Wiener chaos expansions. More precisely, we assume that there exist kernels $\mathcal{W}^{(\varepsilon; k)}\tau$ such that $\bigl(\mathcal{W}^{(\varepsilon; k)} \tau\bigr)(z) \in H^{\otimes k}_\varepsilon$, for $z \in \mathbf{R} \times \Lambda_\varepsilon^d$, and
\begin{equ}[e:PiWiener]
\langle\Pi^{\varepsilon, t}_0 \tau, \varphi\rangle_\varepsilon = \sum_{k \leq \vert\!\vert\!\vert \tau \vert\!\vert\!\vert} I^\varepsilon_k\Big(\int_{\Lambda_\varepsilon^{d}} \varphi(y)\, \bigl(\mathcal{W}^{(\varepsilon; k)}\tau\bigr)(t,y)\, dy\Big)\;,
\end{equ}
where $I^\varepsilon_k$ is the $k$-th order Wiener integral with respect to $\xi^\varepsilon$ and the space $H_\varepsilon$ is introduced above. Then we define the function
\begin{equ}[e:CovDef]
\bigl(\mathcal{K}^{(\varepsilon; k)}\tau\bigr)(z_1, z_2) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \langle \bigl(\mathcal{W}^{(\varepsilon; k)}\tau\bigr)(z_1), \bigl(\mathcal{W}^{(\varepsilon; k)}\tau\bigr)(z_2) \rangle_{H_\varepsilon^{\otimes k}}\;,
\end{equ}
for $z_1 \neq z_2 \in \mathbf{R} \times \Lambda_\varepsilon^d$, assuming that the expression on the right-hand side is well-defined.
In the same way, we assume that the maps $\bar{\Pi}^\varepsilon \tau$ are given by \eqref{e:PiWiener} via the respective kernels $\bar{\mathcal{W}}^{(\varepsilon; k)}\tau$. Moreover, we define the functions $\delta \mathcal{K}^{(\varepsilon; k)}\tau$ as in \eqref{e:CovDef}, but via the kernels $\bar \mathcal{W}^{(\varepsilon; k)}\tau - \mathcal{W}^{(\varepsilon; k)}\tau$, and we assume that the functions $\mathcal{K}^{(\varepsilon; k)}\tau$ and $\delta \mathcal{K}^{(\varepsilon; k)}\tau$ depend on the time variables $t_1$ and $t_2$ only via $t_1 - t_2$, i.e.
\begin{equ}[e:KernelTime]
\bigl(\mathcal{K}^{(\varepsilon; k)}\tau\bigr)_{t_1 - t_2}(x_1, x_2) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \bigl(\mathcal{K}^{(\varepsilon; k)}\tau\bigr)(z_1, z_2)\;,
\end{equ}
where $z_i = (t_i, x_i)$, and similarly for $\delta \mathcal{K}^{(\varepsilon; k)}\tau$.
The following result shows that the bounds \eqref{e:GaussModelBound} follow from
corresponding bounds on these functions.
\begin{proposition}
\label{p:CovarianceConvergence}
In the described context, we assume that for some $\tau \in \hat{\mathcal{F}}^{-}$ there are values $\alpha > |\tau| \vee (-d/2)$ and $\delta \in (0, \alpha + d / 2)$ such that the bounds
\begin{equs}[e:CovarianceBounds]
|\big(\mathcal{K}^{(\varepsilon; k)}\tau\big)_0(x_1, x_2)| &\lesssim \sum_{\zeta \geq 0} \big( \senorm{0, x_1} + \senorm{0, x_2}\big)^\zeta \senorm{0, x_1 - x_2}^{2 \alpha - \zeta}\;,\\
\frac{|\delta^{0, t} \big(\mathcal{K}^{(\varepsilon; k)}\tau\big)(x_1, x_2)|}{|t|^{2 \delta/ \mathfrak{s}_0}} &\lesssim \sum_{\zeta \geq 0} \big( \senorm{t, x_1} + \senorm{t, x_2}\big)^\zeta \senorm{0, x_1- x_2}^{2 \alpha - 2 \delta - \zeta}\;,
\end{equs}
hold uniformly in $\varepsilon$ for $t \in \mathbf{R}$, $x_1, x_2 \in \Lambda_\varepsilon^d$ and $k \leq \vert\!\vert\!\vert \tau \vert\!\vert\!\vert$, where the operator $\delta^{0, t}$ is defined in \eqref{e:deltaTime}, where $\senorm{z} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \snorm{z} \vee \varepsilon$, and where the sums run over finitely many values of $\zeta \in [0, 2 \alpha - 2 \delta + d)$. Then the bounds \eqref{e:GaussModelBound} hold for $\tau$ with a sufficiently small value of $\kappa > 0$.
Let furthermore \eqref{e:CovarianceBounds} hold for the function $\delta \mathcal{K}^{(\varepsilon; k)}\tau$ with the proportionality constant of order $\bar{\varepsilon}^{2\theta}$, for some $\theta > 0$. Then the required bounds on $\bigl(\Pi^\varepsilon - \bar \Pi^\varepsilon\bigr)\tau$ in Theorem~\ref{t:ModelsConvergence} hold.
\end{proposition}
\begin{proof}
We note that, due to our stationarity assumptions on the models, it is sufficient to check the conditions \eqref{e:GaussModelBound} only for $\langle \Pi^{\varepsilon, t}_0 \tau, \varphi_0^\lambda\rangle_\varepsilon$ and $\langle \bigl(\Pi^{\varepsilon, t}_0 - \Pi^{\varepsilon, 0}_0\bigr) \tau, \varphi_0^\lambda \rangle_\varepsilon$, and respectively for the map $\bar \Pi^\varepsilon$.
We start with the proof of the first statement of this proposition. We denote by $\Pi^{(\varepsilon, k), t}_0 \tau$ the component of $\Pi^{\varepsilon, t}_0 \tau$ belonging to the $k$-th homogeneous Wiener chaos. Furthermore, we will use the following property of the Wiener integral \cite{Nua06}:
\begin{equ}[e:WienerBound]
\mathbf{E}\bigl[I^\varepsilon_k(f)^2\bigr] \leq k!\, \Vert f \Vert^2_{H_\varepsilon^{\otimes k}}\;, \qquad f \in H_\varepsilon^{\otimes k}\;.
\end{equ}
Thus, from this property, \eqref{e:KernelTime} and the first bound in \eqref{e:CovarianceBounds}, we get
\begin{equs}
\mathbf{E}| \langle \Pi^{(\varepsilon, k), t}_0 \tau&, \varphi^\lambda_0 \rangle_\varepsilon |^{2} \lesssim \int_{\Lambda_\varepsilon^d} \int_{\Lambda_\varepsilon^d} |\varphi^\lambda_0 (x_1)|\, |\varphi^\lambda_0(x_2)|\, |\big(\mathcal{K}^{(\varepsilon; k)}\tau\big)_0(x_1, x_2)|\, d x_1 d x_2\\
&\lesssim \lambda^{-2 d} \sum_{\zeta \geq 0} \int_{\substack{|x_1| \leq \lambda \\ |x_2| \leq \lambda}} \bigl( \senorm{0, x_1} + \senorm{0, x_2}\bigr)^\zeta \senorm{0, x_1- x_2}^{2 \alpha - \zeta}\, d x_1 d x_2\\
&\lesssim \lambda^{-2 d} \sum_{\zeta \geq 0} \lambda^{d + \zeta} \int_{|x| \leq 2\lambda} \senorm{0, x}^{2 \alpha - \zeta}\, d x \lesssim \lambda^{2 \alpha}\;,\label{e:PiEstimate}
\end{equs}
for $\lambda \geq \varepsilon$. Here, to have the proportionality constant independent of $\varepsilon$, we need $2 \alpha - \zeta > - d$. Combining the bounds \eqref{e:PiEstimate} for each $k$ with stationarity of $\Pi^{\varepsilon} \tau$, we obtain the first estimate in \eqref{e:GaussModelBound}, with a sufficiently small $\kappa > 0$.
Now, we will investigate the time regularity of the map $\Pi^\varepsilon$. For $|t| \geq \lambda^{\mathfrak{s}_0}$ we can use \eqref{e:PiEstimate} and brutally bound
\begin{equs}
\mathbf{E}| \langle \delta^{0, t} \Pi^{(\varepsilon, k)}_0 \tau, \varphi^\lambda_0 \rangle_\varepsilon |^2 &\lesssim \mathbf{E}| \langle \Pi^{(\varepsilon, k), t}_0 \tau, \varphi^\lambda_0 \rangle_\varepsilon |^2 + \mathbf{E}| \langle \Pi^{(\varepsilon, k), 0}_0 \tau, \varphi^\lambda_0 \rangle_\varepsilon |^2\\
&\lesssim \lambda^{2 \alpha} \lesssim |t|^{2 \delta / \mathfrak{s}_0} \lambda^{2 \alpha - 2 \delta}\;,\label{e:PiTimeSimpleEstimate}
\end{equs}
for any $\delta \geq 0$, which is the required estimate. In the case $|t| < \lambda^{\mathfrak{s}_0}$, the bound \eqref{e:WienerBound} and the second bound in~\eqref{e:CovarianceBounds} yield
\begin{equs}
\mathbf{E}&| \langle \delta^{0, t} \Pi^{(\varepsilon, k)}_0 \tau, \varphi^\lambda_0\rangle_\varepsilon |^2 \lesssim \int_{\Lambda_\varepsilon^d} \int_{\Lambda_\varepsilon^d} |\varphi^\lambda_0 (x_1)|\, |\varphi^\lambda_0 (x_2)|\, |\delta^{0, t} \big(\mathcal{K}^{(\varepsilon; k)}\tau\big)(x_1, x_2)|\, d x_1 d x_2\\
&\qquad \qquad + \int_{\Lambda_\varepsilon^d} \int_{\Lambda_\varepsilon^d} |\varphi^\lambda_0 (x_1)|\, |\varphi^\lambda_0 (x_2)|\, |\delta^{-t, 0} \big(\mathcal{K}^{(\varepsilon; k)}\tau\big)(x_1, x_2)|\, d x_1 d x_2\\
&\lesssim |t|^{2 \delta / \mathfrak{s}_0} \lambda^{-2 d} \sum_{\zeta \geq 0} \int_{\substack{|x_1| \leq \lambda \\ |x_2| \leq \lambda}} \bigl(\senorm{t, x_1} + \senorm{t, x_2}\bigr)^\zeta \senorm{0, x_1- x_2}^{2 \alpha - 2 \delta - \zeta} d x_1 d x_2 \\
&\lesssim |t|^{2 \delta/ \mathfrak{s}_0} \lambda^{2 \alpha - 2 \delta}\;,\label{e:PiTimeNotSimpleEstimate}
\end{equs}
where the integral is bounded as before for $2 \alpha - 2 \delta - \zeta > - d$. Combining the bounds \eqref{e:PiTimeSimpleEstimate} and \eqref{e:PiTimeNotSimpleEstimate} for each value of $k$ with stationarity of $\Pi^{\varepsilon} \tau$, we obtain the second estimate in \eqref{e:GaussModelBound}. The required bounds on $\bigl(\Pi^\varepsilon - \bar \Pi^\varepsilon\bigr)\tau$ can be proved in a similar way.
\end{proof}
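The scaling mechanism in \eqref{e:PiEstimate} can also be checked numerically: pairing a covariance with a pure power-law singularity against two copies of a rescaled test function reproduces the power $\lambda^{2\alpha}$, uniformly in the lattice spacing. In the sketch below (in $d = 1$), the flat bump $\varphi$ and the cutoff of the singularity at scale $\varepsilon$ are assumptions of the illustration.
\begin{verbatim}
import numpy as np

alpha, eps = -0.3, 2.0**-10     # need 2*alpha > -d for convergence

def pairing(lam):
    x = np.arange(-lam, lam, eps)
    phi = np.full(x.size, 1.0 / (2 * lam))               # a flat bump
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    K = np.maximum(np.abs(X1 - X2), eps) ** (2 * alpha)  # |.| v eps cutoff
    return np.sum(phi[:, None] * phi[None, :] * K) * eps**2

for lam in (0.5, 0.25, 0.125):
    print(lam, pairing(lam) / lam ** (2 * alpha))  # approx. constant in lam
\end{verbatim}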
\begin{remark}\label{r:ContinousModelLift}
Assume that we are given an admissible continuous model $Z = (\Pi, \Gamma, \Sigma)$ on $\hat \mathscr{T}$ such that the map $\Pi$ is given on $\hat{\mathcal{F}}^-$ by the expansions \eqref{e:PiWiener}, in which we replace all the discrete objects by their continuous counterparts. Then one can prove analogues of Theorem~\ref{t:ModelsConvergence} and Proposition~\ref{p:CovarianceConvergence} in the continuous case in the same way, i.e.\ with $\varepsilon = 0$ and with continuous objects in place of the discrete ones.
\end{remark}
\subsection{Continuous inhomogeneous models}
\label{ss:ContinuousGauss}
In this section we will show how in some cases we can build a continuous inhomogeneous model from an admissible model in the sense of \cite[Def.~8.29]{Hai14}.
For a white noise $\xi$ on a Hilbert space $H$ as in the beginning of the previous section, we assume that we are given an admissible model $\tilde{Z}=(\tilde{\Pi}, \tilde{\Gamma})$ in the sense of \cite[Def.~8.29]{Hai14} on the truncated regularity structure $\hat \mathscr{T}$ such that for every $\tau \in \hat \mathcal{F}$, every test function $\varphi$ on $\mathbf{R}^{d+1}$ and every pair of points $z, \bar z \in \mathbf{R}^{d+1}$, the maps $\langle\tilde{\Pi}_z \tau, \varphi\rangle$ and $\tilde{\Gamma}_{z \bar z} \tau$ belong to the inhomogeneous Wiener chaos of order $\vert\!\vert\!\vert \tau \vert\!\vert\!\vert$ (the quantity $\vert\!\vert\!\vert \tau \vert\!\vert\!\vert$ is defined in the beginning of Section~\ref{s:GaussModels}) with respect to $\xi$. Furthermore, we assume that for every $\tau \in \hat{\mathcal{F}}$ there exist kernels $\mathcal{W}^{(k)}\tau$ such that for every test function $\varphi$ on $\mathbf{R}^{d+1}$ one has $\int_{\mathbf{R}^{d+1}} \varphi(z) \bigl(\mathcal{W}^{(k)} \tau\bigr)(z)\, dz \in H^{\otimes k}$, where we implicitly assume that the integral is well-defined, and $\tilde{\Pi}_z \tau$ can be written as
\begin{equ}[e:GeneralPiWiener]
\langle \tilde{\Pi}_{z} \tau, \varphi_z\rangle = \sum_{k \leq \vert\!\vert\!\vert \tau \vert\!\vert\!\vert} I_k\Big(S_z^{\otimes k} \int_{\mathbf{R}^{d+1}} \varphi(\bar z)\, \bigl(\mathcal{W}^{(k)}\tau\bigr)(\bar z)\, d\bar z\Big)\;,
\end{equ}
where $I_k$ is the $k$-th Wiener integral with respect to $\xi$, $\varphi_z$ is the recentered version of $\varphi$ and $\{S_z\}_{z \in \mathbf{R}^{d+1}}$ is the group of translations acting on $H$. Using the scalar product in $H^{\otimes k}$ rather than in $H_\varepsilon^{\otimes k}$ and points from $\mathbf{R}^{d + 1}$, we assume that the respective modification of the right-hand side of \eqref{e:CovDef} is well defined and we introduce for these kernels the functions $\mathcal{K}^{(k)}\tau$. In addition, we assume that they satisfy the continuous analogue of \eqref{e:KernelTime} and the first bound in \eqref{e:CovarianceBounds} (when $\varepsilon = 0$). Then for every $\tau \in \hat{\mathcal{F}}$ we can define a distribution $\Pi_{x}^{t} \tau \in \mathcal{S}'(\mathbf{R}^d)$ by
\begin{equ}[e:ModelTranslation]
\langle \Pi_{x}^{t} \tau, \varphi_x \rangle = \sum_{k \leq \vert\!\vert\!\vert \tau \vert\!\vert\!\vert} I_k\Big(S_{(t,x)}^{\otimes k} \int_{\mathbf{R}^{d}} \varphi(y)\, \bigl(\mathcal{W}^{(k)}\tau\bigr)(t,y)\, dy\Big)\;,
\end{equ}
where $\varphi$ is a test function on $\mathbf{R}^d$. In fact, the expression on the right-hand side of \eqref{e:ModelTranslation} is well-defined, because one can show in exactly the same way as in \eqref{e:PiEstimate} that for every test function $\varphi$ on $\mathbf{R}^d$ one has
\begin{equ}
\Bigl|\int_{\mathbf{R}^d} \int_{\mathbf{R}^d} \varphi^\lambda_0 (x_1)\, \varphi^\lambda_0 (x_2)\, \big(\mathcal{K}^{(k)}\tau\big)_0(x_1, x_2)\, d x_1 d x_2\Bigr| \lesssim \lambda^{2 \alpha}\;.
\end{equ}
Finally, defining the maps $\Gamma$ and $\Sigma$ by
\begin{equ}[e:GammaTranslation]
\Gamma^t_{x y} = \tilde{\Gamma}_{(t,x), (t,y)}\;, \qquad \Sigma^{s t}_{x} = \tilde{\Gamma}_{(s,x), (t,x)}\;,
\end{equ}
one can see that $(\Pi, \Gamma, \Sigma)$ is an admissible inhomogeneous model on $\hat \mathscr{T}$.
\section{Convergence of the discrete dynamical $\Phi^4_3$ model}
\label{s:DPhi}
In this section we use the theory developed above to prove convergence of the solutions of \eqref{e:DPhiRenorm}, where $\Delta^\varepsilon$ is the nearest-neighbour approximation of $\Delta$ and the discrete noise $\xi^\varepsilon$ is defined in \eqref{e:SimpleDNoise} via a space-time white noise $\xi$.
Example~\ref{ex:Laplacian} yields that Assumption~\ref{a:DOperator} is satisfied, and moreover $\xi^\varepsilon$ is a discrete noise as in \eqref{e:DNoise}. The time-space scaling for the equation \eqref{e:Phi} is $\mathfrak{s}=(2,1,1,1)$ and the kernels $K$ and $K^\varepsilon$ are defined in Lemma~\ref{l:DGreenDecomposition} with the parameters $\beta = 2$ and $r > 2$, for the operators $\Delta$ and $\Delta^\varepsilon$ respectively.
The regularity structure $\mathscr{T}=(\mathcal{T}, \mathcal{G})$ for the equation \eqref{e:Phi}, introduced in Section~\ref{ss:RegStruct}, has the model space $\mathcal{T} = \mathrm{span} \{\mathcal{F}\}$, where
\begin{equs}[e:CFDef]
\mathcal{F} = \{\mathbf{1}, \Xi, \Psi, \Psi^2, \Psi^3, \Psi^2 X_i, \mathcal{I}(\Psi^3) \Psi, \mathcal{I}(\Psi^3) \Psi^2&, \mathcal{I}(\Psi^2) \Psi^2, \mathcal{I}(\Psi^2),\\
&\mathcal{I}(\Psi) \Psi, \mathcal{I}(\Psi) \Psi^2, X_i, \ldots \}\;,
\end{equs}
$\Psi \stackrel{\mathclap{\mbox{\tiny def}}}{=} \mathcal{I}(\Xi)$, $|\Xi| = \alpha \in \big(-\frac{18}{7}, -\frac{5}{2}\big)$ and the index $i$ corresponds to any of the three spatial dimensions; see \cite[Sec.~9.2]{Hai14} for a complete description of the model space $\mathcal{T}$. The homogeneities $\mathcal{A}$ of the symbols in $\mathcal{F}$ are defined recursively by the rules \eqref{e:defDegree}. The bound $\alpha > -\frac{18}{7}$ is required in order for the collection of symbols of negative degree generated by the procedure of \cite[Sec.~8]{Hai14} not to depend on $\alpha$.
A two-parameter renormalisation subgroup $\mathfrak{R}^0 \subset \mathfrak{R}$ for this problem consists of the linear maps $M$ on $\mathcal{T}$ defined in \cite[Equ.~9.3]{Hai14}.
In the proof of Theorem~\ref{t:Phi} in Section~\ref{s:ProofOfPhi} we will make use of the Gaussian models on $\mathscr{T}$ built in \cite[Thm.~10.22]{Hai14}. As one can see from Remark~\ref{r:ContinousModelLift} and the continuous versions of the bounds \eqref{e:CovarianceBounds}, one can expect a concrete realisation of an abstract symbol $\tau$ to be a function in time if $|\tau| > -\frac{3}{2}$. In our case, the symbols $\Xi$ and $\Psi^3$ do not satisfy this condition, having homogeneities $\alpha < -\frac{5}{2}$ and $3 (\alpha + 2) < -\frac{3}{2}$ respectively. This was exactly the reason for introducing a truncated regularity structure in Section~\ref{sec:trunc}, which primarily means that we can remove these problematic symbols from $\mathscr{T}$. More precisely, we introduce a new symbol $\bar{\Psi} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \mathcal{I}(\Psi^3)$ and the set
\begin{equ}
\gen{\mathcal{F}} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \{\Psi, \bar \Psi\} \cup \poly{\mathcal{F}}\;.
\end{equ}
Furthermore, we remove $\Xi$ and $\Psi^3$ from $\mathcal{F}$ in \eqref{e:CFDef} and replace all the occurrences of $\mathcal{I}(\Psi^3)$ by $\bar \Psi$, which gives
\begin{equ}
\hat{\mathcal{F}} = \{\mathbf{1}, \Psi, \Psi^2, \Psi^2 X_i, \Psi \bar{\Psi}, \Psi^2 \bar{\Psi}, \mathcal{I}(\Psi^2) \Psi^2, \mathcal{I}(\Psi^2), \mathcal{I}(\Psi) \Psi, \mathcal{I}(\Psi) \Psi^2, X_i, \ldots \}\;.
\end{equ}
Then the model spaces of the regularity structures $\gen{\mathscr{T}}$ and $\hat{\mathscr{T}}$ from Definition~\ref{d:TruncSets} are the linear spans of $\gen{\mathcal{F}}$ and $\hat{\mathcal{F}}$ respectively, and the set $\hat{\mathcal{F}}^{-}$ from \eqref{e:TMinus} is given in this case by
\begin{equ}[e:CFMinus]
\hat{\mathcal{F}}^{-} = \{\Psi,\, \bar{\Psi},\, \Psi^2,\, \Psi^2 X_i,\, \Psi \bar{\Psi},\, \mathcal{I}(\Psi^2) \Psi^2,\, \Psi^2 \bar{\Psi}\}\;.
\end{equ}
In the following lemma we show that the nonlinearities in \eqref{e:Phi} and \eqref{e:DPhiRenorm} satisfy the required assumptions, provided that the renormalisation constant is dealt with at the level of the corresponding models.
\begin{lemma}\label{l:PhiLipschiz}
Let $\hat \alpha \stackrel{\mathclap{\mbox{\tiny def}}}{=} \min \hat{\mathcal{A}}$ and let $a$ and $\lambda$ be as in \eqref{e:Phi}. Then, for any $\gamma > |2 \hat \alpha|$ and any $\eta \leq \hat \alpha$, the maps
\begin{equ}[e:PhiNonlin]
F(\tau) = F^\varepsilon(\tau) = - \mathcal{Q}_{\le 0} \bigl(a \tau + \lambda \tau^3\bigr) + \Xi
\end{equ}
satisfy Assumptions \ref{a:Nonlin} and \ref{a:DNonlin} with
\begin{equ}
F_0 = F^\varepsilon_0 = \Xi - \lambda \Psi^3\;, \qquad I_0 = I^\varepsilon_0 = \Psi - \lambda \bar{\Psi}\;,
\end{equ}
and $\bar{\gamma} = \gamma + 2\hat\alpha$, $\bar{\eta} = 3\eta$.
\end{lemma}
\begin{proof}
The space $\mathcal{T}_\mathcal{U} \subset \hat{\mathcal{T}}$ introduced in Section~\ref{ss:RegStruct} is spanned by polynomials and elements of the form $\mathcal{I} (\tau)$. Thus, the fact that the function $\hat{F}$ defined in \eqref{e:NonlinAs} maps $\{I_0 + \tau : \tau \in \hat{\mathcal{T}} \cap \mathcal{T}_\mathcal{U}\}$ into $\hat{\mathcal{T}}$ is obvious. The bounds \eqref{e:Lipschitz} in the continuous and discrete cases can be proved in exactly the same way as in \cite[Prop.~6.12]{Hai14}, using Remarks~\ref{r:DistrMult} and \ref{r:DDistrMult} respectively.
\end{proof}
Our following aim is to define a discrete model $Z^\varepsilon = (\Pi^\varepsilon, \Gamma^\varepsilon, \Sigma^\varepsilon)$ on $\gen{\mathscr{T}}$, and to extend it in the canonical way to $\hat{\mathscr{T}}$ as in Remark~\ref{r:ModelLift}. To this end, we postulate, for $s, t \in \mathbf{R}$ and $x, y \in \Lambda_\varepsilon^3$,
\begin{equ}[e:ModelOnPsi]
\bigl(\Pi^{\varepsilon, t}_x \Psi\bigr)(y) = \bigl(K^\varepsilon \star_\varepsilon \xi^\varepsilon\bigr)(t,y)\;, \qquad \Gamma^{\varepsilon, t}_{x y} \Psi = \Psi\;, \qquad \Sigma^{\varepsilon, s t}_{x} \Psi = \Psi\;.
\end{equ}
Furthermore, we define the function $\bar{\psi}^\varepsilon(t,x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \bigl(K^\varepsilon \star_\varepsilon \bigl(\Pi^{\varepsilon, t}_x \Psi\bigr)^3\bigr)(t,x)$ and set
\begin{equs}
\bigl(\Pi^{\varepsilon, t}_x \bar{\Psi}\bigr)(y) = \bar{\psi}^\varepsilon(t,y) - \bar{\psi}^\varepsilon&(t,x)\;,\qquad \Gamma^{\varepsilon, t}_{x y} \bar{\Psi} = \bar{\Psi} - \left( \bar{\psi}^\varepsilon(t,y) - \bar{\psi}^\varepsilon(t,x)\right) \mathbf{1}\;,\\
\Sigma^{\varepsilon, s t}_{x} \bar{\Psi} = \bar{\Psi} &- \left( \bar{\psi}^\varepsilon(t,x) - \bar{\psi}^\varepsilon(s,x)\right) \mathbf{1}\;.\label{e:DModelSimple}
\end{equs}
Postulating the actions of these maps on the abstract polynomials in the standard way, we canonically extend $Z^\varepsilon$ to the whole $\hat{\mathscr{T}}$.
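All the objects in \eqref{e:ModelOnPsi} and \eqref{e:DModelSimple} are built from the single operation $K^\varepsilon \star_\varepsilon \,\cdot\,$, i.e.\ a Riemann sum in time combined with an $\varepsilon^d$-weighted sum over the grid. A minimal numerical realisation could look as follows; this is a sketch only: the circular FFT convolution approximates the convolution on $\mathbf{R} \times \mathbf{T}^d$ provided the kernel is supported well inside the time window, and the array shapes and weights are assumptions of the sketch.
\begin{verbatim}
import numpy as np

def conv_eps(K, f, eps, dt):
    """(K star_eps f): weight dt per time step and eps^d per grid site,
    computed with the convolution theorem on the periodic grid.
    K, f have shape (n_time, n_space, ..., n_space)."""
    d = K.ndim - 1
    Kf = np.fft.ifftn(np.fft.fftn(K) * np.fft.fftn(f)).real
    return dt * eps**d * Kf

# E.g. psi = conv_eps(K_eps, xi_eps, eps, dt) realises the first map in
# e:ModelOnPsi; cubing pointwise and convolving once more gives psi_bar.
\end{verbatim}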
Furthermore, we define the renormalisation constants\footnote{One can show that $C^{(\varepsilon)}_1 \sim \varepsilon^{-1}$ and $C^{(\varepsilon)}_2 \sim \log \varepsilon$.}
\begin{equ}[e:RenormConsts]
C^{(\varepsilon)}_1 \stackrel{\mathclap{\mbox{\tiny def}}}{=} \int_{\mathbf{R} \times \Lambda_\varepsilon^3} \bigl(K^\varepsilon(z) \bigr)^2\, dz\;, \quad C^{(\varepsilon)}_2 \stackrel{\mathclap{\mbox{\tiny def}}}{=} 2 \int_{\mathbf{R} \times \Lambda_\varepsilon^3} \bigl(K^\varepsilon \star_\varepsilon K^\varepsilon\bigr)(z)^2\, K^\varepsilon(z)\, dz\;,
\end{equ}
and use them to define the renormalisation map $M^\varepsilon$ as in \cite[Sec. 9.2]{Hai14}. Finally, we define the renormalised model $\hat Z^\varepsilon$ for $Z^\varepsilon$ and $M^\varepsilon$ as in Remark~\ref{r:RenormModel}. Using the model $\hat Z^\varepsilon$ in~\eqref{e:DSPDESolution} we obtain a solution to the discretised $\Phi^4_3$ equation \eqref{e:DPhiRenorm} with
\begin{equ}
C^{(\varepsilon)} \stackrel{\mathclap{\mbox{\tiny def}}}{=} 3 \lambda C^{(\varepsilon)}_1 - 9 \lambda^2 C^{(\varepsilon)}_2\;,
\end{equ}
where $\lambda$ is the coupling constant from \eqref{e:Phi}.
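As a purely numerical sanity check of the rate $C^{(\varepsilon)}_1 \sim \varepsilon^{-1}$ quoted in the footnote, one can evaluate the first integral in \eqref{e:RenormConsts} as a Riemann sum. The sketch below replaces the actual kernel of Lemma~\ref{l:DGreenDecomposition} by the continuum heat kernel sampled on the grid and truncated to the unit box; since this stand-in has the same order $-3$ of singularity, one expects the same divergence rate.
\begin{verbatim}
import numpy as np

def C1(eps, T=0.25):
    """Riemann sum for int (K^eps)^2 dz on R x Lambda_eps^3, with the
    3d heat kernel as a stand-in for K^eps (an assumption)."""
    dt = eps**2 / 4
    xs = np.arange(-0.5, 0.5, eps)
    X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
    r2 = X**2 + Y**2 + Z**2
    total = 0.0
    for t in np.arange(dt, T, dt):
        K = (4 * np.pi * t) ** -1.5 * np.exp(-r2 / (4 * t))
        total += np.sum(K**2) * eps**3 * dt
    return total

for N in (3, 4, 5):
    eps = 2.0**-N
    print(N, C1(eps) * eps)   # roughly constant, i.e. C1 ~ 1/eps
\end{verbatim}
Before giving a proof of Theorem~\ref{t:Phi} we provide some technical results.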
\subsection{Discrete functions with prescribed singularities}
\label{ss:SingularFunctions}
It follows from Proposition~\ref{p:CovarianceConvergence} that the ``strength'' of the singularity of a kernel determines the regularity of the respective distribution. In this section we provide some properties of singular discrete functions. As usual, we fix a scaling $\mathfrak{s}=(\mathfrak{s}_0, 1, \ldots, 1)$ of $\mathbf{R}^{d+1}$ with $\mathfrak{s}_0 \geq 1$.
For a function $K^\varepsilon$ defined on $\mathbf{R} \times \Lambda_\varepsilon^d$ and supported in a ball centered at the origin, we denote by $D_{i, \varepsilon}$ the finite difference derivative, i.e.
\begin{equ}
D_{i, \varepsilon} K^\varepsilon(t,x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \varepsilon^{-1} \left( K^\varepsilon(t, x + \varepsilon e_i) - K^\varepsilon(t,x) \right)\;,
\end{equ}
where $\{e_i\}_{i = 1 \ldots d}$ is the canonical basis of $\mathbf{R}^d$, and for $k = (k_0, k_1, \ldots, k_d) \in \mathbf{N}^{d+1}$ we define $D_\varepsilon^k \stackrel{\mathclap{\mbox{\tiny def}}}{=} D_t^{k_0} D^{k_1}_{1,\varepsilon} \ldots D^{k_d}_{d,\varepsilon}$. We allow the function $K^\varepsilon$ to be non-differentiable in time only on the set $P_0 \stackrel{\mathclap{\mbox{\tiny def}}}{=} \{(0, x) : x \in \Lambda_\varepsilon^d\}$. Furthermore, we define for $\zeta \in \mathbf{R}$ and $m \geq 0$ the quantity
\begin{equ}[e:SingularKernel]
\wnorm{K^\varepsilon}_{\zeta; m} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \max_{|k|_\mathfrak{s} \leq m} \sup_{z \notin P_0} \frac{\bigl| D^{k}_{\varepsilon} K^\varepsilon(z) \bigr|}{\senorm{z}^{(\zeta - |k|_\mathfrak{s}) \wedge 0}}\;,
\end{equ}
where $z \in \mathbf{R} \times \Lambda_\varepsilon^d$, $k \in \mathbf{N}^{d + 1}$ and $\senorm{z} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \snorm{z} \vee \varepsilon$.
By analogy with Remark~\ref{r:Uniformity}, we always consider a sequence of functions parametrised by $\varepsilon = 2^{-N}$ with $N \in \mathbf{N}$, and we assume the bounds to hold for all $\varepsilon$ with proportionality constants independent of $\varepsilon$. Thus, if $\wnorm{K^\varepsilon}_{\zeta; m} < \infty$, then we will say that $K^\varepsilon$ is of order $\zeta$.
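For instance (a numerical illustration, in $d = 1$ and ignoring the time variable; the power-law function below is an assumption of the sketch), the function $|x|^\zeta$ with $\zeta \leq 0$ is of order $\zeta$, and one finite difference lowers the order by exactly one, uniformly in $\varepsilon$, as the weighted quotient in \eqref{e:SingularKernel} shows.
\begin{verbatim}
import numpy as np

def D_eps(K, eps):
    """Forward finite difference on a 1d grid (the spatial D_{i,eps})."""
    return (K[1:] - K[:-1]) / eps

zeta = -0.5
for N in (6, 8, 10):
    eps = 2.0**-N
    x = np.arange(eps, 1.0, eps)       # grid points away from the origin
    K = np.abs(x) ** zeta              # a function of order zeta
    xe = np.maximum(x[:-1], eps)       # the weight |||z|||_eps = |z| v eps
    ratio = np.abs(D_eps(K, eps)) / xe ** (zeta - 1)
    print(N, ratio.max())              # stays bounded: order zeta - 1
\end{verbatim}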
\begin{remark}\label{r:DiscontinuousFunctions}
We stress the fact that by our assumptions the functions $K^\varepsilon$ are defined also at the origin. In particular, $K^\varepsilon$ can have a discontinuity at $t = 0$ and its time derivative behaves in the worst case as the Dirac delta function at the origin.
\end{remark}
The following result provides bounds on products and discrete convolutions $\star_\varepsilon$.
\begin{lemma}\label{l:FuncsConv}
Let the functions $K^\varepsilon_1$ and $K^\varepsilon_2$ be of orders $\zeta_1$ and $\zeta_2$ respectively. Then we have the following results:
\begin{itemize}
\item If $\zeta_1, \zeta_2 \leq 0$, then $K^\varepsilon_1 K^\varepsilon_2$ is of order $\zeta_1 + \zeta_2$ and for every $m \geq 0$ one has
\begin{equ}[e:FuncsProd]
\wnorm{K^\varepsilon_1 K^\varepsilon_2}_{\zeta_1 + \zeta_2; m} \lesssim \wnorm{K^\varepsilon_1}_{\zeta_1; m} \wnorm{K^\varepsilon_2}_{\zeta_2; m}\;.
\end{equ}
Moreover, if both $K^\varepsilon_1$ and $K^\varepsilon_2$ are continuous in the time variable on the whole of $\mathbf{R}$, then $K^\varepsilon_1 K^\varepsilon_2$ is continuous as well.
\item If $\zeta_1 \wedge \zeta_2 > - |\mathfrak{s}|$ and $\bar{\zeta} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \zeta_1 + \zeta_2 + |\mathfrak{s}| \notin \mathbf{N}$, then $K^\varepsilon_1 \star_\varepsilon K^\varepsilon_2$ is continuous in the time variable and one has the bound
\begin{equ}[e:FuncsConv]
\wnorm{K^\varepsilon_1 \star_\varepsilon K^\varepsilon_2}_{\bar{\zeta}; m} \lesssim \wnorm{K^\varepsilon_1}_{\zeta_1; m} \wnorm{K^\varepsilon_2}_{\zeta_2; m}\;.
\end{equ}
\end{itemize}
In all these estimates the proportionality constants depend only on the support of the functions $K^\varepsilon_i$ and are independent of $\varepsilon$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{l:FuncsConv}]
The bound \eqref{e:FuncsProd} follows from the Leibniz rule for the discrete derivative:
\begin{equ}[e:DLeibniz]
D_\varepsilon^k \bigl(K^\varepsilon_1 K^\varepsilon_2\bigr)(z) = \sum_{l \leq k} \binom{k}{l} D_\varepsilon^l K^\varepsilon_1(z)\, D_\varepsilon^{k-l} K^\varepsilon_2\bigl(z + (0, \varepsilon l)\bigr)\;,
\end{equ}
where $k, l \in \mathbf{N}^d$, as well as from the standard Leibniz rule in the time variable. The bound \eqref{e:FuncsConv} can be proved similarly to \cite[Lem.~10.14]{Hai14}, but using the Leibniz rule \eqref{e:DLeibniz}, summation by parts for the discrete derivative and the fact that the products
\begin{equ}
(x)_{k, \varepsilon} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \prod_{i=1}^d \prod_{0 \leq j < k_i}(x_i - \varepsilon j)
\end{equ}
with $k \in \mathbf{N}^d$ play the role of polynomials for the discrete derivative.
When bounding the time derivative of $K^\varepsilon_1 \star_\varepsilon K^\varepsilon_2$, we convolve in the worst case a function which behaves as Dirac's delta at the origin with another one which has a jump there (see Remark~\ref{r:DiscontinuousFunctions}). This operation gives us a function whose derivative can have a jump at the origin, but is not Dirac's delta. This fact explains why $K^\varepsilon_1 \star_\varepsilon K^\varepsilon_2$ is continuous in time.
\end{proof}
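The discrete Leibniz rule \eqref{e:DLeibniz} is an exact algebraic identity between grid functions and can be verified directly; here is a minimal check in $d = 1$ for $k = 2$ (the two periodic test functions are arbitrary choices of this sketch).
\begin{verbatim}
import numpy as np
from math import comb

eps = 2.0**-6
x = np.arange(0.0, 1.0, eps)
f = np.sin(2 * np.pi * x)            # arbitrary periodic grid functions
g = np.cos(6 * np.pi * x)

def D(h, order=1):
    """Iterated forward difference on the periodic grid."""
    for _ in range(order):
        h = (np.roll(h, -1) - h) / eps
    return h

# D^2(fg)(x) = sum_{l <= 2} C(2,l) D^l f(x) * D^{2-l} g(x + eps*l)
k = 2
lhs = D(f * g, k)
rhs = sum(comb(k, l) * D(f, l) * np.roll(D(g, k - l), -l)
          for l in range(k + 1))
assert np.allclose(lhs, rhs)
print("max error:", np.max(np.abs(lhs - rhs)))
\end{verbatim}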
The following lemma, whose proof is almost identical to that of \cite[Lem.~10.18]{Hai14}, provides a bound on an increment of a singular function.
\begin{lemma}\label{l:FuncIncr}
Let a function $K^\varepsilon$ be of order $\zeta \leq 0$. Then for every $\kappa \in [0,1]$, $t \in \mathbf{R}$ and $x_1, x_2 \in \Lambda_\varepsilon^d$ one has
\begin{equ}
|K^\varepsilon(t, x_1) - K^\varepsilon(t, x_2)| \lesssim |x_1 - x_2|^\kappa \Bigl(\senorm{t, x_1}^{\zeta - \kappa} + \senorm{t, x_2}^{\zeta - \kappa} \Bigr) \wnorm{K^\varepsilon}_{\zeta; 1}\;.
\end{equ}
\end{lemma}
For a discrete singular function $K^\varepsilon$, we define the function $\mathscr{R}_\varepsilon K^\varepsilon$ by
\begin{equ}[e:FuncRenorm]
\left(\mathscr{R}_\varepsilon K^\varepsilon\right)(\varphi) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \int_{\mathbf{R} \times \Lambda_\varepsilon^d} K^\varepsilon(z) \left( \varphi(z) - \varphi(0) \right)\, dz\;,
\end{equ}
for every compactly supported test function $\varphi$ on $\mathbf{R}^{d+1}$. The following result can be proved similarly to \cite[Lem.~10.16]{Hai14} and using the statements from the proof of Lemma~\ref{l:FuncsConv}.
\begin{lemma}\label{l:FuncRenorm}
Let the functions $K^\varepsilon_1$ and $K^\varepsilon_2$ be of orders $\zeta_1$ and $\zeta_2$ respectively, with $\zeta_1 \in \bigl(-|\mathfrak{s}|-1, -|\mathfrak{s}|\bigr]$ and $\zeta_2 \in \bigl(-2 |\mathfrak{s}|-\zeta_1, 0\bigr]$. Then the function $\bigl(\mathscr{R}_\varepsilon K^\varepsilon_1\bigr) \star_\varepsilon K^\varepsilon_2$ is continuous in the time variable, is of order $\bar{\zeta} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \zeta_1 + \zeta_2 + |\mathfrak{s}|$ and, for any $m \geq 0$, one has
\begin{equ}
\wnorm{\bigl(\mathscr{R}_\varepsilon K^\varepsilon_1\bigr) \star_\varepsilon K^\varepsilon_2}_{\bar{\zeta}; m} \lesssim \wnorm{K^\varepsilon_1}_{\zeta_1; m} \wnorm{K^\varepsilon_2}_{\zeta_2; m + \mathfrak{s}_0}\;.
\end{equ}
\end{lemma}
The following result shows how certain convolutions change singular functions. Its proof is similar to \cite[Lem.~10.17]{Hai14}.
\begin{lemma}\label{l:SmoothConv}
Let, for some $\bar \varepsilon \in [\varepsilon, 1]$, the function $\psi^{\bar \varepsilon, \varepsilon} : \mathbf{R} \times \Lambda_\varepsilon^d \to \mathbf{R}$ be smooth in the time variable, supported in the ball $B(0, R \bar \varepsilon) \subset \mathbf{R}^{d+1}$ for some $R \geq 1$, and satisfy
\begin{equ}[e:SmoothConv]
\int_{\mathbf{R} \times \Lambda_\varepsilon^d} \psi^{\bar \varepsilon, \varepsilon}(z)\, dz = 1\;, \qquad |D^k_\varepsilon \psi^{\bar \varepsilon, \varepsilon}(z)| \lesssim \bar{\varepsilon}^{-|\mathfrak{s}| - |k|_\mathfrak{s}}\;,
\end{equ}
for all $z \in \mathbf{R} \times \Lambda_\varepsilon^d$ and $k \in \mathbf{N}^{d+1}$, where the proportionality constant in the bound can depend on $k$. If $K^\varepsilon$ is of order $\zeta \in (-|\mathfrak{s}|, 0)$, then for all $\kappa \in (0,1]$ one has
\begin{equ}
\wnorm{K^\varepsilon - K^\varepsilon \star_\varepsilon \psi^{\bar \varepsilon, \varepsilon}}_{\zeta - \kappa; m} \lesssim \bar{\varepsilon}^\kappa \wnorm{K^\varepsilon}_{\zeta; m + \mathfrak{s}_0}\;.
\end{equ}
\end{lemma}
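Functions satisfying \eqref{e:SmoothConv} arise naturally by cell-averaging a continuous mollifier over the grid; this is exactly how $\psi^{\bar\varepsilon, \varepsilon}$ is constructed in Section~\ref{s:ProofOfPhi} below, and the averaging makes the first identity in \eqref{e:SmoothConv} hold exactly, because the cells tile the space. A sketch in $d = 1$ at a fixed time slice, with an assumed polynomial bump $\psi(u) = \frac{15}{16}(1-u^2)^2$ on $[-1,1]$:
\begin{verbatim}
import numpy as np

def F(u):
    """Antiderivative of the bump, normalised so F(1) - F(-1) = 1."""
    u = np.clip(u, -1.0, 1.0)
    return 15.0 / 16.0 * (u - 2 * u**3 / 3 + u**5 / 5)

eps, eps_bar = 2.0**-7, 2.0**-3
x = np.arange(-0.5, 0.5, eps)   # grid covering the support of the bump
# psi^{eps_bar, eps}(x): exact average of psi^{eps_bar} over the cell
# [x - eps/2, x + eps/2], computed from the antiderivative.
lo = (x - eps / 2) / eps_bar
hi = (x + eps / 2) / eps_bar
psi_be = (F(hi) - F(lo)) / eps

print(np.sum(psi_be) * eps)     # discrete integral equals 1 exactly
\end{verbatim}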
\subsection{Convergence of lattice approximations of the $\Phi^4_3$ measure}
In this section we provide some properties of the lattice approximations $\mu_\varepsilon$ of the $\Phi^4_3$ measure, defined in \eqref{e:mu_eps}, which will be used in the proof of Corollary~\ref{c:Phi}. We start with tightness and moment estimates.
\begin{proposition}\label{prop:mu_tight}
If $a > 0$ and the coupling constant $\lambda$ in \eqref{e:DAction} is small enough, then for every $\alpha < -\frac{1}{2}$ the sequence $\mu_\varepsilon$ is tight in $\mathcal{C}^\alpha$ as $\varepsilon \to 0$ with uniformly bounded
moments of all orders.
\end{proposition}
\begin{proof}
The estimate \cite[Eq.~8.2]{BFS83} implies that the $2 n$-th moment of $\mu_\varepsilon$ is bounded by the second moment (up to a multiplicative constant depending on $n$). Moreover, it follows from \cite[Thm.~6.1]{BFS83} that for any test function $\varphi \in \mathcal{C}_0^{\infty}(\mathbf{R}^3)$ one has
\begin{equ}
\int \Phi^\varepsilon(\varphi)^2 \mu_\varepsilon(d \Phi^\varepsilon) = \int \Phi^\varepsilon(\varphi)^2 \hat{\mu}_\varepsilon(d \Phi^\varepsilon) + \mathcal{O} \bigl(\lambda^2 \Vert \varphi \Vert^2_{L^2}\bigr)\;,
\end{equ}
where $\hat{\mu}_\varepsilon$ is the Gaussian measure given by \eqref{e:mu_eps} and \eqref{e:DAction} with $\lambda = C^{(\varepsilon)} = 0$. Since the covariance of $\hat{\mu}_\varepsilon$ is the kernel of $(a - \Delta^\varepsilon)^{-1}$ where $\Delta^\varepsilon$ is the nearest-neighbour approximation of the Laplacian $\Delta$ (see \cite[Eq.~3.2]{BFS83}), one has the bound
\begin{equ}
\int \Phi^\varepsilon(\varphi^{\nu})^2 \hat{\mu}_\varepsilon(d \Phi^\varepsilon) \lesssim \nu^{-1 - \kappa}\;,
\end{equ}
for any $\kappa > 0$ and any scaling parameter $\nu \in [\varepsilon, 1]$. This yields the respective bounds on the moments of $\mu_\varepsilon$ from which the claim follows.
\end{proof}
The following result shows that the measures $\mu_\varepsilon$ in fact converge as $\varepsilon \to 0$.
\begin{proposition}\label{eq:mu_limit}
The measures $\mu_\varepsilon$ on $\mathcal{C}^\alpha$ converge to the $\Phi^4_3$ measure \eqref{e:PhiMeasure}.
\end{proposition}
\begin{proof}
By Proposition~\ref{prop:mu_tight}, we can choose a subsequence of $\mu_\varepsilon$ weakly converging
to a limit $\mu$. Combining this with
\cite[Thm.~2.1]{Park} (see also \cite{ParkOld})
shows that $\mu$ coincides with the $\Phi^4_3$ measure \eqref{e:PhiMeasure} constructed
in \cite{Fel74}.
\end{proof}
\subsection{Proof of the convergence result}
\label{s:ProofOfPhi}
Using the results from the previous section, we are ready to prove Theorem~\ref{t:Phi}.
\begin{proof}[Proof of Theorem~\ref{t:Phi}]
In order to prove the claim, we proceed as in \cite{Park} and introduce intermediate equations driven by a smooth noise. Precisely, we take a function $\psi : \mathbf{R}^{4} \to \mathbf{R}$ which is smooth, compactly supported and integrates to $1$, and for some $\bar{\varepsilon} \in [\varepsilon,1]$ we define $\psi^{\bar{\varepsilon}}(t, x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \bar{\varepsilon}^{-|\mathfrak{s}|} \psi\bigl(\bar{\varepsilon}^{-2} t, \bar{\varepsilon}^{-1} x\bigr)$ and the mollified noise $\xi^{\bar \varepsilon, 0} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \xi \star \psi^{\bar{\varepsilon}}$. Then we denote by $\Phi^{\bar \varepsilon, 0}$ the global solution of
\begin{equ}
\partial_t \Phi^{\bar \varepsilon, 0} = \Delta \Phi^{\bar \varepsilon, 0} + \bigl(C^{(\bar \varepsilon, 0)} - a\bigr) \Phi^{\bar \varepsilon, 0} - \lambda \bigl(\Phi^{\bar \varepsilon, 0}\bigr)^3 + \xi^{\bar \varepsilon, 0}\;, \qquad \Phi^{\bar \varepsilon, 0}(0, \cdot) = \Phi_0(\cdot)\;,
\end{equ}
where $C^{(\bar \varepsilon, 0)} = 3 \lambda C^{(\bar \varepsilon, 0)}_1 - 9 \lambda^2 C^{(\bar \varepsilon, 0)}_2$, and $C^{(\bar \varepsilon, 0)}_1$ and $C^{(\bar \varepsilon, 0)}_2$ are as in \cite[Thm.~10.22 and Eq.~9.21]{Hai14}.
Let $\tilde{Z}^{\bar \varepsilon, 0}$ and $\tilde{Z}$ be the models on $\mathscr{T}$ built in \cite[Thm.~10.22]{Hai14} via the noises $\xi^{\bar \varepsilon, 0}$ and $\xi$ respectively. We will be interested only in their restrictions to the truncated regularity structure $\hat{\mathscr{T}}$. It follows from the proof of the latter theorem that we are exactly in the setting of Section~\ref{ss:ContinuousGauss}, and we can define respective inhomogeneous models $\hat Z^{\bar \varepsilon, 0}$ and $\hat Z$ on $\hat{\mathscr{T}}$ as in \eqref{e:ModelTranslation} and \eqref{e:GammaTranslation}. Furthermore, Remark~\ref{r:ContinousModelLift} and the bounds obtained in the proof of \cite[Thm.~10.22]{Hai14} on the elements in the expansions \eqref{e:ModelTranslation} of the models yield the following bounds:
\begin{equ}[e:ModelConvFirst]
\mathbf{E} \left[\vert\!\vert\!\vert \hat Z \vert\!\vert\!\vert_{\delta, \gamma; T}\right]^p \lesssim 1\;, \qquad \mathbf{E} \left[\vert\!\vert\!\vert \hat Z^{\bar \varepsilon, 0}; \hat Z \vert\!\vert\!\vert_{\delta, \gamma; T}\right]^p \lesssim \bar{\varepsilon}^{\theta p}\;,
\end{equ}
uniformly in $\bar \varepsilon \in (0,1]$, for any $T > 0$, $p \geq 1$ and for sufficiently small values of $\delta > 0$ and $\theta > 0$. Using Theorem~\ref{t:FixedMap} and Lemma~\ref{l:PhiLipschiz}, we define the solution $\Phi$ to the equation \eqref{e:Phi} as in Definition~\ref{d:SPDESolution} by solving the respective abstract equation \eqref{e:AbstractEquation} with the nonlinearity $F$ from \eqref{e:PhiNonlin} and the inhomogeneous model $\hat Z$.
In order to discretise the noise $\xi^{\bar \varepsilon, 0}$, we define the function
\begin{equ}
\psi^{\bar{\varepsilon}, \varepsilon}(t, x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \varepsilon^{-d} \int_{\mathbf{R}^{d}} \psi^{\bar{\varepsilon}}(t, y)\, \mathbf{1}_{|y - x| \leq \varepsilon/2}\, dy\;, \qquad (t,x) \in \mathbf{R} \times \Lambda_\varepsilon^d\;,
\end{equ}
and the discrete noise $\xi^{\bar \varepsilon, \varepsilon} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \psi^{\bar{\varepsilon}, \varepsilon} \star_\varepsilon \xi^\varepsilon$, where $\xi^\varepsilon$ is given in \eqref{e:SimpleDNoise}. We take the function $\psi^{\bar{\varepsilon}, \varepsilon}$ in this form because it satisfies the first identity in \eqref{e:SmoothConv}, which in general is not true for $\psi^{\bar{\varepsilon}}$.

We define the discrete model $Z^{\bar \varepsilon, \varepsilon}$ by substituting each occurrence of $\xi^\varepsilon$, $C^{(\varepsilon)}_1$ and $C^{(\varepsilon)}_2$ in the definition of $Z^{\varepsilon}$ by $\xi^{\bar \varepsilon, \varepsilon}$, $C^{(\bar \varepsilon, \varepsilon)}_1$ and $C^{(\bar \varepsilon, \varepsilon)}_2$ respectively, where $C^{(\bar \varepsilon, \varepsilon)}_1$ is defined as in \eqref{e:RenormConsts}, but via the kernel $K^{\bar \varepsilon, \varepsilon} \stackrel{\mathclap{\mbox{\tiny def}}}{=} K^\varepsilon \star_\varepsilon \psi^{\bar \varepsilon, \varepsilon}$, and $C^{(\bar \varepsilon, \varepsilon)}_2$ is defined by replacing $K^\varepsilon \star_\varepsilon K^\varepsilon$ by $K^{\bar \varepsilon, \varepsilon} \star_\varepsilon K^{\bar \varepsilon, \varepsilon}$ in the second expression in \eqref{e:RenormConsts}. Furthermore, using $\wnorm{K^\varepsilon}_{-3; r} \leq C$, which follows from Lemma~\ref{l:DGreenDecomposition} and Remark~\ref{r:KernelDiff}, and proceeding exactly as in the proof of \cite[Thm.~10.22]{Hai14}, but exploiting Proposition~\ref{p:CovarianceConvergence} and the results from Section~\ref{ss:SingularFunctions} instead of their continuous counterparts, we obtain the bounds \eqref{e:GaussModelBoundSigmaGamma} for each $\tau \in \gen{\mathcal{F}} \setminus \poly{\mathcal{F}}$, and \eqref{e:GaussModelBound} for each $\tau \in \hat{\mathcal{F}}^{-}$, uniformly in $\varepsilon \leq \bar \varepsilon$ and for $\delta > 0$ small enough. We also obtain the respective bounds on the differences $Z^{\bar \varepsilon, \varepsilon} - Z^{\varepsilon}$, with proportionality constants of order $\bar \varepsilon^{2\theta}$, for $\theta > 0$ sufficiently small. Here we can use Lemma~\ref{l:SmoothConv}, since $\psi^{\bar \varepsilon, \varepsilon}$ satisfies the required conditions by the properties of $\psi$. Thus, Theorem~\ref{t:ModelsConvergence} yields
\begin{equ}[e:DPhiModelBound]
\mathbf{E} \left[\vert\!\vert\!\vert Z^\varepsilon \vert\!\vert\!\vert_{\delta, \gamma; T}^{(\varepsilon)}\right]^p \lesssim 1\;, \qquad \mathbf{E} \left[\vert\!\vert\!\vert Z^{\bar \varepsilon, \varepsilon}; Z^\varepsilon \vert\!\vert\!\vert_{\delta, \gamma; T}^{(\varepsilon)}\right]^p \lesssim \bar{\varepsilon}^{\theta p}\;,
\end{equ}
uniformly in $\varepsilon \leq \bar \varepsilon$, for any $T > 0$ and $p \geq 1$. We denote by $\Phi^{\bar \varepsilon, \varepsilon}$ the solution of \eqref{e:DPhiRenorm}, driven by the noise $\xi^{\bar \varepsilon, \varepsilon}$, with the renormalisation constant $C^{(\bar \varepsilon, \varepsilon)} \stackrel{\mathclap{\mbox{\tiny def}}}{=} 3 \lambda C^{(\bar \varepsilon, \varepsilon)}_1 - 9 \lambda^2 C^{(\bar \varepsilon, \varepsilon)}_2$.
For every $K > 0$ we define the following stopping time:
\begin{equ}
\tau_K \stackrel{\mathclap{\mbox{\tiny def}}}{=} \inf\bigl\{T > 0 : \Vert \Phi \Vert_{\mathcal{C}^{\delta, \alpha}_{\bar{\eta}, T}} \geq K\bigr\}\;,
\end{equ}
where the values of $\delta$, $\alpha$ and $\bar \eta$ are as in the statement of the theorem. Then we have the limit in probability $\lim_{K \to \infty} \tau_K = T_\star$, where $T_\star$ is the random lifetime of $\Phi$. Our aim is now to prove that
\begin{equ}[e:SolutionsConvDouble]
\lim_{K \to \infty} \lim_{\varepsilon \to 0} \P \Bigl[\Vert \Phi; \Phi^\varepsilon \Vert^{(\varepsilon)}_{\mathcal{C}^{\delta, \alpha}_{\bar{\eta}, \tau_K}} \geq c\Bigr] = 0\;,
\end{equ}
for every constant $c > 0$. Then the claim \eqref{e:PhiConvergence} will follow after choosing $T_\varepsilon$ as a suitable diagonal sequence.
In order to have a priori bounds on the processes and models introduced above, we define for every $K >0$ the following stopping times:
\begin{equs}
\sigma^\varepsilon_{K} &\stackrel{\mathclap{\mbox{\tiny def}}}{=} \inf\bigl\{T > 0 : \Vert \Phi \Vert_{\mathcal{C}^{\delta, \alpha}_{\bar{\eta}, T}} \geq K ~\text{ or }~ \vert\!\vert\!\vert \hat Z \vert\!\vert\!\vert_{\delta, \gamma; T} \geq K ~\text{ or }~ \vert\!\vert\!\vert \hat Z^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\delta, \gamma; T} \geq K\bigr\}\;,\\
\sigma^{\bar \varepsilon, \varepsilon} &\stackrel{\mathclap{\mbox{\tiny def}}}{=} \inf\bigl\{T > 0 : \Vert \Phi - \Phi^{\bar \varepsilon, 0} \Vert_{\mathcal{C}^{\delta, \alpha}_{\bar{\eta}, T}} \geq 1 ~\text{ or }~ \Vert \Phi^\varepsilon - \Phi^{\bar \varepsilon, \varepsilon} \Vert^{(\varepsilon)}_{\mathcal{C}^{\delta, \alpha}_{\bar{\eta}, T}} \geq 1 \\
& ~\text{ or }~ \Vert \Phi^{\bar \varepsilon, 0}; \Phi^{\bar \varepsilon, \varepsilon} \Vert^{(\varepsilon)}_{\mathcal{C}^{\delta, \alpha}_{\bar{\eta}, T}} \geq 1 ~\text{ or }~ \vert\!\vert\!\vert \hat Z; \hat Z^{\bar \varepsilon, 0} \vert\!\vert\!\vert_{\delta, \gamma; T} \geq 1 ~\text{ or }~ \vert\!\vert\!\vert \hat Z^\varepsilon; \hat Z^{\bar \varepsilon, \varepsilon} \vert\!\vert\!\vert^{(\varepsilon)}_{\delta, \gamma; T} \geq 1\bigr\}\;,
\end{equs}
as well as $\varrho_K^{\bar \varepsilon, \varepsilon} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \sigma^\varepsilon_{K} \wedge \sigma^{\bar \varepsilon, \varepsilon}$, for every $K > 0$. Then, choosing a constant $\bar K > K$ and using the latter stopping time and the triangle inequality, we get the following bound:
\begin{equs}
\P \Bigl[\Vert \Phi; \Phi^\varepsilon &\Vert^{(\varepsilon)}_{\mathcal{C}^{\delta, \alpha}_{\bar{\eta}, \tau_K}} \geq c\Bigr] \leq \P \Bigl[\Vert \Phi - \Phi^{\bar \varepsilon, 0} \Vert_{\mathcal{C}^{\delta, \alpha}_{\bar{\eta}, \varrho_{\bar K}^{\bar \varepsilon, \varepsilon}}} \geq c\Bigr] + \P \Bigl[\Vert \Phi^{\bar \varepsilon, 0}; \Phi^{\bar \varepsilon, \varepsilon} \Vert^{(\varepsilon)}_{\mathcal{C}^{\delta, \alpha}_{\bar{\eta}, \varrho_{\bar K}^{\bar \varepsilon, \varepsilon}}} \geq c\Bigr]\\
&+ \P \Bigl[\Vert \Phi^{\bar \varepsilon, \varepsilon} - \Phi^{\varepsilon} \Vert^{(\varepsilon)}_{\mathcal{C}^{\delta, \alpha}_{\bar{\eta}, \varrho_{\bar K}^{\bar \varepsilon, \varepsilon}}} \geq c\Bigr] + \P \bigl[\varrho_{\bar K}^{\bar \varepsilon, \varepsilon} < \sigma^\varepsilon_{\bar K}\bigr] + \P\bigl[\sigma^\varepsilon_{\bar K} < \tau_K\bigr]\;. \label{e:SolutionsConv}
\end{equs}
We will show that if we take the limits $\varepsilon, \bar \varepsilon \to 0$ and $K, \bar K \to \infty$, then all the terms on the right-hand side of \eqref{e:SolutionsConv} vanish and we obtain the claim \eqref{e:SolutionsConvDouble}.
It follows from the definition of $\varrho_{\bar K}^{\bar \varepsilon, \varepsilon}$ that $\vert\!\vert\!\vert \hat Z \vert\!\vert\!\vert_{\delta, \gamma; \varrho_{\bar K}^{\bar \varepsilon, \varepsilon}}$ and $\vert\!\vert\!\vert\hat Z^{\bar \varepsilon, 0} \vert\!\vert\!\vert_{\delta, \gamma; \varrho_{\bar K}^{\bar \varepsilon, \varepsilon}}$ are bounded by constants proportional to $\bar K$. Hence, Theorems~\ref{t:DSolutions} and \ref{t:DReconstruct}, and the bounds~\eqref{e:ModelConvFirst} yield
\begin{equ}
\lim_{\bar \varepsilon \to 0} \P \Bigl[\Vert \Phi - \Phi^{\bar \varepsilon, 0} \Vert_{\mathcal{C}^{\delta, \alpha}_{\bar{\eta}, \varrho_{\bar K}^{\bar \varepsilon, \varepsilon}}} \geq c\Bigr] = 0\;,
\end{equ}
uniformly in $\varepsilon$. Similarly, we can use Theorems~\ref{t:DSolutions} and \ref{t:DReconstruct}, together with the bounds on the discrete models \eqref{e:DPhiModelBound}, to obtain, again uniformly in $\varepsilon$, the convergence
\begin{equ}
\lim_{\bar \varepsilon \to 0} \P \Bigl[\Vert \Phi^\varepsilon - \Phi^{\bar \varepsilon, \varepsilon} \Vert^{(\varepsilon)}_{\mathcal{C}^{\delta, \alpha}_{\bar{\eta}, \varrho_{\bar K}^{\bar \varepsilon, \varepsilon}}} \geq c\Bigr] = 0\;.
\end{equ}
Now, we turn to the second term in \eqref{e:SolutionsConv}. It follows from our definitions that we have $\xi^{\bar \varepsilon, \varepsilon} = \varrho^{\bar \varepsilon, \varepsilon} \star \xi$, where
\begin{equ}
\varrho^{\bar \varepsilon, \varepsilon}(t,x) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \varepsilon^{-d} \int_{\Lambda_{\varepsilon}^d} \psi^{\bar \varepsilon, \varepsilon}(t,y)\, \mathbf{1}_{|y - x| \leq \varepsilon/2}\, dy\;.
\end{equ}
Moreover, for $z = (t,x) \in \mathbf{R} \times \Lambda_{\varepsilon}^d$ one has the identity
\begin{equ}
\bigl(\psi^{\bar \varepsilon} - \varrho^{\bar \varepsilon, \varepsilon}\bigr)(z) = \varepsilon^{-2d} \int_{\Lambda_\varepsilon^d} \int_{\mathbf{R}^{d}} \left( \psi^{\bar \varepsilon}(t,x) - \psi^{\bar \varepsilon}(t,u) \right) \mathbf{1}_{|u-y| \leq \varepsilon/2} \mathbf{1}_{|y - x| \leq \varepsilon/2} d u\, dy,
\end{equ}
from which we immediately obtain the bound
\begin{equ}
\sup_{z \in \mathbf{R} \times \Lambda_{\varepsilon}^d} \bigl| D_t^k \bigl(\psi^{\bar \varepsilon} - \varrho^{\bar \varepsilon, \varepsilon}\bigr)(z)\bigr| \lesssim \varepsilon \bar{\varepsilon}^{-|\mathfrak{s}| - k \mathfrak{s}_0 - 1}\;,
\end{equ}
for every $k \in \mathbf{N}$. Hence, using the a priori bounds on the solutions, which follow from the definition of $\varrho_{\bar K}^{\bar \varepsilon, \varepsilon}$, we can invoke a standard result from the numerical analysis of PDEs (see e.g.\ \cite[Ch.~6]{Lui11}) to conclude that the second term in \eqref{e:SolutionsConv} vanishes as $\varepsilon \to 0$ for every fixed $\bar \varepsilon$.
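To see where the estimate on $\psi^{\bar \varepsilon} - \varrho^{\bar \varepsilon, \varepsilon}$ comes from, note that in the integrand above one has $|u - x| \leq |u - y| + |y - x| \leq \varepsilon$; assuming the usual parabolic rescaling $\psi^{\bar \varepsilon}(t,x) = \bar \varepsilon^{-|\mathfrak{s}|} \psi\bigl(t \bar \varepsilon^{-\mathfrak{s}_0}, x \bar \varepsilon^{-1}\bigr)$ of a smooth, compactly supported function $\psi$, a first-order Taylor estimate then gives
\begin{equ}
\bigl| D_t^k \bigl(\psi^{\bar \varepsilon} - \varrho^{\bar \varepsilon, \varepsilon}\bigr)(z)\bigr| \lesssim \varepsilon \sup_{\bar z} \bigl| D_t^k \nabla_x \psi^{\bar \varepsilon}(\bar z)\bigr| \lesssim \varepsilon \bar{\varepsilon}^{-|\mathfrak{s}| - k \mathfrak{s}_0 - 1}\;,
\end{equ}
since every spatial derivative of $\psi^{\bar \varepsilon}$ costs a factor $\bar \varepsilon^{-1}$ and every time derivative a factor $\bar \varepsilon^{-\mathfrak{s}_0}$.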
The limit $\lim_{\bar \varepsilon \to 0} \lim_{\varepsilon \to 0} \P \bigl[\varrho_{\bar K}^{\bar \varepsilon, \varepsilon} < \sigma^\varepsilon_{\bar K}\bigr] = 0$ follows immediately from the definition of the involved stopping times, the bounds \eqref{e:ModelConvFirst} and \eqref{e:DPhiModelBound}, and the convergences we have just proved. Finally, it follows from \eqref{e:ModelConvFirst} that
\begin{equ}
\lim_{\bar K \to \infty} \P\bigl[\sigma^\varepsilon_{\bar K} < \tau_K\bigr] = 0\;,
\end{equ}
for a fixed $K$ and uniformly in $\varepsilon$, which finishes the proof.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{c:Phi}]
Let $\xi$ be space-time white noise on some probability space
$(\Omega, \mathcal{F}, \mathbf{P})$, and let its discretisation $\xi^\varepsilon$ be given
by \eqref{e:SimpleDNoise}. Let furthermore $\Phi^\varepsilon_0$ be a random variable on
the same probability space which is independent of $\xi$ and
such that the solution to \eqref{e:DPhiRenorm} with
the nearest neighbours approximate Laplacian $\Delta^\varepsilon$ and driven by $\xi^\varepsilon$
is stationary. We denote by $\mu_\varepsilon$ its stationary distribution \eqref{e:mu_eps}, which we view
as a measure on $\mathcal{C}^{\alpha}$ with $\alpha$ as in \eqref{e:PhiConvergence},
by extending it in a piecewise constant fashion. It then
follows from Proposition~\ref{prop:mu_tight} that if we view $\Phi^\varepsilon_0$ as an element of $\mathcal{C}^{\alpha}$
by piecewise constant extension, we can and will assume by Skorokhod's representation
theorem that
$\Phi^\varepsilon_0$ converges almost surely as $\varepsilon \to 0$ to a limit $\Phi_0 \in \mathcal{C}^\alpha$. In order to use Skorokhod's representation
theorem \cite{Kal02}, the underlying spaces have to be separable, which is not the case for $\mathcal{C}^{\alpha}$; this is however irrelevant, since our random variables almost surely belong to the closure of smooth functions under the seminorm \eqref{e:AlphaNorm}, which is separable.
Before we proceed, we introduce the space $\bar \mathcal{C} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \mathcal{C}^{0, \alpha}_{\bar \eta}\bigl([0,1], \mathbf{T}^3\bigr) \cup \{\infty\}$ (this H\"{o}lder space is the subspace of $\mathcal{C}^{0, \alpha}_{\bar \eta}\bigl([0,1], \mathbf{R}^3\bigr)$, defined below \eqref{e:SpaceTimeNorm}, consisting of the spatially periodic distributions), for $\alpha$ and $\bar \eta$ as in \eqref{e:PhiConvergence}, and equipped with the metric such that
\begin{equs}
d (\zeta, \infty) &\stackrel{\mathclap{\mbox{\tiny def}}}{=} d (\infty, \zeta) \stackrel{\mathclap{\mbox{\tiny def}}}{=} \bigl( 1 + \Vert \zeta \Vert_{\mathcal{C}^{0, \alpha}_{\bar \eta, 1}} \bigr)^{-1}\;, \quad \zeta \neq \infty\;,\\
d (\zeta_1, \zeta_2) &\stackrel{\mathclap{\mbox{\tiny def}}}{=} \min \bigl\{ \Vert \zeta_1 - \zeta_2 \Vert_{\mathcal{C}^{0, \alpha}_{\bar \eta, 1}}, d (\zeta_1, \infty) + d (\zeta_2, \infty) \bigr\}\;, \quad \zeta_i \neq \infty\;.
\end{equs}
Denote now by $\Phi^\varepsilon$ the
solution to \eqref{e:DPhiRenorm} with initial condition
$\Phi^\varepsilon_0$ and by $\Phi$ the solution to \eqref{e:Phi} with initial condition $\Phi_0$.
We can view these as $\bar \mathcal{C}$-valued random variables by postulating that
$\Phi = \infty$ if its lifetime is smaller than $1$. (The lifetime of $\Phi^\varepsilon$
is always infinite for fixed $\varepsilon$.)
Since the assumptions of Theorem~\ref{t:Phi} are fulfilled,
the convergence \eqref{e:PhiConvergence} holds and, since solutions blow up at time
$T_\star$, this implies that
$d(\Phi^\varepsilon, \Phi) \to 0$ in probability, as $\varepsilon \to 0$. (The required continuity in time obviously holds for every $\Phi^\varepsilon$ and $\Phi$.)
In order to conclude, it remains to show that $\P(\Phi = \infty) = 0$.
Indeed, since the only point of discontinuity of the evaluation
maps $\Phi \mapsto \Phi(t,\cdot)$
on $\bar \mathcal{C}$ is $\infty$, this would then immediately show not only that
solutions $\Phi$ live up to time $1$ (and therefore any time) almost surely,
but also that $\mu$ is invariant for $\Phi$. To show that $\Phi \neq \infty$ a.s., it suffices to prove that there is no atom of the measure $\mu$ at the point $\infty$. Precisely, our aim is to show that for every $\bar \varepsilon > 0$ there exists a constant $C_{\bar \varepsilon} > 0$ such that
\begin{equ}[e:Tightness]
\P\Bigl( \Vert \Phi^\varepsilon \Vert_{\mathcal{C}^{0, \alpha}_{\bar{\eta}, 1}} \geq C_{\bar \varepsilon} \Bigr) \leq \bar \varepsilon\;.
\end{equ}
We fix $\bar \varepsilon > 0$ in what follows and work with a generic constant $C_{\bar \varepsilon} > 0$, whose value will be chosen later. For integers $K \ge 2$ and $i \in \{0,\ldots,K-2\}$, we denote
\begin{equs}
Q^\varepsilon_{K, i} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \Vert \Phi^\varepsilon \Vert_{\mathcal{C}^{0, \alpha}_{\bar{\eta}, [i/K, (i + 2)/K]}}\;,
\end{equs}
where the norm $\Vert \cdot \Vert_{\mathcal{C}^{0, \alpha}_{\bar{\eta}, [T_1, T_2]}}$ is defined as below \eqref{e:SpaceTimeNorm}, but on the time interval $[T_1, T_2]$ and with a blow-up at $T_1$. Splitting the time interval $(0,1]$ in \eqref{e:SpaceTimeNorm} into subintervals of length $1/K$, and deriving estimates on each subinterval, one gets
\begin{equs}
\Vert \Phi^\varepsilon \Vert_{\mathcal{C}^{0, \alpha}_{\bar{\eta}, 1}} &\leq Q_{K, 0}^\varepsilon + \sum_{i = 1}^{K - 1} (i+1)^{-\bar \eta / 2}\, Q^\varepsilon_{K, i-1}
\leq \tilde C K^{-\bar \eta / 2} \sum_{i = 0}^{K-2} Q^\varepsilon_{K, i}\;,
\end{equs}
if $\bar \eta \leq 0$, and for some $\tilde C$ independent of $K$ and $\varepsilon$.
Since, by stationarity, the random variables $Q^\varepsilon_{K, i}$ all have the same law,
it follows that
\begin{equs}
\P \Bigl( \Vert \Phi^\varepsilon \Vert_{\mathcal{C}^{0, \alpha}_{\bar{\eta}, 1}} \geq C_{\bar \varepsilon} \Bigr) &\leq \P\Bigl( \tilde C K^{-\bar \eta / 2} \sum_{i = 0}^{K-2} Q^\varepsilon_{K, i} \geq C_{\bar \varepsilon}\Bigr)\\
&\leq K \P \Bigl( \Vert \Phi^\varepsilon \Vert_{\mathcal{C}^{0, \alpha}_{\bar{\eta}, 2/K}} \geq \tilde{C}^{-1} K^{\bar \eta / 2 - 1} C_{\bar \varepsilon}\Bigr)\;,\label{e:TightBoundOne}
\end{equs}
where in the second step we used that the sum of the $K-1$ identically distributed terms can only exceed a threshold if at least one of them exceeds the threshold divided by $K-1$, and we bounded $K-1$ by $K$. To make the notation concise, we write $\tilde{C}_{K, \bar \varepsilon} \stackrel{\mathclap{\mbox{\tiny def}}}{=} \tilde{C}^{-1} K^{\bar \eta / 2 - 1} C_{\bar \varepsilon}$. Furthermore, in order to have a uniform bound on the initial data and the model, we use the following estimate
\begin{equs}
\mathbf{P} \Bigl( \Vert \Phi^\varepsilon \Vert_{\mathcal{C}^{0, \alpha}_{\bar{\eta}, 2/K}} \geq \tilde{C}_{K, \bar \varepsilon} \Bigr) \leq \mathbf{P}&\Bigl( \Vert \Phi^\varepsilon \Vert_{\mathcal{C}^{0, \alpha}_{\bar{\eta}, 2/K}} \geq \tilde{C}_{K, \bar \varepsilon} \Big| \Vert \Phi^\varepsilon_0 \Vert_{\mathcal{C}^{\eta}} \leq L, \vert\!\vert\!\vert Z^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\gamma; 1} \leq L \Bigr) \\
&+ \mathbf{P}\Bigl( \Vert \Phi^\varepsilon_0 \Vert_{\mathcal{C}^{\eta}} > L \Bigr) + \mathbf{P}\Bigl( \vert\!\vert\!\vert Z^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\gamma; 1} > L \Bigr),\label{e:PhiInitDataBound}
\end{equs}
valid for every $L$, where $\eta$ and $\gamma > 0$ are as in the proof of Theorem~\ref{t:Phi}.
Recalling that \cite[Sec.~8]{BFS83} yields uniform bounds on all moments of $\mu_\varepsilon$,
and using the first bound in \eqref{e:DPhiModelBound}, we obtain from Markov's inequality that
\begin{equ}[e:twoterms]
\mathbf{P}\Bigl( \Vert \Phi^\varepsilon_0 \Vert_{\mathcal{C}^{\eta}} > L \Bigr) \leq B_1 L^{-q}\;, \qquad \mathbf{P}\Bigl( \vert\!\vert\!\vert Z^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\gamma; 1} > L \Bigr) \leq B_2 L^{-q}\;,
\end{equ}
for any $q \geq 1$, and for constants $B_1$ and $B_2$ independent of $\varepsilon$ and $L$.
Turning to the first term in \eqref{e:PhiInitDataBound}, it follows from the fixed point argument in the proof of Theorem~\ref{t:DSolutions} and the bound \eqref{e:DReconstructSpace} that there exists $\tilde p \geq 1$ such that one has the bound
\begin{equ}
\Vert \Phi^\varepsilon \Vert_{\mathcal{C}^{0, \alpha}_{\bar{\eta}, 2/K}} \leq B_3 L^{3}\;,
\end{equ}
with $B_3$ being independent of $\varepsilon$ and $L$, as soon as $\Vert \Phi^\varepsilon_0 \Vert_{\mathcal{C}^{\eta}} \leq L$, $\vert\!\vert\!\vert Z^\varepsilon \vert\!\vert\!\vert^{(\varepsilon)}_{\gamma; 1} \leq L$, $K \ge L^{\tilde p}$ and $L \geq 2$. In particular, the first term vanishes if we can ensure
that
\begin{equ}[e:constrC]
\tilde{C}_{K, \bar \varepsilon} \ge B_3 L^{3}\;.
\end{equ}
Choosing first $q > \tilde p$, then $L$ large enough so that the contribution of the two terms in \eqref{e:twoterms}, even after multiplication by the factor $K = \lceil L^{\tilde p} \rceil$ from \eqref{e:TightBoundOne}, is smaller than $\bar \varepsilon$,
and finally $C_{\bar \varepsilon}$ large enough so that \eqref{e:constrC} holds, the claim follows.
Let $\hat Z$ be the model from the proof of Theorem~\ref{t:Phi} and let
\begin{equ}
\bar \mathcal{S}_t \colon \bar \mathcal{C}^\eta \times \mathscr{M} \to \bar \mathcal{C}^\eta
\end{equ}
be the map $\bar \mathcal{S}_t = \mathcal{R}_t \mathcal{S}_t$ from Theorem~\ref{t:FixedMap}
yielding the maximal solution up to time $t$, i.e.\
$\Phi_t = \bar \mathcal{S}_{t} (\Phi_0, \hat Z)$, with the conventions that
$\bar \mathcal{S}_t(\infty, \hat Z) = \infty$ and $\bar \mathcal{S}_t(\Phi_0, \hat Z) = \infty$ if the maximal
existence time $T_\star$ is less than $t$. Here, $\mathscr{M}$ denotes the space of all admissible
models as in Section~\ref{ss:ContinuousGauss}.
It follows from \eqref{e:IntegralIdentity}, the locality of the reconstruction map and the
locality of the construction of the model
that $\bar \mathcal{S}_{t} (\Phi_0, \hat Z)$ depends on the underlying white noise
only on the time interval $[0, t]$. Moreover, as a consequence of \cite[Prop.~7.11]{Hai14},
one has
\begin{equ}
\bar \mathcal{S}_{s + t} (\Phi_0, \hat Z) = \bar\mathcal{S}_{t} \bigl(\bar\mathcal{S}_{s} (\Phi_0, \hat Z), \hat Z_s\bigr)\,,
\end{equ}
where $\hat Z_s$ is the natural time shift by $s$ of the model $\hat Z$. Since the underlying noise
is white in time, we conclude that the process $\Phi$ is Markov.
The fact that the measure $\mu$ is reversible for $\Phi$ follows immediately from
the fact that $\mu_\varepsilon$ is reversible for the discretised process $\Phi^\varepsilon$.
\end{proof}
\bibliographystyle{./Martin}
\section{Introduction}
TW Hya is the closest example of a star with a young, gas-rich protoplanetary
disk. As such, it is an ideal target to study how disk evolution is
coupled to planet formation because the disk can be observed at high spatial
resolution and with great sensitivity. First detected from its infrared excess
\citep{Rucinski:1983}, the disk emits strongly at wavelengths longer than 3~$\mu$m
with a peak in its spectral energy distribution between 10 and 100~$\mu$m \citep{Weinberger:2002}.
TW Hya may not be typical for its age. It is part of the eponymously named TW
Hydrae Association, a collection of about two dozen stars, among which it
hosts the most massive gas-rich disk. TW Hya is 54 $\pm$ 6 pc away from Earth in
the new \citet{vanleeuwen:2007} Hipparcos catalog. The age of the TW Hya
association is found from a variety of studies of the ensemble of stars,
including measurements of the dynamics (8.3 $\pm$ 0.3 Myr)
\citep{delaReza:2006}, lithium depletion boundary (12 $\pm$ 8 Myr)
\citep{Mentuch:2008}, and these in combination with pre-main sequence tracks
(10$^{+10}_{-7}$ Myr) \citep{BarradoyNavascues:2006}. The TW Hya stars may not
be coeval, and TW Hya could even be at the young end of their age distribution
\citep{Weinberger:2013}. Associations just a bit older than TW Hya, such as
$\beta$ Pic, have no optically thick, massive protoplanetary disks.
Despite reservations about its representativeness, intensive studies at nearly every
wavelength and spectral resolution have been used to try to understand the TW
Hya disk's structure and composition. We summarize some of the main findings
here. Resolved emission at 7~mm indicated the central disk is mostly clear inside of
4 AU \citep{Calvet:2002,Hughes:2007}. However, 2 $\mu$m emission resolved by the Keck
Interferometer shows that an optically thin dust disk composed of small grains
comes to within 0.06 AU of the star \citep{akeson11}. The spectral energy distribution,
particularly the large amount of emission at submm through cm wavelengths
\citep{Weintraub:1989,Wilner:2000,Weinberger:2002,Wilner:2005,andrews12}, indicates substantial grain growth
to at least cm sizes. Resolved CO line maps show the disk in Keplerian
rotation with an inclination of 7 $\pm$ 1$^\circ$ beyond 50 AU \citep{qi04,andrews12,hughes11}.
The disk has been spatially resolved in scattered light in the visible and
near-infrared \citep{Krist:2000,Weinberger:2002,apai04,Roberge:2005}. These show that
the optically thick disk extends to at least 280 AU. They also show
asymmetries and changes in surface brightness of the disk in the inner 150 AU.
The presence of both small and large
grains throughout the 4 AU - 200 AU disk, together with the persistence of small
grains in a region where they should quickly be removed, suggests that grains are
being regenerated through collisions at multiple locations.
The disk chemistry has also been probed in a spatially resolved manner in
submm lines \citep{Qi:2006, Qi:2008}. For example, there appears to be ongoing
deuterium fractionation in the outer disk, which suggests that pristine
nebular material is not preserved, as is often assumed for comets.
We have undertaken a multiwavelength visible to near-infrared study of
scattered light from the disk in order to address the structure and composition of
the TW Hya disk. Observations in the visible to
near-infrared can detect ices and silicates due to their broad absorption
features and organics due to their red slopes. These types of observations
are routinely applied to comets and Kuiper Belt objects, which are thought to
be the planetesimal remnants of circumstellar disks. In addition, the
spectral scattering efficiency of disk dust grains can constrain their grain sizes at
the disk surface. The mixture of dust grains should reflect a combination of
vertical mixing from the midplane (where large grains are presumably formed),
radial transport, and collisions. Finally but by no means least, we wish to
understand the vertical structure of the disk and whether it shows evidence
for forming planets.
\section{Observations}
We took coronagraphic images with the F171M, F180M, F204M, and F222M filters (central $\lambda$=1.71, 1.80, 2.04, and 2.22~\micron\ respectively) on 09 May 2005 with the NICMOS
camera 2 for TW Hya and the PSF reference star, CD-43$^\circ$2742 as part of Program GO 10167. The observations include direct
images of both stars outside the coronagraphic hole with short exposures for point source photometry, as
well as longer exposures for coronagraphic high contrast imaging.
The instrumentally calibrated and reduced images discussed in this paper
were created from the raw NICMOS {\it multiaccum} exposures following
the processing methodology described in \S3 of \citet{schneider05} and
references therein.
For photometric analysis, each calibrated direct image was used to determine the total photometry of the star and
empirically determine the scaling ratios between TW~Hya and the PSF reference in each filter band. The three images for
each star in each filter were located at different positions on the
detector.
We used a median combination of the three dither points to create a final image
of each star, from which we derived the scaling ratio and the photometry of
TW Hya. We used a 16.5 pixel radius circular aperture to determine the
photometry. The background in the images is zero, so no background annulus was used. The individual dither points were used to get a rough estimate of
the uncertainty in the ratios and photometry. Table \ref{tab:scalings} lists the photometry of TW Hya in each band with uncertainties and the scaling ratios for each filter.
In order to determine the best subtraction we minimized a chi-squared metric on a
region of the target image dominated by the star's diffraction spikes. We assumed that good
subtraction of the diffraction spikes corresponded to the best subtraction
of the PSF within the region of interest \citep{cnc03}. We iteratively created subtractions for combinations of scaling and pixel offsets until we found an image that produced the lowest chi-squared measure. We searched within 1-$\sigma$ of the scaling ratios as determined by Table \ref{tab:scalings} and within $\pm$1~pixel to find the best x and y pixel offsets.
To quantify the systematic effects on the photometry, we repeated the subtractions varying the PSF scalings and offsets by $\pm$1 $\sigma$ from the minimum chi-square solution found above. Using a circular photometric aperture matched to the size of the disk and avoiding masked areas due to diffraction spikes, we found the standard deviation in the disk flux densities from this suite of subtractions. We then propagated this uncertainty into the total uncertainty in the flux density of the disk per pixel.
We observed TW Hya in each medium band filter at two distinct spacecraft orientations or roll angles separated by 28$^\circ$. This is essentially an azimuthal dither that allows true
objects within the field of view to be distinguished from instrumental artifacts
that do not rotate with a change in orientation. The reference star, CD-43$^\circ$2742,
was subtracted from both roll-angle images to create two separate subtraction
images for each filter. The images were then geometrically corrected for the
slight optical distortion of the NICMOS camera 2 at the coronagraphic focus. We used the x-direction pixel
scale of 0\farcs07595/pixel and the y-direction pixel scale of 0\farcs07542/pixel to create an image with pixels that have the y-direction plate scale
in both directions. The geometrically corrected images were rotated to a common celestial orientation
using the rotation centers given by the flight software in the raw data file
headers and artifacts such as diffraction spikes were masked. Figure \ref{fig:f1} shows the resulting PSF subtracted and roll-combined images of the TW Hya disk taken in the medium band filters.
Additional observations of TW Hya with NICMOS were performed as a part of GTO programs 7226 and 7223 with HST \citep{Weinberger:2002}. The observations in F110W and F160W
were recovered from the archive, reduced in the same manner as the medium band data, and median combined. Archival
PSF reference stars for
the images were
subtracted from the target observations. The F110W reference was $\tau^1$ Eridani, while the F160W reference was Gl~879. We followed the procedure of \citet{Weinberger:2002} for scaling the PSFs, but followed the above procedure for
subtraction.
TW Hya has been observed with STIS using a combination of wedge coronagraphic
images as well as spatially resolved coronagraphic spectroscopy in the visible, including point source spectroscopy of TW~Hya using the G750L grating and the 52$\times$0.2 slit. Additional STIS spectroscopy was also obtained for the PSF reference HIP 32939. The reduction for the data is detailed in \citet{Roberge:2005}.
\section{Results}
Multi-wavelength, spatially-resolved, scattered-light observations provide a powerful look into both the TW Hya disk's morphology and the wavelength and stellocentric-distance dependent scattering efficiency of its grains. The spectroscopy of TW~Hya additionally places some constraints on its spectral type. In this Section we measure the surface brightness profiles, azimuthal asymmetries, and the scattering efficiency of the dust in the TW Hya disk. We also determine TW Hya's stellar spectral type using its STIS optical/NIR spectrum.
\subsection{Radial Surface Brightness Profiles in the Medium Band Filters}
The surface brightness profiles of TW Hya reveal structure on top of a smooth decrease in surface brightness with radial distance; this structure deviates from what one would expect from a simple flared disk.
We investigate the radial surface brightness profiles in our medium band NICMOS data and compare the behavior in these wavelengths to that seen in the shorter wavelength data.
Figure \ref{fig:f3} shows the F171M through F222M azimuthally averaged surface brightness profiles. In all of them we see the characteristic shift in behavior between $\sim$80-130~AU that is seen in the visible, 1.1, and 1.6\micron, namely a change in the slope of the surface brightness. This feature is now seen in all the wavelengths of light in which TW~Hya is observed, indicating that this is a feature caused by some change in morphology rather than a compositional feature localized in wavelength. We overplot simple flared disk model surface brightness profiles consistent with those used to describe the TW~Hya disk in the sub-mm \citep{andrews12}, which clearly do not match the behavior of the surface brightness profiles for TW~Hya.
To highlight the structure in the disk, we scaled each pixel of the disk images by $R^2$,
where $R$ is the distance from the central star. Figure \ref{fig:f2} shows the STIS,
F160W, F171M, and F222M scaled images, which clearly shows a depression in the disk
structure at about 80~AU, coincident with the slope changes in the surface brightness
profiles. The structure is most pronounced in the higher spatial resolution STIS images,
but is still visible at longer wavelengths. Additionally, the STIS image shows a possible arc structure exterior to the gap at a PA of $\sim$270$^\circ$.
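For reference, the scaling itself is a one-line operation; a minimal sketch (hypothetical names, with the star at pixel \texttt{(y0, x0)} and \texttt{pixscale\_au} converting pixels to AU):
\begin{verbatim}
import numpy as np

def r2_scale(image, y0, x0, pixscale_au):
    # Multiply each pixel by its squared stellocentric distance
    # to flatten the radial falloff and highlight structure.
    yy, xx = np.indices(image.shape)
    r_au = np.hypot(yy - y0, xx - x0) * pixscale_au
    return image * r_au ** 2
\end{verbatim}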
The depression, or deficit of surface brightness relative to the surrounding disk material, can be caused by several factors, including a drastic change in dust opacity at $\sim$80~AU, a sudden change in the turbulence of the disk, shadowing due to structure just interior to 80~AU, or the presence of non-axisymmetric structures like spiral arms \citep{weinberger99,Clampin,fukagawa04}. It could also be caused by a protoplanet accreting material and opening a gap structure within the disk \citep{bate,bryden99}. Current planet formation models have difficulty forming large planets far from the central star. However, examples of such systems, like HR~8799 \citep{marois}, suggest that large bodies can form around stars at distances of several tens of AU.
In \S \ref{s:gap} we investigate how a physical gap in material might change the observed surface profiles and use models to determine limits on the depth of the gap as constrained from our surface brightness profiles as a function of wavelength. From these limits we make a preliminary estimate to the limiting mass of a protoplanet capable of opening such a gap in the TW~Hya disk.
\subsection{Azimuthal Surface Brightness Asymmetries}
\label{s:phase}
\citet{Roberge:2005} measured the surface brightness as a function of azimuth for the STIS image of TW Hya's disk and found it possessed a significant asymmetry between 65-140~AU consistent
with the asymmetry caused by a disk inclined to the line of sight with forward scattering dust grains. The magnitude and position of the asymmetry corresponded to a measured PA of the brightness maximum of 233.6$^\circ\pm$4$^\circ$ and an inferred inclination of $15^{+8.7}_{-6.4}$ degrees under the assumption of a Henyey-Greenstein scattering phase function for the grains with an asymmetry parameter $g$ of
0.5. They found no significant asymmetry at larger radii. Recent CO line data places the PA of the disk at 150-155$^\circ$ \citep{andrews12,rosenfeld12}, consistent with a PA for the side of the disk closest to Earth being either at 60$^\circ$ or 240$^\circ$. \citet{rosenfeld12} determined that they could reproduce this apparent brightness asymmetry with a warped inner disk at 4~AU with an inclination of 8$^\circ$, rather than 6-7$^\circ$ as inferred from CO line data for the outer disk. This particular configuration also helps to explain larger than expected CO line velocities in the inner disk regions.
In this paper we re-analyze the STIS image as well as the other NICMOS NIR images to
search for similar asymmetries. For each passband, we took the final subtracted images
and constructed photometric apertures in the shape of concentric annuli as a function of
increasing radius with equal spacings of 0\farcs357~(20~AU), to improve S/N. Each annulus was then
further equally subdivided into azimuthal bins with width 20$^\circ$. For each image a
total of 6 annuli were used, from 0\farcs88~(47~AU) to 2\farcs5~(134~AU). For the STIS
and F110W images, there was enough signal-to-noise to extend out to 4\farcs4~(236~AU). We
therefore had a measure of the surface brightness distribution between 50-130~AU for all
passbands, and extending to 230~AU for STIS and F110W.
In the NICMOS images, certain areas were completely masked either by the
coronagraphic spot or diffraction spikes. For the STIS image, masked areas included diffraction spikes
and the coronagraphic wedge. For apertures where masked pixels comprised more than 2/3 of
the aperture, no surface brightness was calculated. Each annulus was then
scaled to the median of the brightest annulus to provide equal weight to the
surface brightness in any one azimuthal bin at each radius. The final azimuthal surface
brightness distribution was then constructed by taking a median of the first
six annuli for all images and another distribution was constructed for the
STIS and F110W image of the disk from the final four annuli. For the
uncertainty at each point we took the quadrature sum of calculated photometric uncertainty and
the standard deviation of the azimuthal points at that angular position.
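The binning procedure can be summarized by the following sketch (a minimal Python illustration with hypothetical names; masked pixels are assumed to be NaN and the star to sit at pixel \texttt{(y0, x0)}):
\begin{verbatim}
import numpy as np

def azimuthal_profile(image, y0, x0, r_in, r_out, dphi=20.0):
    # Median surface brightness in 20-degree azimuthal bins of one
    # annulus, skipping bins with more than 2/3 of pixels masked.
    yy, xx = np.indices(image.shape)
    r = np.hypot(yy - y0, xx - x0)
    phi = np.degrees(np.arctan2(yy - y0, xx - x0)) % 360.0
    annulus = (r >= r_in) & (r < r_out)
    edges = np.arange(0.0, 360.0 + dphi, dphi)
    prof = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        vals = image[annulus & (phi >= lo) & (phi < hi)]
        if vals.size == 0 or np.mean(np.isnan(vals)) > 2.0 / 3.0:
            prof.append(np.nan)       # mostly masked bin
        else:
            prof.append(np.nanmedian(vals))
    return np.array(prof)
\end{verbatim}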
Figure \ref{fig:f4} shows the STIS and F110W results. Between 50 and 130~AU the asymmetry
is similar in the two passbands. Beyond 130~AU the azimuthal profile is of a lower
signal-to-noise but roughly matches what is seen at shorter distances, contrary to what
was reported in \citet{Roberge:2005}. The distance of 130~AU corresponds to a break in
the radial surface brightness of the disk. For comparison, we also overplot our best
fitting scattered light models of the disk, described in more detail in \S
\ref{sec:bestfit}, assuming an inclination of 7$^\circ$, with Henyey-Greenstein $g$ parameters of 0 and 0.5. Even with no forward scattering, flared disks can present a brightness asymmetry when circular apertures are used due to a ``foreshortening'' effect. Even though the side of the disk closest to Earth is slightly brighter due to the line-of-sight angle to the observer from the disk surface, this occurs at a smaller stellocentric angular separation than that expected for a non-flared disk. In this scenario, the side of the TW Hya disk closest to the observer would be to the NE. For STIS and F110W, a model with moderate forward scattering can also reproduce the azimuthal brightness asymmetry, if the semi-minor axis of the TW Hya disk facing the observer is toward the SW.
Figure \ref{fig:f5} shows the F160W, F171M, F180M, and F222M azimuthal brightness profiles
with the same models overplotted. We neglect the F204M data because it has lower
signal-to-noise. For each of these four images, no significant azimuthal asymmetry is
revealed. Whatever is the cause of the asymmetry at shorter wavelengths, it is not
detected with significance in the longer wavelength observations. We discuss the
potential causes of this in Sections \ref{sec:bestfit} and \ref{s:conc}.
\subsection{Scattered Light Spectrum}
The scattered light spectrum of the disk around TW Hya is a combination of the input
stellar spectrum and the intrinsic reflectivity of the disk. We removed the stellar spectrum
of TW Hya by dividing the measured surface brightness by the flux density of the star in
each filter. We divided the STIS spectrum of the disk by the point source spectrum of the
central star.
TW Hya did not have direct photometric measurements in the broadband HST filters (F110W, F160W, and
STIS CCD) and has a complicated spectral energy distribution in both the optical and
near-IR (See \S \ref{s:spectype}). To find TW Hya's broad-band flux densities, we
bootstrapped from its flux relative to the PSF reference star. Although the PSF does not
have exactly the same spectrum as TW Hya, it is anchored to TW Hya by literature
photometric measurements of both stars at V, J, and H-bands and our measurement at
F171M. To interpolate to STIS, F110W, F160W, we fit stellar atmosphere models of a range
of effective temperatures and gravities to the broad-band photometry of the PSF, scaled to
TW Hya, and propagated the systematic model uncertainty into the TW Hya photometry. We
also compared our flux densities in the near-IR to the spectrum of \citet{vacca},
and found general agreement within the uncertainties.
Figure \ref{fig:f6} shows the total reflectance spectrum of the disk from
0.5-2.22\micron\ averaged over radial distances of 50-215~AU, the extent to which we
detected the disk in the medium band filters. Also shown is the visible light spectrum of
the disk over the same distances, normalized to the STIS photometric point, from
\citet{Roberge:2005}. The overall spectrum is relatively neutral between 0.5-1.6\micron,
becoming slightly blue at longer wavelengths. No absorption bands are seen within this
broadband spectrum.
\subsection{The Spectral Type of TW~Hya and its Inferred Mass and Accretion Rate}
\label{s:spectype}
Most previous determinations of TW~Hya's spectral type, and hence its age and mass, have
been based on optical spectral diagnostics \citep{webb,alencar02,yang:2005}. The
consensus value of the optical spectral type is K7 or an effective temperature of
$\sim$4000~K and a mass of 0.5-0.8~M$_\odot$. Recent near-IR spectroscopy of TW~Hya,
however, implied a much later spectral type of M2.5 \citep{vacca}, which corresponds to a
lower mass and younger age.
We investigated this discrepancy by finding our own spectral type for TW Hya using the
broad wavelength coverage of our G750L spectrum (5240-10266\AA) taken 17 July 2002. We
compared our spectrum to those of K7-M2 stars in the STIS Next Generation Spectral Library
\citep{heap1,heap3,gregg1,gregg2}, whose spectra also incorporate STIS G750L data. We
calculate a reduced $\chi_\nu^2$ value from a comparison of the spectra at each wavelength
with each of the six comparison stars listed in
Table 2 as models, while excluding data in the H and Ca emission lines. The reduced $\chi_\nu^2$ value
was computed over the 961 wavelengths of the spectrum, for a total of 960 (958) degrees of
freedom for a one- (two-)component model respectively. For each model
star, we determined a mean T$_{eff}$ and uncertainty from the literature, computed over
the range of published T$_{eff}$s. The closest match ($\chi_\nu^2$=105) was to GJ 825, an
M0V dwarf with an effective temperature of 3730 $\pm$ 160~K (Figure \ref{fig:spec1}). The
fit is rather poor at both wavelength extremes -- TW Hya is dimmer in the blue and
brighter in the red.
\citet{vacca} suggested that TW~Hya is in fact a later type star with a hot spot. An
alternative explanation is that TW Hya is a hotter star with a substantial cool spot, such
as the one imaged near the magnetic poles and seen in radial velocity measurements
\citep{huelamo08,donati11,boisse12}. In order to test these hypotheses, we further fit our
spectrum with combinations of two comparison stars of different spectral types. A
significantly better ($\chi_\nu^2$=35), though not perfect, fit is obtained with a
combination of a K7 (45\% of the flux) and M2 (55\% of the flux) star. Our K7 comparison
was HR~8086 (T$_{eff}$=3990$\pm$115~K) and our M2 comparison was HD 95735
(T$_{eff}$=3600$\pm$130~K). This combination provides a good fit to both the optical and
NIR spectrum of TW~Hya. One limitation to our method is the lack of multiple comparisons
for a finer temperature constraint, as well as disagreement on the T$_{eff}$ of our
comparison stars in the literature.
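The two-component fit amounts to non-negative linear least squares; a minimal sketch (hypothetical names; \texttt{f\_tw}, \texttt{f\_k7}, and \texttt{f\_m2} are the spectra resampled onto the same 961 wavelengths with the emission lines excluded, and the degree-of-freedom bookkeeping follows the text):
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

def two_component_fit(f_tw, f_k7, f_m2, sigma, dof=958):
    # Fit f_tw ~ c1*f_k7 + c2*f_m2 with non-negative weights,
    # returning the weights and the reduced chi-squared.
    A = np.column_stack([f_k7, f_m2]) / sigma[:, None]
    b = f_tw / sigma
    coeffs, _ = nnls(A, b)
    chi2_nu = np.sum((b - A @ coeffs) ** 2) / dof
    return coeffs, chi2_nu
\end{verbatim}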
We find the effective radius of TW Hya by combining the known distances to the comparison
stars and to TW~Hya with interferometrically measured \citep{vanbelle} or model-inferred
\citep{takeda07} stellar radii of the comparison stars. We find a radius of
0.84$\pm$0.2~R$_\odot$ for the M0 comparison and 1.08$\pm$0.15~R$_\odot$ for the K7+M2
comparison, roughly in agreement with the findings of \citet{vacca}. We assume the
uncertainties in the radius are driven by the uncertainty in the parallax.
This complex situation involving multiple temperature components makes an inference of a
mass, T$_{eff}$, and luminosity for TW~Hya problematic. However, a comparison of our
inferred temperature and radius with evolutionary models provides some insight into
whether we have a low mass star with accretion hot spots and a higher mass star with large
cool spots. We overplot the inferred T$_{eff}$ and radii from our spectral templates in
Figure \ref{fig:ff} with respect to isochrones for stellar masses of 0.4-0.8~M$_\odot$
from \citet{baraffe98} (inspection of other isochrones give similar results within the
uncertainties). If the coolest component represents the central star, then the inferred
radius and T$_{eff}$ are consistent with a 0.55$\pm$0.15~M$_\odot$ object at an age of
$\sim$8~Myr, in line with other age and mass estimates of TW Hya. If the hot component
dominates, however, the age would be closer to 20~Myr. Similarly, the intermediate case
of a slightly higher T$_{eff}$ object comparable to an M0 spectral type implies an older
star (t$\sim$20-30~Myr).
We conclude that the best interpretation of the stellar spectrum is of a cooler star
(T$_{eff}\sim$3600) with a mass of 0.55$\pm$0.15~M$_\odot$ and an age of 8$\pm$4~Myr.
There exists, therefore, a significantly warmed atmosphere due to accretion over portions
of the star's surface. An estimation of the accretion luminosity can come from assuming
the K7 component as the SED of the warmed photosphere due to accretion (additional emission may arise at shorter wavelengths due to the hotter shock itself). Under the assumption of blackbody emission, if we subtract off the underlying luminosity
of the M2V component, the luminosity coming from accretion is estimated to be
0.03~$L_\odot$, corresponding to an areal covering fraction of 10\%. Assuming $L=GM\dot{M}/R$, this implies accretion rates for TW~Hya
of 2$\times$10$^{-9}$~M$_\odot$ yr$^{-1}$, in line with other estimates of TW~Hya's
accretion rate \citep{batalha02}, which range from
4$\times$10$^{-10}$-5$\times$10$^{-9}$~M$_\odot$/yr.
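For reference, the implied rate follows directly from the numbers above:
\begin{equation}
\dot{M} = \frac{L_{\rm acc} R_*}{G M_*} \approx \frac{(0.03\,L_\odot)(1.08\,R_\odot)}{G\,(0.55\,M_\odot)} \approx 2\times10^{-9}\ M_\odot\,{\rm yr}^{-1}.
\end{equation}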
Sub-mm measurements of the Keplerian rotation of gas in the TW~Hya disk in principle provide an independent measure of the stellar mass. Given the near face-on inclination of the disk, however, even very precise measurements of the Keplerian rotation provide a constraint on the mass similar to the one we have obtained through our STIS spectroscopy: while CO measurements of TW Hya's disk constrain the inclination to 6-7$^\circ$ with an accuracy of about a degree, that still allows for a wide range of stellar mass, since the measured quantity is $M^{1/2}\sin{i}$. If the inferred inclination is 6$^\circ$ and the mass of TW Hya is best fit by 0.6$M_\odot$ \citep{hughes11}, an uncertainty of 1 degree corresponds to masses ranging from 0.85$M_\odot$ to 0.44$M_\odot$ \citep{qi04,hughes11}.
Most of the prior
SED and sub-mm spectral cube modelling has assumed M$_*$=0.6 M$_\odot$
\citep{Calvet:2002,Qi:2008}, while other investigators have assumed masses as high as M$_*$=0.8 M$_\odot$ \citep{donati11,andrews12}. In \S \ref{s:model}, we adopt our best fitting mass of 0.55 $M_{\odot}$ and show that the choice of stellar mass does not significantly impact most parameters of interest within the disk.
\section{Modeling the TW Hya Scattered Light Disk}
\label{s:model}
In this Section we provide a model for the observed spectrum and morphology of the disk. This requires a model of the structure of the disk, discussed in \S\ref{s:diskstruct}. We infer that the depression we observe between 50-130~AU
is caused by a ``gap'' in the disk. The origin of this feature will not be constrained by these observations, but a likely candidate is a planetary object that has succeeded in opening a gap. The limits to such a candidate are discussed in \S\ref{s:planetmass}. However, other possibilities exist, which we outline in \S\ref{s:conc}.
\subsection{Initial Disk Structure}
\label{s:diskstruct}
The disk models are generated as described in \citet{HJC_gaps}. The details
of the radiative transfer modeling is also described in \citet{paper1,paper2}
and \citet{HJC_model}. Although we assume that the overall disk structure is
axisymmetric, the calculation of the radiative transfer is done in 3D to
include the curvature of the disk both in vertical height above the midplane
($z$) and in the azimuthal direction ($\phi$).
The structure of the unperturbed planet-less disk is generated in a two-step
procedure. In the first step, we calculate a locally plane-parallel two-dimensional model for
the entire disk. We use the same formalism developed by \citet{calvet} and
\citet{vertstruct,dalessio2}, with some simplifying assumptions. We calculate
the disk from 0.25 to 256 AU, with radial bins spaced by factors of
$\sqrt{2}$. At each radius, we assume that the disk is locally
plane-parallel. This quickly generates an estimate for the radial and
vertical temperature of the disk and the surface density profile that we can
then refine in step (2).
Then, in step (2), we remove the assumption of a local vertical plane parallel
surface, but keep the vertically integrated surface density profile fixed. We
take a radially limited slice of this disk and refine its structure, this time
taking the full three-dimensional (3D) curvature of the disk into account. We
calculate radiative transfer in 3D as a numerical integration over the surface
of the disk.
In both steps, we iteratively calculate the vertical density and temperature
structure of the disk including radiative transfer and under the assumption of
hydrostatic equilibrium. The main heating sources are stellar irradiation and
viscous dissipation. The opacities used are the same as those used for
calculating the disk brightness, although to speed up the calculations, the
opacities are averaged over the stellar and disk thermal emission. To
represent the optical depth of the disk to stellar light, we average the
wavelength-dependent opacity over the Planck spectrum evaluated at $T_{\rm
eff}$ and we use the Rosseland mean opacity to represent the optical depth
of the disk to its own radiation.
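Schematically, one pass of the hydrostatic part of this iteration looks as follows (a simplified Python sketch assuming an ideal-gas equation of state with mean molecular weight $\mu$; the function and variable names are illustrative, not those of the actual code):
\begin{verbatim}
import numpy as np

G, K_B, M_H = 6.674e-8, 1.381e-16, 1.673e-24   # cgs

def hydrostatic_density(z, T, r, m_star, sigma, mu=2.34):
    # Given a vertical temperature profile T(z) at cylindrical
    # radius r, integrate d ln(rho)/dz = -mu*m_H*g_z/(k*T)
    # - d ln(T)/dz, then normalize to the surface density.
    g_z = G * m_star * z / (r**2 + z**2) ** 1.5
    dlnrho = -(mu * M_H / K_B) * g_z / T - np.gradient(np.log(T), z)
    ln_rho = np.cumsum(dlnrho * np.gradient(z))    # crude quadrature
    rho = np.exp(ln_rho - ln_rho[0])
    return rho * sigma / (2.0 * np.trapz(rho, z))  # half-column norm
\end{verbatim}
In the actual calculation the temperatures are recomputed from the radiative transfer after each such pass until the structure converges.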
We base our disk structure modeling on our inferred stellar parameters
from Section \S \ref{s:spectype}. We assume a stellar mass of
0.55~M$_\odot$, radius of 1.08~R$_\odot$, and total
luminosity of 0.208 $L_\odot$. The spectrum of the star
is composed of two blackbodies, one at 3990 K and the other
at 3600 K, contributing 55\% and 45\%, respectively, to the
total luminosity. The opacities of the dust to stellar irradiation
are calculated using this spectrum.
Given the uncertainties in mass and T$_{eff}$ described in \S \ref{s:spectype}, we
consider their impact on our models. Decreasing the stellar mass while keeping
the total luminosity of the star fixed has the effect of making the disk puffier, due to
the lower gravity. The disk is then able to intercept more light, making it appear
brighter. Disk models differing only by the stellar parameters have the same
basic shape and spectral behavior but change overall brightness by 5-10\%. Changing the
stellar parameters also mainly impacts the inferred radial width of the gap, but not the
depth.
The disk parameters are those derived by \citet{Calvet:2002}
from the SED of TW Hya.
For the accretion rate, we assume a value of
$1\times 10^{-9}\, M_{\sun}\mbox{yr}^{-1}$ (see \S \ref{s:spectype}).
We assume that the dust is well-mixed with the gas
and constant throughout the disk, with a dust-to-gas
ratio of 0.0138 and a grain-size distribution of
$n(a)\propto a^{-3.5}$.
\citet{Calvet:2002} and \citet{Wilner:2005} found that large maximum grain sizes
($a_{\mbox{\scriptsize max}}$)
give good fits to the SED of TW Hya at long wavelengths
($\lambda \gtrsim 1$ cm), but
there is a degeneracy between maximum grain size
and viscosity parameter ($\alpha$) in their models,
where the viscosity is given by
$\nu = \alpha c_s^2/\Omega_{\phi}$ where $c_s$ is the thermal sound speed,
and $\Omega_{\phi}$ is the Keplerian orbital angular velocity.
Good fits to the SED of TW Hya are obtained with
$a_{\mbox{\scriptsize max}} = 1\,\mbox{mm}, 1\,\mbox{cm},$ or
$10\,\mbox{cm}$. This is because large grains are effectively
invisible to optical and infrared wavelengths.
The $\alpha$ parameter must be adjusted as the
maximum grain size changes, because the overall mass of the
disk increases with increasing grain size, assuming a fixed
gas-to-dust ratio; however, the stellar mass accretion rate must
be kept fixed.
Given that $a_{\rm max}$, $\alpha$, and $\dot{M}$ all
have the primary effect of scaling the disk mass up or down
and that the degeneracies inherently give limited constraints
on these parameters, we assume for simplicity in all our disk models that
$\dot{M} = 10^{-9}\, M_{\sun}\mbox{yr}^{-1}$,
$a_{\mbox{\scriptsize max}} = 1\,\mbox{cm}$, and
$\alpha = 5\times10^{-4}$.
The effect of increasing the
accretion rate would be to increase the total surface
density of the disk, which could be offset by adjusting
the viscosity parameter $\alpha$ downward.
Our assumed parameters give a total disk mass within the simulation boundaries between 27 and 211 AU
of 0.074~$M_{\odot}$.
This assumed disk mass is comparable to estimates of the disk mass from
Herschel HD observations \citep{berginnature}.
The wavelength dependent opacities are calculated using a
Mie scattering code, miex, that includes distributions of particle sizes and
can account for large dust particles \citep{wolf04}.
The disk models rely on mean opacities calculated from
these wavelength-dependent opacities.
The Rosseland mean opacity evaluated at 100 K is used to
represent the overall disk opacity to its own emission
and the Planck averaged opacity evaluated at 100 K is
used for calculating the thermal emission from the disk.
The Planck averaged opacity evaluated over the
stellar spectrum is used to represent the opacity
of the disk to stellar irradiation.
As above, the star is modeled as a two-temperature
blackbody, with 55\% of the luminosity from the
3990 K component and 45\% from the 3600 K
component.
For a model with a given minimum grain size, we calculate
the entire disk model from the initial conditions
in order to be completely self-consistent.
\subsection{Scattered Light}
\label{sec:model}
The scattered light image of a disk is modeled as in
\citet{2009HJC} and \citet{HJC_gaps}.
The scattering surface of the disk is defined to be
where the optical depth from the star at a given frequency $\nu$ is
$\tau_{\nu} = 2/3$. The brightness
at the scattering surface of the disk, including multiple scattering, is
\begin{equation}\label{eq:multscat}
I^{\mbox{\scriptsize{scatt}}}_{\nu} =
\frac{\omega_{\nu}\mu\/R_*^2\/B_{\nu}(T_*)}{4r^2(\mu+\cos \eta)}
\left\{
1 + \frac{\omega_{\nu}}{1-g^2\mu^2}
\left[\frac{(2+3\mu)(\mu+\cos\eta)}{(1+2g/3)(1+g\cos\eta)} - 3\mu^2
\right]
\right\},
\end{equation}
where $\omega_{\nu}$ is the albedo,
$g=\sqrt{3(1-\omega_{\nu})}$,
$\mu$ is the cosine of the angle of incidence to the scattering surface,
$B_{\nu}$ is the stellar brightness,
$r$ is the total distance to the star,
and $\eta$ is the angle between the line of sight to the observer and
the normal to the scattering surface.
The observed brightness of a star at distance from the observer $d$ is
\begin{equation}
F_{\nu,\mbox{\scriptsize{obs}}} = \pi B_{\nu} \left(\frac{R_*}{d}\right)^2
\end{equation}
so we can express the surface brightness in scattered light
in units of the apparent brightness of the star per square arcsecond:
\begin{equation}\label{eq:Iscatt}
I^{\mbox{\scriptsize{scatt}}}_{\nu} =
\frac{\omega_\nu \mu (1+f_{\rm mult})}{4\pi(\mu+\cos{\eta})}
\left(\frac{d}{\mbox{pc}}\right)^2
\left(\frac{r^2+z_s^2}{\mbox{AU}^2}\right)^{-1}
\left(\frac{F_{\nu,\mbox{\scriptsize{obs}}}}{\mbox{asec}^2}\right)
\end{equation}
where
\begin{equation}
f_{\rm mult} =
\frac{\omega_{\nu}}{1-g^2\mu^2}
\left[\frac{(2+3\mu)(\mu+\cos\eta)}{(1+2g/3)(1+g\cos\eta)} - 3\mu^2
\right]
\end{equation}
represents the fractional increase in brightness
caused by multiple scattering.
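For concreteness, Eq.~(\ref{eq:Iscatt}) translates directly into code; the following sketch (hypothetical function name, all symbols as defined above) returns the surface brightness in units of the apparent stellar flux density per square arcsecond:
\begin{verbatim}
import numpy as np

def scattered_brightness(omega, mu, cos_eta, r_au, zs_au, d_pc):
    # Brightness of the tau = 2/3 scattering surface, including
    # the multiple-scattering correction f_mult.
    g = np.sqrt(3.0 * (1.0 - omega))
    f_mult = (omega / (1.0 - g**2 * mu**2)) * (
        (2.0 + 3.0 * mu) * (mu + cos_eta)
        / ((1.0 + 2.0 * g / 3.0) * (1.0 + g * cos_eta))
        - 3.0 * mu**2)
    return (omega * mu * (1.0 + f_mult)
            / (4.0 * np.pi * (mu + cos_eta))
            * d_pc**2 / (r_au**2 + zs_au**2))
\end{verbatim}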
Finally, we need to choose a composition and size distribution (especially minimum grain
size) of the dust to calculate the final scattered light that is emitted from the disk.
We use the wavelength-dependent opacities calculated using Mie theory, as described above,
to determine the scattered brightness. Many compositions and possible grain sizes are
available, and it is not immediately clear whether a single composition is preferred for
optically thick disks, or whether each individual disk has its own properties. The
neutral color of TW~Hya in the optical compared to the quite red colors of HD~100546,
suggest that a range of compositions and size distributions exist amongst disks.
\subsection{Gap and Truncation}
\label{s:gap}
The gap in the disk is modeled as an ad hoc axisymmetric density
perturbation parameterized by a Gaussian with adjustable
width $w$ and depth $d$ centered at 80 AU.
To model the truncation of the disk, we introduce an exponential
cutoff at a knee $k$, as appears in the self-similar
solution for an accretion disk as derived in \citet{1998HCGD}.
If $\Sigma_0(r)$ is
the unperturbed disk surface density, then the new surface
density profile is
\begin{equation}
\Sigma(r) = \Sigma_0(r) \{1-d\exp[-(r- 80\mbox{ AU})^2/(2w^2)] \}
\exp[ -(r/k) ].
\end{equation}
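A sketch of this parameterization (hypothetical names; \texttt{sigma0} is the unperturbed profile evaluated on the radial grid \texttt{r\_au}):
\begin{verbatim}
import numpy as np

def perturbed_sigma(r_au, sigma0, depth, width, knee, r_gap=80.0):
    # Surface density with a Gaussian partial gap at r_gap and
    # an exponential truncation at the knee (radii in AU).
    gap = 1.0 - depth * np.exp(-(r_au - r_gap)**2 / (2.0 * width**2))
    return sigma0 * gap * np.exp(-r_au / knee)
\end{verbatim}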
Since the primary heating source for the disk is stellar irradiation,
the effects of shadowing and illumination must be accounted for in
determining the vertical structure of the disk.
The three-dimensional density and temperature structure of the
perturbed disk are calculated iteratively according to \citet{HJC_model}
and \citet{HJC_gaps}.
That is, the illumination at the surface of the disk is determined
by ray-tracing and the disk temperatures are calculated accordingly.
Once the disk temperatures are determined, the vertical density
profile of the disk is iteratively
recalculated assuming hydrostatic equilibrium
as above, keeping the vertically integrated surface density profile
constant.
\subsection{The Predictive Power of Scattered Light Observations of Optically Thick Disks}
How predictive can models of optically thick disks be for composition and size
distribution? While we discuss below (\S\ref{sec:bestfit}) the best fit to our
measurements of TW Hya's disk, here we consider the uniqueness of composition within such
fits. We consider a fixed disk geometry by taking a single disk model from the parameter space we explored for TW~Hya--a disk with a gap located at 80~AU with width $w$=30~AU and depth $d$=0.3, a
truncation knee at $k$=100~AU, and a maximum grain size of 1~cm. We then made six
independent models of the disk structure that vary composition between pure water ice and
astronomical silicates \citep{warren84,laor93}, and vary minimum grain sizes between
5$\times10^{-3}$, 1, and 10~\micron. The dust is well-mixed with the gas so that the dust
density is proportional to the gas density. We calculated albedo and opacity to find the
surface brightness of scattered light as a function of wavelength.
Figure \ref{fig:f7} shows a comparison between the six predicted STIS surface brightness
profiles for the different models. Differing grain sizes and compositions indeed affect
the resulting surface brightness profiles of a given disk structure. If the composition
of the disk is incorrect, the structure of the disk can be incorrectly interpreted.
In general, pure water ice disks are dimmer than those that possess pure silicates. The reason for this difference is primarily due to the higher opacity of a pure silicate disk, demonstrated in Figure \ref{fig:f8}. In this Figure we have plotted the height of the $\tau_{\nu}=2/3$ surface ($H$) divided by the radius ($R$) for varying wavelengths.
From Eq.~(\ref{eq:Iscatt}), the brightness of the disk is proportional
to the angle of incidence $\mu$ at the surface,
\begin{equation}
\mu \approx \frac{dH}{dR} - \frac{H}{R}
= R \frac{d}{dR}\left(\frac{H}{R}\right)
\end{equation}
Note that $H$ represents the wavelength-dependent penetration depth
of stellar photons, not a thermal scale height.
If $H\propto R^{\beta}$, then $\mu=(\beta-1)H/R$.
Therefore, the brightness scales roughly with $H/R$.
The $H$ surface occurs higher up in the disk for silicates,
resulting in a higher surface brightness.
The different structure of the surface brightness profiles in Figure
\ref{fig:f7} is also striking, and is clarified by looking at
$d(H/R)/dR$ for the different models (See Figure \ref{fig:f9}).
Water ice has strong absorption features at 1.5 and 2.0~\micron, which are probed by our
F160W/F171M and F204M observations. Given the $\sim$10\% accuracy of the disk photometry
measurements, we can in principle detect absorption features that have a depth of
$\sim$15-30\%. Figure \ref{fig:f10} shows the resulting reflectance spectra for each of
our models. Silicates show no absorption lines in the visible to near-IR, but are mostly
neutral or red over this wavelength range for minimum grain sizes $>$1\micron. Water ice
can show noticeable features, and disks with pure water ice can show detectable features
at 20-40\% depth.
These tests demonstrate that composition can play an important role in the observed
scattered light spectrum and surface brightness profiles of a disk. For our final model
of TW Hya, an exhaustive search of compositions, grain sizes, and structures is
computationally prohibitive and beyond the scope of this paper.
\subsection{Model Parameters}
\label{sec:bestfit}
To fit the observations of TW Hya's disk, we used dust opacities and scattering
efficiencies for a composition with the same relative abundances as used to model the SED
\citep{dalessio3}: 29.6\% organics, 40.4\% water ice, 24.5\% astronomical silicates, and
5.5\% troilite. We test whether such a composition also fits the scattered light
observations. We ran a suite of models, varying the following disk parameters as follows:
\begin{eqnarray}
a_{\mbox{\scriptsize min}} & \in&
\{0.005, 0.5, 5\} \mbox{ microns} \label{avar} \\
w &\in& \{5, 10, 20, 30\} \mbox{ AU} \label{wvar} \\
d &\in& \{0.0, 0.3, 0.5\} \label{dvar} \\
k &\in& \{60, 80, 100, 120, 150\} \mbox{ AU} \label{kvar}
\end{eqnarray}
The surface brightness profiles are fit simultaneously
spatially and across the 7 wavelengths for which we have photometric data.
To account for uncertainties in the overall normalization such as might
arise from uncertainty in our chosen parameters, we allow
the overall brightness to vary by a constant factor
and calculate the reduced $\chi^2$ value for each model
with respect to the wavelength dependent surface brightness profiles from $56$ to $160$ AU\@.
The center of the gap is fixed at 80 AU.
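Since the normalization is allowed to float, each model can be compared to the data with the analytically optimal scale factor; a minimal sketch (hypothetical names; the degree-of-freedom bookkeeping is illustrative):
\begin{verbatim}
import numpy as np

def reduced_chi2_free_norm(data, model, sigma, n_fit_params=5):
    # Weighted least-squares scale factor between model and data,
    # followed by the reduced chi-squared of the scaled model.
    w = 1.0 / sigma**2
    scale = np.sum(w * data * model) / np.sum(w * model**2)
    chi2 = np.sum(w * (data - scale * model)**2)
    return chi2 / (data.size - n_fit_params)
\end{verbatim}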
The parameters of the best fits, as measured by the minimum reduced $\chi^2_{\nu}$ over the grid above, are
tabulated in Table \ref{tab:fitparams}; we found that a disk gap of 30\% depth and 20~AU width is preferred.
The profiles are well fit by a truncation knee of 100~AU. We found that we could fit the observed
brightness of the disk only with small a$_{\rm min}$ (0.005$\micron$), but if we were
willing to scale the overall brightness, we could equally fit (similar $\chi^2$) the
disk with a$_{\rm min}$=0.5$\micron$. However, larger grain sizes failed to reproduce the
neutral-to-blue color observed over large portions of the disk, resulting instead in a
redder color. Since the overall flux density of the disk is an observed quantity, we
favor those models that naturally reproduced the observed surface brightness.
A few of our assumptions could impact the expected brightness of the disk. For example,
the luminosity of the star would differ by $\sim$20\% if we instead used the luminosity
determined by \citet{vacca}, rather than that by \citet{webb} or our own determination.
Since the surface brightness of TW~Hya's disk due to scattered light depends on the square
of the distance, uncertainties of up to 23\% are possible, given the uncertainties in
TW~Hya's parallax. A different distance would also imply revised stellar parameters. A
higher or lower mass of the central star will result in a puffier (brighter) or more
vertically compact (dimmer) disk, respectively.
We have also assumed that the dust isotropically scatters light. If the dust were instead
forward scattering to a moderate degree, such as with a Henyey-Greenstein asymmetry
parameter of 0.5, the approximate scattering angle of TW~Hya would result in a dimming of
the disk of $\approx$25\%, based on our modeling. Due to flaring in the disk, our isotropic models predict a
decrease in flux along the semi-minor axis of the disk pointing to the observer.
Conversely, forward scattering dust, even in a flared disk, will produce a brightness peak
along the semi-minor axis of the disk tilted toward the observer. The true orientation of
the disk is degenerate between these two possibilities, so we orient our models such that
the brightness maximum is aligned with the observed P.A. in the STIS and F110W data.
Closer to the star, the models with isotropic scattering do not completely reproduce the
azimuthal brightness distribution, while the forward scattering model is more consistent.
Our models suggest an inclination of $\sim$7$^\circ$ and forward scattering dust for the
disk. Forward scattering grains in the outer disk are marginally favored as both the STIS
and F110W data still show a trough and peak structure in the azimuthal surface brightness
profile--while the S/N is lower, the expectation from our models is a larger brightness
asymmetry at larger distances, which is not favored by the data. This is either
indicative of a decrease in the propensity to forward scatter or a lack of forward scattering
grains in the outer disk.
Figures \ref{fig:f11} and \ref{fig:f12} show the radial data
compared to the best fit model for all of the observations and radial
spectral cuts of the disk. In general the model fits the observed
spectrum quite well, with the exception of the region around 80~AU,
the position of the gap.
This is also where the model fails to reproduce the observed surface
brightness profile in some of the wavelengths observed.
In comparison, models with no gap provide poor fits to the wavelength dependent
surface brightness profiles.
In Figure \ref{fig:f3} we show brightness profiles
of disk models without a gap at 80 AU in comparison to the data
at the F222M waveband. The disk models include truncation
at $k=60$, 100, and 150 AU and have been scaled by
$1.05$, $0.73$, and $0.57$, respectively, in order to fit the data
across all wavelengths. The reduced $\chi^2_{\nu}$ for the
models are 20.8, 8.18, and 4.31, respectively. The
model with truncation at 150 AU is the best fit, but still does
not provide as good a fit as the model with
a 30\% gap.
\section{Mass of a Gap-Opening Planet}
\label{s:planetmass}
The best model for the TW Hya disk is one where an axisymmetric
$30\%$ partial gap is opened
at 80 AU in the disk. A possible mechanism for opening such a gap
is an embedded planet. The model we have used to fit to the TW Hya disk
does not include hydrodynamics,
but rather the gap is imposed in an ad hoc manner
on the disk structure.
Numerical hydrodynamic simulations of planets embedded in gas-dominated
protoplanetary disks by \citet{bate} and \citet{bryden99}
indicate that tidal torques between the planet
and disk can clear axisymmetric partial gaps. As described in \citet{HJC_gaps}, the gap size can be correlated to planet mass according to the following arguments.
The mass at which a planet can open a gap depends upon both
the scale height of the disk and its viscosity.
The thermal scale height of the disk is
\begin{equation}\label{thermalscaleheight}
h = \frac{c_s}{v_{\phi}}a
\end{equation}
where $c_s=\sqrt{kT/\mu}$ is the thermal sound speed measured at the midplane,
$k$ is the Boltzmann constant,
$T$ is the local disk temperature,
$\mu$ is the mean molecular weight,
$v_{\phi}=\sqrt{GM_*/a}$ is the orbital speed of the planet,
and $G$ is the gravitational constant.
For a composition dominated by molecular hydrogen,
$\mu = 2 m_H$.
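As an illustration of Eq.~(\ref{thermalscaleheight}), the short sketch below evaluates $h/a$ for an assumed midplane temperature of 10~K at $a=80$~AU around a $0.55\,M_{\sun}$ star. The temperature is purely illustrative (our models use the full computed temperature structure), but it recovers an aspect ratio close to the $h/r=0.081$ of our best-fit model.
\begin{verbatim}
# Illustrative evaluation of Eq. (thermalscaleheight); T = 10 K is an
# assumed value for this sketch, not a fitted model parameter.
import math

k_B, m_H, G = 1.380649e-23, 1.6726e-27, 6.674e-11   # SI units
M_sun, AU = 1.989e30, 1.496e11

T, a, M_star = 10.0, 80.0 * AU, 0.55 * M_sun
c_s = math.sqrt(k_B * T / (2.0 * m_H))   # sound speed, mu = 2 m_H
v_phi = math.sqrt(G * M_star / a)        # Keplerian orbital speed
print(c_s / v_phi)                       # h/a ~ 0.08
\end{verbatim}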
For a viscous protoplanetary disk, the criterion for gap-opening
can be expressed as
\begin{equation}\label{eq:viscgapcrit}
\frac{3}{4}\frac{h}{r_{\mbox{\scriptsize Hill}}} + \frac{50}{q {\cal R}}
\lesssim 1
\end{equation}
\citep{2006CridaMorbidelliMasset},
where
the Hill radius of the planet is
\(r_{\mbox{\scriptsize Hill}} = \left(\frac{m_p}{3M_*}\right)^{1/3} a\),
$q=m_p/M_*$ and ${\cal R}\equiv r^2\Omega_P/\nu_v$
is the Reynolds number, and $\nu_v$ is the viscosity,
given by $\nu_v=\alpha c_s h$ for an $\alpha$-disk model.
\citet{bate} adopt disk parameters of $h/r=0.05$ and ${\cal R}=10^5$
for their simulations, giving a gap-opening threshold of
$q=1.06\times10^{-3}$ using Eq.~(\ref{eq:viscgapcrit}), or
slightly more than 1 $M_J$. For comparison, the inviscid gap-opening
threshold would be 0.4 $M_J$.
They find that planets with
$q=3\times10^{-4}$ and $1\times10^{-4}$ clear gaps of
90\% and 50\%, respectively, and a planet with $q=3\times10^{-5}$
creates a nearly negligible gap.
Thus, a planet that clears
a gap of 30\% would be between 0.03 and 0.1 of the viscous gap-opening
threshold.
For TW Hya, our best-fit disk model has $h/r=0.081$ and
$\alpha=5\times10^{-4}$, giving
${\cal R}=3.1\times10^5$ and a viscous gap opening threshold of
$q=1.1\times10^{-3}$. If $M_*=0.55\,M_{\sun}$, this is
197 $M_{\earth}$. This implies that if an embedded planet is the
cause of the gap we have modeled, it must be between
$6$--$20\,M_{\earth}$. However, there is some degeneracy
with regard to $\alpha$, as discussed in \S\ref{s:diskstruct}.
A model with maximum grain size of 1 mm and $\alpha=1\times10^{-3}$
gives a disk with similar properties to the best-fit model.
This larger value of $\alpha$ gives a lower Reynolds number,
requiring a more massive planet to open a gap and allowing a
larger planet to hide in the disk. If we say that
$\alpha<1\times10^{-3}$,
we find a gap opening threshold of $q<1.5\times10^{-3}$.
Thus, a more conservative upper limit on the mass of a
planet at 80 AU in the TW Hya disk is 28 $M_{\earth}$.
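For reference, the gap-opening thresholds quoted above can be reproduced by solving Eq.~(\ref{eq:viscgapcrit}) numerically for the mass ratio $q$ at which the left-hand side equals unity. The bisection sketch below is illustrative only; the solver and search bracket are our choices and are not part of the cited works.
\begin{verbatim}
# Solve (3/4) h/r_Hill + 50/(q R) = 1 for q, using
# r_Hill = (q/3)^(1/3) a, so h/r_Hill = (h/r) (3/q)^(1/3).
from scipy.optimize import brentq

def lhs(q, h_over_r, R):
    return 0.75 * h_over_r * (3.0 / q) ** (1.0 / 3.0) + 50.0 / (q * R)

q_bate  = brentq(lambda q: lhs(q, 0.05,  1.0e5) - 1.0, 1e-5, 0.1)
q_twhya = brentq(lambda q: lhs(q, 0.081, 3.1e5) - 1.0, 1e-5, 0.1)
print(q_bate)    # ~1.06e-3, the Bate et al. threshold
print(q_twhya)   # ~1.1e-3; times 0.55 M_sun this is ~200 M_earth
\end{verbatim}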
\citet{2006CridaMorbidelliMasset} estimate the half-width of the
gap to be $\sim 2 r_{\rm Hill}$, based on the region where
gas streamlines form horseshoe orbits.
If we take $q=10^{-4}$ and $a=80$ AU, then the gap half-width would be 5.1 AU\@.
However, this width does not correspond directly to the widths of the
Gaussian-shaped gaps modeled in this work, since the gap shapes modeled by
\citet{2006CridaMorbidelliMasset} are flat-bottomed.
A Gaussian fitted to these gap shapes would have widths larger than
$2 r_{\rm Hill}$. Thus a $\sim20$ AU gap width, as inferred by our models,
is only slightly higher than one might expect
for a planet opening a gap.
\section{Discussion}
\label{s:conc}
We have combined several resolved images of the TW~Hya disk in scattered light with
spatially resolved spectroscopy of the disk. The spectrum of the disk in the visible and
near-IR is featureless, with a broad neutral to blue trend, indicative of sub-micron
grains. Based on our scattering models, disk grains composed of organics, water ice and
silicates that fit the SED also fit the scattered light, if they are as small as
0.005$\micron$. We have chosen a composition based on what is most likely present in the
TW~Hya disk, but we have not exhaustively explored other materials. Any material that
scatters neutrally in the visible to near-IR could be suitable to explain the spectral
shape we observe, but would also have to be tested against the SED.
One implication of our work is that scattered light measurements can effectively constrain
minimum grain size since the smallest dust typically has the largest available surface
area for scattering. TW Hya's disk is very bright and thus requires the presence of small
grains. By the absence of any strong water ice absorption features, we can also directly
constrain the abundance of water ice on large grains; small grains show absorption
features at $<$10\% the disk scattering continuum for the compositions we considered (see
Figure \ref{fig:f10}), below the level of our disk photometric uncertainties. A lack of
absorption accords with the expectation that large grains settle below the optical depth
probed by scattered light observations.
For our chosen composition, we can compare our water ice mass abundance to that of
\citet{twhyanat}, whose mass is derived from their Herschel detection of water vapor
caused by UV dissociation at 100-150~AU. They interpreted the deficit of water vapor to
that predicted by their models as evidence for dust settling of large icy grains to the
disk midplane and calculated that there needed to be $>9\times10^{27}$~g of water ice in
the disk. Our scattered light images probe deeper in disk scale height than UV and are
also consistent with the idea that the bulk of the ice is on large grains that have
settled toward the midplane. If we take a disk mass in gas of 1.9$\times10^{-2} \>{\rm M_{\odot}}$,
as assumed by \citet{twhyanat}, then we would predict a total ice mass in the disk from our
models of 2$\times10^{29}$~g, based on our mass abundance of 40\% water ice in dust and a
dust-to-gas ratio of 0.0138. The mass increases by a factor of three if we instead use the
disk mass from our best fitting model, which is closer
to 0.07 $\>{\rm M_{\odot}}$. Assuming that we probe the local mass abundance of water ice relative to
gas at the $\tau=2/3$ surface of the disk, this would imply a relative abundance of
5.6$\times10^{-3}$ at 30~AU above the midplane at 100~AU and 46~AU above the midplane at
150~AU.
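The bookkeeping behind these numbers is simple enough to spell out explicitly; the sketch below multiplies the quoted gas mass, dust-to-gas ratio, and ice mass fraction (all values taken from the text above).
\begin{verbatim}
# Total water-ice mass implied by our dust model (inputs as quoted).
M_SUN_G = 1.989e33                  # grams per solar mass
m_gas   = 1.9e-2 * M_SUN_G          # assumed gas mass of the disk
m_ice   = m_gas * 0.0138 * 0.40     # dust-to-gas ratio x ice fraction
print(f"{m_ice:.1e} g")             # ~2.1e29 g, i.e. 2e29 g as quoted
\end{verbatim}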
Our models must include a 30\% cleared gap at 80 AU and a disk truncation
exterior to 100 AU to match the observed surface brightness profiles. We
constrain the mass of a planetary companion that could be clearing the
gap to 6--28 M$_{\earth}$. In the following we discuss potential remaining uncertainties.
\subsection{Remaining Uncertainties}
While our model successfully reproduces the observed properties of the TW~Hya
disk over our wavelength range, there remain some uncertainties that will
require follow-up study. In particular, we further discuss the nature of the
observed brightness asymmetry for the disk seen at short wavelengths, the more
spatially extended nature of small dust grains in optical light compared to
continuum sub-mm and mm observations, and alternative origins for the observed
gap at 80~AU.
The brightness asymmetry first reported by \citet{Roberge:2005} and further characterized
in \S \ref{s:phase} is puzzling, given that the asymmetry is only apparent at wavelengths
$\leq$1.1$\micron$, whereas our models at any disk inclination would predict asymmetry at
wavelengths $>$1$\micron$ as well. A warped inner disk, as suggested by
\citet{Roberge:2005} and \citet{rosenfeld12}, also seems unlikely, as it would be expected to produce
asymmetry at long wavelengths too. What could cause a wavelength dependence to the
asymmetry?
For an inclined, optically thick disk with isotropically scattering grains, the near side of the disk appears slightly
brighter because of the lower opacity along the line of sight. However, the
near side also appears foreshortened. Our azimuthal profiles were taken assuming
a circular disk, which would artificially decrease the surface brightness for position angles corresponding to the
side of the disk pointing towards the observer. Therefore, our models do not
uniquely predict the brightness asymmetry.
Forward scattering grains cancel the foreshortening effect by
brightening the near side. If the disk grains are substantially more forward
scattering at short wavelengths, the apparent brightness asymmetry could
change with wavelength. However, forward scattering grains would also make
the whole disk look dimmer than observed, since we image it nearly face-on.
TW Hya is known to have variable accretion \citep[e.g.][]{dupree:2012} and
this could drive changes in the scale height of the inner disk. Variability in
the inner disk structure is nevertheless unlikely as the source of the changing
asymmetry. Although the inner disk could change scale height due to variable
accretion, the F110W image, STIS image, and STIS spectroscopy show the same
asymmetry despite being taken roughly two years apart from one another,
while the F160W image shows no asymmetry even though it was taken only three months
before the F110W image.
If portions of the outer disk were shadowed from direct starlight by a
non-axisymmetric bump or warp in the inner disk, that might be able to produce
asymmetry in the observed scattered light \citep{rosenfeld12}. The structure would become more
optically thin at longer wavelengths. The problem is that the
visible to near-infrared extinction would have to be nearly gray to match the observed photometry of the disk.
Such gray extinction arises only from
very large grains, whereas we determine that very small grains are plentiful
in the outer disk.
Alternatively, the lack of asymmetry at longer wavelengths could be the result
of a changing disk structure or grain population with depth, as longer
wavelengths probe deeper in the disk. The forward scattering grains seen at
wavelengths shortward of 1.1\micron\ might no longer be present deeper in the
disk, being replaced by a different (perhaps larger) grain population. Our
models predict where the change-over in grain properties occurs as defined by
the scattering surface between F110W and F160W. At 80~AU the scattering
surface of the disk is at 19.9~AU above the midplane for 1.1\micron\, while it
is at 19.3~AU for 1.6\micron. If such a vertical transition exists, it would
be within a very narrow layer of the disk. If any upper layer structure is
the cause for the azimuthal asymmetries, they must reside only in layers
$>$19.3~AU above the midplane. Such an abrupt change seems unlikely.
A more detailed examination of the effect of forward-scattering grains,
coupled with realistic models of forward scattering from aggregate grains will
be needed to make progress on interpreting the asymmetry.
We next discuss the overall size of the disk. As has been seen in CO and dust continuum
data at sub-mm and mm wavelengths, the gas in the disk of TW~Hya extends much further than
the sub-mm dust emission \citep{andrews12}. This dichotomy apparently does not extend to
smaller grain sizes, as the scattered light disk is nearly co-spatial with the extent of
the CO gas.
We find that the best fit model for the scattered light has a
large truncation knee, at 100 AU,
although we explored values down to 60 AU. Smaller values
of $k$ produce steep surface brightness profiles not warranted by our data (see Figure \ref{fig:f3}).
In comparison, radio interferometric observations find much smaller
truncation radii for similar models.
Using the similarity solution \citep{LBP1974}:
\begin{equation}
\Sigma(R) = \frac{c_1}{R^{\gamma}}
\exp\left[-\left( \frac{R}{k} \right)^{2-\gamma} \right].
\end{equation}
\citet{Andrews2012}
find that $k=35$--45 AU for values of $\gamma$ between $-1$ and $1$
fits SMA observations of the 870 $\mu$m continuum and the CO J$=$3--2 line;
\citet{Hughes2008}
find $k = 30$ AU and $\gamma=0.7$ for SMA observations of the
1.3 mm and 870 $\mu$m (230 and 345 GHz) continuum, although
\citet{Hughes2011}
find that $k=50$ AU and $\gamma=1$
is the best fit to the CO J$=$3--2 line; and
\citet{Isella2009}
find $k = 17.5$ AU and $\gamma=-0.3$
for CARMA observations of the 1.3 mm continuum.
Models by
\citet{Gorti2011}
find that a truncation knee
of 35 AU and $\gamma=0.7$
provide good fits to observations of gas emission lines in
TW Hya.
The model fits to radio observations
cited above predict that the disk density beyond
$\sim50$ AU should be sharply cut off. As $k$ and $\gamma$
decrease, the more sharply the disk will be truncated.
However, the scattered light observations clearly indicate
the presence of disk material beyond 200 AU, so
we do not expect to find a small truncation knee.
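To make the sharpness of the radio-derived truncations concrete, one can evaluate the similarity solution above for the quoted $(k,\gamma)$ pairs. The sketch below compares $\Sigma(200\,\mathrm{AU})/\Sigma(60\,\mathrm{AU})$; the normalization $c_1$ cancels in the ratio, and the two comparison radii are our choice for illustration.
\begin{verbatim}
# Ratio of similarity-solution surface densities at 200 AU and 60 AU.
import math

def sigma(R, k, gamma):            # similarity solution, c_1 omitted
    return R ** (-gamma) * math.exp(-(R / k) ** (2.0 - gamma))

for k, gamma in [(35, -1.0), (30, 0.7), (50, 1.0), (17.5, -0.3), (100, 1.0)]:
    print(k, gamma, sigma(200.0, k, gamma) / sigma(60.0, k, gamma))
\end{verbatim}
The fits with small $k$ (and small $\gamma$) suppress $\Sigma$ at 200 AU far more strongly than the $k=100$ AU model favored by the scattered light, consistent with the discussion above.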
The scattered light brightness profile of the TW Hya disk suggests that
the disk truncation is at a larger radius than the radio observations
imply. Since scattered light probes only the diffuse surface layer of
the disk while radio observations probe the deeper interior structure
of the disk, the difference in truncation radii may be attributable to
layered disk structure, where the vertical profile of the disk varies
with distance in a way not adequately modeled simply by vertically
integrated surface density profiles. Conversely, there could be an overall lack of large grains beyond $\sim$60-80~AU, indicative of grain growth frustration beyond this radial distance in the disk.
Finally, we investigate other possibilities for the presence of the gap. It
is interesting to note in Figure \ref{fig:f12} that at 80~AU, the center of
the gap, the spectral character of the disk is different from that seen in
other portions of the disk. If this is real, then a different composition in
this annulus may be causing the overall structure of the disk to change, or vice versa. Gaps in disks tend to receive larger amounts of UV photons due to isotropic scattering of Lyman-$\alpha$ photons from higher layers of the disk \citep{bethel11}. The excess UV photons can accelerate photo-desorption of ices, which could in turn change the spectral character of the disk within the gap \citep{bergin10,twhyanat}.
Conversely, compositional changes in this gap might be caused by enhanced collisions within
the disk or the presence of an accreting protoplanet causing a steep truncation to the disk. Both of these possibilities are supported in part by the recent discovery of a sharp cutoff in the millimeter emission from dust at 60~AU, which is not present in similar high resolution imaging of CO gas \citep{andrews12}. Since we see scattered light from dust well beyond this cutoff, it is clearly due to a change in the bulk size distribution or millimeter emission properties of the dust, which in and of itself may be a signature of a planetary companion or a change in bulk dust composition.
The gap could instead be an unresolved spiral structure in the disk, as seen in NICMOS images of HD 141569,
where spiral structure at lower spatial resolution mimicked a gap structure
\citep{weinberger99,Clampin}. This could explain both the asymmetry and the
decrease in surface brightness; indeed, a hint of non-axisymmetric structure is seen in the higher spatial resolution STIS images of the TW Hya disk. Higher spatial resolution images might resolve this mystery.
The provocative nature of such a structure indicates that significant, large scale physical processes at large separations occur for protoplanetary disks and can significantly impact their midplane structures. Whatever the specific origin of this feature within the TW~Hya disk, the combined observations of optical/NIR scattering with longer wavelength photometry and spatially resolved imaging may lead to a new understanding of the conditions in which planetary systems form.
\acknowledgements{The authors would like to thank the anonymous referee for several helpful suggestions, including a suggestion to investigate the spectral type of TW Hya. We would also like to thank Ted Bergin, Diego Munoz, and Ruobing Dong for enlightening conversations on alternative origins for disk gaps, as well as A. Meredith Hughes for discussions of TW Hya CO data. Support for program \#10167 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.}
\section{Introduction}
It is known that the ring class field of imaginary quadratic orders can
be generated by evaluating the $j$-invariant at certain algebraic integers.
Several other modular functions, like the
Weber functions \cite{YuiZagier} can also be used for the generation
of the ring class field. In \cite{GeeBordeaux,GeeStevenhagen}
A. Gee and P. Stevenhagen developed
a method based
on Shimura reciprocity theory, in order to check whether a modular function
gives rise to a class invariant and in the case it does, they provided
a method for the efficient computation of the corresponding minimal polynomial.
This method was generalized further in \cite{steven2} to handle the ring class
fields case as well.
Shimura reciprocity law relates the Galois group of the ray class field $H_{N,\mathcal{O}}$ of conductor
$N$ over the ring class field $H_\mathcal{O}$ of the order $\mathcal{O}$, to the group
$G_N=(\mathcal{O}/N\mathcal{O})^*/\mathcal{O}^*$.
In our study we encounter modular functions of level $72$, and the
structure and order of the group $G_{72}$ depends on the decomposition of
$2 \mathcal{O}, 3\mathcal{O}$ as
product of prime ideals in $\mathcal{O}$.
If the primes $2,3$ do not both remain inert, then a variety of modular functions
like the Weber functions, double eta functions etc. can be used for constructing the ring class field.
Let $K_n=\mathbb{Q}(\sqrt{-n})$ be an imaginary quadratic number field such that
$n\equiv 19 \bmod 24$ and assume that $\mathcal{O}$ is an order in $K_n$. If $n$ is squarefree then $D=-n$
is a fundamental discriminant of $K_n$.
In this paper we will treat the case when $2,3$ both remain inert, i.e.,
$2 \mathcal{O}$ and $3\mathcal{O}$ are prime ideals of $\mathcal{O}$.
In this article we are interested in the $-n \equiv 1\bmod 4$ case so we set
$\theta_n=\frac{1}{2} + i \frac{\sqrt{n}}{2}$ and we consider the order
$\mathcal{O}=\Z[\theta_n]$ which is a maximal order if $n$ is squarefree.
Notice that the case $n\equiv 19 \bmod 24$ is the only case where $2,3$ remain inert.
The authors used the method of A. Gee and P. Stevenhagen \cite{KoKo,KoKo2} in order to
construct the minimal polynomials of the Ramanujan values $t_n$ for $n\equiv 11 \bmod 24$
proposed by S. Ramanujan in his third notebook, pages 392 and 393
in the pagination of \cite[vol.\ 2]{RamNotebooks}. For a definition of $t_n$, see section \ref{sec3}
eq. (\ref{gg}).
The values $t_n$ were proven to be class invariants for $n\equiv 11 \bmod 24$ by
B. Berndt and H.H.Chan in \cite{Berndt-Chan}.
However, for $n\equiv 19 \bmod 24$ the values $t_n$ are no longer class invariants
and Ramanujan
proposed the use of the values
$H_n=27 t_n^{-12}$ \cite[p.\ 317]{RamNotebooks}.
In this paper, we will prove that $H_n$ values are still not class invariants
since $K(H_n)$ is a quadratic extension of the ring
class field.
This is clearly an obstacle for the construction of the minimal polynomials of $H_n$
values, since A. Gee and P. Stevenhagen method can no longer be applied.
Therefore, we propose a modification of their method
that allows us to study the case of modular functions
which do not give class invariants and then we proceed to the study
of $H_n$.
We explicitly describe a method for the construction of their minimal polynomials and examine
some interesting properties of these polynomials.
Finally, we propose the use of values $A_n=27 t_n^{-12}+t_n^{12}/27$ that are class invariants
and generate the ring class field. Unfortunately, $A_n$ are algebraic integers which are not units.
We also study the relation with the modular functions
$\mathfrak{g}_0,\mathfrak{g}_1,\mathfrak{g}_2,\mathfrak{g}_3$ introduced by A. Gee in
\cite[p.\ 73]{GeePhD}.
In remark \ref{10} we see how the Ramanujan values
$A_n$ are naturally introduced as generators of the invariant ring
$\mathbb{Q}[ \mathfrak{g}_0^{12},\mathfrak{g}_1^{12},\mathfrak{g}_2^{12},\mathfrak{g}_3^{12}]$,
under the action of a cyclic permutation $\tau$ of order $4$. Notice that
$ \mathfrak{g}_0^{6}(\theta)$,$\mathfrak{g}_1^{6}(\theta)$,$\mathfrak{g}_2^{6}(\theta),$
$\mathfrak{g}_3^{6}(\theta)$ are inside the ray class field $H_{3,\mathcal{O}}$ and we are
able to find their minimal polynomials over the ring class field.
We believe that this method of formalizing the search of class invariants in terms of invariant
theory can be applied to many other cases as well.
This method allows us to handle the case $n\equiv 3 \bmod 24$.
In section \ref{3mod24} we define some new class invariants and compute their
polynomials using the methods developed in the previous sections.
Finally, we give an example of using the $A_n$ class invariant in order to construct an elliptic
curve over the finite field $\mathbb{F}_p$,
\[p=2912592100297027922366637171900365067697538262949\]
of prime order \[
m=2912592100297027922366635123877214056291799441739.
\]
{\bf Acknowledgments} We would like to thank Professor Heng Huat Chan for suggesting
the study of
the Ramanujan class invariant for the $-D\equiv 19 \bmod 24$ case.
We would also like to thank Professor Peter Stevenhagen for making
a lot of valuable comments on a previous version of the article and for pointing us to
the relation between the $A_n$ and to the modular functions~$\mathfrak{g}_i$.
We also thank Professor Jannis Antoniadis for observing a pattern for the behavior of the
index given in section \ref{3.1}.
Finally, we would like to
thank the magma algebra team for providing us with a free extension of the license to their
system.
\section{Class Field Theory}
A. Gee and P. Stevenhagen provided us with a method to check whether a modular function is a class invariant.
We will follow the notation of \cite{GeeBordeaux},\cite[chapter\ 6]{Shimura} and
\cite{steven2}.
It is known that the modular curve $X(N)$ can be defined over $\mathbb{Q}(\zeta_N)$. Let
$\mathcal{F}_N$ be the function field of $X(N)$ over $\mathbb{Q}(\zeta_N)$, i.e. the
field of meromorphic functions on $X(N)$ with Fourier coefficients in $\mathbb{Q}(\zeta_N)$.
Observe that $\mathcal{F}_1=\mathbb{Q}(j)$. The automorphic function field
$\mathcal{F}$ is defined as $\mathcal{F}=\cup_{N \geq 1} \mathcal{F}_N$.
For the convenience of the reader we repeat here some elements of the adelic formulation of class
field theory and the relation to modular functions. For more information about these
subjects we refer to \cite[sections\ 5.2,6.4]{Shimura} and to
the articles of Gee-Stevenhagen \cite{GeeBordeaux},\cite{GeeStevenhagen},\cite{steven2}.
Fix an imaginary quadratic field $K$ and an order $\mathcal{O}=\Z[\theta]$ in $K$.
Let $K^{\mathrm{ab}}$ be the maximal abelian extension of $K$.
For each rational prime $p\in \Z$
we consider $K_p=\Q_p \otimes_{\Q} K$ and
$\mathcal{O}_p=\Z_p \otimes_\Z\mathcal{O}$.
We will denote by $\hat{\mathbb{Z}}=\lim_{\leftarrow n} \Z/n\Z,
\hat{\mathcal{O}}=\mathcal{O}\otimes_\Z \hat\Z=\lim_{\leftarrow n} \mathcal{O}/n\mathcal{O}
=\hat{\Z}\theta+\hat{\Z}
$ the profinite completions of the rings
$\Z,\mathcal{O}$.
Notice that $\hat{\mathcal{O}}^*=\prod_{p} \mathcal{O}^*_p$.
We consider the group
\begin{eqnarray*}
J_K^f=\prod_p {}' K^*_p
\end{eqnarray*}
of finite id\`eles of $K$. The restricted product is taken with respect to the subgroups
$\mathcal{O}_p^* \subset K^*_p$.
We denote by $[\sim,K]$ the Artin map on $J_K^f$.
There is a map $g_\theta$ which connects the two short exact sequences:
\[
\xymatrix{
1 \ar[r] & \mathcal{O}^* \ar[r] & \prod_{p} \mathcal{O}^*_p \ar[r]^{\!\!\!\!\![\sim,K]} \ar[d]^{g_\theta}
&
\mathrm{Gal}(
K^{
\mathrm{ab}
}
/K(j(\theta)
) \ar[r] & 1
\\
1 \ar[r] & \{\pm 1\} \ar[r] & \mathrm{GL}_2(\hat{\Z}) \ar[r] & \mathrm{Gal}(\mathcal{F}/\mathcal{F}_1) \ar[r] & 1
},
\]
such that the image $f(\theta)^x$ of the value at $\theta$ of a modular function $f$
under the Artin symbol of $x\in \prod_p \mathcal{O}_p^*$
is given by
\begin{equation} \label{int-action}
f(\theta)^x=f^{g_\theta(x^{-1})}(\theta).
\end{equation}
The morphism $g_\theta$ is described as follows:
Every id\`ele $x\in \hat{\mathcal{O}}^*$ corresponds to a $2\times 2$ matrix representing the
linear action of $x$ on $\hat{\Z} \theta+ \hat{\Z}$ by multiplication. If $X^2+BX+C$ is
the irreducible polynomial of $\theta$ then the matrix for $x=s\theta+t$ is computed to be
\[
g_\theta(x)=\begin{pmatrix}
t-Bs & -Cs \\ s & t
\end{pmatrix}.
\]
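As a sanity check of this formula, the matrix of multiplication by $x=s\theta+t$ on the basis $(\theta,1)$ can be verified symbolically. The following computation is an illustrative check (not part of the cited construction), reducing by $\theta^2=-B\theta-C$:
\begin{verbatim}
# Symbolic check of the matrix g_theta(x) on the basis (theta, 1).
import sympy as sp

s, t, B, C, th = sp.symbols('s t B C theta')
red = lambda e: sp.expand(e).subs(th**2, -B*th - C)

x = s*th + t
print(sp.Poly(red(x*th), th).all_coeffs())  # [t - B*s, -C*s]
print(sp.Poly(red(x),    th).all_coeffs())  # [s, t]
\end{verbatim}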
\begin{theorem}
Let $h\in \mathcal{F}$ which does not have a pole at $\theta$ and suppose that
$\Q(j) \subset \Q(h)$. The function value $h(\theta)$ is a class invariant if
and only if every element of the image $g_\theta \left(\prod_p \mathcal{O}_p^* \right)
\subset \mathrm{GL}_2(\hat{\Z})$ acts trivially on $h$.
\end{theorem}
\begin{proof}
See \cite[Cor.\ 3]{GeeBordeaux}.
\end{proof}
Now we will consider the non class invariant case. We have the following tower of fields:
\[
\xymatrix{
K^{\mathrm{ab}} \ar@{-}[d]^{H_1} \ar@{-}@/_2pc/[dd]_{H={\hat{\mathcal{O}}^*}/
{\mathcal{O}^*}} \\
K(h(\theta)) \ar@{-}[d] \\
K\big(j(\theta)\big)
}
\]
Consider the open subgroup
\[
\mathrm{Stab}_{\Q(h)}=\{\alpha \in \mathrm{GL}_2(\hat{\Z}): h^\alpha=h\}.
\]
The preimage $g_\theta^{-1}( \mathrm{Stab}_{\Q(h)})$ contains
$\mathcal{O}^*=\{\pm 1\}$ and
$g_\theta^{-1}( \mathrm{Stab}_{\Q(h)}) \subset \prod_p \mathcal{O}_p^*$.
Notice that $h(\theta)$ is a class invariant if and only if
$g_\theta^{-1}( \mathrm{Stab}_{\Q(h)}) = \prod_p \mathcal{O}_p^*$.
Let $H_1=\mathrm{Gal}(K^{\mathrm{ab}}/K(h(\theta)))$.
We can write $H$ as a disjoint union of the cosets $H= \bigcup \sigma_i H_1$, and
if $h(\theta)$ is not a class invariant then there is more than one coset.
Now we will write the Shimura reciprocity law in full generality taking into account the
full automorphism group of the function field $\mathcal{F}$. We consider the following
two short exact sequences, connected with morphism
$g_\theta: J^f_K \rightarrow \mathrm{GL}_2(A^f_\Q)$:
\begin{equation} \label{classField1}
\xymatrix{
1 \ar[r] & K^* \ar[r] & J^f_K \ar[r]^{[\sim,K]} \ar[d]^{g_\theta} & \mathrm{Gal}(K^{\mathrm{ab}}/K) \ar[r] & 1 \\
1 \ar[r] & \Q^* \ar[r] & \mathrm{GL}_2(A^f_\Q) \ar[r] & \mathrm{Aut}(\mathcal{F}) \ar[r] & 1.
}
\end{equation}
The map $g_\theta$ is the $\Q$-linear extension of the map $g_\theta$ given in Eq. (\ref{int-action})
which is a homomorphism $J_K^f \rightarrow \mathrm{GL}_2(A^f_\Q)$.
The action of $z\in \mathrm{GL}_2(A^f_\Q)$ on $\mathcal{F}$ is given by writing
$z=u\alpha$ where $u\in \mathrm{GL}_2(\hat{\Z})$ and $\alpha\in \mathrm{GL}_2(\Q)^+$.
The group $\mathrm{GL}_2(\Q)^+$ consists of rational $2\times 2$
matrices with positive determinant and
acts on $\mathbb{H}$ via linear fractional transformations.
Then we define $f^{u\cdot \alpha}=(f^u)^\alpha$.
For more details on this construction we refer to \cite[p.\ 6]{steven2}.
The Shimura reciprocity theorem states that:
\begin{theorem}
For $h\in \mathcal{F}$ and $x\in J_K^f$ we have
\[
h(\theta)^{[x^{-1},K]}=h^{g_\theta(x)}(\theta).
\]
\end{theorem}
The following proposition will be useful for us
\begin{proposition} \label{refpro}
If $\mathcal{F}/\Q(h)$ is Galois then
\[
h(\theta)^x=h(\theta) \Leftrightarrow h^{g_\theta(x)}=h.
\]
\end{proposition}
\begin{proof}
See \cite[eq.\ (3.5)]{steven2}.
\end{proof}
From now on we will focus on functions $h \in \mathcal{F}$ such that
$\mathcal{F}/\Q(h)$ is Galois. Notice that if $\Q(j) \subset \Q(h)$ then
$\mathcal{F}/\Q(h)$ is Galois since $\mathcal{F}/\Q(j)$ is.
We have the following tower of fields:
\[
\xymatrix{
K^{\mathrm{ab}} \ar@{-}[d]^{H_1} \ar@{-}@/_2pc/[dd]_{H} \ar@{-}@/^3pc/[ddd]^{G} \\
K(h(\theta)) \ar@{-}[d] \\
K(j(\theta)) \ar@{-}[d]_{\mathrm{Cl}(\mathcal{O})} \\
K
}
\]
where $H=\mathrm{Gal}(K^{\mathrm{ab}}/K(j(\theta)))$,
$G=\mathrm{Gal}(K^{\mathrm{ab}}/K)$, $H_1=\mathrm{Gal}(K^{\mathrm{ab}}/K(h(\theta)))$
and $G/H\cong \mathrm{Cl}(\mathcal{O})$, where $\mathrm{Cl}(\mathcal{O})$ denotes the
class group of the order $\mathcal{O}$.
We now form the following short exact sequence:
\begin{equation} \label{ses1}
1 \rightarrow \frac{H}{H_1} \rightarrow \frac{G}{H_1} \rightarrow \frac{G}{H} \rightarrow 1.
\end{equation}
Notice that $H/H_1 \cong \mathrm{Gal}(K(h(\theta))/K(j(\theta)))$.
Suppose that $h(\theta)$ is an algebraic integer.
The class group of $\mathcal{O}$ is identified with the set of primitive forms $[a,b,c]$ of discriminant $D$.
We also set $\tau_{[a,b,c]}=\frac{-b+\sqrt{D}}{2a}$.
Proposition \ref{minpoly1} will provide us with a method to
compute the minimal polynomial of $h(\theta)$ in $\Z[x]$.
For every element $[a,b,c] \in \mathrm{Cl}(\mathcal{O})=G/H$ we fix a
representative $\sigma_{[a,b,c]} \in G$ such that $[a,b,c]=\sigma_{[a,b,c]} H$. Notice that
the selection of the representative does not matter when one is acting on
$K(j(\theta))=(K^{\mathrm{ab}})^H$ since $H$ acts trivially on $K(j(\theta))$.
The situation changes if we try to act with $\sigma_{[a,b,c]}$ on the field $K(h(\theta))$
which is the fixed field of $H_1$ with $H_1 < H$. The class
$\sigma_{[a,b,c]} H$ gives rise to $[H:H_1]$ classes in $G/H_1$,
namely $\sigma_{[a,b,c]} \sigma_i H_1$, where $\sigma_1,\ldots,\sigma_s$
are some coset representatives of $H_1$ in $H$ and $s=[H:H_1]$.
The action of the representative
$\sigma_{[a,b,c]} \sigma_i=\sigma_i \sigma_{[a,b,c]}$ on $K(h(\theta))$ is
now well defined. Notice also that when $[a,b,c]$ runs over $G/H$ and $i$ runs over
$1,\ldots,s$ then
$\sigma_i \sigma_{[a,b,c]}$ runs over $G/H_1$.
\begin{proposition} \label{minpoly1}
Assume that $h(\theta) \in \mathbb{R}$ and $h(\theta)$ is algebraic.
Let $H_1$ be the subgroup of $G$ that stabilizes the field $K(h(\theta))$ and let
$H$ be the subgroup corresponding to the ring class field $K(j(\theta))$ of $K$.
We consider the elements $h(\theta)^{\sigma_i\sigma_{[a,b,c]}}$.
The polynomial
\begin{equation}\label{poldef22}
p_{h(\theta)}:=\prod_{i=1}^s \prod_{[a,b,c] \in \mathrm{Cl}(\mathcal{O})}
\left(
x- \left(h(\theta)^{\sigma_i \sigma_{[a,b,c]}}\right)
\right)
\end{equation}
is a polynomial in $\Z[x]$.
\end{proposition}
\begin{proof}
We have already observed that the product in eq. (\ref{poldef22}) runs over all elements in
$\mathrm{Gal}(K(h(\theta))/K(j(\theta)))$.
We have the following tower of field extensions
\[
\xymatrix{
& K(h(\theta)) \ar@{-}[dl]_{H/H_1} \ar@{-}[dr]^{G_1} & \\
K(j(\theta)) \ar@{-}[d]^{\mathrm{Cl}(\mathcal{O})} \ar@{-}[drr]^{G_2} & &
\mathbb{Q}(h(\theta)) \ar@{-}[d]^{H/H_1} \\
K \ar@{-}[dr]_{\mathrm{Gal}(K/\mathbb{Q})} & & \mathbb{Q}(j(\theta))
\ar@{-}[dl]^{\mathrm{Cl}(\mathcal{O})} \\
& \mathbb{Q} &
}
\]
where $G_1,G_2$ are lifts of $\mathrm{Gal}(K/\mathbb{Q})$.
From the diagram above we deduce that
$\mathrm{Gal}(\mathbb{Q}(h(\theta))/\mathbb{Q})=
\mathrm{Gal}(K(h(\theta))/K)$.
This proves that
the polynomial $p_{h(\theta)}$ defined in Eq. (\ref{poldef22}) is the defining polynomial
of the extension $\mathbb{Q}(h(\theta))/\mathbb{Q}$. Moreover, the coefficients of $p_{h(\theta)}$
are algebraic integers lying in $\Q$,
therefore $p_{h(\theta)}\in \Z[x]$.
\end{proof}
\begin{remark}
The assumption $h(\theta)\in \mathbb{R}$ is essential as one
sees in section \ref{3mod24}, where we compute the minimal
polynomial of the class invariant $\mathfrak{g}_2^6(\theta)$.
\end{remark}
The above construction becomes practical if $h \in \mathcal{F}_N$ is a
modular function of level $N$. Then the value $h(\theta)$ is known to be
inside the ray class field modulo $N$ and the action of $\hat{\mathcal{O}}^*$
can be computed in terms of a finite quotient $(\mathcal{O}/N \mathcal{O})^*$.
Here it is important to assume also that $\Q(j) \subset \Q(h)$ so proposition \ref{refpro}
is applicable.
More precisely we can replace Eq. (\ref{classField1}) with the exact sequence:
\[
\xymatrix{
\mathcal{O}^* \ar[r] & (\mathcal{O}/N \mathcal{O})^* \ar[r] \ar[d]_{\bar{g}_\theta} &
\mathrm{Gal}\big(K(\mathcal{F}_N(\theta))/K(j(\theta))\big) \ar[r] & 1\\
\{\pm 1\} \ar[r] & \mathrm{GL}_2(\Z/N\Z) \ar[r] & \mathrm{Gal}(\mathcal{F}_N/\mathcal{F}_1) \ar[r] & 1,
}
\]
where we have considered the reduction of all rings and maps modulo $N$.
The strategy for the computations is the following:
compute generators $x_1,\ldots,x_k$ for the group $(\mathcal{O}/N \mathcal{O})^*$ and map them to
$\mathrm{GL}_2(\Z/N\Z)$ using $\bar{g}_\theta$. If each matrix $\bar{g}_\theta(x_i)\in \mathrm{GL}_2(\Z/N\Z)$
acts trivially on $h$ then $h(\theta)$ is a class invariant. If not, we can consider the subgroup
$A \subset \bar{g}_\theta \left((\mathcal{O}/N \mathcal{O})^* \right)$ that
acts trivially on $h$. The Galois group of $K(h(\theta))/K(j(\theta))$ equals
\[
\mathrm{Gal}(K(h(\theta))/K(j(\theta)))=
\frac{(\mathcal{O}/N \mathcal{O})^*/\mathcal{O}^*}{\bar{g}_\theta^{-1} (A)}.
\]
We will now give an applicable approach to proposition \ref{minpoly1} by working modulo $N$.
Following the article of A. Gee \cite[Eq.\ 17]{GeeBordeaux} we give the next definition.
This will allow us to compute the action of the images of generators of
$G_{72}$ on the modular functions of level $72$.
\begin{definition}
Let $N \in \mathbb{N}$ and
$[a,b,c]$ be a representative of the equivalence class of an element in the class group.
Let $p$ be a prime number and $p^r$ be the maximum power of $p$ that divides $N$.
Assume that the discriminant $D=b^2-4ac \equiv 1 \bmod 4$.
The following matrix definition is motivated by the explicit
writing of the id\`ele that locally generates $[a,b,c]$ for all primes
$p$, see \cite[sec.\ 4]{GeeStevenhagen}.
Define the matrix
\[
A_{[a,b,c],p^r}=
\left\{
\begin{array}{ll}
\left(
\begin{array}{cc}
a & \frac{b-1}{2} \\
0 & 1
\end{array}
\right)
&
\mbox{ if } p\nmid a \\
\left(
\begin{array}{cc}
\frac{-b-1}{2} & -c \\
1 & 0
\end{array}
\right)
&
\mbox{ if } p\mid a \mbox{ and } p\nmid c\\
\left(
\begin{array}{cc}
\frac{-b-1}{2}-a & \frac{1-b}{2}-c \\
1 & -1
\end{array}
\right)
&
\mbox{ if } p\mid a \mbox{ and } p \mid c.
\end{array}
\right.
\]
The Chinese remainder theorem implies that
\[
\mathrm{GL}_2(\mathbb{Z}/N\mathbb{Z}) \cong \prod_{p \mid N}
\mathrm{GL}_2(\mathbb{Z}/p^r \mathbb{Z}).
\]
We define $A_{[a,b,c]}$ as the unique element in $
\mathrm{GL}_2(\mathbb{Z}/N\mathbb{Z})$ that is mapped
to $A_{[a,b,c],p^r}$ modulo $p^r$ for all $p\mid N$.
This matrix $A_{[a,b,c]}$ can be written uniquely as a product
\begin{equation} \label{prodexp}
A_{[a,b,c]}=
B_{[a,b,c]}
\left(
\begin{array}{cc}
1 & 0 \\
0 & d_{[a,b,c]}
\end{array}
\right),
\end{equation}
where $d_{[a,b,c]}=\det A_{[a,b,c]}$ and $B_{[a,b,c]}$ is a matrix with determinant $1$.
We will denote by $\sigma_{d_{[a,b,c]}}$ the automorphism of $\mathbb{Q}(\zeta_N)$
sending $\zeta_N \mapsto \zeta_N^{d_{[a,b,c]}}$.
\end{definition}
Let $\lambda \in \Q(\zeta_{N})$.
The Shimura reciprocity law gives us \cite[Lemma\ 20]{GeeBordeaux} the action of
$[a,b,c]$ on $\lambda h(\theta)$ for $\theta=1/2+i \sqrt{n}/2$:
\[
\big( \lambda h(\theta)\big) ^{[a,-b,c]}= \lambda^{\sigma_{d_{[a,b,c]}}}
h\left(
\frac{\alpha_{[a,b,c]} \tau_{[a,b,c]}+ \beta_{[a,b,c]} }
{\gamma_{[a,b,c]} \tau_{[a,b,c]} + \delta_{[a,b,c]} }
\right)^{\sigma_{d_{[a,b,c]}}},
\]
where
$\begin{pmatrix} \alpha_{[a,b,c]} & \beta_{[a,b,c]} \\ \gamma_{[a,b,c]} & \delta_{[a,b,c]} \end{pmatrix}=A_{[a,b,c]}$
and $\tau_{[a,b,c]}$ is the (complex) root of $az^2 +bz+c$ with positive imaginary part.
\begin{theorem} \label{shimura1}
Let $\mathcal{O}=\mathbb{Z}[\theta]$ be an order of the imaginary quadratic field $K$,
and assume that $x^2+Bx+C$ is the minimal polynomial of $\theta$.
Let $N>1$ be a natural number, $x_1,\ldots,x_r$ be generators of the abelian group $\left(\mathcal{O}/N \mathcal{O} \right)^*$ and
$\alpha_i+\beta_i\theta \in \mathcal{O}$ be a representative of the class of the generator $x_i$.
For each representative we consider
the matrix:
\[A_i:=\begin{pmatrix} \alpha_i-B\beta_i & -C \beta_i \\ \beta_i & \alpha_i \end{pmatrix}.\]
If $f$ is a modular function of level $N$ and if for all matrices $A_i$ it holds that
\begin{equation} \label{act123}
f(\theta)=f^{A_i}(\theta), \mbox{ and } \mathbb{Q}(j) \subset \mathbb{Q}(f)
\end{equation}
then $f(\theta)$ is a class invariant.
\end{theorem}
\begin{proof}
\cite[Cor.\ 4]{GeeBordeaux} for the maximal order case and \cite[section\ 5]{steven2}
for the general case.
\end{proof}
\section{Ramanujan Invariants} \label{sec3}
We would like to find the minimal polynomial in $\Z[x]$ of the Ramanujan invariants
$H_n=27/t_n^{12}$ for values $n\equiv 19 \bmod 24$.
In \cite{KoKo} the authors introduced the modular functions
$R,R_1,\ldots, R_5$ of level $N=72$ in order to study $t_n$.
P. Stevenhagen pointed to us that the functions $R_i$ can be expressed
in terms of the generalized Weber functions $\mathfrak{g}_0,\mathfrak{g}_1,\mathfrak{g}_2,\mathfrak{g}_3$
defined in the
work of A. Gee in \cite[p.\ 73]{GeePhD}
as
\[
\mathfrak{g_0}(\tau)=\frac{\eta(\frac{\tau}{3})}{\eta(\tau)},\;
\mathfrak{g_1}(\tau)=\zeta_{24}^{-1}\frac{\eta(\frac{\tau+1}{3})}{\eta(\tau)},\;
\mathfrak{g_2}(\tau)=\frac{\eta(\frac{\tau+2}{3})}{\eta(\tau)},\;
\mathfrak{g_3}(\tau)=\sqrt{3}\frac{\eta(3\tau)}{\eta(\tau)},
\]
where $\eta$ denotes the Dedekind eta function:
\[
\eta(\tau)=e^{2\pi i \tau/24} \prod_{n\geq 1}(1-q^n)\;\; \tau \in \mathbb{H}, q=e^{2\pi i \tau}.
\]
\begin{proposition} \label{fund-ref}
The functions $\frak{g}_i^{12}$ satisfy the polynomial equation
\[
X^4+36 X^3+270 X^2+(756-j) X+3^6=0.
\]
In particular $\Q(j)\subset \Q(\frak{g}_i)$ and $\mathcal{F}/\Q(\frak{g}_i)$ is Galois.
\end{proposition}
\begin{proof}
This is a classical result see
\cite[eq.\ 5 p.\ 73]{GeePhD}, \cite[p.\ 255]{Weber}.
\end{proof}
Here we will need only the functions $R_2(\tau)$ and $R_4(\tau)$, defined by:
\begin{equation} \label{R2}
R_2(\tau)= \frac{\eta(3\tau)\eta(\tau/3+2/3)}{ \eta^2(\tau)}=
\sqrt{3}^{-1}\mathfrak{g}_2(\tau)\mathfrak{g}_3(\tau)
\end{equation}
\begin{equation} \label{R4}
R_4(\tau)= \frac{\eta(\tau/3)\eta(\tau/3+1/3)}{ \eta^2(\tau)}=
\zeta_{24}\mathfrak{g}_0(\tau)\mathfrak{g}_1(\tau).
\end{equation}
The six modular functions $R_i$ defined in
\cite{KoKo}
correspond to the $\binom{4}{2}=6$ different products
$\mathfrak{g}_i\mathfrak{g}_j$ we can make from $\mathfrak{g}_i,$ $i=0,\ldots,3$.
The Ramanujan value can be expressed in terms of the above modular functions as
\begin{equation} \label{gg}
t_n=\sqrt{3}R_2\left( -\frac{1}{2}+i \frac{\sqrt{n}}{2} \right)=(\mathfrak{g}_2\mathfrak{g}_3)
\left( -\frac{1}{2}+i \frac{\sqrt{n}}{2}\right).
\end{equation}
Notice also that $\sqrt{3}=\zeta_{72}^6-\zeta_{72}^{30}$.
The Ramanujan invariants for $n\equiv 19 \bmod 24$, that is $D=-n\equiv 5 \bmod 24$, are
\[
H_n:=\frac{27}{t_n^{12}}
\]
and we also define the values
\[
A_n:=H_n+\frac{1}{H_n}=\frac{27}{t_n^{12}}+\frac{t_n^{12}}{27}.
\]
Denote by $S$ the involution $\tau \mapsto -\frac{1}{\tau}$ and by $T$ the map
$\tau \mapsto \tau+1$. The elements $S,T$ generate the group $\mathrm{SL}(2,\Z)$.
We will use the following
\begin{lemma}
The action of $S:z\mapsto -1/z$ on $\mathfrak{g}_i$ is given by
\[
(\mathfrak{g}_0,\mathfrak{g}_1,\mathfrak{g}_2,\mathfrak{g}_3)
\begin{pmatrix}
0 & 0 & 0 & 1 \\
0 & 0 & \zeta_{72}^{6} &0 \\
0 & \zeta_{72}^{-6} & 0 & 0\\
1 & 0 & 0 & 0\\
\end{pmatrix}
\]
and the action of $T:z \mapsto z+1$ on $\mathfrak{g}_i$ is given by
\[
(\mathfrak{g}_0,\mathfrak{g}_1,\mathfrak{g}_2,\mathfrak{g}_3)
\begin{pmatrix}
0 & 0 & 1 & 0 \\
1 & 0 & 0 &0 \\
0 & \zeta_{72}^{-6} & 0 & 0\\
0 & 0 & 0 & \zeta_{72}^6\\
\end{pmatrix}.
\]
The action of $\sigma_d$ on $\mathfrak{g}_i$ is given in terms of the following matrix
\[
(\mathfrak{g}_0,\mathfrak{g}_1,\mathfrak{g}_2,\mathfrak{g}_3)
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & \zeta_{72}^{-2d+2} & 0 &0 \\
0 & 0 & \zeta_{72}^{2d-2} & 0\\
0 & 0 & 0 & \frac{ \zeta_{72}^{6d}-\zeta_{72}^{30d}}
{\zeta_{72}^6 -\zeta_{72}^{30}}\\
\end{pmatrix} \mbox{ if } d\equiv 1 \bmod 3
\]
and
\[
(\mathfrak{g}_0,\mathfrak{g}_1,\mathfrak{g}_2,\mathfrak{g}_3)
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & \zeta_{72}^{2d+2} &0 \\
0 & \zeta_{72}^{-2d-2} & 0 & 0\\
0 & 0 & 0 &
\frac{ \zeta_{72}^{6d}-\zeta_{72}^{30d}}{\zeta_{72}^6 -\zeta_{72}^{30}}\\
\end{pmatrix} \mbox{ if } d\equiv 2 \bmod 3
\]
\end{lemma}
\begin{proof}
The action of $S,T$ follows by using the transformation formulas of the $\eta$-function
\cite{SilvII}:
\[
\eta(\tau+1)=e^{2\pi i\tau/24}\eta(\tau)\mbox{ and }
\eta\left(-\frac{1}{\tau}\right)=\sqrt{-i\tau}\eta(\tau).\]
For the action of $\sigma_d$ observe for example that
\begin{eqnarray*}
\eta \left(\frac{\tau}{3}+\frac{1}{3} \right) &= &
\exp\left (\frac{2\pi i }{24} \left(\frac{\tau}{3}+\frac{1}{3} \right)\right)
\sum_{\nu=0}^{\infty} a_\nu
\exp \left(
\frac{2 \pi i \nu}{3 }\tau + \frac{2 \pi i \nu }{3}
\right)=\\
&=&\exp\left (\frac{2\pi i }{24} \frac{\tau}{3}\right) \zeta_{72} \sum_{\nu=0}^{\infty} \zeta_3^{\nu} a_\nu
\exp \left(
\frac{2 \pi i \nu}{3 }\tau
\right).
\end{eqnarray*}
The element $\sigma_d :\zeta_{72} \mapsto \zeta_{72}^d$ sends $\zeta_3^\nu$ to
$\zeta_3^{d\nu}=\zeta_3^{\nu}$ if $d\equiv 1 \bmod 3$ and to
$\zeta_3^{2\nu}$ if $d\equiv 2 \bmod 3$.
Therefore
\[
\sigma_d(\mathfrak{g}_1(\tau))=\sigma_d
\left(\zeta_{24}^{-1}
\frac{\eta(\frac{\tau+1}3)}{\eta(\tau)}
\right)=
\left\{
\begin{array}{ll}
\zeta_{72}^{-2d+2} \mathfrak{g}_1 & \mbox{ if } d\equiv 1 \bmod 3 \\
\zeta_{72}^{-2d-2} \mathfrak{g}_2 & \mbox{ if } d\equiv 2\bmod 3
\end{array}
\right.
\]
\end{proof}
\begin{remark}
We have a representation
\[
\rho:\mathrm{SL}(2,\Z) \rightarrow \mathrm{GL}(V), \qquad V=\langle \mathfrak{g}_0,\mathfrak{g}_1,\mathfrak{g}_2,\mathfrak{g}_3
\rangle_\mathbb{R}.
\]
This representation gives rise to the representation $\mathrm{Sym}^2 V$, where
the space $\mathrm{Sym}^2 V$ has dimension $\binom{4}{2}=6$ and is
generated by the elements $\mathfrak{g}_i \mathfrak{g}_j$, $0\leq i <j \leq 3$. The representation
$\mathrm{Sym}^2 V$ gives an alternative way to express the action given in \cite{KoKo}
in terms of the modular functions $R_i$.
\end{remark}
We first study the group $(\mathcal{O}/72 \mathcal{O})^*$ for the values
$n=19,43,67$.
Using the Chinese remainder theorem we first compute that
\[
\left( \frac{\mathcal{O} }{72 \mathcal{O} } \right)^* \cong
\left(\frac{\mathcal{O}}{9\mathcal{O}}\right)^* \times \left(\frac{\mathcal{O}}{8\mathcal{O}}\right)^*.
\]
Notice that our assumptions force $2\mathcal{O}$ and $3\mathcal{O}$ to be prime ideals.
The structure of the group $\left(\frac{\mathcal{O} }{P^k \mathcal{O} } \right)^*$ for a prime
ideal of $\mathcal{O}$ is given by the following:
\begin{theorem} \label{cohen}
Let $P$ be a prime ideal of $\mathcal{O}$ of inertia degree $f$ over the field of rationals, i.e.
if $p$ is the generator of the principal ideal $P\cap \Z$ then $N(P)=p^f$ and assume that
the ramification index $e(P/p)=1$.
The group $\left(\frac{\mathcal{O} }{P^k \mathcal{O} } \right)^*$ is isomorphic to
the direct product $\left(\frac{\mathcal{O} }{P \mathcal{O} } \right)^* \times \frac{1+P}{1+P^k}$.
The group $\left(\frac{\mathcal{O} }{P \mathcal{O} } \right)^*$ is cyclic of order $p^f-1$.
If $p\geq \min\{3,k\}$ then the group
$ \frac{1+P}{1+P^k}$ is isomorphic to $\left(\frac{\Z}{p^{k-1} \Z}\right)^f$. If
$p=2$ and $k=3$
then $ \frac{1+P}{1+P^3}$ is isomorphic to $\left(\frac{\Z}{2\Z}\right)^2 \times \left(\frac{\Z}{4\Z}\right)^{f-1}$.
\end{theorem}
\begin{proof}
The group $\left(\frac{\mathcal{O} }{P^k \mathcal{O} } \right)^*$ is isomorphic to
the direct product $\left(\frac{\mathcal{O} }{P \mathcal{O} } \right)^* \times \frac{1+P}{1+P^k}$ by
proposition \cite[prop.\ 4.2.4]{CohenAdv}.
For $e(P/p)=1$ and $p\geq \min\{3,k\}$ the $P$-adic logarithmic function defines an isomorphism of
the multiplicative group $\frac{1+P}{1+P^k}$ to the additive group $P/P^k$
which in turn is isomorphic to $\mathcal{O}/P^{k-1}$ by \cite[lemma\ 4.2.9]{CohenAdv}.
The condition $p \geq \min\{3,k\}$ is put so that the logarithmic function converges.
By \cite[th.\ 4.2.10]{CohenAdv} we have
\[
\frac{\mathcal{O}}{P^{k-1}}\cong
\left(\frac{\Z}{p^q \Z}\right)^{(r+1)f} \times
\left(
\frac{\Z}{p^{q-1}\Z}
\right)^{(e-r-1)f}
\]
where $k+e-2=eq+r$, $0\leq r <e$. If $e=1$ then the last formula becomes:
\[
\frac{\mathcal{O}}{P^{k-1}}\cong
\left(\frac{\Z}{p^{k-1} \Z}\right)^{f}.
\]
The case $p=2$ and $k=3$ is studied in \cite[prop.\ 4.2.12]{CohenAdv}.
\end{proof}
By applying theorem \ref{cohen}
we find the structure of the multiplicative groups
\[
\left(\frac{\mathcal{O}}{9\mathcal{O}}\right)^*\cong \frac{\Z}{8\Z} \times \frac{\Z}{3\Z} \times \frac{\Z}{3\Z} \cong
\frac{\Z}{24\Z} \times \frac{\Z}{3\Z}
\]
and
\[
\left(\frac{\mathcal{O}}{8\mathcal{O}}\right)^* \cong \frac{\Z}{3\Z} \times
\left(\frac{\Z}{2\Z}\right)^2 \times \frac{\Z}{4\Z} \cong \frac{\Z}{12\Z} \times \left(\frac{\Z}{2\Z}\right)^2
\]
For finding the generators of these groups one can use the $P$-adic logarithmic function in order to
pass from the multiplicative group $\frac{1+P}{1+P^k}$ to the
additive group $\mathcal{O}/P^{k-1}\mathcal{O}$. This method works only for large primes
(so that the logarithmic function converges) and not for the case $p=2$, $k=3$.
In order to find the generators we proceed as follows: we exhaust all units in $\mathcal{O}/9\mathcal{O}$
until we find one unit $U_1$ of order $24$, then we remove this unit and all its powers from the set of possible units
and search again in order to find a unit $U_2$ of order $3$. For the units in $\mathcal{O}/8\mathcal{O}$
we work similarly. We first find a unit $V_1$ of maximal order $12$, remove all its powers
from the set of units, and search again
in order to find a unit $V_2$ of order $2$. We remove all products of powers of $V_1$ and $V_2$
and then we search the remaining units for the third generator $V_3$.
Finally we lift these units to units of the ring $ \mathcal{O}/72\mathcal{O}$
using the Chinese remainder theorem.
In this way we arrived at the following generators of the group $(\mathcal{O}/72 \mathcal{O})^*$:
$5\theta+7$, $6\theta+7$, $7\theta +7$, $4\theta+7$, $4\theta+1$.
The orders of the generators of the group $(\mathcal{O}/72 \mathcal{O})^*$ are given in the
following table:
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Generator & $5\theta+7$ & $6\theta+7$ & $7\theta +7$ & $4\theta+7$ & $4\theta+1$ \\
\hline
Order & 24 & 3 & 12 & 2 & 2\\
\hline
\end{tabular}
\end{center}
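The exhaustive search described above is straightforward to reproduce. The sketch below is our own illustrative reimplementation (not the magma code used for the computations), for $n=19$, where $\theta^2=\theta-5$; it lists the unit group of $\mathcal{O}/9\mathcal{O}$ and the maximal order of its elements.
\begin{verbatim}
# Brute-force search in (O/9O)^*, O = Z[theta], theta^2 = theta - 5
# (the case n = 19; for other n take c = (1+n)/4 below).
N, c = 9, 5
one = (0, 1)                        # the element 0*theta + 1

def mul(x, y):                      # (s1 th + t1)(s2 th + t2) mod N
    (s1, t1), (s2, t2) = x, y
    return ((s1*s2 + s1*t2 + s2*t1) % N, (t1*t2 - c*s1*s2) % N)

def order(x):
    e, y = 1, x
    while y != one:
        y, e = mul(y, x), e + 1
    return e

elts  = [(s, t) for s in range(N) for t in range(N)]
units = [x for x in elts if any(mul(x, y) == one for y in elts)]
print(len(units), max(order(x) for x in units))   # 72 24
\end{verbatim}
The output $72=24\cdot 3$ and the maximal order $24$ agree with the decomposition $(\mathcal{O}/9\mathcal{O})^*\cong \Z/24\Z\times\Z/3\Z$ computed above.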
These generators will be mapped to matrices $A_i$ defined in theorem \ref{shimura1}.
For example the generator $5 \theta+7$ in $(\mathcal{O}/9\mathcal{O})^*$ corresponds to the matrix
\[
\begin{pmatrix} 3 & 8 \\5 & 16 \end{pmatrix}= \begin{pmatrix} 3 & 1 \\ 5 & 2 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 8 \end{pmatrix},
\]
where $M=\begin{pmatrix} 3 & 1 \\ 5 & 2 \end{pmatrix}$ is a matrix of determinant $1 \bmod 9$.
Let
\[T=\begin{pmatrix} 1 & 1 \\0 & 1 \end{pmatrix} \mbox{ and } S=\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.\]
The matrix $M$ can be decomposed according to \cite{GeeStevenhagen} as
$\bar{T}_9^8 \bar{S}_9 \bar{T}_9^5 \bar{S}_9 \bar{T}_9^6$ where
\[\bar{S}_9=T^{-1} ST^{-65} ST^{-1} ST^{1096} \mbox{ and } \bar{T}_9=T^{-9} \] according to \cite{KoKo}.
The action of the generators on the elements $\mathfrak{g}_0,\mathfrak{g}_1,\mathfrak{g}_2,\mathfrak{g}_3$
was computed with magma and is given in Table \ref{Table:orders}.
\begin{table}\caption{Orders and generators of the group $(\mathcal{O}/72 \mathcal{O})^*$}
\label{Table:orders}
\begin{tabular}{|c|r|r|r|r|r|}
\hline
& $5\theta+7$ & $6\theta+7$ & $7\theta +7$ & $4\theta+7$ & $4\theta+1$ \\
\hline
$\mathfrak{g}_0$ & $(-\zeta_{72}^{18} + \zeta_{72}^6)\mathfrak{g}_2$ & $\zeta_3\mathfrak{g}_0$ & $\mathfrak{g}_0$ & $-\mathfrak{g}_0$ &$\mathfrak{g}_0$ \\
\hline
$\mathfrak{g}_1$ & $\zeta_{72}^{12}\mathfrak{g}_3$ &$\zeta_3\mathfrak{g}_1$ & $-\mathfrak{g}_1$ & $-\mathfrak{g}_1$ & $\mathfrak{g}_1$ \\
\hline
$\mathfrak{g}_2$ & $ -\mathfrak{g}_1$ & $ -\zeta_{72}^{12}\mathfrak{g}_2$ & $-\mathfrak{g}_2$ &$ -\mathfrak{g}_2$ & $\mathfrak{g}_2$ \\
\hline
$\mathfrak{g}_3$ & $ (-\zeta_{72}^{18} + \zeta_{72}^6)\mathfrak{g}_0$ & $ -\zeta_{72}^{12}\mathfrak{g}_3 $ & $ -\mathfrak{g}_3$ & $ -\mathfrak{g}_3$ & $\mathfrak{g}_3$ \\
\hline
\end{tabular}
\end{table}
\begin{lemma}
The quantities $\mathfrak{g}_i(\theta)^6$ are in the ray class field of conductor $3$.
\end{lemma}
\begin{proof}
There is the following diagram with exact rows for every $N$ (here we will use the values $N=72,3$):
\[
\xymatrix{
\mathcal{O}^* \ar[r] & (\mathcal{O}/N \mathcal{O})^* \ar[r] \ar^{\bar{g}_\theta}[d] &
\mathrm{Gal}(H_{N,\mathcal{O}} /H_\mathcal{O}) \ar[r] & 1 \\
\{\pm 1\} \ar[r] &
\mathrm{GL}_2(\Z/N\Z) \ar[r] & \mathrm{Gal}(\mathcal{F}_N/\mathcal{F}_1) \ar[r] &1,
}
\]
where $H_{N,\mathcal{O}}$ denotes the ray class field of conductor $N$. The epimorphism
of the upper row is induced by the Artin map and allows us to see
elements in $(\mathcal{O}/N \mathcal{O})^*$ as elements in
$\mathrm{Gal}(H_{N,\mathcal{O}} /H_\mathcal{O})$.
The ray class field $H_{3,\mathcal{O}}$ of conductor $3$ is an extension of degree $4$
of the ring class field,
as one computes looking at $(\mathcal{O}/3 \mathcal{O})^*/\mathcal{O}^*$.
Indeed, the group $(\mathcal{O}/3 \mathcal{O})^*$ is isomorphic to a cyclic group of order $8$ by theorem
\ref{cohen} and by taking the quotient by $\mathcal{O}^*=\{\pm 1\}$ we arrive at a group of order $4$.
The element $5 \theta+7$ generates a subgroup of order $24$ in
$(\mathcal{O}/72 \mathcal{O})^*$.
This means that the ray class field of conductor $3$ is the fixed field
of $\langle (5 \theta+7)^4 \rangle$ and all other generators
$ 6\theta+7,7\theta +7,4\theta+7,4\theta+1$.
We compute that the action of $(5 \theta+7)^4=3\theta +8$ on
$\mathfrak{g}_0,\mathfrak{g}_1,\mathfrak{g}_2,\mathfrak{g}_3$ is given by
\[
(\mathfrak{g}_0,\mathfrak{g}_1,\mathfrak{g}_2,\mathfrak{g}_3)\mapsto
\left( (-\zeta_{72}^{12}+1) \mathfrak{g}_0,
(-\zeta_{72}^{12}+1)\mathfrak{g}_1,
\zeta_{72}^{12} \mathfrak{g}_2,
\zeta_{72}^{12} \mathfrak{g}_3
\right)
\]
Since $\zeta_{72}^{12}, (\zeta_{72}^{12} - 1)$ are $6$th roots of unity
we see that $(5 \theta+7)^4$, indeed leaves $\mathfrak{g}_0^6,\mathfrak{g}_1^6,
\mathfrak{g}_2^6,\mathfrak{g}_3^6$
invariant.
On the other hand, looking at table \ref{Table:orders} we see that
all other generators leave also $\mathfrak{g}_0^6,\mathfrak{g}_1^6,\mathfrak{g}_2^6,\mathfrak{g}_3^6$
invariant.
Notice that the Galois group $\mathrm{Gal}(H_{3,\mathcal{O}}/H_{\mathcal{O}})$ is cyclic of order $4$, generated by
the class of $5\theta+7$, and the action is given by
\[
(\mathfrak{g}_0^6,\mathfrak{g}_1^6,\mathfrak{g}_2^6,\mathfrak{g}_3^6)\mapsto
( -\mathfrak{g}_2^6,
\mathfrak{g}_3^6,
-\mathfrak{g}_1^6,
-\mathfrak{g}_0^6).
\]
\end{proof}
\begin{remark} \label{invarianttheory}
Notice that we have a polynomial action of the
permutation group $\langle (0,2,1,3)\rangle$ \footnote{Here in order to be compatible
with the enumeration of $\mathfrak{g}_0,\mathfrak{g}_1,\mathfrak{g}_2,\mathfrak{g}_3$ we allow $0$
as a number in the permutations.} on the polynomial ring
$\mathbb{Q}[\mathfrak{g}_0^{12},\mathfrak{g}_1^{12},\mathfrak{g}_2^{12},\mathfrak{g}_3^{12}]$.
The ring of invariants of this action can be computed to be the
polynomial ring generated by the polynomials
\[
\mathfrak{g}_0^{12} + \mathfrak{g}_1^{12} + \mathfrak{g}_2^{12} +\mathfrak{g}_3^{12},
\mathfrak{g}_0^{24} + \mathfrak{g}_1^{24} + \mathfrak{g}_2^{24} + \mathfrak{g}_3^{24},
\mathfrak{g}_0^{12}\mathfrak{g}_1^{12} + \mathfrak{g}_2^{12}\mathfrak{g}_3^{12},
\mathfrak{g}_0^{48} + \mathfrak{g}_1^{48} + \mathfrak{g}_2^{48} + \mathfrak{g}_3^{48}.
\]
Of course $\mathfrak{g}_0^{12} + \mathfrak{g}_1^{12} + \mathfrak{g}_2^{12} +\mathfrak{g}_3^{12}=-36$ is
an invariant of the linear action but not an interesting one.
All these (and their combinations) will give class invariants. Notice that the class invariant
$A_n$ introduced later in this paper comes from the third one
$\mathfrak{g}_0^{12}\mathfrak{g}_1^{12} + \mathfrak{g}_2^{12}\mathfrak{g}_3^{12}$.
\end{remark}
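The invariance of the generators listed above can also be checked directly. The sketch below (illustrative only) applies the permutation $Y_0\mapsto Y_2$, $Y_1\mapsto Y_3$, $Y_2\mapsto Y_1$, $Y_3\mapsto Y_0$ induced by $5\theta+7$ and verifies that the listed polynomials are fixed.
\begin{verbatim}
# Verify invariance under the cyclic permutation (0 2 1 3) of the Y_i.
import sympy as sp

Y0, Y1, Y2, Y3 = sp.symbols('Y0 Y1 Y2 Y3')
tau = {Y0: Y2, Y1: Y3, Y2: Y1, Y3: Y0}
invariants = [Y0 + Y1 + Y2 + Y3,
              Y0**2 + Y1**2 + Y2**2 + Y3**2,
              Y0*Y1 + Y2*Y3,
              Y0**4 + Y1**4 + Y2**4 + Y3**4]
for f in invariants:
    print(sp.expand(f.subs(tau, simultaneous=True) - f) == 0)  # True
\end{verbatim}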
\begin{remark} \label{10}
Every polynomial expression given in remark \ref{invarianttheory} gives rise to
a class invariant. What are the relations of these class invariants?
Set $Y_i=\mathfrak{g}_i^{12}$. We know that $Y_i$ satisfy equation
\begin{equation} \label{gequ}
Y_i^4 + 36 Y_i^3+270Y_i^2+(756-j)Y_i+3^6=0,
\end{equation}
by proposition \ref{fund-ref}.
The first invariant given in remark \ref{invarianttheory} is just the constant $-36$. We then have
\[
36^2=\left( \sum_{i=0}^3 Y_i \right)^2=\left( \sum_{i=0}^3 Y_i^2 \right)+
2 \sum_{0\leq i<j \leq 3} Y_i Y_j = \left( \sum_{i=0}^3 Y_i^2 \right)+540,
\]
since $\sum_{0\leq i<j\leq 3} Y_i Y_j=270$ by eq. (\ref{gequ}).
Therefore,
\[
\sum_{i=0}^3 Y_i^2=36^2-540=756.
\]
We compute that
\begin{eqnarray}
(-36)^3=\left( \sum_{i=0}^3 Y_i \right)^3 & = &\sum_{i=0}^3 Y_i ^3+ 6 \sum_{i<j<k} Y_i Y_j Y_k +
3 \sum_{i\neq j} Y_i Y_j^2 \nonumber \\
& =& \sum_{i=0}^3 Y_i ^3 +6(j-756)+ 3 \sum_{i\neq j} Y_i Y_j^2 \label{mac1},
\end{eqnarray}
since $\sum_{i<j<k} Y_i Y_j Y_k=j-756$ by eq. (\ref{gequ}).
We now compute
\begin{eqnarray} \label{mac2}
756\cdot (-36) & =& \left(\sum_{i=0}^3 Y_i \right)\left(\sum_{j=0}^3 Y_j^2 \right)= \sum_{i=0}^3 Y_i ^3+\sum_{i\neq j} Y_i Y_j^2.
\end{eqnarray}
By combining eq. (\ref{mac1}) and (\ref{mac2}) we obtain:
\[
2 \sum_{i=0}^3 Y_i^3=6j-39528.
\]
Finally, summing eq. (\ref{gequ}) over $i$, the last invariant given in remark \ref{invarianttheory}
satisfies
\[
\sum_{i=0}^3 Y_i^4=-36 \sum_{i=0}^3 Y_i^3-270\sum_{i=0}^3Y_i^2-(756-j)\sum_{i=0}^3
Y_i-4\cdot 3^6.
\]
This means that the invariants of remark \ref{invarianttheory}
are either constants or linear combinations of $j$ (and these would
give polynomials with the same growth as the Hilbert class polynomials)
and
$Y_0Y_1+Y_2Y_3$, which gives by evaluation at $\theta$ the
$A_n$ class invariants.
\end{remark}
\begin{remark}
Notice that equation (\ref{gequ}) allows us to find the minimal polynomials (over the ring
class field) of the quantities
$Z_i:=\mathfrak{g}_i(\theta)^6$, just by replacing $Y_i$ by $Z_i^2$.
\end{remark}
\begin{remark}
Notice that using only powers of the $\mathfrak{g}_i$ modular functions
we can only construct an extension of the ring class field of order $4$.
The Ramanujan invariants $H_n$ allow us to construct a quadratic extension of the
ring class field.
\end{remark}
We return now to the study of Ramanujan invariants.
Using magma and the above computations we compute that
$5 \theta+7$ sends $(1/R_2)^{12}$ to $-3^6/R_4^{12}$.
Therefore
$H_n$ is not a class invariant.
Similarly we compute that all other generators of
$\left( \frac{\mathcal{O} }{72 \mathcal{O} } \right)^*$
act trivially on $(1/R_2)^{12}$.
The field generated by the value $H_n$ is a quadratic extension
of the ring class field of $K$.
On the other hand, the above computation allows us to compute the minimal polynomial $p_n\in \Z[x]$ of $H_n$
by using the formula
\begin{equation} \label{pn}
p_n (x) =\prod_{[a,b,c] \in \mathrm{Cl}(\mathcal{O})}
\left( x- 3^{-3} R_2(\tau_0)^{-12 [a,-b,c]} \right)
\left( x+ 3^3 R_4(\tau_0)^{-12 [a,-b,c]} \right),
\end{equation}
with $\tau_0=\frac{-1+i\sqrt{n}}{2}$.
The results of these computations for the values $n=19+24i$, $i=0,\ldots,18$,
are shown in Table \ref{pnpoly1}.
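As a numerical check of Eq.~(\ref{pn}) and of Table \ref{pnpoly1}, one can evaluate $H_{19}$ directly from the $\eta$-products. The sketch below (the working precision and series truncation are our choices) recovers $H_{19}+1/H_{19}=302$, matching $p_{19}(x)=x^2-302x+1$.
\begin{verbatim}
# Numerical check: H_19 + 1/H_19 should equal 302.
from mpmath import mp, mpc, mpf, exp, pi, sqrt

mp.dps = 40

def eta(tau, terms=200):                 # Dedekind eta as a q-product
    q = exp(2*mpc(0, 1)*pi*tau)
    prod = mpc(1)
    for m in range(1, terms):
        prod *= 1 - q**m
    return exp(mpc(0, 1)*pi*tau/12) * prod

tau0 = mpc(-1, sqrt(19))/2
R2 = eta(3*tau0) * eta(tau0/3 + mpf(2)/3) / eta(tau0)**2
H = 27 / (sqrt(3)*R2)**12                # H_19 = 27 / t_19^12
print(H + 1/H)                           # 302.000..., as in the table
\end{verbatim}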
We will now prove some properties for the minimal polynomials.
We will need the following lemma.
\begin{lemma} \label{invertibleR2}
The following identity holds:
\[
\left( R_2(\tau)R_4(\tau) \right)^{12}=-1.
\]
\end{lemma}
\begin{proof}
A. Gee in \cite[p.\ 73]{GeePhD} observes that $\mathfrak{g}_0\mathfrak{g}_1\mathfrak{g}_2\mathfrak{g}_3=\sqrt{3}$.
The result follows by eq. (\ref{R2}), (\ref{R4}).
\end{proof}
\begin{lemma} \label{invert111}
Consider a monic polynomial $f(x)=x^n+\sum_{\nu=0}^{n-1} a_\nu x^{\nu}$, with $n$ even.
Consider the set of roots $\Sigma=\{\rho_1,\ldots,\rho_n\}$ of $f$ and assume that
$f$ has no multiple roots. If the transformation
$x \mapsto 1/x$ sends the above defined set of roots $\Sigma$ to $\Sigma$ then $a_0=1$ and $a_{\nu}=a_{n-\nu}$.
\end{lemma}
\begin{proof}
Write $f=\prod_{i=1}^n(x-\rho_i)$. By the assumption all roots $\rho_i\neq 0$.
The result follows from the fact that the ``reverse polynomial''
$x^nf(1/x)=\prod_{i=1}^n (1-\rho_i x)$ has
the reciprocals of the $\rho_i$ as roots.
\end{proof}
\begin{proposition}
The minimal polynomials $p_n(x)=x^{2h} +\sum_{\nu=0}^{2h-1} a_\nu x^\nu$ of $H_n$ are palindromic, i.e.
$a_\nu=a_{2h-\nu}$. The constant coefficient $a_0$ equals $1$.
\end{proposition}
\begin{proof}
From Eq. (\ref{pn}) we have that
whenever
\[
H_n^{[a,-b,c]}=3^{-3} R_2(\tau_0)^{-12 [a,-b,c]}
\]
is a root then
$
- 3^3 R_4(\tau_0)^{-12 [a,-b,c]}
$ is a root. But lemma \ref{invertibleR2} implies that
\[
-3^3 R_4(\tau_0)^{-12 [a,-b,c]}=3^{3} R_2(\tau_0)^{12[a,-b,c]}=1/H_{n}^{[a,-b,c]}.
\] The desired result now follows by
lemma \ref{invert111}.
\end{proof}
\begin{corollary}
The values $H_n$ are real units.
\end{corollary}
\begin{proof}
This is clear since $H_n$ is real and the product of all roots of $p_n$ is $a_0=1$.
\end{proof}
\begin{corollary}
The polynomials $p_n(x)$ have the following simplified form:
\[
p_n (x) =\prod_{[a,b,c] \in \mathrm{Cl}(\mathcal{O})}
\left( x- 3^{-3} R_2(\tau_0)^{-12 [a,-b,c]} \right)
\left( x- 3^{3} R_2(\tau_0)^{12 [a,-b,c]} \right),
\]
\end{corollary}
We have seen that $H_n$ is not a class invariant. But the quantity $A_n=H_n+\frac{1}{H_n}$
is a class invariant as we can verify using theorem \ref{shimura1}. This new invariant is
not a unit anymore.
The minimal polynomial $q_n\in \Z[x]$ of $A_n$ is given by
\[
q_n (x) =\prod_{[a,b,c] \in \mathrm{Cl}(\mathcal{O})}
\left( x- 3^{-3} R_2(\tau_0)^{-12 [a,-b,c]} -3^{3} R_2(\tau_0)^{12 [a,-b,c]} \right).
\]
In Table \ref{qnpoly1} we give minimal polynomials $q_n$
for $19\leq n \leq 451$, $n\equiv 19 \bmod 24$.
Observe that if $p_n=\sum_{\nu=0}^{2h} a_\nu x^\nu$ and
$q_n=\sum_{\nu=0}^{h} b_\nu x^\nu$ then
$b_\nu=a_{h+\nu}$ as one can prove using the Vieta formul\ae.
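Both properties are easy to check on the tabulated data. The following Python sketch (illustrative only; the coefficient list of $p_{91}$ is copied from Table \ref{pnpoly1}) verifies the palindromic property, checks that $a_0=1$, and reads off the coefficients $b_\nu=a_{h+\nu}$, which match the entry for $q_{91}$ in Table \ref{qnpoly1}:
\begin{verbatim}
# Coefficients of p_91(x) = x^4 - 17590492 x^3 + 148475718 x^2
#                           - 17590492 x + 1, constant term first.
p91 = [1, -17590492, 148475718, -17590492, 1]

def is_palindromic(a):
    # a_nu == a_{2h - nu} for all nu
    return a == a[::-1]

def upper_half(a):
    # read off b_nu = a_{h + nu} for a palindromic polynomial of degree 2h
    h = (len(a) - 1) // 2
    return a[h:]

assert is_palindromic(p91) and p91[0] == 1
print(upper_half(p91))   # -> [148475718, -17590492, 1]
\end{verbatim}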
\begin{table}[H] \label{pnpoly1}
\caption{Polynomials $p_n$ for $19\leq n \leq 451$, $n\equiv 19 \bmod 24$.}
\[
{\tiny
\begin{array}{|l|c|l|}
\hline
n & \mbox{C.N.} & p_n(x)\\
\hline
19 & 1 &
x^2 -302x + 1 \\
\hline
43 & 1 &
x^2 -33710x + 1 \\
\hline
67 & 1 &
x^2 -1030190x + 1 \\
\hline
91 & 2 &
x^4 -17590492x^3 + 148475718 x^2 -17590492x +1 \\
\hline
115 &2 &
x^4 -210267100x^3 + 424646982x^2 -210267100x + 1 \\
\hline
139 & 3 &
x^6 -1960891530x^5 -13943617329x^4 -30005622092x^3 -13943617329x^2 \\
& & -1960891530x +1\\
\hline
163 & 1 &
x^2 -15185259950x +1 \\
\hline
187 & 2 &
x^4 -101627312860x^3 + 1102664076102x^2 -101627312860x + 1 \\
\hline
211 & 3 &
x^6 -604100444298x^5 + 20137792248015x^4 -414952590867788x^3 \\
& & + 20137792248015x^2 -604100444298x + 1\\
\hline
235 & 2 &
x^4 -3253104234460x^3 + 47263043424582x^2 -3253104234460x + 1 \\
\hline
259 & 4 &
x^8 -16106786824376x^7 -810131323637348x^6 -9925794993033992x^5 + \\
& &26425093196592454x^4 -9925794993033992x^3 -810131323637348x^2 \\
& & -16106786824376x + 1
\\
\hline
283& 3 &
x^6 -74167114012170x^5 -119654555118897x^4 -3009681130315340x^3 \\
& & -119654555118897 x^2 -74167114012170x +1
\\
\hline
307 & 3 &
x^6 -320508447128970x^5 -1963936794491697x^4
-5740503875332940x^3 \\
& &-1963936794491697x^2 -320508447128970x + 1
\\
\hline
331 & 3 &
x^6 -1309395837485706x^5 +113317118488006863x^4
-11556648519941425484x^3 \\
& &+ 113317118488006863x^2 -1309395837485706x + 1
\\
\hline
355 & 4 &
x^8 -5087640031882040x^7 + 583328538578918044x^6 -16479665770932342920x^5 \\
& & + 172809183517820572486x^4 -16479665770932342920x^3 \\
&& +583328538578918044x^2 -5087640031882040x + 1\\
\hline
379 & 3 &
x^6 -18895199824010634x^5 -4124999225954564913x^4 -274501369688142310220x^3 \\
&&-4124999225954564913x^2 -18895199824010634x +1
\\
\hline
403 & 2 &
x^4 -67361590779141340x^3 +
361802623368357702x^2 -67361590779141340x + 1\\
\hline
427 & 2 &
x^4 -231347688320676700x^3 + 2519902537964728902x^2 -231347688320676700
x + 1\\
\hline
451 & 6 &
x^{12} -767819046799630740x^{11} + 161913605740919729922x^{10} \\
&& -66458029641477066911812x^9
-1654687781430584516238609x^8 \\
&& -33875537641085268651117096x^7 \\
&& + 81879258106346356247143452x^6 -33875537641085268651117096x^5 \\
&& -1654687781430584516238609x^4 -66458029641477066911812x^3 \\
&& + 161913605740919729922x^2 -767819046799630740x + 1\\
\hline
\end{array}
}
\]
\end{table}
\begin{table}[H] \label{qnpoly1}
\caption{Polynomials $q_n$ for $19\leq n \leq 451$, $n\equiv 19 \bmod 24$.}
\[
{\tiny
\begin{array}{|l|c|l|}
\hline
n & \mbox{C.N.} & q_n(x)\\
\hline
19 & 1 &
x -302 \\
\hline
43 & 1 &
x -33710 \\
\hline
67 & 1 &
x -1030190 \\
\hline
91 & 2 &
x^2 -17590492x + 148475718 \\
\hline
115 &2 &
x^2 -210267100x + 424646982 \\
\hline
139 & 3 &
x^3 -1960891530x^2 -13943617329x -30005622092\\
\hline
163 & 1 &
x -15185259950\\
\hline
187 & 2 &
x^2 -101627312860x + 1102664076102 \\
\hline
211 & 3 &
x^3 -604100444298x^2+ 20137792248015x -414952590867788\\
\hline
235 & 2 &
x^2 -3253104234460x + 47263043424582 \\
\hline
259 & 4 &
x^4 -16106786824376x^3 -810131323637348x^2 -9925794993033992x + \\
& &26425093196592454
\\
\hline
283& 3 &
x^3 -74167114012170x^2 -119654555118897x -3009681130315340
\\
\hline
307 & 3 &
x^3 -320508447128970x^2 -1963936794491697x
-5740503875332940
\\
\hline
331 & 3 &
x^3 -1309395837485706x^2 +113317118488006863x
-11556648519941425484
\\
\hline
355 & 4 &
x^4 -5087640031882040x^3 + 583328538578918044x^2 -16479665770932342920x \\
& & + 172809183517820572486\\
\hline
379 & 3 &
x^3 -18895199824010634x^2 -4124999225954564913x -274501369688142310220
\\
\hline
403 & 2 &
x^2 -67361590779141340x +
361802623368357702\\
\hline
427 & 2 &
x^2 -231347688320676700x + 2519902537964728902\\
\hline
451 & 6 &
x^{6} -767819046799630740x^{5} + 161913605740919729922x^{4} \\
&& -66458029641477066911812x^3
-1654687781430584516238609x^2 \\
&& -33875537641085268651117096x + 81879258106346356247143452\\
\hline
\end{array}
}
\]
\end{table}
\subsection{Some Unit Computations}
\label{3.1}
Professor P. Stevenhagen suggested to us the study of the following situation.
Consider the field $R_n:=\mathbb{Q}(H_n) \subset \mathbb{R}$. The field $R_n$ is
an abelian extension of degree $2h$ of $\Q$, where $h$ is the class number of the
order $\Z[\theta_n]$. If $h=1$, i.e., when $n=19,43,67,163$, then $R_n$ is a
real quadratic extension of $\Q$, and we can verify using Magma that $H_n$ is
a fundamental unit.
The field $R_n$ has $\mathbb{Q}(\sqrt{3n})$ as a subfield and we might ask
if $\mathrm{N}_{R_n/\Q(\sqrt{3n})}H_n$ is a fundamental unit of the
real quadratic field $\mathbb{Q}(\sqrt{3n})$.
Using Magma we compute that this is not always the case. In the following table we give the
index of the subgroup generated by $\mathrm{N}_{R_n/\Q(\sqrt{3n})}H_n$
inside the group generated by the fundamental unit:
\begin{center}
{
\begin{tabular}{||c||l|l|l|l|l|l|l|l|l|l||}
\hline
\hline
N &19 & 43 & 67 & 91 & 115 & 139 & 163 & 187 & 211 &235 \\
\hline
index & 1 &1 & 1 & 2 & 2 &1 & 1 & 2 & 1 & 2 \\
\hline
\hline
N & 259 & 283 & 307 & 331 &
355 & 379 & 403 & 427 & 451 & 475 \\
\hline
index & 4 & 1 & 1 & 3 & 2 & 1 & 2 & 2 & 2 & 8\\
\hline
\hline
\end{tabular}
}
\end{center}
The authors could not find an obvious pattern for the behavior of the index.
Professor J. Antoniadis pointed out to us the following pattern: if the index is one, then $N$ is prime, but not
vice versa, since the prime $331$ gives index $3$. We have checked that this is correct for all
values of $N< 1000$.
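The check is mechanical. A minimal Python sketch, assuming the sympy library and with the indices copied from the table above:
\begin{verbatim}
from sympy import isprime

# N : index of the subgroup generated by the norm of H_N inside the
# group generated by the fundamental unit (table above).
index = {19: 1, 43: 1, 67: 1, 91: 2, 115: 2, 139: 1, 163: 1, 187: 2,
         211: 1, 235: 2, 259: 4, 283: 1, 307: 1, 331: 3, 355: 2,
         379: 1, 403: 2, 427: 2, 451: 2, 475: 8}

# "index one implies N prime"; the converse fails: N = 331 has index 3.
for N, i in index.items():
    if i == 1:
        assert isprime(N), N
\end{verbatim}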
\section{Computing class invariants for the ${3 \bmod 24}$ case}
\label{3mod24}
In this case the prime $2$ remains inert in $\mathcal{O}$ while $3$ ramifies.
We compute first the structure of the group $\left(\mathcal{O}/72\mathcal{O}\right)^*$.
Modulo $72$, the values $n=3,27,51$ are the ones congruent to $3$ modulo $24$.
The structure of the group $(\mathcal{O}/72 \mathcal{O})^*$ is computed to be the following:
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$n$ & $ (\mathcal{O}/9 \mathcal{O})^*$ & $(\mathcal{O}/8 \mathcal{O})^*$ \\
\hline
3 & $\Z/6\Z \times \Z/3\Z \times \Z/3\Z$ & $ \Z/12\Z \times \Z/2\Z \times \Z/2\Z$ \\
27 & $\Z/18\Z \times \Z/3\Z $ & $\Z/12\Z \times \Z/2\Z \times \Z/2\Z$ \\
51 & $\Z/18\Z \times \Z/3\Z$ & $\Z/12\Z \times \Z/2\Z \times \Z/2\Z$ \\
\hline
\end{tabular}
\end{center}
The action of the generators $\tau_i$ of each cyclic direct summand on the elements
$\mathfrak{g}_0,\mathfrak{g}_1,\mathfrak{g}_2,\mathfrak{g}_3$ is computed, for each case, to be:\\
$\mathbf{n=3\bmod 72}$
\[
\begin{array}{|c||r|r|r|r|r|r|r|}
\hline & \tau_{1} & \tau_{2} & \tau_{3} & \tau_{4} & \tau_{5} & \tau_{6}\\
\hline \tau(\mathfrak{g}_{0}) & {\mathfrak{g}_{0}} & -{\mathfrak{g}_{0}} & {\mathfrak{g}_{0}} & (-{\zeta}^{18}+{\zeta}^{6}){\mathfrak{g}_{1}} & -{\zeta}^{12}{\mathfrak{g}_{0}} & ({\zeta}^{12}-1){\mathfrak{g}_{0}}\\
\tau(\mathfrak{g}_{1}) & {}-{\mathfrak{g}_{1}} & -{\mathfrak{g}_{1}} & {\mathfrak{g}_{1}} & (-{\zeta}^{12}+1){\mathfrak{g}_{3}} & -{\zeta}^{12}{\mathfrak{g}_{1}} & -{\zeta}^{12}{\mathfrak{g}_{1}}\\
\tau(\mathfrak{g}_{2}) & {}-{\mathfrak{g}_{2}} & -{\mathfrak{g}_{2}} & {\mathfrak{g}_{2}} & -{\mathfrak{g}_{2}} & {\mathfrak{g}_{2}} & -{\zeta}^{12}{\mathfrak{g}_{2}}\\
\tau(\mathfrak{g}_{3}) & {}-{\mathfrak{g}_{3}} & -{\mathfrak{g}_{3}} & {\mathfrak{g}_{3}} & -{\zeta}^{18}{\mathfrak{g}_{0}} & -{\zeta}^{12}{\mathfrak{g}_{3}} & ({\zeta}^{12}-1){\mathfrak{g}_{3}}\\\hline \end{array}\]
$\mathbf{n=27 \bmod 72}$
\[
\begin{array}{|c||r|r|r|r|r|r|r|}
\hline & \tau_{1} & \tau_{2} & \tau_{3} & \tau_{4} & \tau_{5} & \tau_{6}\\
\hline \tau(\mathfrak{g}_{0}) & {\mathfrak{g}_{0}} & -{\mathfrak{g}_{0}} & {\mathfrak{g}_{0}} & {\zeta}^{6}{\mathfrak{g}_{1}} & -{\zeta}^{12}{\mathfrak{g}_{0}} & ({\zeta}^{12}-1){\mathfrak{g}_{0}}\\
\tau(\mathfrak{g}_{1}) & {}-{\mathfrak{g}_{1}} & -{\mathfrak{g}_{1}} & {\mathfrak{g}_{1}} & {\zeta^{12}}{\mathfrak{g}_{3}} & -{\zeta}^{12}{\mathfrak{g}_{1}} & -{\zeta}^{12}{\mathfrak{g}_{1}}\\
\tau(\mathfrak{g}_{2}) & {}-{\mathfrak{g}_{2}} & -{\mathfrak{g}_{2}} & {\mathfrak{g}_{2}} & -{\mathfrak{g}_{2}} & {\mathfrak{g}_{2}} & -{\zeta}^{12}{\mathfrak{g}_{2}}\\
\tau(\mathfrak{g}_{3}) & {}-{\mathfrak{g}_{3}} & -{\mathfrak{g}_{3}} & {\mathfrak{g}_{3}} & -{\zeta}^{18}{\mathfrak{g}_{0}} & -{\zeta}^{12}{\mathfrak{g}_{3}} & ({\zeta}^{12}-1){\mathfrak{g}_{3}}\\\hline \end{array}\]
$\mathbf{n=51 \bmod 72}$
\[
\begin{array}{|c||r|r|r|r|r|r|r|}
\hline & \tau_{1} & \tau_{2} & \tau_{3} & \tau_{4} & \tau_{5} & \tau_{6}\\
\hline \tau(\mathfrak{g}_{0}) & {\mathfrak{g}_{0}} & -{\mathfrak{g}_{0}} & {\mathfrak{g}_{0}} & {\zeta^{18}}{\mathfrak{g}_{1}} & -{\zeta}^{12}{\mathfrak{g}_{0}} & ({\zeta}^{12}-1){\mathfrak{g}_{0}}\\
\tau(\mathfrak{g}_{1}) & {}-{\mathfrak{g}_{1}} & -{\mathfrak{g}_{1}} & {\mathfrak{g}_{1}} & -{\mathfrak{g}_{3}} & -{\zeta}^{12}{\mathfrak{g}_{1}} & -{\zeta}^{12}{\mathfrak{g}_{1}}\\
\tau(\mathfrak{g}_{2}) & {}-{\mathfrak{g}_{2}} & -{\mathfrak{g}_{2}} & {\mathfrak{g}_{2}} & -{\mathfrak{g}_{2}} & {\mathfrak{g}_{2}} & -{\zeta}^{12}{\mathfrak{g}_{2}}\\
\tau(\mathfrak{g}_{3}) & {}-{\mathfrak{g}_{3}} & -{\mathfrak{g}_{3}} & {\mathfrak{g}_{3}} & {\zeta}^{18}{\mathfrak{g}_{0}} & -{\zeta}^{12}{\mathfrak{g}_{3}} & ({\zeta}^{12}-1){\mathfrak{g}_{3}}\\\hline \end{array}\]
Now if we raise the functions $\mathfrak{g}_i$ to the $12$th power, we observe that $(\mathcal{O}/72 \mathcal{O})^*$
acts like a $3$-cycle on the $\mathfrak{g}_i^{12}$, leaving $\mathfrak{g}_2^{6}$ invariant.
Therefore $\mathfrak{g}_2^{12}$ gives rise to a class invariant; the function
$\mathfrak{g}_0^{12}+\mathfrak{g}_1^{12}+\mathfrak{g}_3^{12}$ also gives rise to a class invariant, but since
their sum is $-36$, the invariant $\mathfrak{g}_2^{12}$ is the more interesting one (it involves the computation of
only one modular function).
In Table \ref{aa33} we present some small class polynomials
for the invariant $\mathfrak{g}_2^{12}$.
Notice that $\mathfrak{g}_2^6$ is also a class invariant, but its minimal polynomial
does not have coefficients in $\Z$. In Table \ref{bb33} we present some of the
minimal polynomials of $\mathfrak{g}_2^6$ that lie in $\Z[\sqrt{D'}][x]$,
where $D'$ is the core discriminant of $D$, i.e., the non-square part of $D$.
\begin{table}[H]
\caption{Polynomials for the invariant $\mathfrak{g}_2^{12}$, $n\equiv 3 \bmod 24$. \label{aa33}}
{\tiny
\begin{tabular}{|l|l|l|}
\hline
n & C.N. & polynomials \\
\hline
3 & 1 & x+27 \\
\hline
27 & 1 & $x + 243$ \\
\hline
51 & 2 &
$
x^2
+ 1817 x
+ 63408
$ \\
\hline
75 & 2 & $ x^2
+ 8694 x
+ 729 $ \\
\hline
99 & 2 &
$x^2
+ 33538 x
+ 675212
$
\\
\hline
123 & 2 &
$
x^2
+ 110682 x
+ 3982527$\\
\hline
147 & 2 & $
x^2
+ 326646 x
+ 729$
\\
\hline
171 & 4 &
$
x^4
+ 885577 x^3
+ 75449123 x^2
+ 1878791197 x
+ 9480040943
$ \\ \hline
195 &4 &
$
x^4
+ 2243057 x^3
+ 134435463 x^2
+ 2044439302 x
+ 4021471722$
\\ \hline
219 & 4 &
$
x^4
+ 5374182 x^3
+ 177358410 x^2
+ 3337735739 x
+ 452759$
\\ \hline
243 & 3 &
$
x^3
+ 12288753 x^2
- 36669429 x
+ 129140163$
\\ \hline
267 & 2 & $
x^2
+ 27000090 x
+ 972001215$
\\ \hline
291 & 4 &
$
x^4
+ 57302460 x^3
+ 6191231603 x^2
+ 190393837000 x
+ 2422188$
\\ \hline
315 & 4 & $
x^4
+ 117966740 x^3
+ 5465452595 x^2
- 18078266775 x
- 2283511958571$
\\ \hline
339 & 6 &
$
x^6
+ 236380194 x^5
+ 16297323547 x^4
+ 865456023300 x^3
+ 28951950717535 x^2$ \\
& &
$
+ 379087533199695 x
+ 3423896293014081$
\\ \hline
363 & 4 &
$
x^4
+ 462331692 x^3
+ 22863777174 x^2
+ 337039803468 x
+ 531441$
\\ \hline
387 & 4 &
$
x^4
+ 884736829 x^3
+ 65027878839 x^2
+ 1219285304855 x
+ 878209991853$
\\ \hline
411 & 6 &
$
x^6
+ 1659823938 x^5
+ 299376470893 x^4
+ 17714533511043 x^3
+ 122181573194844 x^2$ \\ & &
$- 5409428705176675 x
+ 70928211329527433$
\\ \hline
\end{tabular}
}
\end{table}
\begin{table}[H]
\caption{Polynomials for the invariant $\mathfrak{g}_2^{6}$, $n\equiv 3 \bmod 24$. \label{bb33}}
{\tiny
\begin{tabular}{|r|c|l|}
\hline
D & C.N. & polynomials \\
\hline
-3 &
1&
$
x
- 3 \sqrt{D'}
$ \\ \hline
-27 &
1&
$
x
- 9 \sqrt{D'}
$ \\ \hline
-51 &
2&
$
x^2
- 6 \sqrt{D'} x
- 27
$ \\ \hline
-75 &
2&
$
x^2
- 54 \sqrt{D'} x
- 27
$ \\ \hline
-99 &
2&
$
x^2
- 54 \sqrt{D'} x
+ 729
$ \\ \hline
-123 &
2&
$
x^2
- 30 \sqrt{D'} x
- 27
$ \\ \hline
-147 &
2&
$
x^2
- 330 \sqrt{D'} x
- 27
$ \\ \hline
-171 &
4&
$
x^4
- 216 \sqrt{D'} x^3
- 486 x^2
- 19683
$ \\ \hline
-195 &
4&
$
x^4
- 108 \sqrt{D'} x^3
- 15714 x^2
+ 2916 \sqrt{D'} x
+ 729
$ \\ \hline
-219 &
4&
$
x^4
- 156 \sqrt{D'} x^3
+ 22302 x^2
+ 4212 \sqrt{D'} x
+ 729
$ \\ \hline
-243 &
3&
$
x^3
- 2025 \sqrt{D'} x^2
- 6561 x
+ 6561 \sqrt{D'}
$ \\ \hline
-267 &
2&
$
x^2
- 318 \sqrt{D'} x
- 27
$ \\ \hline
-291 &
4&
$
x^4
- 444 \sqrt{D'} x^3
- 32130 x^2
+ 11988 \sqrt{D'} x
+ 729
$ \\ \hline
-315 &
4&
$
x^4
- 1836 \sqrt{D'} x^3
- 7290 x^2
- 78732 \sqrt{D'} x
+ 531441
$ \\ \hline
-339 &
6&
$
x^6
- 834 \sqrt{D'} x^5
+ 293355 x^4
+ 123444 \sqrt{D'} x^3
- 7920585 x^2
- 607986 \sqrt{D'} x
- 19683
$ \\ \hline
-363 &
4&
$
x^4
- 12420 \sqrt{D'} x^3
- 218754 x^2
+ 335340 \sqrt{D'} x
+ 729
$ \\ \hline
-387 &
4&
$
x^4
- 4536 \sqrt{D'} x^3
- 486 x^2
- 19683
$ \\ \hline
\end{tabular}
}
\end{table}
\section{An Application to Elliptic Curve Generation}
An important application of class invariants is that their minimal polynomials can be used for the
generation of elliptic curves over finite fields. In particular, a method called the {\em Complex Multiplication}
or {\em CM method}
is used, which arises from the theory of Complex Multiplication (CM) of elliptic curves
over the rationals.
In the case of prime
fields $\mathbb{F}_p$,
the CM method starts with the specification of
a discriminant value $D$, the determination of the order $p$ of
the underlying prime field and the order $m$ of the elliptic curve (EC). It then
computes the Hilbert polynomial,
which is uniquely determined by $D$, and locates one of its roots
modulo $p$. This root can be used to construct the parameters of
an EC with order $m$ over the field $\mathbb{F}_{p}$.
Alternative classes of polynomials can be used in the CM method as long as there is a transformation of their roots
modulo $p$ to the roots of the corresponding Hilbert polynomials. Clearly, polynomials $q_n$ can be used in the
CM method. However, firstly we have to find a relation between the $j$-invariant and the values $A_n$.
Using \cite[lemma\ 3]{KoKo2} we obtain the following relation between the $j$-invariant $j_n$ and the Ramanujan values $t_n$:
\begin{equation} \label{relate1}
j_n = (t_n^6-27t_n^{-6}-6)^3.
\end{equation}
If we set $C=t_n^6-27t_n^{-6}$ then we easily observe that
\begin{equation}
\label{c-eq}
C^2 = 27(A_n - 2).
\end{equation}
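This identity can be verified symbolically. The following sympy sketch assumes the relation $H_n=t_n^{12}/27$ between $H_n$ and the Ramanujan value $t_n$, which is consistent with Eq. (\ref{relate1}) and Eq. (\ref{c-eq}):
\begin{verbatim}
from sympy import symbols, simplify

t = symbols('t', positive=True)
C = t**6 - 27*t**(-6)
H = t**12 / 27               # assumed relation between H_n and t_n
A = H + 1/H                  # A_n = H_n + 1/H_n
print(simplify(C**2 - 27*(A - 2)))   # prints 0, i.e. C^2 = 27*(A_n - 2)
\end{verbatim}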
\begin{remark}
Another way to obtain a formula relating the values $H_n$ and $t_n$ is to work with equation
(\ref{gequ}), which relates $\mathfrak{g}_i^{12}$ to $j$. We compute the coefficients of the polynomial
$\prod_{0\leq i < j \leq 3} (X-\mathfrak{g}_i^{12}\mathfrak{g}_j^{12})$. These are symmetric polynomials in
the variables $\mathfrak{g}_i^{12}$ and can be expressed as polynomials in the elementary
symmetric polynomials
which up to sign are the coefficients of the polynomial in eq. (\ref{gequ}). Using this approach with
Magma we arrive at
\begin{eqnarray*}
G(Y,j) &= &
{Y}^{6}-270\,{Y}^{5}+ \left( -36\,j+26487 \right) {Y}^{4}+ \\
& & \left( -{j}
^{2}+1512\,j-1122660 \right) {Y}^{3} + \left( -26244\,j+19309023
\right) {Y}^{2}\\
& &-143489070\,Y+387420489.
\end{eqnarray*}
We arrive at the same polynomial if we eliminate $t_n^6$ from eq. (\ref{relate1}) and
the definition of $H_n$.
\end{remark}
If we wish to construct an EC over a prime field $\mathbb{F}_p$ using the $q_n$ polynomials, we have to
find one of their roots modulo $p$ and then transform it to a root $j_n$ of the corresponding
Hilbert polynomial. The root $j_n$ can be acquired using Eq.~(\ref{c-eq}) and the relation $j_n = (C-6)^3$.
Let us give a brief example on how $q_n$ polynomials can be used in the CM method.
Suppose that we wish to generate an EC over the prime field $\mathbb{F}_p$ where
$p=2912592100297027922366637171900365067697538262949$ and we
choose to use a discriminant equal to $n=259$. Initially,
the CM method having as input the values $p$ and $n$ constructs the order of the
EC which is equal to a prime number \[m=2912592100297027922366635123877214056291799441739.\]
Then, the polynomial $q_{259}(x)$ is constructed
\begin{eqnarray*}
q_{259}(x) & = & x^4-16106786824376x^3-810131323637352x^2 \\ & &-9877474632560864x+ 28045355843867152.
\end{eqnarray*}
This polynomial has four roots modulo $p$. Every such root can be transformed to a root $j_{n}$
using Eq.~(\ref{c-eq}) and the relation $j_n = (C-6)^3$. Eq.~(\ref{c-eq})
results in two values of $C$, and this means that for every root modulo $p$ of the $q_{259}(x)$
polynomial we will have two roots $j_{259}$. However, only one of these two roots
gives rise to an EC with order $m$.
The correct choice is made easily: we follow the steps of the CM method, construct an EC having as input a value
$j_{259}$, and then check whether the resulting curve (or its twist) indeed has order $m$.
If the answer is negative, then this value is rejected.
For example, one root modulo $p$ of the $q_{259}(x)$ polynomial is equal to
\[
r = 292000143869356471233943284623526736899256758497.
\]
Using Eq.~(\ref{c-eq}), we compute the two solutions
\[
C_1 = 1555795526891231123931549739786994193545044932499
\]
and
\[
C_2 = 1356796573405796798435087432113370874152493330450
\]
and therefore the two possible values of the $j$-invariant are
\[
j_1 = 2662539171725102375366109856465433412332472450493
\]
and
\[
j_2 = 1859938916666171899538097507602720023646246323886.
\]
Selecting the first value $j_1$, we construct the EC $y^2 = x^3 + ax + b$ where
\[
a = 1545339657951389136173847270246016180230953846699
\]
and
\[
b = 59362405201916783327019122863889097588123143483.
\]
In order to check whether this EC (or its twist) has order $m$, we choose a point $P$ at random on
it and compute the point
$Q=mP$. If this point is equal to the point at infinity, then the EC has order $m$. Making the necessary computations for the above EC, we see that this is the case and our construction is finished. Thus, we conclude that we have chosen the correct $j$-invariant, and the second value $j_2$ is rejected.
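The root-to-$j$-invariant transformation used above can be reproduced with a few lines of Python. The following sketch assumes sympy's \texttt{sqrt\_mod} for the modular square root; up to the ordering of the two square roots it should recompute the values $C_1, C_2$ and $j_1, j_2$ of the example:
\begin{verbatim}
from sympy.ntheory import sqrt_mod

p = 2912592100297027922366637171900365067697538262949
r = 292000143869356471233943284623526736899256758497  # root of q_259 mod p

# Eq. (c-eq): C^2 = 27*(A_n - 2) mod p; r plays the role of A_n mod p.
# This assumes 27*(r - 2) is a quadratic residue mod p.
C1 = sqrt_mod(27 * (r - 2) % p, p)
C2 = (p - C1) % p                  # the second square root
j1 = pow(C1 - 6, 3, p)             # j = (C - 6)^3 mod p
j2 = pow(C2 - 6, 3, p)
print(j1, j2)
\end{verbatim}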
\section{Introduction}
\label{sec:introduction}
Deep neural networks (DNNs) have demonstrated significant success in various tasks. Besides the superior performance of DNNs, some attempts have been made to investigate the interpretability of DNNs in recent years. Previous studies of interpreting DNNs can be roughly summarized into two types, \emph{i.e.} the explanation of DNNs in a post-hoc manner~\cite{lundberg2017unified, ribeiro2016should}, and the analysis of the feature representation capacity of a DNN~\cite{higgins2017beta, achille2018emergence, achille2018information, fort2019stiffness, liang2019knowledge}.
This study focuses on a new perspective of analyzing the feature representation capacity of DNNs. \emph{I.e.} we propose a number of generic metrics to define, quantify, and analyze the complexity of features in DNNs. Previous research usually analyzed the \textit{theoretical} maximum complexity of a DNN according to network architectures~\cite{arora2016understanding, zhang2016architectural, raghu2017expressive, boob2018complexity,manurangsi2018computational}. In comparison, we propose to quantify the \textit{real} complexity of features learned by a DNN, which is usually significantly different from the \textit{theoretical} maximum complexity that a DNN can achieve. For example, if we use a deep network to solve a linear regression problem, the theoretical complexity of features may be much higher than the real feature complexity.
In this paper, for the feature of a specific intermediate layer, we define the real complexity of this feature as the minimum number of nonlinear transformations required to compute this feature. However, the quantification of nonlinear transformations presents significant challenges to state-of-the-art algorithms. Thus, we use the number of nonlinear layers to approximate the feature complexity. \emph{I.e.} if a feature component can be computed using $k$ nonlinear layers, but cannot be computed with $k-1$ nonlinear layers, we consider its complexity to be of the $k$-th order.
In this way, we can disentangle an intermediate-layer feature into feature components of different complexity orders, as Figure~\ref{fig:task} shows. The clear disentanglement of feature components enables the quantitative analysis of a DNN. We further investigate the reliability of feature components of different complexity orders, and explore the relationship between the feature complexity and the performance of DNNs. More specifically, we analyze DNNs from the following perspectives:
\begin{figure}[t]
\begin{small}
\begin{minipage}[t]{0.63\linewidth}
\centering
\includegraphics[width=0.95\linewidth]{figs/task.pdf}
\caption{
We disentangle the raw feature into feature components of different complexity orders. We further design metrics to analyze the disentangled feature components.
}
\label{fig:task}
\end{minipage}
\hfill
\begin{minipage}[t]{0.35\linewidth}
\centering
\includegraphics[width=1.1\linewidth]{figs/figure_3.pdf}
\vspace{-10pt}
\caption{The network for the disentanglement of reliable feature components.}
\label{fig:reli-structure}
\end{minipage}
\end{small}
\end{figure}
\textbullet\quad The distribution of feature components of different complexity orders potentially reflects the difficulty of the task.
A simple task usually makes the DNN mainly learn simple features.
\textbullet\quad We further define metrics to analyze the reliability, the effectiveness, and the significance of over-fitting for the disentangled feature components:
\begin{enumerate}
\item In this paper, \textit{reliable} feature components refer to features that can be stably learned for the same task by DNNs with different architectures and parameters.
\item \textit{The effectiveness of a feature component} refers to whether the feature component of a certain complexity order corresponds to neural activations relevant to the task. Usually, irrelevant neural activations can be considered as noises.
\item \textit{The significance of over-fitting of a feature component} represents whether the feature component is over-fitted to specific training samples. In this paper, the significance of over-fitting is quantified as the difference between a feature component's numerical contribution to the decrease of the training loss and its contribution to the decrease of the testing loss.
\item We successfully discover a strong connection between the feature complexity and the performance of DNNs.
\end{enumerate}
\textbullet\quad Taking the disentangled feature components as the input feature of DNNs, especially feature components with high effectiveness and reliability, improves the performance of DNNs.
\textbf{Method:} More specifically, the disentanglement of feature components of different complexity orders is inspired by knowledge distillation~\cite{hinton2015distilling}. We consider the target DNN as the teacher network. Then, we design several disentangler networks (namely disentangler nets) with different depths to mimic the feature in an intermediate layer of the teacher network. Feature components mimicked by shallow disentangler nets usually correspond to those of low complexity.
A deeper disentangler net can incrementally learn an additional feature component of a slightly higher complexity order, besides components of low complexity.
In addition, we find that the number of channels in disentangler nets does not significantly affect the distribution of feature components of different complexity orders. This demonstrates the trustworthiness of our method. The proposed method can be widely applied to DNNs learned for different tasks with different architectures.
As generic mathematical tools, the proposed metrics provide insightful explanations for the success of network compression and knowledge distillation.
\textbf{Contributions:} Our contributions can be summarized as follows:
(1) We propose a method to define, quantify, and analyze the real complexity of intermediate-layer features in a DNN. Unlike the theoretical complexity of a DNN based on its architecture, the real feature complexity quantified in this paper reveals the difficulty of tasks.
(2) The proposed method disentangles feature components of different complexity orders.
(3) We propose new metrics to analyze these feature components in terms of the reliability, the effectiveness, the significance of over-fitting, and the performance of DNNs. The analysis provides a new perspective to understand the network compression and the knowledge distillation.
(4) The disentangled feature components improve the performance of DNNs.
\section{Related Work}
In this section, we discuss related studies in the scope of interpreting DNNs.
\textbf{Visual explanations for DNNs:} The most direct way to interpret DNNs includes the visualization of the knowledge encoded in intermediate layers of DNNs~\cite{zeiler2014visualizing,simonyan2017deep,yosinski2015understanding,mahendran2015understanding, dosovitskiy2016inverting}, and the estimation of the pixel-wise attribution/importance/saliency on an input image~\cite{ribeiro2016should, lundberg2017unified, kindermans2017learning, fong2017interpretable, zhou2016learning, selvaraju2017grad, chattopadhay2018grad, zhou2014object}.
Some recent studies of network visualization reveal certain properties of DNNs. For example, \cite{fong2017interpretable} used influence functions to analyze the sensitivity to input data of a DNN.
Unlike previous studies, in this paper, we propose to disentangle and visualize feature components of different complexity orders for better understandings of DNNs.
\textbf{Explanations for the representation capacity of DNNs:} The evaluation of the representation capacity of DNNs provides a new perspective for explanations. The information-bottleneck theory~\cite{wolchover2017new,shwartz2017opening} used mutual information to evaluate the representation capacity of DNNs~\cite{goldfeld2019estimating,xu2017information}. \cite{achille2018information} further used the information-bottleneck theory to constrain the feature representation during the learning process to learn more disentangled features. The CLEVER score~\cite{weng2018evaluating} was used to estimate the robustness of DNNs.
The stiffness~\cite{fort2019stiffness}, the Fourier analysis~\cite{xu2018understanding}, and the sensitivity metrics~\cite{novak2018sensitivity} were proposed and applied to analyze the generalization capacity of DNNs.
The canonical correlation analysis (CCA)~\cite{kornblith2019similarity} was used to measure the similarity between feature representations of DNNs. \cite{liang2019knowledge} investigated the knowledge consistency between different DNNs. \cite{chen2018learning} proposed instance-wise feature selection via mutual information for model interpretation.
Unlike previous methods, our research aims to explain a DNN from the perspective of feature complexity. In comparison, previous methods mainly analyzed the difficulty of optimizing a DNN~\cite{arora2016understanding,blum1989training,boob2018complexity}, the architectural complexity~\cite{zhang2016architectural}, and the representation complexity~\cite{liang2017fisher, cortes2017adanet,pascanu2013construct,raghu2017expressive}, which are introduced as follows:
\textbullet\quad Difficulty or computational complexity of optimizing a DNN: Some studies focus on the amount of computation, which is required to ensure a certain accuracy of a task.
\cite{blum1989training,livni2014computational} proved that learning a neural network with one hidden layer was NP-hard in the realizable case.
\cite{arora2016understanding} showed that a ReLU network with a single hidden layer could be trained in polynomial time when the dimension of input was constant. \cite{boob2018complexity,manurangsi2018computational} proved that it was NP-hard to train a two-hidden layer feedforward ReLU neural network.
Based on topological concepts, \cite{bianchini2014complexity} proposed to evaluate the complexity of functions implemented by neural networks. \cite{rolnick2017power} focused on the number of neurons required to compute a given function for a network with fixed depth.
\textbullet\quad Complexity measures of the feature representation in DNNs:
\cite{pascanu2013construct, zhang2016architectural} proposed three architectural complexity measures for RNNs. \cite{raghu2017expressive} proved the maximal complexity of features grew exponentially with depth. \cite{liang2017fisher, cortes2017adanet} measured the maximal complexity of DNNs with Rademacher complexity.
However, unlike the investigation of the theoretical maximal complexity of a DNN, we focus on the real complexity of the feature. We disentangle and visualize feature components of different complexity orders. In addition, we define and analyze the quality of the disentangled feature components, and successfully discover a strong connection between the feature complexity and the performance of DNNs.
\section{Algorithm}
\subsection{Complexity of feature components}
\label{sec:complexity}
Given an input image $x$, let {\small$f(x)\in \mathbb{R}^n$} denote the feature of a specific intermediate layer of the DNN. {\small$y=g(f(x))\in \mathbb{R}^C$} is given as the output of the DNN. \emph{E.g.} $C$ denotes the number of categories in the classification task. In this study, we define the complexity of feature components as the minimum number of nonlinear transformations that are required to compute the feature components. The disentanglement of feature components of different complexity orders in Figure~\ref{fig:task} can be represented as follows.
\begin{equation}
f(x) = c^{(1)}(x)+c^{(2)}(x)+\ldots+c^{(L)}(x) + \Delta f
\end{equation}
where {\small$c^{(l)}(x)$} denotes the feature component of the $l$-th complexity order (or, the \textit{$l$-order complexity} for short). {\small$\Delta f$} is the feature component with higher-order complexity.
\begin{framed}
\textbf{Definition:} The feature component $c$ of the $l$-order complexity is defined as the feature component that can be computed using $l$ nonlinear layers, but cannot be computed with $l-1$ nonlinear layers.
\emph{I.e.} {\small$l= \min \{l' \,:\, \Phi^{(l')}(x)=c\}$}, where {\small$\Phi^{(l')}(\cdot)$} denotes a neural network with $l'$ nonlinear transformation layers.
\end{framed}
\textit{Instead of directly disentangling the feature component $c^{(l)}$, we propose to use knowledge distillation to disentangle all feature components with complexity no higher than the $l$-th order, \emph{i.e.} {\small$\Phi^{(l)}(x) = \sum_{i=1}^{l}c^{(i)}(x)$}.} Given a trained DNN as the teacher network, we select an intermediate layer $f$ of the DNN as the target layer. {\small$\Phi^{(l)}(x) = \sum_{i=1}^{l}c^{(i)}(x)$} is disentangled using another DNN (termed the \textit{disentangler net}) with $l$ nonlinear layers.
The MSE loss {\small$\Vert f(x) - \Phi^{(l)}(x) \Vert ^2$} is used to force {\small$\Phi^{(l)}(x)$} to mimic the target feature {\small$f(x)$}, where {\small$f(x)$} denotes the feature of the teacher network. We use disentangler nets with different depths {\small$\Phi^{(1)}, \Phi^{(2)}, \ldots, \Phi^{(L)}$}, to disentangle feature components of different complexity orders. In this way, the feature component of the $l$-order complexity is given as:
\begin{equation}
Loss = \Vert f(x) - \Phi^{(l)}(x) \Vert ^2, \qquad c^{(l)}(x) = \Phi^{(l)}(x) - \Phi^{(l-1)}(x)
\label{equ:l-depth}
\end{equation}
In particular, {\small$c^{(1)}(x) = \Phi^{(1)}(x)$}.
Thus, {\small$f(x)$} is disentangled into two parts: {\small$f(x)=\Phi^{(L)}(x)+\Delta f$} where {\small$\Delta f$} denotes the feature component with a higher complexity order than $L$.
\textbf{Significance of feature components ({\small$\boldsymbol{\rho_c^{(l)}}$}):} Furthermore, we quantify the significance of feature components of different complexity orders as the relative variance of feature components. The metric is designed as {\small$\rho_c ^{(l)} = Var[c^{(l)}(x)]/Var[f(x)]$}, where {\small$Var[c^{(l)}(x)] = \mathbb{E}_x [\Vert c^{(l)}(x) - \mathbb{E}_{x'} [c^{(l)}(x')]\Vert^2]$}.
For the fair comparison between different DNNs, we use the variance of {\small$f(x)$} to normalize {\small$Var[c^{(l)}(x)]$}. {\small$\rho_c^{(l)}$} represents the significance of the $l$-th order complex feature component \emph{w.r.t.} the entire feature.
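As an illustration, the distillation objective of Eq. (\ref{equ:l-depth}) and the metric $\rho_c^{(l)}$ can be sketched in PyTorch as follows. This is a minimal sketch rather than the exact implementation; the optimizer, the training schedule, and the function names are assumptions:
\begin{verbatim}
import torch

def train_disentanglers(teacher_feat, disentanglers, loader,
                        lr=1e-3, n_epochs=50):
    # Fit each disentangler net Phi^(l) to the target feature f(x) with
    # the MSE loss of Eq. (2); teacher_feat(x) returns the frozen
    # intermediate-layer feature of the teacher network.
    for phi in disentanglers:
        opt = torch.optim.Adam(phi.parameters(), lr=lr)
        for _ in range(n_epochs):
            for x, _ in loader:
                target = teacher_feat(x).detach()   # frozen teacher
                loss = ((target - phi(x)) ** 2).mean()
                opt.zero_grad(); loss.backward(); opt.step()

@torch.no_grad()
def component_significance(teacher_feat, disentanglers, loader):
    # rho_c^(l) = Var[c^(l)(x)] / Var[f(x)], with
    # c^(1) = Phi^(1) and c^(l) = Phi^(l) - Phi^(l-1).
    f = torch.cat([teacher_feat(x) for x, _ in loader])
    phis = [torch.cat([phi(x) for x, _ in loader]) for phi in disentanglers]
    comps = [phis[0]] + [phis[l] - phis[l - 1] for l in range(1, len(phis))]
    var = lambda z: (z.flatten(1) - z.flatten(1).mean(0)).pow(2).sum(1).mean()
    return [(var(c) / var(f)).item() for c in comps]
\end{verbatim}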
\textbf{Limitations: accurate estimation vs. fair comparison.} Theoretically, if the teacher DNN has $D$ nonlinear transformation layers, the complexity of its features must be no higher than the $D$-th order, \emph{i.e.} {\small$\Phi^{(D')}(x) = f(x)$}, {\small$D'\le D$}. However, the optimization capacity for the learning of disentangler nets is limited. A disentangler net with $D$ nonlinear layers cannot learn all features encoded in {\small$f(x)$}. Thus, when {\small$\Phi^{(D')}\approx f(x)$} in real implementations, we have {\small$D'\ge D$}.
In this way, {\small$\rho_c^{(l)}$} measures the relative distribution of feature components of different complexity orders, instead of an accurate disentanglement of feature components.
Nevertheless, as Figure~\ref{fig:different_disentangler} shows, even if we use disentangler nets with different architectures, we still get similar distributions of feature components. This proves the trustworthiness of our method, and enables the fair comparison of feature complexity between different DNNs.
\textbf{Effectiveness of feature components ({\small$\boldsymbol{\alpha_{\textrm{effective}}^{(l)}}$})} measures whether the feature component {\small$c^{(l)}(x)$} extracted from the training sample $x$ directly contributes to the task.
The metric is defined based on the game theory.
We first quantify the numerical contribution {\small$\varphi_l^\textrm{train}$} of each feature component {\small$c^{(l)}(x)$} to the decrease of the task loss in training as the Shapley value~\cite{shapley1953value,lundberg2017unified}.
\emph{I.e.} $\varphi_l^\textrm{train}$ measures the change of the training loss caused by feature components {\small $\{c^{(l)}(x)|x\in X_\textrm{train}\}$}.
The Shapley value is an unbiased method to compute contributions of input features to the prediction, which satisfies four desirable axioms, \emph{i.e.} efficiency, symmetry, linearity, and dummy axioms~\cite{grabisch1999axiomatic}. Please see the supplementary material for theoretical foundations of Shapley values. In this way, numerical contributions of all the {\small$L$} feature components can be allocated and given as {\small$\varphi_1^\textrm{train}+\varphi_2^\textrm{train}+\dots+\varphi_L^\textrm{train}=\mathbb{E}_{x\in X_\textrm{train}}[\mathcal{L}(\Delta f_x)-\mathcal{L}(\Delta f_x + \Phi^{(L)}(x))]$}, where {\small$\Delta f_x$} is the high-order component computed using the sample $x$.
{\small$\mathcal{L}(\Delta f_x)$} represents the task loss when we remove all feature components in {\small$\Phi^{(L)}(x)$}, and {\small$\mathcal{L}(\Delta f_x+\Phi^{(L)}(x))$} denotes the task loss when both {\small$\Delta f_x$} and feature components in {\small$\Phi^{(L)}(x)$} are used for inference. Thus, the metric {\small$\alpha^{(l)}_{\mathrm{effective}}=\varphi_l^\textrm{train}/\sqrt{Var[c^{(l)}(x)]}$} measures the effectiveness of the feature component {\small$c^{(l)}$} to the decrease of the training loss. We use {\small$\sqrt{Var[c^{(l)}(x)]}$} for normalization.
Please see the supplementary material for theoretical foundations of the trustworthiness of {\small$\alpha^{(l)}_\textrm{effective}$}.
\textbf{Significance of over-fitting of feature components ({\small$\boldsymbol{\alpha_{\textrm{overfit}}^{(l)}}$})} measures whether {\small$c^{(l)}(x)$} is over-fitted to specific training samples.
Similarly, this metric is also defined based on Shapley values.
We quantify the numerical contribution {\small$\varphi_l^\textrm{overfit}$} of each feature component {\small$c^{(l)}(x)$} to over-fitting, whose significance is quantified as
{\small $\mathcal{L}_\textrm{overfit}(f)=\mathcal{L}_\textrm{overfit}(\Delta f+\Phi^{(L)})=\mathbb{E}_{x\in X_\textrm{test}}[\mathcal{L}(\Delta f_x+\Phi^{(L)}(x))]-\mathbb{E}_{x\in X_\textrm{train}}[\mathcal{L}(\Delta f_x + \Phi^{(L)}(x))]$}.
In this way, the numerical contribution can also be measured as Shapley values {\small$\varphi_1^\textrm{overfit}+\varphi_2^\textrm{overfit}+\dots+\varphi_L^\textrm{overfit}=\mathcal{L}_\textrm{overfit}(\Delta f+\Phi^{(L)})-\mathcal{L}_\textrm{overfit}(\Delta f)$}, where {\small$\mathcal{L}_\textrm{overfit}(\Delta f + \Phi^{(L)})$} is computed using both components {\small $\Delta f_x$} and components {\small$\Phi^{(L)}(x)$} in different images.
\emph{I.e.} {\small$\varphi_l^\textrm{overfit}$} measures the change of {\small$\mathcal{L}_\textrm{overfit}$} caused by the feature component {\small$c^{(l)}(x)$}. The metric of the significance of over-fitting for {\small$c^{(l)}$} is given as {\small$\alpha^{(l)}_{\mathrm{overfit}}=\varphi_l^\textrm{overfit}/\varphi^\textrm{train}_l$}.
Thus, {\small$\alpha_\textrm{overfit}^{(l)}$} represents the ratio of the increase of the gap $\Delta \mathcal{L}_\textrm{overfit}$ to the decrease of the training loss $\Delta \mathcal{L}_\textrm{train}$. Please see the supplementary material for theoretical foundations of the trustworthiness of {\small$\alpha^{(l)}_\textrm{overfit}$}.
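Both $\varphi_l^{\textrm{train}}$ and $\varphi_l^{\textrm{overfit}}$ are Shapley values over the same set of $L$ players, so a single sampling-based estimator serves both. The following Python sketch uses standard permutation sampling; \texttt{loss\_of(S)} is an assumed callback that evaluates the loss of interest when only the feature components indexed by $S$, together with the high-order residual $\Delta f$, are kept:
\begin{verbatim}
import random

def shapley(loss_of, L, n_perm=200):
    # Monte-Carlo Shapley values phi_1..phi_L of the L feature components.
    # Contributions are to the *decrease* of the loss, hence prev - cur.
    phi = [0.0] * L
    for _ in range(n_perm):
        order = random.sample(range(L), L)   # a random permutation
        kept, prev = set(), loss_of(set())
        for l in order:
            kept.add(l)
            cur = loss_of(kept)
            phi[l] += (prev - cur) / n_perm  # marginal decrease of the loss
            prev = cur
    return phi
\end{verbatim}
Calling the estimator with the training loss yields $\varphi_l^{\textrm{train}}$, and calling it with the gap between the testing loss and the training loss yields $\varphi_l^{\textrm{overfit}}$.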
\subsection{Reliability of feature components}
\label{sec:reliability}
In order to evaluate the reliability of a set of feature components {\small$\Phi^{(l)}(x) = \sum_{i=1}^{l}c^{(i)}(x)$}, we propose to disentangle reliable feature components {\small$\Phi^{(l),\textrm{reli}}(x)$} and unreliable feature components {\small$\Phi^{(l),\textrm{unreli}}(x)$}:
\begin{equation}
\Phi^{(l)}(x) = \Phi^{(l),\textrm{reli}}(x) + \Phi^{(l),\textrm{unreli}}(x)
\end{equation}
As discussed in~\cite{liang2019knowledge}, DNNs with different initializations of parameters usually learn some similar feature representations for the same task, and these similar features are proved to be reliable for the task.
Thus, we consider the reliable feature components as features that can be stably learned by different DNNs trained for the same task. Suppose that we have $K$ different DNNs learned for the same task. For each DNN, we select the feature of a specific intermediate layer as the target feature.
Let {\small$f_1(x), f_2(x), \ldots, f_K(x)$} denote target features of $K$ DNNs. We aim to extract features shared by {\small$f_1(x), f_2(x), \ldots, f_K(x)$}, \emph{i.e.} disentangling {\small$\Phi_1^{(l),\textrm{reli}}(x), \Phi_2^{(l),\textrm{reli}}(x), \ldots, \Phi_K^{(l),\textrm{reli}}(x)$} from features of $K$ DNNs as reliable components, respectively.
For each pair of DNNs {\small$(i,j)$}, {\small$\Phi_i^{(l),\textrm{reli}}(x)$} and {\small$\Phi_j^{(l),\textrm{reli}}(x)$} are supposed to be able to reconstruct each other by a linear transformation:
\begin{equation}
\Phi_i^{(l),\textrm{reli}}(x) =r_{j \rightarrow i}(\Phi_j^{(l),\textrm{reli}}(x)),\quad \Phi_j^{(l),\textrm{reli}}(x) = r_{i \rightarrow j}(\Phi_i^{(l),\textrm{reli}}(x))
\end{equation}
where {\small$r_{i\rightarrow j}$} and {\small$r_{j \rightarrow i}$} denote two linear transformations.
\textbf{Implementations:} Inspired by the CycleGAN~\cite{zhu2017unpaired}, we apply the idea of cycle consistency on knowledge distillation to extract reliable feature components.
To extract reliable feature components, we construct the following neural network for knowledge distillation. As Figure~\ref{fig:reli-structure} shows, the network has a total of $l$ ReLU layers. We add $K$ parallel additional convolutional layers {\small$g_1,g_2,\ldots,g_{K}$} to generate $K$ outputs {\small$\widetilde{\Phi}^{(l)}_1(x), \widetilde{\Phi}^{(l)}_2(x),\ldots,\widetilde{\Phi}^{(l)}_{K}(x)$}, to mimic {\small$f_1(x), f_2(x),\ldots,f_{K}(x)$}, respectively. More specifically, {\small$\widetilde{\Phi}^{(l)}_k(x) = g_k(\psi^{(l)}(x))$}, where {\small$\psi^{(l)}(x)$} denotes the output of the disentangler net with $l$ ReLU layers. Then, the distillation loss is given as {\small$\mathcal{L}^{\textrm{distill}} =\sum_{k=1}^{K} \Vert f_k(x)-\widetilde{\Phi}^{(l)}_k(x) \Vert^2$}.
For the cycle consistency, we use {\small$\widetilde{\Phi}^{(l)}_k(x)$} to reconstruct {\small$\psi^{(l)}(x)$} by another linear transformation {\small$h_k$}: {\small$h_k(\widetilde{\Phi}^{(l)}_k(x))=h_k(g_k(\psi^{(l)}(x)))\rightarrow\psi^{(l)}(x)$}. We conduct cycle reconstructions between {\small$\psi^{(l)}(x)$} and {\small$\widetilde{\Phi}^{(l)}_k(x)$} for $R$ iterations ($R=10$ in experiments). Let {\small$\psi^{(l)}_{0}(x)= \psi^{(l)}(x), \psi^{(l)}_r(x)=\mathbb{E}_k[h_k\circ g_k\circ \psi^{(l)}_{r-1}(x)]$} denote the reconstruction output in the $r$-th iteration, where {\small$h_k\circ g_k$} denotes the cascaded layerwise operations. The cycle construction loss is given as follows:
\begin{equation}
\mathcal{L}^{\textrm{cycle}}= {\sum}_{r=1}^{R}{\sum}_{k=1}^{K}\Vert h_k\circ g_k \circ \psi^{(l)}_{r-1}(x)- \psi^{(l)}_{r-1}(x)\Vert ^2
\label{equ:loss of reli}
\end{equation}
Please see the supplementary material for the detailed explanation.
This loss makes the feature {\small$\widetilde{\Phi}^{(l)}_k(x)$} approximately shared by $K$ DNNs. In this way, {\small$\Phi_k^{(l),\textrm{reli}}(x)=\widetilde{\Phi}^{(l)}_k(x)$} can be considered as the reliable feature component. Compared with the traditional cycle consistency~\cite{zhu2017unpaired}, the above loss is much simpler and requires less computational cost. In this way, we can disentangle the unreliable feature component as {\small$\Phi^{(l),\textrm{unreli}}_{k}(x)=\Phi_{k}^{(l)}(x)-\Phi^{(l),\textrm{reli}}_{k}(x)$}. In experiments, in order to disentangle reliable and unreliable feature components from a target DNN, we used two additional trained DNNs $(A,B)$ to extract reliable feature components shared by the three DNNs, \emph{i.e.} $K=3$. DNNs $A$ and $B$ (namely \textit{exemplary DNNs}) were selected as those with state-of-the-art performance in the target task, in order to obtain convincing results. The same pair of DNNs $A$ and $B$ were uniformly used to analyze various DNNs, which enabled fair comparisons.
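A compact PyTorch rendering of the cycle loss of Eq. (\ref{equ:loss of reli}) may look as follows; the argument names are illustrative, with \texttt{g\_list} and \texttt{h\_list} holding the linear heads $g_k$ and the reconstruction transformations $h_k$:
\begin{verbatim}
import torch

def cycle_loss(psi, g_list, h_list, R=10):
    # psi is psi^(l)(x), the output of the shared disentangler trunk.
    loss, cur = 0.0, psi                 # cur plays the role of psi_{r-1}
    for _ in range(R):
        recon = [h(g(cur)) for g, h in zip(g_list, h_list)]
        loss = loss + sum(((rc - cur) ** 2).sum() for rc in recon)
        cur = torch.stack(recon).mean(0) # psi_r = E_k[h_k(g_k(psi_{r-1}))]
    return loss
\end{verbatim}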
\textbf{Reliability of feature components}
in $\Phi_k^{(l)}(x)$ can be quantified as the ratio of reliable feature components in $\Phi_k^{(l)}(x)$ as {\small$\rho^{(l),\textrm{reli}} = Var[\Phi_k^{(l),\textrm{reli}}(x)]/Var[\Phi^{(l)}_k(x)]$}.
\section{Experiments}
\textbf{Datasets, DNNs \& Implementation details:} We used our method to analyze VGG-16~\cite{simonyan2017deep} and ResNet-8/14/18/20/32/34/44~\cite{he2016deep}.\footnote{Compared with the original VGG-16, we added a BatchNorm layer before the output feature of each convolutional layer, before we used its feature to guide the distillation process. ResNet-8 and ResNet-14 had a similar structure to ResNet-20, ResNet-32 and ResNet-44 in~\cite{he2016deep}, except that they had 1 and 2 blocks in each stage, respectively.} For simplicity, we limited our attention to coarse-grained and fine-grained object classification. We trained these DNNs based on the CIFAR-10 dataset~\cite{krizhevsky2009learning}, the CUB200-2011 dataset~\cite{wah2011caltech}, and the Stanford Dogs dataset~\cite{KhoslaYaoJayadevaprakashFeiFei_FGVC2011}.
For the CUB200-2011 dataset and the Stanford Dogs dataset, we used object images cropped by object bounding boxes for both training and testing.
The classification accuracy of learned DNNs is shown in the supplementary material.
\textbf{Disentangler nets:} We designed the disentangler nets {\small$\Phi^{(1)}(x),\dots,\Phi^{(L)}(x)$} with residual architectures. The disentangler net consisted of three types of residual blocks, each type having $m$ blocks. Each block of the three types consisted of a ReLU layer and a convolutional layer with {\small$128r$, $256r$, $512r$} channels, respectively. In most experiments, we set {\small$r=1$}, but in Figure~\ref{fig:different_disentangler}, we tried different values of $r$ to test the performance of different disentangler nets.
We used two additional convolutional layers before and after all $3m$ blocks, respectively, to match the input and output dimensions. Therefore, a disentangler net contained {\small$3m+2$} convolutional layers and {\small$l=3m+1$} ReLU layers.
For fair comparisons between DNNs, we used the same set of disentangler nets to quantify the complexity of each DNN. We analyzed the complexity of the output feature of the last convolutional layer. We set $m=1,2,4,8,16,32$, so that the non-linear layer numbers of disentangler nets were $l=4,7,13,25,49,97$. Considering the computational cost, we calculated {\small$c^{(4)}(x)\! =\! \Phi^{(4)}(x), c^{(7)}(x)\! =\! \Phi^{(7)}(x)\! -\! \Phi^{(4)}(x), c^{(13)}(x)\! =\! \Phi^{(13)}(x)\! -\! \Phi^{(7)}(x)$}, etc.
This approximation did not affect the objectiveness of the quantified distribution of feature components of different complexity orders.
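For reference, a disentangler net of this shape can be sketched in PyTorch as follows. Strides, paddings, and the omission of the residual skip connections are simplifications of this sketch; only the channel widths and the layer counts follow the description above:
\begin{verbatim}
import torch.nn as nn

class DisentanglerNet(nn.Module):
    # 3m + 2 convolutional layers and l = 3m + 1 ReLU layers, with three
    # stages of m (ReLU, conv) blocks of width 128r, 256r, 512r.
    def __init__(self, in_ch, out_ch, m=2, r=1):
        super().__init__()
        widths = [128 * r, 256 * r, 512 * r]
        layers = [nn.Conv2d(in_ch, widths[0], 3, padding=1)]  # input conv
        ch = widths[0]
        for w in widths:
            for _ in range(m):
                layers += [nn.ReLU(), nn.Conv2d(ch, w, 3, padding=1)]
                ch = w
        layers += [nn.ReLU(), nn.Conv2d(ch, out_ch, 1)]       # output conv
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
\end{verbatim}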
We visualized the disentangled feature components of different orders in Figure~\ref{fig:visualization of c}. Simple feature components usually represented the general shape of objects, while complex feature components corresponded to detailed shapes and noises.
\begin{figure}[t]
\begin{small}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/2_visualization_1.pdf}
\end{minipage}
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/2_visualization_2.pdf}
\end{minipage}
\vspace{-15pt}
\caption{Visualization of the disentangled feature components.}
\vspace{-12pt}
\label{fig:visualization of c}
\end{small}
\end{figure}
\textbf{Exp. 1, task complexity vs. feature complexity: DNNs learned for simpler tasks usually encoded more feature components of low complexity orders.}
We defined tasks of different complexity orders. Let \textit{Task-$n$} denote a task of the $n$-order complexity as follows: we constructed another network (namely the task DNN) with $n$ ReLU layers and randomly initialized parameters, whose output was an {\small$8\times8\times64$} tensor. We learned the target DNN\footnote{For simplicity, we designed the target DNN to have the same architecture as the disentangler net with $l=19$.} to reconstruct this output tensor via an MSE loss. Since the task DNN contained $n$ ReLU layers, we used Task-$n$ to indicate the complexity of mimicking the task DNN.
Figure~\ref{fig:task-feature} compares distributions of feature components disentangled from target DNNs learned for Task-0, Task-2, Task-8, Task-26, and Task-80, respectively. DNNs learned for more complex tasks usually encoded more complex feature components.
\textbf{Various disentangler nets generated similar distributions of feature components, which demonstrated the trustworthiness of our method.} We learned a target DNN for Task-26 on the CIFAR-10 dataset and disentangled feature components from the output feature of the target DNN. We used disentangler nets with different architectures (different values of $r$) for analysis.
Figure~\ref{fig:different_disentangler} compares the distribution of feature components disentangled by different disentangler nets.
\textbf{Exp. 2, the number of training samples had small influence on the distribution of feature components, but significant impacts on the feature reliability.}
We learned ResNet-8/14/20/32/44 using different numbers of training samples, which were randomly sampled from the CIFAR-10 dataset. Then, we disentangled feature components of different complexity orders from the output feature of the last residual block. More specifically, two exemplary DNNs $A$ and $B$ were implemented as ResNet-44 learned on the entire CIFAR-10 dataset with different initial parameters.
\begin{figure}[t]
\begin{small}
\begin{minipage}[t]{0.47\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/task_feature_complexity_revised.pdf}
\vspace{-10pt}
\caption{Significance of feature components of DNNs learned for different tasks.}
\vspace{-13pt}
\label{fig:task-feature}
\end{minipage}
\hfill
\begin{minipage}[t]{0.47\linewidth}
\centering
\includegraphics[width=0.95\linewidth]{figs/task_complexity_channel_num_revised.pdf}
\caption{Significance of feature components disentangled by different disentangler nets.}
\vspace{-13pt}
\label{fig:different_disentangler}
\end{minipage}
\end{small}
\end{figure}
Figure~\ref{fig:analysis of c} compares the significance of disentangled feature components {\small$\rho_c^{(l)}$} and the reliability of feature components {\small$\rho^{(l),\textrm{reli}}$} in different DNNs.
The DNN learned from the larger training set usually encoded more complex features, but the overall distribution of feature components was very close to the DNN learned from the smaller training set. This indicated that the number of training samples had small impacts on the significance of feature components of different complexity orders.
However, in Figure~\ref{fig:analysis of c} (right), DNNs learned from many training samples always exhibited higher reliability than DNNs learned from a few training samples, which meant that increasing the number of training samples helped the DNN learn more reliable features. Results on the CUB200-2011 dataset and the Stanford Dogs dataset are shown in the supplementary material.
\begin{figure}[t]
\begin{small}
\subfigure{\includegraphics[width=0.48\linewidth]{figs/1_distrib.pdf}}
\subfigure{\includegraphics[width=0.47\linewidth]{figs/1_reli.pdf}}
\vspace{-10pt}
\caption{Significance ({\small$\rho_c^{(l)}$}) and reliability ({\small$\rho^{(l),\textrm{reli}}$}) of the disentangled feature components.}
\label{fig:analysis of c}
\vspace{-10pt}
\end{small}
\end{figure}
\textbf{Exp. 3, analysis of the effectiveness and the significance of over-fitting of feature components.}
Figure~\ref{fig:analysis} compares the effectiveness $\alpha^{(l)}_{\textrm{effective}}$ and the significance of over-fitting $\alpha^{(l)}_{\textrm{overfit}}$ of feature components disentangled in Exp. 2.
We found that \textbf{(1)} \textit{in shallow DNNs (like ResNet-8 and ResNet-14), simple feature components were much more effective than complex feature components. However, in deep DNNs, feature components of medium complexity orders tended to be the most effective.} This indicated that the effectiveness of feature components was determined by the network architecture.
\textbf{(2)} \textit{Simple feature components learned from a small number of samples were usually more over-fitted than simple feature components learned from many samples.} \textbf{(3)} \textit{There was no clear regulation for the significance of over-fitting for high-complexity feature components.} This might be due to the low effectiveness of high-complexity feature components.
\begin{figure}[t]
\begin{small}
\centering
\begin{minipage}{0.33\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{figs/IncreaseOfAcc.pdf}
\caption{Increase of the classification accuracy based on {\small$\Phi^{(l)}(x)$}.}
\label{fig:improvement}
\end{minipage}
\hfill
\begin{minipage}{0.6\linewidth}
\begin{center}
\captionof{table}{The mean absolute value of prediction errors.}
\resizebox{\linewidth}{!}{%
\begin{tabular}{c| c c | c c}
\hline
\multirow{3}{*}{}& \multicolumn{2}{c|}{Accuracy} & \multicolumn{2}{c}{Task loss}\\
\cline{2-5}
{} & Prediction & Range & Prediction & Range \\
{} & error & of value & error &of value\\
\hline
CIFAR-10 & \textbf{2.73\%}&28.73\%-72.83\% & \textbf{0.49} & 1.59-6.42\\
CUB200 & \textbf{5.66\%}&28.18\%-56.18\% & \textbf{0.47} & 2.94-5.76\\
Dogs & \textbf{3.26\%}&9.37\%-37.95\% & \textbf{0.34} & 4.34-7.97\\
\hline
\end{tabular}}
\label{tab:regression_loss_acc}
\end{center}
\end{minipage}
\end{small}
\end{figure}
\textbf{Exp. 4, improvement of the classification accuracy based on $\Phi^{(l)}(x)$.}
We further tested the classification accuracy of ResNets learned on the CIFAR-10 dataset by directly putting $\Phi^{(l)}(x)$ (here $l=7$) into the trained ResNet to replace the original feature $f(x)$.
Figure~\ref{fig:improvement} shows that $\Phi^{(l)}(x)$ further increased the classification accuracy.
\textbf{Exp. 5, analysis of network compression and knowledge distillation.}
We learned the ResNet-32 on the CIFAR-10 dataset as the original DNN. We used the compression algorithm~\cite{han2015deep} to learn another DNN (termed the \textit{compressed DNN}) by pruning and quantizing the trained original DNN. For the knowledge distillation, we used another network (termed the \textit{distilled DNN})\footnote{The distilled DNN had the same architecture as the disentangler net with 7 ReLU layers.}, to distill~\cite{hinton2015distilling} the output feature of the last residual block in the original DNN. The supplementary material provides more details about the network compression and knowledge distillation in \cite{han2015deep,hinton2015distilling}. We compared the compressed DNN and the distilled DNN with the original DNN.
We disentangled feature components from the output feature of the last residual block in the original DNN and the compressed DNN, and the output feature of the distilled DNN.
Figure~\ref{fig:comparison} shows $\rho_c^{(l)}, \rho^{(l),\textrm{reli}}, \alpha^{(l)}_\textrm{effective},$ and $\alpha^{(l)}_\textrm{overfit}$ in the three DNNs.
For the compressed DNN, \textbf{(1)} \textit{the network compression did not affect the distribution of feature components and their reliability.} \textbf{(2)} \textit{Simple feature components in the compressed DNN exhibited lower effectiveness and higher significance of over-fitting than simple feature components in the original DNN.}
For the knowledge distillation, \textbf{(1)} \textit{the distilled DNN had more feature components of low complexity orders than the original DNN. The simple feature components in the distilled DNN were more effective than those in the original DNN.} \textbf{(2)} \textit{Complex feature components in the distilled DNN were more reliable and less over-fitted than complex feature components in the original DNN.}
These results demonstrated that the knowledge distillation would help DNNs learn more reliable features, which prevented over-fitting.
\begin{figure}
\begin{small}
\centering
\subfigure{\includegraphics[height=.24\linewidth]{figs/effectiveness_combined_3.pdf}}
\subfigure{\includegraphics[height=.24\linewidth]{figs/overfit_combined_3.pdf}}
\vspace{-5pt}
\caption{(left) Effectiveness of feature components {\small$\alpha^{(l)}_{\mathrm{effective}}$}. The top-right sub-figure shows the Shapley value {\small$\varphi_l^\textrm{train}$}; (right) Confidence of feature components being over-fitted {\small$\alpha^{(l)}_{\mathrm{overfit}}$}. The top-right sub-figure shows the Shapley value {\small$\varphi_l^\textrm{overfit}$}.}
\label{fig:analysis}
\end{small}
\end{figure}
\begin{figure}[t]
\begin{small}
\begin{minipage}{0.6\linewidth}
\centering
\includegraphics[height=0.35\linewidth]{figs/Original-Compression-Distillation_2.pdf}
\caption{Comparisons of feature complexity between the original DNN, the compressed DNN and the distilled DNN.}
\vspace{-5pt}
\label{fig:comparison}
\end{minipage}
\hfill
\begin{minipage}{.37\linewidth}
\centering
\vspace{18pt}
\includegraphics[height=0.45\linewidth]{figs/pca_regressor.pdf}
\caption{Relationship between the feature complexity and the accuracy.}
\vspace{-5pt}
\label{fig:regression}
\end{minipage}
\end{small}
\end{figure}
\textbf{Exp. 6, strong connections between feature complexity and performance of DNNs.}
To demonstrate this connection, we learned a regression model that used the distribution of feature components of different complexity orders to predict the performance of DNNs.
For each DNN, we used disentangler nets with $l=4,7,13,25$ to disentangle {\small$\Phi^{(l),\textrm{reli}}(x)$} and {\small$\Phi^{(l),\textrm{unreli}}(x)$}. Then, we calculated {\small$Var[\Phi^{(l),\textrm{reli}}(x)\!-\!\Phi^{(l-1),\textrm{reli}}(x)]/Var[f(x)]$} and {\small$Var[\Phi^{(l),\textrm{unreli}}(x)\!\!-\!\!\Phi^{(l-1),\textrm{unreli}}(x)]/Var[f(x)]$} for $l=4,7,13,25$, thereby obtaining an 8-dimensional feature to represent the distribution of different feature components. In this way, we learned a linear regressor to use the 8-dimensional feature to predict the testing loss or the classification accuracy.
For the CIFAR-10 dataset, we applied cross validation: we randomly selected 20 DNNs from 25 pre-trained ResNet-8/14/20/32/44 models on different training sets in Exp. 2 to learn the regressor and used the other 5 DNNs for testing.\footnote{For the CUB200-2011 dataset and the Stanford Dogs dataset, we randomly selected 11 models from 12 pre-trained ResNet-18/34 and VGG-16 models to learn the regressor. One model was used for testing.}
These 25 DNNs were learned using 200-5000 samples, which were randomly sampled from the CIFAR-10 dataset to boost the model diversity.
We repeated this procedure 1000 times for cross validation.
Table~\ref{tab:regression_loss_acc} reports the mean absolute prediction error for the classification accuracy and the task loss over the 1000 repeated experiments. \textit{Linear weights for reliable and unreliable components are shown in the supplementary material.} The prediction error was much smaller than the spread of the testing accuracy and of the task loss across the DNNs, which indicates a strong connection between the distribution of feature complexity and the performance of DNNs.
Figure~\ref{fig:regression} further visualizes the plane of the linear regressor learned on the CIFAR-10 dataset. The visualization was conducted by using PCA~\cite{wold1987principal} to reduce the 8-dimensional feature into a 2-dimensional space, \emph{i.e.} $(x,y)$ in Figure~\ref{fig:regression}. There was a close relationship between the distribution of feature complexity and the performance of a DNN. Please see the supplementary material for more details.
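As a minimal sketch of this experiment (not the authors' released code; scikit-learn is assumed, and the arrays \texttt{features} and \texttt{accuracy} are illustrative placeholders for the 8-dimensional complexity features and the measured testing accuracies), the cross validation and the PCA projection could be set up as follows:
\begin{verbatim}
# Minimal sketch, assuming scikit-learn; `features` (25 x 8) and
# `accuracy` (25,) are placeholders for the complexity features and
# the testing accuracies of the 25 pre-trained DNNs.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
features = rng.random((25, 8))   # 8-d feature per DNN (placeholder)
accuracy = rng.random(25)        # testing accuracy per DNN (placeholder)

errors = []
for _ in range(1000):            # repeated random cross validation
    idx = rng.permutation(25)
    train, test = idx[:20], idx[20:]
    reg = LinearRegression().fit(features[train], accuracy[train])
    errors.append(np.abs(reg.predict(features[test])
                         - accuracy[test]).mean())
print("mean absolute prediction error:", np.mean(errors))

# Project the 8-d features to 2-d with PCA for the visualization
xy = PCA(n_components=2).fit_transform(features)
\end{verbatim}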
\section{Conclusion}
In this paper, we have proposed a generic definition of the feature complexity of DNNs. We have designed a method to disentangle and quantify feature components of different complexity orders, and analyzed the disentangled feature components from three perspectives. We have found a close relationship between the feature complexity and the performance of DNNs. Furthermore, the disentangled feature components can improve the classification accuracy of DNNs. As a generic mathematical tool, the feature complexity provides a new perspective for explaining existing deep-learning techniques, which has been validated by experiments.
\section{Introduction}\label{sec:intro}
Staggered quarks \cite{Kogut:1974ag} employ an incomplete reduction
of the lattice doubling symmetry, and therefore have an extra
degree of freedom called ``taste.''
In four dimensions, a single staggered field on the lattice produces
four tastes in the continuum limit. It is possible to interpret taste as physical flavor ($u$, $d$, $s$, $c$)
by explicitly breaking the continuum taste symmetry with general mass terms \cite{GENERAL-KS-MASS}.
However, that approach leads to a variety of problems including complex
determinants, violations of chiral symmetry even in the limit of vanishing light quark masses,
and the necessity of fine tuning.
The current standard approach --- and the one assumed in this paper ---
is to introduce a separate staggered field for each physical flavor,
and then attempt to eliminate the unwanted taste degree of freedom by taking
a root of the staggered fermion determinant.
This procedure was proposed by Marinari, Parisi and Rebbi \cite{Marinari:1981qf} in
a two-dimensional context; a fourth root is required in four dimensions. Such ``rooted'' staggered quarks have
been used by the MILC collaboration for recent dynamical simulations \cite{MILC},
which give good agreement with experiment for many simple hadronic quantities \cite{PRL}.
There is widespread agreement that, whatever their practical problems in reproducing the
desired four-flavor mass spectrum, ``unrooted'' staggered
quarks are a consistent way to simulate four degenerate tastes of quarks in the continuum limit.
But the correctness of the fourth root trick to reduce four tastes to one has not been proven,
and there are concerns expressed in the literature about its use
in lattice QCD simulations \cite{DEGRAND,KENNEDY,DURR}.
The difficulties arise from the fact
that taste symmetry is broken at order $a^2$, where $a$ is the lattice spacing.
This prevents one from implementing the rooting simply by projecting the local four-taste staggered Dirac operator onto a local
operator in a single-taste subspace. Without a local Dirac operator, usual physical properties
of a lattice theory such as unitarity or universality are called into question.
In the past few years, many authors have
addressed the issue of the validity of the fourth-root
procedure \cite{FOURTH-ROOT-TEST-NUMERICAL,ADAMS,SHAMIR,CBMGYS}.
While a proof is still lacking,
the result of these investigations is to make it rather plausible that staggered
quarks with the fourth-root trick do in fact have the correct continuum limit.
At finite lattice spacing, however, it seems clear that the fourth root procedure
introduces a variety of unphysical sicknesses.
This follows not only from the renormalization group
approach introduced by Shamir \cite{SHAMIR}, but also from the staggered chiral theory, as discussed
below.
The issue is then to prove that these unphysical effects disappear or decouple in
the continuum limit.
Here, I start with a simpler, but related, problem: What is the chiral theory
that correctly describes rooted staggered quarks? Lee and Sharpe \cite{LEE-SHARPE}
found the chiral theory that corresponds to a single unrooted staggered field. In the current
terminology, this is a one-flavor case, with four tastes.
It was generalized to more than one flavor (more than one staggered field) and called
``staggered chiral perturbation theory'' (S\raise0.4ex\hbox{$\chi$}PT) by Aubin and Bernard \cite{AUBIN-BERNARD}.
Certain, rather noncontroversial, assumptions go into these derivations. In particular,
one needs to know the Symanzik theory \cite{SYMANZIK} that describes
unrooted staggered quarks as one approaches the
continuum limit. In deriving the Symanzik theory, one assumes
that the taste, Lorentz, and translation symmetries become exact in the continuum, and that the
lattice symmetries fit inside the continuum group in a straightforward way.
In addition to theoretical understanding of why this
should be the case \cite{KS-BASIC,GENERAL-KS-MASS,TASTE-REPRESENTATION,SHAMIR},
there is good numerical evidence for the restoration of these
symmetries \cite{MILC,FOURTH-ROOT-TEST-NUMERICAL}.
To find the chiral theory for {\it rooted} staggered quarks, an additional assumption is needed.
In Ref.~\cite{AUBIN-BERNARD}, it was proposed that one could represent the effects of
the fourth root by locating the sea quark loops in S\raise0.4ex\hbox{$\chi$}PT, and then multiplying
each one by a factor of $1/4$. Here, I take this prescription as defining
what I mean by S\raise0.4ex\hbox{$\chi$}PT\ for rooted staggered quarks.
The question then becomes: Is S\raise0.4ex\hbox{$\chi$}PT\ the correct chiral theory?
In this paper, I show that the validity of
S\raise0.4ex\hbox{$\chi$}PT\ follows from certain nontrivial assumptions on the phase structure
and mass dependence of the theory. These assumptions will be
introduced as needed; the most important of them are also collected in
the concluding section. While I will try to argue from
simulations and experience for the plausibility of these assumptions,
significantly more work is required
to prove and/or numerically test them. On the other hand,
in most cases it will be clear that the assumptions are not only
sufficient for the validity of S\raise0.4ex\hbox{$\chi$}PT\ but also necessary. Tests of
the assumptions therefore provide new means to test
S\raise0.4ex\hbox{$\chi$}PT\ itself.
Note first of all that S\raise0.4ex\hbox{$\chi$}PT\ for rooted quarks does show unphysical
effects at nonzero lattice spacing. In the published literature, this
is seen most clearly in Prelovsek's calculation of the flavor
nonsinglet scalar correlator \cite{PRELOVSEK}. On the lattice, she finds
intermediate-state contributions with mass below that
of the lightest physical intermediate state ($\eta\pi$). I call this
sickness a ``unitarity violation'' at finite lattice spacing, since it is
due to contributions from ``extra'' light mesons of various tastes, which only
cancel completely in the continuum limit.\footnote{One might be
tempted to describe this sickness
as a kind of ``nonlocality'' at finite lattice spacing, because
the correlator decays at long distances at an unphysical rate. I prefer
to avoid that terminology, because its connection with the standard issue
of the locality of a Dirac operator on the lattice is indirect.}
The flavor-singlet scalar correlator provides
another example of such unitarity violation. It
has recently been worked out for the three- and one-flavor cases, both unrooted
and rooted \cite{SCALAR}.
Because the one-flavor case is a key test of the ideas discussed
in the current paper, I present some relevant
details in \secref{one-flavor}. The scalar correlator at nonzero lattice spacing has
intermediate-state contributions from light pseudo-Goldstone pions, even
though a one-flavor theory should have only a massive pseudoscalar, the $\eta'$.
Nevertheless, these
unphysical states decouple from the correlator in the continuum limit.
Thus S\raise0.4ex\hbox{$\chi$}PT\ captures some sicknesses expected of the rooted theory at nonzero
lattice spacing. But it is not obvious that S\raise0.4ex\hbox{$\chi$}PT\
captures all such sicknesses. Perhaps there are
other violations of unitarity, or indeed other more subtle features of the rooted theory,
that should be present in the corresponding chiral effective theory but are missed by
S\raise0.4ex\hbox{$\chi$}PT. I argue below that no such effects are missed.
The starting point is a special
case in which there is virtually no doubt about the correct chiral theory: a
rooted theory with four degenerate quark flavors. In this case, the fourth-roots
of the four determinants are identical, so their product just gives the determinant
of a single, unrooted staggered field. (Note that the staggered determinant
is positive, and the algorithmic treatment of the rooting trick in the simulations
gives the positive fourth root \cite{ALGORITHM}.) With the noncontroversial assumptions
mentioned above, the corresponding chiral theory is just the S\raise0.4ex\hbox{$\chi$}PT\ of Lee and Sharpe
\cite{LEE-SHARPE}.
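Explicitly, since the positive fourth root is taken for each of the four degenerate determinants,
\begin{equation}
\prod_{i=1}^{4}\root 4 \of {{\rm Det}(D+\bar m)} \;=\; {\rm Det}(D+\bar m)\ ,
\end{equation}
which is precisely the measure of a single unrooted staggered field of mass $\bar m$.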
One can then expand around the degenerate case
to treat the case of nondegenerate masses. For technical reasons,
this requires the use of a partially quenched chiral theory with valence masses
degenerate with those of the sea quarks. Golterman, Sharpe and Singleton (GSS) \cite{GSS}
show that the phase structure of a quenched chiral theory can be subtle,
and analogous questions can be raised about the partially quenched
theory.
The use of partial quenching in this paper seems to be safe from
any GSS subtleties. However, since the theory has not yet been investigated in detail using
the GSS methods, I highlight a few places where complications could
conceivably enter. Further investigation along the lines of Ref.~\cite{GSS} is planned.
The completion of the argument for four nondegenerate flavors
requires nontrivial assumptions about the analytic structure of the mass dependence.
In particular, I need to assume that
there is no
essential singularity at zero degeneracy in the difference between S\raise0.4ex\hbox{$\chi$}PT\ and the putative
correct chiral theory. Phase transitions in the chiral theory at nonzero quark mass
differences would also be dangerous, although the
existing simulations \cite{MILC} can be put forward as evidence against
such phase transitions, at least in the region of parameter space investigated to date.
To move to the phenomenologically more interesting case of three light flavors,
the mass of one quark can be taken large. In S\raise0.4ex\hbox{$\chi$}PT, it is quite clear that
the heavy quark will decouple, leaving three-flavor S\raise0.4ex\hbox{$\chi$}PT. However, in the lattice QCD of
rooted staggered quarks, the nature of the decoupling, while in my opinion plausible,
requires an additional assumption. With this assumption, it follows that S\raise0.4ex\hbox{$\chi$}PT\ is the
correct chiral description of the rooted three flavor theory. The process can
then be repeated, leading to statements about the two- and the one-flavor theories.
If S\raise0.4ex\hbox{$\chi$}PT\ is accepted as the correct chiral description, it provides
strong evidence that rooted staggered quarks have the desired continuum limit,
in other words that they are in the correct universality class.
The point is that S\raise0.4ex\hbox{$\chi$}PT\ automatically becomes continuum chiral perturbation theory (\raise0.4ex\hbox{$\chi$}PT)
in the continuum limit, modulo the usual assumptions on
the restoration of taste symmetry in the continuum limit of unrooted staggered
quarks. Therefore, this line of reasoning says that the low energy
(pseudoscalar meson) sector of lattice QCD with rooted staggered quarks is, in the
continuum limit, indistinguishable from
that of ordinary QCD. This would significantly reduce the ``phase space'' for any
possible sicknesses of rooted staggered quarks in the continuum limit.
Another consequence of the arguments in this paper is more technical:
If S\raise0.4ex\hbox{$\chi$}PT\ is valid, the lattice theory
with rooted staggered sea quarks and ordinary staggered valence quarks (the
theory in the MILC simulations \cite{MILC}) behaves like
a ``partially quenched'' theory.\footnote{In the continuum limit, this was anticipated in
Ref.~\protect{\cite{PQ}}.} Effectively, this means that there are symmetries that connect
valence and sea quarks. As usual for a partially quenched theory, such symmetries
may be broken in a controlled way by mass differences between valence
and sea quarks. However, the symmetries are not broken by lattice
corrections. The theory therefore does not behave like a ``mixed'' theory, in which
valence and sea quarks have different lattice actions. In the mixed case, there
are no symmetries at finite lattice spacing that connect valence and sea quarks.
The chiral descriptions of mixed theories \cite{MIXED} thus have terms --- vanishing in
the continuum limit --- that violate such symmetries. These terms can, for example,
lead to mass splittings between mesons composed of two valence quarks and those
composed of one valence and one sea quark.
I show here that the chiral theory
for staggered valence and rooted staggered sea quarks does {\it not}\/ have such terms;
corresponding valence-valence, valence-sea, and sea-sea mesons are degenerate.
The remainder of this paper is organized as follows: In \secref{replica},
I discuss the replica trick in S\raise0.4ex\hbox{$\chi$}PT; this is a
systematic way to find sea quark loops in the chiral theory and
multiply each by a factor $1/4$. \Secref{notation} then introduces the notation
needed to describe the various theories considered here, at both the
QCD and the chiral levels, and makes some introductory comments about these theories.
The details of my assumptions and arguments for S\raise0.4ex\hbox{$\chi$}PT\ in the four-flavor case are presented
in \secref{details}; while the extension to three or fewer flavors is treated
in \secref{extension}.
\Secref{one-flavor} shows in some detail
how the one-flavor case works. I resolve there the apparent paradox of light pseudo-Goldstone
mesons appearing in the one-flavor chiral theory.
Consequences of my arguments for the rooted theory at the QCD level are
described in \secref{consequences}.
Finally, I review the assumptions and conclusions and make some additional remarks
and speculations in \secref{conclusions}.
\section{Replica Trick}
\label{sec:replica}
In S\raise0.4ex\hbox{$\chi$}PT\ for rooted staggered quarks, one needs to identify the presence of sea quark loops
in various meson diagrams, and multiply each such loop by a factor of $1/4$.
The sea quark loops were located in Ref.~\cite{AUBIN-BERNARD} by using the quark
flow approach \cite{QUARK-FLOW}. While quark flow gives a rather intuitive physical picture,
it suffers from the disadvantage in the current case that it is formulated as a series of
rules for tracing flavor indices, not as an algebraic statement.
The replica trick provides an alternative approach that is systematic and algebraic.
It was applied to partially quenched theories by
Damgaard and Splittorff \cite{REPLICA} and was first used for S\raise0.4ex\hbox{$\chi$}PT\ in Ref.~\cite{AUBIN-BERNARD-REPLICA}.
The replica procedure for rooted S\raise0.4ex\hbox{$\chi$}PT\ is very simple:
One starts by replicating the sea-quark flavors,
replacing each dynamical staggered field by $n_R$ identical copies, where $n_R$ is a
(positive) integer. One then calculates straightforwardly order by order in
the corresponding (unrooted) S\raise0.4ex\hbox{$\chi$}PT, keeping the $n_R$ dependence explicit.
Finally, one sets $n_R=1/4$.
Note that, at any finite order in S\raise0.4ex\hbox{$\chi$}PT, the $n_R$ dependence is polynomial: It just
comes from the sum over the sea quark indices in meson loops.
Therefore, the process of taking $n_R\to {\scriptstyle \raise.15ex\hbox{${1\over4}$}}$ is straightforward and unambiguous
order by order.
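As a schematic illustration (suppressing taste labels, LECs, and numerical factors), a typical one-loop tadpole correction to a meson mass involves a sum over sea flavors and replicas,
\begin{equation}
\delta m^2 \;\propto\; \sum_{r=1}^{n_R}\sum_{i=1}^{n_F}\int \frac{d^4k}{(2\pi)^4}\,\frac{1}{k^2+m_i^2}\ ,
\end{equation}
where $m_i$ is the mass of a meson containing sea flavor $i$. The replica sum makes this contribution linear in $n_R$; a diagram with $L$ sea-quark loops carries $(n_R)^L$, and setting $n_R={\scriptstyle \raise.15ex\hbox{${1\over4}$}}$ weights each loop exactly as the rooting prescription requires.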
As always in chiral perturbation theory, we treat the low energy constants
(LECs) as free parameters for each $n_R$. We should not use any relations
that hold only for special values of $n_R$ --- analogous to those
discussed by Sharpe and Van de Water \protect{\cite{Sharpe:2003vy}} ---
to reduce the number of chiral operators. If it turns out that we are left with some
redundant operators when $n_R\to{\scriptstyle \raise.15ex\hbox{${1\over4}$}}$, we can always redefine the LECs to absorb
the redundancy at the end.
Within chiral perturbation theory, we do not worry about
(nor do we have any control over) the dependence of low energy constants themselves on
$n_R$. Such dependence, coming from an underlying QCD-like theory, would in fact
be nonperturbative in the strong coupling $\alpha_S$
and probably not polynomial in $n_R$.
At the QCD level, it is difficult to give the replica trick
any meaning beyond weak-coupling perturbation theory, in which
the $n_R$ dependence is again polynomial. Within weak-coupling perturbation theory,
the replica trick is in fact somewhat useful, because it provides a convenient way of keeping track of
sea-quark loops. This can aid in clarifying
the argument in Ref.~\protect{\cite{PQ}} for the validity of the fourth-root procedure in perturbation theory,
and will also be helpful in \secref{mixed}.
Nonperturbatively, however,
even if we were to assume that the $n_R\to{\scriptstyle \raise.15ex\hbox{${1\over4}$}}$ limit should be
taken by analytic continuation, the replica trick would be ambiguous since there
is no unique continuation from the integers.
A related comment is that
the use of the replica trick for a chiral theory
is valid, {\it a priori}\/, only for order by order
calculations in chiral perturbation theory. We have no
guarantee of its correctness in general nonperturbative chiral calculations, such as
the determination of the correct vacuum state.
However, in the
degenerate four-flavor theory, we know the chiral theory (and hence
the appropriate phase) independent of the replica trick.
As I move away from the degenerate limit, I will in any case need
to assume that dependence on quark mass is smooth and no phase change occurs (see \secref{details}).
Thus, there is no further restriction coming from the perturbative
nature of the replica trick.
\section{Theories Considered; Notation}
\label{sec:notation}
We need some notation to refer to the various versions of QCD and their
corresponding chiral theories.
Define a version of lattice QCD by $(n_F,n_T,n_R)_{LQCD}$, where $n_F$ is the number
of flavors (the number of staggered fields), $n_T$ is the number
of tastes per field, and $n_R$ is the number
of replicas.
The corresponding chiral theories are denoted by $(n_F,n_T,n_R)_{\chi}$.
If $n_R$ is trivially equal to 1
(because the replica trick is not relevant), it is omitted.
When $(n_F,n_T,n_R)_{\chi}$ or $(n_F,n_T,n_R)_{LQCD}$ are used in equations,
I am referring specifically to the generating functionals for these theories, with sources to be discussed below.
I focus primarily on three versions of QCD, and four versions of chiral theories:
\begin{itemize}
\item{} $(1,4)_{\chi}$ and $(1,4)_{LQCD}$: These are the chiral and QCD theories of a single staggered field
(one flavor) with four tastes. By (noncontroversial) assumption, the chiral theory $(1,4)_{\chi}$ is just the
S\raise0.4ex\hbox{$\chi$}PT\ of Lee and Sharpe \cite{LEE-SHARPE}. No rooting is done at the QCD level, and no
replica trick is necessary at the chiral level.
\item{} $(n_F,4,n_R)_{\chi}$ and $(n_F,4,n_R)_{LQCD}$: These are the theories for $n_F$ staggered fields
($n_F$ flavors), each replicated $n_R$ times. When $n_R$ is indicated explicitly,
as in this case, it is taken to be an integer only; no rooting is done. The
chiral theories $(n_F,4,n_R)_{\chi}$ are --- again by noncontroversial assumption --- just those
of Aubin and Bernard \cite{AUBIN-BERNARD} for an integer number, $n_F\cdot n_R$, of flavors.
They are obtained from the $n_F$ flavor chiral theories by
replicating the sea-quark degrees of freedom in the chiral fields.
\item{} $(n_F,``1")_{\chi}$ and $(n_F,``1")_{LQCD}$: These are the chiral and QCD theories of $n_F$ staggered fields
($n_F$ flavors) with the $\root 4 \of {\rm Det}$ taken at the QCD level to reduce 4 tastes to 1 for each flavor.
Since I do not want to assume here that the rooting procedure is correct, I write the $1$ for tastes
in quotation marks. Then $(n_F,``1")_{\chi}$ is by definition the chiral theory generated by $(n_F,``1")_{LQCD}$.
The main point of this paper is to construct
$(n_F,``1")_{\chi}$ unambiguously.
\item{} $(n_F,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$: This is the chiral theory $(n_F,4,n_R)_{\chi}$, now implementing
the replica trick
by taking $n_R\to {\scriptstyle \raise.15ex\hbox{${1\over4}$}}$, with the goal of describing
rooted staggered quarks.
In the literature
(\textit{e.g.},\ Refs.~\cite{AUBIN-BERNARD,AUBIN-BERNARD-REPLICA,HEAVYLIGHT,SCHPT-OTHER,SCHPT-BARYONS}),
it is {\it assumed}\/ that
this procedure produces the right chiral theory; in other words, it is assumed
that $(n_F,``1")_{\chi}$ = $(n_F,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$.
Here, I define S\raise0.4ex\hbox{$\chi$}PT\ as $(n_F,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$,
and then ask
the question of whether S\raise0.4ex\hbox{$\chi$}PT\ is indeed the correct chiral theory.
Note that I avoid reference to corresponding QCD theories ``$(n_F,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_{LQCD}$'' because,
as discussed in \secref{replica}, I do not know how to
give unambiguous meaning beyond perturbation theory to the replica trick for those QCD-level theories.
\end{itemize}
For my arguments, the chiral theories $(n_F,4,n_R)_{\chi}$ are key objects.
On the other hand, the corresponding QCD theories
$(n_F,4,n_R)_{LQCD}$, in particular $(4,4,n_R)_{LQCD}$,
are introduced for convenience, because they allow one to keep
track more easily of the factors of $n_R$ that relate valence- to sea-quark matrix elements (see
\secref{details}). These QCD-level theories
can be eliminated at the expense of a somewhat less intuitive argument at the chiral level, related to quark flow.
An outline of such an alternative argument
is given in \secref{results-nf4}; it does however seem to require
a weak additional assumption. Because the $(4,4,n_R)_{LQCD}$ theories are just used formally,
it is probably unnecessary that the standard, broken realization
of chiral symmetry assumed in $(4,4,n_R)_{\chi}$ actually occurs in $(4,4,n_R)_{LQCD}$.
The unpleasant fact that
asymptotic freedom (and presumably spontaneous chiral symmetry breaking) is lost for $n_R>1$
in $(4,4,n_R)_{LQCD}$ seems to be irrelevant. An easy way to see this is to realize that
the precise correspondence between $(4,4,n_R)_{LQCD}$ and $(4,4,n_R)_{\chi}$ can be
maintained by an artifice,\footnote{I thank
Urs Heller for this comment.} as follows:
Note first that the order of the polynomial
dependence on $n_R$ is bounded at a given order in chiral perturbation theory. This means there is a maximum
value of $n_R$, $n_R^{\rm max}$, that needs to be considered in order to determine the polynomial completely.
One can then simply imagine increasing the number of colors sufficiently
to ensure that the QCD theory has the standard, spontaneously broken,
realization of chiral symmetry
for any $n_R \le n_R^{\rm max}$. Recall that
the mesonic chiral theory generated by a given $(4,4,n_R)_{LQCD}$ is independent of the number
of colors as long as the phase is unchanged. The numerical values of the LECs do depend
on the number of colors, but we are uninterested in those values here.
In the next section, I argue that the replica trick produces the correct chiral theory in the four-flavor case.
In other words, I claim that
\begin{equation}\eqn{toshow}
(4,``1")_\chi \doteq (4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi \ .
\end{equation}
This should be taken as a statement about the generating functionals of the two chiral theories.
I use ``$\doteq$,'' rather than ``$=$,'' to compare two chiral theories, because what I
mean is that they are the same functions of the LECs: True equality would only result if we
adjusted the LECs to be the same.
One also needs to be careful about what sources (equivalently, external fields)
one is allowing in the
Green's functions on both sides of equations such as \eq{toshow}. For example, there are more sea-quark fields
available in the $(4,4,n_R)_\chi$ theory, from which $(4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$ is obtained, than there
are in the $(4,``1")_\chi$ theory.
Unless explicitly stated otherwise, such generating functionals should be taken to
describe partially quenched theories, with sources coupled to valence fields
only. Ghost (bosonic) fields, degenerate with the valence fields
and required to cancel the valence determinant,
are also implicit.
When I need to make the sources explicit, I will include any
valence sources $\sigma$ among the arguments, for example $(n_F,n_T,n_R;\sigma)_{LQCD}$.
Identical staggered valence fields with identical
valence sources are always assumed on both sides of equations relating generating functionals.
\section{Details of the Argument for Four Flavors}
\label{sec:details}
The key ingredient is the observation that, when there are
four degenerate flavors (four staggered fields with equal masses), the rooting procedure
clearly reduces the four flavor
theory to a one flavor theory. In other words, instead of acting on tastes
and (presumably) reducing the four tastes per flavor to one taste per flavor,
we can think of the rooting in this case as acting on flavor and reducing
four fields to one, without affecting the tastes.
Let the quark mass matrix be $\mathcal{M}$. The condition
of degeneracy is $\mathcal{M} = \bar m I$, where $\bar m$ is a number and $I$ is the
identity matrix in flavor space. It then follows that:
\begin{eqnarray}\eqn{deg-root-QCD}
(4,``1")_{LQCD}\Big\vert_{\mathcal{M}=\bar m I} & = & (1,4)_{LQCD}\Big\vert_{\bar m} \\
\eqn{deg-root-chi}
(4,``1")_{\chi}\Big\vert_{\mathcal{M}=\bar m I} & \doteq & (1,4)_{\chi}\Big\vert_{\bar m} \ \doteq\ (4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi\Big\vert_{\mathcal{M}=\bar m I} \ .
\end{eqnarray}
The last equivalence in \eq{deg-root-chi} is manifest order by order in S\raise0.4ex\hbox{$\chi$}PT: Since the
result for any physical quantity is polynomial in the number of degenerate flavors, taking
$4n_R$ degenerate flavors and then putting $n_R=1/4$ gives the same chiral expansion
as a one-flavor theory.
One can make a stronger statement than \eq{deg-root-chi} by adding
sources and computing
specific Green's functions in the degenerate case. In order to keep the
arguments simple, I generally use only taste-singlet scalar sources, which are all
that are necessary to allow us to move beyond the degenerate mass limit. For
writing explicit terms in the chiral theory, however, it will be convenient below to include
pseudoscalar sources temporarily, since it is linear combinations of
scalar and pseudoscalar source that transform simply under chiral
transformations. One can also easily generalize to
sources of arbitrary taste if desired.
I start by introducing the scalar sources into the sea-quark sector of the QCD-level theory
$(4,``1")_{LQCD}$.
Let
$\Psi_i(x)$ be the sea quark field of flavor $i$ at space-time point $x$. For convenience,
I work in the taste
representation \cite{TASTE-REPRESENTATION}, with taste (and spin) indices on $\Psi$
implicit, but there is no reason why one cannot work directly
with the one-component staggered fields instead.
The source $s(x)$ is taken to be a Hermitian matrix in flavor space.
The mass and source terms are then:
\begin{equation}\eqn{source-41}
\bar m\, \bar \Psi_i(x) \Psi_i(x) + \bar \Psi_i(x) \, s^{ij}(x)\,
\Psi_j(x)\;, \hspace{1.5cm}(4,``1")\ {\rm case},
\end{equation}
where sums over flavor indices $i,j$ are implied.
One needs to state precisely here what is meant by
a rooted staggered theory with sources. In this paper, I always mean:
(1) introduce the sources into the corresponding unrooted theory;
(2) integrate the sea quark
fields to get a determinant that is a function of the sources;
(3) replace the determinant by its fourth root.
Derivatives with respect to the sources, if desired, are taken only after step (3).
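In schematic form (with $S_g[U]$ the gauge action and the normalization suppressed), steps (1)--(3) define the generating functional
\begin{equation}
Z[s] \;=\; \int {\cal D}U\; e^{-S_g[U]}\;\root 4 \of {{\rm Det}\big(D[U]+\bar m+s\big)}\ ,
\end{equation}
with source derivatives acting on the rooted integrand.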
Now introduce the same sources into the replica QCD theories
$(4,4,n_R)_{LQCD}$,
with the specification that a given source couples equally to
all replicas.
We have:
\begin{equation}\eqn{source-44nR}
\bar m \bar \Psi^r_i(x) \Psi^r_i(x) + \bar \Psi^r_i(x) \,s^{ij}(x)
\, \Psi^r_j(x)\;, \hspace{1.5cm}(4,4,n_R)\ {\rm case} .
\end{equation}
Sums over the replica index $r = 1,2,\dots,n_R$, as well as the flavor indices $i$ and $j$, are implied.
When the sources are nonzero (which includes the case of
nondegenerate quark masses as a special case), we do not yet know that
$(4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$ is the right chiral theory. One could imagine that there
are extra terms in $(4,``1")_\chi$ that vanish in the limit
$s=0$. So I define the difference to be an unknown functional $V[s]$:
\begin{equation}\eqn{correction}
(4,``1";\,s)_\chi \doteq (4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}};\,s)_\chi + V[s] \ ,
\end{equation}
where $V[0]\!=\!0$. As far as we know at this point, $V[s]$ could be quite sick.
For example, it could generate Euclidean correlation functions with unphysical decay rates
(unphysical intermediate states), even in the continuum limit.
There are further restrictions on $V[s]$ coming from
the fact that the two chiral theories must be equivalent when there is
exact flavor symmetry.
We must have $V[s]\!=\!0$ whenever $s(x)$ is proportional to the identity in flavor
space or can be brought there by an
$SU(4)_L\times SU(4)_R$ chiral flavor rotation.
Therefore it takes some care even to write down a possible term in $V[s]$.
I temporarily add a Hermitian pseudoscalar source $p(x)$ to the theories. For example,
corresponding to \eq{source-41} is
\begin{equation}\eqn{psource-41}
i\bar \Psi_i(x) \, \gamma_5\, p^{ij}(x)\,
\Psi_j(x)\;, \hspace{1.5cm}(4,``1")\ {\rm case}.
\end{equation}
The spurion combinations
$h \equiv \bar mI+ s +ip$ and $h^\dagger \equiv \bar mI+ s-ip$ transform
simply under chiral rotations $L\in SU(4)_L$ and $R\in SU(4)_R$:
\begin{equation}\eqn{h-transform}
h \to L\, h\, R^\dagger\ , \hspace{1.5truecm} h^\dagger\to R\, h^\dagger\, L^\dagger \ .
\end{equation}
If
\begin{equation}\eqn{deg-condition}
h(x) = c(x) U \ , \hspace{1.5truecm} h^\dagger(x) = c^*(x) U^\dagger \ ,
\end{equation}
where $U\in SU(4)$ is a constant matrix and $c(x)$ is a c-number function, then
$h(x)$ and
$h^\dagger(x)$ can be made everywhere proportional to the identity by the chiral rotation
$R=U$, $L=I$ and there is exact
flavor symmetry, unbroken by masses or sources.
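Indeed, under this rotation,
\begin{equation}
h \;\to\; L\,h\,R^\dagger \;=\; c(x)\,U U^\dagger \;=\; c(x)\,I\ ,
\end{equation}
and similarly $h^\dagger \to c^*(x)\,I$.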
We can now look for possible terms in $V$, at first expressed
as functionals of $h$ and $h^\dagger$.
An example that satisfies the above requirements
is
\begin{equation}
\tilde V_1 = \int d^4x\, d^4y
\left(\frac{1}{\square + M^2}
\right)_{\hspace{-.13cm}x,y} \hspace{-.08cm}
\bigg( \textrm{Tr}\left[ h(x)\, h^\dagger(x)\, h(y)\, h^\dagger(y)\right]
- {\scriptstyle \raise.15ex\hbox{${1\over4}$}}\textrm{Tr}\left[ h(x)\, h^\dagger(x)\right]
\textrm{Tr}\left[ h(y)\, h^\dagger(y)\right] \bigg)
\eqn{v1big}
\end{equation}
where $\textrm{Tr}$ is a flavor trace, and $1/M$ is a distance scale that might not go
to zero in the continuum limit. For example, one could have
$M=k\Lambda_{QCD}$, where $k$ is some constant. In the worst case, $M$ might not
even correspond to the mass of any physical particle in QCD.
Removing the pseudoscalar source $p(x)$ and keeping only the lowest nonvanishing term in $s$, one gets the following
example of a possible contribution to $V[s]$:
\begin{equation}
V_1 = 4\bar m^2\int d^4x\; d^4y \;
\left(\frac{1}{\square + M^2}
\right)_{x,y}
\bigg( \textrm{Tr}\left[ s(x)s(y)\right]
- {\scriptstyle \raise.15ex\hbox{${1\over4}$}}\textrm{Tr}\left[ s(x)\right]\,
\textrm{Tr}\left[ s(y)\right] \bigg)
\eqn{v1}
\end{equation}
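To verify \eq{v1}: with $p=0$ one has $h=h^\dagger=\bar mI+s$, and the ${\cal O}(s^0)$, ${\cal O}(s)$, and single-point ${\cal O}(s^2)$ pieces cancel between the two trace structures of \eq{v1big}, leaving only the cross terms:
\begin{equation}
\textrm{Tr}\left[ h(x)\, h^\dagger(x)\, h(y)\, h^\dagger(y)\right]
- {\scriptstyle \raise.15ex\hbox{${1\over4}$}}\textrm{Tr}\left[ h(x)\, h^\dagger(x)\right]
\textrm{Tr}\left[ h(y)\, h^\dagger(y)\right]
= 4\bar m^2\Big( \textrm{Tr}\left[ s(x)s(y)\right]
- {\scriptstyle \raise.15ex\hbox{${1\over4}$}}\textrm{Tr}\left[ s(x)\right]
\textrm{Tr}\left[ s(y)\right] \Big) + {\cal O}(s^3)\ .
\end{equation}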
The goal is of course to prove that $ V[s]$ actually vanishes.
\subsection{Expansion around the Degenerate Theory}
\label{sec:expansion}
If we take derivatives of the generating functionals
with respect to $s$ and evaluate them
at $s=0$, we will have Green's functions for degenerate quark masses.
At the level of the chiral theories, I claim that \eqs{deg-root-QCD}{deg-root-chi}
(modulo some technical assumptions) actually imply the stronger statement:
\begin{equation}\eqn{derivs}
\prod_n\frac{\partial}{\partial s^{i_nj_n}(x_n)}
(4,``1";\,s)_\chi\Big\vert_{s=0}\doteq\;\;
\prod_n\frac{\partial}{\partial s^{i_nj_n}(x_n)}
(4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}};\,s)_\chi\Big\vert_{s=0}
\end{equation}
for any given combination of derivatives with respect to $s$.
A difficulty in proving
\eq{derivs} is that, as soon as the sources are taken
to be nonzero in order to compute the derivatives, we no longer know
that $(4,``1")_\chi$ and $(4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$ are the same.
Further, I must avoid the use of $(4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_{LQCD}$, which is not well defined.
Finally, I cannot use $(1,4)_{LQCD}$
easily as an intermediate step, because sea quark sources with nontrivial
flavor ($s^{ij}$) cannot be inserted into a one-flavor theory.
The need for nonzero sea-quark sources in \eq{derivs} can be circumvented
by using valence sectors, in other words, by considering the partially quenched version
of \eqs{deg-root-QCD}{deg-root-chi}. I thus introduce into all theories of interest
an arbitrary number, $n_V$, of staggered valence fields
$q_\alpha$, where $\alpha= 1,2,\dots n_V$ is the valence flavor index.
These have degenerate mass $\bar m$ and are coupled to valence sources $\sigma_{\alpha\beta}$,
giving mass and source terms as follows:
\begin{equation}\eqn{val-source}
\bar m \bar q_\alpha(x) q_\alpha(x) + \bar q_\alpha(x) \, \sigma^{\alpha\beta}(x)\,
q_\beta(x) \ ,
\end{equation}
with sums over $\alpha$ and $\beta$ implied. The valence-quark
source $\sigma^{\alpha\beta}$ is
exactly analogous to the sea-quark source $s^{ij}$; they only differ in the type of quarks to which they
couple.
I also introduce $n_V$ corresponding ghost (bosonic) quarks, again with
degenerate mass $\bar m$. These ghosts do not couple to the $\sigma^{\alpha\beta}$ source,
so that derivatives with respect to $\sigma^{\alpha\beta}$ produce Green's functions made
purely of (fermionic) valence quarks. When $\sigma^{\alpha\beta}=0$, the valence
and ghost determinants cancel.
The partially quenched version of \eq{deg-root-QCD} remains valid,
since the valence/ghost sectors are identical on both sides, and the sea-quark determinants are equal
as long as the sea-quark source $s$ vanishes (giving degenerate masses):
\begin{equation}\eqn{deg-root-PQQCD}
(4,``1";\,s\!=\!0,\sigma)_{LQCD} \hspace{0.2cm}=\hspace{0.2cm} (1,4;\,s\!=\!0,\sigma)_{LQCD} \ ,
\end{equation}
where sea and valence sources are indicated explicitly.
The equality of generating functionals must also be true for the corresponding chiral theories:
\begin{equation}\eqn{deg-root-PQchi}
(4,``1";\,s\!=\!0,\sigma)_{\chi} \hspace{0.2cm} \doteq\hspace{0.2cm} (1,4;\,s\!=\!0,\sigma)_{\chi} \ ,
\end{equation}
This follows by definition of what it means to be the corresponding chiral theory.
I am assuming that such partially quenched chiral theories exist. But note
that the starting LQCD theories both have local actions, so this appears
to be a rather safe assumption. I am not
claiming, however, that I know explicitly how to calculate ghost or valence
Green's functions in either of these
chiral theories. My expectation is that the ``naive'' meson Feynman rules, which follow from
the methods of Ref.~\cite{PQ}, are probably correct. However,
to prove that would require an analysis along the lines of Ref.~\cite{GSS} to determine
the proper saddle point for the mesons constructed from valence or ghost quarks,
around which the chiral perturbation
theory can be developed. Such an analysis is in progress.
In discussing \eq{deg-root-chi}, I claimed that the equivalence
of the $(1,4)_\chi$ and $(4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$ theories is
``manifest'' order by order in S\raise0.4ex\hbox{$\chi$}PT. In the presence of
valence/ghost fields and sources, the corresponding statement is almost certainly still true.
Even if the saddle point for ghost (or valence) mesons is nontrivial, it
is very difficult to see how it could be affected, order by order, by the difference
between having one sea-quark flavor
or having $4n_R$ degenerate sea flavors and then putting $n_R=1/4$.
Combined with \eq{deg-root-PQchi}, this gives
\begin{equation}\eqn{deg-replica-PQchi}
(4,``1";\,s\!=\!0,\sigma)_{\chi} \hspace{0.2cm} \doteq\hspace{0.2cm} (4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}};\,s\!=\!0,\sigma)_{\chi} \ .
\end{equation}
In the limit $s=0=\sigma$, all quarks, both valence and sea,
are degenerate. This means one can relate Green's functions constructed from sea-quark fields
to those constructed from valence fields, or equivalently, relate derivatives with respect
to $s$ to those with respect to $\sigma$. This is not
completely straightforward, however. In the (4,``1") theory,
derivatives with respect to $s$
bring down factors of $1/4$ from $\root 4 \of {{\rm Det}(D+\bar m + s )}= \exp {\scriptstyle \raise.15ex\hbox{${1\over4}$}} \textrm{tr}
\ln (D+\bar m + s ) $. When more than one contraction (more than one
term resulting from the derivatives) is possible, different contractions will be associated
with different numbers of factors of $1/4$. The power of $1/4$ is just the
number of quark loops implied by the contractions.
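The origin of these factors is transparent for a single derivative at fixed gauge field:
\begin{equation}
\frac{\partial}{\partial s^{ii}(x)}\,\root 4 \of {{\rm Det}(D+\bar m + s )}\,\bigg\vert_{s=0}
= \frac{1}{4}\,\textrm{tr}\, G_i(x,x)\;\root 4 \of {{\rm Det}(D+\bar m)}\ ,
\end{equation}
with one factor of $1/4$ accompanying the single closed quark loop; each additional loop generated by further derivatives brings another such factor.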
On the other hand, with arbitrary
$n_V$, we can always adjust the flavors of the valence sources being differentiated
so that only one contraction is possible. This means we can always write an arbitrary derivative
of the generating functional with respect to $s$ as a linear combination
of derivatives with respect to $\sigma$, each term being multiplied by
$({\scriptstyle \raise.15ex\hbox{${1\over4}$}})^L$, where $L$ is the number of valence loops in the term.
The following two examples should clarify what I mean; take flavors $i\not=j$ and $\alpha\not=\beta$
and do not sum over repeated indices:
\begin{eqnarray}
\eqn{example-ij}
\frac{\partial}{\partial s^{ij}(x)}
\frac{\partial}{\partial s^{ji}(y)}
\; (4,``1";\,s,\sigma\!=\!0)_{LQCD}\Big\vert_{s=0}&=&
-\frac{1}{4}\;\langle \textrm{tr}\Big( G_{j}(x,y) G_{i}(y,x) \Big) \rangle\nonumber \\
&&\hspace{-3.0truecm}
=\frac{1}{4}\;\frac{\partial}{\partial \sigma^{\alpha\beta}(x)}
\frac{\partial}{\partial \sigma^{\beta\alpha}(y)}
\; (4,``1";\,s\!=\!0,\sigma)_{LQCD}\Big\vert_{\sigma=0} \\
\frac{\partial}{\partial s^{ii}(x)}
\frac{\partial}{\partial s^{ii}(x)}
\; (4,``1";\,s,\sigma\!=\!0)_{LQCD}\Big\vert_{s=0}=\nonumber && \\
&&\hspace{-6.0truecm}
=-\frac{1}{4}\;\langle \textrm{tr}\Big( G_{i}(x,y) G_{i}(y,x) \Big)\rangle +
\left(\frac{1}{4}\right)^2\langle \textrm{tr}\Big( G_{i}(x,x)\Big) \textrm{tr}\Big( G_{i}(y,y) \Big) \rangle\nonumber \\
&&\hspace{-6.0truecm}
=\left[\frac{1}{4}\;\frac{\partial}{\partial \sigma^{\alpha\beta}(x)}
\frac{\partial}{\partial \sigma^{\beta\alpha}(y)} +
\left(\frac{1}{4}\right)^2\frac{\partial}{\partial \sigma^{\alpha\alpha}(x)}\frac{\partial}{\partial \sigma^{\beta\beta}(y)}
\right]
(4,``1";\,s\!=\!0,\sigma)_{LQCD}\Big\vert_{\sigma=0} \nonumber \\
&&
\eqn{example-ii}
\end{eqnarray}
where $G_{i}(y,x)$ is the propagator of a quark of flavor $i$ from $x$ to $y$, expectation
values are taken in the $(4,``1")_{LQCD}$ theory with $\mathcal{M}=\bar m I$ and vanishing sources,
and the traces are over taste and spin indices. Note that the two sides of \eq{example-ij}
or \eq{example-ii} are just two different ways of expressing the expectation value of the
same combination of quark propagators, so no subtleties of partial quenching
{\it \`a la}\/\ Ref.~\cite{GSS} can
interfere with the equality.
With enough derivatives with respect to $s$, there will always be enough
repeats in sea quark flavor indices that more than one contraction contributes. On the other hand,
since we have an arbitrary number of valence quarks at our disposal, we can always arrange
the valence flavors in the derivatives with respect to $\sigma$ so that only
one contraction occurs.
In the $(4,4,n_R)_{LQCD}$ theory, equations very similar to \eqs{example-ij}{example-ii} hold, with
the simple replacement ${\scriptstyle \raise.15ex\hbox{${1\over4}$}}\to n_R$. The factors of $n_R$
are produced by the sum over replicas for each quark loop.
For an arbitrary $k^{\rm th}$ derivative of $(4,``1")_{LQCD}$ or $(4,4,n_R)_{LQCD}$ with respect to
$s$, we therefore can write:
\begin{eqnarray}\eqn{QCD-derivs-41}
\prod_{n=1}^k\frac{\partial}{\partial s^{i_nj_n}(x_n)}
\, (4,``1";\,s,\sigma\!=\!0)_{LQCD}\Big\vert_{s=0}= &&\nonumber \\
&&\hspace{-3.5cm} =\sum_C \left(\frac{1}{4}\right)^{L_C}
\prod_{n=1}^k\frac{\partial}{\partial \sigma^{\alpha^C_{n}\beta^C_{n}}(x_n)}
\, (4,``1";s\!=\!0,\sigma)_{LQCD}\Big\vert_{\sigma=0} \\
\prod_{n=1}^k\frac{\partial}{\partial s^{i_nj_n}(x_n)}
\, (4,4,n_R;\,s,\sigma\!=\!0)_{LQCD}\Big\vert_{s=0}= &&\nonumber \\
&&\hspace{-3.5cm} =\sum_C \left(n_R\right)^{L_C}
\prod_{n=1}^k\frac{\partial}{\partial \sigma^{\alpha^C_{n}\beta^C_{n}}(x_n)}
\, (4,4,n_R;\,s\!=\!0,\sigma)_{LQCD}\Big\vert_{\sigma=0}
\eqn{QCD-derivs-44nR}
\end{eqnarray}
where $C$ labels a particular contraction with $L_C$ valence quark loops, and the valence flavor
indices $\alpha^C_{n}$ and $\beta^C_{n}$
are adjusted
to make only that contraction possible. The key point in \eqs{QCD-derivs-41}{QCD-derivs-44nR}
is that the same arrangements of valence flavor indices and powers $L_C$ work in both cases.
We now pass to the chiral theory in both cases, giving:
\begin{eqnarray}\eqn{chi-derivs-41}
\prod_{n=1}^k\frac{\partial}{\partial s^{i_nj_n}(x_n)}
\, (4,``1";\,s,\sigma\!=\!0)_{\chi}\Big\vert_{s=0}= &&\nonumber \\
&&\hspace{-2.8cm} =\sum_C \left(\frac{1}{4}\right)^{L_C}
\prod_{n=1}^k\frac{\partial}{\partial \sigma^{\alpha^C_{n}\beta^C_{n}}(x_n)}
\, (4,``1";s\!=\!0,\sigma)_{\chi}\Big\vert_{\sigma=0} \\
\prod_{n=1}^k\frac{\partial}{\partial s^{i_nj_n}(x_n)}
\, (4,4,n_R;\,s,\sigma\!=\!0)_{\chi}\Big\vert_{s=0}= &&\nonumber \\
&&\hspace{-2.8cm} =\sum_C \left(n_R\right)^{L_C}
\prod_{n=1}^k\frac{\partial}{\partial \sigma^{\alpha^C_{n}\beta^C_{n}}(x_n)}
\, (4,4,n_R;\,s\!=\!0,\sigma)_{\chi}\Big\vert_{\sigma=0}
\eqn{chi-derivs-44nR}
\end{eqnarray}
At any finite order in chiral perturbation theory, both sides of
\eq{chi-derivs-44nR} are polynomial in $n_R$. Therefore
the limit $n_R\to{\scriptstyle \raise.15ex\hbox{${1\over4}$}}$ is well defined:
\begin{eqnarray}
\prod_{n=1}^k\frac{\partial}{\partial s^{i_nj_n}(x_n)}
\, (4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}};\,s,\sigma\!=\!0)_{\chi}\Big\vert_{s=0}= &&\nonumber \\
&&\hspace{-2.8cm} =\sum_C \left(\frac{1}{4}\right)^{L_C}
\prod_{n=1}^k\frac{\partial}{\partial \sigma^{\alpha^C_{n}\beta^C_{n}}(x_n)}
\, (4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}};\,s\!=\!0,\sigma)_{\chi}\Big\vert_{\sigma=0}
\eqn{chi-derivs-4414}
\end{eqnarray}
The right-hand sides
of \eqs{chi-derivs-41}{chi-derivs-4414} are now equal by \eq{deg-replica-PQchi}.
On the left-hand sides, the valence and ghost contributions cancel completely
since $\sigma=0$, so we may eliminate those fields.
This proves \eq{derivs}.
\subsection{Assumptions and Results in the Four Flavor Theory}
\label{sec:results-nf4}
\Equation{derivs}, together with the definition of $V[s]$, \eq{correction}, imply that
$V[s]$ and all of its derivatives vanish at $s=0$:
\begin{equation}\eqn{Vs-derivs}
\left(\prod_{n=1}^k\frac{\partial}{\partial s^{i_nj_n}(x_n)}\right)
V[s]\,\Bigg\vert_{s=0} = 0\ .
\end{equation}
Thus terms like $V_1$ in \eq{v1} are ruled out.
Indeed, if $V[s]$ is assumed to be an analytic function,\footnote{At this
point it is sufficient for my purposes to restrict $s$ to a constant matrix, just giving the mass differences.
Therefore $V$ can be thought of as a function, not a functional, and there is no subtlety
with concepts such as analyticity.} with any number of isolated singularities,
it follows that $V[s]\!=\!0$ everywhere. In other words,
\eq{toshow} is true under this assumption.
Normally one expects that when
a function is expanded in a Taylor series around some point, the expansion will have
a finite radius of convergence, given by the location of the closest singularity. But here,
every term in the expansion is zero, so we can continue past any purported isolated singularity,
and thereby show that the singularity is actually absent.
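Schematically, for a single source variable $s$, this is the identity theorem for analytic functions:
\begin{equation}
V[s] \;=\; \sum_{k=0}^{\infty}\frac{1}{k!}\,\frac{d^k V}{ds^k}\bigg\vert_{s=0}\, s^k \;=\; 0
\end{equation}
in a neighborhood of $s=0$, and the zero function continues uniquely past any isolated singularity, so $V$ vanishes on its whole domain of analyticity. The only escape is a function such as $e^{-1/s^2}$, all of whose Taylor coefficients vanish at $s=0$ although the function does not; this is the essential-singularity loophole discussed below.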
Note that $(4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$, as a limit of the replica theories when $n_R\to{\scriptstyle \raise.15ex\hbox{${1\over4}$}}$,
is only defined order by order in chiral perturbation theory. By definition,
therefore, the vacuum state of $(4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$ has the standard broken realization
of chiral symmetry that appears in $(4,4,n_R)_\chi$.
We know this is the correct nonperturbative vacuum in the degenerate limit, because there one
can use the chiral theory $(1,4)_\chi$, for which no replica trick is needed.
Now, if $V[s]$ really vanishes everywhere, then $(4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$ is
the correct chiral theory even for nondegenerate masses, and
the vacuum must therefore remain the standard one. Thus the assumption of analyticity
includes the assumption that there is no phase change in $(4,``1")_\chi$ as a function
of $s$.
Of course, the assumption of analyticity of $V[s]$ is a nontrivial one. It could go wrong in
two ways. First of all, there may be a connected ``line'' of singularities, an actual ``domain
boundary'' that prevents one from extending $V[s]\!=\!0$ arbitrarily far from $s=0$.
Of course, \raise0.4ex\hbox{$\chi$}PT\ or S\raise0.4ex\hbox{$\chi$}PT\ must eventually break down for large enough
quark masses, so it is meaningless to imagine extending \eq{toshow} to mass differences
that put one or more masses outside the range of chiral perturbation theory. But here
I am talking about possible singularities that would prevent extending $V[s]\!=\!0$ over the whole
range where S\raise0.4ex\hbox{$\chi$}PT\ applies. If such a boundary occurred, it would probably imply
a phase change: that the true ground state for $(4,``1")_{\chi}$ changes discontinuously
from the ground state assumed by $(4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$.
Although I cannot rule out this possibility from first principles, it seems rather
unlikely that a phase change would produce small enough discrepancies
to have escaped detection in the
MILC simulations and their comparison with S\raise0.4ex\hbox{$\chi$}PT\ predictions \cite{MILC}.
But the effects of a phase
change that occurred outside the (rather wide) range of masses or lattice spacings studied by
MILC would probably not have been noticed. In addition, since the MILC simulations involve
three flavors, the logical possibility exists that a phase change
occurs with four flavors but disappears when the fourth quark is decoupled.
On the positive side,
note that $(4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$ automatically becomes standard
continuum \raise0.4ex\hbox{$\chi$}PT\ in the continuum limit (see \secref{health}). Therefore,
if $V[s]\not=0$ outside some mass region, we must at least have $V[s]\to0$ in the continuum
limit to avoid the bizarre scenario in which $(4,``1")_{LQCD}$ is a
valid four-flavor QCD theory in some range of quark mass differences but not outside this range.
A second way that the analyticity assumption could go wrong would be the presence
of essential singularities in $V[s]$ for all values of $s$ such that the flavor
symmetry is exact. For example, one could imagine that $V[s]\propto \exp(-1/V^2_1)$.
Although I cannot rule them out at this point, such singularities seem
implausible to me, since we are expanding around a massive theory in Euclidean space and there are thus
no obvious infrared problems. Note that I am not
assuming that $(4,``1")_\chi$ and $(4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$ separately are analytic in $s$
around $s=0$ (or any other degenerate point), only that their difference is. In \secref{conclusions}, I speculate on a
possible proof of the absence of an essential singularity in $V[s]$ at $s=0$.
The assumption that $V[s]$ is analytic is equivalent to the assumption that
the expansion of $V[s]$ around $s=0$ is convergent.
The reader may therefore object that this assumption is too strong, since
we do not expect convergent weak coupling expansions in quantum field theories.
It is therefore useful to
review why we believe that usual weak coupling expansions are at best
asymptotic. The main reason comes from the factorial growth of the large-order terms
in perturbation theory \cite{LARGE-ORDER}. In the current case, however,
the large-order terms in the expansion of $V[s]$ in $s$ are not growing factorially ---
in fact they are all zero! An alternative line of reasoning for QED is due to
Dyson \cite{DYSON}. He argued that the expansion in $\alpha$ around $\alpha\!=\!0$
must be asymptotic because $\alpha\!<\!0$ leads to an unstable vacuum and therefore
cannot be smoothly connected to the $\alpha\!>\!0$ region. In fact, this argument
has been shown to be flawed \cite{BENDER-MILTON}, since it is possible to define
the theory consistently for $\alpha\!<\!0$ and to obtain it by analytic continuation from
$\alpha\!>\!0$. In any case, however, we have no similar reason to suspect
that the difference of
the chiral theories (or either of the chiral theories itself) becomes
unstable as soon as non-zero mass differences are turned on.
Of course, arguing that we have no reason to expect nonanalyticity in $V[s]$
is far from proving that $V[s]$ is analytic. This remains an assumption.
Note that it can be turned around: if $V[s]$ is not analytic then, from
\eq{Vs-derivs}, $V[s]\!\not=\!0$, so S\raise0.4ex\hbox{$\chi$}PT\ for
four flavors must be incorrect.
As mentioned in \secref{notation}, the QCD-level theories $(4,4,n_R)_{LQCD}$
are used in \secref{expansion} for convenience;
if desired, their use can be eliminated at the expense of an additional
weak assumption about the partially quenched chiral theory. I now sketch that
argument;
the example presented in \secref{one-flavor}
can be used as an illustration of this kind of analysis.
One needs to derive
\eq{chi-derivs-44nR} directly in the chiral theories.
It is not hard to see how to prove this at the chiral level, using a technique
that is basically quark-flow analysis.
Since the (vector) flavor and replica symmetries are exact in $ (4,4,n_R)_{\chi}$, one
can always follow the replica indices through the S\raise0.4ex\hbox{$\chi$}PT\ diagrams,
starting on one source index and continuing until one reaches another source index
(on the same or a different source).
Each such loop corresponds
exactly to a quark loop at the QCD level and produces
one factor of $n_R$. The same analysis then needs to be repeated for diagrams with
valence quark indices.
Note that this
argument assumes that, at the chiral level, mesons made from (fermionic) valence or sea quarks
have identical Feynman rules, except for the counting factors coming from replication.
The ordinary, bosonic, symmetries relating fermionic valence and sea quarks should
guarantee this, as long as such symmetries are not spontaneously broken in the chiral theories.
Since a rigorous
analysis of the partially quenched chiral theory along the lines of Ref.~\cite{GSS} is still lacking,
this absence of symmetry breaking must be taken as an assumption at this point if one
wants to do without the use of the QCD-level theories $(4,4,n_R)_{LQCD}$.
However, it is difficult to see how it could go wrong.
\section{Extension to Fewer than Four Flavors}
\label{sec:extension}
The most interesting cases phenomenologically are three light flavors ($u$, $d$, $s$),
or, at extremely low energies, two light flavors ($u$, $d$).
To extend the above argument to $n_F<4$, we can start by taking the mass of one of the four quarks
large and using decoupling ideas \cite{APPELQUIST-CARAZZONE}.
Call this quark the charm quark, with mass $m_c$.
The difficult point here is that the relation \eq{toshow} can only be used
where chiral perturbation theory is applicable, so we cannot just take
$m_c\to\infty$ on both sides of \eq{toshow} and then appeal to decoupling.
In the real world, we know that the effective coupling of \raise0.4ex\hbox{$\chi$}PT\ for the strange
quark is roughly $M_K^2/(8 \pi^2 f_\pi^2) \sim 0.2$ \cite{GASSER-LEUTWYLER},
with $f_\pi \cong 131\, {\rm Me\!V}$. So it is likely that \raise0.4ex\hbox{$\chi$}PT\
breaks down completely for quark masses that are not very much larger than
the physical strange quark mass, $m_s^{\rm phys}$.
For concreteness, imagine the breakdown occurs at $\sim\!2m_s^{\rm phys}$, in other words for
meson masses greater than $\sim\!700\, {\rm Me\!V}$, which is the mass of a ``kaon'' made
with a strange quark of mass $2m_s^{\rm phys}$.
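Both numbers are quick leading-order estimates: with $M_K\approx 495\,{\rm Me\!V}$ one finds $M_K^2/(8\pi^2 f_\pi^2)\approx 0.18$, and since $M_K^2\propto m_s$ at leading order (neglecting the light-quark mass), doubling the strange quark mass gives
\begin{equation}
M_K \;\to\; \sqrt{2}\,\times\, 495\,{\rm Me\!V} \;\approx\; 700\,{\rm Me\!V}\ .
\end{equation}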
I want to decouple the charm quark from S\raise0.4ex\hbox{$\chi$}PT\ before this breakdown occurs,
say at $m_c\!\sim\!1.5m_s^{\rm phys}$. Since
there is not a lot of room between this value of $m_c$ and $m_s^{\rm phys}$,
it is useful to consider first the case where $m_s$ is significantly
smaller than $m_s^{\rm phys}$.
I try to argue that this $n_F=3$ case
is correctly described by $(3,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$.
With $m_u$, $m_d$, and $m_s$ all small, I increase $m_c$ to $m_c\sim1.5m_s^{\rm phys}$.
Modulo the assumptions discussed in \secref{results-nf4},
the relation $(4,``1")_\chi \doteq (4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$ should continue
to hold for $m_c$ in this range. I then integrate out (decouple) the charm quark degree
of freedom from the chiral theory $(4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$. The procedure is completely analogous
to the way the strange quark is decoupled from the continuum $SU(3)_L\times SU(3)_R$ chiral
theory to obtain the $SU(2)_L\times SU(2)_R$ theory \cite{GASSER-LEUTWYLER}.
Since this process is perturbative, there is little doubt
that what remains after the charm quark is decoupled will be the $N_f=3$ chiral theory,
$(3,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$.\footnote{One should also be close enough
to the continuum that the taste-splittings are relatively small,
so that a non-Goldstone meson made from light quarks is significantly
lighter than any meson with a charm quark. This makes the MILC
``coarse'' lattice, with splittings as large as $\sim\!450\,{\rm Me\!V}$ in the chiral limit,
rather problematic; while the ``fine''
lattices (largest splittings $\sim\!250\,{\rm Me\!V}$),
should be acceptable.}
Nevertheless, a check of this assumption in S\raise0.4ex\hbox{$\chi$}PT\ would be reassuring,
and is planned \cite{BERNARD-DU}.
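Schematically (the coefficients $c_i$ and the chiral scale $\mu$ are placeholders here, not computed), loops of mesons containing the charm quark are expanded in inverse powers of their masses $M_{xc}$, and their residual effects are absorbed into the three-flavor LECs:
\begin{equation}
L^{(3)}_i \;=\; L^{(4)}_i \;+\; \frac{c_i}{16\pi^2}\,\ln\frac{\mu^2}{M_{xc}^2} \;+\; {\cal O}\!\left(\frac{m_\pi^2}{M_{xc}^2}\right)\ ,
\end{equation}
where $M_{xc}$ denotes the mass of a meson made from a light quark $x$ and the charm quark.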
Thus I expect $(4,``1")_{LQCD}$ with $m_c\!\sim\!1.5m_s^{\rm phys}$ to be described at low energy by the chiral
theory $(3,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$. This should remain true as $m_c$ increases further, say until
$m_c\!\sim\! 2m_s^{\rm phys}$, which is nominally the largest mass for which
\eq{toshow} applies.
Consider what happens to $(4,``1")_{LQCD}$ as $m_c$ continues
to increase beyond the applicability of \eq{toshow}.
When $m_c$ gets to be of the order of the cutoff, $m_c\sim 1/a$, one expects that
it will decouple in the usual way from the QCD-level theory, leaving $(3,``1")_{LQCD}$.
The only effect of the charm quark should be renormalizations of the $(3,``1")_{LQCD}$
couplings. The decoupling would be virtually certain if $(4,``1")_{LQCD}$ were a
normal theory described by a local lattice action.
Because of the rooting procedure, though, there may be some doubt as to whether decoupling
actually occurs. We can avoid this concern by increasing $m_c$ still further,
until $m_c \gg 1/a$. At that point, $m_c$ is much larger than all eigenvalues of
the Dirac operator $D$, and $\root 4 \of {{\rm Det}(D+ m_c)}$ becomes independent
of the gauge field. Therefore the charm quark certainly decouples
from $(4,``1")_{LQCD}$, leaving $(3,``1")_{LQCD}$.
I am now ready to state the main assumption of this section: {\it As $m_c$ is
increased from $\sim\!2m_s^{\rm phys}$ to $m_c \gg 1/a$, the low energy physics
of\/ $(4,``1")_{LQCD}$ is unaffected, except perhaps by renormalizations of the LECs}\/.
Here ``low energy physics'' means the physics of particles with masses and energies
$\ll\! 700\,{\rm Me\!V}$. An alternative way of stating the assumption is to say that
\eq{toshow} continues to be meaningful as $m_c$ is
increased from $\sim\!2m_s^{\rm phys}$ to $m_c \gg 1/a$, as long as
$(4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$ is interpreted to mean the chiral theory with the
charm quark decoupled, and these theories are only used
at low energy.
I believe the assumption is plausible because the chiral theory shows that $m_c$ is already
decoupled from the low energy physics by $m_c\sim\!1.5m_s^{\rm phys}$. I
am simply assuming that it stays decoupled as its mass is increased further.
The conclusion then follows immediately: $(3,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_{\chi}$ is the correct
chiral theory for $(4,``1")_{LQCD}$ at $m_c\!\sim\!2m_s^{\rm phys}$. By assumption, it
remains the correct theory as $m_c$ is increased to $\gg 1/a$, at which point
$(4,``1")_{LQCD}$ becomes $(3,``1")_{LQCD}$. Thus
\begin{equation}\eqn{nf3-result}
(3,``1")_\chi \doteq (3,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_{\chi} \ .
\end{equation}
Note that my decoupling assumption is not only sufficient for \eq{nf3-result}, but also
necessary. Any new physical effects entering in the region
$2m_s^{\rm phys} {\,\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}\,} m_c {\,\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}\,} 1/a$ are automatically violations of the chiral theory
$(3,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_{\chi}$.
For the moment, \eq{nf3-result} is only true for the three masses $m_u,m_d,m_s \ll m_s^{\rm phys}$, because
these masses needed to be kept small in order to provide a clean decoupling when $m_c \!\sim\! 1.5 m_s^{\rm phys}$.
A line of reasoning parallel to that in \secref{results-nf4} can now be applied:
Once \eq{nf3-result}
is known to be valid for some range of quark masses, then the difference between
the two theories must vanish everywhere if it is analytic. The analyticity could be
violated by a phase boundary at some values of the quark mass differences. However, I
can again point to the MILC simulations \cite{MILC} as evidence against a phase
boundary within the region of parameter space that has been studied.
The arguments (and assumptions) of this section may now be repeated to show
$(2,``1")_\chi \doteq (2,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_{\chi}$
and $(1,``1")_\chi \doteq (1,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_{\chi}$.
\section{Resolution of a Paradox in the One-Flavor Theory}
\label{sec:one-flavor}
An interesting paradox arises from the final result of the previous section for $n_F=1$.
Because of the anomaly, a theory with a single quark flavor should have
no light (pseudo) Goldstone bosons, but only a heavy pseudoscalar, the
$\eta'$. On the other hand, the S\raise0.4ex\hbox{$\chi$}PT\ for a single rooted staggered flavor
contains 16 pseudoscalars (``pions"), of which only one, the taste-singlet, is
heavy. The weightings of the contributions of the different pions in this rooted theory
have factors of $1/4$
compared to those in the unrooted, four-taste theory,
but all the pions certainly
contribute in both rooted and unrooted cases at finite lattice spacing.
Even in pure-glue correlation
functions, the light pions will
appear as intermediate states.\footnote{I thank Andreas
Kronfeld for emphasizing to me the importance of addressing this paradox.}
If my previous arguments are correct, then we know the chiral theory for
a single flavor of rooted staggered quarks, and it will produce the correct
chiral theory in the continuum limit for QCD with a single flavor. The only
way out of the paradox is therefore that the light pions decouple
from physical correlation functions in the continuum limit.
In this section, I present a particular example to show
in detail how the decoupling takes place at leading order in the chiral theory.
This is a special case of the calculations of scalar, isoscalar
correlators for various numbers of flavors worked out in Ref.~\cite{SCALAR}, and will be discussed in more
detail there.
Gluon or glueball interpolating fields at physical momenta ($\ll1/a$) couple
only to taste-invariant combinations of the quark fields.
To mock up a pure-glue correlation function, we add a taste-singlet scalar
source to the rooted one-flavor theory:
\begin{equation}\eqn{one-flavor-source}
\mathcal{L}_{\rm source} =
s(z) \bar \Psi(z) \Psi(z)\ .
\end{equation}
Here $ s(z)$ is basically the same as the sources considered previously in \eq{source-41},
except that there are no flavor indices since $n_F=1$.
The generating functional of this theory, $(1,``1")_{LQCD}$, is obtained by computing the
fermion determinant in the presence of the source $ s$, taking its fourth root,
and then integrating over gauge fields. In order to show explicitly the factors
resulting from the rooting, I will take the $R^{th}$ power of the determinant, and only
set $R=1/4$ at the end. The generating functional is thus given by:
\begin{equation}\eqn{generating-11}
(1,``1")_{LQCD}= \frac{\int \mathcal{D}\! A\, \exp\{-S_G(A) + R\, \textrm{tr}\left( \ln\,(D + m + s)\right)\}}
{\int \mathcal{D}\! A\, \exp\{-S_G(A) + R\, \textrm{tr}\left( \ln\,(D + m )\right)\}} \ ,
\end{equation}
where $D$ is the Dirac operator for the staggered field, $m$ is its mass, $A$ represents
the gauge fields, with action $S_G(A)$,
and $\mathcal{D}\! A$ is the gauge measure. As usual, one should imagine
that additional valence quark fields (and the corresponding commuting ghost quark fields to
cancel the valence determinant \cite{QUENCH}) are included as needed.
Note that $R$ in \eq{generating-11} is a parameter appearing in the QCD-level generating function. It is
logically independent from $n_R$, which is the number of sea-quark replicas and is introduced (later)
as a way of representing the rooting trick at the chiral level.
Of course, in the end we want
to set both $R$ and $n_R$ to $1/4$.
I want to calculate
\begin{equation}\eqn{Gxy}
G(x\!-\!y) = \left(\frac{\partial}{\partial s(x)}
\frac{\partial}{\partial s(y)}\; (1,``1")_{LQCD}\right)_{\hspace{-0.15cm} s=0}
\hspace{-0.15cm}-\left(\frac{\partial}
{\partial s(x)}\; (1,``1")_{LQCD}\right)_{\hspace{-0.15cm} s=0}
\hspace{-0.1cm}\left(\frac{\partial}
{\partial s(y)}\; (1,``1")_{LQCD}\right)_{\hspace{-0.15cm} s=0}\hspace{-0.25cm}.\phantom{\Bigg\vert_p}
\end{equation}
The second term subtracts off the limit
at infinite separation,
proportional to $\langle\bar\Psi\Psi\rangle^2$.
What remains is just the connected part of the correlation function at the QCD level.
We are interested in the lightest particles that appear as intermediate states
in the decay of $G(x\!-\!y)$ at large $|x\!-\!y|$.
There are two possible contractions contributing to the first term in
$G(x\!-\!y)$, as in \eq{example-ii}; while there is only one contraction in each of
the factors in the second term.
Introducing valence quarks $q_\alpha$, degenerate with the sea quarks,
and corresponding valence sources $ \sigma^{\alpha\beta}$,
I rewrite the contributions in terms of valence quark contractions. With
$\alpha\not=\beta$, one has
\begin{eqnarray}
G(x\!-\!y)&=& R\left(\frac{\partial}{\partial \sigma^{\alpha\beta}(x)}
\frac{\partial}{\partial \sigma^{\beta\alpha}(y)}
(1,``1")_{LQCD}\right)_{\hspace{-.15cm}\sigma=0}
+R^2\left(\frac{\partial}{\partial \sigma^{\alpha\alpha}(x)}\frac{\partial}{\partial \sigma^{\beta\beta}(y)}
(1,``1")_{LQCD}\right)_{\hspace{-.15cm}\sigma=0} \nonumber \\
&&\hspace{1cm}
-\;
R^2\left(\frac{\partial}{\partial \sigma^{\alpha\alpha}(x)} (1,``1")_{LQCD}\right)_{\hspace{-.15cm}\sigma=0}
\left(\frac{\partial}{\partial \sigma^{\beta\beta}(y)}
(1,``1")_{LQCD}\right)_{\hspace{-.15cm}\sigma=0}
\eqn{Gxy-valence}
\end{eqnarray}
Here and below the sea quark source $ s$ has been set to zero.
The contractions in \eq{Gxy-valence}
are shown in \figref{contractions}. The first term (multiplied by $R$) is
pictured in \figref{contractions}(a); the second term (multiplied by $R^2$), in \figref{contractions}(b).
Arbitrary numbers of gluon propagators and sea quark loops are implied, except that the
third term in \eq{Gxy-valence}
is taken into account by omitting disconnected contributions to
\figref{contractions}(b).
\begin{figure}[t]
\resizebox{6.0in}{!}{\includegraphics{contractions.eps}}
\caption{Valence quark contractions in the scalar propagator $G(x-y)$, corresponding
to \protect{\eq{Gxy-valence}}. The solid dots represent the source $ \sigma$.
Only the valence quark lines are shown; completely disconnected contributions
to (b) should be omitted.
\label{fig:contractions}}
\end{figure}
By the arguments of this paper, we should be able to calculate the
low-mass contributions to $G(x\!-\!y)$
using the appropriate S\raise0.4ex\hbox{$\chi$}PT. That theory is $(1,4,n_R)_\chi$,
with $n_R$ set to $1/4$ after the calculation to implement the replica
trick.
I append to $(1,4,n_R)_\chi$ the valence degrees of freedom
associated with the two flavors $\alpha$ and $\beta$ in \eq{Gxy-valence}, as well as the
corresponding two commuting ghost quarks.
Including taste degrees of freedom, the symmetry group of $(1,4,n_R)_\chi$ is then
$SU(4n_R+8|8)_L \times SU(4n_R+8|8)_R$.
Following the notation of \cite{AUBIN-BERNARD}, I define
a meson field $\Phi$, which is a $(4n_R+16) \times (4n_R+16)$ Hermitian matrix,
and the unitary matrix $\Sigma=\exp(i\Phi/f)$, where $f$ is the LO pion decay constant.
With $a$ and $b$ flavor indices, running over both valence and sea flavor,
we may write
\begin{equation}\eqn{phi-def}
\Phi_{ab} = \sum^{16}_{\Xi=1} \Phi_{ab}^\Xi\; t_\Xi\;,
\end{equation}
where the $ \Phi_{ab}^\Xi$ correspond to mesons of specific taste and flavor, and
$t_\Xi$ are the 16 taste generators
\begin{equation}\eqn{taste-generators}
\{t_\Xi\} = \{I,\;\xi_\mu,\; \xi_{\mu\nu}(\mu>\nu),\; \xi_{\mu}\xi_5,\; \xi_5\} \ ,
\end{equation}
with $\xi_\mu$ the $4\!\times\!4$ taste matrices that correspond to the Dirac gamma matrices.
All quarks (sea and valence) are degenerate,
with mass $m$.
$G(x\!-\!y)$ will be calculated at leading order (LO) in S\raise0.4ex\hbox{$\chi$}PT.
The valence source $ \sigma$ couples exactly like the valence mass term, giving a
contribution to the LO Euclidean chiral Lagrangian:
\begin{equation}\eqn{chiral-source-term}
\mathcal{L}_{\rm source} = -\frac{1}{4}\mu f^2\; \sigma_{\tau\rho}\;\textrm{tr}\!
\left(\Sigma_{\rho\tau}+\Sigma^\dagger_{\rho\tau}\right)\ ,
\end{equation}
where $\mu$ is the chiral condensate,
$\rho$ and $\tau$ are valence flavor indices (summed over valence-quark, but not ghost-quark,
flavors), and $\textrm{tr}$ indicates a trace over taste indices only. There are also terms
quadratic in $\sigma$ appearing in the next-to-leading order Lagrangian; they may be ignored
because they contribute only to contact terms in $G(x-y)$ to the order we are working.
To convert \eq{Gxy-valence} to the chiral level, we just replace
$(1,``1")_{LQCD}$ with $(1,4,n_R)_\chi$. Then, using \eq{chiral-source-term},
and expanding $\Sigma$ and $\Sigma^\dagger$ to second order in $\Phi$, we have
\begin{equation}\eqn{Gxy-chiral}
G(x\!-\!y) = R\mu^2 \Big\langle \Phi^\Xi_{\alpha a}(x)\,\Phi^\Xi_{a\beta}(x)\; \Phi^{\Xi'}_{\beta b}(y)
\,\Phi^{\Xi'}_{b\alpha}(y)\Big\rangle
+ R^2\mu^2 \Big\langle \Phi^\Xi_{\alpha a}(x)\,\Phi^\Xi_{a\alpha}(x)\; \Phi^{\Xi'}_{\beta b}(y)\,
\Phi^{\Xi'}_{b\beta}(y)\Big\rangle_{\rm conn}\ ,
\end{equation}
where there are implicit sums over the taste indices $\Xi$ and $\Xi'$, as well as
over the (generic) flavor indices $a$ and $b$,
but not over the valence flavor indices $\alpha\not=\beta$. The subscript ``conn'' on the second
term means that only those {\it meson}\/ contractions that connect the source points $x$ and $y$ should be
included.
This restriction arises from the cancellations due to the last term in \eq{Gxy-valence}.
(The first term in \eq{Gxy-chiral} does not need a ``conn'' subscript because the valence indices
require that all contractions connect $x$ to $y$.) Cancellations of the disconnected pieces are also
responsible for the absence in \eq{Gxy-chiral} of contributions from the ``1'' terms in
the expansion of $\Sigma+\Sigma^\dagger$.
\Figref{mesons} shows the LO (one-loop) meson diagrams contributing to \eq{Gxy-chiral}.
The crosses indicate the
presence of one or more ``hairpin'' vertices, which can appear on
flavor-neutral meson lines. In the quark-flow sense, propagators
without hairpin vertices are connected, while those with at least one
hairpin are disconnected. (See for example Fig.~1 in the first reference in \cite{AUBIN-BERNARD}.)
Note however that even a hairpin diagram is connected in the QCD (or meson) sense, since
gluons connect the two quark lines.
\begin{figure}[t]
\resizebox{6.0in}{!}{\includegraphics{mesons.eps}}
\caption{Lowest order S\raise0.4ex\hbox{$\chi$}PT\ meson diagrams coming from
\protect{\eq{Gxy-chiral}}, and corresponding to \protect{\figref{contractions}}.
As in \protect{\figref{contractions}}, a solid dot is a source, $ \sigma$.
The cross represents one or more insertions of a ``hairpin'' vertex, and hence indicates
a meson propagator that is disconnected as a quark-flow diagram.
\label{fig:mesons}}
\end{figure}
In S\raise0.4ex\hbox{$\chi$}PT, hairpin vertices are of two types:
The first is due to the anomaly and affects only taste-singlet meson propagators.
In the notation of Ref.~\cite{AUBIN-BERNARD}, it has
strength $4m_0^2/3$ for arbitrary numbers of flavors. The anomaly contribution
to the mass-squared of the $\eta'$ is proportional
to $m^2_0$, with the proportionality constant depending on the total number of flavors (more
precisely in this case, on the number of replicas $n_R$).
The second kind of hairpin is due to the taste-violating operators that appear in S\raise0.4ex\hbox{$\chi$}PT.
These hairpins affect only taste-vector and taste-axial-vector mesons at LO, and
have strength proportional to $a^2$. Due to the explicit factors of $a^2$, the contributions
of such taste-violating hairpins automatically vanish in the continuum limit. Since I
am interested in the restoration of physical unitarity in the
continuum, I omit the taste-violating hairpins here; they of
course are included in a complete LO calculation \cite{SCALAR}.
We now go to momentum space.
The (quark-flow) connected propagator carrying Euclidean momentum $p$ is
\begin{equation}\label{eq:connected-prop}
\left\langle\Phi^{\Xi}_{ab}(-p)\;\Phi^{\Xi'}_{b'a'}(p)\right\rangle_{\hspace{-.05cm}\rm conn} =
\frac{\delta_{a,a'}\,\delta_{b,b'}\, \delta_{\Xi,\Xi'}}{p^2 + M^2_\Xi } \ ,
\end{equation}
where $M_\Xi$ is
the tree-level mass of a taste-$\Xi$ meson
\begin{equation}\label{eq:pi-masses}
M_\Xi^2 = 2\mu m + a^2\Delta_\Xi\ ,
\end{equation}
with $\Delta_\Xi$ the taste splitting. All quarks are degenerate so there
is no need to specify the flavor in \eq{pi-masses}.
Keeping only the taste-singlet disconnected meson propagator, we have
\begin{equation}\label{eq:disconnected-prop}
\left\langle\Phi^{\Xi}_{ab}(-p)\;\Phi^{\Xi'}_{b'a'}(p)\right\rangle_{\hspace{-.05cm}\rm disc}=
\delta_{a,b}\,\delta_{b',a'}\, \delta_{\Xi, I}\,\delta_{\Xi',I}\, \mathcal{D}^I(p) \ ,
\end{equation}
where \cite{AUBIN-BERNARD}
\begin{equation}\label{eq:Disc}
\mathcal{D}^I(p) = -\frac{4m_0^2}{3}\, \frac{1}{(p^2+M_I^2)}\,
\frac{1}{(p^2+M^2_{\eta'_I})}\ ,
\end{equation}
with
\begin{equation}\eqn{metap}
M^2_{\eta'_I} = M^2_I + n_R \frac{4m_0^2}{3} \ .
\end{equation}
Note that, by definition,
$M_I$ is the mass of any taste-singlet meson before including the
effect of the anomaly hairpin. Thus all 16 of the masses listed in \eq{pi-masses}, including
$M_I$, become equal in the continuum limit.
The $\eta'_I$, on the other hand, is the one meson that is a flavor (more precisely, replica) singlet
as well as a taste singlet. Its mass $M_{\eta'_I}$ can be found either by diagonalizing
the complete LO mass matrix including the anomaly term, or by
summing the geometric series of hairpin interactions.
One could take the limit $m_0^2\to\infty$ in \eq{Disc} to decouple the $\eta'$,
since it is at least as heavy as particles we have integrated
out of the chiral theory (\textit{e.g.},\ the $\rho$). However, I prefer to leave $m_0$
finite so we may see explicitly how the $\eta'$ remains after all the light mesons cancel in
the continuum limit.
I now consider the meson contractions that contribute to \eq{Gxy-chiral}.
Although it is not necessary to use a quark-flow
picture here, since adjustment for the rooting is automatically taken into
account by setting $n_R=1/4$, quark flow gives a nice physical picture. In \figref{quark-flow},
I therefore show the
quark-flow diagrams that correspond to various meson contractions.
\Figref{quark-flow}(a), (b), and (c)
come from the $R$ term in \eq{Gxy-chiral}. Note that, like
\figref{contractions}(a) from which they arise,
\figref{quark-flow}(a), (b), and (c) have a single valence-quark loop connecting the sources
(shown by solid dots). Similarly, \figref{quark-flow}(d) and (e) come from the $R^2$ term
in \eq{Gxy-chiral}, and, like
\figref{contractions}(b), have two separate valence-quark loops.
\begin{figure}[t]
\resizebox{5.0in}{!}{\includegraphics{quark_flow.eps}}
\caption{Quark flow diagrams corresponding to the S\raise0.4ex\hbox{$\chi$}PT\ contributions of \protect{\figref{mesons}}.
Not shown are two additional diagrams that are very similar to (b) and (c) but have the
roles of valence quarks $\alpha$ and $\beta$ interchanged.
Diagrams (a) and (d) have no hairpin vertices and correspond to \protect{\figref{mesons}}(a);
diagrams (b) and (c) have one hairpin vertex and correspond to \protect{\figref{mesons}}(b);
while diagram (e), with two hairpin vertices, corresponds to \protect{\figref{mesons}}(c).
In meson lines with hairpin vertices, a summation of sea-quark loop insertions is implied.
\label{fig:quark-flow}}
\end{figure}
When $a$ and $b$ in \eq{Gxy-chiral} are sea quark flavors
$i$ and $j$,\footnote{I use Latin indices from the middle of the alphabet ($i$, $j$, \dots) for
sea quark flavors (replicas, here), Greek indices ($\alpha$, $\beta$, \dots) for fermionic valence quark flavors, and Latin indices
from the beginning of the alphabet ($a$, $b$, \dots) for generic valence or sea flavors.} connected meson
propagators are only possible for the term proportional to $R$ in \eq{Gxy-chiral}, and require
$i=j$.
(In the $R^2$ term the flavors do not match up.) This generates \figref{quark-flow}(a), which
is proportional to $n_R$ due to the sum over sea quark flavors $j$.
When $a$ and $b$ are valence quark indices, connected contractions like those in \figref{quark-flow}(a) are also
possible, but there is a cancellation between valence quarks and ghost quarks, which follows in the
quark-flow picture from
the fact that \figref{quark-flow}(a) has a virtual loop.
One additional contraction with only connected
meson propagators comes from the $R^2$ term in
\eq{Gxy-chiral} when $a=\beta$ and $b=\alpha$. In the quark flow picture, this gives diagram
\figref{quark-flow}(d), which is constructed entirely from valence quarks and therefore generates
no factors of $n_R$.
Contractions with a single disconnected meson
propagator are generated only by the $R$ term in \eq{Gxy-chiral}.
It gives diagram \figref{quark-flow}(b) or
the $\alpha\leftrightarrow\beta$ version
when $a=b=\alpha$ or $a=b=\beta$, respectively. Similarly, it gives diagram
\figref{quark-flow}(c) or the $\alpha\leftrightarrow\beta$ version
when $a=\alpha$, $b=\beta$ or $a=\beta$, $b=\alpha$, respectively. These four terms, which correspond
at the meson level to \figref{mesons}(b), can be seen to have the same numerical value after
adjusting the loop momentum assignment.
Finally, the $R^2$ term in \eq{Gxy-chiral} gives diagram \figref{quark-flow}(e)
when $a=\alpha$ and $b=\beta$. There is an overall symmetry factor of $2$ in this case.
We can now add the various contributions to $\tilde G(q)$, the Fourier transform
of $G(x-y)$, resulting in:
\begin{eqnarray}
\tilde G(q) &=& \mu^2 \int \frac{d^4 p}{(2\pi)^4}\;\left\{\vbox to 24pt{}\left(Rn_R + R^2\right)
\sum_\Xi \frac{1}{\left(p^2+M^2_\Xi\right)\left(\left(p+q\right)^2+M^2_\Xi\right)} \right.
\nonumber \\
&&-\; \frac{2R\left(4m_0^2/3\right)}{\left(p^2+M^2_I\right)\left(\left(p+q\right)^2+M^2_I\right)}
\left(\frac{1}{p^2+M^2_{\eta'_I}}
+ \frac{1}{\left(p+q\right)^2+M^2_{\eta'_I}}\right) \nonumber \\
&&+\; \left.\vbox to 24pt{}\frac{2R^2\left(4m_0^2/3\right)^2}{\left(p^2+M^2_I\right)\left(\left(p+q\right)^2+M^2_I\right)
\left(p^2+M^2_{\eta'_I}\right)\left(\left(p+q\right)^2+M^2_{\eta'_I}\right)}\; \right\}\ .
\eqn{Gq-raw}
\end{eqnarray}
The first line in \eq{Gq-raw} comes from \figref{quark-flow}(a) and (d), which give the
$Rn_R$ and the $R^2$ terms, respectively; the second line, from \figref{quark-flow}(b), (c),
and their $\alpha\leftrightarrow\beta$ versions; the last line, from \figref{quark-flow}(e).
Note that the negative sign of the anomaly hairpin,
\eq{Disc}, makes the second
line negative and leads to the possibility of cancellations among the various light pions.
It is not an accident that the hairpin is negative: It is required in order to
give a positive mass to the $\eta'$ when the geometric series of
insertions is summed.
Using, from \eq{metap},
\begin{equation}\eqn{m0-rewrite}
\frac{4m_0^2}{3}
= \frac{1}{n_R}\left[ \left(p^2 + M^2_{\eta'_I}\right) - \left(p^2 + M^2_I\right)\right]
= \frac{1}{n_R}\left[ \left(\left(p+q\right)^2 + M^2_{\eta'_I}\right) - \left(\left(p+q\right)^2 + M^2_I\right)\right],
\end{equation}
one can rewrite \eq{Gq-raw} in a form that shows more clearly how the continuum limit works:
\begin{eqnarray}
\tilde G(q) &=& \mu^2 \int \frac{d^4 p}{(2\pi)^4}\;\left\{\vbox to 24pt{}
\frac{2R^2}{n_R^2}
\frac{1}{\left(p^2+M^2_{\eta'_I}\right)\left(\left(p+q\right)^2+M^2_{\eta'_I}\right)}
\right. \nonumber \\
&&\hspace{-1.5cm}+\left(Rn_R + R^2\right)
\sum_\Xi \frac{1}{\left(p^2+M^2_\Xi\right)\left(\left(p+q\right)^2+M^2_\Xi\right)}
-\left(\frac{4R}{n_R}-\frac{2R^2}{n_R^2}\right)
\frac{1}{\left(p^2+M^2_I\right)\left(\left(p+q\right)^2+M^2_I\right)}\nonumber \\
&&\hspace{-1.5cm}+\left.\vbox to 24pt{}
\left(\frac{2R}{n_R}-\frac{2R^2}{n_R^2}\right)
\left(\frac{1}{\left(p^2+M^2_I\right)\left(\left(p+q\right)^2+M^2_{\eta'_I}\right)}
+\frac{1}{\left(p^2+M^2_{\eta'_I}\right)\left(\left(p+q\right)^2+M^2_I\right)} \right)
\right\}.
\eqn{Gq}
\end{eqnarray}
Setting $R=1/4=n_R$, the last line of \eq{Gq} vanishes immediately. The light pions
then contribute only in the second line. In the continuum limit all 16 of the
light masses $M_\Xi$ become degenerate, and the two terms on the second line
cancel also. The remainder, the first line, comes from the exchange of two
heavy singlet mesons ($\eta'_I$), and indeed has the same normalization as would be found
for this correlation function using a continuum one-flavor chiral theory.
This resolves the apparent one-flavor paradox, showing that it does not
provide a counterexample to the arguments of this paper.
\section{Consequences}
\label{sec:consequences}
\subsection{Health of the Rooted Theory}
\label{sec:health}
With the usual assumption that
taste symmetry is restored in the continuum limit for unrooted staggered
quarks, $(n_F,4,n_R)_{\chi}$ becomes ordinary chiral perturbation theory
for $4n_F\cdot n_R$ ``flavors'' in the continuum limit. This follows immediately
from the fact that, for a given combination of quark flavors, all 16 taste
pions become degenerate in the continuum limit (before the effects of the
anomaly are included, which affects only the taste singlet, flavor singlet
meson, as always). Then taking $n_R\to{\scriptstyle \raise.15ex\hbox{${1\over4}$}}$ order by order
necessarily produces standard, continuum \raise0.4ex\hbox{$\chi$}PT\ for $n_F$ flavors.
All existing S\raise0.4ex\hbox{$\chi$}PT\ calculations \cite{LEE-SHARPE,AUBIN-BERNARD,PRELOVSEK,HEAVYLIGHT,SCHPT-OTHER,SCHPT-BARYONS}
have this expected continuum limit.
The statement that $(n_F,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_{\chi}$ is the correct chiral theory for
$(n_F,``1")_{LQCD}$ (for $n_F\le4$) therefore has important implications for the validity of the
rooting procedure itself.
Since S\raise0.4ex\hbox{$\chi$}PT\ becomes standard \raise0.4ex\hbox{$\chi$}PT\ in the continuum limit,
the low energy
(light pseudoscalar meson) sector of $n_F$-flavor lattice QCD with rooted staggered quarks is, in the
continuum limit, indistinguishable in its structure from
that of ordinary $n_F$-flavor QCD. There are no violations of unitarity, and
no introduction of unphysical nonlocal scales.
Of course, the chiral perturbation theory arguments presented in this paper
do not address possible sickness due to rooting that would appear in
sectors of the theory not described by \raise0.4ex\hbox{$\chi$}PT. Nevertheless, the extension
of my arguments to at least some sectors other than that of the light pseudoscalar mesons
seems possible. In particular, heavy-light physics can be described
by the addition of a {\it valence}\/ heavy quark with a nonstaggered action to the
existing S\raise0.4ex\hbox{$\chi$}PT\ framework \cite{HEAVYLIGHT}. As such, the arguments in \secref{details}
should also apply in the heavy-light case with $n_F=4$ sea quarks, implying that it too is
free from unphysical effects in
the continuum limit. Further, I see no obvious problems
with an extension to $n_F<4$,
since the heavy quark can be treated by heavy-quark effective theory at both the QCD and the chiral level,
and thus does not introduce a new scale that would interfere with decoupling.
The case of baryons, described by staggered heavy-baryon
chiral perturbation theory \cite{SCHPT-BARYONS}, also
seems straightforward for $n_F=4$. However, the artifice of increasing the number of
colors at the QCD level is not applicable in this case, because it changes the nature
of the baryons. Therefore, any counting arguments like those in \secref{expansion} would need
to be performed at the chiral level only. In addition, it is not obvious that one can use
decoupling to analyze the $n_F<4$ cases,
since we would now have the baryon mass scale at the QCD level between
$700\,{\rm Me\!V}$ and $1/a$.
One {\it caveat} should be added to the discussion of this section:
Since the chiral expansion expresses physical quantities
in terms of unknown LECs, the statement that S\raise0.4ex\hbox{$\chi$}PT\ is valid
does not by itself$\,$ imply that the LECs generated by the rooted staggered
theory take on their correct (real QCD) values in the continuum limit.
On the other hand, in the four-flavor case we do know that the LECs are
correct in the degenerate
case. This follows from universality since the degenerate action is local.
But the LECs are by definition mass independent, so if S\raise0.4ex\hbox{$\chi$}PT\ is
indeed the right chiral theory for four nondegenerate flavors, the
LECs are {\it perforce}\/ also correct. With fewer than four flavors, though,
my assumptions on decoupling do not appear to be strong enough to continue
to guarantee correct LECs. For that one
would need to show universality at the lattice QCD level (see for example Ref.~\cite{SHAMIR}),
or to argue from the agreement of rooted staggered simulations with experiment \cite{PRL}.
Of course, numerical checks against experiment are not proofs, and they
run the risk, at least in principle, of confounding small violations of
universality due to rooted staggered quarks with small violations of the Standard Model.
Such checks will become more convincing when one can see agreement
between at least two different lattice fermion approaches.
\subsection{A ``Mixed'' Theory?}
\label{sec:mixed}
In current dynamical staggered simulations \cite{MILC}, the fourth-root trick
is applied to the sea quarks, while the valence quarks are described by ordinary staggered
fields. In this section, I call this situation a ``rooted-staggered theory''
for simplicity. Because valence and sea quarks are treated differently,
it has been suggested \cite{KENNEDY} that rooted-staggered theories
fall into the class called ``mixed,'' where the valence and sea quarks
have fundamentally different lattice actions.
In mixed theories the mass
renormalizations of sea and valence quarks are different, meaning in particular
that there is no simple way to ensure that sea and valence quarks
have the same physical mass. Further, the continuum
symmetries that would rotate valence and sea quark into each other are violated
by discretization effects. This implies, for example,
that even if the quark masses are adjusted to make the mass of
a meson with two valence quarks equal to the mass of a meson with two sea quarks, the mass of
a meson with one valence and one sea quark will be different by
$\mathcal{O}(a)$ or $\mathcal{O}(a^2)$ terms. Such terms show up
in new operators in the \raise0.4ex\hbox{$\chi$}PT\ for the mixed theory \cite{MIXED}.
I claim, however, that the rooted-staggered case
is not a mixed case, but in fact resembles much more closely a partially
quenched theory, where the symmetries between valence and sea quarks are violated
only by explicit differences in quark masses.
First of all, I sketch a proof that the renormalizations
of sea and valence quark masses are the same to all orders in (weak-coupling) perturbation theory.
Imagine we have determined the mass counterterm for a valence quark up to and including a given order
in perturbation theory. I need to show that the same mass counterterm will
work to renormalize the mass on a sea quark line that appears as a loop inside some
other diagram.
Go inside the diagram, and
draw a box around a self-energy insertion on a sea quark line.
As remarked in \secref{replica}, the replica trick shows that the rooting procedure
simply multiplies each sea quark loop by $1/4$ in perturbation theory, so the self-energy insertion
as well as the associated mass counterterms on that line are all multiplied
by the same overall factor of $1/4$, compared to the corresponding self-energy insertion
and mass counterterms on a valence line.
Thus the same counterterms work in both cases.
Of course there may be additional factors of 1/4 for any sea quark loops that appear in
sub-sub-diagrams. But these will be the same for a sea quark line as for a valence
line.
The argument in the preceding paragraph is based on weak-coupling perturbation theory.
Could there be ``mixed effects'' that show up only nonperturbatively? I cannot
answer that question for general nonperturbative effects, but I can answer it
--- modulo the assumptions
in \secrefs{details}{extension} ---
for the large
class of effects described by the chiral theories.
The appropriate chiral
theory is $(n_F,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$, which is obtained order by order from
$(n_F,4,n_R)_\chi$. The latter theories have symmetries
interchanging
valence and sea quarks. For $n_V$ flavors of valence staggered quarks,
the full symmetry group is in fact
$SU(4n_Rn_F \hspace{-.05cm}+\hspace{-.05cm} 4n_V|4n_V)_L \times SU(4n_Rn_F \hspace{-.05cm}+\hspace{-.05cm} 4n_V|4n_V)_R$. The taste
symmetries are broken on the lattice at $\mathcal{O}(a^2)$, but the ``flavor subgroup''\footnote{This flavor
subgroup is described in the first paper in Ref.~\cite{AUBIN-BERNARD}, but is there
called the ``residual chiral group.'' It has been generalized here to take into account
the partially quenched context.}
$U(n_Rn_F \hspace{-.05cm}+\hspace{-.05cm} n_V|n_V)_\ell \times U(n_Rn_F \hspace{-.05cm}+\hspace{-.05cm} n_V|n_V)_r$ is exact
up to quark mass terms.
Extra chiral operators that would split valence-sea mesons from sea-sea or
valence-valence mesons are forbidden by these symmetries.
Since such operators are absent for all $n_R$, they can have no effect when we take $n_R\to{\scriptstyle \raise.15ex\hbox{${1\over4}$}}$.
In particular, corresponding sea-sea, valence-valence, and valence-sea mesons are all degenerate
(when the quark masses are degenerate) in $(n_F,4,n_R)_\chi$, and therefore in $(n_F,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$.
Thus, at least within the context of chiral perturbation theory, the rooted-staggered theory
behaves like a partially quenched theory, {\it not}\/ like a mixed theory.
One does have to be careful in defining the word ``corresponding'' in the previous
paragraph. The valence sector of a rooted-staggered theory is ``richer'' than the sea sector, in that it
includes particles in the continuum limit whose sea-sector analogues have decoupled
from the physical theory.
This is not surprising, since the purpose of the fourth root is to reduce four
sea quark tastes to one, and there is no fourth root taken in the valence sector.
A simple example of this behavior can be seen from the result in \secref{one-flavor}.
If one adds together the valence contractions in \eq{Gxy-valence} without the
extra factor of $R$ relating the last two terms to the first, then one gets a
valence Green's function with no sea-quark analogue.
Light (pseudo-Goldstone)
mesons will appear as intermediate states of this Green's function in the continuum limit.
In this sense, the rooted-staggered theory
is inherently ``partially quenched,'' even in the limit of equal valence
and sea masses. In a normal partially quenched theory, one can always take more
valence quarks than there are sea quarks, so one has the option of creating valence
states that have no analogue in the sea-quark sector. The main difference here is that
one has no choice in the matter:
The physical sea-quark subspace is always a proper subspace of the complete theory in the
continuum limit.
\section{Conclusions and Discussion}
\label{sec:conclusions}
Under certain assumptions that I repeat below, I have shown in this paper that staggered
chiral perturbation theory (S\raise0.4ex\hbox{$\chi$}PT) correctly describes the low energy physics of four
or fewer flavors of rooted staggered quarks. The S\raise0.4ex\hbox{$\chi$}PT\ theory $(n_F,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$
takes into account the fourth-root procedure by the replica trick (or equivalently,
by quark-flow analysis). At finite lattice spacing, S\raise0.4ex\hbox{$\chi$}PT\ reproduces unphysical
features of the rooting that may perhaps best be described as violations of unitarity,
with unwanted intermediate states contributing to amplitudes. This is clearly
seen in Ref.~\cite{PRELOVSEK} or in the example presented in \secref{one-flavor}
\cite{SCALAR}.
Because S\raise0.4ex\hbox{$\chi$}PT\ becomes standard \raise0.4ex\hbox{$\chi$}PT\ in the continuum limit,
the unitarity violations seen in S\raise0.4ex\hbox{$\chi$}PT\ at nonzero $a$ must go away when $a\to0$.
If S\raise0.4ex\hbox{$\chi$}PT\ is indeed the correct chiral theory for rooted staggered quarks, then
this implies that
the low energy (pseudoscalar meson) sector of lattice QCD with rooted staggered quarks is, in the
continuum limit, indistinguishable in its structure from
that of ordinary QCD. There are no violations of unitarity, and
no introduction of unphysical nonlocal scales.
This would not, by itself, show that the rooting procedure is valid, because
there could be problems in sectors of the theory not described by
chiral perturbation theory. Nevertheless, it would significantly reduce the possible ways
in which rooted staggered quarks could go wrong.
My S\raise0.4ex\hbox{$\chi$}PT\ results also give support to the statement that
the theory with staggered valence quarks and rooted staggered sea quarks is not
a ``mixed'' theory. Like a partially quenched theory with the same action
for the valence and sea quarks, the rooted staggered theory has flavor symmetries
rotating sea and valence quarks into each other. These symmetries may be
broken in the usual way by mass terms, but they are not broken by lattice corrections.
The starting point of my argument was the observation that four flavors of
degenerate staggered quarks simply reduce to a single flavor when the
fourth root of the determinant is taken. To make use of this observation,
I needed several assumptions, the most important of which are:
\begin{itemize}
\item[1)\ ]{} The taste symmetry is restored in the continuum limit of the
normal, unrooted staggered theory.
\item[2)\ ]{} The difference $V[s]$ between the S\raise0.4ex\hbox{$\chi$}PT\ theory for four flavors $(4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$
and the true chiral theory $(4,``1")_\chi$ is analytic in $s$ (for space-time
independent $s$), up to possible isolated
singularities.
\item[3)\ ]{} As a single quark mass (``charm'') is
increased beyond the point at which it has decoupled from the chiral theory
to a scale much larger than the
lattice cutoff, the low energy physics
of $(4,``1")_{LQCD}$ is unaffected, except perhaps by renormalizations of the LECs.
\end{itemize}
I consider assumption 1) to be noncontroversial, and there is a lot
of numerical evidence for it, but it has not been rigorously proven.
The renormalization group methods of Shamir \cite{SHAMIR} seem to me the best way to
make progress on this issue.
Assumption 2) is used in \secref{details} to move from degenerate
to nondegenerate masses in the four-flavor case.
The most important ``obstruction'' here would seem to be the possible existence of an essential
singularity in $V[s]$ at $s=0$; I speculate below on how this possibility might be eliminated.
Note that the existence of such a singularity
would immediately imply that S\raise0.4ex\hbox{$\chi$}PT\ is incorrect. A second way the assumption could
be violated would be the presence of a phase boundary at a finite distance from $s=0$. This would
imply the existence of a region of mass differences in which S\raise0.4ex\hbox{$\chi$}PT\ is valid, and another
region of larger mass differences in which S\raise0.4ex\hbox{$\chi$}PT\ is invalid. Generically, I would expect
an abrupt change like this to cause significant effects that would likely have been noticed in
simulations if they occurred within the parameter ranges studied.
Both types of potential analyticity violations certainly merit further investigation, however.
Assumption 3) allows me in \secref{extension} to extend the result in the four-flavor case
to the more interesting cases with fewer than four light flavors.
It should be possible to test this assumption
numerically by simulating a four-flavor theory in the
appropriate mass range and seeing if it is
describable at low energy by the proper chiral theory with a decoupled
charm quark, $(3,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$.
Such tests are under consideration by the MILC Collaboration and may be
performed in the near future. The main uncertainty is the precision at
which these tests could be made, which would strongly influence how
convincing they would be.
\bigskip
\bigskip
To investigate a possible essential singularity,
I restrict myself to diagonal sources, constant in space-time. In other
words, I consider a function $V$ of the four mass differences
from the degenerate mass $\bar m$. To correspond with the previous notation, I
write $V=V(\hat s)$, with $\hat s^{ij} \equiv \delta^{ij} \epsilon_j$ and $\epsilon_j = m_j-\bar m$.
Considering $\hat s$ to be complex, the arguments in \secref{expansion} still go through
formally, although one may want to put the system in finite volume
to avoid any dangers from $\int d^4x$ with constant sources. We then have a complex function
$V(\hat s)$, all
of whose complex derivatives vanish at $\hat s=0$.
This would forbid essential
singularities in $V(\hat s)$, which do not have well-defined complex derivatives.
What would be needed to make such an argument reasonably rigorous?
On the S\raise0.4ex\hbox{$\chi$}PT\ side, we are defining $(4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$
in (chiral) perturbation theory, so I do not expect problems at any finite
order in adding
small, complex $\epsilon_j$ to the masses in Euclidean space.
On the other hand, we do not know what
$(4,``1")_\chi$ is {\it a priori}\/, so we would need to add $\epsilon_j$ to the masses
in the QCD-level theory $(4,``1")_{LQCD}$.
The main problem there
seems to be that a complex $\hat s$ makes the determinant complex. The issue
of how one chooses the phase of the fourth root thus becomes relevant, as it
is for the case of nonzero chemical potential \cite{CHEMICAL-POTENTIAL}. Unlike
the chemical potential case, however, the imaginary
part of $\epsilon_j$ adds a constant amount to all eigenvalues of flavor $j$.
Furthermore, $\epsilon_j$ may be taken very small, \textit{i.e.},\ much less
than both $\bar m$ and $\Lambda_{QCD}$.
I am hopeful therefore that any phase ambiguities can be shown to be manageable,
but that remains to be seen.
A further difficulty could come in trying to ``match'' $(4,``1")_{LQCD}$ onto
$(4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$ in order to turn statements about smoothness of each theory
separately into statements about $V[s]$.
I conclude with two additional comments:
\begin{itemize}
\item{} Because the ``wrong'' mesons, which may be lighter than the physical
states, contribute to correlation functions at nonzero lattice spacings in S\raise0.4ex\hbox{$\chi$}PT,
the infinite-distance limit of some quantities may not commute with the
continuum limit. This order of limits issue is very similar to
that concerning the chiral and continuum limits, described in Ref.~\cite{LIMITS}.
It is not a practical problem, since of course only finite distances are relevant
to simulations, and the extrapolation can be taken in the proper order with the aid of
S\raise0.4ex\hbox{$\chi$}PT.
\item{}
There is nothing in my argument that $(4,4,{\scriptstyle \raise.15ex\hbox{${1\over4}$}})_\chi$ is the
correct chiral theory for four flavors of fourth-rooted staggered quarks
that is really dependent on the fact that we are taking the {\it fourth}\/ root.
The same arguments would also imply that $(3,4,{\scriptstyle \raise.15ex\hbox{${1\over3}$}})_\chi$ is
the chiral theory for three flavors of staggered quarks for which the
{\it third}\/ root is taken! The decoupling arguments in
\secref{extension} (which presumably still apply) would say further
that $(n_F,4,{\scriptstyle \raise.15ex\hbox{${1\over3}$}})_\chi$ gives the chiral theory for $n_F<3$ flavors
of third-rooted staggered fields. There is no contradiction here:
$(n_F,4,{\scriptstyle \raise.15ex\hbox{${1\over3}$}})_\chi$ describes a sick theory,
{\it even in the continuum limit}\/, except for the uninteresting
case of 3 degenerate flavors. Since a staggered
field always has four tastes, only a fourth (or square) root
can describe an integer number of flavors (and therefore a local
action) in the continuum limit.
The $n_F=1$ example from \secref{one-flavor} provides a simple illustration:
the contributions from light pions in the second and third lines of \eq{Gq}
vanish in the continuum if and only if $n_R=R=1/4$. (I ignore the trivial case $R=0$, as well as
$n_R=R=-1/4$, which violates the spin-statistics theorem.)
Even $n_R=R=1/2$ leaves some light-pion
contributions, as it should since that is really a two-flavor case.
\end{itemize}
\section*{ACKNOWLEDGMENTS}
I am very grateful to Maarten Golterman and Yigal Shamir for extensive discussions,
invaluable suggestions, and careful readings of the manuscript,
as well as for a question by Maarten on whether the rooted staggered case should be considered
a mixed theory, which started me thinking along the present lines.
I also thank Jon Bailey,
Carleton DeTar, Urs Heller, Andreas Kronfeld, as well as other MILC and
Fermilab colleagues, for helpful comments and questions. Finally, I am indebted
to Carl Bender for discussions on analyticity and high orders of perturbation theory.
This work is
partially supported by the US Department of Energy under grant
DE-FG02-91ER40628.
\section{Introduction}
The inflationary paradigm, which was originally proposed in \cite{oldinf},
is now widely accepted as a viable phenomenology describing the cosmic acceleration
in the very early Universe. The simplest inflationary scenario based on
a single scalar field predicts the generation of nearly scale-invariant and
adiabatic density perturbations \cite{oldper}. This prediction is in agreement with
the temperature fluctuations of the Cosmic Microwave Background (CMB) observed
by the WMAP \cite{WMAP} and Planck \cite{Planck} satellites.
The WMAP data showed that there is an anomaly associated with the
broken rotational invariance of the CMB perturbations \cite{aniobser}.
This implies that the statistical isotropy of the power spectrum of
curvature perturbations is broken, which is difficult to be addressed
in the context of the simplest single-field inflationary scenario.
Although we cannot exclude the possibility that some systematic
effects cause this anisotropy \cite{Hanson}, it is worth exploring
the primordial origin of such a broken rotational invariance.
If the inflaton field $\phi$ couples to a vector kinetic term $F_{\mu \nu}F^{\mu \nu}$,
an anisotropic hair can survive during inflation for a suitable choice
of the coupling $f^2(\phi)$ \cite{Watanabe}.
In such cases, the presence of the vector field
gives rise to the anisotropic power spectrum
consistent with the broken rotational invariance of
the CMB perturbations \cite{Gum,Watanabe:2010fh}
(see also Refs.~\cite{Yoko}-\cite{Rodriguez} for related works).
In addition, the models predict a detectable level of non-Gaussianities
for the local shape averaged over all directions with respect to
a squeezed wave number \cite{Bartolo,Shiraishi}.
In the two-form field models where the inflaton
couples to the kinetic term $H_{\mu \nu \lambda}H^{\mu \nu \lambda}$
the anisotropic hair can also survive \cite{Ohashi}, but their observational signatures
imprinted in CMB are different from those in the vector model \cite{Ohashi2}.
For a canonical inflaton field with the potential $V(\phi)$, the
energy density of a vector field can remain nearly constant for the coupling
$f(\phi)=\exp[\int 2V/(M_{\rm pl}^2V_{,\phi})d\phi]$ \cite{Watanabe},
where $V_{,\phi}=dV/d\phi$.
For the exponential potential $V(\phi)=ce^{-\lambda \phi/M_{\rm pl}}$
the coupling is of the exponential form $f(\phi)=e^{-2\phi/(\lambda M_{\rm pl})}$,
as it often appears in string theory and supergravity \cite{Ratra}.
In this case there exists an anisotropic power-law inflationary attractor along which
the ratio $\Sigma/H$ is constant \cite{Kanno10}, where $\Sigma$ is an anisotropic
shear and $H$ is an isotropic expansion rate.
For general slow-roll models in which the cosmic acceleration
comes to end, the solution with an anisotropic hair corresponds to
a temporal attractor during inflation \cite{Watanabe}.
There exists another inflationary scenario based on
the scalar-field kinetic energy $X=-(\partial \phi)^2/2$
with the Lagrangian $P(\phi,X)$ -- dubbed k-inflation \cite{kinf}.
The representative models of k-inflation are the (dilatonic) ghost
condensate \cite{Arkani,Piazza} and
the Dirac-Born-Infeld (DBI) model \cite{DBI}.
In such cases the evolution of the inflaton can be
faster than that of the standard slow-roll inflation, so the coupling
$f(\phi)$ with the vector field can vary more significantly.
It remains to be seen whether the anisotropic hair survives in k-inflation.
This is important to show the generality of anisotropic inflation.
In Refs.~\cite{Piazza,Sami} it was found that in the presence of
a scalar field and a barotropic perfect fluid the condition for the existence of scaling
solutions restricts the Lagrangian of the form $P(\phi,X)=Xg(Y)$, where $g$ is an
arbitrary function in terms of $Y=X e^{\lambda \phi/M_{\rm pl}}$
and $\lambda$ is a constant.
On the flat Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) background
there exists a scalar-field dominated attractor responsible for inflation
under the condition $\lambda^2<2\,\partial P/\partial X$ \cite{Tsuji06,Ohashi2011}.
In fact, the Lagrangian $P(\phi,X)=Xg(Y)$ covers a wide class of
power-law inflationary scenarios such as the canonical scalar field
with the exponential potential ($g(Y)=1-cM_{\rm pl}^4/Y$), the dilatonic ghost
condensate ($g(Y)=-1+cY/M_{\rm pl}^4$), and the DBI
model ($g(Y)=-(m^4/Y)\sqrt{1-2Y/m^4}-M^4/Y$).
There is also another power-law inflationary scenario studied in Ref.~\cite{Unnikrishnan:2013vga}.
In the presence of a vector kinetic term $F_{\mu \nu}F^{\mu \nu}$
with the coupling $f(\phi)=f_0e^{-\mu \phi/M_{\rm pl}}$, the canonical scalar
field with the exponential potential $V(\phi)=ce^{-\lambda \phi/M_{\rm pl}}$
gives rise to stable anisotropic inflationary solutions under the
condition $\lambda^2+2\mu \lambda-4>0$ \cite{Kanno10}.
For the power-law DBI inflation it was shown that the anisotropic
hair can survive under certain conditions \cite{Kao}
(see also Ref.~\cite{Tachyon} for the power-law tachyon inflation).
In this paper we study the existence and the stability of anisotropic
fixed points for the general Lagrangian $P(\phi,X)=Xg(Y)$.
Remarkably, if anisotropic inflationary fixed points exist, they are
stable irrespective of the forms of $g(Y)$ in the regime
where the anisotropy is small ($\Sigma/H \ll 1$).
This paper is organized as follows.
In Sec.~\ref{backsec} we derive the equations of motion for the Lagrangian
$P(\phi,X)$ on the anisotropic cosmological background.
In Sec.~\ref{fixedsec} we obtain anisotropic fixed points for the
Lagrangian $P=Xg(Y)$ and discuss their stability against
homogeneous perturbations.
In Sec.~\ref{concretesec} we apply our general results to concrete
models of power-law inflation and numerically confirm the existence
of stable anisotropic solutions.
Sec.~\ref{consec} is devoted to conclusions.
\section{Background equations of motion}
\label{backsec}
Let us consider the theories described by the action
\begin{equation}
S=\int d^4x\sqrt{-g_M}\left[ \frac{M_{\rm pl}^2}{2}R
+P(\phi,X)-\frac{1}{4} f(\phi )^2 F_{\mu\nu}F^{\mu\nu}
\right]\,,
\label{eq:action}
\end{equation}
where $g_M$ is the determinant of the metric $g_{\mu \nu}$,
$R$ is the scalar curvature, and $P(\phi, X)$ is a function with
respect to the inflaton $\phi$ and its derivative
$X=-(1/2)g^{\mu \nu} \partial_{\mu} \phi \partial_{\nu} \phi$.
The field $\phi$ couples to a vector kinetic term
$F_{\mu\nu}F^{\mu\nu}$, where the vector field $A_{\mu}$
is related to $F_{\mu\nu}$ as $F_{\mu\nu} =
\partial_\mu A_\nu - \partial_\nu A_\mu$.
Choosing the gauge $A_0=0$, we can take the $x$-axis for
the direction of the vector field, i.e.,
$A_{\mu}=(0,v(t),0,0)$, where $v(t)$ is a function of the cosmic time $t$.
Since there is the rotational symmetry in the $(y,z)$ plane,
we take the line element of the form
\begin{equation}
ds^2 = -{\cal N}(t)^2 dt^2 + e^{2\alpha(t)} \left[ e^{-4\sigma (t)}dx^2
+e^{2\sigma (t)}(dy^2+dz^2) \right] \ ,
\label{anisotropic-metric}
\end{equation}
where ${\cal N}(t)$ is the lapse function,
$e^\alpha \equiv a$ and $\sigma$ are the isotropic
scale factor and the spatial shear, respectively.
For this metric the action (\ref{eq:action}) reads
\begin{equation}
S=\int d^4 x \frac{e^{3\alpha}}{{\cal N}} \left[ 3M_{\rm pl}^2
(\dot{\sigma}^2-\dot{\alpha}^2)+{\cal N}^2 P(\phi,X({\cal N}))
+\frac12 f(\phi)^2 e^{-2\alpha+4\sigma} \dot{v}^2 \right]\,,
\label{actioncon}
\end{equation}
where a dot represents a derivative with respect to $t$,
and $X({\cal N})=\dot{\phi}^2 {\cal N}^{-2}/2$.
The equation of motion for the field $v$ following from
the action (\ref{actioncon}) is integrated to give
\begin{equation}
\dot{v}=p_A\,f(\phi)^{-2} e^{-\alpha-4\sigma}\,,
\end{equation}
where $p_A$ is an integration constant.
Varying the action (\ref{actioncon}) with respect to
${\cal N}$, $\alpha$, $\sigma$, $\phi$, and setting
${\cal N}=1$, it follows that
\begin{eqnarray}
& &H^2=\dot{\sigma}^2+\frac{1}{3M_{\rm pl}^2}
\left[ 2XP_{,X}-P+\frac{p_A^2}{2}f(\phi)^{-2} e^{-4\alpha-4\sigma} \right]\,,
\label{be1} \\
& &\ddot{\alpha}=-3\dot{\sigma}^2-\frac{1}{M_{\rm pl}^2}
\left[ XP_{,X}+\frac{p_A^2}{3} f(\phi)^{-2} e^{-4\alpha-4\sigma} \right]\,,
\label{be2} \\
& & \ddot{\sigma}=-3\dot{\alpha} \dot{\sigma}+
\frac{p_A^2}{3M_{\rm pl}^2}f(\phi)^{-2} e^{-4\alpha-4\sigma}\,,
\label{be3} \\
& &(P_{,X}+2XP_{,XX})\ddot{\phi}+3P_{,X} \dot{\alpha}\dot{\phi}
+P_{,X \phi} \dot{\phi}^2-P_{,\phi}
-p_A^2f(\phi)^{-3}f_{,\phi} (\phi)e^{-4\alpha-4\sigma}=0\,,
\label{be4}
\end{eqnarray}
where $H\equiv \dot{\alpha}=\dot{a}/a$ is the Hubble expansion rate,
and $P_{,X} \equiv \partial P/\partial X$ etc.
We define the energy densities of the inflaton
and the vector field, respectively, as
\begin{equation}
\rho_{\phi} \equiv 2XP_{,X}-P\,,\qquad
\rho_A \equiv \frac{p_A^2}{2} f(\phi)^{-2}e^{-4\alpha -4\sigma} \,.
\label{rhoA}
\end{equation}
In order to sustain inflation, we require the condition $\rho_{\phi} \gg \rho_A$.
Since the shear term $\Sigma \equiv \dot{\sigma}$ should be suppressed
relative to $H$, Eq.~(\ref{be1}) reads
\begin{equation}
H^2 \simeq \frac{\rho_{\phi}}{3M_{\rm pl}^2}\,.
\label{Friap}
\end{equation}
On using the slow-roll parameter $\epsilon \equiv -\dot{H}/H^2$,
Eq.~(\ref{be2}) can be written as
\begin{equation}
\epsilon=\frac{3\dot{\sigma}^2}{H^2}+\frac{XP_{,X}}{M_{\rm pl}^2 H^2}
+\frac{2\rho_A}{3M_{\rm pl}^2 H^2}\,.
\label{epdef}
\end{equation}
Each term on the r.h.s. of this equation needs to be much smaller than
unity. In particular, if the contributions of the shear and the vector-field
energy density are negligible, Eq.~(\ref{epdef}) reduces to
the standard relation $\epsilon \simeq XP_{,X}/(M_{\rm pl}^2 H^2)$
of k-inflation \cite{kinf}.
{}From Eq.~(\ref{be3}) the shear term obeys
\begin{equation}
\dot{\Sigma}=-3H \Sigma+\frac{2\rho_A}{3M_{\rm pl}^2}\,.
\end{equation}
If $\Sigma$ converges to a constant value, it follows that
\begin{equation}
\frac{\Sigma}{H} \simeq \frac{2\rho_A}{3\rho_{\phi}}\,,
\end{equation}
where we used Eq.~(\ref{Friap}).
If the evolution of $\rho_A$ is proportional to $\rho_{\phi}$, the ratio
$\Sigma/H$ remains constant.
This actually happens for anisotropic inflationary attractors
discussed in the next section.
\section{Power-law k-inflation and the stability of anisotropic fixed points}
\label{fixedsec}
On the flat isotropic FLRW background the power-law k-inflation can be realized
by the following general Lagrangian \cite{Tsuji06,Ohashi2011}
\begin{equation}
P(\phi,X)=X\,g(Y)\,,\qquad Y \equiv X e^{\lambda \phi/M_{\rm pl}}\,,
\label{scalinglag}
\end{equation}
where $g$ is an arbitrary function of $Y$, and $\lambda$
is a constant. Originally, the Lagrangian (\ref{scalinglag}) was derived
for the existence of scaling solutions in the presence of
a barotropic perfect fluid \cite{Piazza,Sami}.
Under the condition $\lambda^2<2P_{,X}$ there exists
a power-law inflationary solution for any functions of $g(Y)$ \cite{Tsuji06}.
For the choice $g(Y)=1-cM_{\rm pl}^4/Y$, where $c$ is a constant,
the Lagrangian (\ref{scalinglag}) reduces to
$P=X-c M_{\rm pl}^4 e^{-\lambda \phi/M_{\rm pl}}$ \cite{expo},
in which case the dynamics of anisotropic inflation was studied
in Ref.~\cite{Kanno10}.
The dilatonic ghost condensate model
$P=-X+ce^{\lambda \phi/M_{\rm pl}}X^2/M_{\rm pl}^4$ \cite{Piazza}
corresponds to the choice $g(Y)=-1+cY/M_{\rm pl}^4$.
If we choose the function $g(Y)=-(m^4/Y)\sqrt{1-2Y/m^4}-M^4/Y$,
we recover the DBI Lagrangian $P=-h(\phi)^{-1} \sqrt{1-2h(\phi)X}
+h(\phi)^{-1}-V(\phi)$ with $h(\phi)^{-1} =
m^4e^{-\lambda \phi/M_{\rm pl}}$ and
$V(\phi)=(M^4+m^4)e^{-\lambda \phi/M_{\rm pl}}$.
In the following we study inflationary solutions for the Lagrangian
(\ref{scalinglag}) on the anisotropic background given
by the metric (\ref{anisotropic-metric}).
\subsection{Anisotropic fixed points}
For the Lagrangian (\ref{scalinglag}) the field equation of motion
(\ref{be4}) reads
\begin{equation}
\ddot{\phi}+3HA(Y)P_{,X}(Y) \dot{\phi}+\frac{\lambda X}{M_{\rm pl}}
\left\{ 1-[g(Y)+2g_1(Y)]A(Y) \right\}-2 \frac{f_{,\phi}}{f} \rho_A A(Y)=0\,,
\label{be5}
\end{equation}
where
\begin{equation}
g_n(Y)=Y^n \frac{d^n g(Y)}{dY^n}\,,\qquad
P_{,X}(Y)=g(Y)+g_1(Y)\,,\qquad
A(Y)=[g(Y)+5g_1(Y)+2g_2(Y)]^{-1}\,.
\end{equation}
The quantity $A=(P_{,X}+2XP_{,XX})^{-1}$ is related to
the sound speed $c_s$, as $c_s^2=P_{,X}A$ \cite{Garriga,Piazza}.
In order to study the dynamics of anisotropic power-law k-inflation,
it is convenient to introduce
the following dimensionless variables
\begin{equation}
x_1=\frac{\dot{\phi}}{\sqrt{6}HM_{\rm pl}}\,,\qquad
x_2=\frac{M_{\rm pl} e^{-\frac{\lambda \phi}{2M_{\rm pl}}}}{\sqrt{3}H}\,,
\qquad
x_3=\frac{\dot{\sigma}}{H}\,,\qquad
x_4=\frac{\sqrt{\rho_A}}{\sqrt{3}HM_{\rm pl}}\,.
\label{dimendef}
\end{equation}
The variable $Y$ is related to $x_1$ and $x_2$ via
\begin{equation}
Y/M_{\rm pl}^4=x_1^2/x_2^2\,.
\end{equation}
{}From Eq.~(\ref{be1}) there is the constraint equation
\begin{equation}
x_4^2=1-x_3^2-x_1^2 (P_{,X}+g_1)\,,
\label{x4re}
\end{equation}
whereas Eq.~(\ref{be2}) gives
\begin{equation}
\frac{\dot{H}}{H^2}=-2-x_3^2-x_1^2 (P_{,X}-2g_1)\,.
\label{dotH}
\end{equation}
On using Eqs.~(\ref{be3}), (\ref{be5}), (\ref{x4re}), and (\ref{dotH}),
we obtain the following autonomous equations
\begin{eqnarray}
\hspace{-0.3cm}
x_1'(N) &=& \frac12 x_1 \left[ 4+2x_3^2-\sqrt{6}\lambda x_1
+2x_1^2 (P_{,X}-2g_1)\right]
-\frac{\sqrt{6}A}{2}
\left[ \sqrt{6}x_1 P_{,X}-x_1^2 (P_{,X}+g_1)(\lambda+2\mu)
+2(1-x_3^2)\mu \right],\label{auto1} \\
\hspace{-0.3cm}
x_2'(N) &=& \frac12 x_2 \left[ 4+2x_3^2-\sqrt{6}\lambda x_1
+2x_1^2 (P_{,X}-2g_1) \right]\,,
\label{auto2}\\
\hspace{-0.3cm}
x_3'(N) &=& (x_3^2-1)(x_3-2)+x_1^2 \left[ P_{,X} (x_3-2)
-2g_1(x_3+1) \right]\,,
\label{auto3}
\end{eqnarray}
where $N=\ln a$, $x_i'(N)=dx_i(N)/dN$ ($i=1,2,3$), and
$\mu=-M_{\rm pl}f_{,\phi}/f$. In the following we focus on
the case of constant $\mu$, i.e., the coupling
\begin{equation}
f(\phi)=f_0 e^{-\mu \phi/M_{\rm pl}}\,,
\label{fphi}
\end{equation}
where $f_0$ is a constant.
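For numerical work it is convenient to integrate Eqs.~(\ref{auto1})-(\ref{auto3})
directly. The following sketch (our illustration, in units $M_{\rm pl}=1$; the
parameter values and the choice of $g(Y)$ are examples only, taken from the
dilatonic ghost condensate model of Sec.~\ref{concretesec}) implements the
right-hand sides together with the constraint (\ref{x4re}):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

lam, mu, c = 0.35, 0.5, 1.0        # example parameters (assumed)

def g(Y):  return -1.0 + c*Y       # dilatonic ghost condensate (n = 1)
def g1(Y): return c*Y              # g1 = Y g'(Y)
def g2(Y): return 0.0              # g2 = Y^2 g''(Y)

def rhs(N, x):                     # N = ln a; x = (x1, x2, x3)
    x1, x2, x3 = x
    Y  = (x1/x2)**2                # Y = x1^2/x2^2 in M_pl = 1 units
    PX = g(Y) + g1(Y)              # P_{,X} = g + g1
    A  = 1.0/(g(Y) + 5*g1(Y) + 2*g2(Y))
    s6 = np.sqrt(6.0)
    com = 4 + 2*x3**2 - s6*lam*x1 + 2*x1**2*(PX - 2*g1(Y))
    dx1 = (0.5*x1*com - 0.5*s6*A*(s6*x1*PX
           - x1**2*(PX + g1(Y))*(lam + 2*mu) + 2*(1 - x3**2)*mu))
    dx2 = 0.5*x2*com
    dx3 = (x3**2 - 1)*(x3 - 2) + x1**2*(PX*(x3 - 2) - 2*g1(Y)*(x3 + 1))
    return [dx1, dx2, dx3]

def x4_squared(x1, x2, x3):        # constraint (x4re)
    Y = (x1/x2)**2
    return 1 - x3**2 - x1**2*(g(Y) + 2*g1(Y))
\end{verbatim}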
The fixed points responsible for the cosmic acceleration
correspond to non-zero values of $x_1$ and $x_2$.
Setting the r.h.s. of Eqs.~(\ref{auto1})-(\ref{auto3}) to zero,
we obtain the following two fixed points:
\begin{itemize}
\item (i) Isotropic fixed point
\begin{equation}
P_{,X}(Y)=\frac{\lambda}{\sqrt{6}x_1}\,,\qquad
g_1(Y)=\frac{6-\sqrt{6}\lambda x_1}{6x_1^2}\,,\qquad
x_3=0\,,\qquad x_4=0\,.
\label{auiso}
\end{equation}
\item (ii) Anisotropic fixed point
\begin{eqnarray}
P_{,X} (Y) &=& \frac{(\lambda+2\mu)[2\sqrt{6}-(\lambda+6\mu)x_1]}{8x_1}\,,
\qquad
g_1(Y) = \frac{[2\sqrt{6}-(\lambda+2\mu)x_1](\sqrt{6}-\lambda x_1)}
{8x_1^2}\,, \nonumber \\
x_3 &=& \frac{\sqrt{6}}{4} (\lambda+2\mu)x_1-1\,,
\qquad
x_4^2 = \frac18 [ 3(\lambda+2\mu)x_1-2\sqrt{6}]
(\sqrt{6}-\lambda x_1)\,.
\label{auani1}
\end{eqnarray}
\end{itemize}
Once $g(Y)$ is specified, the quantities $Y$ and $x_1$ can be determined
by solving the first two equations of (\ref{auiso}) or (\ref{auani1}).
In the Appendix we discuss more explicit expressions of isotropic
and anisotropic solutions corresponding to the fixed points
(\ref{auiso}) and (\ref{auani1}), respectively.
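As an illustration (ours), the first two relations of (\ref{auani1}) can be
solved numerically for a given $g(Y)$; reusing the definitions of the
previous sketch:
\begin{verbatim}
from scipy.optimize import fsolve

s6 = np.sqrt(6.0)

def fp_eqs(v):                      # first two relations of (auani1)
    x1, Y = v
    PX = g(Y) + g1(Y)
    eq1 = PX - (lam + 2*mu)*(2*s6 - (lam + 6*mu)*x1)/(8*x1)
    eq2 = g1(Y) - (2*s6 - (lam + 2*mu)*x1)*(s6 - lam*x1)/(8*x1**2)
    return [eq1, eq2]

x1c, Yc = fsolve(fp_eqs, [1.2, 0.55])
x3c  = s6*(lam + 2*mu)*x1c/4 - 1
x4c2 = (3*(lam + 2*mu)*x1c - 2*s6)*(s6 - lam*x1c)/8
# For the example parameters above: x1c ~ 1.216, x3c ~ 5.3e-3,
# consistent with the attractor of Fig. 3 below.
\end{verbatim}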
For both the isotropic and anisotropic fixed points,
the slow-roll parameter is simply given by
\begin{equation}
\epsilon=-\frac{\dot{H}}{H^2}
=\frac{\sqrt{6}}{2}\lambda x_1\,,
\label{dotH3}
\end{equation}
where we used Eq.~(\ref{dotH}).
If $\lambda x_1>0$, the power-law inflation
$a \propto t^{2/(\sqrt{6}\lambda x_1)}$ is realized.
Violation of the condition $\lambda x_1>0$ means that the fixed
points correspond to super-inflationary solutions with $\dot{H}>0$.
Then, the condition for the cosmic acceleration with a decreasing
Hubble parameter is given by
\begin{equation}
0<\lambda x_1< \frac{\sqrt{6}}{3}\,.
\label{con1}
\end{equation}
The existence of the anisotropic fixed point (ii) requires that
$x_4^2>0$. This translates to
\begin{equation}
3(\lambda+2\mu)x_1>2\sqrt{6}\,,
\label{con2}
\end{equation}
where we used (\ref{con1}).
Under the condition (\ref{con2}) we also have $x_3>0$.
In the absence of the vector field coupled to $\phi$, the ghost
is absent for $P_{,X}>0$.
For the anisotropic fixed point (ii), the condition $P_{,X}>0$
corresponds to
\begin{equation}
(\lambda+6\mu)x_1<2\sqrt{6}\,,
\label{con3}
\end{equation}
where we employed the fact that, from Eq.~(\ref{con2}),
the signs of $x_1$ and $\lambda+2\mu$ are the same.
{}From Eqs.~(\ref{be1}) and (\ref{be2}) the total energy density and pressure
are given by $\rho_t=2XP_{,X}-P+p_A^2f(\phi)^{-2}e^{-4\alpha -4\sigma} /2$
and $P_t=P+p_A^2f(\phi)^{-2}e^{-4\alpha -4\sigma} /6$, respectively.
Then we have
\begin{equation}
\rho_t+P_t =
2H^2M_{\rm pl}^2 (3x_1^2 P_{,X}+2x_4^2)\,.
\label{nec}
\end{equation}
If $P_{,X}>0$, then the null energy condition (NEC) $\rho_t+P_t>0$
is automatically satisfied. At the anisotropic fixed point (ii),
it is also possible to satisfy the NEC even for $P_{,X}<0$.
Substituting Eq.~(\ref{auani1}) into Eq.~(\ref{nec}), it follows that
\begin{equation}
\rho_t+P_t=
2H^2M_{\rm pl}^2 \left[ \sqrt{6} (3\mu+2\lambda)
x_1-9(\lambda+2\mu)^2x_1^2/8-3 \right]\,.
\label{nec2}
\end{equation}
Then, the NEC translates to
\begin{equation}
\frac{2\sqrt{6}}{9} \frac{4\lambda+6\mu-\sqrt{\lambda(7\lambda+12\mu)}}
{(\lambda+2\mu)^2}<x_1<
\frac{2\sqrt{6}}{9} \frac{4\lambda+6\mu+\sqrt{\lambda(7\lambda+12\mu)}}
{(\lambda+2\mu)^2}\,,
\label{x1upper}
\end{equation}
whose existence requires that $\lambda(7\lambda+12\mu)>0$.
Let us consider the case where $\lambda>0$ and $\mu>0$.
As long as the upper bound of Eq.~(\ref{x1upper}) is larger than
the value $2\sqrt{6}/[3(\lambda+2\mu)]$, there are some values of
$x_1$ consistent with both (\ref{con2}) and (\ref{x1upper}).
This requirement translates to the condition
$\lambda+\sqrt{\lambda (7\lambda+12\mu)}>0$,
which is in fact satisfied for $\lambda>0$.
In the limit that $\lambda \to 0$ the condition (\ref{con2})
reduces to $x_1>\sqrt{6}/(3\mu)$, while the region (\ref{x1upper}) shrinks
to the point $x_1=\sqrt{6}/(3\mu)$.
When $\lambda=0$, Eq.~(\ref{nec2}) reads
\begin{equation}
\rho_t+P_t=-H^2 M_{\rm pl}^2 ( \sqrt{6}-3\mu x_1)^2\,,
\end{equation}
which is negative for $x_1>\sqrt{6}/(3\mu)$.
Notice that, from Eq.~(\ref{dotH3}), the limit $\lambda \to 0$
corresponds to the de Sitter solution with constant $H$.
Hence the NEC is generally violated on the de Sitter solution.
The violation of the NEC means that the dominant energy condition
(DEC; $\rho_t \geq |P_t|$) as well as the strong energy
condition (SEC; $\rho_t+P_t \geq 0$ and $\rho_t+3P_t \geq 0$)
are not satisfied \cite{Carroll}.
This property is consistent with Wald's cosmic no-hair
conjecture \cite{Wald} stating that, in the presence of
an energy-momentum tensor satisfying both DEC and SEC,
the anisotropic hair does not survive on the de Sitter background.
In summary, for $\lambda>0$ and $\mu>0$, the anisotropic fixed points
satisfying both $P_{,X}>0$ and the NEC exist in the regime
\begin{equation}
\frac{2\sqrt{6}}{3(\lambda+2\mu)}<x_1<\frac{2\sqrt{6}}
{\lambda+6\mu}\,,
\label{x1confin}
\end{equation}
whose upper bound (which comes from $P_{,X}>0$) gives a tighter constraint
than that in Eq.~(\ref{x1upper}) (which comes from the NEC).
As long as $\lambda>0$, there exist allowed values of $x_1$
in the region (\ref{x1confin}).
Under the condition (\ref{x1confin}) the anisotropic parameter
$x_3=\Sigma/H$ is in the range
\begin{equation}
0<x_3<\frac{2}{1+6\mu/\lambda}\,,
\label{x3upper}
\end{equation}
whose upper limit is determined by the ratio $\mu/\lambda$.
For compatibility of the two conditions (\ref{con1}) and (\ref{con2})
we require that $\mu/\lambda>1/2$.
Hence the anisotropic parameter is generally constrained
to be $x_3<1/2$.
\subsection{Stability of the anisotropic fixed point}
We study the stability of the anisotropic inflationary solution
by considering small perturbations $\delta x_1$, $\delta x_2$,
and $\delta x_3$ about the anisotropic critical point (ii)
given by $(x_1^{(c)}, x_2^{(c)}, x_3^{(c)})$, i.e.,
\begin{equation}
x_i=x_i^{(c)}+\delta x_i \qquad (i=1,2,3).
\end{equation}
We expand the function $g(Y)$ around
$Y_c=(x_1^{(c)}/x_2^{(c)})^2 M_{\rm pl}^4$, i.e.,
\begin{equation}
g(Y)=g_c+g'(Y_c) (Y-Y_c)+\frac{g''(Y_c)}{2} (Y-Y_c)^2
+\cdots\,,
\end{equation}
where $g_c \equiv g(Y_c)$ and $g'(Y)=dg(Y)/dY$.
Taking the terms up to the second
order of $Y-Y_c$, we have $\delta P_{,X}=(2g_c'+Y_c g_c'')\delta Y$
and $\delta g_1=(g_c'+Y_c g_c'')\delta Y$.
Note that $g_c''$ and $\delta Y$ can be expressed
as $g_c''=(A^{-1}-g_c-5Yg_c')/(2Y_c^2)$ and
$\delta Y/M_{\rm pl}^4=2[x_1^{(c)} \delta x_1/(x_2^{(c)})^2
-(x_1^{(c)})^2 \delta x_2/(x_2^{(c)})^3]$.
In the following we omit the subscripts ``$c$'' and
``$(c)$'' for the background quantities.
Perturbing Eqs.~(\ref{auto1})-(\ref{auto3}) around the critical point (ii),
we can write the resulting perturbation equations in the form
\begin{eqnarray}
\frac{d }{d N}
\left(
\begin{array}{c}
\delta x_1 \\
\delta x_2 \\
\delta x_3
\end{array}
\right) = {\cal M} \left(
\begin{array}{c}
\delta x_1 \\
\delta x_2 \\
\delta x_3
\end{array}
\right) \,,
\label{uvdif}
\end{eqnarray}
where ${\cal M}$ is the $3 \times 3$ matrix expressed
in terms of $x_1$, $Y$, $A$, $\lambda$, and $\mu$.
Using the relations (\ref{auani1}), the three eigenvalues of the matrix ${\cal M}$,
which determine the stability of the anisotropic point (ii), are
\begin{equation}
\gamma_1=\frac{\sqrt{6}}{2}\lambda x_1-3\,,\qquad
\gamma_2=\frac{\sqrt{6}}{4}\lambda x_1-\frac32+\frac18 \sqrt{{\cal D}}\,,\qquad
\gamma_3=\frac{\sqrt{6}}{4}\lambda x_1-\frac32-\frac18 \sqrt{{\cal D}}\,,
\label{gam}
\end{equation}
where
\begin{eqnarray}
{\cal D} =
16 \left[ 9- \sqrt{6}\left(2\lambda+3\mu \right) x_1 \right]^2
+3A (\lambda+2\mu)
\left[ 3 \left( \lambda+2\mu \right) x_1 -2\sqrt{6}\right]
\left[ \left( \lambda^2 +28\mu\lambda +36\mu^2 \right) x_1
-2\sqrt{6} \left( \lambda + 14 \mu \right)\right] \,.
\label{deter}
\end{eqnarray}
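These analytic eigenvalues can be cross-checked numerically; the sketch below
(our illustration) builds a finite-difference Jacobian of
Eqs.~(\ref{auto1})-(\ref{auto3}) at the fixed point $(x_1^{(c)},Y_c,x_3^{(c)})$
found in the previous sketch and compares its eigenvalues with Eq.~(\ref{gam}):
\begin{verbatim}
def jacobian(x, eps=1e-7):          # finite-difference Jacobian of rhs
    x  = np.asarray(x, dtype=float)
    f0 = np.asarray(rhs(0.0, x))
    J  = np.zeros((3, 3))
    for j in range(3):
        xp = x.copy(); xp[j] += eps
        J[:, j] = (np.asarray(rhs(0.0, xp)) - f0)/eps
    return J

x2c = x1c/np.sqrt(Yc)               # invert Y = x1^2/x2^2 (M_pl = 1)
num = np.linalg.eigvals(jacobian([x1c, x2c, x3c]))

Ac = 1.0/(g(Yc) + 5*g1(Yc) + 2*g2(Yc))
D  = (16*(9 - s6*(2*lam + 3*mu)*x1c)**2
      + 3*Ac*(lam + 2*mu)*(3*(lam + 2*mu)*x1c - 2*s6)
        *((lam**2 + 28*mu*lam + 36*mu**2)*x1c - 2*s6*(lam + 14*mu)))
ana = [s6/2*lam*x1c - 3,
       s6/4*lam*x1c - 1.5 + np.sqrt(complex(D))/8,
       s6/4*lam*x1c - 1.5 - np.sqrt(complex(D))/8]
# Stability of point (ii) <=> all eigenvalues have negative real part.
\end{verbatim}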
As long as the condition (\ref{con1}) of the cosmic acceleration is satisfied,
we have that $\gamma_1<0$. The term $\sqrt{6}\lambda x_1/4-3/2$
inside $\gamma_2$ and $\gamma_3$ is also negative under the same condition.
If ${\cal D}$ is negative, then the anisotropic fixed point is a stable spiral.
For positive ${\cal D}$ the eigenvalue $\gamma_3$ is negative.
When $x_1=2\sqrt{6}/[3(\lambda+2\mu)]$, the eigenvalue
$\gamma_2$ vanishes, provided that $\lambda$ and $\mu$ have the same sign.
In order to see this more precisely, we substitute $x_1=
2\sqrt{6}/[3(\lambda+2\mu)]+\delta$ into the eigenvalue $\gamma_2$,
where $\delta$ is a small parameter. It then follows that
\begin{equation}
\gamma_2=-\frac{3\sqrt{6}}{16} (\lambda+2\mu)
\left[ 4+A(\lambda+2\mu)(\lambda+4\mu) \right]\delta
+O(\delta^2)\,.
\end{equation}
Provided that $A>0$, we have $\gamma_2<0$ either for $\lambda>0$,
$\mu>0$, $\delta>0$ or $\lambda<0$, $\mu<0$, $\delta<0$.
Then the anisotropic fixed point is stable for
$3(\lambda+2\mu)x_1>2\sqrt{6}$, which is exactly equivalent
to the condition (\ref{con2}).
Plugging $x_1=2\sqrt{6}/[3(\lambda+2\mu)]+\delta$ into
$P_{,X}$ of Eq.~(\ref{auani1}), we obtain
\begin{equation}
P_{,X}=\frac14 \lambda (\lambda+2\mu)-\frac{3\sqrt{6}}{32}
(\lambda+2\mu)^3 \delta+O(\delta^2)\,,
\end{equation}
which is positive at $x_1=2\sqrt{6}/[3(\lambda+2\mu)]$, provided that
$\lambda$ and $\mu$ have the same sign.
If $P_{,X}>0$ and $A>0$ then the sound speed squared
$c_s^2=P_{,X}A$ is positive, so that the Laplacian instability
of small-scale perturbations can be avoided.
For $x_1$ away from $2\sqrt{6}/[3(\lambda+2\mu)]$,
the quantity $P_{,X}$ can be negative.
In order to avoid this, we require
the condition (\ref{con3}).
We also note that $A$ can change its sign at some
value of $x_1$. Since this depends on the form of the
function $g(Y)$, we shall study this property
in several different models in Sec.~\ref{concretesec}.
We recall that $x_3$ and $x_4^2$ exactly vanish at
$3(\lambda+2\mu)x_1=2\sqrt{6}$.
In order to keep the small level of anisotropies
($x_3 \ll 1$ and $x_4^2 \ll 1$), it is required that $x_1$ is
only slightly larger than the critical value
$2\sqrt{6}/[3(\lambda+2\mu)]$ for positive
$\lambda$ and $\mu$.
In this regime the stability of the anisotropic fixed
point is ensured for $A>0$.
\section{Concrete models of power-law inflation}
\label{concretesec}
In this section we study the existence of anisotropic fixed points
as well as their stabilities in concrete models of power-law inflation.
For simplicity we shall focus on the case of the positive
values of $\lambda$ and $\mu$.
\subsection{Canonical field with an exponential potential}
\begin{figure}
\includegraphics[height=3.4in,width=3.6in]{fig1.eps}
\caption{\label{fig1}
The parameter space in the $(\lambda,\mu)$ plane for
the model $P=X-c M_{\rm pl}^4 e^{-\lambda \phi/M_{\rm pl}}$.
The two solid curves, which determine the minimum values of $\mu$
for large and small $\lambda$, correspond to the bounds (\ref{canocon1})
and (\ref{canocon2}), respectively.
The two dotted curves correspond to $\epsilon=0.1$ and $\epsilon=0.5$.
In order to realize $\epsilon\ll1$, we require that $\mu/\lambda \gg 1$.}
\end{figure}
Let us first consider the model
\begin{equation}
P=X-c M_{\rm pl}^4 e^{-\lambda \phi/M_{\rm pl}}
\qquad (c={\rm constant}),
\end{equation}
i.e., the function $g(Y)=1-cM_{\rm pl}^4/Y$.
Solving the first two equations of (\ref{auani1}) for this function,
we obtain the following anisotropic fixed point
\begin{eqnarray}
& & x_1=\frac{2\sqrt{6}(\lambda+2\mu)}
{\lambda^2+8\mu \lambda+12\mu^2+8}\,,\qquad
c x_2^2=\frac{6(2+2\mu^2+\mu \lambda)
(8+12\mu^2+4\mu \lambda-\lambda^2)}
{(\lambda^2+8\mu \lambda+12\mu^2+8)^2}\,,\nonumber \\
& & x_3=\frac{2(\lambda^2+2\mu \lambda-4)}
{\lambda^2+8\mu \lambda+12\mu^2+8}\,,\qquad
x_4^2=\frac{3(\lambda^2+2\mu \lambda-4)
(8+12\mu^2+4\mu \lambda-\lambda^2)}
{(\lambda^2+8\mu \lambda+12\mu^2+8)^2}\,,
\label{xiexp}
\end{eqnarray}
which agree with those derived in Ref.~\cite{Kanno10}.
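As a quick consistency check (not part of the original derivation of
Ref.~\cite{Kanno10}): for this model one has $g_1(Y)=cM_{\rm pl}^4/Y$, so that
$P_{,X}=g+g_1=1$ identically, and the first relation of Eq.~(\ref{auani1})
becomes linear in $x_1$,
\begin{equation*}
8x_1=(\lambda+2\mu)\left[2\sqrt{6}-(\lambda+6\mu)x_1\right]
\quad\Longrightarrow\quad
x_1=\frac{2\sqrt{6}\,(\lambda+2\mu)}{\lambda^2+8\mu\lambda+12\mu^2+8}\,,
\end{equation*}
in agreement with the first entry of Eq.~(\ref{xiexp}).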
The upper bound of Eq.~(\ref{con1}) translates to
\begin{equation}
8+12\mu^2-4\mu \lambda-5\lambda^2>0\,,
\label{canocon1}
\end{equation}
which is satisfied for $\mu \gg \lambda$.
The condition (\ref{con2}) for the existence of the anisotropic
fixed point is interpreted as
\begin{equation}
\lambda^2+2\mu \lambda-4>0\,.
\label{canocon2}
\end{equation}
Since $P_{,X}=1>0$, $x_1$ is smaller
than the upper bound of Eq.~(\ref{x1confin}).
In this model the quantity $A$ is 1, so that the stability
of the anisotropic inflationary solution is ensured under
the condition (\ref{canocon2}) in the regime where
$x_1$ is not far away from the value $2\sqrt{6}/[3(\lambda+2\mu)]$.
Even for $x_1 \gg 2\sqrt{6}/[3(\lambda+2\mu)]$ the discriminant
${\cal D}$ appearing in Eq.~(\ref{gam}) becomes
negative, and hence the fixed point is a stable spiral.
This means that the anisotropic inflationary solution is an attractor
under the condition (\ref{canocon2}) \cite{Kanno10}.
In Fig.~\ref{fig1} we show the viable parameter space in the
$(\lambda,\mu)$ plane satisfying the two bounds
(\ref{canocon1}) and (\ref{canocon2}).
Stable anisotropic inflation can be realized for parameters
in the shaded region.
We also plot the two curves corresponding to $\epsilon=0.1$
and $\epsilon=0.5$.
For $\lambda$ and $\mu$ satisfying the conditions $\mu \gg \lambda$
and $\mu \gg 1$, we approximately have $x_1 \simeq \sqrt{6}/(3\mu)$ from
Eq.~(\ref{xiexp}) and hence $\epsilon \simeq \lambda/\mu$ from Eq.~(\ref{dotH3}).
The slow-roll parameter $\epsilon$ of the order of $10^{-2}$ can be
realized for $\mu/\lambda=O(10^2)$.
If $\mu/\lambda=10^2$, for example, the condition
$\lambda^2+2\mu \lambda-4>0$
translates to $\mu=10^2 \lambda>14$.
In the limit that $\lambda \to 0$, the condition $\lambda^2+2\mu \lambda-4>0$
is not fulfilled. Hence, in this model, the stable anisotropic solution
does not exist on the de Sitter background.
This comes from the fact that the field is frozen
in the slow-roll limit ($\epsilon \to 0$),
so that there is no variation of the coupling $f(\phi)$
in Eq.~(\ref{fphi}) to give rise to anisotropic solutions.
\subsection{Generalized ghost condensate}
The second model is the generalized ghost condensate given by
the Lagrangian
\begin{equation}
P=-X+\frac{c}{M_{\rm pl}^{4n}}e^{n\lambda \phi/M_{\rm pl}}X^{n+1}
\qquad (c,n={\rm constant~with}~n \geq 1),
\end{equation}
in which case $g(Y)=-1+c(Y/M_{\rm pl}^4)^n$.
The dilatonic ghost condensate model \cite{Piazza} corresponds to the case
$n=1$. {}From the first two equations of (\ref{auani1}) we find that
\begin{equation}
x_1=\frac{-\sqrt{6}[3\lambda+2\mu+(5\lambda+6\mu)n]\pm
\sqrt{6[3\lambda+2\mu+(5\lambda+6\mu)n]^2+48(n+1)
[2(4-\lambda^2-5\lambda\mu-6\mu^2)n-\lambda(\lambda+2\mu)]}}
{2[2(4-\lambda^2-5\lambda\mu-6\mu^2)n-\lambda(\lambda+2\mu)]} \,.
\label{x1so}
\end{equation}
Since the plus sign in Eq.~(\ref{x1so}) gives positive values of $x_1$,
we adopt this branch in the following discussion.
Then the anisotropic parameter $x_3$ reads
\begin{equation}
x_3=\frac{3\lambda+10\mu+(\lambda+6\mu)n-\sqrt{(9\lambda^2
-20\lambda\mu-60\mu^2+64)n^2
+2(3\lambda^2-20\lambda\mu-36\mu^2+32)n+(\lambda-2\mu)^2}}
{3\lambda+2\mu+(5\lambda+6\mu)n+\sqrt{(9\lambda^2-20\lambda\mu
-60\mu^2+64)n^2
+2(3\lambda^2-20\lambda\mu-36\mu^2+32)n+(\lambda-2\mu)^2}}\,.
\label{x3so}
\end{equation}
The condition (\ref{x1confin}) translates to
\begin{equation}
\mu_1<\mu<\mu_2 \,,\quad {\rm where} \quad
\mu_1 \equiv \frac{\sqrt{(2n+1)^2\lambda^2
+24n(n+1)}-(n+2)\lambda}{6(n+1)}\,,\quad
\mu_2 \equiv \frac{1}{12} \left( \lambda+\sqrt{\lambda^2+
\frac{96n}{n+1}}\right)\,.
\label{dilacon1}
\end{equation}
{}From the condition (\ref{con1}) the variable $\mu$ is bounded to be
\begin{equation}
\mu<\mu_3\,,\quad {\rm where} \quad
\mu_3 \equiv
\frac{(2n+1)\lambda+\sqrt{24n^2-(11n^2+26n-1)\lambda^2}}{6n}\,.
\label{dilacon2}
\end{equation}
For the discriminant (the expression under the square root) of
Eq.~(\ref{x1so}) to be positive, we require that
\begin{equation}
\mu<\mu_4\,,\quad {\rm where} \quad
\mu_4 \equiv
\frac{4\sqrt{2n(5n+1)(n+1)^2
\lambda^2+4n(n+1)(15n^2+18n-1)}-(5n^2+10n+1)\lambda}
{2(15n^2+18n-1)}\,.
\label{dilacon3}
\end{equation}
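The borders plotted in Fig.~\ref{figab} below can be evaluated directly; the
following sketch (ours) returns $\mu_1,\dots,\mu_4$ for given $(\lambda,n)$,
the allowed band being $\mu_1<\mu<\min(\mu_2,\mu_3,\mu_4)$:
\begin{verbatim}
def mu_bounds(lam, n):
    mu1 = (np.sqrt((2*n + 1)**2*lam**2 + 24*n*(n + 1))
           - (n + 2)*lam)/(6.0*(n + 1))
    mu2 = (lam + np.sqrt(lam**2 + 96.0*n/(n + 1)))/12.0
    mu3 = ((2*n + 1)*lam
           + np.sqrt(24*n**2 - (11*n**2 + 26*n - 1)*lam**2))/(6.0*n)
    mu4 = (4*np.sqrt(2*n*(5*n + 1)*(n + 1)**2*lam**2
                     + 4*n*(n + 1)*(15*n**2 + 18*n - 1))
           - (5*n**2 + 10*n + 1)*lam)/(2.0*(15*n**2 + 18*n - 1))
    return mu1, mu2, mu3, mu4
\end{verbatim}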
\begin{figure}
\includegraphics[height=3.3in,width=3.5in]{fig2a.eps}
\includegraphics[height=3.3in,width=3.5in]{fig2b.eps}
\caption{\label{figab}
The parameter space in the $(\lambda,\mu)$ plane for the generalized
ghost condensate model with $n=1$ (left) and $n=4$ (right).
The four curves correspond to the borders given
in Eqs.~(\ref{dilacon1}),
(\ref{dilacon2}), and (\ref{dilacon3}). In the shaded region,
all the conditions (\ref{dilacon1})-(\ref{dilacon3}) are satisfied.}
\end{figure}
In Fig.~\ref{figab} we plot the parameter space in the
$(\lambda,\mu)$ plane satisfying the conditions
(\ref{dilacon1})-(\ref{dilacon3}) for $n=1$ and $n=4$.
In the limit that $\lambda \to 0$, the region described
by (\ref{dilacon1}) shrinks to the point
$\mu=\sqrt{2n/[3(n+1)]}$.
As we see in Fig.~\ref{figab}, the region (\ref{dilacon1})
tends to be wider for larger $\lambda$.
The condition (\ref{dilacon2}) gives upper bounds
of $\lambda$ and $\mu$.
The intersection point of the curves $\mu=\mu_1$
and $\mu=\mu_3$ is given by
$(\lambda,\mu)=(\sqrt{2n/(n+2)},\sqrt{n/[2(n+2)]})$,
whereas the curves $\mu=\mu_2$ and
$\mu=\mu_3$ intersect at the point
$(\lambda,\mu)=(\sqrt{6n/[5(n+1)]},\sqrt{5n/[6(n+1)]})$.
For $n \geq 1$ the parameters $\lambda$ and $\mu$
are in the range
\begin{equation}
0<\lambda<\sqrt{\frac{2n}{n+2}}\,,\qquad
\sqrt{\frac{n}{2(n+2)}}<\mu<\sqrt{\frac{5n}{6(n+1)}}\,.
\label{lammucon}
\end{equation}
We note that the condition (\ref{dilacon3}) does
not provide an additional bound.
{}From Eq.~(\ref{lammucon}) the parameter $\mu$
is of the order of $0.1$ (with the maximum value
$\mu=\sqrt{5/6}$ in the limit $n \to \infty$).
\begin{figure}
\includegraphics[height=3.4in,width=3.6in]{fig3.eps}
\caption{\label{fig3}
The phase space in the two-dimensional plane ($x_3,x_4$) for
the dilatonic ghost condensate model with the Lagrangian
$P=-X+e^{\lambda \phi/M_{\rm pl}}X^2/M_{\rm pl}^4$.
The model parameters are chosen to be $\lambda=0.35$ and
$\mu=0.5$ with the initial condition $x_1=1.0$ and
several different initial values of $x_2$ and $x_3$.
The solutions finally converge to the anisotropic fixed point
$(x_3,x_4)=(5.306 \times 10^{-3},8.109 \times 10^{-2})$ with
$x_1=1.216$, $x_2=1.629$, and $\epsilon=0.521$.}
\end{figure}
In Fig.~\ref{fig3} we plot the phase space trajectories in the
two-dimensional plane ($x_3,x_4$) for $n=1$,
$\lambda=0.35$, and $\mu=0.5$.
The trajectories with different initial conditions converge
to the anisotropic fixed point (ii) and
hence the fixed point is stable.
When $\mu$ is close to the lower bound $\mu_1$,
the anisotropic parameter $x_3=\Sigma/H$ is
much smaller than 1.
As $\mu$ increases, the anisotropy grows.
In the numerical simulation of Fig.~\ref{fig3} the slow-roll parameter is
$\epsilon=0.521$ along the anisotropic attractor.
In order to realize $\epsilon$ of the order of $10^{-2}$, we require
that $\lambda=O(10^{-2})$.
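The convergence seen in Fig.~\ref{fig3} is straightforward to reproduce with
the integrator sketched in Sec.~\ref{fixedsec} (again our illustration; the
initial conditions are chosen so that $x_4^2\geq 0$):
\begin{verbatim}
for x2_0, x3_0 in [(1.3, 0.00), (1.6, 0.05), (2.0, -0.02)]:
    sol = solve_ivp(rhs, (0.0, 80.0), [1.0, x2_0, x3_0],
                    rtol=1e-10, atol=1e-12)
    x1f, x2f, x3f = sol.y[:, -1]
    Yf  = (x1f/x2f)**2
    x4f = np.sqrt(1.0 - x3f**2 - x1f**2*(g(Yf) + 2*g1(Yf)))
    print(x1f, x3f, x4f, np.sqrt(6)/2*lam*x1f)
# Expected attractor: x1 ~ 1.216, x3 ~ 5.3e-3, x4 ~ 8.1e-2, eps ~ 0.52.
\end{verbatim}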
For $\mu$ close to its upper bound, the stability of the
anisotropic fixed point can change.
In fact, the parameter $A=1/[c(n+1)(2n+1)(Y/M_{\rm pl}^{4})^n-1]$
diverges at $c(Y/M_{\rm pl}^{4})^n=1/[(n+1)(2n+1)]$.
Upon crossing this singular point at $\mu=\mu_5$, the discriminant
(\ref{deter}) changes sign from negative to positive.
If $n=1$ then we have $\mu_5=\sqrt{\lambda^2+8}/3-\lambda/6$,
so the anisotropic fixed point is stable for
\begin{equation}
\mu<\sqrt{\lambda^2+8}/3-\lambda/6\,.
\label{mu5}
\end{equation}
This does not give an additional bound to those given in
Eqs.~(\ref{dilacon1})-(\ref{dilacon3}).
When $n>1$ the condition (\ref{mu5}) is more involved, but
the situation is similar to that discussed for $n=1$.
It is worth mentioning that, for $n=1$, the condition (\ref{mu5})
is equivalent to $x_3<1$.
The maximum value of $x_3$ is reached for
$(\lambda,\mu)=(\sqrt{6n/[5(n+1)]},\sqrt{5n/[6(n+1)]})$.
Substituting these values into Eq.~(\ref{x3so}) we have
$x_3=1/3$, which corresponds to the upper bound of
(\ref{x3upper}) with $\mu/\lambda=5/6$.
Hence the anisotropic parameter is constrained to be
\begin{equation}
\Sigma/H<1/3\,,
\label{aniupper}
\end{equation}
which holds independent of $n$.
This bound comes from the combination of the conditions $P_{,X}>0$
and $\lambda x_1<\sqrt{6}/3$.
If we impose the NEC $\rho_t+P_t>0$ instead of $P_{,X}>0$,
the upper bound (\ref{aniupper}) gets larger.
However, such a large anisotropy is not accepted observationally.
In summary, for $n \geq 1$, there exist allowed regions of parameter space
satisfying all the conditions (\ref{dilacon1})-(\ref{dilacon3}).
In order to realize the sufficient amount of inflation ($\epsilon \ll 1$)
with the suppressed anisotropy ($x_3 \ll 1$), we require that
$\lambda \ll 1$ and that $\mu$ is close to
the lower bound $\mu_1$.
\subsection{DBI model}
The DBI model is characterized by the Lagrangian \cite{DBI}
\begin{equation}
P=-h(\phi)^{-1} \sqrt{1-2h(\phi)X}
+h(\phi)^{-1}-V(\phi)\,,
\label{DBIlag}
\end{equation}
where $h(\phi)$ and $V(\phi)$ are functions of $\phi$.
For the choice $g(Y)=-(m^4/Y)\sqrt{1-2Y/m^4}-M^4/Y$, where
$m$ and $M$ are constants having a dimension of mass,
we obtain the Lagrangian (\ref{DBIlag}) with $h(\phi)^{-1} =
m^4e^{-\lambda \phi/M_{\rm pl}}$ and
$V(\phi)=(M^4+m^4)e^{-\lambda \phi/M_{\rm pl}}$.
The ultra-relativistic regime corresponds to the case
where the quantity $Y/m^4$ is close to $1/2$.
In order to sustain a sufficient amount of inflation
in this regime, we require that the ratio
$c_M \equiv M^4/m^4$ is much larger than 1 \cite{Ohashi2011}.
Since $P_{,X}=[1-2h(\phi)X]^{-1/2}>0$ in the DBI model, the upper
bound of Eq.~(\ref{x1confin}) and the NEC are automatically satisfied.
We also have $A=(1-2Y/m^4)^{3/2}$, so that there is no divergence
associated with the discriminant (\ref{deter}) in the regime $Y/m^4<1/2$.
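For numerical checks, the DBI choice of $g(Y)$ can be plugged into the same
integrator; the sketch below (ours) works in units where $m=M_{\rm pl}$ (any
other $m$ can be absorbed into a rescaling of $x_2$), and encodes the
identities $P_{,X}=(1-2Y/m^4)^{-1/2}$ and $A=(1-2Y/m^4)^{3/2}$ noted above:
\begin{verbatim}
cM = 500.0                          # cM = M^4/m^4 (example value)

def g(Y):  return -np.sqrt(1 - 2*Y)/Y - cM/Y
def g1(Y): return  np.sqrt(1 - 2*Y)/Y + 1/np.sqrt(1 - 2*Y) + cM/Y
def g2(Y): return (-2*np.sqrt(1 - 2*Y)/Y - 2/np.sqrt(1 - 2*Y)
                   + Y/(1 - 2*Y)**1.5 - 2*cM/Y)

# Checks: g + g1 == (1-2Y)**-0.5 and g + 5*g1 + 2*g2 == (1-2Y)**-1.5,
# i.e. P_{,X} = (1-2Y)^{-1/2}, A = (1-2Y)^{3/2}, as stated in the text.
\end{verbatim}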
{}From the first two equations of (\ref{auani1}) we find that
the anisotropic fixed point satisfies a fourth-order equation
in $x_1$, which is not analytically solvable
for general values of $\lambda$, $\mu$, and $c_M$.
However, substituting the lower bound of Eq.~(\ref{x1confin})
into this fourth-order equation,
we obtain the following constraint
\begin{equation}
\mu>\frac{2\sqrt{\lambda^4+12c_M\lambda^2+36}
-\lambda^2}{6\lambda}\,.
\label{DBImu}
\end{equation}
In the ultra-relativistic regime the quantity $Y/m^4$ is close to $1/2$,
so that $P_{,X}=(1-2Y/m^4)^{-1/2}$ is much larger than 1.
Using the bound (\ref{con2}), the anisotropic fixed point
of Eq.~(\ref{auani1}) satisfies the relation
$P_{,X}<\lambda (\lambda+2\mu)/4$, i.e.,
\begin{equation}
\sqrt{1-\frac{2Y}{m^4}}>\frac{4}{\lambda (\lambda+2\mu)}\,.
\label{Ycon}
\end{equation}
In order to realize the situation where $Y/m^4$ is close to $1/2$,
we require that $\lambda (\lambda+2\mu) \gg 1$.
As $x_1$ moves away from the value $2\sqrt{6}/[3(\lambda+2\mu)]$,
the anisotropic fixed point tends to deviate
from the ultra-relativistic regime because of
the decrease of $P_{,X}$.
In the following we focus on the situation
where $x_1$ is close to $2\sqrt{6}/[3(\lambda+2\mu)]$,
in which case the anisotropic fixed point is stable with
a small anisotropy.
For $x_1 \simeq 2\sqrt{6}/[3(\lambda+2\mu)]$, the slow-roll parameter
is given by $\epsilon \simeq 2\lambda/(\lambda+2\mu)$ from Eq.~(\ref{dotH3}).
In order to realize $\epsilon \ll 1$, we need the condition
$\mu \gg \lambda$.
Then, the condition $\lambda (\lambda+2\mu) \gg 1$
discussed above can be interpreted as $\mu \lambda \gg 1$.
{}From Eq.~(\ref{DBImu}) the condition $\mu \lambda \gg 1$
can be satisfied for $c_M\lambda^2 \gg 10$, in which case
Eq.~(\ref{DBImu}) reduces to $\mu>2\sqrt{3c_M}/3$.
When $c_M={\cal O}(100)$, for example, we have
$\mu \gtrsim {\cal O}(10)$ and $\lambda \gtrsim {\cal O}(1)$.
\begin{figure}
\includegraphics[height=4.0in,width=4.2in]{fig4.eps}
\caption{\label{fig4}
The three-dimensional phase space ($Y/m^4,x_3,x_4$) for
the DBI model with the Lagrangian
$P=-m^4 e^{-\lambda \phi/M_{\rm pl}}\sqrt{1-2Xe^{\lambda \phi/M_{\rm pl}}/m^4}
-M^4 e^{-\lambda \phi/M_{\rm pl}}$.
The model parameters are chosen to be
$c_M=M^4/m^4=500$, $\lambda=1$, and $\mu=26$
with the initial condition $Y/m^4=10^{-2}$
and several different initial values of $x_3$ and $x_4$.
The trajectories with different initial conditions converge
to the anisotropic fixed point
$(Y/m^4,x_3,x_4)=
(4.911 \times 10^{-1}, 5.494 \times 10^{-3},9.020 \times 10^{-2})$ with
$x_1=3.098 \times 10^{-2}$ and $\epsilon=3.794 \times 10^{-2}$.}
\end{figure}
For compatibility of the two conditions (\ref{con1})
and (\ref{con2}), we require that $\mu>\lambda/2$.
If $\lambda>\lambda_m \equiv \sqrt{2c_M+2\sqrt{c_M^2+3}}$,
the condition $\mu>\lambda/2$ is stronger than the bound (\ref{DBImu}).
In the regime $\lambda<\lambda_m$, as $\lambda$ increases
with $\mu$ kept near the lower bound given in Eq.~(\ref{DBImu}),
the slow-roll parameter also increases,
reaching the value $\epsilon=1$ at $\lambda=\lambda_m$.
Then, the realization of anisotropic inflation
demands the condition
\begin{equation}
\lambda<\sqrt{2c_M+2\sqrt{c_M^2+3}}\,.
\label{lamcon}
\end{equation}
When $c_M=500$, for example, the condition (\ref{lamcon})
translates to $\lambda<44.7$.
As long as $\lambda$ is much smaller than the upper bound of
Eq.~(\ref{lamcon}), anisotropic inflation with $\epsilon \ll 1$
occurs in the ultra-relativistic regime for $\mu$ close
to the lower bound of Eq.~(\ref{DBImu}).
In Fig.~\ref{fig4} we show the trajectories of solutions in the
three-dimensional phase space $(Y/m^4, x_3,x_4)$
for $c_M=500$, $\lambda=1$, and $\mu=26$.
In this case the solutions with several different initial conditions
converge to the anisotropic fixed point with constant values
of $x_3$, $x_4$ satisfying $x_3 \ll 1$ and $x_4 \ll 1$.
The attractor is in the ultra-relativistic regime ($Y/m^4$ close to $1/2$)
with $\epsilon$ of the order of 0.01.
It is also possible to realize stable anisotropic inflation for
$\lambda=O(10)$ and $\mu=O(10)$, but in such cases the slow-roll
parameter $\epsilon$ is not much smaller than 1.
In summary, the stable anisotropic DBI inflation can be realized
in the ultra-relativistic regime under the conditions
(\ref{DBImu}) and (\ref{lamcon}) for $\mu$ close to
the lower bound (\ref{DBImu}).
\section{Conclusions}
\label{consec}
We have studied the dynamics of anisotropic power-law
k-inflation in the presence of a vector kinetic term
$F_{\mu \nu}F^{\mu \nu}$ coupled to the inflaton field $\phi$.
Such a power-law k-inflation can be accommodated for the general
Lagrangian $P=Xg(Y)$, where $Y=Xe^{\lambda \phi/M_{\rm pl}}$.
The cosmological dynamics on the anisotropic background
can be determined by solving the autonomous equations
(\ref{auto1})-(\ref{auto3}).
Without specifying the functional forms of $g(Y)$, we have shown
that anisotropic inflationary solutions exist for the
exponential coupling (\ref{fphi}).
The anisotropic fixed point satisfying Eq.~(\ref{auani1}) is
present for $3(\lambda+2\mu)x_1>2\sqrt{6}$, where
$x_1=\dot{\phi}/(\sqrt{6}HM_{\rm pl})$.
The condition for the cosmic acceleration
translates to $\lambda x_1<\sqrt{6}/3$.
Provided the conditions $3(\lambda+2\mu)x_1>2\sqrt{6}$ and
$A=(P_{,X}+2XP_{,XX})^{-1}>0$ are satisfied,
the anisotropic inflationary fixed point is stable
in the regime where $x_1$ is close to
$2\sqrt{6}/[3(\lambda+2\mu)]$.
This property holds irrespective of the form of $g(Y)$, and
hence the anisotropic hair survives whenever the anisotropic
power-law inflationary solutions are present.
The quantity $A$ is related to the sound speed $c_s$ as
$c_s^2=AP_{,X}$, so that the Laplacian instability can
be avoided for $A>0$ and $P_{,X}>0$.
For the models in which $P_{,X}$ can be negative,
it happens that the NEC $\rho_t+P_t>0$ is not satisfied
for some model parameters, see Eq.~(\ref{nec}).
In the de Sitter limit ($\lambda \to 0$)
we found that the NEC is always violated
for anisotropic solutions. This is consistent
with Wald's cosmic no-hair conjecture.
As long as $\lambda$ is nonzero, there are
parameter regions in which the NEC is satisfied.
In Sec.~\ref{concretesec} we applied our general results to
concrete models of k-inflation such as the generalized ghost
condensate and the DBI model.
In the generalized ghost condensate
we showed that there are allowed parameter spaces in the
$(\lambda,\mu)$ plane where stable anisotropic
inflationary solutions with $P_{,X}>0$ and $A>0$
are present, see Fig.~\ref{figab}. The existence of such
anisotropic attractors is confirmed in the numerical
simulation of Fig.~\ref{fig3}.
In this model anisotropic inflation occurs for $\lambda=O(0.1)$
and $\mu=O(0.1)$, but to obtain a slow-roll parameter
$\epsilon$ of the order of $10^{-2}$,
one needs $\lambda=O(10^{-2})$.
In the DBI model there exist stable anisotropic inflationary solutions
in the ultra-relativistic regime ($Y/m^4 \simeq 1/2$)
for $\mu$ close to the lower bound of Eq.~(\ref{DBImu}) and
$\lambda$ satisfying the bound (\ref{lamcon})
(see Fig.~\ref{fig4}).
The model parameters are typically of the order of
$\lambda=O(1)$ and $\mu=O(10)$
to realize $\epsilon=O(10^{-2})$.
While we focused on the vector field coupled to the inflaton
in this paper, we expect that a similar property
should also hold for the two-form field models
studied in Ref.~\cite{Ohashi} in the context
of potential-driven slow-roll inflation.
It is also known that in k-inflation the non-Gaussianities of
scalar metric perturbations can be large for the equilateral
shape due to the non-linear field self-interactions
inside the Hubble radius \cite{nongau}.
It will be of interest to study how the non-linear
estimator $f_{\rm NL}$ of the single-field k-inflation
is modified by the interactions between
inflaton and the vector/two-form fields.
We leave these issues for future work.
\acknowledgements
This work is supported by the Grant-in-Aid for Scientific Research
Fund of the Ministry of Education, Science and Culture of Japan
(Nos.~23$\cdot$6781, 25400251, and 24540286), and the Grant-in-Aid
for Scientific Research on Innovative Areas (No.~21111006).
\section*{Appendix}
\label{appe}
In this Appendix we provide more explicit analysis for the properties
of isotropic and anisotropic solutions given in
Eqs.~(\ref{auiso}) and (\ref{auani1}).
The power-law inflationary solution corresponds to $\dot{\alpha}=H=\zeta/t$,
where $\zeta$ is a constant larger than 1.
Since the quantities $x_3=\dot{\sigma}/H$ and $x_1=\dot{\phi}/(\sqrt{6}HM_{\rm pl})$
are constant along the fixed points, we have that
$\dot{\sigma}=\eta/t$ and $\dot{\phi}/M_{\rm pl}=\xi/t$, respectively,
where $\eta=\zeta x_3$ and $\xi=\sqrt{6}x_1 \zeta$.
Then, the evolution of $\alpha$, $\sigma$, and $\phi$
is characterized by
\begin{equation}
\alpha = \zeta \log \frac{t}{t_0}\,,\qquad
\sigma = \eta \log \frac{t}{t_0}\,, \qquad
\frac{\phi}{M_{\rm pl}} = \xi \log \frac{t}{t_0}\,,
\label{poweras}
\end{equation}
where $t_0$ is a constant.
Since $X=\dot{\phi}^2/2\propto t^{-2}$ and $e^{\lambda\phi/M_{\rm pl}}=(t/t_0)^{\lambda\xi}$,
for $Y=Xe^{\lambda \phi/M_{\rm pl}}$ to be constant we need to require
\begin{equation}
\lambda \xi =2\,.
\label{scaling}
\end{equation}
For the solutions (\ref{poweras}) satisfying the relation (\ref{scaling}),
the dimensionless variables defined in Eq.~(\ref{dimendef}) read
\begin{equation}
x_1=\frac{2}{\sqrt{6}\lambda \zeta}\,,\qquad
x_2=\frac{M_{\rm pl}t_0}{\sqrt{3}\zeta}\,,\qquad
x_3=\frac{\eta}{\zeta}\,,\qquad
x_4^2=\frac{W^2}{6\zeta^2}\,,
\label{xire}
\end{equation}
where $W^2=p_A^2 t_0^2/(M_{\rm pl}^2 f_0^2)$.
Substituting the solutions (\ref{poweras})
into Eqs.~(\ref{be1})-(\ref{be4}), we obtain
\begin{eqnarray}
&& \mu \xi - 2\zeta - 2\eta = -1\,, \label{similar}\\
&& \zeta^2 = \eta^2 + \frac{2P_{,X}-g}{6} \xi^2 + \frac{W^2}{6}\,, \label{hamilton} \\
&& \zeta = 3\eta^2 + \frac{P_{,X}}{2}\xi^2 + \frac{W^2}{3}\,, \label{evolution}\\
&& \eta = 3\zeta \eta - \frac{W^2}{3}\,, \label{aniso}\\
&& \xi -3AP_{,X}\zeta \xi -\frac{\lambda}{2}\left( 1-A P_{,X} \right) \xi^2
+ \frac{\lambda}{2} (P_{,X}-g) A\xi^2 -\mu A W^2 = 0 \label{KG}\,.
\end{eqnarray}
Notice that Eq.~(\ref{similar}) follows from demanding the
time dependence $t^{-2}$ for the last term of Eq.~(\ref{be1}).
Plugging the relation (\ref{scaling}) into Eq.~(\ref{KG}),
it follows that
\begin{equation}
W^2=\frac{2}{\lambda \mu} \left[ P_{,X} (2-3\zeta)-g \right]\,.
\label{W2simple}
\end{equation}
First, let us seek isotropic solutions.
In this case, Eq.~(\ref{similar}) is absent and $\eta = W=0$.
{}From Eqs.~(\ref{evolution}) and (\ref{hamilton}) we obtain
the following relations
\begin{equation}
\zeta = \frac{2P_{,X}}{\lambda^2}\,,\qquad
P_{,X}^2 -\frac{\lambda^2}{3} P_{,X} +
\frac{\lambda^2}{6} g=0 \label{Ysol}\,,
\end{equation}
respectively. Note that these are consistent with Eq.~(\ref{W2simple}).
On using the correspondence (\ref{xire}),
we find that the two relations (\ref{Ysol}) are equivalent
to the first two of Eq.~(\ref{auiso}).
Now, we move on to anisotropic power-law solutions.
{}From Eq.~(\ref{similar}) we have $\zeta + \eta = 1/2 + \mu/\lambda$.
Combining Eqs.~(\ref{evolution}) and (\ref{aniso}), it follows that
$\zeta + \eta = 3\eta (\zeta +\eta) + P_{,X} \xi^2/2$.
Then we obtain
\begin{eqnarray}
\zeta &=& \frac{(\lambda + 2\mu)(\lambda + 6\mu)+ 8 P_{,X}}
{6\lambda (\lambda + 2\mu)}\,,
\label{sol-zeta}\\
\eta &=& \frac{\lambda^2 + 2\lambda \mu -4 P_{,X}}{3\lambda (\lambda +2\mu)}\,,
\label{sol-eta}
\end{eqnarray}
by which the anisotropy of the expansion is
\begin{equation}
\frac{\Sigma}{H} = \frac{\eta}{\zeta}
= \frac{2(\lambda^2 +2\lambda \mu -4 P_{,X})}
{(\lambda + 2\mu)(\lambda + 6\mu)+ 8 P_{,X}}\,.
\label{rafi}
\end{equation}
Substituting Eqs.~(\ref{sol-zeta}) and (\ref{sol-eta})
into Eq.~(\ref{aniso}), we have
\begin{equation}
W^2 =-\frac{(\lambda^2+2\lambda \mu-4P_{,X})
(\lambda^2-4\lambda \mu-12\mu^2-8P_{,X})}
{2\lambda^2 (\lambda+2\mu)^2}\,.
\label{Wdfi}
\end{equation}
{}From Eq.~(\ref{sol-zeta}) and the first of Eq.~(\ref{xire})
we can express $P_{,X}$ in terms of $x_1$.
This exactly corresponds to the first relation of Eq.~(\ref{auani1}).
Substituting this into Eqs.~(\ref{rafi})-(\ref{Wdfi}) and
using the correspondence (\ref{xire}), we obtain
the third and fourth relations of Eq.~(\ref{auani1}).
On using Eqs.~(\ref{W2simple}) and (\ref{Wdfi}) as well as
the relation $P_{,X}=g+g_1$, we find that $g_1$ can be expressed
as the second of Eq.~(\ref{auani1}).
To write down the metric explicitly,
we use Eq.~(\ref{xire})
to obtain the following relations
\begin{equation}
\zeta = \frac{2}{\sqrt{6} \lambda x_1 } \ ,
\qquad \eta = \frac{2 x_3 }{\sqrt{6} \lambda x_1 } \ .
\end{equation}
The anisotropic power-law inflationary solutions are given by
\begin{equation}
ds^2 = - dt^2 + \left(\frac{t}{t_0}\right)^{2\zeta}
\left[ \left(\frac{t}{t_0}\right)^{-4\eta}dx^2
+\left(\frac{t}{t_0}\right)^{2\eta}(dy^2+dz^2) \right] \ .
\label{anisotropic-metric2}
\end{equation}
Now it is easy to write down the metric corresponding to
the solutions derived in Sec.~\ref{concretesec}.
Fix a prime $p\geq 5$, an integer $N>0$ prime to $p$, and
let $f\in S_2(\Gamma_0(Np))$ be a newform.
Throughout this paper, we shall assume that $f$ is \emph{split multiplicative} at $p$,
meaning that
\[
f(q)=q+\sum_{n=2}^\infty a_n(f)q^n\quad\quad\textrm{with $a_p(f)=1$.}
\]
Fix embeddings $\mathbf{C}\overset{\imath_\infty}\hookleftarrow\overline{\mathbf{Q}}\overset{\imath_p}\hookrightarrow\mathbf{C}_p$,
let $L$ be a finite extension of $\mathbf{Q}_p$ containing $\imath_p\imath_\infty^{-1}(a_n(f))$ for all $n$,
and let $\mathcal{O}_L$ be the ring of integers of $L$. Since the $U_p$-eigenvalue of $f$ is $a_p(f)=1$ by hypothesis,
the form $f$ is ordinary at $p$, and hence there is a Hida family
\[
\mathbf{f}=\sum_{n=1}^\infty\mathbf{a}_nq^n\in\mathbb{I}\pwseries{q}
\]
passing through $f$. Here $\mathbb{I}$ is a finite flat extension of
the power series ring $\mathcal{O}_L\pwseries{T}$, which for simplicity in this introduction
will be assumed to be $\mathcal{O}_L\pwseries{T}$ itself. Embed $\mathbf{Z}$ in the space
$\mathcal{X}_{\mathcal{O}_L}(\mathbb{I})$ of continuous $\mathcal{O}_L$-algebra homomorphisms
$\nu:\mathbb{I}\longrightarrow\overline{\mathbf{Q}}_p$ by identifying $k\in\mathbf{Z}$
with the homomorphism $\nu_k:\mathbb{I}\longrightarrow\overline{\mathbf{Q}}_p$
defined by $1+T\mapsto(1+p)^{k-2}$. The Hida family $\mathbf{f}$ is then uniquely characterized by the property that for every
$k\in\mathbf{Z}_{\geq 2}$ its \emph{weight $k$ specialization}
\[
\mathbf{f}_k:=\sum_{n=1}^{\infty}\nu_k(\mathbf{a}_n)q^n
\]
gives the $q$-expansion of a $p$-ordinary
$p$-stabilized newform $\mathbf{f}_k\in S_k(\Gamma_0(Np))$ with $\mathbf{f}_{2}=f$.
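(For readers who wish to experiment numerically, the maps $\nu_k$ are easy to
implement; the sketch below, which is our illustration with toy coefficients
rather than an actual Hida family, evaluates a truncated element of
$\mathcal{O}_L\pwseries{T}$ at $\nu_k$ working modulo $p^{\mathrm{prec}}$.)
\begin{verbatim}
# Toy sketch: specialize a truncated a(T) in Z_p[[T]] at nu_k,
# i.e. T -> (1+p)^(k-2) - 1, modulo p^prec with plain integers.
p, prec = 5, 20
mod = p**prec
a = [1, 3, 7, 2]                 # a(T) = 1 + 3T + 7T^2 + 2T^3 (toy data)

def nu(a, k):
    T = (pow(1 + p, k - 2, mod) - 1) % mod
    return sum(c*pow(T, i, mod) for i, c in enumerate(a)) % mod

print(nu(a, 2))                  # weight 2: T -> 0, returns a[0] = 1
\end{verbatim}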
\vspace{0.1in}
Let $K$ be an imaginary quadratic field equipped with an integral ideal $\mathfrak{N}\subset\mathcal{O}_K$
with $\mathcal{O}_K/\mathfrak{N}\simeq\mathbf{Z}/N\mathbf{Z}$, assume that $p$ splits in $K$, and
write $p\mathcal{O}_K=\mathfrak{p}\overline{\mathfrak{p}}$ with $\mathfrak{p}$ the prime above $p$ induced by $\imath_p$.
If $A$ is an elliptic curve with CM by $\mathcal{O}_K$, then the pair $(A,A[\mathfrak{N}\mathfrak{p}])$ defines a
\emph{Heegner point} $P_A$ on $X_0(Np)$ defined over the Hilbert class field $H$ of $K$.
Taking the image of the degree zero divisor $(P_A)-(\infty)$ under the composite map
\begin{equation}\label{def:heeg}
J_0(Np)\xrightarrow{\rm Kum}H^1(H,{\rm Ta}_p(J_0(Np)))\longrightarrow H^1(H,V_f)\xrightarrow{{\rm Cor}_{H/K}} H^1(K,V_f)
\end{equation}
yields a class $\kappa_f\in{\rm Sel}(K,V_f)$ in the Selmer group for the
$p$-adic Galois representation
\[
\rho_f:G_{\mathbf{Q}}:={\rm Gal}(\overline{\mathbf{Q}}/\mathbf{Q})\longrightarrow{\rm Aut}_L(V_f)\simeq\mathbf{GL}_2(L)
\]
associated to $f$.
On the other hand, by working over a $p$-tower of modular curves,
Howard~\cite{howard-invmath} constructed a so-called \emph{big Heegner point}
$\mathfrak{Z}_0\in{\rm Sel}_{\rm Gr}(K,\mathbf{T}^\dagger)$ in the Selmer group
for a self-dual twist of the big Galois representation
\[
\rho_{\mathbf{f}}:G_\mathbf{Q}\longrightarrow{\rm Aut}_{\mathbb{I}}(\mathbf{T})\simeq\mathbf{GL}_2(\mathbb{I})
\]
associated to $\mathbf{f}$. The image of $\mathfrak{Z}_0$ under the
\emph{specialization map} $\nu_2:{\rm Sel}_{\rm Gr}(K,\mathbf{T}^\dagger)\longrightarrow{\rm Sel}(K,V_f)$
induced by $\nu_2:\mathbb{I}\longrightarrow\overline{\mathbf{Q}}_p$ yields a second class of ``Heegner type'' in ${\rm Sel}(K,V_f)$;
the question of comparing $\kappa_f$ with $\nu_2(\mathfrak{Z}_0)$ thus naturally arises.
\vspace{0.1in}
For $k>2$, the question of relating the specializations $\nu_k(\mathfrak{Z}_0)$ to higher dimensional Heegner cycles
was considered in \cite{cas-inv}. In that case, one could show (see [\emph{loc.cit.}, (5.31)]) that
\begin{equation}\label{eq:mathann}
{\rm loc}_\mathfrak{p}(\nu_k(\mathfrak{Z}_0))=u^{-1}\biggl(1-\frac{p^{k/2-1}}{\nu_k(\mathbf{a}_p)}\biggr)^2\cdot
{\rm loc}_\mathfrak{p}(\kappa_{\mathbf{f}_k}),
\end{equation}
where $u:=\vert\mathcal{O}_K^\times\vert/2$,
${\rm loc}_{\mathfrak{p}}:H^1(K,V_{\mathbf{f}_k})\longrightarrow H^1(K_{\mathfrak{p}},V_{\mathbf{f}_k})$
is the localization map, and $\kappa_{\mathbf{f}_k}$ is a class given by the
$p$-adic \'etale Abel--Jacobi images of certain Heegner cycles on a
Kuga--Sato variety of dimension $k-1$. However, for the above newform $f$,
the main result of \cite{cas-inv} does not immediately yield a similar
relation between $\nu_2(\mathfrak{Z}_0)$ and $\kappa_{\mathbf{f}_2}=\kappa_f$,
since in \emph{loc.cit.} a crucial use is made of the fact
that the $p$-adic Galois representations associated with the eigenforms under consideration
are (potentially) crystalline at $p$, whereas $V_f$ is well-known to be semistable but non-crystalline at $p$.
Moreover, it is easy to see that the expected relation between these two classes may not be
given by the naive extension of $(\ref{eq:mathann})$ with $k=2$: indeed, granted the injectivity of ${\rm loc}_\mathfrak{p}$,
by the Gross--Zagier formula the class ${\rm loc}_\mathfrak{p}(\kappa_f)$ is nonzero as long as $L'(f/K,1)\neq 0$,
whilst $(\ref{eq:mathann})$ for $k=2$ would imply the vanishing of ${\rm loc}_\mathfrak{p}(\nu_2(\mathfrak{Z}_0))$
in all cases, since
\begin{equation}\label{vanishing}
\biggl(1-\frac{p^{k/2-1}}{\nu_k(\mathbf{a}_p)}\biggr)\bigr\vert_{k=2}
=\left(1-\frac{1}{a_p(f)}\right)=0.
\end{equation}
As shown in \cite{howard-invmath}, the class $\mathfrak{Z}_0$ fits into a compatible system of similar
classes $\mathfrak{Z}_\infty=\{\mathfrak{Z}_n\}_{n\geq 0}$ over
the anticyclotomic $\mathbf{Z}_p$-extension of $K$; thus $\mathfrak{Z}_0$ might be seen
as the value of $\mathfrak{Z}_\infty$ at the trivial character.
As suggested by the above discussion, in this paper we will show that the class
${\rm loc}_\mathfrak{p}(\nu_2(\mathfrak{Z}_0))$ vanishes, and prove an ``exceptional zero formula''
relating its derivative at the trivial character (in a precise sense to be defined) to the geometric class $\kappa_f$.
To state the precise result,
let $h$ be the class number of $K$, write $\mathfrak{p}^{h}=\pi_\mathfrak{p}\mathcal{O}_K$,
and define
\begin{equation}\label{def:Linv}
\mathscr{L}_\mathfrak{p}(f,K):=\mathscr{L}_p(f)-\frac{\log_p(\varpi_\mathfrak{p})}{{\rm ord}_p(\varpi_\mathfrak{p})},
\end{equation}
where $\mathscr{L}_p(f)$ is the $\mathscr{L}$-invariant of $f$ (see \cite[\S{II.14}]{mtt} for example),
$\varpi_\mathfrak{p}:=\pi_\mathfrak{p}/\overline{\pi}_{\mathfrak{p}}\in K_\mathfrak{p}\simeq\mathbf{Q}_p$,
and $\log_p:\mathbf{Q}_p^\times\longrightarrow\mathbf{Q}_p$ is Iwasawa's branch of the $p$-adic logarithm.
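(To make the second term of $(\ref{def:Linv})$ concrete, the following toy
computation, which is our illustration and not part of the argument, evaluates
$\log_p(\varpi_\mathfrak{p})/{\rm ord}_p(\varpi_\mathfrak{p})$ for
$K=\mathbf{Q}(i)$ and $p=5=(2+i)(2-i)$, where $h=1$ and
$\varpi_\mathfrak{p}=(2+i)/(2-i)=(3+4i)/5$ has ${\rm ord}_5=1$ under the
embedding $i\mapsto u$ with $u^2=-1$ and $u\equiv 3\pmod 5$.)
\begin{verbatim}
p, N = 5, 12                        # work modulo p^N (toy precision)
P = p**N
def inv(a): return pow(a, -1, P)

u = 3                               # Hensel-lift sqrt(-1) in Z_5
for _ in range(N):
    u = (u - (u*u + 1)*inv(2*u)) % P

x = (3 + 4*u) % P                   # = (2+u)^2, exact valuation 2
unit = x // p**2                    # varpi = x/5 = p * unit

def iwasawa_log_unit(v):            # Iwasawa log of a p-adic unit
    z = (pow(v, p - 1, P) - 1) % P  # v^(p-1) = 1 (mod p)
    s, zm = 0, 1
    for m in range(1, N + 4):       # log(1+z) = sum (-1)^(m+1) z^m/m
        zm = zm*z % P
        a, mm = 0, m
        while mm % p == 0:
            mm //= p; a += 1
        s = (s + (-1)**(m + 1)*(zm//p**a)*inv(mm)) % P
    return s*inv(p - 1) % P         # accurate modulo ~p^(N-2)

second_term = iwasawa_log_unit(unit)   # ord_p(varpi) = h = 1 here
\end{verbatim}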
\begin{mainthmA}\label{intro:main}
Let $f\in S_2(\Gamma_0(Np))$ be a newform split multiplicative at $p$,
and define $\mathcal{Z}_{\mathfrak{p},f,\infty}=\{\mathcal{Z}_{\mathfrak{p},f,n}\}_{n\geq 0}$
by $\mathcal{Z}_{\mathfrak{p},f,n}:={\rm loc}_\mathfrak{p}(\nu_2(\mathfrak{Z}_n))$.
Then $\mathcal{Z}_{\mathfrak{p},f,0}=0$ and
\begin{equation}\label{intro:exczero}
\mathcal{Z}_{\mathfrak{p},f,0}'
=\mathscr{L}_\mathfrak{p}(f,K)\cdot{\rm loc}_\mathfrak{p}(\kappa_f).\nonumber
\end{equation}
\end{mainthmA}
In Lemma~\ref{lem:divide} below, we define the ``derivative''
$\mathcal{Z}_{\infty}'$ for any compatible system of classes
$\mathcal{Z}_\infty=\{\mathcal{Z}_n\}_{n\geq 0}$ with $\mathcal{Z}_0=0$.
Thus the above result, which corresponds to Theorem~\ref{main} in the body of the paper,
might be seen as an exceptional zero formula
relating the derivative of ${\rm loc}_\mathfrak{p}(\nu_2(\mathfrak{Z}_\infty))$ at the trivial character
to classical Heegner points.
\begin{intro-rem}
As suggested in \cite[\S{8}]{LLZ}, one might view $p$-adic $L$-functions
(as described in \cite{PR:Lp} and \cite[Ch.~8]{Rubin-ES}) as
``rank 0" Euler--Iwasawa systems. In this view, it is natural to expect higher rank
Euler--Iwasawa systems to exhibit exceptional zero phenomena similar to their rank $0$ counterparts.
We would like to see the main result of this paper
as an instance of this phenomenon in ``rank $1$''.
\end{intro-rem}
\begin{intro-rem}
It would be interesting to study the formulation of our main result
in the framework afforded by Nekov{\'a}{\v{r}}'s theory of Selmer complexes \cite{nekovar310}, similarly as the
exceptional zero conjecture of Mazur--Tate--Teitelbaum \cite{mtt}
has recently been proved by Venerucci \cite{venerucci-exp} in the rank $1$ case.
\end{intro-rem}
\begin{intro-rem}
The second term in the definition $(\ref{def:Linv})$ is
precisely the $\mathscr{L}$-invariant $\mathscr{L}_\mathfrak{p}(\chi_K)$ appearing in
the exceptional zero formula of Ferrero--Greenberg~\cite{FG} and Gross--Koblitz~\cite{GK}
for the Kubota--Leopoldt $p$-adic $L$-function associated to the quadratic Dirichlet character
$\chi_{K}$ corresponding to $K$. It would be interesting
to find a conceptual explanation for the rather surprising appearance of $\mathscr{L}_\mathfrak{p}(\chi_K)$
in our derivative formula; we expect this to be related to a comparison of $p$-adic periods (cf.~\cite{cas-BF}).
\end{intro-rem}
The proof of the above theorem is obtained by
computing in two different ways the value of a certain
anticyclotomic $p$-adic $L$-function $L_{\mathfrak{p}}(f)$
at the norm character ${\rm\mathbf{N}}_K$. The $p$-adic $L$-function $L_\mathfrak{p}(f)$
is defined by the interpolation of the central critical values for the Rankin--Selberg convolution of $f$
with the theta series attached to Hecke characters of $K$ of infinity type $(2+j,-j)$ with $j\geq 0$.
The character ${\rm\mathbf{N}}_K$ thus lies \emph{outside} the range of interpolation of $L_\mathfrak{p}(f)$,
and via a suitable extension of the methods of Bertolini--Darmon--Prasanna~\cite{bdp1}
to our setting, in Theorem~\ref{thmbdp1A} we show that
\begin{equation}\label{intro:pGZ}
L_\mathfrak{p}(f)({\rm\mathbf{N}}_K)
=(1-a_p(f)p^{-1})\cdot\langle{\rm log}_{V_f}({\rm loc}_\mathfrak{p}(\kappa_f)),\omega_{f}\rangle_{}.
\end{equation}
On the other hand, in \cite{cas-2var} we constructed a two-variable
$p$-adic $L$-function $L_{\mathfrak{p},\xi}(\mathbf{f})$ of the variables $(\nu,\phi)$
interpolating (a shift of) the $p$-adic $L$-functions $L_\mathfrak{p}(\mathbf{f}_k)$ for all $k\geq 2$,
and established the equality
\begin{equation}
\label{intro:Log}
L_{\mathfrak{p},\xi}(\mathbf{f})=\mathcal{L}_{{\mathscr{F}}^+\mathbb{T}}^\omega({\rm loc}_\mathfrak{p}(\mathfrak{Z}^{\xi^{-1}}_\infty)),
\end{equation}
where $\mathcal{L}_{{\mathscr{F}}^+\mathbb{T}}^\omega$ is a two-variable Coleman power series map whose
restriction to a certain ``line'' interpolates
\[
\biggl(1-\frac{p^{k/2-1}}{\nu_k(\mathbf{a}_p)}\biggr)^{-1}
\biggl(1-\frac{\nu_k(\mathbf{a}_p)}{p^{k/2}}\biggr)\cdot
{\rm log}_{V_{\mathbf{f}_k}}
\]
for all $k>2$. A second evaluation of $L_\mathfrak{p}(f)({\rm\mathbf{N}}_K)$ should thus follow by
specializing $(\ref{intro:Log})$ at $(\nu_2,\mathds{1})$.
However, because of the vanishing $(\ref{vanishing})$, we may not directly specialize
$\mathcal{L}_{{\mathscr{F}}^+\mathbb{T}}^\omega$ at $(\nu_2,\mathds{1})$, and we are led to utilize a different argument
reminiscent of that of Greenberg--Stevens~\cite{GS}. In fact, from the form of the $p$-adic multipliers
appearing in the interpolation property defining $\mathcal{L}_{{\mathscr{F}}^+\mathbb{T}}^\omega$,
we deduce a factorization
\begin{equation}\label{intro:imp}
E_\mathfrak{p}(\mathbf{f})\cdot L_{\mathfrak{p},\xi}(\mathbf{f})=\widetilde{\mathcal{L}}_{{\mathscr{F}}^+\mathbb{T}}^\omega({\rm loc}_\mathfrak{p}(\mathfrak{Z}^{\xi^{-1}}_0))\nonumber
\end{equation}
upon restricting $(\ref{intro:Log})$ to an appropriate ``line'' (different from the above)
passing through $(\nu_2,\mathds{1})$, where $\widetilde{\mathcal{L}}_{{\mathscr{F}}^+\mathbb{T}}^\omega$ is a modification of
$\mathcal{L}_{{\mathscr{F}}^+\mathbb{T}}^\omega$ and $E_\mathfrak{p}(\mathbf{f})$ is a $p$-adic analytic
function vanishing at that point.
The vanishing of $\mathcal{Z}_{\mathfrak{p},f,0}$ thus follows, and exploiting
the ``functional equation'' satisfied by $\mathfrak{Z}_\infty$,
we arrive at the equality
\begin{equation}\label{intro:lim}
\mathscr{L}_\mathfrak{p}(f,K)\cdot L_\mathfrak{p}(f)({\rm\mathbf{N}}_K)
=(1-a_p(f)p^{-1})\cdot\langle{\rm log}_{V_f}(\mathcal{Z}_{\mathfrak{p},f,0}'),\omega_{f}\rangle_{}
\end{equation}
using a well-known formula for the $\mathscr{L}$-invariant as a
logarithmic derivative of $\nu_k(\mathbf{a}_p)$ at $k=2$.
The proof of our exceptional zero formula then follows by combining $(\ref{intro:pGZ})$ and $(\ref{intro:lim})$.
\vspace{0.1in}
\noindent\emph{Acknowledgements.}
We would like to thank Daniel Disegni for
conversations related to this work, and the referee for pointing out a number of
inaccuracies in an earlier version of this paper,
as well as for helpful suggestions
which led to significant improvements in the article.
\section{Preliminaries}
For a more complete and detailed discussion of the topics that we touch upon in this section,
we refer the reader to \cite{pSI} and \cite{bdp1}.
\subsection{Modular curves}\label{subsubsec:X}
Keep $N$ and $p\nmid N$ as in the Introduction, and let
\[
\Gamma:=\Gamma_1(N)\cap\Gamma_0(p)\subset\mathbf{SL}_2(\mathbf{Z}).
\]
An \emph{elliptic curve with $\Gamma$-level structure} over a $\mathbf{Z}[1/N]$-scheme $S$ is a triple
$(E,t,\alpha)$ consisting of
\begin{itemize}
\item{} an elliptic curve $E$ over $S$;
\item{} a section $t:S\longrightarrow E$ of
the structure morphism of $E/S$ of exact order $N$; and
\item{} a $p$-isogeny $\alpha:E\longrightarrow E'$.
\end{itemize}
The functor on $\mathbf{Z}[1/N]$-schemes assigning to $S$ the set of
isomorphism classes of elliptic curves with $\Gamma$-level
structure over $S$ is representable, and we let $Y/\mathbf{Z}[1/N]$ be the
corresponding fine moduli scheme. The same
moduli problem for \emph{generalized} elliptic curves
with $\Gamma$-level structure defines
a smooth geometrically connected curve $X/\mathbf{Z}[1/N]$ containing
$Y$ as a open subscheme, and we refer to $Z_X:=X\smallsetminus Y$
as the \emph{cuspidal subscheme} of $X$. Removing the data of
$\alpha$ from the above moduli problem, we obtain the modular curve $X_1(N)$
of level $\Gamma_1(N)$.
\vspace{0.1in}
For our later use (see esp.~Theorem~\ref{coleman-primitives}),
recall that if $a$ is any integer coprime to $N$, the rule
\[
\langle a\rangle(E,t,\alpha)=(E,a\cdot t,\alpha)
\]
defines an action of $(\mathbf{Z}/N\mathbf{Z})^\times$ on $X$ defined over $\mathbf{Z}[1/N]$,
and we let $X_0(Np)=X/(\mathbf{Z}/N\mathbf{Z})^\times$ be the quotient of $X$ by this action.
\vspace{0.1in}
The special fiber $X_{\mathbf{F}_p}:=X\times_{\mathbf{Z}[1/N]}\mathbf{F}_p$
is non-smooth. In fact, it consists of two irreducible components,
denoted $C_0$ and $C_\infty$, meeting transversally at the singular points $SS$.
Let ${\rm Frob}$ be the absolute Frobenius of an elliptic curve over $\mathbf{F}_p$,
and ${\rm Ver}={\rm Frob}^\vee$ be the Verschiebung. The maps
\[
\gamma_{V}:X_1(N)_{\mathbf{F}_p}:=X_1(N)\times_{\mathbf{Z}[1/N]}\mathbf{F}_p\longrightarrow X_{\mathbf{F}_p}
\quad\quad
\gamma_{F}:X_1(N)_{\mathbf{F}_p}\longrightarrow X_{\mathbf{F}_p}
\]
defined by sending a pair $(E,t)_{/\mathbf{F}_p}$ to $(E,t,{\rm ker}({\rm Ver}))$
and $(E,t,{\rm ker}({\rm Frob}))$ respectively, are closed immersions sending
$X_1(N)_{\mathbf{F}_p}$ isomorphically onto $C_0$ and $C_\infty$,
and mapping the supersingular points in $X_1(N)_{\mathbf{F}_p}$ bijectively onto $SS$.
The non-singular geometric points of $C_0$ (resp. $C_\infty$)
thus correspond to the moduli of triples $(E,t,\alpha)$ in characteristic $p$ with
${\rm ker}(\alpha)$ \'etale (resp. connected).
\vspace{0.1in}
Corresponding to the preceding description of $X_{\mathbf{F}_p}$
there is a covering of $X$ as rigid analytic space over $\mathbf{Q}_p$.
Consider the reduction map
\begin{equation}\label{eq:red}
{\rm red}_p:X(\mathbf{C}_p)\longrightarrow X_{\mathbf{F}_p}(\overline{\mathbf{F}}_p),
\end{equation}
let $\mathcal{W}_0$ and $\mathcal{W}_\infty$ be the inverse image of $C_0$ and $C_\infty$, respectively,
and let $\mathcal{Z}_0\subset\mathcal{W}_0$ and $\mathcal{Z}_\infty\subset\mathcal{W}_\infty$ be the inverse image of their non-singular points.
In the terminology of \cite{rlc}, $\mathcal{W}_0$ (resp. $\mathcal{W}_\infty$) is a \emph{basic wide open}
with underlying affinoid $\mathcal{Z}_0$ (resp. $\mathcal{Z}_\infty$).
If $x\in SS$, then $\mathcal{A}_x:={\rm red}_p^{-1}(x)$ is conformal to an open
annulus in $\mathbf{C}_p$, and by definition we have
\[
X(\mathbf{C}_p)=\mathcal{W}_0\cup\mathcal{W}_\infty=\mathcal{Z}_0\cup\mathcal{Z}_\infty\cup\mathcal{W},
\]
where $\mathcal{W}=\mathcal{W}_0\cap\mathcal{W}_\infty=\bigcup_{x\in SS}\mathcal{A}_x$ is the union of the supersingular annuli.
\subsection{Modular forms and cohomology}\label{subsubsec:mf&dR}
In this section, we regard the modular curve $X$ as a scheme over a fixed base field $F$.
Let $\mathcal{E}\xrightarrow{\;\pi\;}X$ be the universal
generalized elliptic curve with $\Gamma$-level structure,
set $\widetilde{Z}_X=\pi^{-1}(Z_X)$, and consider the invertible sheaf on $X$
given by
\[
\underline{\omega}:=\pi_*\Omega_{\mathcal{E}/X}^1(\log\widetilde{Z}_X).
\]
The space of algebraic \emph{modular
forms} (resp. \emph{cusp forms}) of weight $k$
and level $\Gamma$ defined over $F$ is
\[
M_k(X;F):=H^0(X,\underline{\omega}_F^{\otimes k})\quad\quad
(\textrm{resp.}\; S_k(X;F):=H^0(X,\underline{\omega}_F^{\otimes
k}\otimes\mathcal{I})),
\]
where $\underline{\omega}_F$ is the pullback of
$\underline{\omega}$ to $X\times_{\mathbf{Q}}F$,
and $\mathcal{I}$ is the ideal sheaf of $Z_X\subset X$.
If there is no risk of confusion, $F$ will often be suppressed from the notation.
Alternatively, on the open modular curve $Y$ a form
$f\in S_k(X;F)\subset M_k(X;F)$ is a rule
on quadruples $(E,t,\alpha,\omega)_{/A}$,
consisting of an $A$-valued point $(E,t,\alpha)\in Y(A)$
and a differential $\omega\in\Omega^1_{E/A}$ over arbitrary $F$-algebras $A$,
assigning to any such quadruple a value $f(E,t,\alpha,\omega)\in A$ subject
to the \emph{weight $k$ condition}
\[
f(E,t,\alpha,\lambda\omega)=\lambda^{-k}\cdot f(E,t,\alpha,\omega)\;\;\;
\textrm{for all $\lambda\in A^\times$,}
\]
depending only on the isomorphism class of the quadruple,
and compatible with base change of $F$-algebras.
The two descriptions are related by
\[
f(E,t,\alpha)=f(E,t,\alpha,\omega)\omega^k,
\]
for any chosen generator $\omega\in\Omega_{E/A}^1$.
\vspace{0.1in}
There is a third way of thinking about modular forms that
will be useful in the following. Consider the
\emph{relative de Rham cohomology} of $\mathcal{E}/X$:
\[
\mathcal{L}:=\mathbb{R}^1\pi_*(0\longrightarrow\mathcal{O}_\mathcal{E}
\longrightarrow\Omega_{\mathcal{E}/X}^1(\log\widetilde{Z}_X)\longrightarrow 0),
\]
which fits in a short exact sequence
\begin{equation}\label{hodgedeRham}
0\longrightarrow\underline{\omega}\longrightarrow\mathcal{L}\longrightarrow
\underline{\omega}^{-1}\longrightarrow 0
\end{equation}
of sheaves on $X$ and is equipped with
a non-degenerate pairing
\begin{equation}\label{PDL}
\langle,\rangle:\mathcal{L}\times\mathcal{L}\longrightarrow\mathcal{O}_X
\end{equation}
coming from the Hodge filtration and the Poincar\'e pairing
on the de Rham cohomology of the fibers. By the Kodaira--Spencer isomorphism
\begin{equation}\label{KS}
\sigma:\underline{\omega}^{\otimes 2}\cong\Omega_X^1(\log Z_X)\nonumber
\end{equation}
given by $\sigma(\omega\otimes\eta)=\langle\omega,\nabla\eta\rangle$, where
\[
\nabla:\mathcal{L}\longrightarrow\mathcal{L}\otimes\Omega_X^1(\log Z_X)
\]
is the Gauss--Manin connection, a modular form $f$
of weight $r+2$ and level $\Gamma$ defines a section $\omega_f$ of the sheaf
$\underline{\omega}^{\otimes r}\otimes\Omega_X^1(\log Z_X)$ by the rule
\begin{equation}\label{f-wf}
\omega_f(E,t,\alpha):=f(E,t,\alpha,\omega)\omega^r\otimes\sigma(\omega^2).\nonumber
\end{equation}
If $f$ is a cusp form, then the above rule defines a section
$\omega_f$ of $\underline{\omega}^{\otimes r}\otimes\Omega_X^1$, thus
yielding an identification
\[
S_{r+2}(X)\simeq H^0(X,\underline{\omega}^{\otimes r}\otimes\Omega_X^1).
\]
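This identification is compatible with weights: by the Kodaira--Spencer isomorphism, away from the cusps one has
\[
\underline{\omega}^{\otimes r}\otimes\Omega_X^1\simeq\underline{\omega}^{\otimes r}\otimes\underline{\omega}^{\otimes 2}
=\underline{\omega}^{\otimes(r+2)},
\]
so that sections of $\underline{\omega}^{\otimes r}\otimes\Omega_X^1$ indeed encode weight $r+2$ forms.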
For each $r\geq 0$, let $\mathcal{L}_r:={\rm Sym}^{r}\mathcal{L}$ (with $\mathcal{L}_0:=\mathcal{O}_X$),
and define the \emph{de Rham cohomology} of $X$ (attached to $\mathcal{L}_r$) as
the hypercohomology group
\begin{equation}\label{def:HdR}
H_{\rm dR}^1(X,\mathcal{L}_r,\nabla):=\mathbb{H}^1(\mathcal{L}_r^\bullet:
\mathcal{L}_r\xrightarrow{\;\nabla\;}\mathcal{L}_r\otimes
\Omega_X^1(\log Z_X)).
\end{equation}
Twisting by the ideal sheaf $\mathcal{I}$ gives rise to
the subcomplex $\mathcal{L}_r^\bullet\otimes\mathcal{I}\longrightarrow\mathcal{L}_r^\bullet$,
and the weight $r+2$ \emph{parabolic cohomology} of $X$ is defined by
\begin{equation}\label{def:Hpar}
H^1_{\rm par}(X,\mathcal{L}_r,\nabla)
:={\rm image}(\mathbb{H}^1(\mathcal{L}_r^\bullet\otimes\mathcal{I})\longrightarrow
H_{\rm dR}^1(X,\mathcal{L}_r,\nabla)).
\end{equation}
The exact sequence $(\ref{hodgedeRham})$ induces the short exact sequence
\begin{equation}\label{hodgefilpar}
0\longrightarrow H^0(X,\underline{\omega}^{\otimes r}\otimes\Omega_X^1)\longrightarrow H_{\rm
par}^1(X,\mathcal{L}_r,\nabla)\longrightarrow
H^1(X,\underline{\omega}^{\otimes-r})\longrightarrow 0,
\end{equation}
and hence the above assignment $f\mapsto\omega_f$
identifies $S_{r+2}(X)$ with a subspace of $H^1_{\rm par}(X,\mathcal{L}_r,\nabla)$.
In addition, the pairing $(\ref{PDL})$ induces a non-degenerate pairing
\begin{equation}\label{PDpar}
\langle,\rangle:H_{\rm par}^1(X,\mathcal{L}_r,\nabla)\times
H_{\rm par}^1(X,\mathcal{L}_r,\nabla)\longrightarrow F
\end{equation}
with respect to which $(\ref{hodgefilpar})$ is self-dual.
\subsection{$p$-new forms}
Consider the two degeneracy maps
\begin{align*}
\pi_1, \pi_2&:X\longrightarrow X_1(N)
\end{align*}
defined by sending, under the moduli interpretation,
a triple $(E,t,\alpha)$ to the pairs
$(E,t)$ and $(\alpha(E),\alpha(t))$, respectively.
These morphisms induce maps
\begin{align*}
\pi_1^*,\pi_2^*&:H_{\rm par}^1(X_1(N),\mathcal{L}_r,\nabla)\longrightarrow H_{\rm
par}^1(X,\mathcal{L}_r,\nabla),
\end{align*}
where $H^1_{\rm par}(X_1(N),\mathcal{L}_r,\nabla)$ is defined as in $(\ref{def:Hpar})$
using the analogous objects over $X_1(N)$.
\begin{lem}
The map $\pi_1^*\oplus\pi_2^*$
is injective.
\end{lem}
\begin{proof}
This is \cite[Prop.~4.1]{pSI}.
\end{proof}
Define the \emph{$p$-old} subspace $H_{\rm
par}^1(X,\mathcal{L}_r,\nabla)^{p{\rm -old}}$ of
$H_{\rm par}^1(X,\mathcal{L}_r,\nabla)$ to be the image of $\pi_1^*\oplus\pi_2^*$,
and the \emph{$p$-new} subspace $H_{\rm par}^1(X,\mathcal{L}_r,\nabla)^{p-{\rm new}}$
to be the orthogonal complement of the $p$-old subspace under the Poincar\'e pairing $(\ref{PDpar})$.
The space of $p$-new cusp forms of weight $k=r+2$ and level $\Gamma$ is defined by
\[
S_{r+2}(X)^{p{\rm -new}}:=S_{r+2}(X)\cap H_{\rm
par}^1(X,\mathcal{L}_r,\nabla)^{p{\rm -new}},
\]
viewing $S_{r+2}(X)$ as a subspace of $H_{\rm par}^1(X,\mathcal{L}_r,\nabla)$
as described above.
\subsection{$p$-adic modular forms}\label{sec:p-adic}
Recall that the \emph{Hasse invariant} is a modular form $H$ over $\mathbf{F}_p$
of level $1$ and weight $p-1$ with the property that an elliptic curve $E$ over
an $\mathbf{F}_p$-algebra $B$ is \emph{ordinary} if and only if
$H(E,\omega)$ is a unit in $B$ for some (or equivalently, any) generator $\omega\in\Omega_{E/B}^1$.
\vspace{0.1in}
Let $R$ be a $p$-adic ring, i.e., a ring which is isomorphic to its pro-$p$ completion.
A \emph{$p$-adic modular form} of tame level $N$ and weight $k$ defined over $R$ is
a rule assigning to every triple $(E,t,\omega)_{/A}$, over an arbitrary $p$-adic $R$-algebra $A$,
consisting of:
\begin{itemize}
\item{} an elliptic curve $E/A$ such that the reduction $E\times_A A/pA$ is ordinary;
\item{} a section $t:{\rm Spec}(A)\longrightarrow E$ of
the structure morphism of $E/A$ of exact order $N$; and
\item{} a differential $\omega\in\Omega_{E/A}^1$,
\end{itemize}
an element $f(E,t,\omega)\in A$ depending only on the isomorphism class
of $(E,t,\omega)_{/A}$, homogeneous of degree $-k$ in the third
entry, and compatible with base change of $p$-adic $R$-algebras.
Let $\mathcal{M}_k(N;R)$ be the $R$-module of $p$-adic modular forms of weight $k$
and level $N$ defined over $R$; as before, if there is no risk of confusion
$R$ will often be suppressed from the notation.
\vspace{0.1in}
Similarly as for classical modular forms, it will be convenient to think of
$p$-adic modular forms of weight $k$ as sections of the sheaf $\underline{\omega}^{\otimes k}$
over a certain subset of the rigid analytic space $X(\mathbf{C}_p)$.
Let $E_{p-1}$ be the normalized Eisenstein series of weight $p-1$ (recall that $p\geq 5$),
and define the \emph{ordinary locus} of $X_1(N)$ by
\[
X_1(N)^{\rm ord}:=\{x\in X_1(N)(\mathbf{C}_p)\;\colon\;\vert E_{p-1}(E_x,\omega_x)\vert_p\geq 1\},
\]
where $E_x/\mathbf{C}_p$ is a generalized elliptic curve corresponding to $x$
under the moduli interpretation, $\omega_x\in\Omega_{E_x/\mathbf{C}_p}^1$
is a regular differential on $E_x$, chosen so that it extends to a regular
differential over $\mathcal{O}_{\mathbf{C}_p}$ if $E_x$ has good reduction at $p$,
or corresponds to the canonical differential on the Tate curve if $x$
lies in the residue disc of a cusp, and $\vert\cdot\vert_p$ is the absolute value on $\mathbf{C}_p$
normalized so that $\vert p\vert_p=p^{-1}$.
Since $E_{p-1}$ reduces to the Hasse invariant $H$ modulo $p$,
it follows that the points $x\in X_1(N)^{\rm ord}$
correspond to pairs $(E_x,t_x)$ with $E_x$ having ordinary reduction modulo $p$.
Thus the assignment $f\mapsto (x\mapsto f(E_x,t_x,\omega_x)\omega_x^k)$, for any chosen
generator $\omega_x\in\Omega_{E_x/\mathbf{C}_p}^1$, defines an identification
\[
\mathcal{M}_k(N)\simeq H^0(X_1(N)^{\rm ord},\underline{\omega}^{\otimes k}).
\]
Let $I:=\{v\in\mathbf{Q}\;\colon\;0< v\leq\frac{p}{p+1}\}$,
and for any $v\in I$ define
\[
X_1(N)(v):=\{x\in X_1(N)(\mathbf{C}_p)\;\colon\;\vert E_{p-1}(E_x,\omega_x)\vert_p>p^{-v}\}.
\]
The space of \emph{overconvergent $p$-adic modular forms}
of weight $k$ and tame level $N$ is given by
\[
\mathcal{M}_k^\dagger(N)=\varinjlim_{v}H^0(X_1(N)(v),\underline{\omega}^{\otimes k}),
\]
where the transition maps
$H^0(X_1(N)(v),\underline{\omega}^{\otimes k})
\longrightarrow H^0(X_1(N)(v'),\underline{\omega}^{\otimes k})$,
for $v'<v$ in $I$, are given by restriction; since these maps are injective,
$\mathcal{M}_k^\dagger(N)$ is naturally a subspace of $\mathcal{M}_k(N)$.
\vspace{0.1in}
By the theory of the \emph{canonical subgroup}
(see \cite[Thm.~3.1]{Katz350}), if $(E_x,t_x)$ corresponds to a point $x$
in $X_1(N)(\frac{p}{p+1})$, the elliptic curve $E_x$ admits a distinguished subgroup
${\rm can}(E_x)\subset E_x[p]$ of order $p$ reducing to the kernel of
Frobenius in characteristic $p$. The rule
\[
(E_x,t_x)\mapsto (E_x,t_x,\alpha_{\rm can}),
\]
where $\alpha_{\rm can}:E_x\longrightarrow E_x/{\rm can}(E_x)$ is the projection,
defines a rigid morphism $X_1(N)(\frac{p}{p+1})\longrightarrow\mathcal{W}_\infty$,
and hence if $f$ is a modular form of weight $k$ and level $\Gamma$,
then the restriction $f\vert_{\mathcal{W}_\infty}$ gives an overconvergent
$p$-adic modular form of weight $k$ and tame level $N$.
\subsection{Ordinary CM points}\label{subsubsec:A}
Let $K$ be an imaginary quadratic field with ring of integers $\mathcal{O}_K$ equipped
with a cyclic ideal $\mathfrak{N}\subset\mathcal{O}_K$ such that
\[
\mathcal{O}_K/\mathfrak{N}\simeq\mathbf{Z}/N\mathbf{Z}.
\]
Fix an elliptic curve $A$ defined over the Hilbert class field $H$ of $K$ with
${\rm End}_H(A)\simeq\mathcal{O}_K$ having good reduction at the primes above $p$,
and choose a $\Gamma_1(N)$-level structure $t_A\in A[\mathfrak{N}]$ and
a regular differential $\omega_A\in\Omega^1_{A/H}$. The identification
${\rm End}_H(A)=\mathcal{O}_K$ is normalized so that $\lambda\in\mathcal{O}_K$ acts as
\[
\lambda^*\omega=\lambda\omega\quad\textrm{for all $\omega\in\Omega_{A/H}^1$.}
\]
For every integer $c\geq 1$ prime to $Np$,
let $\mathcal{O}_c=\mathbf{Z}+c\mathcal{O}_K$ be the order of $K$ of conductor $c$,
and denote by ${\rm Isog}_c^\mathfrak{N}(A)$ the set of
elliptic curves $A'$ with CM by $\mathcal{O}_c$ equipped with an isogeny $\varphi:A\longrightarrow A'$
satisfying ${\rm ker}(\varphi)\cap A[\mathfrak{N}]=\{0\}$.
\vspace{0.1in}
The semigroup of projective rank one $\mathcal{O}_c$-modules $\mathfrak{a}\subset\mathcal{O}_c$ prime to
$\mathfrak{N}\cap\mathcal{O}_c$ acts on ${\rm
Isog}_c^{\mathfrak{N}}(A)$ by the rule
\[
\mathfrak{a}*(\varphi:A\longrightarrow
A')=\varphi_\mathfrak{a}\varphi:A\longrightarrow A'\longrightarrow
A'_\mathfrak{a},
\]
where $A'_\mathfrak{a}:=A'/A'[\mathfrak{a}]$ and $\varphi_\mathfrak{a}:A'\longrightarrow A'_{\mathfrak{a}}$
is the natural projection. It is easily seen that this induces an action
of ${\rm Pic}(\mathcal{O}_c)$ on ${\rm Isog}_c^{\mathfrak{N}}(A)$.
\vspace{0.1in}
Throughout this paper, we shall assume that $p=\mathfrak{p}\overline{\mathfrak{p}}$ splits in $K$,
and let $\mathfrak{p}$ be the prime of $K$ above $p$ induced by our fixed embedding
$\overline{\mathbf{Q}}_p\overset{\imath_p}\hookrightarrow\mathbf{C}_p$.
Thus if $A'$ is an elliptic curve with CM by $\mathcal{O}_c$ defined over the ring class field $H_c$ of $K$ of conductor $c$,
then $A'$ has ordinary reduction at $p$, and $A'[\mathfrak{p}]\subset A'[p]$ is the canonical subgroup.
In the following, we will let $\alpha_\mathfrak{p}'=\alpha_{\rm can}:A'\longrightarrow A'/A'[\mathfrak{p}]$
denote the projection.
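(Indeed, since $A'$ has ordinary reduction at the prime above $p$ induced by $\imath_p$, the subgroup
$A'[\mathfrak{p}]\subset A'[p]$ reduces to the kernel of Frobenius in characteristic $p$, and this property
characterizes the canonical subgroup.)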
\subsection{Generalized Heegner cycles}
For any $r>0$, let $W_r$ be the Kuga--Sato variety
over
\[
X_0:=X_1(Np)
\]
obtained as the canonical desingularization of the
$r$-fold self-product of the universal generalized elliptic curve over
$X_0$, and define
\begin{equation}\label{genKS}
X_r:=W_r\times A^r,
\end{equation}
where $A/H$ is the elliptic curve with CM by $\mathcal{O}_K$ fixed in the preceding section.
\vspace{0.1in}
The variety $X_r$ is fibered over $X_0$,
and the fiber over a non-cuspidal point
$x$ associated with a pair $(E_x,t_{x})$
is identified with $E_x^r\times A^r$.
Thus for every isogeny $\varphi:A\longrightarrow A'$
in ${\rm Isog}_c^\mathfrak{N}(A)$, we may consider the cycle
\[
\Upsilon_\varphi:=(\Gamma_\varphi^{\rm t})^r\;\subset\;(A'\times A)^r\;\subset\; X_r,
\]
where $\Gamma_\varphi^{\rm t}$ is the transpose of the graph of $\varphi$, and following
\cite[\S{2.3}]{bdp1} define the \emph{generalized Heegner cycle} associated with $\varphi$ by
\begin{equation}\label{genheeg}
\Delta_\varphi:=\epsilon_{X}\Upsilon_\varphi,
\end{equation}
where $\epsilon_{X}$ is the idempotent defined in [\emph{loc.cit.}, (2.1.1)]
(with $X_0$ in place of their curve $C=X_1(N)$).
By \cite[Prop.~2.7]{bdp1}, the cycles $\Delta_\varphi$ are homologically trivial;
by abuse of notation, we shall still denote by $\Delta_\varphi$ the classes they define
in the Chow group ${\rm CH}^{r+1}(X_r)_0$ with rational coefficients.
For $r=0$, set
\[
\Delta_\varphi:=(A',t_{A'})-(\infty),
\]
where $t_{A'}\in A'[Np]$ is a $\Gamma_1(Np)$-level structure contained in $A'[\mathfrak{N}\mathfrak{p}]$,
and $\infty$ is the cusp $({\rm Tate}(q),\zeta_{Np})$.
\section{A semistable non-crystalline setting}
This section is aimed at proving Theorem~\ref{thmbdp1A} below,
which extends the $p$-adic Gross--Zagier formula due to Bertolini--Darmon--Prasanna~\cite{bdp1}
in the good reduction case to the semistable non-crystalline setting.
\subsection{$p$-adic Abel--Jacobi maps}\label{sec:p-adic-AJ}
Let $F$ be a finite unramified extension of $\mathbf{Q}_p$,
denote by $\mathcal{O}_F$ the ring of integers of $F$, and let $\kappa$ be the residue field.
The generalized Kuga--Sato variety $X_{r}$,
which was defined in $(\ref{genKS})$ as a scheme over $\mathbf{Z}[1/Np]$,
has semistable reduction at $p$. In other words,
there exists a proper scheme $\mathcal{X}_r$ over $\mathcal{O}_F$
with generic fiber $X_r\times_{\mathbf{Z}[1/Np]}F$ and with
special fiber $\mathcal{X}_r\times_{\mathcal{O}_F}\kappa$
whose only singularities are divisors with normal crossings.
\vspace{0.1in}
By the work of Hyodo--Kato \cite{hk},
attached to $X_r$ there are log-crystalline cohomology
groups $H_{\textrm{log-cris}}^j(\mathcal{X}_r\times_{\mathcal{O}_F}\kappa)$, which are $\mathcal{O}_F$-modules of finite rank equipped with
a semilinear Frobenius automorphism $\Phi$
and a linear nilpotent monodromy operator $N$ satisfying
\begin{displaymath}
N\Phi=p\Phi N.
\end{displaymath}
Moreover, for each choice of a uniformizer of
$\mathcal{O}_F$ there is a comparison isomorphism
\begin{displaymath}
H_{\textrm{log-cris}}^j(\mathcal{X}_r\times_{\mathcal{O}_F}\kappa)\otimes_{\mathcal{O}_F}F
\simeq H_{\rm dR}^j(X_r/F)
\end{displaymath}
endowing the algebraic de Rham cohomology groups
$H_{\rm dR}^j(X_r/F)$ with the structure of
filtered $(\Phi,N)$-modules. In the following,
we shall restrict our attention to the middle degree cohomology, i.e.,
we set $j=2r+1$.
\vspace{0.1in}
Let $G_F:={\rm Gal}(\overline{F}/F)$ be the absolute Galois group of $F$,
and consider the $p$-adic $G_F$-representation given by
\[
V_r:=H_{\textrm{\'et}}^{2r+1}(X_r\times_F\overline{F},\mathbf{Q}_p).
\]
Applying Fontaine's functor $\mathbf{D}_{\rm st}$ to $V_r$ yields another filtered
$(\Phi,N)$-module associated to $X_r$.
\begin{thm}[Tsuji]\label{hk}
The $p$-adic $G_F$-representation $V_r$ is semistable,
and there is a natural
isomorphism
\begin{equation}\label{comp}
\mathbf{D}_{\rm st}(V_r)\simeq H_{\rm dR}^{2r+1}(X_r/F)
\end{equation}
compatible with all structures. In particular,
the assignment $V\mapsto\mathbf{D}_{\rm st}(V)$ induces an isomorphism
${\rm Ext}_{\rm st}(\mathbf{Q}_p,V_r)\simeq{\rm Ext}_{{\rm
Mod}_F(\Phi,N)}(F,H_{\rm dR}^{2r+1}(X_r/F))$.
\end{thm}
Here, ${\rm Ext}_{\rm st}(\mathbf{Q}_p,V_r)\simeq H_{\rm st}^1(F,V_r):={\rm ker}(H^1(F,V_r)
\longrightarrow H^1(F,V_r\otimes_{\mathbf{Q}_p}\mathbf{B}_{\rm st}))$
is the group of extensions of the trivial representation $\mathbf{Q}_p$ by $V_r$
in the category of semistable $p$-adic $G_F$-representations.
\vspace{0.1in}
The idempotent $\epsilon_X$ used in the definition $(\ref{genheeg})$ of
the generalized Heegner cycles $\Delta_\varphi$ acts as a projector on the various cohomology
groups associated to the variety $X_r$. Let $V_r(r+1)$ be the $(r+1)$-st Tate twist of $V_r$,
and consider the \'etale Abel--Jacobi map
\[
{\rm AJ}_F^{\textrm{\'et}}:{\rm
CH}^{r+1}(X_r)_0(F)\longrightarrow{\rm
Ext}_{{\rm Rep}_{G_F}}(\mathbf{Q}_p,\epsilon_XV_r(r+1))=H^1(F,\epsilon_XV_r(r+1))
\]
constructed in \cite{nekovarCRM}.
By [\emph{loc.cit.}, Thm.~3.1$(ii)$],
the image of ${\rm AJ}_F^{\textrm{\'et}}$
lands in $H_{\rm st}^1(F,\epsilon_XV_r(r+1))$,
and hence via the comparison isomorphism (\ref{comp})
it can be seen as taking values in the group
\[
{\rm Ext}_{{\rm
Mod}_F(\Phi,N)}(F,\epsilon_XH_{\rm dR}^{2r+1}(X_r/F)(r+1))
\]
of extensions of $F$ by the twist $\epsilon_XH_{\rm dR}^{2r+1}(X_r/F)(r+1)$
in the category of filtered $(\Phi,N)$-modules over $F$.
This group admits the following explicit description.
\begin{lem}\label{fil}
Set $H_r:=H_{\rm dR}^{2r+1}(X_r/F)$ and let $n=[F:\mathbf{Q}_p]$. The assignment
\[
\{0\longrightarrow\epsilon_XH_r(r+1)\longrightarrow E\xrightarrow{\;\rho\;}F\longrightarrow 0\}
\quad\rightsquigarrow\quad\eta_E=\eta_E^{\rm hol}(1)-\eta_E^{\rm frob}(1),
\]
where $\eta_E^{\rm hol}:F\longrightarrow{\rm Fil}^0E$ (resp.
$\eta_E^{\rm frob}:F\longrightarrow E^{\Phi^n=1,N=0}$) is a section of
$\rho$ compatible with filtrations
(resp. with Frobenius and monodromy) yields an isomorphism
\begin{equation}\label{exp}
{\rm Ext}_{{\rm Mod}_F(\Phi,N)}(F,\epsilon_XH_r(r+1))
\simeq\epsilon_XH_r(r+1)/{\rm Fil}^0\epsilon_XH_r(r+1).\nonumber
\end{equation}
\end{lem}
\begin{proof}
See \cite[Lemma~2.1]{IS}, for example.
\end{proof}
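Note that the assignment in Lemma~\ref{fil} is well defined: since $\eta_E^{\rm hol}$ and $\eta_E^{\rm frob}$
are both sections of $\rho$, the element $\eta_E$ lies in ${\rm ker}(\rho)=\epsilon_XH_r(r+1)$, and replacing
$\eta_E^{\rm hol}$ by another section compatible with filtrations changes $\eta_E$ by an element of
${\rm Fil}^0\epsilon_XH_r(r+1)$, so that the class of $\eta_E$ in the quotient is unchanged.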
Define the \emph{$p$-adic Abel--Jacobi map}
\begin{equation}\label{aj}
{\rm AJ}_F:{\rm CH}^{r+1}(X_r)_0(F)\longrightarrow\epsilon_XH_{\rm dR}^{2r+1}(X_r/F)(r+1)/{\rm
Fil}^{0}\epsilon_XH_{\rm dR}^{2r+1}(X_r/F)(r+1)
\end{equation}
to be the composite of ${\rm AJ}_F^{\textrm{\'et}}$
with the isomorphisms of Theorem~\ref{hk} and Lemma~\ref{fil}.
Since the filtered pieces ${\rm Fil}^{1}\epsilon_XH^{2r+1}_{\rm dR}(X_r/F)(r)$
and ${\rm Fil}^0\epsilon_XH^{2r+1}_{\rm dR}(X_r/F)(r+1)$ are exact annihilators
under the Poincar\'e duality
\begin{equation}\label{PD-dR}
\epsilon_XH^{2r+1}_{\rm dR}(X_r/F)(r)\times\epsilon_XH^{2r+1}_{\rm dR}(X_r/F)(r+1)\longrightarrow F,\nonumber
\end{equation}
the target of ${\rm AJ}_F$ may be identified
with the linear dual $({\rm Fil}^{r+1}\epsilon_XH^{2r+1}_{\rm dR}(X_r/F))^\vee$.
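Explicitly, writing $H_r=H^{2r+1}_{\rm dR}(X_r/F)$ as in Lemma~\ref{fil}, one has
\[
\epsilon_XH_r(r+1)/{\rm Fil}^0\epsilon_XH_r(r+1)\simeq\bigl({\rm Fil}^1\epsilon_XH_r(r)\bigr)^\vee
=\bigl({\rm Fil}^{r+1}\epsilon_XH_r\bigr)^\vee,
\]
the last equality because the Tate twist by $r$ shifts the Hodge filtration by $r$.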
\vspace{0.1in}
Recall the coherent sheaf of $\mathcal{O}_X$-modules $\mathcal{L}_r={\rm Sym}^r\mathcal{L}$ on $X$
introduced in Section~\ref{subsubsec:mf&dR}, and set
\[
\mathcal{L}_{r,r}:=\mathcal{L}_r\otimes{\rm Sym}^rH_{\rm dR}^1(A).
\]
With the trivial extension of the Gauss--Manin connection $\nabla$ on $\mathcal{L}_r$
to $\mathcal{L}_{r,r}$, consider the complex
\[
\mathcal{L}_{r,r}^\bullet:\mathcal{L}_{r,r}
\xrightarrow{\;\nabla\;}\mathcal{L}_{r,r}\otimes\Omega_X^1(\log Z_X),
\]
and define $H_{\rm par}^1(X,\mathcal{L}_{r,r},\nabla)$ as in $(\ref{def:Hpar})$.
By \cite[Prop.~2.4]{bdp1}, we then have
\begin{equation}
\epsilon_{X}H^{2r+1}_{\rm dR}(X_r/F)\simeq H_{\rm par}^1(X,\mathcal{L}_{r,r},\nabla)
=H^1_{\rm par}(X,\mathcal{L}_r,\nabla)\otimes{\rm Sym}^rH^1_{\rm dR}(A/F)\nonumber
\end{equation}
and
\begin{equation}\label{2.4}
{\rm Fil}^{r+1}\epsilon_XH^{2r+1}_{\rm dR}(X_r/F)
\simeq H^0(X,\underline{\omega}^{\otimes r}\otimes\Omega_{X}^1)\otimes{\rm Sym}^rH_{\rm dR}^1(A/F).\nonumber
\end{equation}
As a result of these identifications, we shall view the $p$-adic Abel--Jacobi map $(\ref{aj})$
as a map
\begin{equation}\label{p-AJ}
{\rm AJ}_F:{\rm CH}^{r+1}(X_r)_0(F)
\longrightarrow(H^0(X,\underline{\omega}^{\otimes r}\otimes\Omega_{X}^1)\otimes{\rm Sym}^rH_{\rm dR}^1(A/F))^\vee.
\end{equation}
Moreover, if $\Delta=\epsilon_X\Delta\in{\rm CH}^{r+1}(X_r)_0(F)$
is the class of a cycle in the image of the idempotent $\epsilon_X$
supported on the fiber of $X_r\longrightarrow X$ over a point $P\in X(F)$,
we see that ${\rm AJ}_F(\Delta)$ may be computed using the following recipe.
Consider the commutative diagram with Cartesian squares:
\begin{displaymath}
\xymatrix{
0\ar[r]&H_{\rm par}^1(X,\mathcal{L}_{r,r},\nabla)(r+1) \ar[r]\ar@{=}[d]& D_\Delta
\ar[r]\ar[d] & F \ar[d]^{{\rm cl}_\Delta}\ar[r]&0\\
0\ar[r]&H_{\rm par}^1(X,\mathcal{L}_{r,r},\nabla)(r+1) \ar[r]&
H^1_{\rm par}(X\smallsetminus P,\mathcal{L}_{r,r},\nabla)(r+1) \ar[r]& \mathcal{L}_{r,r}(P)(r)\ar[r]&0,
}
\end{displaymath}
where the rightmost vertical map is defined by
sending $1\in F$ to the cycle class ${\rm cl}_P(\Delta)$.
Then ${\rm AJ}_F(\Delta)$ is given by the linear functional
\begin{equation}\label{computeAJ}
{\rm AJ}_F(\Delta)=\langle-,\eta_\Delta\rangle,\nonumber
\end{equation}
where $\eta_\Delta=\eta_\Delta^{\rm hol}-\eta_\Delta^{\rm frob}:=\eta_{D_\Delta}^{\rm hol}(1)-\eta_{D_\Delta}^{\rm frob}(1)$
is the ``tangent vector'' associated as in Lemma~\ref{fil} to
the extension $D_\Delta$ as filtered $(\Phi,N)$-modules, and
\begin{equation}\label{PDparr}
\langle,\rangle:H^1_{\rm par}(X,\mathcal{L}_{r,r},\nabla)(r)\times H^1_{\rm par}(X,\mathcal{L}_{r,r},\nabla)(r+1)\longrightarrow F
\end{equation}
is the Poincar\'e duality.
\subsection{Rigid cohomology}\label{sec:p-adic-dR}
Recall the rigid spaces
$\mathcal{Z}_\infty\subset\mathcal{W}_\infty$, $\mathcal{Z}_0\subset\mathcal{W}_0$
introduced in Section~\ref{subsubsec:X}.
Fix a collection of points $\{P_1,\dots,P_t\}$ of $X(F)$ contained in $\mathcal{Z}_\infty$,
containing all the cusps of $\mathcal{Z}_\infty$, and such that ${\rm red}_p(P_i)\neq{\rm red}_p(P_j)$ for $i\ne j$.
Let $w_p$ be the automorphism of $X$ defined in terms of moduli by
\begin{equation}\label{wp}
w_p(E,t,\alpha)=(\alpha(E),\alpha(t),\alpha^\vee),
\end{equation}
where $\alpha^\vee$ is the isogeny dual to $\alpha$,
and set $P_j^*:=w_pP_j$. Then the points $P_j^*$ factor through $\mathcal{Z}_0$,
and the set
\[
S:=\{P_1,\dots,P_t,P_1^*,\dots,P_t^*\}
\]
contains all the cusps of $X$.
Since the points $Q\in S$ reduce to smooth points
$\bar{Q}$ in the special fiber, the spaces
$D(Q):={\rm red}_p^{-1}(\bar{Q})$ are conformal to
the open unit disc $D(0;1)$ in $\mathbf{C}_p$.
Fix isomorphisms $h_Q:D(Q)\longrightarrow D(0;1)$
mapping the point $Q$ to $0$, and for a
collection of real numbers $r_Q<1$ consider the annuli
\begin{equation}\label{eq:Vj}
\mathcal{V}_Q:=\{x\in D(Q)\;\colon\;r_Q<|h_Q(x)|_p<1\}.
\end{equation}
Denote by $\mathcal{L}_{r,r}^{\rm rig}$ the sheaf for the rigid analytic topology on $X(\mathbf{C}_p)$
defined by the algebraic vector bundle $\mathcal{L}_{r,r}$.
If $\mathcal{V}\subset X(\mathbf{C}_p)$ is a connected wide open contained in $Y(\mathbf{C}_p)$,
the Gauss--Manin connection yields a connection
\[
\nabla:\mathcal{L}_{r,r}^{\rm rig}\vert_\mathcal{V}\longrightarrow\mathcal{L}_{r,r}^{\rm rig}\vert_{\mathcal{V}}\otimes\Omega_\mathcal{V}^1,
\]
and similarly as in (\ref{def:HdR})
we define the \emph{$i$-th de Rham cohomology} of $\mathcal{V}$ attached
to $\mathcal{L}^{\rm rig}_{r,r}$ by
\[
H^i(\mathcal{L}_{r,r}^{\bullet}\vert_\mathcal{V})=H_{\rm dR}^i(\mathcal{V},\mathcal{L}^{\rm rig}_{r,r},\nabla)
:=\mathbb{H}^i(\mathcal{L}_{r,r}^{\rm rig}\vert_\mathcal{V}\xrightarrow{\;\nabla\;}\mathcal{L}_{r,r}^{\rm
rig}\vert_{\mathcal{V}}\otimes\Omega_\mathcal{V}^1).
\]
In particular, if $\mathcal{V}$ is a basic wide open, then
\[
H^1(\mathcal{L}_{r,r}^\bullet\vert_\mathcal{V})\simeq\frac{\mathcal{L}_{r,r}^{\rm rig}(\mathcal{V})\otimes\Omega_\mathcal{V}^1}
{\nabla\mathcal{L}_{r,r}^{\rm rig}(\mathcal{V})},
\]
and $H^0(\mathcal{L}_{r,r}^\bullet\vert_{\mathcal{V}})\simeq\mathcal{L}_{r,r}^{\rm rig}(\mathcal{V})^{\nabla=0}$
is the space of horizontal sections of $\mathcal{L}_{r,r}^{\rm
rig}$ over $\mathcal{V}$. For $r=0$, we set
\[
H^1(\mathcal{L}_{r,r}^\bullet\vert_{\mathcal{V}})=H^1(\mathcal{V}):=\Omega_{\mathcal{V}}^1/d\mathcal{O}_{\mathcal{V}},\quad\quad
H^0(\mathcal{L}_{r,r}^\bullet\vert_{\mathcal{V}})=H^0(\mathcal{V}):=\mathcal{O}_\mathcal{V}^{d=0},
\]
where $d:\mathcal{O}_\mathcal{V}\longrightarrow\Omega^1_\mathcal{V}$ is the differentiation map.
\vspace{0.1in}
In terms of the admissible cover of $X(\mathbf{C}_p)$
by basic wide opens described in Section~\ref{subsubsec:X},
the classes in $H^1_{\rm dR}(X(\mathbf{C}_p),\mathcal{L}_{r,r}^{\rm rig},\nabla)$
may be represented by hypercocycles $(\omega_0,\omega_\infty;f_{\mathcal{W}})$,
where $\omega_0$ and $\omega_\infty$ are $\mathcal{L}_{r,r}^{\rm rig}$-valued differentials
on $\mathcal{W}_0$ and $\mathcal{W}_\infty$, respectively,
and $f_{\mathcal{W}}\in\mathcal{L}_{r,r}^{\rm rig}(\mathcal{W})$ is such that
$(\omega_\infty-\omega_0)\vert_{\mathcal{W}}=\nabla f_{\mathcal{W}}$; and two
hypercocycles represent the same class if
their difference is of the form $(\nabla f_0,\nabla
f_\infty;(f_\infty-f_0)\vert_\mathcal{W})$ for some $f_0\in\mathcal{L}_{r,r}^{\rm rig}(\mathcal{W}_0)$
and $f_\infty\in\mathcal{L}_{r,r}^{\rm rig}(\mathcal{W}_\infty)$.
\vspace{0.1in}
If $\mathcal{V}$ is a wide open annulus,
associated with an orientation of $\mathcal{V}$ there is a \emph{$p$-adic
annular residue}
\begin{equation}\label{res0}
{\rm res}_\mathcal{V}:\Omega_\mathcal{V}^1\longrightarrow\mathbf{C}_p
\end{equation}
defined by expanding $\omega=\sum_na_nT^n\frac{dT}{T}\in\Omega_\mathcal{V}^1$
with respect to a fixed uniformizing parameter $T$ of $\mathcal{V}$ compatible with the orientation,
and setting ${\rm res}_\mathcal{V}(\omega):=a_{0}$ (see \cite[Lemma~2.1]{rlc}).
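For instance, ${\rm res}_\mathcal{V}(T^n\frac{dT}{T})=0$ for all $n\neq 0$, while
${\rm res}_\mathcal{V}(\frac{dT}{T})=1$; in particular, since $d(T^n)=nT^n\frac{dT}{T}$ has vanishing
constant term, ${\rm res}_\mathcal{V}$ kills exact differentials, and reversing the orientation of
$\mathcal{V}$ changes ${\rm res}_\mathcal{V}$ by a sign.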
Combined with the natural pairing
\[
\langle,\rangle:\mathcal{L}_{r,r}^{\rm rig}(\mathcal{V})\times\mathcal{L}_{r,r}^{\rm
rig}(\mathcal{V})\otimes\Omega_\mathcal{V}^1\longrightarrow\Omega_\mathcal{V}^1
\]
induced by the Poincar\'e duality $(\ref{PDL})$ on $\mathcal{L}_{r}$
(extended to $\mathcal{L}_{r,r}$
in the obvious manner),
we obtain a higher $p$-adic annular residue map
\begin{equation}\label{bdp-residue}
{\rm Res}_\mathcal{V}:\mathcal{L}_{r,r}^{\rm
rig}(\mathcal{V})\otimes\Omega_\mathcal{V}^1\longrightarrow\mathcal{L}_{r,r}^{\rm
rig}(\mathcal{V})^\vee
\end{equation}
by setting
\[
{\rm Res}_{\mathcal{V}}(\omega)(\alpha)
={\rm res}_{\mathcal{V}}\langle\alpha,\omega\rangle
\]
for every $\mathcal{L}^{\rm rig}_{r,r}$-valued differential $\omega$ on $\mathcal{V}$
and every section $\alpha\in\mathcal{L}^{\rm rig}_{r,r}(\mathcal{V})$.
Since ${\rm res}_\mathcal{V}$ clearly descends to a map $H^1(\mathcal{V})=\Omega_{\mathcal{V}}^1/d\mathcal{O}_{\mathcal{V}}\longrightarrow\mathbf{C}_p$,
by composing ${\rm Res}_\mathcal{V}$ with the projection $\mathcal{L}_{r,r}^{\rm rig}(\mathcal{V})^\vee\longrightarrow
H^0(\mathcal{L}_{r,r}^\bullet\vert_{\mathcal{V}})^\vee$
it is easily seen from the Leibniz rule that we obtain a well-defined map
\begin{equation}\label{higherRes}
{\rm Res}_\mathcal{V}:H^1(\mathcal{L}_{r,r}^\bullet\vert_{\mathcal{V}})\longrightarrow
H^0(\mathcal{L}_{r,r}^\bullet\vert_{\mathcal{V}})^\vee.
\end{equation}
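Indeed, if $\alpha$ is horizontal and $\omega=\nabla g$ is exact, then by the Leibniz rule
\[
\langle\alpha,\nabla g\rangle=d\langle\alpha,g\rangle-\langle\nabla\alpha,g\rangle=d\langle\alpha,g\rangle,
\]
which has vanishing residue; hence the functional ${\rm Res}_\mathcal{V}(\omega)$, once restricted to
$H^0(\mathcal{L}_{r,r}^\bullet\vert_{\mathcal{V}})$, depends only on the class of $\omega$ in
$H^1(\mathcal{L}_{r,r}^\bullet\vert_{\mathcal{V}})$.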
If $\mathcal{V}_Q\subset D(Q)$ is the annulus attached to
a non-cuspidal point $Q\in S$, it will be convenient,
following the discussion after \cite[Cor.~3.7]{bdp1},
to view ${\rm Res}_{\mathcal{V}_Q}$ as taking values on the fiber $\mathcal{L}_{r,r}(Q)$, using
the sequence of identifications
\begin{equation}\label{eq:ident}
H^0(\mathcal{L}_{r,r}^\bullet\vert_{\mathcal{V}_Q})^\vee
=(H^0(D(Q),\mathcal{L}_{r,r})^{\nabla=0})^\vee =\mathcal{L}_{r,r}(Q)^\vee=\mathcal{L}_{r,r}(Q)
\end{equation}
arising from ``analytic continuation'', the choice of
an ``initial condition'', and the self-duality of $\mathcal{L}_{r,r}(Q)$, respectively.
(See \emph{loc.cit.} for the case of a cusp $Q\in S$.)
\vspace{0.1in}
For a supersingular annulus $\mathcal{A}_x$, the vector space
$H^0(\mathcal{L}_{r,r}^\bullet\vert_{\mathcal{A}_x})$ is equipped
with a pairing $\langle,\rangle_{\mathcal{A}_x}$, arising from
an identification (similar to $(\ref{eq:ident})$)
with the de Rham cohomology of a supersingular elliptic curve
in characteristic $p$ corresponding to $x\in SS$.
Moreover, since $H^0(\mathcal{A}_x)\simeq\mathbf{C}_p$,
the residue map (\ref{res0}) yields an isomorphism
${\rm res}_{\mathcal{A}_x}:H^1(\mathcal{A}_x)\longrightarrow H^0(\mathcal{A}_x)$,
and using a trivialization of $\mathcal{L}_{r,r}^{\rm rig}\vert_{\mathcal{A}_x}$
it may be extended to an isomorphism
\begin{equation}\label{Resr}
{\rm Res}_{\mathcal{A}_x}:H^1(\mathcal{L}_{r,r}^\bullet\vert_{\mathcal{A}_x})
\simeq H^0(\mathcal{L}_{r,r}^\bullet\vert_{\mathcal{A}_x})
\end{equation}
(see \cite[Prop.~7.1]{pSI}). It is then easily checked that $(\ref{higherRes})$
and $(\ref{Resr})$ correspond to each other under the identification
$H^0(\mathcal{L}_{r,r}^\bullet\vert_{\mathcal{A}_x})^\vee=H^0(\mathcal{L}_{r,r}^\bullet\vert_{\mathcal{A}_x})$
defined by $\langle,\rangle_{\mathcal{A}_x}$.
\vspace{0.1in}
Let $S$ be a set of points as introduced above,
and define
\begin{align*}
\mathcal{W}_\infty^\sharp&:=\mathcal{Z}_\infty\smallsetminus
\bigcup_{Q\in S\cap\mathcal{Z}_\infty}\bigl(D(Q)\smallsetminus\mathcal{V}_Q\bigr),\quad\quad
\mathcal{U}:={\mathcal{W}}_\infty^\sharp\cup{\mathcal{W}}_0^\sharp,
\end{align*}
where $\mathcal{W}_0^\sharp:=w_p\mathcal{W}_\infty^\sharp$, and $U:=X\smallsetminus S$.
The restriction of an $\mathcal{L}_{r,r}$-valued differential on $X$ which is regular on $U$
defines a section of $\mathcal{L}_{r,r}^{\rm rig}\otimes\Omega_X^1$ over
$\mathcal{U}$. As argued in the proof of \cite[Prop.~7.2]{pSI}, this yields an isomorphism
\begin{equation}\label{alg=anal}
H_{\rm dR}^1(U,\mathcal{L}_{r,r},\nabla)\simeq H^1(\mathcal{L}_{r,r}^\bullet\vert_{\mathcal{U}})\nonumber
\end{equation}
between algebraic and rigid de Rham cohomology.
\begin{prop}\label{prop:poincare}
Let the notations be as before.
\begin{enumerate}
\item{} A class $\kappa\in H^1(\mathcal{L}_{r,r}^\bullet\vert_{\mathcal{U}})$
belongs to the image of $H_{\rm par}^1(X,\mathcal{L}_{r,r},\nabla)$
under restriction
\[
H_{\rm par}^1(X,\mathcal{L}_{r,r},\nabla)\longrightarrow H_{\rm dR}^1(U,\mathcal{L}_{r,r},\nabla)\simeq H^1(\mathcal{L}_{r,r}^\bullet\vert_{\mathcal{U}})
\]
if and only if ${\rm Res}_{\mathcal{V}_Q}(\kappa)=0$ for all $Q\in S$.
\item{}
Let $V$ be such that $\{U,V\}$ is an admissible covering of $X$.
If $\kappa_\omega$, $\kappa_\eta\in H_{\rm par}^1(X,\mathcal{L}_{r,r},\nabla)$ are represented by the hypercocycles
$(\omega_U,\omega_V;\omega_{U\cap V})$, $(\eta_U,\eta_V;\eta_{U\cap V})$ respectively,
with respect to this covering, then the value
$\langle\kappa_\omega,\kappa_\eta\rangle$ under the Poincar\'e duality $(\ref{PDparr})$
is given by
\begin{equation}\label{eq:poincare}
\langle\kappa_\omega,\kappa_\eta\rangle=\sum_{Q\in S}{\rm
res}_{\mathcal{V}_Q}\langle F_{\omega,Q},\eta_U\rangle,\nonumber
\end{equation}
where $F_{\omega,Q}$ is any local primitive of $\omega_U$ on $\mathcal{V}_Q$,
i.e., such that $\nabla F_{\omega,Q}=\omega_U\vert_{\mathcal{V}_Q}$.
\end{enumerate}
\end{prop}
\begin{proof}
The first assertion follows from the same argument as in \cite[Prop.~3.8]{bdp1},
and the second is \cite[Lemma~7.1]{pSI}.
\end{proof}
\subsection{Coleman's $p$-adic integration}\label{sec:fmandCol}
In this section, we give an explicit description of the
filtered $(\Phi,N)$-module structure on $H_{\rm par}^1(X,\mathcal{L}_{r,r},\nabla)$,
following the work of Coleman--Iovita \cite{CI2}.
We state the results for $\mathcal{L}_r$,
leaving their trivial extension to
$\mathcal{L}_{r,r}=\mathcal{L}_r\otimes{\rm Sym}^rH_{\rm dR}^1(A)$
to the reader.
\vspace{0.1in}
As recalled in Section~\ref{sec:p-adic}, for every pair $(E_x,t_x)$
corresponding to a point $x\in X_1(N)(\frac{p}{p+1})$
there is a canonical $p$-isogeny $\alpha_{\rm can}:E_x\longrightarrow E_x/{\rm can}(E_x)$,
where ${\rm can}(E_x)\subset E_x[p]$ is the canonical subgroup.
The
map $V:X_1(N)(\frac{1}{p+1})\longrightarrow X_1(N)(\frac{p}{p+1})$
defined in terms of moduli by
\begin{equation}\label{V}
V(E_x,t_x)=(\alpha_{\rm can}(E_x),\alpha_{\rm can}(t_x))
\end{equation}
is then a lift of the absolute Frobenius on $X_1(N)_{\mathbf{F}_p}$.
Letting $s_1:X_1(N)(\frac{p}{p+1})\longrightarrow\mathcal{W}_\infty$
be defined by $(E_x,t_x)\mapsto(E_x,t_x,\alpha_{\rm can})$, and letting
$\mathcal{W}_\infty'\subset\mathcal{W}_\infty$ be the image of $X_1(N)(\frac{1}{p+1})$ under $s_1$,
the map $\phi_\infty$ defined by the commutativity of the diagram
\begin{displaymath}
\xymatrix{
\mathcal{W}_\infty
\ar[r]^{\phi_\infty}\ar[d]^{\pi_1}&\mathcal{W}_\infty\\
X_1(N)(\frac{1}{p+1})\ar[r]^V&X_1(N)(\frac{p}{p+1}),\ar[u]^{s_1}
}
\end{displaymath}
is therefore a lift of the absolute Frobenius on $X_{\mathbf{F}_p}$.
\vspace{0.1in}
As explained in \cite[p.41]{pSI} (see also the more detailed discussion in \cite[p.218]{coc}),
the canonical subgroup yields a horizontal morphism
${\rm Fr}_\infty:\phi_\infty^*\mathcal{L}_r\longrightarrow\mathcal{L}_r\vert_{\mathcal{W}_\infty'}$. Define
the \emph{Frobenius endomorphism} ${\Phi}_\infty$
on $H^1(\mathcal{L}_r^\bullet\vert_{\mathcal{W}_\infty})$ by the composite map
\[
H^1(\mathcal{L}_r^\bullet\vert_{\mathcal{W}_\infty})\simeq
\frac{\mathcal{L}_r^{\rm rig}\otimes\Omega^1_{\mathcal{W}_\infty}(\mathcal{W}_\infty)}{\nabla\mathcal{L}_r^{\rm rig}(\mathcal{W}_\infty)}
\xrightarrow{({\rm Fr}_\infty\otimes{\rm id})\phi_\infty^*}
\frac{\mathcal{L}_r^{\rm rig}\otimes\Omega^1_{\mathcal{W}_\infty}(\mathcal{W}_\infty')}{\nabla\mathcal{L}_r^{\rm rig}(\mathcal{W}_\infty')}
\simeq
H^1(\mathcal{L}_r^\bullet\vert_{\mathcal{W}_\infty}),
\]
where the last isomorphism is given by restriction (see \cite[Prop.~10.3]{pSI}).
Setting $\mathcal{W}_0':=w_p\mathcal{W}_\infty'\subset\mathcal{W}_0=w_p\mathcal{W}_\infty$
and $\phi_0:=w_p^{-1}\phi_\infty w_p$,
where $w_p$ is the automorphism of $X$ given by $(\ref{wp})$,
we similarly define a Frobenius endomorphism ${\Phi}_0$ of $H^1(\mathcal{L}_r^\bullet\vert_{\mathcal{W}_0})$.
\begin{thm}[Coleman]\label{coleman-primitives}
Let $f=q+\sum_{n=2}^\infty a_n(f)q^n\in S_{r+2}(\Gamma_0(Np))$
be a $p$-new
eigenform of weight $r+2\geq 2$, and let $\omega_f\in H^0(X,\underline{\omega}^{\otimes r}\otimes\Omega_X^1)\subset H_{\rm
par}^1(X,\mathcal{L}_r,\nabla)$ be the associated differential.
Then for each $\star\in\{\infty,0\}$ there exists a
locally analytic section $F_{f,\star}$ of $\mathcal{L}_r$ on $\mathcal{W}_\star$ such that
\begin{itemize}
\item[$(i)$]{} $\nabla F_{f,\star}=\omega_f\vert_{\mathcal{W}_\star}$; and
\item[$(ii)$]{} $F_{f,\star}-\frac{a_p(f)}{p^{r+1}}\phi_\star^*F_{f,\star}$
is rigid analytic on $\mathcal{W}_\star'$.
\end{itemize}
Moreover, $F_{f,\star}$ is unique modulo $H^0(\mathcal{L}_r^\bullet\vert_{\mathcal{W}_\star})$.
\end{thm}
\begin{proof}
This follows from the discussion in \cite[\S{11}]{pSI}.
By [\emph{loc.cit.}, Lemma~11.1] we have $\Phi_\infty=pU_p$
on the image of $S_{r+2}(X)^{p-{\rm new}}$ in
$H^1(\mathcal{L}_r^\bullet\vert_{\mathcal{W}_\infty})$. Since
$U_p^2=p^r\langle p\rangle$ on the former space
and we have the relations $U_p\omega_f=a_p(f)\omega_f$ and
$\langle p\rangle\omega_f=\omega_f$ by hypothesis,
it follows that the polynomial
\begin{equation}
P(T)=1-\frac{a_p(f)}{p^{r+1}}T\nonumber
\end{equation}
is such that
$P(\Phi_\infty)([\omega_f\vert_{\mathcal{W}_\infty}])=0$,
and hence also $P(\Phi_0)([\omega_f\vert_{\mathcal{W}_0}])=0$.
The result thus follows from \cite[Thm.~10.1]{pSI}.
\end{proof}
A locally analytic section $F_{f,\star}$ as in Theorem~\ref{coleman-primitives}
is called a \emph{Coleman primitive} of $f$ on $\mathcal{W}_\star$.
\begin{rem}\label{remD}
For $r>0$, the spaces $H^0(\mathcal{L}_r^\bullet\vert_{\mathcal{W}_\star})$ are trivial,
and so the Coleman primitives $F_{f,\star}$ are unique.
On the other hand, for $r=0$ we have $H^0(\mathcal{L}_r^\bullet\vert_{\mathcal{W}_\star})\simeq\mathbf{C}_p$,
and so the $F_{f,\star}$ are unique modulo a global constant on $\mathcal{W}_\star$.
\end{rem}
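To illustrate, when $r=0$ the conditions $(i)$ and $(ii)$ can be made explicit near the cusp $\infty$:
writing $f=\sum_{n\geq 1}a_n(f)q^n$ and noting that $\phi_\infty^*$ acts on $q$-expansions by $q\mapsto q^p$,
one finds, at least formally on $q$-expansions, that
\[
F_{f,\infty}-\frac{a_p(f)}{p}\phi_\infty^*F_{f,\infty}=\sum_{p\nmid n}\frac{a_n(f)}{n}q^n
\]
up to a constant, using the relation $a_{pn}(f)=a_p(f)a_n(f)$ satisfied by the eigenform $f$.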
\subsection{Frobenius and monodromy}
Denote by $\iota$ the inclusion of any
rigid subspace of $X$ into $X$.
Associated with the exact sequence of complexes of sheaves on $X$
\[
0\longrightarrow\mathcal{L}_r^\bullet\longrightarrow
\iota^*(\mathcal{L}_r^\bullet\vert_{\mathcal{W}_0})\oplus\iota^*(\mathcal{L}_r^\bullet\vert_{\mathcal{W}_\infty})
\xrightarrow{\rho_\infty-\rho_0}
\iota^*(\mathcal{L}_r^\bullet\vert_{\mathcal{W}})\longrightarrow 0,
\]
there is a Mayer--Vietoris long exact sequence
\begin{align}
\cdots&\longrightarrow H^0_{\rm par}(\mathcal{L}_r^\bullet\vert_{\mathcal{W}_0\sqcup\mathcal{W}_\infty})
\xrightarrow{\;\;\beta^0\;\;}
H^0_{\rm par}(\mathcal{L}_r^\bullet\vert_{\mathcal{W}})\xrightarrow{\;\;\delta\;\;}
H^1_{\rm par}(X,\mathcal{L}_r^{\rm rig},\nabla)\longrightarrow\nonumber\\
&\longrightarrow H^1_{\rm par}(\mathcal{L}_r^\bullet\vert_{\mathcal{W}_0\sqcup\mathcal{W}_\infty})
\xrightarrow{\;\;\beta^1\;\;} H^1_{\rm par}(\mathcal{L}_r^\bullet\vert_{\mathcal{W}})
\longrightarrow\cdots\nonumber
\end{align}
in hypercohomology.
By \cite[\S{10}]{pSI} and the discussion in the preceding section,
each of the non-central spaces in the resulting short exact sequence
\begin{equation}\label{MV}
0\longrightarrow
\frac{H^0_{\rm par}(\mathcal{L}_{r}^\bullet\vert_{\mathcal{W}})}{\beta^0(H_{\rm
par}^0(\mathcal{L}_{r}^\bullet\vert_{\mathcal{W}_0\sqcup\mathcal{W}_\infty}))}
\xrightarrow{\;\;\;\delta\;\;\;}H_{\rm par}^1(X,\mathcal{L}_r,\nabla)
\longrightarrow H^1_{\rm par}(\mathcal{L}_r^\bullet\vert_{\mathcal{W}_0\sqcup\mathcal{W}_\infty})^{\beta^1=0}
\longrightarrow 0
\end{equation}
is equipped with a Frobenius endomorphism. Therefore, to define a Frobenius action on
$H^1_{\rm par}(X,\mathcal{L}_r,\nabla)$ it suffices to construct a splitting of $(\ref{MV})$.
\vspace{0.1in}
As shown in \cite[\S{A.5}]{pSI}, this may be obtained as follows. Assume that $\kappa\in H_{\rm
par}^1(X,\mathcal{L}_r,\nabla)$ is represented by the hypercocycle
$(\omega_0,\omega_\infty;f_\mathcal{W})$ with respect
to the covering $\{\mathcal{W}_0,\mathcal{W}_\infty\}$ of $X$. Since $\mathcal{W}=\bigcup_{x\in SS}\mathcal{A}_x$
is the union of the supersingular annuli,
we may write $f_\mathcal{W}=\{f_x\}_{x\in SS}$ with $f_x\in\mathcal{L}_r^{\rm rig}(\mathcal{A}_x)$.
The assignment
\begin{equation}\label{s}
\mathcal{A}_x\longmapsto F_{\omega_\infty}\vert_{\mathcal{A}_x}-F_{\omega_0}
\vert_{\mathcal{A}_x}-f_x,
\end{equation}
where $F_{\omega_\star}$ is a Coleman
primitive of $\omega_\star$ on $\mathcal{W}_\star$,
defines a horizontal section of $\mathcal{L}_r^{\rm rig}$
on $\mathcal{W}$, and its image modulo $\beta^0(H_{\rm
par}^0(\mathcal{L}_{r}^\bullet\vert_{\mathcal{W}_0\sqcup\mathcal{W}_\infty}))$
is independent of the chosen $F_{\omega_\star}$ (see Remark~\ref{remD}).
It is easily checked that the map $s$ defined by $(\ref{s})$ satisfies $s\delta={\rm id}$,
and hence we may define a Frobenius operator $\Phi$ on $H_{\rm
par}^1(X,\mathcal{L}_r,\nabla)$ by requiring that its
action be compatible with the resulting splitting of $(\ref{MV})$.
\vspace{0.1in}
On the other hand, define the monodromy operator $N$
on $H_{\rm par}^1(X,\mathcal{L}_r,\nabla)$ by the composite map
\[
H_{\rm par}^1(X,\mathcal{L}_r,\nabla)\longrightarrow
H^1(\mathcal{L}_{r}^\bullet\vert_{\mathcal{W}})\xrightarrow{\bigoplus_{x\in SS}{\rm
Res}_{\mathcal{A}_x}}H^0(\mathcal{L}_{r}^\bullet\vert_{\mathcal{W}})\xrightarrow{\;\;\delta\;\;}
H_{\rm par}^1(X,\mathcal{L}_r,\nabla),
\]
where ${\rm Res}_{\mathcal{A}_x}$ are the $p$-adic residue maps $(\ref{Resr})$.
\begin{lem}\label{N=0}
Let $\kappa\in H_{\rm par}^1(X,\mathcal{L}_r,\nabla)$. Then:
\begin{itemize}
\item[$(i)$] For $r>0$, $N(\kappa)=0\;\Longleftrightarrow\;{\rm Res}_{\mathcal{A}_x}(\kappa)=0$ for all $x\in SS$;
\item[$(ii)$] For $r=0$, $N(\kappa)=0\;\Longleftrightarrow\;$ there is $C\in\mathbf{C}_p$ such that
${\rm res}_{\mathcal{A}_x}(\kappa)=C$ for all $x\in SS$.
\end{itemize}
\end{lem}
\begin{proof}
This follows immediately from the exact sequence $(\ref{MV})$ and
the determination of the spaces $H^0(\mathcal{L}_r^\bullet\vert_{\mathcal{W}_\star})$
recalled in Remark~\ref{remD}.
\end{proof}
By the main result of \cite{CI2}, the operators $\Phi$ and $N$ on
$H_{\rm par}^1(X,\mathcal{L}_r,\nabla)$ defined above
agree with the corresponding structures
deduced from the comparison isomorphism of Theorem~\ref{hk}.
\subsection{$p$-adic Gross--Zagier formula}\label{sec:computations}
Fix a finite extension $F/\mathbf{Q}_p$ containing the image of the Hilbert class field
$H$ of $K$ under our fixed embedding $\overline{\mathbf{Q}}\overset{\imath_p}\hookrightarrow\mathbf{C}_p$,
and let $c\geq 1$ be an integer prime to $Np$.
\begin{prop}\label{prop:AJ-Coleman}
Let $f=q+\sum_{n=2}^{\infty}a_n(f)q^n\in S_{r+2}(\Gamma_0(Np))$ be a $p$-new eigenform of weight $r+2\geq 2$.
Let $\varphi:A\longrightarrow A'$ be an isogeny in ${\rm Isog}_c^{\mathfrak{N}}(A)$, let
$P_{A'}\in X(F)$ be the point defined by $(A',t_{A'})$, and
let $\Delta_\varphi$
be the generalized Heegner cycle associated to $\varphi$.
Then for all $\alpha\in{\rm Sym}^rH_{\rm dR}^1(A/F)$, we have
\begin{equation}\label{Delta-}
{\rm AJ}_F(\Delta_\varphi)(\omega_{f}\wedge\alpha)
=\langle F_{f,\infty}(P_{A'})\wedge\alpha,{\rm cl}_{P_{A'}}(\Delta_\varphi)\rangle,\nonumber
\end{equation}
where $F_{f,\infty}$ is the Coleman primitive of $\omega_f\in H^0(X,\underline{\omega}^{\otimes r}\otimes\Omega^1_X)$
on $\mathcal{W}_\infty$ (vanishing at $\infty$ if $r=0$), and the pairing on the right-hand side is the natural one
on $\mathcal{L}_{r,r}(P_{A'})$.
\end{prop}
\begin{proof}
Following the recipe described at the end of Section~\ref{sec:p-adic-AJ}, we have
\begin{equation}\label{eq:first}
{\rm AJ}_F(\Delta_\varphi)(\omega_{f}\wedge\alpha)
=\langle\omega_{f^{}}\wedge\alpha,\eta_\Delta^{\rm hol}-\eta_\Delta^{\rm frob}\rangle,
\end{equation}
where:
\begin{itemize}
\item{} $\eta_\Delta^{\rm hol}$ is a cohomology class represented by a section
(still denoted $\eta_{\Delta}^{\rm hol}$) of $\mathcal{L}_{r,r}\otimes\Omega_X^1(\log Z_X)$ over $U$
having residue $0$ at the cusps, and with a simple pole at $P_{A'}$ with residue ${\rm cl}_{P_{A'}}(\Delta_\varphi)$;
\item{} $\eta_\Delta^{\rm frob}$ is a section of $\mathcal{L}_{r,r}^{\rm rig}\otimes\Omega_X^1$ over $\mathcal{U}$
having the same residues as $\eta_\Delta^{\rm hol}$, and satisfying $N(\eta_\Delta^{\rm frob})=0$
and
\begin{equation}\label{frob}
\Phi\eta_\Delta^{\rm frob}=\eta_\Delta^{\rm frob}+\nabla G,
\end{equation}
for some rigid section $G$ of $\mathcal{L}_{r,r}^{\rm rig}$
on a strict neighborhood of $(\mathcal{Z}_0\cap\mathcal{W}_0^\sharp)\cup(\mathcal{Z}_\infty\cap\mathcal{W}_\infty^\sharp)$ in $\mathcal{U}$.
\end{itemize}
By the formula for the Poincar\'e pairing in Proposition~\ref{prop:poincare},
equation $(\ref{eq:first})$ may be rewritten as
\begin{align}\label{AJdelta}
{\rm AJ}_F(\Delta_\varphi)(\omega_{f}\wedge\alpha)
&=\sum_{Q\in S}{\rm res}_{\mathcal{V}_Q}\langle F_{f,Q}
\wedge\alpha,\eta_\Delta^{\rm hol}-\eta_\Delta^{\rm frob}\rangle,
\end{align}
where $F_{f,Q}\in\mathcal{L}_r(\mathcal{V}_Q)$ is an arbitrary local primitive of $\omega_{f}$ on the annulus $\mathcal{V}_Q$.
(Note that here we are using the fact that the connection $\nabla$ on
$\mathcal{L}_{r,r}=\mathcal{L}_r\otimes{\rm Sym}^rH_{\rm dR}^1(A/F)$ is defined from
the Gauss--Manin connection on $\mathcal{L}_r$ by extending it trivially on the second factor.)
If $F_{f,\infty}$ is a Coleman primitive of $\omega_f$ on $\mathcal{W}_\infty$,
then $F_{f,\infty}^{[p]}:=F_{f,\infty}-\frac{a_p(f)}{p^{r+1}}\phi_\infty^*F_{f,\infty}$
is rigid analytic on $\mathcal{W}_\infty'\subset\mathcal{W}_\infty$ by Theorem~\ref{coleman-primitives},
and hence
\begin{equation}\label{+Ax}
\sum_{Q\in S\cap\mathcal{W}_\infty}{\rm res}_{\mathcal{V}_Q}\langle
F_{f,\infty}^{[p]}\wedge\alpha,\eta_\Delta^{\rm frob}\rangle
+\sum_{x\in SS}{\rm res}_{\mathcal{A}_x}\langle
F_{f,\infty}^{[p]}\wedge\alpha,\eta_{\Delta}^{\rm frob}\rangle=0
\end{equation}
by the Residue Theorem (see \cite[Thm.~3.8]{bdp1}).
Since $N(\eta_\Delta^{\rm frob})=0$,
Lemma~\ref{N=0} implies that we can write $\eta_\Delta^{\rm frob}=\nabla G_x$ for
some rigid section $G_x\in\mathcal{L}_{r,r}^{\rm rig}(\mathcal{A}_x)$ on each supersingular
annulus $\mathcal{A}_x$, and hence
\begin{align*}
d\langle F_{f,\infty}^{[p]}\wedge\alpha,G_x\rangle
&=\langle\nabla F_{f,\infty}^{[p]}\wedge\alpha,G_x\rangle+
\langle F_{f,\infty}^{[p]}\wedge\alpha,\eta_\Delta^{\rm frob}\rangle.
\end{align*}
In particular, the right-hand side in the last equality has residue $0$, and hence
\begin{equation}\label{Ax}
{\rm res}_{\mathcal{A}_x}\langle F_{f,\infty}^{[p]}\wedge\alpha,\eta_\Delta^{\rm frob}\rangle
=-{\rm res}_{\mathcal{A}_x}\langle\nabla F_{f,\infty}^{[p]}\wedge\alpha,G_x\rangle.
\end{equation}
Plugging $(\ref{Ax})$ into $(\ref{+Ax})$, we arrive at
\begin{equation}\label{00}
\sum_{Q\in S\cap\mathcal{W}_\infty}{\rm res}_{\mathcal{V}_Q}\langle
F_{f,\infty}^{[p]}\wedge\alpha,\eta_\Delta^{\rm frob}\rangle
-\sum_{x\in SS}{\rm res}_{\mathcal{A}_x}\langle
\nabla F_{f,\infty}^{[p]}\wedge\alpha,G_x\rangle=0.
\end{equation}
An entirely parallel argument with $\mathcal{W}_0$ in place of $\mathcal{W}_\infty$ yields the equality
\begin{equation}\label{0}
\sum_{Q\in S\cap\mathcal{W}_0}{\rm res}_{\mathcal{V}_Q}\langle
F_{f,0}^{[p]}\wedge\alpha,\eta_\Delta^{\rm frob}\rangle
+\sum_{x\in SS}{\rm res}_{\mathcal{A}_x}\langle
\nabla F_{f,\infty}^{[p]}\wedge\alpha,G_x\rangle=0,
\end{equation}
where $F_{f,0}$ is a Coleman primitive of $\omega_f$ on $\mathcal{W}_0$,
and where we used the fact that the supersingular annuli acquire opposite orientations
with respect to $\mathcal{W}_\infty$ and $\mathcal{W}_0$. Combining $(\ref{00})$ and $(\ref{0})$,
we get
\begin{equation}\label{contrib-res}
0=\sum_{Q\in S}{\rm res}_{\mathcal{V}_Q}\langle
F_{f,Q}^{[p]}\wedge\alpha,\eta_\Delta^{\rm frob}\rangle
=\left(1-\frac{a_p(f)}{p^{r+1}}\right)\sum_{Q\in S}{\rm res}_{\mathcal{V}_Q}\langle F_{f,Q}\wedge\alpha,\eta_\Delta^{\rm frob}\rangle,
\end{equation}
where $F_{f,Q}^{[p]}$ denotes $F_{f,\infty}^{[p]}$ or $F_{f,0}^{[p]}$ depending on whether
$Q\in\mathcal{W}_\infty$ or $\mathcal{W}_0$, respectively, using $(\ref{frob})$ for the second equality
(see the argument in \cite[p.1079]{bdp1}).
Since $a_p(f)^2=p^r$, the factor $1-\frac{a_p(f)}{p^{r+1}}$ is non-zero, and so there is no contribution from $\eta_\Delta^{\rm frob}$ in $(\ref{AJdelta})$.
On the other hand, since by the choice of $\eta_\Delta^{\rm hol}$ we easily have
\[
\sum_{Q\in S}{\rm res}_{\mathcal{V}_Q}\langle
F_{f,Q}\wedge\alpha,\eta_\Delta^{\rm hol}\rangle
=\langle F_{f,\infty}(P_{A'})\wedge\alpha,{\rm cl}_{P_{A'}}(\Delta_\varphi)\rangle
\]
(see \cite[Lemma~3.19]{bdp1}), the result follows.
\end{proof}
Let $(A,t_A,\omega_A)$ be the CM triple introduced in Section~\ref{subsubsec:A},
and let $\eta_A\in H_{\rm dR}^1(A/F)$ be the class determined by the conditions
\[
\lambda^*\eta_A=\lambda^\rho\eta_A\quad\textrm{for all $\lambda\in\mathcal{O}_K$,}\quad\quad\textrm{and}\quad\quad
\langle\omega_A,\eta_A\rangle_A=1,
\]
where $\lambda\mapsto\lambda^\rho$ denotes the action of the non-trivial automorphism of $K$,
and $\langle,\rangle_A$ is the cup product pairing on $H^1_{\rm dR}(A/F)$.
If
$(A',t_{A'},\omega_{A'})$ is the CM triple induced
from $(A,t_{A},\omega_{A})$ by an isogeny $\varphi\in{\rm Isog}_c^{\mathfrak{N}}(A)$, we define
$\eta_{A'}\in H_{\rm dR}^1(A'/F)$ by the analogous recipe. For the integers $j$ with $0\leq j\leq r$,
the classes $\omega_{A'}^j\eta_{A'}^{r-j}$ defined in \cite[(1.4.6)]{bdp1} then form a basis of ${\rm Sym}^rH_{\rm dR}^1(A'/F)$.
\begin{lem}\label{lem:3.22}
Let the notations be as in Proposition~\ref{prop:AJ-Coleman}.
Then, for each $0\leq j\leq r$, we have
\[
{\rm AJ}_F(\Delta_\varphi)(\omega_f\wedge\omega_A^j\eta_A^{r-j})=
{\rm deg}(\varphi)^j\cdot\langle F_{f,\infty}(P_{A'}),\omega_{A'}^j\eta_{A'}^{r-j}\rangle_{A'},
\]
where $F_{f,\infty}$ is the Coleman primitive of $\omega_f\in H^0(X,\underline{\omega}^{\otimes r}\otimes\Omega^1_X)$
on $\mathcal{W}_\infty$ (vanishing at $\infty$ if $r=0$),
and the pairing $\langle,\rangle_{A'}$ on the right-hand side is the natural one
on ${\rm Sym}^rH_{\rm dR}^1(A'/F)$.
\end{lem}
\begin{proof}
This follows from Proposition~\ref{prop:AJ-Coleman} as in \cite[Lemma~3.22]{bdp1}.
\end{proof}
Recall that if $f\in S_{k}(X)$ is a cusp form of weight $k$ and level $\Gamma$, then
$f\vert_{\mathcal{W}_\infty}$ defines a $p$-adic modular form $f_p\in\mathcal{M}_k(N)$ of weight $k$ and tame level $N$.
Evaluated on a CM triple $(A',t_{A'},\omega_{A'})$ of conductor $c$ prime to $p$, we then have
\[
f_p(A',t_{A'},\omega_{A'})=f(A',t_{A'},\alpha_\mathfrak{p}',\omega_{A'}),
\]
where $\alpha_\mathfrak{p}':A'\longrightarrow A'/A'[\mathfrak{p}]$ is the $p$-isogeny
defined by the canonical subgroup of $A'$ (see Section~\ref{subsubsec:A}).
By abuse of notation, in the following we will denote $f_p$ also by $f$.
The map $V$ defined in $(\ref{V})$ yields an operator
$V:\mathcal{M}_k(N)\longrightarrow\mathcal{M}_k(N)$ on $p$-adic modular forms whose effect on
$q$-expansions is given by $q\mapsto q^p$. Let $a_p(f)$ be the $U_p$-eigenvalue of $f$,
and define the \emph{$p$-depletion} of $f$ by
\[
f^{[p]}:=f-a_p(f)Vf.
\]
Letting $d=q\frac{d}{dq}:\mathcal{M}_k(N)\longrightarrow\mathcal{M}_{k+2}(N)$ be
the Atkin--Serre operator, for any integer $j$ the limit
\[
d^{-1-j}f^{[p]}:=\lim_{t\to -1-j}d^tf^{[p]}
\]
is a $p$-adic modular form of weight $k-2-2j$ and tame level $N$ (see \cite[Thm.~5]{Serre350}).
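On $q$-expansions all of these operations are explicit: since $U_pf=a_p(f)f$, we have
$a_p(f)Vf=VU_pf=\sum_{n\geq 1}a_{pn}(f)q^{pn}$, whence
\[
f^{[p]}=\sum_{p\nmid n}a_n(f)q^n,\quad\quad d^tf^{[p]}=\sum_{p\nmid n}n^ta_n(f)q^n,
\]
and since $t\mapsto n^t$ extends continuously to $t\in\mathbf{Z}_p$ when $p\nmid n$, the above limit is
given by $d^{-1-j}f^{[p]}=\sum_{p\nmid n}n^{-1-j}a_n(f)q^n$.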
\begin{lem}\label{3.24}
Let the notation be as in Proposition~\ref{prop:AJ-Coleman}.
Then for each $0\leq j\leq r$ there exists a locally analytic $p$-adic modular form $G_j$ of weight
$r-2j$ and tame level $N$ such that
\begin{equation}\label{eq:G}
\langle F_{f,\infty}(P_{A'}),\omega_{A'}^j\eta_{A'}^{r-j}\rangle_{A'}=G_j(A',t_{A'},\omega_{A'}),
\end{equation}
where $F_{f,\infty}$ is the Coleman primitive of $\omega_f$ on $\mathcal{W}_\infty$ (vanishing at $\infty$ if $r=0$), and
\begin{equation}\label{katz}
G_j(A',t_{A'},\omega_{A'})-\frac{a_p(f)}{p^{r-j+1}}G_j(\mathfrak{p}*(A',t_{A'},\omega_{A'}))=
j! d^{-1-j}f^{[p]}(A',t_{A'},\omega_{A'}).
\end{equation}
\end{lem}
\begin{proof}
The construction of $G_j$ as the ``$j$-th component'' of $F_{f,\infty}$ is given in \cite[p.1083]{bdp1},
and $(\ref{eq:G})$ then follows from the definition. On the other hand, $(\ref{katz})$ follows
from the same calculations as in [\emph{loc.cit.}, Lemma~3.23 and Prop.~3.24].
\end{proof}
We now relate the expression appearing in the right-hand side of
Proposition~\ref{prop:AJ-Coleman} to the value of a certain $p$-adic $L$-function associated to $f$.
\vspace{0.1in}
Recall that $(A,t_A,\omega_A)$ denotes the CM triple introduced in Section~\ref{subsubsec:A}, and
fix an elliptic curve $A_0/H_c$ with ${\rm End}_{H_c}(A_0)\simeq\mathcal{O}_c$.
The curve $A_0$ is related to $A$ by an isogeny $\varphi_0:A\longrightarrow A_0$ in
${\rm Isog}_c^{\mathfrak{N}}(A)$, and we let $(A_0,t_0,\omega_0)$
be the induced triple.
Since we assume that $p=\mathfrak{p}\overline{\mathfrak{p}}$ splits in $K$, we may fix an isomorphism
$\mu_{p^\infty}\simeq\mathcal{A}_0[\mathfrak{p}^\infty]$ of $p$-divisible groups, where
$\mathcal{A}_0/\mathcal{O}_{\mathbf{C}_p}$ is a good integral model of $A_0$. This amounts to the
choice of an isomorphism $\imath:\hat{\mathcal{A}}_0\longrightarrow\hat{\mathbb{G}}_m$ of formal groups,
and we let $\Omega_p\in\mathbf{C}_p^\times$ be the $p$-adic period defined by the rule
\[
\omega_0=\Omega_p\cdot\omega_{\rm can},
\]
where $\omega_{\rm can}:=\imath^*\frac{dt}{t}$ for the standard coordinate $t$ on $\hat{\mathbb{G}}_m$.
\vspace{0.1in}
Finally, consider the set $\Sigma_{k,c}^{+}$ of algebraic Hecke characters
$\chi:K^\times\backslash\mathbb{A}_K^\times\longrightarrow\mathbf{C}^\times$ of conductor $c$,
infinity type $(k+j,-j)$ with $j\geq 0$ (with the convention in \cite[p.1089]{bdp1}),
and such that
\[
\chi\vert_{\mathbb{A}_\mathbf{Q}^\times}=\boldsymbol{\rm N}^{k},
\]
where $\boldsymbol{\rm N}$ is the norm character on $\mathbb{A}_\mathbf{Q}^\times$, and for every
$\chi\in\Sigma_{k,c}^+$
set
\begin{equation}\label{def:Lp}
L_\mathfrak{p}(f)(\chi):=\sum_{[\mathfrak{a}]\in{\rm Pic}(\mathcal{O}_c)}\chi^{-1}(\mathfrak{a}){\rm N}(\mathfrak{a})^{-j}
\cdot d^jf^{[p]}(\mathfrak{a}*(A_0,t_0,\omega_{\rm can})),
\end{equation}
and define
\begin{equation}\label{eq:alg}
L_{\rm alg}(f,\chi^{-1}):=w(f,\chi)^{-1}C(f,\chi,c)\cdot\frac{L(f,\chi^{-1},0)}{\Omega^{2(k+2j)}},\nonumber
\end{equation}
where $w(f,\chi)$ and $C(f,\chi,c)$ are the constants defined in \cite[(5.1.11)]{bdp1}
and [\emph{loc.cit.}, Thm.~4.6], respectively, $\Omega$ is the complex period in [\emph{loc.cit.}, (5.1.16)],
and $L(f,\chi^{-1},0)$ is the central critical value of the Rankin--Selberg $L$-function
$L(f\times\theta_{\chi^{-1}},s)$ of $f$ and the theta series of $\chi^{-1}$.
\vspace{0.1in}
As explained in \cite[p.1134]{bdp1}, the set $\Sigma_{k,c}^{+}$
may be endowed with a natural $p$-adic topology,
and we let $\hat{\Sigma}_{k,c}$ denote its completion.
\begin{thm}\label{5.27-5.28}
The assignment $\chi\mapsto L_\mathfrak{p}(f)(\chi)$ extends to a continuous function
on $\hat{\Sigma}_{k,c}$ and satisfies the following interpolation property.
If $\chi\in\Sigma_{k,c}^+$ has infinity type $(k+j,-j)$, with $j\geq 0$, then
\[
\frac{L_\mathfrak{p}(f)(\chi)^2}{\Omega_p^{2(k+2j)}}=(1-a_p(f)\chi^{-1}(\bar{\mathfrak{p}}))^2
\cdot L_{\rm alg}(f,\chi^{-1}).
\]
\end{thm}
\begin{proof}
See Theorem~5.9, Proposition~5.10, and equation $(5.2.4)$ of \cite{bdp1},
noting that $\beta_p=0$ here, since $f$ has level divisible by $p$.
\end{proof}
Let $\Sigma_{k,c}^-$ be the set of
algebraic Hecke characters of $K$ of conductor $c$
and infinity type $(k-1-j,1+j)$, with $j\geq 0$.
Even though $\Sigma_{k,c}^+\cap\Sigma_{k,c}^-=\emptyset$,
any character in $\Sigma_{k,c}^-$ can be written as a limit of characters in $\Sigma_{k,c}^+$ (see \cite[p.1137]{bdp1}).
Thus for any $\chi\in\Sigma_{k,c}^-$, the value $L_\mathfrak{p}(f)(\chi)$ is defined by continuity.
\vspace{0.1in}
The next result extends the $p$-adic Gross--Zagier formula of \cite[Thm.~5.13]{bdp1}
to the semistable non-crystalline setting.
\begin{thm}\label{thmbdp1A}
Let $f=q+\sum_{n=2}^{\infty}a_n(f)q^n\in S_{k}(\Gamma_0(Np))$ be a $p$-new eigenform
of weight $k=r+2\geq 2$,
and suppose that $\chi\in\Sigma_{k,c}^-$ has infinity type $(r+1-j,1+j)$,
with $0\leq j\leq r$. Then
\begin{equation}\label{formula:bdp1A}
\frac{L_\mathfrak{p}(f)(\chi)}{\Omega_p^{r-2j}}=
(1-a_p(f)\chi^{-1}(\bar{\mathfrak{p}}))\cdot
\biggl(\frac{c^{-j}}{j!}\sum_{[\mathfrak{a}]\in{\rm
Pic}(\mathcal{O}_c)}\chi^{-1}(\mathfrak{a}){\rm N}(\mathfrak{a})\cdot
{\rm AJ}_F(\Delta_{\varphi_\mathfrak{a}\varphi_0})
(\omega_{f}\wedge\omega_A^{j}\eta_A^{r-j})\biggr).\nonumber
\end{equation}
\end{thm}
\begin{proof}
The proof of \cite[Prop.~5.10]{bdp1} shows that the expression $(\ref{def:Lp})$
extends in the obvious way to a character
$\chi$ as in the statement, yielding
\begin{equation}\label{takelim}
\frac{L_\mathfrak{p}(f)(\chi)}{\Omega_p^{r-2j}}=
\sum_{[\mathfrak{a}]\in{\rm
Pic}(\mathcal{O}_c)}\chi^{-1}(\mathfrak{a}){\rm N}(\mathfrak{a})^{1+j}
\cdot d^{-1-j}f^{[p]}(\mathfrak{a}*(A_0,t_0,\omega_0)).
\end{equation}
On the other hand, by Lemma~\ref{3.24} we have
\begin{equation}\label{from3.24}
j! d^{-1-j}f^{[p]}(\mathfrak{a}*(A_0,t_0,\omega_0))=G_j(\mathfrak{a}*(A_0,t_0,\omega_0))-\frac{a_p(f)}{p^{r-j+1}}G_j(\mathfrak{p}\mathfrak{a}*(A_0,t_0,\omega_0)).
\end{equation}
Substituting $(\ref{from3.24})$ into $(\ref{takelim})$, summing over $[\mathfrak{a}]\in{\rm Pic}(\mathcal{O}_c)$,
and noting that
\[
\chi(\mathfrak{p})p^{-1-j}=\chi^{-1}(\overline{\mathfrak{p}})p^{r+1-j}
\]
(which follows from $\chi(\mathfrak{p})\chi(\overline{\mathfrak{p}})=p^{r+2}$, as $\chi$ restricts to $\boldsymbol{\rm N}^{r+2}$ on $\mathbb{A}_\mathbf{Q}^\times$),
we see that
\begin{equation}\label{G}
\frac{L_\mathfrak{p}(f)(\chi)}{\Omega_p^{r-2j}}=\left(1-a_p(f)\chi^{-1}(\overline{\mathfrak{p}})\right)
\cdot\biggl(\frac{1}{j!}
\sum_{[\mathfrak{a}]\in{\rm Pic}(\mathcal{O}_c)}\chi^{-1}(\mathfrak{a}){\rm N}(\mathfrak{a})^{1+j}\cdot G_j(\mathfrak{a}*(A_0,t_0,\omega_0))\biggr).
\end{equation}
Finally, since the isogeny
$\varphi_\mathfrak{a}\varphi_0:(A,t_A,\omega_A)\longrightarrow\mathfrak{a}*(A_0,t_0,\omega_0)$
has degree $c{\rm N}(\mathfrak{a})$, combining Lemma~\ref{lem:3.22} and Lemma~\ref{3.24} we have
\begin{equation}\label{modifAJ}
G_j(\mathfrak{a}*(A_0,t_0,\omega_0))=c^{-j}{\rm N}(\mathfrak{a})^{-j}\cdot
{\rm AJ}_F(\Delta_{\varphi_\mathfrak{a}\varphi_0})(\omega_f\wedge\omega_A^j\eta_A^{r-j}),
\end{equation}
and substituting $(\ref{modifAJ})$ into $(\ref{G})$, the result follows.
\end{proof}
\section{Main result}
In this section we prove the main result of this paper,
giving an ``exceptional zero formula'' for the specializations
of Howard's big Heegner points at exceptional primes in the Hida family.
\subsection{Heegner points in Hida families}
\label{subsec:bigHP}
We begin by briefly reviewing the constructions of \cite{howard-invmath},
which we adapt to our situation, referring the reader to [\emph{loc.cit.}, \S{2}] for further details.
\vspace{0.1in}
Recall that $p\nmid N$. Let $f=\sum_{n=1}^\infty a_n(f)q^n\in S_k(\Gamma_0(Np))$
be a newform, fix a finite extension $L$ of $\mathbf{Q}_p$ with ring of integers $\mathcal{O}_L$
containing the Fourier coefficients of $f$, and let
\[
\rho_f:G_\mathbf{Q}:={\rm Gal}(\overline{\mathbf{Q}}/\mathbf{Q})\longrightarrow{\rm Aut}_L(V_f)\simeq\mathbf{GL}_2(L)
\]
be the Galois representation associated to $f$. Also, let $K=\mathbf{Q}(\sqrt{-D_K})$
be an imaginary quadratic field as in $\S\ref{subsubsec:A}$.
For the rest of this paper, these will be subject to the following further hypotheses.
\begin{ass}\label{running-ass}
\begin{enumerate}
\item{} $f$ is ordinary at $p$, i.e., $\imath_p(a_p(f))$ is a $p$-adic unit;
\item{} $\overline{\rho}_f$ is absolutely irreducible;
\item{} $\overline{\rho}_f$ is ramified at every prime $q$ dividing $(D_K,N)$;
\item{} $p\nmid h_K:=\vert{\rm Pic}(\mathcal{O}_K)\vert$, the class number of $K$.
\end{enumerate}
\end{ass}
Note that by \cite[Lemma~2.15]{howard-invmath}, the first assumption forces the weight of $f$
to be $k=2$, which will thus be assumed for the rest of this paper.
\begin{defn}
Set $\Lambda_{\mathcal{O}_L}:=\mathcal{O}_L\pwseries{1+p\mathbf{Z}_p}$.
For any $\Lambda_{\mathcal{O}_L}$-algebra $A$, let $\mathcal{X}_{\mathcal{O}_L}^a(A)$ be
the set of continuous $\mathcal{O}_L$-algebra homomorphisms
$\nu:A\longrightarrow\overline{\mathbf{Q}}_p$ such that the composition
\[
1+p\mathbf{Z}_p\longrightarrow\Lambda_{\mathcal{O}_L}^\times\longrightarrow A^\times
\xrightarrow{\;\;\nu\;\;}\overline{\mathbf{Q}}_p^\times
\]
is given by $\gamma\mapsto\gamma^{k_\nu-2}$, for some integer $k_\nu\geq 2$ with $k_\nu\equiv 2\pmod{2(p-1)}$
called the \emph{weight} of $\nu$.
\end{defn}
Since $f$ is ordinary at $p$, by \cite[Cor.~1.3]{hida86b}
there exists a local reduced finite integral extension
$\mathbb{I}$ of $\Lambda_{\mathcal{O}_L}$, and a formal $q$-expansion
$\mathbf{f}=\sum_{n=1}^\infty\mathbf{a}_nq^n\in\mathbb{I}[[q]]$ uniquely characterized by the following property.
For every $\nu\in\mathcal{X}^a_{\mathcal{O}_L}(\mathbb{I})$ of weight $k_\nu>2$, there exists a newform
$f_\nu\in S_{k_\nu}(\Gamma_0(N))$ such that
\begin{equation}\label{p-stab}
\nu(\mathbf{f})=f_\nu(q)-\frac{p^{k_\nu-1}}{\nu(\mathbf{a}_p)}f_\nu(q^p),
\end{equation}
and there exists a unique $\nu_f\in\mathcal{X}^a_{\mathcal{O}_L}(\mathbb{I})$
of weight $2$ such that ${\nu_f}(\mathbf{f})=f(q)$.
\vspace{0.1in}
By \cite[Thm.~1.2]{hida86b}, there is
a free $\mathbb{I}$-module $\mathbf{T}$ of rank $2$ equipped with
a continuous action
\[
\rho_\mathbf{f}:G_\mathbf{Q}\longrightarrow
{\rm Aut}_\mathbb{I}(\mathbf{T})\cong\mathbf{GL}_2(\mathbb{I})
\]
such that for every $\nu\in\mathcal{X}_{\mathcal{O}_L}^a(\mathbb{I})$,
$\nu(\rho_\mathbf{f})$ is isomorphic to the Galois representation
$\rho_{f_\nu}:G_\mathbf{Q}\longrightarrow\mathbf{GL}_2(\overline{\mathbf{Q}}_p)$
associated to $f_\nu$. Moreover, by \cite[Thm.~2.2.2]{wiles88},
if $D_p\subset G_{\mathbf{Q}}$ is the decomposition group of any place $\mathfrak{P}$
of $\overline{\mathbf{Q}}$ above $p$,
there exists an exact sequence of $\mathbb{I}[D_p]$-modules
\begin{equation}\label{eq:ord}
0\longrightarrow{\mathscr{F}}^+\mathbf{T}\longrightarrow\mathbf{T}\longrightarrow{\mathscr{F}}^-\mathbf{T}\longrightarrow 0
\end{equation}
with ${\mathscr{F}}^{\pm}\mathbf{T}$ free of rank $1$ over $\mathbb{I}$, and with the $D_p$-action on ${\mathscr{F}}^-\mathbf{T}$
given by the unramified character sending an arithmetic Frobenius ${\rm Fr}_p^{-1}$ to $\mathbf{a}_p\in\mathbb{I}^\times$.
\vspace{0.1in}
Following \cite[Def.~2.1.3]{howard-invmath}, define the \emph{critical character}
$\Theta:G_\mathbf{Q}\longrightarrow\mathbb{I}^\times$ by the composite
\begin{equation}\label{def:crit}
\Theta:G_\mathbf{Q}\xrightarrow{\varepsilon_{\rm cyc}}\mathbf{Z}_p^\times\xrightarrow{\;\langle\cdot\rangle\;}
1+p\mathbf{Z}_p\xrightarrow{\gamma\mapsto\gamma^{1/2}}1+p\mathbf{Z}_p
\longrightarrow\Lambda_{\mathcal{O}_L}^\times\longrightarrow\mathbb{I}^\times,
\end{equation}
where $\varepsilon_{\rm cyc}$ is the $p$-adic cyclotomic character,
and $\langle\cdot\rangle$ denotes the projection onto the $1$-units in $\mathbf{Z}_p$.
Let $\mathbb{I}^\dagger$ be the free $\mathbb{I}$-module of rank $1$
where $G_\mathbf{Q}$ acts via $\Theta^{-1}$, and set
\[
\mathbf{T}^\dagger:=\mathbf{T}\otimes_{\mathbb{I}}\mathbb{I}^\dagger
\]
equipped with the diagonal Galois action.
Then, if for every $\nu\in\mathcal{X}_{\mathcal{O}_L}^{a}(\mathbb{I})$
we let $V_{f_\nu}$ be a representation space for $\rho_{f_\nu}$,
the specialization $\nu(\mathbf{T}^\dagger):=\mathbf{T}^\dagger\otimes_{\mathbb{I},\nu}\nu(\mathbb{I})$ is
isomorphic to a lattice in the self-dual Tate twist
$V_{f_\nu}(k_\nu/2)$ of $V_{f_\nu}$ (see
\cite[Thm.~1.4.3]{Ohta} and \cite[(3.2.4)]{Nekovar-Plater}).
\vspace{0.1in}
Let $K_\infty$ be the anticyclotomic $\mathbf{Z}_p$-extension of $K$, and for each $n\geq 0$,
let $K_n$ be the subfield of $K_\infty$ with ${\rm Gal}(K_n/K)\simeq\mathbf{Z}/p^n\mathbf{Z}$.
\begin{thm}[Howard]
\label{thm:bigHPs}
There is a system of ``big Heegner points''
\[
\mathfrak{Z}_\infty=\{\mathfrak{Z}_n\}_{n\geq 0}\in
H^1_{\rm Iw}(K_\infty,\mathbf{T}^\dagger)
:=\varprojlim_nH^1(K_n,\mathbf{T}^\dagger)
\]
with the following properties.
\begin{enumerate}
\item{} For each $n$, $\mathfrak{Z}_n$ belongs to the Greenberg Selmer group
${\rm Sel}_{\rm Gr}(K_n,\mathbf{T}^\dagger)$ of \cite[Def.~2.4.2]{howard-invmath}.
In particular, for every prime $\mathfrak{q}$ of $K$ above $p$, we have
\[
{\rm loc}_{\mathfrak{q}}(\mathfrak{Z}_\infty)\in{\rm ker}\left(H^1_{\rm Iw}(K_{\infty,\mathfrak{q}},\mathbf{T}^\dagger)
\longrightarrow H^1_{\rm Iw}(K_{\infty,\mathfrak{q}},{\mathscr{F}}^-\mathbf{T}^\dagger)\right)
\]
for the natural map induced by $(\ref{eq:ord})$.
\item{} If $\mathfrak{Z}_\infty^*$ denotes the image of $\mathfrak{Z}_\infty$ under the action of complex
conjugation, then
\begin{equation}\label{eq:w}
\mathfrak{Z}_\infty^*=w\cdot\mathfrak{Z}_\infty\nonumber
\end{equation}
for some $w\in\{\pm{1}\}$.
\end{enumerate}
\end{thm}
\begin{proof}
In the following, all the references are to \cite{howard-invmath}.
The construction of $\mathfrak{Z}_\infty$ is given in
\S\S{2.2}, 3.3 and the proof of $(1)$ is given in Prop.~2.4.5.
For the proof of $(2)$, we need to briefly recall the definition of $\mathfrak{Z}_n$.
Let $H_{p^{n+1}}$ be the ring class field of $K$ of conductor $p^{n+1}$, and note that it contains $K_n$.
By Prop.~2.3.1, the ``big Heegner points'' $\mathfrak{X}_{p^{n+1}}\in H^1(H_{p^{n+1}},\mathbf{T}^\dagger)$
satisfy ${\rm Cor}_{H_{p^{n+1}}/H_{p^n}}(\mathfrak{X}_{p^{n+1}})=U_p\cdot\mathfrak{X}_{p^n}$,
and hence the classes
\begin{equation}\label{def:Z}
\mathfrak{Z}_n:=U_p^{-n}\cdot{\rm Cor}_{H_{p^{n+1}}/K_n}(\mathfrak{X}_{p^{n+1}})
\end{equation}
are compatible under corestriction. Denoting by $\tau$ the image of a class under
the action of complex conjugation and using Prop.~2.3.5, we find that
\begin{align}\label{eq:dih}
{\rm Cor}_{H_{p^{n+1}}/K_n}(\mathfrak{X}_{p^{n+1}})^\tau
&=\sum_{\sigma\in{\rm Gal}(H_{p^{n+1}}/K_n)}\mathfrak{X}_{p^{n+1}}^{\tau\sigma}\\
&=\sum_{\sigma\in{\rm Gal}(H_{p^{n+1}}/K_n)}\mathfrak{X}_{p^{n+1}}^{\sigma^{-1}\tau}\nonumber\\
&=w\cdot{\rm Cor}_{H_{p^{n+1}}/K_n}(\mathfrak{X}_{p^{n+1}})\nonumber
\end{align}
for some $w\in\{\pm 1\}$. Combining $(\ref{def:Z})$ and $(\ref{eq:dih})$, the result follows.
\end{proof}
\subsection{Two-variable $p$-adic $L$-functions}
\label{sec:2varL}
As in the preceding section, let $f\in S_2(\Gamma_0(Np))$ be a newform split multiplicative at $p$,
and let $\mathbf{f}\in\mathbb{I}\pwseries{q}$ be the Hida family passing through $f$.
Recall the spaces of characters $\Sigma_{k,c}^{\pm}$ and $\hat{\Sigma}_{k,c}$ introduced in Section~\ref{sec:computations}.
In the following, we only consider the case $c=1$,
which will henceforth be suppressed from the notation.
\vspace{0.1in}
By \cite[Prop.~5.10]{bdp1} (see also Theorem~\ref{5.27-5.28}),
for every $\nu\in\mathcal{X}_{\mathcal{O}_L}^a(\mathbb{I})$
the assignment
\[
\chi\longmapsto L_\mathfrak{p}(f_\nu)(\chi):=\sum_{[\mathfrak{a}]\in{\rm Pic}(\mathcal{O}_K)}\chi^{-1}(\mathfrak{a}){\rm N}(\mathfrak{a})^{-j}
\cdot d^jf_\nu^{[p]}(\mathfrak{a}*(A_0,t_0,\omega_{\rm can}))
\]
extends to a continuous function on $\hat{\Sigma}_{k_\nu}$.
Using the explicit expression for these values,
it is easy to show the existence of a two-variable $p$-adic $L$-function interpolating
$L_\mathfrak{p}(f_\nu)$ for varying $\nu$. For the precise statement, denote by $h=h_K$ the class number of $K$
(which we assume is prime to $p$), and let $\phi_o$
be the unramified Hecke character
defined on fractional ideals by the rule
\begin{equation}\label{phi}
\phi_o(\mathfrak{a})=\alpha/\overline{\alpha},\quad\textrm{where $(\alpha)=\mathfrak{a}^h$}.
\end{equation}
Assume that $\mathcal{O}_L$ contains the values of $\phi_o$,
and denote by $\langle\phi_o\rangle$ the composition of $\phi_o$
with the projection onto the $\mathbf{Z}_p$-free quotient of $\mathcal{O}_L^\times$,
which then is valued in $1+p\mathbf{Z}_p$, and define $\xi:K^\times\backslash\mathbb{A}_K^\times\longrightarrow\mathbb{I}^\times$ by
\begin{equation}\label{def:xi}
\xi:K^\times\backslash\mathbb{A}_K^\times\xrightarrow{\;\;\phi_o\;\;}\mathcal{O}_L^\times\xrightarrow{\;\langle\cdot\rangle\;}
1+p\mathbf{Z}_p\xrightarrow{\gamma\mapsto\gamma^{1/2h}}1+p\mathbf{Z}_p
\longrightarrow\Lambda_{\mathcal{O}_L}^\times\longrightarrow\mathbb{I}^\times.
\end{equation}
Similarly, recall the critical character $\Theta:G_{\mathbf{Q}}\longrightarrow\mathbb{I}^\times$ from $(\ref{def:crit})$,
and define $\chi:K^\times\backslash\mathbb{A}_K^\times\longrightarrow\mathbb{I}^\times$ by
\[
\chi(x)=\Theta({\rm rec}_\mathbf{Q}({\rm N}_{K/\mathbf{Q}}(x))),
\]
where ${\rm rec}_{\mathbf{Q}}:\mathbb{A}_\mathbf{Q}^\times\longrightarrow{\rm Gal}(\mathbf{Q}^{\rm ab}/\mathbf{Q})$ is
the \emph{geometrically} normalized global reciprocity map.
Let $\Gamma_\infty:={\rm Gal}(K_\infty/K)$ be the Galois
group of the anticyclotomic $\mathbf{Z}_p$-extension of $K$,
and denote by $\mathcal{X}_{\mathcal{O}_L}^a(\Gamma_\infty)$
the set of continuous $\mathcal{O}_L$-algebra homomorphisms $\mathcal{O}_L\pwseries{\Gamma_\infty}\longrightarrow\overline{\mathbf{Q}}_p$
induced by a character $\phi$ of the form $\phi=\phi_o^{\ell_\phi}$ for some integer
$\ell_\phi\geq 0$ with $\ell_\phi\equiv 0\pmod{p-1}$ called the \emph{weight} of $\phi$.
Finally, let
\[
{\rm\mathbf{N}}_K:K^\times\backslash\mathbb{A}_K^\times\xrightarrow{{\rm N}_{K/\mathbf{Q}}}\mathbf{Q}^\times\backslash\mathbb{A}_\mathbf{Q}^\times
\xrightarrow{\;\;\rm\mathbf{N}\;\;}\mathbf{C}^\times
\]
be the norm character of $K$, and for every $\nu\in\mathcal{X}_{\mathcal{O}_L}^a(\mathbb{I})$,
let $\xi_\nu$ and $\chi_\nu$
be the composition of $\xi$ and $\chi$ with $\nu$, respectively.
\begin{thm}\label{prop:bigL}
There exists a continuous function $L_{\mathfrak{p},\xi}(\mathbf{f})$
on $\mathcal{X}_{\mathcal{O}_L}^a(\mathbb{I})\times\mathcal{X}_{\mathcal{O}_L}^a(\Gamma_\infty)$
such that for every $\nu\in\mathcal{X}_{\mathcal{O}_L}^a(\mathbb{I})$ we have
\[
L_{\mathfrak{p},\xi}(\mathbf{f})(\nu,\phi)=L_\mathfrak{p}(f_\nu)(\phi\xi_\nu\chi_\nu{\rm\mathbf{N}}_K)
\]
as functions of $\phi\in\mathcal{X}^a_{\mathcal{O}_L}(\Gamma_\infty)$.
\end{thm}
\begin{proof}
See \cite[Thm.~1.4]{cas-2var}. (Note that if
$(\nu,\phi)\in\mathcal{X}_{\mathcal{O}_L}^a(\mathbb{I})\times\mathcal{X}_{\mathcal{O}_L}^a(\Gamma_\infty)$,
then $\phi\xi_\nu\chi_\nu{\rm\mathbf{N}}_K$ is an unramified Hecke character of infinity type
$(k_\nu+\ell_\phi-1,1-\ell_\phi)$, thus lying in the domain of $L_\mathfrak{p}(f_\nu)$.)
\end{proof}
\subsection{Big logarithm maps}
By our assumption that $p\nmid h_K$, the extension $K_\infty/K$ is
totally ramified at every prime $\mathfrak{q}$ above $p$; let $K_{\infty,\mathfrak{q}}$ be the completion of $K_\infty$ at
the unique prime above $\mathfrak{q}$, and set $\Gamma_{\mathfrak{q},\infty}:={\rm Gal}(K_{\infty,\mathfrak{q}}/K_\mathfrak{q})$.
Even though $\Gamma_{\mathfrak{q},\infty}$ may be identified with $\Gamma_\infty$, in the following it will
be convenient to maintain the distinction between them.
\vspace{0.1in}
Recall the $\mathbb{I}$-adic Hecke character introduced in $(\ref{def:xi})$, and
let $\xi:G_K\longrightarrow\mathbb{I}^\times$ also denote the Galois character defined by
\[
\xi(\sigma):=[\langle\hat{\phi}_o(\sigma)\rangle^{1/2h}],
\]
where $\hat\phi_o:G_K\longrightarrow\mathcal{O}_L^\times$ is the $p$-adic avatar of the Hecke character
$\phi_o$ in $(\ref{phi})$. Finally, set
\[
\mathbb{T}:=\mathbf{T}^\dagger\vert_{G_K}\otimes\xi^{-1},
\]
and for every $\nu\in\mathcal{X}_{\mathcal{O}_L}^a(\mathbb{I})$ denote by $V_\nu$
the specialization of $\mathbb{T}$ at $\nu$.
\begin{thm}\label{cor:L-}
Let $\mathfrak{q}\in\{\mathfrak{p},\overline{\mathfrak{p}}\}$, define
$\lambda_\pm:=\mathbf{a}_p\cdot\Theta\xi^{\pm}({\rm Fr}_\mathfrak{q})-1\in\mathbb{I}$ and
set $\tilde{\mathbb{I}}=\mathbb{I}[\lambda_+^{-1}\lambda_-^{-1}]\otimes_{\mathbf{Z}_p}\hat{\mathbf{Z}}_p^{\rm nr}$.
There exists an $\mathbb{I}\pwseries{\Gamma_{\mathfrak{q},\infty}}$-linear map
\[
\mathcal{L}_{{\mathscr{F}}^+\mathbb{T}}^{\omega}:H^1_{\rm Iw}(K_{\infty,\mathfrak{q}},{\mathscr{F}}^+\mathbb{T})
\longrightarrow\tilde{\mathbb{I}}\pwseries{\Gamma_{\mathfrak{q},\infty}}
\]
such that for every $\mathfrak{Y}_\infty\in H^1_{\rm Iw}(K_{\infty,\mathfrak{q}},{\mathscr{F}}^+\mathbb{T})$ and every
$(\nu,\phi)\in\mathcal{X}^a_{\mathcal{O}_L}(\tilde{\mathbb{I}})\times\mathcal{X}_{\mathcal{O}_L}^a(\Gamma_{\infty})$, we have
\begin{align*}
\left(1-\frac{\Theta_\nu^{-1}\xi_\nu\phi^{-1}({\rm Fr}_{\mathfrak{q}})}{\nu(\mathbf{a}_p)}\right)
\mathcal{L}_{{\mathscr{F}}^+\mathbb{T}}^{\omega}(\mathfrak{Y}_\infty)(\nu,\phi_\mathfrak{q})
&=\ell_\phi!^{-1}\left(1-\frac{\nu(\mathbf{a}_p)p^{-1}}{\Theta_\nu^{-1}\xi_\nu^{}\phi^{-1}({\rm Fr}_{\mathfrak{q}})}\right)
\langle{\rm log}_{}(\nu(\mathfrak{Y}_\infty)^{\phi^{}}),\breve{\omega}_{\nu}\rangle_{},
\end{align*}
where $\log=\log_{{\mathscr{F}}^+V_\nu\otimes\phi}:H^1(K_\mathfrak{q},{\mathscr{F}}^+V_\nu\otimes\phi)\longrightarrow D_{\rm dR}({\mathscr{F}}^+V_\nu\otimes\phi)$
is the Bloch--Kato logarithm map, and $\nu(\mathfrak{Y}_\infty)^{\phi^{}}\in H^1(K_{\mathfrak{q}},{\mathscr{F}}^+V_\nu\otimes\phi^{})$
is the $\phi^{}$-specialization of $\nu(\mathfrak{Y}_\infty)$.
\end{thm}
\begin{proof}
See \cite[Prop.~4.3]{cas-2var}.
\end{proof}
\begin{rem}\label{D}
Fix a compatible system $\zeta_\infty=\{\zeta_r\}_{r\geq 0}$
of $p$-power roots of unity, and let $\zeta_\infty t^{-1}$ be the associated basis
element of $D_{\rm dR}(\mathbf{Q}_p(1))$. In Theorem~\ref{cor:L-} above,
$\omega$ denotes a generator of the module
\[
\mathbb{D}:=({\mathscr{F}}^+\mathbb{T}(-1)\hat{\otimes}_{\mathbf{Z}_p}\hat{\mathbf{Z}}_p^{\rm nr})^{G_{K_\mathfrak{q}}},
\]
which by \cite[Lemma~3.3]{Ochiai-Col} is free of rank one over $\mathbb{I}$.
(Note that ${\mathscr{F}}^+\mathbb{T}(-1)$ is unramified.)
As explained in \emph{loc.cit.}, for each $\nu\in\mathcal{X}_{\mathcal{O}_L}^a(\mathbb{I})$
there is a specialization map
\[
\nu_*:\mathbb{D}\longrightarrow\mathbb{D}_\nu\otimes_{\mathbf{Z}_p}\mathbf{Q}_p\simeq D_{\rm dR}({\mathscr{F}}^+V_\nu(-1)).
\]
Then, letting $\omega_\nu$ denote the image of
$\nu_*(\omega)\otimes\zeta_\infty t^{-1}$ in
$D_{\rm dR}({\mathscr{F}}^+V_\nu(-1))\otimes D_{\rm dR}(\mathbf{Q}_p(1))\simeq D_{\rm dR}({\mathscr{F}}^+V_\nu)$,
the class $\breve{\omega}_\nu\in D_{\rm dR}({\mathscr{F}}^-V_\nu^*(1))$ in the above interpolation formulae is defined
by requiring that
\[
\langle\omega_\nu,\breve{\omega}_\nu\rangle=1
\]
under the de Rham pairing $\langle,\rangle:D_{\rm dR}({\mathscr{F}}^+V_\nu)\times D_{\rm dR}({\mathscr{F}}^-V_\nu^*(1))\longrightarrow F_\nu$.
\end{rem}
The big logarithm map $\mathcal{L}_{{\mathscr{F}}^+\mathbb{T}}^\omega$ of Theorem~\ref{cor:L-} may not be specialized
at any pair $(\nu,\mathds{1})$ with $\nu\in\mathcal{X}_{\mathcal{O}_L}^a(\mathbb{I})$ such that $\nu(\lambda_\pm)=0$.
Since such arithmetic primes are in fact the main concern
in this paper, the following construction of an ``improved'' big logarithm map
will be useful.
\begin{prop}\label{prop:improved}
There exists an $\mathbb{I}$-linear map
\[
\bar{\mathcal{L}}^\omega_{{\mathscr{F}}^+\mathbb{T}}:H^1(K_\mathfrak{p},{\mathscr{F}}^+\mathbb{T})\longrightarrow\mathbb{I}\otimes_{\mathbf{Z}_p}\mathbf{Q}_p
\]
such that for every $\mathfrak{Y}_0\in H^1(K_\mathfrak{p},{\mathscr{F}}^+\mathbb{T})$ and
every $\nu\in\mathcal{X}^a_{\mathcal{O}_L}(\mathbb{I})$, we have
\begin{align*}
\nu\left(\bar{\mathcal{L}}_{{\mathscr{F}}^+\mathbb{T}}^{\omega}(\mathfrak{Y}_0)\right)
&=\left(1-\frac{\nu(\mathbf{a}_p)p^{-1}}{\Theta^{-1}\xi^{}({\rm Fr}_\mathfrak{p})}\right)
\langle{\rm log}_{{\mathscr{F}}^+V_\nu}(\nu(\mathfrak{Y}_0)),\breve{\omega}_{\nu}\rangle_{}.
\end{align*}
\end{prop}
\begin{proof}
This can be shown by adapting the methods of Ochiai~\cite[\S{5}]{Ochiai-Col}.
Indeed, let
\[
\mathcal{L}_{{\mathscr{F}}^+\mathbb{T}}:H^1(K_\mathfrak{p},{\mathscr{F}}^+\mathbb{T})\otimes\mathbf{Q}_p\longrightarrow\mathbb{D}\otimes_{\mathbf{Z}_p}\mathbf{Q}_p
\]
be the inverse of the map ${\rm exp}_{\mathbb{T}}$ constructed in \cite[Prop.~3.8]{venerucci-exp}
(see Remark~\ref{D} for the definition of $\mathbb{D}$),
and define
\[
\mathcal{L}_{{\mathscr{F}}^+\mathbb{T}}^\omega:H^1(K_\mathfrak{p},{\mathscr{F}}^+\mathbb{T})\longrightarrow\mathbb{I}\otimes_{\mathbf{Z}_p}\mathbf{Q}_p
\]
by the relation $\mathcal{L}_{{\mathscr{F}}^+\mathbb{T}}(-)=\mathcal{L}_{{\mathscr{F}}^+\mathbb{T}}^{\omega}(-)\cdot\omega$.
Setting
\[
\bar{\mathcal{L}}_{{\mathscr{F}}^+\mathbb{T}}^\omega=\biggl(1-\frac{\mathbf{a}_pp^{-1}}{\Theta^{-1}\xi^{}({\rm Fr}_\mathfrak{p})}\biggr)
\mathcal{L}_{{\mathscr{F}}^+\mathbb{T}}^\omega:
H^1(K_\mathfrak{p},{\mathscr{F}}^+\mathbb{T})\longrightarrow\mathbb{I}\otimes_{\mathbf{Z}_p}\mathbf{Q}_p,
\]
the result follows.
\end{proof}
\begin{cor}\label{cor:factor}
For any $\mathfrak{Y}_\infty=\{\mathfrak{Y}_n\}_{n\geq 0}\in H^1_{\rm Iw}(K_{\infty,\mathfrak{p}},{\mathscr{F}}^+\mathbb{T})$
we have the factorization in $\tilde{\mathbb{I}}$:
\[
\left(1-\frac{\Theta^{-1}\xi^{}({\rm Fr}_\mathfrak{p})}{\mathbf{a}_p}\right)
\cdot\varepsilon\left(\mathcal{L}_{{\mathscr{F}}^+\mathbb{T}}^\omega(\mathfrak{Y}_\infty)\right)=
\bar{\mathcal{L}}_{{\mathscr{F}}^+\mathbb{T}}^\omega(\mathfrak{Y}_0),
\]
where $\varepsilon:\tilde{\mathbb{I}}\pwseries{\Gamma_\infty}\longrightarrow\tilde{\mathbb{I}}$ is the augmentation map.
\end{cor}
\begin{proof}
Comparing the interpolation formulas in Theorem~\ref{cor:L-} and Proposition~\ref{prop:improved}, we
see that
\[
\left(1-\frac{\Theta_\nu^{-1}\xi_\nu^{}({\rm Fr}_\mathfrak{p})}{\nu(\mathbf{a}_p)}\right)
\mathcal{L}_{{\mathscr{F}}^+\mathbb{T}}^\omega(\mathfrak{Y}_\infty)(\nu,\mathds{1})=
\nu\left(\bar{\mathcal{L}}_{{\mathscr{F}}^+\mathbb{T}}^\omega(\mathfrak{Y}_0)\right)
\]
for every $\nu\in\mathcal{X}_{\mathcal{O}_L}^a(\tilde{\mathbb{I}})$;
since these primes are dense in $\tilde{\mathbb{I}}$,
the corollary follows.
\end{proof}
The proof of our main result will rely crucially on the relation found in \cite[\S{4}]{cas-2var}
between the $p$-adic $L$-function $L_{\mathfrak{p},\xi}(\mathbf{f})$ of Theorem~\ref{prop:bigL}
and Howard's system of big Heegner points $\mathfrak{Z}_\infty$.
We conclude this section by briefly recalling that relation.
\vspace{0.1in}
By \cite[Lemma~2.4.4]{howard-invmath}, for every prime $\mathfrak{q}$ of $K$ above $p$
the natural map
\[
H_{\rm Iw}^1(K_{\infty,\mathfrak{q}},{\mathscr{F}}^+\mathbf{T}^\dagger)\longrightarrow H_{\rm Iw}^1(K_{\infty,\mathfrak{q}},\mathbf{T}^\dagger)
\]
induced by $(\ref{eq:ord})$ is injective. In light of Theorem~\ref{thm:bigHPs}, in the following
we will thus view ${\rm loc}_\mathfrak{q}(\mathfrak{Z}_\infty)$ as sitting inside $H^1_{\rm Iw}(K_{\infty,\mathfrak{q}},{\mathscr{F}}^+\mathbf{T}^\dagger)$.
\begin{thm}\label{thm:equality}
There is a generator $\omega=\omega_{\mathbf{f}}$ of the module $\mathbb{D}$ such that
\[
\mathcal{L}^{\omega}_{{\mathscr{F}}^+\mathbb{T}}({\rm loc}_\mathfrak{p}(\mathfrak{Z}_\infty^{\xi^{-1}}))
=L_{\mathfrak{p},\xi}(\mathbf{f})
\]
as
functions on $\mathcal{X}_{\mathcal{O}_L}^a(\tilde{\mathbb{I}})\times\mathcal{X}_{\mathcal{O}_L}^a(\Gamma_\infty)$.
\end{thm}
\begin{proof}
The construction of the basis element $\omega=\omega_{\mathbf{f}}$ of $\mathbb{D}$
is deduced in \cite[Prop.~10.1.2]{KLZ2} from Ohta's work \cite{OhtaII},
and it has the property that $\langle\omega_\nu,\omega_{\mathbf{f}_\nu}\rangle=1$,
for all $\nu\in\mathcal{X}_{\mathcal{O}_L}^a(\mathbb{I})$, where $\omega_{\mathbf{f}_\nu}$
is the class in ${\rm Fil}^1D_{\rm dR}(V_\nu^*)\simeq D_{\rm dR}({\mathscr{F}}^-V_\nu^*(1))$
associated to the $p$-stabilized newform $(\ref{p-stab})$; in particular,
\[
\breve{\omega}_{\nu_f}=\omega_f
\]
in the notations of Remark~\ref{D}.
The result is then the content of \cite[Thm.~4.4]{cas-2var}.
\end{proof}
\subsection{Exceptional zero formula}
Let $f=\sum_{n=1}^\infty a_n(f)q^n\in S_2(\Gamma_0(Np))$ be an ordinary newform as in Section~\ref{subsec:bigHP},
and assume in addition that $f$ is \emph{split multiplicative} at $p$, meaning that $a_p(f)=1$.
Recall the CM triple $(A,t_A,\alpha_\mathfrak{p})\in X(H)$
introduced in Section~\ref{subsubsec:A}, which maps to the point $P_A=(A,A[\mathfrak{Np}])\in X_0(Np)$
under the forgetful map $X\longrightarrow X_0(Np)$. Let $\infty$ be any cusp of $X_0(Np)$ rational over $\mathbf{Q}$,
and let $\kappa_f\in H^1(K,V_f)$ be the image of $(P_A)-(\infty)$ under the composite map
\begin{equation}\label{eq:kummer}
J_0(Np)(H)\xrightarrow{{\rm Kum}}H^1(H,{\rm Ta}_p(J_0(Np))\otimes_{\mathbf{Z}_p}\mathbf{Q}_p)
\longrightarrow H^1(H,V_f)\xrightarrow{{\rm Cor}_{H/K}}H^1(K,V_f).
\end{equation}
If $\mathbf{f}\in\mathbb{I}[[q]]$ is the Hida family passing through $f$, and $\nu_f\in\mathcal{X}^a_{\mathcal{O}_L}(\mathbb{I})$
is the arithmetic prime of $\mathbb{I}$ such that $\nu_f(\mathbf{f})=f$, it would be natural to expect a relation between
the class $\kappa_f$ and the specialization at $\nu_f$ of Howard's big Heegner point
$\mathfrak{Z}_0$.
As done in \cite[\S{3}]{howard-mathann}, one can
trace through the construction of $\mathfrak{Z}_0$ to deduce a relation between the \emph{generic}
(in the sense of [\emph{loc.cit.}, Def.~2]) weight $2$ specializations of $\mathfrak{Z}_0$ and the Kummer images of certain CM points.
However,
the arithmetic prime $\nu_f$ is not generic in that sense,
and in fact one does not expect a similar direct relation between
$\nu_f(\mathfrak{Z}_0)$ and $\kappa_f$ (see the discussion in [\emph{loc.cit.}, p.813]).
\vspace{0.1in}
In Theorem~\ref{main} below we will show that in fact
the localization at $\mathfrak{p}$ of $\nu_f(\mathfrak{Z}_0)$ vanishes,
but that nonetheless it can be related to $\kappa_f$
upon taking a certain ``derivative'' in the following sense, where we let
$\log_p:\mathbf{Q}_p^\times\longrightarrow\mathbf{Q}_p$ be Iwasawa's branch of the $p$-adic logarithm.
\begin{lem}\label{lem:divide}
Let $T$ be a free $\mathcal{O}_L$-module of finite rank equipped with a linear action of $G_{\mathbf{Q}_p}$,
let $k_\infty/\mathbf{Q}_p$ be a $\mathbf{Z}_p$-extension, and let $\gamma\in{\rm Gal}(k_\infty/\mathbf{Q}_p)$
be a topological generator. Assume that $T^{G_{k_\infty}}=\{0\}$, and let
$\mathcal{Z}_\infty=\{\mathcal{Z}_n\}_{n\geq 0}\in H_{\rm Iw}^1(k_\infty,T)$
be such that $\mathcal{Z}_0=0$.
Then there exists a unique $\mathcal{Z}_{\gamma,\infty}'
=\{\mathcal{Z}_{\gamma,n}'\}_{n\geq 0}\in H_{\rm Iw}^1(k_\infty,T)$
such that
\[
\mathcal{Z}_\infty=(\gamma-1)\cdot\mathcal{Z}_{\gamma,\infty}'.
\]
Moreover, if $\eta:{\rm Gal}(k_\infty/\mathbf{Q}_p)\simeq\mathbf{Z}_p$ is any group isomorphism,
then
\begin{equation}
\mathcal{Z}_0':=\frac{\mathcal{Z}'_{\gamma,0}}{\log_p(\eta(\gamma))}\in H^1(\mathbf{Q}_p,T[1/p])\nonumber
\end{equation}
is independent of the choice of $\gamma$.
\end{lem}
\begin{proof}
Consider the module $T_\infty:=T\hat{\otimes}_{\mathcal{O}_L}\mathcal{O}_L\pwseries{{\rm Gal}(k_\infty/\mathbf{Q}_p)}$ equipped
with the diagonal Galois action, where $G_{\mathbf{Q}_p}$ acts on the second factor via the projection
$G_{\mathbf{Q}_p}\longrightarrow{\rm Gal}(k_\infty/\mathbf{Q}_p)$. By Shapiro's Lemma, we then have
\[
H^1(\mathbf{Q}_p,T_\infty)\simeq H^1_{\rm Iw}(k_\infty,T),
\]
and the assumption that $T^{G_{k_\infty}}=\{0\}$ implies that
$H^1(\mathbf{Q}_p,T_\infty)$ is torsion-free. Therefore,
the exact sequence of $\mathcal{O}_L\pwseries{{\rm Gal}(k_\infty/\mathbf{Q}_p)}$-modules
\[
0\longrightarrow T_\infty\xrightarrow{\gamma-1}T_\infty
\longrightarrow T\longrightarrow 0
\]
induces the cohomology exact sequence
\[
0\longrightarrow H^1(\mathbf{Q}_p,T_\infty)\xrightarrow{\gamma-1}H^1(\mathbf{Q}_p,T_\infty)
\longrightarrow H^1(\mathbf{Q}_p,T),
\]
giving the proof of the first claim, and the second follows from an
immediate calculation.
\end{proof}
Let $h=h_K$ be the class number of $K$, write $\mathfrak{p}^{h}=\pi_\mathfrak{p}\mathcal{O}_K$,
and set $\varpi_\mathfrak{p}=\pi_\mathfrak{p}/\overline{\pi}_{\mathfrak{p}}\in K_\mathfrak{p}^\times\simeq\mathbf{Q}_p^\times$.
Define
\begin{equation}\label{differenceLinv}
\mathscr{L}_\mathfrak{p}(f,K):=\mathscr{L}_p(f)-\mathscr{L}_\mathfrak{p}(\chi_K),
\end{equation}
where $\mathscr{L}_p(f)$ is the $\mathscr{L}$-invariant of $f$ (as defined in \cite[\S{II.14}]{mtt}, for example), and
\[
\mathscr{L}_\mathfrak{p}(\chi_K):=\frac{\log_p(\varpi_\mathfrak{p})}{{\rm ord}_p(\varpi_\mathfrak{p})}=-\frac{2\log_p(\overline{\pi}_{\mathfrak{p}})}{h}
\]
is the $\mathscr{L}$-invariant of the quadratic character $\chi_K$ associated to $K$ (see \cite[\S{1}]{greenberg-zeros}, for example),
with ${\rm ord}_p$ the $p$-adic valuation on $\mathbf{Q}_p$ with the normalization
${\rm ord}_p(p)=1$.
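To check the second equality displayed above: since $\pi_\mathfrak{p}\overline{\pi}_\mathfrak{p}$ generates $(p)^h$, we have $\pi_\mathfrak{p}\overline{\pi}_\mathfrak{p}=up^h$ for a root of unity $u\in\mathcal{O}_K^\times$, whence $\log_p(\pi_\mathfrak{p})=-\log_p(\overline{\pi}_\mathfrak{p})$ on Iwasawa's branch and $\log_p(\varpi_\mathfrak{p})=-2\log_p(\overline{\pi}_\mathfrak{p})$; moreover, ${\rm ord}_p(\varpi_\mathfrak{p})=h$, since $\pi_\mathfrak{p}$ has valuation $h$ in $K_\mathfrak{p}\simeq\mathbf{Q}_p$ while $\overline{\pi}_\mathfrak{p}$ is a unit.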
\vspace{0.1in}
The following derivative formula is the main result of this paper.
\begin{thm}\label{main}
Let $f\in S_2(\Gamma_0(Np))$ be a newform split multiplicative at $p$,
let $\mathbf{f}\in\mathbb{I}[[q]]$ be the Hida family passing through $f$,
let $\mathfrak{Z}_\infty\in H_{\rm Iw}^1(K_\infty,\mathbf{T}^\dagger)$
be Howard's system of big Heegner points, and define
$\mathcal{Z}_{\mathfrak{p},f,\infty}:=\{\mathcal{Z}_{\mathfrak{p},f,n}\}_{n\geq 0}\in H^1_{\rm Iw}(K_{\infty,\mathfrak{p}},{\mathscr{F}}^+V_f)$ by
\[
\mathcal{Z}_{\mathfrak{p},f,n}:={\rm loc}_\mathfrak{p}(\nu_f(\mathfrak{Z}_n)),
\]
where $\nu_f\in\mathcal{X}_{\mathcal{O}_L}^a(\mathbb{I})$ is such that $f=\nu_f(\mathbf{f})$.
Then $\mathcal{Z}_{\mathfrak{p},f,0}=0$ and
\begin{equation}\label{eq:deriv}
\mathcal{Z}_{\mathfrak{p},f,0}'
=\mathscr{L}_{\mathfrak{p}}(f,K)\cdot{\rm loc}_\mathfrak{p}(\kappa_f),
\end{equation}
where $\mathscr{L}_{\mathfrak{p}}(f,K)$ is the $\mathscr{L}$-invariant $(\ref{differenceLinv})$, and
$\kappa_f\in H^1(K,V_f)$ is the image of the degree zero divisor $(A,A[\mathfrak{Np}])-(\infty)$ under the
Kummer map $(\ref{eq:kummer})$.
\end{thm}
\begin{proof}
By Proposition~\ref{prop:improved}, Corollary~\ref{cor:factor}, Theorem~\ref{thm:equality},
and Theorem~\ref{prop:bigL}, respectively, we see that
\begin{align*}
\left(1-a_p(f)p^{-1}\right)\langle{\rm log}_{}(\mathcal{Z}_{\mathfrak{p},f,0}),\omega_f\rangle_{}
&=\lim_{\nu\to\nu_f}\nu\left(\bar{\mathcal{L}}^\omega_{{\mathscr{F}}^+\mathbb{T}}({\rm loc}_\mathfrak{p}(\mathfrak{Z}_0^{\xi^{-1}}))\right)\\
&=\lim_{\nu\to\nu_f}\left(1-\frac{\Theta_\nu^{-1}\xi_\nu^{}({\rm Fr}_\mathfrak{p})}{\nu(\mathbf{a}_p)}\right)
\mathcal{L}_{{\mathscr{F}}^+\mathbb{T}}^\omega({\rm loc}_{\mathfrak{p}}(\mathfrak{Z}_\infty^{\xi^{-1}}))(\nu,\mathds{1})\\
&=\lim_{\nu\to\nu_f}\left(1-\frac{\Theta_\nu^{-1}\xi_\nu^{}({\rm Fr}_\mathfrak{p})}{\nu(\mathbf{a}_p)}\right)
L_{\mathfrak{p},\xi}(\mathbf{f})(\nu,\mathds{1})\\
&=\left(1-a_p(f)^{-1}\right) L_\mathfrak{p}(f,{\rm\mathbf{N}}_K).
\end{align*}
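(For the last equality, note that $\Theta_{\nu_f}$ and $\xi_{\nu_f}$ are trivial, $\nu_f$ being of weight $2$, so that the character $\phi\xi_{\nu_f}\chi_{\nu_f}{\rm\mathbf{N}}_K$ of Theorem~\ref{prop:bigL} reduces to ${\rm\mathbf{N}}_K$ at $\phi=\mathds{1}$.)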
Since $a_p(f)=1$ by hypothesis, this shows that
$\langle{\rm log}_{}(\mathcal{Z}_{\mathfrak{p},f,0}),\omega_f\rangle_{}=0$,
and the vanishing of $\mathcal{Z}_{\mathfrak{p},f,0}$ follows. We now turn
to the proof of the derivative formula $(\ref{eq:deriv})$.
Denote by $L_{\mathfrak{p},\xi}(\mathbf{f})^\iota$ the image of $L_{\mathfrak{p},\xi}(\mathbf{f})$
under the involution of $\tilde{\mathbb{I}}\pwseries{\Gamma_\infty}$ induced by complex conjugation,
so that $L_{\mathfrak{p},\xi}(\mathbf{f})^\iota(\chi)=L_{\mathfrak{p},\xi}(\mathbf{f})(\chi^{-1})$
for every character $\chi$ of $\Gamma_\infty$. One immediately checks the commutativity of the diagram
\[
\xymatrix{
H^1_{\rm Iw}(K_\infty,{\mathscr{F}}^+\mathbb{T})\ar[rr]^-{{\rm loc}_\mathfrak{p}}\ar[d]^{*}
&& H^1_{\rm Iw}(K_{\infty,\mathfrak{p}},{\mathscr{F}}^+\mathbb{T}) \ar[rr]^-{\mathcal{L}_{{\mathscr{F}}^+\mathbb{T}}^{\omega}}\ar[d]^{*}
&& \tilde{\mathbb{I}}\pwseries{\Gamma_{\mathfrak{p},\infty}} \ar[d]^{\iota}
\\
H^1_{\rm Iw}(K_\infty,{\mathscr{F}}^+\mathbb{T})\ar[rr]^-{{\rm loc}_{\overline\mathfrak{p}}}
&& H^1_{\rm Iw}(K_{\infty,\overline{\mathfrak{p}}},{\mathscr{F}}^+\mathbb{T}) \ar[rr]^-{\mathcal{L}_{{\mathscr{F}}^+\mathbb{T}}^{\omega}}
&& \tilde{\mathbb{I}}\pwseries{\Gamma_{\overline{\mathfrak{p}},\infty}},
}
\]
where the left and middle vertical arrows denote the action of complex conjugation.
Define the functions
\begin{equation}\label{defL}
\mathcal{L}_\mathfrak{p}(k,t):=
\left(1-\frac{(p/\varpi_{\mathfrak{p}})^{k/2-1}}{\nu_k(\mathbf{a}_p)\varpi_{\mathfrak{p}}^{-t}}\right)
L_{\mathfrak{p},\xi}(\mathbf{f})(\nu_k,\phi_o^t),\quad
\mathcal{L}_{\overline{\mathfrak{p}}}(k,t)
:=\left(1-\frac{(p\varpi_{\mathfrak{p}})^{k/2-1}}{\nu_k(\mathbf{a}_p)\varpi_{\mathfrak{p}}^{t}}\right)
L_{\mathfrak{p},\xi}(\mathbf{f})(\nu_k,\phi_o^{-t}),\nonumber
\end{equation}
where $\phi_o$ is the character $(\ref{phi})$.
By the combination of Theorem~\ref{cor:L-} and Theorem~\ref{thm:equality},
we then have
\begin{equation}\label{espL}
\mathcal{L}_\mathfrak{p}(k,t)=
\frac{1}{t!}\left(1-\frac{\nu_k(\mathbf{a}_p)\varpi_{\mathfrak{p}}^{-t}}{p(p/\varpi_{\mathfrak{p}})^{k/2-1}}\right)
\langle\log_{}({\rm loc}_\mathfrak{p}(\nu_k(\mathfrak{Z}_\infty)_{1-k/2+t})),\breve{\omega}_{\nu_k}\rangle_{},\nonumber
\end{equation}
and by the above diagram we also have
\begin{equation}\label{esL*}
\mathcal{L}_{\overline{\mathfrak{p}}}(k,t)=
\frac{1}{t!}\left(1-\frac{\nu_k(\mathbf{a}_p)\varpi_{\mathfrak{p}}^{t}}{p(p\varpi_{\mathfrak{p}})^{k/2-1}}\right)
\langle\log_{}({\rm loc}_{\overline{\mathfrak{p}}}(\nu_k(\mathfrak{Z}^*_\infty)_{k/2-1-t})),\breve{\omega}_{\nu_k}\rangle_{}.\nonumber
\end{equation}
By the ``functional equation'' satisfied by $\mathfrak{Z}_\infty$ (see Theorem~\ref{thm:bigHPs}),
it follows that the function
\[
\mathcal{L}_p(k,t):=\mathcal{L}_\mathfrak{p}(k,t)-w\mathcal{L}_{\overline{\mathfrak{p}}}(k,k-2-t)
\]
vanishes identically along the ``line'' $t=k/2-1$. By \cite[Prop.~2.3.6]{howard-invmath}, the sign
$w$ is the \emph{opposite} of the sign in the functional equation for the $p$-adic $L$-function $L_p^{}(f,s)$
associated to $f$ in \cite{mtt}. Thus, if $w=1$, then ${\rm ord}_{s=1}L_p(f,s)>2$,
and by \cite[Lemma~6.1]{venerucci-exp} the right-hand side of $(\ref{eq:deriv})$ vanishes; since the
vanishing of the left-hand side follows easily from the construction of $\mathcal{Z}'_{\gamma,\infty}$
in Lemma~\ref{lem:divide}, we conclude that $(\ref{eq:deriv})$ reduces to the identity ``$0=0$'' when $w=1$.
As a consequence, in the following we shall assume that $w=-1$.
Using the formula for the $\mathscr{L}$-invariant of $f$ as the logarithmic derivative
of $\nu_k(\mathbf{a}_p)$ at $k=2$ (see \cite[Thm.~3.18]{GS}, for example)
and noting that $(p/\varpi_\mathfrak{p})^{k/2-1}=\overline{\pi}_\mathfrak{p}^{(k-2)/h}$ by definition,
we find
\begin{align}\label{eq:LHS}
\frac{\partial}{\partial k}\mathcal{L}_p(k,t)\bigr\vert_{(2,0)}
&=\left[\frac{d}{dk}\nu_k(\mathbf{a}_p)\bigr\vert_{k=2}-\frac{\log_p(\overline{\pi}_{\mathfrak{p}})}{h}
-w\left(\frac{d}{dk}\nu_k(\mathbf{a}_p)\bigr\vert_{k=2}-\frac{\log_p(\overline{\pi}_{\mathfrak{p}})}{h}\right)\right]
L_\mathfrak{p}(f)({\rm\mathbf{N}}_K)\\
&=-\left[\frac{(1-w)}{2}\left(\mathscr{L}_p(f)-\mathscr{L}_\mathfrak{p}(\chi_K)\right)\right]L_\mathfrak{p}(f)({\rm\mathbf{N}}_K)\nonumber\\
&=-\mathscr{L}_\mathfrak{p}(f,K)\cdot L_\mathfrak{p}(f)({\rm\mathbf{N}}_K).\nonumber
\end{align}
Using the aforementioned vanishing of $\mathcal{L}_p(k,k/2-1)$ for the first equality,
we also find
\begin{align}\label{eq:RHS}
\frac{\partial}{\partial k}\mathcal{L}_p(k,t)\bigr\vert_{(2,0)}
=-\frac{1}{2}\frac{\partial}{\partial t}\mathcal{L}_p(k,t)\bigr\vert_{(2,0)}
&=-\frac{(1-w)}{2}\left(1-a_p(f)p^{-1}\right)\langle{\rm log}_{}(\mathcal{Z}_{\mathfrak{p},f,0}'),\omega_{f}\rangle_{}\\
&=-(1-p^{-1})\langle{\rm log}_{}(\mathcal{Z}_{\mathfrak{p},f,0}'),\omega_f\rangle_{},\nonumber
\end{align}
and comparing $(\ref{eq:LHS})$ and $(\ref{eq:RHS})$, we arrive at the equality
\begin{equation}\label{eq:impr}
(1-p^{-1})\langle{\rm log}_{}(\mathcal{Z}_{\mathfrak{p},f,0}'),\omega_{f^{}}\rangle_{}
=\mathscr{L}_\mathfrak{p}(f,K)\cdot L_\mathfrak{p}(f)({\rm\mathbf{N}}_K).
\end{equation}
On the other hand, letting $\varphi_0:A\longrightarrow A$ be the identity isogeny,
by Theorem~\ref{thmbdp1A} we have
\begin{align*}\label{eq:HP}
L_\mathfrak{p}(f)(\mathbf{N}_K)
&=(1-a_p(f)p^{-1})\sum_{[\mathfrak{a}]\in{\rm Pic}(\mathcal{O}_K)}\langle{\rm AJ}_{F}(\Delta_{\varphi_\mathfrak{a}\varphi_0}),\omega_{f}\rangle_{}\\
&=(1-p^{-1})\langle{\log}_{}({\rm loc}_\mathfrak{p}(\kappa_f)),\omega_{f}\rangle_{},
\end{align*}
which combined with $(\ref{eq:impr})$ concludes the proof of Theorem~\ref{main}.
\end{proof}
\begin{rem}
It would be interesting to extend the main result of this paper to higher weights.
As is well-known (see \cite[Thm.~3]{Li}, for example), if $f\in S_{k}(\Gamma_0(Np))$ is a newform with $U_p$-eigenvalue $a_p(f)$,
then $a_p(f)^2=p^{k-2}$.
Thus if $k>2$, then $f$ has positive slope (i.e., it is \emph{not} $p$-ordinary),
and the extension of our Theorem~\ref{main} to this case
would require an extension to Coleman families\footnote{For a recent result along these lines
(albeit for a different Euler system), see \cite{LZ-Coleman}.}
of Howard's construction of big Heegner
points in Hida families \cite{howard-invmath}.
\end{rem}
\bibliographystyle{alpha}
\section{Introduction}
Gamma-ray burst (GRB) afterglows occupy a unique position among the various high-energy astrophysical outflow phenomena. They are extremely relativistic blast waves with inferred Lorentz factors $\gamma$ that can exceed several hundred, far more than those typical for active galactic nuclei ($\gamma \sim 25$) or microquasars ($\gamma \sim 5$). They are transient events that occur only once per source. And they are relatively `clean', certainly when compared to the prompt emission, in that their outflows are (at least eventually) not dominated by complex large scale magnetic fields and in that their broadband emission from radio to X-rays is dominated by a single radiative process (synchrotron emission).
This picture, of course, becomes more murky the earlier one looks and the closer to the prompt emission one gets, and the more one looks in detail at the peculiarities of any given afterglow dataset. But broadly speaking, the main conceptual issues with respect to afterglow blast waves are (1) the geometry and dynamics of the hydrodynamical outflow and the structure of its environment, (2) the microphysics of shock acceleration of electrons and the generation of fields at the shock front and (3) how the previous two combine into local emission that adds up to a global synchrotron-type spectrum observable at cosmological distances. Because the blast waves move with nearly the speed of light, the bookkeeping effort in step (3) depends sensitively on the evolution of the blast wave during the timespan in which simultaneously arriving radiation is emitted from various parts of the outflow.
In this review I focus mostly on the most basic afterglow model, where a collisionless shock wave interacts with a circumburst medium. This scenario was originally predicted in the context of the fireball model \cite{Rees1992} but is not unique to it. Even initially magnetically dominated outflows \cite{Spruit2001} or ballistic ejecta \cite{Dado2002} will eventually lead to a blast wave of swept-up material at further distance from the progenitor. Already in its simplest form, this basic model gives rise to a wide range of observational consequences and poses a number of computational challenges. The purpose of this review is to highlight these and to identify some limitations and aspects not emphasized elsewhere. I benefit from the fact that a number of important issues regarding GRB afterglows are already reviewed elsewhere in these proceedings, such as flares, energy injection (in the context of magnetars), afterglow polarization and short GRBs.
\section{Basic dynamics of a relativistic blast wave}
The most basic model for the afterglow dynamics is that of an initially relativistic explosion collimated with half-opening angle $\theta_0$ and with isotropic equivalent energy $E_{iso}$ adiabatically expanding in a cold homogeneous medium of density $\rho_{ext}$. Once the blast wave has reached a radius far greater than its initial radius, i.e. $r \gg r_0$, at time $t \gg t_0$ and the energy in the swept-up external mass greatly exceeds that in the initial mass of the ejecta (if any), the hydrodynamical equations for the blast wave will be functions only of $\theta_0$, $E_{iso}$, $\rho_{ext}$, speed of light $c$ and coordinates $r$, $\theta$, $t$ (assuming symmetry along $\phi$). Before the launch of Swift with its fast slewing capabilities, this was also effectively the only stage of the afterglow that was observed.
Instead of using $r$, $t$ and $\theta$, the fluid equations can be written in terms of dimensionless combinations $A \equiv r / (c t)$, $B \equiv E_{iso} t^2 / (\rho_{ext} r^5)$, $\theta$. These variables are invariant under any transformation $E'_{iso} = \kappa E_{iso}$, $\rho'_{ext} = \lambda \rho_{ext}$, $r' = (\kappa / \lambda)^{1/3} r$, $t' = (\kappa / \lambda)^{1/3} t$ and from this straightforward dimensional analysis it follows that for a given initial opening angle $\theta_0$, the blast wave goes through exactly the same stages when explosion energy is increased (or circumburst density decreased), but at larger radii and later times. In a practical sense, this significantly reduces the parameter space for numerical simulations, to an extent that it can be fully covered and utilized for data analysis \cite{vanEerten:2011yn}.
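As a concrete example: multiplying $E_{iso}$ by a factor $100$ at fixed $\rho_{ext}$ (i.e. $\kappa = 100$, $\lambda = 1$) maps a given solution onto one passing through the same dimensionless stages at radii and times larger by a factor $100^{1/3} \approx 4.6$.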
In the ultrarelativistic stage at early times there is no causal contact yet along the different angles of the blast wave since the comoving speed of sound has a finite relativistic upper limit $\beta_S = 1 / \sqrt{3}$, in units of $c$. Expressed in the lab frame this speed is further reduced by a factor $\gamma$, the Lorentz factor of the flow. The flow is therefore effectively along radial lines initially and independent of $\theta$. Additionally, in the lab frame, all swept-up material is concentrated in an extremely thin shell of width $\Delta R \propto R / \gamma^2$, where $R$ is the blast wave radius (here, one $\gamma$ follows from going from comoving to lab frame density, the other from Lorentz contraction of the shell width). This means that initially the dimensionless coordinate $A \uparrow 1$ across the entire shell and the fluid equations end up being a function of $B$ only, implying a self-similar solution describing the fluid evolution exists at least to leading order in $1 / \gamma^2$. This is indeed the case and in the Blandford-McKee (BM) solution \cite{Blandford:1976uq} the full fluid profile is known analytically from combining the constraint of self-similarity with conservation of explosion energy within the expanding blast wave.
At very late stages the outflow becomes spherical regardless of its initial collimation and $\theta$ and $\theta_0$ drop out of the equations. The flow becomes non-relativistic as well, $A \downarrow 0$, and again a self-similar solution exists. In the Sedov-Taylor-von Neumann solution (e.g. \cite{Sedov:1959}), the entire fluid profile is again known analytically and the radius of the blast wave can be immediately deduced up to a multiplicative constant just from dimensional analysis, i.e. $R \propto (E_{iso} t^2 / \rho_{ext})^{1/5}$, leading to a combination of parameters identical to that for $B$.
A disadvantage of the self-similar solutions is that they do not apply to the intermediate stage of deceleration. However, it is straightforward to construct simplified dynamical models describing the entire evolution for the spherical case once one assumes that all swept-up mass is concentrated in a thin homogeneous shell near the shock front and various such models exist in the literature \cite{Piran1999, Chiang1999, Huang1999, Peer2012}. The common feature of these models is that by combining the shock-jump conditions with energy conservation, a prescription for the evolution of the blast wave Lorentz factor can be found. Since many numerical studies use an equation of state (EOS) relating pressure $p$ to internal energy density $e$ that approximates analytically the exact solution for a (trans-)relativistic ideal gas, it is instructive to demonstrate a shell model for one such EOS,
\begin{equation}
p / (\rho c^2) = \frac{e / (\rho c^2)}{3} \frac{2 + e / (\rho c^2)}{1 + e / (\rho c^2)},
\end{equation}
which was taken from \cite{Mignone2005} and has been applied, for example, in \cite{Zhang2009, vanEerten2010, vanEerten2010transrelativistic, vanEerten2011chromaticbreaks}. Here $\rho$ is comoving density, $e$ does not include rest mass. The correct asymptotic limits are retrieved: $p = e / 3$ and $p = 2e / 3$ in the relativistic and non-relativistic case respectively. For this EOS, the general shock-jump conditions for a blast wave in a cold medium become simply (see also \cite{Uhm2011}):
\begin{equation}
\rho = 4 \gamma \rho_{ext}, \quad e = 4 \gamma(\gamma - 1) \rho_{ext} c^2, \quad p = 4 (\gamma^2 - 1) \rho_{ext} c^2 / 3,
\end{equation}
and tell us, for example, that the jump in density at the shock front $\rho / \gamma$ will be equal to 4 throughout the \emph{entire} evolution of the blast wave (see also \cite{vanEerten2010transrelativistic}). It further follows that the width of the homogeneous shell is \emph{always} $\Delta R = R / (12 \gamma^2)$, if it is to contain all swept-up mass $M$ with density given by the jump condition (in the lab frame the shell density is $4 \gamma^2 \rho_{ext}$, so $M = (4 \pi / 3) R^3 \rho_{ext} = 4 \pi R^2 \Delta R \cdot 4 \gamma^2 \rho_{ext}$). The shell volume is then given by $V_S = M / (4 \rho_{ext} \gamma^2)$. The dynamics of the shell follow from fixing the total energy in the shell (here expressed in the lab frame):
\begin{equation}
E_{iso} = \tau V_S = [(\rho c^2 + e + p) \gamma^2 - p - \gamma \rho c^2] M / (4 \rho_{ext} \gamma^2),
\end{equation}
leading to
\begin{equation}
E_{iso} / (M c^2) = \beta^2 ( 4 \gamma^2 - 1 ) / 3.
\end{equation}
The ultra-relativistic limit has $\gamma \propto M^{-1/2} \propto t^{-3/2}$, and the non-relativistic limit $\beta \propto M^{-1/2} \propto t^{-3/5}$, as expected from the self-similar solutions. Solving the shell model reveals the enormous range of distance scales involved, which is the key numerical challenge. Simulating the deceleration of a typical BM type blast wave with $E_{iso} = 10^{53}$ erg and $\rho_{ext} \equiv n_{ext} m_p = m_p$ (i.e. one proton cm$^{-3}$) from $\gamma = 100$ onwards until $\beta \gamma \sim 10^{-2}$ means going from $10^{17}$ cm to $10^{20}$ cm, while the initial shell width $\Delta R = R / (12 \gamma^2) \sim 10^{12}$ cm. Simulations therefore typically require adaptive-mesh refinement (AMR, where the grid resolution is dynamically and locally adapted leading to an effective resolution that can be orders of magnitude larger than the base grid resolution). Alternative and complementary approaches exist, such as using moving grid boundaries \cite{Mimica:2008up}, setting up the simulation in a Lorentz boosted frame \cite{vanEerten:2012xk} or using (multi-dimensional) Lagrangian methods where the grid cells advect with the flow \cite{Kobayashi1999, Duffell2011, Duffell2013}.
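As an illustration of both the shell model and the range of scales involved, the snippet below is a minimal sketch in Python (not the implementation used in any of the references above; the parameter values are the fiducial choices quoted in this section). It solves $E_{iso} / (M c^2) = \beta^2 (4 \gamma^2 - 1) / 3$ for the proper velocity $\beta \gamma$ as a function of radius and integrates $\mathrm{d} t = \mathrm{d} R / (\beta c)$ for the lab frame time:
\begin{verbatim}
# Minimal shell-model sketch: E/(M c^2) = x (3 + 4x) / (3 (1 + x)),
# with x = (beta gamma)^2 and M(R) the swept-up mass; lab frame time
# follows from dt = dR / (beta c). Fiducial parameter values only.
import numpy as np

c, m_p = 2.998e10, 1.673e-24         # cgs units
E_iso, n_ext = 1.0e53, 1.0           # erg, protons per cm^3
rho_ext = n_ext * m_p

def proper_velocity(R):
    M = 4.0 * np.pi / 3.0 * rho_ext * R**3
    eps = E_iso / (M * c**2)
    b = 3.0 * (1.0 - eps)            # from 4x^2 + 3(1 - eps)x - 3 eps = 0
    x = (-b + np.sqrt(b * b + 48.0 * eps)) / 8.0
    return np.sqrt(x)                # u = beta * gamma

R = np.logspace(16.0, 20.5, 4000)    # radius grid [cm]
u = proper_velocity(R)
beta = u / np.sqrt(1.0 + u**2)
t = np.empty_like(R)
t[0] = R[0] / (beta[0] * c)          # coasting approximation up to R[0]
t[1:] = t[0] + np.cumsum(np.diff(R) / (0.5 * (beta[1:] + beta[:-1]) * c))

for target in (10.0, 1.0, 1.0e-2):
    i = np.argmin(np.abs(np.log10(u / target)))
    print("beta gamma ~ %g at R ~ %.1e cm, t ~ %.0f days"
          % (target, R[i], t[i] / 86400.0))
\end{verbatim}
For the fiducial values this reproduces the scales quoted above, with deceleration setting in around $10^{17}$ cm and $\beta \gamma \sim 10^{-2}$ reached around $10^{20}$ cm.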
Arguably the most obvious generalization in terms of dynamics is changing the circumburst density environment to one where density depends on radius as a power law. For long GRBs, where the progenitors are thought to be massive stars \cite{Woosley:1993wj, MacFadyen:1998vz}, one would expect the environment to resemble a stellar wind, $\rho_{ext} \propto r^{-2}$, presumably generated by a Wolf-Rayet type progenitor star \cite{Chevalier:1999mi}.
A number of authors have performed numerical studies of BM type blast waves decelerating in a stellar wind environment \cite{Nakar2007, Meliani2007, DeColle:2011ca}. The effect of an environment $\rho_{ext} = \rho_0 (r/r_0)^{-k}$ is that for higher $k$ the blast wave takes more time to decelerate, and the characteristic time scales change accordingly \cite{Piran2005}. Our shell model, for example, reaches $\beta \gamma = 1$ at $t_{NR} \approx 922 [(E_{iso} / 10^{53}) (m_p r_0^{-k}/ \rho_0) (3-k)]^{1 / (3-k)}$ days. The observational implication is that characteristic features (such as jet breaks, see below) will be stretched out over time. The numerical implication is that a larger grid and longer running time are required to capture the same dynamical stages. The blast wave profiles are scale invariant between energies and densities for each $k$, although the dimensionality of $\rho_0$ needs to be taken into account when expressing scale invariance in terms of $E_{iso}$ and $\rho_0$ \cite{vanEerten:2012xk}.
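For $k = 2$, for instance, the exponent $1 / (3 - k)$ equals unity, so that $t_{NR}$ grows linearly with $E_{iso} / (\rho_0 r_0^2)$, rather than with the cube root of $E_{iso} / \rho_0$ as in the homogeneous case.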
Further generalizations to jet dynamics include adding structure to the initial outflow (e.g. \cite{Rossi2002, Berger2003}), increasing the mass of the initial ejecta (as included in the original fireball model, see also e.g. \cite{Kobayashi1999, Duffell2013}) or prolonging the duration of energy injection (with initial mass and energy injecting both giving rise to a reverse shock) or taking into account complex circumburst medium structures and transitions (e.g. \cite{Eldridge2006, vanMarle2006, Peer2006, Mesler2012, vanMarle2012, Gat:2013zla}). When the shock dynamics are numerically resolved, it is found to be very difficult to model strong variability in afterglow light curves through circumburst medium interactions \cite{Nakar2007, vanEerten2009opticalvariability, Mimica2011, Gat:2013zla} (although it does offer a plausible explanation for late time shallowing of the light curve, \cite{Gat:2013zla}). The explanation of afterglow flares therefore most likely requires some form of magnetic reconnection (e.g. \cite{Giannios2006}) or late engine activity (see e.g. \cite{Sari2000, Perna2006, Maxham2009, Vlasis:2011yp}).
\section{Emission}
If an expanding relativistic blast wave contains a non-thermal distribution of electrons and when magnetic fields are present, synchrotron emission naturally follows. Detailed theoretical analyses of the standard model basically follow \cite{Blandford:1977}, where synchrotron and synchrotron-self Compton (SSC) emission are discussed in the context of the self-similar BM solution established previously by the same authors \cite{Blandford:1976uq}, although that article predates the discovery of afterglows by twenty years.
The standard fireball model approach to afterglows (e.g. \cite{Wijers1997, Sari1998, Granot:2001ge}) assumes that a non-thermal distribution of electrons with distribution $n_e (\gamma_e) = C_e \gamma_e^{-p}$ (and $n_e$ and $\gamma_e$ expressed in the frame comoving with the local fluid element) is generated through shock-acceleration at the front of the blast wave. The energy distribution index $p$ (which has nothing to do with pressure $p$) typically lies between 2 and 3, and the distribution cuts off below at $\gamma_m$. If $p \le 2$, the total energy density in accelerated electrons (defined here excluding rest-mass) $\int (\gamma_e - 1) n_e(\gamma_e) m_e c^2 \mathrm{d} \gamma_e$ diverges if no upper boundary $\gamma_M$ is included. When it is assumed (1) that the energy density in non-thermal electrons is a fraction $\epsilon_e$ of the available internal energy density $e$ and (2) that a fixed fraction $\xi_n$ of the available electrons $n$ are accelerated (with $n$ also the proton number density in the fluid), we can determine $\gamma_m$ and $C_e$. For $\gamma_m$ (and assuming $\gamma_M \uparrow \infty$ at the acceleration site), we obtain:
\begin{equation}
\gamma_m = \frac{p-2}{p-1} \left( \frac{\epsilon_e e}{\xi_n n m_e c^2} + 1 \right).
\label{gamma_m_equation}
\end{equation}
The rest mass term on the right is usually ignored, assuming $\gamma_m \gg 1$. This becomes less accurate as time goes on and $\gamma$ decreases. Even when rest mass is included, we find that the $\gamma_m = 1$ threshold is crossed when $\gamma = 1 + \xi_n m_e m_p^{-1} \epsilon_e^{-1} (p - 2)^{-1}$, at which point the parametrization breaks down. At very late times, therefore, $\xi_n$ \emph{must} be smaller than unity, as is the case for supernova remnants. Alternative parametrizations of the shock-microphysics that deal with late times are possible, see e.g. \cite{Huang2003, vanEerten2010transrelativistic}.
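For illustration: with $\xi_n = 1$, $\epsilon_e = 0.1$ and $p = 2.5$, this threshold lies at $\gamma \approx 1 + (m_e / m_p) \times 10 \times 2 \approx 1.01$, i.e. deep in the non-relativistic phase of the blast wave evolution.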
Following shock-acceleration, the population of electrons evolves according to
\begin{equation}
\frac{\mathrm{d} \gamma_e}{\mathrm{d} t} = - \frac{4 \sigma_T \gamma_e^2}{3 \gamma m_e c} (U_B + U_{IC}) + \frac{\beta_e^2 \gamma_e}{3 n} \frac{\mathrm{d} n}{\mathrm{d} t},
\label{kinetic_equation}
\end{equation}
assuming that the population remains confined to its fluid element \cite{Downes2002, Granot:2001ge} (an assumption that can be found to be justified up to very high $\gamma_e$ by checking the Larmor radius of the accelerated electrons). The first term on the right contains magnetic field energy density $U_B$, photon field energy density $U_{IC}$ and Thomson cross section $\sigma_T$ and represents energy loss due to synchrotron and synchrotron self-Compton radiation. Energy loss due to adiabatic evolution of a fixed volume in phase space is given by the second term. The electron velocity $\beta_e$ can safely be assumed to be 1 for a relativistic population. It is only electrons with $\gamma_e \gg 1$ that emit synchrotron radiation.
Once the flow becomes non-relativistic, $e \propto n^{5/3}$ rather than $e \propto n^{4/3}$, implying that once $\epsilon_e$ is set to some parametrized value (typically around 0.1) at the shock front, adiabatic evolution of the relativistic electron population will cause it to evolve further downstream according to $\epsilon_e \propto e^{-1/5}$. Since energy density decreases downstream, the relative energy content of the relativistic electrons will grow. Once radiative losses are accounted for as well, the evolution becomes even more complex. Because the adiabatic evolution induced dependency of $\epsilon_e$ on $e$ is only weak and because the evolution of $\gamma_m$ is typically dictated by the adiabatic loss term only, it is usually assumed in numerical studies for the purpose of determining $\gamma_m$ that $\epsilon_e$ remains fixed as a fluid element advects downstream post-shock \cite{Nakar2007, Mimica:2008up, Zhang2009, vanEerten2010transrelativistic, vanEerten2011chromaticbreaks, Wygoda:2011vu, DeColle:2011ca, Mimica2011}.
Each individual electron emits a synchrotron spectrum peaking at
\begin{equation}
\nu'_e(\gamma_e) = \frac{3 q_e}{4 \pi m_e c} \gamma_e^2 B,
\end{equation}
measured in the frame comoving with the fluid element, with $B$ magnetic field strength and $q_e$ electron charge. The flux from an individual electron will drop exponentially at higher frequencies. In the absence of electron cooling, the local accelerated electron population as a whole will emit a synchrotron spectrum peaking at $\nu'_m \propto \gamma_m^2 B$ and with emission coefficients $j_\nu$ asymptoting to $j_\nu \propto (\nu' / \nu_m')^{1/3}$ below and $j_\nu \propto (\nu' / \nu_m')^{(1-p)/2}$ above $\nu'_m$, with the exponential cut-offs of individual electrons adding up to a power law slope. The observed global synchrotron spectrum consists of the combined emission of all regions in the blast wave and will have the same asymptotic shape. The exact shape of the local and global spectrum and the flux level at the peak depend on the amount of detail used in modeling synchrotron emission, and authors use either simple connected power laws (e.g. \cite{Wijers1997, Sari1998, Zhang2009}) or detailed expressions with smooth spectral transitions based on full integration of modified Bessel functions (e.g. \cite{Granot:2001ge, vanEerten2009, Leventis2012}).
Typically, the magnetic field required for synchrotron emission is assumed to be small scale, randomly oriented and generated at the shock front. The magnetic energy density $U_B$ is parametrized by linking it to the internal energy density according to $U_B \equiv B^2 / (8 \pi) \equiv \epsilon_B e$, with $\epsilon_B$ typically of the order 0.01. This results in magnetic fields of strength $B \sim 0.4 \, (\epsilon_B / 0.01)^{1/2} (n_{ext} / 1 \textrm{ cm}^{-3})^{1/2} (\gamma / 10)$ Gauss for relativistic blast waves, much larger than what can be obtained by shock-compression by a factor of $4 \gamma$ \cite{Gallant1999, Achterberg2001} of an ambient circumburst magnetic field with field strength on the order of $\mathrm{\mu G}$. In most cases, a shock-compressed ambient field is insufficient to explain the data, but some interesting exceptions exist \cite{Kumar2010, BarniolDuran2011}.
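As a quick numerical check (a sketch only, assuming the strong-shock jump conditions quoted earlier, so that $U_B = \epsilon_B e$ with $e = 4 \gamma (\gamma - 1) n_{ext} m_p c^2$):
\begin{verbatim}
# Downstream field strength B = sqrt(8 pi eps_B e) for fiducial values.
import math
c, m_p = 2.998e10, 1.673e-24
eps_B, n_ext, gamma = 0.01, 1.0, 10.0
e = 4.0 * gamma * (gamma - 1.0) * n_ext * m_p * c**2  # erg per cm^3
B = math.sqrt(8.0 * math.pi * eps_B * e)
print("B = %.2f Gauss" % B)                           # ~ 0.4 G
\end{verbatim}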
Ultimately, due to the complexity of particle acceleration and magnetic field generation at shocks, massive numerical computations of large groups of individual particles accelerating and interacting (``particle-in-cell'' or PIC simulations) are important to obtain a physical understanding of the magnetic fields and non-thermal populations underlying gamma-ray burst emission (e.g. \cite{Spitkovsky2008, Sironi2009}). These computations can then in principle be used to inform macrophysical parametrizations (e.g. $\epsilon_e$, $\epsilon_B$, $p$, $\xi_n$) and, in that way, eventually be compared to observational data. At the moment the spatial and temporal scales covered by the particle-in-cell simulations are unfortunately still limited by computer power and no convergence has been achieved for the emergent properties of the system.
\vspace{1\baselineskip}\vspace{-\parskip}
Strictly speaking, a power law distribution of particles injected at the shock front will not remain a power law distribution further downstream, mainly through the effect of radiative cooling. Even when $1 / \gamma_M$ initially starts out near zero, it will evolve quickly according to eq. \ref{kinetic_equation}, leading to an exponential drop in flux for a local particle population beyond $\nu'_M \equiv \nu'_e(\gamma_M)$. The cut-off $\gamma_M$ can in principle be determined by comparing the acceleration time scale to the radiative cooling scale and will typically lead to $\nu'_M$ of the order GeV (e.g. \cite{Norman1995, Blandford1987, Peer2013}). Numerically and in simplified analytical models, cooling is often dealt with by assuming that for the purpose of calculating electron cooling effects, a global steady state exists in the shocked plasma where above a certain electron Lorentz factor $\gamma_c$ the radiative loss term and the energy injection term due to shock acceleration are in equilibrium. This then leads to a steepening of the spectrum by $1/2$ beyond $\nu'_c \equiv \nu'_e(\gamma_c)$ when $\nu'_c > \nu'_m$ (``slow cooling'') and, in case $\nu'_c < \nu'_m$ (``fast cooling''), a spectral slope transition from $1/3$ to $-1/2$ across $\nu'_c$ and eventually to $-p/2$ beyond $\nu'_m$. The power law steepening is consistent with the emergent spectrum for local cooling, where all the exponential drops at locally different cut-off frequencies add up to a power law. The cooling break Lorentz factor $\gamma_c$ is obtained from a rough estimate where the cooling time is equated to the life time of the blast wave, leading to
\begin{equation}
\gamma_c = 6 \pi m_e c \gamma / (\sigma_T B^2 t).
\label{global_cooling_time_equation}
\end{equation}
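As a rough illustration using the fiducial numbers above: at $\gamma = 10$ (where $B \approx 0.4$ G and $t \approx R / c \approx 2 \times 10^7$ s), eq. \ref{global_cooling_time_equation} yields $\gamma_c \approx 3 \times 10^3$, comfortably above $\gamma_m \approx 5 \times 10^2$ for $\epsilon_e = 0.1$, $\xi_n = 1$ and $p = 2.5$, so the blast wave is then in the slow cooling regime.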
An alternative approach would be to solve eq. \ref{kinetic_equation} along with the hydrodynamic equations. This was done analytically for the BM solution in \cite{Granot:2001ge}, and numerically in e.g. \cite{Downes2002, Nakar2007, vanEerten2010transrelativistic, Uhm2013}. Locally calculated cooling will impact both the overall flux level and the sharpness of the cooling break \cite{Granot:2001ge, vanEerten2010, Uhm2013}. The resolution required to solve the cooling locally follows from considering
\begin{equation}
\Delta (R-r) \approx \Delta \gamma_e^{-1} \frac{\mathrm{d} (R-r)}{\mathrm{d} t} \frac{3 \gamma m_e c}{4 \sigma_T U_B},
\label{emission_region_size_equation}
\end{equation}
which can be derived from eq. \ref{kinetic_equation}, assuming the fluid conditions don't change across the hot electron region $\Delta (R-r)$. We want to resolve $\Delta \gamma_e^{-1}$ going from 0 to, say, the Lorentz factor associated with emission peaking at X-rays ($\nu \sim 5 \times 10^{17}$ Hz). When this is done for the shell model described previously and for typical afterglow values ($E_{iso} = 10^{53}$ erg, $n_{ext} = 1$ cm$^{-3}$, $\epsilon_B = 0.01$, $\epsilon_e = 0.1$), it is found that $\Delta (R-r) / \Delta R$ starts around 0.5 when $\gamma = 100$, decreases with $\Delta (R-r) / \Delta R \propto \nu^{-1/2} \gamma^{2/3}$ as the blast wave decelerates, plateaus at $\sim 5 \times 10^{-2}$ around $\beta \gamma \sim 1$ before decreasing again according to $\Delta (R-r) / \Delta R \propto \nu^{-1/2} \beta^{1/6}$. What this means is that although the size of the hot region is comparable to the blast wave width at high Lorentz factors, thus allowing for approximations like eq. \ref{global_cooling_time_equation}, this approximation gets progressively less accurate as the blast wave decelerates. It also means that it is very challenging to numerically model local cooling by rewriting eq. \ref{kinetic_equation} into an advection equation, given that the resolution requirement increases by a factor $\Delta R / \Delta (R-r)$ (but not impossible, see \cite{Downes2002, Nakar2007, vanEerten2010transrelativistic}; a Lagrangian approach is recommended in order to accurately detect the position of the shock front).
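Schematically, these scalings follow from eq. \ref{emission_region_size_equation} upon suppressing constants: in the relativistic stage $\mathrm{d}(R-r)/\mathrm{d}t \propto \gamma^{-2}$, $U_B \propto \gamma^2$ and, at fixed observer frequency, $\gamma_e \propto (\nu / \gamma B)^{1/2} \propto \nu^{1/2} \gamma^{-1}$, so that $\Delta (R-r) \propto \nu^{-1/2} \gamma^{-2}$; dividing by $\Delta R \propto R / \gamma^2$ and using $R \propto \gamma^{-2/3}$ for a decelerating BM blast wave then gives $\Delta (R-r) / \Delta R \propto \nu^{-1/2} \gamma^{2/3}$, and analogous bookkeeping in the Sedov-Taylor stage yields the $\nu^{-1/2} \beta^{1/6}$ behavior.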
Note that, analytically, the same light curve power law behavior follows whether it is derived assuming a finite emission region (i.e. eq. \ref{emission_region_size_equation}) plus a sharp emission cut-off or assuming emission from the full blast wave plus a power law change in spectrum: the differences between the two approaches will mostly be apparent during sudden transitions in the outflow, such as a jet break, the rise of a reverse shock in the case of massive ejecta, or a change in the nature of the circumburst medium.
With a quantitative model for the synchrotron emission including cooling, it is possible to check one key assumption mentioned previously: that of adiabatic expansion. A calculation of the total emitted power in synchrotron emission for our shell model, global cooling and typical afterglow values reveals this to be a safe assumption. When the blast wave $\beta \gamma$ drops to $10^{-2}$, the total energy loss is found to be about 2 percent of $E_{iso}$.
\vspace{1\baselineskip}\vspace{-\parskip}
At low (typically radio) frequencies, the blast wave becomes opaque due to synchrotron self-absorption (SSA). Like the synchrotron emission coefficient $j_\nu$, the SSA coefficient $a_\nu$ can be modeled at varying levels of detail. Analytical scaling models often simply consider the asymptotic limit where the emitting volume is replaced by an emitting outer surface \cite{Sari1998, Waxman1998}. Alternatively, an implementation of linear radiative transfer can be used \cite{vanEerten2010transrelativistic, Mimica2010, vanEerten2010, vanEerten2011chromaticbreaks} with either a simple power law approximation to $a_\nu$ or a more complete treatment with smooth transitions \cite{Leventis2012}, which can even include the effect of electron cooling on $a_\nu$ \cite{Granot:2001ge}.
A useful property of synchrotron spectra is that they too exhibit scale invariance between energies and between circumburst densities in their asymptotic spectral regimes \cite{vanEerten2012scaleinvariance}, even when computed numerically from two dimensional simulations of spreading trans-relativistic blast waves. Although perhaps less obvious than the invariance in dynamics, this invariance amounts to the same thing and works because in the different power law regimes, additional constants with dimension entering into the flux formulae (i.e. $m_p$, $m_e$, $\sigma_T$) can be identified and grouped together, leaving the remaining terms again fixed by dimensional analysis.
\vspace{1\baselineskip}\vspace{-\parskip}
It is not difficult to come up with physically plausible complications to the standard synchrotron radiation model. Synchrotron Self Compton (SSC) was already briefly mentioned above and can be expected to impact afterglows at gamma-rays and hard X-rays, especially for high blast wave Lorentz factors \cite{Sari2001, Petropoulou2009}. I already mentioned the pitfalls of using $\xi_N$ and $\epsilon_e$. In addition, the downstream evolution of $\epsilon_B$ depends on the nature of the magnetic field. A randomly oriented field with a fixed number of field lines through the surface of each fluid element will evolve such that $e_B \propto \rho^{4/3}$, meaning that $\epsilon_B$ remains fixed only for relativistic flows \cite{vanEerten2010transrelativistic}. A preferred direction for the field will further complicate matters, and is an issue best addressed through polarization measurements (reviewed elsewhere in these proceedings). Finally, of the standard radiation parameters, the behavior of $p$ is likely to be more complex than usually assumed. Model fits to various afterglow datasets yield a distribution of $p$ values arguably inconsistent with a single underlying universal value \cite{Curran2010}. This would indicate that $p$ is sensitive to the physical conditions at the front of the blast wave, and one would then naturally expect $p$ to evolve strongly over time within each burst as well, since the conditions at the blast wave shock front vary across a wide range during the evolution of each blast wave. Although there is theoretical support for $p$ evolution across the transrelativistic regime \cite{Keshet2005}, the sample studied in \cite{Curran2010} is mostly relativistic. Generally, Swift burst data shows no clear temporal trends or variability for the spectral index, although the error bars are usually large.
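To sketch where the scaling quoted above comes from: for an isotropically tangled field frozen into a fluid element of linear size $\ell \propto \rho^{-1/3}$, conservation of flux through the element surface gives $B \propto \ell^{-2} \propto \rho^{2/3}$, and hence $e_B = B^2 / 8 \pi \propto \rho^{4/3}$.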
\section{The jet nature of the outflow}
At some point the outflow will no longer follow radial lines but bend sideways, revealing the collimated nature of the blast wave. For a blast wave initially following the BM solution, the Lorentz factor of the shock, $\Gamma$, can be found to obey
\begin{equation}
\Gamma^2 = (17 - 4k) E_{iso} / (8 \pi \rho_{ref} R_{ref}^k t^{3-k} c^{5-k}),
\end{equation}
with the numerical constants following from radial integration over the BM lab frame energy density profile. A sound wave traveling along the shock front will have its radial component in the lab frame set by $\Gamma$ in order to keep up with the outward motion of the shock. Since its magnitude in the comoving frame is also known (i.e. $\beta_S = 1 / \sqrt{3}$), the transverse component of the sound speed in the lab frame can be calculated to be $\beta_{S,\theta}' = 1 / (2 \Gamma)$. Integrating $R \dot{\theta} = \beta_{S,\theta}'$ for a sound wave launched at $t=0$ from the jet edge at $\theta_0$ towards the tip then yields $\Gamma_j = (3 - k)^{-1} \theta_0^{-1}$ for the shock Lorentz factor at which the tip and the blast wave as a whole begin to decelerate and a qualitative change in the nature of the flow sets in.
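The integration behind this estimate can be sketched as follows. Using $R \approx c t$ and $\Gamma \propto t^{-(3-k)/2}$, which follows from the expression for $\Gamma^2$ above,
\[
\Delta \theta (t) = \int_0^t \frac{\mathrm{d} t'}{2 \Gamma(t') t'} = \frac{1}{2 \Gamma(t)} \cdot \frac{2}{3-k} = \frac{1}{(3-k) \Gamma(t)},
\]
so the sound wave has crossed the full half-opening angle, $\Delta \theta = \theta_0$, exactly when $\Gamma = (3-k)^{-1} \theta_0^{-1}$.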
In the limiting case of ultra-narrow and ultra-relativistic jets, this new stage can be shown analytically to be one where the Lorentz factor drops exponentially, while the opening angle $\theta_{max}$ widens exponentially \cite{Rhoads:1999wm, Gruzinov:2007ma} once $\theta_{max} \gg \theta_0$. This follows from the fact that a widening jet sweeps up more mass, leading to further deceleration, which increases sideways expansion in the lab frame, etc., leading to a runaway effect. In practice, this regime is not found to occur for jets with typical opening angles ($\theta_0 \sim 0.1$ rad, \cite{Frail2001}), since by the time $\theta_{max} \gg \theta_0$ the jet is no longer in the ultra-relativistic regime. Note that e.g. for $\theta_0 \sim 0.05$, the fluid Lorentz factor of the tip is $\gamma \approx 4.7$ at the \emph{onset} of deceleration, leaving no room for an intermediate stage where $\gamma \ll 1/ (3 \theta_0 \sqrt{2} )$ \emph{and} $\gamma \gg 1$ \emph{and} $\theta \gg \theta_0$.
The slow spreading of afterglow jets in practice has been confirmed numerically by various authors \cite{Granot:2001cw, Zhang2009, vanEerten2010, vanEerten2011chromaticbreaks, Wygoda:2011vu, DeColle:2011ca, vanEerten2012observationalimplications}. A phase of exponential expansion can be found \cite{MacFadyen2013} for jets with $\theta_0 \ll 0.04$, but these simulations require a special approach such as a boosted frame due to the requirement $\gamma \gg 1 / \theta_0$ for the initial conditions discussed previously. Advanced analytical models incorporate a smooth transition between the exponential and logarithmic stages of spreading \cite{Wygoda:2011vu, Granot:2011gg} (see also \cite{vanEerten2012observationalimplications, MacFadyen2013}).
The jet nature of the blast wave will become apparent to an observer in two ways, both leading to a steepening of the light curve. On the one hand, due to strong relativistic beaming, an observer originally only sees a small patch of the blast wave surface. Once the blast wave has decelerated sufficiently, and the relativistic beaming cones (with width $\theta \sim 1 / \gamma$) have widened sufficiently, this patch will have grown to include the edge of the blast wave and a lack of emission from beyond the edges will cause the observed flux to decrease more steeply. On the other hand, the decrease in beaming due to the additional deceleration caused by the spreading of the jet will also lead to a steeper decrease of the observed flux. Since jet spreading is not as extreme as originally thought, both effects contribute noticeably and the first effect is not overwhelmed by the second. A specific consequence of this is that the shape and onset of the jet break become different even for small changes in observer angle, even when still within $\theta_0$. As opposed to the second, dynamical, cause for the jet break, the onset and completion of the first effect depend on the angle between observer and outer edges (i.e. on $\theta_0 \pm \theta_{obs}$) rather than on $\theta_0$ alone. This implies that jet breaks are stretched out over time and often do not become fully clear until sufficient time has passed, which might be beyond the capabilities of Swift to observe. This provides a natural explanation \cite{vanEerten2010} for the lack of clear jet breaks detected by Swift \cite{Kocevski2008, Racusin2009}.
\section{Comparison to data}
Ultimately, we wish to compare between model and data. A number of approaches are possible. One can fit basic functions, such as (smoothly broken) power laws to the various bands and subsequently interpret these. Or one can directly fit analytical or simulation-derived synthetic light curves. The latter has the advantage of potentially getting the most out of the data, but the number of free parameters of the standard afterglow model ($E_{iso}$, $\theta_0$, $\rho_0$, $p$, $\epsilon_e$, $\epsilon_B$, $\xi_N$, observer angle $\theta_{obs}$) can be problematic. Full broadband afterglow datasets covering the full range from radio to X-rays (and thus all spectrum regimes) are very rare. Solutions are to either add constraints to the model (e.g. $\epsilon_B \equiv \epsilon_e$ or $\xi_N \equiv 1$) or carefully study the probability distributions of the various fit parameters in order to determine what is and what isn't constrained. A Bayesian approach is well suited to this task (see e.g. the contributions by B.B. Zhang and by Ryan elsewhere in these proceedings), and would, for example, naturally bring out the extent to which the degeneracy between $\xi_N$ and other model parameters \cite{Eichler2005, Leventis2012} is broken by the strict upper limit of 1 on $\xi_N$.
\bigskip \bigskip \begin{center} {\large \textbf{Acknowledgments}} \end{center}
This research was supported in part by NASA through grant NNX10AF62G issued
through the Astrophysics Theory Program and by the NSF through grant AST-
1009863 and by the Chandra grant TM3-14005X.
\section{Introduction}
\label{sec1}
Continuous integrable systems are nonlinear differential equations that can be solved analytically. For example, integrability of ordinary differential equations (ODEs) is judged by the Arnold-Liouville theorem, which requires integrable
ODEs to have a sufficient number of first integrals (i.e., conserved quantities, constants of motion) \cite{Arnold}.
There is little ambiguity in the notion of integrability in the continuous case.
On the other hand, in the case of discrete equations, a universally accepted definition of integrability does not exist.
One of the most widely used criteria for integrability might be the `singularity confinement test' (SC test) introduced in \cite{SC} by B. Grammaticos, A. Ramani and V. Papageorgiou, as a discrete analogue of the Painlev\'{e} property \cite{Conte}.
According to the SC test, a difference equation is considered to be integrable, if every singularity of the equation is cancelled out to give a finite value after a finite number of iterations of the mapping.
The SC test has been successfully applied to several types of ordinary difference equations, in particular the non-autonomous generalizations of the QRT mappings \cite{QRT}, to produce discrete versions of the Painlev\'{e} equations \cite{RGH}.
On the other hand, it is usually not easy to conduct the SC test to partial difference equations.
Indeed, there is a result on the SC test of partial difference equations in their bilinear forms \cite{RGS}, where the Hirota-Miwa equation and its reductions are studied. Also, the singularity confinement of the discrete KdV equation in its nonlinear form is discovered in \cite{SC}, where two patterns of confining singularities
on the lattice are presented.
However, in both cases, not all the patterns of singularities have been investigated.
One of the most difficult points in conducting the SC test for partial difference equations is that it is not practical to investigate whether all the patterns of singularities are eliminated after a finite number of iterations of the given equation, because partial difference equations have an infinite dimensional (or, depending on the size of the system, high dimensional) space of initial conditions.
To overcome this problem, we have introduced in our previous papers a method to reformulate the SC test in terms of the algebraic relations of the general terms of the equations \cite{dKdVSC,dKdVSC2}.
In these papers, we have introduced the notion of `co-primeness', which can be used as a new integrability criterion for both ordinary and partial difference equations, and have proved the co-primeness theorems for a type of QRT mappings and the nonlinear form of the discrete KdV equation.
In these previous works, we treated those equations under semi-infinite boundary conditions. In the proof of the co-primeness, we utilized the fact that the bilinear forms of those equations have the Laurent property, which has already been established in relation to the notion of cluster algebras \cite{FZ,FZ2}.
In the case of Dirichlet and periodic boundary conditions, however, the Laurent property of integrable equations has not been clarified. In fact, as we shall see below in corollary \ref{periodthm2}, the Laurent property does not hold in its naive form for the periodic boundary condition.
The aim of this paper is to examine whether co-primeness theorems similar to those in our previous works are satisfied
for integrable equations with boundary conditions other than the semi-infinite one.
For this purpose, we consider the celebrated discrete Toda equation under three types of boundary conditions: i.e., semi-infinite, molecule, and periodic.
We shall prove that the co-primeness theorem does hold for these boundary conditions.
The Toda lattice equation was introduced by M. Toda as a mechanical model of a chain of particles under nonlinear interaction forces \cite{Toda1}.
It is an important example of integrable systems with multi-soliton solutions, and it reduces to the KdV equation under an appropriate continuum limit \cite{Toda2}.
The Toda equation has numerous applications to physical phenomena, such as wave propagation on two-dimensional water surfaces and electric currents in circuits.
Later, a time discretization of the Toda equation was studied and shown to be completely integrable \cite{Suris1, Suris2}.
The discrete Toda equation consists of the following coupled equations:
\begin{eqnarray}
I_n^{t+1} &=& I_n^t+V_n^t-V_{n-1}^{t+1},\label{dtodaIV1} \\
V_n^{t+1} &=& \frac{I_{n+1}^t V_n^t}{I_n^{t+1}}, \label{dtodaIV2}
\end{eqnarray}
with suitable boundary conditions. For example, in the case of semi-infinite boundary condition, we take $V_0^t=0$ for $t\ge 0$.
In the case of molecule boundary condition, we take $V_0^t=V_{N+1}^t=0$ for $t\ge 0$,
where $N$ is the size of the system.
In the case of periodic boundary condition, we take
$V_n^t=V_{N+n}^t$, $I_n^t=I_{N+n}^t$ for $t\ge 0$ and $n\ge 0$.
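As an illustration of how equations \eqref{dtodaIV1} and \eqref{dtodaIV2} are iterated in practice, the following minimal sketch (in Python, with arbitrary positive initial data not taken from the text) performs the time evolution for the molecule boundary condition; since $V_0^t=0$, the update can be swept from left to right:
\begin{verbatim}
# Minimal sketch: iterate the discrete Toda molecule equation.
# I has one more entry than V; the boundary V's vanish.
def toda_molecule_step(I, V):
    N = len(I)
    I_new, V_new = [0.0] * N, [0.0] * (N - 1)
    V_left = 0.0                         # V at the left boundary is 0
    for n in range(N):
        Vn = V[n] if n < N - 1 else 0.0  # V at the right boundary is 0
        I_new[n] = I[n] + Vn - V_left    # additive relation for I
        if n < N - 1:
            V_new[n] = I[n + 1] * V[n] / I_new[n]  # multiplicative one
            V_left = V_new[n]
    return I_new, V_new

I, V = [1.0, 2.0, 3.0], [0.5, 0.25]
for _ in range(5):
    I, V = toda_molecule_step(I, V)
print(I, V)  # sum(I)+sum(V) is conserved (sum the additive relation)
\end{verbatim}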
\begin{figure}
\centering
\includegraphics[width=11cm,bb=70 200 750 550]{semiinf1-3.eps}
\caption{Initial values of discrete Toda equation with semi-infinite boundary condition, where $x_i:=\tau_i^0$, $y_j:=\tau_j^1$.}
\label{figure1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=11cm,bb=70 280 750 550]{molecule1-3.eps}
\caption{Initial values of discrete Toda equation with molecule boundary condition, where $x_i:=\tau_i^0$, $y_j:=\tau_j^1$.}
\label{figure2}
\end{figure}
The bilinear form of the discrete Toda equation is as follows:
\begin{equation}
\tau_n^{t+1} \tau_n^{t-1} = \tau_{n-1}^{t+1} \tau_{n+1}^{t-1} + (\tau_n^t)^2. \label{dtoda}
\end{equation}
The boundary condition for the equation \eqref{dtoda} is determined in accordance with that of equations \eqref{dtodaIV1}, \eqref{dtodaIV2}.
The following proposition \ref{prop1} determines the correspondence between two sets of equations for the molecule boundary condition. The correspondence for the semi-infinite boundary can be obtained with the limit $N\to \infty$. The case of periodic boundary condition is discussed later in section \ref{sec2}.
\begin{Proposition} \label{prop1}
If we are given the solution $\tau_n^t$ of \eqref{dtoda} with conditions
$\tau_{-1}^t=\tau_{N+2}^t=0$ $(t\ge 0)$,
then the following set of variables $I_n^t$ and $V_n^t$ defined by \eqref{tauIV} satisfy the discrete Toda molecule equation (i.e., \eqref{dtodaIV1} and \eqref{dtodaIV2} with conditions $V_0^t=V_{N+1}^t=0$ for all $t\ge 0$):
\begin{equation}\label{tauIV}
I_n^t=\frac{\tau_{n-1}^t \tau_n^{t+1}}{\tau_n^t \tau_{n-1}^{t+1}},\;\; V_n^t=\frac{\tau_{n+1}^t \tau_{n-1}^{t+1}}{\tau_n^t \tau_n^{t+1}}.
\end{equation}
\end{Proposition}
See figures \ref{figure1} and \ref{figure2} for the configurations of initial values.
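Half of the verification of proposition \ref{prop1} is immediate: the multiplicative relation \eqref{dtodaIV2} follows from the substitution \eqref{tauIV} alone, without using \eqref{dtoda}, since
\[
I_n^{t+1} V_n^{t+1} = \frac{\tau_{n-1}^{t+1} \tau_n^{t+2}}{\tau_n^{t+1} \tau_{n-1}^{t+2}}\cdot \frac{\tau_{n+1}^{t+1} \tau_{n-1}^{t+2}}{\tau_n^{t+1} \tau_n^{t+2}} = \frac{\tau_{n-1}^{t+1} \tau_{n+1}^{t+1}}{(\tau_n^{t+1})^2} = \frac{\tau_{n}^{t} \tau_{n+1}^{t+1}}{\tau_{n+1}^{t} \tau_{n}^{t+1}}\cdot \frac{\tau_{n+1}^{t} \tau_{n-1}^{t+1}}{\tau_n^{t} \tau_n^{t+1}} = I_{n+1}^t V_n^t.
\]
The additive relation \eqref{dtodaIV1} is then checked with the help of \eqref{dtoda}.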
Let us fix the definition of co-primeness of Laurent polynomials and rational functions here.
\begin{Definition} \label{laurentcoprime}
Two Laurent polynomials $f,g$ are `co-prime' in the ring $R:=\mathbb{Z}[a_i^{\pm}; 1\le i\le n]$ if the following condition is satisfied:
If we have decompositions $f=hf_2$, $g=hg_2$ in $R$, then $h$ must be a unit in $R$ (i.e., a monomial in $\{a_i\}_{i=1}^n$ with coefficient $\pm 1$).
\end{Definition}
\begin{Definition} \label{rationalcoprime}
Two rational functions $f$ and $g$ are `co-prime' in the field $F:=\mathbb{C}(a_i; 1\le i\le n)$ if the following condition is satisfied:
Let us express $f,g$ as $f={F_1}/{F_2}$ and $g={G_1}/{G_2}$,
where $F_1,F_2,G_1,G_2\in \mathbb{C}[a_i^{\pm}; 1\le i\le n]$, and $(F_1,F_2)$ and $(G_1,G_2)$ are coprime pairs of polynomials.
Then every pair of polynomials $(F_i,G_j)$ $(i,j=1,2)$ is coprime in the sense of definition \ref{laurentcoprime}. (No common factor other than monomial ones is allowed.)
\end{Definition}
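To illustrate definition \ref{rationalcoprime}: in $\mathbb{C}(a_1,a_2)$, the functions $f=(a_1+a_2)/a_1$ and $g=a_1(a_1+a_2)$ are not co-prime, since $F_1=a_1+a_2$ and $G_1=a_1(a_1+a_2)$ share the non-monomial factor $a_1+a_2$ (the common factor $a_1$ of $F_2$ and $G_1$ alone would be harmless, being a monomial); on the other hand, $f$ and $a_1+2a_2$ are co-prime.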
\begin{Lemma}[\cite{dKdVSC2}] \label{locallemma}
Let $\{p_1,p_2,\cdots,p_m\}$ and $\{q_1,q_2,\cdots ,q_m\}$ be two sets of independent variables with the following properties:
\begin{eqnarray}
p_j&\in&\mathbb{Z}\left[ q_1^{\pm}, q_2^{\pm},\cdots ,q_m^{\pm}\right], \label{p}\\
q_j&\in&\mathbb{Z}\left[ p_1^{\pm}, p_2^{\pm},\cdots ,p_m^{\pm}\right], \label{q}\\
q_j&&\mbox{is irreducible as an element of}\ \mathbb{Z}\left[ p_1^{\pm}, p_2^{\pm},\cdots ,p_m^{\pm}\right], \notag
\end{eqnarray}
for $j=1,2,\cdots, m$.
Let us take an irreducible Laurent polynomial
\[
f(p_1,\cdots,p_m)\in \mathbb{Z}\left[ p_1^{\pm}, p_2^{\pm},\cdots ,p_m^{\pm}\right],
\]
and another Laurent polynomial
\[
g(q_1,\cdots, q_m) \in \mathbb{Z}\left[ q_1^{\pm}, q_2^{\pm},\cdots ,q_m^{\pm}\right],
\]
which satisfies $f(p_1,\cdots,p_m)=g(q_1\cdots, q_m)$.
In these settings, the function $g$ is decomposed as
\[
g(q_1,\cdots, q_m)=p_1^{r_1}p_2^{r_2}\cdots p_m^{r_m}\cdot \tilde{g}(q_1,\cdots,q_m),
\]
where $r_1,r_2, \cdots, r_m\in\mathbb{Z}$ and $\tilde{g}(q_1,\cdots,q_m)$ is irreducible in $\mathbb{Z} \left[ q_1^{\pm}, q_2^{\pm},\cdots ,q_m^{\pm}\right]$.
\end{Lemma}
The proof can be found in \cite{dKdVSC2}.
\section{Co-prime property of the discrete Toda} \label{sec2}
\subsection{Semi-infinite boundary}
We take the initial values as
\begin{equation} \label{semiinfcond}
\tau_0^t=1\ (t=0,1,2,\cdots),\; \tau_n^0=x_n,\ \tau_n^1=y_n\ (n=1,2,\cdots).
\end{equation}
Note that taking $\tau_{-1}^t=0$ $(t\ge 0)$ together with $\tau_0^0=\tau_0^1=1$ is equivalent to imposing $\tau_0^t=1$ $(t\ge 0)$, since \eqref{dtoda} at $n=0$ then reduces to $\tau_0^{t+1}\tau_0^{t-1}=(\tau_0^t)^2$.
\begin{Theorem} \label{thmsemiinf}
Every term of the discrete Toda equation \eqref{dtoda} is a Laurent polynomial of the initial variables:
\[
\tau_n^t\in\mathbb{Z}[x_1^{\pm}, x_2^{\pm},\cdots, y_1^{\pm}, y_2^{\pm},\cdots],
\]
where $\tau_n^0=x_n$ and $\tau_n^1=y_n$.
Moreover, the term $\tau_n^t$ is an irreducible Laurent polynomial,
and two distinct terms are coprime as Laurent polynomials.
\end{Theorem}
\textbf{Proof of theorem \ref{thmsemiinf}}\;\;
Let us define the ring of Laurent polynomials as
\[
R_{m,n}:=\mathbb{Z}[x_1^{\pm},x_2^{\pm},\cdots,x_m^{\pm};y_1^{\pm},y_2^{\pm},\cdots,y_n^{\pm}],
\]
and write $R:=\lim_{m,n\to \infty} R_{m,n}$.
The subset of irreducible Laurent polynomials is
\[
R_{irr}:=\left\{f\in R| f \ \mbox{is an irreducible element of} \ R \right\}.
\]
The following lemma is immediate:
\begin{Lemma} \label{fgirred}
For $f=a x_m^{\pm 1}+b$, $a,b\in R_{m-1,n}\setminus \{0\}$,
$f$ is irreducible in $R$ if $a$ and $b$ are coprime in $R$.
For $g= cy_n^{\pm 1}+d$, $c,d\in R_{m,n-1}\setminus \{0\}$,
$g$ is irreducible in $R$ if $c$ and $d$ are coprime in $R$.
\end{Lemma}
\begin{Lemma} \label{c3a3}
Let us rewrite
\begin{align*}
&(\tau_{n-2}^{t+2},\tau_{n-1}^{t+2},\tau_n^{t+2},\tau_{n-1}^{t+1},\tau_n^{t+1},\tau_{n-1}^{t},\tau_n^t,\tau_{n+1}^t)=(a_1,a_2,a_3,b_2,b_3,c_2,c_3,c_4),\\
&(\tau_n^{t-1},\tau_{n+1}^{t-1},\tau_n^{t-2},\tau_{n+1}^{t-2},\tau_{n+2}^{t-2})=(d_3,d_4,e_3,e_4,e_5).
\end{align*}
Then we have
\[
c_3 a_3=c_3\frac{(a_1c_3e_5+ a_1 d_4^2+ b_2^2 e_5)d_3^2+(c_3^3+2b_2c_3d_4)c_2e_4+b_2^2d_4^2e_3}{c_2 d_3^2 e_4}.
\]
See figure \ref{figure3} for the configuration of these values.
\end{Lemma}
\textbf{Proof of lemma \ref{c3a3}}\;\;
Equation \eqref{dtoda} shows that $a_3 c_3=b_3^2+a_2 c_4$.
We substitute other relations $a_2={(b_2^2+a_1 c_3)}/{c_2}$ and $c_4={(d_4^2+c_3e_5)}/{e_4}$ to the equality above. Direct calculation shows that
\[
c_3a_3=\frac{1}{c_2d_3^2e_4}\{c_3(a_1d_3^2d_4^2+c_2c_3^3e_4+2b_2c_2c_3d_4e_4+b_2^2d_3^2e_5+a_1c_3d_3^2e_5)+(b_2^2 d_4^2)(d_3^2+c_2e_4)\}.
\]
We then use $d_3^2+c_2e_4=c_3e_3$ to obtain the desired result.
\hfill\hbox{$\Box$}\vspace{10pt}\break
\begin{figure}
\centering
\includegraphics[width=7cm,bb=100 600 300 740]{c3a3.eps}
\caption{Configuration of the variables in lemma \ref{c3a3}.}
\label{figure3}
\end{figure}
Using these lemmas we prove theorem \ref{thmsemiinf}, which is restated as follows: ``We have $\tau_n^t\in R_{irr}$ for every $t,n\ge 0$, and
two distinct terms are coprime.''
We prove this by induction with respect to $t$.
\paragraph{The case of $t=2$:}
Let us rewrite $z_n:=\tau_n^2$ for simplicity.
We prove that $z_n\in R_{n+1,n}$ and that $z_n$ is an irreducible linear function with respect to $x_{n+1}$.
First, $z_1={(y_1^2+x_2)}/{x_1}\in R_{2,1}$ is trivially irreducible
(or we can use lemma \ref{fgirred}).
Since $z_1$ is not a monomial, it is coprime with $\tau_n^1 (n\ge 1)$ and $\tau_n^2 (n\ge 2)$. Next
\[
z_n=\frac{1}{x_n}(z_{n-1}x_{n+1} + y_n^2)
\]
for $n \ge 2$ tells us inductively that $z_n\in R_{n+1,n}$ and that $z_n$ is a linear function of $x_{n+1}$. Moreover $z_n$ is not a monomial because $y_n\neq 0$.
Therefore, from lemma \ref{fgirred}, we obtain inductively that $z_n$ is an irreducible Laurent polynomial.
We also have that $z_n$ and $z_m$ with $n \neq m$ are coprime,
since both $z_n$ and $z_m$ are irreducible and each one is linear with respect to $x_{n+1}$ (resp. $x_{m+1}$). It is clear that $z_n$ is coprime with $x_m$ and $y_k$ for all $n,m,k$.
\paragraph{The case of $t=3$:}
We can prove the following relation by induction:
\begin{equation} \label{yzu}
\tau_n^{t+2}=\tau_{n+1}^t \sum_{k=0}^n \frac{(\tau_k^{t+1})^2}{\tau_k^t \tau_{k+1}^{t}}.
\end{equation}
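(For completeness, the induction runs as follows: for $n=0$ both sides of \eqref{yzu} equal $(\tau_0^{t+1})^2/\tau_0^t=1$ by \eqref{semiinfcond}, and the difference of the relations \eqref{yzu} for $n$ and $n-1$ reads
\[
\tau_n^{t+2}-\frac{\tau_{n+1}^t}{\tau_n^t}\,\tau_{n-1}^{t+2}
=\tau_{n+1}^t\,\frac{(\tau_n^{t+1})^2}{\tau_n^t\,\tau_{n+1}^t}
=\frac{(\tau_n^{t+1})^2}{\tau_n^t},
\]
which is \eqref{dtoda} with $t$ replaced by $t+1$, divided by $\tau_n^t$.)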
Let us rewrite $u_n:=\tau_n^3$.
By taking $t=1$ in \eqref{yzu}, we have
\[
u_n=y_{n+1}\sum_{k=0}^{n} \frac{(z_k)^2}{y_k y_{k+1}}\in R.
\]
In particular, we have $u_1={(y_2+z_1^2)}/{y_1}$. Since $z_1\in R_{2,1}$, from lemma \ref{fgirred}, the term $u_1$ is irreducible in $R$.
As $z_k$ is irreducible and is not linear in $y_2$, $z_k$ is coprime with $u_1$ for all $k$.
We next prove by induction that
\[
u_n=\frac{u_{n-1} y_{n+1}+z_n^2}{y_n}
\]
is irreducible and coprime with the other elements ($\tau_n^t;$ $t=0,1,2$). Suppose that $u_{n-1}$ is irreducible and coprime with $z_n$; then, using lemma \ref{fgirred}, we conclude that $u_n$ is irreducible.
Since neither $z_j (j\ge 1)$ nor $u_k (k\le n-1)$ contains $y_{n+1}$,
while $u_n$ is linear in $y_{n+1}$,
$u_n$ is coprime with $z_j$ and $u_k$.
The proof is finished for $t=3$.
\paragraph{The case of $t\ge 4$:}
Let us define a region $\mathcal{D}_k$ in $(n,t)$-plane as
\[
\mathcal{D}_k=\{(n,t)\, |\, 1\le n \le k,\, 0\le t\le 2k-2n+1 \},
\]
where $k=1,2,\cdots$,
and prove that theorem \ref{thmsemiinf} is true in the region $\mathcal{D}_k$ by induction.
The case of $k=1$ is trivial because $\mathcal{D}_1=\{(1,0),(1,1)\}$.
The case of $k=2$ is true from the previous two paragraphs for $t=2,3$,
since $(n,t)\in \mathcal{D}_2$ always satisfies $t\le 3$.
Let us assume that $\tau_n^t$ is irreducible for $(n,t)\in \mathcal{D}_n$, and prove that $\tau_n^t$ is irreducible for $(n,t)\in \mathcal{D}_{n+1}$.
Let us define the set $\mathcal{I}_{n+1}=\mathcal{D}_{n+1}\setminus \mathcal{D}_n$ for $n\ge 1$.
We rewrite some elements in $\mathcal{I}_{n}\cup \mathcal{I}_{n+1}$ for simplicity as
\begin{align*}
&A_0=x_{n+1}, B_0=x_n, C_0=0, A_1=z_n, B_1=z_{n-1}, C_1=y_n,\\
& A_m=\tau_{\bar{m}}^{2m}, B_m=\tau_{\bar{m}-1}^{2m}, C_m=\tau_{\bar{m}}^{2m-1},
\end{align*}
where $1\le m\le n$ and $\bar{m}:=n+1-m$.
\begin{figure}
\centering
\includegraphics[width=6cm,bb=200 220 400 400]{DnIn1.eps}
\caption{Regions $\mathcal{D}_n$ and $\mathcal{I}_n$. The region $\mathcal{D}_n$ is enclosed by gray lines in the left hand side of the figure. The region $\mathcal{I}_{n+1}$ consists of a collection of boxes over $\mathcal{D}_n$.}
\label{figure4}
\end{figure}
See figure \ref{figure4}. Then from equation \eqref{dtoda}, we have
\begin{equation}
A_m=\frac{1}{B_{m-1}}\left(B_m A_{m-1}+(C_{m})^2 \right)\;\; (m=1,2,\cdots, n). \label{Am}
\end{equation}
\begin{Lemma} \label{AmR}
We have $A_m \in R$.
\end{Lemma}
\textbf{Proof of lemma \ref{AmR}}\;\;
We show by induction.
Equation \eqref{Am} is equivalent to $a_3 c_3=a_2c_4+b_3^2$ with the notation in lemma \ref{c3a3}. Using lemma \ref{c3a3}, we have
\[
a_3 c_3=\frac{c_3}{c_2 d_3^2 e_4}\cdot P \in R,
\]
where $P$ is a polynomial term as in lemma \ref{c3a3}.
From the induction hypothesis that every pair of two terms is coprime in $\mathcal{D}_n$, the term $c_3=B_{m-1}$ in $\mathcal{D}_n$ is coprime with $c_2,d_3,e_4$. Therefore $P$ has to be divisible by $c_2 d_3^2 e_4$ in $R$. Thus $a_3\in R$.
\hfill\hbox{$\Box$}\vspace{10pt}\break
Next we prove $A_m\in R_{irr}$. We use the following lemma \ref{AmRirred}, which can be proved inductively from \eqref{dtoda}:
\begin{Lemma} \label{AmRirred}
The term $A_m$ is a linear equation with respect to $x_{n+1}$.
When we write $A_m=\alpha_m x_{n+1} + \beta_m$, we have $\alpha_m, \beta_m \in R_{n,n}$ and $\alpha_m=B_m/x_n$.
\end{Lemma}
From the induction hypothesis, $B_m\in R_{irr}$. From the expression $A_m=\alpha_m x_{n+1}+\beta_m$ in lemma \ref{AmRirred}, we have from lemma \ref{fgirred} that `$A_m\in R_{irr}$ if $\beta_m$ does not have $B_m$ as one of its factors'. We prove this by contradiction.
Let us suppose that $\beta_m$ has the factor $B_m$. Then $B_m$ divides $A_m$. Equation \eqref{Am} is equivalent to
\[
A_m B_{m-1}=B_m A_{m-1}+ C_m^2,
\]
which indicates that $C_m^2$ has the factor $B_m$.
This contradicts the induction hypothesis that every pair of terms is coprime in $\mathcal{D}_n$. Therefore $A_m\in R_{irr}$.
The irreducibility of $\tau_{\bar{m}}^{2m+1}$ in $\mathcal{I}_{n+1}$ (the element $\tau_{\bar{m}}^{2m+1}$ is just above $A_m$ in $n$-$t$ plane) is proved in the same manner.
We also have that $A_m$ and $A_{m'}$ are coprime if $m\neq m'$, since the coefficient of $x_{n+1}$ in $A_m$ is $B_m/x_n$ (resp. $B_{m'}/x_n$ in $A_{m'}$), and $B_m$ and $B_{m'}$ are coprime. Next, since the elements $\tau_l^s$ in $\mathcal{D}_n$ do not contain $x_{n+1}$, we have that $\tau_l^s$ and $A_m$ are coprime. Thus we have proved the irreducibility and co-primeness for the elements in $\mathcal{I}_{n+1}$, and therefore in $\mathcal{D}_{n+1}$.
\hfill\hbox{$\Box$}\vspace{10pt}\break
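Theorem \ref{thmsemiinf} can also be checked symbolically on a finite part of the lattice. The following minimal sketch (in Python with the sympy library; the size $M$ of the initial data is arbitrary) verifies the Laurent property for all computed terms and, as a sample, the co-primeness of $\tau_1^2$ and $\tau_2^2$:
\begin{verbatim}
# Minimal sketch: check the Laurent property of the discrete Toda
# equation (semi-infinite boundary) for small n, t with sympy.
import sympy as sp

M = 5                                    # number of initial columns
x = sp.symbols(f'x1:{M + 1}')
y = sp.symbols(f'y1:{M + 1}')
tau = {(0, t): sp.Integer(1) for t in range(M + 1)}
for n in range(1, M + 1):
    tau[(n, 0)], tau[(n, 1)] = x[n - 1], y[n - 1]
for t in range(2, M + 1):
    for n in range(1, M + 2 - t):
        tau[(n, t)] = sp.cancel(
            (tau[(n - 1, t)] * tau[(n + 1, t - 2)]
             + tau[(n, t - 1)] ** 2) / tau[(n, t - 2)])

for f in tau.values():
    den = sp.fraction(sp.together(f))[1]
    assert sp.Poly(den, *x, *y).is_monomial    # Laurent property

n1 = sp.fraction(sp.together(tau[(1, 2)]))[0]
n2 = sp.fraction(sp.together(tau[(2, 2)]))[0]
assert sp.gcd(n1, n2) == 1                     # co-primeness (sample)
\end{verbatim}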
The next proposition states that the same result is true even for the specialized initial condition $\tau_1^0=x_1=1$.
\begin{Proposition} \label{thmsemiinf1}
Every term of the discrete Toda equation \eqref{dtoda} is a Laurent polynomial of the initial variables:
\[
\tau_n^t\in\mathbb{Z}[x_2^{\pm}, x_3^{\pm}, \cdots, y_1^{\pm}, y_2^{\pm},\cdots],
\]
where $\tau_1^0=1$, $\tau_n^0=x_n$ $(n\ge 2)$ and $\tau_n^1=y_n$ $(n\ge 1)$.
Moreover, the term $\tau_n^t$ is an irreducible Laurent polynomial.
\end{Proposition}
\textbf{Proof}\;\;
The Laurent property
\[
\tau_n^t\in\mathbb{Z}[x_2^{\pm}, x_3^{\pm}, \cdots, y_1^{\pm}, y_2^{\pm},\cdots],
\]
is trivially obtained by substituting $\tau_1^0=1$ in theorem \ref{thmsemiinf}.
We now prove the irreducibility.
We consider the transformation $\tau_n^t=(x_1)^n \sigma_n^t$.
If $\tau_n^t$ satisfies the discrete Toda equation \eqref{dtoda},
the new function $\sigma_n^t$ satisfies an equation of the same form:
\[
\sigma_n^{t+1} \sigma_n^{t-1} = \sigma_{n-1}^{t+1} \sigma_{n+1}^{t-1} + (\sigma_n^t)^2.
\]
The function $\sigma_n^t$ is obtained by developing equation \eqref{dtoda} from the initial values $\tau_1^0=1$, $\tau_n^0={x_n}/{(x_1)^n}\, (n\ge 2)$ and $\tau_n^1={y_n}/{(x_1)^n}\, (n\ge 1)$, and therefore satisfies
$\sigma_n^t=(x_1)^{-n} \tau_n^t$ for all $n,t$.
The irreducibility and co-primeness are preserved under the transformation
$\sigma_n^t=(x_1)^{-n} \tau_n^t$, since it is only a multiplication by a monomial.
\hfill\hbox{$\Box$}\vspace{10pt}\break
\begin{Theorem} \label{todaIVsemiinfthm}
The solution $I_n^t$, $V_n^t$ of the discrete semi-infinite Toda equation
(\eqref{dtodaIV1} and \eqref{dtodaIV2} with $V_0^t=0 (t\ge 0)$) satisfies the following `co-prime' property:
Let us define the set $\mathcal{D}=\{I_n^t\}\cup\{V_n^t\}$. We denote by $D_n^t$ an element in $\mathcal{D}$ with scripts of time $t$ and position $n$.
Two distinct elements $D_n^t$ and $D_m^s$ in the set $\mathcal{D}$
do not have common factors other than monomials of the initial variables, on condition that $|n-m|\ge 3$ or $|t-s|\ge 2$.
\end{Theorem}
\textbf{Proof}\;\;
By equation \eqref{tauIV}, the correspondence of initial values between the bilinear and the nonlinear discrete Toda equation is as follows:
\begin{align*}
&I_1^0=y_1, V_1^0=\frac{x_2}{y_1}, I_2^0=\frac{y_2}{x_2 y_1}, V_2^0=\frac{x_3 y_1}{x_2 y_2}, I_3^0=\frac{x_2 y_3}{x_3 y_2}, V_3^0=\frac{x_4 y_2}{x_3 y_3},\\
&\cdots, V_{N-1}^0=\frac{x_N y_{N-2}}{x_{N-1} y_{N-1}}, I_N^0=\frac{x_{N-1} y_N}{x_N y_{N-1}},\cdots.
\end{align*}
The co-primeness in terms of $\tau_n^t$ proved in theorem \ref{thmsemiinf} and proposition \ref{thmsemiinf1} is transformed into the `co-primeness' of $I_n^t,V_n^t$ by equation \eqref{tauIV}.
For example, $I_n^t$ and $I_m^s$ share a certain $\tau_l^u$ in their numerators or denominators, if and only if
$|n-m|\le 1$ and $|t-s|\le 1$. Therefore $I_n^t$ and $I_m^s$ are coprime as rational functions (cf. definition \ref{rationalcoprime}), if and only if
\[
|n-m|\ge 2,\ \mbox{or},\ |t-s|\ge 2.
\]
In the same manner, $V_n^t$ and $V_m^s$ are coprime if and only if
\[
(m-n,s-t)\neq (\pm 1,0),(0,\pm 1),\pm(1,-1),\pm(2,-1).
\]
We also have that $I_n^t$ and $V_m^s$ are coprime if and only if
\[
(m-n,t-s)\neq (0,0),(\pm 1,0),(0,\pm 1),\pm(1,-1),(-1,-1),(-2,0),(-2,1).
\]
All these three conditions are satisfied if $|n-m|\ge 3$ or $|t-s|\ge 2$.
\hfill\hbox{$\Box$}\vspace{10pt}\break
\subsection{Molecule boundary}
We impose the molecule boundary condition on the equation \eqref{dtoda} as
\[
\tau_{N+2}^t=0\ (t=0,1,2,\cdots),
\]
in addition to the conditions
\begin{equation} \label{moleculeboundary}
\tau_0^t=1\ (t\ge 0),\; \tau_n^0=x_n,\ \tau_n^1=y_n\ (1\le n\le N+1),
\end{equation}
where $N(\ge 1)$ is the system size of the discrete Toda molecule equation.
We study the irreducibility and co-primeness under this condition, and prove that statements very similar to those of theorems \ref{thmsemiinf} and \ref{todaIVsemiinfthm} hold.
\begin{Theorem} \label{thmmolecule}
Every term of the discrete molecule Toda equation \eqref{dtoda} with $\tau_{N+2}^t=0$ $(t=0,1,2,\cdots)$, and \eqref{moleculeboundary} is a Laurent polynomial of the initial variables:
\[
\tau_n^t\in R:=\mathbb{Z}[x_1^{\pm}, x_2^{\pm},\cdots, x_{N+1}^{\pm}, y_1^{\pm}, y_2^{\pm},\cdots, y_{N+1}^{\pm}].
\]
Moreover, the term $\tau_n^t$ is an irreducible Laurent polynomial.
\end{Theorem}
\textbf{Proof}\;\;
Define the set of irreducible Laurent polynomials as
\[
R_{irr}:=\{f\in R\ |\ f\ \mbox{is irreducible}\ \}.
\]
To ease notation, we use the same symbol $R$ as in the previous subsection, although it now denotes a different ring.
First let us prove the Laurentness and then the irreducibility.
\begin{Lemma} \label{moleculelau}
We have $\tau_n^t\in R$.
\end{Lemma}
\textbf{Proof of lemma \ref{moleculelau}}
We already have the Laurent property for the discrete Toda equation with semi-infinite boundary condition in theorem \ref{thmsemiinf}.
The discrete Toda equation with molecule boundary condition is obtained by substituting $x_n=y_n=0$ for every $n\ge N+2$.
Let us take an arbitrary Laurent polynomial $f\in\mathbb{Z}[x_i^{\pm}, y_i^{\pm} ; 1\le i]$.
By substituting $x_n=y_n=0$ $(n\ge N+2)$ in $f$, we have either
\[
f|_{x_n=y_n=0\, (n\ge N+2)}\in\mathbb{Z}[x_i^{\pm}, y_i^{\pm} ;\, 1\le i\le N+1],
\]
or $f|_{x_n=y_n=0\, (n\ge N+2)}$ is not defined because of the zero denominator.
However, for $\tau_n^t$ ($1\le n\le N+1, \, t\ge 2$) here, we do not encounter zero in the denominators, since all the terms $\tau_n^t$ are well-defined by \eqref{dtoda} and the conditions $\tau_{N+2}^t=0$ and \eqref{moleculeboundary}.
\hfill\hbox{$\Box$}\vspace{10pt}\break
\begin{Lemma} \label{moleculeirred}
We have $\tau_n^t\in R_{irr}$.
\end{Lemma}
\textbf{Proof of lemma \ref{moleculeirred}}\;\;
Let us rewrite $z_n:=\tau_n^2$, $u_n:=\tau_n^3$, $v_n:=\tau_n^4$,
$w_n:=\tau_n^5$, $s_n:=\tau_n^6$.
\paragraph{The case of $\boldsymbol{N=1}$:}
Terms $z_1$ and $u_1$ are the same as those for semi-infinite boundary condition, and therefore are irreducible.
Since we have $\tau_2^t=(y_2)^t /(x_2)^{t-1}$ for $t\ge 2$,
$\tau_2^t\in R_{irr}$ for all $t\ge 2$.
Therefore we only have to prove the irreducibility of $\tau_1^t$ $(t\ge 4)$, and their co-primeness with other terms.
The term $u_1$ is a function of $x_1,x_2,y_1,y_2$. If we substitute $x_i\to y_i$ and $y_i\to z_i$ in $u_1$, we obtain $v_1$:
\[
v_1=u_1 \big|_{x_i\to y_i, y_i\to z_i}.
\]
We use lemma \ref{locallemma} for $m=4$, $(p_1,p_2,p_3,p_4)=(y_1,y_2,z_1,z_2)$, $(q_1,q_2,q_3,q_4)=(x_1,x_2,y_1,y_2)$, and $f(y_1,y_2,z_1,z_2)=v_1$, to obtain
\[
v_1 = z_1^{r_1} z_2^{r_2} \cdot P,
\]
where $P\in R_{irr}$, $r_1,r_2\in\mathbb{Z}_{\ge 0}$. (cf. $f(x_1,x_2,y_1,y_2)=u_1$)
Now let us substitute $x_i=y_i=1$ $(i=1,2)$, to obtain $z_1=2,v_1=13$.
Therefore, $13$ should be divisible by $2^{r_1}$ in $\mathbb{Z}$. Thus we have $r_1=0$. Since $z_2=y_2^2/ x_2$ is a unit in $R$, we conclude that $v_1\in R_{irr}$.
We also have that $v_1$ is coprime with $z_1$ and $u_1$, because two irreducible Laurent polynomials with distinct degrees are coprime.
Next we prove that $w_1:=\tau_1^5 \in R_{irr}$.
We use lemma \ref{locallemma} for $m=4$, $(p_1,p_2,p_3,p_4)=(u_1,u_2,v_1,v_2)$, $(q_1,q_2,q_3,q_4)=(x_1,x_2,y_1,y_2)$, and $f(u_1,u_2,v_1,v_2)=w_1$, to obtain
\[
w_1 = u_1^{r_1} v_1^{r_2} u_2^{r_3} v_2^{r_4} \cdot P_2,
\]
where $P_2\in R_{irr}$, each $r_i\in\mathbb{Z}$.
Substituting $x_i=y_i=1$ $(i=1,2)$, we have
\[
34=5^{r_1} 13^{r_2} \cdot 1 \cdot 1\cdot p_2,
\]
where $p_2:=(P_2)|_{x_i=y_i=1}\in\mathbb{Z}$.
Therefore $r_1=r_2=0$. Together with the fact that $u_2,v_2$ are units in $R$, we have $w_1\in R_{irr}$.
In the same manner we have from lemma \ref{locallemma} that,
\[
s_1=v_1^{r_1} w_1^{r_2} v_2^{r_3} w_2^{r_4}\cdot P_3,
\]
where $P_3\in R_{irr}$, each $r_i\in\mathbb{Z}$ (To ease notation we used the same $r_i$ as before for different values).
Substituting $x_i=y_i=1$ $(i=1,2)$, we have
\[
89=13^{r_1} 34^{r_2}\cdot p_3,
\]
where $p_3\in\mathbb{Z}$. Thus $r_1=r_2=0$. Therefore $s_1\in R_{irr}$.
Co-primeness of $s_1$ with other elements can be proved in the same manner.
Finally we prove the case of $\tau_n^t$ $(t\ge 7)$.
We have the following three decompositions for $\tau_1^t$ $(t\ge 7)$:
\begin{eqnarray*}
\tau_1^t&=&z_1^{r_1} z_2^{r_2}\cdot Q_1\\
&=&u_1^{r_3} v_1^{r_4} u_2^{r_5} v_2^{r_6}\cdot Q_2\\
&=& w_1^{r_7} s_1^{r_8} w_2^{r_9} s_2^{r_{10}}\cdot Q_3,
\end{eqnarray*}
where $Q_i\in R_{irr}$, $r_i\in\mathbb{Z}$.
Since we have already proved that $z_1,u_1,v_1,w_1,s_1$ are irreducible and coprime with each other, we have $r_1=r_3=r_4=r_7=r_8=0$.
Note that $\tau_2^t$ is a unit in $R$. Thus we have $\tau_1^t\in R_{irr}$ for $t\ge 7$.
\paragraph{The case of $\boldsymbol{N=2}$:}
The proof is very similar to that of $N=1$ case.
Since $\tau_3^t$ is a unit in $R$ for every $t$, we prove the irreducibility of $\tau_1^t$ and $\tau_2^t$. Co-primeness between two terms is proved by investigating the degrees of the terms.
We use lemma \ref{locallemma} repeatedly and then substitute $x_i=y_i=1$ $(i=1,2)$.
Note that
\begin{align*}
&(x_1,y_1,z_1,u_1,v_1,w_1,s_1)=(1,1,2,5,14,42,131),\\
&(x_2,y_2,z_2,u_2,v_2,w_2,s_2)=(1,1,3,14,70,353,1782).
\end{align*}
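These specialized values are easily reproduced by iterating \eqref{dtoda} directly; the following minimal sketch does so for $N=2$ in exact rational arithmetic, implementing $\tau_{N+2}^t=0$ as a vanishing right neighbour:
\begin{verbatim}
# Minimal sketch: recompute tau_n^t for N = 2 with x_i = y_i = 1.
from fractions import Fraction

N = 2
rows = [[Fraction(1)] * (N + 2), [Fraction(1)] * (N + 2)]  # t = 0, 1
for t in range(2, 7):
    prev2, prev1 = rows[t - 2], rows[t - 1]
    row = [Fraction(1)]                  # tau_0^t = 1
    for n in range(1, N + 2):
        right = prev2[n + 1] if n + 1 <= N + 1 else Fraction(0)
        row.append((row[n - 1] * right + prev1[n] ** 2) / prev2[n])
    rows.append(row)

print([int(r[1]) for r in rows])   # [1, 1, 2, 5, 14, 42, 131]
print([int(r[2]) for r in rows])   # [1, 1, 3, 14, 70, 353, 1782]
\end{verbatim}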
\paragraph{The case of $\boldsymbol{N\ge 3}$:}
First, we prove the irreducibility of $\tau_n^t$ $(1\le n\le N+1)$ for $t\le 6$.
In this case, we only have to prove the irreducibility of the four terms $v_N,w_N,s_N,s_{N-1}$, since the other terms $\tau_n^t$ with $2\le n\le N$ are the same as in the case of the semi-infinite boundary condition, and $\tau_{N+1}^t$ is a unit in $R$ for every $t\ge 0$.
Let us prove the irreducibility of these four terms individually.
\subparagraph{The case of $\boldsymbol{v_N}$:}
First we prove that $v_N\in R_{irr}$.
With some calculation we have the following expression for $v_N$:
\begin{equation} \label{vN}
v_N=\frac{1}{z_N}\left( \frac{v_{N-1}}{x_{N+1}}+\frac{(u_{N-1})^2}{(y_N)^2}\right) (y_{N+1})^2+\frac{2 u_{N-1} z_N}{(y_N)^2}y_{N+1} +\frac{(z_N)^3}{(y_N)^2}.
\end{equation}
Let us rewrite the coefficients of $(y_{N+1})^2$ as $G_N$ and obtain the recurrence relation for $G_N$ as follows:
\begin{align*}
G_N&=\frac{1}{z_N}\left( \frac{v_{N-1}}{x_{N+1}}+\frac{(u_{N-1})^2}{(y_N)^2}\right)=\frac{x_N}{x_{N+1} z_{N-1}} \left( \frac{v_{N-2}}{x_{N}}+\frac{(u_{N-1})^2}{(y_N)^2}\right)\\
&=\frac{x_N}{x_{N+1} z_{N-1}}\left( \frac{v_{N-2}}{x_{N}}+\frac{(u_{N-2})^2}{(y_{N-1})^2}+\frac{(u_{N-1})^2}{(y_N)^2}- \frac{(u_{N-2})^2}{(y_{N-1})^2}\right)\\
&=\frac{x_N}{x_{N+1} z_{N-1}}\left( \frac{v_{N-2}}{x_{N}}+\frac{(u_{N-2})^2}{(y_{N-1})^2}+ \frac{(z_{N-1})^2}{y_{N-1} y_N}\left( \frac{u_{N-1}}{y_N}+\frac{u_{N-2}}{y_{N-1}} \right) \right)\\
&=\frac{x_N}{x_{N+1}}G_{N-1}+\frac{z_{N-1} x_N}{x_{N+1} y_{N-1} y_N}\left( \frac{u_{N-1}}{y_N}+\frac{u_{N-2}}{y_{N-1}}\right).
\end{align*}
Here we have used in the first equality the following relations obtained from \eqref{dtoda}:
\[
v_{N-1}=\frac{(u_{N-1})^2+z_N v_{N-2}}{z_{N-1}},\; z_N=\frac{(y_N)^2+z_{N-1} x_{N+1}}{x_N}.
\]
By using this recurrence relation we obtain
\begin{equation} \label{GNrec}
x_{N+1} G_N=x_2 G_1+\sum_{k=1}^{N-1} \frac{z_k x_{k+1}}{y_k y_{k+1}}\left( \frac{u_k}{y_{k+1}}+\frac{u_{k-1}}{y_k} \right),
\end{equation}
where $G_1=\frac{x_1}{x_2(y_1)^2}$, $u_0=1$. Since the right hand side of
\eqref{GNrec} does not depend on $x_{N+1}$, we can express $G_N$ as
\begin{equation} \label{GNGamma}
G_N=\frac{\Gamma_N}{x_{N+1}},
\end{equation}
where $\Gamma_N$ does not contain $x_{N+1}$.
We have that $\Gamma_N$ does not have the factor $z_N$: indeed, $\Gamma_N$ does not contain $x_{N+1}$, while $z_N$ is linear with respect to $x_{N+1}$ with nonzero constant term $\frac{(y_N)^2}{x_N}$.
Therefore if we suppose that $v_N$ is not irreducible, only the following type of decomposition is possible:
\[
v_N=(a y_{N+1}+ b)(cy_{N+1}+d),
\]
where $a,b,c,d\in R_{N+1,N}$, because the decomposition of the type $v_N=z_N\cdot P$ ($P\in R$) is not possible from equation \eqref{GNGamma}.
We prove that $b/z_N$ and $d/(z_N)^2$ are both units in $R$.
Since $a,c\in R_{N+1,N}$, the terms $a,c$ do not have a factor $z_N$.
From $bd=z_N^3/y_N^2$ and from the irreducibility of $z_N$, we can decompose $b$ and $d$ as $b=\beta z_N^k$, $d=\beta' z_N^{3-k}$, where $k\in\mathbb{Z}$ and $\beta,\beta'$ are units in $R$. (Note that $y_N$ is also a unit in $R$.) Since $ad+bc$ contains the factor $z_N$ exactly to the first power, we have $\min [k,3-k]=1$. Therefore $k=1$ or $k=2$.
We can choose $k=1$ without losing generality and write down $b,d$ as
\[
b=\gamma z_N,\;\; d=\frac{(z_N)^2}{\gamma (y_N)^2},
\]
where $\gamma$ is a unit in $R$. This expression, together with $ac=\Gamma_N /x_{N+1}$ from \eqref{GNGamma}, indicates that the
coefficient of $y_{N+1}$ in equation \eqref{vN} satisfies
\[
\frac{(ad+bc)}{z_N}=\left(a\frac{z_N}{\gamma (y_N)^2}+c\gamma\right)=\frac{2 u_{N-1}}{(y_N)^2}.
\]
Since the right hand side does not depend on $x_{N+1}$, and therefore not on $z_N$, while the middle term does depend on $z_N$, we reach a contradiction. Therefore we conclude that $v_N$ is irreducible.
\subparagraph{The case of $\boldsymbol{w_N}$:}
By using lemma \ref{locallemma} we obtain the following two types of decomposition for $w_N$:
\begin{align*}
w_N&=z_1^{r_1}\cdots z_{N+1}^{r_{N+1}}\cdot P\\
&=u_1^{s_1}\cdots u_{N+1}^{s_{N+1}}\cdot v_1^{q_1}\cdots v_{N+1}^{q_{N+1}}\cdot Q,
\end{align*}
where $P,Q\in R_{irr}$ and $r_i,s_i,q_i\in\mathbb{Z}$.
Since each $z_i,u_i,v_i$ are irreducible and coprime with each other,
the only possible decompositions are of the following two types:
\begin{equation}
w_N=\delta z_i v_j,\ w_N=\delta z_i u_j, \label{wdecomp}
\end{equation}
where $\delta$ is a unit in $R$.
We prove that none of the two decomposition is possible by investigating the degrees of the terms in $y_{N+1}$.
Let us denote by $\deg f$ the degree of $f$ as a polynomial in $y_{N+1}$.
We have $\deg w_N=3$, $\deg v_{N+1}=4$, $\deg v_N=2$, $\deg u_{N+1}=3$, $\deg u_N=1$, $\deg z_{N+1}=2$, $\deg v_i=\deg u_i=0$ $(i\le N-1)$ and $\deg z_i=0$ $(i\le N)$.
For the degrees to be equal on both sides of the equation \eqref{wdecomp}, we have the following two possibilities:
\[
w_N=\delta u_N z_{N+1},
\]
or
\[
w_N=\delta u_{N+1} z_i, (i\le N).
\]
Note that the unit $\delta$ does not depend on $y_{N+1}$, since we easily verify that the constant term of $w_N$ as a polynomial of $y_{N+1}$ is nonzero.
However, two terms $z_{N+1}=y_{N+1}^2/x_{N+1}$ and $u_{N+1}=y_{N+1}^3/x_{N+1}^2$ are both monomials of $y_{N+1}$. These facts contradict the nonzero constant term of $w_N$.
Therefore neither of the two decompositions is possible, and we have proved that $w_N\in R_{irr}$. Co-primeness with other terms is also proved by investigating the degrees of the terms.
\subparagraph{The case of $\boldsymbol{s_{N-1}, s_N}$:}
Other two terms $s_N,s_{N-1}$ are proved to be irreducible in similar discussions.
Lemma \ref{locallemma} gives the following two types of decomposition for $s_{N-1}$:
\begin{align*}
s_{N-1}&=u_1^{r_1}\cdots u_{N+1}^{r_{N+1}}\cdot P\\
&=v_1^{s_1}\cdots v_{N+1}^{s_{N+1}}\cdot w_1^{q_1}\cdots w_{N+1}^{q_{N+1}}\cdot Q,
\end{align*}
where $P,Q\in R_{irr}$ and $r_i,s_i,q_i\in\mathbb{Z}$.
Since each $u_i,v_i,w_i$ are irreducible and coprime with each other,
the only possible decompositions are of the following two types:
\begin{equation}
s_{N-1}=\delta u_i v_j,\ s_{N-1}=\delta u_i w_j, \label{sdecomp1}
\end{equation}
where $\delta$ is a unit in $R$.
By investigating the degrees of these terms as polynomials of $y_{N+1}$, only the following two cases are possible:
\begin{eqnarray}
s_{N-1}&=&\delta u_i v_N\, (i\le N-1), \label{sdecomp2}\\
\mbox{or} \notag \\
s_{N-1}&=&\delta u_N w_{N-1}. \label{sdecomp3}
\end{eqnarray}
From \eqref{dtoda} we have $s_{N-1} v_{N-1}=w_{N-1}^2+s_{N-2}v_N$.
The first equation \eqref{sdecomp2} gives
\[
(\delta u_i v_{N-1}-s_{N-2})v_N=w_{N-1}^2,
\]
which is a contradiction because of the irreducibility of $w_{N-1}$ proved in the previous paragraph.
The second one \eqref{sdecomp3} is also a contradiction, since it gives
\[
(\delta u_N v_{N-1}- w_{N-1})w_{N-1}=s_{N-2} v_N,
\]
and every pair of terms here is coprime. Therefore both decompositions in \eqref{sdecomp1} are impossible, and thus $s_{N-1}\in R_{irr}$.
As for the term $s_N$, by the same investigations, we obtain the three possible factorizations:
\[
s_N=\delta u_N w_N,\ \mbox{or}\ s_N =\delta u_{N+1} w_{N-1},\ \mbox{or}\ s_N=\delta u_i v_{N+1}\, (i\le N-1),
\]
none of which turns out to be possible. By substituting $s_N v_N=w_N^2+s_{N-1} v_{N+1}$ in the first equation $s_N=\delta u_N w_N$, we obtain
\[
(\delta u_N v_N-w_N)w_N=s_{N-1} v_{N+1},
\]
which is impossible from the irreducibility and co-primeness of the terms. Note that we used here the irreducibility of $s_{N-1}$, which has just been proved.
The latter two equations are impossible because the relations $u_{N+1}=y_{N+1}^3/x_{N+1}^2$ and $v_{N+1}=y_{N+1}^4/x_{N+1}^3$ contradict the fact that $s_N$ has a nonzero constant term as a polynomial of $y_{N+1}$.
Thus $s_N\in R_{irr}$.
We have proved the irreducibility of $\tau_n^t$ for $t\le 6$.
Finally we prove the case for $t\ge 7$.
We have the following three types of decompositions of $\tau_n^t$ for $t\ge 7$:
\begin{align*}
\tau_n^t&= z_1^{r_{1,1}}\cdots z_{N+1}^{r_{1,N+1}} P_1\\
&=u_1^{r_{2,1}}\cdots u_{N+1}^{r_{2,N+1}} v_1^{r_{3,1}}\cdots v_{N+1}^{r_{3,N+1}}P_2\\
&=w_1^{r_{4,1}}\cdots w_{N+1}^{r_{4,N+1}} s_1^{r_{5,1}}\cdots s_{N+1}^{r_{5,N+1}}P_3.
\end{align*}
Here, each $r_{i,j}\in\mathbb{Z}$ and $P_i\in R_{irr}$.
Since any pair from $\{u_i,v_i,w_i,s_i\}$ is coprime, this decomposition is only possible when
$r_{i,j}=0$ for all $i,j$. Therefore $\tau_n^t\in R_{irr}$.
Thus theorem \ref{thmmolecule} is proved.
\hfill\hbox{$\Box$}\vspace{10pt}\break
We have the following proposition for a specialized initial condition $\tau_1^0=x_1=1$:
\begin{Proposition}
Every term of the discrete Toda equation \eqref{dtoda} is a Laurent polynomial of the initial variables:
\[
\tau_n^t\in\mathbb{Z}[x_2^{\pm}, x_3^{\pm}, \cdots, x_{N+1}^{\pm}, y_1^{\pm}, y_2^{\pm},\cdots, y_{N+1}^{\pm}],
\]
where $\tau_n^0=x_n$ $(2\le n\le N+1)$ and $\tau_n^1=y_n$ $(1 \le n\le N+1)$.
Moreover, the term $\tau_n^t$ is an irreducible Laurent polynomial.
\end{Proposition}
\textbf{Proof}\;\;
The proof is just the same as that of proposition \ref{thmsemiinf1}.\hfill\hbox{$\Box$}\vspace{10pt}\break
\begin{Theorem}
The solution $I_n^t$, $V_n^t$ of the discrete molecule Toda equation
(\eqref{dtodaIV1} and \eqref{dtodaIV2} with $V_0^t=V_{N+1}^t=0$ $(t\ge 0)$) satisfies the following `co-prime' property:
Let us define the set $\mathcal{D}=\{I_n^t\}\cup\{V_n^t\}$.
Two distinct elements $D_n^t$ and $D_m^s$ in the set $\mathcal{D}$
do not have common factors other than monomials of the initial variables, on condition that $|n-m|\ge 3$ or $|t-s|\ge 2$.
\end{Theorem}
\textbf{Proof}\;\; The proof is just the same as in theorem \ref{todaIVsemiinfthm}.
\hfill\hbox{$\Box$}\vspace{10pt}\break
\subsection{Periodic boundary}
We can obtain a co-primeness property similar to those in the previous two subsections for the periodic discrete Toda equation, with a more elaborate discussion.
Here the periodic boundary condition is imposed on the system \eqref{dtodaIV1} and \eqref{dtodaIV2} as follows:
\begin{equation}
I_{n+N}^t=I_n^t,\;\;V_{n+N}^t=V_n^t \label{IVperiod}
\end{equation}
for every $t$ and $n$, where $N$ is a positive integer which determines the system size.
\begin{Lemma} \label{lemmaperiod}
Let us suppose that $\prod_{i=1}^N V_i^t \neq \prod_{i=1}^N I_i^t$.
The time evolution of the periodic discrete Toda system \eqref{dtodaIV1}, \eqref{dtodaIV2} with \eqref{IVperiod} is determined by
\[
I_n^{t+1}=V_n^t+I_n^t Y_n^t,\ V_n^{t+1}=\frac{I_{n+1}^t V_n^t}{V_n^t+I_n^t Y_n^t},
\]
where
\[
Y_n^t=\frac{\left(1-\frac{\prod_{i=1}^N V_i^t}{\prod_{i=1}^N I_i^t}\right)}{1+\frac{V_{n-1}^t}{I_{n-1}^t}+\frac{V_{n-1}^t V_{n-2}^t}{I_{n-1}^t I_{n-2}^t}+\cdots +\frac{V_{n-1}^t V_{n-2}^t \cdots V_{n+1}^t}{I_{n-1}^t I_{n-2}^t \cdots I_{n+1}^t}}.
\]
Aside from the trivial solution $I_n^{t+1}=V_n^t$, $V_n^{t+1}=I_{n+1}^t$, this is the only solution for a fixed set of initial data $\{V_i^0, I_i^0\}_{i=1}^N$.
\end{Lemma}
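The lemma is readily checked numerically. The following minimal sketch (random positive data, arbitrary size $N=4$) performs one step through $Y_n^t$ and verifies \eqref{dtodaIV1} together with the conservation of $\prod_{i=1}^N I_i^t$, which corresponds to the constant $\lambda$ introduced below:
\begin{verbatim}
# Minimal sketch: one step of the periodic discrete Toda equation
# via the Y_n^t formula of the lemma (random positive data).
import random

def prod(xs):
    p = 1.0
    for a in xs:
        p *= a
    return p

def step(I, V):
    N = len(I)
    P = prod(V) / prod(I)
    I2, V2 = [0.0] * N, [0.0] * N
    for n in range(N):
        den, r = 1.0, 1.0
        for j in range(1, N):        # 1 + V_{n-1}/I_{n-1} + ...
            r *= V[(n - j) % N] / I[(n - j) % N]
            den += r
        I2[n] = V[n] + I[n] * (1.0 - P) / den
        V2[n] = I[(n + 1) % N] * V[n] / I2[n]
    return I2, V2

N = 4
I = [random.uniform(1.0, 2.0) for _ in range(N)]
V = [random.uniform(0.1, 0.5) for _ in range(N)]
I2, V2 = step(I, V)
for n in range(N):                   # the additive relation holds
    assert abs(I2[n] - I[n] - V[n] + V2[(n - 1) % N]) < 1e-9
assert abs(prod(I2) - prod(I)) < 1e-9   # prod(I) is conserved
\end{verbatim}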
Note that the function $\tau_n^t$ does not necessarily satisfy the periodic condition $\tau_{n+N}^t=\tau_n^t$.
The reason is as follows: if we were to impose $\tau_{n+N}^t=\tau_n^t$, then we would have
\[
\prod_{i=1}^N I_i^t=\prod_{i=1}^N \frac{\tau_{i-1}^t \tau_i^{t+1}}{\tau_i^t \tau_{i-1}^{t+1}}=\frac{\tau_N^{t+1} \tau_0^t}{\tau_0^{t+1}\tau_N^t}=1.
\]
In the same manner, we have $\prod_{i=1}^N V_i^t=1$.
However, by lemma \ref{lemmaperiod}, the time evolution of the discrete Toda equation \eqref{dtodaIV1}, \eqref{dtodaIV2} is not determined under the condition \eqref{IVperiod} in the case $\prod_{i=1}^N V_i^t=\prod_{i=1}^N I_i^t$.
In fact it is reasonable to take the boundary condition as follows:
\begin{equation}
\tau_{n+N}^t=K \lambda^t \mu^n \tau_n^t,\; \tau_0^0=\tau_1^0=\tau_0^1=1 \label{period}
\end{equation}
where
\begin{eqnarray}
K&=&\prod_{i=1}^N (V_i^0 I_i^0)^{N-i}, \label{K}\\
\mu&=&\prod_{i=1}^N V_i^0 I_i^0, \label{mu}\\
\lambda&=& \prod_{i=1}^N I_i^0. \label{lambda}
\end{eqnarray}
This condition is obtained as follows.
First we assume that the function $\tau_n^t$ obeys the rule \eqref{period}, and then show that the constants $K$, $\lambda$ and $\mu$ can be determined uniquely so as to be compatible with the evolution of the system.
We obtain $\tau_n^0$ and $\tau_n^1$ $(n\ge 1)$ inductively from \eqref{tauIV} as follows:
\begin{eqnarray}
\tau_1^1&=&I_1^0,\; \tau_2^0=V_1^0 I_1^0,\; \tau_2^1=I_1^0 I_2^0 \tau_2^0,\cdots\\
\tau_n^0&=&\prod_{i=1}^{n-1} \left( V_i^0 I_i^0 \right)^{n-i},\;\;(n\ge 2), \label{taun0} \\
\tau_n^1&=&\left(\prod_{i=1}^n I_i^0 \right) \tau_n^0,\;\;(n\ge 1). \label{taun1}
\end{eqnarray}
Using \eqref{taun0} for $n=N$, together with $\tau_N^0=K\tau_0^0=K$ from \eqref{period}, we obtain the value of $K$ as in \eqref{K}.
Since $\tau_{N+1}^0=K\mu \tau_1^0=K\mu$, we have from \eqref{taun0} the equality \eqref{mu}.
From equation \eqref{taun1}, $\tau_N^1=K\lambda$ and $\tau_N^0=K$, we obtain \eqref{lambda}.
\begin{Proposition}
The function $\tau_n^t$ defined by
\begin{equation} \label{periodtau3}
\tau_n^{t+1}=\frac{\tau_{n+1}^{t-1}}{\lambda^2/\mu -1} \sum_{k=1}^N \frac{(\tau_{n+k}^t)^2}{\tau_{n+k}^{t-1} \tau_{n+k+1}^{t-1}}
\end{equation}
satisfies the bilinear form of the discrete Toda equation \eqref{dtoda} and also the periodic boundary condition \eqref{period}.
Moreover, the functions $I_n^t$ and $V_n^t$ obtained through the relation \eqref{tauIV} satisfy the discrete Toda equations \eqref{dtodaIV1} and \eqref{dtodaIV2}.
\end{Proposition}
\textbf{Proof}\;\;
We easily show by induction that $\tau_n^t$ defined by \eqref{periodtau3} satisfies the relation $\tau_{n+N}^t=K \lambda^t \mu^n \tau_n^t$ in \eqref{period}.
Next we show that \eqref{periodtau3} satisfies the discrete Toda equation \eqref{dtoda}:
\begin{align*}
&\tau_n^{t+1} \tau_n^{t-1}-\{\tau_{n-1}^{t+1} \tau_{n+1}^{t-1} +(\tau_n^t)^2 \}=\\
&\left( \frac{\tau_{n+1}^{t-1} }{ \lambda^2 / \mu-1} \sum_{k=1}^N \frac{(\tau_{k+n}^t)^2}{\tau_{k+n}^{t-1} \tau_{k+n+1}^{t-1}} \right) \tau_n^{t-1}- \left\{ \left( \frac{\tau_{n}^{t-1} }{ \lambda^2 / \mu-1} \sum_{k=1}^N \frac{(\tau_{k+n-1}^t)^2}{\tau_{k+n-1}^{t-1} \tau_{k+n}^{t-1}} \right) \tau_{n+1}^{t-1}+(\tau_n^t)^2\right\}=\\
& \frac{\tau_n^{t-1} \tau_{n+1}^{t-1} }{\lambda^2/\mu -1}\left\{ \frac{(\tau_{n+N}^t)^2}{\tau_{n+N}^{t-1}\tau_{n+N+1}^{t-1}}-\frac{(\tau_n^t)^2}{\tau_n^{t-1} \tau_{n+1}^{t-1}} \right\}-(\tau_n^t)^2=\\
& \frac{\tau_n^{t-1} \tau_{n+1}^{t-1} }{\lambda^2/\mu -1}\left\{ \frac{(K\lambda^t \mu^n)^2}{K\lambda^{t-1}\mu^n\cdot K\lambda^{t-1} \mu^{n+1}}-1 \right\}\frac{(\tau_n^t)^2}{\tau_n^{t-1} \tau_{n+1}^{t-1}}-(\tau_n^t)^2=0.
\end{align*}
\hfill\hbox{$\Box$}\vspace{10pt}\break
Note that the equality \eqref{periodtau3} can be re-written as
\begin{equation} \label{periodtau4}
\tau_n^{t+1}=\frac{\tau_{n+1}^{t-1}}{\lambda^2/\mu -1} \left[ \frac{\lambda^2}{\mu}\sum_{j=0}^{n} \frac{(\tau_j^t)^2}{\tau_j^{t-1} \tau_{j+1}^{t-1} } +\sum_{j=n+1}^{N-1} \frac{(\tau_j^t)^2}{\tau_j^{t-1} \tau_{j+1}^{t-1}} \right],
\end{equation}
using the boundary condition \eqref{period}.
What we are going to prove is that the function $\tau_n^t$ is, up to multiplication by a power of $(\lambda^2/\mu -1)$, an irreducible Laurent polynomial of the initial variables ($\tau_n^0,\tau_n^1$; $0\le n \le N-1$).
To eliminate the powers of $\lambda^2/\mu -1$, we change variables.
\begin{Lemma}
The new variable $\tilde{\tau}_n^t$ defined by the transformation
\begin{equation} \label{tautilda}
\tau_n^t=\left(1-\frac{\lambda^2}{\mu}\right)^{-t(t-1)/2}\tilde{\tau}_n^t
\end{equation}
satisfies the following equation:
\begin{equation} \label{dtoda5.5}
\tilde{\tau}_n^{t+1} \tilde{\tau}_n^{t-1} = \tilde{\tau}_{n-1}^{t+1} \tilde{\tau}_{n+1}^{t-1} + \left(1-\frac{\lambda^2}{\mu}\right)(\tilde{\tau}_n^t)^2.
\end{equation}
\end{Lemma}
Note that the initial conditions are unchanged: $\tilde{\tau}_n^0=\tau_n^0$, $\tilde{\tau}_n^1=\tau_n^1$, since $t(t-1)/2=0$ for $t=0,1$. Also note that $\tau_N^0=K$, $\tau_N^1=K \lambda$ from the boundary condition \eqref{period}.
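(Let us record why the exponent $t(t-1)/2$ in \eqref{tautilda} is the right one: writing $e(t)=-t(t-1)/2$, we have
\[
2e(t)-e(t+1)-e(t-1)=-t(t-1)+\frac{(t+1)t}{2}+\frac{(t-1)(t-2)}{2}=1,
\]
so that dividing \eqref{dtoda} by $(1-\lambda^2/\mu)^{e(t+1)+e(t-1)}$ leaves exactly one factor of $(1-\lambda^2/\mu)$ in front of $(\tilde{\tau}_n^t)^2$, which gives \eqref{dtoda5.5}.)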
We are going to prove the Laurent property of this function $\tilde{\tau}_n^t$.
\begin{Theorem} \label{periodthm}
Let $k$ be an arbitrary natural number.
(A) The general term $\tilde{\tau}_n^t$ $(1\le n \le N$, $0\le t\le k)$ of the equation \eqref{dtoda5.5}
is in the following ring of Laurent polynomials
\[
\tilde{\tau}_n^t\in R:=\mathbb{Z}\left[(\tilde{\tau}_n^0)^{\pm}; 2\le n\le N-1,(\tilde{\tau}_n^1)^{\pm}; 1\le n \le N-1, K^{\pm},\lambda^{\pm},\mu^{\pm} \right].
\]
(B) Moreover $\tilde{\tau}_n^t$ is irreducible for any $n,t$ $(1\le n\le N$, $0\le t\le k)$, and two distinct terms $\tilde{\tau}_n^t$ and $\tilde{\tau}_m^s$ with $(n,t)\neq (m,s)$ are co-prime in this ring $R$.
\end{Theorem}
\textbf{Proof}\;\;We prove theorem \ref{periodthm} by induction using the following propositions and lemmas. We rewrite $\tilde{\tau}$ as $\tau$ to simplify the notation. We also use $a_n:=\tau_n^0$, $b_n:=\tau_n^1$, $c_n:=\tau_n^2$, $\cdots$. We have
\[
R=\mathbb{Z}[(a_n)^{\pm},(b_n)^{\pm};\; 1\le n\le N].
\]
\begin{Proposition} \label{AB}
If both of the statements (A) and (B) are satisfied for a fixed $k\ge 1$, then
the statement (A) is true for $k+2$.
\end{Proposition}
\textbf{Proof of proposition \ref{AB}}\;\;
Note that (A) is trivial for $k=0,1,2,3$, and (B) is trivial for $k=0,1$.
First let us prove (B) for $k=2$, and then prove (A) for $k=4$ using ``(B) for $k=2$'', and then prove the statement for general $k$.
\paragraph{Proof of (B) for $\boldsymbol{k=2}$}
By making the transformation \eqref{tautilda},
the factor $\lambda^2/\mu-1$ is eliminated:
\begin{equation} \label{periodtau3kai}
\tau_n^{t+1}=\tau_{n+1}^{t-1} \left[ \frac{\lambda^2}{\mu}\sum_{j=0}^n \frac{(\tau_j^t)^2}{\tau_j^{t-1} \tau_{j+1}^{t-1}} + \sum_{j=n+1}^{N-1} \frac{(\tau_j^t)^2}{\tau_j^{t-1} \tau_{j+1}^{t-1}} \right].
\end{equation}
Next, by substituting $t=1$ in equation \eqref{periodtau4}, we have
\begin{equation} \label{cn}
c_n=a_{n+1}\left[ \frac{\lambda^2}{\mu}\sum_{j=0}^n \frac{(b_j)^2}{a_j a_{j+1}} + \sum_{j=n+1}^{N-1} \frac{(b_j)^2}{a_j a_{j+1}} \right].
\end{equation}
We claim that the term $c_n$ is irreducible in the ring $R$.
We prove this by contradiction. If $c_n$ is reducible, it has to factor, up to a unit, as $(b_1+\alpha)(b_1+\beta)$ when viewed as a quadratic function of $b_1$, where $\alpha, \beta$ are expressed in terms of $a_j$ and $b_k\, (k\neq 1)$. Since $c_n$ does not have a term linear in $b_1$, we have $\alpha=-\beta$. Therefore the constant term of $c_n$ with respect to $b_1$ is negative ($-\alpha^2<0$ when evaluated at positive values of the remaining variables), which contradicts the fact that every coefficient in $c_n$ is non-negative. Therefore $c_n=\tau_n^2$ is irreducible.
\paragraph{Proof of (A) for $\boldsymbol{k=4}$}
By shifting the superscripts to $t=3$ for equation \eqref{cn}, we obtain the following equality:
\begin{equation} \label{en}
e_n=c_{n+1}\left[ \frac{\lambda^2}{\mu}\sum_{j=0}^n \frac{(d_j)^2}{c_j c_{j+1}} + \sum_{j=n+1}^{N-1} \frac{(d_j)^2}{c_j c_{j+1}} \right].
\end{equation}
We prove that $e_n\in R$ for all $0\le n\le N-1$. We only have to prove that $e_0\in R$, since the subscripts are cyclic modulo $N$.
Reducing \eqref{en} to a common denominator, we have
\begin{align}
&c_0 c_2 \cdots c_{N-1} e_0 = \notag \\
&\frac{\lambda^2}{\mu} (d_0)^2 c_2 c_3 \cdots c_{N-1}+c_0 (d_1)^2 c_3 \cdots c_{N-1}+c_0c_1(d_2)^2c_4\cdots c_{N-1}+ \notag \\
&\cdots +c_0c_1c_2 \cdots c_{N-3} (d_{N-2})^2+\frac{c_0}{c_N} c_1c_2\cdots c_{N-2} (d_{N-1})^2. \label{c0e0}
\end{align}
Since $\frac{c_0}{c_N}=\frac{1}{K\lambda^2}$ from the boundary condition \eqref{period}, the right hand side is a Laurent polynomial (i.e., $\in R$). Thus $c_0c_2c_3\cdots c_{N-1} e_0\in R$.
Next let us prove that $c_2c_3\cdots c_{N-1}e_0\in R$.
Let us collect all the terms that do not contain $c_0$ on the right hand side of \eqref{c0e0} and denote their sum by $E$:
\begin{equation}
E:=\left(\frac{\lambda^2}{\mu}(d_0)^2 c_{N-1} + \frac{1}{K\lambda^2}c_1 (d_{N-1})^2\right)\cdot c_2c_3\cdots c_{N-2}.
\end{equation}
We now prove that $E$ itself is divisible by $c_0$.
From equation \eqref{periodtau3kai} with $(n,t)=(0,2)$ and $(n,t)=(N-1,2)$, we have
\[
\frac{d_0}{b_1}=\frac{\lambda^2}{\mu}\frac{(c_0)^2}{b_0 b_1}+\sum_{j=1}^{N-1} \frac{(c_j)^2}{b_j b_{j+1}},\;\; \frac{d_{N-1}}{b_N}=\frac{\lambda^2}{\mu} \sum_{j=0}^{N-1} \frac{(c_j)^2}{b_j b_{j+1}}.
\]
By substituting these equations into $E$, we obtain
\begin{equation} \label{E6}
E/(c_2c_3 \cdots c_{N-2})=(c_0)^2\cdot P +\frac{\lambda^2}{\mu}\left[(b_1)^2 c_{N-1}+ \frac{1}{K\lambda^2} (b_N)^2 c_1\right]\left( \sum_{j=1}^{N-1} \frac{(c_j)^2}{b_j b_{j+1}} \right)^2,
\end{equation}
where $P\in R$.
From the evolution equation \eqref{dtoda5.5} (note that we have omitted $\tilde{\ }$ here), we have
\[
c_1=\left\{a_2 c_0 +\left(1-\frac{\lambda^2}{\mu}\right)(b_1)^2\right\}\frac{1}{a_1},\;\;
c_{N-1}=\left\{a_N c_N - \left(1-\frac{\lambda^2}{\mu}\right)(b_N)^2\right\}\frac{1}{K \mu a_1},
\]
where we have used $a_{N+1}=K \mu a_1$.
Therefore we obtain
\begin{equation} \label{E6sup}
(b_1)^2 c_{N-1}+ \frac{1}{K\mu} (b_N)^2 c_1=\frac{K \lambda^2 (a_0 b_1^2 + a_2 b_0^2)}{\mu a_1}\cdot c_0.
\end{equation}
Using the equations \eqref{E6} and \eqref{E6sup}, we have proved that
$E/c_0\in R$. Therefore we have
\[
c_2c_3\cdots c_{N-1}e_0=(c_0 c_2c_3\cdots c_{N-1}e_0)/c_0= (E+\mathcal{O}(c_0))/c_0\in R,
\]
where $\mathcal{O}(c_0)$ denotes the terms on the right hand side of \eqref{c0e0} that contain $c_0$.
By a cyclic permutation, we have $(c_0 c_2 \cdots c_{N-1} e_0)/c_j \in R$ for all $0\le j\le N-1$.
From these results for all $j$, and from the fact that $c_j$ are irreducible for all $j$ (which has been proved as [(B) for $k=2$] in the previous paragraph), we have
\[
e_0=\frac{c_0 c_2 \cdots c_{N-1} e_0}{c_0c_2\cdots c_{N-1}}\in R.
\]
\paragraph{Proof of proposition \ref{AB} for general $\boldsymbol{k}$}
By shifting the time variable $t$ from $t=2$ to $t=k+1$ in the previous paragraph, we can prove that
\begin{equation} \label{tauk+2}
\left( \tau_0^k \tau_2^k\cdots \tau_{N-1}^k \right)\tau_0^{k+2}\in R.
\end{equation}
We also obtain
\[
\tau_0^{k+2}=\frac{L}{M},
\]
where $L,M\in R$ and $M$ is a monomial in $\{\tau_j^{k-1}, \tau_j^{k-2}\}_{j=1}^N$,
by shifting the time variable $t$ from $t=2$ to $t=k+1$ in equations from \eqref{en} through \eqref{E6sup} ($e_n\to \tau_n^{k+2}$, $d_n\to \tau_n^{k+1}$, $c_n\to \tau_n^{k}$, $b_n\to \tau_n^{k-1}$, $a_n\to \tau_n^{k-2}$).
Let us suppose that (B) is true for $k$, and define $P:=\tau_0^k \tau_2^k\cdots \tau_{N-1}^k$.
Then, by the irreducibility of each element, we have that $P$ and $M$ are coprime in $R$. On the other hand, we have $P\dfrac{L}{M}\in R$ from \eqref{tauk+2}, which indicates that $M$ must divide $L$ in the ring of Laurent polynomials $R$. Therefore $L/M \in R$.
We have proved (A) for $k\to k+2$, i.e., $\tau_0^{k+2}$ is a Laurent polynomial in $\{a_n,b_n\}_{n=0}^{N-1}$.
\hfill\hbox{$\Box$}\vspace{10pt}\break
By applying proposition \ref{AB} repeatedly, theorem \ref{periodthm} follows once we prove the following proposition:
\begin{Proposition} \label{kiyakulemmaperiod}
Let us assume that (A) is true for all $k\ge 1$.
Then (B) is true for all $k\ge 1$.
\end{Proposition}
\textbf{Proof of proposition \ref{kiyakulemmaperiod}}\;\;
We prove the irreducibility of $\tau_n^k$ for $k\ge 3$, since the case of $k=0,1$ is trivial, and the case of $k=2$ is already proved in the proof of proposition \ref{AB}.
\paragraph{The case of $\boldsymbol{k=3}$:}
Let us apply lemma \ref{locallemma} in the case of $m=2N$,
\begin{eqnarray*}
\{p_1, \cdots,p_m\}&=&\{b_1,\cdots, b_N,c_1,\cdots,c_N\},\\
\{q_1, \cdots, q_m\}&=&\{a_1,\cdots, a_N, b_1,\cdots, b_N\}.
\end{eqnarray*}
From (A), each $d_j$ is irreducible in $\mathbb{Z}[\{b_i^{\pm}\},\{c_i^{\pm}\};1\le i\le N]$.
Therefore $d_j$ is decomposed as
\[
d_j=c_0^{r_0} c_1^{r_1} \cdots c_{N-1}^{r_{N-1}} \cdot G,
\]
where $G$ is an irreducible Laurent polynomial in $\mathbb{Z}[\{a_i^{\pm}\},\{b_i^{\pm}\}]$, and each $r_j\in\mathbb{Z}$. Since $c_j$ is irreducible, and $d_j$ is a Laurent polynomial, we have $r_j\ge 0$.
\begin{Lemma} \label{k=3kiyakulemma}
In the setting above, we have $r_0=r_1=\cdots=r_{N-1}=0$.
\end{Lemma}
\textbf{Proof of lemma \ref{k=3kiyakulemma}}\;\;
Because every subscript $n$ is cyclic for $\tau_n^t$, it is enough to prove $r_j=0$ for only one specific $j$: e.g., $j=N-1$.
Let us take the following specific initial condition, which is used from here on only within this proof:
\[
I_n^0=1\ (1\le n\le N),\ V_n^0=1\ (1\le n\le N-1),\ V_N^0=\frac{1}{x}.
\]
Then we have
\[
a_n=1,b_n=1\ (0\le n\le N),\ \lambda=K=1,\ \mu=1/x,
\]
using equations from \eqref{K} to \eqref{taun1}.
We have from equation \eqref{cn} that
\[
c_k=(k+1)x+N-k-1\ (k=0,1,\cdots, N-1).
\]
Using the equation \eqref{periodtau3kai} for $t=2$, we have
\begin{equation} \label{dN-1}
d_{N-1}=b_N\left[\frac{\lambda^2}{\mu} \sum_{k=0}^{N-1}\frac{(c_k)^2}{b_k b_{k+1}}\right]=x\sum_{k=0}^{N-1} (c_k)^2.
\end{equation}
If $x=-(N-k-1)/(k+1)$, then $c_k=0$. Since this value of $x$ is nonzero for $0\le k \le N-2$, and the remaining $c_i$ $(i\neq k)$ do not all vanish there, we have $d_{N-1}\neq 0$ from \eqref{dN-1}.
Therefore we have proved that $d_{N-1}$ does not have a positive power of $c_k$ $(0\le k\le N-2)$ as a factor.
Thus $r_k=0$ $(0\le k\le N-2)$.
Lastly we prove that $d_{N-1}$ does not have a factor $c_{N-1}$.
By a cyclic permutation, it is enough to prove that $d_0$ does not have a factor $c_0$.
We have from \eqref{periodtau3kai} that
\[
d_0=x (c_0)^2 +\sum_{k=1}^{N-1} (c_k)^2.
\]
When we substitute $x=1-N$, $c_0=0$ in $d_0$, we have
\[
d_0=0 + \sum_{k=1}^{N-1} k^2 N^2=\frac{1}{6} N^3 (N-1)(2N-1)\neq 0.
\]
Thus $d_0$ does not have a positive power of $c_0$ as a factor. Therefore $r_{N-1}=0$.
\hfill\hbox{$\Box$}\vspace{10pt}\break
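The two evaluations used in this lemma can also be verified symbolically; a minimal \texttt{sympy} sketch, using the closed forms obtained above:
\begin{verbatim}
import sympy as sp

# At x = 1 - N the closed form gives c_k = -k N, and
# d_0 = sum_{k=1}^{N-1} (k N)^2 = N^3 (N-1)(2N-1)/6, which is nonzero.
N, k = sp.symbols('N k', positive=True, integer=True)
c_k = sp.expand((k + 1) * (1 - N) + N - k - 1)             # -> -N*k
d0 = sp.summation((k * N) ** 2, (k, 1, N - 1))
print(c_k)
print(sp.simplify(d0 - N**3 * (N - 1) * (2 * N - 1) / 6))  # -> 0
\end{verbatim}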
Summing up these results, we have proved that
$d_j$ is irreducible for all $0\le j\le N-1$.
The co-primeness of distinct $d_j$ and $d_k$ $(j\neq k)$ follows immediately.
Finally we can prove that $d_j$ and $c_k$ are co-prime for $0\le j,k\le N-1$, since they are both irreducible and have different degrees.
Thus we have proved (B) for $k=3$.
\hfill\hbox{$\Box$}\vspace{10pt}\break
\paragraph{The case of $\boldsymbol{k=4}$:}
Let us apply lemma \ref{locallemma} in the case of $m=2N$,
\begin{eqnarray*}
\{p_1, \cdots,p_m\}&=&\{b_1,\cdots, b_N,c_1,\cdots,c_N\},\\
\{q_1, \cdots, q_m\}&=&\{a_1,\cdots, a_N, b_1,\cdots, b_N\}.
\end{eqnarray*}
In the same manner as in the previous paragraph for $k=3$ we have the decomposition
\[
e_j=c_0^{s_0} c_1^{s_1} \cdots c_{N-1}^{s_{N-1}} \cdot H,
\]
where $H$ is an irreducible Laurent polynomial in $\mathbb{Z}[\{a_i^{\pm}\},\{b_i^{\pm}\}]$, and each $s_j\in\mathbb{Z}_{\ge 0}$ by the same reasoning as before.
Let us prove that $s_0=s_1=\cdots=s_{N-1}=0$, in order to prove that $e_j$ is an irreducible Laurent polynomial.
For this purpose we prove the following lemma.
\begin{Lemma} \label{eN-1}
The term $e_{N-1}$ does not have a positive power of $c_j$ as a factor for any $0\le j\le N-1$.
\end{Lemma}
\textbf{Proof of lemma \ref{eN-1}}\;\;
Let us choose the same specific initial condition in this proof as in the previous paragraph
\[
a_n=1,b_n=1\ (0\le n\le N),\ \lambda=K=1,\ \mu=1/x,
\]
\[
c_k=(k+1)x+N-k-1\ (k=0,1,\cdots, N-1).
\]
\subparagraph{The case of $\boldsymbol{c_0}$:}
From the evolution of the discrete Toda equation \eqref{dtoda5.5},
\[
c_{N-1} e_{N-1} = c_{N} e_{N-2} +\left( 1-\frac{\lambda^2}{\mu} \right)(d_{N-1})^2.
\]
From the induction hypothesis that $\{c_i\},\{d_i\}$ are coprime, we conclude that $e_{N-1}$
does not have a positive power of $c_{N}=c_0$ as a factor.
\subparagraph{The case of $\boldsymbol{c_{N-1}}$:}
By a cyclic permutation, it is enough to prove that $e_0$ does not have a factor $c_0$.
Equation \eqref{periodtau3kai} tells us that
\[
e_0=c_1\left[x\frac{d_0^2}{c_0 c_1}+\sum_{k=1}^{N-1} \frac{d_k^2}{c_k c_{k+1}}\right].
\]
In the case of $x=1-N$, we have $c_k=-kN$ $(k=0,1,\cdots, N-1)$, and
\[
d_0=\frac{1}{6}N^3(N-1)(2N-1)\neq 0.
\]
Therefore $e_0$ diverges in the limit $c_0\to 0$ (i.e., $x\to 1-N$).
Thus $e_0$ cannot have a positive power of $c_0$ as a factor.
\subparagraph{The case of $\boldsymbol{c_i}$ $\boldsymbol{(1\le i\le N-2)}$:}
Equation \eqref{periodtau3kai} for $t=3$, together with $c_{k+1}-c_{k}=x-1$, shows that
\begin{eqnarray}
e_{N-1}&=&c_N x \sum_{k=0}^{N-1} \frac{(d_k)^2}{c_k c_{k+1}} \notag \\
&=&c_N x \left[ \sum_{k=0}^{N-1} \frac{1}{c_{k+1}-c_{k}}\left( \frac{d_k^2}{c_k}-\frac{d_k^2}{c_{k+1}} \right) \right] \notag \\
&=&c_N \frac{x}{x-1}\left[ \frac{d_0^2}{c_0}-\frac{d_{N-1}^2}{c_N}+\sum_{k=1}^{N-1} \frac{d_k^2-d_{k-1}^2}{c_k} \right]. \label{eN-1keisan}
\end{eqnarray}
We have
\[
d_k=x \sum_{i=0}^k (c_i)^2 + \sum_{i=k+1}^{N-1} (c_i)^2.
\]
Thus $d_k-d_{k-1}=(x-1) c_k^2$. Therefore
the term $\frac{1}{c_k}$ in equation \eqref{eN-1keisan} is eliminated:
\begin{equation} \label{eN-1keisan2}
e_{N-1}=\frac{c_N x}{x-1}\left[ \frac{d_0^2}{c_0}-\frac{d_{N-1}^2}{c_N}+\sum_{k=1}^{N-1} (x-1) c_k (d_k+d_{k-1}) \right].
\end{equation}
Let us substitute $c_j=0$ $(x=1-\frac{N}{j+1})$ in the equation \eqref{eN-1keisan2} to obtain the following result:
\begin{equation}
e_{N-1}=-\frac{x N (N-1)}{180(x-1)} \cdot F,
\end{equation}
where
\begin{align}
F&=180j^4 + (390-420N) j^3+30(3N-2)(4N-5) j^2 \notag \\
&-2(2N-1)(34N^2-84N+47)j + 5(N-1)(N-2)(2N-1)^2. \label{valueF}
\end{align}
Derivation of \eqref{valueF} is in the appendix.
We have the following lemma on the positivity of $F$:
\begin{Lemma} \label{positiveF}
We have $F>0$ for all $N\ge 3$ and for all $1\le j\le N-2$.
\end{Lemma}
The proof of this lemma is straightforward but technical, and is therefore given in the appendix.
From lemma \ref{positiveF}, we conclude that $e_{N-1}$ does not have a factor $c_j$ for $1\le j\le N-2$.
Summing up the three sub-paragraphs, we have proved lemma \ref{eN-1}.
\hfill\hbox{$\Box$}\vspace{10pt}\break
\paragraph{The case of $\boldsymbol{k=5}$:}
Let us apply lemma \ref{locallemma} in the case of $m=2N$,
\begin{eqnarray*}
\{p_1, \cdots,p_m\}&=&\{d_1,\cdots, d_N,e_1,\cdots,e_N\},\\
\{q_1, \cdots, q_m\}&=&\{a_1,\cdots, a_N, b_1,\cdots, b_N\}.
\end{eqnarray*}
We can prove that these variables satisfy the conditions of lemma \ref{locallemma} from the induction hypotheses.
In the same manner as in the paragraph for $k=3$ we have the decomposition
\begin{equation} \label{fdecomp1}
f_j:= \tau_j^5 = d_0^{s_0} d_1^{s_1} \cdots d_{N-1}^{s_{N-1}}\cdot e_0^{t_0} e_1^{t_1}\cdots e_{N-1}^{t_{N-1}} \cdot H_1,
\end{equation}
where $s_i,t_i\in\mathbb{Z}_{\ge 0}$, and $H_1$ is an irreducible Laurent polynomial in the initial variables $\{a_i\}$, $\{b_i\}$.
Similarly, we also have another decomposition of $f_j$ as
\begin{equation} \label{fdecomp2}
f_j=c_0^{r_0} c_1^{r_1} \cdots c_{N-1}^{r_{N-1}}\cdot H_2,
\end{equation}
where $r_i\in\mathbb{Z}_{\ge 0}$, and $H_2$ is an irreducible Laurent polynomial in the initial variables.
Let us suppose that $f_j$ is not irreducible.
Since any two elements from $\{c_i\} \cup \{d_i\} \cup \{e_i\}$ are coprime,
the only possible decompositions of $f_j$ compatible with both \eqref{fdecomp1} and \eqref{fdecomp2} are of the following two types:
\begin{equation}
f_j=Mc_k d_l, \label{fcd}
\end{equation}
or
\begin{equation}
f_j=M c_k e_l, \label{fce}
\end{equation}
where $M$ is a monomial in the initial variables $\{a_i\},\{b_i\}$.
Let us choose the same specific initial condition as in the previous paragraph
\[
a_n=1,b_n=1\ (0\le n\le N),\ \lambda=K=1,\ \mu=1/x,
\]
and take the limit $x\to 1$.
Then we have
\[
c_j=N,\ d_j=N^3,\ e_j=N^6,\ f_j=N^{10},
\]
for all $j\ge 0$.
We also have that the monomial $M\to \pm 1$.
Therefore
the degree (w.r.t.\ $N$) of the left hand side of equation \eqref{fcd} is $10$, while that of the right hand side is $4$, which is a contradiction.
Equation \eqref{fce} leads to a contradiction in the same manner, its right hand side having degree $7$.
Thus $f_j$ does not have a decomposition, and is therefore irreducible.
Co-primeness of two terms is directly proved by the irreducibility.
\paragraph{The case of $\boldsymbol{k=6}$:}
The proof is just the same as in the case of $k=5$.
We note that $g_j=N^{15}$ under the same conditions as in the previous case.
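The degree pattern used in the last three paragraphs ($\deg_N c=1$, $\deg_N d=3$, $\deg_N e=6$, $\deg_N f=10$, $\deg_N g=15$) follows from \eqref{periodtau3kai}: when all $\tau_n^t$ are equal to $N^{D_t}$, each of the $N$ terms of the sum contributes equally, which gives the recurrence $D_{t+1}=1+2D_t-D_{t-1}$. A one-line check:
\begin{verbatim}
# Degrees D_t of tau^t in N in the limit x -> 1, from
# D_{t+1} = 1 + 2 D_t - D_{t-1} with D_0 = D_1 = 0 (a_j = b_j = 1).
D = [0, 0]
for _ in range(5):
    D.append(1 + 2 * D[-1] - D[-2])
print(D[2:])   # [1, 3, 6, 10, 15] = degrees of c, d, e, f, g
\end{verbatim}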
\paragraph{The case of $\boldsymbol{k\ge 7}$:}
We have the following three types of decompositions at the same time
for $\tau_j^t$ $(t\ge 7)$:
\begin{align*}
\tau_j^t&=c_0^{r_0}\cdots c_{N-1}^{r_{N-1}}H_1=d_0^{s_0}\cdots d_{N-1}^{s_{N-1}}e_0^{t_0}\cdots e_{N-1}^{t_{N-1}} H_2\\
&=f_0^{p_0}\cdots f_{N-1}^{p_{N-1}} g_0^{q_0}\cdots g_{N-1}^{q_{N-1}}H_3,
\end{align*}
where $H_1,H_2,H_3$ are irreducible Laurent polynomials of initial variables.
Since the elements $c_i$ through $g_i$ are all irreducible, and any two of them are coprime, we conclude that $r_i=s_i=t_i=p_i=q_i=0$ for all $i$.
Therefore $\tau_j^t$ is irreducible for $t\ge 7$. Co-primeness of $\tau_j^t$ and $\tau_i^t$ is proved by their irreducibility and the cyclic property of the subscripts. Co-primeness of $\tau_j^t$ with an arbitrary $\tau_k^s$, $s< t$, is proved by the irreducibility of $\tau_j^t$ and by the fact that $\tau_j^t$ and $\tau_k^s$ have different degrees if $t\neq s$.
The proof of proposition \ref{kiyakulemmaperiod} is finished.
\hfill\hbox{$\Box$}\vspace{10pt}\break
The proof of theorem \ref{periodthm} is now completed.\hfill\hbox{$\Box$}\vspace{10pt}\break
Remember that theorem \ref{periodthm} is for the transformed function $\tilde{\tau}_n^t$, and the statement for the original $\tau_n^t$ is as follows:
\begin{Corollary} \label{periodthm2}
The function $\tau_n^t$ is an irreducible Laurent polynomial in the initial variables and in $(\lambda^2/\mu -1)$:
\[
\tau_n^t\in \mathbb{Z}[(\lambda^2/\mu -1)^{\pm},(\tau_n^0)^{\pm},(\tau_n^1)^{\pm};0\le n\le N-1],
\]
and two distinct terms are co-prime.
\end{Corollary}
Using corollary \ref{periodthm2}, we can prove our main theorem of co-primeness of the discrete Toda equation with periodic boundary condition.
\begin{Theorem}
Let us take $N\ge 6$.
The solution $I_n^t$, $V_n^t$ of the periodic discrete Toda equation (\eqref{dtodaIV1} and \eqref{dtodaIV2} with $I_{n+N}^t=I_n^t$ and $V_{n+N}^t=V_n^t$) satisfies the following `co-prime' property:
Let us define the set $\mathcal{D}=\{I_n^t\}_{0\le n\le N-1, 0\le t}\cup\{V_n^t\}_{0\le n\le N-1, 0\le t}$.
Two elements $D_n^t$ and $D_m^s$ in the set $\mathcal{D}$
do not have common factors other than monomials in the initial variables, on condition that $N-3\ge |n-m|\ge 3$ or $|t-s|\ge 2$, where $n$ and $m$ are considered modulo $N$.
\end{Theorem}
\textbf{Proof}\;\;
We use the relation \eqref{tauIV}, and the co-primeness of $\tau_n^t$ and $\tau_m^s$ for $(n,t)\neq (m,s)$ in corollary \ref{periodthm2}. The factor $(1-\lambda^2/\mu)$ is eliminated in $I_n^t$ and $V_n^t$ from \eqref{tauIV}.
The rest of the proof is the same as in lemma \ref{locallemma}.\hfill\hbox{$\Box$}\vspace{10pt}\break
Note that we are not claiming that co-primeness fails for some pair of terms when $N<6$: the above theorem gives a sufficient condition (adequate for large system size $N$) for co-primeness under the periodic boundary condition.
\section{Concluding remarks and discussions}
In this paper, we studied the discrete Toda equation in terms of the properties of irreducibility and co-primeness of the solutions.
We studied the discrete Toda equation under three different boundary conditions: semi-infinite, molecule and periodic. We proved the co-prime property in all three cases.
Our results, along with preceding results for the discrete KdV equation and the Quispel-Roberts-Thompson type mappings \cite{dKdVSC,dKdVSC2}, justify our assertion that the coprime property is an integrability detector.
Since our results include the equation with periodic boundary condition, which cannot easily be treated by the singularity confinement approach, the coprime property is expected to apply to a wider class of integrable and non-integrable mappings, under more varied conditions, than conventional integrability tests.
The co-primeness has the further advantage that it carries global information on the common factors of the general terms of the equation.
Because of this global property, rigorously proving the co-primeness sometimes involves long and technical calculations. However, when we use the co-primeness as an aid to \textbf{conjecture} the integrability of a given equation, the difficulty of a proof does not pose a problem. We just have to compute a finite number of terms using mathematical software, and observe the appearance of common factors. If the computation is too heavy, it may be a good idea to substitute arbitrary \textbf{integer} values for some of the independent variables, which greatly reduces the computing time.
We note that irreducibility and co-primeness are not necessarily preserved after substituting numbers for the variables, but the result is usually reliable enough in practice to detect the appearance of common factors.
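As a minimal illustration of such an experiment, the following \texttt{sympy} sketch uses the special data $a_n=b_n=1$, $\lambda=K=1$, $\mu=1/x$ of the proofs above (for which the boundary condition gives $c_N=c_0$), computes $c$, $d$, $e$ for $N=4$, and lists all pairwise greatest common divisors; any non-trivial common factor would show up here. As just noted, the check is indicative rather than a proof, since the specialization need not preserve irreducibility.
\begin{verbatim}
import sympy as sp

x = sp.Symbol('x')
N = 4
c = [(k + 1) * x + N - k - 1 for k in range(N)]
c.append(c[0])                                 # c_N = c_0 for this data
d = [sp.expand(x * sum(ci ** 2 for ci in c[:k + 1])
               + sum(ci ** 2 for ci in c[k + 1:N])) for k in range(N)]

def e_term(n):                                 # the recurrence at t = 3
    s = (x * sum(d[j] ** 2 / (c[j] * c[j + 1]) for j in range(n + 1))
         + sum(d[j] ** 2 / (c[j] * c[j + 1]) for j in range(n + 1, N)))
    return sp.fraction(sp.cancel(c[n + 1] * s))[0]

terms = c[:N] + d + [e_term(n) for n in range(N)]
gcds = {sp.gcd(p, q) for i, p in enumerate(terms) for q in terms[i + 1:]}
print(gcds)                                    # only constants expected
\end{verbatim}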
One direction for future work is to study the co-primeness of other discrete integrable and non-integrable equations. In particular, we will investigate equations for which several integrability criteria give conflicting results on their integrability.
For example the Hietarinta-Viallet equation \cite{HV} passes the singularity confinement test, but it has a positive algebraic entropy \cite{BV}, which is an indication of non-integrability.
Some of the linearizable discrete mappings \cite{RGSM} do not pass the singularity confinement test, although their algebraic entropy is zero.
By applying the co-prime criterion and by investigating the common factors even more closely, we expect to obtain convincing results on the integrability of these equations in future works.
It is also a good idea to investigate the relation of our results with other integrability criteria such as the $p$-adic number theoretic interpretation of the confined singularities \cite{KMTT}, and the singularity confinement for ultra-discrete systems \cite{Joshi}, which has recently been studied in relation to the tropical geometry \cite{Ormerod}.
\section*{Acknowledgments}
The authors wish to thank Prof. R. Willox and Dr. T. Mase for useful comments.
This work is partially supported by Grant-in-Aid for Scientific Research of JSPS ($26\cdot 242$).
\section{Introduction}
Understanding the structure of the $\Lambda(1405)$ with spin-parity
$J^\pi=1/2^-$ and strangeness $S=-1$ is a
long-standing issue in hadron physics.
The mass of the $\Lambda(1405)$ is slightly less than the $\bar{K}N$ threshold
energy. The $\Lambda(1405)$ can be considered as a $\bar{K}N$ quasi-bound
state embedded in the $\pi\Sigma$ continuum~\cite{Dalitz:1959dn,Dalitz:1960du}.
Guided by this picture, $\bar{K}N$ interactions which reproduce the mass
of $\Lambda(1405)$ and two-body scattering data have been constructed phenomenologically~\cite{Akaishi:2002bg,Shevchenko:2011ce}.
On the other hand, $\bar{K}N$ interactions have been studied for a
long time based on
chiral SU(3) dynamics~\cite{Kaiser:1995eg,Oset:1997it,Hyodo:2011ur}.
Between the phenomenological and chiral SU(3) $\bar{K}N$ interactions,
subthreshold $\bar{K}N$ amplitudes are quite
different~\cite{Hyodo:2007jq}.
The phenomenological model describes $\Lambda(1405)$ as a single
pole of the scattering amplitude around 1405~MeV.
The $\bar{K}N$ amplitude from the interaction based on chiral SU(3)
dynamics has two poles, one of which is located not at 1405 MeV but around
1420 MeV~\cite{Oller:2000fj,Jido:2003cb}.
The differences in the pole structure come from the different
off-shell behavior,
especially as a consequence of the energy-dependence of the $\bar{K}N$ interaction.
The $\bar{K}N$ interaction based on chiral SU(3) dynamics
is energy-dependent, and its attraction becomes weaker as one moves below the $\bar{K}N$
threshold energy. Hence the (upper) pole of the $\bar{K}N$ amplitude shows up around 1420 MeV.
On the other hand, the phenomenological $\bar{K}N$ interaction is energy-independent and
strongly attractive so that the pole shows up around 1405 MeV.
These differences are enhanced in the so-called few-body kaonic nuclei, such as the strange
dibaryon resonance under discussion in the
$\bar{K}NN$-$\pi YN$ coupled
system~\cite{Yamazaki:2002uh, Yamazaki:2007cs, Dote:2008in, Dote:2008hw,
Wycech:2008wf, Barnea:2012qa, Shevchenko:2006xy, Shevchenko:2007ke,
Ikeda:2007nz, Ikeda:2008ub, Ikeda:2010tk}.
How a possible signature of this strange
dibaryon resonance shows up
in the resonance production reaction is also of interest as it reflects the two-body
dynamics of the $\Lambda(1405)$~\cite{Ohnishi:2013rix}.
One of the possible kaon-induced processes forming the $\Lambda(1405)$
is $K^-d\rightarrow \Lambda(1405)\,n$.
The signature of the
$\Lambda(1405)$ was observed in an old bubble-chamber experiment
that measured the $\pi\Sigma$ invariant mass distribution in the
$K^-d\rightarrow \pi^+\Sigma^-n$ reaction\,\cite{Braun:1977wd}.
A new experiment is planned at J-PARC\,\cite{Noumi}.
Theoretical investigations of the $K^-d\rightarrow \pi\Sigma n$ reaction
have previously
been performed in simplified models assuming a two-step process\,\cite{Jido:2009jf,Miyagawa:2012xz,Jido:2012cy,YamagataSekihara:2012yv}.
In this contribution we examine how the
$\Lambda(1405)$ resonance shows up in the $K^-d\rightarrow \pi\Sigma n$
reaction by making use of the approach based on the coupled-channels
Alt-Grassberger-Sandhas~(AGS) equations developed in
Refs.~\cite{Ikeda:2007nz, Ikeda:2008ub, Ikeda:2010tk, Ohnishi:2013rix}.
This is the first calculation of this process which incorporates
the full three-body dynamics.
\section{Three-body Scattering Equations}
\label{sec:1}
Throughout this paper, we assume that the three-body processes take place via
separable two-body interactions, which have the following form
in the two-body center-of-mass (CM) frame,
\begin{align}
V_{\alpha\beta}(\vec q_\alpha,\vec q_\beta; E) =
g_\alpha(\vec q_\alpha) \lambda_{\alpha\beta}(E) g_\beta (\vec q_\beta) ~,
\label{eq:v_sepa}
\end{align}
where $\vec q_\alpha$ [$g_\alpha(\vec q_\alpha)$] is the relative
momentum [form factor]
of the two-body channel $\alpha$; $E$ is the total energy of the
two-body system.
With this assumption the amplitudes for the
quasi-two-body scattering of an ``isobar'' and a spectator particle, $X_{ij}(\vec p_i, \vec p_j; W)$, are then obtained by solving
the AGS equations~\cite{Alt:1967fx,PhysRev.132.485},
\begin{align}
X_{ij}({\vec p}_i,{\vec p}_j,W)&=(1-\delta_{ij})Z_{ij}({\vec p}_i,{\vec p}_j,W)
\nonumber\\
&
+\sum_{n\ne i}\int d{\vec p}_n Z_{in}({\vec p}_i,{\vec p}_n,W)
\tau_n\left(W-E_n(\vec p_n)\right) X_{nj}({\vec p}_n,{\vec p}_j,W)~.
\label{AGS}
\end{align}
Here the subscripts $i,j,n$ specify the reaction channels; $W$ and $\vec p_i$ are the total scattering energy and the relative momentum of channel $i$
in the three-body CM frame, respectively;
$Z_{ij}({\vec p}_i,{\vec p}_j;W)$ and $\tau
_i\left(W-E_i(\vec p_i)\right)$ are the one-particle exchange potential and
the two-body propagator.
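For orientation, the following sketch (a single-channel toy, not the coupled-channels calculation of this work) illustrates how an equation of the form \eqref{AGS} is solved in practice: after partial-wave decomposition, the integral over the spectator momentum is discretized by quadrature, which turns the equation into the linear system $(1-Z\tau)X=Z$. The kernel and propagator below are placeholder functions chosen only for illustration.
\begin{verbatim}
import numpy as np

m = 48
xn, wn = np.polynomial.legendre.leggauss(m)  # nodes/weights on [-1, 1]
p = 2.0 * (1.0 + xn) / (1.0 - xn)            # map to (0, infinity)
jac = 4.0 / (1.0 - xn) ** 2                  # Jacobian of the map

P, Pp = np.meshgrid(p, p, indexing="ij")
Z = np.exp(-0.1 * (P - Pp) ** 2) / (1.0 + P * Pp)  # toy exchange kernel
tau = -1.0 / (4.0 + p ** 2)                        # toy isobar propagator

K = Z * (tau * wn * jac * p ** 2)[None, :]   # includes the p'^2 dp' measure
X = np.linalg.solve(np.eye(m) - K, Z)        # quasi-two-body amplitudes
print(X[0, 0])
\end{verbatim}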
With the quasi-two-body amplitudes,
the scattering amplitudes for the break-up process $d+\bar{K}\rightarrow \pi
+\Sigma+ N$
are obtained as
\begin{align}
T_{\pi\Sigma N\text{-}\bar{K}d}(\vec q_N,\vec p_N, \vec p_{\bar{K}},W)
&=
g_{Y_\pi}(\vec q_N) \tau_{Y_\pi Y_K}\left(W-E_N(\vec p_N)\right) X_{Y_K d}(\vec p_N, \vec p_{\bar{K}},W)
\nonumber\\
&+
g_{Y_\pi}(\vec q_N) \tau_{Y_\pi Y_\pi}\left(W-E_N(\vec p_N)\right) X_{Y_\pi d}( \vec p_N, \vec p_{\bar{K}},W)
\nonumber\\
&+
g_{N^*}(\vec q_\Sigma) \tau_{N^*N^*}\left(W-E_\Sigma(\vec p_\Sigma)\right) X_{N^* d}( \vec p_\Sigma, \vec p_{\bar{K}},W)
\nonumber\\
&+
g_{d_y}(\vec q_\pi) \tau_{d_yd_y}\left(W-E_\pi(\vec p_\pi)\right) X_{d_y d}( \vec p_\pi, \vec p_{\bar{K}},W)~,
\label{eq:t_break}
\end{align}
where $X_{Y_K d}(\vec p_N, \vec p_{\bar{K}},W)$ is the quasi-two-body
amplitude anti-symmetrized for two nucleons; the subscripts denote the
isobars.
The notation for the isobars is $Y_K=\bar{K}N$,
$Y_\pi = \pi Y$, $d = NN$, $N^*=\pi N$, and $d_y=YN$.
In this contribution we employ the first two terms of
Eq.~(\ref{eq:t_break}) as a first step.
These terms emerge directly from the $\Lambda(1405)$ in the final
state interaction; they are the dominant parts of the full T-matrix.
Using this T-matrix, the differential cross section of the
break-up process
$d+\bar{K}\rightarrow \pi +\Sigma+ N$ is calculated as:
\begin{align}
\frac{d\sigma}{d{E_n}} &= (2\pi)^4\frac{E_dE_{\bar{K}}}{Wp_{\bar{K}}}\frac{m_N m_\Sigma
m_\pi}{m_N+m_\Sigma+m_\pi}\nonumber\\
&\times\int d\Omega_{p_N}d\Omega_{q_N}
p_Nq_N \sum_{\bar{i}f}|<N\Sigma \pi|T(W)|d\bar{K}>|^2~~\label{eq:differential2},
\end{align}
where
$E_n$ is the neutron energy in the center-of-mass frame of $\pi\Sigma$
defined by
\begin{align}
E_n = m_N + \frac{p_N^2}{2\eta_N} ~~.\label{eq:inv_mass}
\end{align}
\section{Models of Two-body Interaction}
\label{sec:4}
We use two-body $s$-wave meson-baryon interactions obtained from
the leading order chiral Lagrangian,
\begin{equation}
L_{WT} = \frac{i}{8F_\pi^2}
Tr(\bar{\psi}_B\gamma^\mu[[\phi,\partial_\mu\phi],\psi_B]).
\end{equation}
Here, we examine two interaction
models, both of which are derived from the above Lagrangian
but have different off-shell behavior:
one is the energy dependent model (E-dep),
\begin{eqnarray}
V_{\alpha \beta}(q',q;E)
=&-&\lambda_{\alpha\beta}\frac{1}{32\pi^2 F_\pi^2}
\frac{2E-M_\alpha -M_\beta}{\sqrt{m_\alpha m_\beta}}
\left( \frac{\Lambda_\alpha^2}{q'\,^2+\Lambda_\alpha^2} \right)^2
\left( \frac{\Lambda_\beta^2}{q^2+\Lambda_\beta^2} \right)^2.
\label{eq:e-dep}
\end{eqnarray}
while the other is the energy independent model (E-indep),
\begin{eqnarray}
V_{\alpha \beta}(q',q)
=&-&\lambda_{\alpha\beta}\frac{1}{32\pi^2 F_\pi^2}
\frac{m_\alpha +m_\beta}{\sqrt{m_\alpha m_\beta}}
\left( \frac{\Lambda_\alpha^2}{q'\,^2+\Lambda_\alpha^2} \right)^2
\left( \frac{\Lambda_\beta^2}{q^2+\Lambda_\beta^2} \right)^2 .
\label{eq:e-indep}
\end{eqnarray}
Here, $m_\alpha$ ($M_\alpha$) is
the meson (baryon) mass;
$F_\pi$ is the pion decay constant;
$\lambda_{\alpha\beta}$ are determined by the
flavor SU(3) structure of the chiral Lagrangian.
In the derivation of these potentials we have assumed
the so-called ``on-shell factorization''~\cite{Oset:1997it}
for Eq.~(\ref{eq:e-dep}) and
$q, q'\ll M_\alpha$ for Eq.~(\ref{eq:e-indep}).
The cutoff parameters $\Lambda$ are determined by fitting
experimental data, as shown in Table~\ref{tab:1}.
In the E-dep model, the $\bar{K}N$ amplitudes have two poles for $l=I=0$
in the $\bar{K}N$-physical and $\pi\Sigma$-unphysical sheets,
corresponding to
those derived from the chiral unitary model~\cite{Jido:2003cb}.
On the other hand, the E-indep model has a single pole
that corresponds to $\Lambda(1405)$.
It is interesting to examine how
this difference of the two-body interaction models
appears in the
neutron energy spectrum of the $K^- d \rightarrow
\pi\Sigma n$ reaction.
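For reference, a minimal sketch of evaluating the E-dep potential \eqref{eq:e-dep} with the dipole form factors is given below. The numerical values of $F_\pi$, the masses, the cutoffs and the coupling $\lambda_{\alpha\beta}$ are placeholders rather than the fitted parameters of this work ($\lambda_{\alpha\beta}=3$ is the usual $I=0$ $\bar{K}N\rightarrow\bar{K}N$ Weinberg-Tomozawa coefficient, quoted here as an assumption); all quantities are in MeV.
\begin{verbatim}
import numpy as np

F_PI = 92.4   # pion decay constant (illustrative value)

def dipole(q, cut):
    # dipole form factor (cut^2 / (q^2 + cut^2))^2
    return (cut ** 2 / (q ** 2 + cut ** 2)) ** 2

def v_edep(qp, q, E, lam_ab, M_a, M_b, m_a, m_b, cut_a, cut_b):
    # energy-dependent s-wave potential with dipole form factors
    return (-lam_ab / (32.0 * np.pi ** 2 * F_PI ** 2)
            * (2.0 * E - M_a - M_b) / np.sqrt(m_a * m_b)
            * dipole(qp, cut_a) * dipole(q, cut_b))

print(v_edep(qp=100.0, q=150.0, E=1400.0, lam_ab=3.0,
             M_a=939.0, M_b=939.0, m_a=495.7, m_b=495.7,
             cut_a=1000.0, cut_b=1000.0))
\end{verbatim}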
\begin{table}
\caption{Cutoff parameters of $\bar{K}N$-$\pi Y$ interaction.}
\centering
\label{tab:1}
\begin{tabular}{lccccc}
\br
&$\Lambda ^{I=0}_{\bar{K}N}$(MeV) &$\Lambda ^{I=0}_{\pi\Sigma}$(MeV)
&$\Lambda ^{I=1}_{\bar{K}N}$(MeV) &$\Lambda
^{I=1}_{\pi\Sigma}$(MeV)&$\Lambda ^{I=1}_{\pi\Lambda}$(MeV) \\[3pt]
\mr
E-dep&1000 & 700 & 725&725&725 \\
E-indep&1000 & 700 & 920&960&640 \\
\br
\end{tabular}
\end{table}
\section{Results and Discussion}
\label{sec:5}
In Fig.\ref{fig:1}, we present the differential cross section
of $K^- d\rightarrow \pi \Sigma n$ [Eq. (\ref{eq:differential2})]
computed using the E-dep (a) and E-indep (b) models, respectively.
We investigate the cross section for initial kaon momentum
$p_{K^-}^{lab}=1000$ MeV in accordance with the planned J-PARC
experiment~\cite{Noumi}.
Here, we decompose the isospin basis states into charge basis states
using Clebsch--Gordan coefficients:
the solid curve represents the $K^- + d\rightarrow\pi^++\Sigma^-+n$ reaction;
the dashed curve the $K^- + d\rightarrow\pi^-+\Sigma^++n$ reaction;
and the dotted curve the $K^- + d\rightarrow\pi^0+\Sigma^0+n$ reaction.
We subtract from the neutron energy $E_n$ the energy $E_{th}$ at which
the amplitudes have the $\bar{K}N$ threshold cusp,
i.e.\ the $\bar{K}N$ threshold cusp shows up in
the differential cross section at $E_n-E_{th} = 0$.
Well-defined maxima are found at
$E_n\sim 17$-$30$ MeV for the E-dep model, and
a peak or bumps at
$E_n\sim 32$-$38$ MeV for the E-indep model, depending on the charge
combination of $\pi\Sigma$ in the final state.
These peak and bump structures appear about
5 MeV higher in energy than the calculated
binding energy of the
$\Lambda(1405)$
($E_B\sim 13$ MeV for the E-dep model and
$E_B\sim 28$ MeV for the E-indep model).
The magnitude of the differential cross section for the E-dep model is
about twice as large as that for the E-indep model, and the interference
patterns with the backgrounds are quite different between the two models.
This clear difference in the differential cross section,
arising from the model dependence of the two-body interactions,
suggests that the $K^- d\rightarrow \pi \Sigma n$ reaction can indeed provide
useful information on the $\bar{K}N$-$\pi Y$ system.
\begin{figure}
\includegraphics[width=0.5\textwidth]{eps/partial_1000_dep2_shift.eps}
\includegraphics[width=0.5\textwidth]{eps/partial_1000_indep2_shift.eps}
\caption{Differential cross section
$d\sigma/dE_n$
for $K^- + d\rightarrow \pi+\Sigma +n$.
The initial kaon momentum is set to $p_{K^-}^{lab}=1000$~MeV.
Panel~(a): the E-dep model; Panel~(b): the E-indep model.
Solid curves: $\pi^+\Sigma^-n$;
dashed curves: $\pi^-\Sigma^+n$;
dotted curves: $\pi^0\Sigma^0n$ in the final
state, respectively.
}
\label{fig:1}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{eps/partial_1000_dep_shift.eps}
\includegraphics[width=0.5\textwidth]{eps/partial_1000_indep_shift.eps}
\caption{
Contribution of each partial wave component to the differential cross
section
$d\sigma/dE_n$
for $K^- + d\rightarrow \pi^++\Sigma^- +n$.
Panel~(a): the E-dep model; Panel~(b): the E-indep model.
The thick solid curve represents the sum over total orbital angular
momenta from $L=0$ to $14$;
the thin solid curve represents $L=0$ only;
the dashed curve $L=1$;
the dotted curve $L=2$;
the dashed-dotted curve $L=3$;
and the dashed-two-dotted curve $L=4$, respectively.
The initial kaon momentum is set to $p_{K^-}^{lab}=1000$~MeV.}
\label{fig:2}
\end{figure}
Finally, we examine the contribution of each partial-wave component
with total orbital angular momentum $L$ to the
differential cross section (Fig.~\ref{fig:2}).
We find that the $s$-wave component is dominant in the low-energy region,
but that around the $\bar{K}N$ threshold higher partial waves, such as the
$p$-wave component, become important.
In summary, we have calculated the differential cross sections
(\ref{eq:differential2})
for $K^- + d \rightarrow \pi+\Sigma+ n$ reactions.
We have found peak and bump structures
in the
neutron energy spectrum, and therefore it is possible to
observe the signal of the
$\Lambda(1405)$ resonance in the physical cross sections.
We have also shown that the $K^- d\rightarrow \pi\Sigma n$ reactions are
useful for judging existing dynamical models of $\bar{K}N$-$\pi\Sigma$
coupled systems with $\Lambda(1405)$.
Further improvements of the present model, accounting for the
contributions neglected in Eq.~(\ref{eq:t_break}) and for relativistic
corrections, are under investigation.
\ack
The simulation has been performed on a supercomputer (NEC SX8R) at the Research Center
for Nuclear Physics, Osaka University.
This work was partly supported by RIKEN Junior Research Associate Program,
by RIKEN iTHES Project
{and by JSPS KAKENHI Grants Nos. 25800170, 24740152 and 23224006}.
\section*{References}
\section{Introduction}
\label{intro}
Slow-roll inflation \cite{KT,Linde} requires an unusually flat scalar
potential.
This is quantified by the conditions
\begin{equation}
\left( \frac{V'}{V} \right)^2 \ll \frac{1}{M_{\rm Pl}^2}
\end{equation}
and
\begin{equation}
\label{src2}
\left| \frac{V''}{V} \right| \ll \frac{1}{M_{\rm Pl}^2}
\end{equation}
where $ M_{\rm Pl} = 1 / \sqrt{8\pi G} $.
The first condition is generally not difficult to achieve; one simply
needs to be sufficiently near an extremum of the potential with
positive potential energy.
In many cases `sufficiently' just means $ \ll M_{\rm Pl} $ from the
extremum, which is almost unavoidable!
However, the second condition presents the most, perhaps only, serious
obstacle to building a model of inflation \cite{fvi,iss}.
This is because the positive potential energy required for inflation
spontaneously breaks supersymmetry\footnote{
To build a model of inflation, not to mention the rest of particle
physics, in the absence of supersymmetry seems hopeless.
For an introduction to supersymmetry and supergravity, see
Ref.~\cite{susy}.},
generically inducing
\begin{equation}
\left| \frac{V''}{V} \right| \gsim \frac{1}{M_{\rm Pl}^2}
\end{equation}
for all scalar fields \cite{Dine,fvi,iss}, and in particular for the
inflaton \cite{fvi,iss}.
The $\sim$ holds when the field has no supersymmetric mass and the
supersymmetry breaking is only communicated via gravitational strength
interactions.
Many models of inflation are built ignoring gravitational strength
interactions, and so are implicitly setting $ M_{\rm Pl} = \infty $.
Clearly one cannot achieve Eq.~(\ref{src2}) in this context.
Models of inflation built in the context of supergravity
have traditionally resolved this problem by fine tuning, either
explicit or implicit.
Only recently have there been any plausible proposals for solving
this problem.
The first\footnote{
Natural inflation \cite{nat} naturally achieves a small $V''$ by
assuming an approximate global $U(1)$ symmetry, but does {\em not\/}
naturally satisfy Eq.~(\ref{src2}) because $V$ vanishes in the limit
where the symmetry is exact.
A hybrid natural inflation model might avoid this problem though,
as was noted in Ref.~\cite{fip}.}
was given in Refs.~\cite{fvi,iss}.
It employed forms for the Kahler potential that had been derived from
weakly coupled heterotic string theory, in combination with a subset
of the modular symmetries, to cancel the supergravity corrections.
This method should also work in appropriate limits of M-theory.
It requires an inflationary energy density well above the vacuum
supersymmetry breaking scale, $ V^{1/4} \gg M_{\rm s} $;
$ M_{\rm s} \sim 10^{10} $ to $ 10^{11}\,$GeV in our vacuum for
gravity mediated supersymmetry breaking.
A natural implementation of this method would be to have the
non-perturbative physics that leads to gaugino condensation at a scale
$ \Lambda_{\rm gc} \sim 10^{13}\,$GeV in our vacuum generate the
inflationary potential at a similar energy scale
$ V^{1/4} \sim \Lambda_{\rm gc} $.
It should be possible to stabilise the moduli during inflation by the
same method that stabilises them in our vacuum, though the transition
from inflation to vacuum could be dangerous if the end of inflation is
not sufficiently smooth.
A simple model of inflation with the appropriate energy scale and a
smooth end to inflation was given in Ref.~\cite{mut}.
This method has the practical disadvantage that it requires detailed
control of the effective supergravity theory, but otherwise remains
very promising.
It was also noted in Ref.~\cite{iss} that the supergravity corrections
could be avoided if the inflationary potential energy was dominated by
the $D$-term, and a hybrid inflation \cite{hybrid,fvi} implementation
of that idea was constructed with a Fayet-Iliopoulos term dominating
the energy density.
The main problem with this method is to obtain a Fayet-Iliopoulos
term at a low enough energy scale to obtain the COBE normalisation for
the density perturbations, but at a scale higher than the $F$-term
supersymmetry breaking to avoid the $F$-term supersymmetry breaking
inducing too large a mass for the inflaton.
The stabilisation of the moduli, the dilaton in the case of the
heterotic string, may also be a serious problem.
A Fayet-Iliopoulos term does arise in many compactifications of the
heterotic string \cite{FI}, but its scale,
typically $ V_{\rm FI}^{1/4} \sim 10^{17} $ to $ 10^{18}\,$GeV,
a conservative lower bound being
$ V_{\rm FI}^{1/4} > 2 \times 10^{16}\,$GeV,
is at the limit of being too high for any type of inflation to meet
the COBE constraint \cite{iss}, and in particular is always too high
if the slope of the inflaton's potential is dominated by the loop
correction as in Ref.~\cite{Dvali}, in which case the COBE
normalisation requires
$ V_{\rm infl}^{1/4} \sim 5 \times 10^{15}\,$GeV.
However, in M-theory scales are more flexible \cite{Witten}, and so it
may be possible to obtain a Fayet-Iliopoulos term at a sufficiently
low scale.
There still remains the problem of stabilising the moduli.
One possibility would be if either the gauge coupling, or the
Fayet-Iliopoulos term itself, had a large non-perturbative dependence
on the moduli.
Whether this can be achieved at the same time as having a sufficiently
low energy scale for the Fayet-Iliopoulos term remains to be seen.
Another possibility would be to generate the Fayet-Iliopoulos term by
field theory methods at a lower energy scale, but as this probably
requires $F$-term supersymmetry breaking it may be difficult to avoid
the inflaton getting too large a mass from this source.
Especially in this latter case, one might also have to consider the
inflaton dependence of the gauge coupling and Fayet-Iliopoulos term,
which could provide an effective mass for the inflaton.
The method of \cite{Murayama} uses a global Heisenberg symmetry,
which has been derived from string theory at tree and one loop level,
to cancel the supergravity corrections.
The stabilisation of the modulus that forms an integral part of the
mechanism is a serious problem for this method.
This is the only method that could conceivably implement the naively
popular $\phi^n$ chaotic inflation models.
The method of \cite{Ross} is somewhat complex and requires specific
couplings of a Goldstone boson to the inflaton, but is an interesting
possibility.
In this paper I will use the method of Ref.~\cite{fip}.
It has the advantages that it does not require any special features of
the high energy theory, and can be implemented at the scale
$ V^{1/4} \sim 10^{10} $ to $ 10^{11}\,$GeV where the moduli are
already stabilised.
I will rely heavily on Ref.~\cite{fip}.
The reader would benefit from reading it first.
From now on I set $ M_{\rm Pl} = 1 $.
The potential of the model (see Figure~\ref{potfig}) is \cite{fip}
\begin{equation}
\label{pot}
V(\phi) = V_0 \left[ 1 - \frac{1}{2} f(\epsilon\ln\phi) \phi^2
+ \ldots \right]
\end{equation}
with $ f(0) \sim 1 $, the minimum value expected in a generic
supergravity theory, and $\epsilon \ll 1 $.
The function $f$ is determined by the renormalisation group running of
the mass of the inflaton $\phi$.
The potential is assumed to have a maximum at $ \phi = \phi_* $,
with $ V_0^{1/2} \ll \phi_* \ll 1 $.
We then have $ f_* = {\cal O}(\epsilon) $, allowing slow roll inflation
near $\phi_*$ \cite{fip}.
We are free to choose $ \phi_* = e^{-1/\epsilon} $, so that
$ f_* = f(-1) $.
The most natural scale for the potential is $ V_0 \sim M_{\rm s}^4 $,
where $M_{\rm s}$ is the supersymmetry breaking scale in our vacuum.
I will take $ M_{\rm s} \sim 10^{-8} $ corresponding to gravity
mediated supersymmetry breaking.
In this case the COBE normalisation gives $ \phi_* \sim 10^{-11} $ and
so $ \epsilon \simeq 0.04 $ \cite{fip}.
In Section~\ref{renorm}, $\epsilon$ will be derived from a gauge
coupling of strength similar to that of the GUT gauge coupling
inferred from the LEP data, $\alpha_{\rm GUT} \sim 0.04$.
It is worth noting that the fact that $M_{\rm s}$ is so small is
crucial to our being able to achieve slow roll inflation without fine
tuning.
\section{The renormalisation group}
\label{renorm}
In this section I will give a rough derivation of the function
$f(\epsilon\ln\phi)$.
The inflaton $\phi$ gives masses proportional to its expectation value
to the fields to which it couples, and so $\phi$ acts like an infra-red
cutoff to the renormalisation.
I will assume that $\phi$ is charged under some asymptotically free
gauge group and for simplicity neglect the Yukawa couplings.
Defining
\begin{equation}
x \equiv \epsilon \ln \phi
\end{equation}
so that $ x_* = -1 $,
the renormalisation group equation \cite{susy} for $\phi$'s mass becomes
\begin{equation}
\frac{dm_\phi^2}{dx} = - \frac{ 2 c \alpha }{ \pi \epsilon }
\tilde{m}^2
\end{equation}
where $c$ is the quadratic Casimir invariant of $\phi$'s
representation under the gauge group.
For example, $ c = 3/4 $ for a fundamental representation of SU(2) and
$ c = 4/3 $ for that of SU(3).
$ \alpha = g_{\rm gauge}^2 / 4 \pi $, where $g_{\rm gauge}$ is the
gauge coupling.
$\tilde{m}$ is the gaugino mass.
The renormalisation of $\alpha$ is given by
\begin{equation}
\frac{d\alpha}{dx} = - \frac{ b \alpha^2 }{ 2 \pi \epsilon }
\end{equation}
where, for example, $ b = 3 N_{\rm c} - N_{\rm f} $ for an
SU($N_{\rm c}$) gauge group with $N_{\rm f}$ pairs of fundamentals and
antifundamentals.
$b>0$ corresponds to an asymptotically free gauge group.
The renormalisation of $ \tilde{m} $ is given by
\begin{equation}
\tilde{m}(x) = \frac{\alpha(x) \tilde{m}(0)}{\alpha(0)} \,.
\end{equation}
Integrating these equations gives
\begin{equation}
m_\phi^2(x) = - m_\phi^2(0) \left[ A
\left( \frac{1}{ 1 + \frac{b}{2\pi} \frac{\alpha(0)}{\epsilon} x }
\right)^2 - \left( A + 1 \right) \right]
\end{equation}
where
\begin{equation}
A = \frac{ 2c }{ b } \left( \frac{ \tilde{m}^2(0) }{ - m_\phi^2(0) }
\right) \,.
\end{equation}
To lowest order, $\epsilon$ is defined by $ m_\phi^2(-1) = 0 $, which
gives
\begin{equation}
\epsilon = \alpha(0) \frac{b}{2\pi}
\left[ A + 1 + \sqrt{A(A+1)} \right] \,.
\end{equation}
Therefore
\begin{equation}
\label{mphi}
m_\phi^2(x) = - (A+1) m_\phi^2(0)
\left[ \left( \frac{y_\infty}{y_\infty+1+x} \right)^2 - 1 \right]
\end{equation}
where
\begin{equation}
\label{yinf}
y_\infty = A + \sqrt{ A ( A + 1 ) } \,.
\end{equation}
The potential, Eq.~(\ref{pot}), is then
\begin{equation}
V(\phi) = V_0 + \frac{1}{2} m_\phi^2(\epsilon\ln\phi) \phi^2 + \ldots
\end{equation}
with $ m_\phi^2(\epsilon\ln\phi) $ given by Eq.~(\ref{mphi})
and $ m_\phi^2(0) \sim - V_0 $.
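As a numerical illustration of these formulas, the following sketch evaluates $\epsilon$, $y_\infty$ and the running mass of Eq.~(\ref{mphi}), and confirms that the mass runs through zero at $x=-1$ and reproduces its input value at $x=0$. The values of $c$, $b$, $\alpha(0)$ and the mass ratio are illustrative assumptions, not fitted numbers.
\begin{verbatim}
import numpy as np

# SU(2) fundamental (c = 3/4), b = 5, a GUT-like coupling
# alpha(0) = 0.04, and tilde-m^2(0)/(-m_phi^2(0)) = 1:
# all illustrative assumptions.
c, b, alpha0, ratio = 0.75, 5.0, 0.04, 1.0
A = 2.0 * c / b * ratio
y_inf = A + np.sqrt(A * (A + 1.0))
eps = alpha0 * b / (2.0 * np.pi) * (A + 1.0 + np.sqrt(A * (A + 1.0)))

def m_phi2(x, m2_0=-1.0):        # running mass in units of |m_phi^2(0)|
    return -(A + 1.0) * m2_0 * ((y_inf / (y_inf + 1.0 + x)) ** 2 - 1.0)

print(eps, y_inf)
print(m_phi2(-1.0), m_phi2(0.0))  # ~ (0.0, -1.0): zero mass at x = -1
\end{verbatim}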
\section{Initial conditions}
In Ref.~\cite{fip}, I said that the above model, with the slow roll
inflation occurring as $\phi$ rolls from the maximum at
$ \phi = \phi_* $ towards the false vacuum at $\phi=0$, had
problematic initial conditions.
Here, I argue to the contrary that this model has very natural initial
conditions if we choose the vacuum at $ \phi > \phi_* $ to be another
false vacuum.
Then the inflaton $\phi$ will be easily trapped in the large false
vacuum potential well at $ \phi > \phi_* $, giving rise to eternal
\cite{Linde} old inflation.
There will be an extremely small, but {\em finite}, chance that
quantum fluctuations will kick $\phi$ to the top of the barrier at
$ \phi = \phi_* $, where once again eternal inflation occurs.
The chance that this will happen is extremely small, but, because it
is finite, it will be more than compensated for by the two lots of
eternal inflation.
Thus the initial conditions for the slow roll inflation,
$ \phi = \phi_* $, are naturally obtained.
\section{Slow roll inflation}
As we are focussing on the case where the slow roll inflation
occurs as $\phi$ rolls off the maximum at $ \phi = \phi_* $ towards
the false vacuum at $\phi=0$, it will be convenient to define
\begin{equation}
y \equiv \epsilon \ln \left( \frac{\phi_*}{\phi} \right) = - 1 - x \,,
\end{equation}
so that $ y_* = 0 $ and $ y > 0 $ during the slow roll inflation, and
\begin{equation}
g(y) \equiv - f(x) - \frac{\epsilon}{2} f'(x) \,,
\end{equation}
so that $ g_* = 0 $, $ g > 0 $ during the slow roll inflation, and
$ g_*' = f_*' + {\cal O}(\epsilon) $.
Eq.~(19) of Ref.~\cite{fip} then becomes
\begin{equation}
\label{tauint}
\tau = \epsilon H t
= \frac{2}{3} \int \frac{ dy }{ 1 - \sqrt{ 1 - 4g/3 } } \,.
\end{equation}
Following Section~\ref{renorm}, we take as an example
\begin{equation}
g(y) = a \left[ \left( \frac{y_\infty}{y_\infty-y} \right)^2 - 1 \right]
\end{equation}
with
\begin{equation}
a = ( 1 + A ) f(0) + {\cal O}(\epsilon)
\end{equation}
and $y_\infty$ given by Eq.~(\ref{yinf}).
Then
\begin{equation}
\label{gp}
g_*' = \frac{2a}{y_\infty} \,.
\end{equation}
With this choice for $g$, we can integrate Eq.~(\ref{tauint}) to give
\begin{equation}
\label{tau}
\tau = \frac{1}{g_*'} \left\{ \ln y
+ \left( 1 - \frac{y}{y_\infty} \right)
\left( 1 + \sqrt{ 1 - \frac{4g}{3} } \right)
- \ln \left[ 1 + \left( 1 - \frac{y}{y_\infty} \right)
\sqrt{ 1 - \frac{4g}{3} } \right]
- 2 + \ln 2 \right\} \,.
\end{equation}
This is only valid for $ g < g_{\rm fr} = 3/4 $ or
\begin{equation}
\label{yfr}
y < y_{\rm fr}
= y_\infty \left( 1 - \frac{ 1 }{ \sqrt{ 1 + \frac{3}{4a} } } \right)
\end{equation}
or
\begin{equation}
\label{tfr}
\tau < \tau_{\rm fr} = \frac{1}{g_*'} \left( \ln y_{\rm fr}
+ \frac{ 1 }{ \sqrt{ 1 + \frac{3}{4a} } } - 2 + \ln 2 \right) \,.
\end{equation}
For $ \tau > \tau_{\rm fr} $, $\phi$ fast rolls towards $\phi=0$.
I will neglect the small number of $e$-folds of inflation during this
fast rolling stage.
During slow roll, $ y \ll 1 $ and $ g \ll 1 $, and so \cite{fip}
\begin{equation}
\label{tsr}
\tau \simeq \tau_{\rm sr} = \frac{1}{g_*'} \ln y \,.
\end{equation}
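For concreteness, Eqs.~(\ref{yfr}) and (\ref{tfr}) are easily evaluated numerically; the parameter values in the following sketch are illustrative only.
\begin{verbatim}
import numpy as np

a, y_inf = 1.0, 2.0                   # illustrative values
gp = 2.0 * a / y_inf                  # g'_* = 2a / y_inf
s = 1.0 / np.sqrt(1.0 + 3.0 / (4.0 * a))
y_fr = y_inf * (1.0 - s)              # end of slow roll (g = 3/4)
tau_fr = (np.log(y_fr) + s - 2.0 + np.log(2.0)) / gp
print(y_fr, tau_fr)
\end{verbatim}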
\section{Ending inflation}
\label{end}
To end inflation, one must use a hybrid inflation type mechanism
\cite{hybrid,mut,inv} to exit from the false vacuum at some critical
value $ \phi_{\rm c} < \phi_* $.
A standard hybrid inflation exit from the false vacuum \cite{hybrid},
with the new field $\psi$ also having a mass squared
$ m_\psi^2 \sim V_0 $ \cite{Lisa}, and with a coupling
$ \lambda \lsim 1 $ between the fields,
\begin{equation}
V(\phi, \psi) = V_0
+ \frac{1}{2} \left[ g(y) + {\cal O}(\epsilon) \right] V_0 \phi^2
- \frac{1}{2} m_\psi^2 \psi^2 + \frac{1}{2} \lambda^2 \phi^2 \psi^2
+ \ldots
\end{equation}
would give $ \phi_{\rm c} = m_\psi / \lambda \gsim 10^{-16} $,
or $ y_{\rm c} \lsim 0.46 $.
Typically, $ g_{\rm c} > 3/4 $, in which case $\phi$ will be fast
rolling at $ \phi = \phi_{\rm c} $, making the precise value of
$\phi_{\rm c}$ not very important.
Such a hybrid inflation exit from the false vacuum may lead to a large
spike in the spectrum \cite{Lisa}, which could be dangerous.
In our model, contrary to that of Ref.~\cite{Lisa}, the inflaton is
typically fast rolling at $ \phi = \phi_{\rm c} $ which may make the
spike less dangerous \cite{Lisa}.
If dangerous, such a spike could be avoided by a mutated hybrid
inflation type exit from the false vacuum \cite{mut,Juan,inv}.
For example, one could naturally have an additional term
$ \sim V_0 \phi \psi $ in the potential which would avoid the spike
but leave $ \phi_{\rm c} $ essentially unchanged.
As the gauge coupling is getting stronger as $\phi$ is decreasing, it
is also conceivable that the exit from the false vacuum might be
controlled by strong coupling effects.
Finally, there will be a stage of non-slow roll inflation \cite{Lisa}
in which $\psi$ rolls from $ \psi = \psi_{\rm init} \sim 10^{-16} $ to
our vacuum at $ \psi = \psi_{\rm vac} \sim 1 $.
It will last for
\begin{eqnarray}
N_{\psi} & = & \frac{V_0}{2 m_\psi^2}
\left( 1 + \sqrt{ 1 + \frac{4 m_\psi^2}{3V_0} } \right)
\ln \frac{\psi_{\rm vac}}{\psi_{\rm init}} \\
\label{Npsi}
& \sim & \frac{37 V_0}{2 m_\psi^2}
\left( 1 + \sqrt{ 1 + \frac{4 m_\psi^2}{3V_0} } \right)
\end{eqnarray}
$e$-folds.
For $ V_0 \sim 10^{-32} $, and assuming that thermal inflation
\cite{thermal} occurs after this inflation, observable scales would
leave the horizon between 20 and 30 $e$-folds before the end of this
inflation \cite{David}.
If reheating was rapid and there was no thermal inflation, this would
be increased to between 40 and 50 $e$-folds at maximum.
For $ m_\psi^2 = V_0 $, Eq.~(\ref{Npsi}) gives $ N_\psi = 47 $,
so that even in the latter case observable scales would leave
the horizon during this non-slow roll stage of inflation, leading to an
unacceptable spectral index.
To get observable scales safely leaving the horizon during the slow
roll inflation, say at least between 10 and 20 $e$-folds before $\phi$
reached $ \phi = \phi_{\rm c} $, would require $ N_\psi \lsim 10 $ and
so $ m_\psi^2 \gsim 8 V_0 $ in the case with thermal inflation, and
$ N_\psi \lsim 30 $ and so $ m_\psi^2 \gsim 1.7 V_0 $ in the case
without.
The latter requirement is clearly not very severe, and even the former
is not unreasonable. For example,
$ V(\psi) = \frac{1}{2} V_0
\left[ 1 + \cos \left( 2 \pi \psi \right) \right] $,
which has a period of one, gives $ m_\psi^2 = 2 \pi^2 V_0 $.
However, I do not find this feature of the model entirely
satisfactory in the case with thermal inflation.
\section{The spectral index}
To lowest order in the slow roll approximation, the spectral index is
\cite{fip}
\begin{equation}
n = 1 + 2 \frac{V''}{V}
= 1 + 2 g'_* \left( y - \epsilon \right) \,.
\end{equation}
Eqs.~(\ref{gp}), (\ref{yfr}), (\ref{tfr}) and (\ref{tsr}) then give
\begin{equation}
\label{n}
n = 1 - 2 \epsilon g'_*
+ 8 a \left( 1 - \frac{ 1 }{ \sqrt{ 1 + \frac{3}{4a} } } \right)
\exp \left[ - \epsilon g'_* \left( N - N_{\rm fr} \right) - 2
+ \frac{ 1 }{ \sqrt{ 1 + \frac{3}{4a} } } \right]
\end{equation}
where $ N - N_{\rm fr} = ( \tau_{\rm fr} - \tau ) / \epsilon $ is the
number of $e$-folds of inflation from the scale $ k \propto e^{-N} $
leaving the horizon to $ \phi = \phi_{\rm fr} $.
From the discussion of Section~\ref{end}, one might expect observable
scales to leave the horizon during the interval
$ N - N_{\rm fr} \sim 10 $ to 20.
I have been assuming that $ \phi_{\rm c} < \phi_{\rm fr} $, as will
usually be the case.
If $ \phi_{\rm c} > \phi_{\rm fr} $, inflation may end slightly earlier
so that one should take a slightly larger value of $ N - N_{\rm fr} $.
A more accurate but implicit formula is given by using
\cite{2nd}
\begin{eqnarray}
n & = & 1 + 2 \frac{V''}{V} + 2.13 \frac{V'V'''}{V^2}
+ \frac{2}{3} \left( \frac{V''}{V} \right)^2 \\
& = & 1 + 2 \left( g - \epsilon g' \right)
- 2.13 \epsilon g \left( g' - \epsilon g'' \right)
+ \frac{2}{3} \left( g - \epsilon g' \right)^2
\end{eqnarray}
and Eq.~(\ref{tau}) instead of Eq.~(\ref{tsr}).
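A quick numerical evaluation of Eq.~(\ref{n}), with the same illustrative parameter values as in the sketches above:
\begin{verbatim}
import numpy as np

def spectral_index(dN, a=1.0, y_inf=2.0, eps=0.04):
    """Lowest-order spectral index; dN = N - N_fr."""
    gp = 2.0 * a / y_inf
    s = 1.0 / np.sqrt(1.0 + 3.0 / (4.0 * a))
    return (1.0 - 2.0 * eps * gp
            + 8.0 * a * (1.0 - s) * np.exp(-eps * gp * dN - 2.0 + s))

for dN in (10, 20, 30):
    print(dN, spectral_index(dN))
\end{verbatim}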
Some example spectra and spectral indices are plotted in
Figures~\ref{specfig} and~\ref{indfig} respectively.
\section{Conclusions}
In a previous paper \cite{fip} I described how to construct a
potential flat enough for slow roll inflation without fine tuning and
without making any assumptions about the high energy theory.
In this paper I have emphasised an implementation of that idea which
can naturally produce an observationally viable spectral index
even for $ V^{1/4} \sim 10^{10} $ to $ 10^{11}\,$GeV.
Although any observationally viable spectral index can be obtained,
the model does predict a distinctive shape to the spectrum,
given to lowest order in the slow roll approximation by
\begin{equation}
P(k) = Q k^{-2\nu} \exp \left( \sigma k^\nu \right) \,,
\end{equation}
or in terms of the spectral index
\begin{equation}
n(k) = 1 - 2 \nu + \sigma \nu k^\nu \,,
\end{equation}
which may allow it to be distinguished from other models of inflation
by sufficiently precise observations.
The extra power on small scales, for a given tilt on larger scales,
may be helpful for mixed dark matter scenarios of structure formation.
Finally, I will compare the model with that of Randall, Soljacic and
Guth (RSG) \cite{Lisa}.
The models share some of the same motivations and features, such as to
use the flat directions characteristic of supersymmetric and string
theories, lifted by supersymmetry breaking at a scale
$ M_{\rm s} \sim 10^{10} $ to $ 10^{11}\,$GeV, to drive inflation.
Both also use a hybrid inflation type mechanism to exit from the false
vacuum.
However, they also have important differences:
\begin{enumerate}
\item
RSG fine tune the inflaton's mass while our model uses the method of
Ref.~\cite{fip}, \mbox{i.e.} the renormalisation of the inflaton's mass
that would be present anyway, to obtain a mass small enough for slow
roll inflation without fine tuning.
\item
The numbers tend to work out better here, \mbox{e.g.} the coupling
$\lambda$ involved in the hybrid inflation mechanism does not need to
be small; our small parameter $\epsilon$ is derived from a gauge
coupling of the order of the GUT gauge coupling inferred from the LEP
data.
The only parameter we may have to tune slightly is the mass of the
other field involved in the hybrid inflation mechanism, but even this
was to a lesser extent than in RSG.
\item
The possible spike in the spectrum produced by the hybrid inflation
mechanism is probably less dangerous here because the inflaton will
typically be fast rolling at this point.
\item
The spectral index in RSG is $ n > 1 $, while our model can give
$ n < 1 $ which may be preferred by observations.
\item
The spectrum has a significant bend. It is not clear whether this is
an advantage or not, but is at least observationally interesting.
\end{enumerate}
\subsection*{Acknowledgements}
I thank David Lyth for helpful discussions.
I am supported by a JSPS Fellowship at RESCEU, and my work is
supported by Monbusho Grant-in-Aid for JSPS Fellows No.\ 96184.
\frenchspacing
\section{INTRODUCTION}
Gaussian processes~\cite{RW05} possess properties that make
them the approach of choice in time series forecasting:
\begin{itemize}
\item A Gaussian process works with as little or as much
data as available.
\item Non-uniformly sampled observations, missing
observations, and observation noise are handled
organically.
\item Uncertainty of a future observation is predicted along
with the mean.
\end{itemize}
In a basic setting though, a Gaussian process models a
stationary time series with homoscedastic noise. When either
the covariance between observations or the noise vary depending
on observations inputs or outputs, predictions produced by a
Gaussian process with a stationary kernel and constant noise
variance will be either biased or overly uncertain, hence
handling non-stationarity and heteroscedasticity is crucial in
many applications. Non-stationarity often arises in financial
time series, where market volatility, affecting forecasting
uncertainty, changes with time~\cite{WHG14,NB18};
heteroscedastic noise is common in vital signs monitoring of
patients in intensive care units, where the noise depends on
patient activity and medical interventions~\cite{CRC19}, both
non-stationarity and heteroscedasticity are characteristic for
time series of sensor readings in mobile robotics~\cite{KPB07}.
Various approaches have been proposed for handling
non-stationarity using Gaussian processes. When a time series is
piecewise stationary, change point detection is deemed an
appropriate model, with a stationary homoscedastic Gaussian
process modelling stretches between consequent change
points~\cite{GOR09,STR10,CV11}. In cases where the covariance
or noise change gradually and smoothly, it is common to
introduce a non-parametric dependency of kernel and noise
parameters on inputs~\cite{GWB98,PS03,LSC05,SG06,KPB07,WN12},
however, this makes structural modelling of time series, which
constitutes an important advantage of Gaussian processes and
facilitates introduction of prior knowledge in the model, more
challenging.
Another popular way to handle both abrupt and gradual changes in
time series is through mapping of the input
space~\cite{SG92,MK98,DL13,SSZ+15,BHH+16,MG18,CPR+16}. Covariance
between observations depends on observations inputs as well as
on kernel parameters, and non-stationarity can be modelled by
smoothly modifying observation inputs (warping the input space).
Several methods have been proposed to learn the input space
transformation, and a number of other approaches can be viewed
as employing input space warping for handling non-stationarity.
However, many such approaches either meet difficulties in
practical application, or require elaborated inference
algorithms~\cite{TBT15,BHH+16,SD17,D18}, which may impact the
simplicity of use of Gaussian processes.
In this work we introduce a model for non-parametric warping of
input space for Gaussian processes, suitable in particular for
time series forecasting but also applicable to other domains.
The model is easy to implement, imposes only a small
computational overhead on training and prediction, and makes it
possible to use the whole arsenal of Gaussian process kernels to
model time series structure using prior knowledge. We provide a
reference implementation of the model, evaluate it on synthetic
and real-world data, comparing forecasting performance with both
baseline and state-of-the-art approaches, and show that the model
exhibits state-of-the-art forecasting performance at a lower
implementation and computation cost.
This work brings the following contributions:
\begin{itemize}
\item A novel approach to handling non-stationarity
in Gaussian processes.
\item A Gaussian process model for forecasting in
non-stationary time series.
\item A reference implementation of the model within a
probabilistic programming framework.
\end{itemize}
\section{PRELIMINARIES}
A Gaussian Process is a collection of random variables, any
finite number of which have (consistent) joint Gaussian
distributions. A Gaussian process is fully specified by its
mean function $m(x)$ and covariance, or kernel, function
$k(x,x')$ and defines a distribution over functions. The mean
function is often set to zero, $m(x) \equiv 0$. A Gaussian
process defines a distribution over functions:
\begin{equation}
f \sim \mathcal{GP}(m(x), k(x,x'))
\end{equation}
Any finite set of values of function $f$ at inputs $\pmb{x}$
follows the multivariate normal distribution
$\mathcal{N}(\pmb{\mu_x}, \Sigma_{\pmb{x}})$ with mean
$\pmb{\mu_x} = m(\pmb{x})$ and covariance matrix $\Sigma_{\pmb{x}} = \{k(x_i,x_j)\}$.
Posterior inference in Gaussian processes can be performed
analytically. Let $\pmb{f}$ be the observations at inputs
$\pmb{x}$. Then the posterior distribution of values $\pmb{f}_*$
at inputs $\pmb{x}_*$ is
\begin{equation}
\pmb{f}_*|\pmb{f} \sim \mathcal{N}(\pmb{\mu_{x_*}} + \Sigma_{\pmb{xx_*}}^\top\Sigma_{\pmb{x}}^{-1}(\pmb{f}-\pmb{\mu_{x}}), \Sigma_{\pmb{x_*}} - \Sigma_{\pmb{xx_*}}^\top\Sigma_{\pmb{x}}^{-1}\Sigma_{\pmb{xx_*}})
\label{eq:posterior}
\end{equation}
where $\Sigma_{\pmb{xx_*}}$ is the covariance matrix between $\pmb{x}$ and
$\pmb{x_*}$.
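For concreteness, the posterior computation in
equation~(\ref{eq:posterior}) can be sketched in a few lines of
Python/NumPy. This is an illustrative sketch only (the function and
variable names are ours, not those of any particular library),
assuming a zero mean function and using a Cholesky factorization for
numerical stability:
\begin{verbatim}
import numpy as np

def gp_posterior(K, K_s, K_ss, f):
    # K: n x n covariance of training inputs,
    # K_s: n x m cross-covariance, K_ss: m x m
    # covariance of test inputs, f: n outputs.
    L = np.linalg.cholesky(K + 1e-9 * np.eye(len(f)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, f))
    mu = K_s.T @ alpha               # posterior mean
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v             # posterior covariance
    return mu, cov
\end{verbatim}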
Kernel functions normally have hyperparameters; we shall write
$k(x, x'; \theta)$ to denote that the kernel function $k$ has
hyperparameters $\theta$, possibly multidimensional, or omit the
hyperparameters when they are clear from the context. Training a
Gaussian process involves choosing $\theta$ based on the
observations. For example, the Gaussian, or RBF, kernel has
the form
\begin{equation}
\mathrm{RBF}(x, x'; l) = \exp\left(- \frac {(x-x')^2} {2l^2}\right)
\label{eq:rbf}
\end{equation}
and is parameterized by a single hyperparameter $l$.
A straightforward way to choose $\theta$ is to maximize
log marginal likelihood $L$ of observations $(\pmb{x}, \pmb{f})$:
\begin{equation}
L = \log p(\pmb{f}|\pmb{x},\theta) = - \frac 1 2 \log | \Sigma |
- \frac 1 2(\pmb{f} - \pmb{\mu})^\top\Sigma^{-1}(\pmb{f} -
\pmb{\mu}) - \frac n 2 \log (2 \pi)
\label{eq:lml}
\end{equation}
where $n$ is the number of observations.
There is no closed-form solution for maximizing $L$ in general;
however, gradient-based optimization methods allow one to obtain an
approximate solution efficiently.
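A minimal sketch of evaluating equation~(\ref{eq:lml}), again
assuming a zero mean function (the names are ours); in practice the
gradient of this quantity with respect to $\theta$ would be supplied
to a gradient-based optimizer:
\begin{verbatim}
import numpy as np

def log_marginal_likelihood(K, f):
    # K: kernel matrix at hyperparameters theta,
    # f: observed outputs (zero mean assumed).
    n = len(f)
    L = np.linalg.cholesky(K + 1e-9 * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, f))
    # log|K| = 2 * sum(log(diag(L)))
    return (-np.sum(np.log(np.diag(L)))
            - 0.5 * f @ alpha
            - 0.5 * n * np.log(2 * np.pi))
\end{verbatim}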
\section{WARPED INPUT GAUSSIAN PROCESS MODEL}
A major advantage of Gaussian process regression in general, and
for application to time series in particular, is the explicit
inclusion of uncertainty in the model: both the mean and the
variance are predicted at unobserved inputs. However, perhaps
somewhat counterintuitively, the variance, given the kernel and
the kernel's hyperparameters, does not depend on observed
outputs. Indeed, the covariance matrix in equation
(\ref{eq:posterior}) does not depend on $\pmb{f}$.
One way to circumvent this limitation of Gaussian processes is
to introduce non-stationarity into the kernel function, such
that the covariance depends on both the distance between
inputs, $||x - x'||$, and on the inputs themselves. In some
kernels, such as the dot product kernel $k(x,x') = x \cdot x'$,
non-stationarity is fixed in the kernel design. In other
kernels, non-stationarity comes through dependency of kernel
hyperparameters on the inputs, and the dependency $\theta(x,x')$
itself can be learned from data~\cite{G97,PS03,MG18}. Related to
varying kernel hyperparameters with inputs is the idea
of \textit{warping} the input space~\cite{SG92}. A stationary
kernel depends on both the distance between inputs and
the hyperparameters. Consider, for example, the RBF kernel
(\ref{eq:rbf}). Increasing hyperparameter $l$, customarily called
the length scale, has the same effect on the covariance as
decreasing the distance between $x$ and $x'$ by the same
relative amount. Moving points away from each other will
effectively decrease the length scale and covariance between the
points. Warping of the input space has an intuitive
interpretation for time series: the time runs faster in areas
with high output volatility and slower when the output is
stable.
A research problem addressed by different warping methods is
how the warping is represented and what objective should be
maximized to learn the optimal warping for a given problem
instance. In what follows, we introduce warping of the input space
of a one-dimensional Gaussian process by imposing a prior on
the distances between adjacent inputs. We train the
process by maximizing the combined log marginal likelihood of the
observations under the prior and of the Gaussian process. The
model is trivially extended to a multi-dimensional Gaussian
process where only a single dimension is warped, such as
in the case of a time series where there are multiple predictors
but only the time needs to be warped to account for temporal
non-stationarity.
\subsection{Model}
In a Gaussian process model that handles non-stationarity
through displacement of observation inputs, the key design choice
is the form of the prior imposed on the inputs. One option is to impose
a Gaussian process prior on the inputs. This is a rich prior
that allows modelling complex structured non-stationarity; deep
Gaussian processes~\cite{D18} are a realization of such a prior.
However, inference in the presence of such prior requires
special techniques and is computationally expensive. On the
other extreme is imposing an independent Gaussian prior on each
input, which is related to the modelling of input
uncertainty~\cite{MR11,DTL16}. \cite{MR11} show, though, that
independent input noise may be reduced to independent output
noise, and as such is not sufficiently expressive for modelling
non-stationarity for forecasting. Here, we propose a prior that
is just a single step away from an independent prior on each
input, namely one which corresponds to a tridiagonal
covariance matrix $\Sigma=\{\sigma_{ij}\}$, such that
$\sigma_{ij}=0 \; \forall |i-j| > 1$, which is equivalent to
imposing independent priors on \textit{distances} between
adjacent inputs. An intuition behind this prior is that the
distance between adjacent locations increases, and the effective
length scale decreases, in areas with high volatility, and vice
versa in areas with low volatility. For convenience of
inference, we formulate the prior in terms of relative change of
distance between inputs. We exploit the structure of this prior
to specify the model compactly, without having to manipulate the
full covariance matrix of the prior.
Formally, let $\mathcal{GP}$ be a one-dimensional Gaussian
process. Let also $D$ be a distribution on $\mathbb{R}^+$.
Then, given inputs $\pmb{x}$, $x_{i+1} > x_{i}\;\forall i$, the
generative model for outputs is
\begin{align}
\label{eq:gen}
\tilde{x}_1 & = x_1 \\ \nonumber
\lambda_i & \sim D \\ \nonumber
\tilde{x}_i & = \tilde{x}_{i - 1} + \lambda_i(x_i - x_{i - 1}) \mbox{ for } i > 1\\ \nonumber
\pmb{f} & \sim \mathcal{GP}(\pmb{\mu}_{\pmb{\tilde{x}}}, \Sigma_{\pmb{\tilde{x}}})
\end{align}
In words, inputs $\pmb{x}$ are transformed (warped) into
$\pmb{\tilde{x}}$ by stretching or compressing distances
between adjacent inputs $x_{i-1}$ and $x_{i}$ by relative amounts
$\lambda_i$ drawn from $D$. For brevity, we call the introduced
model WGP in the rest of the paper. $D$ serves as a prior belief
on distances between adjacent inputs, relative to the original
distances. Without loss of generality, the mean of $D$ can be
assumed to be 1, so that the mean of the prior belief is that no
warping is applied.
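The warping step of model~(\ref{eq:gen}) is a cumulative sum over
stretched inter-input distances. A minimal sketch (the function and
variable names are ours):
\begin{verbatim}
import numpy as np

def warp_inputs(x, lam):
    # x: sorted inputs (length n);
    # lam: stretch factors lam[i] ~ D (length n-1).
    x_tilde = np.empty_like(x)
    x_tilde[0] = x[0]
    x_tilde[1:] = x[0] + np.cumsum(lam * np.diff(x))
    return x_tilde
\end{verbatim}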
\subsection{Training}
Training of a WGP model is performed by maximizing the log
marginal likelihood $L_{WGP}$:
\begin{equation}
L_{WGP} = L + \sum_{i=2}^{n} \log p_D\left(\frac {\tilde{x}_i - \tilde{x}_{i-1}} {x_i - x_{i-1}}\right) + C
\label{eq:lw}
\end{equation}
where $C$ is a normalization constant that does not depend on
either hyperparameters or observations and is not required for
training. As with kernel hyperparameters, derivatives of $L_{WGP}$
with respect to both hyperparameters and transformed inputs $\pmb{\tilde{x}}$ are
readily obtainable analytically or through algorithmic
differentiation~\cite{GW08}.
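A sketch of evaluating the objective~(\ref{eq:lw}), reusing the
\texttt{log\_marginal\_likelihood} and \texttt{warp\_inputs} helpers
sketched above. The log-normal choice of $D$ with unit mean is our
illustrative assumption, not mandated by the model; the reference
implementation instead relies on automatic differentiation within
Infergo:
\begin{verbatim}
import numpy as np
from scipy.stats import lognorm

def wgp_objective(x, x_tilde, f, kernel):
    # kernel(a): covariance matrix over inputs a.
    lam = np.diff(x_tilde) / np.diff(x)
    # Assumed prior D: log-normal with unit mean.
    s = 0.5
    log_prior = np.sum(lognorm.logpdf(
        lam, s=s, scale=np.exp(-0.5 * s**2)))
    return (log_marginal_likelihood(kernel(x_tilde), f)
            + log_prior)
\end{verbatim}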
\subsection{Forecasting}
After training, forecasting is virtually identical to that of a
regular Gaussian process, with one exception: for prediction in
a new location $x_*$, the warped image $\tilde{x}_*$ of the
location must be obtained for substituting into
(\ref{eq:posterior}). The possible options are:
\begin{itemize}
\item Choosing $\tilde{x}_*$ that maximizes $L_{WGP}$ for $\pmb{x}\circ x_*$ and $\pmb{f} \circ f_*$.
\item Setting $\lambda_*=1$ and, consequently, $\tilde{x}_* = \tilde{x}_n + x_* - x_n$.
\item Setting $\lambda_*=\lambda_n$ and $\tilde{x}_* = \tilde{x}_n + (x_* - x_n)\frac {\tilde{x}_n - \tilde{x}_{n-1}} {x_n - x_{n-1}}$.
\end{itemize}
The first option is best aligned with log marginal likelihood
maximization during training but computationally expensive. The
last option expresses a smoothness assumption: the length scale
is likely to be similar in adjacent inputs. We experimented
with the three options and found that empirically on synthetic
and real-world datasets predictive accuracy of the third option
is virtually indistinguishable from the first one. In the
empirical evaluation, we computed the warped location for
forecasting as $\tilde{x}_n + \lambda_n(x_* - x_n)$.
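In code, this extrapolation is a one-liner, continuing the variables
of the sketches above ($x_\star$ is a new, hypothetical input):
\begin{verbatim}
lam_n = (x_tilde[-1] - x_tilde[-2]) / (x[-1] - x[-2])
x_tilde_star = x_tilde[-1] + lam_n * (x_star - x[-1])
\end{verbatim}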
\subsection{Modelling Seasonality}
Time series are often modelled by combining \textit{trend} and
\textit{seasonality}, that is, similarity between nearby
observations on one hand and observations at similar phases of a
period on the other hand. In Gaussian processes, kernels based
on the periodic kernel~\cite{MK98} are used to model seasonality.
Warping of the input space, used to model non-stationarity, would
interfere with dependencies induced by the periodic kernel. Consider
monitoring of vital sign time series in intensive care
unit~\cite{CCP+12}: while volatility of the time series may
evolve over time, and warping the time may be adequate for
modelling non-stationarity, observations at the same
\textit{astronomical} time of the day tend to be similar.
A solution for warping the trend time but keeping the
seasonality time unwarped is to include both original and warped
dimension into the input space. This way, kernel features
modelling the trend and thus affected by non-stationarity are
made dependent on the warped time, and those modelling
seasonality --- on the original time. Generative
model~(\ref{eq:gen-seasonality}) extends (\ref{eq:gen}) by
combining $\pmb{x}$ and $\pmb{\tilde{x}}$ on input to the
Gaussian process:
\begin{align}
\label{eq:gen-seasonality}
\tilde{x}_1 & = x_1 \\ \nonumber
\lambda_i & \sim D \\ \nonumber
\tilde{x}_i & = \tilde{x}_{i - 1} + \lambda_i(x_i - x_{i - 1}) \mbox{ for } i > 1\\ \nonumber
\pmb{f} & \sim \mathcal{GP}(\pmb{\mu}_{\pmb{\tilde{x} \circ x}}, \Sigma_{\pmb{\tilde{x} \circ x}})
\end{align}
Consider, for example, the following kernel, composed of locally
periodic and trend terms:
\begin{align}
\label{eq:periodic-trend}
& k(x, x'; \theta) = c_1 \mathrm{RBF}(x, x') \mathrm{Periodic}(x, x') + c_2 \mathrm{Matern_{\frac 3 2}}(x, x') \\ \nonumber
& \mbox{where} \\ \nonumber
& \mathrm{RBF}(x, x'; l_1) = \exp \left( - \frac {(x-x')^2} {2l_1^2}\right) \\ \nonumber
& \mathrm{Periodic}(x, x';p, l_2) = \exp \left( - \frac {2\sin^2\left(\frac {\pi|x-x'|} p \right)} {l_2^2}\right) \\ \nonumber
& \mathrm{Matern_{\frac 3 2}}(x, x';l_3) = \left(1 + \frac {\sqrt{3}|x-x'|} {l_3}\right)\exp\left(-\frac {\sqrt{3}|x-x'|} {l_3}\right) \\ \nonumber
& \theta = (c_1, c_2, l_1, l_2, l_3, p)
\end{align}
In this kernel, the $\mathrm{RBF}$ and $\mathrm{Matern_{\frac 3
2}}$ components reflect local dependencies between inputs
and hence should be affected by input warping. The
$\mathrm{Periodic}$ component, however, expresses dependencies
between points at similar phases of different periods, with the
period length $p$ normally known upfront and staying fixed.
Thus, in the presence of warping, the modified kernel
$\tilde{k}(\cdot, \cdot)$ must receive both warped and original
inputs and pass appropriate inputs to each of the
components:
\begin{align}
\label{eq:periodic-trend-warped}
&\tilde{k} ((\tilde{x},x), (\tilde{x'},x'); \theta) = \\ \nonumber
& \hspace{6em}c_1 \mathrm{RBF}(\tilde{x}, \tilde{x'}) \mathrm{Periodic}(x, x')\!+\!c_2 \mathrm{Matern_{\frac 3 2}}(\tilde{x}, \tilde{x'})
\end{align}
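A sketch of kernel~(\ref{eq:periodic-trend-warped}), where each input
is carried as a (warped, original) pair; the helper names and the
scalar-distance formulation are ours:
\begin{verbatim}
import numpy as np

def rbf(d, l):
    return np.exp(-d**2 / (2 * l**2))

def periodic(d, p, l):
    return np.exp(-2 * np.sin(np.pi * np.abs(d) / p)**2
                  / l**2)

def matern32(d, l):
    r = np.sqrt(3) * np.abs(d) / l
    return (1 + r) * np.exp(-r)

def k_warped(xw, x, xw2, x2, c1, c2, l1, l2, l3, p):
    # Trend terms see warped inputs; the periodic
    # term sees the original time.
    return (c1 * rbf(xw - xw2, l1)
               * periodic(x - x2, p, l2)
            + c2 * matern32(xw - xw2, l3))
\end{verbatim}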
\section{EMPIRICAL EVALUATION}
\begin{table*}
\centering
\caption{Negative log predictive density on synthetic datasets.}
\label{tab:nlpd-synthetic}
\begin{tabular}{r c c c c}
{\bf dataset} & {\bf no warping} & {\bf warped} & {\bf warped, periodic} & {\bf deep GP} \\ \hline
trend & 0.2734$\pm$0.0464 & {\bf 0.2384$\pm$0.0649}& 0.2387$\pm$0.0620 & 0.6110$\pm$0.0511 \\
trend+seasonal & -0.2575$\pm$0.0273 & -0.3278$\pm$0.0288 & {\bf -0.3683$\pm$0.0312} & 0.1236$\pm$0.0659 \\
\end{tabular}
\end{table*}
\begin{figure*}
\begin{minipage}[c]{0.33\textwidth}
\centering
\includegraphics[width=0.96\linewidth]{trend+seasonal-6-no-warping.pdf}
(a) no-warping, NLPD = 0.1887
\end{minipage}
\begin{minipage}[c]{0.33\textwidth}
\centering
\includegraphics[scale=0.5,width=0.96\linewidth]{trend+seasonal-6.pdf}
(b) warped, NLPD = -0.1620
\end{minipage}
\begin{minipage}[c]{0.33\textwidth}
\centering
\includegraphics[scale=0.5,width=0.96\linewidth]{trend+seasonal-6-dgp.pdf}
(c) deep GP, NLPD = -0.0912
\end{minipage}
\caption{Forecasting on an instance from the synthetic dataset.}
\label{fig:synthetic}
\end{figure*}
\begin{table*}
\centering
\caption{Negative log predictive density on real-world datasets.}
\label{tab:nlpd-real}
\begin{tabular}{r c c c}
{\bf dataset} & {\bf no warping} & {\bf warped} & {\bf deep GP} \\ \hline
LIDAR & 0.2543 & {\bf 0.2290} & 0.2370 \\
Marathon & 0.1887 & -0.1620 & {\bf -0.2183} \\
Motorcycle & 1.9320 & {\bf 0.8063} & 1.1919
\end{tabular}
\end{table*}
\begin{figure*}
\begin{minipage}[c]{0.33\textwidth}
\centering
\includegraphics[width=0.96\linewidth]{marathon-men-gold-no-warping.pdf}
(a) no-warping, NLPD = 0.1887
\end{minipage}
\begin{minipage}[c]{0.33\textwidth}
\centering
\includegraphics[scale=0.5,width=0.96\linewidth]{marathon-men-gold.pdf}
(b) warped, NLPD = -0.1620
\end{minipage}
\begin{minipage}[c]{0.33\textwidth}
\centering
\includegraphics[scale=0.5,width=0.96\linewidth]{marathon-men-gold-dgp.pdf}
(c) deep GP, NLPD = -0.2183
\end{minipage}
\caption{Forecasting on Marathon dataset.}
\label{fig:marathon-men-gold}
\end{figure*}
\begin{figure*}
\begin{minipage}[c]{0.33\textwidth}
\centering
\includegraphics[width=0.96\linewidth]{motor-no-warping.pdf}
(a) no-warping, NLPD = 1.9320
\end{minipage}
\begin{minipage}[c]{0.33\textwidth}
\centering
\includegraphics[scale=0.5,width=0.96\linewidth]{motor.pdf}
(b) warped, NLPD = 0.8063
\end{minipage}
\begin{minipage}[c]{0.33\textwidth}
\centering
\includegraphics[scale=0.5,width=0.96\linewidth]{motor-dgp.pdf}
(c) deep GP, NLPD = 1.1919
\end{minipage}
\caption{Forecasting on Motorcycle dataset.}
\label{fig:motor}
\end{figure*}
The empirical evaluation relies on modelling and inference
capabilities provided by differentiable probabilistic
programming~\cite{GW08,GHN+14}. We implemented WGP using
Infergo~\cite{T19}, a probabilistic programming facility, and
GoGP~\cite{GoGP}, a library for probabilistic programming with
Gaussian processes. The source code, data, and detailed results
of empirical evaluations are available at
\url{https://bitbucket.org/dtolpin/wigp}. An implementation of
L-BFGS~\cite{LN89} provided by Gonum~\cite{Gonum} was used
for inferring hyperparameters. As a state-of-the-art algorithm
for non-stationary Gaussian process regression, we used an
implementation of deep Gaussian processes from
\url{https://github.com/SheffieldML/PyDeepGP}.
We evaluated the model on synthetic and real world data. Two
kernels were employed in the empirical evaluation:
\begin{enumerate}
\item A $\mathrm{Matern}_{\frac 5 2}$ kernel, used with both
synthetic and real-world data.
\item A weighted sum of $\mathrm{Matern}_{\frac 5 2}$ kernel and a
periodic kernel, used with synthetic data.
\end{enumerate}
The latter kernel was applied to synthetic data generated both
with and without seasonal component, to evaluate influence of
prior structural knowledge on one hand, and possible adverse
effect of model misspecification (periodic component where there
is no seasonality in the data) on the other hand. A
parameterized homoscedastic noise term was added to the kernel
in all evaluations. Vague log-normal priors were imposed on
kernel hyperparameters.
\subsection{Synthetic datasets}
Synthetic data was generated by sampling 100 instances from
Gaussian processes with a $\mathrm{Matern}_{\frac 5 2}$ kernel, and with a sum of
a $\mathrm{Matern}_{\frac 5 2}$ and a periodic kernel, to emulate seasonality in the
data. To emulate non-stationarity, log distances between inputs
were sampled from a Gaussian process with an RBF kernel and then
unwarped into equidistant inputs. Samples from the periodic
kernel component were drawn for equidistant inputs, in
accordance with the assumption that the seasonality period is
fixed.
Table~\ref{tab:nlpd-synthetic} provides negative log
predictive density (NLPD) for regular, unwarped, Gaussian
process, warped Gaussian process with and without the
periodic component, and deep Gaussian process on
the synthetic dataset. Smaller NLPD means better forecasting
accuracy. WGP outperforms both regular and deep Gaussian process
by a wide margin on the synthetic dataset. Using a kernel with
periodic component on seasonal data improves forecasting, but
accounting for non-stationarity through warping always results
in much better accuracy.
Figure~\ref{fig:synthetic} shows a typical forecast by each of
the models on a single instance from the synthetic dataset.
\subsection{Real-world datasets}
We used three real-world datasets for the evaluation:
\begin{itemize}
\item Marathon --- olympic marathon time records for years
1896--2016, obtained from
\url{https://www.kaggle.com/jayrav13/olympic-track-field-results}.
\item LIDAR --- observations from light detection and
ranging experiment~\cite{S94}.
\item Motorcycle --- data from a simulated motorcycle
accident~\cite{S85}.
\end{itemize}
Table~\ref{tab:nlpd-real} compares performance of the regular
Gaussian process, WGP, and deep Gaussian process on the data
sets. WGP shows the best predictive performance on LIDAR and
Motorcycle data. On the Marathon time series, deep Gaussian
process performs slightly better, apparently due to smoothness
of the data. Figures~\ref{fig:marathon-men-gold}
and~\ref{fig:motor} show forecasting with each of the models on
Marathon and Motorcycle datasets.
\section{RELATED WORK}
Work related to this research is concerned with Gaussian
processes for time series forecasting, non-stationarity in
Gaussian process regression, and warping of the input space for
representing non-stationarity, in the order of narrowing focus.
\cite{ROE+13} gives an introduction to Gaussian processes for
time series modelling, including handling of non-stationarity
through change point detection.
Non-stationarity in Gaussian processes is attributed to either
heteroscedasticity, that is, varying observation noise, or to
non-stationarity of the covariance, or to both.
Heteroscedasticity is addressed by modelling dependency of noise
on the input and, possibly,
output~\cite{GWB98,LSC05,KPB07,WN12,DTL16}. Non-stationarity of
the covariance is represented through change
points~\cite{GOR09,STR10,CV11}, non-stationary
kernels~\cite{G97,PS03,PKB08} or input and output space
transformations (warping)~\cite{SG92,CPR+16,MG18,DL13,D18}.
The current work uses warping of the input space to represent
non-stationarity. However, unlike previous research, only
observation inputs are transformed rather than the whole
input space, allowing for a simpler representation and
more efficient inference. Due to the non-parametric nature of
transformation employed in this work, the introduced model
is applicable to time series both with change points and with
smooth non-stationarities.
\section{CONCLUSION}
We introduced a Gaussian process-based model where
non-stationarity is handled through non-parametric warping of
observation inputs. In application to time series, the model
facilitates forecasting of future observations with variances
depending on outputs, as well as inputs, of past observations,
while staying within the framework of `standard' Gaussian
process inference. When the data is known to possess periodic
properties or non-local correlations, these correlations can be
encoded in the model while still handling non-stationarity
through warping. The introduced approach to input warping can be
used with existing Gaussian process libraries and algorithms,
and there is room for compromise between accuracy of modelling
non-stationarity and computation time.
It still remains an open question to which extent a more expressive
warping may improve the quality of forecasting. Combining the
introduced model with change-point detection may be beneficial
in cases of abrupt changes in process parameters. Still, in
cases where simplicity of implementation and robustness in face
of variability in time series are of high priority, the
introduced model appears to provide a practical and efficient
solution.
\clearpage
\section{Introduction}
The imprint of redshifted Lyman-$\alpha$ (Ly$\alpha$) forest
absorption on the spectra of distant quasars provides an exquisitely
sensitive probe of the distribution of baryons in the intergalactic
medium (IGM) at large cosmological lookback times. Among the
remarkable achievements of modern cosmology is the ability of
cosmological hydrodynamical simulations to explain the origin of this
absorption pattern, and reproduce its statistical properties to
percent level accuracy \citep[e.g.][]{Cen94,MEscude96,Rauch98}. But
the wealth of information which can be gathered from the Ly$\alpha$
forest is far from being exhausted. The thermal state of the baryons
in the IGM reflects the integrated energy balance of heating --- due
to the collapse of cosmic structures, radiation, and possibly other
exotic heat sources --- and cooling due to the expansion of the
Universe \citep[e.g.][]{miraldarees94,HuiGnedin97,HH03,Meiksin09}. Cosmologists
still do not understand how the interplay of these physical processes
sets the thermal state of the IGM, nor has this thermal state been
precisely measured.
There is ample observational evidence that ultraviolet radiation
emitted by the first star-forming galaxies ended the `cosmic dark
ages', ionizing hydrogen and singly ionizing helium at $z\sim 10$
\citep[e.g.][]{BarkanaLoeb01,Ciardi05,Fan06,Zaroubi2013}. A second
and analogous reionization episode is believed to have occurred at
later times $z\sim 3-4$ \citep{MadauMeiksin94,Jacobsen94,Reimers97,Croft97},
when quasars were sufficiently abundant to supply the hard photons
necessary to doubly ionize helium. The most recent observations from
HST/COS provide tentative evidence for an extended \ion{He}{2}
reionization from $z\sim 2.7-4$
\citep[][Worseck et al. 2013, in preparation]{Shull10,Furl10,Worseck2011}, with a duration of
$\sim 1$\,Gyr, longer than naively expected. Cosmic reionization
events are watersheds in the thermal history of the Universe,
photoheating the IGM to tens of thousands of degrees. Because cooling
times in the rarefied IGM gas are long, memory of this heating is
retained
\citep{miraldarees94,HuiGnedin97,Haehnelt98,HH03,Theuns02a,Theuns02b}.
Thus an empirical characterization of the IGM's thermal history
constrains the nature and timing of reionization.
From a theoretical perspective, the impact of reionization events on
the thermal state of the IGM is poorly understood. Radiative transfer
simulations of both hydrogen \citep{Bolton04,Iliev06,Titley07} and
helium \citep{Abel99,McQuinn09,MeiksinTittley12} reveal that the heat
injection and the resulting temperature evolution of the IGM depends
on the details of how and when reionization occurred. There is
evidence that the thermal vestiges of \ion{H}{1} reionization heating
may persist until as late as $z\sim 4-5$, and thus be observable in
the Ly$\alpha$ forest \citep{HH03,FurlanettoOh09,Cen09}, whereas for
HeII reionization at $z\sim 3$, the Ly$\alpha$ forest is observable
over the full duration of the phase transition. Finally, other
processes could inject heat into the IGM and impact its thermal
state, such as the large-scale structure shocks which eventually
produce the Warm Hot Intergalactic Medium
\citep[WHIM;e.g.][]{CenOstriker1999,Dave99,Dave01}, heating from
galactic outflows \citep{Kollmeier06,CenOstriker06}, photoelectric
heating of dust grains \citep{Nath99,Inoue03}, cosmic-ray heating
\citep{Nath93}, Compton-heating from the hard X-ray background
\citep{MadauEf99}, X-ray preheating \citep{ROG05,Tanaka12}, or blazar
heating \citep{Blazar1,Blazar2,Blazar3,Puchwein12}. Precise
constraints on the thermal state of the IGM would help determine the
relative importance of photoheating from reionization and these more
exotic mechanisms.
Despite all the successes of our current model of the IGM, precise
constraints on its thermal state and concomitant constraints on
reionization (and other exotic heat sources) remain elusive. Attempts
to characterize the IGM thermal state from Ly$\alpha$ forest
measurements have a long history. In the simplest picture, the gas in
the IGM obeys a power law temperature-density relation $T =
T_0(\rho\slash {\bar \rho})^{\gamma-1}$, which arises from the balance
between photoionization heating, and cooling due to adiabatic
expansion \citep{HuiGnedin97}. The standard approach has been to
compare measurements of various statistics of the Ly$\alpha$ forest to
cosmological hydrodynamical simulations. Leveraging the dependence of
these statistics on the underlying temperature-density relation, its
amplitude and slope parameters $(T_0,\gamma)$ can be constrained. To
this end a wide variety of statistics have been employed, such as the
power spectrum \citep{Zald01,VielBolton09} or analogous statistics quantifying the
small-scale power like wavelets \citep{Theuns02b,Lidz09,Garzilli2012}
or the curvature \citep{BeckerBolton2011}. The flux PDF
\citep{McDonald2000,kbv+07,Bolton08,Calura2012,Garzilli2012} and the shape of the
$b$-parameter distribution
\citep{Haehnelt98,Theuns00,Ricotti00,BryanMach00,Schaye00,McDonald2001,Theuns02a,Rudie2012}
have also been considered. Multiple statistics have also been
combined such as the PDF and wavelets \citep{Garzilli2012}, or PDF and
power spectrum \citep{VielBolton09}. Overall, the results of such
comparisons are rather puzzling. First, the IGM appears to be
generally too hot, both at low ($z \sim 2$) and high ($z\sim 4$)
redshift \citep{HH03}. In particular, the high inferred temperatures
at $z\sim 4$
\citep[e.g.][]{Schaye00,Zald01,McDonald2001,Theuns02b,Lidz09} suggest
that HeII was reionized at still higher redshift $z > 4$ \citep{HH03},
possibly conflicting with the late $z\sim 2.7$ reionization of HeII
observed in HST/COS spectra
\citep[][Worseck et al. 2013, in preparation]{Furl10,Shull10,Worseck2011,Syphers12}. Second,
\citet{Bolton08} considered the PDF of high-resolution quasar
spectra and concluded that, at $z\simeq 3$ the slope of the
temperature-density relation $\gamma$ is either close to isothermal
($\gamma = 1$) or even inverted ($\gamma < 1$), suggesting ``that the
voids in the IGM may be significantly hotter and the thermal state of
the low-density IGM may be substantially more complex than is usually
assumed.'' Although this result is corroborated by additional work
employing different statistics/methodologies \citep[][but see Lee et
al. 2012]{VielBolton09,Calura2012,Garzilli2012}, radiative
transfer simulations of HeII reionization cannot produce an isothermal
or inverted slope, unless a population other than quasars reionized
HeII \citep{Bolton04,McQuinn09,MeiksinTittley12}, which would fly in
the face of conventional wisdom. To summarize, despite nearly a decade
of theoretical and observational work, published measurements of the
thermal state of the IGM are still highly confusing, and concomitant
constraints on reionization scenarios are thus hardly compelling.
Fortunately, there is another important record of the thermal history
of the Universe: the Jeans pressure smoothing scale. Although baryons
in the IGM trace dark matter fluctuations on large Mpc scales, on
smaller scales $\lesssim 100~{\rm kpc}$, gas is pressure
supported against gravitational collapse by its finite temperature.
Analogous to the classic Jeans argument, baryonic fluctuations are
suppressed relative to the pressureless dark matter (which can
collapse), and thus small-scale power is `filtered' from the IGM
\citep{GnedHui98}, which explains why it is sometimes referred to as the
\emph{filtering scale}. Classically the \emph{comoving} Jeans scale is
defined as $\lambda_J^{0}=\sqrt{\pi c_s^2 / G\rho}(1+z)$, but in reality
the amount of Jeans filtering is sensitive
to both the instantaneous pressure and hence temperature of the IGM,
\emph{as well as the temperature of the IGM in the past}. This arises because
fluctuations at earlier times expanded or failed to collapse depending
on the IGM temperature at that epoch. Thus the Jeans scale
reflects the competition between gravity and pressure integrated
over the Universe's history, and cannot be expressed as a mere deterministic function of the
instantaneous thermal state. Heuristically, this can be understood because reionization
heating is expected to occur on the reionization timescales of several
hundreds of Myr, whereas the baryons respond to this heating on the
sound-crossing timescale $\lambda_J^{0}\slash [c_s(1+z)] \sim
\left(G\rho\right)^{-1\slash 2}$, which at mean density is comparable to the
Hubble time $t_H$.
\citet{GnedHui98} considered the behavior of the Jeans smoothing in
linear theory, and derived an analytical expression for the filtering
scale $\lambda_J$ as a function of thermal history
\begin{equation}
\begin{split}
\lambda_J^2(t)={}&\frac{1}{D_{+}(t)}\int_0^t dt'\, a^2(t')\,(\lambda_J^{0}(t'))^2 \\
&\times\left[\ddot{D}_+(t')+2H(t')\dot{D}_+(t')\right]\int_{t'}^{t}\frac{dt''}{a^2(t'')},
\end{split}
\label{eqn:jeans}
\end{equation}
where $D_+(t)$ is the linear growth function at time $t$, $a(t)$ is the
scale factor, and $H(t)$ the Hubble expansion rate.
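For intuition, equation~(\ref{eqn:jeans}) can be evaluated by direct
numerical quadrature once a cosmology and a thermal history are
specified. The sketch below assumes an Einstein--de Sitter universe,
where $a\propto t^{2/3}$, $D_+\propto a$ and $H=2/(3t)$, and a toy
$\lambda_J^0(t)$; these simplifying choices are ours, purely for
illustration:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

t0 = 1.0
a    = lambda t: (t / t0)**(2.0/3.0)
Dp   = a                            # EdS: D_+ = a
dDp  = lambda t: (2/3.) * t**(-1/3.) / t0**(2/3.)
ddDp = lambda t: -(2/9.) * t**(-4/3.) / t0**(2/3.)
H    = lambda t: 2.0 / (3.0 * t)
lamJ0 = lambda t: np.sqrt(a(t))     # toy history

def lambda_J_sq(t):
    def integrand(tp):
        inner, _ = quad(lambda s: a(s)**-2, tp, t)
        return (a(tp)**2 * lamJ0(tp)**2
                * (ddDp(tp) + 2*H(tp)*dDp(tp)) * inner)
    outer, _ = quad(integrand, 1e-4 * t0, t)
    return outer / Dp(t)
\end{verbatim}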
Although this simple linear approximation provides intuition about the
Jeans scale and its evolution, Fourier modes with wavelength
comparable to the Jeans scale are already highly nonlinear at $z\sim
3$, and hence this simple linear picture breaks down due
to nonlinear mode-mode coupling effects.
Thus given that we do not know the thermal history of the Universe,
that we expect significant heat injection from HeII reionization at
$z\sim 3-4$ concurrent with the epoch at which we observe the IGM, and
that IGM modes comparable to the Jeans scale actually respond
non-linearly to this unknown heating, the true relationship between the Jeans
scale and the temperature-density relation at a given epoch should be regarded as
highly uncertain.
Besides providing a thermal record of the IGM, the small-scale
structure of baryons, as quantified by the Jeans scale, is a fundamental
ingredient in models of reionization and galaxy formation. A
critical quantity in models of cosmic reionization is the
clumping factor of the IGM $C=\langle n_H^2\rangle/\bar{n}_H^2$
\citep[e.g.][]{MadauHR99,MEscudeHaehnelt00,PawlikSchaye2009,Haardt12,Emberson13,McQuinn11},
because it determines the average number of recombinations per atom,
or equivalently the total number of UV photons needed to keep the IGM
ionized. The clumping and the Jeans scale are directly
related. Specifically,
\begin{equation}
C = 1 + \sigma^2_{\rm IGM} \equiv 1 + \int d\ln k \,\frac{k^3 P_{\rm IGM}(k)}{2\pi^2},
\label{eqn:clump}
\end{equation}
where $\sigma^2_{\rm IGM}$ is the variance of the IGM density, and
$P_{\rm IGM}(k)$ is the 3D power spectrum of the
baryons in the IGM. Given the shape of $P_{\rm IGM}(k)$, the
integral above is dominated by contributions from small-scales
(high-$k$), and most important is the Jeans cutoff $\lambda_J$, which
determines the maximum $k$-mode $k_{\rm J}\sim 1\slash \lambda_{\rm
J}$ contributing. The small-scale structure of the IGM strongly
influences the propagation of cosmological ionization fronts during
reionization \citep{Iliev05}. Furthermore, several numerical studies
have revealed that the hydrodynamic response of the baryons in the IGM
to impulsive reionization heating is significant
\citep[e.g.][]{Gnedin00,Haiman01,Kuhlen05,Ciardi07,PawlikSchaye2009},
indicating that a full treatment of the interplay between IGM
small-scale structure and reionization history probably requires
coupled radiative transfer hydrodynamical simulations.
Reionization heating also evaporates the baryons from low-mass halos
or prevents gas from collapsing in them altogether
\citep[e.g.][]{BarkanaLoeb99,Dijkstra04}, an effect typically modeled
via a critical mass, below which galaxies cannot form
\citep{Gnedin2000,Bullock00,Benson02a,Benson02b,Somerville02,Kulkarni11}.
\citet{Gnedin2000} used hydrodynamical simulations to show that this
scale is well approximated by the \emph{filtering mass}, which is the
mass-scale corresponding to the Jeans filtering length,
i.e. $M_F(z)=4\pi{\bar \rho}\lambda^3_J/3$ \citep[see
also][]{Hoeft06,Okamoto08}. Finally, because the Jeans scale has
memory of the thermal events in the IGM (see eqn.~\ref{eqn:jeans}),
its value at later times can potentially constrain models of early IGM
preheating. In this scenario, heat is globally injected into the IGM
at high-redshift $z\sim 5-15$ from blast-waves produced by outflows
from proto-galaxies or miniquasars
\citep{Voit96,Madau2000,Madau01,CenBryan01,TheunsMoSchaye01,BensonMadau03,Scannapieco02,Scannapieco04},
or by X-ray radiation from early miniquasars \citep{Tanaka2012,Parsons13},
which sets an entropy floor in the IGM and raises the filtering mass
scale, inhibiting the formation of early galaxies.
A rough estimate of the filtering scale at $z=3$ can be obtained from
eqn.~(\ref{eqn:jeans}) and the following simplified assumptions: the
temperature at $z=3$ is $T(z=3) \approx 15000$\,K as suggested by
measurements \citep[e.g.][]{Schaye00,Ricotti00,Zald01,Lidz09},
temperature evolves as $T \propto 1+z$, and the typical overdensity probed
by the $z=3$ ${\rm Ly\alpha}$\ forest is $\delta \sim 2$
\citep{BeckerBolton2011}. One then obtains $\lambda_J(z=3) \approx
340$ kpc (comoving), smaller than the classical or instantaneous Jeans
scale $\lambda_J^0$ by a factor of $\sim 3$.
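These numbers, and the velocity scales quoted next, are easy to
verify numerically (a sketch; SI constants, the hydrogen atom mass
for the Doppler parameter, and the cosmology adopted in this paper
are our inputs):
\begin{verbatim}
import numpy as np

H0 = 70.0                    # km/s/Mpc
z, T = 3.0, 1.5e4            # redshift, temperature [K]
Hz = H0 * np.sqrt(0.28*(1 + z)**3 + 0.72)   # ~302

lam_J = 0.340                # comoving Mpc (estimate)
v_J = Hz * lam_J / (1 + z)   # ~26 km/s

kB, mH = 1.381e-23, 1.673e-27
b = np.sqrt(2 * kB * T / mH) / 1e3   # ~15.7 km/s
v_th = b / np.sqrt(2)                # ~11.1 km/s
\end{verbatim}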
This distance maps to a velocity interval
$v_J=Ha\lambda_J\approx 26\,{\rm km~s}^{-1}$ along the line of sight due to
Hubble expansion. Thermal Doppler broadening gives rise to a cutoff
in the longitudinal power spectrum, which occurs at a comparable
velocity $v_{\rm th}\approx 11.3\,{\rm km~s}^{-1}$, for gas heated to the same
temperature. The similarity of the characteristic scale of 3D Jeans
pressure smoothing and the 1D thermal Doppler smoothing suggests that
disentangling the two effects will be challenging given purely
longitudinal observations of the ${\rm Ly\alpha}$\ forest, as confirmed by
\cite{Peeples09a}, who considered the relative impact of thermal
broadening and pressure smoothing on various statistics applied to
longitudinal Ly$\alpha$ forest spectra. Previous work that has aimed
to measure thermal parameters such as $T_0$ and $\gamma$ from
Ly$\alpha$ forest spectra, have largely ignored the degeneracy of the
Jeans scale with these thermal parameters. The standard approach has
been to assume values of the Jeans scale from a hydrodynamical
simulation \citep[e.g.][]{Lidz09,VielBolton09,BeckerBolton2011}, which
as per the discussion above, is equivalent to assuming perfect
knowledge of the IGM thermal history. Because of the degeneracy with
the Jeans scale, it is thus likely that previous measurements of the
thermal parameters $T_0$ and $\gamma$ are significantly biased, and
their error bars significantly underestimated, if indeed the Jeans scale
takes on values different from those assumed (but see Zaldarriaga et
al. 2001 who marginalized over the Jeans scale, and Becker et al. 2011
who also considered its impact). We will investigate such
degeneracies in detail in this paper with respect to power-spectra,
and we consider degeneracies for a broader range of IGM statistics in
a future work (A.Rorai et al. 2013, in preparation).
The Jeans filtering scale can be directly measured using close quasar
pair sightlines which have comparable transverse separations
$r_{\perp} \lesssim 300\,$kpc (comoving; $\Delta\theta \lesssim
40\arcsec$ at $z=3$). The observable signature of Jeans smoothing is
increasingly coherent absorption between spectra at progressively smaller pair
separations that resolve it \citep{Peeples09b}. The idea of using pairs
to constrain the small scale structure of the IGM is not new. However,
all previous measurements have either focused on lensed quasars, which
probe extremely small transverse distances $r_{\perp} \sim 1\,$kpc
$\ll \lambda_J$
\citep[e.g.][]{Young81,McGill90,PIF98,Smette95,Rauch01} such that the
Ly$\alpha$ forest is essentially perfectly coherent, or real physical quasar pairs
with $r_{\perp} \sim 1$ Mpc $\gg \lambda_J$ \citep{DOdorico06} far too
large to place useful constraints on the Jeans scale. Observationally,
the breakthrough enabling a measurement of the Jeans scale is the
discovery of a large number of close quasar pairs
\citep{Hennawi04,BINARY,Myers08,HIZBIN} with $\sim 100\,$kpc
separations. By applying machine
learning techniques \citep{richards04,Bovy11,Bovy12} to the Sloan Digital Sky
Survey \citep[SDSS;][]{yaa+00} imaging, a sample of $\sim 300$ close
$r_{\perp} < 700\,$kpc quasar pairs at $1.6 < z \lesssim
4.3$\footnote{The lower redshift limit corresponds to Ly$\alpha$
forest absorption being above the atmospheric cutoff.} has been
uncovered \citep{Hennawi04,BINARY,HIZBIN}.
In this paper we introduce a new method which will enable the first
determination of the Jeans scale, and we estimate the precision with
which it can be measured from this close quasar pair dataset. We
explicitly consider degeneracies between the canonical thermal
parameters $T_0$ and $\gamma$, and the Jeans scale $\lambda_J$, which
have been heretofore largely ignored. To this end, we use an
approximate model of the Ly$\alpha$ forest based on dark matter only
simulations, allowing us to independently vary all thermal parameters
and simulate a large parameter space. The structure of this paper is
as follows: we describe how we compute the Ly$\alpha$ forest flux
transmission from dark matter simulations, and our parametrization of
the thermal state of the IGM in section \S~\ref{sim_met}. In
\S~\ref{ps_cps} we consider thermal parameter degeneracies which
result when only longitudinal observations are available, and we show
how the additional transverse information provided by quasar pairs can
break them. In \S~\ref{phase_ang} we introduce our new method to
quantify absorption coherence using the difference in phase between
homologous longitudinal Fourier modes of each member of a quasar
pair. We focus on the probability distribution function (PDF) of
these phase differences, and find that the shape of this phase PDF is
very sensitive to the Jeans smoothing. A Bayesian likelihood
formalism that uses the phase angle PDF to determine the Jeans scale
is presented in \S~\ref{jeans_meas}. Our Bayesian method allows us to
combine the Jeans scale information with other Ly$\alpha$ forest
statistics such as the longitudinal power spectrum, and we conduct a
Markov Chain Monte Carlo (MCMC) analysis in this section to determine
the resulting precision on $T_0$, $\gamma$, and $\lambda_J$ expected
for realistic datasets, explore parameter degeneracies, and study the
impact of noise and systematic errors. We conclude and summarize in
\S~\ref{summary}.
Throughout this paper we use the $\Lambda$CDM cosmological model with the
parameters $\Omega_m=0.28, \Omega_{\Lambda}=0.72, h=0.70, n=0.96, \sigma_8=0.82 $.
All distances quoted are in comoving kpc.
\section{Simulation Method}\label{sim_met}
\subsection{Dark Matter Simulation}
Our model of the Ly$\alpha$ forest is based on a single snapshot of a
dark matter only simulation at $z=3$. In this scheme, the dark matter
simulation provides the dark matter density and velocity field
\citep{Croft98,MeiksinWhite2001}, and the gas density and temperature are
computed using simple scaling relations motivated by the results of
full hydrodynamical simulations
\citep{HuiGnedin97,GnedHui98,GnedinBaker2003}. Our objective is then
to explore the sensitivity with which close quasar pairs can be used to constrain
the thermal parameters defining these scaling relations, and in particular the
Jeans scale. To this end, we require a dense sampling of the thermal
parameter space, which is computationally feasible with our
semi-analytical method applied to a dark matter simulation snapshot,
whereas it would be extremely challenging to simulate such a dense
grid with full hydrodynamical simulations. We do not model the
redshift evolution of the IGM, nor do we consider the effect of
uncertainties on the cosmological parameters, as they are constrained
by various large-scale structure and CMB measurements to much higher
precision than the thermal parameters governing the IGM.
We used an updated version of the TreePM code described in
\citet{TreePM} to evolve $1500^3$ equal mass ($3\times
10^{6}\,h^{-1}M_\odot$) particles in a periodic cube of side length
$L_{\rm box}=50\,h^{-1}$Mpc with a Plummer equivalent smoothing of
$1.2\,h^{-1}$kpc. The initial conditions were generated by displacing
particles from a regular grid using second order Lagrangian
perturbation theory at $z=150$. This TreePM code has been compared to
a number of other codes and has been shown to perform well for such simulations
\citep{Hei08}. Recently the code has been modified to use a hybrid
MPI+OpenMP approach which is particularly efficient for modern
clusters.
\subsection{Description of the Intergalactic Medium}
The baryon density field is obtained by smoothing the dark matter
distribution; this smoothing mimics the effect of the Jeans pressure
smoothing. For any given thermal model, we adopt a constant filtering
scale $\lambda_J$, rather than computing it as a function of the
temperature, and this value is allowed to vary as a free parameter
(see discussion below). The dark matter distribution is convolved
with a window function $W_{\rm IGM}$, which, in
Fourier space, has the effect of quenching high-$k$ modes
\begin{equation}
\delta_{\rm IGM}(\vec{k})=W_{\rm IGM}(\vec{k},\lambda_J)\delta_{\rm DM}(\vec{k})
\end{equation}
For example a Gaussian kernel with $\sigma=\lambda_J$,
$W_{\rm IGM}(k)=\exp (-k^2\lambda_J^2/2)$, would truncate the 3D power spectrum at $k \sim 1/\lambda_J$.
Because we smooth the dark matter particle distribution in real-space, it is more convenient to adopt a
function with a finite-support
\begin{equation}
\delta_{\rm IGM}(x) \propto \sum_i m_i K(|x-x_i|,R_J)
\end{equation}
where $m_i$ and $x_i$ are the mass and position of particle $i$, $K(r)$ is the kernel, and $R_J$ the smoothing parameter which sets the Jeans scale. We adopt the following cubic spline kernel
\begin{equation}
K(r,R_J)=\frac{8}{\pi R_J^3}
\begin{cases}
1-6\left(\frac{r}{R_J}\right)^2+6\left(\frac{r}{R_J}\right)^3 & \frac{r}{R_J} \leq \frac{1}{2} \\
2\left(1-\frac{r}{R_J}\right)^3 & \frac{1}{2}<\frac{r}{R_J}\leq 1 \\
0 & \frac{r}{R_J} >1
\end{cases}.\label{eqn:kernel}
\end{equation}
In the central regions the shape of $K(r)$ very closely resembles a Gaussian
with $\sigma \sim R_J/3.25 $, and we will henceforth take this $R_J/3.25$ to be our definition
of $\lambda_J$, which we will alternatively refer to as the `Jeans
scale' or the `filtering scale'. The analogous smoothing procedure is also
applied to the particle velocities; however, note that the velocity field has
very little small-scale power, and so the velocity distribution is
essentially unaffected by this pressure smoothing operation. As we discuss further in
Appendix \ref{sec:appendixa}, the mean inter-particle separation of our simulation cube $\delta l= L_{\rm box}\slash N_{\rm p}^{1\slash 3}$ sets the minimum Jeans smoothing
that we can resolve with our dark matter simulation, hence we can safely model values
of $\lambda_J > 50\,{\rm kpc}$.
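A sketch of the smoothing with kernel~(\ref{eqn:kernel}), written as
a direct particle sum for clarity (a production code would use a tree
or grid for neighbour search; the names are ours):
\begin{verbatim}
import numpy as np

def spline_kernel(r, RJ):
    # Cubic spline kernel of eqn. (kernel).
    q = np.asarray(r, dtype=float) / RJ
    w = np.zeros_like(q)
    i = q <= 0.5
    o = (q > 0.5) & (q <= 1.0)
    w[i] = 1 - 6*q[i]**2 + 6*q[i]**3
    w[o] = 2 * (1 - q[o])**3
    return 8.0 / (np.pi * RJ**3) * w

def smoothed_density(x, pos, mass, RJ):
    # Direct O(N) sum over particles at point x.
    r = np.linalg.norm(pos - x, axis=1)
    return np.sum(mass * spline_kernel(r, RJ))
\end{verbatim}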
At the densities typically probed by the Ly$\alpha$ forest, the IGM is
governed by relatively simple physics. Most of the gas has never been shock heated,
is optically thin to ionizing radiation, and can be considered to be
in ionization equilibrium with a uniform UV background. Under these
conditions, the competition between photoionization heating and
adiabatic expansion cooling gives rise to a tight relation between
temperature and density which is well approximated by a power law
\citep{HuiGnedin97},
\begin{equation}
T(\delta)=T_0 (1+\delta)^{\gamma-1} \label{eqn:rhoT}
\end{equation}
where $T_0$, the temperature at the mean density, and $\gamma$,
the slope of the temperature-density relation, both depend on the thermal history of the gas.
We thus follow the standard approach, and parametrize the thermal state of the IGM in this way.
Typical values for $T_0$ are on the order of $10^4$ K, while $\gamma$ is expected to be around
unity, and asymptotically approach the value of $\gamma_{\infty}=1.6$, if there is no other
heat injection besides (optically thin) photoionization heating.
Recent work suggests that an inverted temperature-density relation $\gamma < 1$
provides a better match to the flux probability distribution of the Ly$\alpha$ forest \citep{Bolton08}, but
the robustness of this measurement has been debated \citep{Lee2012}.
The optical depth for Ly$\alpha$ absorption is proportional to the density of
neutral hydrogen $n_{HI}$, which, if the gas is highly ionized ($x_{HI}\ll 1$)
and in photoionization equilibrium, can be calculated as \citep{gp65}
\begin{equation}
n_{HI} = \alpha(T) n_{H}^2/ \Gamma
\end{equation}
where $\Gamma$ is the photoionization rate due to a uniform metagalactic ultraviolet background (UVB),
and $\alpha(T)$ is the recombination coefficient which scales as $ T^{-0.7}$ at typical IGM temperatures.
These approximations result in a power law relation between Ly$\alpha$ optical depth and
overdensity often referred to as the fluctuating Gunn-Peterson approximation (FGPA)
$\tau\propto (1+\delta)^{2-0.7(\gamma-1)}$, which does not include the effect of peculiar motions and thermal
broadening. We compute the observed optical depth in redshift-space via the following
convolution of the real-space optical depth
\begin{equation}
\tau(v)=\int_{-\infty}^{\infty} \tau(x) \Phi(Hax+v_{p,\parallel}(x)-v, b(x))dx \label{eqn:tau},
\end{equation}
where $Hax$ is the real-space position in velocity units,
$v_{p,\parallel}(x)$ is the longitudinal component of the peculiar
velocity of the IGM at location $x$, and $\Phi$ is the normalized
Voigt profile (which we approximate with a Gaussian) characterized by
the thermal width $b=\sqrt{2k_BT/m}$, where we compute the
temperature from the baryon density via the temperature-density
relation (see eqn.~\ref{eqn:rhoT}). The observed flux transmission is
then given by $F(v)=e^{-\tau(v)}$.
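A sketch of the convolution in equation~(\ref{eqn:tau}) as a discrete
sum over the pixels of a skewer (the names are ours; the Gaussian
profile is normalized to unit area):
\begin{verbatim}
import numpy as np

def tau_redshift_space(v_out, x_vel, tau_x, v_pec, b):
    # v_out: output velocity bins; x_vel = H a x;
    # tau_x, v_pec, b: per-pixel real-space optical
    # depth, peculiar velocity, Doppler parameter.
    dv = x_vel[1] - x_vel[0]
    tau_v = np.zeros_like(v_out)
    for t, xv, vp, bb in zip(tau_x, x_vel, v_pec, b):
        phi = (np.exp(-((xv + vp - v_out) / bb)**2)
               / (np.sqrt(np.pi) * bb))
        tau_v += t * phi * dv
    return tau_v    # F(v) = exp(-tau_v)
\end{verbatim}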
We apply the aforementioned recipe to $2\times 100^2$ lines-of-sight
(\emph{skewers}) running parallel to the box axes, to generate the
spectra of $100^2$ quasar pairs, and we repeat this procedure for 500
different choices of the parameter set $(T_0,\gamma,\lambda_J)$. Half
of the spectra (the first member of each pair) are positioned on a
regular grid in the $y-z$ plane, in order to distribute them evenly in
space. Subsequently, a companion is assigned to each of them, and our
choice for the distribution of radial distances warrants further
discussion. Our goal is to statistically characterize the coherence of
pairs of spectra as a function of impact parameter, and near the Jeans
scale this coherence varies rapidly with pair separation. Hence
computing statistics in bins of transverse separation is undesirable,
because it can lead to subtle biases in our parameter determinations
if the bins are too broad. To circumvent these difficulties, we focus
our entire analysis on 30 linearly-spaced discrete pair separations
between $0$ and $714$ kpc. For each of the $100^2$ lines-of-sight on
the regular grid, a companion sightline is chosen at one of these
discrete radial separations, where the azimuthal angle is drawn from a
uniform distribution.
We follow the standard approach, and treat the metagalactic
photoionization rate $\Gamma$ as a free parameter, whose value is
fixed \emph{a posteriori} by requiring the mean flux of our Ly$\alpha$
skewers $\langle \exp(-\tau)\rangle$ to match the measured values from
\cite{faucher08}. This amounts to a simple constant re-scaling of the
optical depth. The value of the mean flux at $z=3$ is taken to be
fixed, and thus assumed to be known with infinite precision. This is
justified, because in practice, the relative measurement errors on the
mean flux are very small in comparison to uncertainties of the
thermal parameters we wish to study. In a future work, we conduct a full parameter
study using other Ly$\alpha$ forest statistics, and explore the effect
of uncertainties of the mean flux (A.Rorai et al. 2013, in preparation).
Examples of our spectra are shown in Figure~\ref{fig:spec_w_phases}.
\begin{figure*}
\centering \centerline{\epsfig{file=spectra_with_phases.ps,
width=\textwidth}}
\vskip -0.1in
\caption{\label{fig:spec_w_phases} An example of three simulated
spectra. The left and the right panels represent the same spectra
in the simulation calculated for two models with different Jeans
smoothing length $\lambda_J$. The middle and the lower panel
represent two spectra respectively at separation $0.5$ Mpc and $1$
Mpc from the top one. The coloured sine curves track homologous
Fourier modes in each spectrum, with rescaled mean and amplitude
to fit the range $[0,1]$. The wave shifts provide a graphical
visualization of phase differences, which we will use to quantify spectral
coherence and probe the Jeans scale. The right panels suggest that a larger
$\lambda_J$ results in greater spectral coherence and generally smaller
phase differences between neighboring sightlines.}
\end{figure*}
To summarize, our models of the Ly$\alpha$ forest are uniquely described by the three parameters
($T_0, \gamma$, $\lambda_J$), and we reiterate that these three parameters
are considered to be independent. In particular the Jeans scale
is not related to the instantaneous temperature at mean density $T_0$. Although this may at
first appear unphysical, it is motivated by the fact that $\lambda_J$ depends non-linearly
on the entire thermal history of the IGM (see eqn.~\ref{eqn:jeans}), and both this
dependence and the thermal history are not well understood, as discussed in the introduction.
Allowing $\lambda_J$ to vary independently is the most straightforward
parametrization of our ignorance. However, improvements in our theoretical understanding of the relationship
between $\lambda_J$ and the thermal history of the IGM ($T_0$,$\gamma$) could inform more intelligent parametrizations.
Furthermore, inter-dependencies between thermal parameters can also be trivially included into our
Bayesian
methodology for estimating the Jeans scale as conditional priors, e.g. $P(\lambda_J\,|\,T_0)$, in the parameter space.
\section{Power Spectra and Their Degeneracies}
\label{ps_cps}
Although many different statistics have been employed to isolate and
constrain the thermal information contained in Ly$\alpha$ forest
spectra, the flux probability density function (PDF; 1-point function) and
the flux power spectrum or auto-correlation function (2-point
function), are among the most
common \citep[e.g.][]{McDonald2000,Zald01,kbv+07,VielBolton09}. But because
the Ly$\alpha$ transmission $F$ is significantly non-Gaussian,
significant information is also contained in higher-order statistics.
For example wavelet decompositions, which contains a hybrid of
real-space and Fourier-space information, have been advocated for
measuring spatial temperature fluctuations
\citep{Lidz09,Zaldarriaga02,Garzilli2012}. Several studies have
focused on the $b$-parameter distribution to obtain constraints
on thermal parameters
\citep{Ricotti00,Schaye00,McDonald2001,Rudie2012}, and recently
\citet{BeckerBolton2011} introduced a `curvature' statistic as an
alternative measure of spectral smoothness to the power spectrum.
As gas pressure acts to smooth the baryon density field in 3D, it is
natural to explore power spectra as a means to constrain the Jeans
filtering scale. A major motivation for working in Fourier space, as
opposed to the real-space auto-correlation function, is that it is
much easier to deal with limited spectral resolution in Fourier
space. The vast majority of close quasar pairs are too faint to be
observed at echelle resolution ${\rm FWHM} \simeq 5\,{\rm km\,s^{-1}}$
where the Ly$\alpha$ forest is completely resolved. Instead, spectral
resolution has to be explicitly taken into account.
But to a very good approximation the smoothing caused by limited spectral resolution simply low-pass
filters the flux, and thus the shape of the flux power spectrum is
unchanged for $k$-modes less than the spectral resolution cutoff
$k_{\rm res}$. Thus by working in $k$-space, one can simply ignore
modes $k \gtrsim k_{\rm res}$ and thus obviate the need to precisely
model the spectral resolution, which can be challenging for
slit-spectra. Finally, another advantage to $k$-space is that,
because fluctuations in the IGM are only mildly non-linear, some of
the desirable features of Gaussian random fields, such as the
statistical independence of Fourier modes, are approximately retained,
simplifying error analysis. In what follows we consider the impact of
Jeans smoothing on longitudinal power spectrum, as well as the
simplest 2-point function that can be computed from quasar pairs, the
cross-power spectrum.
\subsection{The Longitudinal Power Spectrum}
\label{1dps}
\begin{figure*}
\centering
\centerline{\epsfig{file=power_spectra.eps,width=\textwidth}}
\vskip -0.1in
\caption{ \label{fig:power_spectra}\emph{Left panel:} The 1D
dimensionless power spectrum of the Ly$\alpha$ forest at $z=3$. In our large
grid of thermal models, we can identify two very different parameter
combinations, represented by the solid (blue) and dashed (green)
curves, which provide an equally good fit to the longitudinal power
spectrum measurements from \citet{McDonald2000} (red squares) and
\cite{Croft2002} (cyan circles), illustrating the strong
degeneracies between these parameters
($T_0$,$\gamma$,$\lambda_J$). In light of these degeneracies, it is
clear that it would be extremely challenging to constrain these
parameters with the longitudinal power alone. \emph{Right panel:}
The dimensionless cross power spectrum $\pi(k;r_{\perp})k/\pi$
(solid line) at $k\approx 0.05$ s/km from our simulated skewers, as
a function of $r_{\perp}$ for the same two thermal models shown at
left, with error bars estimated from a sample of 20 pairs. The
degeneracy afflicting the 1D power is broken using the new
information provided by close quasar pairs, because the different
Jeans scales result in differing amounts of transverse spectral
coherence, providing much better prospects for measuring
$\lambda_J$. We also show the cross modulus
$\langle\rho_1(k)\rho_2(k)\rangle k/\pi$ (dashed lines) for the same
two models, which show flat variation with $r_{\perp}$, and a very
weak dependence on the Jeans scale. Most of the information about
the 3D Jeans smoothing resides not in the moduli, but rather in the
phase differences between homologous modes (see discussion in
\S~\ref{sec:den}).}
\end{figure*}
It is well known that the shape of the longitudinal power spectrum,
and the high-$k$ thermal cutoff in particular, can be used to constrain
$T_0$ and $\gamma$ \citep{Zald01,VielBolton09}. This cutoff arises because
thermal broadening smooths $\tau$ in redshift-space
(e.g. eqn.~\ref{eqn:tau}). In contrast to this 1D
smoothing, the Jeans filtering smooths the IGM in 3D, and it is exactly this interplay between 1D and 3D
smoothing that we want to understand \citep[see
also][]{Peeples09a,Peeples09b}. We consider the quantity
$\delta F(v)=(F-\bar{F})/\bar{F}$, where $\bar{F}$ is the mean
transmitted flux, and compute the power spectrum according to
\begin{equation}
P(k)=\langle|\delta\tilde{F}(k)|^2\rangle,
\label{los_pow_def}
\end{equation}
where $\delta\tilde{F}(k)$ denotes the Fourier transform of $\delta F$ for longitudinal
wavenumber $k$, and
angular brackets denote a suitable ensemble average (i.e.\ over our full sample of spectra).
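For concreteness, a minimal Python sketch of this estimator (assuming NumPy,
skewers sampled on a uniform velocity grid of width \texttt{dv} in
${\rm km~s^{-1}}$, and one common normalization convention; this is
illustrative, not our exact pipeline) is:
\begin{verbatim}
import numpy as np

def flux_power_1d(flux, dv):
    # flux: (nspec, npix) array of transmitted flux F
    # dv:   pixel width in km/s (uniform grid assumed)
    delta = flux / flux.mean() - 1.0              # (F - Fbar)/Fbar
    npix = flux.shape[1]
    dft = np.fft.rfft(delta, axis=1) * dv         # approximate continuous FT
    k = 2.0 * np.pi * np.fft.rfftfreq(npix, d=dv) # k in s/km
    pk = np.mean(np.abs(dft)**2, axis=0) / (npix * dv)  # <|dF(k)|^2>/L
    return k, pk
\end{verbatim}
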
In Figure~\ref{fig:power_spectra} we compare two thermal models in our thermal parameter grid
to measurements of the longitudinal power spectrum of the Ly$\alpha$
forest at $z\simeq 3$ \citep{McDonald2000,Croft2002}. The blue (solid)
curve has a large Jeans scale $\lambda_{\rm J} = 214\,{\rm kpc}$, a
cooler IGM $T_0=13,000\,$K, and a nearly isothermal
temperature-density relation $\gamma = 0.9$, which is mildly inverted
such that voids are hotter than overdensities. Such isothermal or even
inverted equations of state could arise at $z\sim 3$ from \ion{He}{2}
reionization heating \citep{McQuinn09,Tittley07}, and recent
analyses of the flux PDF \citep{Bolton08} as well as joint analyses of
the PDF and power spectrum \citep{VielBolton09, Calura2012, Garzilli2012} have argued
for inverted or nearly isothermal values of $\gamma$. The green
(dashed) curves have a smaller Jeans scale $\lambda_{\rm J} =
100\,{\rm kpc}$, a hotter IGM $T_0=18,000\,$K, and a steep $\gamma =
1.6$ temperature-density relation consistent with the asymptotic value
if the IGM has not undergone significant recent heating events
\citep{HuiGnedin97,HH03}. Thus with regards to the
longitudinal power spectrum, the Jeans scale is clearly degenerate
with the amplitude and slope $(T_0,\gamma)$ of the temperature-density
relation. One would clearly come to erroneous conclusions about the
equation of state parameters ($T_0$,$\gamma$) from longitudinal power
spectrum measurements if the uncertainty in the Jeans scale is not
marginalized out \citep[see e.g.][for an example of this
marginalization]{Zald01}.
This degeneracy in the longitudinal power arises because the Jeans
filtering smooths the power in 3D on a scale which projects to a
longitudinal velocity
\begin{equation}
v_J=\frac{H(z=3)}{1+3} \lambda_J \approx 26 \left(\frac{\lambda_J}{340 \mbox{ kpc}}\right)\,{\rm km~s}^{-1},
\end{equation}
resulting in a cutoff of the power at $k_J\approx0.04\,{\rm s~km}^{-1}$ (for the
typical values assumed in the introduction\footnote{We caution that this
estimate assumes a thermal history where $T\propto 1+z$, without
considering the effect of HeII reionization. In that case the
deduced value for the filtering scale $\lambda_J$ would probably be smaller.}).
The thermal Doppler broadening of Ly$\alpha$ absorption lines smooths the power in 1D, on a scale governed
by the \emph{b-parameter}
\begin{equation}
b =\sqrt{\frac{2 k_B T}{m_p}}\approx 15.7 \left(\frac{T}{1.5\times 10^4 \mbox{ K}}\right)^{1/2}\,{\rm km~s}^{-1},
\end{equation}
which results in an analogous cutoff at $k_{\rm th} = \sqrt{2}\slash b
\approx 0.09\,{\rm s~km}^{-1}$ for a temperature of 15000 K. Above, $k_B$ is the
Boltzmann constant and $m_p$ is the mass of the hydrogen atom, which governs
the Doppler broadening of the \ion{H}{1} lines. The fact that the two cutoff scales are
comparable results in a strong degeneracy which is very challenging to
disentangle with longitudinal observations alone. Similar degeneracies
between the Jeans scale and ($T_0$,$\gamma$) exist if one considers
wavelets, the curvature, the $b$-parameter distribution, and the flux PDF,
which we
explore in an upcoming study (Rorai et al. 2013, in prep). In the next
section we show that this degeneracy between 3D and 1D smoothing can
be broken by exploiting additional information in the transverse
dimension provided by close quasar pairs.
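As a quick numerical cross-check of the two cutoff scales quoted above, the
following Python lines (assuming, purely for illustration, a flat
$\Lambda$CDM cosmology with $\Omega_m=0.3$ and $h=0.7$) reproduce
$v_J\approx 26\,{\rm km~s^{-1}}$, $k_J\approx 0.04\,{\rm s~km^{-1}}$, and
$k_{\rm th}\approx 0.09\,{\rm s~km^{-1}}$:
\begin{verbatim}
import numpy as np

Om, h = 0.3, 0.7                    # illustrative cosmology
Hz = 100.0 * h * np.sqrt(Om * (1 + 3.0)**3 + (1 - Om))  # H(z=3), km/s/Mpc

v_J = Hz / (1 + 3.0) * 0.340        # lambda_J = 340 kpc -> ~26 km/s
k_J = 1.0 / v_J                     # ~0.04 s/km

b = 12.85 * np.sqrt(1.5e4 / 1.0e4)  # sqrt(2 k_B T / m_p) -> ~15.7 km/s
k_th = np.sqrt(2.0) / b             # ~0.09 s/km
print(v_J, k_J, b, k_th)
\end{verbatim}
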
\subsection{Cross Power Spectrum}
\label{sec:ps_cps}
The foregoing discussion illustrates that the 3D (Jeans) and
1D (thermal broadening) smoothing are mixed in the longitudinal power
spectrum, and ideally one would measure the full 3D power spectrum to
break this degeneracy. For an isotropic random field the 1D power
spectrum $P(k)$ and the 3D power $P_{3D}(k)$ are related according to
\begin{equation}
P_{3D}(k)=-\frac{2\pi}{k}\frac{dP(k)}{dk} \label{eqn:3dpow}.
\end{equation}
However, in the Ly$\alpha$ forest redshift-space distortions and
thermal broadening result in anisotropies that render this expression invalid.
With close quasar pairs, transverse correlations measured across the
beam contain information about the 3D power, and can thus
disentangle the 3D and 1D smoothing. Consider for example the
cross-power spectrum $\pi(k,r_{\perp})$ of two spectra
$\delta F_1(v)$ and $\delta F_2(v)$ separated by a transverse distance $r_{\perp}$
\begin{equation}
\pi(k;r_\perp) =\Re[\delta \tilde{F}^*_1(k)\delta\tilde{F}_2(k)].
\label{cps_pow_def}
\end{equation}
When $r_{\perp} \rightarrow 0$ then $\delta F_2 \rightarrow
\delta F_1$ and the cross-power tends to the longitudinal power $P(k)$. The
cross-power can be thought of as effectively a power spectrum in the
longitudinal direction, and a correlation function in the transverse
direction \citep[see also][]{Viel2002}. Alternatively stated, the cross
power provides a transverse distance dependent correction to the
longitudinal power $P(k)$, reducing it from its maximal value at
`zero lag' $r_{\perp}=0$. This further implies that measuring the
cross-power of closely separated, and thus highly coherent, spectra is
at some level redundant with the longitudinal power, which could be
deduced from isolated spectra alone. In the next section, we will explain how to isolate the
genuine 3D information provided by close quasar pairs using a
statistic that is more optimal than the cross-power. Nevertheless,
Figure~\ref{fig:power_spectra} shows the cross-power spectrum for the
two degenerate models discussed in the previous section, clearly
illustrating that even the sub-optimal
cross-power spectrum can break the strong degeneracies between thermal
parameters that are present if one considers the longitudinal
power alone.
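For reference, the cross-power estimator of eqn.~(\ref{cps_pow_def}) can be
sketched in Python under the same illustrative conventions as the
longitudinal power sketch of \S~\ref{1dps}:
\begin{verbatim}
import numpy as np

def cross_power(flux1, flux2, dv):
    # flux1, flux2: (npairs, npix) flux arrays from paired sightlines
    d1 = flux1 / flux1.mean() - 1.0
    d2 = flux2 / flux2.mean() - 1.0
    npix = flux1.shape[1]
    f1 = np.fft.rfft(d1, axis=1) * dv
    f2 = np.fft.rfft(d2, axis=1) * dv
    k = 2.0 * np.pi * np.fft.rfftfreq(npix, d=dv)
    # Re[F1* F2], averaged over pairs; tends to P(k) as r_perp -> 0
    pi_k = np.mean((np.conj(f1) * f2).real, axis=0) / (npix * dv)
    return k, pi_k
\end{verbatim}
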
\section{Phase Angles and the Jeans Scale}\label{phase_ang}
Although the cross-power has the ability to break the degeneracy
between 3D and 1D smoothing present in the longitudinal power,
we demonstrate here that the cross-power (or equivalently the cross-correlation
function) is not optimal, and that the genuine 3D
information is encapsulated in the \emph{phase differences} between
homologous Fourier modes.
\subsection{Drawbacks of the Cross Power Spectrum}
\label{cps_vs_phase}
Let us write the 1D Fourier transform of the field $\delta F$ as
\begin{equation}
\delta \tilde{F}(k) = \rho (k) e^{i\theta(k)}
\end{equation}
where the complex Fourier coefficient is described by a modulus $\rho$ and phase angle
$\theta$, both of which depend on $k$. Note that for any ensemble of spectra $P(k)=\langle \rho^2(k)\rangle$, so
the modulus $\rho(k)$ is a random draw from a distribution whose second moment is given by
the power spectrum. From eqn.~(\ref{cps_pow_def}), the
cross-power of the two spectra $\delta F_1(v)$ and $\delta F_2(v)$ is then
\begin{equation}
\label{cpeq}
\pi_{12}(k) =\rho_1(k)\rho_2(k) \cos(\theta_{12}(k)),
\end{equation}
where $\theta_{12}(k)=\theta_1(k)-\theta_2(k)$ is the phase difference
between the homologous $k$-modes. The distributions of the moduli
$\rho_1$ and $\rho_2$ are also governed by $P(k)$, but at small impact
parameter they are not statistically independent because of spatial
correlations. Nevertheless, the moduli contain primarily information
already encapsulated in the longitudinal power, and are thus affected
by the same thermal parameter degeneracies that we described in the
previous section. For the purpose of constraining the Jeans scale, we
thus opt to ignore the moduli $\rho_1$ and $\rho_2$ altogether. This
isolates the genuine 3D information, increasing sensitivity
to the Jeans scale while minimizing the impact of thermal
broadening and removing degeneracies with the temperature-density relation
parameters ($T_0$,$\gamma$).
The foregoing points are clearly illustrated by the dashed curves in
the right panel of Figure~\ref{fig:power_spectra}, which compares the quantity $\langle
\rho_1(k)\rho_2(k)\rangle$ as a function of impact parameter $r_\perp$
for the same pair of thermal models discussed in \S~\ref{1dps}, which
are degenerate with respect to the longitudinal power. The similarity
of these two curves reflects the degeneracy of the
longitudinal power for these two models, and one observes a flat trend with $r_{\perp}$ and a
very weak dependence on the Jeans scale $\lambda_J$, substantiating our argument that the moduli contain
primarily 1D information.
As the moduli contain minimal information about the 3D power, we are
thus motivated to explore how the phase difference $\theta_{12}(k)$
can constrain the Jeans scale. In terms of Fourier coefficients,
$\theta_{12}(k)$ can be written
\begin{equation}
\theta_{12}(k)=\arccos\left(\frac{\Re[\delta \tilde{F}^*_1(k)
\delta \tilde{F}_2(k)]}{\sqrt{|\delta \tilde{F}_1(k)|^2|\delta \tilde{F}_2(k)|^2}}\right).
\label{eqn:phase}
\end{equation}
Note that because the phase difference is given by a ratio of Fourier
modes, it is completely insensitive to the normalization of $\delta
F$, and hence to quasar continuum fitting errors, provided that these
errors do not add power on scales comparable to $k$. In the remainder
of this section, we provide a statistical description of the
distribution of phase differences and we explore the properties and
dependencies of this distribution.
To simplify notation we will omit the subscript and henceforth denote the
phase difference as simply $\theta(k,r_{\perp})= \theta_1(k)-\theta_2(k)$,
where $r_{\perp}$ is the transverse distance between the two spectra
$\delta F_1(v)$ and $\delta F_2(v)$.
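In practice, the phase difference is most conveniently evaluated from the
argument of the conjugate mode product; a minimal sketch (NumPy assumed) is
given below. This is equivalent to eqn.~(\ref{eqn:phase}) but retains the
sign of $\theta$ in $[-\pi,\pi]$; since the PDF of $\theta$ is symmetric,
the sign convention is immaterial.
\begin{verbatim}
import numpy as np

def phase_differences(delta1, delta2):
    # delta1, delta2: flux contrasts delta_F along the paired sightlines
    f1 = np.fft.rfft(delta1)
    f2 = np.fft.rfft(delta2)
    return np.angle(np.conj(f1) * f2)  # theta_2 - theta_1 per k-mode
\end{verbatim}
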
\subsection{An Analytical Form for the PDF of Phase Differences}
\label{sec:WC}
\begin{figure}
\centering
\centerline{\epsfig{file=phase_scheme.ps,
width=.4\textwidth}}
\vskip -0.1in
\caption{ \label{fig:ph_sc} Schematic representation of the
heuristic argument used to determine the phase difference
distribution: phases are determined by density filaments crossing
the lines of sight of two quasars. If the orientation of the
filaments $\varphi$ is isotropically distributed then $\theta^\prime$,
dependent on the longitudinal distance $L=r_{\perp}\tan\varphi$,
follows a Cauchy distribution.}
\end{figure}
The phase difference between homologous $k$-modes is a random variable
in the domain $[-\pi,\pi]$, which, for a given thermal model, depends
on two quantities: the longitudinal mode in
question $k$ and the transverse separation $r_{\perp}$. One might
advocate computing the quantity $\langle
\cos\theta(k,r_\perp)\rangle$ analogous to the cross-power (see
eqn.~\ref{cps_pow_def}), or the mean phase difference $\langle
\theta(k,r_\perp)\rangle$, to quantify the coherence of quasar pair
spectra. However, as we will see, the
distribution of phase differences is not Gaussian, and hence is not
fully described by its mean and variance. This
approach would thus fail to exploit all the information encoded in its
shape. Our goal is then to determine the functional form of the
distribution of phase differences at any $(k,r_{\perp})$, and relate this to the thermal
parameters governing the IGM. This is a potentially daunting task,
since it requires determining a
function of the two variables $(k,r_{\perp})$
at every location in our 3-dimensional thermal parameter space
$(T_0,\gamma,\lambda_J)$. Fortunately, we are able to reduce the complexity
considerably by deriving a simple analytical form for the
phase angle distribution.
We arrive at this analytical form via a simple heuristic argument,
whose logic is more intuitive in real space. In the same spirit, we
focus initially on the IGM density distribution along 1D skewers, and
then later demonstrate that the same form also applies to the
Ly$\alpha$ flux transmission. Consider a filament of the cosmic web,
oriented at an angle $\varphi$ relative to the transverse direction,
pierced by two quasar sightlines separated by $r_\perp$. A
schematic representation is shown in Figure~\ref{fig:ph_sc}. This
structure will result in two peaks in the density field along the two
sightlines, separated by a longitudinal distance of $L=r_{\perp} \tan\varphi$.
If we assume that the positions of these density maxima dictate the
position of wave crests in Fourier space, the phase difference for
a mode with wave number $k$ can be written as
$\theta^{\prime}=kL = k r_{\perp} \tan \varphi$. We can derive the
probability distribution of the phase difference by requiring that
$p(\theta^\prime)d\theta^\prime=p(\varphi)d\varphi$, and assuming
that, by symmetry, $\varphi$ is uniformly distributed. This implies
that $\theta^\prime$ follows the Cauchy distribution
\begin{equation}
p(\theta^\prime)=\frac{1}{\epsilon \pi }\frac{1}{1+ (\theta^\prime/\epsilon )^2},
\end{equation}
where $\epsilon$ parametrizes the distribution's concentration (in this example $\epsilon = k\,r_{\perp}$).
As a final step, we need to
redefine the angles such that they reside in the proper
domain. Because $\tan \varphi$ spans the entire real line, so will
$\theta^\prime$; however, for any integer $n$,
all phases $\theta^{\prime} + 2\pi n$
corresponding to distances $L + 2\pi n\slash k$ will map to
identical values of $\theta$, defined to be the phase difference in the domain
$[-\pi,\pi]$. Redefining
the domain requires that we re-map our probabilities according to
\begin{equation}
P_{[-\pi,\pi]}(\theta)=\sum_{n\in \mathbb{Z}}p(\theta + 2\pi n),
\end{equation}
a procedure known as `wrapping' a distribution. Fortunately, the exact form of the
wrapped-Cauchy distribution is known:
\begin{equation}
P_{\rm WC}(\theta)= \frac{1}{2\pi}\frac{1-\zeta^2}{1+ \zeta^2 - 2\zeta \cos(\theta - \mu)},
\label{WCD}
\end{equation}
where $\mu=\langle\theta\rangle$ is the mean value (in our case
$\mu=0$ by symmetry), and $\zeta$ is a concentration parameter between
0 and 1, which is the wrapped analog of $\epsilon$ above. In the
limit where $\zeta \rightarrow 1$ the distribution tends to a Dirac
delta function $\delta_D(x)$, which is the behavior expected for
identical spectra. Conversely, $\zeta=0$ results in a uniform
distribution, the behavior expected for uncorrelated spectra.
A negative $\zeta$ gives distributions peaked at $\theta = \pi$ and is
unphysical in this context.
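A direct implementation of eqn.~(\ref{WCD}) is a one-liner; we also note
(a standard result for wrapped distributions) that wrapping a Cauchy
distribution of half-width $\epsilon$ yields $\zeta=e^{-\epsilon}$.
\begin{verbatim}
import numpy as np

def wrapped_cauchy_pdf(theta, zeta, mu=0.0):
    # eqn. (WCD); zeta in [0, 1), theta in [-pi, pi]
    return (1.0 - zeta**2) / (
        2.0 * np.pi * (1.0 + zeta**2 - 2.0 * zeta * np.cos(theta - mu)))
\end{verbatim}
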
\subsection{The Probability Distribution of Phase Differences of the IGM Density}
\label{sec:den}
\begin{figure*}
\psfrag{r= 71 kpc}[c][][1.5]{$r_{\perp}= 71$ kpc}
\psfrag{r=142 kpc}[c][][1.5]{$r_{\perp}=142$ kpc}
\psfrag{r=333 kpc}[c][][1.5]{$r_{\perp}=333$ kpc}
\psfrag{r=666 kpc}[c][][1.5]{$r_{\perp}=666$ kpc}
\centering \centerline{\epsfig{file=density_phases_log.ps,
width=\textwidth}}
\vskip -0.1in
\caption{ \label{fig:denphase} Phase difference probability
functions of the density fields at different
separations $r_{\perp}$, wavenumbers $k$ and Jeans scale
$\lambda_J$. Points with errorbars represent the binned phase
distribution of the density field as obtained from the simulation,
while the solid lines are the maximum-likelihood fit using a
wrapped-Cauchy distribution. When the spectra are highly correlated
the phases are small and the distribution is peaked around zero,
whereas independent skewers result in flat probability functions.
The errors are estimated from the number of modes available
in the simulation, assuming Poisson statistics.
By symmetry $p(\theta)$ must be even in
$\theta$, hence it is convenient to plot only the range $[0,\pi]$, summing
positive and negative probabilities (obtaining $p(|\theta|)$) to increase
the sampling in each bin. We express the scale of each mode by giving both
the wavelength $\lambda$ in Mpc and the wavenumber $k$ (in s km$^{-1}$) in
the transformed velocity space. The wrapped-Cauchy function traces the
phase distribution obtained from the simulation to good approximation, with
less accuracy for strongly concentrated peaks, where low-probability
bins are noisy. Each color is a different smoothing length: $\lambda_J=50,100$ and 200
kpc (black, red and blue, respectively). Note that the respective
distributions differ not only at scales comparable to $\lambda_J$, but also for
larger modes, because the 3D power of high-$k$ modes, when projected onto a 1D line,
contributes to all the low-$k$ components (see the text for a detailed discussion).
Secondly, it is clear that the most relevant pairs are the closest ($r_{\perp} \lesssim \lambda_J$),
because for wide separations the coherence is too low to yield useful information.
These two considerations together explain why close quasar pairs are the most
effective objects for measuring the Jeans scale, even though they cannot be observed at
high resolution.
}
\end{figure*}
We now show that this wrapped-Cauchy form does a good job of
describing the real distribution of phase differences for our
simulated IGM density skewers. Note that for our simple heuristic
example of randomly oriented filaments, the concentration parameter
$\zeta$ only depends on the product $k r_{\perp}$; whereas, in the
real IGM, one expects the spectral coherence quantified by $\zeta$ to
depend on the Jeans scale $\lambda_J$. Because we do not know how to
directly compute the concentration parameter in terms of the Jeans
scale from first principles, we opt to calculate $\zeta$ from our
simulations. At any longitudinal wavenumber $k$, pair separation
$r_\perp$, and Jeans scale $\lambda_J$, our density skewers provide a
discrete sampling of the $\theta$ distribution. We use the maximum
likelihood procedure from \citet{Jammalamadaka} to calculate
the best-fit value of $\zeta$ from an ensemble of $\theta$ values, as
described further in Appendix \ref{rho_determin}. Figure \ref{fig:denphase} shows the
distribution of phases determined from our IGM density skewers
(symbols with error bars) compared to the best-fit wrapped-Cauchy
distributions (curves) for different longitudinal modes $k$,
transverse separations $r_\perp$, and values of the Jeans scale
$\lambda_J$. We see that the wrapped-Cauchy distribution typically
provides a good fit to the simulation data points to within the
precision indicated by the error bars. For very peaked distributions which
correspond to more spectral coherence (i.e. low-$k$ or large
$\lambda_J$), there is a tendency for our wrapped-Cauchy fits to
overestimate the probability of large phase differences relative to
the simulated data, although our measurements of the probability are
very noisy in this regime. We have visually inspected similar curves
for the entire dynamic range of the relevant $k$, $r_\perp$ and
$\lambda_J$, for which the shape of the wrapped-Cauchy distribution
varies from nearly uniform $(\zeta\simeq 0)$ to a very high degree of
coherence $(\zeta\simeq 1)$, and find similarly good agreement.
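For readers wishing to reproduce this step, a simple (if not maximally
efficient) alternative to the estimator of \citet{Jammalamadaka} is a
direct one-parameter likelihood maximization over $\zeta$, sketched below
using the \texttt{wrapped\_cauchy\_pdf} helper defined in \S~\ref{sec:WC}:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def fit_concentration(thetas):
    # thetas: array of phase differences at fixed (k, r_perp)
    def neg_loglike(zeta):
        return -np.sum(np.log(wrapped_cauchy_pdf(thetas, zeta)))
    res = minimize_scalar(neg_loglike, bounds=(0.0, 0.999),
                          method="bounded")
    return res.x
\end{verbatim}
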
It is instructive to discuss the primary dependencies of the phase
difference distribution on wavenumber $k$, separation $r_\perp$, and
the Jeans scale $\lambda_J$ illustrated in Figure \ref{fig:denphase}.
At a fixed wavenumber $k$, a large separation relative to the Jeans
scale results in a flatter distribution of $\theta$, which approaches
uniformity for $r_\perp\gg \lambda_J$. The distribution approaches
the fully coherent limit of a Dirac delta function for $r_\perp\ll
\lambda_J$, and the transition from a strongly peaked distribution to
a uniform one occurs when $r_{\perp}$ is comparable to the Jeans scale
$\lambda_J$. We see that quasar pairs with transverse separations
$r_\perp \lesssim 3 \lambda_J$ contain information about the Jeans
scale, whereas this sensitivity vanishes for larger impact parameters.
At fixed $r_\perp$, lower $k$-modes (i.e. larger scales) are more
highly correlated (smaller $\theta$ values) as expected, because
sightlines spaced closely relative to the wavelength of the mode
($kr_{\perp}\ll 1$) probe essentially the same large-scale density
fluctuation. Overall, the dependencies in Figure \ref{fig:denphase}
illustrate that there is information about the Jeans smoothing spread
out over a large range of longitudinal $k$-modes. Somewhat surprisingly, even modes
corresponding to wavelengths $\gtrsim 100$ times larger than
$\lambda_J$ can potentially constrain the Jeans smoothing.
This sensitivity of very large-scale longitudinal $k$-modes to a much
smaller scale cutoff $\lambda_J$ in the 3D power merits further
discussion. First, note that the range of wavenumbers typically probed by
longitudinal power spectra of the Ly$\alpha$ forest lies in the range
$0.005\,{\rm s~km}^{-1} < k < 0.1\,{\rm s~km}^{-1}$ (see Figure~\ref{fig:power_spectra}),
corresponding to modes with wavelengths $60\,{\rm km~s}^{-1} < v < 1250\,{\rm km~s}^{-1}$ or
$830\,{\rm kpc} < \lambda < 17\,{\rm Mpc}$. Here the low-$k$
cutoff is set by systematics related to determining the quasar
continuum \citep[see e.g.][]{Lee2012}, whereas the high-$k$ cutoff
is adopted to mitigate contamination of the small-scale power from
metal absorption lines \citep{McDonald2000}. In principle
high-resolution (echelle) spectra (${\rm FWHM}=5\,{\rm km~s}^{-1}$) probe even higher
wavenumbers, as large as $k\simeq 3$, but standard practice is to
only consider $k\lesssim 0.1$ in model-fitting \citep[see
e.g.][]{Zald01}. Thus even the highest $k$-modes at our
disposal, $k\simeq 0.1$, correspond to wavelengths $\simeq 830\,{\rm
kpc}$ significantly larger than our expectation for the Jeans scale
$\sim 100\,$kpc. Furthermore, we saw in \S~\ref{1dps} that degenerate
combinations of the Jeans smoothing and the IGM temperature-density
relation can produce the same small-scale cutoff in the longitudinal
power. Thus both metal-line contamination and degeneracies with thermal broadening
imply that while it is extremely challenging to resolve the Jeans scale spectrally, the great advantage of close
quasar pairs is that they resolve the Jeans scale spatially, provided
they have transverse separations $r_{\perp}$ comparable to $\lambda_J$. We will
thus typically be working in the regime where $k\slash k_{\perp} \ll
1$, with $k_\perp \equiv x_0\slash aHr_\perp$; here
$aHr_\perp$ is the transverse separation converted to a velocity and
$x_0=2.4048$ is a constant whose choice will become clear
below.
In this regime, it is straightforward to understand why the phase
differences between large-scale modes are nevertheless sensitive to
the Jeans scale. Consider the quantity $\langle
\cos{\theta(k,r_{\perp})}\rangle$, which is related to the cross-power
discussed in \S~\ref{cps_vs_phase}. This `moment' of the phase angle PDF
can be written
\begin{equation}
\langle \cos\theta(k,r_{\perp})\rangle = \int_{-\pi}^{\pi} P(\theta(k,r_{\perp}))
\cos\theta(k,r_{\perp})d\theta\label{eqn:moment},
\end{equation}
which tends toward zero for totally uncorrelated spectra
($P(\theta)=1\slash 2\pi$) and towards unity for perfectly correlated,
i.e. identical, spectra ($P(\theta)=\delta_D(\theta)$). Following the
discussion in \S~\ref{cps_vs_phase}, we can
write
\begin{eqnarray}
\pi(k,r_\perp) &=& \langle \rho_1(k)\rho_2(k) \cos\theta(k,r_{\perp})\rangle \nonumber \\
&\approx& \langle \rho_1(k)\rho_2(k)\rangle\langle
\cos\theta(k,r_{\perp})\rangle \approx P(k)\langle
\cos\theta(k,r_{\perp})\rangle,
\end{eqnarray}
where the first approximation is a
consequence of the approximate Gaussianity of the density
fluctuations, and the second follows from the fact that $\langle \rho_1
\rho_2\rangle\approx P(k)$ for $k\slash k_\perp \ll 1$, as
demonstrated by the dashed curves in the right panel of
Fig~\ref{fig:power_spectra}. Thus we arrive at
\begin{equation}
\langle \cos\theta(k,r_{\perp})\rangle \approx \frac{\pi(k,r_\perp)}{P(k)} =
\frac{\int_k^\infty dq q J_0(r_\perp\sqrt{q^2 - k^2})P_{\rm 3D}(q)}{\int_k^\infty dq q P_{\rm 3D}(q)},\label{eqn:costheta}
\end{equation}
where $J_0$ is the cylindrical Bessel function of order zero.
The numerator and denominator of the last equality in
eqn.~(\ref{eqn:costheta}) follow from the definitions of the
longitudinal and cross power for an isotropic 3D power spectrum
\citep[see e.g.][]{Lumsden1989,Peacockbook,Hui99,Viel2002}. The
denominator is the familiar expression for the 1D power expressed as a
projection of the 3D power. Note that 1D modes with wavenumber $k$
receive contributions from all 3D modes with wavevectors $\ge k$, which
results simply from the geometry of observing a 3D field along a 1D
skewer. A long-wavelength (low-$k$) 1D longitudinal mode can be produced
by a short-wavelength (high-$k$) 3D mode directed nearly perpendicular to the
line of sight \citep[see e.g.][]{Peacockbook}. The numerator of eqn.~(\ref{eqn:costheta}) is
similarly a projection over all high-$k$ 3D modes, but because of the non-zero separation of the
skewers the 3D power spectrum is now modulated by the cylindrical Bessel function $J_0(x)$.
Because $J_0(x)$ is highly oscillatory, the primary contribution to this projection integral will come
from arguments in the range $0 < x < x_0$.
Here $x_0=2.4048$ is the first zero of $J_0(x)$, which
motivates our earlier definition of $k_\perp \equiv x_0\slash
aHr_\perp$. For larger arguments $x$, the decay of $J_0(x)$ and its
rapid oscillations will result in cancellation and negligible
contributions. Thus for $k\slash k_\perp \ll 1$, we can finally write
\begin{equation}
\langle \cos\theta(k,r_{\perp})\rangle \approx
\frac{\int_k^{k_{\perp}}dq q J_0(r_\perp\sqrt{q^2 - k^2})P_{\rm 3D}(q)}{\int_k^\infty dq q P_{\rm 3D}(q)}.\label{eqn:costheta2}
\end{equation}
This equation states that the average coherence of the phase differences
between homologous $k$ modes is determined by the ratio of the 3D
power integrated against a `notch filter' which transmits the range
$[k,k_\perp]$, relative to the total integrated 3D power over the full
range $[k,\infty]$. Hence phase angles between modes with wavelengths
$\gtrsim 100$ times larger than $\lambda_J$, are nevertheless
sensitive to the amount of 3D power down to scales as small as the
transverse separation $r_{\perp}$. This results simply from the
geometry of observing a 3D field along 1D skewers, because the power in
longitudinal mode $k$ is actually dominated by the superposition of 3D power from much
smaller scales (wavenumbers $\gg k$). Provided that quasar pair separations resolve
the Jeans scale $r_{\perp}\sim \lambda_J$, even large scale modes
with $k \ll k_{\perp} \sim 1\slash \lambda_{J}$ are sensitive to
the shape of the 3D power on small-scales,
which explains the sensitivity of low-$k$ modes to the Jeans scale in
Figure \ref{fig:denphase}.
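To build further intuition, eqn.~(\ref{eqn:costheta}) can be evaluated
numerically for a toy isotropic 3D power spectrum with a Jeans-like cutoff;
the power-law slope and cutoff below are illustrative assumptions, not fits
to our simulations:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def mean_cos_theta(k, r_perp, p3d, qmax=1.0):
    # eqn. (costheta): k, q in s/km; r_perp in velocity units (km/s);
    # qmax chosen so the toy power is negligible beyond it
    num = quad(lambda q: q * j0(r_perp * np.sqrt(q*q - k*k)) * p3d(q),
               k, qmax, limit=400)[0]
    den = quad(lambda q: q * p3d(q), k, qmax, limit=400)[0]
    return num / den

kJ = 0.04                                       # s/km, illustrative
p3d = lambda q: q**-2.7 * np.exp(-(q / kJ)**2)  # toy cutoff power spectrum
print(mean_cos_theta(0.01, 30.0, p3d))          # r_perp ~ 30 km/s
\end{verbatim}
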
\begin{figure*}
\psfrag{r= 71 kpc}[c][][1.5]{$r_{\perp}= 71$ kpc}
\psfrag{r=142 kpc}[c][][1.5]{$r_{\perp}=142$ kpc}
\psfrag{r=333 kpc}[c][][1.5]{$r_{\perp}=333$ kpc}
\psfrag{r=666 kpc}[c][][1.5]{$r_{\perp}=666$ kpc}
\centering \centerline{\epsfig{file=phase_distributions.ps,
width=\textwidth}}
\vskip -0.1in
\caption{ \label{fig:fluxphase} Same as Figure~\ref{fig:denphase}, but
for the ${\rm Ly\alpha}$\ transmitted flux field instead of the density. We vary the Jeans
scale $\lambda_J$, keeping the equation-of-state parameters fixed at $T_0=10000$ K
and $\gamma=1.6$. The properties of the distributions are analogous to the previous
plot: they follow a wrapped-Cauchy profile to good approximation and exhibit
the same trends with $r_{\perp}$, $k$ and $\lambda_J$. Overall, the flux shows a higher degree
of coherence and a slightly weaker sensitivity to $\lambda_J$.}
\end{figure*}
Finally, the form of eqn.~(\ref{eqn:costheta2}) combined with
eqn.~(\ref{eqn:moment}) explains the basic qualitative trends in
Figure~\ref{fig:denphase}. For large $r_{\perp}$ (small $k_{\perp}$),
the projection integral in the numerator decreases and $\langle
\cos\theta(k,r_{\perp})\rangle$ approaches zero, indicating that $P(\theta(k,r_{\perp}))$
approaches uniformity. Similarly, as $r_{\perp} \rightarrow
\lambda_J$, $\langle \cos\theta(k,r_{\perp})\rangle$ grows, indicating that
$P(\theta(k,r_{\perp}))$ is peaked toward small phase angles, and in the limit
$r_{\perp} \ll \lambda_J$, $\langle \cos\theta(k,r_{\perp})\rangle \rightarrow
1$ and $P(\theta(k,r_{\perp}))$ approaches a Dirac delta function. At fixed
$r_{\perp}$, lower $k$ modes will result in a larger common range of integration in
the projection integrals in the numerator and denominator of
eqn.~(\ref{eqn:costheta2}), thus $\langle \cos\theta(k,r_{\perp})\rangle$ is
larger, $P(\theta(k,r_{\perp}))$ is more peaked, and the phase angles are more
highly correlated.
To summarize, following a simple heuristic argument, we derived an
analytical form for the phase angle distribution in \S~\ref{sec:WC},
parametrized by a single number, the concentration $\zeta$. We
verified that this simple parametrization provides a good fit to the
distribution of phase differences in our simulated skewers, and
explored the dependence of this distribution on transverse separation
$r_{\perp}$, wavenumber $k$, and the Jeans scale $\lambda_J$. Phase
differences between large-scale modes with small wavenumbers $k \ll 1/\lambda_J$ are
sensitive to the Jeans scale, because geometry dictates that low-$k$
cross-power across correlated 1D skewers is actually dominated by
high-$k$ 3D modes up to a scale set by the pair separation $k_\perp
\sim 1\slash r_\perp$.
\subsection{The Probability Distribution of Phase Differences of the Flux}
\label{sec:flux}
\begin{figure*}
\psfrag{r= 71 kpc}[c][][1.5]{$r_{\perp}= 71$ kpc}
\psfrag{r=142 kpc}[c][][1.5]{$r_{\perp}=142$ kpc}
\psfrag{r=333 kpc}[c][][1.5]{$r_{\perp}=333$ kpc}
\psfrag{r=666 kpc}[c][][1.5]{$r_{\perp}=666$ kpc}
\centering
\centerline{\epsfig{file=nov_l130.ps,
width=\textwidth}}
\vskip -0.1in
\caption{ \label{fig:fluxden}
Phase difference probability density functions for different separations
$r_{\perp}$ and wavenumbers $k$. All models have the same Jeans scale $\lambda_J = 140$ kpc. For clarity
we plot only the best-fit wrapped-Cauchy functions, without the simulated points and
errorbars. The black and the red lines are the phase angle PDFs for the transmitted
flux of the Ly$\alpha$ forest and the IGM density field, respectively.
The green line represents the case of the Ly$\alpha$ forest flux where peculiar velocities
are set to zero. By comparing the green and the black lines we see that peculiar
motions always increase the coherence between the two sightlines, which partly
explains the differences between the flux and density distributions, since the
latter is calculated in real space. The flux and density further differ because of
the non-linear FGPA transformation, which has a stronger effect on smaller scale
modes.}
\end{figure*}
\begin{figure*}
\psfrag{r= 71 kpc}[c][][1.5]{$r_{\perp}= 71$ kpc}
\psfrag{r=142 kpc}[c][][1.5]{$r_{\perp}=142$ kpc}
\psfrag{r=333 kpc}[c][][1.5]{$r_{\perp}=333$ kpc}
\psfrag{r=666 kpc}[c][][1.5]{$r_{\perp}=666$ kpc}
\centering
\centerline{\epsfig{file=phase_distributions2.ps,
width=\textwidth}}
\vskip -0.1in
\caption{ \label{fig:ph_dist}Phase difference probability
density functions for different separations $r_{\perp}$, wavenumbers $k$ and
equation-of-state parameters $T_0 - \gamma$. Points with
errorbars (estimated Poisson error) are the results of our simulations,
while the coloured lines are the maximum-likelihood fit using
a wrapped-Cauchy distribution. All models have the same Jeans
scale $\lambda_J = 140$ kpc. This plot shows the most remarkable
property of phases: they do not exhibit any significant
sensitivity to the equation of state, so they
robustly constrain the spatial coherence set by pressure
support.}
\end{figure*}
Having established that the wrapped-Cauchy distribution provides a
good description of the phase difference of IGM density skewers, we
now apply it to the Ly$\alpha$ forest
flux. Figure~\ref{fig:fluxphase} shows the PDF of phase differences
for the exact same transverse separations $r_\perp$, wavenumbers $k$,
and Jeans smoothings $\lambda_J$ that were shown in
Figure~\ref{fig:denphase}. The other thermal parameters $T_0$ and
$\gamma$ have been set to $(T_0,\gamma)=(10,000\,{\rm K},1.6)$. Overall, the
behavior of the phase angle PDF for the flux is extremely similar to
that of the density, exhibiting the same basic trends. Namely, the
flux PDF also transitions from a strongly peaked distribution
($r_\perp\lesssim \lambda_J$) to a flat one ($r_\perp\gg \lambda_J$)
at around $r_\perp \simeq \lambda_J$. Lower $k$-modes tend to be more
highly correlated, and low-$k$ modes corresponding to wavelengths
$\gtrsim 100 \lambda_J$ are nevertheless very sensitive to the Jeans
scale, in exact analogy with the density field. Note that because the 3D power spectrum
of the flux field is anisotropic, the assumptions leading to the derivation of
eqn.~(\ref{eqn:costheta2}) in the previous section break down for the flux.
Nevertheless, the explanation for the sensitivity of low-$k$ modes to the Jeans scale
is likely the same, namely the
low-$k$ power across correlated skewers is actually dominated by projected
high-$k$ 3D power up to a scale $k_\perp \sim 1\slash r_\perp$, which is set
by the pair separation.
The primary difference between the phase angle PDF of flux versus the density
appears to be that the flux PDF is overall slightly less sensitive to
the Jeans scale. In general, we do not expect the two distributions to
be exactly the same for several reasons.
First, the flux represents a highly nonlinear
transformation of the density: according to the FGPA formula $\delta F
\sim \exp{[-(1+\delta)^\beta]}$ where $\beta =
2-0.7(\gamma-1)$. Second, the flux is observed in redshift space,
and the peculiar velocities which determine the mapping from real to redshift
space, can further alter the flux relative to the density.
Finally, the flux field is sensitive to other
thermal parameters $T_0$ and $\gamma$, both through the nonlinear FGPA
transformation, and because of thermal broadening. In what follows, we
investigate each of these effects in turn, and discuss how each alters
the phase angle PDF and its sensitivity to the Jeans scale.
In Figure~\ref{fig:fluxden} we show the flux PDF (black) alongside the
density PDF (red) for various modes and separations, again with the
thermal model fixed to $(T_0,\gamma,\lambda_J)=(20,000\,{\rm
K},1.0,140\,{\rm kpc})$. To isolate the impact of peculiar velocities, we
also compute the phase angle PDF of the real-space flux, i.e. without peculiar
velocities (green). Specifically, we disable peculiar velocities by
computing the flux from eqn.~(\ref{eqn:tau}) with $v_{p,\parallel}$
set to zero. Overall, the PDFs of the real-space flux and density (also real-space) are quite
similar. For low wavenumbers, the real-space flux skewers are slightly more
coherent than the density ($P(\theta)$ more peaked) at all
separations. However, at the highest $k$, the situation is reversed
with the density being more coherent than the real-space flux.
A detailed explanation of the relationship between the phase angle PDF
of the real-space flux and the density fields requires a better understanding of
the effect of the non-linear FGPA transformation on the 2-point function
of the flux, which is beyond the scope of the present work.
Here we only argue that the 3D power spectrum of the real-space flux has in
general a different shape than that of the density, and using our
intuition from eqn.~(\ref{eqn:costheta2}), this will result in a
different shape for the distribution of phase angles. The net effect of peculiar velocities on
the redshift-space flux PDF is to increase the amount of coherence
between the two sightlines ($P(\theta)$ more peaked)
relative to the real-space flux. This likely arises because the
peculiar velocity field is dominated by large-scale power, which
makes the 3D power of the flux steeper as a function of $k$. Again based
on our intuition from eqn.~(\ref{eqn:costheta}), a steeper power
spectrum will tend to increase the coherence ($\langle \cos(\theta(k,r_{\perp}))\rangle$
closer to unity), because the projection integrals in
the numerator and denominator of eqn.~(\ref{eqn:costheta}) will both have
larger relative contributions from the interval $[k,k_{\perp}]$.
Note that the relative change in the flux PDF due to peculiar velocities is comparable to the
differences between the real-space flux and the density. At the highest $k$-values
where the real-space flux is less coherent than the density (lowest
panel of Figure~\ref{fig:fluxden}), peculiar velocities conspire to make
the redshift-space flux PDF very close to the density PDF.
Finally, we consider the impact of the other thermal parameters $T_0$
and $\gamma$ on the distribution of phases in
Figure~\ref{fig:ph_dist}. There we show the PDF of the phase
angles for the flux for a fixed Jeans scale $\lambda_J=140$\,kpc, and
three different thermal models. Varying $T_0$ and $\gamma$ over the
full expected range of these parameters has very little impact on the
shape of the phase angle PDF, whereas we see in
Figure~\ref{fig:fluxphase} that varying the Jeans scale has a much
more dramatic effect. The physical explanations for the insensitivity
to $T_0$ and $\gamma$ are straightforward. The thermal parameters
$T_0$ and $\gamma$ can influence the phase angle PDF in two
ways. First, the FGPA depends weakly on temperature ($\propto T^{-0.7}$) through
the recombination coefficient. As a result the non-linear
transformation between density and flux depends weakly on $\gamma$:
$\delta F \sim \exp{[-(1+\delta)^\beta]}$, where $\beta =
2-0.7(\gamma-1)$. We speculate that the tiny differences between the
thermal models in Figure~\ref{fig:ph_dist} are primarily driven by
this effect, because we saw already in Figure~\ref{fig:fluxden} that
the non-linear transformation can give rise to large differences
between the density and flux PDFs. This small variation of the PDF
with $\gamma$ then suggests that it is actually the exponentiation
which dominates the differences between the flux and density PDFs in
Figure~\ref{fig:fluxden}, with the weaker $\gamma$-dependent
transformation $(1+\delta)^{2-0.7(\gamma-1)}$ playing only a minor
role, which is perhaps not surprising. Note that there is also a
$T_0^{-0.7}$ dependence in the coefficient of the FGPA optical depth,
but as we require all models to have the same mean flux
$\langle \exp(-\tau)\rangle$, this dependence is compensated by the
freedom to vary the metagalactic photoionization rate $\Gamma$.
Second, both $T_0$ and $\gamma$ determine the temperature of gas at
densities probed by the Ly$\alpha$ forest, which changes the amount of
thermal broadening. The insensitivity to thermal broadening is also
rather easy to understand. Thermal broadening is effectively a
convolution of the flux field with a Gaussian smoothing kernel. In
$k$-space this is simply a multiplication of the Fourier transform of
the flux $\delta\tilde{F}(k)$ with the Fourier transform of the
kernel. Because all symmetric kernels will have a vanishing imaginary
part\footnote{The imaginary part of the Fourier transform of the
symmetric function $W(|x|)$ is $\Im[W(k)]\propto \int W(|x|)\sin(kx)dx$,
whose integrand is odd in $x$ and hence integrates to zero.}, the convolution
can only modify the moduli of the flux \emph{but the phases are
invariant.} Thus the phase differences between neighboring flux
skewers are also invariant to smoothing, which explains the
insensitivity of the flux phase angle PDF to thermal broadening, and
hence the parameters $T_0$ and $\gamma$.
The results of this section constitute the cornerstones of our method for measuring
the Jeans scale. We found that the phase angle PDF of the flux has a shape very
similar to that of the density, and that both are well described by
the single parameter wrapped-Cauchy distribution. Information about
the 3D smoothing of the density field, $\lambda_J$, is encoded in the
phase angle PDF of the flux, but it is essentially independent of the
other thermal parameters governing the IGM. This is because (1)
the non-linear FGPA transformation is only weakly dependent on temperature,
and (2) phase angles are invariant under
symmetric convolutions. The implication is that close quasar pair
spectra can be used to pinpoint the Jeans scale without suffering from
any significant degeneracies with $T_0$ and $\gamma$. Indeed, in
the next section we introduce a Bayesian formalism for estimating
the Jeans scale, and our MCMC analysis in \S~\ref{sec:measure} will
assess the accuracy with which the thermal parameters can be measured,
and explicitly demonstrate the near independence of constraints on
$\lambda_J$ from $T_0$ and $\gamma$.
\section{Estimating the Jeans Scale }\label{jeans_meas}
\begin{figure*}
\psfrag{R1}[c][][1.5]{$r_{\perp}= 70$ kpc}
\psfrag{R2}[c][][1.5]{$r_{\perp}=430$ kpc}
\psfrag{log C}[c][][1.2]{$\log C_{\theta}$}
\centering
\centerline{\epsfig{file=correlation_matrix.ps,
width=\textwidth}}
\vskip -0.1in
\caption{\label{fig:cov_mat}Logarithm of the phase $k$--$k^{\prime}$ correlation for separations $r_{\perp} = 70 $ kpc (left)
and $r_{\perp}= 430$ kpc (right). These matrices are calculated for a model with $\lambda_J=143$ kpc, $T_0=20000$ K
and $\gamma = 1$. Phases are more correlated when the impact parameter is smaller than the Jeans
scale and at high $k$, where nonlinear growth of perturbations couples different modes. Even in these
cases we rarely find correlations higher than $\approx 3\%$, for which reason we will work in
the diagonal approximation. This approximation may break down if the measured Jeans scale is
significantly larger than expected.}
\end{figure*}
\subsection{The Covariance of the Phase Differences}
\label{sec:cov}
In the previous section, we showed that the PDF of
phase differences between homologous longitudinal modes of the flux
field is well described by the wrapped-Cauchy distribution (see
eqn.~\ref{WCD}). However, the one-point function alone is
insufficient for characterizing the statistical properties of the
stochastic field $\theta(k,r_{\perp})$, because in principle values of
$\theta$ closely separated in either wavenumber $k$ or real space could be
correlated. Understanding the size of these two-point correlations is of utmost
importance. Any given quasar pair spectrum provides us with a
realization of $\theta(k,r_{\perp})$, and we have seen that the
distribution of these values depends sensitively on the Jeans scale
$\lambda_J$. In order to devise an estimator for the thermal
parameters in terms of the phase differences, we have to understand
the degree to which the $\theta(k,r_{\perp})$ are independent.
It is easy to rule out the possibility of spatial correlations among
the $\theta$ values deduced from distinct quasar pairs. Because quasar
pairs are extremely rare on the sky, the individual quasar pairs in any
observed sample will typically be $\sim$ Gpc away from each other, and
hence different pairs will never probe correlated small-scale density
fluctuations. However, the situation is much less obvious when it
comes to correlations between $\theta$ values for different $k$-modes
of the same quasar pair. In particular, nonlinear structure formation
will result in mode-mode coupling, which can induce correlations
between mode amplitudes and phases \citep[e.g.][]{Chiang2002,Watts2003,Coles2009}. We are
thus motivated to use our simulated skewers to directly quantify the
size of the correlations between phase differences of distinct
longitudinal $k$-modes.
We calculate the correlation coefficient matrix of $\theta$ between
modes $k$ and $k^\prime$ defined as
\begin{equation}
C_{\theta}(k,k^\prime ; r_{\perp})=\frac{\left\langle \theta(k,r_{\perp})\theta(k^\prime,
r_{\perp}) \right\rangle}{\sqrt{\left\langle \theta^2(k,r_{\perp}) \right\rangle\left\langle\theta^2(k^\prime,r_{\perp})\right\rangle}}.
\end{equation}
Our standard setup of $330$ pairs at each discrete separation $r_{\perp}$ results
in a very noisy estimate of $C_{\theta}(k,k^\prime;r_{\perp})$, so we proceed by
defining a new set of 80,000 skewers at two distinct discrete transverse
separations of $r_\perp =70$ kpc and $r_\perp = 430$ kpc for a
single thermal model with $(T_0,\gamma,\lambda_J)=(20,000\,{\rm K},1,143\,{\rm
kpc})$.
Figure \ref{fig:cov_mat} displays the correlation
coefficient matrix for the two separations $r_{\perp}$ that we
simulated. We find that the off-diagonal correlations between
$k$-modes are highest at high $k$ values and for smaller impact
parameters. This is the expected behavior, since higher
longitudinal $k$-modes will have larger relative contributions from higher-$k$ 3D
modes, which will be more non-linear and have larger mode-mode correlations.
Likewise, as per the discussion in \S~\ref{sec:den},
phase differences at smaller pair separations $r_\perp$ are sensitive to
higher $k$ 3D power $\sim k_\perp$, and
should similarly exhibit larger correlations between modes. Note
however that over the range of longitudinal $k$ values which we will
use to constrain the Jeans scale, $0.005 < k < 0.1$, the size of the
off-diagonal elements is always very small, of order $\sim 1-3\%$.
The small values of the off-diagonal elements indicate that the
mode-mode coupling resulting from non-linear evolution does not induce
significant correlations between the phase angles of longitudinal
modes. This could result from the fact that the intrinsic phase
correlations of the 3D modes are small, and it is also possible that
the projection of power inherent to observing along 1D skewers (see
\S~\ref{sec:den}) dilutes these intrinsic phase correlations, because a
given longitudinal mode is actually the average over a large range of
3D modes. From a practical perspective, the negligible off-diagonal
elements in Figure~\ref{fig:cov_mat} are key, because
they allow us to consider each phase difference $\theta(k,r_{\perp})$
as an \emph{independent} random draw from the probability
distributions we explored in \S~\ref{sec:flux}, which as we show in
the next section, dramatically simplifies the estimator that we will
use to determine the Jeans scale.
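A sketch of the correlation coefficient estimate used here, given an
ensemble of phase differences at fixed separation (an
\texttt{(npairs, nk)} array; since $\langle\theta\rangle=0$ by symmetry, no
mean is subtracted):
\begin{verbatim}
import numpy as np

def phase_corr_matrix(thetas):
    # thetas: (npairs, nk) phase differences at fixed r_perp
    second = thetas.T @ thetas / thetas.shape[0]  # <theta_k theta_k'>
    diag = np.sqrt(np.diag(second))
    return second / np.outer(diag, diag)
\end{verbatim}
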
\subsection{A Likelihood Estimator for the Jeans Scale}\label{sec:estimate}
The results from the previous sections suggest a simple method for
determining the Jeans scale. Namely, given any quasar pair, the phase angle
difference for a given $k$-mode represents a draw from the underlying
phase angle PDF determined by the thermal properties of the IGM (as
well as other parameters governing e.g. cosmology and the dark matter
which we assume to be fixed). In \S~\ref{sec:flux} we showed that the
phase angle PDF is well described by the wrapped-Cauchy distribution
and in \S~\ref{sec:cov} we argued that correlations between phase angle
differences $\theta(k,r_{\perp})$, in both $k$-space and real-space
can be neglected. Thus for a hypothetical dataset $\{\theta(k,r_\perp)\}$ measured from
a sample of quasar pairs, we can write that the likelihood of the thermal model
$M=\{T_0,\gamma,\lambda_J\}$ given the data is
\begin{equation}
\mathscr{L}(\{\theta \}|M)=
\prod_{i,j} P_{\rm WC}(\theta(k_i,r_j)\,|\,\zeta(k_i,r_j|M)).
\label{diaglik}
\end{equation}
This states that the likelihood of the data is the product of the phase angle PDF
evaluated at the measured phase differences for all $k$-modes and over
all quasar pair separations $r_\perp$. Note that the simplicity of this
estimator is a direct consequence of the fact that there are
negligible $\theta$ correlations between different $k$-modes and pair
separations. All dependence on $(T_0,\gamma,\lambda_J)$ is encoded in
the single parameter $\zeta$, which is the concentration of the
wrapped-Cauchy distribution (eqn.~\ref{WCD}).
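Given emulated concentrations, the log of eqn.~(\ref{diaglik}) is trivial
to evaluate; a sketch (reusing the \texttt{wrapped\_cauchy\_pdf} helper
sketched in \S~\ref{sec:WC}, with \texttt{theta\_data} and
\texttt{zeta\_model} flattened over all $(k_i,r_j)$ entries):
\begin{verbatim}
import numpy as np

def log_likelihood(theta_data, zeta_model):
    # matched arrays over all (k_i, r_j); zeta_model from the emulator
    return np.sum(np.log(wrapped_cauchy_pdf(theta_data, zeta_model)))
\end{verbatim}
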
We can then apply Bayes' theorem to make inferences about any
thermal parameter, for example for $\lambda_J$
\begin{equation}
P(\lambda_J|\{\theta \})=\frac{\mathscr{L}(\{\theta \}|\lambda_J) p(\lambda_J)}{P(\{\theta \})}
\label{fulllik}
\end{equation}
where $p(\lambda_J)$ is our prior on the Jeans scale and the
denominator acts as a renormalization factor which is implicitly
calculated by a Monte Carlo simulation over the parameter space.
The same procedure can be used to evaluate the probability distribution of the
other parameters. Throughout this paper, we assume flat priors
on all thermal parameters, over the full domain of physically plausible parameter values.
In \S~\ref{sec:measure} we will use MCMC techniques to numerically
explore the likelihood in eqn.~(\ref{fulllik}) and deduce the
posterior distributions of the thermal parameters. In order to do
this, we need to be able to evaluate the function
$\zeta(k,r_\perp|T_0,\gamma, \lambda_J)$ at any location in thermal
parameter space. This is a non-trivial computational issue, because we
do not have a closed form analytical expression for $\zeta$ which can be evaluated
quickly, and
thus have to resort to our cosmological simulations of the IGM to
numerically determine it for each model, as described in Appendix
\ref{rho_determin}. In practice, computational constraints limit the size of our
thermal parameter grid to only 500 thermal models, and we
thus evaluate $\zeta$ at only these 500 fixed locations.
In the
next section, we describe a fast procedure referred to as an \emph{emulator},
which allows us to interpolate $\zeta$ from these 500 locations in our finite thermal
parameter grid, onto any value in thermal parameter space $(T_0,\gamma,\lambda_J)$.
\subsection{Emulating the IGM}
\label{sec:emulator}
Our goal is to define an algorithm to calculate
$\zeta(k,r_\perp|T_0,\gamma, \lambda_J)$ as a function of the thermal
parameters, interpolating from the values determined on a fixed
grid. As we will also compare Jeans scale constraints from the phase
angle PDF (eqn.~\ref{fulllik}), to those obtained from other
statistics, such as the longitudinal power $P(k)$ and cross-power
$\pi(k,r_\perp)$ (see \S~\ref{sec:measure}), we also need to be able
to smoothly interpolate these functions as well. To achieve this, we follow the
approach of the `Cosmic Calibration Framework' (CCF) to provide an accurate
prediction scheme for cosmological observables
\citep{Heitmann06,Habib07}. The aim of the CCF is to build
\emph{emulators} which act as very fast -- essentially instantaneous
-- prediction tools for large scale structure observables such as the
nonlinear power spectrum \citep{Heitmann09,Heitmann2010,Lawrence09}, or
the concentration-mass relation \citep{Kwan12}. Three essential steps
form the basis of emulation. First, one devises a
space-filling sampling scheme that optimally samples
the cosmological parameter space being studied. Second, a
principal component analysis (PCA) is conducted on the measurements from the
simulations to compress the data onto a minimal set of basis functions
that can be easily interpolated. Finally, Gaussian process modeling is
used to interpolate these basis functions from the locations of the
space filling grid onto any value in parameter space. Our IGM
emulator will be described in detail in a companion paper
(A. Rorai et al. 2013, in preparation). Below we briefly summarize the key aspects.
Whereas CCF uses more sophisticated space filling Latin Hypercube
sampling schemes \citep[e.g.][]{Heitmann09}, we adopt a simpler
approach motivated by the shape of the IGM statistics we are trying to
emulate, which change rapidly at scales comparable to
either the Jeans or thermal smoothing scale. We opt for an irregular
scattered grid which fills subspaces more effectively than a cubic lattice.
We consider parameter values over the domain
$\{(T_0,\gamma,\lambda_J):\,T_0 \in [5000,40000]\,{\rm K};\, \gamma \in [0.5,2];\,
\lambda_J \in [43, 572]\,{\rm kpc}\}$.
The lower limit of 43 kpc for the Jeans scale is chosen because this is about the smallest
value we can resolve with our simulation (see Appendix \ref{sec:appendixa}), while the
upper limit of 572 kpc is a conservative constraint deduced from the
longitudinal power spectrum: a filtering scale greater than this
value would be inconsistent with the high-$k$ cutoff, regardless of
the value of the temperature. The ranges considered for $T_0$ and $\gamma$ are
consistent with those typically considered in the literature and our expectations
based on the physics governing the IGM. We sample the 3D thermal parameter
space at 500 locations, where we consider a discrete set of 50 points in each dimension.
A linear spacing of these points is adopted for $\gamma$, whereas we find it
more appropriate to distribute $T_0$ and $\lambda_J$ such that the scale of the cutoff
of the power spectrum $k_{f}$ is regularly spaced.
Since $k_f \propto \lambda_J^{-1}$ for Jeans smoothing and $k_f \propto T_0^{-1/2}$
for thermal broadening, we choose regular intervals of these parameters after transforming
$\lambda_J \rightarrow 1/\lambda_J$ and $T_0 \rightarrow 1/\sqrt{T_0}$.
Each of the 50 values of the parameters is then repeated exactly 10 times in
the 500-point grid, and we use 10 different random permutations of their indices
to fill the space and to avoid repetition. For each thermal model in this grid,
we generate 10,000 pairs of skewers at 30 linearly spaced discrete pair
separations between 0 and 714 kpc.
We then use these skewers to compute the IGM statistics
$\zeta(k,r_\perp)$, $P(k)$, and $\pi(k,r_{\perp})$ for all $k$ and
$r_{\perp}$ for each thermal model. A PCA decomposition is then performed
in order to compress the information present in each statistic and represent its
variation with the thermal parameters using a handful of basis functions $\Phi$.
A PCA is an orthogonal transformation that converts a family of correlated
variables into a set of linearly uncorrelated variables, the principal components.
The components are ordered by the variance along each basis dimension,
thus relatively few of them are sufficient to describe the entire
variation of a function in the space of interest, which is here the thermal parameter
space. To provide a concrete example, the longitudinal power spectrum $P(k)$ is fully
described by the values of the power in each $k$ bin, but it is likely that some
of these $P(k)$ values do not change significantly given certain combinations of thermal
parameters. The PCA determines basis functions of the $P(k)$ that best describe its variation
with thermal parameters, enabling us to represent this complex dependence
with an expansion onto just a few principal components
\begin{equation}
P(k|T_0,\gamma,\lambda_J)=\sum_{i} \omega_{i}(T_0,\gamma,\lambda_J) \Phi_{i}(k),
\end{equation}
where $\{\Phi(k)\}$ are the basis of principal components, and
$\{\omega\}$ are the corresponding coefficients which depend on the
thermal parameters. The number of components for a given function is
set by the maximum tolerable interpolation errors of the emulator,
and these are in turn set by the size of the error bars on the
statistic that one is attempting to model. We defer a detailed
discussion of the PCA analysis and the procedure used to determine the
number of components to an upcoming paper (Rorai et al. 2013, in
prep), but we note that the number of PCA components we used to fully represent the
functions $\zeta(k,r_\perp)$, $P(k)$, and $\pi(k,r_{\perp})$ were 25, 15, and 25, respectively
(the phase distribution and the cross-power spectrum are 2D functions, so they require more components).
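A minimal sketch of this compression step, using an SVD as a stand-in for
the full PCA machinery (the statistic is assumed to be arranged as an
\texttt{(nmodels, nbins)} array over the thermal grid):
\begin{verbatim}
import numpy as np

def pca_compress(grid, nkeep):
    # grid: (nmodels, nbins) statistic evaluated on the thermal grid
    mean = grid.mean(axis=0)
    u, s, vt = np.linalg.svd(grid - mean, full_matrices=False)
    basis = vt[:nkeep]                  # principal components Phi_i
    weights = (grid - mean) @ basis.T   # coefficients omega_i per model
    return mean, basis, weights
\end{verbatim}
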
Gaussian process interpolation is then used to interpolate these PCA
coefficients $\omega_{i}(T_0,\gamma,\lambda_J)$ from the irregular
distribution of points in our thermal grid to any location of interest
in the parameter space. The only input for the Gaussian interpolation
is the choice of \emph{smoothing length}, which quantifies the degree
of smoothness of each function along the direction of a given
parameter in the space. We choose these smoothing lengths to be a
multiple of the spacing of our parameter grid. The choice of these
smoothing lengths is somewhat arbitrary, but we checked that the
inferred posterior distributions of the thermal parameters (eqn.~\ref{fulllik})
do not change in response to reasonable variations of these
smoothing lengths. A full description of the calibration and testing
of the emulator is presented in an upcoming paper (Rorai et al. 2013,
in prep).
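As an illustration of this step, one Gaussian process per PCA coefficient
can be trained with an off-the-shelf package such as \texttt{scikit-learn};
this is a stand-in for our emulator, with the RBF length scales playing the
role of the smoothing lengths discussed above:
\begin{verbatim}
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def train_emulator(params, weights, length_scales):
    # params:  (nmodels, 3) grid of (T0, gamma, lambda_J)
    # weights: (nmodels, nkeep) PCA coefficients omega_i
    gps = []
    for i in range(weights.shape[1]):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=length_scales))
        gp.fit(params, weights[:, i])
        gps.append(gp)
    return gps

def emulate(gps, point):
    # interpolate all omega_i to a new thermal-parameter point
    return [gp.predict([point])[0] for gp in gps]
\end{verbatim}
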
To summarize, our method for measuring the Jeans scale of the IGM involves the following steps:
\begin{itemize}
\item Calculate the phase differences $\{\theta(k,r_\perp)\}$ for each $k$-mode of an observed sample of quasar
pairs with separations $r_\perp$.
\item Generate Ly$\alpha$ forest quasar pair spectra for a grid of
thermal models in the parameter space $(T_0,\gamma,\lambda_J)$, using
our IGM simulation framework. For each model, numerically determine
the concentration parameter $\zeta(k,r_\perp|T_0,\gamma, \lambda_J)$
at each wavenumber $k$ and separation $r_{\perp}$, from
the distribution of phase differences $\theta(k,r_{\perp})$.
\item Emulate the function $\zeta(k,r_\perp|T_0,\gamma, \lambda_J)$,
enabling fast interpolation of $\zeta$ from the fixed values in the
thermal parameter grid to any location in thermal parameter space.
\item Calculate the posterior distribution in eqn.~(\ref{fulllik}) for $\lambda_J$, by exploring
the likelihood function in eqn.~(\ref{diaglik}) with an MCMC algorithm.
\end{itemize}
\section{How Well Can We Measure the Jeans Scale?}
\label{sec:measure}
Our goal in this section is to determine the precision with which
close quasar pair spectra can be used to measure the Jeans
scale. To this end, we construct a mock quasar pair dataset from our IGM simulations
and apply our new phase angle PDF likelihood formalism to it. A key
question is how well constraints from our new phase angle technique
compare to those obtainable from alternative measures, such as the
cross-power spectrum, applied to the same pair sample, or from the
longitudinal power spectrum, measured from samples of individual
quasars. In what follows, we first present the likelihood used to
determine thermal parameter constraints for these two additional
statistics. Then we describe the specific assumptions made for the
mock data. Next we quantify the resulting
precision on the Jeans scale, explore degeneracies with other thermal
parameters, and compare to constraints from these two alternative statistics. We
explore the impact of finite signal-to-noise ratio and spectral resolution on our
measurement accuracy, and discuss possible sources of systematic error. Finally, we
explicitly demonstrate that our likelihood estimator provides unbiased
determinations of the Jeans scale.
\subsection{The Likelihood for $P(k)$ and $\pi(k,r_\perp)$}
\label{sec:p_lik}
For the longitudinal power $P(k)$, we assume that the distribution of
differences, between the measured band powers of a $k$-bin and the true
value, is a multi-variate Gaussian \citep[see e.g.][]{msb+06},
which leads to the standard likelihood for the power-spectrum
\begin{eqnarray}
\mathscr{L}(P_{d}|M) &=& (2\pi)^{-N\slash 2} \det{(\Sigma)}^{-1\slash 2} \label{P_los_lik}\\
&& \times\exp\left[-\frac{1}{2}(P_d-P_M)^T \Sigma^{-1}(P_d-P_M) \right]\nonumber,
\end{eqnarray}
where $P_d$ is a vector of $N$ observed 1D band powers, $P_M$ is a vector of power spectrum predictions
for a given thermal model $M=(T_0,\gamma,\lambda_J)$, and
\begin{equation}
\Sigma(k,k^{\prime}) = \langle [P(k)-P_M(k)][P(k^{\prime})-P_M(k^{\prime})]\rangle,
\end{equation}
is the full covariance matrix of the power spectrum measurement. As we
describe in the next subsection, we will choose a subset of the
skewers from a fiducial thermal model to represent the `data' in this
expression, which are then compared directly to thermal models
$(T_0,\gamma,\lambda_J)$, where the same emulator technique described
in \S~\ref{sec:emulator} is used to interpolate
$P_M(k|T_0,\gamma,\lambda_J)$ to parameter locations in the thermal
space. To determine the covariance of this mock data
$\Sigma(k,k^{\prime})$, we use the full ensemble of $2\times 10,000$
1D skewers for the fiducial thermal model, directly evaluate the
covariance matrix, and then rescale it to the size of our mock dataset
by multiplying by the ratio of the diagonal terms
$\sigma^2_{\rm dataset}/\sigma^2_{\rm full}$. This procedure of evaluating the
covariance implicitly assumes that the only source of noise in the
measurement is sample variance, or that the
instrument noise is negligible. For the high-resolution and high
signal-to-noise ratio spectra used to measure the longitudinal power
spectrum cutoff \citep{McDonald2000,Croft2002}, this is a reasonable
assumption. For reference, the relative magnitude of off-diagonal
terms of the covariance,
$\Sigma(k,k')/\sqrt{\Sigma(k,k)\Sigma(k',k')}$, are at most $20-30\%$
with the largest values attained at the highest $k$.
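For concreteness, the evaluation of this Gaussian likelihood and the
construction of the rescaled covariance could be sketched as follows; the
row-and-column rescaling by the square root of the diagonal ratio is one
possible reading of the procedure above, and all variable names are assumed.
\begin{verbatim}
# Sketch of the Gaussian band-power likelihood above.
# skewers: (n_skewers, n_k) per-skewer band powers, fiducial model;
# var_data: diagonal variance appropriate to the mock dataset size.
import numpy as np

Sigma_full = np.cov(skewers, rowvar=False)
r = np.sqrt(var_data / np.diag(Sigma_full))  # rescale to sample size
Sigma = Sigma_full * np.outer(r, r)

def log_like(P_d, P_M, Sigma):
    d = P_d - P_M
    sign, logdet = np.linalg.slogdet(Sigma)
    chi2 = d @ np.linalg.solve(Sigma, d)
    return -0.5 * (len(d) * np.log(2.0 * np.pi) + logdet + chi2)
\end{verbatim}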
For the cross-power spectrum $\pi(k,r_\perp)$, we follow the same procedure. Namely, a mock dataset
is constructed for the fiducial thermal model by taking a subset of the full ensemble of quasar pair
spectra. We again assume that the band power errors are distributed according to a multi-variate
Gaussian, but because we must now account for the variation with separation $r_{\perp}$, the corresponding
likelihood is
\begin{equation}
\mathscr{L}(\pi|M) = \prod_i \mathscr{L}(\pi_{d}(k,r_{\perp,i})|M)\label{eqn:crosslik},
\end{equation}
where $\mathscr{L}(\pi_{d}(k,r_{\perp,i})|M)$ has the same form as the longitudinal
power in eqn.~(\ref{P_los_lik}). In exact analogy with the longitudinal power, we
compute the full covariance matrix $\Sigma(k,k^{\prime}|r_{\perp})$ of the cross-power using our full
ensemble of simulated pair spectra for our fiducial model, but now at each value of $r_{\perp}$.
\subsection{Mock Datasets}
\label{sec:mock}
To determine the accuracy with which we can measure the Jeans scale
and study the degeneracies with other thermal parameters, we construct
a dataset with a realistic size and impact parameter distribution, and
use an MCMC simulation to explore the phase angle likelihood in
eqn.~(\ref{diaglik}). We compare these constraints to those obtained
from the cross-power spectrum for the same mock pair dataset, by
similarly using an MCMC to explore the cross-power likelihood in
eqn.~(\ref{eqn:crosslik}). We also compare to parameter constraints
obtainable from the longitudinal power alone, by exploring the
likelihood in eqn.~(\ref{P_los_lik}), for which we must also construct
a mock dataset for longitudinal power measurements.
For the mock quasar pair sample, we assume 20 quasar pair spectra at
$z=3$, with fully overlapping absorption pathlength between Ly$\alpha$
and Ly$\beta$. Any real quasar pair sample will be composed of both
binary quasars with full overlap and projected quasar pairs with
partial overlap, so in reality 20 represents the total effective pair
sample, whereas the actual number of quasar pairs required could be
larger. The distribution of transverse separations for these pairs is
taken to be uniform in the range $24 < r_{\perp} < 714$ kpc. Specifically, we require 200
pairs of skewers in order to build up the necessary path length for 20
full Ly$\alpha$ forests, and these are randomly selected from our
10,000 IGM pair skewers which have 30 discrete separations.
We draw these pairs from a simulation with a
fiducial thermal model $(T_0,\gamma,\lambda_J)=(12,000\,{\rm K},1.0,
110\,{\rm kpc})$, which lies in the middle of our
thermal parameter grid. Note that follow-up observations of
quasar pair candidates have resulted in samples of $> 400$ quasar
pairs in the range $1.6 < z \lesssim 4.3$ with $r_{\perp} < 700\,{\rm
kpc}$, and for those with $> 50\%$ overlap, the total effective
number of fully overlapping pairs is $\simeq 300$
\citep{Hennawi04,BINARY,Myers08,HIZBIN}. Many of these sightlines
already have the high quality Ly$\alpha$ forest spectra required for a
Jeans scale measurement \citep[e.g.][]{QPQ1,QPQ2,QPQ3,QPQ4,QPQ5},
hence the mock dataset we have assumed already exists, and can be easily enlarged
given the number of close quasar pairs known.
Longitudinal power spectrum measurements which probe the small-scale
cutoff of the power have been performed on high-resolution ($R\simeq
30,000-50,000$; FWHM=$6-10\,{\rm km~s}^{-1}$) spectra of the brightest quasars.
Typically, the range of wavenumbers used for model fitting is
$0.005\,{\rm s~km}^{-1} < k < 0.1\,{\rm s~km}^{-1}$ (see Figure~\ref{fig:power_spectra}),
where the low-$k$ cutoff is chosen to avoid systematics related to
quasar continuum fitting \citep{Lee2012}, and the high-$k$ cutoff is
adopted to mitigate contamination from metal absorption lines
\citep{McDonald2000,Croft2002,Kim04}. Because quasar pairs are very
rare, one must push to faint magnitudes to find them in sufficient
numbers. In contrast with the much brighter quasars used to measure
the small-scale longitudinal power
\citep{McDonald2000,Croft2002,Kim04}, quasar pairs are typically too
faint to be observable at echelle resolution (FWHM=$6-10\,{\rm km~s}^{-1}$) on 8m
class telescopes. However, quasar pairs can be observed with higher
efficiency echellette spectrometers, which deliver $R\simeq 10,000$ or
FWHM$=30\,{\rm km~s}^{-1}$. The cutoff in the power spectrum induced by this
lower resolution is $k_{\rm res}=1/\sigma_{\rm res} = 2.358/{\rm FWHM}
= 0.08\,{\rm s~km}^{-1}$, which is very close to the upper limit $k < 0.1\,{\rm s~km}^{-1}$
set by metal-line contamination. For these reasons, we will consider
only modes in the range $0.005\,{\rm s~km}^{-1} < k < 0.1\,{\rm s~km}^{-1}$ in the likelihood in
eqn.~(\ref{diaglik}). We initially consider perfect data, ignoring the
effect of finite signal-to-noise ratio and resolution. Then in
\S~\ref{sec:noise}, we will explore how noise and limited spectral resolution
influence our conclusions.
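For reference, the conversion from spectral resolution to the power spectrum
cutoff quoted above amounts to the following few lines (a sketch using the
echellette numbers from the text):
\begin{verbatim}
# Gaussian resolution cutoff for FWHM = 30 km/s (echellette).
fwhm = 30.0                  # km/s
sigma_res = fwhm / 2.358     # sigma of the line spread function
k_res = 1.0 / sigma_res      # ~0.079 s/km, near the 0.1 s/km limit
\end{verbatim}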
For the mock sample used to study the longitudinal power,
we assume perfect data, which is reasonable considering
that such analyses are typically performed on spectra with
signal-to-noise ratio ${\rm S\slash N}\sim 30$ and resolution FWHM$=6\,{\rm km~s}^{-1}$
\citep{McDonald2000,Croft2002,Kim04} such that the Ly$\alpha$ forest, and in particular
modes with $k< 0.1\,{\rm s~km}^{-1}$, are fully resolved. For the size of this sample,
we again assume 20 individual spectra at $z=3$ with full coverage of
the Ly$\alpha$ forest, which is about twice the size employed in recently
published analyses \citep{McDonald2000,Croft2002,Kim04}. However, the number of
existing archival high-resolution quasar spectra at $z=3$ easily
exceeds this number, so samples of this size are also well within
reach. Also, adopting a sample for the longitudinal power with the same Ly$\alpha$
forest path length as the quasar pair sample, facilitates a
straightforward comparison of the two sets of parameter constraints.
\subsection{The Precision of the $\lambda_J$ Measurement}
\label{sec:accuracy}
\begin{figure*}
\centering
\centerline{\epsfig{file=ps_vs_phase_paper1,
width=\textwidth}}
\vskip -0.1in
\caption{\label{fig:cont_phase}Constraints on the $\gamma-\lambda_J$ and $T_0-\lambda_J$ planes. The contours show
the estimated $65\%$ and $96\%$ confidence levels
obtained with the longitudinal power (blue) and the phase difference (red).
The white dot marks the fiducial model in the parameter space.
The degeneracy affecting the 1D power, already shown in figure~\ref{fig:power_spectra},
can now be seen clearly in the parameter space through the inclination of
the blue contours. Conversely, the fact that the constraints given by the phase difference
statistic are horizontal guarantees that this degeneracy is broken and that
the measurement of the Jeans scale is not biased by the uncertainties in the
equation of state.}
\end{figure*}
Given our mock dataset and the expression for the phase angle likelihood in eqn.~(\ref{diaglik}),
and armed with our IGM emulator, which enables us to quickly evaluate this likelihood
everywhere inside our thermal parameter space, we are now ready to explore this likelihood
with an MCMC simulation to determine the precision with which we can measure the Jeans scale
and explore degeneracies with other thermal parameters.
We employ the publicly available MCMC package described in
\cite{MCpackage}, which is particularly well adapted to explore
parameter degeneracy directions. The result of our MCMC simulation is
the full posterior distribution in the 3-dimensional
$T_0-\gamma-\lambda_J$ space for each likelihood that we consider. It
is important to point out that, in general, these posterior
distributions will not be exactly centered on the true fiducial
thermal model $(T_0,\gamma,\lambda_J)=(12,000\,{\rm K}, 1.0, 110\,{\rm
kpc})$. Indeed, the expectation is that the mean or mode of the
posterior distribution for a given parameter will scatter around the
true fiducial value at a level comparable to the width of this
distribution. Nevertheless, the posterior distribution should provide
an accurate assessment of the precision with which parameters can be
measured and the degeneracy directions in the parameter space. In
\S~\ref{sec:bias} we will demonstrate that our phase angle PDF
likelihood procedure is indeed an unbiased estimator of the Jeans
scale, by applying this method to a large ensemble of
mock datasets, and showing that on average, we recover the input
fiducial Jeans scale.
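As an illustration, the exploration of the phase-angle likelihood with an
ensemble sampler could be set up as below; we use the \texttt{emcee} package
purely as a stand-in for the sampler cited above, and
\texttt{log\_like\_phase}, \texttt{in\_grid}, and \texttt{fiducial} are
assumed helper objects.
\begin{verbatim}
# Sketch of the MCMC exploration of the phase-angle likelihood.
import numpy as np
import emcee

def log_posterior(theta):
    if not in_grid(theta):     # flat prior within the thermal grid
        return -np.inf
    T0, gamma, lambdaJ = theta
    return log_like_phase(T0, gamma, lambdaJ)

ndim, nwalkers = 3, 32
p0 = fiducial + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, 5000)
chain = sampler.get_chain(discard=1000, flat=True)  # posterior draws
\end{verbatim}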
The red shaded regions in Figure~\ref{fig:cont_phase} show the constraints
in thermal parameter space resulting from our MCMC exploration of the
phase angle likelihood (eqn.~\ref{diaglik}). The results are shown
projected onto the $T_0-\lambda_J$ and $\gamma-\lambda_J$ planes,
where the third parameter ($\gamma$ and $T_0$, respectively) has been
marginalized over. The dark and light shaded regions show $65\%$ and
$96\%$ confidence levels, respectively. The phase difference technique
(red) yields essentially horizontal contours, which pinpoint the value
of the Jeans scale, with minimal degeneracy with other thermal
parameters. This is a direct consequence of the near independence of
the phase angle PDF from $T_0$ and
$\gamma$ shown in Figure~\ref{fig:fluxden}, and discussed in
\S~\ref{sec:flux}. The physical explanation for this independence is
that 1) the non-linear FGPA transformation is only
weakly dependent on temperature, and 2) phase angles are invariant to the
thermal broadening convolution. This truly remarkable result is the
key finding of this work: phase angles of the Ly$\alpha$ forest flux
provide direct constraints on the 3D smoothing of the IGM density
independent of the other thermal parameters governing the IGM.
\begin{figure*}
\centering
\centerline{\epsfig{file=ps_vs_cps_paper1.eps,
width=\textwidth}}
\vskip -0.1in
\caption{\label{fig:cont_cps}Constraints on the $\gamma-\lambda_J$
and $T_0-\lambda_J$ planes. The contours show the
estimated $65\%$ and $96\%$ confidence levels
obtained with the longitudinal power (blue) and the cross power
(green). The white dot marks the fiducial model in the parameter space.
Comparing this plot with figure~\ref{fig:cont_phase} makes clear why the
cross power spectrum is not the optimal statistic for measuring $\lambda_J$,
since the phase information is diluted and the degeneracy is not efficiently
broken.
}
\end{figure*}
The blue shaded regions in Figure~\ref{fig:cont_phase} show the
corresponding parameter constraints for our MCMC of the longitudinal
power spectrum likelihood (eqn.~\ref{P_los_lik}). Considering the
longitudinal power spectrum alone, we find that significant
degeneracies exist between $\lambda_J$, $T_0$ and $\gamma$, which
confirms our qualitative discussion of these degeneracies in \S~\ref{1dps}
and illustrated in Figure~\ref{fig:power_spectra}. These
degeneracy directions are easy to understand. The longitudinal power
is mostly sensitive to thermal parameters via the location of the
sharp small-scale cutoff in the power spectrum. This thermal cutoff is set by a
combination of both 3D Jeans pressure smoothing and 1D thermal
broadening. The thermal broadening component is set by the temperature
of the IGM at the characteristic overdensity probed by the forest, which is
$\delta \approx 2$ at $z=3$ \citep{BeckerBolton2011}. One naturally
expects a degeneracy between $T_0$ and $\gamma$, because it is
actually the temperature at $T(\delta \approx 2)$ that sets the
thermal broadening. A degeneracy between $\lambda_J$ and $T(\delta
\approx 2)$ is also expected because both smoothings contribute to the
small-scale cutoff. Thus, a lower Jeans scale can be compensated by
more thermal broadening, which can result from either a steeper
temperature density relation (larger $\gamma$) or a hotter temperature
at mean density $T_0$, since both produce a hotter $T(\delta \approx
2)$.
Previous work that has aimed to measure thermal parameters such as
$T_0$ and $\gamma$, from the longitudinal power spectrum
\citep{Zald01,VielBolton09}, the curvature statistic
\citep{BeckerBolton2011}, wavelets \citep{Theuns02b,Lidz09,Garzilli2012},
and the $b$-parameter distribution
\citep{Haehnelt98,Theuns00,Ricotti00,BryanMach00,Schaye00,McDonald2001,Theuns02a,Rudie2012},
have for the
most part ignored the degeneracies between these thermal parameters
and the Jeans scale (but see Zaldarriaga et al. 2001 who marginalized
over the Jeans scale, and Becker et al. 2011 who also considered its
impact). Neglecting the possible variation of the Jeans scale is
equivalent to severely restricting the family of possible IGM thermal
histories. Because the phase angle method accurately pinpoints the
Jeans scale independent of the other parameters, it breaks the
degeneracies inherent to the longitudinal power spectrum and will enable
accurate and unbiased measurements of both $T_0$ and $\gamma$, as
evidenced by the intersection of the red and blue contours in
Figure~\ref{fig:cont_phase}. Similar degeneracies between the Jeans
scale and ($T_0$,$\gamma$) exist when one considers other statistics such
as the flux PDF \citep{McDonald2000,kbv+07,Bolton08,Calura2012,Garzilli2012},
which we will explore in an upcoming study (Rorai et al. 2013, in prep). In light
of these significant degeneracies with the Jeans scale, it may be necessary to reassess the
reliability and statistical significance of previous measurements of $T_0$ and $\gamma$.
Figure~\ref{fig:cont_cps} shows the resulting thermal parameter
constraints for our MCMC analysis of the cross-power spectrum
likelihood (eqn.~\ref{eqn:crosslik}) in green, determined from exactly
the same mock quasar pair sample that we analyzed for the phase
angles. The confidence regions for the longitudinal power are shown
for comparison in blue. The cross-power spectrum is a straightforward
statistic to measure and fit models to, and the green confidence
regions clearly illustrate that it does exhibit some sensitivity to
the Jeans scale, as discussed in \S~\ref{sec:ps_cps} and shown in the
right panel of Figure~\ref{fig:power_spectra}. However, a comparison
of the cross-power confidence regions in Figure~\ref{fig:cont_cps}
(green) with the phase angle PDF confidence regions in
Figure~\ref{fig:cont_phase} (red) reveals that there is far more
information about the Jeans scale in quasar pair spectra than can be
measured with the cross-power. The cross-power produces constraints
which are effectively a hybrid between the horizontal Jeans scale
contours for the phase angle distribution and the diagonal
banana-shaped contours produced by the longitudinal power, which reflect the
degeneracy between Jeans smoothing and thermal broadening. This
quantitatively confirms our argument in \S~\ref{cps_vs_phase}, that
the cross-power is a product of moduli, containing information about
the 1D power, and the cosine of the phase, which depends on the 3D
power.
The results of this section indicate that among the statistics that we
have considered, the phase angle PDF is the most powerful for
constraining the IGM pressure smoothing, because it is more sensitive
to the Jeans scale and results in constraints that are free of
degeneracies with other thermal parameters.
We demonstrate this
explicitly in Figure~\ref{fig:marg_l}, where we show the fully
marginalized posterior distribution (see eqn.~\ref{fulllik}) of the
Jeans scale for each of the statistics we have considered. The
probability distributions quantify the visual impression from the
contours in Figures~\ref{fig:cont_phase} and \ref{fig:cont_cps}, and
clearly indicate that the phase angle PDF is the most sensitive.
The relative error on the Jeans scale is
$\sigma_{\lambda}\slash \lambda_J = 3.9\%$, which is a remarkable
precision when compared to the typical precision $\sim 30\%$ of
measurements of $T_0$ and $\gamma$ in the published literature
\citep[see e.g. Figure 30 in][for a recent compilation]{Lidz09},
especially when one considers that only 20 quasar pair spectra are
required to achieve this accuracy.
We close this section with a caveat to our statements that our Jeans
scale constraints are free of degeneracies with other thermal
parameters. The phase angle PDF is \emph{explicitly} nearly
independent of the temperature-density relation because 1) the
non-linear FGPA transformation is only weakly dependent on temperature
and 2) the phase angle PDF is invariant to the thermal broadening
convolution (see \S~\ref{sec:flux}). However, in our idealized dark-matter only simulations, the
Jeans scale is taken to be completely independent of $T_0$ and
$\gamma$; whereas, in reality all three parameters are correlated by
the underlying thermal history of the Universe. In this regard, the
Jeans scale may \emph{implicitly} depend on the $T_0$ and
$\gamma$ at the redshift of the sample, as well as with their values
at earlier times. We argued that because the thermal history is not
known, taking the Jeans scale to be free parameter is reasonable.
However, the validity of this assumption and the implicit dependence of the
Jeans scale on other thermal parameters is clearly something that
should be explored in the future with hydrodynamical simulations.
\begin{figure}
\centering
\centerline{\epsfig{file=l_post_distr.eps,
width=0.5\textwidth}}
\vskip -0.1in
\caption{ \label{fig:marg_l} Estimated accuracy of the measurement
  of $\lambda_J$, obtained by marginalizing the posterior distribution
  from the MCMC analysis over $T_0$ and $\gamma$. The phase
difference statistic (red) sets tighter constraints than the cross
power (blue) and the longitudinal power (black), which are
affected by parameter degeneracies. In this case we do not
account for the effect of noise and limited resolution, and we
find a relative precision of 3.9\% for $\lambda_J$.}
\end{figure}
\subsection{The Impact of Noise and Finite Spectral Resolution}
\label{sec:noise}
Up until this point we have assumed perfect data with infinite
signal-to-noise ratio and resolution. This is unrealistic, especially
considering, as discussed in \S~\ref{sec:mock}, that close quasar
pairs are faint, and typically cannot be observed at echelle resolution
or very high signal-to-noise ratio $\gtrsim 20$, even with 8m class
telescopes. In this section we explore the impact of noise and finite
resolution on the precision with which we can measure the Jeans scale.
We consider the exact same sample of 20 mock quasar spectra, but now
assume that they are observed with spectral resolution corresponding
to FWHM $=30\,{\rm km~s}^{-1}$, and two different signal-to-noise ratios of ${\rm S\slash N}
\simeq 5$ and ${\rm S\slash N} \simeq 10$ \emph{per pixel}. These
values are consistent with what could be achieved using an echellette
spectrometer on an 8m class telescope. To create mock observed spectra with these
properties, we first smooth our simulated spectra with a Gaussian kernel to model
the limited spectral resolution, and interpolate these smoothed spectra onto a coarser
spectral grid which has $10\,{\rm km~s}^{-1}$ pixels, consistent with the spectral pixel scale
of typical echellette spectrometers. We then add Gaussian white noise to each pixel with
variance $\sigma^2_{\rm N}$ determined by the relation ${\rm S\slash N}=\bar{F}/\sigma_{\rm N}$,
where $\bar{F}$ is the mean transmitted flux. This then gives an average signal-to-noise ratio
equal to the desired value.
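The forward model for a single noisy, resolution-degraded spectrum can be
sketched as follows; the native pixel scale \texttt{dv} and the
interpolation-based rebinning are simplifying assumptions.
\begin{verbatim}
# Sketch: smooth to FWHM = 30 km/s, rebin to 10 km/s pixels,
# and add white noise with sigma_N = Fbar / (S/N).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def mock_spectrum(flux, dv, fwhm=30.0, pix=10.0, snr=10.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    smoothed = gaussian_filter1d(flux, (fwhm / 2.358) / dv)
    v = dv * np.arange(flux.size)
    v_coarse = np.arange(0.0, v[-1], pix)
    rebinned = np.interp(v_coarse, v, smoothed)
    sigma_N = rebinned.mean() / snr
    return rebinned + rng.normal(0.0, sigma_N, rebinned.size)
\end{verbatim}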
As we already discussed in \S~\ref{sec:flux} in the context of thermal
broadening, phase angles are invariant under a convolution with a
symmetric Gaussian kernel. Thus we do not expect spectral resolution
to significantly influence our results, provided that we restrict
attention to modes which are marginally resolved, such that we can
measure their phases. Indeed, the cutoff in the flux power spectrum
induced by spectral resolution, $k_{\rm res}=1/\sigma_{\rm res}\approx
2.358/{\rm FWHM} = 0.08\,{\rm s~km}^{-1}$, is comparable to the maximum wavenumber we
consider, $k=0.1\,{\rm s~km}^{-1}$, and hence we satisfy this criterion. Note
further that this invariance to a symmetric spectral convolution
implies that we do not need to be able to precisely model the
resolution, provided that it has a nearly symmetric shape and does
not vary dramatically across the spectrum. This is another significant
advantage of the phase angle approach, since the resolution of a
spectrometer often depends on the variable seeing, and can be
challenging to accurately calibrate.
\begin{figure}
\centering
\centerline{\epsfig{file=l_post_noise.eps,
width=0.5\textwidth}}
\vskip -0.1in
\caption{\label{fig:marg_l_noise} The effect of noise and
  resolution on the measurement of $\lambda_J$. The plot shows
  the posterior distribution of the Jeans scale, marginalized
  over $T_0$ and $\gamma$. Each line represents a different degree
  of noise, assuming a resolution of FWHM=30 km/s. We selected a
  different subsample of the simulation as our mock dataset, which
  yields a precision of 3.6\% for S/N=$\infty$ (black solid), 4.8\% for
  S/N=$10$ (green dot-dashed) and 7.2\% for S/N=$5$ (red dashed). }
\end{figure}
Although our results are thus likely to be largely insensitive to
resolution, noise introduces fluctuations which are uncorrelated
between the two sightlines, and this will tend to reduce the coherence
of the flux that the phase angle PDF quantifies. Noise will
thus modify the shape of the phase angle PDF away from the intrinsic shape shown in
Figure~\ref{fig:fluxphase}. In order to deal with noise and its
confluence with spectral resolution, we adopt a forward-modeling
approach. Specifically, for each thermal model we smooth all 10,000
IGM skewers to finite resolution, interpolate onto coarser spectral
grids, and add noise consistent with our desired signal-to-noise
ratio. We then fit the resulting distribution of phase angles to the
wrapped-Cauchy distribution, determining the value of the
concentration parameter $\zeta(k,r_{\perp})$, at each $k$ and
$r_{\perp}$ as we did before. We again emulate the function
$\zeta(k,r_\perp|T_0,\gamma, \lambda_J)$ using the same thermal
parameter grid, but now with noise and spectral resolution included,
enabling fast evaluations of the likelihood in
eqn.~(\ref{diaglik}). Thermal parameter constraints then follow from MCMC
exploration of this new likelihood, for which the impact of noise and resolution
on the phase angle PDF have been fully taken into account.
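For reference, the fit of the concentration parameter at a single
$(k,r_\perp)$ bin can be sketched as a one-dimensional maximum-likelihood
problem, assuming the zero-centered wrapped-Cauchy form:
\begin{verbatim}
# Sketch: maximum-likelihood fit of the wrapped-Cauchy
# concentration zeta to a sample of phase differences theta.
import numpy as np
from scipy.optimize import minimize_scalar

def neg_loglike(zeta, theta):
    pdf = (1.0 - zeta**2) / (
        2.0 * np.pi * (1.0 + zeta**2 - 2.0 * zeta * np.cos(theta)))
    return -np.sum(np.log(pdf))

def fit_zeta(theta):
    res = minimize_scalar(neg_loglike, bounds=(1e-4, 1.0 - 1e-4),
                          args=(theta,), method='bounded')
    return res.x
\end{verbatim}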
In Figure~\ref{fig:marg_l_noise} we show the impact of noise on the
fully marginalized constraints on the Jeans scale from the phase angle
PDF. The solid curve represents the posterior distribution for a mock
dataset with infinite resolution and signal-to-noise ratio, which is
identical to the red curve in Figure~\ref{fig:marg_l}. The dotted and
dashed curves illustrate the impact of ${\rm S\slash N}=10$ and ${\rm
S\slash N}=5$, respectively. Note that the slight shifts in the modes
of these distributions from the fiducial value are expected, and
should not be interpreted as a bias. Different noise realizations
generate scatter in the phase angles just like the intrinsic noise
from large-scale structure. The inferred Jeans scale for any given
mock dataset or noise realization will not be exactly equal to the
true value, but should rather be distributed about it with a scatter
given by the width of the resulting posterior distributions. The
relative shifts in the mode of the posterior PDFs are well within
$1\sigma$ of the fiducial value, and are thus consistent with our
expectations.
The upshot of Figure~\ref{fig:marg_l_noise} is that noise and limited
spectral resolution do not have a significant impact on our ability to
measure the Jeans scale. For a signal-to-noise ratio of ${\rm S\slash
N}=10$ per pixel we find that the relative precision with which we
can measure the Jeans scale is $\sigma_{\lambda}\slash \lambda_J
=4.8\%$, which is only a slight degradation from the precision
achievable from the same dataset at infinite signal-to-noise ratio
and resolution $\sigma_{\lambda}\slash \lambda_J=3.9\%$. The
small impact of noise on the Jeans scale precision is not
surprising. For the $10\,{\rm km~s}^{-1}$ spectral pixels that we simulate, the
standard deviation of the normalized Ly$\alpha$ forest flux per pixel is
$\sqrt{\langle \delta F^2\rangle} \simeq 32\%$,
whereas for ${\rm S\slash
N}=10$ our Gaussian noise fluctuations are at a significantly
smaller $\simeq 10\%$ level. Heuristically, these two `noise' sources
add in quadrature, and thus the primary source of `noise' in measuring
the phase angle PDF results from the Ly$\alpha$ forest
itself, rather than from noise in the data. For a lower
signal-to-noise ratio of ${\rm S\slash N}=5$ per pixel, the precision is further
degraded to $\sigma_{\lambda}\slash \lambda_J=7.2\%$, which reflects the
fact that noise fluctuations are becoming more comparable to the intrinsic
Ly$\alpha$ forest fluctuations.
These numbers on the scaling of our precision with signal-to-noise
ratio ${\rm S\slash N}$ provide intuition about the optimal observing
strategy. For a given sample of pairs, it will require four times more
exposure time to increase the signal-to-noise ratio from ${\rm S\slash
N}\simeq5$ to ${\rm S\slash N}\simeq10$, whereas the same telescope time allocation
could be used to increase the sample size by a factor of four at the
same signal-to-noise ratio (assuming sufficient close pair sightlines
exist). For the latter case of an enlarged sample, the error will
scale roughly as $1\slash\sqrt{N_{\rm pairs}}$, implying
$\sigma_{\lambda}\slash \lambda_J=3.6\%$ for a sample of 80 pairs
observed at ${\rm S\slash N}=5$. This can be compared to
$\sigma_{\lambda}\slash \lambda_J =4.8\%$ for 20 pairs observed at
${\rm S\slash N}\simeq10$. There is thus a marginal
gain in working at lower ${\rm S\slash N}\simeq 5$ and observing a larger
pair sample, although we have not considered various systematic errors
which could impact our measurement. However, higher signal-to-noise spectra are
usually preferable for the purposes of mitigating systematics, and
hence one would probably opt for higher signal-to-noise ratio, a smaller pair sample, and
tolerate slightly higher statistical errors.
\subsection{Systematic Errors}
\begin{figure}
\centering
\centerline{\epsfig{file=paper_overestimated_SN.eps,
width=0.5\textwidth}}
\vskip -0.1in
\caption{\label{fig:wrong_noise} The effect of overestimating the signal-to-noise
ratio by a 20\% factor (red, dashed line), i.e., assuming S/N$=10$ when the true value is S/N$=8$: we do not
find any significant bias on the measured value of the Jeans scale.}
\end{figure}
We now briefly discuss the systematic errors which could impact a
measurement of the Jeans scale. First, consider the impact of errors
in the continuum normalization. Because the phase angle is a ratio of
Fourier modes of the normalized flux eqn.~(\ref{eqn:phase}), it is
completely insensitive to the continuum normalization of $\delta F$,
provided that the continuum is not adding significant power on the
scale of the wavelength of the $k$-mode considered. In the previous section, we argued
that finite spectral resolution does not have a significant impact on the phase angle PDF, because
phase angles are invariant under convolutions with symmetric kernels. We do take resolution
into account in our forward-modeling of the phase angle PDF, but precise knowledge of the
spectral resolution or the line spread function is not required,
since the line spread function should be very nearly symmetric when averaged
over several exposures, thus leaving the phase angles invariant. The
only requirement is that we restrict attention to modes less than the
resolution cutoff $k \lesssim k_{\rm res}$ whose amplitudes are not
significantly attenuated, such that we can actually measure their
phase angles.
Noise does modify the phase angle PDF, but our forward-modeling
approach takes this fully into account provided the noise estimates
are correct. One potential systematic is uncertainty in the noise
model. The typical situation is that the standard deviation of a spectrum
reported by a data reduction pipeline is underestimated at the $\sim
10-20\%$ level (${\rm S\slash N}$ overestimated), because of systematic
errors related to the instrument and data reduction \citep[see
e.g.][]{msb+06,KGLeeBOSS13}. To address this issue we directly
model the impact of underestimated noise for a case where we think the
${\rm S\slash N}\simeq 10$, but where in reality it is actually $20\%$ lower, ${\rm
S\slash N}\simeq 8$. Specifically, using our same mock dataset we
generate 20 quasar pair spectra with ${\rm S\slash N}\simeq 8$. However,
when forward-modeling the phase angle PDF with the IGM simulations, we
take the signal-to-noise ratio to be the overestimated value of ${\rm
S\slash N}\simeq 10$. Excess noise above our expectation would tend to
reduce the coherence in the spectra (less peaked phase angle PDF)
mimicking the effect of a smaller Jeans scale. We thus expect a bias
in the Jeans scale to result from the underestimated
noise. Figure~\ref{fig:wrong_noise} compares the posterior
distributions of the Jeans scale for the two cases
${\rm S\slash N}\simeq 10$ (black curve) and signal-to-noise ratio
overestimated to be ${\rm S\slash N}\simeq 10$ but actually equal to ${\rm
S\slash N}\simeq 8$ (red curve). We see that $\simeq 20\%$ level
uncertainties in the noise lead to a negligible bias in the Jeans
scale.
The only remaining systematic that could impact the Jeans scale
measurement is metal-line absorption within the forest. Metal
absorbers cluster differently from the IGM, and it is well known that
metals add high-$k$ power to the Ly$\alpha$ forest power spectrum
because the gas traced by metal lines tends to be colder than
\ion{H}{1} in the IGM \citep{McDonald2000,Croft2002,Kim04,Lidz09}. As this
metal absorption is not present in our IGM simulations, it can lead to
discrepancies between model phase angle PDFs and the actual data,
resulting in a biased measurement. This is very
unlikely to be a significant effect. We restrict attention to
large scale modes with $k < 0.1\,{\rm s~km}^{-1}$, both because this is comparable
to our expected spectral resolution cutoff, and because below these
wavenumbers metal line absorption results in negligible contamination
of the longitudinal power \citep{McDonald2000,Croft2002,Kim04,Lidz09}. Since the metal absorbers
have a negligible effect on the \emph{moduli} of these large scale modes, we also
expect them to negligibly change their phase angles.
We thus conclude that the phase angle PDF is highly insensitive to the
systematics that typically plague Ly$\alpha$ forest measurements, such
as continuum fitting errors, lack of knowledge of spectral resolution,
poorly calibrated noise, and metal line absorption.
\subsection{Is Our Likelihood Estimator Unbiased?}\label{sec:bias}
Finally, we determine whether our procedure for measuring the Jeans
scale via the phase angle likelihood (eqn.~\ref{diaglik}) outlined at
the end of \S~\ref{sec:emulator}, produces unbiased estimates. To
quantify any bias in our Jeans scale estimator we follow a Monte Carlo
approach, and generate 400 distinct quasar pair samples by randomly
drawing 20 quasar pair spectra (allowing for repetition) from our
ensemble of 10,000 skewers. Note that the distribution of transverse
separations is approximately the same for all of these realizations,
since we only simulate 30 discrete separations, and the full sample of
20 overlapping pair spectra requires 200 pairs of skewers, which are
randomly selected from among the 30 available pair separations. We
MCMC sample the likelihood in eqn.~(\ref{diaglik}) for each
realization, and thus generate the full marginalized posterior
distribution (eqn.~\ref{fulllik}; red curve in
Figure~\ref{fig:marg_l}). The `measured' value of the Jeans scale for
each realization is taken to be the mean of the posterior
distribution. We conducted this procedure for the case of finite
spectral resolution (FWHM $=30\,{\rm km~s}^{-1}$) and signal-to-noise ratio ${\rm
S\slash N}\simeq 5$, where our forward-modeling procedure described
in \S~\ref{sec:noise} is used to model the impact of resolution and
noise on the phase angle PDF.
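Schematically, the Monte Carlo test amounts to the following loop, where
\texttt{measure\_lambdaJ} stands for the full MCMC measurement (the posterior
mean) applied to one realization and is an assumed helper:
\begin{verbatim}
# Sketch of the bias test over 400 mock realizations.
import numpy as np

rng = np.random.default_rng(42)
measurements = []
for _ in range(400):
    idx = rng.choice(10_000, size=200, replace=True)  # skewer pairs
    measurements.append(measure_lambdaJ(idx))         # posterior mean

bias = np.mean(measurements) - 110.0      # fiducial lambda_J [kpc]
scatter = np.std(measurements) / 110.0    # relative error
\end{verbatim}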
The distribution of Jeans scale measurements resulting from this Monte
Carlo simulation is shown in Figure~\ref{fig:bias}. We find that the distribution of
`measurements' is well centered on the true value of $\lambda_J=110$
kpc, and the mean value of this distribution is $\lambda_J=111.1$ kpc,
which differs from the true value by only $1\%$, confirming that our
procedure is unbiased to a very high level of precision. The relative
error of our measurements from this Monte Carlo simulation is
$\sigma_{\lambda_J}\slash \lambda_J=6.3\%$, which is consistent with
the value of $\sigma_{\lambda_J}\slash \lambda_J=7.2\%$, which we
deduced in \S~\ref{sec:accuracy} from an MCMC sampling of the
likelihood for a single mock dataset. This confirms that the posterior
distributions derived from our MCMC do indeed provide an accurate
representation of the errors on the Jeans scale and other thermal
parameters. However, we note that there is some small variation in the
value of $\sigma_{\lambda_J}\slash \lambda_J$ inferred from the posterior distributions
for different mock data realizations, as expected. Given that we only generated 400 samples,
the error on our determination of the mean of the distribution in
Figure~\ref{fig:bias} is $\simeq (\sigma_{\lambda_J}\slash
\lambda_J)\slash \sqrt{400} = 0.3\%$, and thus our slight bias of $1\%$
constitutes a $\sim 3\sigma$ fluctuation. We suspect that this is too
large to be a statistical fluke, and speculate that a tiny
amount of bias could be resulting from interpolation errors in our
emulation of the IGM. It is also possible that choosing an alternative
statistic of the posterior distribution as our `measurement' instead
of the mean, for example the mode or median, could also further reduce
the bias. But we do not consider this issue further, since the bias is so small
compared to our expected precision.
\begin{figure}
\centering
\centerline{\epsfig{file=bias_test.eps,
width=0.5\textwidth}}
\vskip -0.1in
\caption{ \label{fig:bias} Probability distribution of the measured value of $\lambda_J$
for 400 different mock datasets drawn from the fiducial simulation.
This plot confirms that our method is not biased, since
the distribution is centered at the true value,
marked with a vertical dashed line. This test is performed assuming
S/N$=5$. The red line is the posterior distribution deduced from our MCMC sampling of the
phase angle PDF likelihood for one of these 400 mock dataset realizations. Its
similarity in shape to the distribution of mock measurements illustrates that our
MCMC simulations provide reliable error estimates.}
\end{figure}
We conclude that our phase angle PDF likelihood procedure for estimating
the Jeans scale has a negligible $\simeq 1\%$ bias. We would need to
analyze a sample of $\simeq 500-1000$ quasar pair spectra for this
bias to be comparable to the error on the Jeans scale. Furthermore, it
is likely that we could, if necessary, reduce this bias even further by
either reducing the interpolation error in our emulator or by applying
a different statistic to our posterior distribution to determine the
measured value.
\section{Discussion and Summary}\label{summary}
Spectra of correlated Ly$\alpha$ forest absorption in close quasar
pair sightlines represent a unique opportunity to improve our
understanding of the physics governing the IGM. In this paper we have
shown that the degree of coherence of Ly$\alpha$ absorption in quasar
pair spectra is sensitive to the Jeans filtering scale, provided the
pair separation is small enough to resolve it. Although the Jeans
scale has never been measured, it has fundamental cosmological
implications: it provides a thermal record of heat injected by
ultraviolet photons during cosmic reionization events, determines the
clumpiness of the IGM, a critical ingredient in reionization models,
and sets the minimum mass of galaxies to gravitationally collapse from
the IGM.
We introduce a novel technique to directly measure the Jeans scale
from quasar pair spectra based on the probability distribution
function (PDF) of phase angle differences of homologous longitudinal
Fourier modes in close quasar pair spectra. To study the efficacy of
this new method, we combined a semi-analytical model of the
${\rm Ly\alpha}$\ forest with a dark matter only simulation, to generate a grid
of 500 thermal models, where the temperature at mean density $T_0$,
slope of the temperature-density relation $\gamma$, and the Jeans
scale $\lambda_J$ were varied. A Bayesian formalism is introduced
based on the phase angle PDF, and MCMC techniques are used to conduct
a full parameter study, allowing us to characterize the precision of a
Jeans scale measurement, explore degeneracies with the other thermal
parameters, and compare parameter constraints with those obtained from
other statistics such as the longitudinal power and the cross-power
spectrum.
The primary conclusions of this study are:
\begin{itemize}
\item The longitudinal power is highly degenerate with
respect to the thermal parameters $T_0$, $\gamma$ and $\lambda_J$,
which arises because thermal broadening smooths the IGM along the
line-of-sight (1D) at a comparable scale as the Jeans pressure
smoothing (3D). It is extremely challenging to disentangle this
confluence of 1D and 3D smoothing with longitudinal observations
alone. Similar analogous degeneracies are likely to exist in other
previously considered statistics sensitive to small-scale power such
as the wavelet decomposition, the curvature, the $b$-parameter
distribution, and the flux PDF. Hence it may be necessary to
reassess the reliability and statistical significance of previous
measurements of $T_0$ and $\gamma$.
\item The cross-power measured from close quasar pairs is
sensitive to the 3D Jeans smoothing, and can break
degeneracies with the unknown Jeans scale. However, it is not the
optimal statistic, because it mixes 1D information in the moduli of
longitudinal Fourier modes, with the 3D information encoded in their phase
differences. We show that by focusing on the phase differences
alone, via the full PDF of phase angles, one is much more sensitive to 3D power and
the Jeans smoothing.
\item Based on a simple heuristic geometric argument, we derived an
analytical form for the phase angle PDF. A single parameter family
of wrapped-Cauchy distributions provides a good fit to the phase
differences in our simulated spectra for any $k$ and $r_{\perp}$, over the
full range of $T_0$, $\gamma$, and $\lambda_J$.
\item Our phase angle PDFs indicate that phase differences between
large-scale longitudinal modes with small wavenumbers $k \ll
1/\lambda_J$, are nevertheless very sensitive to the Jeans scale. We
present a simple analytical argument showing that this sensitivity
results from the geometry of observing a 3D field along 1D
skewers: low-$k$ cross-power across correlated 1D skewers is
actually dominated by high-$k$ 3D modes up to a scale set by the
pair separation $k_\perp \sim 1\slash r_\perp$.
\item The phase angle PDF is essentially independent of the
temperature-density relation parameters $T_0$ and $\gamma$. This
results because 1) the non-linear FGPA transformation is only weakly
dependent on temperature, and 2) phase angles of longitudinal modes are invariant to the
symmetric thermal broadening convolution.
\item Our full Bayesian MCMC parameter analysis indicates that a
realistic sample of only 20 close quasar pair spectra observed at
modest signal-to-noise ratio ${\rm S\slash N}\simeq 10$, can pinpoint the
Jeans scale to $\simeq 5\%$ precision, fully independent of the
amplitude $T_0$ and slope $\gamma$ of the temperature-density
relation. The freedom from degeneracies with $T_0$ and $\gamma$ is a direct consequence
of the near independence of the phase angle PDF from these parameters.
\item Our new estimator for the Jeans scale is unbiased and
insensitive to a battery of systematics that typically plague
Ly$\alpha$ forest measurements, such as continuum fitting errors,
imprecise knowledge of the noise level and/or spectral resolution,
and metal-line absorption.
\end{itemize}
In order for the parameter study presented here, with a large grid
(500) of thermal models, to be computationally feasible, we had to
rely on a simplified model of the IGM, based on a dark-matter only
simulation and simple thermal scaling relations. In particular, the
impact of Jeans pressure smoothing on the distribution of baryons is
approximated by smoothing the dark-matter particle distribution with a
Gaussian-like kernel, and we allowed the three thermal parameters
$T_0$, $\gamma$, and $\lambda_J$ to vary completely independently.
Although the Gaussian filtering approximation is valid in linear
theory \citep{GnedinBaker2003}, the Jeans scale is highly nonlinear at
$z\simeq 3$, hence a precise description of how pressure smoothing
alters the 3D power spectrum of the baryons requires full
hydrodynamical simulations. Furthermore, the three thermal parameters
we consider are clearly implicitly correlated by the underlying
thermal history of the Universe. Indeed, a full treatment of the
impact of impulsive reionization heating on the thermal evolution of
the IGM and the concomitant hydrodynamic response of the baryons,
probably requires coupled radiative transfer hydrodynamical
simulations.
Our approximate IGM model is thus justified by the
complexity and computational cost of fully modeling the Jeans smoothing problem, and
despite its simplicity, it provides a good fit to current measurements
of the longitudinal power (see Figure~\ref{fig:power_spectra}). Most
importantly, our simple model allowed us to develop valuable physical
intuition about how 3D pressure smoothing of baryons is manifest in
Ly$\alpha$ forest spectra of close quasar pairs. Based on this
intuition, we devised a powerful new method which isolates this
small-scale 3D information. By combining this new technique with
existing close quasar pair spectra, we will make the first direct
measurement of the Jeans scale of the IGM. Given that precise $\simeq 5\%$
constraints on the Jeans scale will soon be available, the time is
now ripe to use hydrodynamical and radiative transfer simulations to
improve our understanding of how reionization heating altered the
small-scale structure of baryons in the IGM.
\acknowledgments
We thank P. McDonald and U. Seljak for first suggesting to JFH that
close quasar pairs could be used to measure the Jeans scale. We also
thank the members of the ENIGMA
group\footnote{http://www.mpia-hd.mpg.de/ENIGMA/} at the Max Planck
Institute for Astronomy (MPIA) for reading an early version of the
manuscript and for helpful discussions. JFH acknowledges generous
support from the Alexander von Humboldt foundation in the context of
the Sofja Kovalevskaja Award. The Humboldt foundation is funded by the
German Federal Ministry for Education and Research.
\bibliographystyle{../Bibli/apj}
\section{Introduction}
\label{sec:intro}
Causal language modeling offers great flexibility across most natural language tasks due to its unsupervised and generative nature. Large-scale pre-training of Transformer architectures like GPT2 has resulted in powerful models capable of capturing general knowledge of natural language. However, unlike bidirectional language models such as BERT and RoBERTa, a causal language model
can only look at the word history to predict the next word. While this is mathematically sound,
restricting the model to left-hand contextual information may hinder it from capturing semantic knowledge at its fullest.
On the other hand, while BERT provides satisfactory performance for sequence encoding thanks to its bi-directional nature, it is designed for masked language modeling, which predicts the word identity at a masked position in a sentence. BERT is non-causal and thus is not suitable for sequence generation.
Recent studies have shown that retrieving prefix contextual information from an external data store can further improve the performance of a causal language model without increasing the number of model parameters~\cite{realm}. However, the retrieved information is still uni-directional. In this paper, we propose a novel language model, {\bf SU}ffix {\bf RE}trieval-{\bf A}ugmented {\bf LM} (SUREALM), that employs an embedding retriever for suffix retrieval from a data store. During sequence generation, the current word history, referred to as the prefix in the rest of the paper, is submitted to an embedding retriever to search for training sentences that share similar prefixes. The corresponding suffixes of these training sentences are then viewed as ``future'' context to guide sequence generation. The intuition is that sentences sharing a similar prefix are likely to have strongly correlated suffixes. For example, ``how may i'' and ``how can i'' are similar prefixes. If the model also knows the complete reference sentence ``how can i help you'', then the model would tend to predict ``help you'' given a novel prefix ``how may i''. To exploit this assumption, we perform all possible splittings of each training sentence into a triple: a prefix, a word, and a suffix. We employ pre-trained sentence transformers~\cite{reimers-2019-sentence-bert} to encode the prefix and suffix of each training sentence to create an embedding data store. Then an embedding retriever such as FAISS~\cite{johnson2019billion} is employed for prefix-suffix embedding retrieval given an encoded prefix. The retrieved prefix-suffix embeddings are augmented into the word embedding inputs during sequence generation, achieving causal language modeling with a simulated bi-directional effect. SUREALM is causal because it only uses the word history to predict the next word. SUREALM is simulated bi-directional because it exploits ``future'' context from other similar sentences.
Our contributions are two-fold: First, we propose SUREALM, a new causal language model enhanced by prefix-suffix embedding retrieval to simulate a bi-directional effect for sequence generation.
Second, we perform extensive experiments and show the effectiveness of our model on the DSTC9 dialogue corpus.
\section{Related Work}
\label{sec:related_work}
Improving language models using retrieval techniques is not new. ~\cite{milind99irlm} employs document retrieval to retrieve relevant documents, which are used to create an adaptive language model that is interpolated with the background statistical N-gram language model. ~\cite{eck-etal-2004-language} employs information retrieval to perform language model adaptation for statistical machine translation. Once the language models are adapted, they are kept fixed during sequence generation.
Most recent development in language modeling is based on transformers~\cite{NIPS2017_3f5ee243}. BERT-based masked language models~\cite{devlin-etal-2019-bert, DBLP:journals/corr/abs-1907-11692} exploit bi-directional information of a sentence to predict the word identity of masked tokens. While BERT is effective in encoding sequences, it is not suitable for sequence generation due to its non-causal nature. Causal language modeling such as GPT2~\cite{Radford2018ImprovingLU} is uni-directional. Our proposed model attempts to retain the best of both worlds, being autoregressive and simulated bi-directional via augmentation of suffix embeddings during sequence generation.
One noticeable work for language modeling using embedding retrieval is the nearest neighbor language model (KNN-LM)~\cite{khandelwal20generalization}. Their approach stores dynamically changing information in an external knowledge base. During sequence generation, KNN-LM uses the current prefix to retrieve similar prefixes in the data store using embedding retrieval. Then the output word probability distribution is estimated by looking at the corresponding next words in the retrieved prefixes. Such a word probability distribution is linearly interpolated with the output word distribution from the causal transformer LM. While their approach has shown effectiveness in reducing word perplexity, it is uni-directional in terms of the utilization of information for word prediction. Our proposed model enjoys the simulated bi-directional effect of utilizing ``future'' contextual information to guide sequence generation.
Another work is retrieval-augmented generation for question answering~\cite{rag}. Their approach employs embedding retrieval over the encoded document embeddings. Then the top-K retrieved document embeddings are viewed as latent variables for answer generation. These latent variables are marginalized in the generator within a sequence-to-sequence generation framework. Related work using retrieval techniques for language modeling pre-training and question answering also includes~\cite{realm}. Our proposed model differs from their approach in that we do not employ marginalization over the top-K retrieved results. In contrast, our model relies on the attention mechanism to attend to all previously retrieved suffix embeddings such that the cross-entropy loss is minimized.
\section{Proposed approach}
\label{sec:approach}
Our proposed approach extends causal language models with suffix retrieval. Denote a sentence $W=w_1w_2...w_N$. Then our model defines the negative log likelihood of $W$ as follows:
\begin{equation}
\label{eqn:surealm}
\mathcal{L}(W;\Theta) = -\log P(W;\Theta) = -\sum_{i=1}^N \log P(w_i|p_i,f(p_i;\Phi);\Theta)
\end{equation}
where $p_i$ denotes the word history (or prefix) of the word token $w_i$, and $f(p_i;\Phi)$ denotes a retrieval function, parameterized by $\Phi$, that searches a data store for sentences with similar prefixes. The suffixes of the retrieved sentences are then augmented into the language model via suffix embeddings. Although the true future context is unseen in causal language models, we hypothesize that such future context may be estimated by leveraging sentences that share similar prefixes. Thus, our model, {\bf SU}ffix {\bf RE}trieval-{\bf A}ugmented {\bf LM} (SUREALM), achieves simulated bi-directional modeling as in BERT while still being able to generate sentences in an autoregressive manner as in GPT.
In summary, our proposed approach has three steps: (1) Data preprocessing and indexing; (2) SUREALM training; (3) SUREALM decoding. We describe the steps in Section~\ref{subsec:preprocess}--~\ref{subsec:decode}.
\subsection{Data pre-processing and indexing}
\label{subsec:preprocess}
Given a training corpus $\mathcal{D} = \{W\}$ containing a set of unique sentences $W$, each sentence $W$ generates all possible partitions of $W$ into 3 parts: prefix, current word, and suffix, denoted as $(p_i, w_{i}, s_i)$ where $p_i = w_1w_2...w_{i-1}$, $s_i = w_{i+1}...w_N$ with valid word position $0 < i < N $.
Motivated from masked language modeling, we exclude the current word $w_{i}$ so that
each data entry for indexing into a data store is
a prefix-suffix pair $(p_i, s_i)$. This formulation enforces our model to use information from the prefix (left context) and the retrieved suffixes (``right context'' from other training sentences).
Thanks to the recent development in sentence embedding retrieval, we employ a pre-trained sentence transformer~\cite{reimers-2019-sentence-bert} to encode prefixes and suffixes such that retrieval is based on the similarity of prefix embeddings. The data store then returns the prefix and suffix embeddings. The rationale of encoding a variable-length prefix and suffix into a fixed-dimensional vector is to make the prefix and suffix representations smoother, mitigating word-level noise that is irrelevant to the current prefix under consideration. Intuitively, $emb(p_i), emb(s_i) \in \mathcal{R}^{d}$ can be viewed as a key-value pair where $d$ is the embedding dimension. To preserve positional information in the representation, we use absolute positions in the original sentence when computing the prefix and suffix embeddings. We employ FAISS~\cite{johnson2019billion} for embedding search.
The final number of prefix-suffix pairs to index is $O(M\cdot N)$ where $M$ is the number of unique training sentences and $N$ is the maximum sequence length of a sentence. Essentially, our model requires performing embedding retrieval at every word position. Therefore, we introduce a hyperparameter $\delta$ to control the frequency of embedding retrieval. For example, if embedding retrieval occurs at time $t'$, then the next retrieval will occur at time $t'+\delta$. This implies that during the time interval $[t'+1,t'+\delta-1]$, all the previously retrieved suffix embeddings are reused to save computation. This allows us to explore the tradeoff between computation and accuracy.
Regarding the suffix representation, we apply suffix truncation, assuming that word tokens in a suffix closer to prediction time $t$ may be more helpful. We introduce a hyper-parameter $m$ for suffix truncation, so that a truncated suffix $s_i'=w_{i+1}w_{i+2}...w_{i+m}$ with $m < N$ is fed into the sentence transformer for encoding. When the number of tokens $N$ is large, we conjecture that a suitable $m$ may avoid an overly smoothed suffix embedding caused by the pooling mechanism in the sentence transformer. Table~\ref{tbl:faiss} shows sample retrieval results using some prefixes as input queries.
\begin{table}[htb]
\vspace{-4mm}
\centering
\caption{Samples of retrieval results using FAISS.}
\begin{tabular}{|c|c|c|}
\hline
{\bf Input Query} & {\bf Retrieved Word} & {\bf Retrieved Suffix} \\
\hline
'i also want'&'free'& 'wifi' \\
\hline
'i'd like to' & 'book' & 'this hotel' \\
\hline
'is the hotel &&\\ equipped with an' & 'elevator' & 'for convenience?'\\
\hline
'i have several' & 'options' &'for you. would you \\&& like a particular area...' \\
\hline
\end{tabular}
\label{tbl:faiss}
\vspace{-4mm}
\end{table}
\subsection{SUREALM training}
\label{subsec:training}
\subsubsection{Offline retrieval}
One complication in SUREALM is the embedding retrieval required for each prefix $p_i$ at each word position $i$. It would be computationally expensive to perform on-the-fly embedding retrieval during training; since we freeze the sentence transformer used for encoding, the top-K similar prefix and suffix embeddings can instead be precomputed offline using FAISS to speed up training. $K$ is a hyper-parameter determining the number of retrieved suffix embeddings included for SUREALM training. To avoid cheating, we exclude all retrieval results belonging to the same training sentence ID. Empirically, we found this step crucial: it forces SUREALM to learn from the additional suffix information provided by other, similar training sentences.
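Continuing the illustrative sketch above, this filtering can be implemented by over-fetching candidates and dropping hits from the same sentence; the over-fetch factor of $4K$ is an arbitrary assumption, not a tuned value.
\begin{verbatim}
def retrieve(index, s_emb, sent_ids, q_emb, q_sid, K=8):
    # over-fetch, then drop same-sentence hits
    _, idx = index.search(
        np.asarray([q_emb], dtype="float32"), 4 * K)
    keep = [j for j in idx[0] if sent_ids[j] != q_sid][:K]
    return s_emb[keep]  # top-K suffixes from other sentences
\end{verbatim}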
\subsubsection{Mini-batch training}
Another challenge is making SUREALM fit mini-batch training, where a batch of padded word sequences is fed into the model and different suffix embeddings should be applied at different times $i$. To enable mini-batch training, we first append all offline-retrieved suffix embeddings to the input word embeddings. Then we construct a suitable attention mask so that each word position $i$ only attends to the allowed suffix embeddings and to the previous word positions, as in a causal LM.
Denote by $\mathcal{C}_K^{\bigoplus} (W_{\le i}) = \bigoplus_{i'=1}^i \bigoplus_{k=1}^K \{(p_{i'}^{(k)}, s_{i'}^{(k)})\}$ the concatenation of all previously retrieved top-K prefix-suffix embedding pairs. The probability of generating a word sequence $W$ becomes:
\begin{equation}\label{p}
P(W) = \prod_{i=1}^{N} P(w_i| W_{\leq i-1}, \mathcal{C}^{\bigoplus}_K(W_{\leq i-1})).
\end{equation}
where $W_{\leq i-1} = w_1w_2...w_{i-1}$. SUREALM employs a transformer architecture whose Query-Key-Value inputs are defined as follows:
\begin{align}
Q &= \text{Embedding}(W) \\
K &= \text{Concat}(\text{Embedding}(W), p_1, p_2, \dots, p_J) \\
V &= \text{Concat}(\text{Embedding}(W), s_1, s_2, \dots, s_J) \\
M_{i,j} &= \begin{cases}
0, &\text{ $j \leq i$ or} \\
& \text{ $j> |Q|$ and $ (j -|Q|)\leq K (\lceil i / \delta \rceil- 1)$ }\\
-\infty, &\text{Otherwise.}
\end{cases}
\end{align}
Here $M\in \mathbb{R}^{|Q|\times(|Q|+J)}$ is an additive attention mask, where $J$ is the total number of retrieved prefix and suffix embeddings appended to the keys and values; it masks future positions among both the input word embeddings and the retrieved suffix embeddings.
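For concreteness, the mask can be materialized as in the following PyTorch sketch ($0$ means the position may be attended to, $-\infty$ that it is blocked); the variable names are ours.
\begin{verbatim}
import torch

def build_mask(Q_len, J, K, delta):
    M = torch.full((Q_len, Q_len + J), float("-inf"))
    for i in range(1, Q_len + 1):  # i is 1-based as in the text
        M[i - 1, :i] = 0.0         # causal part: j <= i
        # suffix part: K * (ceil(i / delta) - 1) columns allowed
        n_sfx = K * ((i + delta - 1) // delta - 1)
        M[i - 1, Q_len:Q_len + min(n_sfx, J)] = 0.0
    return M
\end{verbatim}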
Finally, we obtain the output attention weights in the masked attention block as follows:
\begin{equation}
\text{Attention}(Q,K,V,M) = \text{Softmax}(\frac{QK^{T} + M}{\sqrt{d_k}})V
\end{equation}
where $d_k$ denotes the embedding dimension of the keys and values.
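In code, the masked attention block then amounts to the usual scaled dot-product attention with the additive mask (a sketch continuing the PyTorch snippet above, with shapes as in the text):
\begin{verbatim}
def attention(Q, K_mat, V, M, d_k):
    scores = (Q @ K_mat.transpose(-2, -1) + M) / d_k ** 0.5
    return torch.softmax(scores, dim=-1) @ V
\end{verbatim}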
Figure~\ref{fig:workflow} illustrates the SUREALM architecture.
Notice that SUREALM can be initialized with any pre-trained transformer model weights from the BERT or GPT model families. Then, we finetune the model using the training text to minimize the cross-entropy loss.
\subsection{SUREALM Decoding}
\label{subsec:decode}
SUREALM decoding is similar to any decoding algorithm in sequence generation, except that suffix embedding retrieval is performed whenever the prefix is updated during generation. We start from the start symbol as the initial prefix. Suffix embedding retrieval then takes place using the current prefix as a query, and the top-K suffix embeddings are added to the
set $\mathcal{C}_K^{\bigoplus} (W_{\le i})$ as extra inputs to the transformer in a progressive manner. The next word is generated from the output word distribution, and the generated word is appended to the prefix, giving a new prefix. The generation process is repeated until the end-of-sentence symbol is encountered. We follow the decoding algorithm implementation in the Huggingface transformers library~\cite{wolf-etal-2020-transformers} and augment it with an embedding retriever in our implementation.
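The following greedy-decoding sketch summarizes the loop; the forward signature \texttt{model(prefix\_ids, suffix\_bank)} is hypothetical shorthand for a SUREALM forward pass, and greedy selection stands in for whichever Huggingface decoding strategy is used.
\begin{verbatim}
def generate(model, tokenizer, encoder, retriever,
             max_len=50):
    prefix_ids = [tokenizer.bos_token_id]
    suffix_bank = []  # retrieved suffix embeddings so far
    for _ in range(max_len):
        q = encoder.encode(tokenizer.decode(prefix_ids),
                           normalize_embeddings=True)
        suffix_bank.append(retriever(q))  # top-K suffixes
        logits = model(prefix_ids, suffix_bank)
        next_id = int(logits[-1].argmax())
        if next_id == tokenizer.eos_token_id:
            break
        prefix_ids.append(next_id)
    return tokenizer.decode(prefix_ids[1:])
\end{verbatim}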
\begin{figure}[htb]
\begin{minipage}[b]{\linewidth}
\centering
\centerline{\includegraphics[scale=0.3]{workflow_new.png}}
\vspace{-2mm}
\end{minipage}
\caption{SUREALM training with $\delta=1, K=2$.}
\label{fig:workflow}
\end{figure}
\section{Experiments}
\label{sec:expt}
In this section, we compare SUREALM with different configurations with baseline LMs for sequence generation. We report word perplexity for all experiments. We also present some details on the choice of hyper-parameters.
\subsection{Setup}
\label{subsec:data}
We used the dataset from the Dialogue System Technology Challenge 9 (DSTC9)~\cite{kimdstc9}. The original dataset was designed for evaluating spoken dialogues that involve accessing an external knowledge base containing a list of question-answer pairs about each named entity, such as ``Can I bring my pet to A and B Guest House?'' and ``No, pets are not allowed at this property.''. For language modeling purposes, we treated each dialogue turn as an independent sentence and kept only unique sentences in our training, validation and test sets. Each sentence was then assigned a unique sentence ID so that it can be uniquely identified by the embedding retriever. Our resulting training dataset contained 126,877 unique dialogue turns mentioning 145 named entities covering four domains: hotel, restaurant, taxi and train. Our validation dataset contained 18,037 unique dialogue turns.
The test dataset had 18,021 unique dialogue turns covering 523 unseen named entities, including a new domain on attraction. Due to the introduction of a new domain, we further split the test dataset into in-domain and out-of-domain portions and evaluated only on the in-domain portion. Since the test turns did not carry manual domain labels, we applied named entity recognition to all dialogue turns and labeled each turn with the domain of its detected named entity. The question-answering knowledge base was not added to our data store; the data store contained only the prefix-suffix embeddings of the training sentences.
We followed the data preprocessing procedure in Section~\ref{subsec:preprocess} to generate the prefix-suffix pairs of the training dataset, pre-computing the prefix and suffix embeddings that were indexed and stored using FAISS~\cite{johnson2019billion}. The prefix and suffix embeddings were computed using pre-trained sentence transformers~\cite{reimers-2019-sentence-bert}. We also precomputed the prefix embeddings of the validation and test sets to speed up retrieval during evaluation; however, these embeddings were not indexed in FAISS, to avoid cheating.
\subsection{Training details}
In SUREALM, there are two modeling components to consider: (1) the encoding model; (2) the language model. For the encoding model, we used pre-trained sentence transformers~\cite{reimers-2019-sentence-bert} to encode prefixes and suffixes. We tried small-scale and standard-scale models, with 6 layers and 384 dimensions and with 12 layers and 768 dimensions respectively, in our experiments. For the language model, we employed a transformer-based language model with various weight initialization strategies. Inspired by~\cite{rothe-etal-2020-leveraging}, we explored different sentence transformer checkpoints to initialize the language model weights.
On small-scale model training, we used a batch size of 128, \texttt{AdamW} optimizer with learning rate of \texttt{2e-5}, and linear learning rate scheduler with 500 warmup steps. On standard-scale model training, we used a batch size of 64 and learning rate of \texttt{1e-5} and kept the same settings as in the small-scale model training. Since our dataset was relatively small, we trained SUREALM for a maximum of 200 epochs and chose the model with the minimum validation perplexity.
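For reference, the small-scale optimization setup corresponds to the following sketch, where \texttt{model} and \texttt{train\_loader} are assumed to be defined elsewhere:
\begin{verbatim}
from torch.optim import AdamW
from transformers import get_linear_schedule_with_warmup

optimizer = AdamW(model.parameters(), lr=2e-5)
total_steps = 200 * len(train_loader)  # up to 200 epochs
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=500,
    num_training_steps=total_steps)
\end{verbatim}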
In our preliminary experiments, we chose the best configuration of the hyper-parameters based on the validation perplexity. Results showed that it was crucial to retrieve at each prediction time step, i.e. $\delta=1$. Moreover, we chose $m=10$ for suffix truncation. Retrieving the top-K ($K=8$) prefix-suffix embeddings yielded the best perplexity results. We fixed these hyper-parameters for further experiments below.
\subsection{Results}
\subsubsection{Small-scale models}
For baselines, we fine-tuned the 6-layer transformer-based masked LM (MiniLM) initialized with random and pre-trained weights. We used the pre-trained sentence transformers (\texttt{multi-qa-MiniLM-}\texttt{L6-cos-v1} and \texttt{all-MiniLM-L6-v2}) as our encoding models. We initialized our LM with random weights, pre-trained sentence transformer weights, and the masked LM weights.
Results in Table~\ref{table:small} show that:
\begin{enumerate}
\item SUREALM achieved lower perplexity than the baselines in all experiments. Our best model achieved a relative test perplexity reduction of 7.2\% compared to the baseline.
\item LM weights can be initialized differently from the ST weights for model training without any performance degradation. This implies flexibility for different weight combinations.
\end{enumerate}
\subsubsection{Standard-scale models}
We then trained SUREALM using standard-scale models and compared with popular state-of-the-art LM baselines such as BERT, RoBERTa and GPT2. However, since these baselines use different word tokenizers, resulting in different output vocabulary sizes, we can only compare models with similar vocabulary sizes. Table~\ref{table:big_bert} shows that SUREALM achieved a relative test perplexity reduction of 7.1\%.
Table~\ref{table:big} shows perplexity results with an increased vocabulary size of 50k. SUREALM achieved relative test perplexity reductions of 3.2\% and 2\% compared to the GPT2 and RoBERTa baselines, respectively.
\begin{table}[htb]
\vspace{-3mm}
\centering
\caption{Validation and test perplexities using small models with 30k output vocabulary. ST stands for sentence transformers.}
\begin{tabular}{|c|c|c|c|}
\hline
ST weight & LM weight & Val. ppl & Test ppl \\
\hline
N/A (baseline) & MiniLM & 5.43 & 7.94 \\
\hline
Multiqa-MiniLM & Multiqa-MiniLM & 5.05 & 7.52 \\
Multiqa-MiniLM & MiniLM & 5.04 & 7.47 \\
All-MiniLM & All-MiniLM & \textbf{5.02} & 7.44 \\
All-MiniLM & MiniLM & 5.04 & \textbf{7.37} \\
\hline
\end{tabular}
\label{table:small}
\end{table}
\begin{table}[htb]
\vspace{-3mm}
\centering
\caption{Validation and test perplexities using standard models with 30k output vocabulary. ST stands for sentence transformers.}
\begin{tabular}{|c|c|c|c|}
\hline
ST weight & LM weight & Val. ppl & Test ppl \\
\hline
N/A (baseline) & BERT-base & 5.19 & 7.58 \\
\hline
Multiqa-distilbert & BERT-base & \textbf{4.88} & \textbf{7.04} \\
\hline
\end{tabular}
\label{table:big_bert}
\end{table}
\begin{table}[htb!]
\vspace{-3mm}
\centering
\caption{Validation and test perplexities using standard models with 50k output vocabulary. ST stands for sentence transformers.}
\begin{tabular}{|c|c|c|c|}
\hline
ST weight & LM weight & Val. ppl & Test ppl \\
\hline
N/A (baseline) & RoBERTa-base & 5.35 & 7.69 \\
N/A (baseline) & GPT2-base & {\bf 5.01} & 7.60 \\
\hline
All-distilroberta & RoBERTa-base & 5.13 & 7.55 \\
Multiqa-distilbert & RoBERTa-base & 5.12 & 7.54 \\
Multiqa-mpnet & GPT2-base & 5.03 & 7.37 \\
Multiqa-distilbert & GPT2-base & 5.03 & 7.37 \\
All-distilroberta & GPT2-base & 5.03 & {\bf 7.36} \\
\hline
\end{tabular}
\label{table:big}
\end{table}
\section{Discussions}
\label{sec:discuss}
During embedding retrieval, we investigated including the current word in the suffix of a training sentence, i.e., splitting a training sentence only into prefix and suffix instead of the prefix, current word and suffix described in Section~\ref{subsec:preprocess}. We then followed the same procedure to encode the prefixes and suffixes and reran SUREALM training and evaluation. However, we observed no test perplexity reduction compared to the baseline. Excluding the current word from the suffix may be analogous to applying a mask token in a masked LM: after excluding the current word, SUREALM focuses on information from the word history and the retrieved suffix context for word prediction. It is possible that the embedding retrieval results contain sentences that share similar prefixes but have a suffix identical to that of the current input sentence. From this perspective, excluding the current word from the suffix is reasonable, as it prevents SUREALM from overly relying on the suffix embeddings and forgetting the word history in word prediction.
\section{Conclusions}
We have proposed a suffix retrieval-augmented language model that simulates a bi-directional contextual effect while remaining autoregressive, so that our model can be used for sequence generation. Our proposed model shows promising perplexity performance compared to state-of-the-art LM baselines. In the future, we plan to evaluate our model on large corpora. In addition, we plan to extend our model to conditional generation such as dialogue response generation. Lastly, we will investigate domain LM adaptation using our proposed model.
\vfill\pagebreak
\bibliographystyle{IEEEbib}
\section{The Hodge DR conjecture} \label{sect:universal_bundle}
In this section we present several equivalent constructions of the universal line bundle introduced in \ref{sec:intro:hodge_DR}, discuss its various properties, and prove \ref{thm:HDR_k_is_0}.
As explained in \ref{sec:intro:hodge_DR}, the projectivized space of (generalised) multi-scale differentials comes with a map to the projectivized Hodge bundle, by taking the differential at top level, and allowing it to vanish at all lower levels. Pulling back $\ca O(1)$ from the Hodge bundle gives a line bundle on the generalized multi-scale space. We begin by giving several equivalent versions of this construction.
First we write out explicitly the objects of the fibred category $\bb P(\cat{Rub})$:
\begin{equation*}
\bb P(\cat{Rub}) \= \{(\pi\colon X\to B, \beta, \ca F)\}\,,
\end{equation*}
where $(X/B, \beta)$ is a point of $\cat{Rub}$ as in \ref{def:rub}, and $\ca F$ is a line bundle on $B$. The Abel--Jacobi map sends such an object to $\pi^*\ca F (\beta)$, giving a proper map $\bb P(\cat{Rub}) \to \mathfrak{Pic}$.
Now fix a line bundle $\ca L$ on $X_{g,n}/\overline{\ca M}_{g,n}$, which is of total degree 0 on each fiber. Then we can write explicitly the fibred category of $\bb P(\cat{Rub}_{\ca L})$ as
\begin{equation*}
\bb P(\cat{Rub}_{\ca L}) \= \{(X/B, \beta, \ca F, \phi)\}
\end{equation*}
where $(X/B, \beta, \ca F)$ is an object of $\bb P(\cat{Rub})$ with $X/B$ stable of genus $g$, and $\phi\colon \pi^*\ca F (\beta) \to \ca L$ is an isomorphism.
\subsection*{Construction 1: tautological bundle}
This is just the bundle $\ca F$ on $\bb P(\cat{Rub})$, or its pullback to $\ca F$ on $\bb P(\cat{Rub}_{\ca L})$ along the tautological map. We denote the {\em dual} of this line bundle by $\eta$.
\subsection*{Construction 2: projective embedding}
Let $D$ be an effective divisor on $X_{g,n}$ such that $\pi_* \mathcal L(D)$ is a vector bundle on $\overline{\ca M}_{g,n}$. Such a $D$ can always be found as an element of the linear system of a sufficiently relatively ample sheaf on $X_{g,n}$ over $\overline{\ca M}_{g,n}$. Then over $\bb P(\cat{Rub}_{\ca L})$ we have natural maps
\begin{equation}
\pi^*\ca F \iso \ca L(-\beta) \to \ca L \to \ca L(D)\,,
\end{equation}
where the first map is induced by $\phi$, the second is induced by the natural map $\ca O(-\beta) \to \ca O$, and the third by the natural map $\ca O \to \ca O(D)$. Adjunction yields a map
\begin{equation}
\ca F \= \pi_*\pi^*\ca F \to \pi_*\ca L(D)\,,
\end{equation}
which is by definition\footnote{Our projectivizations are moduli of sub-bundles, not quotient bundles. } a map
\begin{equation}\label{lem:map-hodge}
F\colon \bb P(\cat{Rub}_{\ca L}) \to \bb P_{\bb P(\cat{Rub}_{\ca L})}(\pi_*\ca L(D))\,.
\end{equation}
\begin{lemma}\label{lem_eta_comparison_1}
$F^*\ca O(1) = \eta$\,.
\end{lemma}
\begin{proof}
The equality $F^*\ca O(1) = \ca F^\vee$ is immediate from \cite[\href{https://stacks.math.columbia.edu/tag/0FCY}{Example 0FCY}]{stacks-project}; the fact that we obtain $\ca F^\vee$ instead of $\ca F$ is because we define the projectivization to be the moduli of rank 1 sub-bundles, not rank 1 quotient bundles.
\end{proof}
\par
In particular, we observe that the line bundle $F^*\ca O(1)$ turns out to be independent of the choice of the sufficiently relatively ample divisor $D$. In the case considered in the introduction, we take
\begin{equation}
\mathcal L = \omega^{\otimes k}_{X_{g,n}/\overline{\ca M}_{g,n}}\Big(-\sum_{i} (a_i - k) z_i\Big)
\end{equation}
and $D=\sum_{i: a_i > k} (a_i - k) z_i$.
\subsection*{Construction 3: pullback from rubber target}
For this construction we restrict to the case where $\ca L = \ca O_X(\sum_i a_i z_i)$ for $k = 0$; put another way, we choose a rational section of $\ca L$ whose locus of zeros and poles is contained in a union of disjoint sections of $X \to B$.
\par
We write
\begin{equation*}
E \= \sum_{i : a_i >0} a_i z_i \;\;\; \text{and} \;\;\; D \= -\sum_{i : a_i <0} a_i z_i\,.
\end{equation*}
Since these are effective divisors we have natural maps
\begin{equation*}
\ca O_X \to \ca O_X(E) \;\;\; \text{and} \;\;\; \ca O_X \to \ca O_X(D)\,,
\end{equation*}
and combining with the natural map $\ca O_X \to \ca O_X(\beta)$ and the isomorphism $\phi\colon \pi^*\ca F (\beta) \iso \ca O_X(E - D)$ yields maps
\begin{equation*}
\ca O_X(-D)(-\beta) \to \ca O_X \;\;\; \text{and} \;\;\; \ca O_X(-D)(-\beta) \to \ca O_X(E - D)(-\beta) \iso \pi^* \ca F\,.
\end{equation*}
The induced map
\begin{equation*}
\ca O_X(-D)(-\beta) \to \ca O_X \oplus \pi^* \ca F\,
\end{equation*}
is universally injective since the first map is injective around the support of $E$ and the second is injective away from the support of $E$. This induces a map
\begin{equation*}
X \to \bb P(\ca O_B \oplus \ca F)\,.
\end{equation*}
The cotangent line at $\infty$ to this rubber target is then given by
\begin{equation}
\Psi_\infty = \ca F^\vee\,.
\end{equation}
We have deduced
\begin{lemma}\label{lem_eta_comparison_2}
$\Psi_\infty = \eta$\,.
\end{lemma}
\begin{remark}
Above we have constructed a rubber target of length 1 (i.e. with no expansions).
This is because we are only interested in what happens near the infinity section, so we do not need to construct the whole expanded chain. The reader who is more comfortable with expansions may verify that the length-1 target we construct here is exactly what is obtained by following through the proof of the expanded target in \cite[Proposition~50]{BHPSS}, and then contracting all except the top component.
\end{remark}
\subsection{Computation of $\eta$ for $k=0$}
Here we prove \ref{thm:HDR_k_is_0}, which we restate for the convenience of the reader.
\begin{theorem}
\ref{conj:HDR} is true for $k=0$: for any $g,u \geq 0$ and any vector $A \in \mathbb{Z}^n$ with sum $|A|=0$ we have
\[
p_*\left(\left[{\mathbb{P}}\big(\cat{Rub}_{\mathcal L_A}\big) \right]^\mathrm{vir} \cdot \eta^u \right)
= p_*\left([\overline{\ca M}_{g,A}(\mathbb{P}^1, 0, \infty)^\sim]^\mathrm{vir} \cdot \Psi_\infty^u \right)
\= [r^u]{\rm Ch}_{g,A}^{0, r, g+u}\,.
\]
\end{theorem}
\begin{proof}
The first equality follows from \ref{lem_eta_comparison_1} and \ref{lem_eta_comparison_2}. For the second equality, we note that the term on the left has been computed in \cite[Corollary 4.3]{FWY} in terms of a slightly modified Chiodo class. Indeed, we define an $r$-shifted version $A(r)$ of $A$ by
\[
A(r)_i \= \begin{cases}
a_i & \text{for }a_i \geq 0\,,\\
r+a_i & \text{for }a_i < 0\,.
\end{cases}
\]
In other words, for all indices $i$ with $a_i<0$ (which form a subset $I_\infty \subseteq \{1, \ldots, n\}$), we shift the vector $A$ by $r$ in the $i$-th entry.
Then the Chiodo class $\mathrm{Ch}_{g,A(r)}^{0,r,d}$ is a polynomial in $r$, for $r$ sufficiently large. Denote by
\[
\mathrm{Ch}_{g,A(r)}^{0,r,\bullet} \= \sum_{d \geq 0} \mathrm{Ch}_{g,A(r)}^{0,r,d}
\]
the associated mixed-degree class. Then in this notation, the formula from \cite[Corollary 4.3]{FWY} reads as follows:
\begin{align*}
p_*\left([\overline{\ca M}_{g,A}(\mathbb{P}^1, 0, \infty)^\sim]^\mathrm{vir} \cdot \Psi_\infty^u \right) &= \sum_{\vec e \in \mathbb{Z}_{\geq 0}^{I_\infty}} \prod_{i \in I_\infty} (a_i \psi_i)^{e_i} \cdot [r^{u-|\vec e|}] \mathrm{Ch}_{g,A(r)}^{0,r,u+g-|\vec e|}\\
&= [r^u] \left[\sum_{\vec e \in \mathbb{Z}_{\geq 0}^{I_\infty}} \prod_{i \in I_\infty} \left(\frac{a_i}{r} \psi_i\right)^{e_i} \cdot \mathrm{Ch}_{g,A(r)}^{0,r,\bullet}\right]_{\mathrm{codim}\ g+u}\\
&= [r^u] \left[\prod_{i \in I_\infty} \frac{1}{1-\frac{a_i}{r} \psi_i} \cdot \mathrm{Ch}_{g,A(r)}^{0,r,\bullet}\right]_{\mathrm{codim}\ g+u}\\
&= [r^u] \left[\mathrm{Ch}_{g,A}^{0,r,\bullet}\right]_{\mathrm{codim}\ g+u}\,.
\end{align*}
Here the last step uses \cite[Theorem 4.1 (ii)]{DaniloEulerChar}.
\end{proof}
\subsection{(A)symmetry}
Above we gave three constructions of the line bundle $\eta = \eta(\ca L)$ on $\bb P(\cat{Rub}_\ca L)$. We know that the push-forwards to $\overline{\ca M}_{g,n}$ of $[\bb P(\cat{Rub}_{\ca L})]^\mathrm{vir}$ and $[\bb P(\cat{Rub}_{\ca L^\vee})]^\mathrm{vir}$ agree. However, once we intersect with the class $\eta$, things are a little more subtle. The universal curve over ${\mathbb{P}}(\cat{Rub})$ carries a PL function $\beta$, with totally ordered values and maximum value $0$. We denote the \emph{minimum} value of $\beta$ by $\beta^\textrm{min}$; this is a PL function on ${\mathbb{P}}(\cat{Rub})$.
\par
\begin{lemma}
We have
\begin{equation}
p_*\left([\bb P(\cat{Rub}_{\ca L^\vee})]^\mathrm{vir} \cdot c_1(\eta)^u\right)
\= p_*\left([\bb P(\cat{Rub}_{\ca L})]^\mathrm{vir} \cdot (-c_1(\eta(\beta^\textrm{min})))^{u}\right).
\end{equation}
\end{lemma}
\begin{proof}
There is a natural isomorphism (compatible with the virtual fundamental classes) over $\overline{\ca M}_{g,n}$ from $\bb P(\cat{Rub}_{\ca L})$ to $\bb P(\cat{Rub}_{\ca L^\vee})$, given by
\begin{equation}
(X/B, \beta, \ca F, \phi) \mapsto (X/B, \beta^\textrm{min} - \beta, (\ca F(\beta^\textrm{min}))^\vee, \phi')\,,
\end{equation}
where $\phi'$ is the composite
\begin{equation}
\pi^*(\ca F(\beta^\textrm{min}))^\vee(\beta^\textrm{min} - \beta)= \pi^* \ca F^\vee (-\beta) \stackrel{(\phi^\vee)^{-1}}{\longrightarrow} \ca L^\vee\,.
\end{equation}
\end{proof}
\section{The underlying algebraic stack of Rub}\label{sec:minimal_log_str}
The category $\cat{Rub}$ is naturally fibred over $\cat{LogSch}$. Our goal in this section is to understand its underlying algebraic stack (a fibred category over $\cat{Sch}$). We use the notion of minimal log structures from \cite{Gillam} and \cite[Appendix~B]{Wise2016Moduli-of-morph}. We describe here the minimal log structures on points of $\cat{Rub}$, a variation on the description of the minimal log structures on $\cat{Div}$ given in the proof of
\cite[Theorem 4.2.4]{MarcusWiseLog}.
Throughout this section we work with $\cat{Rub}_{\ul 0}$ in place of $\cat{Rub}$, as it is notationally slightly simpler, and fits better with what we do in the rest of the paper. The interested reader will check that the results go through for $\cat{Rub}$ essentially unchanged.
\subsection{Brief recap on minimal log structures}
\label{sec:minimal_ls_for_rub}
This is taken from \cite[Appendix~B]{Wise2016Moduli-of-morph}, based
on \cite{Gillam}. The purpose of minimal log structures is to understand how to pass from a category fibred in groupoids (CFG) over $\cat{LogSch}$ to a CFG over $\cat{Sch}$. Now $\cat{LogSch}$ is a CFG over $\cat{Sch}$ via forgetting the log structure, so one could just take the composite. However, this is the `wrong' way to extract the underlying CFG over $\cat{Sch}$.
For an elementary example, let $X \coloneqq (pt, \bb N^2)$ be a point with log structure $\bb N^2$. Then there are very many maps from $Y \coloneqq (pt, \bb N)$ to $X$: one can choose both the underlying monoid map $\bb N^2 \to \bb N$, and the lift to the log structure giving a $\bb C^\times$ parameter. Hence if we take the CFG over $\cat{LogSch}$ associated to $X$ and view it as a CFG over $\cat{Sch}$ via the forgetful functor, we will get a very large and complicated object\footnote{For example the fiber over $pt \in \cat{Sch}$ is the category of pairs of a log structure $M$ on $pt$ and an associated log morphism $(pt, M) \to X$. }, when what we really wanted was a point!
\par
However, given a map $T \to pt$ of schemes, there is a unique log structure $M$ on $T$ and morphism $(T,M) \to X = (pt, \mathbb{N}^2)$ such that any other log morphism $(T, M') \to X$ factors through $(T,M) \to X$. Namely, $M$ is simply the pullback log structure under $T \to pt$ of the log structure $\mathbb{N}^2$ on $pt$. Such a log structure is called \emph{minimal}, and if we take the full subcategory of log schemes over $X$ given by minimal objects, then view it as a CFG over $\cat{Sch}$ via the forgetful functor, we recover exactly what we wanted, namely a point.
\par
In the next two subsections we will apply the same machinery to the CFG $\cat{Rub}_{\ul 0}$ over $\cat{LogSch}$. An object $(X/B, \beta)$ of $\cat{Rub}_{\ul 0}$ is called \emph{minimal} if every solid diagram in $\cat{Rub}_{\ul 0}$
\begin{equation}
\begin{tikzcd}
(X'/B', \beta') \arrow[rr]\arrow[dr] && (X/B, \beta) \\
& (X''/B'', \beta'') \arrow[ur, dashed]
\end{tikzcd}
\end{equation}
with the induced maps $\ul B' \to \ul B$ and $\ul B' \to \ul B''$ on underlying schemes being isomorphisms, admits a unique dashed arrow.
\par
Gillam proves that the full subcategory of $\cat{Rub}_{\ul 0}$ consisting of minimal objects, together with its natural forgetful functor to $\cat{Sch}$, is (equivalent to) the underlying algebraic stack of $\cat{Rub}_{\ul 0}$. Thus, objects are those log points of $\cat{Rub}_{\ul 0}$ for which the log structure is minimal, and morphisms are simply the usual morphisms of log objects
\footnote{A warning: suppose that one starts with a CFG over $\cat{LogSch}$ which is equivalent to a category fibred in setoids, and which has enough minimal objects. It is then representable by an algebraic stack with log structure, but this \emph{need not} be equivalent to a category fibred in setoids over schemes (in other words, it can still have non-trivial stacky structure). The most elementary example of this is perhaps the subdivision of $\bb G_m^\mathsf{trop}$ at $1$, which is certainly a category fibred in setoids over $\cat{LogSch}$, but whose underlying algebraic stack is $[\bb P^1/\bb G_m]$. This is because a given schematic point can admit two (or more) different minimal logarithmic structures, which can have several isomorphisms between them even if we have a CFS over $\cat{LogSch}$; the fiber over \emph{any given} log scheme can still have no non-trivial automorphisms.}.
\par
As such, if we want to understand the relative inertia of $\cat{Rub}_{\ul 0}$ over $\mathfrak M$, we need to understand not only the minimal objects and their morphisms, but also all possible ways of equipping a schematic object of $\cat{Rub}_{\ul 0}$ with minimal log structure.
\subsection{Minimal log structures for $\cat{Rub}_{\ul 0}$}
Let $(X/B, \beta)$ be a point of $\cat{Rub}_{\ul 0}$ with $X/B$ nuclear, where $\mathsf{M}_B$ is the sheaf of monoids on~$B$.
Recall that from this family, we obtain
\begin{itemize}
\item the stable graph $\Gamma$ describing the shape of $X_b$,
\item the length maps $\delta\colon E(\Gamma) \to \overline{\mathsf{M}}_{B,b}$, which we extend to a monoid homomorphism
\bes
\delta\colon \bb N\Span{E(\Gamma)} \to \overline{\mathsf{M}}_{B,b}\,,
\ees
\item the value map $\beta: V(\Gamma) \to \overline{\mathsf{M}}_{B,b}^\mathsf{gp}$ at vertices, whose image is totally ordered, inducing the level map
\[
\ell : V(\Gamma) \to \{0,-1, \ldots, -N\} \= \{0\} \sqcup L(\Gamma),
\]
\item the slopes $\kappa : H(\Gamma) \to \mathbb{Z}$ at half-edges, where given an edge $e \in E(\Gamma)$ consisting of half-edges $h,h'$ we set $\kappa_e = |\kappa(h)| = |\kappa(h')|$ and let $E^v = \{e \in E(\Gamma) : \kappa_e > 0\}$ be the set of vertical edges and $E^h = \{e \in E(\Gamma) : \kappa_e = 0\}$ be the set of horizontal edges.
\end{itemize}
For $i\in L(\Gamma)$, we define, as in \ref{eq:aidef},
\bes
a_i \,\coloneqq\, \on{lcm}_e \kappa_e\,,
\ees
where the $\on{lcm}$ runs over the set of all edges $e$ such that $\ell(e^-) \le i<\ell(e^+)$ (we say such an edge~$e$ \emph{crosses level $i$}).
We let $\tilde P \coloneqq \bb N\Span{p_{-1},\dots, p_{-N}}$ be the free monoid on $N=|L(\Gamma)|$ generators. Then we can define a map
$g\colon E^v \to \tilde P$ by
\begin{equation}\label{eq:g_delta_e}
g(e) \coloneqq \sum_{i = \ell(e^-)}^{\ell(e^+) - 1}\frac{a_i}{\kappa_e}p_i\,,
\end{equation}
and extend this map additively to a map
$g\colon \bb N\Span{E^v} \to \tilde P$.
Finally, we let
$$
\sigma_i \coloneqq \beta(v_{i+1}) - \beta(v_{i}) \in \overline{\mathsf{M}}_{B,b}\,,
$$
where $v_i$ is any vertex of level $i$.
\begin{lemma}
$\sigma_i$ is divisible by $a_i$ in $\overline{\mathsf{M}}_{B,b}$.
\end{lemma}
\begin{proof}
Showing that $\sigma_i$ is divisible by $a_i$ is exactly equivalent to showing that it is divisible by $\kappa_e$ for every edge $e$ crossing level $i$ (since we work with saturated monoids, if an element is divisible by two integers then it is also divisible by their least common multiple). But this is exactly condition~(3) in \ref{prop:rub_translation}.
\end{proof}
Set $\tau_i\coloneqq \sigma_i /a_i\in \overline{\mathsf{M}}_{B,b}$ (noting that division in $\overline{\mathsf{M}}_{B,b}$ is unique since it is sharp, integral and saturated), and define a monoid homomorphism
\begin{equation}\label{eq:hom_psi}
\psi\colon \tilde P \to \overline{\mathsf{M}}_{B,b};\qquad \psi\colon p_i \mapsto \tau_i.
\end{equation}
\begin{lemma} \label{lem:gdeltapsi}
The triangle
\begin{equation}
\begin{tikzcd}
\bb N\Span{E^v} \arrow[r, "g"] \arrow[dr, "\delta"] & \tilde P \arrow[d, "\psi"]\\
& \overline{\mathsf{M}}_{B,b}
\end{tikzcd}
\end{equation}
commutes.
\end{lemma}
\begin{proof}
We compute:
\begin{equation}
\begin{split}
\psi(g(\delta_e)) & = \psi\left(\sum_i \frac{a_i}{\kappa_e} p_i\right) = \sum_i \frac{a_i}{\kappa_e} \tau_i = \frac{1}{\kappa_e}\sum_i \sigma_i = \frac{1}{\kappa_e} (\beta(v_+) - \beta(v_-)) = \delta_e
\end{split}
\end{equation}
where the last equality comes from the fact that $\beta$ is a PL function.
\end{proof}
\begin{df}
\label{def:basicness}We say $(X/B, \beta)$ is \emph{basic} if the natural map
$$\psi \oplus \delta|_{E^h} : \tilde P \oplus \bb N \Span{E^h} \to \overline{\mathsf{M}}_{B,b}$$
is an isomorphism. In general we say a point of $\cat{Rub}_{\ul 0}$ is \emph{basic} if it is so on a nuclear cover.
\end{df}
Our motivation for introducing this definition lies in \ref{lem:basic_is_minimal}. The intuition behind the definition is that $\overline{\mathsf{M}}_{B,b}$ is precisely big enough to contain the elements that are necessary to accommodate the images of the maps $\delta$, the differences of images of $\beta$, and roots of these differences whose existence is required by condition (3) in \ref{prop:rub_translation}.
\begin{lemma}
Every point of $\cat{Rub}_{\ul 0}$ comes with a map to a basic object.
\end{lemma}
\begin{proof}
For $(X/B, \beta)$ a nuclear point of $\cat{Rub}_{\ul 0}$, we define a sheaf of monoids $P$ on $B$ as the fiber product
\begin{equation}
P \coloneqq \left(\tilde P\oplus \bb N \Span{E^h}\right) \times_{\overline{\mathsf{M}}_B} \mathsf{M}_B.
\end{equation}
This $P$ comes with a map $P \to {\mathcal O}_B$, namely the composition of the
projection to the second factor $\mathsf{M}_B$ and the old log structure map
$\mathsf{M}_B \to {\mathcal O}_B$, making it into a log structure.
\par
This uses that for \emph{any} nuclear point $(X/B, \beta)$ the map $\psi \oplus \delta|_{E^h}$ from the definition above satisfies that the preimage of $0 \in \overline{\mathsf{M}}_B$ is $0 \in \tilde P\oplus \bb N \Span{E^h}$. From this it also follows that the ghost sheaf $\overline{P}$ of $P$ equals
\[
\overline{P} \= \left(\tilde P\oplus \bb N \Span{E^h}\right)
\otimes_{\overline{\mathsf{M}}_B} {\mathcal O}_B^\times \= \tilde P\oplus \bb N \Span{E^h}.
\]
Now we make $(\ul B, P)$ into a point of $\cat{Rub}_{\ul 0}$: we take the underlying family $\ul X / \ul B$ of curves, and equip $\ul X$ with a log structure making it a log curve over $(\ul B, P)$ with length map
\[
\widetilde \delta : E(\Gamma) \to \tilde P\oplus \bb N \Span{E^h}, \quad e \mapsto
\begin{cases}
\left(\sum_{i=\ell(e^-)}^{\ell(e^+)-1} \frac{a_i}{\kappa_e} p_i, 0 \right) & \text{ for }e \in E^v,\\
\left(0, e \right) & \text{ for }e \in E^h.
\end{cases}
\]
With this we obtain a family of log curves $(\widetilde X/(\ul B, P))$. Using \ref{prop:rub_translation} we then lift to a $(\ul B, P)$-point of $\cat{Rub}_{\ul 0}$ by specifying the combinatorial PL function
\begin{equation}
\beta\colon V(\Gamma) \to \left(\tilde P\oplus \bb N \Span{E^h}\right)^\mathsf{gp},
\quad v \mapsto -\sum_{j = \ell(v)}^{-1}a_j p_j\,.
\end{equation}
The construction gives a map from $(X/B, \beta)$ to this basic object $(\ul B,P) \to \cat{Rub}_{\ul 0}$.
\end{proof}
\par
\begin{lemma}
\label{lem:basic_is_minimal} The $\cat{Rub}_{\ul 0}$-point $(X/B, \beta)$ is basic
if and only if it is minimal.
\end{lemma}
\begin{proof}
We proceed just as in the proof of \cite[Theorem 4.2.4]{MarcusWiseLog}, using that the image of $\bb N\Span{E}$ has finite index in $\tilde P \oplus \bb N\Span{E^h}$, and that division is unique in sharp integral saturated monoids.
\end{proof}
\par
\begin{df}
Let $\cat{Rub}_{\ul 0}' $ be the full subcategory of $\cat{Rub}_{\ul 0}$ whose objects have minimal log structure, viewed as a fibred category over $\cat{Sch}$ via forgetting the log structure and the curve.
\end{df}
As explained in \ref{sec:minimal_ls_for_rub}, Gillam's minimality machinery immediately yields the main theorem of this section, slightly refining the results of \cite{MarcusWiseLog}:
\begin{theorem}\label{thm:underlying_stack_of_rub}
The underlying algebraic stack of $\cat{Rub}_{\ul 0}$ is given by $\cat{Rub}_{\ul 0}'$.
\end{theorem}
\subsection{Smoothness of $\cat{Rub}_{\ul 0}$}\label{sec:smoothness_of_Rub}
With the preparations above, we can now prove \ref{thm:rub_smooth}, stating that the algebraic stack $\cat{Rub}_{\ul 0}$ is smooth.
\begin{proof}[Proof of \ref{thm:rub_smooth}]
Note first that $\cat{Rub}_{\ul 0} \to \mathfrak M$ is log \'etale; this is proven in \cite[Lemma 4.2.5 and Corollary 5.3.5]{MarcusWiseLog} for their version of $\cat{Rub}_{\ul 0}$ (without condition (2)), and our version of $\cat{Rub}_{\ul 0}$ is obtained from theirs by taking a root stack, which is again a log \'etale morphism. Since $\mathfrak M$ is log smooth, this implies that $\cat{Rub}_{\ul 0}$ is itself log smooth.
\par
Now \ref{def:basicness}, \ref{lem:basic_is_minimal}, and \ref{thm:underlying_stack_of_rub} together imply that the stalks of the characteristic monoid of $\cat{Rub}_{\ul 0}$ are free monoids of finite rank. Fix a geometric point $p \in \cat{Rub}_{\ul 0}$, and suppose the characteristic monoid has stalk $\bb N^r$ at $p$. Then by log smoothness of $\cat{Rub}_{\ul 0}$ there exist a scheme $U$ and smooth strict morphisms $f\colon U \to \cat{Rub}_{\ul 0}$ and $g\colon U \to \bb A^r$ such that $p$ lies in the image of $f$. In particular $\cat{Rub}_{\ul 0}$ is smooth in a neighborhood of~$p$.
\end{proof}
\par
Note that the base-change $\cat{Rub}_\mathcal L$ is \emph{not} in general smooth, except in genus 0 (when the map $\mathfrak M \to \on{Pic}$ is an open immersion, hence smooth). In particular, the smoothness of the main component of $\cat{Rub}_{\mathcal L_\mu}$ (proven in \cite{LMS} granting the verification that the spaces named $\LMS$ there and
in \ref{prop:smoothDM} indeed agree) does not follow directly from \ref{thm:rub_smooth} outside of genus 0.
\subsection{Relative automorphisms}\label{sec:log_automorphism_example}
As a log stack, $\cat{Rub}_{\ul 0}$ has trivial automorphisms relative to the stack of log
curves. But (as discussed in footnote 2) this does not mean that the underlying
algebraic stack of minimal objects has trivial automorphisms. Rather, they come
from automorphisms of the log structure; the following remark makes this precise.
\begin{remark}
In general, given a map $\mathcal X \to {\mathcal Y}$ of log stacks with underlying stacks $\ul{\mathcal X}, \ul{{\mathcal Y}}$ and a point $\ul x : \mathrm{Spec}(\bb C) \to \ul{\mathcal X}$, we can ask: what is the relative inertia of $\ul x$ over $\ul y = (\ul{\mathcal X} \to \ul{{\mathcal Y}}) \circ \ul x$? For this, let $(\mathrm{Spec}(\bb C), \mathsf{M}_x) \to \mathcal X$ and $(\mathrm{Spec}(\bb C), \mathsf{M}_y) \to {\mathcal Y}$ be the minimal log structures lifting $\ul x, \ul y$. Then by minimality of $\mathsf{M}_y$ the composition $(\mathrm{Spec}(\bb C), \mathsf{M}_x) \to \mathcal X \to {\mathcal Y}$ must factor through a map
\[
f \colon (\mathrm{Spec}(\bb C), \mathsf{M}_x) \to (\mathrm{Spec}(\bb C), \mathsf{M}_y)\,.
\]
Such a map is uniquely described by a monoid map $\mathsf{M}_y \to \mathsf{M}_x$ over $\bb C^\times = {\mathcal O}_{\mathrm{Spec}(\bb C)}^\times$. Then the desired group of automorphisms is just the group of those automorphisms of $\mathsf{M}_x$ that are constant on the image of $\mathsf{M}_y$, and commute with the map to $\bb C^\times$.
\end{remark}
Returning to our situation, the `tropical'
part of the log structure (the ghost sheaf $\overline{\mathsf{M}}$) has no non-trivial
automorphisms. Thus the automorphisms all arise from automorphisms of the log
structure $\mathsf{M}$ that are trivial on $\overline{\mathsf{M}}$ and trivial on the structure sheaf.
So they are really automorphisms of the extension structure of $\mathsf{M}$.
\par
\subsection{The worked example again}
Let $(X/\bb C, \beta\in \overline{\mathsf{M}}^\mathsf{gp}_X)$ be a point of $\cat{Rub}_{\ul 0}$ with the
underlying enhanced level graph given by \ref{fig:X_L_pic}, still
restricting to the case $\kappa_1 = \kappa_2 = 1$ and $\kappa_3 = n$. We would
like to understand the relative inertia of this point of $\cat{Rub}_{\ul 0}$
over~$\mathfrak M$.
\par
The minimal monoid on $\bb C$ for the curve $X/\bb C$
is just $\bb N\Span{E} = \bb N\Span{e_1, e_2, e_3}$, and the
minimal monoid as a point in $\cat{Rub}_{\ul 0}$ is given by $\tilde P = \bb N\Span{p_{-1}, p_{-2}}$, with one generator $p_i$ for each \emph{level} $i$ (there are no horizontal edges in this example, otherwise they should also appear in this minimal monoid). The natural map is then given by
\bes
g\colon \bb N\Span{E} \to \tilde P\,; \qquad e_1 \mapsto np_{-1}\,, \;\;\; e_2 \mapsto np_{-2}\,, \;\;\; e_3 \mapsto p_{-1} + p_{-2}\,.
\ees
To see this, note that $a_{-1} = a_{-2} = n$, and then apply formula \ref{eq:g_delta_e}. Note that there are no non-trivial automorphisms of $\tilde P$ that act as the identity on the image of $g$. The map $g$ extends in the obvious manner to a map on the stalks of the log structures
\bes
\bb N\Span{E} \oplus \bb C^\times \to P = \tilde P \oplus \bb C^\times,
\ees
and the relative inertia is then given by the automorphisms of $\tilde P\oplus \bb C^\times$ which act as the identity on the image of $\bb N\Span{E} \oplus \bb C^\times$, and which lie over the identity map on $\tilde P$ (since any automorphism of $\tilde P$ constant on the image of $g$ must be the identity). Such an automorphism sends
\begin{equation*}
((1,0),1) \mapsto ((1,0),u) \quad \text{ and } \quad ((0,1),1) \mapsto ((0,1),v)
\end{equation*}
for some $u$, $v \in \bb C^\times$ satisfying
\begin{enumerate}
\item
$u^n = 1$, because $n((1,0),1) = ((n,0), 1^n)$ lies in the image of
$\bb N\Span{E} \oplus \bb C^\times$ and is thus fixed;
\item $v^n = 1$ for the analogous reason;
\item $uv = 1$ because $((1,1),1)$ lies in the image of $\bb N\Span{E} \oplus \bb C^\times$ and is thus fixed.
\end{enumerate}
Such a choice of $u$, $v$ evidently determines such an automorphism.
(Or more precisely, there are two canonical isomorphisms with the roots
of unity, one coming from `above' and the other from `below' on the graph,
and the composite of these isomorphisms is the inversion map on the group
of roots of unity).
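Spelling this out, conditions (1)--(3) force $v = u^{-1}$ with $u^n = 1$ (and then $v^n = 1$ holds automatically), so the group of such automorphisms is
\bes
\bigl\{(u,v)\in({\mathbb{C}}^\times)^2 \,:\, u^n = v^n = uv = 1\bigr\} \= \bigl\{(\zeta,\zeta^{-1}) \,:\, \zeta^n = 1\bigr\}\,,
\ees
cyclic of order~$n$.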
\par
We conclude that the \emph{relative inertia for this triangle graph is
equal to the group~$K_\Gamma$ computed in~\ref{eq:KGammaexample}.}
\section{Generalized multi-scale differentials\xspace} \label{sec:famGMS}
We recall basic notions from \cite{LMS}, in order to define the groupoids
$\GSMS$ of \emph{simple generalized multi-scale differentials} and
$\GMS$ of \emph{generalized multi-scale differentials},
where $\mu = (m_1, \ldots, m_n)$ is a tuple of integers with sum~$2g-2$. The
adjective `generalized' refers to the fact that we do not impose the global
residue condition.
\subsection{Enhanced level graphs}
\label{sec:tori}
The boundary strata of the stack of generalized multi-scale differentials are
indexed by \emph{enhanced level graphs}. Such an enhanced level graph, typically
denoted by~$\Gamma$, is the dual graph of a stable curve,
with legs corresponding to the marked points, with a level structure
(i.e.\ a weak full order) on the set of vertices $V(\Gamma)$, and with enhancements
$\kappa_e$, which are non-negative integers attached to the edges. The edges~$E(\Gamma)$
are grouped into the set of horizontal edges~$E^h(\Gamma)$ joining vertices
at the same level, and the set of vertical edges~$E^v(\Gamma)$.
The enhancements are required to be zero precisely for horizontal edges.
We thus may consider an enhancement as
a function
\bes \kappa\colon H(\Gamma) \to \bb Z
\ees
on the set of half edges of~$\Gamma$, assigning~$\kappa_e>0$ to the upper half
and $-\kappa_e<0$ to the lower half of a vertical edge, assigning zero to both
halves of a horizontal edge, and letting $\kappa$ agree with $m_i$ at the legs
of the graph. We normalize the set of levels so that the top level is zero,
and let $L(\Gamma)$ be the set of levels below zero, usually given by consecutive
negative integers $L(\Gamma)=\{-1,\dots,-N\}$, where $N\coloneqq |L(\Gamma)|$,
so that we typically use the \emph{normalized level function}
\begin{equation}\label{eq:normlev}
\ell\colon V(\aG) \twoheadrightarrow \{0,-1,\dots,-N\}\,.
\end{equation}
Occasionally we use $L^\bullet(\aG)$ for the set of all levels including the zero level. In the sequel we will only consider enhancements that are
{\em admissible} in the sense that the degree equality
\begin{equation}} \def\ee{\end{equation}} \def\bes{\begin{equation*}} \def\ees{\end{equation*} \label{eq:admenhancement}
\deg(v) \,\coloneqq\, \sum_{j\mapsto v} m_j + \sum_{e \in E^+(v)} (\kappa_e-1)
- \sum_{e \in E^-(v)} (1+\kappa_e) - h(v) \= 2g(v)-2
\ee
holds, where the first summand is over all legs attached to $v$,
where $E^+(v)$ (resp. $E^-(v)$) is the set of vertical edges whose upper
(resp.\ lower) end is the vertex~$v$, and $h(v)$ is the number of
horizontal half-edges adjacent to $v$.
\par
Enhanced level
graphs come with two kinds of undegeneration maps. First, there are
vertical undegeneration maps $\delta_{i_1,\dots,i_n}$
for any subset $I = \{i_1,\dots,i_n\} \subseteq \{-1,\dots,-N\}$
which contract all vertical edges except
those that go from level at or above $i_{k}+1$ to a level at or below
$i_{k}$, for some $i_k \in I$. Especially important among those are the two-level undegenerations~$\delta_i$, which contract all vertical edges except those that cross a level passage above~$i$,
i.e.~go from a vertex at level $i+1$ or above, to a vertex at level $i$ or below.
Second, there are horizontal
undegeneration maps $\delta^h_S$ that contract all the horizontal
edges except those in $S \subset E^h(\Gamma)$. An \emph{undegeneration}
of a level graph is a composition of a vertical and a horizontal undegeneration.
Undegenerations determine the adjacency of boundary strata of the space of
multi-scale differentials.
\subsection{Prong-matchings}
\label{sec:PM}
Let $(X,\omega)$ be a smooth complex curve with a meromorphic 1-form. We fix a
direction, i.e., an element in $S^1 \subset {\mathbb{C}}$ throughout, say the
positive horizontal direction. If a differential~$\omega$ has a zero of
order $m \geq 0$ at~$q \in X$, then there are $m+1$ choices of
local coordinate~$z$ on~$X$ centered at~$q$ such that locally in this
coordinate $\omega=z^mdz$; similarly for a pole of order $m\le -2$ at~$q\in X$, one can find local coordinates such that $\omega=(z^m+r/z)dz$. and the tangent vectors $\partial/\partial z$ of these
coordinates differ by multiplying a root of unity of order $-m-1 = |m+1|$; see \cite[Theorem 4.1]{LMS}.
The horizontal directions in one of these coordinates are called {\em prongs},
which can be positive or negative (also called outgoing and incoming),
depending on which direction the ray goes.
We think of the outgoing prongs as a collection of $\kappa=m+1$ points
$P_q^{\rm out}$ in the tangent space at a zero of order~$m$, and of the incoming prongs as a collection $P_q^{\rm in}$ of $\kappa=-m-1$ points in the tangent space at a pole of order~$m$.\footnote{In differential geometry it is more common to use the real prongs, lying
in the real projectivized tangent space $P_pX = T_pX/ {\mathbb{R}}_{>0} \cong S^1$. These
are in obvious bijection to the (complex) prongs we use here.}
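Concretely, at a zero of order~$m$ the admissible local coordinates differ precisely by $z \mapsto \zeta z$ with $\zeta^{m+1} = 1$, so that
\bes
P_q^{\rm out} \= \bigl\{ \zeta\, \partial/\partial z \,:\, \zeta^{m+1} = 1 \bigr\} \,\subset\, T_q X\,.
\ees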
\par
Let now $X$ be a stable curve with a node~$q$ corresponding to a vertical edge $e\in E^v(\Gamma)$, where two components of $X$
meet, and suppose these components $X_1$ and $X_2$ come with differential
forms $\omega_1$ and $\omega_2$ having a zero and a pole, respectively, at the respective preimages $q^+\in X_1$ and $q^-\in X_2$ of~$q$.
A {\em (local) prong-matching} at the node~$q$ is a cyclic order-reversing
bijection $\sigma_e\colon P^{\rm in}_{q^-} \to P^{\rm out}_{q^+}$ between the incoming
prongs at~$q^-$ and the outgoing prongs at~$q^+$.
\par
Let now $(X,{\boldsymbol{z}},\Gamma,{\boldsymbol{\omega}})$ be a pointed stable curve with an
enhanced level graph~$\Gamma$ and let ${\boldsymbol{\omega}} = (\omega_{(i)})_{i \in L^\bullet(\Gamma)}$
be a \emph{twisted differential of type~$\mu$} compatible
with~$\Gamma$, possibly except for the global residue condition.
Following \cite{BCGGM1}, this means a collection of meromorphic differentials~$\omega_v$
for each vertex~$v$, vanishing to order~$m_i$ at each of the marked points~$z_i$,
vanishing to order~$\kappa(h)-1$ at the preimages of nodes associated to the half-edges~$h\in H'(\Gamma)$
and such that the residues at the two sides of a horizontal node add up to zero.
Grouping objects level-wise, we denote by $\omega_{(i)}$ the tuple of
differentials~$\omega_v$ for all vertices~$v$ on level~$i$.
\par
Given a twisted differential, we have the data to define local prong-matchings
for each vertical edge. Packaging such a choice for each vertical edge
$e \in E^v(\Gamma)$, we call the collection ${\boldsymbol{\sigma}} = (\sigma_e)_{e \in E^v(\Gamma)}$
a {\em global prong-matching}.
\par
\medskip
There is an alternative viewpoint on prong-matchings, which can be generalized
to germs of families $\mathcal X \to B$, where a node~$q$ corresponding
to an edge~$e$ in the dual graph of the special fiber persists over the base.
In the normalization of the family there are two components $X^\pm$ (as the edge is vertical, necessarily $X^+\ne X^-$)
that admit sections~$q^\pm$ that specify the two preimages of the node~$q$.
We let
\begin{equation}} \def\ee{\end{equation}} \def\bes{\begin{equation*}} \def\ees{\end{equation*} \label{eq:defNe}
{\mathcal N}_e^\vee \,\coloneqq\, (q^+)^*\omega_{X^+}\otimes (q^-)^*\omega_{X^-}\,.
\ee
A \emph{local prong-matching} is then a section $\sigma_e$ of ${\mathcal N}^\vee_e$ such
that for any pair $(v^+,v^-)$ of an incoming and an outgoing horizontal prong the
equation $\sigma_e(v^+\otimes v^-)^{\kappa_e} = 1$ holds. To see the
equivalence, given $\sigma_e$, we assign to $v^-$ the prong $v^+$ given by the
condition $\sigma_e(v^+\otimes v^-) = 1$. A \emph{global prong-matching}
is a collection of local prong-matchings for each persistent node
(as defined formally in \ref{sec:germfam})
in the family.
\par
We give another reformulation that eliminates the dependence on the choice
of a preferred (`horizontal') direction. Let $U^\pm$ be neighborhoods of the points $q^\pm$ in the
normalization of~$\mathcal X$. Suppose the edge~$e$ joins level~$i$ to the lower level~$j$.
Then $\omega_{(i)}$ extends uniquely to a section of $\omega_{U^+/B}(-(\kappa_e-1) q^+)$
and $\omega_{(j)}$ to a section of $\omega_{U^-/B}((\kappa_e+1) q^-)$.
Restricting to $q^+$ and $q^-$, respectively, yields canonical elements
\bes
\tau^+ \in \omega_{U^+/B}(-(\kappa_e-1) q^+)|_{q^+} = T_{q^+}^{\otimes -\kappa_e} \quad
\text{and} \quad
\tau^- \in \omega_{U^-/B}((\kappa_e+1) q^-)|_{q^-} = T_{q^-}^{\otimes \kappa_e}
\ees
(where we use the residue isomorphism for the equalities).
We define
\bes
\tau_e \coloneqq (\tau^+)^{-1} \otimes (\tau^-) \in (T_{q^+} \otimes T_{q^-})^{\otimes \kappa_e}
\= {\mathcal N}_e^{\otimes \kappa_e}.
\ees
\par
\begin{lemma}\label{lem:prong_matching_comparison}
In the notation of the previous definition, let $v^+$ and $v^-$ be some horizontal
prongs at $e$. Then $(v^+ \otimes v^-)^{\otimes \kappa_e} \in {\mathcal N}_e^{\otimes \kappa_e}$
is independent of the choice of prongs and of the direction to be called horizontal, and we have
\begin{equation}
\tau_e \= (v^+ \otimes v^-)^{\otimes \kappa_e}.
\end{equation}
\end{lemma}
\begin{proof}
For a fixed direction, the different choices of prongs~$v^+$ differ by
$\kappa_e$-th roots of unity, and likewise for~$v^-$. Thus the formula
for~$\tau_e$ implies that it does not depend on these prong choices. On the
other hand, changing the direction from horizontal to direction~$\theta$
multiplies~$v^+$ by $e^{2\pi i\theta}$ and $v^-$ by $e^{-2\pi i\theta}$, and thus
preserves $v^+\otimes v^-$. The equality is obvious, by writing it out in any
local coordinate that puts the differentials in normal form.
\end{proof}
\par
This implies that the earlier definitions of prong-matching agree with
the following:
\par
\begin{df} \label{df:PMfinal}
A \emph{local prong-matching} is a section $\sigma_e$ of ${\mathcal N}^\vee_e$
such that $\sigma_e^{\kappa_e}(\tau_e) = 1$.
\end{df}
\par
\subsection{Level rotation tori}
\label{sec:LRT}
To an enhanced level graph we associate some groups and algebraic tori.
The {\em level rotation group}~$R_\Gamma \cong {\mathbb{Z}}^{L(\Gamma)}$ acts on the
set of all global prong-matchings, where the $i$-th factor twists by one (i.e.\
multiplies $\sigma_e$ by $e^{2 \pi i/\kappa_e}$)
all prong-matchings associated to edges that cross the {\em $i$-th level
passage}, a horizontal line above level~$i$ and below level
$i+1$. \footnote{In this paper we index levels and all quantities indexed by them,
such as $t_i$, $s_i$, $\delta_i$ below, by negative integers, as in \cite{LMS},
but contrary to several subsequent papers that use this compactification.}
The {\em (vertical) twist group} is the subgroup $\Tw[\Gamma] \subset R_\Gamma$
fixing the prong-matchings under the above action. The level rotation
group also acts (via its $i$-th component) on the set of prong-matchings of the
two-level undegenerations $\delta_i(\Gamma)$.
We define the {\em simple twist
group} $\sTw[\Gamma] \subset \Tw[\Gamma]
\subset R_\Gamma$ to be the subgroup that fixes each of the prong-matchings of
each $\delta_i(\Gamma)$.
\par
Let ${\mathbb{C}}^{L(\Gamma)} \to ({\mathbb{C}}^*)^{L(\Gamma)}$ be the universal covering
of the algebraic torus $({\mathbb{C}}^*)^{L(\Gamma)}$; we identify the level rotation group
$R_\Gamma \subset {\mathbb{C}}^{L(\Gamma)}$ as the kernel of this covering. As a subgroup of the
level rotation group, the (simple) twist group acts on ${\mathbb{C}}^{L(\Gamma)}$, and we define
the {\em level rotation torus} $T_\Gamma \coloneqq {\mathbb{C}}^{L(\Gamma)}/\Tw[\Gamma] $,
together with its simple counterpart, the {\em simple level rotation
torus} $T^s_\Gamma \coloneqq {\mathbb{C}}^{L(\Gamma)}/\sTw[\Gamma]$.
\par
Next we define the data that provide the model for the toroidal embedding of the
boundary inside the space of multi-scale differentials.
Since $\sTw[\Gamma] = \oplus_i \Tw[\delta_i(\Gamma)]$ has by definition
a direct sum decomposition level by level, the
simple level rotation torus comes with a natural level-wise identification
$T^s_\Gamma \cong ({\mathbb{C}}^*)^{L(\Gamma)}$. The
embedding ${\mathbb{C}}^* \hookrightarrow {\mathbb{C}}$ with respect to these coordinates
defines an embedding $T^s_\Gamma \hookrightarrow \overline{T}^s_\Gamma
\coloneqq{\mathbb{C}}^{L(\Gamma)}$. We let
\begin{equation}} \def\ee{\end{equation}} \def\bes{\begin{equation*}} \def\ees{\end{equation*} \label{eq:aidef}
a_i \coloneqq a_{\delta_i(\Gamma)} \coloneqq \operatorname*{lcm}_{e \in \delta_i(\Gamma)} \kappa_e
\ee
be the least common multiple of the enhancements of the edges of~$\Gamma$ that persist in the two-level undegeneration $\delta_i(\Gamma)$.
Then $\sTw[\Gamma] \cong \oplus_i a_i {\mathbb{Z}} \subset R_\Gamma$.
Consequently, $T^s_\Gamma$ is a cover of the
original torus~$({\mathbb{C}}^*)^{L(\Gamma)}$, of degree $\prod_i a_i$. Finally,
we define the \emph{quotient twist group} to be
\begin{equation}} \def\ee{\end{equation}} \def\bes{\begin{equation*}} \def\ees{\end{equation*}
K_\Gamma \,\coloneqq\, \Tw[\Gamma] /\sTw[\Gamma]\,.
\ee
This group acts on $T_\Gamma^s$ with quotient $T_\Gamma$. In coordinates the
quotient map is given by
\be\begin{aligned}} \def\ea{\end{aligned}\ee} \def\bas{\bes\begin{aligned}} \def\eas{\end{aligned}\ees \label{eq:simpletotorus}
({\mathbb{C}}^*)^{L(\Gamma)} & \,\to\, ({\mathbb{C}}^*)^{L(\Gamma)} \times ({\mathbb{C}}^*)^{E^v(\Gamma)} \\
(q_i) &\,\mapsto\,
(r_i, \rho_e) \= \Bigl(q_i^{a_i}, \prod_{i=\lbot}^{\ltop-1} q_i^{a_i/\kappa_e} \Bigr)\,,
\ea
where we view ${T}_\Gamma \subset ({\mathbb{C}}^*)^{L(\Gamma)} \times ({\mathbb{C}}^*)^{E^v(\Gamma)}$
as cut out by the equations
\begin{equation}} \def\ee{\end{equation}} \def\bes{\begin{equation*}} \def\ees{\end{equation*} \label{eq:rrho}
r_{\lbot} \dots r_{\ltop-1} \= \rho_e^{\kappa_e}
\ee
for each~$e$. The action of $K_\Gamma$ on $T^s_\Gamma$ extends to an action on the
closure $\overline{T}^s_\Gamma$, and we let $\overline{T}^n_\Gamma \coloneqq
\overline{T}^s_\Gamma/K_\Gamma$,
which is the normalization of the closure of ${T}_\Gamma \subset ({\mathbb{C}}^*)^{L(\Gamma)}
\times ({\mathbb{C}}^*)^{E^v(\Gamma)}$.
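For instance, for the triangle graph of \ref{fig:X_L_pic} with $\kappa_1=\kappa_2=1$ and $\kappa_3=n$, as in the worked example above (so $a_{-1}=a_{-2}=n$, with $e_1$ and $e_2$ crossing only the level passages above $-1$ and $-2$ respectively, as recorded by the map~$g$ there), the map \ref{eq:simpletotorus} reads
\bes
(q_{-1},q_{-2}) \,\mapsto\, (r_{-1},r_{-2},\rho_{e_1},\rho_{e_2},\rho_{e_3})
\= \bigl(q_{-1}^n,\, q_{-2}^n,\, q_{-1}^n,\, q_{-2}^n,\, q_{-1}q_{-2}\bigr)\,,
\ees
and the equations \ref{eq:rrho} cutting out $T_\Gamma$ become $r_{-1}=\rho_{e_1}$, $r_{-2}=\rho_{e_2}$ and $r_{-1}r_{-2}=\rho_{e_3}^{\,n}$.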
\par
All these tori come with their {\em extended versions}, denoted with an extra
dot (e.g. $T_\Gamma^\bullet$), that have an extra ${\mathbb{C}}^*$-factor. This factor
will act on differentials of all levels simultaneously by multiplying all differentials by a common factor, and lead to the {\em projectivized}
version of the corresponding quotient functor.
\par
\subsection{Germs of families of generalized multi-scale differentials}
\label{sec:germfam}
We now relate those tori to parameters of families of curves and
differentials. In this subsection we assume throughout that
$B = B_b$ is the spectrum of a strictly Henselian local ring
with closed point~$b$. We start with a family $(\pi\colon \mathcal X \to B, {\boldsymbol{z}})$
of pointed stable curves and let~$\Gamma$ be the dual graph of the
special fiber~$X := X_b$.
\par
For each node~$q_e$ of~$X_b$ there is a function $f_e\in {\mathcal O}_{B}$ called
the {\em smoothing parameter}, such that the family has the local form $u_e v_e = f_e$
in a neighborhood of~$q_e$. In fact, such a function exists in general after
an \'etale base change by \cite[\href{https://stacks.math.columbia.edu/tag/0CBY}{Tag 0CBY}]{stacks-project}, see also
\cite[Proposition~X.2.1]{acgh2} for the version in the analytic category.
Since~$B$ is strictly Henselian, any \'etale cover is a product of trivial
covers and the function~$f_e$ exists over~$B$ itself. The parameter $f_e$ is only
defined up to multiplication by a unit in ${\mathcal O}_{B}$. We will write
$[f_e]\in {\mathcal O}_{B} / {\mathcal O}_{B}^*$ for the equivalence class of the smoothing parameter.
\par
We say that the node $q_e$ is \emph{persistent} in the family~$\mathcal X$ if $f_e=0\in{\mathcal O}_B$. If the dual
graph~$\Gamma_b$ has been provided with an enhanced level graph structure, we say
that $q_e$ is \emph{semi-persistent} if $f_e^{\kappa_e}=0$. The notion of
prong-matchings makes sense for any persistent node.
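For instance, over $B = \mathrm{Spec}\ {\mathbb{C}}[t]/(t^2)$ a node with smoothing parameter $f_e = t$ and enhancement $\kappa_e = 2$ is semi-persistent but not persistent, since $f_e \neq 0$ in ${\mathcal O}_B$ while $f_e^{\kappa_e} = t^2 = 0$.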
\par
\medskip
For our families of multi-scale differentials, we need to include an explicit
choice of smoothing parameters $f_e$ into our data. This can be achieved via
a section of the partial compactification $\Tsnorm[\Gamma]$ of the simple
level-rotation torus. Indeed, given the coordinates $(r_i, \rho_e)$ on the
torus closure from~\ref{eq:rrho}, a morphism $R^s\colon B\to \Tsnorm[\Gamma]$
determines for each vertical edge~$e$ a function $f_e\in {\mathcal O}_{B}$
and for each level~$i$ a function $s_i\in {\mathcal O}_{B}$, defined as
the compositions $f_e= \rho_e\circ \pi \circ R^s$, and
$s_i = r_i\circ \pi \circ R^s$, where
$\pi\colon \Tsnorm[\Gamma] \to \Tnorm[\Gamma]$ is the canonical
morphism. If an edge $e$ joins levels $j<i$, then by \cref{eq:rrho}
these functions satisfy
\begin{equation} \label{eq:f2sj}
f_e^{\kappa_e} \= s_j \dots s_{i-1}\,.
\end{equation}
The following definition makes precise the notion that a morphism~$R^s$ as
above defines a compatible system of node smoothing parameters:
\par
\begin{df} \label{def:RescEns}
A \emph{simple rescaling ensemble} is a morphism $R^s\colon B_b\to
\Tsnorm[\Gamma_b]$ such that the parameters $f_e\in {\mathcal O}_{B, b}$ for each
vertical edge~$e$ determined by~$R^s$ lie in the equivalence class~$[f_e]$
determined by the family $\pi\colon\mathcal X\to B$. A \emph{rescaling ensemble}
is a morphism $R\colon B\to \Tnorm[\Gamma_b]$ which arises as the composition
$\pi\circ R^s$ for some simple rescaling ensemble $R^s$.
\end{df}
\par
The $s_i$ and $f_e$ will be called the \emph{rescaling parameters} and
\emph{smoothing parameters} determined by $R$ or $R^s$.
The composition of $R^s$ with the coordinate projections gives functions~$t_i$
such that $s_i = t_i^{a_i}$. We refer to those $t_i$ as the \emph{level parameters}.
\par
The adjective `generalized' in the following definition refers again to the fact that
the global residue condition has been dropped, compared to~\cite{LMS}. For an
illustration of some elements of the definition see \ref{fig:rescaleddiff}.
The well-definedness of the period in the following definition is checked (in any
characteristic) e.g.\ in \cite[Lemma~1.8]{Boj}.
\par
\begin{df} \label{def:collRD}
A \emph{collection of generalized rescaled differentials of type~$\mu=(m_1, \ldots,
m_n)$} on the family $(\pi\colon \mathcal X \to B, {\boldsymbol{z}})$ is a collection of
sections~$\omega_{(i)}$ of~$\omega_{\mathcal X/B}$ defined on
open subsets $U_i$ of~$\mathcal X$, indexed by the levels~$i$ of the enhanced level
graph~$\Gamma$. The irreducible components of the special fiber~$X$ on level
strictly below~$i$ are called \emph{vertical zeros}, those strictly above~$i$
are called \emph{vertical poles} of~$\omega_{(i)}$.
Each $U_i$ is required to be a neighborhood of the
subcurve~$X_{(\leq i)} \setminus (X_{(>i)} \cup {\mathcal Z}^\infty)$, where ${\mathcal Z}^\infty$ denotes
the locus of marked poles in the universal curve. For each level~$i$ and each edge~$e$
of $\Gamma$ whose lower end is at level~$i$ or below, we define $r_{e, (i)}\in {\mathcal O}_{B}$
to be the period of $\omega_{(i)}$ along the vanishing cycle~$\gamma_e$
for the node~$q_e$.
We require the collection to satisfy the following constraints:
\begin{enumerate}
\item[(1)] There exist sections $s_i \in H^0(B, \mathcal{O}_B)$ with $s_i(b)=0$ such
that for any levels $j<i$ the differentials satisfy $\omega_{(i)} \= s_j
\cdots s_{i-1} \omega_{(j)}$ on $U_i\cap U_j$.
\item[(2)] For any edge $e$ joining levels $j<i$,
the vanishing orders of $\omega_{(i)}$ and $\omega_{(j)}$ at the corresponding node
in the special fiber are $\kappa_e -1$ and $-\kappa_e-1$ respectively.
\item[(3)] The $\omega_{(i)}$ have order~$m_k$ along the sections ${\mathcal Z}_k$ of the $k$-th marked point that meet the level-$i$ subcurve of $X_b$; these are called {\em horizontal zeros and poles} (where ${\mathcal Z}^{\infty}$ records the horizontal poles). Moreover, $\omega_{(i)}$ is holomorphic and non-zero away from its horizontal and vertical zeros and poles.
\end{enumerate}
\par
If the rescaling and smoothing parameters $s_i, f_e$ for the collection
$\omega_{(i)}$ agree with those of a rescaling ensemble~$R^s$ or~$R$,
we call them {\em compatible}. We denote the
collection by ${\boldsymbol{\omega}} = (\omega_{(i)})_{i \in L^\bullet(\Gamma)}$.
\end{df}
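\par
For orientation: in the simplest case of a two-level graph with a single edge~$e$ of enhancement $\kappa_e$, as depicted in \ref{fig:rescaleddiff}, such a collection consists of two differentials $\omega_{(0)}$ and $\omega_{(-1)}$ satisfying $\omega_{(0)} = s_{-1}\, \omega_{(-1)}$ on $U_0 \cap U_{-1}$ by condition~(1), while condition~(2) says that $\omega_{(0)}$ vanishes to order $\kappa_e - 1$ and $\omega_{(-1)}$ has a pole of order $\kappa_e + 1$ at the node of the special fiber.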
\par
The reader comparing with the definition in \cite{LMS} will notice that there,
item~(2) contains the following additional requirement:
For any edge $e$ joining levels $j<i$
of $\Gamma$, there are functions $u_e, v_e$ on~$\mathcal X$ and $f_e$ on $B$,
such that the family has local normal form $u_e v_e = f_e$, and in these
coordinates
\begin{equation}\label{eq:5}
\omega_{(i)} \= (u_e^{\kappa_e} + f_e^{\kappa_e}r_{e, (j)})\frac{du_e}{u_e}
\quad\text{and} \quad
\omega_{(j)} \= - ( v_e^{-\kappa_e} + r_{e, (j)}) \frac{dv_e}{v_e}\,,
\end{equation}
where $\kappa_e$ is the enhancement of $\Gamma_b$ at $e$. In fact,
for any edge which is not semi-persistent, this normal form is automatic
by \cite[Theorem~4.3]{LMS}. For any semi-persistent edge this condition
is not needed here, since we do not require that the family is smoothable.
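Note moreover that the normal form \ref{eq:5} is consistent with the definition of the periods in \ref{def:collRD}: since $\oint_{\gamma_e} v_e^{-\kappa_e-1}\, dv_e = 0$ for $\kappa_e \geq 1$, only the residue term of $\omega_{(j)}$ contributes to the integral along the vanishing cycle, so that (in the analytic category, up to the orientation of~$\gamma_e$ and the usual factor of $2\pi i$) the coefficient $r_{e, (j)}$ appearing in \ref{eq:5} is indeed the period of $\omega_{(j)}$ along~$\gamma_e$.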
\par
\medskip
\def\surfacehole#1{
\begin{scope}[shift={#1}, scale=0.7]
\draw[thick] plot [smooth, tension = 0.5,xshift=-1.4cm, yshift=2.15cm] coordinates {(1.1,-2.1) (1.2,-2.2) (1.4,-2.3) (1.6,-2.2) (1.7,-2.1)};
\draw[thick] plot [smooth, tension = 0.8,xshift=-1.4cm, yshift=2.15cm] coordinates { (1.2,-2.2) (1.4,-2.13) (1.6,-2.2)};
\end{scope}
}
\begin{figure}[htb]
\centering
\[
\begin{tikzpicture}
\filldraw (0,3) circle (3pt);
\filldraw (0,5) circle (3pt);
\draw[thick] (0,3) to node[midway, right]{$e$} (0,5);
\draw[thick] (0,3) -- (0,2.6) node[below] {$1$};
\draw (0,1.5) node {$\Gamma$};
\draw[thick] (1.5,0) -- (9,0) node[below left]{$B = \mathrm{Spec}\ \mathbb{C}[[t]]$};
\filldraw (3,0) circle (3pt) node[below]{$b$};
\draw[thick] plot [smooth cycle, tension = 1, xshift=0 cm, yshift = 0cm] coordinates { (3,4) (2.5,5) (3,6) (3.5,5) };
\draw[thick] plot [smooth cycle, tension = 1, xshift=0 cm, yshift = 0cm] coordinates { (3,4) (2.25,2.5) (3,1) (3.75,2.5) };
\surfacehole{(3,5)};
\surfacehole{(2.7,3)};
\surfacehole{(3.3,2.4)};
\filldraw (2.7,1.4) circle (2pt) node [below left]{$z_1$};
\draw[thick] plot [smooth cycle, tension = 1, xshift=3 cm, yshift = 0cm] coordinates { (3.2,4) (3.5,5) (3,6) (2.5,5) (2.8,4) (2.25,2.5) (3,1) (3.75,2.5) };
\surfacehole{(6,5)};
\surfacehole{(5.7,3)};
\surfacehole{(6.3,2.4)};
\filldraw (5.7,1.4) circle (2pt) node [below left]{$z_1$};
\draw[thick, blue, dashed] plot [dotted, smooth cycle, tension = 1] coordinates { (3,3.8) (2.6,3.7) (3,3.6) (3.4,3.7) };
\draw[thick, blue, dashed,xshift=3 cm] plot [dotted, smooth cycle, tension = 1] coordinates { (3,3.8) (2.72,3.7) (3,3.6) (3.28,3.7) };
\draw[blue] (4.5,3.7) node {$\gamma_e$};
\draw[blue, ->] (4.2,3.7) -- (3.6,3.7);
\draw[blue, ->] (4.8,3.7) -- (5.4,3.7);
\filldraw[green] (3,4) circle (2pt);
\draw[thick, green, dashed] plot [dotted, smooth cycle, tension = 0.0] coordinates { (3,4) (2.1,4) (2.1,0.7) (3,0.7) (7,0.7) (7,6.5) (6,6.5) (3.5,4) };
\draw[green] (7.5,1.2) node {$U_{-1}$};
\draw[thick, red, dashed] plot [dotted, smooth cycle, tension = 0.0] coordinates { (1.9,0.5) (8,0.5) (8,6.7) (1.9,6.7) };
\draw[red] (8.5,1.2) node {$U_{0}$};
\end{tikzpicture}
\]
\caption{The underlying curve for a family of generalized rescaled differentials of type $\mu=(4)$, with neighborhoods $U_{0}$, $U_{-1}$ (in red, green) and the vanishing cycle $\gamma_e$ (in blue).}
\label{fig:rescaleddiff}
\end{figure}
\par
\begin{remark}\label{rem:induced_prong_matching}
Let ${\boldsymbol{\omega}}$ be a collection of generalized rescaled differentials with
a compatible rescaling ensemble~$R^s$ or~$R$. Then for any non-semi-persistent
edge~$e$, there is a natural \emph{induced prong-matching} $\sigma_e$
over~$B_e$, the vanishing locus of $f_e$, which is determined by the choice
of the rescaled differentials $\omega_{(i)}$ and the rescaling ensemble.
This prong-matching~$\sigma_e$ is defined explicitly in local coordinates by
writing it as $\sigma_e=d u_e \otimes dv_e$ when restricting to the nodal
locus corresponding to $e$, where $u_e$ and $v_e$ are as in \ref{eq:5}
with~$f_e$ prescribed by the rescaling ensemble. Any two possible choices
of $u_e$ and $v_e$ are of the form~$\alpha_e u_e$ and $\alpha^{-1}_e v_e$ for
some unit $\alpha_e \in {\mathcal O}_{B}^*$ (see \cite[Section 4]{LMS}),
so the induced prong-matching does not depend on this choice.
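Explicitly, along the nodal locus $\{u_e = v_e = 0\}$ one computes
\bes
d(\alpha_e u_e) \otimes d(\alpha_e^{-1} v_e) \= (\alpha_e\, du_e + u_e\, d\alpha_e) \otimes (\alpha_e^{-1}\, dv_e + v_e\, d\alpha_e^{-1}) \= du_e \otimes dv_e\,,
\ees
since all terms containing $u_e$ or $v_e$ vanish there.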
\end{remark}
\par
We can now package everything into our main notion.
\par
\begin{df} \label{def:germMSD}
Given a family of pointed stable curves $(\pi\colon\mathcal X\to B, {\boldsymbol{z}})$ and $B_{b}$
a germ of~$B$ at~$b$, the {\em germ of a family of generalized {simple}
multi-scale differentials\xspace} of type~$\mu$ over $B_{b}$ consists of the following data:
\begin{enumerate}
\item the structure of an enhanced level graph on the dual graph $\Gamma_b$ of the
fiber $X_b$;
\item a simple rescaling ensemble $R^s\colon B \to \Tsnorm[\Gamma_b]$, compatible
with
\item a collection of generalized rescaled differentials ${\boldsymbol{\omega}}
= (\omega_{(i)})_{i \in L^\bullet(\Gamma_b)}$ of type~$\mu$, and
\item a collection of prong-matchings ${\boldsymbol{\sigma}} = (\sigma_e)_{e \in E^v(\Gamma)}$,
where $\sigma_e$ is a section of~${\mathcal N}_e^\vee$ over~$B_e$, the vanishing locus of $f_e$.
If~$e$ is not semi-persistent, $\sigma_e$ is required to agree with the induced
prong-matching defined in~\Cref{rem:induced_prong_matching}. \qedhere
\end{enumerate}
\end{df}
\par
A section of the simple level rotation torus, that is an element $\xi \in T^s_{\Gamma_b}({\mathcal O}_B)$ or
equivalently a morphism $\xi : B \to T^s_{\Gamma_b}$, acts on all of the above data via
\[
\xi \cdot (\omega_{(i)}, R^s,
\sigma_e) \= (\xi \cdot \omega_{(i)}, \xi^{-1}\cdot R^s,\xi \cdot \sigma_e )\,.
\]
Here, for $\xi \in T^s_{\Gamma_b}({\mathcal O}_B)$ mapping to $((r_i)_{i\in L(\Gamma_b)}, (\rho_e)_{e \in E^v(\Gamma_b)})$
under the quotient map~\ref{eq:simpletotorus}, the action is defined by
\[
\xi\cdot \omega_{(i)} \= \Bigl(\prod_{\ell \geq i} r_\ell\Bigr)
\omega_{(i)}, \quad \xi\cdot \sigma_e = \rho_e \sigma_e\,,
\]
and $\xi^{-1}\cdot R^s$ denotes the post-composition of $R^s$ with the multiplication by~$\xi^{-1}$.\footnote{Most of the checks that this action is well-defined are straightforward. To verify part (2) of \ref{def:collRD}, assume we are given local coordinates $u_e,v_e$ around a node associated to $e \in E^v(\Gamma_b)$ satisfying \ref{eq:5}. Then the rescaled differential is put in the required normal form using the new coordinates $\widehat u = (\prod_{\ell \geq i} r_\ell )^{1/\kappa_e} u_e$ and $\widehat v = (\prod_{\ell \geq j} r_\ell )^{-1/\kappa_e} v_e$.}
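Condition~(1) of \ref{def:collRD} is preserved by a similar computation: in the coordinates above, the action of $\xi^{-1}$ on $R^s$ replaces the rescaling parameters by $s'_\ell = r_\ell^{-1} s_\ell$, and hence for any levels $j < i$
\bes
\xi\cdot\omega_{(i)} \= \Bigl(\prod_{\ell \geq i} r_\ell\Bigr) s_j \cdots s_{i-1}\, \omega_{(j)}
\= s'_j \cdots s'_{i-1} \Bigl(\prod_{\ell \geq j} r_\ell\Bigr) \omega_{(j)}
\= s'_j \cdots s'_{i-1}\, \bigl(\xi\cdot\omega_{(j)}\bigr)\,.
\ees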
\par
A \emph{morphism between two germs of generalized simple multi-scale differentials\xspace}
\begin{equation}} \def\ee{\end{equation}} \def\bes{\begin{equation*}} \def\ees{\end{equation*} \label{eq:defmorphismMSD}
(\pi'\colon\mathcal X'\to B', {\boldsymbol{z}}', \Gamma_{b'}, (R^s)', {\boldsymbol{\omega}}', {\boldsymbol{\sigma}}')
\,\longrightarrow\, (\pi\colon\mathcal X\to B, {\boldsymbol{z}}, \Gamma_{b}, R^s, {\boldsymbol{\omega}}, {\boldsymbol{\sigma}})
\ee
is a pair of germs of morphisms $\phi\colon B'\to B$ and $\tilde{\phi}\colon
\mathcal X'\to \mathcal X$ and an element $\xi \in T^s_{\Gamma_b}({\mathcal O}_{B'})$ such that
\begin{enumerate}
\item[i)] $(\phi, \tilde{\phi})$ jointly define a morphism of families of pointed
stable curves,
\item[ii)] the induced isomorphism of dual graphs $ \Gamma_{b'}\to \Gamma_b$ is also an
isomorphism of enhanced level graphs,
\item[iii)] the action of $\xi$ sends $((R^s)', {\boldsymbol{\omega}}', {\boldsymbol{\sigma}}')$
to $\tilde{\phi}^* (R^s, {\boldsymbol{\omega}},{\boldsymbol{\sigma}})$.
\end{enumerate}
\par
Pullbacks of germs of a family of generalized multi-scale differentials\xspace are defined as
in \cite[Section~11.2]{LMS}. This step requires some care, since the number
of levels, the nodes where the prong-matching is an induced prong-matching,
and the target of the rescaling ensemble map change. Given that, we
may define families of generalized multi-scale differentials\xspace by sheafification,
proceeding the same way as in \cite[Section~11.3]{LMS}.
\par
\begin{df} \label{def:simpleMSD}
We let $\GSMS$ be the groupoid of families of
generalized simple multi-scale differentials.
\end{df}
\par
There are two variants of this definition. First, replacing $T^s_{\Gamma_b}({\mathcal O}_{B, b})$
with the extended level rotation torus $\Textd[\Gamma_b]({\mathcal O}_{B, b})$ in the
definition of a morphism, we obtain projectivized generalized simple multi-scale differentials\xspace.
Here the additional torus factor acts by scaling the differential on all
levels simultaneously, including level~$0$.
These are relevant for obtaining compact spaces. Here we only compare the unprojectivized
definitions, and we will not elaborate further on this.
\par
Second, there is a ``non-simple'' variant that we need to compare to the
relative coarse moduli space. The remarks above about pullback and sheafification
apply here as well.
\par
\begin{df} \label{def:germnonsimpleMSD}
A {\em germ of a family of generalized multi-scale differentials\xspace} of type~$\mu$ is
defined as in \ref{def:germMSD}, replacing (2) by a
rescaling ensemble $R\colon B \to \Tnorm[\Gamma_b]$. A morphism
of such germs consists of $(\phi, \tilde{\phi}, \xi)$ as above,
except that now we allow $\xi \in T_{\Gamma_b}({\mathcal O}_{B'})$.
We let $\GMS$ be the resulting groupoid of families of generalized
multi-scale differentials.
\end{df}
\par
Modifying \Cref{def:collRD} by additionally imposing the global residue condition gives a
groupoid that we denote by $\LMS$ for the simple version (\ref{def:germMSD})
and by $\MSgrp$ for the non-simple version (\ref{def:germnonsimpleMSD}).
We state the comparison to the objects defined in \cite{LMS}.
\par
\begin{proposition} \label{prop:smoothDM}
The stack $\LMS$ is a smooth DM-stack. The stack $\MSgrp$ is a stack
with finite quotient singularities and agrees with the normalization of
the orderly blowup of the normalized incidence variety compactification
\cite[Section~14]{LMS}.
\end{proposition}
\par
The paper \cite{LMS} also defines a smooth stack denoted by $\LMS$, patched
locally from quotients of stacks with a Teichm\"uller marking. The full proof
that this stack is isomorphic to the stack with the same symbol defined here
would require recalling the lengthy definitions of level-wise real blow-up
and Teichm\"uller marking from \cite{LMS}. This identification directly implies
the second statement of the proposition. The proof given here provides the main
content of the proposition, namely the smoothness of this stack, without using
the smoothness results from~\cite{LMS}.
\par
\begin{proof}
Recall from \cite{LMS} that a versal deformation space $B$ of $\MSgrp$ is given by a product $B = \Tnorm[\Gamma_b] \times B_0$, where $\Tnorm[\Gamma_b]$ gives a parametrization of possible rescaling ensembles $R$ (which have values in $\Tnorm[\Gamma_b]$), and where $B_0$ parameterizes the remaining data (deformations of the components $\mathcal X_v$ for $v \in V(\Gamma_b)$ and twisted differentials on these components).
In fact, this local structure is given in loc.\ cit.\ for the model space
in \cite[Section~8.1]{LMS}. This model space is locally isomorphic to Dehn space
by the plumbing construction given in \cite[Theorem~10.1]{LMS}, and
\cite[Proposition~12.5]{LMS} shows that every family can locally be lifted to Dehn space.
\par
Consider the fiber product
$$
\begin{tikzcd}
\widehat{B}\coloneqq B \times_{\MSgrp} \LMS \arrow[r] \arrow[d] & \LMS \arrow[d]\\
B \arrow[r] & \MSgrp
\end{tikzcd}
$$
We claim that $\widehat{B}$ is equal to the stack quotient $[\overline{T}_\Gamma^s /
K_\Gamma]$ times the product of the other factors. Then the maps $\widehat{B}
\to \LMS$ provide a smooth cover by spaces which are smooth themselves, which
is what we needed to show.
\par
To show that $\widehat{B}$ is equal to $[\overline{T}_\Gamma^s / K_\Gamma] \times B_0$, let us write down what the maps $\widetilde B \to \widehat{B}$ from the spectrum $\widetilde B$ of some strictly Henselian local ring are. For this, recall\footnote{For a reminder on fiber products of stacks, we recommend the excellent paper \cite{Fantechi-Stacks-for-everybody}.} that a morphism to a fiber product as above is given by a triple
\[
(\widetilde{B} \to \LMS, \widetilde{B} \to B, G)\,,
\]
where $G$ is a $2$-isomorphism between the compositions
\[
\widetilde{B} \to \LMS \to \MSgrp\text{ and }\widetilde{B} \to B \to \MSgrp.
\]
Inserting the definitions of the moduli stacks, this data above is equivalent to a triple of
\begin{itemize}
\item a germ $(\pi\colon\mathcal X\to \widetilde{B}, {\boldsymbol{z}}, \Gamma_{b}, R^s : \widetilde B \to \Tsnorm[\Gamma_b], {\boldsymbol{\omega}}, {\boldsymbol{\sigma}})$ of generalized simple multi-scale differentials,
\item morphisms $s_T: \widetilde B \to \Tnorm[\Gamma_b]$ and $s_0: \widetilde B \to B_0$ (which together can be thought of as $(s_T,s_0):\widetilde B\to \Tnorm[\Gamma_b]\times B_0=B$),
\item an isomorphism ($\mathcal X \cong \mathcal X'$, $\xi \in T_{\Gamma_b}({\mathcal O}_{\widetilde{B}})$) of generalized (non-simple) multi-scale differentials, sending the family $(\pi\colon\mathcal X\to \widetilde{B}, {\boldsymbol{z}}, \Gamma_{b}, R, {\boldsymbol{\omega}}, {\boldsymbol{\sigma}})$ to the family $(\pi'\colon\mathcal X'\to \widetilde{B}, {\boldsymbol{z}}', \Gamma_{b}, R', {\boldsymbol{\omega}}', {\boldsymbol{\sigma}}')$ induced by $(s_T, s_0): \widetilde B \to B$
\end{itemize}
By identifying the families of curves $\mathcal X \cong \mathcal X'$, we can act on the
pair $(s_T, s_0)$ with the section $\xi$ of the level rotation torus.
Replacing $(s_T, s_0)$ by this modified pair, we obtain a new, equivalent triple of data, where the isomorphism in the last bullet point is taken as the identity.
But then we see that such a triple is uniquely determined by the pair
\[
(R^s : \widetilde{B} \to \Tsnorm[\Gamma_b], s_0: \widetilde{B} \to B_0),
\]
by taking $s_T$ in the second bullet point to be the composition $\widetilde{B} \to \Tsnorm[\Gamma_b] \to \Tnorm[\Gamma_b]$, and the data $(\pi,{\boldsymbol{z}},\Gamma_b,{\boldsymbol{\omega}},{\boldsymbol{\sigma}})$ in the first bullet point to be that determined by the non-simple generalized multi-scale differential associated with $(s_T, s_0): \widetilde B \to B$.
Above we have found that any morphism $\widetilde{B} \to \widehat{B}$ can be described by a morphism $(R^s, s_0) : \widetilde{B} \to \Tsnorm[\Gamma_b] \times B_0$. Two such morphisms are $2$-isomorphic if they can be related by compatible isomorphisms for the stacks $B$ and $\LMS$ in the fiber product. Since $B$ is a scheme, the only such isomorphisms come from sections $\xi : \widetilde{B} \to T^s_{\Gamma_b}$ leaving the underlying non-simple generalized multi-scale differential fixed. These are exactly identified with sections $\xi : \widetilde{B} \to K_{\Gamma_b}$, which act in a natural way on the first morphism $R^s : \widetilde{B} \to \Tsnorm[\Gamma_b]$.
Since $\widetilde{B}$ is connected, the section $\xi$ is necessarily constant, so that we have identified\footnote{For the second equality below we use that for a finite group $K$ acting on a scheme $\overline{T}$, the morphisms $\widetilde B \to [\overline{T}/K]$ from the spectrum $\widetilde{B}$ of a strictly Henselian local ring can be identified with the set-quotient $\{\widetilde{B} \to \overline{T}\}/K$. This itself uses the definition of the quotient stack together with the fact that all $K$-torsors over a scheme $\widetilde B$ as above are trivial.}
\[
\mathrm{Mor}(\widetilde{B}, \widehat{B}) = \mathrm{Mor}(\widetilde{B}, \Tsnorm[\Gamma_b] \times B_0)/K_\Gamma = \mathrm{Mor}(\widetilde{B},[\Tsnorm[\Gamma_b]/K_\Gamma] \times B_0)\,.
\]
This proves the isomorphism $\widehat{B} \cong [\Tsnorm[\Gamma_b]/K_\Gamma] \times B_0$. Since both the quotient stack $[\Tsnorm[\Gamma_b]/K_\Gamma]$ and $B_0$ are smooth, this finishes the proof.
\end{proof}
\begin{proof}[Proof of \ref{intro:mainiso}, second part]
Assuming the first part of the theorem, the proof of the second part is completed by showing that the map $\GSMS \to \GMS$ is the relative coarse moduli space over $\overline{\ca M}_{g,n}$. First, we observe that the map $\GMS \to \overline{\ca M}_{g,n}$ is representable. Indeed, the stabilizers $(\varphi, \widetilde \varphi, \xi)$ of a germ of a family of generalized multi-scale differentials lying over the identity morphism $\varphi=\mathsf{id}_B$, $\widetilde \varphi=\mathsf{id}_X$ of the underlying stable curves are those $\xi \in T_{\Gamma_b}({\mathcal O}_{B})$ fixing both the differentials ${\boldsymbol{\omega}}$ and the prong-matchings~${\boldsymbol{\sigma}}$. By the definition of the level-rotation torus, this forces $\xi$ to be trivial, so that indeed the stabilizers of $\GMS$ inject to the stabilizers of $\overline{\ca M}_{g,n}$.
By definition of the relative coarse space, we then have a factorization
\[
\GSMS \to \GSMS^{\sf{coarse}} \to \GMS\,,
\]
and we show that the second map is an isomorphism. For this, let $B \to \GMS$ be associated to a germ of a family of generalized multi-scale differentials. Then we have a commutative diagram, where we \emph{define} the squares on the right to be cartesian:
\[
\begin{tikzcd}
{[\overline{T}_{\Gamma_b}^s/K_\Gamma]} \arrow[dd] & \GSMS_B \arrow[r] \arrow[l, dashed] \arrow[d] & \GSMS \arrow[d]\\
& \GSMS_B^{\sf{coarse}} \arrow[r] \arrow[d] & \GSMS^{\sf{coarse}} \arrow[d]\\
\overline{T}_{\Gamma_b}^n & B \arrow[l,"R"] \arrow[r] & \GMS
\end{tikzcd}
\]
Similar arguments to those in the proof above then show that the fiber $\GSMS_B$ of $\GSMS$ over $B$ is the stack parameterizing the choices of simple rescaling ensemble $R^s$ lifting the given rescaling ensemble $R : B \to \overline{T}_{\Gamma_b}^n$ associated to the family of generalized rescaled differentials. Thus $\GSMS_B$ is also the fiber product of $R$ with the map $[\overline{T}_{\Gamma_b}^s/K_\Gamma] \to \overline{T}_{\Gamma_b}^n$, so that the dashed arrow on the top left makes the left square cartesian.
To conclude we first note that by \cite[Proposition 3.4]{Abramovich2011Twisted-stable-} the space $\GSMS_B^{\sf{coarse}}$, which was defined as a fiber product, is in fact a relative coarse space for $\GSMS_B$ over $\overline{\ca M}_{g,n}$.
But since the map $\GSMS_B \to \overline{\ca M}_{g,n}$ factors through the representable map $B \to \overline{\ca M}_{g,n}$, the space $\GSMS_B^{\sf{coarse}}$ is \emph{also} a coarse space of $\GSMS_B$ over $B$, by an application of \ref{lem:annoyingcoarsespaces} below to $\mathcal X = \GSMS_B$, ${\mathcal Y}'=B$ and ${\mathcal Y}=\overline{\ca M}_{g,n}$.
\par
On the other hand, since $\overline{T}_{\Gamma_b}^n$ is the coarse space of $[\overline{T}_{\Gamma_b}^s/K_\Gamma]$ (over $\mathrm{Spec}(\mathbb{C})$), applying \cite[Proposition 3.4]{Abramovich2011Twisted-stable-} again shows that $B$ \emph{itself} is the coarse space of $\GSMS_B$. This proves that the map $\GSMS_B^{\sf{coarse}} \to B$ is an isomorphism. Since we prove this for any $B \to \GMS$, we conclude that $\GSMS^{\sf{coarse}} \to \GMS$ is an isomorphism as desired.
\end{proof}
\begin{lemma} \label{lem:annoyingcoarsespaces}
Consider a sequence of morphisms $\mathcal X \to {\mathcal Y}' \to {\mathcal Y}$ of algebraic stacks, locally of finite
presentation, and assume the relative inertia $I(\mathcal X /{\mathcal Y}) \to \mathcal X$ is finite. Then if ${\mathcal Y}' \to {\mathcal Y}$ is representable, the relative coarse space $\mathcal X^{\sf{coarse},{\mathcal Y}}$ of $\mathcal X$ over ${\mathcal Y}$ is isomorphic to the relative coarse space $\mathcal X^{\sf{coarse},{\mathcal Y}'}$ of $\mathcal X$ over ${\mathcal Y}'$.
\end{lemma}
\begin{proof}
It follows from the properties of relative coarse spaces (\cite[Theorem 3.1 (2)]{Abramovich2011Twisted-stable-}) that there is a natural sequence of maps
\[
\begin{tikzcd}
\mathcal X \arrow[r] & \mathcal X^{\sf{coarse},{\mathcal Y}} \arrow[r] & \mathcal X^{\sf{coarse},{\mathcal Y}'} \arrow[r] & {\mathcal Y}' \arrow[r] & {\mathcal Y}
\end{tikzcd}.
\]
Taking the fiber product with a cover $U \to {\mathcal Y}$ by an algebraic space, we obtain a sequence of morphisms
\[
\begin{tikzcd}
\mathcal X_U \arrow[r] & \mathcal X^{\sf{coarse},{\mathcal Y}}_U \arrow[r] & \mathcal X^{\sf{coarse},{\mathcal Y}'}_U \arrow[r] & {\mathcal Y}'_U \arrow[r] & U
\end{tikzcd},
\]
and by the representability of ${\mathcal Y}' \to {\mathcal Y}$ and the properties of relative coarse spaces, all stacks in this sequence except possibly $\mathcal X_U$ are in fact algebraic spaces. Now it follows from the construction in the proof of \cite[Theorem 3.1]{Abramovich2011Twisted-stable-} that the map $\mathcal X_U \to \mathcal X^{\sf{coarse},{\mathcal Y}}_U$ is an (absolute) coarse moduli space. On the other hand, seeing ${\mathcal Y}'_U \to {\mathcal Y}'$ as a cover by an algebraic space, the same argument implies that $\mathcal X_U \to \mathcal X^{\sf{coarse},{\mathcal Y}'}_U$ is an absolute coarse space, forcing the map $\mathcal X^{\sf{coarse},{\mathcal Y}}_U \to \mathcal X^{\sf{coarse},{\mathcal Y}'}_U$ to be an isomorphism. We have checked on a cover that $\mathcal X^{\sf{coarse},{\mathcal Y}} \to \mathcal X^{\sf{coarse},{\mathcal Y}'}$ is an isomorphism, finishing the proof.
\end{proof}
\par
\begin{remark} \label{rem:PMEqcount}
In practice it is often relevant to count how many projectivized
multi-scale differentials there are on a given pointed curve with twisted
differential $(X,{\boldsymbol{z}},\Gamma, {\boldsymbol{\omega}})$.
By definition of the above equivalence relation, this is the
number of \emph{prong-matching equivalence classes}, i.e.\ the number of
orbits of the set of global prong-matchings under the action of the level
rotation group $R_\Gamma$. Determining this number is complicated in general,
but for a two-level graph with prongs $\kappa_1,\ldots,\kappa_s$ there are
$\prod \kappa_i / \operatorname*{lcm}(\kappa_i)$ prong-matching equivalence classes.
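For instance, for a two-level graph with prongs $(\kappa_1, \kappa_2) = (2,2)$ there are $\kappa_1\kappa_2 = 4$ prong-matchings and $4/\operatorname*{lcm}(2,2) = 2$ equivalence classes, while for $(\kappa_1, \kappa_2) = (2,3)$ all six prong-matchings form a single orbit.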
\end{remark}
\par
\bigskip
\subsection{Quotient twist group and rescaling ensembles in a worked example}
\label{sec:QuotTwist}
Consider the triangle graph $\Gamma$ with three levels, each containing one vertex, and
three edges forming a triangle, as illustrated in~\ref{fig:X_L_pic} (to which
we also refer for the labeling of the edges).
\begin{figure}[ht]
\begin{tikzpicture}[very thick]
\begin{scope}
\fill (0,0) coordinate (x0) circle (2.5pt); \node [above] at (x0) {$X_{(0)}$};
\fill (1,-1) coordinate (x1) circle (2.5pt); \node [below right] at (x1) {$X_{(-1)}$};
\fill (-1,-2) coordinate (x2) circle (2.5pt); \node [below right] at (x2) {$X_{(-2)}$};
\draw[] (x0) -- node[right]{$\kappa_1$} node[left]{} (x1);
\draw (x0) --node[left]{$\kappa_3$} node[right]{} (x2);
\draw (x1) -- node[below]{$\kappa_2$} node[above]{} (x2);
\end{scope}
\end{tikzpicture}
\quad \quad \quad
\begin{tikzpicture}[very thick] \begin{scope}
\fill (0,0) coordinate (x0) circle (2.5pt); \node [above] at (x0) {$X_{(0)}$};
\fill (1,-1) coordinate (x1) circle (2.5pt); \node [below right] at (x1) {$X_{(-1)}$};
\fill (-1,-2) coordinate (x2) circle (2.5pt); \node [below right] at (x2) {$X_{(-2)}$};
\fill (-0.5,-1) coordinate (x3) circle (2.5pt); \node [below right] at (x3) {};
\draw[] (x0) -- node[right]{$\kappa_1$} node[left]{} (x1);
\draw (x0) --node[left]{$\kappa_3$} node[right]{} (x3);
\draw (x3) --node[left]{$\kappa_3$} node[right]{} (x2);
\draw (x1) -- node[below]{$\kappa_2$} node[above]{} (x2);
\end{scope}
\end{tikzpicture}
\quad \quad
\caption{The triangle graph (the generic fiber~$X$, left) and its subdivision
(the special fiber $X_L$, right), where the extra vertex stands for a semistable rational component.}
\label{fig:X_L_pic}
\end{figure}
For simplicity we restrict to the case $\kappa_1 = 1 = \kappa_2$
and $\kappa_3 = n$. In this case the simple twist group is $\sTw[\Gamma] = n{\mathbb{Z}}
\oplus n{\mathbb{Z}}$. The full twist group is generated by the simple twist group
and the element $(1,-1)$. In particular we note that the {quotient twist group}
is
\begin{equation}} \def\ee{\end{equation}} \def\bes{\begin{equation*}} \def\ees{\end{equation*} \label{eq:KGammaexample}
K_\Gamma \= \Tw[\Gamma] /\sTw[\Gamma] \= {\mathbb{Z}}/n{\mathbb{Z}}\,.
\ee
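Indeed, in these conventions an element $(m_{-1}, m_{-2})$ of the level rotation group lies in $\Tw[\Gamma]$ if and only if $n \mid m_{-1} + m_{-2}$: the edges with enhancement $\kappa_1 = \kappa_2 = 1$ impose no condition, while the edge with $\kappa_3 = n$ crosses both level passages. The map $(m_{-1}, m_{-2}) \mapsto m_{-1} \bmod n$ then identifies the quotient $\Tw[\Gamma]/\sTw[\Gamma]$ with ${\mathbb{Z}}/n{\mathbb{Z}}$.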
To work explicitly with invariants, we specialize to the case $n=3$ in the sequel.
\par
The simple level rotation torus is isomorphic to $({\mathbb{C}}^*)^2$, hence
$\overline{T}^s_\Gamma \cong {\mathbb{C}}^2$, and a morphism $R^s\colon B \to \overline{T}^s_\Gamma$ is
given by two functions $(t_{-1},t_{-2})$. Consequently
\begin{align*}
\overline{T}^n_\Gamma \= \overline{T}^s_\Gamma / K_\Gamma &\= \{(s_{-1}, s_{-2}, f_{1}, f_{2}, f_{3})
: f_1^1 \= s_{-1}\,, f_{2}^1 \= s_{-2}\,, f_{3}^{3} = s_{-1} s_{-2} \}\\ &\= \{(f_{1}, f_{2}, f_{3}) : f_{3}^{3} = f_{1} f_{2}\}\,
\end{align*}
where $s_{-1}=f_{1}=t_{-1}^3$, $s_{-2}=f_2= t_{-2}^3$ and $f_3= t_{-1}t_{-2}$.
Thus the coordinates $(t_{-1}, t_{-2})$ on $\overline{T}^s_\Gamma$ parameterize a deformation, which is in fact the universal choice in
a neighborhood of this graph, disregarding changes of the complex structure
of the underlying curve. To summarize, the rescaling ensemble $R$ induced by $R^s$ is given by the composition of $R^s$ with the quotient map $\overline{T}^s_\Gamma \to \overline{T}^s_\Gamma / K_\Gamma$, and has coordinates
$$
(s_{-1}, s_{-2}, f_1, f_2, f_3) = (t_{-1}^3, t_{-2}^3, t_{-1}^3, t_{-2}^3, t_{-1} t_{-2})\,.
$$
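In other words, the relation $f_3^3 = f_1 f_2$ exhibits $\overline{T}^n_\Gamma$ as the $A_2$ surface singularity ${\mathbb{C}}^2/({\mathbb{Z}}/3{\mathbb{Z}})$: the generator of $K_\Gamma$ acts by $(t_{-1}, t_{-2}) \mapsto (\zeta_3 t_{-1}, \zeta_3^{-1} t_{-2})$, and the ring of invariants is generated by $t_{-1}^3$, $t_{-2}^3$ and $t_{-1}t_{-2}$, that is, precisely by $f_1$, $f_2$ and $f_3$.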
\par
Let ${\boldsymbol{\omega}} = (\omega_0, \omega_{-1},\omega_{-2})$ be a twisted differential on
some pointed stable automorphism-free curve~$(X,{\boldsymbol{z}})$ compatible with
the~$\Gamma$ discussed here. By plumbing (see \cite[Section~12]{LMS})
we get a family of curves $\mathcal X \to \overline{T}^n_\Gamma$ with an underlying
collection of rescaled differentials
\bes
\omega_{(0)} \= \omega_0\,, \qquad \omega_{(-1)} \= s_{-1} \omega_{-1}\,,\qquad
\omega_{(-2)} \= s_{-1}s_{-2} \omega_{-2}\,
\ees
and the rescaling ensemble~$R$.\footnote{In fact, replacing the initial datum~${\boldsymbol{\omega}}$
by the universal equisingular deformation inside the appropriate stratum of
differentials and taking as new base $\overline{T}^n_\Gamma$
times the base of the equisingular deformation we obtain the universal family.}
\par
To summarize: \emph{near $(X,{\boldsymbol{z}},\Gamma,{\boldsymbol{\omega}})$ as above, $\GMS = \MSgrp$,
since there are no global residue conditions; both functors are representable by an algebraic variety;
this algebraic variety is singular, with a quotient singularity given by the group~$K_\Gamma$.}
\par
\medskip
Finally we remark that as illustrated in~\ref{fig:X_L_pic}, a geometric way to think of the $[\mathbb{P}^1/\mathbb{G}_m]$ subdivision is to modify the definition of level graphs by eliminating all long edges (i.e., edges crossing more than one level passage) and instead inserting semistable rational vertices at each level crossed by a long edge, with the same prong value. Then the corresponding twist group, level rotation torus and rescaling ensemble have only their `simple' versions.
To see this concretely, suppose $uv = f$ is the local model of a node corresponding to a long edge crossing $k$ level passages, where $f^\kappa = s_{i-k}\dots s_{i-1}$ as in \ref{eq:f2sj}. Introduce new parameters $u_j, v_j, f_j$ for $i-k\leq j\leq i-1$ satisfying $u_jv_j = f_j$, $f_j^{\kappa} = s_{j}$, $v_ju_{j-1} = 1$, $u_{i-1} = u$ and $v_{i-k} = v$. Then $[v_j, u_{j-1}]$ represents the inserted semistable rational vertex at each intermediate level that can subdivide the long edge into $k$ short edges, where the differential on the semistable rational vertex is $u_{j-1}^{\kappa} (du_{j-1}/u_{j-1}) = - v_j^{-\kappa} (dv_j/v_j)$ and the prong remains equal to $\kappa$.
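\par
As a consistency check, the relations above telescope: $\prod_{j=i-k}^{i-1} f_j = \prod_{j=i-k}^{i-1} u_j v_j = u_{i-1} \bigl(\prod_{j=i-k+1}^{i-1} v_j u_{j-1}\bigr) v_{i-k} = uv = f$, and consequently $f^{\kappa} = \prod_{j=i-k}^{i-1} f_j^{\kappa} = s_{i-k} \cdots s_{i-1}$, in agreement with \ref{eq:f2sj}.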
\section{Blowup descriptions}\label{sec:blowup_descriptions}
In this section we give a description of ${\mathbb{P}}(\cat{Rub}_{\ca L}^{\sf{coarse}})$ as a global blowup.
First, in genus zero, we construct an explicit sheaf of ideals on $\overline{{\mathcal M}}_{0,n}$ such that blowing up $\overline{{\mathcal M}}_{0,n}$ along this sheaf gives ${\mathbb{P}}(\cat{Rub}_{\ca L}^{\sf{coarse}})$.
In \cite{nguyen} Nguyen described the incidence variety compactification (IVC) in the case of genus zero as an explicit blowup of $\overline{\ca M}_{0,n}$.
Note that in genus zero there are no global residue conditions (because any top level vertex must have a marked pole), and hence in genus zero the rubber space and the space of generalized multi-scale differentials coincide with the space of multi-scale differentials. Our blowup description can thus recover Nguyen's result about the IVC of the strata of meromorphic $1$-forms in genus zero as a blowup of $\overline{{\mathcal M}}_{0,n}$. We also provide an example demonstrating the difference between the rubber space and the IVC in genus zero.
\par
Next, for arbitrary genus, we construct a globally defined sheaf of ideals on the normalization of the incidence variety compactification (NIVC) whose blowup gives the (projectivized) multi-scale moduli space (i.e., the main component of $\cat{Rub}_{\ca L}^{\sf{coarse}}$). It follows that the (coarse) space of projectivized multi-scale differentials is a projective variety for all $g$. Recall that in \cite[Section 14.1]{LMS} the moduli space of multi-scale differentials was described as a local blowup, where the ideals locally defining the center of the blowup can differ by principal ideals on the overlaps of local charts. In particular, the description of \cite{LMS} did not yield projectivity of the space of multi-scale differentials. By constructing an explicit ample divisor class, projectivity of the moduli space of multi-scale differentials was later established in \cite[Section 3]{ccm}. Our global blowup description thus provides a direct conceptual understanding of this projectivity result.
\par
Besides projectivity, knowing a blowup description of compactified strata of differentials can be helpful for obtaining geometric invariants, such as volumes of the strata, by using intersection theory, see \cite{nguyen}. We also provide a tropical interpretation of our blowup, which sheds further light on the geometry of the construction.
\subsection{The sheaf of ideals in genus zero}
Let $\Gamma$ be the dual graph of a boundary stratum $D_\Gamma\subset\overline{\mathcal M}_{0,n}$. For each vertex $v\in V(\Gamma)$, let $d(v)$ be the degree of $\mathcal L_\mu$ restricted to $v$ (so $\sum_{v\in V(\Gamma)} d(v) = 0$ by definition). Since $\Gamma$ is a tree, there exists a unique `slope'\footnote{The justification for this terminology is given by \ref{eq:slopeadmiss}, which shows that the slopes of points of $\cat{Rub}_{\ca L_\mu}$ satisfy the same conditions.} function $\kappa \colon H\to \mathbb Z$ from the set $H=H(\Gamma)$ of half-edges of $\Gamma$ such that
\begin{enumerate}
\item $\kappa$ agrees with $m_i$ at the leg corresponding to a marked point $z_i$;
\item $\kappa(h) + \kappa(h') = 0$ for any $h$ and $h'$ that are opposite halves of an edge;
\item for all vertices $v$ we have $\sum_{h \in H(v)} \kappa(h) = d(v)$, where we sum over all half-edges attached to $v$.
\end{enumerate}
For every pair of vertices $v$ and $v'$, let $\gamma$ be the unique path from~$v$ to~$v'$ in~$\Gamma$. We view this (directed) path as a sequence of half-edges, where if an edge $e = (h, h')\in E(\Gamma)$ appears in $\gamma$ in the direction going \emph{from} $h$ \emph{to} $h'$ then we put (only) $h$ in our sequence of half-edges. We define an ideal locally around the boundary stratum $D_\Gamma\subseteq \overline{{\mathcal M}}_{0,n}$ by
\begin{equation*}
I(v, v') \coloneqq \prod_{h \in \gamma} \delta(h)^{\max(\kappa(h), 0)}\,,
\end{equation*}
where we write $\delta(h)$ for the ideal associated to the edge containing $h$ (that is, for the defining equation of the boundary divisor of $\overline{{\mathcal M}}_{0,n}$ where the corresponding node exists). Define
\begin{equation*}
J(v, v') \coloneqq I(v, v') + I(v', v)\,;
\end{equation*}
this evidently satisfies $J(v, v') = J(v', v)$ and $J(v,v) = (1)$. Finally we set
\begin{equation*}
w(v) \coloneqq \on{valence}(v) - 2\,,
\end{equation*}
which is a positive integer by stability of the curve, and define
\begin{equation*}
J(\Gamma) \coloneqq\prod_{(v, v') \in V \times V} J(v, v')^{w(v) w(v')}\,.
\end{equation*}
A concrete example of this ideal is given in \ref{ex:blowup-comparison} below.
\subsection{Compatibility under degeneration in genus zero}
To show that the ideals $J(\Gamma)$ defined in the neighborhood of each stratum $D_\Gamma\subseteq \overline{{\mathcal M}}_{0,n}$ glue to a global ideal sheaf over $\overline{{\mathcal M}}_{0,n}$, we need to show that they behave well under degeneration. As any dual graph~$\Gamma$ can be obtained from any other~$\Gamma'$ by a series of operations of inserting and contracting edges, it is enough to check that the ideals glue under contracting a single edge of the graph.
\begin{lemma}\label{lm:Jcompatible}
Let $e$ be an edge of $\Gamma$, and let $\Gamma'$ be the graph obtained from $\Gamma$ by contracting~$e$. Then $J(\Gamma') = J(\Gamma)$, after inverting the ideal $\delta(e)$.
\end{lemma}
Note that inverting $\delta({e})$ geometrically corresponds to restricting to the locus where the edge $e$ is contracted, i.e.~where the corresponding node of the curve is smoothed out.
\begin{proof}
We denote by $c\colon \Gamma \to \Gamma'$ the contraction map, let $v_1$ and $v_2$ be the endpoints of $e$, and let $v'$ be the vertex of $\Gamma'$ to which $e$ is contracted, so that $d(v') = d(v_1) + d(v_2)$.
If $v$ is any vertex of $\Gamma$ different from $v_1$ and $v_2$, then clearly $w(v) = w(c(v))$. Furthermore, the slope function on $\Gamma$ clearly restricts to the slope function on $\Gamma'$. Thus for any two vertices~$u_1$ and~$u_2$ of~$\Gamma$ distinct from $v_1$ and $v_2$, we have
\bes
J_\Gamma(u_1, u_2) \sim J_{\Gamma'}(u_1, u_2)\,,
\ees
where to simplify notation we write $I \sim J$ if the ideal sheaves $I$ and $J$ become equal after inverting $\delta(e)$. Similarly $J_\Gamma(v_1, v_2) \sim (1)$.
It therefore suffices to show that
\begin{equation}
\label{eq:sim_1}
\prod_{v \in V(\Gamma')} J(v', v)^{2w(v') w(v)} \sim \prod_{v \in V(\Gamma)} J(v_1, v)^{2 w(v) w(v_1)} J(v_2, v)^{2 w(v) w(v_2)}\,.
\end{equation}
Let $V^\circ\coloneqq V(\Gamma)\setminus\{v_1,v_2\}=V(\Gamma')\setminus\{v'\}$. Then \ref{eq:sim_1} reduces to showing
\begin{equation*}
\label{eq:sim_2}
\prod_{v \in V^\circ} J(v', v)^{w(v') w(v)} \sim \prod_{v \in V^\circ} J(v_1, v)^{w(v_1) w(v)} J(v_2, v)^{w(v_2) w(v)}\,.
\end{equation*}
This follows from $w(v') = w(v_1) + w(v_2)$ and
\begin{equation*}
J(v', v) \sim J(v_1, v) \sim J(v_2, v)
\end{equation*}
for all $v \in V^\circ$.
\end{proof}
\begin{df}
\label{def:big_ideal_sheaf}
Define $J(\ca L_\mu)$ to be the (global) ideal sheaf on $\overline{\ca M}_{0,n}$ that for any boundary stratum $D_\Gamma$ restricts to the ideal~$J(\Gamma)$ on a neighborhood of $D_\Gamma$.
\end{df}
The existence of $J(\ca L_\mu)$ follows from the above lemma.
\subsection{A tropical picture in genus zero}
\label{subsec:tropical}
The normalized blowup in the ideal $J(\Gamma)$ corresponds tropically to a subdivision of the positive orthant in the vector space $\bb Q\Span{E}$, where $E=E(\Gamma)$ is the edge set. This subdivision is built by taking a hyperplane (or sometimes the whole space) for every pair of vertices in $\Gamma$: if $\gamma$ is the path from $v$ to $v'$ as above, then the corresponding hyperplane $L(v, v')$ is cut out by the equation
\begin{equation*}
\sum_{h \in \gamma} \kappa(h) e(h) \= 0 \,,
\end{equation*}
where $e(h)$ is the edge containing the half-edge $h$, viewed as an element of the group $\bb N\Span{E}$ (and we recall that a half-edge $h$ is said to be contained in a directed path $\gamma$ if $\gamma$ goes via~$h$ before going through the complementary half-edge of the same edge).
These local subdivisions glue to a global subdivision of the tropicalization of $\overline{\ca M}_{0, n}$, inducing a proper birational map $\widetilde{\ca M}_{0, n}\to \overline{\ca M}_{0,n}$.
\begin{lemma}\label{lem:blowup_to_subdivision}
The normalization of the blowup of $\overline{{\mathcal M}}_{0,n}$ in the ideal $J(\ca L_\mu)$ coincides with the proper birational modification $\widetilde{\ca M}_{0, n}\to \overline{\ca M}_{0,n}$ induced by the subdivision above.
\end{lemma}
\begin{proof}
The standard dictionary between toric blowups and subdivisions implies that the normalized blowup in $J(v, v')$ is equal to that induced by the subdivision in $L(v, v')$. Since $w(v) \ge 1$ (by stability), blowing up in $J(v, v')$ is the same as blowing up in $J(v, v')^{w(v) w(v')}$. Normalized blowup in a product of ideals corresponds to superimposing their subdivisions.
\end{proof}
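For instance, for the graph of \ref{ex:blowup-comparison} below the pairs of vertices $v_i, v_j$ with $1\le i<j\le 3$ give the hyperplanes $e_i = e_j$, while the pairs involving the central vertex $v_0$ only give coordinate hyperplanes, which do not subdivide the orthant; the resulting subdivision of the positive orthant is by the relative order of the three coordinates, matching the total ordering of the values of the PL function $\beta$ appearing in the proof of \ref{thm:blowup-0} below.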
\subsection{Comparing blowups and rubber maps in genus zero}
We are ready to prove our main statement in genus zero:
\begin{theorem}
\label{thm:blowup-0}
The normalization of the blowup $\widetilde{\ca M}_{0, n}$ of $\overline{\ca M}_{0,n}$ along the ideal sheaf $J(\mathcal L_\mu)$ is the projectivized coarse moduli space of rubber differentials ${\mathbb{P}}(\cat{Rub}_{\mathcal L}^{\sf{coarse}})$.
\end{theorem}
\begin{proof}
Let $X/B$ be a nuclear log curve of genus zero.
\par
\textbf{Claim:} There exists a PL function $\beta$ on $X$ such that $\mathcal L_\mu \cong \on O(\beta)$, and moreover such $\beta$ is unique up to adding an element of $\overline{\mathsf{M}}_{B}(B)^\mathsf{gp}$.
To prove the claim, we use the fact that the graph is a tree to deduce that there is a unique collection of admissible slopes $\kappa_e$. We pick a vertex $v_0$, and let $\beta$ be the unique PL function vanishing on $v_0$ and with slopes given by the $\kappa_e$. The line bundle $\mathcal L_\mu(-\beta)$ has multi-degree zero, and is hence trivial since $X$ has genus 0. This proves the claim.
Now recall that $\cat{Rub}^{\sf{coarse}}_\ca L$ can be obtained by omitting the divisibility condition (2) from \ref{def:rub}. In other words, the point $X/B$ lies in $\cat{Rub}^{\sf{coarse}}_\ca L$ if and only if the values of~$\beta$ on the vertices of $\Gamma$ form a totally ordered set. It therefore remains to check that this is equivalent to the map $B \to \overline{\ca M}_{0,n}$ factoring via the subdivision described in \ref{subsec:tropical}.
If $\gamma$ is a directed path in $\Gamma$, we define
\begin{equation*}
\phi(\gamma) \coloneqq \sum_{h \in \gamma} \kappa_h \delta_h\,.
\end{equation*}
Since the difference of values of $\beta$ at the two ends of an edge is the slope $\kappa_e$ of that edge (with the appropriate sign), the values of $\beta$ at the two ends of a path~$\gamma$ differ by~$\phi(\gamma)$.
Fix a vertex $v_0$, and write $\gamma_{v}$ for the unique path from $v_0$ to $v$. Then the set $\{\beta(v) : v \in V(\Gamma)\}$ is totally ordered if and only if the set
\begin{equation*}
\{\phi(\gamma_v) : v \in V(\Gamma)\}
\end{equation*}
is totally ordered. This is in turn equivalent to requiring that for every path $\gamma\subset\Gamma$ (not necessarily a path from $v_0$), the element $\phi(\gamma)$ is comparable to $0$, i.e., either $\phi(\gamma) \in \overline{\mathsf{M}}_S$ or $-\phi(\gamma) \in \overline{\mathsf{M}}_S$. Imposing this condition is equivalent to subdividing $\bb N\langle E\rangle$ along the hyperplane $L(v, v')$ of \ref{subsec:tropical}, where~$v$ and $v'$ are the endpoints of $\gamma$.
\end{proof}
\subsection{Comparison to Nguyen's blowup in genus zero}
As mentioned, in genus zero Nguyen \cite{nguyen} described the IVC as an explicit blowup of $\overline{\ca M}_{0,n}$ (also for the general case of $k$-differentials in genus zero). Since the rubber/multi-scale space is the normalization of a blowup of the normalization of the IVC,
our blowup described in \ref{thm:blowup-0} must dominate the blowup defined by Nguyen. In this subsection we recall Nguyen's construction, interpret his blowup from the viewpoint of our setup, and give an alternative proof of Nguyen's result that blowing up $\overline{\ca M}_{0,n}$ in his ideal gives the IVC.
\par
We begin by recalling Nguyen's construction of a sheaf of ideals on $\overline{\ca M}_{0,n}$.
Let $X/B$ be a nuclear log curve of genus zero with graph $\Gamma$, and let $\kappa$ be the slope function on the edges of $\Gamma$, i.e.,~the slope function of the PL function $\beta$ constructed in the proof of \ref{thm:blowup-0}.
For a given vertex $v\in V(\Gamma)$ and an edge $e \in E(\Gamma)$, let $h_v(e)$ be the half-edge of $e$ such that the path from the end of $h_v(e)$ to $v$ passes through $e$.
For a vertex $v\in V(\Gamma)$ we define
$$\delta_v \coloneqq\prod_{e\in E(\Gamma)} \delta_{e}^{\kappa_{v,e}}\,,$$
where $\kappa_{v,e} \coloneqq \max (\kappa(h_v(e)),0)$. Let $N(\Gamma)$ be the (local) ideal (in the variables $\delta_e$, as in our setup) generated by the set of elements $\delta_v$ for all vertices $v\in V(\Gamma)$. It was shown in \cite{nguyen} that these $N(\Gamma)$ can be patched together to a sheaf of ideals $N$ globally defined on $\overline{\ca M}_{0,n}$. This can be seen in the same way as in \ref{lm:Jcompatible}, and we will discuss this in more generality in \ref{rem:blowup-ivc} for arbitrary genera.
Before proceeding, we illustrate Nguyen's ideal and our ideal in the following example.
\begin{example}\label{ex:blowup-comparison}
Consider a (partially ordered) dual graph $\Gamma$ as illustrated in \ref{fig:blowupgraph}, with all slopes $\kappa=1$.
\begin{figure}[tb]
\[
\begin{tikzpicture}[baseline=0pt, vertex/.style={circle,draw,font=\huge,scale=0.5, thick}]
\node[vertex] (v0) at (0,0) {$v_0$};
\node[vertex] (v1) at (-2,2) {$v_1$};
\node[vertex] (v2) at (0,2) {$v_2$};
\node[vertex] (v3) at (2,2) {$v_3$};
\draw[thick] (v0) to node[near start, below right]{$\kappa_h=-1$} node[near end, below right]{$\kappa_{h'}=1$} node[midway, above left]{$e_3$}(v3);
\draw[thick] (v0) to node[near end, below left]{$e_2$}(v2);
\draw[thick] (v0) to node[near end, below left]{$e_1$} (v1);
\draw (v0) -- ++(270:0.6);
\draw (v1) -- ++(60:0.6);
\draw (v1) -- ++(120:0.6);
\draw (v2) -- ++(60:0.6);
\draw (v2) -- ++(120:0.6);
\draw (v3) -- ++(60:0.6);
\draw (v3) -- ++(120:0.6);
\end{tikzpicture}
\]
\caption{The graph $\Gamma$ of a stratum in $\overline{\ca M}_{0,7}$; the desired slopes $\kappa$ can be obtained, e.g., by using the signature $\mu=(-1^6, 4)$ with the six markings associated to simple poles attached to the vertices $v_1$, $v_2$, and $v_3$.}
\label{fig:blowupgraph}
\end{figure}
Writing $\delta_i \coloneqq \delta_{e_i}$ to lighten notation, we obtain $\delta_{v_0} = \delta_1\delta_2\delta_3$, $\delta_{v_1} = \delta_2\delta_3$, $\delta_{v_2} = \delta_1\delta_3$, and $\delta_{v_3} = \delta_1\delta_2$. In this case Nguyen's ideal $N(\Gamma)$ is given by
$$ N(\Gamma) = (\delta_1\delta_2, \delta_1\delta_3, \delta_2\delta_3, \delta_1\delta_2\delta_3) = (\delta_1\delta_2, \delta_1\delta_3, \delta_2\delta_3)\,. $$
In contrast, our ideal $J(\Gamma)$ is given by
$$ J(\Gamma) = (\delta_1, \delta_2)^{2} (\delta_1, \delta_3)^{2}(\delta_2, \delta_3)^{2} (\delta_1)^4 (\delta_2)^4 (\delta_3)^4\,. $$
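Here, for instance, the factor $(\delta_1, \delta_2)^{2}$ arises from the pair $(v_1, v_2)$: the path between these vertices passes through $v_0$ and contains exactly one half-edge of positive slope in each direction, so that the ideals $I(v_1, v_2)$ and $I(v_2, v_1)$ are $(\delta_1)$ and $(\delta_2)$, hence $J(v_1, v_2) = (\delta_1, \delta_2)$; the exponent is $2 w(v_1) w(v_2) = 2$, since $w(v_1) = w(v_2) = 1$ while $w(v_0) = 2$.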
Blowing up $J(\Gamma)$, each ideal generated by a pair $(\delta_i, \delta_j)$ for $1\le i<j\le 3$ becomes principal, and so does the ideal $N(\Gamma)$. Therefore, the blowup in $J(\Gamma)$ dominates the blowup in $N(\Gamma)$.
\end{example}
Nguyen \cite{nguyen} proved that blowing up $\overline{\ca M}_{0,n}$ along the globally defined sheaf of ideals~$N$ gives the IVC. Indeed, in the example above we see explicitly that locally around the boundary stratum with the dual graph $\Gamma$, the rubber/multi-scale space obtained by blowing up along~$J$ is a further blowup of the IVC.
\par
The situation of this example can also be understood in general, from our viewpoint, which gives an alternative proof of the result of Nguyen.
\par
\begin{proposition}
\label{prop:blowup-0}
The local blowup of $\overline{\ca M}_{0,n}$ near $D_\Gamma$ along the ideal $J(\Gamma)$ makes the ideal $N(\Gamma)$ become principal.
Moreover, in genus zero the blowup of $\overline{\ca M}_{0,n}$ along the ideal sheaf $N$ is the IVC.
\end{proposition}
\par
Before giving the proof, we first reinterpret $N(\Gamma)$ geometrically as follows. If two vertices $v$ and $v'$ are joined by an edge~$e$ (which is necessarily vertical in genus zero), and if $\ell(v)>\ell(v')$, then~$\delta_v$ divides~$\delta_{v'}$. Therefore, the ideal $N(\Gamma)$ is the same as the ideal generated only by the elements $\delta_v$ where $v$ ranges over all vertices that are local maxima of $\Gamma$ (in the sense that all edges from $v$ go down; recall that the level structure is a partial order on the graph, which the multi-scale differential upgrades to a total order). A vertex~$v$ that is a local maximum of~$\Gamma$, such that the corresponding $\delta_v$ generates the ideal $N(\Gamma)$ after the blowup, becomes a global top level vertex. On the other hand, those local maxima~$v$ whose $\delta_v$ terms do {\em not} generate the principal ideal after blowing up $N(\Gamma)$ may not divide each other, and thus remain unordered. This corresponds to the fact that a point in the IVC records actual differentials only on those top level vertices where the stable differential is not identically zero, while on any lower vertex the stable differential is identically zero (though the underlying marked zeros and poles of the twisted differential are still remembered).
\par
\begin{proof}
For the first claim, note that the edge parameter $\delta_e$ appears with the same exponent in the expressions of $\delta_v$ and $\delta_{v'}$ \emph{unless}~$e$ lies in the unique path from $v$ to $v'$, in which case the exponents of $\delta_e$ in $\delta_v$ and $\delta_{v'}$ are the same as those in $I(v,v')$ and $I(v',v)$, respectively. Since blowing up along $J(\Gamma)$ makes the ideal $(I(v,v'), I(v',v))$ principal, it follows that each ideal $(\delta_v, \delta_{v'})$ becomes principal under that blowup. This is to say that after blowing up in~$J(\Gamma)$, one of $\delta_v$ and $\delta_{v'}$ must divide the other. Applying this to all pairs~$v$ and $v'$ shows that after the blowup along $J(\Gamma)$, there is a collection of elements $\delta_{v_1},\dots,\delta_{v_k}$, each of which divides $\delta_v$ for every $v\in V(\Gamma)$. In particular, such $\delta_{v_i}$ and $\delta_{v_j}$ divide each other and thus differ by multiplication by a unit, and the ideal $N(\Gamma)$, after blowing up along $J(\Gamma)$, is generated by any one of these $\delta_{v_i}$, and hence becomes principal.
\par
For the second claim, we will construct the desired morphisms between the blowup and the IVC in both directions that are inverses of each other. These will be constructed locally over each boundary stratum $D_\Gamma$ of $\overline{\ca M}_{0,n}$.
\par
The key point underlying both constructions is that $\delta_v$ for $v\in V(\Gamma)$ is an {\em adjusting parameter} in the sense of \cite[Proposition 11.13]{LMS}, which means that multiplying by $\delta_v^{-1}$ makes the limiting differential become not identically zero on the component corresponding to $v$. To see this, let $D_{e_i}$ be the boundary divisor of $\overline{{\mathcal M}}_{0,n}$ corresponding to a given edge $e_i$ of $\Gamma$. Contracting all edges of $\Gamma$ except $e_i$ produces a graph with two vertices connected by an edge $e_i$, and the family of differentials over it vanishes on the irreducible component corresponding to the lower level vertex, with generic vanishing order $|\kappa_{e_i}|$. If the image of a given vertex $v$ of $\Gamma$ under this contraction is the lower of these two vertices, then over $D_\Gamma$ the differential vanishes identically on the irreducible component corresponding to~$v$. Therefore, $\delta_v$ is precisely the local defining equation with multiplicity equal to the total vanishing order over $D_\Gamma$ of the stable differentials on the irreducible component of the curve corresponding to the vertex $v$. By definition, this implies that $\delta_v$ is an adjusting parameter for $v$.
Now we construct a morphism from the IVC to the blowup of $\overline{\ca M}_{0,n}$ along~$N$ by using the universal property of the blowup. More precisely, as we blow up (in a neighborhood of $D_\Gamma$) the ideal generated by all $\delta_v$, it suffices to check that this ideal becomes principal on the IVC. Recall that the IVC parameterizes pointed stable differentials (of prescribed type) that are not identically zero, where
a stable differential is a section of
the dualizing sheaf over the stable curve, considered up to an overall scaling by a nonzero constant factor. If a vertex $v$ is not a local maximum of $\Gamma$, i.e., if there exists an edge $e$ going up from $v$, then the (stable) differential on the irreducible component corresponding to $v$ is identically zero. Thus given a (non-identically-zero) stable differential, we can declare a local maximum vertex $v$ of $\Gamma$ to be a global maximum if and only if the stable differential on the corresponding irreducible component of the curve is not identically zero. By the preceding discussion, this is precisely to say that all adjusting parameters $\delta_v$ for the global maxima vertices $v$ differ by units, and divide all the other $\delta_v$. Hence the ideal $N(\Gamma)$ pulls back to be principal in the IVC, which induces the map (locally) from the IVC to the blowup of $\overline{\ca M}_{0,n}$ along $N(\Gamma)$.
\par
Next we construct a morphism in the opposite direction, from the blowup of $\overline{\ca M}_{0,n}$ along~$N$ to the IVC, by using the universal property of the Hodge bundle over $\overline{\ca M}_{0,n}$ (twisted by the polar part of the differentials, and projectivized as always).
\par
Consider the universal family of differentials with prescribed zeros and poles over a punctured neighborhood of $D_\Gamma$ in ${\mathcal M}_{0,n}$. We claim that this family of differentials extends to a family of stable differentials over the local blowup of $\overline{\ca M}_{0,n}$ along $N(\Gamma)$. Indeed, for each point in the preimage of $D_\Gamma$ in the blowup, we know the set of global maxima $v_1,\dots,v_k$ of the graph (with $k\ge 1$), where the corresponding adjusting parameters $\delta_{v_1},\dots,\delta_{v_k}$ divide all the other $\delta_v$. It follows that the limiting stable differential will not be identically zero precisely on the irreducible components corresponding to $v_1,\dots,v_k$, and thus in particular not identically zero on the entire stable curve.
By the universal property of the projectivized Hodge bundle, the blowup along $N(\Gamma)$, carrying a family of (non-identically-zero) stable differentials,
locally admits a morphism to this bundle. Moreover, since over the locus of smooth curves this family of differentials coincides with the family of differentials in the given stratum, it follows that the image of the morphism from the blowup to the Hodge bundle is the closure of the stratum, i.e., the IVC. By construction, it is clear that this map is the inverse of the morphism in the other direction.
\end{proof}
\subsection{A blowup description for arbitrary genus}
Recall that the NIVC denotes the normalization of the incidence variety compactification (i.e.~of the closure of the stratum in the Hodge bundle), and let $\Gamma$ be a partially ordered level graph of a boundary stratum in the NIVC. For every vertex $v\in V(\Gamma)$, an \emph{adjusting parameter}~$h_v$ exists by normality and \cite[Proposition~11.13]{LMS}. Recall that by definition this means that multiplying by $h_v^{-1}$ makes the limiting differential in a degenerating family not identically zero on the irreducible component of the stable curve corresponding to $v$. Define an ideal locally around the boundary stratum of the NIVC corresponding to $\Gamma$ by
$$ J(\Gamma) \coloneqq \prod_{(v, v') \in V(\Gamma) \times V(\Gamma)} (h_v, h_{v'})^{w(v) w(v')}\,, $$
where the product runs over {\em all ordered} pairs of vertices (including the case $v = v'$) and where $w(v) \coloneqq 2g(v) -2 + {\rm valence} (v)$. Since the blowup in $J(\Gamma)$ makes the adjusting parameters comparable for any two vertices, the (local) blowup of the NIVC along $J(\Gamma)$ is orderly, and by the same argument as in the proof of \cite[Theorem~14.8]{LMS} it follows that the normalization of this blowup is isomorphic to the moduli space of multi-scale differentials.
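For example, if $\Gamma$ consists of two vertices $v_1, v_2$ of genera $g_1, g_2 \ge 1$ joined by a single edge, then $w(v_i) = 2g_i - 1$, and the product over the four ordered pairs of vertices gives
\bes
J(\Gamma) \= (h_{v_1})^{w(v_1)^2}\, (h_{v_2})^{w(v_2)^2}\, (h_{v_1}, h_{v_2})^{2 w(v_1) w(v_2)}\,,
\ees
so that blowing up $J(\Gamma)$ amounts, up to the principal factors, to blowing up $(h_{v_1}, h_{v_2})$, which makes the two adjusting parameters comparable.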
Finally we show that the local ideals $J(\Gamma)$ are compatible under degeneration, so that they form a global sheaf of ideals~$J$ on the NIVC. For this, again it is enough to check compatibility under an edge contraction (recalling that, unlike in the genus zero case, the edge can be a loop). First, in the case of a loop: contracting a loop at a vertex $v$ increases $g(v)$ by one and decreases the valence of $v$ by two, so by the formula for $w(v)$ the ideal $J(\Gamma)$ is unchanged. Suppose now that two distinct vertices $v_1$, $v_2$ of $\Gamma$ connected by an edge $e$ are merged, when $e$ is contracted, to a vertex $v'$ in the resulting graph $\Gamma'$. Note that this contraction makes $h_{v_1}\sim h_{v_2}\sim h_{v'}$ modulo units. Moreover, $w(v') = w(v_1) + w(v_2)$. Then for any vertex~$u$ different from $v_1, v_2, v'$ we have
$$ (h_{v_1}, h_u)^{2w(v_1)w(u)} (h_{v_2}, h_u)^{2w(v_2)w(u)} \,\sim\, (h_{v'}, h_u)^{2 w(v')w(u)}\,, $$
$$ (h_{v_1}, h_{v_1})^{w(v_1)^2} (h_{v_2}, h_{v_2})^{w(v_2)^2} (h_{v_1}, h_{v_2})^{2w(v_1)w(v_2)} \, \sim \, (h_{v'}, h_{v'})^{w(v')^2}\,. $$
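Both relations follow directly from the stated equivalences: since $h_{v_1}\sim h_{v_2}\sim h_{v'}$ modulo units, we have $(h_{v_1}, h_u) \sim (h_{v_2}, h_u) \sim (h_{v'}, h_u)$ and $(h_{v_1}, h_{v_2}) \sim (h_{v'})$, and the exponents on both sides match because
\bes
2w(v_1)w(u) + 2w(v_2)w(u) \= 2w(v')w(u)\,, \qquad
w(v_1)^2 + 2w(v_1)w(v_2) + w(v_2)^2 \= w(v')^2
\ees
by the additivity $w(v') = w(v_1) + w(v_2)$.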
It follows that $J(\Gamma')$ specializes to $J(\Gamma)$. Therefore, the local ideals $J(\Gamma)$ can be glued to a global sheaf of ideals $J$. In summary, we have proven the following theorem.
\begin{theorem}
\label{thm:blowup-g}
The main component ${\mathbb{P}}(\mathcal{MS}_\mu)$ of ${\mathbb{P}}(\GMS)$ is the normalization of the blowup of the NIVC in the ideal sheaf $J$; in particular, its coarse moduli space is a projective variety.
\end{theorem}
\par
\begin{remark}
\label{rem:blowup-ivc}
For arbitrary genera one can describe the IVC (and then also the rubber and multi-scale spaces) by blowing up the normalization of the closure of the stratum in the Deligne--Mumford compactification $\overline{\ca M}_{g,n}$, which we denote by NDM. The argument is similar to the one in the proof of~\ref{prop:blowup-0}. Since the NDM is normal, for every vertex $v$ of $\Gamma$ an adjusting parameter $h_v$ for $v$ exists as in \cite[Proposition 11.13]{LMS}. Then the blowup of the NDM along the (local) ideals $(h_{v_1}, \ldots, h_{v_k})$, where $v_1, \ldots, v_k$ are local maximum vertices of $\Gamma$, carries a family of stable differentials and hence it maps to the IVC by the universal property of the Hodge bundle. The inverse map from the IVC to this blowup is similarly obtained by using the universal property of the blowup.
To see that these local ideals patch together to form a global sheaf of ideals, suppose that a local maximum vertex $v_1$ is joined to a lower vertex $v_0$ by an edge $e$. Suppose further that $e$ is contracted so that $v_1$ and $v_0$ merge into one vertex $v'_1$, which makes $h_{v_1} \sim h_{v'_1}$ modulo units. If $v'_1$ remains a local maximum, then $(h_{v_1}, h_{v_2}, \ldots, h_{v_k}) = (h_{v'_1}, h_{v_2}, \ldots, h_{v_k})$ after contracting $e$, so these ideals match. If $v'_1$ is not a local maximum, then there exists another local maximum vertex, say $v_2$, that is connected to $v'_1$ by a downward path (in terms of the partial order of $\Gamma$). It follows that $h_{v_2}$ divides $h_{v_1} \sim h_{v'_1}$ and hence $(h_{v_1}, h_{v_2}, \ldots, h_{v_k}) = (h_{v_2}, \ldots, h_{v_k})$ after contracting $e$, so these ideals still match.
\end{remark}
\section{From logarithmic to multi-scale}\label{sec:log_to_GMS}
In this section we construct the morphism of functors~$F\colon \cat{Rub}_{\mathcal L_\mu} \to
\GMS$ whose existence is claimed in~\ref{intro:mainiso}, and then prove the theorem. At the end of the section we include two related results about describing the multi-scale space as a Zariski closure and describing a morphism from the rubber space to the Hodge bundle, which can be of independent interest.
\par
Let $(X/B, \beta\in \Gamma(X,\overline{\mathsf{M}}_X^\mathsf{gp}),
\phi\colon {\mathcal O}_X(\beta) \iso {\mathcal L_\mu}) \in \cat{Rub}'_{{\mathcal L_\mu}}$.
Recall that the dash on $\cat{Rub}$ indicates that we are working with
the minimal log structure as described in \ref{sec:minimal_log_str},
and that we work always with \emph{saturated} log structures.
\par
We assume moreover for now, and for most of this section, that $X/B$ is nuclear,
and explain at the end why the functor glues to general families.
\par
\subsection{The enhanced level graph} \label{eq:enhLG}
The first piece of data needed to build the $F$-image of $(X/B,\beta,\phi)$ is an
enhanced level graph.
As the underlying graph~$\Gamma$, we simply take the dual graph of the curve fiber over the closed stratum
of~$B$. The level structure, given in terms of a normalized level function,
comes from $\beta\in \overline{\mathsf{M}}_X^\mathsf{gp}(X)$ as explained in~\ref{eq:defell}. The definition
of the enhancement~$\kappa$ is given in~\ref{eq:defkappa}, where the divisibility
required for this definition has been proven in~\Cref{le:divisibility}. The stability
condition just comes from the fact that we work with stable curves.
\par
Given a vertex $v$ and the corresponding component $X_v$ of the central fiber, the
admissibility of $\kappa$ comes down to the equalities
\begin{equation}} \def\ee{\end{equation}} \def\bes{\begin{equation*}} \def\ees{\end{equation*} \label{eq:slopeadmiss}
2g(v) - 2 + \#H'(v) - \sum_{j\mapsto v} m_j \= \deg({\mathcal L_\mu}|_{X_v})
\= \sum_{h \mapsto v} \kappa_h.
\ee
The first equality is immediate from the definition of $\ca L_\mu$, and the second comes from the isomorphism $\phi$ and a computation of the degree of $\ca O_X(\beta)$ on the component $X_v$ presented in \ref{lem:Obetarestriction}.
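Spelled out, the first equality combines two standard degree computations on the component $X_v$: the relative dualizing sheaf restricts with degree $2g(v)-2+\#H'(v)$, each non-leg half-edge at $v$ contributing $+1$, while the twist by $-\sum_k m_k z_k$ contributes $-m_j$ for each marking $j$ lying on $X_v$, so that
\bes
\deg({\mathcal L_\mu}|_{X_v}) \= \deg(\omega_{X/B}|_{X_v}) - \sum_{j\mapsto v} m_j \= 2g(v) - 2 + \#H'(v) - \sum_{j\mapsto v} m_j\,.
\ees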
The dual graph $\Gamma_{b'}$ of the fiber over a general $b'$ (possibly
outside the closed stratum) comes with a level structure obtained from $\Gamma$ by
undegeneration (as defined in \ref{sec:tori}), by the Key Property~(4) of nuclear log curves
from \ref{sec:unpacking_rub_definition}. Constructing the rest of the data of a generalized multi-scale differential requires more work, which we now begin.
\subsection{Logarithmic splittings and rotations}
We write $\tilde P = \bb N\Span{p_{-1},\dots, p_{-N}}$ as
in \ref{sec:minimal_log_str}.
\par
\begin{df}\label{df:logsplitting}
A \emph{log splitting} is a map
\begin{equation}
\lspl \colon \tilde P \to \mathsf{M}_B
\end{equation}
whose composition with the canonical map $\mathsf{M}_B \to \overline{\mathsf{M}}_{B,b}$ is
the map $\psi\colon \tilde P \hookrightarrow \overline{\mathsf{M}}_{B,b}$ from~\ref{eq:hom_psi}
(recall that we work throughout this section with minimal objects).
\par
The \emph{simple log level rotation torus} $T^s_{\mathrm{log}}$, abbreviated
\emph{simple LLRT}, is the set of log splittings.
\end{df}
\par
\begin{remark}
Let us unpack the simple log level rotation torus definition a bit. Recall our key
exact sequence~\ref{eq:stdsequence}. The presence of the $\mathsf{gp}$ is not so
important, as we work always with integral monoids (i.e. monoids which inject
into their groupifications).
Consequently, a choice of a splitting is essentially
a choice of an invertible function on $B$ (which we think of as a scalar) for every level below $0$. Pre-composing $\tilde \psi$ with the map $g$ from \ref{eq:g_delta_e} and using \ref{lem:gdeltapsi}, we then also obtain a lift of the map $\delta$, i.e., a choice of a scalar for each edge. These must satisfy appropriate
compatibility equations, and the saturation condition also imposes the existence
of certain roots.
\end{remark}
\par
\begin{df}
The \emph{simple log rotation group} is the group
\[\on{Hom}_{\rm mon}(\tilde P, {\mathcal O}_B^\times(B))
\= \on{Hom}_{\mathsf{gp}}(\tilde P^{\mathsf{gp}}, {\mathcal O}_B^\times(B))\,,\]
where the identification stems from the universal property of the
groupification.\footnote{Note that there is also a (non-simple) log rotation group, consisting of the set of compatible choices of elements in ${\mathcal O}_B^\times(B)$ for all $e \in E^v$ and the elements $\sigma_i = \beta(v_i) - \beta(v_{i-1})$. Since this non-simple group will not be needed in the following, we don't give a formal definition.}
\end{df}
\par
We define an action of an element $\phi$ of the simple log rotation group on
the simple LLRT by the formula
\begin{equation}} \def\ee{\end{equation}} \def\bes{\begin{equation*}} \def\ees{\end{equation*} \label{eq:logrotaction}
(\phi \cdot \lspl)(p) \coloneqq \phi(p)\lspl(p) \quad \text{for $p \in
\tilde P$}\,.
\ee
\par
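Note that this action indeed produces again a log splitting: since the unit $\phi(p) \in {\mathcal O}_B^\times(B)$ maps to $0$ in the characteristic sheaf, the composition with the canonical map $\mathsf{M}_B \to \overline{\mathsf{M}}_{B,b}$ is unchanged,
\bes
\overline{(\phi \cdot \lspl)(p)} \= \overline{\phi(p)\,\lspl(p)} \= \overline{\lspl(p)} \= \psi(p) \qquad \text{for all } p \in \tilde P\,.
\ees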
\begin{lemma} \label{le:pseudotorsor}
Via the action~\ref{eq:logrotaction}, the simple LLRT is
either empty, or a torsor for the simple log rotation group. After possibly shrinking $B$, we can ensure the existence of a log splitting.
\end{lemma}
\par
Recall that a pseudo-torsor is a space with a free and transitive action which,
unlike a torsor, may be empty (here this happens if the base~$B$ is too large
to support the appropriate sections). Thus the above lemma says that the
simple LLRT is a pseudo-torsor.
\par
\begin{proof}
In the exact sequence
\[
1 \to H^0(B, {\mathcal O}_B^\times) \to H^0(B,\mathsf{M}_B^\mathsf{gp}) \to \underbrace{H^0(B, \overline{\mathsf{M}}_B^\mathsf{gp})}_{=\overline{\mathsf{M}}_{B,b}^\mathsf{gp}} \to H^1(B, {\mathcal O}_B^\times) \to \cdots
\]
if all elements $\psi(p_i)=\tau_i \in \overline{\mathsf{M}}_{B,b}$ map to zero in $H^1(B, {\mathcal O}_B^\times)$, then they have preimages in $H^0(B,\mathsf{M}_B)$ (i.e. there exists a log splitting). Any such choices of preimages differ precisely by elements in $H^0(B, {\mathcal O}_B^\times)$, which together define an element of the simple log rotation group. Thus the action of this group is free and transitive.
Finally, if the elements $\tau_i \in \overline{\mathsf{M}}_{B,b}$ do \emph{not} map to zero in $H^1(B, {\mathcal O}_B^\times) = \mathrm{Pic}(B)$, we can always find an open neighborhood $B_0$ of $b \in B$ on which these $N$ line bundles are trivial after all. Then on $B_0$, the long exact sequence and the argument above show the existence of a lift, finishing the proof.
\end{proof}
\subsection{Log viewpoint on smoothing and rescaling parameters}
In this subsection we construct the rescaling ensemble from the choice of
a log splitting, and provide auxiliary statements about the smoothing and
rescaling functions contained in the ensemble.
\par
Let $\lspl \colon \tilde P \to \mathsf{M}_B$ be a log splitting. Recall the definition
of the maps $g\colon \bb N\Span{E^v} \to \tilde P$ from \ref{eq:g_delta_e} and
of $\alpha\colon \mathsf{M}_B \to {\mathcal O}_B$ from the definition of a log scheme.
\par
\begin{df}
The \emph{smoothing parameter associated
to a vertical edge $e\in E^v(\Gamma)$ by the log splitting $\lspl$} is
\begin{equation}\label{eq:logfedef}
f_e \coloneqq (\alpha \circ \widetilde{\psi} \circ g)(e)\,.
\end{equation}
Fix a level $i\in L(\Gamma)$. The \emph{level parameter} and \emph{rescaling
parameter} associated to $i$ by $\lspl$ are
\begin{equation}\label{eq:logstdef}
t_i \coloneqq (\alpha \circ \widetilde{\psi})(p_i) \quad \text{and} \quad
s_i \coloneqq (\alpha \circ \widetilde{\psi})(a_i p_i)\,. \qedhere
\end{equation}
\end{df}
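Since $\widetilde{\psi}$ and $\alpha$ are monoid homomorphisms, the latter into the multiplicative monoid of ${\mathcal O}_B$, the level and rescaling parameters are related by
\bes
s_i \= (\alpha \circ \widetilde{\psi})(a_i p_i) \= \bigl((\alpha \circ \widetilde{\psi})(p_i)\bigr)^{a_i} \= t_i^{a_i}\,.
\ees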
The collection of functions ${\boldsymbol{t}} = (t_i)_{i\in L(\Gamma)}$ defines a map
$R^s\colon B \to \overline{T}^s_\Gamma$ to the closure of the simple level rotation
torus, which is just ${\mathbb{C}}^N$, and a rescaling parameter $s_i = r_i \circ
\pi \circ R^s$ in the notation of \ref{sec:germfam}.
\par
\begin{lemma} \label{lem:getsimplescalingens}
The morphism $R^s\colon B \to \overline{T}^s_\Gamma$ defined above is a simple
rescaling ensemble.
\end{lemma}
\begin{proof}
By \ref{def:RescEns} we must verify that the functions $f_e$ from \ref{eq:logfedef} are indeed smoothing parameters for their respective nodes, lying in the correct equivalence class in ${\mathcal O}_B / {\mathcal O}_B^\times$. To see this, consider the following diagram
\[
\begin{tikzcd}
\bb N \Span{E^v} \arrow[r,"g"] \arrow[ddd,"\delta"] & \tilde P \arrow[dd,"\tilde \psi"] & & \\
& & {\mathcal O}_B^\times \arrow[ld] &{\mathcal O}_B^\times \arrow[ld] \\
& \mathsf{M}_B \arrow[r,"\alpha"] \arrow[dl] & {\mathcal O}_B \arrow[dl] &\\
\overline{\mathsf{M}}_{B,b} \arrow[r,"\overline \alpha"] & {\mathcal O}_B / {\mathcal O}_B^\times & &
\end{tikzcd}
\]
What we must show is that $f_e = (\alpha \circ \widetilde{\psi} \circ g)(e) \in {\mathcal O}_B$ maps to the class of a smoothing parameter in ${\mathcal O}_B / {\mathcal O}_B^\times$. Now the commutativity of the upper left rectangle follows from \ref{lem:gdeltapsi} and the assumption that $\tilde \psi$ lifts the map $\psi: \tilde P \to \overline{\mathsf{M}}_{B,b}$. On the other hand, the map $\overline \alpha$ is just \emph{defined} to make the lower triangle commute. Then we have
\[
[f_e] \= [(\alpha \circ \widetilde{\psi} \circ g)(e)] \=
\overline \alpha(\delta(e)) \in {\mathcal O}_B / {\mathcal O}_B^\times\,.
\]
The fact that $\delta(e)$ maps to a smoothing parameter for the node associated to $e$ under $\overline \alpha$ is then just a basic property of families of log curves, see point (2) of \ref{sec:unpacking_rub_definition}.
\end{proof}
\subsection{The collection of rescaled differentials}
\label{subset:rescaled_from_log}
By definition of lying in $\cat{Rub}_{\mathcal L_\mu}$, we are given an isomorphism
\begin{equation}
\phi\colon \omega_{X/B}\Big(-\sum_{k=1}^n m_k z_k\Big)\iso {\mathcal O}_X(\beta)\,.
\end{equation}
\par
On the other hand, it follows from the definition of $\psi$ that the element $-\sum_{j=i}^{-1} a_j p_j \in \tilde P^\mathsf{gp}$ maps to $\beta(v_i) \in \overline{\mathsf{M}}_{B,b}^\mathsf{gp}$ under $\psi$, where $v_i \in V(\Gamma)$ is any vertex on level $i$. Using the log splitting $\tilde \psi$, we obtain the elements
\[
o_i \coloneqq \tilde \psi\Bigl(- \sum_{m=i}^{-1} a_m p_m \Bigr) \in \mathsf{M}_B^\mathsf{gp}
\]
in the preimage of $\beta(v_i)$. Since this preimage can be identified as the complement of the zero section in ${\mathcal O}_B(\beta(v_i))$, we can see $o_i$ as a nowhere-vanishing section of ${\mathcal O}_B(\beta(v_i))$.
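Unwinding the definition, the elements $o_i$ at consecutive levels are related in $\mathsf{M}_B^\mathsf{gp}$ by
\bes
o_i \= o_{i+1} \cdot \tilde \psi(a_i p_i)^{-1} \qquad \text{for } i < 0\,,
\ees
where we set $o_0$ to be the unit section (the empty sum, for the top level); this is the form in which these elements enter the compatibility checks below.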
\par
Finally, writing $U_i \subseteq X$ for the open complement of the locus $X_{>i}$ of components of level strictly above $i$, we claim that there is a well-defined map
\begin{equation} \label{eqn:betavitobeta}
w_i: \pi^* {\mathcal O}_B(\beta(v_i))|_{U_i} \to {\mathcal O}_X(\beta)|_{U_i}\,.
\end{equation}
Indeed, the left hand side is the line bundle on $U_i$ associated to the piece-wise linear function which is \emph{constant}, equal to $\beta(v_i)$. Since we removed $X_{>i}$, this function dominates the function $\beta$ on the right, so we have a map as desired. Thus $w_i(\pi^* o_i)$ gives a section of ${\mathcal O}_X(\beta)$ on $U_i$, and we define
\begin{equation}} \def\ee{\end{equation}} \def\bes{\begin{equation*}} \def\ees{\end{equation*} \label{eq:defomegafromlogspl}
\omega_{(i)} \,\coloneqq \, \phi^* w_i(\pi^* o_i) \in H^0\Bigl(U_i, \omega_{X/B}\Big(-\sum_{k=1}^n m_k z_k\Big) \Bigr)\,.
\ee
\par
We check that $\omega_{(i)}$ satisfies the conditions in~\ref{def:collRD} and that the smoothing and rescaling parameters $f_e$, $s_i$ defined in \ref{eq:logfedef} and~\ref{eq:logstdef} (and thus the simple rescaling ensemble~$R^s$) are compatible with these generalized rescaled differentials.
\begin{enumerate}
\item[(1)]
For any levels $j < i <0 $, there is a natural map of line bundles
${\mathcal O}_B(\beta(v_i)) \to {\mathcal O}_B(\beta(v_j))$.
On the level of sections we then have
\[
o_i = \tilde \psi(- \sum_{m=i}^{-1} a_m p_m) = \tilde \psi(- \sum_{m=j}^{-1} a_m p_m) \cdot \prod_{m=j}^{i-1} \tilde \psi(a_m p_m) \mapsto o_j \cdot \prod_{m=j}^{i-1} \tilde \psi(a_m p_m)\,.
\]
Via the isomorphism $\phi^*$, and using that $s_m = \alpha(\tilde \psi(a_m p_m))$, this becomes the desired equality $\omega_{(i)} = \omega_{(j)} \cdot \prod_{m=j}^{i-1} s_m$. That
$s_i$ vanishes at the closed point of $B$ follows since the
map of line bundles is the zero map when restricted to the fibers over the
closed point of $B$.
\item[(2 \& 3)] On the normalization $Y_i$ of all components of the special fiber $X_b$ sitting at level $i$, we have (see \ref{lem:Obetarestriction})
\bes
{\mathcal O}_X(\beta)|_{Y_i} \= \pi^*{\mathcal O}_B(\beta(v_i)) \otimes_{{\mathcal O}_{Y_i}}
{\mathcal O}_{Y_i}\Big(\sum_h \kappa_h h\Big)\,,
\ees
where the sum is over all non-leg half edges~$h$ attached to the vertices
at level $i$, and $\kappa_h$ is the slope.
Pulling back via $\phi^*$, the line bundle on the left becomes
\[
\omega_{X_b}\Big(-\sum_{k=1}^n m_k z_k\Big)|_{Y_i} \=
\omega_{Y_i}\Big(-\sum_{k=1}^n m_k z_k + \sum_h h\Big)\,.
\]
Rearranging the equality of line bundles, we then get
\[
\omega_{Y_i}\Big(-\sum_{k=1}^n m_k z_k - \sum_h (\kappa_h -1) h\Big) \cong \pi^*{\mathcal O}_B(\beta(v_i))\,.
\]
Seeing $\omega_{(i)}$ as a meromorphic section on the left, it then corresponds to the nowhere vanishing section $\pi^* o_i$ on the right. Thus it extends to all of $Y_i$ on the left hand side. But then seeing this extension as a meromorphic section of $\omega_{Y_i}$, it has the desired order $m_k$ at the marked points $z_k$ and $\kappa_h-1$ at the preimage of the node associated to $h$.
\end{enumerate}
\subsection{Prong-matchings}
To recall the notion of a prong-matching, consider a vertical edge $e \in E^v$
and let $B_e \hookrightarrow B$ be the closed subscheme of $B$ over which the
node~$e$ persists, i.e. the vanishing locus of the smoothing parameter~$f_e$.
\par
The sections~$q^\pm$ of the two preimages of the node identify $B_e$ as
a subscheme of the blowup of $X \times _B B_e$ along the section corresponding
to $e$. Recalling \ref{eq:defNe}, we let ${\mathcal N}^\vee_e \= (q^+)^*\omega_{X_+}\otimes (q^-)^*\omega_{X_-}$ be the corresponding line bundle on $B_e$. Then a local prong-matching at $e$ is a section
$\sigma_e$ of ${\mathcal N}_e^\vee$ such that $\sigma_e^{\kappa_e}(\tau_e)=1$ for the section $\tau_e \in {\mathcal N}_e^{\kappa_e}$ defined in \ref{lem:prong_matching_comparison}.
\par
To identify this notion in the logarithmic context, recall that we have the element $\delta(e) \in \overline{\mathsf{M}}_{B,b}$. Then the bundle ${\mathcal N}_e^\vee$ has an interpretation as follows:
\par
\begin{lemma} \label{le:nbiso}
There are canonical isomorphisms of line bundles
\begin{equation}\label{eq:O_delta_eq_W}
{\mathcal O}_B(\delta(e))|_{B_e} \= {\mathcal N}^\vee_e
\end{equation}
for each edge~$e$.
Moreover, let $\widehat{f} \in \mathsf{M}_B$ be an element mapping to $\delta(e) \in \overline{\mathsf{M}}_{B,b}$, so that we can see it as a section of ${\mathcal O}_B(\delta(e))$.
Then the function $f=\alpha(\widehat{f}) \in {\mathcal O}_B$ is a smoothing parameter for the node associated to $e$. Let $u,v$ be local coordinates around the node on $X$ such that the local ring at the node is the localization of ${\mathcal O}_B[u,v]/(uv-f)$. Then the isomorphism \ref{eq:O_delta_eq_W} sends the section
$\widehat{f}|_{B_e} \in {\mathcal O}_B(\delta(e))|_{B_e}$ to
$$
du \otimes dv \in {\mathcal N}_e^\vee \= (q^+)^*\omega_{X_+}\otimes (q^-)^*\omega_{X_-}.
$$
\end{lemma}
\par
\begin{proof}
Since both sides commute with base change, it is enough to check this in the
universal case, in which the log structure is divisorial coming from the
boundary (and the map $\alpha$ of the log structure is injective, so there
are no non-trivial automorphisms of the log structure). Over a versal
deformation $R$, the local equation of the node is given by $R[u,v]/(uv-f)$,
where $f \in R$ is an element corresponding to $\delta(e)$. So ${\mathcal O}_B(\delta(e))$
is canonically identified with the ideal sheaf generated by~$f$ in $R$
(cf.\ \ref{appendix:sign_conventions}). On the other hand, $\ca{N}^\vee_e$
is canonically identified with the conormal bundle in $R$ to the locus $f=0$
(see \cite[Section~XIII.3]{acgh2}) and thus agrees with ${\mathcal O}_B(\delta(e))|_{B_e}$. Tracing through the constructions of these canonical identifications yields the second part of the lemma; alternatively this can be seen as a very slight generalization of \cite[Section 4]{edixhoven1998neron}, where $c(x)$ corresponds to the element $du \otimes dv$ and $\pi^{x(e)}$ to the element $f$.
\end{proof}
\par
Let $\lspl\colon \tilde P \to \mathsf{M}_B$ be a log splitting, and let $e$ be a
vertical edge. By \ref{lem:gdeltapsi} the element $(\lspl \circ g)(e) \in \mathsf{M}_B$
maps to $\delta(e) \in \overline{\mathsf{M}}_{B,b}$ and hence lies in ${\mathcal O}_B^\times(\delta(e))
\subseteq \mathsf{M}_B$ (by the definition of this torsor via~\ref{eq:stdsequence}).
Applying the isomorphism
of \ref{eq:O_delta_eq_W}, we thus obtain a section of ${\mathcal N}^\vee_e$.
\begin{df} \label{def:logPM}
We call the section $\sigma_e = (\lspl \circ g)(e)|_{B_e} \in H^0(B_e, {\mathcal N}^\vee_e)$ the \emph{local prong-matching $\sigma_e = \sigma_e(\lspl)$
at~$e$ determined by the log splitting}. The collection
${\boldsymbol{\sigma}} = (\sigma_e)_{e \in E^v(\Gamma)}$ of these is called the
\emph{global prong-matching determined by the log splitting}.
\end{df}
\par
There are two compatibility statements to check for this definition: to
get a prong-matching, see the discussion after \ref{eq:defNe}, and to make this
part of a multi-scale differential, see \ref{def:germMSD}~(iv).
\par
\begin{lemma}
The prong-matching ${\boldsymbol{\sigma}}$ determined by any log splitting is indeed a
prong-matching in the sense of \ref{sec:PM}, i.e.\ the condition
$\sigma_e(v^+\otimes v^-)^{\kappa_e} = 1$
holds for each edge $e \in E^v(\Gamma)$, for each pair $(v^+,v^-)$ of an incoming and outgoing prong at~$e$.
\end{lemma}
\par
\begin{proof}
Assume that the vertical edge $e$ connects levels $i>j$ in $\Gamma$. Via the
translation of the notion of prong-matching given by \ref{df:PMfinal}, it is
equivalent to show that $\sigma_e^{\kappa_e}(\tau_e) = 1$, where~$\tau_e$ is the
section of ${\mathcal N}_e^{\kappa_e}$ obtained as
$\tau_e = (q^+)^* \omega_{(i)}^{-1} \otimes (q^-)^* \omega_{(j)}$.
\par
On the other hand, the differentials $\omega_{(i)}$ and $\omega_{(j)}$ are also determined
in \ref{eq:defomegafromlogspl} by the formulae
\[
\omega_{(i)} \= \phi^* w_i(\pi^* \tilde \psi(- \sum_{m=i}^{-1} a_m p_m)) \;\;\;
\text{and} \;\;\;\omega_{(j)} \= \phi^* w_j(\pi^* \tilde \psi(- \sum_{m=j}^{-1} a_m p_m))\,.
\]
Putting this into the formula for $\tau_e$, the pullbacks $(q^+)^*, (q^-)^*$ cancel
the pullback $\pi^*$. Interpreting~$\tau_e$ as a section of ${\mathcal O}_B(-\kappa_e \delta(e))$ via \ref{eq:O_delta_eq_W} we thus have
\[
\tau_e \= \tilde \psi \Bigl(\sum_{m=i}^{-1} a_m p_m - \sum_{m=j}^{-1} a_m p_m \Bigr)
\= \tilde \psi \Bigl(- \sum_{m=j}^{i-1} a_m p_m \Bigr)
\= \tilde \psi \left(- \kappa_e g(e) \right) \= \sigma_e^{- \kappa_e}\,.
\]
Here in the second to last equality we used the definition of $g$ from \ref{eq:g_delta_e}. This finishes the proof that $\sigma_e^{\kappa_e}(\tau_e)=1$, and thus that $\sigma_e$ is a local prong-matching.
\end{proof}
\par
\begin{lemma}
Let $\lspl\colon \tilde P \to \mathsf{M}_B$ be a log splitting and $e$ a non-semi-persistent vertical node (i.e. $f_e^{\kappa_e} \neq 0$). Then the local prong-matching determined by $\lspl$ is equal to that induced in \ref{rem:induced_prong_matching}.
\end{lemma}
\par
\begin{proof}
The local prong-matching $\sigma_e$ of \ref{rem:induced_prong_matching} is constructed
by writing the local equation of the node as $uv=f_e$, and setting
$$\sigma_e \,\coloneqq \,du \otimes dv \in {\mathcal N}_e^\vee
\= (q^+)^*\omega_{X_+}\otimes (q^-)^*\omega_{X_-}. $$
On the other hand, the local prong-matching $\widetilde\sigma_e$ associated to~$e$
by $\lspl$ is given by applying the isomorphism in \ref{le:nbiso} to the
element $(\lspl \circ g)(e)$.
\par
Recalling that $f_e = (\alpha \circ \lspl \circ g)(e)$, the desired equality $\sigma_e=\widetilde\sigma_e$ is then the second part of \ref{le:nbiso}.
\end{proof}
\subsection{Morphism of functors from rubber to multi-scale}
\label{sec:functor_rub_to_gms}
We put the above together to build a morphism of functors, first when the base
is strictly local. We start with a family $(X/B, \beta\in \overline{\mathsf{M}}_X(X), \phi)$,
which we take to have minimal saturated log structure. This immediately gives us the
structure of an enhanced level graph. We \emph{choose} a log splitting
$\lspl \colon \tilde P \to \mathsf{M}_B$.
This determines a simple rescaling ensemble, a collection of rescaled differentials, and
induces local prong-matchings at each node. Hence we have a simple multi-scale differential.
We next claim that a different choice of log splitting yields an isomorphic simple multi-scale differential, together with a choice of isomorphism. Indeed, for a sufficiently small~$B$, by \ref{le:pseudotorsor} any two log splittings differ by the action of the simple log rotation group, and one checks easily that this action corresponds to the action of the simple level rotation torus.
\par
It is clear from the constructions that the above map is independent of
choices and is compatible with shrinking the base~$B$. By descent it
then glues to a global morphism of functors $F\colon\cat{Rub}_{\mathcal L_\mu}
\to \GSMS$.
\par
\subsection{Showing the map of functors induces an isomorphism}\label{sec:ess_surj}
The above construction gives a morphism from the logarithmic space to the
multi-scale space. In this section we complete the proof of \ref{intro:mainiso}
by showing that this functor induces an isomorphism.
\par
\begin{theorem}
The morphism
\begin{equation}
F \colon \cat{Rub}_{{\mathcal L_\mu}} \to \GSMS
\end{equation}
is an isomorphism.
\end{theorem}
\begin{proof}
Given a strictly local scheme $B$ and a map $B \to \GSMS$, we show that there exists a unique map $B \to \cat{Rub}_{\mathcal L_\mu}$ making the diagram
\begin{equation}
\begin{tikzcd}
B \arrow[d, dashed] \arrow[dr] & \\
\cat{Rub}_{\mathcal L_\mu} \arrow[r] & \GSMS
\end{tikzcd}
\end{equation}
commute.
Let $(\pi\colon X \to B, {\boldsymbol{z}}, \Gamma, R^s, {\boldsymbol{\omega}}, {\boldsymbol{\sigma}})$ be the simple multi-scale differential corresponding to $B \to \GSMS$. Given $i \in L(\Gamma)$, we write $t_i \in {\mathcal O}_B(B)$ for the composite $B \to \overline T^s \to \bb C$ of $R^s$ with the appropriate coordinate projection.
Let $\mathsf{M}_B$ be the minimal log structure making $X/B$ into a log curve; in particular its characteristic monoid $\overline{\mathsf{M}}_{B,b}$ is canonically identified with the free monoid $\bb N \Span{E}$ on the edges of $\Gamma$. For each edge $e$ choose a section $\mathfrak f_e \in \mathsf{M}_B(B)$ lifting $e \in \overline{\mathsf{M}}_{B,b}$ with $\alpha(\mathfrak f_e) = f_e$, yielding a splitting
\begin{equation*}
\mathfrak f \colon \overline{\mathsf{M}}_{B,b} \to \mathsf{M}_B\,.
\end{equation*}
Denote by $\tilde P \coloneqq \bb N\Span{p_{-1}, \dots, p_{-N}}$ the free monoid on the levels, and define
\begin{equation}
t\colon \tilde P \to {\mathcal O}_B; \;\;\; p_i \mapsto t_i\,,
\end{equation}
and define
\begin{equation}
t'\colon \tilde P \oplus \bb N\Span{E^h}\to {\mathcal O}_B\,,
\end{equation}
acting as $t$ on the first summand and as $\alpha \circ \mathfrak f$ on the second.
Let then
\begin{equation}
g'\colon \bb N\Span{E} \to \tilde P \oplus \bb N\Span{E^h}
\end{equation}
be the map given by $g$ on the vertical edges and by the identity on the horizontal edges.
\par
The equalities
\begin{equation}
f_e \= \prod_{i = \ell(e^-)}^{\ell(e^+) - 1} t_i^{\frac{a_i}{\kappa_e}}
\end{equation}
imply that the diagram
\begin{equation}
\begin{tikzcd}
\mathsf{M}_B \arrow[r, "\alpha"] & {\mathcal O}_B\\
\overline{\mathsf{M}}_B \arrow[u, "\mathfrak f"] \arrow[r, "g'"] & \tilde P \oplus\bb N\Span{E^h} \arrow[u, "t'"]
\end{tikzcd}
\end{equation}
commutes.
\par
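Concretely, commutativity can be checked on the generators of $\overline{\mathsf{M}}_B = \bb N \Span{E}$: for a horizontal edge $e$ both composites give $\alpha(\mathfrak f_e) = f_e$ by definition, while for a vertical edge $e$ we have $\kappa_e\, g(e) = \sum_{i=\ell(e^-)}^{\ell(e^+)-1} a_i p_i$ (cf.\ the computation of $\tau_e$ above), so that
\bes
(t' \circ g')(e) \= \prod_{i = \ell(e^-)}^{\ell(e^+) - 1} t_i^{\frac{a_i}{\kappa_e}} \= f_e \= (\alpha \circ \mathfrak f)(e)\,.
\ees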
Now we define a sheaf of monoids $P$ as the pushout
\begin{equation} \label{eqn:minlogpushout}
\begin{tikzcd}
\mathsf{M}_B \arrow[r] & P\\
\overline{\mathsf{M}}_B \arrow[u, "\mathfrak f"] \arrow[r, "g'"] & \tilde P \oplus\bb N\Span{E^h} \arrow[u]
\end{tikzcd}
\end{equation}
which by the commutativity of the previous diagram comes with a map $\alpha_P\colon P \to {\mathcal O}_B$. One checks easily that $P$ is in fact a log structure on $B$, with characteristic sheaf $\overline P = \tilde P \oplus \bb N \Span{E^h}$ at a point $b \in B$ in the closed stratum. The map $\mathsf{M}_B \to P$ gives $X/(B, P)$ the structure of a log curve, and mapping a vertex $v$ of level $i$ to the element
\begin{equation}
\left(-\sum_{j = i}^{-1} a_j p_j, 0 \right) \in (\tilde P \oplus \bb N \Span{E^h})^\mathsf{gp}
\end{equation}
defines a map $\beta\colon V \to \overline P^\mathsf{gp}$ so that the pair $(X/B, \beta)$ is a (minimal) point of $\cat{Rub}$.
\par
To lift this point to a point of $\cat{Rub}_{\mathcal L_\mu}$, we need to build an isomorphism of line bundles
\begin{equation}
{\mathcal O}_X(\beta) \iso \omega_{ X /B}\Big(-\sum_{i=1}^n m_i z_i\Big)\,.
\end{equation}
We first define this map on the smooth locus; let $p \in B$ and let $x \in X_p$ be a smooth point of $X_p$, lying in the component associated to a vertex $v \in \Gamma$.
Then the image of $\beta$ in $\overline{\mathsf{M}}_{X, x}^\mathsf{gp} = \overline P_p^\mathsf{gp}$ is given by $\beta(v)$.
Our splitting $\overline{P} \to P$ from \ref{eqn:minlogpushout} extends to $\overline{P}^\mathsf{gp} \to P^\mathsf{gp}$, and thus $\beta(v)$ maps to a section of the torsor ${\mathcal O}_B^\times(\beta(v)) \subseteq P^\mathsf{gp}$, i.e.\ to a nowhere-vanishing section of ${\mathcal O}_B(\beta(v))$. Then we define
\begin{equation}
{\mathcal O}_X(\beta)_x \iso \omega_{X/B}\Big(-\sum_{i=1}^n m_i z_i\Big)_x
\end{equation}
to be the unique map sending this section to the differential $\omega_{(\ell(v))}$. That this isomorphism extends over the nodes then follows from the compatibility conditions on prong-matchings by a local calculation.
Unraveling the constructions earlier in this section yields that the constructed point of $\cat{Rub}_{{\mathcal L_\mu}}$ does indeed lie over our starting point in $\GSMS$.
To show that we have constructed an isomorphism of fibred categories, we must finally check that the composites
\begin{equation}
\cat{Rub}_{\mathcal L_\mu}(B) \to \GSMS(B) \to \cat{Rub}_{\mathcal L_\mu}(B)
\end{equation}
and
\begin{equation}
\GSMS(B) \to \cat{Rub}_{\mathcal L_\mu}(B) \to \GSMS(B)
\end{equation}
are isomorphic to the respective identities. This can be done by comparing the actions of the simple LLRT and the simple level rotation torus on the respective spaces; we omit the details.
\end{proof}
\subsection{The multi-scale space as a Zariski closure}
Fix $g$, $n$, and define $\ca L_\mu$ on the universal curve over $\overline{\ca M}_{g,n}$ as before.
\par
\begin{dfx} We define $\cat{Rub}_{\ca L_\mu}^\mathsf{trop}$
to be the fibred category over $\cat{LogSch}_{\overline{\ca M}_{g,n}}$ whose objects are pairs
$(X/B, \beta)$, where $X/B \in \overline{\ca M}_{g,n}$ and $\beta$ is a PL function
satisfying condition~(1) of \ref{def:rub}, and such that the line bundle
$\ca L_\mu(-\beta)$ has multi-degree 0 on each geometric fiber.
\end{dfx}
\par
This is a slight variant on ${\mathbb{P}}(\cat{Rub}_{\ca L_\mu})$. By ignoring the divisibility condition in \ref{def:rub} we are effectively taking the coarse moduli space, and we only require that $\ca L_\mu(-\beta)$ has multi-degree 0, rather than requiring it to be trivial. Since we in particular do not record the data of an isomorphism, we are effectively also taking a $\bb C^*$-quotient.
The map $\cat{Rub}_{\ca L_\mu}^\mathsf{trop} \to \overline{\ca M}_{g,n}$ is birational and representable, but not in general proper. Using stability conditions as in \cite{HMPPS} we can construct a compactification
\bes
\cat{Rub}_{\ca L_\mu}^\mathsf{trop} \to {\mathbb{P}}(\cat{Rub}_{\ca L_\mu}^\theta) \to \overline{\ca M}_{g,n}\,,
\ees
where ${\mathbb{P}}(\cat{Rub}_{\ca L_\mu}^\theta) \to \overline{\ca M}_{g,n}$ is proper, birational, and representable, and $\cat{Rub}_{\ca L_\mu}^\mathsf{trop} \to {\mathbb{P}}(\cat{Rub}_{\ca L_\mu}^\theta)$ is an open immersion; but we do not pursue this here as it would require substantial additional notation.
Let ${\mathbb{P}}(\ca{MS}^0) \subseteq \ca M_{g,n}$ be the locus of smooth curves over which $\ca L_\mu$ admits a non-zero global section; this can be seen as the interior of the locus of (projectivized, generalized) multi-scale differentials.
\begin{theorem}
The Zariski closure of ${\mathbb{P}}(\ca{MS}^0)$ in $\cat{Rub}_{\ca L_\mu}^\mathsf{trop}$ (or, equivalently, in ${\mathbb{P}}(\cat{Rub}_{\ca L_\mu}^\theta)$) is equal to ${\mathbb{P}}(\ca{MS}_\mu)$, the projectivized space of (non-generalized) multi-scale differentials.
\end{theorem}
\begin{proof}
There is a natural closed immersion ${\mathbb{P}}(\cat{Rub}^{\sf{coarse}}_{\ca L_\mu}) \to \cat{Rub}_{\ca L_\mu}^\mathsf{trop}$, and the main component of ${\mathbb{P}}(\cat{Rub}^{\sf{coarse}}_{\ca L_\mu})$ is ${\mathbb{P}}(\ca{MS}_\mu)$. Since the immersion is closed, the Zariski closure of ${\mathbb{P}}(\ca{MS}^0)$ in $\cat{Rub}_{\ca L_\mu}^\mathsf{trop}$ agrees with its closure in ${\mathbb{P}}(\cat{Rub}^{\sf{coarse}}_{\ca L_\mu})$, which is exactly this main component.
\end{proof}
One can obtain the stacky version $\Xi \overline{\ca M}_{g,n}(\mu)$ (of which $\ca{MS}_\mu$ is the relative coarse moduli space) in a similar fashion, replacing $ {\mathbb{P}}(\cat{Rub}_{\ca L_\mu}^\theta)$ with a stacky modification; we leave the details to the interested reader.
\section{Logarithmic rubber maps}\label{sec:rubber}
\subsection{Overview of log divisors}
\label{sec:logoverview}
A log scheme is a pair
\begin{equation}\label{eq:log_str}
(B, \alpha\colon \mathsf{M}_B \to {\mathcal O}_B)\,,
\end{equation}
where $B$ is a scheme, $\mathsf{M}_B$ is a sheaf of monoids on $B$, and $\alpha$ is a map of monoids, where ${\mathcal O}_B$ is equipped with the multiplicative monoid structure, and where we assume that $\alpha$ induces an isomorphism between the submonoids of invertible elements. We denote
$\overline{\mathsf{M}}_B \coloneqq \mathsf{M}_B/\alpha^{-1}({\mathcal O}_B^\times)$, called the \emph{ghost sheaf} or \emph{characteristic sheaf}.
Recall that a monoid $M$ is called \emph{saturated} if the natural map $M \to M^\mathsf{gp}$ to its \emph{groupification} is injective, and if for every $n \in \bb Z_{\ge 1}$ and $g \in M^\mathsf{gp}$ with $ng \in M$ we have $g \in M$.
A log structure is called saturated if all its stalks are saturated. We work throughout only with fine saturated log structures (log structures admitting charts by finitely generated saturated monoids).
If $\beta \in \Gamma(B, \overline{\mathsf{M}}_B^\mathsf{gp})$, then the preimage of $\beta$ in the short exact sequence
\begin{equation}\label{eq:stdsequence}
1 \to {\mathcal O}_B^\times \to \mathsf{M}_B^\mathsf{gp} \to \overline{\mathsf{M}}_B^\mathsf{gp} \to 1
\end{equation}
is a $\bb G_m$-torsor, which we denote by ${\mathcal O}_B^\times(\beta)$. We write ${\mathcal O}_B(\beta)$ for the associated line bundle (see \ref{appendix:sign_conventions} for our sign convention here).
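For orientation, consider the basic divisorial example: if the log structure on $B$ is the one associated to a smooth divisor $D \subseteq B$ and $\beta \in \Gamma(B, \overline{\mathsf{M}}_B)$ is the class of a local equation of $D$, then with our sign convention (see \ref{appendix:sign_conventions}, and cf.\ the proof of \ref{le:nbiso})
\bes
{\mathcal O}_B(\beta) \,\cong\, {\mathcal O}_B(-D)
\ees
is the ideal sheaf of $D$.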
\par
Following \cite{Kato2000Log-smooth-defo}, the formal definition of a \emph{log curve} is a morphism of log schemes\footnote{The reader concerned about the case $g=1$, $n=0$ should rather take log algebraic spaces.} $\pi\colon X \to B$ that is proper, integral, saturated, log smooth, and has geometric fibers which are reduced and of pure dimension 1. This definition is rarely important to us, so rather than explicating the terms involved we present a crucial structure result (to be found in \cite{Kato2000Log-smooth-defo,Gross2013Logarithmic-Gro}). If $\pi\colon X \to B$ is a log curve, then the underlying map of schemes is a prestable curve, and if $x$ is a geometric point of $X$ mapping to a geometric point $b$ of $B$, then exactly one of the following three cases holds:
\begin{enumerate}
\item $x$ is a smooth point of $X$, and the natural map $\overline{\mathsf{M}}_{B,b} \to \overline{\mathsf{M}}_{X,x}$ is an isomorphism;
\item $x$ is a smooth point of $X$, and there is a natural isomorphism $\overline{\mathsf{M}}_{B,b} \oplus \bb N \to \overline{\mathsf{M}}_{X,x}$ (in this case we say that $x$ is a marked point, and we choose a total ordering on our markings to be compatible with the standard definition of marked prestable curves);
\item $x$ is not a smooth point of the fiber $X_b$ (i.e. $x$ is a node), and there is a unique element $\delta_x \in \overline{\mathsf{M}}_{B,b}$ and an isomorphism
\begin{equation}\label{eq:smoothing_param_1}
\overline{\mathsf{M}}_{X,x} \cong \{(u,v) \in \overline{\mathsf{M}}_{B,b}^2 \textrm{ such that } \delta_x \textrm{ divides } u-v\}.
\end{equation}
\end{enumerate}
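For instance, in the simplest non-trivial case $\overline{\mathsf{M}}_{B,b} = \bb N$ and $\delta_x = \delta$, the monoid in \ref{eq:smoothing_param_1} is
\bes
\overline{\mathsf{M}}_{X,x} \,\cong\, \{(u,v) \in \bb N^2 \textrm{ such that } \delta \textrm{ divides } u-v\}\,,
\ees
which is generated by $(1,1)$, $(\delta,0)$ and $(0,\delta)$ with the single relation $(\delta,0) + (0,\delta) = \delta \cdot (1,1)$; here $(1,1)$ is the pullback of the generator of $\overline{\mathsf{M}}_{B,b}$, the other two generators are the classes of the two coordinates at the node, and the relation mirrors the local equation $uv = f$ with $f$ of class $\delta$.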
We write $\mathfrak M$ for the fibred category over $\cat{LogSch}$ whose objects are log curves $X/B$, with the fiber functor taking $X/B$ to $B$. This is representable by an algebraic stack with log structure, see \cite{Gross2013Logarithmic-Gro}, generalizing the construction of \cite{Kato2000Log-smooth-defo} in the stable case. As shown in those references, the underlying algebraic stack of $\mathfrak M$ is naturally isomorphic to the stack of prestable curves. The stack $\mathfrak M$ naturally contains all $\overline{\ca M}_{g,n}$ as open substacks, by equipping a stable curve $X/B$ with its basic log structure (see \cite{Kato2000Log-smooth-defo,Gross2013Logarithmic-Gro}); equivalently, with the log structure coming from the boundary divisor.
\par
Given a log scheme, we define
\begin{equation*}
\mathbb G_m^{\mathsf{trop}}(B) \,\coloneqq\, \Gamma(B, \overline{\mathsf{M}}_B^{\mathsf{gp}})\,,
\end{equation*}
which we call the tropical multiplicative group. It can naturally be extended to a presheaf $\mathbb G_{m,B}^{\mathsf{trop}}$ on the category $\cat{LogSch}_B$ of log schemes over $B$,
and admits a log smooth cover by log schemes (with subdivision $[\bb P^1/\bb G_m]$); see \cite{MarcusWiseLog}.
\par
\begin{df}\label{def:rub}
$\cat{Rub}$ is the stack in groupoids over $\mathfrak M$ with objects tuples
\bes
(\pi \colon X \to B, \, \beta\colon X \to \bb G_{m,B}^\mathsf{trop})
\ees
with $X/B$ a log curve, satisfying two conditions on each strict geometric fiber:
\begin{enumerate}
\item
The image of $\beta$ is fiberwise totally ordered\footnote{Here we mean that for any two elements in the image of $\beta$, one of their differences is contained in~$\overline{\mathsf{M}}_B$.}, with largest element~$0$.
\item
Writing $R$ for the stack obtained from $\bb G_m^\mathsf{trop}$ by subdividing at the image of $\beta$, we require that the fiber product $X \times_{\beta, \bb G_m^\mathsf{trop}} R$ is a log curve.
\end{enumerate}
The morphisms are defined by pullback.
\end{df}
\par
Over a given geometric point of $B$, write $N+1$ for the cardinality of the image of $\beta$; since the
latter is totally ordered, there is a unique isomorphism~$\tau$ of totally ordered
sets between the image of $\beta$ and $\{0,-1,\ldots, -N\}$. The composition
\begin{equation}} \def\ee{\end{equation}} \def\bes{\begin{equation*}} \def\ees{\end{equation*} \label{eq:defell}
\ell \coloneqq \tau \circ \beta
\ee
is then called the \emph{normalized level function} associated with~$\beta$.
\par
\begin{remark}
This definition will be unpacked in \ref{sec:unpacking_rub_definition}, but
for now we make a couple of remarks on how it differs from that given in
Marcus--Wise \cite{MarcusWiseLog}. Firstly, they declare the image of~$\beta$
to have \emph{smallest} element 0; this makes no material difference, and the
reason for our choice of conventions is explained in \ref{appendix:sign_conventions}.
\par
More significantly, condition (2) is not stated by Marcus and Wise. However, it is assumed, for example in datum (R1) in Section 5.5 of their paper. Most of their results go through without this condition, but it is necessary for making a connection to the spaces of rubber maps of Jun Li, Graber-Vakil etc., and is also necessary for the comparison results in the present paper.
\end{remark}
\par
\begin{theorem}[\cite{MarcusWiseLog}]
The category $\cat{Rub}$ is a log algebraic stack locally of finite
presentation.
\end{theorem}
Marcus and Wise prove this in the absence of condition (2) above, but imposing this condition simply corresponds to a root stack construction, and does not affect the result. One benefit of imposing condition (2) is the following theorem, which did not hold for the version of $\cat{Rub}$ considered by Marcus and Wise (and which will be proven in \ref{sec:smoothness_of_Rub}).
\par
\begin{theorem}\label{thm:rub_smooth}
The algebraic stack $\cat{Rub}$ is smooth.
\end{theorem}
\par
Given $\beta \in \overline{\mathsf{M}}_X^\mathsf{gp}(X)$, taking the preimage in the standard exact sequence \ref{eq:stdsequence} applied to $X$ yields the line bundle $\ca O_X(\beta)$; in other words, this yields an Abel-Jacobi map
\bes
\mathsf{aj}\colon \cat{Rub} \to \mathfrak{Pic}
\ees
to the Picard stack of the universal curve over $\mathfrak M$ (the stack of pairs $(X/B, \ca F)$ where $X/B$ is a log curve and $\ca F$ is a line bundle on $X$).
One of the main results of \cite{MarcusWiseLog} is that the composite of this Abel-Jacobi map with the forgetful map $\mathfrak{Pic} \to \on{Pic}$ to the relative Picard space is proper.
\par
\begin{df}
Write $n$ for the locally constant function on $\mathfrak M$ giving the number of markings. Then there is an \emph{outgoing slopes} map
\bes
\cat{Rub} \to \bb Z^n
\ees
sending a point $(X/B, \beta)$ to the outgoing slopes of $\beta$, i.e., the values of $\beta$ in the groupifications of the stalks $\overline{\mathsf{M}}_{X/B}(z_i) \coloneqq \overline{\mathsf{M}}_X(z_i) / \overline{\mathsf{M}}_B(\pi(z_i)) = \bb N$ at the markings.
\par
Given a tuple $\mu = (m_1, \ldots, m_n)$ of integers, we define $\cat{Rub}_\mu$ to be the open-and-closed substack of $\cat{Rub}$ where the log curve has $n$~markings and the outgoing slopes are given by~$\mu$.
\end{df}
Note that the forgetful map from $\cat{Rub}_\mu$ to $\mathfrak M$ is birational (it is an isomorphism over the locus of smooth curves); in particular if we fix a genus and a number of markings, then $\cat{Rub}_\mu$ is connected.
Writing $d\coloneqq \sum_{i=1}^n m_i$, the image of $\cat{Rub}_\mu$ under the Abel-Jacobi
map~$\mathsf{aj}$ lands in the connected component $\mathfrak{Pic}^d$ of $\mathfrak{Pic}$ consisting of line bundles of (total) degree $d$.
\begin{remark}
In fact one can show that the map $\cat{Rub}_\mu \to \mathfrak M$ is not only birational but also \emph{log \'etale}.
This is a type of birational map, basically consisting of an iterated blowup of boundary strata, followed by root constructions on some of these strata, and then by passing to an open subset. For the details, we refer the reader e.g.~to the paper \cite{HMPPS}, where such morphisms are used extensively. An important point there is that they can be described uniquely by an (incomplete) subdivision of the tropicalization of $\mathfrak M$. While again we do not explain the details, one consequence is that one can obtain a smooth local model of the morphism $\cat{Rub}_\mu \to \mathfrak M$ by the toric map induced via some explicit subdivision of a cone.
\par
In \ref{fig:Rub_subdivision_fan}
we use this to illustrate the importance of condition (2) in \ref{def:rub}. For this, consider a point of $\mathfrak M$ where the curve has genus $0$ and the stable graph $\Gamma$ has three vertices and two edges $e_1, e_2$ as illustrated. Assume that each vertex carries one marking and that $\mu$ is chosen so that any piece-wise linear function has slopes $1$ and $2$ on $e_1$ and $e_2$, respectively (see \ref{df:PL} for a discussion of PL functions).
\par
Then the tropicalization of $\mathfrak M$ contains a cone $\sigma_\Gamma = \mathbb{R}_{\geq 0}^2$ parameterizing the ways of putting lengths $\ell_1, \ell_2$ on the two edges. Depending on which of the quantities $\ell_1$ or $2 \ell_2$ is greater, a piece-wise linear function on $\Gamma$ with the given slopes will take a larger value either on $v_2$ or $v_1$.
Then the smooth local picture of $\cat{Rub}_\mu \to \mathfrak M$ is given by the map of toric varieties associated to the subdivision of $\sigma_\Gamma$ at the ray spanned by $(\ell_1, \ell_2) = (2,1)$.
However, there is a subtlety: for the standard integral structure (black dots), the upper cone is simplicial, but not smooth.
Indeed, the primitive generators $(0,1)$, $(2,1)$ of its rays form a rational basis, but not an integral basis. Hence, the toric variety associated to this cone has a singularity, which would contradict \ref{thm:rub_smooth}. And indeed, this is precisely what happens for the variant of $\cat{Rub}$ defined by omitting condition (2) from \ref{def:rub}. Imposing this condition forces us to adjoin the element $(0,1/2)$ to the lattice on the upper cone (adding the points marked by crosses).\footnote{Note that in contrast to the toric situation, not all cones in the tropicalization of $\cat{Rub}$ lie in the same ambient vector space with integral structure, so that it is possible to change this integral structure on different cones of the tropicalization.} Then the new ray generators are $(0,1/2)$, $(1,1/2)$, which indeed form a basis of the integral structure $\mathbb{Z} \oplus (1/2) \mathbb{Z}$, so that $\cat{Rub}$ is smooth as claimed.
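The index computation behind this is elementary: the generators $(0,1)$ and $(2,1)$ span a sublattice of $\mathbb{Z}^2$ of index
\bes
\left|\det\begin{pmatrix} 0 & 1 \\ 2 & 1 \end{pmatrix}\right| \= 2\,,
\ees
so the upper cone defines an $A_1$-singularity, while the generators $(0,1/2)$ and $(1,1/2)$ have determinant of absolute value $1/2$, which equals the covolume of $\mathbb{Z} \oplus (1/2)\mathbb{Z}$, so they do form a basis of the enlarged lattice.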
\usetikzlibrary{shapes.misc}
\begin{figure}
\[
\begin{tikzpicture}
\tikzset{cross/.style={cross out, draw=black, minimum size=2*(#1-\pgflinewidth), inner sep=0pt, outer sep=0pt},
cross/.default={3pt}}
\filldraw[black!20] (0,0) -- (4,0) -- (4,2) -- (0,0);
\filldraw[black!10] (0,0) -- (4,2) -- (4,3) -- (0,3) -- (0,0);
\draw[thick, ->] (0,0) -- (4.5,0) node[below right]{$\ell_1$};
\draw[thick, ->] (0,0) -- (0,3.5) node[above left] {$\ell_2$};
\draw[thick] (0,0) -- (4.5,2.25);
\foreach \x in {0,...,4}
\foreach \y in {0,...,3}
{\filldraw[black] (\x, \y) circle (1pt);}
\foreach \p in {(0,0.5),(0,1.5),(0,2.5),(1,0.5),(1,1.5),(1,2.5),(2,1.5),(2,2.5),(3,1.5),(3,2.5),(4,2.5)}
{\draw \p node[cross] {};}
\filldraw (5.5,1.5) circle (2pt) {};
\filldraw (5.2,0.5) circle (2pt) node[below]{$v_1$};
\filldraw (5.8,0.8) circle (2pt) node[below]{$v_2$};
\draw[thick] (5.5,1.5) to node[midway, left] {$\ell_1$} (5.2,0.5);
\draw[thick] (5.5,1.5) to node[midway, right] {$\ell_2$} (5.8,0.8);
\filldraw (5.5,4.5) circle (2pt) {};
\filldraw (5.2,3.8) circle (2pt) node[below]{$v_1$};
\filldraw (5.8,3.5) circle (2pt) node[below]{$v_2$};
\draw[thick] (5.5,4.5) to node[midway, left] {$\ell_1$} (5.2,3.8);
\draw[thick] (5.5,4.5) to node[midway, right] {$\ell_2$} (5.8,3.5);
\end{tikzpicture}
\]
\caption{Subdivision associated to the drawn stable graph, with slope $1$ at edge $e_1$ (of length $\ell_1$) and slope $2$ at edge $e_2$ (of length $\ell_2$).}
\label{fig:Rub_subdivision_fan}
\end{figure}
\end{remark}
\subsection{Logarithmic rubber differentials} \label{sec:rubdiffspace}
The stack $\cat{Rub}$ is in some sense the universal space of logarithmic rubber maps. In this section we specialize to the case of logarithmic rubber differentials. For this we fix $g$, $n$ and write $X_{g,n}/\overline{\ca M}_{g,n}$ for the universal curve, with markings ${\boldsymbol{z}} = (z_1,\ldots,z_n)$. Fix a tuple~$\mu = (m_1, \dots, m_n)$ such that $d= \sum_{i=1}^n m_i = 2g-2$.
We define a line bundle on the universal curve $X_{g,n}$ over $\overline{\ca M}_{g,n}$ by
the formula
\bes
\mathcal L \,\coloneqq\, \mathcal L_\mu \,\coloneqq\,
\omega_{X_{g,n}/\overline{\ca M}_{g,n}}\Big(-\sum_{i=1}^n m_i z_i\Big)\,,
\ees
where $\omega = \omega_{X_{g,n}/\overline{\ca M}_{g,n}}$ is the relative dualizing sheaf of $X_{g,n}\to\overline{\ca M}_{g,n}$. Then $\mathcal L$ induces a morphism
\bes
\phi_{\mathcal L}\colon \overline{\ca M}_{g,n} \to \mathfrak{Pic}\,.
\ees
\begin{df}\label{def:rub_L}
We define the \emph{space of logarithmic rubber differentials} to be
\begin{equation}\label{eq:rub_L_def}
\cat{Rub}_\mathcal L \coloneqq \cat{Rub}_{\ul 0} \times_{\mathfrak{Pic}, \phi_{\mathcal L}} \overline{\ca M}_{g,n}\,.
\end{equation}
\end{df}
\begin{remark}
If we had taken the fiber product over the relative Picard space (instead of the Picard stack) we would have obtained the projectivized space ${\mathbb{P}}(\cat{Rub}_\mathcal L)$. This is the approach taken in \cite{MarcusWiseLog,BHPSS}, as the space ${\mathbb{P}}(\cat{Rub}_\mathcal L)$ is what is needed for the study of the double ramification cycle.
\end{remark}
\begin{remark}\label{rk:no_leg_weights}
There are two equivalent descriptions of the rubber differential space as
\bes
\cat{Rub}_\mathcal L \= \cat{Rub}_{\ul 0} \times_{\mathfrak{Pic}, \phi_\mathcal L} \overline{\ca M}_{g,n}
\= \cat{Rub}_\mu \times_{\mathfrak{Pic}, \phi_{\omega}} \overline{\ca M}_{g,n}\,.
\ees
\end{remark}
\subsection{Local description}\label{sec:unpacking_rub_definition}
In what follows we will make the definition of the space $\cat{Rub}$ more explicit for log curves over `sufficiently small' bases; more precisely, for \emph{nuclear} log curves as defined in \cite{Holmes2020Models-of-Jacob}. This is a slight refinement of asking for the base to be atomic (in the sense of \cite{AbraWise}), and is needed because a log curve even over a point does not have a well-defined dual graph unless the residue field is sufficiently large. We omit the details of the definition of a nuclear log curve, mentioning only the key properties we use:
\begin{enumerate}
\item
For any family of log curves $X/B$, there exists a strict\footnote{A map $f\colon X \to Y$ of log schemes is \emph{strict} if the log structure on $X$ is the pullback of the log structure on $Y$. In particular, the strict \'etale topology on log schemes reflects very closely the usual \'etale topology on schemes. } \'etale cover $\bigsqcup_{i\in I} B_i \to B$ such that each $X \times_B B_i \to B_i$ is nuclear;
\item
For $X/B$ a nuclear log curve and for any $b \in B$, the curve $X_b$ has a well-defined dual graph $\Gamma_b$, with edges labeled by non-zero elements of $\overline{\mathsf{M}}_{B,b}$; we denote the \emph{label} (also called
\emph{length}) of $e$ by $\delta_e$; this was denoted $\delta_x$ in \ref{eq:smoothing_param_1}. If $\delta_e' \in \mathsf{M}_B(B)$ is a lift of $\delta_e$, then $\alpha(\delta_e') \in \ca O_B(B)$ is a \emph{smoothing parameter} for $e$, in the sense that $X$ can be described locally around the corresponding point by an equation $uv = \alpha(\delta_e')$.
The stalk of $\overline{\mathsf{M}}_{X}$ at the corresponding node $q$ of the fiber over $b\in B$
is given by
\begin{equation}\label{eq:ghost-at-singular_point}
\overline{\mathsf{M}}_{X,q} = \{(u, v) \in \overline{\mathsf{M}}_{B, b} \oplus \overline{\mathsf{M}}_{B, b} \text{ such that } \delta_e \mid (u-v)\};
\end{equation}
\item
For $X/B$ nuclear, the base $B$ has a unique closed stratum\footnote{Every log scheme comes with a decomposition into locally closed subschemes (called \emph{strata}). }, and for any $b$ in that closed stratum the restriction gives an isomorphism $\Gamma(B, \overline{\mathsf{M}}_{B}) \iso \overline{\mathsf{M}}_{B,b}$.
\item If $X/B$ is nuclear and $b$, $b' \in B$, with $b$ in the closed stratum, there is a natural identification (of labeled graphs) of $\Gamma_{b'}$ with the graph obtained from $\Gamma_b$ by mapping every label to $\overline{\mathsf{M}}_{B,b'}$, and then contracting all edges that are labeled by 0. We often abuse notation by writing $\overline{\mathsf{M}}_B\coloneqq\overline{\mathsf{M}}_{B,b}$ (for $b$ in the closed stratum) in place of $\Gamma(B, \overline{\mathsf{M}}_{B})$. We often write $\Gamma$ for the graph over any point in the closed stratum, which comes with an $\overline{\mathsf{M}}_B$-metric.
\end{enumerate}
If $B$ is the spectrum of a strictly Henselian local ring with atomic log structure (for example, if $B$ is the spectrum of a separably closed field), then by \cite[Lemma 3.40]{Holmes2020Models-of-Jacob} any log curve $X/B$ is nuclear.
\par
Let $X/B$ be a nuclear log curve. Let $b \in B$ be a point in the closed stratum, with associated dual graph $\Gamma$ with \emph{vertex set}
$V = V(\Gamma)$, {\em set of half-edges} $H=H(\Gamma)$ (including legs), and {\em set of non-leg half-edges} $H'=H'(\Gamma)$.
\par
\begin{df}\label{df:PL}
A \emph{piece-wise linear (PL) function} on $X/B$ is an element of $\Gamma(X,\overline{\mathsf{M}}_X^\mathsf{gp})$.
\par
A \emph{combinatorial PL function} on $X/B$ consists of the data:
\begin{enumerate}
\item
a function $\beta'\colon V(\Gamma) \to \overline{\mathsf{M}}_{B, b}^\mathsf{gp}$ (the \emph{values} on the vertices), and
\item a function $\kappa\colon H'(\Gamma) \to \bb Z$ (the \emph{slopes} on the non-leg\footnote{In this paper we do not include slopes on the legs, as we are interested only in the case where these slopes are equal to $0$ (since we work throughout with $\cat{Rub}_{\ul 0}$). Recall as in \ref{rk:no_leg_weights} that we have moved the data of the zeros and poles in to the line bundle $\ca L_\mu$. } half-edges),
\end{enumerate}
such that if $h_1$ and $h_2$ are half edges forming an edge $e$, with $h_i$
attached to vertex $v_i$, we have
\bes
\kappa(h_2) \delta_e \= \beta'(v_2) - \beta'(v_1)
\ees
(so that in particular $\kappa(h_1)+\kappa(h_2)=0$).
Edges of~$\Gamma$ with slope 0 (that is, where both half-edges have slope zero) are called \emph{horizontal}; all the other edges of~$\Gamma$ are called \emph{vertical}.
\end{df}
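As a minimal example, suppose $\Gamma$ has two vertices $v_1, v_2$ joined by a single edge $e$, with half-edge $h_i$ attached to $v_i$. Then the assignment $\beta'(v_2) = 0$, $\beta'(v_1) = -\delta_e$ together with the slopes
\bes
\kappa(h_2) \= 1\,, \qquad \kappa(h_1) \= -1
\ees
is a combinatorial PL function, since $\kappa(h_2)\,\delta_e = \beta'(v_2) - \beta'(v_1) = \delta_e$; in particular the edge $e$ is vertical.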
\par
\medskip
We want to show that these two types of PL functions are in natural bijection.
First, we construct a combinatorial PL function from any PL function.
At generic points $\eta$ of $X_b$ there is a natural isomorphism $\overline{\mathsf{M}}_{B, b} = \overline{\mathsf{M}}_{X,\eta}$, so the section $\beta \in H^0(X,\overline{\mathsf{M}}_X^\mathsf{gp})$
determines a function $\beta'\colon V \to \overline{\mathsf{M}}_{B, b}^\mathsf{gp}$.
To complete the definition of~$\kappa$ we first show:
\par
\begin{lemma} \label{le:divisibility}
If $h_1$ and $h_2$ are half edges forming an edge $e$, with $h_i$ attached to vertex $v_i$, then for the function $\beta'$ constructed from $\beta$ as above, the value $\beta'(v_2) - \beta'(v_1)$ is an integer multiple of $\delta_e$.
\end{lemma}
\par
\begin{proof}
This follows from \ref{eq:ghost-at-singular_point} and the fact that the images of $\beta$ under the two projections to $\overline{\mathsf{M}}_{B, b}^\mathsf{gp}$ are exactly given by $\beta'(v_1)$ and $\beta'(v_2)$.
\end{proof}
\par
In the notation of the lemma above we can then \emph{define}
\begin{equation} \label{eq:defkappa}
\kappa(h_2) \,\coloneqq\, \frac{\beta'(v_2) - \beta'(v_1)}{\delta_e}
\end{equation}
(which is unique because $\overline{\mathsf{M}}_{B,b}$ is torsion-free). This accomplishes one direction of the following lemma.
\par
\begin{lemma} \label{le:PLbijection}
The above construction induces a bijection between the set of PL functions and the set of combinatorial PL functions.
\end{lemma}
\par
\begin{proof} Let $\beta'$ be a combinatorial PL function; we build a PL function $\beta$ giving the preimage of $\beta'$ under the construction above. If $x$ is a smooth point of $X_b$, then $\overline{\mathsf{M}}_{X, x} = \overline{\mathsf{M}}_{B, b}$, and we define the value of $\beta$ at $x$ to be $\beta'(v)$, where $v$ corresponds to the irreducible component of $X_b$ containing $x$. The presentation \ref{eq:ghost-at-singular_point} makes it clear that there is a unique way to extend this section to all non-smooth points $x \in X_b$. For any other point $b' \in B$ the combinatorial PL function can naturally be transferred (using property (4) of the definition of a nuclear log curve) to the fiber $X_{b'}$, and we repeat the above argument to give a PL function on $X_{b'}$. These then fit together to a global PL function on $X/B$.
\end{proof}
\par
Our concrete local description of $\cat{Rub}$ is now given by the next
proposition.
\par
\begin{proposition}\label{prop:rub_translation}
For $X/B$ nuclear and $b\in B$ in the closed stratum, there is a natural bijection between the set of
$X/B$-points of $\cat{Rub}_{\ul 0}$ (i.e., the set of maps $B \to \cat{Rub}_{\ul 0}$ lying over $X/B$)
and the set of maps
\begin{equation}\label{eq:PL_literal}
\beta'\colon V \to \overline{\mathsf{M}}_{B, b}^\mathsf{gp}
\end{equation}
satisfying the following conditions:
\begin{enumerate}
\item The divisibility condition $\delta_e \mid \beta'(v_2) - \beta'(v_1)$
holds at every edge $e$ in $\Gamma_b$ connecting vertices $v_1,v_2 \in V$.
\item The image of $\beta'$ is a totally ordered subset of $\overline{\mathsf{M}}_{B, b}^\mathsf{gp}$ with largest element being $0$;
\item For every edge $e$ connecting vertices $v_1$ and $v_2$, with slope $\kappa_e$ (defined as the absolute value of \ref{eq:defkappa}), and for every $y \in \on{Image}(\beta')$ with $\beta'(v_1) < y < \beta'(v_2)$, the monoid $\overline{\mathsf{M}}_{B, b}$ contains the element $\frac{y - \beta'(v_1)}{\kappa_e}$.
\end{enumerate}
\end{proposition}
\par
\begin{proof}
Conditions (1) and (2) are translations of point (1) of \ref{def:rub}. Condition (3) corresponds to point (2) of \ref{def:rub}, as explained in \cite[Section~6.2]{BHPSS}.
\end{proof}
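To illustrate condition (3), consider again the graph of \ref{fig:Rub_subdivision_fan}, writing $v_0$ for the top vertex and taking $\beta'(v_0) = 0$, $\beta'(v_1) = -\delta_{e_1}$ and $\beta'(v_2) = -2\delta_{e_2}$ according to the slopes $1$ and $2$. On the locus where $\beta'(v_2) < \beta'(v_1) < 0$, applying condition (3) to the edge $e_2$ and the intermediate value $y = \beta'(v_1)$ requires
\bes
\frac{y - \beta'(v_2)}{\kappa_{e_2}} \= \delta_{e_2} - \frac{\delta_{e_1}}{2} \,\in\, \overline{\mathsf{M}}_{B, b}\,,
\ees
and then also $\frac{\delta_{e_1}}{2} \in \overline{\mathsf{M}}_{B, b}$ by saturatedness; this half-integral refinement of the monoid is, on the dual side, the change of integral structure discussed in the remark around \ref{fig:Rub_subdivision_fan}.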
\begin{remark}
If $\beta_1'$ and $\beta_2'$ are combinatorial PL functions with the same slopes $\kappa_e$, then there exists $c \in \overline{\mathsf{M}}_{B,b}^\mathsf{gp}$ such that $\beta_1' = \beta_2' + c$. In the definition of $\cat{Rub}$ we restrict to PL functions whose values are totally ordered and take maximum value $0$, and such functions are completely determined by the values of their slopes $\kappa$.
\end{remark}
We would like to characterize in a similar spirit when a point of $\cat{Rub}$ lifts to $\cat{Rub}_\mathcal L$. More concretely, this means describing explicitly the line bundle ${\mathcal O}_X(\beta)$ associated to a PL function. The next lemma describes the \emph{restriction} of ${\mathcal O}_X(\beta)$ to the irreducible components of the curve $X_b$ (in the case where $\beta$ comes from $\cat{Rub}_{\ul 0}$, i.e. has vanishing outgoing slopes). To describe the gluing between irreducible components would require us to get into quite a few more details of log geometry, and is not necessary for what we do in this paper.
\par
\begin{lemma}[{\cite[Lemma 2.4.1]{RSPWI}}] \label{lem:Obetarestriction}
Let $Y$ be the normalization of an irreducible component of $X_b$, corresponding to a vertex $v$. For each half-edge $h$ attached to $v$, write $\kappa_h$ for the outgoing slope and $q_h \in Y$ for the associated preimage of a node of $X_b$. Then there is a canonical isomorphism
\bes
{\mathcal O}_X(\beta)|_{Y} \= \pi^*{\mathcal O}_B(\beta'(v))\otimes_{{\mathcal O}_Y} {\mathcal O}_Y\Big(\sum_h \kappa_h z_h\Big)\,.
\ees
\end{lemma}
In particular, for a point $(X_b/b, \beta)$ of $\cat{Rub}_{\ul 0}$ to lie in $\cat{Rub}_{\mathcal L}$ it is necessary (though not in general sufficient) to require that on the normalized $Y$ of any irreducible component of $X_b$ there exists an isomorphism
\begin{equation*}
\mathcal L_Y \,\cong\, {\mathcal O}_Y\Big(\sum_h \kappa_h z_h \Big)\,,
\end{equation*}
where the sum runs over all half-edges $h$ attached to $v$.
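Comparing degrees on the two sides of this isomorphism (the pullback $\pi^*{\mathcal O}_B(\beta'(v))$ restricts to a degree-zero line bundle on the fiber), one obtains the purely numerical necessary condition
\begin{equation*}
\deg \mathcal L_Y \,=\, \sum_h \kappa_h\,,
\end{equation*}
which is the easiest part of the lifting criterion to check in practice.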
\section{\label{sec:level1}Introduction}
A theoretical possibility of non-resonant, fast, and efficient heating of extremely thin conducting cylindrical targets by broad electromagnetic beams was described in Ref.~\cite{Akhm4} (see also the sections on the ``transverse geometry'' in Refs.~\cite{Akhm10,Akhm111} and references therein). The diameter of the cylinder can be orders of magnitude smaller than the wavelength of the electromagnetic radiation. Efficient heating takes place for converging axisymmetric cylindrical waves (under some limitations on the real part of the complex permittivity of the cylinder) if the diameter of the cylinder and the skin-depth are of the same order of magnitude and the electric field in the wave is directed along the common axis of the cylinder and the wave (see the exact conditions in Refs.~\cite{Akhm10,Akhm111}). This possibility can be used to create high energy density states for such applications as pumping of active media of short-wavelength lasers and nuclear fusion. For example, an exciting possibility of efficient heating of nanotubes by femtosecond laser pulses is discussed in Ref.~\cite{Akhm111}.
In this work we present the first experimental confirmation of the predictions of Refs.~\cite{Akhm4,Akhm10,Akhm111} (some preliminary results for a somewhat problematic configuration were presented in Ref.~\cite{Akhm112}). In our experiment, a thin fiber (with a diameter three orders of magnitude smaller than the wavelength) absorbed up to 6\% of the microwave power focused on the fiber with an ellipsoidal reflector.
The work of Refs.~\cite{Akhm4,Akhm10,Akhm111} had important predecessors. Shevchenko (Ref.~\cite{Shev1}) derived optimal conditions of absorption of a plane electromagnetic wave in a thin conducting wire that are similar to the conditions of Refs.~\cite{Akhm4,Akhm10,Akhm111}. However, the possibility of efficient heating of a thin wire or fiber, when the power absorbed in the wire or fiber is comparable to the power in the incident wave, was not noticed in Ref.~\cite{Shev1}, as heating by a plane wave is very inefficient. On the other hand, the transverse geometry of Refs.~\cite{Akhm4,Akhm10,Akhm111} (heating by a converging axisymmetric cylindrical wave) was also considered, e.g., in Ref.~\cite{Zharov1}, but that work only contains resonant conditions of heating, which are difficult to use for practical plasma heating.
The results of Ref.~\cite{Shev1} found experimental confirmation in Ref.~\cite{Kuz1}. The experimental results of Ref.~\cite{Kuz1}, motivated by the theoretical results for a plane wave, were obtained for absorption of microwave $\mathrm{H_{01}}$ mode at the output of a waveguide, and heating efficiency was not assessed. In the present experiment, we demonstrate efficient heating of a thin fiber by an electromagnetic beam in free space. The experimental results are in satisfactory agreement (typically up to a factor of 2) with theoretical computations.
\section{\label{sec:level1-2}The experimental setup}
The experimental setup is shown schematically in Fig.~\ref{fig:fig1}.
\begin{figure*}
\includegraphics{Rys_2_Experiment_9}
\caption{\label{fig:fig1}The experimental setup (not to scale)}
\end{figure*}
A thin wire or fiber is placed at focus $F_1$ of the ellipsoidal reflector. The internal surface of the reflector is a part of an ellipsoid of revolution defined by the equation:
\begin{equation}
\frac{x^2}{a^2}+\frac{y^2+z^2}{b^2}=1,\label{eq:1}
\end{equation}
where $a\approx$ 586 mm is the major semiaxis and $b\approx$ 287 mm is the minor semiaxis of the relevant ellipse. The distance between the foci $F_1$ and $F_2$ is approximately 1023 mm. The dimensions of the reflector and the position of focus $F_1$ inside the reflector are shown in Fig.~\ref{fig:fig1}. A horn with aperture $31\times22$~mm\textsuperscript{2} and length 130 mm is placed at focus $F_2$ (the distance between the horn aperture and the focus is 65 mm). The wide side of the horn is horizontal. The horn is connected to a waveguide with dimensions $7.2\times3.4$~mm\textsuperscript{2} supporting the $H_{10}$ mode. The frequency of the electromagnetic radiation varied from 24.5 GHz to 39 GHz, and the power varied from 20 mW to 61 mW. Typically, most of the power is collected by the reflector and focused at focus $F_1$.
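As a quick numerical cross-check of the quoted geometry (a minimal sketch; the semiaxis values are those given above), the focal separation follows from the linear eccentricity $c=\sqrt{a^2-b^2}$:
\begin{verbatim}
import math

a = 586.0  # major semiaxis, mm
b = 287.0  # minor semiaxis, mm

# Linear eccentricity: distance from the ellipse center to each focus.
c = math.sqrt(a**2 - b**2)
print(f"focal separation 2c = {2 * c:.0f} mm")  # ~1022 mm vs ~1023 mm quoted
\end{verbatim}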
\section{\label{sec:level1-3}Measurement and computation methods}
The wire or fiber in the focus is heated by the radiation, and the initial electrical resistance of the wire/fiber $R_0$ changes by $\Delta R$. The average wire/fiber temperature increase $\Delta T$ corresponding to $\Delta R$ was calculated as
\begin{equation}
\Delta T=\frac{\Delta R}{\alpha_r R_0},\label{eq:2}
\end{equation}
where $\alpha_r$ is the temperature coefficient of resistance.
On the other hand, the steady state wire/fiber temperature increase depends on the absorbed power $P_a$ and the conditions of heat exchange with the environment \cite{Kuz1}:
\begin{equation}
\Delta T=\frac{P_a}{\alpha_p L},\label{eq:3}
\end{equation}
where $L$ is the length of the wire/fiber and $\alpha_p$ is the linear heat exchange coefficient. For the carbon fiber of diameter $11~\mu$m, this coefficient was measured as follows.
For a direct current in the fiber, the Joule power was calculated based on the measurements of the current and the voltage, and the temperature increase
was calculated based on the measured change of the fiber resistivity. The value of the coefficient differed slightly for a vertical and horizontal fiber:
0.017 W/(m-deg) and 0.019 W/(m-deg), respectively. For the platinum wire of diameter $3.5~\mu$m, the following values obtained by N. G. Kokodiy and A. O. Pak (private communication) were used for the vertical and horizontal orientation of the wire: 0.023 W/(m-deg) and 0.026 W/(m-deg), respectively.
It follows from Eqs.~(\ref{eq:2},\ref{eq:3}) that
\begin{equation}
P_a=\frac{\alpha_p L}{\alpha_r}\frac{\Delta R}{R_0}.\label{eq:4}
\end{equation}
Therefore, the efficiency of absorption of microwave power in the wire/fiber equals:
\begin{equation}
K=\frac{P_a}{P}=\frac{\alpha_p L}{\alpha_r P}\frac{\Delta R}{R_0},\label{eq:5}
\end{equation}
where $P$ is the power in the microwave beam.
In the experiment, microwave absorption in two targets was studied: 1) a platinum wire of diameter 3.5 micron and length $\approx$25 mm, and 2) a carbon fiber \cite{Cytec} of diameter 11 micron and length $\approx$30 mm. For the platinum wire, the tabulated value $\alpha_r=$0.004 deg\textsuperscript{-1} was used, and for the carbon fiber, the value $\alpha_r=$-0.00021 deg\textsuperscript{-1} was found experimentally from the current--voltage curve.
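Eq.~(\ref{eq:5}) is straightforward to apply; the sketch below uses the fiber parameters quoted in this section, while the resistance change \verb|dR_over_R0| is a hypothetical reading chosen only to illustrate the conversion:
\begin{verbatim}
def absorption_efficiency(dR_over_R0, alpha_p, L, alpha_r, P):
    """Fraction of the microwave beam power absorbed in the wire/fiber."""
    return alpha_p * L / (alpha_r * P) * dR_over_R0

# Carbon fiber, vertical orientation; dR_over_R0 = -2e-4 is hypothetical
# (resistance decreases on heating since alpha_r < 0 for carbon).
K = absorption_efficiency(dR_over_R0=-2e-4,
                          alpha_p=0.017,     # W/(m-deg)
                          L=0.030,           # m
                          alpha_r=-0.00021,  # deg^-1
                          P=0.040)           # W
print(f"K = {K:.3f}")                        # ~0.012, i.e. about 1.2%
\end{verbatim}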
The experimental results were compared with the results of computations. The feed horn fields (incident on the reflector) were computed using the following formulas for the far zone of a pyramidal horn \cite{Balanis}.
Let the origin be at the center of the horn aperture, with axes $x'$, $y'$, and $z'$ directed parallel to the broad sides of the horn, parallel to the narrow sides of the horn, and along the horn axis, respectively; $r$, $\theta$, and $\varphi$ denote the radial distance, inclination angle, and azimuthal angle of the associated spherical coordinate system. The components of the magnetic field in the far zone are
\begin{eqnarray}
\nonumber
H_r(r,\theta,\varphi)=0,\\
\nonumber
H_\theta(r,\theta,\varphi)=-\frac{i k \exp(-i k r)}{4 \pi r}\frac{E_0}{\eta}I_1 I_2 \cos(\varphi)(1+\cos(\theta)),\\
H_\varphi(r,\theta,\varphi)=\frac{i k \exp(-i k r)}{4 \pi r}\frac{E_0}{\eta}I_1 I_2 \sin(\varphi)(1+\cos(\theta)),\label{eq:6}
\end{eqnarray}
where the temporal dependence factor $\exp(i \omega t)$ is omitted, $E_0$ is the amplitude of the electric field in the center of the horn aperture for the dominant mode,
\begin{widetext}
\begin{eqnarray}
\nonumber
I_1=\frac{1}{2}\sqrt{\frac{\pi\rho_2}{k}}\times\\
\nonumber
\left(\exp(i k'^{2}_x\rho_2/2 k)
\left(C(t'_2)-C(t'_1)-i\left(S(t'_2)-S(t'_1)\right)\right)+
\exp(i k''^{2}_x\rho_2/2 k)
\left(C(t''_2)-C(t''_1)-i\left(S(t''_2)-S(t''_1)\right)\right)\right) ,\\
I_2=\sqrt{\frac{\pi\rho_1}{k}}
\exp(i k^{2}_y\rho_1/2 k)\left(C(t_2)-C(t_1)-i\left(S(t_2)-S(t_1)\right)\right),\label{eq:7}
\end{eqnarray}
\end{widetext}
$C(x)$ and $S(x)$ are the cosine and sine Fresnel integrals, respectively, $\rho_1$ is the distance from the horn aperture plane to the line of intersection of two opposite broad faces of the horn pyramid, $\rho_2$ is the distance from the horn aperture plane to the line of intersection of two opposite narrow faces of the horn pyramid,
\begin{eqnarray}
\nonumber
t'_1=\sqrt{\frac{1}{\pi k \rho_2}}\left(-\frac{k a_1}{2}-k'_x\rho_2\right),\\
\nonumber
t'_2=\sqrt{\frac{1}{\pi k \rho_2}}\left(+\frac{k a_1}{2}-k'_x\rho_2\right),\\
\nonumber
k'_x=k \sin(\theta)\cos(\varphi)+\frac{\pi}{a_1},\\
\nonumber
t''_1=\sqrt{\frac{1}{\pi k \rho_2}}\left(-\frac{k a_1}{2}-k''_x\rho_2\right),\\
\nonumber
t''_2=\sqrt{\frac{1}{\pi k \rho_2}}\left(+\frac{k a_1}{2}-k''_x\rho_2\right),\\
\nonumber
k''_x=k \sin(\theta)\cos(\varphi)-\frac{\pi}{a_1},\\
\nonumber
t_1=\sqrt{\frac{1}{\pi k \rho_1}}\left(-\frac{k b_1}{2}-k_y\rho_1\right),\\
\nonumber
t_2=\sqrt{\frac{1}{\pi k \rho_1}}\left(\frac{k b_1}{2}-k_y\rho_1\right),\\
\nonumber
k_y=k \sin(\theta)\sin(\varphi),
\end{eqnarray}
$a_1$ and $b_1$ are the lengths of the wide and the narrow sides of the horn aperture, respectively, and $\eta$ is the intrinsic impedance of the medium (air).
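For readers reproducing these fields numerically: $C$ and $S$ above are the standard Fresnel integrals returned by SciPy. A minimal sketch evaluating $I_2$ of Eq.~(\ref{eq:7}) follows; the $I_1$ term is handled analogously, with $\rho_2$, $a_1$ and the shifted wavenumbers $k'_x$, $k''_x$:
\begin{verbatim}
import numpy as np
from scipy.special import fresnel

def CS(t):
    """C(t) - i S(t), the Fresnel combination entering I_1 and I_2."""
    S, C = fresnel(t)  # scipy returns (S, C)
    return C - 1j * S

def I2(k, rho1, b1, theta, phi):
    """Second integral of Eq. (7)."""
    ky = k * np.sin(theta) * np.sin(phi)
    t1 = np.sqrt(1.0 / (np.pi * k * rho1)) * (-k * b1 / 2 - ky * rho1)
    t2 = np.sqrt(1.0 / (np.pi * k * rho1)) * (+k * b1 / 2 - ky * rho1)
    return (np.sqrt(np.pi * rho1 / k)
            * np.exp(1j * ky**2 * rho1 / (2 * k)) * (CS(t2) - CS(t1)))
\end{verbatim}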
The reflected fields for the ellipsoidal reflector were estimated using methods of physical optics \cite{Silver}, as the radii of curvature of the reflector are much greater than the wavelength everywhere. The reflected electric field $\boldsymbol{E}_s$ in a field point is calculated using the following formula (Ref.~\cite{Silver}):
\begin{eqnarray}
\nonumber
\boldsymbol{E}_s=\frac{1}{2\pi i \omega\varepsilon}\times\\
\int_{S_0}\left(\left(\boldsymbol{n}\times\boldsymbol{H}_i\right)\cdot\boldsymbol{\nabla}\left(\boldsymbol{\nabla}
\Psi\right)+k^2\left(\boldsymbol{n}\times\boldsymbol{H}_i\right)\Psi\right)dS,
\label{eq:9}
\end{eqnarray}
where $\Psi=\exp(-ikR)/R$, $R$ is the distance from the field point to the element of area $dS$ on the reflector, gradient operations in the integrand are referred to the field point as an origin, $\boldsymbol{n}$ is the normal to the reflector surface, $S_0$ is the geometrically illuminated surface of the reflector, the components of the incident magnetic field $\boldsymbol{H}_i$ are defined by Eq.~(\ref{eq:6}), and $\varepsilon$ is the permittivity of the medium.
Absorption of the reflected field of the ellipsoidal reflector in the wire/fiber in the focus of the reflector was computed using the rigorous solution of the problem of diffraction of electromagnetic field on a homogeneous cylinder (Ref.~\cite{Wait1}), which has a simpler form in the case of axisymmetrical field (non-axisymmetrical field is not efficiently absorbed in a very thin cylinder). The field incident on the cylinder (wire/fiber) is described by the electric Hertz vector (Ref.~\cite{Stratton}) $\bm{\Pi}(\rho,\varphi,z)=\{0,0,\Pi(\rho,\varphi,z)\}$. We use a cylindrical coordinate system $\rho,\varphi,z$, where axis $z$ coincides with the axis of the cylinder. We do not use the magnetic Hertz vector as the relevant TE field is not efficiently absorbed in a thin cylinder. Therefore, the reflected field of the ellipsoidal reflector (which is the incident field for the cylinder) is defined by the following expansion of the $z$-component of the electric Hertz vector into cylindrical waves (Ref.~\cite{Stratton}):
\begin{eqnarray}
\Pi(\rho,\varphi,z)=\int d\gamma\alpha(\gamma)J_0(\lambda_1(\gamma)\rho)\exp(i \gamma z),\label{eq:1p2k}
\end{eqnarray}
where the limits of integration are $-\infty$ and $\infty$, $J_n(x)$ is the Bessel function of order $n$, $\lambda_1^2(\gamma)=\epsilon_1 k_0^2-\gamma^2$ (it is assumed that the magnetic permeabilities of air and the cylinder, $\mu_1$ and $\mu_2$, equal 1), $\epsilon_1\approx1$ is the electric permittivity of air, $k_0=\omega/c$ is the wavenumber in vacuum, $\omega=2\pi\nu$ is the angular frequency of the electromagnetic field (the factor $\exp(-i \omega t)$ is omitted). Function $\alpha(\gamma)$ can be defined as follows. The $z$-component of the incident electric field corresponding to Eq.~(\ref{eq:1p2k}) can be written as follows:
\begin{eqnarray}
E_z(\rho,\varphi,z)=\int d\gamma\alpha(\gamma)\frac{\lambda_1^2(\gamma)}{\epsilon_1}J_0(\lambda_1(\gamma)\rho)\exp(i \gamma z),\label{eq:2p2k}
\end{eqnarray}
so
\begin{eqnarray}
\alpha(\gamma)\lambda_1^2(\gamma)=E_z(\gamma)=\frac{1}{2\pi}\int d z E_z(z)\exp(-i\gamma z),\label{eq:3p2k}
\end{eqnarray}
where $E_z(z)$ is the $z$-component of the incident electric field on the axis of the cylinder (where $\rho=0$ and $J_0(\lambda_1(\gamma)\rho)=1$), computed using Eq.~(\ref{eq:9}), and $E_z(\gamma)$ is its Fourier transform. Eq.~(\ref{eq:2p2k}) correctly describes the $z$-component of the incident electric field in the vicinity of the axis, although Eq.~(\ref{eq:1p2k}) does not include the TE-field and non-axisymmetrical field, which are not efficiently absorbed in the thin cylinder.
The $z$-component of the electrical Hertz vector of the field refracted in the cylinder can be calculated using the rigorous solution of the problem of diffraction of electromagnetic field on a homogeneous cylinder (Ref.~\cite{Wait1,Akhm10}):
\begin{eqnarray}
u_2(\rho,\varphi,z)=\int d\gamma a_2(\gamma)J_0(\lambda_2(\gamma)\rho)\exp(i \gamma z),\label{eq:4p2k}
\end{eqnarray}
where
\begin{eqnarray}
a_2(\gamma)=\frac{\alpha(\gamma)\frac{\epsilon_2}{\epsilon_1}\frac{1}{J_0(p_2(\gamma))}\frac{-2 i}{\pi p_2^2(\gamma)H_0^{(1)}(p_1(\gamma))}}{-\left(\frac{1}{p_1(\gamma)}\frac{H_0^{(1)'}(p_1(\gamma))}
{H_0^{(1)}(p_1(\gamma))}-\frac{1}{p_2(\gamma)}\frac{\epsilon_2}{\epsilon_1}\frac{J_0'(p_2(\gamma))}
{J_0(p_2(\gamma))}\right)}=
\nonumber\\
=\frac{\alpha(\gamma)\epsilon_2\frac{1}{J_0(p_2(\gamma))}\frac{2 i}{\pi p_2^2(\gamma)H_0^{(1)}(p_1(\gamma))}}{\frac{1}{p_2(\gamma)}\epsilon_2\frac{J_1(p_2(\gamma))}
{J_0(p_2(\gamma))}-\frac{1}{p_1(\gamma)}\frac{H_1^{(1)}(p_1(\gamma))}
{H_0^{(1)}(p_1(\gamma))}},\label{eq:5p2k}
\end{eqnarray}
as $\epsilon_1 \approx 1$ and, for example, $H_0^{(1)'}=-H_1^{(1)}$, $\epsilon_2=\epsilon=\epsilon'+4\pi i \sigma/\omega$ is the complex electric permittivity of the cylinder, $\epsilon'$ is the real part of the permittivity, $\sigma$ is the conductivity of the cylinder, $p_1(\gamma)=\lambda_1(\gamma) a$, $p_2(\gamma)=\lambda_2(\gamma) a$, $a$ is the radius of the cylinder, $H_n^{(1)}(x)$ is the Hankel function, $\lambda_2^2(\gamma)=\epsilon_2 k_0^2-\gamma^2$.
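The coefficient $a_2(\gamma)$ of Eq.~(\ref{eq:5p2k}) can be evaluated directly with SciPy's Bessel and Hankel routines, which accept complex arguments; the sketch below takes the principal branch of the complex square roots, an implementation choice rather than something fixed by the formulas:
\begin{verbatim}
import numpy as np
from scipy.special import jv, hankel1

def a2(gamma, alpha, k0, a, eps2):
    """Refracted-field coefficient a_2(gamma) for the homogeneous
    cylinder (final form above, with eps1 = 1)."""
    lam1 = np.sqrt(k0**2 - gamma**2 + 0j)
    lam2 = np.sqrt(eps2 * k0**2 - gamma**2 + 0j)
    p1, p2 = lam1 * a, lam2 * a
    num = alpha * eps2 * (2j / (np.pi * p2**2 * hankel1(0, p1))) / jv(0, p2)
    den = (eps2 * jv(1, p2) / (p2 * jv(0, p2))
           - hankel1(1, p1) / (p1 * hankel1(0, p1)))
    return num / den
\end{verbatim}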
The averaged $\rho$-component of the Poynting vector at the surface of the cylinder equals
\begin{eqnarray}
\frac{1}{2}\frac{c}{4\pi}\Re\left(\left(\bm{E}(a,\varphi,z)\times\bm{H}^*(a,\varphi,z)\right)_\rho\right)=
\nonumber\\
=-\frac{1}{2}\frac{c}{4\pi}\Re\left( E_z(a,\varphi,z)H_\varphi^*(a,\varphi,z)\right),\label{eq:6p2k}
\end{eqnarray}
as $H_z(a,\varphi,z)=0$.
The total power absorbed in the cylinder $W$ equals
\begin{eqnarray}
-2\pi a \int d z \left(-\frac{1}{2}\frac{c}{4\pi}\right)\Re\left( E_z(a,\varphi,z)H_\varphi^*(a,\varphi,z)\right)=
\nonumber\\
=\frac{a c}{4 \pi} \int d z \Re\left( E_z(a,\varphi,z)H_\varphi^*(a,\varphi,z)\right)\label{eq:7p2k}
\end{eqnarray}
(an extra minus sign is introduced as a positive $\rho$-component of the Poynting vector corresponds to energy flow out of the cylinder, and we are interested in the absorbed power). Although the limits of integration are $-\infty$ and $\infty$, the reflected fields of the ellipsoidal reflector are negligible beyond the focal area. The components of the electric and magnetic field for the electric Hertz vector of Eq.~(\ref{eq:4p2k}) equal
\begin{widetext}
\begin{eqnarray}
E_z(a,\varphi,z)=\int d\gamma a_2(\gamma)\exp(i \gamma z)\frac{\lambda_2^2(\gamma)}{\epsilon_2}J_0(p_2(\gamma)),
\nonumber\\
H_\varphi^*(a,\varphi,z)=\int d\gamma' a_2^*(\gamma')\exp(-i \gamma' z)(-i k_0)\lambda_2^*(\gamma')J_0^{'*}(p_2(\gamma')),\label{eq:8p2k}
\end{eqnarray}
so
\begin{eqnarray}
\int d z E_z(a,\varphi,z)H_\varphi^*(a,\varphi,z)=
\nonumber\\
=\int d z \int d\gamma a_2(\gamma)\exp(i \gamma z)\frac{\lambda_2^2(\gamma)}{\epsilon_2}J_0(p_2(\gamma))\int d\gamma' a_2^*(\gamma')\exp(-i \gamma' z)(-i k_0)\lambda_2^*(\gamma')J_0^{'*}(p_2(\gamma'))=
\nonumber\\
=\int \int d\gamma d\gamma' a_2(\gamma)a_2^*(\gamma')\frac{\lambda_2^2(\gamma)}{\epsilon_2}(-i k_0)\lambda_2^*(\gamma')J_0(p_2(\gamma))J_0^{'*}(p_2(\gamma'))\int d z \exp(i \gamma z)\exp(-i \gamma' z)=
\nonumber\\
=\int \int d\gamma d\gamma' a_2(\gamma)a_2^*(\gamma')\frac{\lambda_2^2(\gamma)}{\epsilon_2}(-i k_0)\lambda_2^*(\gamma')J_0(p_2(\gamma))J_0^{'*}(p_2(\gamma'))2\pi\delta(\gamma-\gamma')=
\nonumber\\
=2 \pi \int d\gamma a_2(\gamma)a_2^*(\gamma)\lambda_2^2(\gamma)\lambda_2^*(\gamma)\frac{-i k_0}{\epsilon_2}J_0(p_2(\gamma))J_0^{'*}(p_2(\gamma)),\label{eq:9p2k}
\end{eqnarray}
\end{widetext}
and
\begin{eqnarray}
\Re\left(\frac{-i k_0}{\epsilon_2\lambda_2^*(\gamma)}J_0(p_2(\gamma))J_0^{'*}(p_2(\gamma))\right)=
\nonumber\\
=\Im\left(\frac{k_0}{\epsilon_2\lambda_2^*(\gamma)}J_0(p_2(\gamma))J_0^{'*}(p_2(\gamma))\right)=
\nonumber\\
=k_0 J_0(p_2(\gamma))J_0^{*}(p_2(\gamma))\Im\left(\frac{1}{\epsilon_2\lambda_2^*(\gamma)}
\frac{J_0^{'*}(p_2(\gamma))}{J_0^*(p_2(\gamma))}\right)=
\nonumber\\
=-k_0 J_0(p_2(\gamma))J_0^{*}(p_2(\gamma))\Im\left(\frac{1}{\epsilon_2^*\lambda_2(\gamma)}
\frac{J_0^{'}(p_2(\gamma))}{J_0(p_2(\gamma))}\right).\label{eq:10p2k}
\end{eqnarray}
Therefore, Eq.~(\ref{eq:7p2k}) can be rewritten as follows:
\begin{widetext}
\begin{eqnarray}
W=\frac{a c}{4}2 \pi\int d \gamma \left|a_2(\gamma)\lambda_2^2(\gamma)J_0(p_2(\gamma))\right|^2
(-k_0) \Im\left(\frac{1}{\epsilon_2^*\lambda_2(\gamma)}
\frac{J_0^{'}(p_2(\gamma))}{J_0(p_2(\gamma))}\right)=
\nonumber\\
=\frac{k_0 a c}{4}2 \pi\int d \gamma \left|a_2(\gamma)\lambda_2^2(\gamma)J_0(p_2(\gamma))\right|^2
\Im\left(\frac{1}{\epsilon_2^*\lambda_2(\gamma)}
\frac{J_1(p_2(\gamma))}{J_0(p_2(\gamma))}\right),\label{eq:11p2k}
\end{eqnarray}
\end{widetext}
as $J_0^{'}(x)=-J_1(x)$.
The radiated power of the pyramidal horn antenna (in the Gaussian system of units) equals \cite{Balanis}
\begin{eqnarray}
W_0=\frac{1}{4}\frac{c}{4\pi}a_1 b_1 E_0^2,\label{eq:12p2k}
\end{eqnarray}
where $a_1$ and $b_1$ are the dimensions of the horn aperture and $E_0$ is the amplitude of the electric field in the center of the aperture. Therefore, the heating efficiency (the part of the radiated power that is absorbed in the cylinder) equals
\begin{widetext}
\begin{eqnarray}
\frac{W}{W_0}=
\frac{8\pi^2 k_0 a\int d \gamma \left|a_2(\gamma)\lambda_2^2(\gamma)J_0(p_2(\gamma))\right|^2
\Im\left(\frac{1}{\epsilon_2^*\lambda_2(\gamma)}
\frac{J_1(p_2(\gamma))}{J_0(p_2(\gamma))}\right)}{a_1 b_1 E_0^2}.\label{eq:13p2k}
\end{eqnarray}
\end{widetext}
The following values of resistivity were used for the platinum wire and the carbon fiber, respectively: 0.106 $\mu$Ohm-m and 13 $\mu$Ohm-m.
While some manufacturer's data (Ref.~\cite{Cytec}) were used in computations for the carbon
fiber, the experimental data for the specific fiber were somewhat different. For example, the fiber diameter was measured
using diffraction of a broad laser beam on the fiber, and the measured value was $10.1\pm0.5$ micron, rather than 11 micron.
The fiber resistivity was determined using measurements of the fiber resistance and dimensions. The value of resistivity
was $16\pm2$ $\mu$Ohm-m, rather than 13 $\mu$Ohm-m. Using these parameters in computations did not result in significant modifications. For example, the part of power absorbed
in the fiber changes from 9.7\% to 10.4\% at 39 GHz. The computed absorbed power changed insignificantly when
the value of the real part of the complex electric permittivity of the fiber changed, e.g., from -1 to 5.
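To indicate how the above pieces fit together numerically, the following heavily simplified sketch assembles the absorbed power per Eq.~(\ref{eq:11p2k}) in Gaussian-CGS units. A Gaussian profile stands in for the on-axis focal field $E_z(z)$, which in the actual computation comes from the physical-optics integral, Eq.~(\ref{eq:9}); the fiber parameters are the measured values quoted above, and the remaining numbers are illustrative:
\begin{verbatim}
import numpy as np
from scipy.special import jv

c_light = 2.998e10                  # cm/s
nu = 39e9                           # Hz
omega = 2 * np.pi * nu
k0 = omega / c_light
a_cyl = 5.5e-4                      # fiber radius, cm (11 micron diameter)
sigma = 9e11 / 1.6e-3               # conductivity, s^-1 (16 uOhm-m)
eps2 = 1.0 + 4j * np.pi * sigma / omega

z = np.linspace(-10.0, 10.0, 8192)  # cm, along the fiber
dz = z[1] - z[0]
Ez = np.exp(-z**2 / (2 * 0.5**2))   # placeholder focal field, arb. units
gamma = 2 * np.pi * np.fft.fftfreq(z.size, d=dz)
alpha = np.fft.fft(Ez) * dz / (2 * np.pi) / (k0**2 - gamma**2 + 0j)

lam2 = np.sqrt(eps2 * k0**2 - gamma**2 + 0j)
p2 = lam2 * a_cyl
coef = a2(gamma, alpha, k0, a_cyl, eps2)   # helper from the sketch above
integrand = (np.abs(coef * lam2**2 * jv(0, p2))**2
             * (jv(1, p2) / (np.conj(eps2) * lam2 * jv(0, p2))).imag)
dgamma = 2 * np.pi / (z.size * dz)
W = 2 * np.pi * k0 * a_cyl * c_light / 4 * integrand.sum() * dgamma
# Dividing W by the horn's radiated power W_0, as in the efficiency
# formula above, yields the heating efficiency plotted below.
\end{verbatim}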
\section{\label{sec:level1-3}Experimental and theoretical results}
In Fig.~\ref{fig:processing2}, the dependence of absorption of electromagnetic power in the platinum wire on the frequency is shown. This case is not optimal for target heating, so only about 1\% of the beam power is absorbed in the wire.
The experimental values are in good agreement with the theoretical ones. The scatter of the experimental points is caused by environmental factors. The wire is heated by just $1^\circ$ or less, so air flows can significantly influence the results.
\begin{figure*}
\includegraphics{processing2}
\caption{\label{fig:processing2}Absorption of microwave radiation by a platinum wire in the focus of the reflector. Red line -- theoretical curve; green line -- experimental data.}
\end{figure*}
\begin{figure*}
\includegraphics{processing1}
\caption{\label{fig:processing1}Absorption of microwave radiation by a carbon fiber in the focus of the reflector. Red line -- theoretical curve; green and blue lines -- experimental data.}
\end{figure*}
In Fig.~\ref{fig:processing1}, the same dependence is shown for the carbon fiber. In this case, significantly more power is absorbed -- up to 6\%. The experimental values are lower than the theoretical ones, but the agreement seems satisfactory. Two experimental curves are given in the figure. The blue line represents the results for the case where the reflector was covered by a sheet of polystyrene foam to decrease air flows. The foam refractive index is close to unity, so the microwave reflection coefficient for normal incidence is of the order of 0.1\%. Therefore, the polystyrene foam sheet has very little effect
on microwave propagation.
The discrepancy between theory and experiment may be due to inaccuracies of the fiber positioning or to a discrepancy between conductivity for direct current and for microwave frequencies.
To assess the effect of microwave field polarization on absorption in the fiber, the experiment was conducted for a horizontal orientation of the fiber, when the electric field is orthogonal to the fiber. Within our approximations, the theoretical absorption efficiency is zero in this case. The experimental values
(less than 0.4\%) were much less than for the other polarization, and the specific experimental values were not very reliable,
as the fiber resistance changes were comparable to instrument error.
Absorption in the cylindrical target can be significantly greater for an axisymmetric converging cylindrical wave incident on the target. In this case, up to 40\% of the incident power can be absorbed in the carbon fiber (see Fig.~\ref{fig:fig4}).
\begin{figure*}
\includegraphics{Rys_5_C_Pt_Cyl_Wave_9}
\caption{\label{fig:fig4}Absorption of an axisymmetric converging cylindrical wave by a thin wire/fiber}
\end{figure*}
The absorption in the platinum wire is significantly less -- about 4\%.
The increase of absorption with frequency in our experiment can probably be explained as follows: for higher frequencies, the horn pattern is narrower, so the reflector gathers more power.
In this experiment, the absorption is relatively modest -- about 6\% for the carbon fiber. This is due to the selected configuration. As a continuous wave was used in the experiment, care was taken to exclude the possibility of multi-path heating of the target: electromagnetic radiation reflected from the wire and then from the reflector would be directed to the other focus of the reflector. The absorption efficiency is proportional to the square of the angle from which the incident power is directed onto the target. In our experiment, this angle is significantly less than 360$^\circ$ (about 200$^\circ$), so the efficiency is at least 3 times less than for the axisymmetric cylindrical wave. However, in fast heating applications, the target can be irradiated from all directions, so higher efficiency can be achieved.
\section{\label{sec:level1-3}Conclusion}
The results of the experiment confirm the feasibility of efficient heating of thin cylindrical targets with electromagnetic radiation whose wavelength (and, consequently, the dimensions of the focal area) is several orders of magnitude greater than the diameter of the target. To this end, it is necessary to create an incident field with a high axisymmetric content (with respect to the axis of the target) and proper polarization, and the diameter of the target, its conductivity, and the wavelength must be matched. However, the conditions of efficient heating are non-resonant and therefore very promising for numerous applications. Heating efficiencies of tens of per cent can be achieved for very thin targets.
\section{Introduction}
\label{s:intro}
The discovery of powerful quasars at $z\approx 7.5$ \citep{banetal18}
implies that fully fledged supermassive black holes (BHs) as heavy as
$M_{\rm{BH}}\sim 10^9M_\odot$ already existed when the Universe was just 700
million years old. If a significant fraction of this mass has been
accumulated by accretion at a nearly critical rate, then the growth of
such objects must have started very early on (at $z\gtrsim 20$) from
seeds that already had masses $\sim 10^3M_\odot$ or more. What kind of
object these seeds were is one of the most interesting open questions in
astrophysics (see \citealt{volonteri10,latfer16,wooetal18} for
reviews). There is a hope that looking into the $z\sim 20$--10 epochs
with next-generation telescopes, one will be able to observe the early
accretion growth of supermassive BHs when their masses were $\sim
10^4$--$10^6M_\odot$. Hereafter, we will refer to such accretors as
`miniquasars'.
One of the most promising ways to find such miniquasars is in X-rays,
since we know that both stellar-mass and supermassive black holes emit
copious amounts of X-rays during accretion (X-ray binaries, XRBs, and
active galactic nuclei, AGN, respectively). Unfortunately, even the
sensitivity of the {\it Chandra} X-ray Observatory is not sufficient
for detecting miniquasars at $z\gtrsim 6$. The situation will change
dramatically if a mission such as the proposed {\it Lynx} is
implemented in the future. {\it Lynx} is planned to achieve a
sensitivity as high as $10^{-19}$~erg~cm$^{-2}$~s$^{-1}$ (0.5--2~keV)
in combination with {\it Chandra}-like (arcsecond) angular resolution
and substantial sky coverage ($\sim 400$~arcmin$^2$) in its deep
extragalactic surveys \citep{lynx18,benetal18}. This implies that
X-ray sources with luminosities as low as a few $10^{41}$~erg~s$^{-1}$
(rest-frame 2--10~keV) will be detectable without confusion at $z\sim
15$. Assuming that hard X-ray emission carries a significant ($\sim
10$\%) fraction of the near-Eddington bolometric luminosity of a
miniquasar, {\it Lynx} will be able to detect accreting BHs with
masses as low as a few $10^4M_\odot$ in the early Universe.
The bulk of the gravitational potential energy released during
radiatively efficient accretion onto a BH emerges in the form of
quasi-thermal radiation from the accretion disk \citep{shasun73}, with
the effective waveband shifting from the optical--UV for supermassive
BHs to soft X-rays for stellar-mass BHs, as observed in AGN and XRBs
(in so-called soft/high states for the latter,
e.g. \citealt{gilmer14}). For intermediate-mass BHs, the bulk of the
disk's emission is expected to fall into the far UV--ultrasoft X-ray
band. Therefore, due to cosmological redshift, {\it Lynx} will not be
able to detect this primary emission component, but, as already
mentioned above, it should be able to observe additional, harder
radiation that can arise due to Comptonization of thermal emission
from the disk in its hot corona. In principle, the redshifted thermal
emission from miniquasars could be observed directly in the
optical--infrared band, but since the Eddington luminosity for a BH of
mass $\sim 10^5M_\odot$ at $z\sim 15$ corresponds to an AB magnitude of
more than 30, the detection of such miniquasars will be extremely
challenging even with the next-generation IR observatories such as the
{\it James Webb Space Telescope} and {\it Wide-field Infrared Survey
Telescope} (see \citealt{masetal15} for the expected sensitivities
of future surveys with these telescopes).
There is, however, another, indirect way to reveal the primary thermal
radiation from the first miniquasars, which is to observe its impact on
the ambient intergalactic medium (IGM) in the early Universe using the
21~cm spin-flip transition of neutral hydrogen. As has been actively
discussed over the past two decades, the first generations of X-ray
sources could significantly heat the primordial IGM prior to cosmic
reionization and strongly modify the global 21~cm signal from the
$z\sim 15$--10 epochs (see \citealt{priloe12} for a review). A lot of
recent literature on this subject is devoted to discussing the
potentially observable effect of the first generations of stellar-type
X-ray sources and in particular high-mass X-ray binaries (HMXBs,
e.g. \citealt{miretal11,cohetal17,madfra17,sazkha17a}), which likely
were present in significant numbers since the beginning of active star
formation in the Universe \citep{fraetal13}. The bulk of the emission
produced by HMXBs is at energies above 0.5~keV and since such photons
can travel large distances before being photoabsorbed in the IGM, the
main effect of HMXBs is expected to be a global enhancement of the IGM
temperature together with large-scale fluctuations reflecting the
large-scale structure of the early Universe
(e.g. \citealt{prifur07,rosetal17}).
In contrast, miniquasars, which presumably have much softer energy
spectra compared to HMXBs, should mainly heat the IGM in their
relatively close vicinity. One may thus expect such sources to be
surrounded by compact regions of specific 21~cm signal. Although
finding such 21~cm features in a blind search might be difficult even
for the most ambitious upcoming radio interferometers such as the Square
Kilometer Array (SKA, \citealt{meletal13}), such a search could be
greatly facilitated if carried out around miniquasar candidates found
via their coronal X-ray emission with a mission like {\it Lynx}. We
elaborate on this idea below. Before proceeding, we note that there
have been plenty of studies addressing the impact of quasars and
miniquasars on the IGM and the associated 21~cm signal
(e.g. \citealt{madetal04,ricost04,chuetal06,thozar08,yajli14,fiaetal17,ghaetal17,boletal18,vasetal18}),
but the novelty of our study is its focus on the miniquasar's primary,
thermal emission component and the synergy of 21~cm and X-ray
observations.
The following values of cosmological parameters are used throughout
the paper: $\Omega_{\rm m}=0.309$, $\Omega_\Lambda=1-\Omega_{\rm m}$, $\Omega_{\rm b}=0.049$,
$H_0=68$~km~s$^{-1}$~Mpc$^{-1}$ and $Y=0.246$ (helium mass fraction)
\citep{planck16}.
\section{Model}
\label{s:model}
Suppose a BH has an initial mass $M_{\rm{i}}$ at redshift $z_{\rm{i}}$ and accretes
matter until epoch $z_{\rm{f}}$, reaching a final mass $M_{\rm{f}}$. If the
accretion proceeded at a critical (Eddington limited) rate $\dot{M}_{\rm{E}}$,
the BH mass would be increasing exponentially,
\begin{equation}
M(t)=M_{\rm{i}} e^\frac{t}{t_{\rm{S}}},
\end{equation}
on the Salpeter time scale
$t_{\rm{S}}=\frac{\epsilon}{1-\epsilon}\frac{c^2M_\odot}{L_{\rm{Edd}}(M_\odot)}$, where
$L_{\rm{Edd}}$ is the Eddington luminosity and $\epsilon$ is the radiation
efficiency. Adopting for simplicity $\epsilon=0.1$ (as is
approximately true for standard accretion disks), $t_{\rm{S}}\approx 5\times
10^7$~yr. Therefore, the average rate expressed in units of the
critical rate (usually referred to as the Eddington ratio), at which
the BH accretes mass between epochs $z_{\rm{i}}$ and $z_{\rm{f}}$ is
\begin{equation}
\langle\dot{m}\rangle\equiv\frac{\dot{M}}{\dot{M}_{\rm{E}}}=\frac{t_{\rm{S}}}{t(z_{\rm{i}},z_{\rm{f}})}
\ln\frac{M_{\rm{f}}}{M_{\rm{i}}},
\end{equation}
where $t(z_{\rm{i}},z_{\rm{f}})$ is the cosmic time between $z_{\rm{i}}$ and $z_{\rm{f}}$.
It is unlikely though that the BH will accrete matter at a constant
rate over a cosmologically long period of time. In reality, accretion
onto the BH will be determined by evolving external and
internal (with respect to the host galaxy) conditions and is likely to
be an intermittent process. Therefore, in our simulations, described
below, we assumed that there are periods of active accretion when the
Eddington ratio takes a fixed value $\dot{m}$ and passive periods when
$\dot{m}=0$. We further assume that these two types of intervals
alternate in a random fashion\footnote{We took the duration of these
intervals to be $\Delta t=10^4$ or $10^5$~yr, with the results being
insensitive to this choice as long as $\Delta t\ll t_{\rm{S}}$.}, so that
the duty cycle of BH activity is
\begin{equation}
k_{\rm{duty}}=\frac{\langle\dot{m}\rangle}{\dot{m}}.
\end{equation}
By definition, $\dot{m}\ge\langle\dot{m}\rangle$, and we also assume that $\dot{m}<1$,
i.e. we do not consider supercritical accretion in this study.
\subsection{Emission spectrum}
\label{s:spec}
One of the key aspects for this study is the spectrum of the radiation
emitted by the accreting BH. According to the standard accretion
theory \citep{shasun73}, a geometrically thin, optically thick
accretion disk around a BH is characterized by a $\propto r^{-3/4}$
temperature profile (except in the very narrow innermost region, where
only a small fraction of the total luminosity is emitted) and
generates multicolor, nearly blackbody radiation with a spectrum
(specific luminosity as a function of energy)
\begin{equation}
L_E(E)\propto \int_{r_{\rm{in}}}^{r_{\rm{out}}} rB_E(E,T(r))\,dr,
\label{eq:le}
\end{equation}
where $r_{\rm{in}}$ and $r_{\rm{out}}$ are the disk's inner and outer radii and
$B_E$ is the Planck function.
The maximum temperature of the disk is
\begin{equation}
kT_{\rm{max}}\approx 1.2\left(\frac{\dot{m}}{m}\right)^{1/4}\,{\rm keV},
\label{eq:tmax}
\end{equation}
where $m(t)$ is the growing BH mass expressed in solar
masses. According to the standard theory, this temperature is achieved
at $(49/36) r_0$, where $r_0$ is the radius of the innermost
stable circular orbit, but a fairly good approximation is that the
disk temperature reaches this value at $r_{\rm{in}}$ and then decreases as
$T(r)=T_{\rm{max}}(r/r_{\rm{in}})^{-3/4}$ at $r>r_{\rm{in}}$. For the purposes of this
study it can also be safely assumed that $r_{\rm{out}}\to\infty$. The
spectrum given by equation~(\ref{eq:le}) can then be approximated by
the power law $L_E\propto E^{1/3}$ at $E\lesssim 0.3kT_{\rm{max}}$ and by the
blackbody spectrum with a temperature of $0.7T_{\rm{max}}$ at $E\gtrsim
2kT_{\rm{max}}$ \citep{maketal86}, with its maximum (when plotted in units of
$EL_E$) being at $E_{\rm{max}}\approx 2.35kT_{\rm{max}}$. The normalization constant
in equation~(\ref{eq:le}) is determined by the condition
\begin{equation}
\int L_E\,dE=\dot{m}\,L_{\rm{Edd}}(m).
\end{equation}
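A minimal numerical sketch of the spectrum defined by equations~(\ref{eq:le}) and (\ref{eq:tmax}) with the above normalization is given below; the radial grid and outer cutoff are implementation choices:
\begin{verbatim}
import numpy as np

def diskbb(E, m, mdot, rmax=1e4, n_r=3000):
    """Multicolor disk spectrum L_E in erg/s/keV.

    E: photon energies (keV); m: BH mass (Msun); mdot: Eddington ratio.
    Assumes T(r) = T_max (r/r_in)^(-3/4), with r_out effectively infinite."""
    kTmax = 1.2 * (mdot / m)**0.25                # keV
    r = np.logspace(0, np.log10(rmax), n_r)       # r / r_in
    kT = kTmax * r**(-0.75)
    with np.errstate(over='ignore'):              # cold outer annuli
        B = E[:, None]**3 / np.expm1(E[:, None] / kT[None, :])
    L_E = np.trapz(r * B, r, axis=1)
    L_E *= 1.26e38 * m * mdot / np.trapz(L_E, E)  # normalization
    return L_E

E = np.logspace(-3, 1, 500)                       # 1 eV -- 10 keV
L_E = diskbb(E, m=1e5, mdot=1.0)
print(f"EL_E peaks near {E[np.argmax(E * L_E)] * 1e3:.0f} eV")  # ~2.35 kT_max
\end{verbatim}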
The model described above is widely known as a multicolor disk
blackbody model ({\it diskbb} in {\small XSPEC}, \citealt{arnaud96}),
and we have chosen it as our baseline spectral model. This choice is
primarily motivated by the fact that
the standard accretion disk theory provides a satisfactory description
of the observed spectral energy distributions (SED) of (i) XRBs in
their soft/high states (when $\dot{m}\gtrsim 0.1$); namely, their
dominant emission component is well described by {\it diskbb} with
$kT_{\rm{max}}\lesssim 1$~keV, as expected from equation~(\ref{eq:tmax}) for
the stellar masses ($m\lesssim 10$) of the BHs in XRBs (see
\citealt{donetal07} for a review) and (ii) AGN -- supermassive BHs
accreting at $\dot{m}\sim 0.01$--1, for which the peak of the SED is
observed in the optical-UV (the so-called big blue bump,
e.g. \citealt{elvetal94,teletal02,sazetal04}), again as expected from
equation~(\ref{eq:tmax}) for the high ($m\sim 10^7$--$10^9$) BH masses
of AGN.
In reality, observations reveal significant deviations of XRB and AGN
spectra from the simple multicolor disk blackbody model described
above, and these deviations can be generally accounted for by the
conditions at the inner boundary of the accretion disk, radiative
transfer effects in the disk's atmosphere and relativistic corrections
(e.g. \citealt{korbla99,meretal00,davetal05,donetal12}). However, in
view of other, larger uncertainties related to the problem in hand (in
particular in the BH mass and accretion rate), we do not take these
subtleties into account.
Arguing further from analogy with XRBs and AGNs, it is likely that a
miniquasar's emission spectrum has an additional, harder component
due to the Comptonization of part of the thermal radiation from the
disk in its hot corona. We simulate this plausible situation by
modifying our baseline {\it diskbb} model by the {\it simpl}
\citep{steetal09} model [specifically we use {\it simpl(diskbb)} in
{\small XSPEC}], which provides a simplified description of
Comptonization by converting a given fraction, $f_{\rm sc}$, of soft thermal
photons into high-energy ones. Another free parameter of this model is
the photon index, $\Gamma$, of the power-law component. We use
$\Gamma=2$ and $f_{\rm sc}=0.05$ as fiducial values. The assumed spectral
slope is close to those of observed hard X-ray tails in XRBs and AGN
and is convenient in use since no $k$-correction is then needed in
converting luminosities to fluxes. The adopted $f_{\rm sc}$ value implies
that the power-law component, if it continues up to $E\sim 100$~keV,
contains $\sim 25$\% (with a very weak dependence on the disk
temperature, i.e. on $m$ and $\dot{m}$) of the miniquasar's bolometric
luminosity, in overall agreement with observations of XRBs in their
high state (e.g. \citealt{donetal07}) and AGN
(e.g. \citealt{sazetal04}). Note that for the adopted values of the
parameters, $\Gamma$ may be considered the slope of the high-energy
part of the spectrum at $E\gtrsim 10 kT_{\rm{max}}$, where $T_{\rm{max}}$ is given
by equation~(\ref{eq:tmax}).
As already mentioned, our current treatment is restricted to the case
of subcritical accretion ($\dot{m}<1$). In reality, in some miniquasars
and/or at some stage of their evolution accretion may proceed at a
supercritical rate. Consideration of such a case would require
adopting a substantially different spectral model, as suggested by the
measured spectra of individual ultraluminous X-ray sources in nearby
galaxies (e.g. \citealt{sazetal14,kaaetal17}), the collective X-ray
spectrum of such sources in the local Universe \citep{sazkha17b} and
theoretical considerations (e.g. \citealt{naretal17,taketal19}).
\subsection{Intragalactic absorption}
\label{s:abs}
\begin{figure}
\centering
\includegraphics[width=\columnwidth,viewport=20 180 560 720]{spectra.pdf}
\caption{{\it Top panel:} Multicolor disk blackbody emission spectra
(in the source's rest frame, in units of specific luminosity
multiplied by photon energy) modified by absorption in the
miniquasar's host galaxy, for $\dot{m}=1$, different BH masses,
$m=10^4$ (black), $10^5$ (blue) and $10^6$ (red), and various
absorption columns and metallicities ($N_{\rm{H}}$ in cm$^{-2}$, $Z$):
($10^{18}$, 0) -- solid, ($10^{20}$, 0) -- dashed, ($10^{21}$, 0) --
dotted, ($10^{21}$, 1) -- dash-dotted. {\it Bottom panel:}
Multicolor disk blackbody emission spectra modified by
Comptonization ({\it simpl(diskbb)}), for the same BH masses as
above (shown with the same colors). The solid and dashed curves
correspond to $N_{\rm{H}}=10^{18}$~cm$^{-2}$ and $N_{\rm{H}}=10^{20}$~cm$^{-2}$,
respectively ($Z=0$). The corresponding pure thermal spectra (for
$N_{\rm{H}}=10^{18}$~cm$^{-2}$) are shown with the short dash--long dashed
lines for comparison.
}
\label{fig:spectra}
\end{figure}
Before reaching the ambient IGM, the extreme UV-soft X-ray radiation
from the miniquasar may be partially photoabsorbed within its host
galaxy. This may happen (i) in the vicinity of the BH if something
like the AGN obscuring torus is present in miniquasars (in that case,
absorption will take place within a certain solid angle only), and/or
(ii) in the more distant regions of the galaxy. Given our scarce
knowledge about the first galaxies and in particular about the
parsec-scale environment of intermediate-mass BHs they may host, and
also taking into account that the miniquasar's radiation can
significantly ionize the interstellar medium in front of it and
thereby strongly diminish the net absorption effect (see, e.g.,
\citealt{sazkha18}), it is hardly possible to reliably predict the
typical line-of-sight absorption column, $N_{\rm{H}}$, for the miniquasars in
the early Universe. We therefore consider it a free
parameter. Similarly, we allow the metallicity of the absorbing gas to
vary from $Z=0$ (pure H--He gas) to $Z=1$ (normal chemical
composition): although the first galaxies (at $z\sim 20$--10) were
likely metal poor, the immediate surroundings of miniquasars might
have been significantly metal enriched because they were probably the
sites of strong star formation activity.
Figure~\ref{fig:spectra} (top panel) shows examples of (rest-frame)
spectra of miniquasars for various values of model parameters, namely
$m$ (assuming $\dot{m}=1$), $N_{\rm{H}}$ and $Z$ (absorption was modeled by
means of the {\it tbvarabs} model in {\small XSPEC}). We see that for
the range of BH masses and accretion rates expected for miniquasars
and in the absence of absorption, the bulk of the accretion disk's
emission is in the extreme UV--very soft X-ray band, at energies
$E\sim 50$--1000~eV. Even a moderate absorption ($N_{\rm{H}}\lesssim
10^{20}$~cm$^{-2}$) will cause a strong reduction of the flux below
$\sim 200$~eV. An addition of metals to the absorbing medium (the
$Z=1$ case) will further reduce the flux, but mostly above the oxygen
absorption edge at $E=536$~eV (note also that the helium absorption
edge at $E=24.6$~eV is clearly seen in the spectra).
The bottom panel of Fig.~\ref{fig:spectra} shows examples of thermal
spectra modified by Comptonization, as described above. For the
adopted value, $f_{\rm sc}=0.05$, of the fraction of Comptonized photons,
the hard tail starts to dominate over thermal emission at $\sim 1$~keV
for the lowest mass BH ($m=10^4$) and already at $\sim 300$~eV for the
highest mass BH ($m=10^6$).
\section{A crude estimate of the expected heating}
\label{s:estimates}
To a first approximation, the thermal disk emission from miniquasars
(with $m\sim 10^4$--$10^{6}$) in the presence of moderate absorption
($N_{\rm{H}}\lesssim 10^{20}$~cm$^{-2}$) may be characterized by a narrow
spectrum around an energy $\sim 300$~eV (see
Fig.~\ref{fig:spectra}). This allows us to derive order-of-magnitude
estimates for the impact of a miniquasar on the IGM before proceeding
to detailed computations.
The mean free path of soft X-ray photons of energy $E$ in the
primordial (i.e. nearly neutral H--He gas) IGM of the early Universe
can be approximated as follows \citep{sazsun15}:
\begin{equation}
\bar{\lambda}\approx
740\left(\frac{1+z}{11}\right)^{-3}\left(\frac{E}{300~{\rm
eV}}\right)^{3.2}\,{\rm kpc}.
\label{eq:path_kpc}
\end{equation}
Within this (proper) distance from the source, $1-e^{-1}\approx 63$\%
of photons of energy $E$ will be photoabsorbed (whereas 95\% of
photons will be absorbed within $3\bar{\lambda}$). The distance $\bar{\lambda}$
corresponds to an angular size
\begin{equation}
\bar{\theta}\approx 2.7
\left(\frac{1+z}{11}\right)^{-2}\left(\frac{E}{300~{\rm
eV}}\right)^{3.2}\,{\rm arcmin},
\label{eq:path_arcmin}
\end{equation}
on the sky, which is a reasonably good approximation for $z=20$--10
and $E=100$--1000~eV.
The thermal disk emission from a miniquasar can heat the IGM
efficiently only within a few $\bar{\lambda}$, since only an exponentially
decreasing fraction of the miniquasar's luminosity reaches larger
distances. This allows us to readily estimate the expected IGM
temperature increment. The total energy released by the BH during its
growth to mass $M$ is $W=\epsilon Mc^2$, and we may assume that most of
this energy is radiated away over a time of order the Salpeter time
($t_{\rm{S}}$) just before the epoch when the miniquasar and the associated
21~cm signal are observed, so that, to a first approximation,
we can ignore any effects associated with the expansion of the
Universe. We may further assume that all of this energy has been
absorbed within a volume of radius $\sim\bar{\lambda}$ (the corresponding
light travel time proves to be shorter than $t_{\rm{S}}$). Assuming that the
miniquasar ionizes the surrounding medium only moderately (i.e. the
ionization degree of hydrogen is less than a few per cent, which is a
good approximation for the bulk of the affected volume), we may
roughly estimate the mean fraction of the energy of soft X-ray photons
that goes into heating the IGM as $f_{\rm{heat}}\sim 0.2$
\citep{fursto10}. Taking into account that the hydrogen space density
changes with redshift as $n_{\rm H}(z)\approx 2.6\times
10^{-4}[(1+z)/11]^3$~cm$^{-3}$, we can write
\begin{equation}
f_{\rm{heat}} W=\frac{4\pi\bar{\lambda}^3}{3}\frac{3}{2}n_{\rm H} k\Delta T_{\rm{K}},
\end{equation}
where $k$ is the Boltzmann constant, and finally determine the
expected IGM temperature increment assuming $E=300$~eV:
\begin{equation}
\Delta T_{\rm{K}}\sim
100\frac{f_{\rm{heat}}}{0.2}\frac{\epsilon}{0.1}\left(\frac{1+z}{11}\right)^{6}\frac{m}{10^4}\,{\rm K}.
\label{eq:deltat}
\end{equation}
Comparing this with the cosmic microwave background (CMB) temperature,
$T_{\rm{CMB}}(z)\approx 30[(1+z)/11]$~K, we come to the conclusion that BHs
with $M\gtrsim 10^4M_\odot$ accreting at a nearly critical rate in the
early Universe will be surrounded by well-defined zones
with a radius of a few arcmin within which $T_{\rm{K}}\gtrsim T_{\rm{CMB}}$, and
these regions are thus expected to be 21~cm emitters, in contrast to
the surrounding sky, which is likely to exhibit 21~cm absorption at
$z\gtrsim 10$. Importantly, for $m\gtrsim 10^4$, the size of the
heating region is determined simply by the mean free path of soft
X-ray photons, rather than by the radiative power of the
miniquasar. Following the same argument, we may expect that for BHs of
smaller mass, $m\lesssim 10^4$, the region of strong heating
[$\Delta T_{\rm{K}}\gtrsimT_{\rm{CMB}}(z)$] will be smaller than $\sim\bar{\lambda}$ and its
actual size will be determined by the total energy released
by accretion onto the BH.
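The estimates of this section are summarized in the short script below; the formulas are those of equations~(\ref{eq:path_kpc}), (\ref{eq:path_arcmin}) and (\ref{eq:deltat}), and the fiducial values are those already quoted:
\begin{verbatim}
def lam_bar(z, E_eV):
    """Mean free path in the neutral IGM, proper kpc."""
    return 740 * ((1 + z) / 11.0)**-3 * (E_eV / 300.0)**3.2

def theta_bar(z, E_eV):
    """Corresponding angular size, arcmin."""
    return 2.7 * ((1 + z) / 11.0)**-2 * (E_eV / 300.0)**3.2

def delta_TK(z, m, f_heat=0.2, eps=0.1):
    """IGM temperature increment within ~lam_bar of the source, K."""
    return 100 * (f_heat / 0.2) * (eps / 0.1) * ((1 + z) / 11.0)**6 * m / 1e4

for z in (10, 15):
    print(f"z={z}: lam={lam_bar(z, 300):.0f} kpc, "
          f"theta={theta_bar(z, 300):.1f} arcmin, dT={delta_TK(z, 1e4):.0f} K")
\end{verbatim}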
\section{Simulations}
\label{s:simul}
Based on the assumptions outlined in \S\ref{s:model}, we performed a
series of numerical calculations of IGM heating and associated 21~cm
emission/absorption in the vicinity of miniquasars in the early
Universe. We limited simulations to a redshift range of $z=20$--10
and neglected any global heating (i.e. outside the region affected by
the miniquasar) of the IGM by X-ray sources and/or other
mechanisms\footnote{In particular, by low-energy cosmic rays from the
first supernovae \citep{sazsun15,leietal17}.}. We adopted the
following initial parameters of the IGM:
$x_{\rm HII}\equivn_{\rm HII}/n_{\rm H}=2.2\times 10^{-4}$ (hydrogen ionization
fraction), $x_{\rm HeII}\equivn_{\rm HeII}/n_{\rm He}=0$, $x_{\rm HeIII}\equivn_{\rm HeIII}/n_{\rm He}=0$
(helium ionization fractions) and either $T_{\rm{K}}=9.3$~K (for $z_{\rm{i}}=20$) or
$T_{\rm{K}}=5.4$~K (for $z_{\rm{i}}=15$). These were found using {\small RECFAST}
\citep{seaetal99} and correspond to the conditions after cosmic
recombination and adiabatic cooling of the primordial gas. Our
assumption about the absence of significant global heating might be a
good approximation at least at $z\gtrsim 15$, as suggested by the
recent detection of a strong, sky-averaged 21~cm absorption signal in
the Experiment to Detect the Global Epoch of Reionization Signature
(EDGES, \citealt{bowetal18}).
X-ray ionization and heating of the IGM was calculated in
logarithmically binned spherical shells around the miniquasar, out to
a comoving distance of 500~Mpc. Although this maximal distance is
fairly large, we ignored photon travel time effects (i.e. the response
of the IGM to radiation emitted by the central source was considered
instantaneous), since X-ray heating proves to be noticeable only
within $\sim 30$~cMpc of the miniquasar and the corresponding
light travel time at $z\gtrsim 10$ is much shorter than the Salpeter
timescale on which BH growth occurs.
The evolution of the ionization state of hydrogen and helium with time
in a given shell was calculated as follows
[equations~(\ref{eq:dxh})--(\ref{eq:heatrate}) below are adopted from
\citealt{madfra17,sazkha17a}]:
\begin{eqnarray}
\frac{dx_{\rm HI}}{dt} &=& -x_{\rm HI}\Gamma_{\rm HI}+n_{\rm e} (1-x_{\rm HI})\alpha_{\rm HII},\nonumber\\
\frac{dx_{\rm HeI}}{dt} &=& -x_{\rm HeI}\Gamma_{\rm HeI}+\nelx_{\rm HeII}\alpha_{\rm HeII},\nonumber\\
\frac{dx_{\rm HeII}}{dt} &=&
-x_{\rm HeII}\Gamma_{\rm HeII}+\nelx_{\rm HeIII}\alpha_{\rm HeIII}-\frac{dx_{\rm HeI}}{dt},\nonumber\\
&&
\label{eq:dxh}
\end{eqnarray}
where $n_{\rm e}$ is the number density of free electrons, $\alpha_{\rm HII}$,
$\alpha_{\rm HeII}$ and $\alpha_{\rm HeIII}$ are the recombination coefficients
(adopted from \citealt{theetal98}), and $\Gamma_{\rm HI}$, $\Gamma_{\rm HeI}$ and
$\Gamma_{\rm HeII}$ are the photoionization coefficients, which were
calculated as follows:
\begin{eqnarray}
\Gamma_{\rm HI}=\frac{1}{4\pi r^2}\left(\int_{I_{\rm HI}}^\infty\frac{L_E
e^{-\tau(r,E)}}{E}\sigma_{\rm HI}(1+N_{\rm s,HI}(E,I_{\rm HI}))dE\right. \nonumber\\
\left.+\int_{I_{\rm HeI}}^\infty\frac{L_E
e^{-\tau(r,E)}}{E}\sigma_{\rm HeI}\frac{n_{\rm HeI}}{n_{\rm HI}}N_{\rm
s,HI}(E,I_{\rm HeI})dE\right.\nonumber\\
\left.+\int_{I_{\rm HeII}}^\infty\frac{L_E e^{-\tau(r,E)}}{E}\sigma_{\rm HeII}\frac{n_{\rm HeII}}{n_{\rm HI}}N_{\rm s,HI}(E,I_{\rm HeII})dE\right),\nonumber\\
\Gamma_{\rm HeI}=\frac{1}{4\pi r^2}\left(\int_{I_{\rm HI}}^\infty\frac{L_E
e^{-\tau(r,E)}}{E}\sigma_{\rm HI}\frac{n_{\rm HI}}{n_{\rm HeI}}N_{\rm s,HeI}(E,I_{\rm HI})dE\right.\nonumber\\
\left.+\int_{I_{\rm HeI}}^\infty\frac{L_E e^{-\tau(r,E)}}{E}\sigma_{\rm HeI}(1+N_{\rm
s,HeI}(E,I_{\rm HeI}))dE\right.\nonumber\\
\left.+\int_{I_{\rm HeII}}^\infty\frac{L_E
e^{-\tau(r,E)}}{E}\sigma_{\rm HeII}\frac{n_{\rm HeII}}{n_{\rm HeI}}N_{\rm
s,HeI}(E,I_{\rm HeII})dE\right),\nonumber\\
\Gamma_{\rm HeII}=\frac{1}{4\pi r^2}\int_{I_{\rm HeII}}^\infty\frac{L_E
e^{-\tau(r,E)}}{E}\sigma_{\rm HeII} dE,
\label{eq:gammas}
\end{eqnarray}
where $I_{\rm HI}=13.6$~eV, $I_{\rm HeI}=24.6$~eV and $I_{\rm HeII}=54.4$~eV are the
ionization thresholds for HI, HeI and HeII, $N_{\rm s,HI}$ and $N_{\rm s,HeI}$ are
the mean numbers of secondary ionizations of HI and HeI (secondary
ionization of HeII is practically unimportant) caused by the fast
photoelectron, with the notation $N_{\rm s,HI}(E,I_{\rm HI})$ meaning that
$N_{\rm s,HI}$ is a function of the photoelectron energy, $E-I_{\rm HI}$
(the corresponding dependencies for HI and HeI are adopted from
\citealt{fursto10}), and
\begin{equation}
\tau(r,E)=\int_0^r[n_{\rm HI}(r')\sigma_{\rm HI}(E)+n_{\rm HeI}(r')\sigma_{\rm HeI}(E)+n_{\rm HeII}(r')\sigma_{\rm HeII}(E)]dr'
\end{equation}
is the IGM photoionization optical depth within radius $r$ from the
source, with the cross-sections $\sigma_{\rm HI}(E)$, $\sigma_{\rm HeI}(E)$ and
$\sigma_{\rm HeII}(E)$ adopted from \cite{veretal96}.
The evolution of the gas temperature with time in a given shell is
given by
\begin{equation}
\frac{dT_{\rm{K}}}{dt}=-2HT_{\rm{K}}+\frac{T_{\rm{K}}}{\mu}\frac{d\mu}{dt}+\frac{2\mum_{\rm p}}{3k\rho_{\rm b}}(\mathcal{H}-\Lambda),
\label{eq:dtk}
\end{equation}
where the photoionization heating rate is given by
\begin{eqnarray}
\mathcal{H} = \frac{1}{4\pi r^2}\left(\int_{I_{\rm HI}}^\infty\frac{L_E
e^{-\tau(r,E)}}{E}(E-I_{\rm HI})n_{\rm HI}\sigma_{\rm HI} f_{\rm
heat}(E,I_{\rm HI})dE\right.\nonumber\\
\left.+\int_{I_{\rm HeI}}^\infty\frac{L_E e^{-\tau(r,E)}}{E}(E-I_{\rm HeI})n_{\rm HeI}\sigma_{\rm HeI}
f_{\rm heat}(E,I_{\rm HeI})dE\right.\nonumber\\
\left.+\int_{I_{\rm HeII}}^\infty\frac{L_E e^{-\tau(r,E)}}{E}(E-I_{\rm HeII})n_{\rm HeII}\sigma_{\rm HeII}
f_{\rm heat}(E,I_{\rm HeII})dE\right),\nonumber\\
\label{eq:heatrate}
\end{eqnarray}
with $H(z)$ being the Hubble constant, $\rho_{\rm b}$ the average baryonic
density of the Universe, $\mu$ the mean molecular weight and $f_{\rm{heat}}$
the fraction of the photoelectron energy that goes into gas heating,
which depends on the photoelectron energy as given by \cite{fursto10}.
The term proportional to $\Lambda$ in equation~(\ref{eq:dtk}) accounts
for the radiative losses arising from collisional and recombination
processes (the corresponding rates were adopted from
\citealt{theetal98}), as well as for Compton cooling caused by
scattering of the CMB on free electrons\footnote{Inverse Compton
heating due to the X-ray radiation is negligible except very close
to the miniquasar.}, which proceeds on the time scale
\begin{eqnarray}
t_{{\rm CMB}} &=& \frac{3m_{\rm e} c^2 (n/n_{\rm e})}{32\sigma_{\rm T}\sigmaT_{\rm{CMB}}^4(z)}\nonumber\\
&\approx & 8\times
10^{7}\left(\frac{n_{\rm e}}{n}\right)^{-1}\left(\frac{1+z}{11}\right)^{-4}\,{\rm
yr},
\label{eq:cmbtime}
\end{eqnarray}
where $n$ is the total particle number density, $\sigma_{\rm T}$ is the
Thomson scattering cross-section and $\sigma$ is the Stefan--Boltzmann
constant. Cooling due to collisional processes (including
bremsstrahlung) and CMB scattering proves to be important in the
vicinity of the miniquasar where the gas becomes strongly ionized and
its temperature rises to $T_{\rm{K}}\gtrsim 10^4$~K. However, the cooling
processes typically have a negligible effect on the average parameters
of the IGM heating zone produced by the miniquasar.
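The structure of the calculation can be illustrated with a deliberately stripped-down, single-shell sketch: hydrogen only, a monochromatic source at 300~eV, no secondary ionizations, no cooling and no expansion terms, with illustrative source parameters ($m=10^5$, $\dot{m}=1$, a shell at 300 proper kpc at $z=10$). It is not a substitute for the full integration of equations~(\ref{eq:dxh})--(\ref{eq:heatrate}) over the absorbed spectrum:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

eV = 1.602e-12                        # erg
E, I_HI = 300 * eV, 13.6 * eV
sig = 6.3e-18 * (13.6 / 300.0)**3     # approx. E^-3 HI cross-section, cm^2
nH = 2.6e-4                           # cm^-3 at z = 10
L = 1.26e38 * 1e5                     # Eddington luminosity, erg/s
r = 300 * 3.086e21                    # 300 proper kpc, cm
f_heat, k_B = 0.2, 1.38e-16

def rhs(t, y):
    xHI, T = y
    flux = L / (4 * np.pi * r**2 * E) * np.exp(-nH * xHI * sig * r)
    Gam = flux * sig                         # ionizations per HI atom, s^-1
    alpha_B = 2.6e-13 * (T / 1e4)**-0.7      # case-B recombination, cm^3/s
    ne = nH * (1 - xHI)
    dxHI = -xHI * Gam + ne * (1 - xHI) * alpha_B
    dT = (2.0 / (3 * k_B)) * xHI * Gam * (E - I_HI) * f_heat
    return [dxHI, dT]

t_end = 5e7 * 3.156e7                        # one Salpeter time, s
sol = solve_ivp(rhs, (0, t_end), [1 - 2.2e-4, 5.4], rtol=1e-6, atol=1e-12)
print(f"x_HII = {1 - sol.y[0, -1]:.1e}, T_K = {sol.y[1, -1]:.0f} K")
\end{verbatim}
The resulting values ($x_{\rm HII}$ of a few $10^{-3}$ and $T_{\rm{K}}$ of order $10^3$~K at this radius) are broadly consistent with the crude estimate of equation~(\ref{eq:deltat}).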
We further assume that the spin temperature, $T_{\rm{s}}$, characterizing the
21~cm transition is everywhere equal to the gas kinetic temperature,
$T_{\rm{K}}$. The EDGES measurement \citep{bowetal18} suggests that it is
indeed the case at $z\lesssim 20$, implying that by that time the
first stars had already created a significant UV (10.2--13.6~eV)
background for decoupling the spin temperature from that of the CMB and
bringing it close to the gas kinetic temperature via the
Wouthuysen--Field effect \citep{wouthuysen52,field58}. Furthermore, the
photoionization of the IGM by soft X-rays from a miniquasar will be
accompanied by the creation of Ly$\alpha$ photons that will further
strengthen the Wouthuysen--Field effect wherever the gas temperature
increases by $\gtrsim 10^3H(z)t_{\rm{S}}\gtrsim 100$~K on the BH growth
timescale \citep{chuetal06,chemir08}. Under these assumptions and for
the adopted cosmological parameters, the brightness temperature of the
21~cm line is expected to be
\begin{equation}
T_{\rm{b}}=29x_{\rm HI}\left(\frac{1+z}{11}\right)^{1/2}\left(1-\frac{1+z}{11}\frac{30}{T_{\rm{K}}}\right)\,{\rm
mK}.
\label{eq:tb}
\end{equation}
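Under these assumptions, equation~(\ref{eq:tb}) is a one-liner; the two example calls below contrast the unheated IGM with a heated region (the values are illustrative):
\begin{verbatim}
def T_b(x_HI, T_K, z):
    """21 cm brightness temperature in mK, assuming T_s = T_K."""
    return 29 * x_HI * ((1 + z) / 11.0)**0.5 * (1 - (1 + z) / 11.0 * 30 / T_K)

print(T_b(1.0, 5.4, 10))   # unheated IGM: ~ -132 mK (absorption)
print(T_b(0.99, 300, 10))  # heated, mildly ionized gas: ~ +26 mK (emission)
\end{verbatim}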
\section{Results}
\label{s:result}
Our model has the following parameters: the initial and final
redshifts ($z_{\rm{i}}$ and $z_{\rm{f}}$), the initial and final masses of the BH
($m_{\rm{i}}$ and $m_{\rm{f}}$, in solar masses), the Eddington ratio during active
accretion phases ($\dot{m}$), and the intragalactic absorption column
density and metallicity ($N_{\rm{H}}$ and $Z$, respectively). We now present
a summary of results obtained for various sets of the parameter
values. Most of the results presented below have been obtained for the
case of purely thermal accretion disk emission, with the spectral
shape as shown in the top panel of Fig.~\ref{fig:spectra}. We specify
explicitly whenever we also take the possible contribution of a
high-energy Comptonization component into account.
\subsection{Gas temperature and 21~cm brightness temperature radial profiles}
\label{s:profiles}
\begin{figure}
\centering
\includegraphics[width=\columnwidth,viewport=20 180 560 720]{profiles_nh_m4_z15_10.pdf}
\caption{{\it Top panel:} IGM temperature as a function of comoving
distance from the miniquasar for $z_{\rm{i}}=15$, $z_{\rm{f}}=10$, $m_{\rm{i}}=2\times
10^3$, $m_{\rm{f}}=10^4$, $\dot{m}=1$ and various parameters of intragalactic
absorption [$N_{\rm{H}}$~(cm$^{-2}$), $Z$]: ($10^{18}$, 0) -- solid,
($10^{20}$, 0) -- short-dashed, ($10^{21}$, 0) -- dotted,
($10^{22}$, 0) -- long-dashed and ($10^{21}$, 1) --
dash-dotted. {\it Middle panel:} hydrogen ionization fraction. {\it
Bottom panel:} 21~cm brightness temperature as a function of
angular distance from the miniquasar. These plots correspond to
$z_{\rm{f}}$.
}
\label{fig:profiles_nh_m1e4}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth,viewport=20 180 560 720]{profiles_nh_m5_z15_10.pdf}
\caption{As Fig.~\ref{fig:profiles_nh_m1e4}, but for $m_{\rm{i}}=2\times
10^4$ and $m_{\rm{f}}=10^5$.
}
\label{fig:profiles_nh_m1e5}
\end{figure}
Figure~\ref{fig:profiles_nh_m1e4} shows the IGM temperature and
hydrogen ionization fraction as functions of comoving distance from
the miniquasar and the brightness temperature of the resulting 21~cm
signal as a function of the angular distance in the plane of the sky
for $z_{\rm{i}}=15$, $z_{\rm{f}}=10$, $m_{\rm{i}}=2\times 10^3$, $m_{\rm{f}}=10^4$, $\dot{m}=1$ (the
corresponding accretion duty cycle $k_{\rm{duty}}=40$\%) and various
absorption characteristics: $Z=0$, $N_{\rm{H}}=10^{18}$, $10^{20}$,
$10^{21}$, $10^{22}$~cm$^{-2}$ and $Z=1$, $N_{\rm{H}}=10^{21}$. We see that
the absorption of soft X-rays within the host galaxy leads to less
efficient IGM heating if $N_{\rm{H}}\gtrsim 10^{20}$~cm$^{-2}$ and that the
presence of heavy elements ($Z=1$ vs. $Z=0$) proves to be of minor
importance. Therefore, we hereafter focus on the metal-free case,
unless specifically noted otherwise.
Figure~\ref{fig:profiles_nh_m1e5} is analogous to
Fig.~\ref{fig:profiles_nh_m1e4}, but the BH mass has been increased by
an order of magnitude from $m_{\rm{i}}=2\times 10^3$, $m_{\rm{f}}=10^4$ to
$m_{\rm{i}}=2\times 10^4$, $m_{\rm{f}}=10^5$. We see that the influence of
intragalactic absorption is similar to the previous case and that the
IGM heating zone has spread somewhat outwards.
\begin{figure}
\centering
\includegraphics[width=\columnwidth,viewport=20 180 560 720]{profiles_mass_z15_10.pdf}
\caption{Similar to Fig.~\ref{fig:profiles_nh_m1e4}, but for a fixed
absorption column ($N_{\rm{H}}=10^{20}$~cm$^{-2}$) and various BH masses
($m_{\rm{i}}$, $m_{\rm{f}}$): ($2\times 10^2$, $10^3$) -- dash-dotted magenta,
($2\times 10^3$, $10^4$) -- solid black, ($2\times 10^4$, $10^5$) --
dotted blue, ($2\times 10^5$, $10^6$) -- dashed red.
}
\label{fig:profiles_mass}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth,viewport=20 180 560 720]{profiles_regime_m5.pdf}
\caption{Similar to Fig.~\ref{fig:profiles_nh_m1e4}, but for fixed BH
mass ($m_{\rm{i}}=2\times 10^4$, $m_{\rm{f}}=10^5$) and absorption column
($N_{\rm{H}}=10^{20}$~cm$^{-2}$) and various scenarios of BH growth ($z_{\rm{i}}$,
$z_{\rm{f}}$, $\dot{m}$): (15, 10, 1) -- solid black, (20, 10, 1) -- dotted
blue, (20, 10, 0.5) -- dashed red.
}
\label{fig:profiles_regime}
\end{figure}
Figure~\ref{fig:profiles_mass} demonstrates the dependence of the
results on the BH mass. Here, we adopted $z_{\rm{i}}=15$, $z_{\rm{f}}=10$, $\dot{m}=1$
and $N_{\rm{H}}=10^{20}$~cm$^{-2}$, and sampled BH masses ($m_{\rm{i}}$, $m_{\rm{f}}$) from
($2\times 10^2$, $10^3$) to ($2\times 10^5$, $10^6$). We see that
although more massive BHs produce much stronger ionization very close
to the source, this has a fairly small effect on the resulting 21~cm
signal, because the innermost region of strong heating is
characterized by a nearly saturated, positive 21~cm brightness
temperature [because $T_{\rm{K}}\gg T_{\rm{CMB}}$, see eq.~(\ref{eq:tb})]. More
important from an observational point of view is what happens at
larger distances where the 21~cm signal changes from emission to
absorption, and we see that the effective size of this region first
noticeably increases on going from $m_{\rm{f}}=10^3$ to $m_{\rm{f}}=10^4$ and then
remains nearly the same for $m_{\rm{f}}=10^5$ and $m_{\rm{f}}=10^6$ (in fact,
this region is somewhat smaller for $m_{\rm{f}}=10^6$ than for $m_{\rm{f}}=10^5$
because of the smaller number of soft X-ray photons with $E\gtrsim
300$~eV, capable of propagating to large distances, in the former case
-- see Fig.~\ref{fig:spectra}). This behavior is broadly consistent
with the prediction made in \S\ref{s:estimates} that the 21~cm zones
around miniquasars should be largely determined by the total accretion
energy for BHs with $M\lesssim 10^4M_\odot$ and by the characteristic
mean free path of accretion disk photons for more massive BHs.
Figure~\ref{fig:profiles_regime} demonstrates the influence of a
particular history of BH growth on the results. Here, we fixed the
final redshift at $z_{\rm{f}}=10$, the initial and final BH masses at
$m_{\rm{i}}=2\times 10^4$ and $10^5$, respectively, and the absorption column
at $N_{\rm{H}}=10^{20}$~cm$^{-2}$, and considered three scenarios: (i) $z_{\rm{i}}=15$,
$\dot{m}=1$ (the duty cycle $k_{\rm{duty}}=40$\%), (ii) $z_{\rm{i}}=20$, $\dot{m}=1$
($k_{\rm{duty}}=27$\%) and (iii) $z_{\rm{i}}=20$, $\dot{m}=0.5$ ($k_{\rm{duty}}=55$\%). We
see that the differences in the corresponding $T_{\rm{K}}$ and $T_{\rm{b}}$ profiles
are small.
\begin{figure}
\centering
\includegraphics[width=\columnwidth,viewport=20 180 560 720]{profiles_simpl_z15_10.pdf}
\caption{Similar to Fig.~\ref{fig:profiles_nh_m1e4}, but for a fixed
absorption column ($N_{\rm{H}}=10^{20}$~cm$^{-2}$, $Z=0$), three sets of BH
masses ($m_{\rm{i}}=2\times 10^3$, $m_{\rm{f}}=10^4$ -- solid black, $m_{\rm{i}}=2\times
10^4$, $m_{\rm{f}}=10^5$ -- dotted blue, $m_{\rm{i}}=2\times 10^5$, $m_{\rm{f}}=10^6$ --
dashed red) and two spectral models: pure thermal disk emission
({\it diskbb} -- thick lines) and thermal disk emission with a
Comptonization tail ({\it simpl}*{\it diskbb} -- thin lines).
}
\label{fig:profiles_simpl}
\end{figure}
So far we have assumed that the incident radiation spectrum is that of
a multicolor accretion disk modified by line-of-sight absorption, as
shown in the top panel of Fig.~\ref{fig:spectra}. We now wish to
investigate the possible effect of an additional hard, power-law
spectral component that may arise due to the Comptonization of soft
photons from the BH accretion disk in its hot corona. To this end, we
carried out calculations for our {\it simpl}*{\it diskbb} spectral
models for $m_{\rm{f}}=10^4$, $10^5$ and $10^6$ and $N_{\rm{H}}=10^{20}$~cm$^{-2}$,
shown in the bottom panel of Fig.~\ref{fig:spectra}. The
resulting $T_{\rm{K}}$, $x_{\rm HII}$ and $T_{\rm{b}}$ radial profiles are compared in
Fig.~\ref{fig:profiles_simpl} with those computed without the hard
X-ray component. We see that the 21~cm zone is almost unaffected by
the hard spectral component for the least massive BH ($m_{\rm{f}}=10^4$),
somewhat broadens in the intermediate mass case ($m_{\rm{f}}=10^5$), and
becomes substantially (by a factor of $\sim 2$) larger for the
heaviest BH ($m_{\rm{f}}=10^6$). The last result is unsurprising, because the
corresponding X-ray spectrum (see Fig.~\ref{fig:spectra}) is
dominated by the power-law component already at $E\sim 300$~eV
(partially because of the adopted substantial line-of-sight absorption
of $10^{20}$~cm$^{-2}$).
\subsection{Characteristic size of the 21~cm zone}
\label{s:radii}
From the above comparison of the computed $T_{\rm{b}}$ radial profiles a
preliminary conclusion may be drawn that the spatial extent of the
21~cm signal associated with a high-redshift miniquasar will only
weakly depend on the properties of the latter. For a more quantitative
assessment, we define two characteristic angular sizes: $\theta_0$ --
the projected distance from the miniquasar at which the 21~cm signal
changes from emission to absorption, i.e. $T_{\rm{b}}(\theta_0)=0$, and
$\theta_{1/2}$ -- the radius at which the brightness temperature of the
absorption signal is half the ``background'' value (the 21~cm brightness
temperature outside of the miniquasar heating zone),
i.e. $T_{\rm{b}}(\theta_{1/2})=T_{\rm{b,bgr}}/2$. Under our assumptions that there is no
global IGM heating and that the 21~cm spin temperature is coupled to
the IGM kinetic temperature, $T_{\rm{b,bgr}}=-245$~mK and $-307$~mK at
$z_{\rm{f}}=15$ and $z_{\rm{f}}=10$, respectively.
\begin{figure}
\centering
\includegraphics[width=\columnwidth,viewport=20 180 560 720]{radii.pdf}
\caption{{\it Top panel:} Characteristic angular sizes (see text for
definitions) $\theta_0$ (empty squares connected by thin lines) and
$\theta_{1/2}$ (filled circles connected by thick lines) of the 21~cm
signal around a miniquasar as a function of the absorption column
for $z_{\rm{i}}=15$, $z_{\rm{f}}=10$, $\dot{m}=1$ and three sets of BH masses
($m_{\rm{i}},m_{\rm{f}}$): ($2\times 10^3$, $10^4$) -- solid black, ($2\times 10^4$,
$10^5$) -- dotted blue, and ($2\times 10^5$, $10^6$) -- dashed
red. {\it Bottom panel:} Similar, for $z_{\rm{i}}=20$ and $z_{\rm{f}}=15$.
}
\label{fig:radii}
\end{figure}
Figure~\ref{fig:radii} (top panel) shows $\theta_0$ and $\theta_{1/2}$ as
functions of the absorption column for $z_{\rm{i}}=15$, $z_{\rm{f}}=10$, $\dot{m}=1$
and three different BH masses, $m_{\rm{f}}=10^4$, $10^5$ and $10^6$. We see
that if the intragalactic absorption is not strong ($N_{\rm{H}}\lesssim
10^{20}$~cm$^{-2}$), $\theta_0$ and especially $\theta_{1/2}$ depend only
weakly on the BH mass and absorption column density. Specifically,
$\theta_0\sim 2.5$--5~arcmin, and $\theta_{1/2}\sim 5$--7~arcmin. If
$N_{\rm{H}}\gtrsim 10^{21}$~cm$^{-2}$, most of the miniquasar's soft X-ray
emission is absorbed within its host galaxy, which naturally leads to
a dramatic weakening of IGM heating and shrinkage of the 21~cm
zone. The bottom panel of Fig.~\ref{fig:radii} shows a similar
set of curves for the case of miniquasars operating at higher
redshifts, namely $z_{\rm{i}}=20$ and $z_{\rm{f}}=15$. In this case, there is a more
noticeable, albeit still weak dependence on the BH mass, namely (for
$N_{\rm{H}}\lesssim 10^{20}$~cm$^{-2}$) $\theta_0$ changes from $\sim 1.5$ to
$\sim 3$~arcmin as $m_{\rm{f}}$ increases from $10^4$ to $10^6$, whereas
$\theta_{1/2}$ changes from $\sim 2.5$ to $\sim 4.5$~arcmin in the same BH
mass range. Overall, the computed $\theta_0$ size of the heating zone is
in remarkably good agreement (within a factor of $\sim 2$) with our
rough prediction given by equation~(\ref{eq:path_arcmin}).
\begin{figure}
\centering
\includegraphics[width=\columnwidth,viewport=20 180 560 720]{radii_simpl.pdf}
\caption{As Fig.~\ref{fig:radii}, but for thermal disk emission
with a Comptonization tail ({\it simpl}*{\it diskbb}) instead of
pure thermal disk emission.
}
\label{fig:radii_simpl}
\end{figure}
Figure~\ref{fig:radii_simpl} demonstrates the impact of an
additional hard (Comptonization) spectral component on the extent of
the miniquasar 21~cm zone. By comparing these plots with those
pertaining to the case of pure multicolor disk emission
(Fig.~\ref{fig:radii}), we see that while $\theta_0$ and $\theta_{1/2}$ have
remained nearly unchanged for $m_{\rm{f}}\le 10^5$, these characteristic
radii have increased by a factor of $\sim 1.5$--2 for
$m_{\rm{f}}=10^6$. Therefore, the hard tail considerably changes the overall
picture for the most massive of the considered BHs ($m_{\rm{f}}=10^6$). This
again reflects the fact that, within the adopted model, the
Comptonized radiation starts to dominate over the thermal emission
already at photon energies $\sim 300$~eV.
\subsection{Spectrum and flux of the 21~cm signal}
\label{s:spectrum21}
We now proceed to discussing the spectral properties of the 21~cm
signal associated with high-redshift miniquasars. Based on the above
results we can expect such objects to be surrounded on the sky by
fairly well defined regions with an apparent size of several arcmin
within which $T_{\rm{b}}-T_{\rm{b,bgr}}\gtrsim 100$~mK, and it will be interesting to
search for such specific zones of 21~cm excess emission with future
radio interferometers.
In reality, the effective angular size of the 21~cm signal extraction
region around a candidate miniquasar will be determined by the
characteristics of a particular radio interferometer and by the
related noise and background levels (see the discussion in
\S\ref{s:summary} below), but ideally it should be of the order of the
$\theta_{1/2}$ radius defined above. We have therefore integrated the
surface brightness of the expected 21~cm excess emission (i.e. the
difference $T_{\rm{b}}-T_{\rm{b,bgr}}$) over the circle of radius $\theta_{1/2}$ around the
miniquasar.
Figure~\ref{fig:spectrum21} (top panel) shows the resulting spectra
for $z_{\rm{i}}=15$, $z_{\rm{f}}=10$, $m_{\rm{i}}=2\times 10^4$, $m_{\rm{f}}=10^5$, $\dot{m}=1$ and
various absorption columns. We see that the 21~cm flux density is
almost unaffected by intragalactic absorption if $N_{\rm{H}}\lesssim
10^{20}$~cm$^{-2}$, the signal weakens by a factor of $\sim 3$ for
$N_{\rm{H}}=10^{21}$~cm$^{-2}$ and nearly vanishes if
$N_{\rm{H}}=10^{22}$~cm$^{-2}$, as essentially no soft X-rays from the
miniquasar leak from the host galaxy into the IGM.
\begin{figure}
\centering
\includegraphics[width=\columnwidth,viewport=20 180 560 720]{spectrum21.pdf}
\caption{{\it Top panel:} Spectra of 21~cm excess emission (with
respect to the background level at the miniquasar's redshift)
integrated within $\theta_{1/2}$ for $z_{\rm{i}}=15$, $z_{\rm{f}}=10$, $\dot{m}=1$,
$m_{\rm{i}}=2\times 10^4$, $m_{\rm{f}}=10^5$ and various $N_{\rm{H}}$ columns
(cm$^{-2}$): $10^{18}$(solid), $10^{20}$ (dashed), $10^{21}$
(dotted), $10^{22}$ (dash-dotted, poorly visible near the lower
boundary of the plot). {\it Middle panel:} Similar, for a fixed
absorption column ($N_{\rm{H}}=10^{20}$~cm$^{-2}$) and various BH masses
($m_{\rm{i}}$, $m_{\rm{f}}$): ($2\times 10^2$, $10^3$) -- dash-dotted magenta,
($2\times 10^3$, $10^4$) -- solid black, ($2\times 10^4$, $10^5$) --
dotted blue, and ($2\times 10^5$, $10^6$) -- dashed red. {\it Bottom
panel:} Similar, for higher redshifts, $z_{\rm{i}}=20$,
$z_{\rm{f}}=15$.
}
\label{fig:spectrum21}
\end{figure}
The middle panel of Fig.~\ref{fig:spectrum21} demonstrates the
dependence of the 21~cm spectrum on the BH mass (for $z_{\rm{i}}=15$,
$z_{\rm{f}}=10$, $\dot{m}=1$ and $N_{\rm{H}}=10^{20}$~cm$^{-2}$). We see that the
signal increases by a factor of $\sim 3$ on going from $m_{\rm{f}}=10^3$ to
$10^4$ and then remains nearly the same (within a factor of $\sim
1.5$) for $m_{\rm{f}}=10^4$--$10^6$. The bottom panel of
Fig.~\ref{fig:spectrum21} shows the corresponding spectra for similar
miniquasars at higher redshifts: $z_{\rm{i}}=20$, $z_{\rm{f}}=15$. The picture is
qualitatively similar to the previous case, but the 21~cm excess flux
density is more sensitive to the BH mass at the higher redshift. We
also note that the signal is somewhat stronger for $m_{\rm{f}}=10^5$ than for
$m_{\rm{f}}=10^6$ in the $z_{\rm{f}}=10$ case, while the opposite is true for
$z_{\rm{f}}=15$. The reason is that this signal is accumulated from the
$\theta_{1/2}$ region whose dependence on the BH mass (see
Fig.~\ref{fig:radii}) is slightly different between $z_{\rm{f}}=10$ and
$z_{\rm{f}}=15$ due to a non-trivial interplay between the BH soft X-ray
spectral properties and the redshift-dependent density of the IGM. Most
importantly, however, Fig.~\ref{fig:spectrum21} demonstrates that the
expected 21~cm signal depends fairly weakly (within a factor of 3) on
the BH mass over the $10^4$--$10^6M_\odot$ range.
Figure~\ref{fig:spectrum21_simpl} demonstrates the impact of an
additional hard spectral component on the discussed 21~cm spectra. We
see that the hard tail leads to a dramatic increase of the expected
21~cm signal for our most massive ($m_{\rm{f}}=10^6$) BH, with this difference
being more pronounced at the lower redshift ($z_{\rm{f}}=10$
vs. $z_{\rm{f}}=15$). These tendencies are expected, since the reported
spectra were obtained by integration of $T_{\rm{b}}$ within the
characteristic radius $\theta_{1/2}$, which increases in the presence of a
hard spectral component, as was shown in \S\ref{s:radii}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth,viewport=20 180 560 720]{spectrum21_simpl.pdf}
\caption{Similar to Fig.~\ref{fig:spectrum21}, comparing the case of
pure thermal disk emission ({\it diskbb}, thick curves) with that of
a combination of disk emission and a Comptonization tail ({\it
simpl}*{\it diskbb}, thin curves), for three sets of BH masses
($m_{\rm{i}}$, $m_{\rm{f}}$): ($2\times 10^3$, $10^4$) -- solid black, ($2\times 10^4$,
$10^5$) -- dotted blue and ($2\times 10^5$, $10^6$) -- dashed red. Note the
logarithmic vertical scale.
}
\label{fig:spectrum21_simpl}
\end{figure}
As regards the absolute value of the expected 21~cm flux density, it
is useful to approximate it as follows:
\begin{eqnarray}
F_{\nu}&\approx& \frac{2k}{(21~{\rm
cm})^2(1+z)^2}\frac{3|T_{\rm{b,bgr}}|}{4}\pi\theta_{1/2}^2\nonumber\\
&\approx &
0.6\left(\frac{1+z_{\rm{f}}}{11}\right)^{-2}\frac{|T_{\rm{b,bgr}}|}{250~{\rm
mK}}\left(\frac{\theta_{1/2}}{5'}\right)^2~{\rm mJy},
\label{eq:fnu}
\end{eqnarray}
where we assumed that the average excess brightness temperature of the
21~cm signal within $\theta_{1/2}$ is $3|T_{\rm{b,bgr}}|/4$, which is approximately
the case (see the $T_{\rm{b}}$ radial profiles in
\S\ref{s:profiles}). Substituting the typical values derived from our
simulations for $z_{\rm{f}}=10$ ($T_{\rm{b,bgr}}=-307$~mK, $\theta_{1/2}=6'$) and $z_{\rm{f}}=15$
($T_{\rm{b,bgr}}=-245$~mK, $\theta_{1/2}=4'$) into the above expression, we find
$F_{\nu}\approx 1.1$ and $0.18$~mJy, respectively, in fairly good
agreement with the 21~cm spectra shown above.
We finally note that the simulated 21~cm spectra for the case of
purely thermal disk emission are characterized by FWHM of $\approx
0.01$ in terms of $\Delta\nu/\nu$.
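At the observed frequency of the redshifted 21~cm line,
$\nu_{\rm obs}=1420\,{\rm MHz}/(1+z)$, i.e. $\approx 129$ and $\approx
89$~MHz at $z=10$ and 15, this width corresponds to (a simple
conversion, for orientation)
\begin{equation}
\Delta\nu\approx 0.01\,\nu_{\rm obs}\approx 0.9\mbox{--}1.3~{\rm MHz}.
\end{equation}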
\section{Relation to X-ray observations}
\label{s:xray}
As was discussed in \S\ref{s:intro}, there is a hope that future X-ray
observatories such as {\it Lynx} will be able to find a significant
number of high-redshift miniquasar candidates. Provided that the
planned next-generation radio facilities such as SKA are also
available by that time, it should be possible to search for the
specific 21~cm signatures of X-ray selected miniquasars discussed in
this paper.
The first practical question then is: what are the limiting BH mass
and redshift for X-ray detection of miniquasars? In this study, we
have assumed that a miniquasar's spectrum is a combination of (i)
multicolor disk emission with the temperature expected from the
standard accretion disk theory and, plausibly, (ii) a hard, power-law
($\Gamma\approx 2$) tail associated with Comptonization of disk
emission in a hot corona, which extends to high energies (at least to
a few tens of keV). It is this additional hard component that an X-ray
telescope might be able to detect from a high-redshift miniquasar. In
the particular spectral model, {\it simpl}*{\it diskbb}, used in this
study, the observed X-ray flux in the 0.5--2~keV band (corresponding
to emission in a rest-frame band of $0.5(1+z)$--$2(1+z)$~keV), $F_{\rm X}$,
is proportional to the fraction of scattered photons. It turns out
that, virtually independently of the BH mass and accretion rate,
\begin{equation}
F_{\rm X}\approx 0.06\frac{f_{\rm sc}}{0.05}\frac{L}{4\piD_{\rm L}^2},
\label{eq:fx}
\end{equation}
where $L$ is the total luminosity of the miniquasar and $D_{\rm L}(z)$ is
its luminosity distance.
Assuming that $f_{\rm sc}=0.05$ (which is a reasonable value as discussed in
\S\ref{s:abs}) and that the X-ray telescope catches the miniquasar when
it is accreting at the critical rate ($\dot{m}=1$), so that $L=L_{\rm{Edd}}(m)$, we
find from equation~(\ref{eq:fx}) that for $m=10^4$, $F_{\rm X}=2.3\times
10^{-20}$ and $5.8\times 10^{-20}$~erg~cm$^{-2}$~s$^{-1}$ at $z=15$
and $z=10$, respectively, whereas for $m=10^5$, $F_{\rm X}=2.3\times
10^{-19}$ and $5.8\times 10^{-19}$~erg~cm$^{-2}$~s$^{-1}$ for the same
redshifts. These fluxes are well below the detection threshold of {\it
Chandra}, the most sensitive X-ray telescope so far. However, the
proposed {\it Lynx} mission is expected to reach a sensitivity of
$\sim 10^{-19}$~erg~cm$^{-2}$~s$^{-1}$ in its deep extragalactic
surveys and should thus be able to detect actively growing BHs of mass
$\gtrsim 5\times 10^4M_\odot$ at $z=15$ and $\gtrsim 2\times 10^4M_\odot$
at $z=10$. These mass limits are of course inversely proportional to
$\dot{m}$ and $f_{\rm sc}$.
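These numbers are easy to verify with a back-of-the-envelope estimate
(an approximate cross-check with rounded values, assuming
$D_{\rm L}(z=10)\approx 100$~Gpc~$\approx 3.1\times 10^{29}$~cm for standard
cosmology and $L=L_{\rm{Edd}}(10^4M_\odot)\approx 1.3\times
10^{42}$~erg~s$^{-1}$): equation~(\ref{eq:fx}) then gives
\begin{equation}
F_{\rm X}\approx 0.06\,\frac{1.3\times 10^{42}}{4\pi(3.1\times 10^{29})^2}
\approx 6\times 10^{-20}~{\rm erg~cm^{-2}~s^{-1}},
\end{equation}
consistent with the value quoted above for $m=10^4$ at $z=10$.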
\section{Discussion and summary}
\label{s:summary}
We have shown that an intermediate mass BH growing by radiatively
efficient accretion in the early Universe should leave a specific,
localized imprint on the 21~cm cosmological signal. Namely, a
miniquasar with the BH mass between $\sim 10^4$ and $\sim 10^6M_\odot$
at $z\sim 15$--10 will be surrounded by a region with a fairly well
defined boundary of several arcmin radius, within which the 21~cm
temperature quickly grows inwards from the background value
$T_{\rm{b,bgr}}\sim -250$~mK to $T_{\rm{b}}\sim 0$ (reaching the saturation value of
$\sim 30$~mK in the innermost region). The size of this region and the
flux density of the enclosed 21~cm signal are only weakly sensitive to
the BH mass in the range quoted above.
\subsection{Sensitivity to assumptions}
\label{s:assumptions}
The above result was obtained under certain assumptions and it is
important to discuss how realistic they are. Perhaps the most
important constituent of our model is the miniquasar's spectral energy
distribution, which we assumed to be that of multicolor disk blackbody
emission. As was discussed in \S\ref{s:spec}, the actual spectrum of the
accretion disk emission is likely to deviate significantly from this
simplistic model, but given the weak sensitivity of the size of
the 21~cm zone to the BH mass, such deviations are unlikely to have a
significant effect on our predictions. More important is the likely
presence of an additional, hard component in the miniquasar
spectrum. As we have demonstrated, its effect is small for relatively
low-mass BHs ($\sim 10^4$--$10^5M_\odot$) but becomes substantial (the
heating zone widens by a factor of $\sim 1.5$--2) for a $10^6M_\odot$ BH.
The next important issue is possible photoabsorption of the
miniquasar's soft X-ray emission within its host galaxy. It turns out
that the properties of the 21~cm zone remain nearly unchanged as long
as $N_{\rm{H}}\lesssim 10^{20}$~cm$^{-2}$ (regardless of the presence of
metals in the absorbing medium), but at $N_{\rm{H}}\sim 10^{21}$~cm$^{-2}$
this zone starts to shrink dramatically. One may argue that a powerful
miniquasar should be able to quickly photoionize
the interstellar medium within a substantial distance of itself and
thus effectively reduce $N_{\rm{H}}$ (e.g. \citealt{sazkha18}), but this
clearly needs further investigation. Furthermore, if miniquasars are
less powerful analogs of AGN, they may have a small-scale obscuring
torus of cold gas and dust. In that case there will be two opposite
cones of specific 21~cm signal around the miniquasar, i.e. the average
signal within the $\theta_{1/2}$ radius will decrease by a factor of
$\Omega/4\pi$, where $\Omega$ is the solid angle of the unobscured sky
as seen from the BH.
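For a rough illustration (an assumed geometry rather than a modelled
one): for two opposite unobscured cones with half-opening angle
$\theta_{\rm c}$, the unobscured fraction is
\begin{equation}
\frac{\Omega}{4\pi}=1-\cos\theta_{\rm c},
\end{equation}
so that, e.g., $\theta_{\rm c}=60^\circ$ would halve the average signal.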
Finally, we assumed that the Universe had not yet been globally heated
at the redshifts of interest ($z\sim 15$--10) and that the 21~cm spin
temperature was coupled to the gas temperature at these epochs. It is
only in this case that a large contrast in the 21~cm brightness
temperature will arise between the vicinity of the miniquasar, where
$T_{\rm{b}}\sim 0$, and the background, where $T_{\rm{b}}\sim -(200$--$300)$~mK (at
$z=15$--10). These assumed conditions are in good
agreement with the recent EDGES result \citep{bowetal18} for $z\sim
20$--15\footnote{Actually, EDGES measured an even lower sky-averaged
$T_{\rm{b}}\sim -500$~mK.} but appear to fail at $z\lesssim 14$, when the
global 21~cm temperature has been measured to be around zero. Of course, the
EDGES measurements must be verified with future observations, but many
authors (see \S\ref{s:intro}) have suggested that XRBs, miniquasars
and other types of X-ray sources can indeed significantly heat the
Universe by $z\sim 10$. In such a case, it will be extremely difficult
to discern the 21~cm imprint of an individual miniquasar against the
background at $z\sim 10$ but that should still be possible at
$z\sim 15$.
\subsection{Comparison with previous studies}
\label{s:previous}
The present study is not the first one addressing the potential impact
of high-redshift X-ray sources on the cosmological 21~cm signal. In
particular, a number of authors have focused on the expected 21~cm
signatures of individual quasars and miniquasars in the early Universe
\citep{chuetal06,thozar08,yajli14,ghaetal17,boletal18,vasetal18}. The
crucial novel aspect of our study is its focus on the (relatively
soft) thermal emission of the accretion disk around an
intermediate-mass black hole, which is expected (based on the rich
observational material on high Eddington ratio X-ray binaries and AGN)
to carry the bulk of the bolometric luminosity of the miniquasar but
has been usually ignored before. This leads to an important difference
for the predicted 21~cm signature, namely that it should be
concentrated within $\sim 5$~arcmin of the miniquasar due to the
relatively short mean free path of the extreme UV/soft X-ray photons
in the ambient IGM.
Furthermore, in contrast to some of the previous studies we have
assumed the 21~cm spin temperature at the considered epochs ($z\sim
15$--10) to be coupled (by the UV background from the first stars) to
the gas temperature throughout the IGM and that the latter had not yet
been heated significantly on average, so that the mean $T_{\rm{b}}\sim
-(200$--$300)$~mK, rather than $T_{\rm{b}}\approx 0$ as it would be in the
absence of Wouthuysen--Field coupling or in the presence of
significant global heating. This key assumption, as noted above (in
\S\ref{s:assumptions}), is partially motivated by the recent detection
of a strong 21~cm global absorption feature by EDGES. It is the
combination of the relative compactness of the heating zone and the
large negative global 21~cm brightness temperature that has led to our
conclusion that high-redshift miniquasars might be associated with
fairly strong ($\sim 0.2[(1+z)/16]^{-2}$~mJy) 21~cm signatures.
\subsection{Observational strategy}
\label{s:strategy}
A blind sky search for weak 21~cm signals from individual
high-redshift miniquasars might not be feasible in the near
future. Therefore, we propose to look for such signals specifically
from miniquasar candidates that can be found with the proposed {\it
Lynx} X-ray mission. As discussed in \S\ref{s:xray}, the planned
{\it Lynx} sensitivity of $\sim 10^{-19}$~erg~cm$^{-2}$~s$^{-1}$
should allow it to detect rapidly growing BHs with masses as low as a
few $10^4M_\odot$ out to $z\sim 15$, provided that a significant fraction
of the energy released by accretion goes into Comptonized, hard X-ray
radiation.
However, selection of such candidates is unlikely to be an easy
task. Indeed, {\it Lynx} will provide only crude X-ray hardness
information for them, not sufficient to distinguish them from other
types of sources. Moreover, high-redshift miniquasars will probably be
just a small minority among the tens of thousands of sources to be
detected in the proposed {\it Lynx} (400~arcmin$^{2}$) ultradeep
survey \citep{lynx18}. Specifically, \cite{benetal18} predict that
between several dozen and a few thousand growing massive BHs (roughly
in the $10^4$--$10^6M_\odot$ range of interest to us) could be found at
$z\sim 5$--12, with this large uncertainty reflecting our poor
understanding of how BHs form and grow in the early Universe. In the
present study, we have focused on somewhat earlier epochs, $z\sim
15$--10, and only between a few and a few hundred (i.e. less than, or
much less than, 1 object per arcmin$^2$) such high-redshift miniquasars are
expected to be found by {\it Lynx} \citep{benetal18}.
Fortunately, there are bright prospects for the synergy between the
proposed {\it Lynx} survey and the optical/IR ultradeep surveys by
next-generation telescopes such as {\it JWST} \citep{lynx18}, so that
the majority of the {\it Lynx} sources (such as AGN and galaxies) will
probably have reliable optical/IR counterparts. High-redshift
miniquasars, because of their expected optical/IR faintness (see
\S\ref{s:intro}), will thus be hidden among the relatively small
sample of very faint ($F_{\rm X}\gtrsim 10^{-19}$~erg~cm$^{-2}$~s$^{-1}$)
X-ray sources {\it without} an optical counterpart. A reasonable
approach would then be to regard all such sources as candidate
high-redshift miniquasars and search for specific 21-cm signatures
around their positions provided by {\it Lynx}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth,viewport=20 180 560 720]{profiles_gauss.pdf}
\caption{Radial profiles of the 21~cm images of miniquasars after
convolution with a 2-dimensional Gaussian with $\sigma=5'$ (left)
and $10'$ (right). The upper panels are for $z_{\rm{i}}=15$ and $z_{\rm{f}}=10$,
and the lower ones for $z_{\rm{i}}=20$ and $z_{\rm{f}}=15$. The different curves
correspond to different BH masses ($m_{\rm{i}}$, $m_{\rm{f}}$): ($2\times 10^3$,
$10^4$) -- solid black, ($2\times 10^4$, $10^5$) -- dotted blue,
($2\times 10^5$, $10^6$) -- dashed red. These results are for
multicolor disk emission and $N_{\rm{H}}=10^{20}$~cm$^{-2}$. Note the
logarithmic horizontal scale.
}
\label{fig:profiles_gauss}
\end{figure}
According to our estimates, high-redshift miniquasars are expected to
produce 21~cm signals with an amplitude $\sim 100$~mK on few-arcmin
scales, with the characteristic spectral width $\Delta\nu/\nu\sim
0.01$ corresponding to $\Delta\nu\sim 1$~MHz at $\nu\sim 100$~MHz for
$z=10$--15. How do these numbers compare with the expected
characteristics of future cosmological 21~cm surveys with their
specific noise levels and angular resolution?
Figure~\ref{fig:profiles_gauss} shows the result of convolution of our
predicted 21~cm images of miniquasars (for $m_{\rm{f}}=10^4$--$10^6$ and
$z_{\rm{f}}=10$ and 15) with two-dimensional Gaussians with $\sigma=5'$ and
10$'$. We see that miniquasars are expected to produce $\sim
140$--180~mK and $\sim 50$--80~mK positive peaks (with respect to the
large-scale background) on images with 5-arcmin angular resolution for
$z\sim 10$ and $z\sim 15$, respectively.
The low-frequency component of the SKA experiment, SKA-low, is planned
to cover a broad frequency range extending down to $\sim 50$~MHz
(allowing one to probe the early Universe out to $z\sim 25$) with high
spectral resolution ($\sim 1$~kHz) and a very large collecting area
$\sim 1$~km$^2$ \citep{meletal13}. For the first phase of the
experiment (the so-called SKA1-Low), the noise level is expected to be
$\sim 3$~mK ($\sim 10$~mK) for $z=10$ ($z=15$) images with $5'$
angular resolution (with the frequency bandwidth matched to the
angular resolution) accumulated over an integration time of $\sim
1000$~hours (see fig.~2 in \citealt{meletal15}). Therefore, the $\sim
100$~mK signal from a high-redshift miniquasar (see
Fig.~\ref{fig:profiles_gauss}) should be reliably detectable with such
long (but feasible) observations by SKA1-Low (and even more so by the
fully constructed SKA-low).
In reality, the biggest problem will likely be separating the 21~cm
signal associated with high-redshift miniquasars from astrophysical
foregrounds of much higher amplitude, such as Galactic synchrotron
radiation and cumulative emission from unresolved extragalactic
sources; furthermore, the large-scale structure of the early Universe
will cause additional fluctuations of the 21~cm brightness temperature
on the arcmin scales relevant to the problem in hand (see
\citealt{meletal13} for a review). Finally, as with any radio
interferometer, SKA will not be capable of measuring absolute source
fluxes due to the zero-spacing problem. A thorough consideration of
these non-trivial observational issues is beyond the scope of this
proof-of-concept study, but a number of recent studies
\citep{wyietal15,ghaetal17} address these problems in the context of
SKA and suggest that there are efficient methods of evaluation and
subtraction of foregrounds that might enable detection of the $\sim
100$~mK peaks associated with high-redshift miniquasars on the SKA
images.
We finally emphasize again that such a search will be greatly
facilitated by the availability of accurate celestial coordinates of
candidate miniquasars from {\it Lynx}, even though the X-ray data will
not provide their redshifts. In practice, the search might consist of
browsing SKA-low images constructed with $\sim 5'$ angular resolution
(and cleaned as carefully as possible from the foregrounds), one
redshift slice after another over a range of $z\sim 10$--20. The
detection of a positive $\sim 100$~mK peak (see
Fig.~\ref{fig:profiles_gauss}) centered on the {\it Lynx} position
will strongly indicate that the object is indeed an intermediate-mass
BH growing to become a supermassive BH. Moreover, its redshift can
thus be measured to within $\Delta z/(1+z)\lesssim 0.01$.
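The latter follows directly from the measured line width (a simple
consequence of the definitions, spelled out here for clarity): since
$1+z=\nu_{21}/\nu_{\rm obs}$ with $\nu_{21}\approx 1420$~MHz,
\begin{equation}
\frac{\Delta z}{1+z}=\frac{\Delta\nu_{\rm obs}}{\nu_{\rm obs}}\approx 0.01,
\end{equation}
where the right-hand side is the characteristic spectral width found in
\S\ref{s:spectrum21}.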
\section*{Acknowledgments}
The authors thank the referee for useful suggestions. The research was
supported by the Russian Science Foundation (grant 14-12-01315).
The study of the atomic properties of heavy actinides has attracted growing interest~\cite{Es1975,Es2009,Raeder2022,Mustapha2022,Fm1,Fm2,ErFm,Raeder}.
Transition frequencies and hyperfine structure (HFS) are being measured. HFS measurements are motivated by the prospect of obtaining data on the moments of heavy nuclei, which would advance our knowledge of the nuclear structure of superheavy nuclei and benefit the search for the hypothetical island of stability.
In light of this, we focus on a theoretical study of the hyperfine structure of the heavy actinides californium (Cf, $Z=98$) and einsteinium (Es, $Z=99$). Combining the calculations with the measurements allows the extraction of the nuclear magnetic moment $\mu$ and electric quadrupole moment $Q$ of the studied isotopes.
HFS constants of some states of the odd isotopes of Cf ($^{249}$Cf, $^{251}$Cf, $^{253}$Cf) were recently measured, and the nuclear moments $\mu$ and $Q$ were extracted using our calculations~\cite{Raeder}. This work presents a detailed account of these calculations as well as similar calculations for Es.
In the case of Es, there are no theoretical results currently available, whereas several experimental papers have been published. Using different empirical techniques, Refs.~\cite{Es1975,Es2009,Raeder2022} studied the HFS of Es for three isotopes with non-zero nuclear spins, $ ^{253,254,255}$Es.
Heavy actinides like Cf and Es are atoms with an open $5f$ subshell. The number of electrons in open shells is twelve for Cf and thirteen for Es (including the $7s$ electrons). This presents a challenge for the calculations. We use the configuration interaction with perturbation theory (CIPT)~\cite{cipt} method, which has been developed for such systems. To check the applicability of the method and the expected accuracy of the results, we performed similar calculations for the lanthanides dysprosium (Dy, $Z=66$) and holmium (Ho, $Z=67$), whose electronic structures are similar to those of Cf and Es, respectively.
Both Dy and Ho have been extensively studied experimentally and theoretically (see, e.g. \cite{Dy1970,NIST,Dy1967,Dy1974,DzubaDy,Cheng85,Ho,hfs_Ho,DzubaHo}).
Here we compare our results to experimental data, Refs.~\cite{Dy1970,NIST} for Dy and Refs.~\cite{NIST,Ho,hfs_Ho} for Ho, to check the accuracy of the method we use.
\section{Method of calculation}
\subsection{Calculation of energy levels}
As mentioned in the introduction, the Dy and Cf atoms have twelve valence electrons and the Ho and Es atoms have thirteen valence electrons. It is well known that as the number of valence electrons increases, the size of the CI matrix grows dramatically, making standard CI calculations practically impossible for such systems. In this work we use the CIPT method~\cite{cipt}, which has been developed especially for such systems. It reduces the size of the CI matrix by neglecting the off-diagonal matrix elements between high-energy states and treating the contribution of these states as perturbation-theory-like corrections to the matrix elements between low-energy states. The size of the resulting CI matrix is equal to the number of low-energy states.
The CI Hamiltonian can be written as follows
\begin{equation}
\hat{H}^{\mathrm{CI}}=\sum_{i=1}^{N_{v}}\hat{H}^{\mathrm{HF}}_{i}+\sum_{i<j}^{N_{v}}\frac{e^{2}}{\left|r_{i}-r_{j}\right|},
\end{equation}
where $i$ and $j$ enumerate valence electrons, $N_{v}$ is the total number of valence electrons, $e$ is the electron charge, and $r_i$ is the position of electron $i$. $\hat{H}^{\mathrm{HF}}_{i}$ is the single-electron Hartree-Fock (HF) Hamiltonian, which has the form
\begin{equation} \label{e:RHF}
\hat{H}^{\mathrm{HF}}_{i}= c{ \bm{\alpha}}_i\cdot {\bf \hat p}_i+(\beta -1)mc^2+V_{\rm nuc}({r_i})+V^{N-1}({r_i}).
\end{equation}
Here $c$ is the speed of light, $\bm{\alpha}_i$ and $\beta$ are the Dirac matrices, $\bf \hat p_i$ is the electron momentum, $m$ is the electron mass, $V_{\rm nuc}(r_i)$ is the nuclear potential obtained by integrating the Fermi distribution of the nuclear charge density, and $V^{N-1}(r_i)$ is the self-consistent HF potential obtained for the configuration with one $7s$ (or $6s$) electron removed from the ground state configuration of the considered atom. This corresponds to the $V^{N-1}$ approximation~\cite{VN1,VN2}, which is convenient for generating a single-electron basis. Single-electron basis states are calculated in the frozen $V^{N-1}$ potential, so that they correspond to the atom with one electron excited from the ground state.
External electron wave functions are expressed in terms of coefficients of expansion over single-determinant basis state functions
\begin{eqnarray}
&&\Psi (r_1, \dots ,r_M)= \label{e:Psi} \\
&&\sum_{i=1}^{N_1} X_i \Phi_i (r_1, \dots ,r_M)+ \sum_{j=1}^{N_2} Y_j \Phi_j(r_1, \dots ,r_M). \nonumber
\end{eqnarray}
Here $M$ is the number of valence electrons, $N_1$ is the number of low-energy basis states, $N_2$ is the number of high-energy basis states.
Then the CI matrix equation can be written in a block form
\begin{equation} \label{e:blocks}
\left( \begin{array}{cc} A & B \\ C & D \end{array} \right) \left(\begin{array}{c} X \\ Y \end{array} \right) = E_a \left(\begin{array}{c} X \\ Y \end{array} \right).
\end{equation}
Here block $A$ corresponds to low-energy states, block $D$ corresponds to high-energy states, and~blocks $B$ and $C$ correspond to cross terms.
Note that since the total CI matrix is symmetric, we have $C = B'$, i.e.,~$c_{ij} = b_{ji}$.
Vectors $X$ and $Y$ contain the coefficients of expansion of the valence wave function over the single-determinant many-electron basis functions (see Eq.~\ref{e:Psi}).
Finding $Y$ from the second equation of (\ref{e:blocks}) leads to
\begin{equation}\label{e:Y}
Y=(E_aI-D)^{-1}CX.
\end{equation}
Substituting $Y$ to the first equation of (\ref{e:blocks}) leads to
\begin{equation}\label{e:CIPT}
\left[A + B(E_aI-D)^{-1}C\right] X = E_a X,
\end{equation}
where $I$ is the unit matrix.
Then, following Ref.~\cite{cipt} we neglect off-diagonal matrix elements in block $D$. This leads to a very simple structure of the $(E_aI-D)^{-1}$ matrix, $(E_aI-D)^{-1}_{ik} = \delta_{ik}/(E_a - E_k)$, where $E_k = \langle k|H^{\rm CI} |k \rangle$.
Matrix elements of the effective CI matrix (\ref{e:CIPT}) have the form
\begin{equation}
\langle i|\hat H^\mathrm{eff}|j\rangle =\langle i|\hat H^\mathrm{CI}|j\rangle + \sum_{k}\frac{\langle i|\hat H^\mathrm{CI}|k\rangle\langle k|\hat H^\mathrm{CI}|j\rangle}{E_a-E_{k}}.
\label{e:CIPT2}
\end{equation}
We see that the standard CI matrix elements between low-energy states are corrected by an expression which is very similar to the second-order perturbation theory correction to the energy. This justifies the name of the method. To calculate this second-order correction we need to know the energy of the state, $E_a$, which must come as the result of solving the equation, i.e. it is not known in advance. Therefore, iterations are needed. We start from any reasonable guess for the energy; for example, it may come from the solution of the equation with the second-order correction neglected. Note that the energy-independent numerators of the second-order correction can be calculated only once, on the first iteration, kept on disk and reused on every subsequent iteration. This means that only the first iteration takes significant time, while all subsequent iterations are very fast. As a rule, fewer than ten iterations are needed for full convergence. As a result, we obtain the energy of the state $E_a$ and the expansion coefficients $X$ and $Y$.
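Schematically (our compact notation, equivalent to solving (\ref{e:CIPT}) repeatedly), the iteration scheme reads
\begin{equation}
\left[A + B\left(E_a^{(n)}I-D\right)^{-1}C\right] X^{(n+1)} = E_a^{(n+1)} X^{(n+1)}, \quad n=0,1,2,\dots,
\end{equation}
where, e.g., $E_a^{(0)}$ is taken from the diagonalization of block $A$ alone, and the cycle is repeated until $E_a^{(n)}$ converges.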
\subsection{Basis states}
To solve the CI equations we need many-electron basis states which are constructed from single-electron states.
For single-electron basis states we use the B-spline technique~\cite{Johnson_Bspline,Johnson_Bspline2}.
These states are defined as linear combinations of B-splines that are eigenstates of the HF Hamiltonian (\ref{e:RHF}).
Forty B-splines of order nine are calculated within a box of radius $R_{\rm max}$ = 40$a_B$ (where $a_B$ is the Bohr radius) for orbital angular momenta 0~$\leq$~\textit{l}~$\leq$~4. Fourteen states above the core in each partial wave are used. It has been found that with these values of $l_{\rm max}$, $R_{\rm max}$ and the number of B-splines the basis is adequately saturated for the low-lying states. The many-electron states are found by making all possible single and double electron excitations from a few reference configurations. One, two or three configurations, corresponding to the low-lying states of an atom, are considered as reference configurations. One configuration of the same parity is considered at a time.
For each configuration, all possible values of the projection of the total angular momentum $j$ of the single-electron states are considered and many-electron states with fixed values of total many-electron angular momentum $J$ and its projection $M$ are constructed. Usually, we take $M=J$.
\subsection{Calculation of hyperfine structure}
In this section, we mostly follow our previous work on hafnium and rutherfordium~\cite{HfRf}.
To calculate HFS,
we use the time-dependent Hartree-Fock (TDHF) method, which is equivalent to the well-known random-phase approximation (RPA).
The RPA equations are the following:
\begin{equation}\label{e:RPA}
\left(\hat H^{\rm HF}-\epsilon_c\right)\delta\psi_c=-\left(\hat f+\delta V^{f}_{\rm core}\right)\psi_c,
\end{equation}
where $\hat f$ is the operator of the external field (the nuclear magnetic dipole or electric quadrupole field).
Index $c$ in (\ref{e:RPA}) enumerates states in the core, $\psi_c$ is the single-electron wave function of the core state $c$, $\delta\psi_c$ is the correction to this wave function caused by the external field, and $\delta V^{f}_{\rm core}$ is the correction to the self-consistent HF potential caused by the change of all core states.
Eqs.~(\ref{e:RPA}) are solved self-consistently for all states in the core. As a result, an effective operator of the interaction of valence electrons with the external field is constructed as $\hat f + \delta V^{f}_{\rm core}$. The energy shift of a many-electron state $a$ is given by
\begin{equation} \label{e:de}
\delta \epsilon_a = \langle a | \sum_{i=1}^M \left(\hat f+\delta V^f_{\rm core} \right)_i | a\rangle.
\end{equation}
Here $M$ is the number of valence electrons.
When the wave function for the valence electrons comes as a solution of Eq.~(\ref{e:CIPT}), Eq.~(\ref{e:de}) is reduced to
\begin{equation}\label{e:mex}
\delta \epsilon_a = \sum_{ij} x_i x_j \langle \Phi_i|\hat H^{\rm hfs}|\Phi_j \rangle,
\end{equation}
where $\hat H^{\rm hfs} = \sum_{i=1}^M (\hat f+\delta V^f_{\rm core})_i$.
For better accuracy of the results, the full expansion (\ref{e:Psi}) can be used. Then it is convenient to introduce a new vector $Z$, which contains both $X$ and $Y$, $Z \equiv \{X,Y\}$. Note that the solution of (\ref{e:CIPT}) is normalized by the condition $\sum_i x_i^2=1$. The normalization condition for the total wave function (\ref{e:Psi}) is different, $\sum_i x_i^2+\sum_j y_j^2 \equiv \sum_i z_i^2=1$. Therefore, when $X$ is found from (\ref{e:CIPT}) and $Y$ is found from (\ref{e:Y}), both vectors should be renormalized. Then the HFS matrix element is given by an expression which is similar to (\ref{e:mex}) but contains many more terms,
\begin{equation}\label{e:mez}
\delta \epsilon_a = \sum_{ij} z_i z_j \langle \Phi_i|\hat H^{\rm hfs}|\Phi_j \rangle.
\end{equation}
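Explicitly, the renormalization mentioned above amounts to (written out here for clarity)
\begin{equation}
z_i \rightarrow \frac{z_i}{\left(\sum_k z_k^2\right)^{1/2}},
\end{equation}
which restores the condition $\sum_i z_i^2=1$ for the full expansion (\ref{e:Psi}).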
Energy shift (\ref{e:de}) is used to calculate HFS constants $A$ and $B$ using textbook formulas
\begin{equation}
A_a = \frac{g_I \delta \epsilon_a^{(A)}}{\sqrt{J_a(J_a+1)(2J_a+1)}},
\label{e:Ahfs}
\end{equation}
and
\begin{equation}
B_a = -2Q \delta \epsilon_a^{(B)}\sqrt{\frac{J_a(2J_a-1)}{(2J_a+3)(2J_a+1)(J_a+1)}}.
\label{e:Bhfs}
\end{equation}
Here $\delta \epsilon_a^{(A)}$ is the energy shift (\ref{e:de}) caused by the interaction of atomic electrons with the nuclear magnetic moment $\mu$, $g_I=\mu/I$, $I$ is nuclear spin; $\delta \epsilon_a^{(B)}$ is the energy shift (\ref{e:de}) caused by the interaction of atomic electrons with the nuclear electric quadrupole moment $Q$ ($Q$ in (\ref{e:Bhfs}) is measured in barns).
\section{Energy levels and HFS of Dysprosium and Holmium}
\begin{table}
\caption{\label{t:Er}
Excitation energies ($E$, cm$^{-1}$) and $g$-factors for some low-lying states of the Dy and Ho atoms.}
\begin{ruledtabular}
\begin{tabular}{c cc cc cc }
&&&
\multicolumn{2}{c}{This work } &
\multicolumn{2}{c}{NIST \cite{NIST}}\\
\cline{4-5}
\cline{6-7}
\multicolumn{1}{c}{Conf.}&
\multicolumn{1}{c}{Term}&
\multicolumn{1}{c}{J}&
\multicolumn{1}{c}{$ E $}&
\multicolumn{1}{c}{$ g $}&
\multicolumn{1}{c}{$ E $}&
\multicolumn{1}{c}{$ g $}\\
\hline
\multicolumn{7}{c}{\textbf{Dy}}\\
$4f^{10}6s^2$&$ ^{5}$I &8& 0.000 &1.242& 0.000 & 1.2416\\
$4f^{10}6s^2$&&7&3933& 1.175& 4134.2 & 1.1735\\
$4f^{10}6s^2$&& 6 &7179& 1.073& 7050.6 & 1.0716\\
$4f^{9}5d6s^2$& $ ^{7} $H$ ^{o} $&8 &7818&1.347 & 7565.610& 1.35246 \\
$4f^{9}5d6s^2$& &7 &9474 & 1.353 & 8519.210&1.336\\
$4f^{10}6s^2$&$ ^{5}$I&5 &9589 & 0.909& 9211.6 & 0.911\\
$4f^{9}5d6s^2$& $ ^{7} $I$ ^{o} $&9 &10048& 1.316 & 9990.974 & 1.32\\
$4f^{9}5d6s^2$& $ ^{7} $H$ ^{o} $&6 &11052&1.417 & 10088.802 & 1.36\\
$4f^{10}6s^2$&$ ^{5}$I&4&11299 & 0.613& 10925.3 & 0.618\\
\hline
\multicolumn{7}{c}{\textbf{Ho}}\\
$4f^{11}6s^2$& $ ^{4} $I$ ^{o} $& 15/2 &0.00&1.196&0.00&1.1951\\
$4f^{11}6s^2$& & 13/2 &5205 & 1.107& 5419.7 &$ - $\\
$4f^{10}5d6s^2$& (8,$\frac{3}{2}$) & 17/2&8344&1.262& 8378.91&$ - $\\
$4f^{10}5d6s^2$& &15/2 &8385 &1.280 &8427.11&$ - $\\
$4f^{11}6s^2$& $ ^{4} $I$ ^{o} $& 11/2 &8501 &0.979& 8605.2 & 1.012 \\
$4f^{10}5d6s^2$& (8,$\frac{3}{2}$)&13/2 & 8989&1.336 &9147.08&$ - $\\
$4f^{10}5d6s^2$& &19/2 &8952 & 1.231 &9741.50&$ - $\\
$4f^{11}6s^2$& $ ^{4} $I$ ^{o} $& 9/2 &10550 & 0.780& 10695.8 &0.866\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table*}
\caption{\label{t:HFS}
Hyperfine structure constants $A$ and $B$ (in MHz) for low-lying states of Dy and Ho. Nuclear spin $I$, nuclear magnetic moment $\mu(\mu_N)$, and
nuclear electric quadrupole moment $Q(b)$ values for the isotopes $^{161,163}$Dy and $^{165}$Ho are taken from Ref.~\cite{Stone1};
$g_I=\mu/I$. The last column gives references to the experimental data for $A$ and $B$.}
\begin{ruledtabular}
\begin{tabular}{cc cc c cc c c}
\multicolumn{1}{c}{Isotope}&&&&
\multicolumn{2}{c}{ This work}&
\multicolumn{3}{c}{Experimental results}\\
\cline{5-6}
\cline{7-9}
\multicolumn{1}{c}{Nuclear Parameters}&
\multicolumn{1}{c}{Conf.}&
\multicolumn{1}{c}{Term}&
\multicolumn{1}{c}{$ J $}&
\multicolumn{1}{c}{$A$}&
\multicolumn{1}{c}{$B$}&
\multicolumn{1}{c}{$A$}&
\multicolumn{1}{c}{$B$}&
\multicolumn{1}{c}{Ref.}\\
\hline
\multicolumn{1}{c} {\bf $ ^{161} $Dy} \\
$\mu$= -0.480, I= 5/2, Q= 2.51&$4f^{10}6s^2$&$ ^{5}$I &8 &-113&1127 & -116.231 &1091.577 &\cite{Dy1970}\\
&$4f^{10}6s^2$&&7& -125&1057&-126.787 &1009.742&\cite{Dy1970}\\
& $4f^{10}6s^2$&& 6 &-140 & 991 &-139.635 &960.889&\cite{Dy1970}\\
&$4f^{9}5d6s^2$& $ ^{7} $H$ ^{o} $&8 &-88&2256 & -&-&- \\
&$4f^{9}5d6s^2$& &7 &-104&2397 & -&-&-\\
& $4f^{10}6s^2$&$ ^{5}$I&5 &-166 &928 &-161.971 &894.027&\cite{Dy1970}\\
&$4f^{9}5d6s^2$& $ ^{7} $I$ ^{o} $&9 &-80&2663 &- &-&-\\
&$4f^{9}5d6s^2$& $ ^{7} $H$ ^{o} $&6 &-122&2901 &- &-&-\\
& $4f^{10}6s^2$&$ ^{5}$I&4 &-216&997&-205.340 &961.156&\cite{Dy1970}\\
\hline
\multicolumn{1}{c} {\bf $ ^{163} $Dy} \\
$\mu$= 0.673, I= 5/2, Q= 2.65 &$4f^{10}6s^2$&$ ^{5}$I &8 &158&1190 & 162.754 &1152.869&\cite{Dy1970}\\
&$4f^{10}6s^2$&&7& 176&1116 &177.535 &1066.430&\cite{Dy1970}\\
& $4f^{10}6s^2$&& 6 & 196& 1046 & -&-&-\\
&$4f^{9}5d6s^2$& $ ^{7} $H$ ^{o} $&8 &123& 2381& - &-&- \\
&$4f^{9}5d6s^2$& &7 &146&2531 & - &-&-\\
& $4f^{10}6s^2$&$ ^{5}$I&5 &233 &979 &- &-&-\\
&$4f^{9}5d6s^2$& $ ^{7} $I$ ^{o} $&9 &112& 2812& - &-&-\\
&$4f^{9}5d6s^2$& $ ^{7} $H$ ^{o} $&6 &170&3063 & -&-&-\\
& $4f^{10}6s^2$&$ ^{5}$I&4 &303&1053 &- &-&-\\
\hline
\multicolumn{1}{c} {\bf $ ^{165} $Ho}\\
$\mu$= 4.17, I= 7/2, Q= 3.58 & $4f^{11}6s^2$& $ ^{4} $I$ ^{o} $& 15/2 &787&-1943&800.583&-1668.089&\cite{Ho}\\
&$4f^{11}6s^2$& & 13/2 & 939&-1668&937.209&-1438.065&\cite{Ho}\\
& $4f^{10}5d6s^2$& (8,$\frac{3}{2}$) & 17/2&666& 1085&776.4(4.5)&608(300) &\cite{hfs_Ho}\\
&$4f^{10}5d6s^2$& &15/2 &763 & 1127&783.0(4.5)&801(300)&\cite{hfs_Ho}\\
&$4f^{11}6s^2$& $ ^{4} $I$ ^{o} $& 11/2 &1061&-1315 &1035.140 &-1052.556&\cite{Ho} \\
&$4f^{10}5d6s^2$& (8,$\frac{3}{2}$)&13/2& 879 &1829&916.6(0.5)&2668(7)&\cite{hfs_Ho}\\
&$4f^{10}5d6s^2$& &19/2 &617 &1650 &745.1(1.4)&1747(78) &\cite{hfs_Ho}\\
&$4f^{11}6s^2$& $ ^{4} $I$ ^{o} $&9/2 &1279&-1174 &1137.700&-494.482&\cite{Ho}\\
\end{tabular}
\end{ruledtabular}
\end{table*}
To test the accuracy of the method, we start by calculating the energy levels of some low-lying states of Dy and Ho. The results are shown in Table~\ref{t:Er}.
As can be seen, our results are consistent with the experimental values compiled in Ref.~\cite{NIST} for the respective atomic systems. The difference between theoretical calculations and measurements is within a few hundred cm$^{-1}$.
Calculated and experimental Land\'{e} $g$-factors are also presented. Comparing them with the non-relativistic expression is helpful for assigning state labels:
\begin{equation}\label{e:g}
g_{NR} = 1 + \frac{J(J+1)-L(L+1)+S(S+1)}{2J(J+1)}.
\end{equation}
The total orbital momentum $L$ and total spin $S$ in (\ref{e:g}) cannot come from relativistic calculations. Instead, we choose their values from the condition that formula (\ref{e:g}) gives values very close to the calculated $g$-factors.
This allows us to link the state to the non-relativistic notation $^{2S+1}L_J$, where $J$ is the total angular momentum ({\bf J = L + S}).
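For example, for the ground state of Dy, $^{5}$I$_{8}$ ($L=6$, $S=2$, $J=8$), formula (\ref{e:g}) gives
\begin{equation}
g_{NR} = 1 + \frac{72-42+6}{2\times 72} = 1.25,
\end{equation}
close to our calculated value 1.242 and the experimental value 1.2416 (see Table~\ref{t:Er}).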
Good agreement is also observed between the current calculations and the experimental $g$-factors of Dy and Ho whenever experimental data are available; this is essential for identifying the states correctly. An exception is the $4f^{11}6s^2\ ^{4}\rm I^{o}_{9/2}$ state of Ho, where theory differs significantly from experiment. According to the NIST database~\cite{NIST} of the Ho spectrum, there are multiple states with the same parity and total angular momentum $J$, separated only by small energy intervals and dominated by different electron configurations. Due to this strong mixing, the calculation of the $g$-factor becomes unstable.
The hyperfine structures of the ground states and some low-lying states of Dy and Ho have also been calculated.
The Dy atom has two stable isotopes with non-zero nuclear spin, $^{161}$Dy and $^{163}$Dy, and the Ho atom has one stable isotope, $^{165}$Ho. The results of the calculations and the corresponding nuclear parameters are presented in Table~\ref{t:HFS}. One can see that there is good agreement between theory and experiment for the magnetic dipole constant $A$ and the electric quadrupole constant $B$ for most states of Dy and Ho. The difference between theory and experiment is within 3\% for the $A$ constants of Dy and Ho, within 4\% for the $B$ constants of Dy and $\sim$~20\% for the $B$ constants of Ho. A similar agreement between theory and experiment was found earlier for the HFS constants of Er~\cite{ErFm}.
Two states of Ho present an exception. These are the $4f^{10}5d6s^2\ (8,\frac{3}{2})_{13/2}$ state and the $4f^{11}6s^2\ ^{4}\rm I^{o}_{9/2}$ state. Here the difference between theory and experiment for the electric quadrupole HFS constant $B$ is significant; in particular, it is 138\% for the $4f^{11}6s^2\ ^{4}\rm I^{o}_{9/2}$ state. This is the same state that shows poor accuracy for the $g$-factor, which indicates that the strong configuration mixing affects the HFS as well.
It should be mentioned that an earlier study performed using the MCDF method also found a low level of accuracy for this state, with a 117\% deviation from the experimental result~\cite{Cheng85}.
Note that similar tests of the accuracy of the CIPT method were performed earlier for the Er atom, which has a similar electronic structure~\cite{ErFm}. All the atomic properties considered above (energies, $g$-factors, and the HFS constants $A$ and $B$) were calculated for the stable isotope with non-zero nuclear spin, $^{167}$Er, and good agreement between the measurements and our results was found (see Tables 1 and 6 of Ref.~\cite{ErFm}). We therefore expect the results for Cf and Es to be similarly accurate.
\section{Ionization potentials}
\begin{table*}
\caption{\label{t:IP}
Experimental and theoretical values of the first ionization potential IP$_1$ (in cm$ ^{-1} $).}
\begin{ruledtabular}
\begin{tabular}{cl lc cc }
&
\multicolumn{2}{c}{ State}&
\multicolumn{3}{c}{IP$_1$}\\
\cline{2-3}
\cline{4-6}
\multicolumn{1}{c}{Atom}&
\multicolumn{1}{c}{Initial}&
\multicolumn{1}{c}{Final}&
\multicolumn{1}{c}{Present}&
\multicolumn{1}{c}{Expt.}&
\multicolumn{1}{c}{Ref.}\\
\hline
Dy& $4f ^{10} 6s ^{2} $ \ $ ^{5}\rm I_{8} $ & $4f ^{10} 6s $ \ $ (8,\frac{1}{2})_{17/2} $&46658& 47901.76(5)&\cite{IPDy}\\
Ho&$4f ^{11} 6s ^{2} $ \ $ ^{4}\rm I^{o}_{{15}/{2}} $& $4f ^{11} 6s $ \ $ (\frac{15}{2},\frac{1}{2})^{\rm o}_{8} $&47819&48567(5)&\cite{IPHo} \\
Cf&$5f ^{10} 7s ^{2} $ \ $ ^{5}\rm I_{8} $ & $5f ^{10} 7s $ \ $ ^{6}\rm I_{{17}/{2}} $&50821& 50663(2)&\cite{IP} \\
Es&$5f ^{11} 7s ^{2} $ \ $ ^{4}\rm I^{o}_{{15}/{2}} $& $5f ^{11} 7s $ \ $ ^{5}\rm I^{o}_{8} $&51763&51358(2)&\cite{IP} \\
&&&&$51364.58(14) _{\rm stat} (50)_{\rm sys}$&\cite{IPEs}\\
\end{tabular}
\end{ruledtabular}
\end{table*}
Calculating the ionization potential (IP) is a good way to test the theoretical approach for the ground state. The IP is obtained as the difference between the ground-state energies of the neutral atom and the ion. The CIPT method, which we use in the present calculations, has its best accuracy for low-lying states, with the accuracy decreasing further up the energy scale; the best accuracy is therefore expected for the ground state. On the other hand, knowing the HFS of the ground state is sufficient for extracting the nuclear parameters $\mu$ and $Q$. Therefore, we calculate the first ionization potential (IP$_1$) for all atoms considered in the present work. We calculate the ground-state energies of the neutral atoms and the corresponding ions in the same $V^{N-1}$ potential and with the same single-electron basis.
This ensures exact cancellation of the energies associated with the core electrons.
The results are presented in Table~\ref{t:IP} and compared with available experimental data.
As can be seen from the table, the deviation of the results from experiment is 2.7\% for Dy, 1.6\% for Ho, 0.3\% for Cf and 0.8\% for Es.
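The quoted percentages follow directly from the entries of Table~\ref{t:IP}. As an elementary illustration (a Python sketch rather than part of our production codes; note that we quote the deviation relative to the calculated value):
\begin{verbatim}
# Relative deviation of the calculated IP_1 from experiment (Table t:IP).
# All values in cm^-1; deviation quoted relative to the calculated value.
data = {"Dy": (46658, 47901.76),
        "Ho": (47819, 48567),
        "Cf": (50821, 50663),
        "Es": (51763, 51358)}
for atom, (theory, expt) in data.items():
    print(atom, f"{abs(expt - theory) / theory * 100:.1f}%")
# Dy 2.7%   Ho 1.6%   Cf 0.3%   Es 0.8%
\end{verbatim}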
\section{Results for HFS}
\begin{table}
\caption{\label{t:GHFS}
Calculated hyperfine structure constants $A$ and $B$ (in MHz) for the ground states of Dy, Ho, Cf and Es atoms.}
\begin{ruledtabular}
\begin{tabular}{cc cc cc}
\multicolumn{1}{c}{Atom}&
\multicolumn{1}{c}{Conf.}&
\multicolumn{1}{c}{Term}&
\multicolumn{1}{c}{$ J $}&
\multicolumn{1}{c}{$A$}&
\multicolumn{1}{c}{$B$}\\
\hline
{ Dy} & $4f^{10}6s^2$ & $^{5}$I & 8 & 587$\times g_I$ & 449$\times Q$\\
{ Ho} & $4f^{11}6s^2$ & $^{4}$I$^{o}$ & 15/2 & 661$\times g_I$ & $-543\times Q$\\
{ Cf} & $5f^{10}7s^2$ & $^{5}$I & 8 & 608$\times g_I$ & 477$\times Q$\\
{ Es} & $5f^{11}7s^2$ & $^{4}$I$^{o}$ & 15/2 & 681$\times g_I$ & $-818\times Q$\\
\end{tabular}
\end{ruledtabular}
\end{table}
In Table~\ref{t:GHFS} we present the results of our calculations of the HFS constants for the ground states of Dy, Ho, Cf and Es. We have calculated both the magnetic dipole HFS constant $A$ and the electric quadrupole HFS constant $B$, which can be used for the extraction of nuclear moments for any isotope with non-zero spin.
For a better understanding of the accuracy of the calculations for the heavy actinides, it is instructive to compare the electron structure factors for the HFS constants with those of the lighter atoms Dy, Ho, and Er. The situation is different for the HFS constants $A$ and $B$. The electron structure factor for the magnetic dipole constant $A$ is almost the same for the heavy actinides and their lighter analogs; it varies within 3\%. The electron structure factors for the HFS constant $B$ are also similar, although the variation is larger, ranging from about 20\% for the Dy--Cf pair to 50\% for the Ho--Es pair.
This justifies using lighter analogs of heavy actinides for the estimation of the uncertainty of the calculations. We assume 3\% uncertainty for the HFS constant $A$ of all considered atoms and 16\% uncertainty for the HFS constant $B$ (as the difference between theory and experiment for the ground state of Ho).
This latter assumption is rather conservative. The difference between theory and experiment for the HFS constant $B$ of the ground state of Dy is about 3\% and it is about 10\% for the ground state of Er~\cite{ErFm}.
This high level of accuracy is somewhat surprising for atoms with open shells.
It is therefore instructive to see how the dominant contributions are formed.
First, we note that according to numerical tests, configuration mixing gives a relatively small contribution to the HFS constants. About 90\% or more comes from the leading configurations, which are $4f^n6s^2$ for Dy and Ho and $5f^n7s^2$ for Cf and Es ($n=10,11$). In these configurations the $s$-electrons form a closed shell and do not contribute to the HFS; the entire contribution therefore comes from the $f$-electrons. It is well known that in the case of excited valence $f$-states (e.g., the $4f$ state of Cs or the $5f$ state of Fr) the HF value of the energy shift due to the HFS operator, $\langle 4f|\hat f| 4f\rangle$, is small and the dominant contribution comes from the core polarisation correction $\langle 4f|\delta V_{\rm core}^f| 4f\rangle$ (see Eq.~(\ref{e:de})). The situation is different in the atoms considered in the present work. The $f$-electron states lie inside the core, localised at about the same distances as other states with the same principal quantum number, i.e., the $f$ shell is not even the outermost shell. For example, $\langle 4f|r| 4f\rangle < 1\,a_B$ for Dy, Ho and Er, while $\langle 4f|r| 4f\rangle \sim 20\,a_B$ for Cs.
Being inside the core, the $f$-states penetrate to short distances near the nucleus, which makes the HF matrix element $\langle 4f|\hat f| 4f\rangle$ large.
In contrast, the core polarization correction $\langle 4f|\delta V_{\rm core}^f| 4f\rangle$ is small ($\sim$~1\%). In the end, the zero-order matrix elements are large while the core polarization and configuration mixing corrections are small. This is the key to the high accuracy of the results.
\begin{table*}
\caption{\label{t:EsHFS}
Hyperfine structure constants $A$ and $B$ (in MHz) of the ground state of Es. The nuclear spin $I$, nuclear magnetic moment $\mu(\mu_N)$, and
nuclear electric quadrupole moment $Q(b)$ values for the isotope $^{253}$Es are taken from Ref.~\cite{Stone1}, while the $^{254}$Es and $^{255}$Es parameters are taken from Ref.~\cite{Raeder2022}; $g_I=\mu/I$. The last column presents references for the experimental data on $A$ and $B$.
The values of $\mu$ and $Q$ obtained in this work are extracted from a comparison of the experimental and calculated HFS constants, assuming a 3\% uncertainty in the calculation of $A$ and a 16\% uncertainty in the calculation of $B$.
\begin{ruledtabular}
\begin{tabular}{cc cc c cc c cc}
\multicolumn{1}{c}{Isotope}&&&
\multicolumn{4}{c}{ This work}&
\multicolumn{3}{c}{Experimental results}\\
\cline{4-7}
\cline{8-10}
\multicolumn{1}{c}{Nuclear Parameters}&
\multicolumn{1}{c}{Conf.}&
\multicolumn{1}{c}{Term}&
\multicolumn{1}{c}{$A$}&
\multicolumn{1}{c}{$B$}&
\multicolumn{1}{c}{$\mu$}&
\multicolumn{1}{c}{$Q$}&
\multicolumn{1}{c}{$A$}&
\multicolumn{1}{c}{$B$}&
\multicolumn{1}{c}{Ref.}\\
\hline
\multicolumn{1}{c} {\bf $ ^{253} $Es} \\
$\mu$= 4.1(7), I= 7/2, Q= 6.7(8) & $5f^{11}7s^2$& $ ^{4} $I$ ^{o}_{15/2}$ &798& -5481& 4.12(15)& 4.8(1.0)& 802(18)& -3916(550)&\cite{Raeder2022}\\
&&&&& 4.20(13)&5.3(8) &817.153(7) & -4316.254(76)&\cite{Es1975}\\
\hline
\multicolumn{1}{c} {\bf $ ^{254} $Es}\\
$\mu$= 3.42(7), I= 7, Q= 9.6(1.2) &$5f^{11}7s^2$& $ ^{4} $I$ ^{o}_{15/2}$ &333&-7853& 3.48(10)& 7.6(1.3) & 339(4)& -6200(300)&\cite{Raeder2022}\\
\hline
\multicolumn{1}{c} {\bf $ ^{255} $Es}\\
$\mu$= 4.14(10), I= 7/2, Q= 5.1(1.7) &$5f^{11}7s^2$& $ ^{4} $I$ ^{o}_{15/2}$ &806&-4172& 4.23(26)& 3.7(1.8)& 824(45) & -3001(1400)&\cite{Raeder2022}\\
\end{tabular}
\end{ruledtabular}
\end{table*}
Table \ref{t:EsHFS} shows the results and analysis of the HFS for three isotopes of Es ($^{253-255}$Es). This table serves two purposes. First, it provides another confirmation of the accuracy of the calculations. However, to compare the calculations with experiment we need to use nuclear moments, which are known with relatively poor accuracy (see the table). For example, the uncertainty of the magnetic moment of the $^{253}$Es nucleus is 17\%, while our estimated accuracy for the HFS constant $A$ is 3\%. This means that we can improve the accuracy of the nuclear moments by extracting them from a comparison of the experimental data with our calculations. The results are presented in the table. We see that a real improvement is obtained only for $\mu(^{253}$Es$)$. For the other nuclear moments the uncertainties are similar, but the central values are shifted. The new and old values are consistent when error bars are taken into account.
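The extraction itself is elementary. As an illustration, the following Python sketch reproduces the $^{253}$Es entries of Table~\ref{t:EsHFS} from the electronic factors of Table~\ref{t:GHFS} and the measured constants of Ref.~\cite{Raeder2022}; treating the experimental and theoretical uncertainties as independent and adding them in quadrature is our assumption here.
\begin{verbatim}
import math

# Electronic structure factors for the Es ground state (Table t:GHFS):
# A = 681 * g_I MHz, B = -818 * Q MHz, with g_I = mu / I.
a_el, b_el = 681.0, -818.0
dA_th, dB_th = 0.03, 0.16            # assumed relative theory errors

# Measured constants for 253Es (Ref. Raeder2022), nuclear spin I = 7/2:
A_exp, dA_exp = 802.0, 18.0          # MHz
B_exp, dB_exp = -3916.0, 550.0       # MHz
I = 7 / 2

mu = I * A_exp / a_el                # magnetic moment, in mu_N
dmu = abs(mu) * math.hypot(dA_exp / A_exp, dA_th)
Q = B_exp / b_el                     # quadrupole moment, in barn
dQ = abs(Q) * math.hypot(dB_exp / B_exp, dB_th)
print(f"mu = {mu:.2f}({dmu:.2f}),  Q = {Q:.1f}({dQ:.1f})")
# mu = 4.12(0.15),  Q = 4.8(1.0)
\end{verbatim}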
\section{Conclusions}
Magnetic dipole and electric quadrupole HFS constants $A$ and $B$ were calculated for the ground states of heavy actinides Cf and Es.
Similar calculations were performed for the lighter analogs of these atoms, Dy and Ho. To establish the accuracy of the results, theory and experiment were compared for HFS constants, energy levels, $g$-factors and ionization potentials wherever experimental data are available.
We found an uncertainty of 3\% for the HFS constant $A$ and about 16\% for the HFS constant $B$.
Using the calculated HFS constants of the heavy elements considered, nuclear magnetic dipole and electric quadrupole moments can be extracted from measured data.
\begin{acknowledgments}
The authors are grateful to Sebastian Raeder for many stimulating discussions.
This work was supported by the Australian Research Council under Grants No. DP190100974 and No. DP200100150. S.O.A. gratefully acknowledges the Islamic University of Madinah (Ministry of Education, Kingdom of Saudi Arabia) for funding his scholarship.
\end{acknowledgments}
\section{Congruence towers and the~$4/3$ bound}
\label{one}
Hurwitz surfaces are an important and famous family of Riemann
surfaces.
We clarify the explicit structure of the Hurwitz quaternion order,
which is of fundamental importance in Riemann surface theory and
systolic geometry.\footnote{In the literature, the term ``Hurwitz quaternion order'' has
been used both in the sense used in the present text, and in the sense
of the unique maximal order of Hamilton's rational quaternions.}
A Hurwitz surface~$X$ by definition attains the upper bound
of~$84(\genus_X - 1)$ for the order~$|{\rm Aut}(X)|$ of the
holomorphic automorphism group of~$X$, where~$\genus_X$ is its genus.
In \cite{KSV}, we proved a systolic bound
\begin{equation}
\label{11}
\syspi_1(X)\geq \tfrac{4}{3} \log(\genus_X)
\end{equation}
for Hurwitz surfaces in a principal congruence tower (see below).
Here the systole~$\syspi_1$ is the least length of a noncontractible
loop in~$X$. The question of the existence of other congruence towers
of Riemann surfaces satisfying the bound \eqref{11} remains open.
Marcel Berger's monograph \cite[pp.~325--353]{Be6} contains a
detailed exposition of the state of systolic affairs up to 2003. More
recent developments are covered in \cite{SGT}.
While \eqref{11} can be thought of as a differential-geometric
application of {\em quaternion algebras}, such an application of {\em
Lie algebras\/} may be found in \cite{e7}.
We will give a detailed description of a specific quaternion algebra
order, which constitutes the arithmetic backbone of Hurwitz
surfaces. The existence of a quaternion algebra presentation for
Hurwitz surfaces is due to G.~Shimura \cite[p.~83]{Sh}. An explicit
order was briefly described by N.~Elkies in~\cite{El0} and in
\cite{El}, with a slight discrepancy between the two descriptions, see
Remark~\ref{32b} below. We have been unable to locate a more detailed
account of this important order in the literature. The purpose of
this note is to provide such an account.
A Hurwitz group is by definition a (finite) group occurring as the
(holomorphic) automorphism group of a Hurwitz surface. Such a group
is the quotient~$\Deltah/\Gamma$ of a pair of Fuchsian groups.
Here~$\Deltah$ is the~$(2,3,7)$ triangle group, while
$\Gamma\normali\Deltah$ is a normal subgroup.
Let~$\eta= 2\cos \tfrac{2\pi}{7}$ and~$K=\Q[\eta]$, a cubic extension of $\Q$.
A class of Hurwitz groups arises from ideals in the Hurwitz quaternion order
\[
\Hur \subset (\eta,\eta)_K,
\]
see \Dref{Ddef} for details.
Recall that for a division algebra~$D$ over a number field~$K$, the
discriminant~$\disc(D)$ is the product of the finite ramification
places of~$D$. Let~$\CQ$ be an order of~$D$, and let~$\algint{K}$ be
the ring of algebraic integers in~$K$. By definition~$\CQ^1$ is the
group of elements of norm~$1$ in~$\CQ$, and a {\em principal
congruence subgroup\/} of~$\CQ^1$ is a subgroup of the form
\begin{equation}
\label{12}
\CQ^1(I) = \set{ x \in \CQ^1 \suchthat x\equiv 1\pmod{I \CQ} }
\end{equation}
where~$I \normali \algint{K}$. Any subgroup containing such a
subgroup is called a {\em congruence subgroup}.
The Hurwitz order is described in Section~\ref{two}. In
Section~\ref{four} we verify its maximality. The precise relationship
between the order and the~$(2,3,7)$ group is given in
Section~\ref{max}. In Section~\ref{six} we note that the Hurwitz order
is Azumaya, which implies that every ideal of the order is generated
by a central element, and every automorphism is inner. It also follows
that all quotient rings are matrix rings, and in Section~\ref{shesh}
we present some explicit examples of these quotients and their
associated congruence subgroups.
\section{The Hurwitz order~$\Hur$}
\label{two}
Let~$\HC^2$ denote the hyperbolic plane. Let~$\Aut (\HC^2)=
\PSL[2](\R)$ be its group of orientation-preserving isometries.
Consider the lattice~$\Deltah \subset \Aut(\HC^2)$, defined as the
even part of the group of reflections in the sides of the~$(2,3,7)$
hyperbolic triangle, \ie geodesic triangle with
angles~$\tfrac{\pi}{2}$,~$\tfrac{\pi}{3}$, and~$\tfrac{\pi}{7}$. We
follow the concrete realization of~$\Deltah$ in terms of the group of
elements of norm one in an order of a quaternion algebra, given by
N.~Elkies in \cite[p.~39]{El0} and in \cite[Subsection~4.4]{El}.
Let~$K$ denote the real subfield of~$\Q[\rho]$, where~$\rho$ is a
primitive \th{7} root of unity. Thus~$K = \Q[\eta]$, where the
element~$\eta = \rho+\rho^{-1}$ satisfies the relation
\begin{equation}
\label{21c}
\eta^3 + \eta^2 - 2\eta -1 = 0.
\end{equation}
Note the resulting identity
\begin{equation}
\label{21b}
(2-\eta)^3 = 7 (\eta-1)^2.
\end{equation}
There are three embeddings of~$K$ into~$\R$, defined by sending~$\eta$
to any of the three real roots of~\eqref{21c}, namely
\[
2\cos\(\tfrac{2\pi}{7}\), 2\cos\(\tfrac{4\pi}{7}\),
2\cos\(\tfrac{6\pi}{7}\).
\]
We view the first embedding as the `natural' one~$K \hra \R$, and
denote the others by~$\s_1,\s_2 \co K \ra \R$. Notice that~$2
\cos(2\pi/7)$ is a positive root, while the other two are
negative.
From the minimal polynomial we have $\Tr[K/\Q](\eta) =
-1$. Multiplying \eq{21c} by the `conjugate' $\eta^3-\eta^2-2\eta+1$
gives $$(\eta^2)^3 - 5(\eta^2)^2 + 6 (\eta^2) -1 = 0,$$ so similarly
$\Tr[K/\Q](\eta^2) = 5$. The recursion relation
\[
\Tr(\eta^{3+i}) = - \Tr(\eta^{2+i}) + 2\Tr(\eta^{1+i}) +\Tr(\eta^i)
\]
provides $\Tr(\eta^3) = -4$ and $\Tr(\eta^4) = 13$. Traces of
multiples in the integral basis $1,\eta,\eta^2$ then give the
discriminant
\[
\disc(K/\Q)=\abs{\Mat{3}{-1}{5}{-1}{5}{-4}{5}{-4}{13}} = 49 .
\]
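These computations are easily verified by machine; for instance, the following SymPy sketch recomputes the traces from the recursion and the determinant of the trace form.
\begin{verbatim}
import sympy as sp

# Traces Tr_{K/Q}(eta^k) via the recursion eta^3 = -eta^2 + 2*eta + 1:
tr = [3, -1, 5]                    # Tr(1), Tr(eta), Tr(eta^2)
while len(tr) < 5:
    tr.append(-tr[-1] + 2*tr[-2] + tr[-3])
print(tr)                          # [3, -1, 5, -4, 13]

# disc(K/Q) = Gram determinant of the trace form on the basis 1, eta, eta^2:
print(sp.Matrix(3, 3, lambda i, j: tr[i + j]).det())   # 49
\end{verbatim}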
By Minkowski's bound \cite[Subsection~30.3.3]{Hasse}, it follows
that every ideal class contains an ideal of
norm~$<\frac{3!}{3^3}\sqrt{49} < 2$, which proves that~$\algint{K} = \Z[\eta]$ is a
principal ideal domain. The only ramified prime is~$7\algint{K} =
\ideal{2-\eta}^3$, \cf \eqref{21b}; here one uses the fact
that~$\eta-1=(\eta^2+2\eta)^{-1}$ is invertible in~$\algint{K}$.
Note that the minimal polynomial~$f(t) = t^3+t^2-2t-1$ remains irreducible modulo~$2$, so
that
\begin{equation}
\label{23b} \algint{K}/\ideal{2} \isom \F_2[\bar{\eta}] = \F_8,
\end{equation}
the field with~$8$ elements.
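This is readily confirmed by machine, e.g.\ with the following SymPy sketch.
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
f = t**3 + t**2 - 2*t - 1
# A single irreducible cubic factor modulo 2, so O_K/(2) is F_8:
print(sp.factor_list(f, modulus=2))    # (1, [(t**3 + t**2 + 1, 1)])
\end{verbatim}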
\begin{defn}\label{Ddef}
We let~$\quat$ be the quaternion~$K$-algebra
\begin{equation}
\label{21}
(\eta,\eta)_K = K[i,j \subjectto i^2 = j^2 = \eta,\, ji = -ij].
\end{equation}
\end{defn}
As mentioned above, the root~$\eta > 0$ defines the natural imbedding
of~$K$ into~$\R$, under which~$D\otimes \R \isom \M[2](\R)$; thus~$D$
is unramified at this real place. On the other hand, we
have~$\s_1(\eta),\s_2(\eta)<0$, so the algebras~$D \otimes_{\sigma_1}
\R$ and~$D \otimes_{\sigma_2} \R$ are isomorphic to the standard
Hamilton quaternion algebra over~$\R$.
Moreover,~$D$ is unramified over all the finite places of~$K$
\cite[Prop.~7.1]{KSV}.
\begin{rem}
By the Albert-Brauer-Hasse-Noether theorem
\cite[Theorem~32.11]{Reiner},~$D$ is the only quaternion algebra
over~$K$ with this ramification data.
\end{rem}
Let~$\O \sub \quat$ be the order defined by
\[
\O = \algint{K}[i,j].
\]
Clearly, the defining relations of~$D$
serve as defining relations for~$\O$ as well:
\begin{equation}\label{defO}
\O \isom \Z[\eta][i,j \subjectto i^2 = j^2 = \eta,\, ji = -ij].
\end{equation}
Fix the element~$\tau = 1+\eta+\eta^2$, and define an element~$j'\in
D$ by setting
\[
j' = \frac{1}{2}(1+\eta i+\tau j).
\]
Notice that~$j'$ is an algebraic integer of~$\quat$, since the reduced
trace is~$1$, while the reduced norm is
\[
\frac{1}{4}(1-\eta\cdot \eta^2 - \eta\cdot \tau^2 + \eta^2 \cdot 0) =
-1-3\eta,
\]
so that both are in~$\algint{K}$. In particular, we have the relation
\begin{equation}
\label{jt}
{j'}^2 = j' + (1+3\eta).
\end{equation}
We define an order~$\Elk \subset \quat$ by setting
\begin{equation}
\label{22}
\Elk = \Z[\eta][i,j'].
\end{equation}
Finally, we define a new order~$\Hur \subset \quat$ by setting
\begin{equation}
\label{72}
\Hur = \Z[\eta][i,j,j'].
\end{equation}
\begin{rem}\label{32b}
There is a discrepancy between the descriptions of a maximal order
of~$\quat$ in \cite[p.~39]{El0} and in \cite[Subsection~4.4]{El}.
According to \cite[p.~39]{El0},~$\Z[\eta][i,j,j']$ is a maximal
order. Meanwhile, in \cite[Subsection~4.4]{El}, the maximal order
is claimed to be the order~$\Elk = \Z[\eta][i,j']$, described
as~$\Z[\eta]$-linear combinations of the elements~$1,i,j'$,
and~$ij'$, on the last line of \cite[p.~94]{El}. The correct
answer is the former, \ie the order~\eqref{72}.
\end{rem}
We correct this minor error in \cite{El}, as follows.
\begin{lem}\label{2.6}
The order~$\Hur$ strictly contains~$\Elk = \Z[\eta][i,j']$.
\end{lem}
\begin{proof}
The identities \eq{jt} and
\begin{equation}
\label{23}
j'i = \eta^2 + i - ij'
\end{equation}
show that the module
\begin{equation}
\label{Q0prime}
\CQ_{\opname{Elk}}' = \Z[\eta] + \Z[\eta] i + \Z[\eta]j' +
\Z[\eta]ij',
\end{equation}
which is clearly contained in~$\Elk$, is closed under multiplication,
and thus equal to~$\Elk$. Moreover, the set~$\set{1,i,j',ij'}$ is a
basis of~$D$ over~$K$, and a computation shows that
\[
j = \frac{-9+2\eta+3\eta^2}{7}+\frac{3-3\eta-\eta^2}{7} i+ \frac
{18-4\eta-6\eta^2} {7}j',
\]
with non-integral coefficients. Therefore~$j \not \in \Elk$.
\end{proof}
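The displayed expression for~$j$ can also be confirmed numerically in the $2\times 2$ real matrix model of~$\quat$ determined by the natural embedding $\eta = 2\cos(2\pi/7)$; the representation chosen below ($i \mapsto \operatorname{diag}(\sqrt{\eta},-\sqrt{\eta})$ and $j$ antidiagonal) is merely one convenient choice.
\begin{verbatim}
import numpy as np

eta = 2 * np.cos(2 * np.pi / 7)
tau = 1 + eta + eta**2
s = np.sqrt(eta)
one = np.eye(2)
i = np.array([[s, 0.0], [0.0, -s]])      # i^2 = eta
j = np.array([[0.0, 1.0], [eta, 0.0]])   # j^2 = eta, ji = -ij
jp = 0.5 * (one + eta * i + tau * j)     # the element j'

lhs = ((-9 + 2*eta + 3*eta**2) * one
       + (3 - 3*eta - eta**2) * i
       + (18 - 4*eta - 6*eta**2) * jp) / 7
assert np.allclose(lhs, j)
\end{verbatim}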
\section{Maximality of the order~$\Hur$}
\label{four}
\begin{prop}
The order~$\Hur$ is a maximal order of~$D$.
\end{prop}
\begin{proof}
By \Lref{2.6} it is enough to show that every order
containing~$\Elk$ is contained in~$\Hur$. Let~$M \supseteq \Elk$
be an order, namely a ring which is finite as
a~$\algint{K}$-module, and let~$x \in M$. Since~$\{1,i,j,ij\}$ is
a~$K$-basis for the algebra~$D$, we can write
\[
x = \tfrac{1}{2}(a+bi+cj+dij)
\]
for suitable~$a,b,c,d \in K$. Recall that every element of an
order satisfies a monic polynomial over~$\algint{K}$, so in
particular it has integral trace. Since we have~$x,ix,jx,ijx \in
M$,
with traces~$a,\eta b,\eta c,-\eta^2d$, respectively, while the
element~$\eta = (\eta^2+\eta-2)^{-1}$ is invertible
in~$\algint{K}$, we conclude that, in fact,~$a,b,c,d \in
\algint{K}$. Now,
$$\tr(xj') = \frac{1}{4}\tr((a+bi+cj+dij)(1+\eta i+\tau j)) = \frac{1}{2}(a+\eta^2b +\eta\tau c) $$
and
$$\tr(xij') = \frac{1}{4}\tr((a+bi+cj+dij)i(1+\eta i+\tau j)) = \frac{1}{2}(\eta^2 a+\eta b-\eta^2 \tau d ).$$
Since these are integers, and since $\eta \tau \equiv \eta + 1$
and $\eta^3\tau \equiv 1$ modulo $2\algint{K}$, we have
that~$a\equiv \eta^2 b + (\eta+1)c$, and~$d \equiv \eta^3 a +
\eta^2 b \equiv \tau b + \eta c$.
It then follows that
\[
x - (\eta^2+2\eta+1)cj' - ((\eta^2+3\eta+1)c+b)ij' \in
\algint{K}[i,j],
\]
so that~$x \in \Hur$.
\end{proof}
\begin{rem}
Since~$K \Hur = D$, the center of~$\Hur$ is
$$\Cent(\Hur) = \Hur \cap \Cent(D) = \Hur \cap K = \algint{K}.$$
\end{rem}
While~$\O$ admits the presentation \eq{defO}, typical of symbol
algebras, it should be remarked that~$\Hur$ cannot have such a
presentation.
\begin{rem}\label{33}
There is no pair of anticommuting generators of~$\Hur$ over
$\Z[\eta]$.
\end{rem}
\begin{proof}
One can compute that~$\Hur/2\Hur$ is a~$2\times 2$ matrix algebra
\cite[Lemma~4.3]{KSV}, and in particular non-commutative; however
anticommuting generators will commute modulo~$2$.
\end{proof}
The prime~$2$ poses the only obstruction to the existence of an
anti-commuting pair of generators. Indeed, adjoining the fraction
$\frac{1}{2}$, we clearly have
\[
\Hur[\tfrac{1}{2}] = \O[\tfrac{1}{2}] =
\algint{K}[\tfrac{1}{2}][i,j\,|\,i^2 = j^2 = \eta,\, ji = -ij],
\]
and this is an Azumaya algebra over~$\algint{K}[\frac{1}{2}]$, see
Definition~\ref{62}. A presentation of~$\Hur$ is given in
Lemma~\ref{32}.
\section{The~$(2,3,7)$ group inside~$\Hur$}
\label{max}
The group of elements of norm~$1$ in the order~$\Hur$, modulo the
center~$\set{\pm 1}$, is isomorphic to the~$(2,3,7)$ group
\cite[p.~95]{El}. Indeed, Elkies gives the elements
\begin{eqnarray*}
g_2 & = & \frac{1}{\eta}ij, \\
g_3 & = & \frac{1}{2}(1+(\eta^2 - 2)j + (3-\eta^2)ij), \\
g_7 & = & \frac{1}{2}((\tau-2)+(2-\eta^2)i+(\tau-3)ij),
\end{eqnarray*}
satisfying the relations~$g_2^2 = g_3^3 = g_7^7 = -1$ and~$g_2 = g_7
g_3$, which therefore project to generators of~$\Deltah \subset
\PSL[2](\R)$.
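Using the natural embedding of~$K$ into~$\R$ and a convenient $2\times 2$ matrix model of~$\quat$ (a numerical sketch; any faithful representation would do), these relations are readily checked:
\begin{verbatim}
import numpy as np

eta = 2 * np.cos(2 * np.pi / 7)
tau = 1 + eta + eta**2
s = np.sqrt(eta)
one = np.eye(2)
i = np.array([[s, 0.0], [0.0, -s]])      # i^2 = eta
j = np.array([[0.0, 1.0], [eta, 0.0]])   # j^2 = eta, ji = -ij
ij = i @ j

g2 = ij / eta
g3 = 0.5 * (one + (eta**2 - 2) * j + (3 - eta**2) * ij)
g7 = 0.5 * ((tau - 2) * one + (2 - eta**2) * i + (tau - 3) * ij)

assert np.allclose(g2 @ g2, -one)
assert np.allclose(np.linalg.matrix_power(g3, 3), -one)
assert np.allclose(np.linalg.matrix_power(g7, 7), -one)
assert np.allclose(g7 @ g3, g2)
\end{verbatim}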
\begin{lem}
The Hurwitz order is generated, as an order, by the elements~$g_2$
and~$g_3$, so that we can write~$\Hur = \algint{K}[g_2,g_3]$.
\end{lem}
\begin{proof}
We have~$g_2,g_3,g_7 \in \Hur$ by the invertibility of~$\eta$ in
$\algint{K}$ and the equalities
\begin{eqnarray*}
g_3 & = & (3+6\eta-\eta^2) + (1+3\eta)i- (2+\eta^2) jj' -2 (ijj'-(1-\eta)ij),\\
g_7 & =
&(\tau+3\eta)+2(1+\eta)i-(\eta+\eta^2)jj'+(2-\tau)(ijj'-(1-\eta)ij)
.
\end{eqnarray*}
Conversely, we have the relations
\begin{eqnarray*}
i & = & (1+\eta)(g_3 g_2 - g_2 g_3),\\
j & = & (1+\eta)(1 + (\eta^2+\eta-1)g_2- 2 g_3),\\
j' & = & (1+\eta i)g_3 + (\eta^2 - 2)ij + j,
\end{eqnarray*}
proving the lemma.
\end{proof}
\begin{lem}\label{32}
A basis for the order~$\Hur$ as a free module over~$\Z[\eta]$ is
given by the four elements~$1$,~$g_2$,~$g_3$, and~$g_2 g_3$.
The defining relations~$g_2^2 = -1$,~$g_3^2 = g_3 - 1$ and~$g_2g_3
+ g_3 g_2 = g_2 - (\eta^2 +\eta-1)$ provide a presentation of
$\Hur$ as an~$\algint{K}$-order.
\end{lem}
\begin{proof}
The module spanned by~$1,g_2,g_3,g_2g_3$ is closed under
multiplication by the relations given in the statement (which are
easily verified); thus~$\algint{K}[g_2,g_3] =
\opname{span}_{\algint{K}}\set{1,g_2,g_3,g_2g_3}$. The relations
suffice since the ring they define is a free module of rank~$4$,
which clearly projects onto~$\Hur$.
\end{proof}
\begin{rem}
An alternative basis for the order~$\Hur$ as a free module
over~$\Z[\eta]$ is given by the four elements~$1$,~$i$,~$jj'$,
and~$\kay = ijj'-(1-\eta)ij$.
\end{rem}
\begin{rem}
Since~$\O$ is a free module of rank~$4$ over~$\algint{K}$, so
is~$\frac{1}{2}\O$, and~$\frac{1}{2}\O/\O$ is a~$4$-dimensional vector
space over~$\algint{K}/2\algint{K}$, which is the field of
order~$8$. Furthermore, one can check that~$\Hur/\O$ is a
two-dimensional subspace, namely~$\dimcol{\frac{1}{2}\O}{\Hur} =
\dimcol{\Hur}{\O} = 2^6$, where we calculate the indices of the
orders as abelian groups.
\end{rem}
\section{Azumaya algebras}
\label{six}
We briefly describe a useful generalization of the class of
central simple algebras over fields, to algebras over commutative
rings.
\begin{defn}[\eg\ {\cite[Chapter~2]{Saltman}}]\label{62}
Let~$R$ be a commutative ring. Let~$A$ be an~$R$-algebra which is
a faithful finitely generated projective~$R$-module. If the
natural map~$A \tensor[R] A^{\opname{op}} \ra \End_R(A)$ (action
by left and right multiplication) is an isomorphism, then~$A$ is
an {\emph{Azumaya algebra}} over~$R$.
\end{defn}
Suppose every non-zero prime ideal of~$R$ is maximal (which is the
case with every Dedekind domain, such as~$\algint{K} = \Z[\eta]$),
and let~$F$ denote the field of fractions of~$R$. It is known that
if~$A$ is an~$R$-algebra, which is a finite module, such that
\begin{enumerate}
\item for every maximal ideal~$M \normali R$,~$A/MA$ is a central
simple algebra, of fixed degree, over~
$R/MR$; and
\item~$A\tensor[R] F$ is central simple, of the same degree, over
$F$,
\end{enumerate}
then~$A$ is Azumaya over~$R$ \cite[Theorem~2.2.a]{Saltman}. The
second condition clearly holds for~$\Hur$ over~$\algint{K}$
since~$\Hur \tensor[\algint{K}] K \isom D$.
In \cite[Lemma~4.3]{KSV} we proved the following theorem.
\newcommand\fp{{\mathfrak p}}
\begin{thm}\label{6.1}
For every ideal~$I \normali \algint{K}$, we have an isomorphism
\[
\Hur/I\!\Hur \isom \M[2] (\algint{K}/I) .
\]
\end{thm}
This was proved in \cite{KSV} for an arbitrary maximal order in a
division algebra with no finite ramification places, by
decomposing~$I$ as a product of prime power ideals, applying the
isomorphism~$\CQ/\fp^t = \CQ_{\fp} / \fp^t \CQ_{\fp}$
\cite[Section~5]{Reiner} for~$\CQ = \Hur$ ($\CQ_\fp$ being the
completion with respect to the~$\fp$-adic valuation), and using
the structure of maximal orders over a local ring
\cite[Section~17]{Reiner}.
We therefore obtain the following corollary.
\begin{cor}
The order~$\Hur$ is an Azumaya algebra.
\end{cor}
This fact has various ring-theoretic consequences. In particular,
there is a one-to-one correspondence between two-sided ideals of
$\Hur$ and ideals of its center,~$\algint{K}$
\cite[Proposition~2.5.b]{Saltman}. Since~$\algint{K}$ is a
principal ideal domain, it follows that every two-sided ideal of
$\Hur$ is generated by a single central element.
Another property of Azumaya algebras is that every automorphism is
inner (namely, induced by conjugation by an invertible element)
\cite[Theorem~2.10]{Saltman}, in the spirit of the Skolem-Noether
theorem, \cf \cite[p.~107]{MR}.
\section{Quotients of~$\Hur$}\label{shesh}
In order to make \Tref{6.1} explicit, suppose~$I \normali
\algint{K}$ is an odd ideal (namely~$I+2\algint{K} = \algint{K}$).
By the inclusion~$2 \Hur \sub \O$, we have that~$\O + \IHur =
\Hur$ and~$\O \cap \IHur = I \O$. Therefore
\[
\O/I \O = \O/(\O \cap \IHur) \isom (\IHur + \O)/\IHur = \Hur / \IHur,
\]
and so~$\O /I\O \isom \M[2](L)$ for~$L = \algint{K}/I$. From the
presentation of~$\O$, see \eq{defO}, it follows that
\[
\O /I\O \isom L[i,j \subjectto i^2 = j^2 = \eta, ji = -ij],
\]
which allows for an explicit isomorphism~$\O/I\O \ra \M[2](L)$.
\begin{exmpl}[First Hurwitz triplet]
\label{65}
The quotient~$\Hur/13\Hur$ can be analyzed as follows. Since the
minimal polynomial~$\lam^3+\lam^2 - 2\lam-1$ factors over~$\F_{13}$ as
$(\lam-7)(\lam-8)(\lam-10)$, we obtain the ideal decomposition
\begin{equation}
\label{61b}
13\algint{K} = \ideal{13,\eta-7}\ideal{13,\eta-8}\ideal{13,\eta-10},
\end{equation}
and the isomorphism~$\algint{K}/\ideal{13} \ra \F_{13}\times
\F_{13}\times \F_{13}$, defined by~$\eta \mapsto (7,8,10)$. In fact, one has
$$13 = \eta(\eta+2)(2\eta-1)(3-2\eta)(\eta+3),$$ where~$\eta(\eta+2)$
is invertible, and the other factors generate the ideals given
above, in the respective order; therefore, \eqref{61b} can be
rewritten as
\[
13\algint{K} = (2\eta-1) \algint{K} \cdot (3-2\eta) \algint{K} \cdot
(\eta+3) \algint{K} .
\]
An embedding~$\algint{K}[i]/ \ideal{13} \hra \M[2](\F_{13})\times
\M[2](\F_{13})\times \M[2](\F_{13})$ is obtained by mapping the
generator~$i$ via
\[
i \mapsto
\left(\smat{0}{1}{7}{0},\smat{0}{1}{8}{0},\smat{0}{1}{10}{0}
\right),
\]
satisfying the defining relation~$i^2 = \eta$. In order to extend this
embedding to~$\Hur/13\Hur$, we need to find in each case a matrix
which anti-commutes with~$i$, and whose square is~$\eta$. Namely, we
seek a matrix
\[
\smat{a}{b}{-\eta b}{-a},
\]
such that~$a^2 - \eta b^2 = \eta$ ($\eta$ stands for~$7,8$ or
$10$, respectively). Solving this equation in each case, the
map
\[
\Hur/13\Hur \ra \M[2](\F_{13})\times \M[2](\F_{13}) \times
\M[2](\F_{13})
\]
may then be defined as follows:
\[
j \mapsto \left( \smat{1}{1}{6}{12}, \smat{4}{1}{5}{9},
\smat{6}{0}{0}{7} \right).
\]
The map is obviously onto each of the components, and thus, by the
Chinese remainder theorem, onto the product.
The three prime ideals define a triplet of principal congruence
subgroups, as in \eqref{12}. One therefore obtains a triplet of
distinct Hurwitz surfaces of genus~$14$. All three differ both in
the value of the systole and in the number of systolic loops
\cite{Vo03}.
\end{exmpl}
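The congruences used in this example are easily machine-checked; a minimal sketch:
\begin{verbatim}
import numpy as np

eta_vals = [7, 8, 10]        # roots of t^3 + t^2 - 2t - 1 modulo 13
j_mats = [np.array([[1, 1], [6, 12]]),
          np.array([[4, 1], [5, 9]]),
          np.array([[6, 0], [0, 7]])]
for e, J in zip(eta_vals, j_mats):
    assert (e**3 + e**2 - 2*e - 1) % 13 == 0
    I = np.array([[0, 1], [e, 0]])
    E = e * np.eye(2, dtype=int)
    assert ((I @ I - E) % 13 == 0).all()      # i^2 = eta
    assert ((J @ J - E) % 13 == 0).all()      # j^2 = eta
    assert ((J @ I + I @ J) % 13 == 0).all()  # ji = -ij
\end{verbatim}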
\begin{exmpl}[Klein quartic]
\label{6.2}
Consider the ramified prime,~$p = 7$. The minimal polynomial of~$\eta$
factors modulo~$7$ as~$(t-2)^3$, and so~$7\algint{K} = \got{p}^3$
for~$\got{p} = \ideal{\eta-2}$, see identity \eqref{21b}. The
quotient
\[
L = \algint{K}/\got{p}^3 \isom \F_7[\epsilon|\epsilon^3 = 0]
\]
is in this case a local ring, with~$\algint{K}/7 \algint{K} \isom
L$ via~$\eta \mapsto 2+\epsilon$. The isomorphism~$\Hur/7\Hur \ra
\M[2](L)$ can be defined by~$i \mapsto \smat{0}{1}{2+\epsilon}{0}$
and~$j \mapsto (3-\epsilon+\epsilon^2) \smat{1}{0}{0}{-1}$, taking
advantage of the square root~$(3-\epsilon +\epsilon^2)^2 =
2+\epsilon$ in~$L$.
The Hurwitz surface defined by the principal congruence subgroup
associated with the ideal~$\ideal{2-\eta}$ is the famous Klein quartic, a
Hurwitz surface of genus~$3$.
\end{exmpl}
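The square root used above is again easy to confirm; a small SymPy computation in $L$ (working modulo $\epsilon^3$ and modulo $7$):
\begin{verbatim}
import sympy as sp

eps = sp.symbols('epsilon')
sq = sp.expand((3 - eps + eps**2)**2)   # square of the claimed root
sq = sp.rem(sq, eps**3)                 # reduce modulo epsilon^3
print(sp.trunc(sq, 7))                  # epsilon + 2, the image of eta
\end{verbatim}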
Having computed~$\Hur/I\Hur$ for~$I$ an odd ideal, it remains to
consider the even prime,~$2$. Recall that the order~$\Elk$ was defined
in~\eqref{22}.
\begin{prop}
Let~$I = 2^t\algint{K}$ and~$L = \algint{K}/I$. Then
$$\Hur/\IHur \isom \Elk/\IElk \isom \M[2](L).$$
\end{prop}
\begin{proof}
Since~$\tau j = 2j' - 1 - \eta i$, we have that~$\tau \Hur \sub
\Elk$; it then follows that~$\Elk + \IHur = \Hur$ and~$\Elk \cap
\IHur = \IElk$. As before,
$$\Elk/\IElk \isom \Hur / \IHur,$$ and so~$\Elk /\IElk \isom \M[2](L)$ for
$L = \algint{K}/I$.
Let us make this isomorphism explicit. Let~$\eta \in L$ denote the
image of~$\eta$ under the projection~$\algint{K}\ra L$. By the
relations obtained in \Lref{2.6}, we have the following presentation:
$$\Elk/\IElk \isom L[x,y\,|\,x^2 = \eta,\, y^2 = 1+3\eta+y,\,yx =
\eta^2 + x - xy]$$
Let~$b \in L$ be a solution to~$b = 2+b^2$ (such a solution exists
by Hensel's lemma, or one can iterate the defining equation).
Taking~$x' = \smat{0}{1}{\eta}{0}$ and~$y' = \smat{\eta^2}{\eta
b}{\eta^2(1-b)}{1-\eta^2}$ in~$\M[2](L)$, one verifies that~$x'$
and~$y'$ satisfy the required relations, and so~$x\mapsto x'$ and
$y\mapsto y'$ define an isomorphism.
\end{proof}
\begin{exmpl}[Fricke-Macbeath curve]
Let~$I=2\algint{K}$. Then~$L=\algint{K}/I= \F_8$ by \eqref{23b}, and
so~$\Hur/2\Hur = \M[2](\F_8)$, as was already mentioned in
\Rref{33}. The associated quotient is
\[
\CQ_{\operatorname{Hur}}^1/\CQ_{\operatorname{Hur}}^1(I) \isom
\SL[2](\F_8)=\PSL[2](\F_8),
\]
which is the automorphism group of the corresponding Hurwitz surface
of genus~$7$, called the Fricke-Macbeath curve \cite[p.~37]{El0}.
\end{exmpl}
\begin{rem}
The quotient~$\O/2\O$ is a local commutative ring, with residue
field~$\F_8$ and a radical~$J$ whose dimension over~$\F_8$ is~$3$,
satisfying~$\dim(J^2) = 1$ and~$J^3 = 0$.
\end{rem}
\begin{proof}
Since~$ji = -ij$, the elements~$i$ and~$j$ commute modulo~$2$, so
the quotient ring is commutative. Also,~$\eta^8 \equiv \eta
\pmod{2}$. Taking the defining relations of~$\O$ modulo~$2$ we
obtain~$\O/2\O = \F_8[i,j \subjectto (i-\eta^4)^2 = (j-\eta^4)^2 =
0]$, so take~$J = \F_8 \cdot (i-\eta^4) + \F_8 \cdot (j-\eta^4) +
\F_8 \cdot (i-\eta^4)(j-\eta^4)$.
\end{proof}
\bibliographystyle{amsalpha}
\section{Introduction and Conclusions}
\label{sec:intro}
The dynamics of quantum field theories driven far from equilibrium is a fascinating topic, owing to the complex interplay of quantum and statistical behaviours in the system. While a quantitative understanding of how field theories respond to non-linear external sources remains in general an open problem, in recent years one has gained some insight into such phenomena.
On the one hand progress in this direction has been driven by experimental developments which allow for a detailed study. For instance the ability to simulate many-body dynamics in cold-atom systems has led to the opening of a new frontier in dynamical simulations, cf., \cite{Polkovnikov:2010yn} for a recent review. On the other hand, theoretical horizons have been broadened with the gauge/gravity duality providing an excellent arena to explore the dynamics of strongly interacting many-body systems using
(classical) gravitational dynamics in a suitable limit (cf., \cite{Hubeny:2010ry} for a not so recent review). Coupled with the development of excellent numerical algorithms for studying dynamical problems in AdS gravity \cite{Chesler:2008hg,Chesler:2013lia}, the confluence of ideas and techniques provides an excellent opportunity to further our understanding of
out-of-equilibrium dynamics.
A much studied protocol in this context is the quantum quench dynamics, wherein one takes a system initially in equilibrium, typically in the ground state, and subjects it to external sources which change the subsequent dynamics by modifying the Hamiltonian. The rate at which sources act on the system controls the features of the subsequent relaxation, assuming that the sources are non-vanishing for a finite amount of time. The analysis of such a quench protocol has benefited both from theoretical understanding using standard quantum field theory technology in low dimensions \cite{Calabrese:2005in,Calabrese:2006rx,Calabrese:2007rg} and from a wide array of examples that have been studied holographically in the recent past
\cite{Bhattacharyya:2009uu,Das:2010yw,Basu:2011ft,Buchel:2012gw,Bhaseen:2012gg,Basu:2012gg,Nozaki:2013wia,Buchel:2013lla,Hartman:2013qma,Basu:2013vva,Li:2013fhw,Buchel:2013gba,Auzzi:2013pca,Buchel:2014gta}.
In most cases the interest is in the approach to equilibrium at late times and the rate at which various observables thermalize
\cite{Danielsson:1999fa,Hubeny:2007xt,AbajoArrastia:2010yt,Albash:2010mv,Balasubramanian:2010ce,Balasubramanian:2011ur,Aparicio:2011zy,Balasubramanian:2011at,Keranen:2011xs,Galante:2012pv,Caceres:2012em,Hubeny:2013hz,Hartman:2013qma,Liu:2013iza,Balasubramanian:2013oga,Liu:2013qca,Abajo-Arrastia:2014fma}. Note that since we inject energy in the process of the quench, even an initially pure state will appear to be well approximated by a thermal ensemble asymptotically (assuming that the field theory dynamics are sufficiently ergodic).
A slightly different but related scenario is one where we subject a system, again initially in an equilibrium configuration, to an external driving source which keeps doing work on it throughout the entire time period under study. More specifically, we will be interested in examining the behaviour when the initial state is chosen to be a thermal density matrix, so that one can simultaneously explore the response of a quantum dissipative system. For non-linear dynamical systems the response under such external driving can provide insight into the dynamics via the coherent build-up of the response.
Classical analogs of what we have in mind are situations where we drive a (damped) pendulum steadily or subject a viscous fluid to external forcing. The latter is particularly apposite, for the problem we study can be thought of as a hot deconfined plasma of a planar gauge theory disturbed by an external source, as studied in the hydrodynamic context in
\cite{Bhattacharyya:2008ji}.
Rather than letting the driving grow without bound, we will subject our plasma to a periodic driving by turning on the source for a relevant operator.
One therefore has two relevant dimensionful parameters characterizing the situation:
(a) the amplitude of the external force, whose scaling dimension is set by the conformal weight of the operator we exploit, and (b) the frequency of the external driving. The third scale, the temperature of the initial equilibrium state, can be factored out if we are interested in describing the dynamics of a conformally invariant system, which is most natural in the gauge/gravity context. This scenario was explored in \cite{Auzzi:2013pca}, which carried out a perturbative analysis for small amplitudes of the driving source. A related analysis of periodically driving a quantum system near a critical point was undertaken in \cite{Li:2013fhw}.
Gravitationally, the problem we study is the following: we have a Schwarzschild-\AdS{4} black hole modeling our initial thermal density matrix of a three-dimensional CFT. At some instant of time on the boundary we turn on a periodic source for a relevant scalar operator, which we specifically choose to be of dimension $2$ for simplicity.\footnote{ This choice turns out to have several advantages: the dual scalar, being conformally coupled to gravity in the bulk, allows for a certain amount of technical simplification in the various holographic renormalizations we need to carry out to extract physical data.} The physics of the system is captured by examining the behaviour of various observables as we vary the amplitude ${A}$ and the period ${P}$ of the driving (measured e.g. in units of the initial temperature). We will in particular extend the perturbative analysis of \cite{Auzzi:2013pca}, valid for ${A} \ll 1$, to the non-perturbative regime
${A} \gg 1$ for a wide range of driving frequencies. We find that the system naturally exhibits at least four different phases which are depicted in phase diagram Fig.~\ref{fig:PP_qualitative}; two of these (labeled IIb and III) are non-perturbative in ${A}$.
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{figures/phase_diagram_qualitative.pdf}
\setlength{\unitlength}{1cm}
\begin{picture}(0.3,0.4)(0,0)
\put(0.15,0.56){\makebox(0,0){ ${P}\, T_0$}}
\put(-13.04,10.5){\makebox(0,0){ ${A}/T_0$}}
\end{picture}
\caption{The ``phase diagram'' of the driven holographic plasma characterized by the period (${P}$) and amplitude (${A}$) of the driving force, measured in units of the initial thermal scale $T_0$. There are four distinct regimes marked on the diagram, which are explained in the main text. $\sigma_\text{in}$ refers here to the imaginary (or in-phase) part of the conductivity defined in Eq.~\eqref{eq:sigdef}. As we move from southwest to northeast in the figure, the system is driven into a more non-linear regime; the crossing of the grey-dashed boundary marks the turning on of the in-phase part of the conductivity $\sigma_{\text{in}}$ in regime II, and the crossing of the blue-dashed boundary signifies the entrance into the resonance phase of regime III, i.e., $|\phi^{\text{max}}_{1}| \rightarrow \infty$.
The character of the different regimes is further illustrated by displaying the phase portrait of the scalar operator (expectation value against source) used to drive the plasma.}
\label{fig:PP_qualitative}
\end{figure}
Before we describe the different phases, let us examine for a moment the physics of the gravitational system qualitatively. Initially we have a planar black hole in \AdS{4}. When we turn on the scalar source, we are injecting energy into the bulk. This energy does work on the system and simultaneously heats it up. The latter is seen by the fact that some of the energy falls behind the horizon, which grows\footnote{ As we will be describing the dynamics of an Einstein-scalar system with the scalar field satisfying the null energy condition, the areas of the event and apparent horizon (in the canonical foliation) have to grow monotonically -- a consequence of the area theorem \cite{Hayward:1993wb} (see \cite{Booth:2005qc} for an excellent overview). We will elaborate on this point in \S\ref{sec:bulksol}.} -- this is the gravitational response to the disturbance of the plasma. However, in this process we also induce an expectation value for the operator whose source we tweak. When we disturb the system `slowly enough', the operative parameter measuring this being the product of the amplitude and the period, the system has time to catch up. This is the dissipation-dominated regime indicated by I in Fig.~\ref{fig:PP_qualitative}. In this regime the injected energy falls behind the horizon with little fanfare.
As we ramp up the disturbance, the plasma is driven more and more non-linear, with a dynamical cross-over visible as we move into phases IIa or IIb of Fig.~\ref{fig:PP_qualitative}.
Note that the entire non-linear dynamics in the system is induced by the non-linearities of gravity, for we model the system simply by a free (massive) scalar field. In this phase the response gets more in-phase with the source. It is amusing to contrast this with non-linear scalar dynamics; we find that in this phase the scalar 1PI effective action induced by the gravitational interactions is well mimicked by a polynomial potential (see \cite{Basu:2013vva} for previous studies of self-interacting scalars in AdS). In this regime there is less dissipation; the entropy production by the growth of the horizon area is slowed down relative to region I. The primary distinction between the two phases IIa and IIb is the lag in the response seen as the period is increased (hence the tilt in the phase portrait).
For even larger disturbances, we enter region III, where the system response becomes highly resonant and there is a steep growth in the response. As one might suspect, this is the domain where the gravitational non-linearities are strongest; indeed, one can check that such behaviour is not visible for a polynomially (self-)interacting scalar. In the course of our investigation we explore not just the phase portrait, but various other physical quantities of interest, such as the growth of entropy and dissipation in the system, the rate at which entanglement is produced, etc. For instance, region IIb is characterized by enormous fluctuations in the energy of the system over a single period and continuous but non-differentiable behaviour in the entanglement entropy of a sub-system.
Let us contrast our results with the analysis in the perturbative regime of small amplitudes undertaken in \cite{Auzzi:2013pca}.\footnote{ We note that \cite{Li:2013fhw} study the influence of periodic electric fields on the phase transition between a normal and a superconducting phase using holography. It is clear in this case that driving the system will make it exit the low temperature superconducting phase as the energy expended heats up the system past the critical point, as their analysis confirms.}
As one can see from the phase diagram in Fig.~\ref{fig:PP_qualitative}, for small amplitudes, ${A} \ll 1$, one is largely in the dissipation-dominated linear response regime. This is indeed consistent with the analysis of \cite{Auzzi:2013pca}, who explore the dependence of observables on both the period of the driving and the dimension $\Delta$ of the perturbing operator. Since for us the latter remains frozen, we are unable to check the detailed scaling relations they find, but in the common domain of overlap we do indeed find agreement. In particular, for perturbing operators of dimension $\Delta =2$ in CFT$_3$ we expect to see that the energy dissipation as a function of the period scales as $E_\text{diss} \sim {P}^{-1}$ (for ${A} \ll 1$), independent of the initial temperature.
Furthermore, we also expect that the work done in each cycle, measured by the entropy produced, to scale with the increased energy density. We find that in the slow driving regime this scaling closely tracks the prediction from local thermal equilibrium, but starts to grow more steeply as we transit into more interesting non-linear regimes.
While the various response functions provide us with a useful diagnostic of the phase structure of the dynamical evolution, we also attempt to gain insight into the non-equilibrium dynamics using entanglement entropy for small sub-systems. This non-local probe exhibits distinct characteristic features in the various regimes: for weak driving, the growth of entanglement is gradual (and appears to track the growth of thermal entropy), while for strong driving there are steep oscillations and glitches in its evolution. We should caution the reader that we have only examined entanglement entropy for relatively small sub-systems, owing to technical complications with numerical stability. Nevertheless these results suggest a rather rich structure in the temporal growth of entanglement with driving, which deserves further detailed exploration \cite{Rangamani:2015ys}.
The outline of this paper is as follows. We begin in \S\ref{sec:setup} by giving a quick overview of the basic set-up and the numerical solutions. Following this in \S\ref{sec:obs}, we set out the various observables we use to explore the behaviour of the system. In particular, we justify the rationale behind phase diagram Fig.~\ref{fig:PP_qualitative} and how we should physically think of the different regimes. \S\ref{sec:ee} is devoted to the study of entanglement entropy in this system where we focus on the region of an infinite strip and exploit the underlying homogeneity of the set-up. We conclude with a discussion in \S\ref{sec:discuss}. Some technical results about holographic renormalization required for computing various observables is collected in the Appendices; Appendix \ref{sec:holren} collects some useful information about holographic renormalization in our models while Appendix \ref{sec:eeapp} provides details relevant for computing entanglement entropy.
\section{Driven CFTs and their Holographic Duals }
\label{sec:setup}
We first take the opportunity to set up the basic problem of a field theory driven out of equilibrium by turning on a source for a relevant operator. We then go on to describe how to model this in the holographic set-up and present the basic methodology and results from the numerical simulations.
\subsection{Driving CFTs by Relevant Operators}
\label{sec:cftdriving}
We are interested in the dynamics of strongly coupled plasmas that are driven by an external source. The initial plasma is in equilibrium in some homogeneous thermal state at a temperature $T_0 $ for $t <0$. At $t=0$ we introduce external sources with some specified spatial-temporal profile that we control. We focus exclusively on situations where
the external sources are spatially homogeneous, but otherwise arbitrary and tunable at will.
To wit, the system under consideration can be modeled by an equilibrium density matrix, evolved under a time-dependent Hamiltonian, i.e., we take
\begin{equation}
S_{CFT} = S_{{\cal J}=0} + \int d^d x \sqrt{-\gamma}\; {\cal J}(x)\, {\cal O}(x)
\label{}
\end{equation}
where ${\cal O}(x)$ is a single trace (gauge-invariant) relevant operator of conformal dimension
$\Delta<d$. The source ${\cal J}(x)$ is chosen to have no spatial dependence and be temporally periodic and thus can be represented as
\begin{equation}
{\cal J}(x) = {A} \, \cos(\omega t)\, \Theta(t) \,.
\label{}
\end{equation}
Here $\Theta(t)$ is the Heaviside step function for turning on the periodic perturbation of amplitude ${A}$ and driving frequency $\omega = 2 \pi / {P}$ at $t=0$; later in actual (numerical) implementations we will choose a suitable ramp factor to smoothly turn the perturbation on.
In the presence of the source, the Ward identities following from diffeomorphism and Weyl invariance get modified. A simple analysis shows that the boundary conservation equation now has an explicit source term
%
\begin{equation}
\nabla_\mu T^{\mu}_{\ \alpha} = {\cal O}\, \nabla_\alpha {\cal J} \,,
\label{eq:cward}
\end{equation}
indicating the work done by the driving source on the CFT. Likewise, the one-point function of the trace of the energy-momentum tensor no longer vanishes, but satisfies
\begin{equation}
T^\mu_\mu = \left(d-\Delta \right)\, {\cal J}(x)\, {\cal O}(x) \,.
\label{eq:tward}
\end{equation}
Since the boundary theory is conformal, it does not have any intrinsic time scale. The time scales in the problem come only from the driving force, namely its amplitude and period.
The situation of interest is thus characterized by three scales:
\begin{itemize}
\item $T_0$: the initial thermal scale for the homogeneous plasma.
\item ${A}$: the amplitude of the source whose scaling dimension is $d-\Delta$.
\item $\omega$: the driving frequency or the time-scale set by the period
${P} = 2\pi/\omega$.
\end{itemize}
\subsection{Holographic Driving}
\label{sec:gdual}
The gravity dual to this set-up is modeled by the dynamics of a scalar field $\phi$ with mass $m_\phi^2=-2$, dual to a relevant perturbation of the boundary theory.
\begin{equation}
S_{\text{bulk}} = \frac{1}{16\pi G_N}\; \int d^{d+1} x\; \sqrt{-g} \, \left( R + d(d-1) - \frac{\alpha_g}{2} \left[ \, (\partial \phi)^2 + m^2 \phi^2 \right] \right)
\label{eq:bulkS}
\end{equation}
In our holographic implementation of this set-up we will work in $d=3$ and consider a scalar operator with conformal dimension $\Delta =2$. While this is rather specific, we will explore the phase structure of the driven system as a function of the ratio of scales outlined above. The qualitative features we believe are independent of these actual choices.\footnote{ We have also set $\ell_\text{AdS} =1$ for simplicity.} We have included a dimensionless gravity-scalar coupling $\alpha_g$ which we can use to tune the amount of backreaction on the geometry; for the most part we will focus on $\alpha_g = 0$ or $\alpha_g =1$, to model probe and interacting scalar fields respectively.
We want to study gravitational dynamics driven by a scalar field whose non-normalizable mode is turned on as dictated by the source ${\cal J}(x)$, i.e., take $\phi_0(t) = {A}\,\cos(\omega t)$
and study the behaviour of the theory with varying amplitude
${A}$ and frequency $\omega$. The gravitational background is an asymptotically
\AdS{4} spacetime, which we write in ingoing Eddington-Finkelstein coordinates (sometimes called the Bondi-Sachs form) as:
\begin{equation}
ds^2 =-2 \, \gtt (t,r)\, e^{2 {\chi} (t,r)}\,dt^2+2 \,e^{2 {\chi} (t,r)}\,dt \,dr + {\rho} (t,r)^2\, (dx^2+dy^2)
\label{eq:bulkcy}
\end{equation}
The coordinate dependences of the metric functions $\gtt$, ${\chi}$, ${\rho}$ are explicitly indicated, with homogeneity ensuring that $\partial_x$ and $\partial_y$ are Killing vector fields.
Our initial state is a planar Schwarzschild-\AdS{4} black hole with temperature
$T_0=3/\pi$, corresponding to horizon size $r_+ =1$. This bulk solution is given by $\gtt = r^2(1-\frac{1}{r^3})$, ${\chi} = 0$, ${\rho} = r$ with metric
\begin{equation}
ds^2_{t \le 0} = - r^2 \left(1-\frac{1}{r^3}\right)\, dt^2 +2\, dt\, dr + r^2\, \left(dx^2 + dy^2\right).
\label{}
\end{equation}
For our choices of $m_\phi^2 = -2$ in $d=3$, the amplitude ${A}$ has mass dimension $1$.
Thus we have two interesting time scales associated with the external driving force: the period ${P}$ and the inverse amplitude ${A}^{-1}$. To capture universal physics, we look at times of the (still non-thermalized) driven system that are late compared to both of these scales.
Note also that at those late times the initial value of the temperature, $T_0$, becomes irrelevant.
There has been much interest recently in holographic {\it quenches}, in which the system is initially driven to an excited state, and then is allowed to return to equilibrium, a process which exhibits some degree of universality. In contrast, we are interested in the dynamics of the steady state system while it is being driven. Hence, in our solutions we do not turn off the driving force at late times, and seek universal features associated with the driven steady state system. We will see that such dynamical features exist, and they strongly depend on the parameter
\begin{equation}
\xi ({P},{A}) \equiv {P}\,{A} \,,
\label{eq:xidef}
\end{equation}
the unique dimensionless parameter formed from the two time scales associated with the driving force. Below we refer to the regime $\xi \ll 1$ as the weak driving regime, and $\xi \gg 1$ as the strong driving regime (which is further divided into two separate dynamical regimes). We also measure time in units of the period ${P}$, thus we vary and discuss the dependence of observables on the two dimensionless parameters: the strength of the drive and time.
\subsection{Bulk solutions}
\label{sec:bulksol}
We solve the equations of motion resulting from the scalar-gravity Lagrangian \eqref{eq:bulkS}
by direct numerical integration. The boundary conditions on the scalar are prescribed by the source and the metric is required to be asymptotically \AdS{4}. The AdS boundary is attained as $r\to \infty$ and the asymptotic behaviour of the fields is
\begin{align}
\phi(t,r) &= \frac{\phi_0(t)}{r} + \frac{\phi_1(t)}{r^2} + {\cal O}(r^{-3})
\nonumber \\
{\rho} (t,r) &= r + \lambda(t) - \frac{\alpha_g}{4}\, \frac{\phi_0(t)^2}{r} + {\cal O}(r^{-2})
\nonumber \\
\gtt (t,r) &= \frac{1}{2} ( r + \lambda(t))^2 - \lambda'(t) - \frac{\alpha_g}{4}\, \phi_0(t)^2
+ {\cal O}(r^{-1})
\nonumber \\
\chi(t,r) &= {\cal O}(r^{-4})
\label{}
\end{align}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/phi_amp=1_per=1_tf=10.pdf}
\put(-47,8){\makebox(0,0){\normalsize $\frac{t}{{P}}$}}
\put(-160,13){\makebox(0,0){\normalsize $u$}}
\put(-202,95){\makebox(0,0){\normalsize $\phi$}}
\caption{Sample $\phi$ solution}
\label{subfig:sample_phi}
\end{subfigure}
~ ~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/gtt_amp=1_per=1_tf=10.pdf}
\put(-47,8){\makebox(0,0){\normalsize $\frac{t}{{P}}$}}
\put(-160,13){\makebox(0,0){\normalsize $u$}}
\put(-202,95){\makebox(0,0){\normalsize $\gtt$}}
\caption{Sample $\gtt$ solution}
\label{subfig:sample_gtt}
\end{subfigure}
\caption{A sample solution displaying the scalar field $\phi(t,u)$ and the temporal component of the metric function $\gtt (t,u)$ for $\xi({P}=1, {A}=1)=1$. Time is measured in units of ${P}$ and the radial component is compactified as $u = 1/r$. }
\label{fig:sample_solution}
\end{figure}
More specifically, we use the characteristic formulation of the resulting partial differential equations, as explained in detail in \cite{Chesler:2013lia}, to numerically integrate for the solution. The advantage of the method is that it allows us to use constrained evolution: at each time step we solve a nested set of ODEs to determine the time derivatives of all dynamical quantities, and then we use one of the standard time evolution schemes to march forward in time. While we follow the general logic of \cite{Chesler:2013lia}, in our implementation we found that some of the elements described in \cite{Balasubramanian:2013yqa} made for a more robust evolution.
To solve the radial ODEs we discretize the equations using a Chebyshev basis in the radial direction, typically taking a grid of 60 points. For time evolution we use an explicit Runge-Kutta method of order 4, with an adaptive step size. We filter at each time step by throwing out the top third of the Fourier modes for each dynamical variable to avoid artificial and unphysical growth in amplitudes of short wavelength modes associated with the UV cutoff.
In the regime of strong driving, we found it necessary to turn on the perturbation gradually from zero. We therefore include a ramp-up time of $2\,{P}$, after which the amplitude reaches its intended value; the first few periods of each solution are thus sensitive to the details of the ramp-up protocol, and we examine observables only after this time.
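Schematically, the time stepping combines these ingredients as follows. This is a simplified Python sketch of the logic only, not our production code: \texttt{solve\_radial\_odes} and \texttt{rk4\_step} are stand-ins for the nested radial solves and the Runge-Kutta update, and the filter acts on a generic array of spectral coefficients.
\begin{verbatim}
import numpy as np

def source(t, P, A):
    """Driving source with a smooth ramp-up over the first 2*P
    (one possible choice of ramp-up protocol)."""
    ramp = min(t / (2.0 * P), 1.0)**2
    return ramp * A * np.cos(2.0 * np.pi * t / P)

def spectral_filter(coeffs):
    """Zero out the top third of the spectral coefficients of a
    dynamical variable to suppress short-wavelength growth."""
    c = coeffs.copy()
    c[int(2 * len(c) / 3):] = 0.0
    return c

def evolve(state, dt, t_final, P, A, solve_radial_odes, rk4_step):
    """Constrained evolution: nested radial ODE solves provide the
    time derivatives; an explicit RK4 stepper marches forward."""
    def rhs(s, t):
        return solve_radial_odes(s, source(t, P, A))
    t = 0.0
    while t < t_final:
        state = rk4_step(rhs, state, t, dt)  # dt adaptive in practice
        state = {k: spectral_filter(v) for k, v in state.items()}
        t += dt
    return state
\end{verbatim}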
In Fig.~\ref{fig:sample_solution} we show one example of evolved bulk fields for a specific solution. As we perturb the system by a relevant operator, the scalar field grows towards the horizon. All fields are (at least approximately) modulated with the period of the source.
At this point it is worthwhile mentioning one important consistency check on the numerical scheme, which relies on the existence of a smooth horizon in the spacetime. Given a metric and a Cauchy slice in the bulk spacetime, one can find the outermost trapped surface on this slice. If we have a set of Cauchy slices that foliate the spacetime, then the future outermost trapping horizon, which we simply refer to as the apparent horizon by a common abuse of terminology, is typically defined by taking the union of the outermost trapped surfaces on all the slices. The apparent horizon thus defined is subject to an area law originally discussed in \cite{Hayward:1993wb} -- we refer the reader to \cite{Booth:2005qc} for a concise modern summary and proof of the statement. It is however important to note that the statement relies on the existence of a sensible foliation of the spacetime by Cauchy slices. Indeed, it is possible, as discussed in \cite{Wald:1991zz}, to find exotic symmetry-breaking foliations (which are however incomplete) in which even the Schwarzschild black hole solution fails to have a trapped surface.
We mention this in passing, as \cite{Auzzi:2013pca} quotes the result of \cite{Wald:1991zz} to argue that apparent horizon areas need not be monotone generically. They however do not encounter such behaviour, for with the choice of
ingoing coordinates in \eqref{eq:bulkcy}, there is a canonical choice of bulk Cauchy slices respecting the homogeneity of the disturbance. In this foliation the result quoted in \cite{Booth:2005qc} does apply and in fact simply follows from properties of null congruences using Raychaudhuri's equation.\footnote{ To be sure the statement of the area increase theorem does rely on the
null energy condition, which we happily assume, for it is always satisfied by scalar fields with sensible kinetic terms.} Our results are indeed consistent with this expectation and we have checked that the area of the apparent horizon does grow monotonically in $t$ (which labels the leaves of the foliation chosen), as we shall extensively see in the sequel. While initial results of \cite{Buchel:2014gta} appeared to suggest otherwise, upon closer scrutiny, one finds that in numerical analyses so far the area of the apparent horizon does respect the second law as derived by \cite{Hayward:1993wb}.\footnote{ We thank Alex Buchel for checking this and confirming the monotone growth of the apparent horizon area.}
\section{Driving Diagnostics}
\label{sec:obs}
Having constructed the holographic duals we now turn to lessons that can be extracted from the geometry for the dynamics of strongly coupled field theories. A priori there are a number of observables which are useful probes of the out-of-equilibrium situation; we will focus on those that offer the clearest insight into the dynamics. Our primary goal is to quantify the behaviour of the system as a function of $\{{P},{A}\}$ and construct a phase diagram demarcating the various regimes in this phase space. Let us quickly enumerate the observables we will use and proceed to explain why they give us some insight into the dynamics:
%
\begin{itemize}
\item The phase portrait of response $\phi_1(t)$ as a function of the source $\phi_0(t)$. Alternatively, this relation can be codified in a conductivity $\sigma(t)$, as defined below in \eqref{eq:sigdef}.
We find four underlying phase regions into which the system can fall.
\item The $\phi_1$-$\phi_0$ phase portrait features for polynomial and non-polynomial potentials with the gravity-scalar coupling $\alpha_g$ switched on and off.
\item The cycle-averaged thermodynamics quantified by the energy density $\epsilon_\text{avg}(t)$ and entropy density $s_{\text{avg}}(t)$, and the scaling relation $s_{\text{avg}} \sim \epsilon^{\gamma}_\text{avg}$ between them.
\item The work done in each cycle, measured as the difference in average energy between two successive cycles, $\epsilon_\text{cycle} = \epsilon_\text{avg}^{(n+1)} - \epsilon_\text{avg}^{(n)}$. We typically take $n$ to correspond to the penultimate cycle of our simulation.
\item Fluctuations $\epsilon_\text{fluc}(t)$ in the energy density around $\epsilon_\text{avg}(t)$ and the
maximal response $|\phi^{\text{max}}_1(t)|$.
\item Entanglement entropy and extremal surface evolution for fixed spatial strips ${\cal A}$ on the boundary.
\end{itemize}
When the system is driven by an external source, the most basic quantity is the response, which is characterized by the scalar one-point function in the presence of the source. In linear response theory, this can be obtained from the retarded Green's function of the operator ${\cal O}(x)$ evaluated in equilibrium. We are not just interested in the linear response regime, which would correspond in our set-up to ${A} \ll T_0$, but in the full non-linear response. To visualize the response of the strongly coupled plasma, especially in the non-linear regime, where its phase relative to the source is important, we will find it instructive to exhibit the phase portrait, the trajectory traced by the system in the $\phi_0$-$\phi_1$ plane. We also codify the relation between scalar source and response by a complex conductivity, defined below.
In addition to the one-point function of the operator deforming the CFT, we are interested in the boundary energy-momentum tensor. This can be decomposed into an energy density $\epsilon(t)$ and a pressure. In the holographic set-up one has
\begin{equation}
\vev{{\cal O}(t)} = \phi_1(t)\,, \quad \epsilon(t) = \vev{T^t_{\ t} (t)} \,, \quad p(t) = \vev{T^i_{\ i} (t)}
\label{}
\end{equation}
The scale Ward identity \eqref{eq:tward} implies that the pressure is not an independent observable, since it can be obtained from knowledge of $\epsilon(t)$ and $\phi_1(t)$, so we will not discuss the pressure separately. Additionally, to probe the local thermodynamics we will monitor the local entropy density $s(t)$, obtained by computing the area of the apparent horizon at time $t$.\footnote{ Using the area of the apparent horizon (defined as the outermost trapped surface in the foliation respecting spatial homogeneity) results in a causal boundary observable. One maps points on the apparent horizon to boundary points by Lie transport along radially ingoing null geodesics, which in the ansatz \eqref{eq:bulkcy} are simply lines of constant $\{t,x,y\}$. On the other hand the teleological nature of the event horizon implies that its area would not provide a good measure for the boundary entropy density, cf.,
\cite{Chesler:2008hg, Figueras:2009iu} for a discussion of this point.}
The dynamics of the bulk gravitational fields encode the heat production resulting from supplying external energy to the system. We monitor the explicit time dependence of the energy density $\epsilon(t)$ and the entropy density $s(t)$ along with their values averaged over each driving cycle period ${P}$, and find for the most part that the averaged values are increasing with time.\footnote{ Note that the averaging makes $\epsilon_\text{avg}(t)$ and $s_\text{avg}(t)$ discrete in time.} These provide a useful diagnostic of the departure from equilibrium, as one can monitor the
scaling relation to infer the local thermodynamic equation of state. We define the thermodynamic scaling exponent $\gamma$ when the system is in a steady state $t>t_{\text{s}}$ via
\begin{equation}
s_{\text{avg}} \sim \epsilon_{\text{avg}}^{\gamma}
\label{eq:gammadef}
\end{equation}
Note that in thermal equilibrium, conformal invariance predicts $\gamma_0 = \tfrac{2}{3}$. We will encounter this and other scaling regimes in our driven system when conformal invariance is broken.
Note that one natural set of non-local observables we could use are the multi-point correlation functions of gauge invariant local operators, perhaps for ${\cal O}$ itself. However, realistically this computation involves solving the wave-equation for the linearized scalar fluctuations on top of the background we have constructed, together with the imposition of suitable boundary conditions on the future horizon, to obtain sensible time-ordered correlation functions. These boundary conditions are somewhat tricky to implement (see however \cite{CaronHuot:2011dr,Chesler:2011ds}) --
we will therefore postpone a discussion of correlators to the
future.\footnote{ We could, following standard practice, attempt to compute two-point correlation functions using the geodesic approximation \cite{Balasubramanian:1999zv}. However, as discussed in \cite{Louko:2000tp} and more recently in \cite{Headrick:2014cta}, this prescription does not generically reproduce correct time-ordered correlation functions (we really want in-in correlation functions in our set-up). As a result we will also refrain from computing geodesics in the numerical background.}
Below we describe the behaviour of the observables mentioned above in three distinct dynamical regimes, and comment on the bulk interpretation of those regimes. Once we have gained sufficient intuition from this exercise, we will then examine the entanglement entropy for a specified boundary region.
\subsection{Dissipation Dominated Regime}
\label{sec:dissdom}
The simplest situation occurs in the regime of weak driving $\xi \ll 1$, which is best described as the \emph{dissipation-dominated regime} (phase I). This includes the regime of small amplitudes, studied perturbatively in \cite{Auzzi:2013pca}. In this weak driving regime, the behaviour of all observables is dominated by dissipation, which we now demonstrate by looking at some specific observables.
As we drive the system by the scalar non-normalizable mode $\phi_0$ it is instructive to divide the scalar response $\phi_1$ into the part in phase with the driving force and the part completely out of phase with the perturbation. In analogy with an electromagnetic perturbation in linear response, we can complexify the time dependence of the scalar field\footnote{ That is, regard $\cos \omega t$ and $\sin \omega t$ as the real and imaginary parts of $e^{i \omega t}$.} and define a complex conductivity
\begin{equation}
\sigma(t) \equiv \frac{1}{i \omega} \frac{\phi_1(t)}{\phi_0(t)} = {\sigma_{\text{out}}}(t) + i \, {\sigma_{\text{in}}}(t) .
\label{eq:sigdef}
\end{equation}
With this notation the out-of-phase and in-phase parts of the response correspond to the real and imaginary parts of the complex conductivity, ${\sigma_{\text{out}}}(t)$ and ${\sigma_{\text{in}}}(t)$, respectively.
This is the usual convention for the more familiar conductivity, related to electromagnetic perturbations.
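In practice, dividing the two time series pointwise is singular at the zeros of $\phi_0$; instead one can extract $\sigma$ cycle by cycle by projecting the response onto the drive. A small illustrative sketch under our conventions, assuming a source $\phi_0(t) = {A}\cos\omega t$ and sampled data over one period:
\begin{verbatim}
import numpy as np

def conductivity(t, phi1, P, A):
    """Complex conductivity over one driving cycle, cf. eq. (sigdef).
    If phi1(t) = Re[B exp(i*omega*t)], projecting onto the drive gives
    B = (2/P) * int_0^P phi1(t) exp(-i*omega*t) dt, and then
    sigma = B / (i*omega*A).  Returns (sigma_out, sigma_in)."""
    omega = 2.0 * np.pi / P
    B = 2.0 * np.trapz(phi1 * np.exp(-1j * omega * t), t) / P
    sigma = B / (1j * omega * A)
    return sigma.real, sigma.imag
\end{verbatim}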
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{figures/PP_amp=1_per=0.001_tf=0.01.pdf}
\begin{picture}(0.3,0.4)(0,0)
\put(-108,-8.0){\makebox(0,0){\normalsize $\tilde{\phi}_0$}}
\put(-203,92){\makebox(0,0){\normalsize $\tilde{\phi}_1$}}
\end{picture}
\vspace{2 mm}
\caption{The phase portrait of the dimensionless response
$\tilde{\phi}_1 \equiv \tfrac{{P}}{{A}} \,\phi_1$ versus the dimensionless source
$\tilde{\phi}_0 \equiv \tfrac{1}{{A}}\, \phi_0$
for $\xi = 0.001 \ll 1$ in the dissipation dominated regime (${P} = 0.001, {A} = 1$)
which we label as phase I. We evolve the solution for 10 periods with each colour segment representing one period.
The early times $t < 2\,{P}$ show the effect of the perturbation ramp-up,
and thus are numerical artefacts that we omit from the plot.}
\label{fig:PP_boring}
\end{figure}
As shown in Fig.~\ref{fig:PP_boring}, in the weak driving regime the scalar response is precisely out of phase with the scalar source, ${\sigma_{\text{in}}}=0$, meaning all the energy is dissipated and none of it is used to excite the internal energy associated with the scalar field, i.e., no work is being done on the system.
This is the quench limit, and it matches what we expect from the behaviour of the perturbation in linear response.
The complex conductivity $\sigma={\sigma_{\text{out}}}$ is purely real and has constant amplitude as a function of time at high frequencies.\footnote{ This is similar to the behaviour of the conductivity for electromagnetic perturbations in asymptotically AdS space.}
This is manifested in the final steady state being reached almost immediately and consisting of closed untilted trajectories in phase space.
As we shall see below, tilting of the trajectories in phase space is indicative of non-trivial response and work done onto the system.
Fig.~\ref{fig:conductivity_phasediagram} shows the fraction $\left| \sigma_{\text{in}}/\sigma \right|$ of the complex conductivity at each point of the $({P},{A})$ phase diagram; for our present purposes, the relevant feature is that the response is completely out of phase with the source when the period is small.
\begin{figure}
\centering
\includegraphics[width=0.65 \textwidth]{figures/conductivity_phasediagram.pdf}
\put(-60,13){\makebox(0,0){\normalsize $\log_{10} {P}$}}
\put(-235,23){\makebox(0,0){\normalsize ${A}$}}
\put(-265,225){\makebox(0,0){\Large $\left| \frac{\sigma_{\text{in}}}{\sigma} \right|$}}
\vspace{1mm}
\caption{The fraction $\left| \sigma_{\text{in}}/\sigma \right|$ of the complex conductivity over the entire $({P},{A})$ phase diagram, where $|\sigma|^2 = \sigma^2_{\text{in}} + \sigma^2_{\text{out}}$.}
\label{fig:conductivity_phasediagram}
\end{figure}
Both the energy and entropy density, averaged over each cycle, grow linearly with time in the dissipation-dominated regime. As the black hole grows, its entropy growth tracks its energy growth at a slightly higher rate than the equilibrium relation $s_{\text{avg}} \sim \epsilon_{\text{avg}}^{2/3}$, i.e., $\gamma \gtrsim 2/3$. This entropy-energy scaling is shown in Fig.~\ref{fig:S_vs_E_boring} along with the individual evolution of each quantity with time. Note that the expansion of the black hole horizon is not necessarily adiabatic (as measured e.g., by the rate of entropy increase $\frac{1}{T}\frac{\dot{S}}{S}$).
\begin{figure}
\centering
\begin{subfigure}[b]{0.43\textwidth}
\includegraphics[width=\textwidth]{figures/SvE_amp=1_per=0.01_tf=0.1.pdf}
\put(-82,-8.0){\makebox(0,0){\normalsize $\tilde{\epsilon}_{\text{avg}}$}}
\put(-194,90){\makebox(0,0){\normalsize $\tilde{s}_{\text{avg}}$}}
\vspace{1mm}
\caption{$s_{\text{avg}}(t)$ versus $\epsilon_{\text{avg}}(t)$.}
\label{subfig:S_vs_E_plot}
\end{subfigure}
~ ~
\begin{subfigure}[b]{0.465\textwidth}
\includegraphics[width=\textwidth]{figures/SEvt_amp=1_per=0.01_tf=0.1.pdf}
\put(-200,122){\makebox(0,0){\normalsize $\tilde{\epsilon}_{\text{avg}}$}}
\put(-200,49){\makebox(0,0){\normalsize $\tilde{s}_{\text{avg}}$}}
\put(-90,-8.0){\makebox(0,0){\normalsize $\frac{t}{{P}}$}}
\vspace{1mm}
\caption{$s_{\text{avg}}(t)$ and $\epsilon_{\text{avg}}(t)$ versus time.}
\label{subfig:SE_vs_t_plot}
\end{subfigure}
\caption{The fitted average entropy $s_{\text{avg}}$ versus the average energy $\epsilon_{\text{avg}}$ (left)
and their individual values as a function of time (right) for $\xi({P} = 0.01, {A} = 1) = 0.01$.
Fitting for $s_{\text{avg}} \sim \epsilon_{\text{avg}}^{\gamma}$, we find a fitted value of
$\gamma=0.6682 \pm 0.0023 \gtrsim \tfrac{2}{3}$ with 95\% confidence.}
\label{fig:S_vs_E_boring}
\end{figure}
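For concreteness, the exponent quoted in the caption of Fig.~\ref{fig:S_vs_E_boring} follows from a linear fit in log-log space. A sketch of how such a fit and its 95\% confidence interval may be obtained (illustrative only, using \texttt{scipy}):
\begin{verbatim}
import numpy as np
from scipy import stats

def fit_gamma(eps_avg, s_avg):
    """Fit s_avg ~ eps_avg**gamma by least squares in log-log space;
    returns gamma and its 95% confidence half-width."""
    x, y = np.log(eps_avg), np.log(s_avg)
    res = stats.linregress(x, y)
    half_width = stats.t.ppf(0.975, len(x) - 2) * res.stderr
    return res.slope, half_width
\end{verbatim}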
In the low amplitude regime, one can also estimate in perturbation theory the amount of energy dissipated per cycle $\epsilon_\text{cycle}$ which we define as the difference of the average energy $\epsilon_\text{avg}$ between two successive cycles; for simplicity we take the result for the last two cycles of our evolution in quoting the results below.
One expects the relation to take a scaling form $\epsilon_\text{cycle} \sim \omega^\alpha$. The scaling exponent $\alpha$ should be a non-trivial function of frequency itself; for low frequencies it is independent of the driving operator, but the high frequency limit cares about the spectral properties of the operator in question. Specifically, one finds that \cite{Auzzi:2013pca}:
$\epsilon_\text{cycle} \sim \omega$ for small frequencies and
$\epsilon_\text{cycle} \sim \omega^{2\,\Delta - d}$ for high frequencies. Since we are not scanning over different choices of the driving operator, we have a single shot at determining this result. As depicted in Fig.~\ref{fig:alpha_omega_low_A} we indeed find that the energy dissipated per cycle is linear in $\omega$ both at low and high frequencies: $\alpha(\omega) \to 1$ both for $\omega \gg 1$ and for $\omega \ll 1$
(a coincidence owing to our choice $\Delta =2$ and $d=3$). Interestingly there is some non-trivial intermediate frequency behaviour which appears to amplify the energy dissipated in a single cycle.
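The exponent $\alpha(\omega)$ plotted in Fig.~\ref{fig:alpha_omega_low_A} is a local logarithmic slope; one simple way to estimate it from a frequency scan (our own sketch, not a unique prescription):
\begin{verbatim}
import numpy as np

def alpha_of_omega(omega, eps_cycle):
    """Local scaling exponent alpha = d log(eps_cycle)/d log(omega),
    estimated by centred finite differences over scanned frequencies."""
    return np.gradient(np.log(eps_cycle), np.log(omega))
\end{verbatim}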
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{figures/alpha_omega_low_A.pdf}
\begin{picture}(0.3,0.4)(0,0)
\put(-91,-8.0){\makebox(0,0){\normalsize $\text{log}_{10} \ \omega$}}
\put(-207,98){\makebox(0,0){\normalsize $\alpha$}}
\end{picture}
\vspace{2 mm}
\caption{The dimensionless scaling parameter $\alpha(\omega)$ from fitting $\epsilon_{\text{cycle}} \sim \omega^{\alpha}$ for a small amplitude ${A} = 1$ in the linear response regime. It is expected for our choice of the scalar and dimension ($\Delta = 2$ and $d=3$) that $\alpha \rightarrow 1$ in both the small ($\epsilon_{\text{cycle}} \sim \omega$) and large frequency ($\epsilon_{\text{cycle}} \sim \omega^{2 \Delta - d}$) limits. }
\label{fig:alpha_omega_low_A}
\end{figure}
The bulk picture of the process is also very simple: as we send in energy pulses, which are either weak or brief, they interact only rarely before falling into the black hole horizon. All injected energy from the boundary goes towards steadily increasing the black hole mass and the scalar field remains unexcited. The more diverse behaviour observed below can be attributed to gravitational interactions of those energy pulses before they fall into the black hole.
\subsection{Dynamical Crossover Tilted Regime}
\label{sec:dc}
We now discuss the qualitative changes in the system as we begin to move from the weak driving $\xi \ll 1$ to the strong driving regime $\xi \gg 1$ (from regime I to regime II through the grey-dashed line in phase diagram Fig.~\ref{fig:PP_qualitative}). Fig.~\ref{fig:PP_wobbly} depicts a typical phase portrait of the system as we cross into the new dynamical regime. We see that this regime is characterized by an onset of excitations of the scalar field and breaking of discrete time translation symmetry.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/PP_amp=20_per=0.1_tf=1.pdf}
\put(-104,-8.0){\makebox(0,0){\normalsize $\tilde{\phi}_0$}}
\put(-199,95){\makebox(0,0){\normalsize $\tilde{\phi}_1$}}
\vspace{1mm}
\caption{$\xi({P} = 0.1, {A} = 20) = 2$}
\label{subfig:PP_boring_high_freq}
\end{subfigure}
~ ~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/PP_amp=1_per=10_tf=100.pdf}
\put(-104,-8.0){\makebox(0,0){\normalsize $\tilde{\phi}_0$}}
\put(-199,95){\makebox(0,0){\normalsize $\tilde{\phi}_1$}}
\vspace{1mm}
\caption{$\xi({P} = 10, {A} = 1)=10$}
\label{subfig:PP_boring_low_freq}
\end{subfigure}
\caption{The dimensionless phase portrait of the response $\tilde{\phi}_1$ versus the source $\tilde{\phi}_0$ for $\xi({P} = 0.1, {A} = 20)=2$ (left) and $\xi( {P} = 10, {A} = 1) = 10$ (right). The conventions are as in Fig.~\ref{fig:PP_boring}. The left panel shows the behaviour in phase IIb while the right panel pertains to phase IIa.}
\label{fig:PP_wobbly}
\end{figure}
The left panel of Fig.~\ref{fig:PP_wobbly} shows the transition from $\xi \ll 1 \rightarrow \xi \gg 1$ at high amplitudes: the trajectories are no longer closed, rather they precess as a function of time and are slightly tilted. The breaking of discrete time-translation invariance is an interesting effect of the gravitational interactions of the scalar field.
In the right panel of Fig.~\ref{fig:PP_wobbly} we see the effect of moving into the new dynamical regime at low amplitudes: there is a clear tilt in the phase portrait relative to the one in Fig.~\ref{fig:PP_boring}, indicating that for $\xi \gg 1$ the response is no longer completely out of phase with the source.
The tilting of the trajectories at lower frequencies corresponds to the emergence of a finite in-phase contribution ${\sigma_{\text{in}}} > 0$ in the conductivity; this sets the system somewhere between one with a purely out-of-phase conductivity (closed circular trajectories) and one with a purely in-phase conductivity (straight diagonal line trajectories). In other words not all of the injected energy is dissipated as was the case in regime I, but rather, work is actually being done on the system.
As a result of there being less dissipation in this regime, the energy and entropy of the black hole grow more slowly with time. Moreover, we find the scaling behaviour between the average energy and entropy, with a thermodynamic scaling exponent $\gamma > \tfrac{2}{3}$, for all values of $({P},{A})$, as shown in Fig.~\ref{fig:gamma_phasediagram}. In other words, while the work done on the system slows down the energy increase of the black hole, the entropy production is affected less.
\begin{figure}
\centering
\includegraphics[width=0.65 \textwidth]{figures/gamma_phasediagram.pdf}
\put(-60,13){\makebox(0,0){\normalsize $\log_{10} {P}$}}
\put(-235,23){\makebox(0,0){\normalsize ${A}$}}
\put(-265,222){\makebox(0,0){\normalsize $\log_{10}(\gamma-\gamma_0)$}}
\vspace{1mm}
\caption{The increase in the scaling exponent $\gamma$ in $s_\text{avg} \sim \epsilon^{\gamma}_\text{avg}$ from the equilibrium value of $\gamma_{\text{0}} = \tfrac{2}{3}$ over the entire $({P},{A})$ phase diagram. We find that $\gamma > \gamma_{\text{0}}$ holds for all scanned values on the phase diagram.}
\label{fig:gamma_phasediagram}
\end{figure}
To understand this regime further, it is instructive to reproduce this type of phase portrait for a system without gravity. To that effect, we can study the special case of scalar field evolution in a fixed black hole background, with no backreaction on the geometry (i.e., $\alpha_g =0$). To include non-linearity in the problem, we add a self-coupling to the scalar field, to mimic the effect of the non-linearities due to gravitational interactions (see also \cite{Basu:2013vva}). Fig.~\ref{fig:PP_phi2} depicts the phase portrait of a self-coupled scalar field with two types of {\it polynomial} potentials, which we took to be our original form (a free massive scalar) and one with quartic self-interactions:
\begin{equation}
V_{\text{poly},4}(\phi)=-2 \phi^2-\frac{1}{2}\phi^4.
\label{eqn:V4}
\end{equation}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/PPp1_amp=1_per=10_tf=100.pdf}
\put(-104,-8.0){\makebox(0,0){\normalsize $\tilde{\phi}_0$}}
\put(-198,93){\makebox(0,0){\normalsize $\tilde{\phi}_1$}}
\vspace{1mm}
\caption{$V_2(\phi) = -2 \phi^2$}
\label{subfig:PP_phi2_xi10}
\end{subfigure}
~ ~
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{figures/PPp2_amp=1_per=10_tf=100.pdf}
\put(-104,-8.0){\makebox(0,0){\normalsize $\tilde{\phi}_0$}}
\put(-197,92){\makebox(0,0){\normalsize $\tilde{\phi}_1$}}
\vspace{1mm}
\caption{$V_4(\phi) = -2 \phi^2-\frac{1}{2}\phi^4$}
\label{subfig:PP_phi4_xi10}
\end{subfigure}
\caption{The phase portrait of the dimensionless response
$\tilde{\phi}_1 \equiv \tfrac{{P}}{{A}} \,\phi_1$ versus the dimensionless source
$\tilde{\phi}_0 \equiv \tfrac{1}{{A}}\, \phi_0$
for $\xi({P} = 10, {A} = 1)=10$ with $\alpha_g=0$ and different polynomial potentials $V(\phi)$. The conventions are
as described in Fig.~\ref{fig:PP_boring}.}
\label{fig:PP_phi2}
\end{figure}
We can see that without non-linearity, as in Fig.~\ref{subfig:PP_phi2_xi10}, the phase portrait is tilted, but sharp features of the phase portrait are lost compared to the case with the same driving but also gravitational backreaction, depicted in Fig.~\ref{subfig:PP_boring_low_freq}. Adding a polynomial non-linearity, as done in Fig.~\ref{subfig:PP_phi4_xi10}, gives a phase portrait that starts to form slightly sharper features along with some amplification of the response. Thus, the simple system of a self-interacting scalar field allows us to cleanly separate the two effects in regime II: we see that the tilt in the phase diagram is associated with decreased frequency, whereas the breaking of time-translation invariance is associated with increased amplitude. We note also that for this simple system, the third dynamical regime of unbounded amplification discussed in the next subsection seems to be absent.
Thus, the bulk interpretation of this dynamical regime becomes clear: the pulses of energy injected at the boundary interact gravitationally before falling into the black hole. This results in physics beyond simple dissipation, the latter modeled here by infall into the black hole. The gravitational interaction is due to perturbative exchange of gravitons, and can be mimicked by a polynomial self-interaction of the scalar field. In the next subsection we will see the effect of the gravitational interactions becoming strong when both ${A}$ and ${P}$ are large.
\subsection{Unbounded Amplification Regime}
\label{sec:unamp}
As we increase the driving strength further in both ${A}$ and ${P}$ directions (from regime II to regime III through the blue-dashed line in phase diagram Fig.~\ref{fig:PP_qualitative}), we enter a dynamical regime no longer reproducible by polynomial self-interactions of the scalar field.
We see the phase portrait of the scalar field in Fig.~\ref{fig:PP_interesting}
for two instances of parameters in this regime. Moreover, we find this dynamical regime to be characterized by unbounded response and restoration of time translation symmetry.
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/PP_amp=20_per=1_tf=10.pdf}
\put(-102,-8.0){\makebox(0,0){\normalsize $\tilde{\phi}_0$}}
\put(-195,90){\makebox(0,0){\normalsize $\tilde{\phi}_1$}}
\vspace{1mm}
\caption{$\xi({P} = 1, {A} = 20)=20$}
\label{subfig:PP_sharp}
\end{subfigure}
~ ~
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{figures/PP_amp=20_per=10_tf=100.pdf}
\put(-102,-8.0){\makebox(0,0){\normalsize $\tilde{\phi}_0$}}
\put(-194,90){\makebox(0,0){\normalsize $\tilde{\phi}_1$}}
\vspace{1mm}
\caption{$\xi({P} = 10, {A}= 20) = 200$}
\label{subfig:PP_sharper}
\end{subfigure}
\caption{The phase portrait of the response $\tilde{\phi}_1$ versus the source $\tilde{\phi}_0$ for $\xi=20$ (left) and $\xi= 200$ (right) in the non-perturbative dynamical regime (regime III). The conventions are as in
Fig.~\ref{fig:PP_boring}.}
\label{fig:PP_interesting}
\end{figure}
As we increase the strength of the driving force $\xi$, the phase portrait becomes sharper and tilted, corresponding to an increased response and, again, less lag with the source as seen in Fig.~\ref{fig:conductivity_phasediagram}.
The `slowness' of the energy injection from the boundary allows the scalar field to heat up as if the entire process were adiabatic, consequently allowing the scalar to respond relatively quickly to the source.
Note that although Fig.~\ref{fig:conductivity_phasediagram} shows $\left| \sigma_{\text{in}} / \sigma \right| \approx 1$ in this regime, the absolute value $\left| \sigma \right|$ is actually very large in this unbounded amplification regime so that a small $\left| {\sigma_{\text{out}}} / \sigma \right|$ is still strong enough to keep the black hole perpetually growing in size.
The maximal response $|\phi^{\text{max}}_1|$ over our ten cycles of driving is plotted in Fig.~\ref{fig:maxresponse_phasediagram}
throughout the phase diagram. It is seen to increase rapidly with $\xi$ past the dissipation-dominated regime. This seems to indicate the presence of a non-linear resonance, which allows the scalar response to grow without bound. An interesting feature of
Fig.~\ref{fig:maxresponse_phasediagram} is that the maximal response does not grow in the high frequency regime, regardless of how large $\xi$ is made by increasing ${A}$.
It seems unlikely that unbounded behaviour is attainable even for amplitudes drastically higher than those of the numerical explorations reported in Fig.~\ref{fig:maxresponse_phasediagram}.
Physically, this means that a rapid pulsing of small packets of energies can barely amplify the response of the system; the frequency of driving has to be below a certain bound for resonance to be possible -- or in other words, a certain slowness in the sourcing is required.
We conjecture that one would see unbounded amplification only in the combined large ${P}$, large ${A}$ regime, which is slightly different from the traditional definition of resonance that depends only on frequency. An interesting curiosity is a slight dip in the response for moderate values of $\xi$ preceding the rapid growth. This trough appears to demarcate empirically the domains of bounded (regime II) and unbounded (regime III) responses. It would be interesting to come up with an explanation for this phenomenon.
\begin{figure}
\centering
\includegraphics[width=0.65 \textwidth]{figures/maxresponse_phasediagram.pdf}
\put(-60,13){\makebox(0,0){\normalsize $\log_{10} {P}$}}
\put(-235,23){\makebox(0,0){\normalsize ${A}$}}
\put(-270,224){\makebox(0,0){\normalsize $\log_{10} \left| \tilde{\phi}^{\text{max}}_1 \right| $}}
\vspace{1mm}
\caption{The maximal response $\left| \tilde{\phi}^{\text{max}}_1 \right| = \frac{{P}}{{A}} \left| \phi^{\text{max}}_1 \right|$ over the entire $({P},{A})$ phase diagram.}
\label{fig:maxresponse_phasediagram}
\end{figure}
Finally, it is amusing to model the non-linear effects of gravity in terms of an effective scalar potential to see what is necessary to attain regime III. We find that while a scalar field with polynomial self-interaction does not seem to possess this regime, one can reproduce similar features with {\it non-polynomial} potentials. For example, we can discuss a self-interacting scalar field probe, with
\begin{equation}
V_{\text{non-poly}}(\phi)=-2 \sinh^2\phi+\frac{1}{6}\sinh^4\phi\,.
\label{eqn:V_nonpoly}
\end{equation}
This self-interaction is chosen to agree with our previous example \eqref{eqn:V4} in the small field regime, but of course behaves differently for large field values. In Fig.~\ref{fig:PP_nonpoly} we see that indeed similar features of the phase portrait are reproduced: narrow closed trajectories and resonant response. We conclude therefore that the features of this dynamical regime are due to strong, non-perturbative gravitational effects occurring outside the black hole horizon. The fact that the non-linearities induced by gravity can be extremely strong should perhaps be borne in mind when attempting to come up with simplified models of gravitational dynamics in AdS spacetime.
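Indeed, using $\sinh\phi = \phi + \tfrac{1}{6}\,\phi^3 + {\cal O}(\phi^5)$ one quickly checks the small field agreement:
\begin{equation}
V_{\text{non-poly}}(\phi) = -2\left(\phi^2 + \tfrac{1}{3}\,\phi^4\right) + \tfrac{1}{6}\,\phi^4 + {\cal O}(\phi^6)
= -2\,\phi^2 - \tfrac{1}{2}\,\phi^4 + {\cal O}(\phi^6)\,,
\end{equation}
which reproduces \eqref{eqn:V4} up to ${\cal O}(\phi^6)$ corrections.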
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/PPnp1_amp=1.5_per=10_tf=100.pdf}
\put(-102,-8.0){\makebox(0,0){\normalsize $\tilde{\phi}_0$}}
\put(-195,91){\makebox(0,0){\normalsize $\tilde{\phi}_1$}}
\vspace{1mm}
\caption{$\xi({P} = 10, {A} = 1.5)=15$}
\label{subfig:V3_A1p5_P10}
\end{subfigure}
~ ~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/PPnp2_amp=2_per=10_tf=100.pdf}
\put(-102,-8.0){\makebox(0,0){\normalsize $\tilde{\phi}_0$}}
\put(-195,91){\makebox(0,0){\normalsize $\tilde{\phi}_1$}}
\vspace{1mm}
\caption{$\xi({P} = 10, {A} = 2) = 20$}
\label{subfig:V3_A2_P10}
\end{subfigure}
\caption{The phase portrait of the response $\tilde{\phi}_1$ versus the source $\tilde{\phi}_0$ for $\xi= 15$ (left) and $\xi= 20$ (right) for the non-polynomial potential Eq.~\eqref{eqn:V_nonpoly}, in the conventions of Fig.~\ref{fig:PP_boring}. }
\label{fig:PP_nonpoly}
\end{figure}
\subsection{Energy Fluctuations}
Another observable we monitor is the behaviour of energy fluctuations. More precisely, we consider the deviations from the average energy in each cycle, $\epsilon_\text{fluc}(t) = |\epsilon(t) - \epsilon_\text{avg}(t)|$.
These cycle fluctuations are a crude proxy for genuine fluctuation information that can be extracted, for instance, by considering symmetrized two-point functions of the boundary energy-momentum tensor. Such ensemble-averaged fluctuations are known to exhibit phase transitions in periodically driven systems \cite{nature}. Some indication that such transitions are possible in holographic systems is given in \cite{Auzzi:2013pca}.
The results for our simulations in various regimes are plotted in Fig.~\ref{fig:energyfluc_phasediagram}. We observe a qualitative change in these cycle fluctuations between different regimes. While in the dissipation-dominated phase we do not see much deviation from the mean, there is a steep growth in fluctuations as we enter the non-linear phases. The fluctuations are maximal in the unbounded amplification regime (regime III). We note that in contrast to the maximal scalar response, which also grows dramatically in that phase, the fluctuations do track the driving frequency, with more deviations in the large period limit.
It would be useful to confirm this behaviour directly with the computation of correlation functions, a task we leave for future investigation.
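For definiteness, the cycle average and the fluctuation about it can be computed from the energy time series as follows; this is a minimal sketch with our own (piecewise constant) discretization convention for $\epsilon_\text{avg}(t)$:
\begin{verbatim}
import numpy as np

def cycle_stats(t, eps, P):
    """Cycle-averaged energy density and the fluctuation about it,
    eps_fluc(t) = |eps(t) - eps_avg(t)|, with eps_avg held constant
    over each driving period P."""
    n_cycle = (t // P).astype(int)
    eps_avg = np.empty_like(eps)
    for n in np.unique(n_cycle):
        sel = (n_cycle == n)
        eps_avg[sel] = eps[sel].mean()
    return eps_avg, np.abs(eps - eps_avg)
\end{verbatim}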
\begin{figure}
\centering
\includegraphics[width=0.65 \textwidth]{figures/energyfluc_phasediagram.pdf}
\put(-60,13){\makebox(0,0){\normalsize $\log_{10} {P}$}}
\put(-235,23){\makebox(0,0){\normalsize ${A}$}}
\put(-260,217){\makebox(0,0){\large $\tilde{\epsilon}_{\text{fluc}}$}}
\vspace{1mm}
\caption{Energy density fluctuations (last cycle) $\tilde{\epsilon}_{\text{fluc}}$ in units of ${A}^2 / {P}$ over the entire $({P},{A})$ phase diagram.}
\label{fig:energyfluc_phasediagram}
\end{figure}
\section{Entanglement entropy}
\label{sec:ee}
Thus far we have discussed various local observables (response functions and thermodynamic data) which have served to help us chart the phase diagram of the driven system in Fig.~\ref{fig:PP_qualitative}. We now turn to other non-local field theory observables that are sensitive to the non-equilibrium dynamics. Since we are not going to examine the behaviour of higher point correlation functions, we will dive right into the dynamical behaviour of entanglement entropy.
In the boundary theory we have a density matrix $\rho(t)$ which is time-evolving with respect to the perturbed Hamiltonian. At any given instant of (boundary) time, we pick a spatial region ${\cal A}$ and construct the matrix elements of the reduced density matrix
$\rho_{\cal A} (t) = {\rm Tr}_{{\cal A}^c}\left(\rho(t)\right)$ by tracing out the degrees of freedom in the complement (on the chosen Cauchy slice). The entanglement entropy is given by the von Neumann entropy of $\rho_{\cal A}$, i.e., $S_{\cal A}(t) = -{\rm Tr}_{\cal A} \left(\rho_{\cal A}\, \log \rho_{\cal A}\right)$ which we can monitor as a function of time.
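Purely as a reminder of the definition (holographically we of course never construct $\rho_{\cal A}$ explicitly), the von Neumann entropy of a density matrix can be evaluated from its spectrum:
\begin{verbatim}
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho log rho) from the eigenvalues of a Hermitian
    density matrix, discarding numerically zero eigenvalues."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))
\end{verbatim}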
Holographically computing the entanglement entropy for boundary regions in time dependent situations involves finding bulk codimension-2 extremal surfaces ${\cal E}_{\cal A}$ anchored on the said boundary region ${\cal A}$ \cite{Hubeny:2007xt}.
We study the evolution of entanglement entropy focusing in particular on translationally invariant strip regions:
\begin{equation}
{\cal A} = \{ t = t_{\cal A}, -a \leq x \leq a , y \in {\mathbb R} \}\,.
\label{eqn:spatial_strip_defn}
\end{equation}
The bulk codimension-2 surface ends at $x=\pm a$ at some chosen instance of boundary time $t_{\cal A}$ and is obtained by solving effectively a set of geodesic-like equations with our interpolated metric functions ${\rho}$, $\gtt$, and $\chi$ (see Appendix \ref{sec:extrdet} for details). The covariant holographic entanglement entropy prescription \cite{Hubeny:2007xt} generalizing \cite{Ryu:2006bv,Ryu:2006ef} states that
\begin{equation}
S_{\cal A} = \frac{ \text{Area}({\cal E}_{\cal A} ) }{ 4 \, G_{\text{\tiny N}}^\text{\tiny{(4)}} }\,.
\label{eqn:RT_prescription}
\end{equation}
Should there be multiple extremal surfaces, we choose the one with minimal area (homologous to ${\cal A}$).
The proper area of these surfaces diverges owing to the locality of the underlying QFT. In our case we encounter potential divergences not only from the surface reaching out to the asymptotic boundary, but also from the presence of the sources driving the system. The physical result we are after is the finite universal contribution $S^{\text{fin}}_{\cal A}$, which will measure the entanglement created/destroyed as we drive the system away from thermal equilibrium. Fortuitously, for our choice of scalar operator, there are no contributions due to the source, and hence we can simply regulate by background subtraction.\footnote{ Details of the divergent structure and the counter-terms necessary to compute the area functional in our set up can be found in Appendix \ref{sec:regent}.} As a result we will consider as our entanglement diagnostic, the following finite quantity
\begin{equation}
\Delta{\sf S}_{_{\cal A}} (t)= \frac{4 \, G_{\text{\tiny N}}^\text{\tiny{(4)}}}{L_y} \, \big[ S_{\cal A}(t) - S_{\cal A}(t=0) \big]
\label{eqn:S_reg}
\end{equation}
where $L_y$ is the IR regulator in the non-compact translationally invariant direction. Since we drive the system away from thermal equilibrium, $S_{\cal A}(t=0)$ is the corresponding value of the entanglement entropy computed in the Schwarzschild-\AdS{4} geometry.
In what follows we will simply quote the results of our numerical simulations both for the behaviour of the extremal surfaces themselves and $\Delta{\sf S}_{_{\cal A}} (t)$.
\subsection{Extremal surfaces in the driven geometries}
\label{sec:extrsufaces}
The extent to which the extremal surfaces penetrate into the bulk can for the most part be determined from
the location of the cap-off point which we parameterize as $(t^*, u^*=1/r^*,x=0)$.\footnote{ The coordinate $u =1/r$ is chosen such that the horizon remains at $u=1$ during the entire course of the evolution (the boundary is at $u=0$).}
For very small regions we are reasonably close to the AdS boundary, whence the curves are approximately semi-circles
$u^2 + x^2 \approx a^2$. As we increase the strip width the extremal surfaces start to probe the interesting regions of the driven geometry, allowing us to see qualitative differences between the four phases.
Generically we see that the following statements hold irrespective of the phases we consider:
\begin{enumerate}
\item The radial depth and the temporal extent spanned by the surface evolves non-trivially as a function of $t_{\cal A}$. One consequence of working with ingoing coordinates \eqref{eq:bulkcy} is that the surfaces naturally dip back in time (see \cite{Hubeny:2013hz,Hubeny:2013dea}).
\item The oscillatory driving of the system imprints itself in the profile of the extremal surfaces, with the scale of these oscillations set by the driving parameters ${A}$ and ${P}$. The periodic movement of the surface can be seen in pulsations of the turnaround point of the surface: $u^*$ and $t^*$ have oscillations of the same period superposed over some enveloping function.
\item On average, the extremal surfaces reach further into the bulk with time; $u^*(t_{\cal A})$ is monotonically increasing for the range of parameters explored. To understand this, note that we gauge-fixed the bulk coordinate chart \eqref{eq:bulkcy} such that the horizon is at $u_+=1$. In these coordinates the proper size of the region ${\cal A}$ increases (due to ${\rho}(t,r)$), which means that the surfaces want to get closer to the horizon to extremize the area functional.
The rate at which this happens depends on both the amplitude and the frequency of the driving.
We also note that surfaces dip less temporally, i.e., $t^* - t_{\cal A}$ is increasing.
\item We also note that the location of the extremal surface appears to be consistent with causality of entanglement entropy \cite{Headrick:2014cta}. While we have not explicitly checked that the surface lies in the causal shadow of the boundary region ${\cal A}$, one simple consistency check visible from our results for $t^*$ is that $t^*< t_{\cal A}-a$. We remind the reader that in \eqref{eq:bulkcy} lines of constant $t$ and $x$ are radially ingoing null geodesics. Causality at the very least requires that the cap-off point of the extremal surface lies below the ingoing null geodesic from the domain of dependence. Since for the strip region the boundary domain of dependence is a diamond anchored at $(t_{\cal A}\pm a,0)$ and $(0,\pm a)$, we note that the ingoing light ray from the bottom tip of this diamond cannot signal to the cap-off point (a simple numerical test of this condition is sketched after this list).
\end{enumerate}
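The causality check in the last item reduces to a one-line inequality test on the recorded cap-off data; a minimal sketch (the array names are ours):
\begin{verbatim}
import numpy as np

def check_causality(t_A, t_star, a):
    """Check t* < t_A - a for every boundary time: the cap-off point
    must lie below the ingoing null ray from the bottom tip of the
    boundary domain of dependence.  Returns indices of violations."""
    t_A, t_star = np.asarray(t_A), np.asarray(t_star)
    return np.flatnonzero(t_star >= t_A - a)
\end{verbatim}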
In the following discussion we will illustrate the behaviour of the extremal surfaces more explicitly in each of our phases. We have been reasonably conservative in our analysis and have chosen to work only with surfaces that do not get too close to the horizon (in fact $u^* <0.2$). This is to avoid both numerical issues and complications from the existence of multiple extremal surfaces. We follow a single branch of solutions as described at the end of Appendix \ref{sec:extrdet}. The primary results for the extremal surfaces are shown in Figs.~\ref{fig:ES_evolution_per01_amp1}, \ref{fig:ES_evolution_per10_amp1}, \ref{fig:ES_evolution_per10_amp20}, and \ref{fig:ES_evolution_per01_amp20}, where we show the evolution of the extremal surface as well as $u^*(t_{\cal A})$ and $t^*(t_{\cal A})$.
\begin{figure}
\begin{minipage}[c][11cm][t]{.65\textwidth}
\vspace*{\fill}
\centering
\includegraphics[width=\textwidth]{figures/ES_evo_a=0.05_amp=1_per=0.1_tfinal=1_cutoff=0.001.pdf}
\put(-11,19){\makebox(0,0){\Large $\frac{x}{a}$}}
\put(-140,-10){\makebox(0,0){\large $u$}}
\put(-290,100){\makebox(0,0){\Large $\frac{t}{{P}}$}}
\end{minipage}
\begin{minipage}[c][11cm][t]{.3\textwidth}
\vspace*{\fill}
\centering
\includegraphics[width=\textwidth]{figures/ustar_evo_a=0.05_amp=1_per=0.1_tfinal=1_cutoff=0.001.pdf}
\put(-105,105){\makebox(0,0){\large $u^*$}}
\vspace{0.2cm}
\includegraphics[width=\textwidth]{figures/tstar_evo_a=0.05_amp=1_per=0.1_tfinal=1_cutoff=0.001.pdf}
\put(-103,99){\makebox(0,0){\normalsize $\tilde{t}^*$}}
\put(-56,-8.0){\makebox(0,0){\normalsize $t_{\cal A}/{P}$}}
\end{minipage}
\caption{Evolution of the extremal surfaces for a strip of width $a=0.05$ with driving parameters $\xi({P}=0.1,{A}=1)=0.1$ (phase I; dissipation-dominated). We pick a UV cutoff $u_{\cal A}=10^{-3}$ and have defined $\tilde{t}^* \equiv (t^*-t_{\cal A})/{P}$ to measure the cap-off point $t^*$ relative to the boundary.}
\label{fig:ES_evolution_per01_amp1}
\end{figure}
\begin{figure}
\begin{minipage}[c][11cm][t]{.65\textwidth}
\vspace*{\fill}
\centering
\includegraphics[width=\textwidth]{figures/ES_evo_a=0.05_amp=1_per=10_tfinal=100_cutoff=0.001.pdf}
\put(-11,19){\makebox(0,0){\Large $\frac{x}{a}$}}
\put(-140,-10){\makebox(0,0){\large $u$}}
\put(-290,100){\makebox(0,0){\Large $\frac{t}{{P}}$}}
\end{minipage}
\begin{minipage}[c][11cm][t]{.3\textwidth}
\vspace*{\fill}
\centering
\includegraphics[width=\textwidth]{figures/ustar_evo_a=0.05_amp=1_per=10_tfinal=100_cutoff=0.001.pdf}
\put(-105,105){\makebox(0,0){\large $u^*$}}
\vspace{0.2cm}
\includegraphics[width=\textwidth]{figures/tstar_evo_a=0.05_amp=1_per=10_tfinal=100_cutoff=0.001.pdf}
\put(-103,99){\makebox(0,0){\normalsize $\tilde{t}^*$}}
\put(-116,114){\makebox(0,0){\tiny $\times 10^{-3}$}}
\put(-56,-8.0){\makebox(0,0){\normalsize $t_{\cal A}/{P}$}}
\end{minipage}
\caption{Evolution of the extremal surfaces for a strip of width $a=0.05$ with driving parameters $\xi({P}=10,{A}=1)=10$ (phase II; tilted).
Conventions described in Fig.~\ref{fig:ES_evolution_per01_amp1} apply.}
\label{fig:ES_evolution_per10_amp1}
\end{figure}
\paragraph{Linear regime (small ${A}$):}
Although all phases display extremal surfaces that sink into the bulk with each driving cycle, the growth of $u^*$ in the linear regime of small amplitudes is most steady. We focus here on phases I (high frequency; dissipation-dominated) illustrated in Fig.~\ref{fig:ES_evolution_per01_amp1}
and IIa (low frequency; tilted) illustrated in Fig.~\ref{fig:ES_evolution_per10_amp1}, which fall under this characterization. As the frequency is lowered and we pass from the dissipation-dominated phase to the tilted phase, there is a drastic reduction in the growth of $u^*$ per cycle.
The evolution of $t^*$ in the two phases is also interesting; $t^*-t_{\cal A}$ is gradually increasing on average with time
(recall that in the stationary geometry $t^*-t_{\cal A}$ would be constant). It turns out to be useful to look at
a dimensionless parameter $\tilde{t}^* \equiv (t^*-t_{\cal A})/{P}$ which measures the cap-off time relative to the boundary. In this context, there is more time-lag in phase I, i.e., $\tilde{t}^*_{\text{\tiny{I}}} \ll \tilde{t}^*_{\text{\tiny{IIa}}} \lesssim 0$, which hints at why the surfaces do not penetrate as deep into the bulk in phase IIa as they do in phase I.\footnote{Note that in absolute terms however, $t^*$ in both regimes is comparable in magnitude.}
In addition we see strong oscillatory patterns in phase II in spite of having only a steady increase in $u^*$; such a feature is absent in phase I.
\begin{figure}
\begin{minipage}[c][11cm][t]{.65\textwidth}
\vspace*{\fill}
\centering
\includegraphics[width=\textwidth]{figures/ES_evo_a=0.05_amp=20_per=10_tfinal=100_cutoff=0.001.pdf}
\put(-11,19){\makebox(0,0){\Large $\frac{x}{a}$}}
\put(-140,-10){\makebox(0,0){\large $u$}}
\put(-290,100){\makebox(0,0){\Large $\frac{t}{{P}}$}}
\end{minipage}
\begin{minipage}[c][11cm][t]{.289\textwidth}
\vspace*{\fill}
\centering
\includegraphics[width=\textwidth]{figures/ustar_evo_a=0.05_amp=20_per=10_tfinal=100_cutoff=0.001.pdf}
\put(-105,105){\makebox(0,0){\large $u^*$}}
\vspace{0.2cm}
\includegraphics[width=\textwidth]{figures/tstar_evo_a=0.05_amp=20_per=10_tfinal=100_cutoff=0.001.pdf}
\put(-103,99){\makebox(0,0){\normalsize $\tilde{t}^*$}}
\put(-110,114){\makebox(0,0){\tiny $\times 10^{-3}$}}
\put(-56,-8.0){\makebox(0,0){\normalsize $t_{\cal A}/{P}$}}
\end{minipage}
\caption{
Evolution of the extremal surfaces for a strip of width $a=0.05$ with driving parameters $\xi({P}=10,{A}=20)=200$ (phase III; unbounded amplification).
Conventions described in Fig.~\ref{fig:ES_evolution_per01_amp1} apply.}
\label{fig:ES_evolution_per10_amp20}
\end{figure}
\begin{figure}
\begin{minipage}[c][11cm][t]{.65\textwidth}
\vspace*{\fill}
\centering
\includegraphics[width=\textwidth]{figures/ES_evo_a=0.01_amp=20_per=0.1_tfinal=1_cutoff=0.001.pdf}
\put(-11,19){\makebox(0,0){\Large $\frac{x}{a}$}}
\put(-140,-10){\makebox(0,0){\large $u$}}
\put(-290,100){\makebox(0,0){\Large $\frac{t}{{P}}$}}
\end{minipage}
\begin{minipage}[c][11cm][t]{.3\textwidth}
\vspace*{\fill}
\centering
\includegraphics[width=\textwidth]{figures/ustar_evo_a=0.01_amp=20_per=0.1_tfinal=1_cutoff=0.001.pdf}
\put(-105,105){\makebox(0,0){\large $u^*$}}
\vspace{0.2cm}
\includegraphics[width=\textwidth]{figures/tstar_evo_a=0.01_amp=20_per=0.1_tfinal=1_cutoff=0.001.pdf}
\put(-103,99){\makebox(0,0){\normalsize $\tilde{t}^*$}}
\put(-56,-8.0){\makebox(0,0){\normalsize $t_{\cal A}/{P}$}}
\end{minipage}
\caption{Evolution of the extremal surfaces for a strip of width $a=0.01$ with driving parameters $\xi({P}=0.1,{A}=20)=2$ (phase IIb; dynamical crossover). Conventions are as described in Fig.~\ref{fig:ES_evolution_per01_amp1}.}
\label{fig:ES_evolution_per01_amp20}
\end{figure}
\paragraph{Non-linear regime (large ${A}$):}
We now turn to the phases III (low frequency; unbounded amplification) illustrated in
Fig.~\ref{fig:ES_evolution_per10_amp20} and IIb (high frequency; wobbly) illustrated in
Fig.~\ref{fig:ES_evolution_per01_amp20} in the non-linear regime of high amplitude. Some of the features seen in the linear regime continue to hold: we see more pronounced oscillations in $\tilde{t}^*$ and a decreased tendency for the surfaces to lag behind in time at lower frequencies.
In the unbounded amplification regime (phase III), we see significant bursts of growth of the extremal surfaces. The oscillatory driving is felt rather acutely by the surfaces and the evolution is considerably more violent. On average however, $u^*$ appears to advance more serenely despite having large amplitude oscillations per cycle.
In the dynamical crossover wobbly regime (phase IIb), there is a considerable amount of instability. We chose here to work with smaller strip widths $a=0.01$ (instead of $a=0.05$) to avoid complications of phase transitions between multiple competing extremal surfaces.
The early part of the evolution is in line with what happens in the dissipation-dominated regime (phase I), but shortly after, there are discontinuities in the $\tilde{t}^*$ parameter with no noticeable effect in $u^*$.
Around $t_{\cal A}/P \approx 4.0 - 4.2$ and $t_{\cal A}/P \approx 4.6$, we see an exchange of dominance in the extremal surface, which starts out at a higher value of $\tilde{t}^*$.
All in all, the extremal surfaces in the non-linear regime definitely have elements of intrigue owing to the large pulses of energy that affect the bulk geometry significantly. Although we do not delve into extremal surfaces that are positioned deeper into the bulk, we notice in the course of our analysis that the surfaces tend towards the horizon as expected. More curiously, we also find that for larger regions we cannot find extremal surfaces that stay outside the apparent horizon. This is not surprising, since we expect based on earlier results that there will be surfaces that penetrate the apparent horizon of the black hole (cf., \cite{AbajoArrastia:2010yt}). However, one of the disadvantages of our numerical scheme is that we are unable to explore this interesting regime, due to the fact that the spacetime inside the apparent horizon has been excised. As explained in \cite{Chesler:2013lia}, this was to avoid complications with caustics in the coordinate chart. Analysis of entanglement entropy however does require the complete bulk geometry.
\subsection{The evolution of entanglement}
\label{sec:eegrow}
We now turn to the evolution of the entanglement entropy; the results are presented in Fig.~\ref{fig:EE_unreg} for the regulated quantity $\Delta{\sf S}_{_{\cal A}} $ as introduced in \eqref{eqn:S_reg}.
In the dissipation-dominated regime (phase I), the entanglement entropy gradually increases, though in each cycle of forcing there is a time period for which the growth is negligible. We expect this feature is simply a consequence of the entanglement entropy tracking the thermal entropy. Even though we are not quite probing the full thermal contribution with the relatively small regions ${\cal A}$, it stands to reason that the variation of the geometry is more or less equitable on all radial scales. This appears consistent with other probes of this phase. As we discussed in \S\ref{sec:dissdom}, the weak driving allows the system to efficiently dissipate the energy injected by the source, and the conductivity $\sigma(t)$ was purely real. Basically the dominant effect here is the growth of the black hole horizon due to the driving, and this in turn imprints itself into the growth of $\Delta{\sf S}_{_{\cal A}} $ seen in Fig.~\ref{subfig:EE_reg_phase1}.
\begin{figure}
\centering
\begin{subfigure}[b]{0.42\textwidth}
\includegraphics[width=\textwidth]{figures/EE_reg_evo_a=0.05_amp=1_per=0.1_tfinal=1_cutoff=0.001.pdf}
\put(-82,-7){\makebox(0,0){\normalsize $t_{\cal A}/{P}$}}
\put(-191,92){\makebox(0,0){\rotatebox{0}{$\Delta{\sf S}_{_{\cal A}} $}}}
\caption{Phase I: $\xi({A}=1,{P}=0.1)=0.1$}
\label{subfig:EE_reg_phase1}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.438\textwidth}
\includegraphics[width=\textwidth]{figures/EE_reg_evo_a=0.05_amp=1_per=10_tfinal=100_cutoff=0.001.pdf}
\put(-82,-7){\makebox(0,0){\normalsize $t_{\cal A}/{P}$}}
\put(-198,92){\makebox(0,0){\rotatebox{0}{$\Delta{\sf S}_{_{\cal A}} $}}}
\caption{Phase IIa: $\xi({A}=1,{P}=10)=10$}
\label{subfig:EE_reg_phase2}
\end{subfigure}
\vspace{5mm}
\begin{subfigure}[b]{0.42\textwidth}
\includegraphics[width=\textwidth]{figures/EE_reg_evo_a=0.05_amp=20_per=10_tfinal=100_cutoff=0.001.pdf}
\put(-82,-7){\makebox(0,0){\normalsize $t_{\cal A}/{P}$}}
\put(-191,92){\makebox(0,0){\rotatebox{0}{$\Delta{\sf S}_{_{\cal A}} $}}}
\caption{Phase III: $\xi({A}=20,{P}=10)=200$}
\label{subfig:EE_reg_phase3}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.42\textwidth}
\includegraphics[width=\textwidth]{figures/EE_reg_evo_a=0.01_amp=20_per=0.1_tfinal=1_cutoff=0.001.pdf}
\put(-82,-7){\makebox(0,0){\normalsize $t_{\cal A}/{P}$}}
\put(-191,92){\makebox(0,0){\rotatebox{0}{$\Delta{\sf S}_{_{\cal A}} $}}}
\caption{Phase IIb: $\xi({A}=20,{P}=0.1)=2$}
\label{subfig:EE_reg_phase4}
\end{subfigure}
\caption{The evolution of the regularized entanglement entropy, $\Delta{\sf S}_{_{\cal A}} $ defined in Eq.~\eqref{eqn:S_reg}, for the four phases for a radial cutoff of $u_{\cal A} = 10^{-3}$. The strip widths are $a=0.05$ for panels (a), (b), (c), and $a=0.01$ for panel (d).}
\label{fig:EE_unreg}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.42\textwidth}
\includegraphics[width=\textwidth]{figures/EE_reg_S_evo_a=0.05_amp=1_per=0.1_tfinal=1_cutoff=0.001.pdf}
\put(-82,-7){\makebox(0,0){\normalsize $s/s_0$}}
\put(-191,92){\makebox(0,0){\rotatebox{0}{$\Delta{\sf S}_{_{\cal A}} $}}}
\caption{Phase I: $\xi({A}=1,{P}=0.1)=0.1$}
\label{subfig:EE_regS_phase1}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.438\textwidth}
\includegraphics[width=\textwidth]{figures/EE_reg_S_evo_a=0.05_amp=1_per=10_tfinal=100_cutoff=0.001.pdf}
\put(-82,-7){\makebox(0,0){\normalsize $s/s_0$}}
\put(-198,92){\makebox(0,0){\rotatebox{0}{$\Delta{\sf S}_{_{\cal A}} $}}}
\caption{Phase IIa: $\xi({A}=1,{P}=10)=10$}
\label{subfig:EE_regS_phase2}
\end{subfigure}
\vspace{5mm}
\begin{subfigure}[b]{0.42\textwidth}
\includegraphics[width=\textwidth]{figures/EE_reg_S_evo_a=0.05_amp=20_per=10_tfinal=100_cutoff=0.001.pdf}
\put(-82,-7){\makebox(0,0){\normalsize $s/s_0$}}
\put(-191,92){\makebox(0,0){\rotatebox{0}{$\Delta{\sf S}_{_{\cal A}} $}}}
\caption{Phase III: $\xi({A}=20,{P}=10)=200$}
\label{subfig:EE_regS_phase3}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.42\textwidth}
\includegraphics[width=\textwidth]{figures/EE_reg_S_evo_a=0.01_amp=20_per=0.1_tfinal=1_cutoff=0.001.pdf}
\put(-82,-7){\makebox(0,0){\normalsize $s/s_0$}}
\put(-191,92){\makebox(0,0){\rotatebox{0}{$\Delta{\sf S}_{_{\cal A}} $}}}
\caption{Phase IIb: $\xi({A}=20,{P}=0.1)=2$}
\label{subfig:EE_regS_phase4}
\end{subfigure}
\caption{The evolution of the regularized entanglement entropy, $\Delta{\sf S}_{_{\cal A}} $ defined in Eq.~\eqref{eqn:S_reg}, against the normalized entropy of the black hole, $s/s_0 = s/s(t=0)$, for the four phases for a radial cutoff of $u_{\cal A} = 10^{-3}$. The strip widths are $a=0.05$ for panels (a), (b), (c), and $a=0.01$ for panel (d). We include the Spearman rank and Pearson correlation coefficients, $-1 \leq \rho_s \leq 1$ and $-1\leq \rho_p \leq 1$ respectively, for each plot to demonstrate the linearity of the correlation between the entanglement entropy and the thermal entropy (see text for explanation).}
\label{fig:EE_reg_S}
\end{figure}
On the other hand, when we reach phase IIa (tilted regime) by way of small amplitudes, we start to see definite oscillatory evolution of $\Delta{\sf S}_{_{\cal A}} $, with a local reduction in $\Delta{\sf S}_{_{\cal A}} $ in each oscillation period. At the same time, the radial depth attained by the extremal surface, as measured by $u^*$, remains similar to that in phase I, as can be seen by juxtaposing the behaviour in Fig.~\ref{fig:ES_evolution_per01_amp1} and Fig.~\ref{fig:ES_evolution_per10_amp1}.
In phase IIa, however, our extremal surfaces lie closer to the boundary than in phase I. We conjecture that the origin of the reduction in $\Delta{\sf S}_{_{\cal A}} $ is associated with the sharp oscillations in $t^*$ or, equivalently, $\tilde{t}^*$. These imprint themselves on the actual value of the area despite the surface not reaching too far into the bulk (which is possible since even the asymptotics of the geometry is sensitive to the driving,
cf.\ \eqref{eq:metfg}). The onset of non-monotone growth of $\Delta{\sf S}_{_{\cal A}} $ in Fig.~\ref{subfig:EE_reg_phase2} characterizes the departure from the linear regime to the non-linear domain, in line with the behaviour of the phase portrait, which in turn modifies the conductivity (it picks up a real part ${\sigma_{\text{in}}} > 0$ in phase IIa).
The temporal change of $\Delta{\sf S}_{_{\cal A}} $ is much more pronounced in the non-linear regime.
In the unbounded amplification phase III (see Fig.~\ref{subfig:EE_reg_phase3}) and the dynamical crossover wobbly phase IIb (see Fig.~\ref{subfig:EE_reg_phase4}), the $\Delta{\sf S}_{_{\cal A}} $ appears to track the time-coordinate of the cap-off point $\tilde{t}^*$ quite efficiently. Indeed here we expect the non-linearities of the system to be the dominant effect. We know that the black hole grows quite rapidly in response to the energy injected into the system at the boundary from our discussion in \S\ref{sec:dc} and \S\ref{sec:unamp}. The behaviour in phase III is smooth with large amplitude oscillations, which qualitatively track quite well the behaviour of
$\tilde{t}^*$. The dynamical crossover wobbly phase (phase IIb) exhibits far more drastic behaviour. We encounter for the first time jumps in the family of extremal surfaces that minimize the area (satisfying the boundary conditions and the homology constraint). These jumps translate into continuous but non-differentiable kinks in $\Delta{\sf S}_{_{\cal A}} $, visible in Fig.~\ref{subfig:EE_reg_phase4}. We again note that the radial position of the cap-off point of the extremal surface behaves much more smoothly; the glitches appear in $\tilde{t}^*$. Furthermore, the growth of the entanglement itself is rather steep: we see about an order of magnitude difference in $\Delta{\sf S}_{_{\cal A}} $ between the low amplitude and high amplitude regimes.
It is interesting to contrast the change of entanglement entropy with the change in the thermal entropy to see how the two are correlated. As we have argued above, the fact that we have an ever increasing thermal entropy (the bulk black hole is constantly growing) implies that even for small sub-systems we will quickly see an overwhelming thermal contribution. We display in
Fig.~\ref{fig:EE_reg_S} the functional dependence of $\Delta{\sf S}_{_{\cal A}} $ on the (normalized) instantaneous thermal entropy $s(t)/s(t=0)$.
It is immediately apparent upon inspecting the plots that there is near-perfect correlation in three of the phases, with
Fig.~\ref{subfig:EE_regS_phase3} corresponding to phase III being the only outlier. To get a quantitative feeling for the correlation we have also indicated the Pearson correlation coefficient $\rho_p$ as well as the Spearman rank coefficient $\rho_s$. These are statistical markers for measuring correlations between two sets of data and are defined to take values in the interval $[-1,1]$. The values
$\rho_s, \rho_p = 0$, $+1$ and $-1$ signify zero, perfect positive and perfect negative correlation, respectively. While the Spearman coefficient indicates that the observables in question are monotonically related, the Pearson coefficient provides an accurate measure of linear correlation. Indeed, from the results quoted in Fig.~\ref{fig:EE_reg_S} we see that $\Delta{\sf S}_{_{\cal A}} (s)$ is a linear function to a very good approximation in phases I, IIa and IIb. It is curious that the linearity is respected even in the presence of the glitches in the growth of entanglement entropy (we do not see any drastic behaviour in the area of the apparent horizon). The unbounded amplification phase III clearly demonstrates the effects of non-linearities by decorrelating $\Delta{\sf S}_{_{\cal A}} $ and $s(t)$.
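As an aside, both statistics are straightforward to evaluate with standard tools. The following minimal Python sketch illustrates the computation; the two arrays are placeholder stand-ins for the sampled $s/s_0$ and $\Delta{\sf S}_{_{\cal A}}$ series, not our actual data.
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Placeholder series standing in for the sampled observables:
# s_norm ~ s(t)/s(0) and dS ~ Delta S_A(t) on the same time grid.
s_norm = np.linspace(1.0, 1.5, 200)
dS = 0.8 * (s_norm - 1.0) + 0.01 * np.sin(40.0 * s_norm)

rho_p, _ = pearsonr(s_norm, dS)    # linear correlation
rho_s, _ = spearmanr(s_norm, dS)   # monotonic (rank) correlation
print(f"rho_p = {rho_p:.4f}, rho_s = {rho_s:.4f}")
\end{verbatim}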
\section{Discussion}
\label{sec:discuss}
The non-equilibrium dynamics of strongly coupled field theories is amenable to detailed quantitative exploration using the AdS/CFT correspondence. We have exploited this set-up to study the behaviour when a homogeneous thermal plasma is driven away from equilibrium by a periodically sourcing a relevant (composite) scalar operator. The resulting dynamics exhibits a rather rich phase structure illustrated in Fig.~\ref{fig:PP_qualitative}.
We identified four distinct phases, characterizing them in terms of the frequency and amplitude of the external driving force. Of these, the dissipation dominated phase I is perhaps the most intuitive, for here the weakness of the driving allows the system to catch up with the drive. This is clearly visible in the various observables we studied: the complex conductivity of the response is purely real, owing to the phase lag between the source and response, and the evolution of entanglement is rather quiescent.
There is more structure when we ramp up either the period of the driving or the amplitude, for now the system departs quite rapidly from equilibrium. The response is therefore more pronounced; we see a more in-phase response and greater temporal oscillations. In phases IIa to IIb there emerges a non-vanishing imaginary part of the conductivity, which in fact appears to capture the entire response for high values of the period and amplitude. We also notice significant fluctuations in the energy density and the entanglement entropy; furthermore, the entropy density grows rather rapidly in this regime. Perhaps most intriguing is the unbounded amplification of phase III, where we see sharp fluctuations and a highly non-linear response. We argue that this response is not captured by polynomial self-interactions of the composite operator; the intricate dynamics of gravity in AdS appears to induce effective non-polynomial couplings in the effective action for the operator ${\cal O}$ we use to perturb the system away from equilibrium. We believe this fact is significant and should be taken into account when attempting to construct effective models distilling the effects of gravitational interactions for strongly coupled systems.
While our focus has been on computing the simplest set of observables, essentially one-point functions and entanglement entropy for small sub-systems, the power of holography is that we can do much more. In time independent equilibrium scenarios it is straightforward to use the holographic map to compute correlation functions (at least two-point functions). In genuine non-equilibrium scenarios such as those we have focused on, the technology for computing such observables, whilst present \cite{CaronHuot:2011dr}, is still a bit cumbersome to work with (at least numerically). It would be interesting to develop these techniques further, perhaps taking inspiration from the analytical models of \cite{Ebrahim:2010ra,Keranen:2014lna}. This would provide us with a direct probe of fluctuations in the plasma, which can be contrasted with the dissipation in the system, the latter being measured by the entropy production through the growth of the horizon.
Likewise, our exploration of the behaviour of entanglement entropy has been restricted to the analysis of small sub-systems for pragmatic reasons. While the sub-system under consideration was chosen to have a fixed size, the fact that we are continuously driving the system leads to an ever increasing thermal contribution to the entanglement. Geometrically this is easy to understand, since the horizon of our bulk solution is ever growing (as we have indicated, both the event and apparent horizons are required to be monotonic in our set-up) and
reaches out towards the boundary in the course of the evolution. As a result, the local thermal scale can overwhelm the relative smallness of the sub-region we choose. To have a precise mapping of the entanglement structure, we need to be able to ascertain the true minimum of the area functional in such scenarios, bearing in mind that the extremal surface can (and often does) penetrate various horizons. A significant obstacle to ascertaining this is the fact that the characteristic method for solving Einstein's equations developed in \cite{Chesler:2013lia} excises the region of the spacetime behind the apparent horizon. While this is a technical obstacle, overcoming it would not only enable us to probe the interior of a highly non-equilibrium black hole using holographic entanglement, but could also allow us to explore other interesting scenarios, such as the effect of perturbing the ground state of the system by external sources.
\acknowledgments
We would like to thank Veronika Hubeny and Henry Maxfield for useful discussions.
M.~Rangamani and M.~Rozali would like to acknowledge the hospitality of Yukawa Institute for Theoretical Physics, Kyoto during the course of the project. In addition M.~Rangamani would also like to acknowledge the hospitality of IAS, Princeton, University of Amsterdam and Aspen Center for Physics.
M.~Rangamani acknowledges support from the Ambrose Monell foundation, by the
National Science Foundation under Grant 1066293, by the FQXi under grant ``Measures of Holographic Information'' (FQXi-RFP3-1334), by the STFC Consolidated Grant ST/L000407/1, and the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013), ERC Consolidator Grant Agreement ERC-2013-CoG-615443: SPiN (Symmetry Principles in Nature). M.~Rozali and A.~Wong are supported by NSERC.
\subsection{Classification model}
The model input is the quarter-day backscatter profiles and LDR data (667 x 800 x 2).
The first layer is a 28 x 1 convolution with stride of 1 that reduces the data size to 640 x 800 x 2.
The initial size reduction is done to ensure the downsampling (and later on upsampling)
dimension changes are consistent throughout the entire model.
Next are five convolutional-pooling blocks.
Each block contains
two 3 x 3 convolutions layers with stride of 1 with
rectified linear unit (ReLU) activation function
and a 2 x 2 max pooling layer with dropout.
The first convolutional layer doubles the depth size in each block
except for the first one,
which increases the depth size from 2 to 16.
This is followed by a 20 x 25 convolutional layer with stride of 1 with ReLU and dropout
to flatten the layer.
Next are two dense layers, the first using batch normalization and ReLU
and the second using a softmax activation function to reduce to a 1 x 2 output.
The first index in the output array is the probability that the input data does not contain a cloud
and the second is the probability that the input data contains a cloud.
Figure \ref{classify_model} shows the classification model.
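The architecture above can be summarized in a short Keras sketch. This is an illustrative reconstruction from the shapes quoted in the text, not the code used in this work: the \texttt{same} padding is implied by the stated block output sizes, while the dropout rate and the width of the first dense layer are not specified above, so the values below are assumptions.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(dropout=0.25):          # dropout rate: assumed value
    inp = layers.Input(shape=(667, 800, 2))
    # 28 x 1 convolution, stride 1: 667 -> 640 rows, depth kept at 2
    x = layers.Conv2D(2, (28, 1))(inp)
    # Five convolution-pooling blocks; depth 2 -> 16 -> ... -> 256 and
    # image 640 x 800 -> 20 x 25 after the five 2 x 2 max poolings.
    for filters in (16, 32, 64, 128, 256):
        x = layers.Conv2D(filters, (3, 3), padding='same',
                          activation='relu')(x)
        x = layers.Conv2D(filters, (3, 3), padding='same',
                          activation='relu')(x)
        x = layers.MaxPooling2D((2, 2))(x)
        x = layers.Dropout(dropout)(x)
    # 20 x 25 convolution collapses the 20 x 25 x 256 map to 1 x 1
    x = layers.Conv2D(256, (20, 25), activation='relu')(x)
    x = layers.Dropout(dropout)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(128)(x)                 # layer width: assumed value
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    out = layers.Dense(2, activation='softmax')(x)  # [P(no cloud), P(cloud)]
    return models.Model(inp, out)
\end{verbatim}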
\subsection{Segmentation model}
\begin{figure*}[h]
\centering
\includegraphics[width=0.7\linewidth]{segment_model}
\caption{Diagram of fully convolutional neural network trained to identify clouds from lidar data}
\label{seg_model}
\end{figure*}
The segmentation model architecture uses the same structure as the classification model
up to and including the last 80 x 100 x 128 convolutional output
(excluding the max-pooling and dropout layer).
The convolutional output is connected to three deconvolutional blocks.
Each deconvolutional block consists of a deconvolutional layer that upsamples
the image size by 2 and reduces the number of filters by a factor of 2.
This is concatenated with the output of the last convolutional layer of
the same image size as the output of the deconvolution layer,
followed by two 3 x 3 convolutional layers.
These "skip" connections help transfer higher level features from earlier convolutions
later in model training.
We use a 28 x 1 deconvolution layer with batch normalization and ReLU activation
to return the data to the initial time and height dimensions (667 x 800).
Finally, the model performs a 1 x 1 convolutional layer with softmax activation function
to create the output layer.
The output layer is the softmax probability, for each time-height point, of whether it is a cloud or not a cloud.
A point is identified as being a cloud if it has a probability greater than or equal to 50\%
of being a cloud.
Figure \ref{seg_model} shows the full segmentation model and notes the layers
from the classification model that are reused.
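A corresponding sketch of the full segmentation network, under the same assumed conventions (and the same imports) as the classifier sketch above, is given below; the filter count of the final 28 x 1 deconvolution is likewise an assumption.
\begin{verbatim}
def build_segmenter(dropout=0.25):
    inp = layers.Input(shape=(667, 800, 2))
    x = layers.Conv2D(2, (28, 1))(inp)                 # 667 -> 640 rows
    skips = []
    for filters in (16, 32, 64):                       # encoder blocks 1-3
        x = layers.Conv2D(filters, (3, 3), padding='same',
                          activation='relu')(x)
        x = layers.Conv2D(filters, (3, 3), padding='same',
                          activation='relu')(x)
        skips.append(x)                                # skip connections
        x = layers.MaxPooling2D((2, 2))(x)
        x = layers.Dropout(dropout)(x)
    # Block 4 convolutions only: the reused 80 x 100 x 128 output
    # (no max pooling or dropout here).
    x = layers.Conv2D(128, (3, 3), padding='same', activation='relu')(x)
    x = layers.Conv2D(128, (3, 3), padding='same', activation='relu')(x)
    for filters, skip in zip((64, 32, 16), reversed(skips)):  # decoder
        x = layers.Conv2DTranspose(filters, (2, 2), strides=2)(x)
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(filters, (3, 3), padding='same',
                          activation='relu')(x)
        x = layers.Conv2D(filters, (3, 3), padding='same',
                          activation='relu')(x)
    x = layers.Conv2DTranspose(16, (28, 1))(x)         # 640 -> 667 rows
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    out = layers.Conv2D(2, (1, 1), activation='softmax')(x)
    return models.Model(inp, out)
\end{verbatim}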
\section{Introduction} \label{introduction}
\input{introduction}
\section{Dataset} \label{mpl_data}
\input{mpl_data}
\section{Related work} \label{related_work}
\input{related_work}
\section{Data preprocessing and preparation} \label{dataset}
\input{dataset}
\section{Model architecture} \label{model}
\input{model}
\section{Training} \label{train}
\input{training}
\section{Results} \label{results}
\input{results}
\section{Conclusion} \label{conclusion}
\input{conclusion}
\subsection*{Acknowledgements}
\input{acknowledgements}
\bibliographystyle{plain}
\subsection{Training methodology verification}
To verify the training methodology,
we train an FCN model without the first two training steps (i.e., no image-level and noisy annotation pre-training)
and one without the second training step (no noisy annotation pre-training).
For the second model, we use the same classification weights that were transferred to the best FCN model
during the first training step.
Additionally, we use the same hyperparameter search techniques
to find the best model.
As seen in table \ref{table:march_results} (rows 2 and 3),
both of these models have lower F1-scores (0.8263 and 0.8242) and
precision (0.7801 and 0.7938) than the fully trained FCN model on the hold-out dataset.
We do note the FCN model with no pre-training does have a slightly higher recall (0.8783)
than the fully trained model.
On the test-split of hand-labeled dataset, the fully trained FCN model outperforms both of them
in each category (see rows 2 and 3 of table \ref{table:test_results}).
Thus, the pre-training with the image-level and noisy annotated data increases the model's overall performance.
\input{bad_march_figures}
\subsection{Misidentified aerosol-dust layers}
In the holdout dataset (March 2015), we discovered one day, the last quarter of March 30th, 2015,
where the model performs very poorly (f1-score: 0.2929, precision: 0.1806, recall: 0.7752).
The model incorrectly marks a layer in the lower height bins (see figure \ref{fig:march_30_data}) as cloud.
The attenuated backscatter of the scatterers in this layer visually appears similar to
aerosol, but the LDR values are close to those of ice, leading the model to label the
layer as cloud.
This layer is not observed in images from a ceilometer, a second co-located lidar system
at the SGP, on this same day.
The ceilometer has a longer wavelength and is less sensitive to detecting smaller
particles such as aerosol.
This suggests the layer is likely aerosol, possibly related to local agricultural activity.
Looking closer at the training data, there are no days in January or February 2015 that
had similar aerosol layers. Thus, the model did not have the opportunity to train on these
edge cases and did not learn to ignore this layer.
\input{oliktok_figures}
\subsection{Test on MPL data from different facility}
In addition, we test the FCN model on MPL data
from the ARM Oliktok mobile facility on the North Slope of Alaska
to see how well the model performs on data from a different site location \citep{mplmask},
since different sites have different atmospheric properties and weather
that can affect the input data.
We randomly select 14 days from May 2016 to hand-label and run the model on.
The model performs reasonably well on the Oliktok data,
achieving an F1-score of 0.8045 (0.8478 precision, 0.7654 recall).
This is much better than the MPLCMASK product,
which has an F1-score of 0.4185 (0.371 precision, 0.48 recall).
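For reference, the F1-score used throughout is the harmonic mean of precision $P$ and recall $R$; the Oliktok numbers above satisfy this relation directly:
\[
F_1 = \frac{2PR}{P+R} = \frac{2(0.8478)(0.7654)}{0.8478+0.7654} \approx 0.8045 .
\]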
Several qualitative results can be seen in figure \ref{fig:oliktok_figures}.
While the FCN model does perform noticeably worse on the Oliktok data compared to the
holdout data from the SGP site, the results are encouraging.
We believe that if we use transfer learning and train the model on data from the Oliktok site,
we can improve its performance on Oliktok data and its overall performance.
\section{Introduction}
Intervening metal absorption lines have routinely been observed along
the line of sight to quasars (QSOs). Among these features, strong
{{Mg}{II} $\lambda$2796, $\lambda$2803} absorption doublets (with
equivalent width $EW>0.3$ \AA) represent good indicators of
metal-enriched gas associated with foreground galaxies. This is
because Magnesium is an $\alpha$ element produced by red giants and
dispersed in the interstellar medium (ISM) by supernovae and stellar
winds (Bergeron \& Boisse 1991, Steidel et al. 1994). In particular,
such a strong doublet on a QSO sightline witnesses the presence of
$0.1-5 L_\star$ galaxies within 60 kpc projected distance and with
different morphological types (Churchill et al. 2005, Kacprzak et
al. 2008). In addition, the $EW>1$ \AA$\;$ {Mg}{II} intervening
absorbers are hosted within dark matter halos with characteristic
masses of $10^{11}-10^{12} M_\odot$ (Bouch\'e et al. 2006) and $\sim
80\%$ are associated with Damped Lyman-alpha systems (DLAs, Rao et
al. 2006).
For a few hours after their onset, Gamma-Ray Burst (GRB) afterglows
are the brightest beacons in the far Universe: therefore they provide
an alternative and complementary way to QSOs to fully explore the
properties of high-z galaxies in which they are hosted (see Savaglio
2006; Prochaska et al. 2007) and of those along their sightlines. In
principle, the sightlines of QSOs and GRBs are expected to be
equivalent. Prochter et al. (2006) found $\sim 7000$ strong ($EW>1
\AA\;$) {Mg}{II} intervening absorbers in $\sim50000$ QSO spectra
from the SDSS DR4, which corresponds to a redshift number density of
$dn/dz=0.24$. Surprisingly, the same authors report the
identification of 14 {Mg}{II} systems along 14 GRB sightlines,
which translates into a redshift number density of
$dn/dz=0.90^{+0.83}_{-0.50}$ (99\% confidence interval), almost 4
times higher than the QSO one. On the other hand, the intervening
{{C}{IV}} absorbers for the two classes of sources do not show any
statistical difference (Sudilovsky et al. 2007, Tejos et al. 2007).
The reason for this discrepancy is still uncertain, and several
scenarios have been widely discussed (see e.g. Porciani et al. 2007
and Cucchiara et al. 2009). One of these interpretations requires a
different size of the source, namely, that QSO emitting regions are
larger than GRBs which in turn are comparable to the typical size of
the MgII clouds (Frank et al. 2007). In this scenario variability in
the column densities of the {{Mg}{II}} absorber is expected, since
the Lorentz factor decreases because the fireball decelerates on the
interstellar medium, and thus the GRB emission regions increase. A
{{Mg}{II}} variability in multi-epoch spectroscopy data on
GRB060206 was first claimed by the analysis by Hao et al. (2007), but
then disproved by Aoki et al. (2008) and Th\"one et al. (2008).
Occasionally, extremely bright optical transient emission is
associated with the GRB event, offering the superb opportunity to take
spectra of the afterglows with high resolution instruments. In a
fraction of these cases, multi-epoch, high resolution spectroscopy
with an adequate signal-to-noise ratio (S/N) can be obtained, allowing
us to make detailed studies of the host galaxy ISM and to put strong
constraints on its physical parameters (see Vreeswijk et al. 2007 for
GRB060418, and D'Elia et al. 2009a for GRB080319B). Here we take
advantage of the high quality data collected for GRB080319B to make a
systematic study of this GRB sightline and to search for variability
of the {{Mg}{II}} intervening absorbers.
The paper is organized as follows. Section $2$ presents a short summary
of the GRB080319B detection and observations; Section $3$ describes the datasets
and data reduction; Section $4$ presents the full analysis of the
intervening systems identified in the spectra; finally in Section $5$
the results are discussed and conclusions are drawn.
\section{GRB080319B}
GRB080319B was discovered by the Burst Alert Telescope (BAT)
instrument on board Swift on 2008, March 19, at 06:12:49 UT. Swift
slewed to the target in less than 1 minute and a bright afterglow was
found by both the X-Ray Telescope (XRT) and UV-Optical Telescope
(UVOT) at RA = $14$h $31$m $40.7$s, Dec = +36$^o$ $18'$ $14.7"$
(Racusin et al. 2008a) with observations starting 60.5 and 175 s after
the trigger, respectively.
The field of GRB080319B was imaged by the ``Pi of the Sky'' apparatus
located at Las Campanas Observatory before, during and after the GRB
event (Cwiok et al. 2008). The field was also targeted by the robotic
telescope REM just 43 s after the BAT trigger (Covino et al. 2008a,
b). The TORTORA wide-field optical camera (12 cm diameter,
20$\times$25 deg FOV, TV-CCD, unfiltered) mounted on REM also imaged
the field before, during and after the GRB event with good temporal
resolution (Karpov et al. 2008). These observations show that the GRB
reached the magnitudes $V = 5.3 $ about $20$ s and $H = 4.2$ about
$50$ s after the trigger. This makes GRB080319B the brightest GRB
ever recorded at optical wavelengths (Racusin et al. 2008b, Bloom et
al. 2009).
\begin{table*}
\caption{\bf GRB080319B journal of observations and setups used}
{\footnotesize
\smallskip
\label{obs_log}
\begin{tabular}{|lccccccccc|}
\hline
\hline
Obs & UT observation & T. from burst (s)& Setup (nm) & Wavelength (\AA) & Slit & Resolution & Exp. (s) & S/N & R mag \\
\hline
RRM 1 & 2008 Mar 19, 06:21:26 & 517 & Dic 2, 437 & 3760 - 4980 & 1'' & 40000 & 600 & $ \sim 30 $ & $12 - 13$\\
RRM 1 & 2008 Mar 19, 06:21:26 & 517 & Dic 2, 860 & 6700 - 10430 & 1'' & 40000 & 600 & $ \sim 50 $ & $12 - 13$\\
RRM 2 & 2008 Mar 19, 07:18:47 & 3441 & Dic 1, 346 & 3040 - 3870 & 1'' & 40000 & 1800 & $ \sim 7 $ & $16 - 17$\\
RRM 2 & 2008 Mar 19, 07:18:47 & 3441 & Dic 1, 580 & 4780 - 6810 & 1'' & 40000 & 1800 & $ \sim 12 $ & $16 - 17$\\
RRM 2 & 2008 Mar 19, 08:06:42 & 6833 & Dic 2, 437 & 3760 - 4980 & 1'' & 40000 & 1800 & $ \sim 7 $ & $16 - 17$\\
RRM 2 & 2008 Mar 19, 08:06:42 & 6833 & Dic 2, 860 & 6700 - 10430 & 1'' & 40000 & 1800 & $ \sim 12 $ & $16 - 17$\\
ToO & 2008 Mar 19, 08:43:52 & 8546 & Dic 1, 346 & 3040 - 3870 & 1'' & 40000 & 1200 & $ \sim 5 $ & $16 - 17$\\
ToO & 2008 Mar 19, 08:43:52 & 8546 & Dic 1, 580 & 4780 - 6810 & 1'' & 40000 & 1200 & $ \sim 8 $ & $16 - 17$\\
ToO & 2008 Mar 19, 09:07:18 & 10478 & Dic 2, 437 & 3760 - 4980 & 1'' & 40000 & 1200 & $ \sim 5 $ & $16 - 17$\\
ToO & 2008 Mar 19, 09:07:18 & 10478 & Dic 2, 860 & 6700 - 10430 & 1'' & 40000 & 1200 & $ \sim 8 $ & $16 - 17$\\
\hline
\end{tabular}
}
\end{table*}
\section{Observations and data reduction}
We observed the bright afterglow of GRB080319B in the framework of the
ESO program 080.A-0398 with the VLT/UVES (Dekker et al. 2000). The
Observation Log and the setups used are reported in Table 1. Both
UVES dichroics, as well as the red and the blue arms, were used.
The first, 10-min observation was performed in Rapid Response Mode
(RRM) and started just 8m:30s after the GRB event, when the afterglow
was extremely bright ($R_{Mag}=12-13$). This provided an S/N of $30 - 50$ per
resolution element. Two more UVES observations followed, the first one
was again in RRM mode, activated in the framework of program 080.D-0526
and starting 1.0 hours after the GRB event. The second one was a Target
of Opportunity (ToO), starting 2.4 hours after the GRB, see Table 1. A
slit width of 1'' has been used in all the observations; this
corresponds to a spectral resolution of $R = 40000$, or $7.5$
km\,s$^{-1}$.
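For reference, the quoted velocity resolution follows directly from the resolving power:
\[
\Delta v = \frac{c}{R} \simeq \frac{3\times 10^{5}\ \mathrm{km\,s^{-1}}}{40000} \simeq 7.5\ \mathrm{km\,s^{-1}} .
\]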
Data reduction was carried out by using the UVES pipeline (Ballester
et al. 2000). The final useful spectra extend from $\sim 3800$~\AA{}
to $\sim 9500$~\AA. The spectra were normalized to the continuum,
which was evaluated by fitting the data with cubic splines, after the
removal of the absorption features. Finally, the noise spectrum, used
to determine the errors on the best fit line parameters, was
calculated from the real, background-subtracted spectra using
line-free regions. This takes into account both statistical and
systematic errors in the pipeline processing and background
subtraction.
\section{The GRB080319B sightline}
An analysis of the GRB080319B UVES spectra reveals at least 5
absorbing systems along the GRB line of sight. The presence of
excited {{Fe}{II}} and {{Ni}{II}} features in the higher
redshift system is a clear indication that its gas belongs to, or lies
close to, the host galaxy of the GRB. We will not discuss the host galaxy
absorber, since a detailed study has been reported by D'Elia et
al. (2009a); we will instead concentrate on the intervening
systems.
The four intervening absorbers identified have redshifts in the range
$0.5 - 0.8$. Each system features {{Mg}{II}} absorption.
{{Fe}{II}}, {{Mg}{I}} and {{Mn}{II}} are also present;
they appear in four, two and one intervening absorbers,
respectively. The analysis of these systems is often intricate due to
the complexity of the absorption lines of the spectrum, which in
several cases cannot easily be fit with a single line profile.
The presence of several components is indicative of clumpy gas in the
intervening absorbers, similarly to what is observed for the circumburst
environment of many GRBs (See e.g., D'Elia et al. 2007, Piranomonte et
al. 2009, D'Elia et al. 2009a,b) and for the {{Mg}{II}}
intervening absorbers along the QSO sightlines (Churchill \& Vogt
2001). Whenever a system cannot be fit by a single line profile, it
can be interpreted both as a clumpy system with a complex velocity
structure or as many systems lying at different redshifts. This is why
we say that GRB080319B has at least four intervening systems. We
arbitrarily chose to consider as a single system two or more
absorption features closer than $500$ km s$^{-1}$. Tab. 2 summarizes
the characteristics of the intervening absorbers. Column 2 gives the
heliocentric redshift, column 3 the absorbing elements and ions,
column 4 the total width of the system, column 5 the rest frame
equivalent width (EW$_{rf}$) of the {{Mg}{II}$\lambda$2796} line,
column 6 the number of components necessary to adequately fit each
system. The intervening absorbers are ordered with decreasing
redshift. For multiple component systems the $z$ reference values have
been arbitrarily placed to be coincident with the component indicated
in column 7; for single component absorbers the redshift is defined by
the best fit value of the central absorption line wavelength.
\begin{table*}
\caption{\bf Absorption systems along the GRB080319B sightline}
{\footnotesize
\smallskip
\label{obs_log}
\begin{tabular}{|l|cccccc|}
\hline
\hline
System & redshift & species & Width (km s$^{-1}$) & {{Mg}{II} $\lambda$2796} EW$_{rf}^*$ (\AA) & \# of components & z reference component \\
\hline
1 & $0.76046$ & {Mg}{II}, {Fe}{II} & $90$ &$0.121\pm0.007$ & $4$ & 2nd \\
2 & $0.71468$ & {Mg}{II}, {Mg}{I}, {Fe}{II} & $400$ &$1.448\pm0.007$ & $10$ & 7th \\
3 & $0.56578$ & {Mg}{II}, {Fe}{II} & $30$ &$0.083\pm0.006$ & $1$ & 1st \\
4 & $0.53035$ & {Mg}{II}, {Mg}{I}, {Fe}{II}, {Mn}{II} & $100$ &$0.62\pm0.01$ & $6$ & 3rd \\
\hline
\end{tabular}
$^*$ Rest frame values, errors are at $1\sigma$ confidence level.
}
\end{table*}
The next subsections report the analysis for each intervening
absorber. The fitting procedure for the absorption systems has been
carried out as follows. First of all, we analyzed the first epoch
spectrum, the one with the highest signal-to-noise ratio. The spectrum
was analyzed in the MIDAS environment using the {\sc fitlyman}
procedure (Fontana \& Ballester 1995).
The line profile fitting is usually performed using a Voigt function.
Each Voigt profile has basically three free parameters: the central
wavelength of the transition, the column density of the absorbing
species and the doppler parameter $b$ of the gas. A single component
treatment is often inadequate, reflecting the complexity of the
intervening systems, as noted before. We thus fit the data several
times with different numbers of components, in order to minimize the
reduced $\chi ^2$ values. {Mg}{II}, which appears in all systems,
is the one with the largest velocity spread, so it was used to guide the
identification of all components. The other species, when present,
allowed us to best constrain the central wavelength positions in the
regions where the {Mg}{II} is strongly saturated. The
central wavelengths and the $b$ parameters have been kept fixed among
the different species, unless otherwise stated. Once a satisfactory
fit to the first epoch spectrum was obtained, we turned to the
analysis of the other two epochs. Again, the central wavelengths and
the $b$ parameters were fixed to the first epoch results. To increase
the S/N of the later epoch observations, we added them (the second and third
spectra) and repeated the fits to the coadded spectrum. Tables 3 to 6
report the results of our analysis for the intervening absorbers 1 to
4, respectively. In particular, column 2 shows the number of features
contributing to the fit for each species, column 3 reports the epoch
to which the data refers, and the following ones the column densities
of each component for that specific element or ion. Components are
identified with progressive numbers for decreasing redshifts (or
decreasing wavelengths, i.e., the higher the wavelength or the
positive velocity shift, the lower the component number). Errors are
the formal $1 \sigma$ uncertainties given by {\sc fitlyman}; upper
limits are at the $90\%$ confidence level.
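For readers wishing to reproduce this kind of modeling outside {\sc fitlyman}, the following Python sketch illustrates the standard Voigt parametrization described above (velocity offset, column density and Doppler parameter per component). It is an illustrative implementation, not the code used in this work; the atomic data ($f$, $\lambda_0$, $\Gamma$) must be supplied for each transition.
\begin{verbatim}
import numpy as np
from scipy.special import voigt_profile

def voigt_tau(v, logN, b, f, wave0, gamma):
    # Optical depth of one component on a velocity grid v [km/s].
    # logN: log10 column density [cm^-2]; b: Doppler parameter [km/s];
    # f, wave0 [Angstrom] and gamma [s^-1] are the atomic data.
    sigma_v = b / np.sqrt(2.0)                          # Gaussian width
    gamma_v = gamma * wave0 * 1e-8 / (4*np.pi) / 1e5    # Lorentz HWHM [km/s]
    phi = voigt_profile(v, sigma_v, gamma_v)            # normalized in km/s
    # pi e^2 / (m_e c) = 0.02654 cm^2 s^-1; 1e5 converts cm/s -> km/s
    return 0.02654 * f * (wave0 * 1e-8) * 10.0**logN * phi / 1e5

def model_flux(v, components, f, wave0, gamma):
    # Normalized flux from a sum of Voigt components (dv, logN, b).
    tau = sum(voigt_tau(v - dv, logN, b, f, wave0, gamma)
              for dv, logN, b in components)
    return np.exp(-tau)
\end{verbatim}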
\subsection{The intervening system 1}
This is the system with the lowest number of absorption features. In
fact, only the {{Mg}{II} $\lambda$2796, $\lambda$2803} doublet and
the {{Fe}{II} $\lambda$2382} line are present at
$z=0.76046$. Despite this, the structure of the system, which spans a
velocity range of $\sim 90$ km s$^{-1}$, is quite complex, and a four
component model is necessary in order to obtain a reasonable fit to
the data. Fig. 1 shows the absorption due to the {Mg}{II} doublet
and the {{Fe}{II} $\lambda$2382} line, together with the results
of the four component fit. The {{Fe}{II} $\lambda$2382} line is
only present in the fourth component of the first observation. The S/N
ratio of the following observations only allows us to set upper limits for
this feature. Table 3 reports the column densities for each
component. We can rule out a variability of both the {Mg}{II} and
{Fe}{II} lines, since all the column densities are consistent
within the $2 \sigma$ level.
The non variability of the {Mg}{II} features is also shown in
Fig. 2, which displays the {{Mg}{II} $\lambda$2796} features for
the three epochs.
\begin{figure}
\centering
\includegraphics[angle=-90,width=9cm]{Interv07604355.ps}
\caption{The {{Mg}{II} $\lambda$2796, $\lambda$2803} absorption
doublet and the {{Fe}{II} $\lambda$2382} line of the first
intervening system, for the first epoch spectrum. Solid lines
represent our four Voigt components, best fit model. Vertical lines
identify the velocity of each component with respect to the zero point
arbitrarily placed at $z=0.76046$ and coincident with component II.}
\end{figure}
\begin{table*}
\caption{\bf Column densities for the absorption system 1}
{\footnotesize
\smallskip
\begin{tabular}{|lc|ccccc|}
\hline
\hline
Species & Trans. & Obs.& I (+30 km s$^{-1}$) & II (0 km s$^{-1}$) & III (-20 km s$^{-1}$) & IV (-30 km s$^{-1}$)\\
\hline
Mg II & $\lambda$2796 & 1 & $11.83 \pm 0.02$ & $11.91 \pm 0.16$ & $10.56 \pm 0.07 $ & $12.37 \pm 0.01 $\\
& $\lambda$2803 & 2 & $11.75 \pm 0.09$ & $ < 11.7 $ & $ <11.7 $ & $12.33 \pm 0.04 $\\
& & 3 & $ < 12.1 $ & $ < 12.1 $ & $ <12.1 $ & $12.22 \pm 0.07 $\\
& &2+3 & $11.72 \pm 0.08$ & $ < 11.7 $ & $ <11.7 $ & $12.29 \pm 0.03 $\\
\hline
Fe II & $\lambda$2382 & 1 & $ < 11.5 $ & $ < 11.5 $ & $ <11.5 $ & $11.53 \pm 0.09 $\\
& & 2 & $ < 12.7 $ & $ < 12.7 $ & $ <12.7 $ & $ < 12.7 $\\
& & 3 & $ < 13.1 $ & $ < 13.1 $ & $ <13.1 $ & $ < 13.1 $\\
& &2+3 & $ < 12.7 $ & $ < 12.7 $ & $ <12.7 $ & $ < 12.7 $\\
\hline
\end{tabular}
All values are logarithmic cm$^{-2}$
}
\end{table*}
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=6cm, angle=-90]{MgII_2796_076_3epoch.ps}
\end{tabular}
\caption{A comparison between the {{Mg}{II} $\lambda$2796} optical
depth in the three UVES observation epochs, for the first intervening
system. Solid line refers to the first epoch spectrum (8m30s after the
Swift trigger), dashed line to the second epoch spectrum (1.0 hours
after the GRB event), and dotted line to the third epoch spectrum (2.4
hours after the GRB event).}
\end{figure}
\subsection{The intervening system 2}
This is the most complex system, which spans the largest velocity
range ($\sim 400$ km s$^{-1}$). Many absorption features are
present at the redshift of $0.71468$, namely: the {{Mg}{II}
$\lambda$2796, $\lambda$2803} doublet, {Mg}{I} ($\lambda$2852) and
{Fe}{II} in several transitions ($\lambda$2344, $\lambda$2374,
$\lambda$2382, $\lambda$2586 and $\lambda$2600). The first epoch
spectrum required a ten component fit in order to obtain a good
modeling to the data. Fig. 3 shows the absorption due to the
{Mg}{II} doublet and {{Mg}{I} $\lambda$2852}, while Fig. 4
displays all the features of {Fe}{II}; in both figures the results
of the ten component fit are also plotted. Table 4 reports the column
densities for each component. No variability is detected in the three
epochs within the $3 \sigma $ uncertainty. This rules out column
density variability for the intervening system 2. The only exception
is the second component of the {Mg}{II}. However,
this specific component is strongly saturated, so the values reported
in the corresponding column of Table 4 may not be entirely reliable.
The non variability of the features belonging to the second system can
also be seen in Figs. 5-7, which display the {{Mg}{II} $\lambda$2796},
{{Mg}{I} $\lambda$2852} and {{Fe}{II} $\lambda$2600} features for the
three epochs, respectively. This system is the only one showing an
EW$_{MgII,rf}>1$ \AA, so we also computed the EWs of the absorption
features for comparison with other works. The {{Mg}{II}$\lambda$2796}
EW does not vary between the three observations at the $2\sigma$
confidence level, and a variability of this feature, if present, is
less than 10\% at the $3\sigma$ confidence level. The {{Fe}{II}
$\lambda$2600} and {{Mg}{I} $\lambda$2852} rest frame EWs are
EW$_{FeII,rf}=0.708\pm0.007$ and EW$_{MgI,rf}=0.289\pm0.007$,
respectively ($1\sigma$ confidence). These values are not different
from those observed along other GRB sightlines (see e.g. Cucchiara et
al. 2009).
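For definiteness, the rest frame equivalent widths quoted in this paper follow the standard definition
\[
EW_{rf} = \frac{1}{1+z}\int \left(1-\frac{F(\lambda)}{F_c(\lambda)}\right) d\lambda ,
\]
where $F_c(\lambda)$ is the continuum level and the integral runs over the absorption feature.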
\begin{figure}
\centering
\includegraphics[angle=-90,width=9cm]{IntervMgII_MgI_071466.ps}
\caption{The {{Mg}{II} $\lambda$2796, $\lambda$2803} doublet and
the {{Mg}{I} $\lambda$2852} feature of the second intervening
system, for the first epoch spectrum. Solid lines represent our ten
Voigt components, best fit model. Vertical lines identify the velocity
of each component with respect to the zero point arbitrarily placed at
$z=0.71468$ and coincident with component VII.}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=-90,width=9cm]{Interv_FeII_071466.ps}
\caption{All the {Fe}{II} features of the second intervening
system, for the first epoch spectrum. Solid lines represent our ten
Voigt components, best fit model. Vertical lines identify the velocity
of each component with respect to the zero point arbitrarily placed at
$z=0.71468$ and coincident with component VII.}
\end{figure}
\begin{table*}
\caption{\bf Column densities for the absorption system 2}
{\footnotesize
\smallskip
\begin{tabular}{|lc|cccccc|}
\hline
\hline
Species & Trans. & Obs.& I (160 km s$^{-1}$)& II (135 km s$^{-1}$)& III (125 km s$^{-1}$)& IV (90 km s$^{-1}$)& V (50 km s$^{-1}$) \\
\hline
& & & VI (30 km s$^{-1}$)& VII (0 km s$^{-1}$)&VIII (-50 km s$^{-1}$)& IX (-100 km s$^{-1}$)& X (-210 km s$^{-1}$)\\
\hline
Mg II & $\lambda$2796 & 1 & $13.84 \pm 0.05$ & $15.45 \pm 0.08$ & $13.05 \pm 0.02 $ & $13.04 \pm 0.02 $ & $13.29 \pm 0.02 $ \\
& $\lambda$2803 & 2 & $13.79 \pm 0.12$ & $16.01 \pm 0.06$ & $13.03 \pm 0.04 $ & $13.07 \pm 0.13 $ & $13.32 \pm 0.04 $ \\
& & 3 & $ < 12.2 $ & $14.83 \pm 0.12$ & $13.07 \pm 0.04 $ & $13.21 \pm 0.19 $ & $13.19 \pm 0.11 $ \\
& &2+3 & $13.57 \pm 0.16$ & $15.11 \pm 0.30$ & $13.24 \pm 0.06 $ & $12.99 \pm 0.04 $ & $13.19 \pm 0.08 $ \\
\hline
& & 1 & $13.65 \pm 0.02$ & $12.57 \pm 0.03$ & $12.66 \pm 0.49 $ & $12.56 \pm 0.34 $ & $13.44 \pm 0.03 $ \\
& & 2 & $13.64 \pm 0.12$ & $12.50 \pm 0.06$ & $12.37 \pm 0.28 $ & $13.03 \pm 1.95 $ & $13.57 \pm 0.05 $ \\
& & 3 & $13.56 \pm 0.10$ & $12.37 \pm 0.09$ & $ < 12.1 $ & $12.27 \pm 0.64 $ & $13.53 \pm 0.04 $ \\
& &2+3 & $13.70 \pm 0.07$ & $12.43 \pm 0.06$ & $ < 11.9 $ & $12.78 \pm 0.23 $ & $13.54 \pm 0.05 $ \\
\hline
Mg I & $\lambda$2852 & 1 & $ < 10.5 $ & $12.18 \pm 0.01$ & $11.19 \pm 0.07 $ & $11.29 \pm 0.03 $ & $11.45 \pm 0.03 $ \\
& & 2 & $ < 11.2 $ & $12.10 \pm 0.04$ & $ < 11.2 $ & $ < 11.2 $ & $11.29 \pm 0.11 $ \\
& & 3 & $ < 11.6 $ & $12.10 \pm 0.03$ & $ < 11.6 $ & $ < 11.6 $ & $ < 11.6 $ \\
& &2+3 & $ < 11.2 $ & $12.08 \pm 0.05$ & $11.34 \pm 0.14 $ & $ < 11.2 $ & $11.13 \pm 0.12 $ \\
\hline
& & 1 & $11.77 \pm 0.01$ & $10.63 \pm 0.09$ & $ < 10.5 $ & $ < 10.5 $ & $11.45 \pm 0.03 $ \\
& & 2 & $11.77 \pm 0.04$ & $ < 11.2 $ & $ < 11.2 $ & $ < 11.2 $ & $11.49 \pm 0.06 $ \\
& & 3 & $11.75 \pm 0.03$ & $ < 11.6 $ & $ < 11.6 $ & $ < 11.6 $ & $ < 11.6 $ \\
& &2+3 & $11.78 \pm 0.05$ & $ < 11.2 $ & $ < 11.2 $ & $ < 11.2 $ & $11.58 \pm 0.05 $ \\
\hline
FeII&$\lambda$2344, $\lambda$2374& 1 & $13.33 \pm 0.08$ & $13.43 \pm 0.01$ & $12.40 \pm 0.03 $ & $12.68 \pm 0.01 $ & $13.40 \pm 0.02 $ \\
& $\lambda$2382 & 2 & $13.79 \pm 0.18$ & $13.35 \pm 0.04$ & $12.52 \pm 0.09 $ & $12.64 \pm 0.04 $ & $13.43 \pm 0.02 $ \\
& $\lambda$2586 & 3 & $13.44 \pm 0.56$ & $13.39 \pm 0.06$ & $ < 12.7 $ & $12.80 \pm 0.05 $ & $13.42 \pm 0.03 $ \\
& $\lambda$2600 &2+3 & $13.77 \pm 0.17$ & $13.35 \pm 0.03$ & $12.53 \pm 0.08 $ & $12.71 \pm 0.03 $ & $13.41 \pm 0.02 $ \\
\hline
& & 1 & $13.15 \pm 0.01$ & $12.35 \pm 0.02$ & $ < 11.7 $ & $ < 11.7 $ & $13.48 \pm 0.01 $ \\
& & 2 & $13.09 \pm 0.04$ & $ < 12.4 $ & $ < 12.4 $ & $ < 12.4 $ & $13.40 \pm 0.04 $ \\
& & 3 & $13.12 \pm 0.05$ & $ < 12.7 $ & $ < 12.7 $ & $ < 12.7 $ & $13.43 \pm 0.05 $ \\
& &2+3 & $13.12 \pm 0.08$ & $12.18 \pm 0.08$ & $ < 12.1 $ & $ < 12.1 $ & $13.41 \pm 0.03 $ \\
\hline
\end{tabular}
All values are logarithmic cm$^{-2}$
}
\end{table*}
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=6cm, angle=-90]{MgII_2796_071_3epoch.ps}
\end{tabular}
\caption{A comparison between the {{Mg}{II} $\lambda$2796} optical
depth in the three UVES observation epochs, for the second intervening
system. Solid line refers to the first epoch spectrum (8m30s after the
Swift trigger), dashed line to the second epoch spectrum (1.0 hours
after the GRB event), and dotted line to the third epoch spectrum (2.4
hours after the GRB event).}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=6cm, angle=-90]{MgI_2852_071_3epoch.ps}
\end{tabular}
\caption{A comparison between the {{Mg}{I} $\lambda$2852} optical
depth in the three UVES observation epochs, for the second intervening
system. Solid line refers to the first epoch spectrum (8m30s after the
Swift trigger), dashed line to the second epoch spectrum (1.0 hours
after the GRB event), and dotted line to the third epoch spectrum (2.4
hours after the GRB event).}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=6cm, angle=-90]{FeII_2600_071_3epoch.ps}
\end{tabular}
\caption{A comparison between the {{Fe}{II} $\lambda$2600} optical
depth in the three UVES observation epochs, for the second intervening
system. Solid line refers to the first epoch spectrum (8m30s after the
Swift trigger), dashed line to the second epoch spectrum (1.0 hours
after the GRB event), and dotted line to the third epoch spectrum (2.4
hours after the GRB event).}
\end{figure}
\subsection{The intervening system 3}
This is the simplest system among the intervening absorbers. Its
velocity dispersion is just $\sim 30$ km s$^{-1}$ and only two ions
appear in the spectrum at the redshift of $z=0.56578$: {Mg}{II},
with the classical $\lambda$2796, $\lambda$2803 doublet and
{Fe}{II} ($\lambda$2586 and $\lambda$2600). All three epoch
spectra are well fit by a single component Voigt profile. Fig. 8 shows
the absorption from the {Mg}{II} and {Fe}{II} features,
together with our fit to the data. Table 5 reports the column
densities measured for the three epochs. We can rule out a
variability of both {Mg}{II} and {Fe}{II} lines, since the
column densities of the three epochs are consistent within the $3
\sigma$ level.
The non variability of the features belonging to the third system is
also shown in Figs 9 and 10, which display the {{Mg}{II}
$\lambda$2796} and {{Fe}{II} $\lambda$2600} features for the three
epochs, respectively.
\begin{figure}
\centering
\includegraphics[angle=-90,width=9cm]{Interv0565763.ps}
\caption{The {{Mg}{II} $\lambda$2796, $\lambda$2803} doublet and
the {{Fe}{II} $\lambda$2586, $\lambda$2600} absorption features of
the third intervening system, for the first epoch spectrum. Solid
lines represent our Voigt profile, best fit model. The vertical
line identifies the central wavelength of the Voigt profile.
}
\end{figure}
\begin{table*}
\caption{\bf Column densities for the absorption system 3}
{\footnotesize
\smallskip
\begin{tabular}{|lc|cc|}
\hline
\hline
Species & Trans. & Obs.& I (0 km s$^{-1}$) \\
\hline
Mg II & $\lambda$2796 & 1 & $12.73 \pm 0.02$ \\
& $\lambda$2803 & 2 & $12.57 \pm 0.05$ \\
& & 3 & $12.71 \pm 0.09$ \\
& &2+3 & $12.63 \pm 0.05$ \\
\hline
Fe II & $\lambda$2586 & 1 & $12.40 \pm 0.04$ \\
& $\lambda$2600 & 2 & $12.28 \pm 0.18$ \\
& & 3 & $ < 12.7 $ \\
& &2+3 & $12.18 \pm 0.15$ \\
\hline
\end{tabular}
All values are logarithmic cm$^{-2}$
}
\end{table*}
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=6cm, angle=-90]{MgII_2796_056_3epoch.ps}
\end{tabular}
\caption{A comparison between the {{Mg}{II} $\lambda$2796} optical
depth in the three UVES observation epochs, for the third intervening
system. Solid line refers to the first epoch spectrum (8m30s after the
Swift trigger), dashed line to the second epoch spectrum (1.0 hours
after the GRB event), and dotted line to the third epoch spectrum (2.4
hours after the GRB event). The slight wavelength shift of the second
and third epoch spectra with respect to the first one ($\sim 0.04$\AA$\;$
or $<3$ km/s, lower than the spectral resolution, see also fig. 10) is
possibly an offset introduced by the data reduction and not a
real effect.}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=6cm, angle=-90]{FeII_2600_056_3epoch.ps}
\end{tabular}
\caption{A comparison between the {{Fe}{II} $\lambda$2600} optical
depth in the three UVES observation epochs, for the third intervening
system. Solid line refers to the first epoch spectrum (8m30s after the
Swift trigger), dashed line to the second epoch spectrum (1.0 hours
after the GRB event), and dotted line to the third epoch spectrum (2.4
hours after the GRB event).}
\end{figure}
\subsection{The intervening system 4}
This is the system which shows the greatest number of species. Three
ions, {Mg}{II} ($\lambda$2796, $\lambda$2803), {Fe}{II}
($\lambda$2586, $\lambda$2600), {Mn}{II} ($\lambda$2576,
$\lambda$2594) and the neutral {Mg}{I} ($\lambda$2852) appear in
the spectrum at the redshift of $z=0.53035$. The velocity dispersion
of the system is $\sim 100$ km s$^{-1}$. Since both {Mg}{II}
features are strongly saturated, we used {Mg}{I} and {Fe}{II}
to guide the identification of the components. The spectrum around
these features is well fit by a six component
model. Fig. 11 shows the absorptions from the {Mg}{II} and
{Mg}{I} features, while Fig. 12 those from {Fe}{II} and
{Mn}{II}. These figures also display our six component fit to the
data. Table 6 reports the column densities measured for the three
epochs divided by components. Variability can be excluded for
{Mg}{I}, {Fe}{II} and {Mn}{II} whose components are
consistent in the three epochs within the $3 \sigma$ uncertainty.
{Mg}{II} seems not to behave this way, but most of the components of
its features are strongly saturated, so the corresponding column
density values reported in the table are less reliable. Components 1
and 6 are not saturated (the former is not saturated only in the
{Mg}{II} $\lambda$2803 feature), and they are consistent with no
variability within the $3 \sigma$ uncertainty. It is worth noting,
however, that component 6 has a positive detection in the first
observation only, while in the other epochs just upper limits can be
set. Saturation is also present in component 2 and 3 of the
{Fe}{II}, but just in the first observation.
The non variability of the features belonging to the fourth system
is also shown in Figs. 13-15, which display the {{Mg}{II}
$\lambda$2796}, {{Mg}{I} $\lambda$2852} and {{Fe}{II}
$\lambda$2600}, features for the three epochs, respectively.
\begin{figure}
\centering
\includegraphics[angle=-90,width=9cm]{Interv053033_Mg.ps}
\caption{The {{Mg}{II} $\lambda$2796, $\lambda$2803} doublet and
the {{Mg}{I} $\lambda$2852} absorption features of the fourth
intervening system, for the first epoch spectrum. Solid lines
represent our six component, best fit model. Vertical lines identify
the velocity of each component with respect to the zero point
arbitrarily placed at $z=0.53035$ and coincident with component III. }
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=-90,width=9cm]{Interv053033_Fe_Mn.ps}
\caption{The {{Fe}{II} $\lambda$2586, $\lambda$2600} and the
{{Mn}{II} $\lambda$2576, $\lambda$2594} absorption features of the
fourth intervening system, for the first epoch spectrum. Solid lines
represent our six component, best fit model. Vertical lines identify
the velocity of each component with respect to the zero point
arbitrarily placed at $z=0.53035$ and coincident with component III. }
\end{figure}
\begin{table*}
\caption{\bf Column densities for the absorption system 4}
{\footnotesize
\smallskip
\begin{tabular}{|lc|ccccccc|}
\hline
\hline
Species & Trans. & Obs.& I (25 km s$^{-1}$) & II (10 km s$^{-1}$) & III (0 km s$^{-1}$) & IV (-15 km s$^{-1}$) & V (-25 km s$^{-1}$) & VI (-50 km s$^{-1}$) \\
\hline
Mg II &$\lambda$2796& 1 & $12.94 \pm 0.02$ & $15.00S \pm 0.15$ & $15.25S \pm 0.04 $ & $13.22S \pm 1.71 $& $14.61S \pm 0.04 $ & $11.40 \pm 0.04 $ \\
&$\lambda$2803& 2 & $12.75 \pm 0.09$ & $15.24S \pm 0.56$ & $16.13S \pm 0.15 $ & $15.68S \pm 0.22 $& $13.02S \pm 0.22 $ & $ < 11.9 $ \\
& & 3 & $12.79 \pm 0.10$ & $15.81S \pm 0.06$ & $14.75S \pm 0.13 $ & $14.94S \pm 0.12 $& $13.91S \pm 0.18 $ & $ < 12.1 $ \\
& &2+3 & $12.71 \pm 0.08$ & $15.93S \pm 0.07$ & $13.83S \pm 0.16 $ & $15.39S \pm 0.24 $& $13.91S \pm 0.17 $ & $ < 11.8 $ \\
\hline
Mg I &$\lambda$2852& 1 & $11.17 \pm 0.06$ & $12.48 \pm 0.09$ & $11.67 \pm 0.03 $ & $11.44 \pm 0.03 $& $11.45 \pm 0.48 $ & $10.82 \pm 1.86 $ \\
& & 2 & $ < 11.2 $ & $11.96 \pm 0.30$ & $11.64 \pm 0.08 $ & $11.67 \pm 0.08 $& $ < 11.2 $ & $ < 11.2 $ \\
& & 3 & $ < 11.5 $ & $12.96 \pm 0.42$ & $11.79 \pm 0.09 $ & $11.53 \pm 0.13 $& $ < 11.5 $ & $ < 11.5 $ \\
& &2+3 & $ < 11.1 $ & $13.02 \pm 0.80$ & $11.71 \pm 0.07 $ & $11.59 \pm 0.07 $& $ < 11.1 $ & $ < 11.1 $ \\
\hline
Fe II &$\lambda$2586& 1 & $12.80 \pm 0.06$ & $15.74S \pm 0.09$ & $14.38S \pm 0.09 $ & $13.65 \pm 0.07 $& $13.51 \pm 0.15 $ & $ < 11.8 $ \\
&$\lambda$2600& 2 & $12.75 \pm 0.10$ & $13.67 \pm 0.47$ & $14.03 \pm 0.13 $ & $13.68 \pm 0.17 $& $13.75 \pm 0.50 $ & $ < 12.3 $ \\
& & 3 & $12.85 \pm 0.23$ & $15.06 \pm 0.81$ & $14.09 \pm 0.36 $ & $13.69 \pm 0.24 $& $13.33 \pm 0.24 $ & $ < 12.5 $ \\
& &2+3 & $12.81 \pm 0.12$ & $14.33 \pm 0.60$ & $14.05 \pm 0.16 $ & $13.67 \pm 0.19 $& $13.54 \pm 0.49 $ & $ < 12.2 $ \\
\hline
Mn II &$\lambda$2576& 1 & $ < 11.4 $ & $11.80 \pm 0.03$ & $11.78 \pm 0.03 $ & $11.56 \pm 0.16 $& $ < 11.4 $ & $ < 11.4 $ \\
&$\lambda$2594& 2 & $ < 12.4 $ & $ < 12.4 $ & $ < 12.4 $ & $ < 12.4 $& $ < 12.4 $ & $ < 12.4 $ \\
& & 3 & $ < 12.7 $ & $ < 12.7 $ & $ < 12.7 $ & $ < 12.7 $& $ < 12.7 $ & $ < 12.7 $ \\
& &2+3 & $ < 12.2 $ & $ < 12.2 $ & $ < 12.2 $ & $ < 12.2 $& $ < 12.2 $ & $ < 12.2 $ \\
\hline
\end{tabular}
All values are logarithmic cm$^{-2}$
}
\end{table*}
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=6cm, angle=-90]{MgII_2796_053_3epoch.ps}
\end{tabular}
\caption{A comparison between the {{Mg}{II} $\lambda$2796} optical
depth in the three UVES observation epochs, for the fourth intervening
system. Solid line refers to the first epoch spectrum (8m30s after the
Swift trigger), dashed line to the second epoch spectrum (1.0 hours
after the GRB event), and dotted line to the third epoch spectrum (2.4
hours after the GRB event). The $\sim 0.04$\AA$\;$ wavelength shift (see
also figs. 14 and 15) is present also in this system, confirming that
its nature is not physical, but due to the reduction process.}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=6cm, angle=-90]{MgI_2852_053_3epoch.ps}
\end{tabular}
\caption{A comparison between the {{Mg}{I} $\lambda$2852} optical
depth in the three UVES observation epochs, for the fourth intervening
system. Solid line refers to the first epoch spectrum (8m30s after the
Swift trigger), dashed line to the second epoch spectrum (1.0 hours
after the GRB event), and dotted line to the third epoch spectrum (2.4
hours after the GRB event).}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=6cm, angle=-90]{FeII_2600_053_3epoch.ps}
\end{tabular}
\caption{A comparison between the {{Fe}{II} $\lambda$2600} optical
depth in the three UVES observation epochs, for the fourth intervening
system. Solid line refers to the first epoch spectrum (8m30s after the
Swift trigger), dashed line to the second epoch spectrum (1.0 hours
after the GRB event), and dotted line to the third epoch spectrum (2.4
hours after the GRB event).}
\end{figure}
\section{Conclusions}
In this paper we present high resolution (R=40000, corresponding to
7.5 km/s) spectroscopy of the optical afterglow of the ``naked-eye''
Gamma Ray Burst GRB080319B ($z=0.937$), observed by UVES at the VLT in
three different epochs, starting $\sim 8.30$ minutes, $\sim 1$ and
$\sim 2.4$ hours after the trigger, respectively.
We concentrate here on the intervening absorbers along the GRB
sightline, since the absorption features in the vicinity of the
afterglow have already been studied and presented in D'Elia et
al. (2009a). Our spectral coverage allows us to analyze a redshift path
for the {{Mg}{II} $\lambda$2796} line in the range $z=0.36 - 0.937$.
We are sensitive to lines with EW down to $0.015$ \AA$\;$ (at the
$2\sigma$ confidence level), thanks to the extremely high S/N of the
first RRM observation. At least four intervening systems between z =
0.8 and z = 0.5 have been identified. Most of them show a complex
structure constituted by several components, similar to that of the
intervening absorbers along the QSO sightlines (Churchill \& Vogt
2001). All systems feature the {{Mg}{II} $\lambda$2796,
$\lambda$2803} doublet. {Fe}{II}, {Mg}{I} and {Mn}{II}
lines are detected in four, two and one systems, respectively.
Prochter et al. (2006) claimed that the incidence of strong
{{Mg}{II}} absorbers along GRB sight lines is nearly four times
higher than that along the line of sight to QSOs. They analyzed the
spectra of 14 GRB optical afterglows, finding on average one
intervening system per afterglow with equivalent width $>1$ \AA. The
GRB080319B sightline confirms this trend, since the rest frame EW of
the {{Mg}{II} $\lambda$2796} absorption line is $>1$\AA$\;$ in one
system.
Structured intervening systems are expected if the discrepancy between
QSOs and GRBs intervening absorbers is due to a different size of the
source, namely, if QSO emitting regions are larger than GRBs which in
turn are comparable to the typical size of the {Mg}{II} clouds
(Frank et al. 2007). According to their estimation, the QSO beam size
must be larger than 3 $\times$ 10$^{16}$ cm while for the GRB beam
they suggest a size around a few $\times$ 10$^{15}$ cm. To observe
this effect, the MgII absorber should then be patchy with a clump size
around 10$^{16}$ cm. Nevertheless, in this scenario, variability in the
column densities of the {{Mg}{II}} absorbers along the GRB
sightlines is expected. {{Mg}{II}} variability in multi-epoch
spectroscopy data on GRB060206 was first claimed by the analysis by
Hao et al. (2007), but then disproved by Aoki et al. (2008) and
Th\"one et al. (2008). This work confirms the lack of variability in
the column densities of the GRB intervening absorbers, in the specific
case of GRB080319B. In particular, the EW of the {{Mg}{II}
$\lambda$2796} absorption line is consistent within $2\sigma$ in the
three UVES observations. Given the S/N ratio of our data, we can
conclude that a variability of this feature, if present, is less than
10\% at the $3\sigma$ confidence level. This upper limit is three
times smaller than the ones that can be estimated for GRB060206 using
the data by Aoki et al. (2008). Racusin et al. (2008b) modeled the
radiation of GRB080319B as coming from a structured jet constituted by
an inner, narrow jet ($\theta_n = 0.2^o$) and an outer, wider jet
($\theta_w = 4^o$). They interpreted the optical emission at $T < T_0
+ 800$s ($T_0$ being the burst detection time) as produced by the
reverse shock of the inner jet, while that at $T > T_0 + 800$ as the
signature of the forward shock of the outer jet. Since the first UVES
observation starts at $T \sim T_0 + 500$ while the second and the
third ones start more than $1$hr later, a strong increase in the size
of the emitting region is expected from this scenario. In more
detail, the dimension of the emitting region is $\sim 10^{17}$ cm at
the beginning of the GRB event and $\sim 10^{18}$ cm a few hours later
(Pandey et al. 2009, Racusin et al. 2008b and Kumar \& Panaitescu
2008). Although we cannot be more quantitative, a variation of a
factor of ten in the dimension of the emitting region would imply a
considerable reduction of the intervening column densities, but no
such reduction is detected. Indeed, time resolved, high resolution spectroscopy
allows us for the first time to exclude a significant ($>3\sigma$)
variability even by considering the single components of each
intervening system alone.
Porciani et al. (2007) investigated several possible explanations for
the strong {{Mg}{II}} excess, namely, dust obscuration bias,
clustering of the absorbers, different beam sizes of the sources,
multiband magnification bias of GRBs, association of the absorbers
with the GRB event or with the circumburst environment. They concluded
that none of these effects alone can explain the observed difference,
but maybe the combination of two or more of them can reduce the
significance of the discrepancy. We can take advantage of the
dimensions of the emitting source in GRB080319B and the lack of
variability in the intervening absorbers to characterize the absorbers
along this GRB sightline. Since these dimensions are in the range
$10^{17} - 10^{18} $ cm, the {Mg}{II} (and other species) clouds
cannot have clumps or structures smaller than $0.1 - 1$ pc, otherwise
we should observe variability in their absorption. This is not
obvious, since Ding et al. (2003) show that smaller intervening
absorbers are present along the line of sight to QSOs.
Cucchiara et al. (2009) compared 81 QSOs and 6 GRB lines of sight
obtained with UVES. They found no significant evidence to support a
difference between the two absorber populations, concluding that a
possible explanation for the Mg II excess could be intrinsic
to the GRB environment. A similarity between the narrow lines of the
GRB intervening absorbers and those produced by the material ejected in
the accretion disk winds of QSOs has been pointed out. Nevertheless,
the high velocities required by the intervening redshifts ($v/c \sim
0.2 - 0.4$) and the lack of fine structure features in these systems
represent a strong weakness of this picture. In addition, Tejos et
al. (2009) recently found no evidence for a similar excess considering
the weak ($0.07 < EW < 1$ \AA) Mg II systems along the lines of
sight of 8 GRBs observed with echelle spectrographs. These authors tend
to exclude an intrinsic nature of the discrepancy, since it would
result in an excess in the weak systems too. They suggest that the
best explanation available at present could be gravitational lensing
bias due to lensing by the host galaxy of the absorber. Indeed, the
strong Mg II system along the GRB080319B sightline has a
velocity of $\sim 36300$ km/s with respect to the GRB. The data show no
evidence of broad profiles or partial coverage (see
e.g. D'Odorico et al. 2004), so this absorber is likely an intervening
system.
Combining the data by Prochter et al. (2006) with the results from the
analysis in this paper and that in D'Elia et al. (2009b) regarding the
line of sight to GRB080330, we obtain 16 strong (EW $> 1$ \AA)
Mg II intervening absorbers along 16 GRB sightlines. The total
redshift path becomes $17.02$, and this results in $dn/dz = 0.94$.
This surprising excess of strong Mg II absorbers with respect
to QSO sightlines remains a matter of debate and a satisfactory
explanation is still missing. Clearly more observations and analysis
are needed in order to solve this issue.
\section{Introduction}
Quantum Electrodynamics is a theory experimentally verified with a high level of accuracy, mainly through the study of perturbative effects observed, for example, in particle-particle collisions. Nonetheless, Quantum Field Theory (QFT) allows us to make predictions beyond this perturbative regime. This is the case, for instance, for the creation of particle-antiparticle pairs from matter fields coupled to intense electromagnetic fields: the so-called Schwinger effect. This phenomenon was first suggested by F. Sauter \cite{Sauter1931742}, although it carries the name of Schwinger, as he was the one who first explained it in the context of Quantum Electrodynamics for slowly varying fields \cite{Schwinger1951}.
This kind of particle creation phenomenon is, in general, typical of situations in which the vacuum state of a quantum field is strongly affected by an external agent. In QFT in curved spacetimes the curvature of the spacetime plays the role of this agent, as happens, for example, in the Hawking effect, which describes the radiation of a black hole \cite{BlackHexplosons}. The problem is that the direct experimental verification of these tiny particle creation effects caused by the gravitational field is still out of reach (although quantum acoustic Hawking radiation from analogue black holes in atomic Bose-Einstein condensates has already been observed \cite{Steinhauer_2016}). Analogously, in order to verify the Schwinger effect empirically it is necessary to generate electromagnetic fields above the Schwinger limit, which is of the order of $10^{18} \ \mathrm{V/m}$ \cite{Yakimenko_2019}, with all the technical difficulties that dealing with such intense fields implies. However, there are experimental proposals, such as the ones based on ultraintense lasers \cite{Lasers}, which may make the Schwinger effect one of the first non-perturbative phenomena tested. Thus, it is a unique opportunity for studying the non-perturbative regime in QFT.
In the process of canonical quantization of a free field in Minkowski spacetime, it is usual to unitarily implement the invariance of the classical system under the Poincaré symmetry in the quantum theory. This selects a unique set of annihilation and creation operators. Thus, there is no ambiguity in the definition of the quantum vacuum and unique notions of particles and antiparticles exist.
Nevertheless, we should keep in mind that QFT is a theory in terms of fields and not of particles. Indeed, in general curved spacetimes this particle interpretation might not even exist. The key point is that for general geometries the classical system may be invariant under fewer symmetries, or under none at all. In particular, particle creation effects occur in general when the external agent breaks the invariance under time translations, so that there is no preferred vacuum, but infinitely many different choices which, in fact, could give rise to non-equivalent quantum theories.
While the motivation for this work comes from the study of QFT in curved spacetimes, in the context of the Schwinger effect we consider a flat background. In this case, an intense electromagnetic field coupled to the matter fields is responsible for breaking the Poincaré symmetry of Minkowski spacetime, implying the creation of particles throughout the evolution of the vacuum.
With the aim of reducing the ambiguity in the choice of vacuum, it is desirable to find other physically reasonable requirements to impose on the quantizations. In the context of fields propagating through homogeneous cosmologies a proposal has been put forward which allows one to select a unique family of unitarily equivalent Fock representations and, therefore, physically equivalent quantizations \cite{Cortez:2019orm,Cortez:2020rla}: the unitary implementation of both the classical symmetries of the equations of motion and the quantum field dynamics at all times.
Invariance of the classical system under any symmetry transformation implies in the quantum theory that, if the vacuum remains invariant, it is possible to implement the symmetry transformation unitarily. When such transformations are not symmetries or the vacuum is not invariant, one can still try to impose a weaker condition, namely a unitary (non-trivial) implementation of such transformations. This, in particular, applies to time translations. For a deeper discussion on this point, we refer the reader to \cite{universe7080299}. If the unitarity of the dynamics is imposed, the quantizations at each time, which might not be a priori equivalent, are assured to provide the same physics. This requirement in QFT reminds us of what we do in Quantum Mechanics when we work interchangeably in the Schrödinger, interaction and Heisenberg pictures. All of them are known to be related by a unitary evolution operator, which allows us to go from one picture to another, rendering equivalent physical descriptions. In addition, for particle creation phenomena such as the Schwinger effect, the requirement of unitarily implementing the dynamics is equivalent to having a well-defined number of generated particles at all finite times.
Unitary dynamics at all finite times is a stronger condition than the usual one found in the literature \cite{Gavrilov:1996pz,WALD1979490}, which only imposes that the states in the asymptotic past and future (when the external agent stops being coupled to the matter fields) are connected unitarily by the $S$-matrix. Actually, these approaches do not succeed in reducing the ambiguity in the definition of the number of particles created in the process during the interaction between matter and external fields. Moreover, in spite of the existence of the $S$-matrix for general backgrounds \cite{WALD1979490}, there is no certainty that there exist asymptotic free-particle states in non-trivial backgrounds \cite{wald1994quantum}.
The aim of this work is to study the canonical quantization of a massive charged Dirac field in a flat spacetime coupled to a homogeneous time-dependent electric field, extending to the fermionic case the results achieved in \cite{Garay2020} for the scalar field. In addition, we will consider the electric field to be intense enough so that the test matter field does not modify its dynamics; i.e., we will neglect the backreaction. Thus, the electric field will be classical, external and non-dynamical. The present system has already been studied following diverse approaches reviewed in \cite{Gelis_2016}, such as the quantum kinetic equation \cite{Smolyansky1997DynamicalDO,Fedotov_2011}, or the Wigner formalism \cite{Hebenstreit:2010cc,Schwingerwigner}. Here we follow a different procedure.
The main result of our work is the characterization of the possible Fock representations which unitarily implement both the symmetries of the system and the dynamics. Furthermore, we prove that the ambiguity in the quantization is reduced so that they are all unitarily equivalent and, therefore, lead to the same transition amplitudes. In particular, we identify the quantizations with a well-defined total number of particles and antiparticles created throughout the evolution of the vacuum at any instant of time. Nevertheless, the particular definition for this number of particles will still depend on the specific vacuum chosen within the unitary equivalence class.
On the other hand, we will also show that, in order for a quantization to be part of this unitary equivalence class, it must explicitly depend on the electromagnetic potential externally applied to the fermionic field. In particular, this implies that in this non-trivial background the usual Minkowski quantization (fixed by the Poincaré symmetry) does not unitarily implement the dynamics. This is, in fact, in agreement with previous works \cite{Ruijsencharged}.
In section \ref{sec_theoreticalframework}, we describe the canonical quantization of a charged fermionic field in a general electromagnetic background, highlighting the ambiguity in the split of the Hilbert space in one-particle and one-antiparticle sectors. In section \ref{sec_bogoliubov}, we present the concepts of unitary implementation of time-dependent canonical transformations between different Fock representations, also known as Bogoliubov transformations. In section \ref{section_fermionic}, we start particularizing our previous results to the case of a charged fermionic field coupled to a homogeneous electric field. Particularly, we propose that gauge transformations be trivially implemented to keep homogeneity. In section \ref{sec_uniquenessquantization} we prove that the unitary implementation of the dynamics, together with the symmetries of the system, reduces the ambiguity in the quantization to a unique equivalence class. Finally, in section \ref{sec_conclusions} we present a summary and a discussion of the main results of this work.
\section{Fermionic canonical quantization} \label{sec_theoreticalframework}
Our system consists of a fermionic field described by a Dirac spinor $\psi$ of mass $m$ and non-zero charge $q$ coupled to an electromagnetic field in Minkowski spacetime. The starting point of our analysis is its action, namely
\begin{equation}
S=\int d^4x \ \Bar{\psi}\left[ i\gamma^\mu(\partial_\mu+iqA_\mu)-m\right] \psi,
\label{fermionicaction}
\end{equation}
where $x=(t,\textbf{x})$, $A_{\mu}$ is the four-vector potential of the external electromagnetic field (which will remain classical all along), $\gamma^\mu$ are the Dirac matrices, and the bar denotes Dirac adjoint ($\Bar{\psi}=\psi^\dagger\gamma^0$).
Spinors are half-integer spin representations of the Lorentz group. In order to capture the fermionic nature of this field, we will consider the components of $\psi$ and $\bar{\psi}$ to be Grassmann anticommuting variables. Consequently, in order to calculate the Euler-Lagrange equations, the derivatives of the action $S$ should be either left or right Grassmann derivatives. We will choose left Grassmann derivatives, which are defined as derivatives of an expression where the variable with respect to which the expression is being differentiated occupies the first place in multiplicative order. Thus, the Dirac equation of motion is
\begin{equation}
\left[i\gamma^\mu(\partial_\mu+iqA_\mu)-m\right]\psi=0.
\label{eqdirac}
\end{equation}
One way to proceed in the description of the classical theory is the canonical approach. The canonical phase space of the system is the set of pairs composed of a field and its conjugate momentum $(\psi_0(\textbf{x}),\pi_0(\textbf{x}))$, defined on a Cauchy surface $\Sigma_{t_0}$. Each pair of functions is assumed to be smooth and represents one possible set of initial conditions for the solutions to the equations of motion, $(\psi(x),\pi(x))$. By definition, this canonical phase space is also endowed with a symmetric Poisson bracket structure. For two Grassmann variables $A$ and $B$ it is defined as
\begin{align}
\poissonbracket{A}{B}_{\text{G}}=-\sum_{\alpha}\left(\partial^{\text{G}}_{\theta^\alpha}A\partial^{\text{G}}_{\pi_\alpha}B+\partial^{\text{G}}_{\theta^\alpha}B\partial^{\text{G}}_{\pi_\alpha}A\right),
\label{poissonbracket}
\end{align}
where $\partial^{\text{G}}_{\theta^{\alpha}}$ and $\partial^{\text{G}}_{\pi_{\alpha}}$ denote the left Grassmann derivatives with respect to the canonical Grassmann variable $\theta^{\alpha}$ and its conjugate momentum $\pi_{\alpha}$, respectively \cite{Casalbuoni1976}.
The Dirac equation \eqref{eqdirac} has a well-posed initial value formulation, in the sense that for every pair $(\psi_0,\pi_{0})$ there exists a unique smooth solution $\psi$ on the whole spacetime manifold satisfying the initial conditions $\psi|_{\Sigma_{t_0}}=\psi_0$ and $\pi|_{\Sigma_{t_0}}=-i\psi^{\dagger}|_{\Sigma_{t_0}} = \pi_{0}$. Therefore, we can identify this canonical phase space with the so-called covariant phase space: the vector space $\mathcal{S}$ of classical solutions of the Dirac equation with smooth initial data.
Using integration by parts it can be shown that the form on $\mathcal{S}$ given by
\begin{align}
Q(\psi_1,\psi_2)=\int_{\Sigma_t} d^3\textbf{x} \ \psi_1^{\dagger}(x)\psi_2(x)
\end{align}
does not depend on the Cauchy surface $\Sigma_t$ on which it is evaluated, i.e., it is time independent. Moreover, $Q$ is a positive-definite inner product, which endows the space of solutions $\mathcal{S}$ with a complex Hilbert space structure. We find here the fundamental difference with respect to the case of a charged scalar field in an electromagnetic background: the space of solutions of the Klein-Gordon field coupled to an external electromagnetic potential has no natural inner product defined on it, but an antisymmetric symplectic form which fails to be positive definite \cite{wald1994quantum,Garay2020}. In that case, the introduction of a complex structure is necessary in order to construct a Hilbert space of solutions.
\subsection{One-particle and one-antiparticle Hilbert spaces} \label{sec_split}
Despite the fact that in the fermionic case $\mathcal{S}$ is directly a Hilbert space of solutions, in order to quantize the theory we will define a separable Hilbert space of particles and antiparticles, for which the introduction of a complex structure is necessary.
Indeed, the freedom in defining the one-particle Hilbert space is completely characterized by a choice of complex structure $J:\mathcal{S}\rightarrow\mathcal{S}$, which by definition is a real antihermitian operator satisfying $J^2=-\mathbb{1}$. It defines orthogonal projection operators
\begin{equation}
P^{\pm}=(\mathbb{1}\mp iJ)/2
\end{equation}
on the two spectral eigenspaces of $J$ with eigenvalues $\pm i$.
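The algebra of $J$ and $P^{\pm}$ is easy to check explicitly. The following minimal numerical sketch (in Python; the four-dimensional matrix $J$ below is a hypothetical toy choice, not the complex structure of our field) verifies that $J^2=-\mathbb{1}$ and that $P^{\pm}$ are orthogonal projectors onto the $\pm i$ eigenspaces:
\begin{verbatim}
import numpy as np

# Toy complex structure on a four-dimensional solution space:
# J is real, antisymmetric and squares to minus the identity.
J = np.kron(np.eye(2), np.array([[0.0, -1.0], [1.0, 0.0]]))
assert np.allclose(J @ J, -np.eye(4))

# Projectors P^(pm) = (1 -+ iJ)/2 onto the +i / -i eigenspaces of J
P_plus = (np.eye(4) - 1j * J) / 2
P_minus = (np.eye(4) + 1j * J) / 2

assert np.allclose(P_plus @ P_plus, P_plus)             # idempotent
assert np.allclose(P_plus @ P_minus, np.zeros((4, 4)))  # orthogonal
assert np.allclose(P_plus + P_minus, np.eye(4))         # identity resolution
assert np.allclose(J @ P_plus, 1j * P_plus)             # eigenvalue +i
\end{verbatim}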
We define the particle states as the ones associated with the eigenvalue $+i$. Thereby, the states associated with $-i$ could be viewed as \textit{holes} which represent the antiparticle states. In this way, the one-particle Hilbert space $\mathcal{H}^{\text{p}}$ can then be identified with the Cauchy completion of $P^+\mathcal{S}$ with respect to $Q$. The choice of $\mathcal{H}^{\text{p}}$ determines unequivocally the one-antiparticle Hilbert space, $\mathcal{H}^{\text{ap}}$, whose elements are called antiparticle states. If $\psi\in P^{+}\mathcal{S}$ is a particle, $\psi^*\in P^{-}\mathcal{S}$ is its antiparticle, but viewed as a hole. In order to see $\psi^*$ as an antiparticle state, $\mathcal{H}^{\text{ap}}$ is taken as the subspace of $\mathcal{S}^*$ associated with the eigenvalue $+i$ of $J$. In other words, $\mathcal{H}^{\text{ap}}$ is defined as the Cauchy completion (with respect to $Q^*$) of $P^+\mathcal{S}^*$. In fact, as a matter of consistency, it is easy to see that $P^+\mathcal{S}^*=(P^-\mathcal{S})^*$. The complete Hilbert space is then $\mathcal{H}=\mathcal{H}^{\text{p}}\oplus \mathcal{H}^{\text{ap}}$. Let us remark here that $\mathcal{H}$ is not the Hilbert space of solutions $\mathcal{S}$. In fact, $\mathcal{S}=\mathcal{H}^{\text{p}}\oplus (\mathcal{H}^{\text{ap}})^*$, whereas $\mathcal{H}=\mathcal{H}^{\text{p}}\oplus \mathcal{H}^{\text{ap}} \subset \mathcal{S}\oplus \mathcal{S}^*$.
We can now choose orthonormal bases $\{\psi_n^{\text{p}}\}\subset \mathcal{H}^{\text{p}}$, $\{(\psi_n^{\text{ap}})^*\}\subset \mathcal{H}^{\text{ap}}$, so that $\{\psi_n^{\text{p}},(\psi_n^{\text{ap}})^*\}$ is an orthonormal basis of $\mathcal{H}$. Consequently, for every solution $\Psi \in \mathcal{S}$ there exist unique complex coefficients $a_n$ and $b_n^*$ such that
\begin{equation}
\Psi(x)=\sum_n \big[ a_n\psi_n^{\text{p}}(x)+b_n^*\psi_n^{\text{ap}}(x) \big].
\label{Psipap}
\end{equation}
These coefficients, which are associated with the complex structure $J$, are called annihilation and creation variables, respectively. Here we consider the label $n$ to be discrete. In the case that $n$ were a continuous index, all equations would be naturally written with integrals instead of summations.
It is easy to see that the symmetric Poisson bracket structure \eqref{poissonbracket} induces the following algebra for the creation and annihilation variables:
\begin{align}
\{a_n,a_m^*\}_{\text{G}}=&\{b_n,b_m^*\}_{\text{G}}=-i\delta_{n,m},
\label{poissoncndn}
\end{align}
the rest of Poisson brackets among them being zero.
\subsection{Canonical quantization}
In order to define the quantum theory, the full Hilbert space is chosen to be the antisymmetric Fock space,
\begin{equation}
\mathcal{F}=\oplus_{n=0}^{\infty}\left( \otimes^n_{\text{A}} \mathcal{H} \right),
\end{equation}
where $\otimes^n_{\text{A}} \mathcal{H}$ is the antisymmetric tensor product of $n$ copies of $\mathcal{H}$. The complex coefficients $a_n$ and $b_n$ are mapped to annihilation operators $\hat{a}_n,\hat{b}_n$ acting on the Fock space, verifying the canonical anticommutation relations obtained by the prescription
\begin{equation}
\{\cdot,\cdot\}_{\text{G}}\rightarrow \{\hat{\cdot},\hat{\cdot}\}=i\widehat{\{{\cdot},{\cdot}\}}_{\text{G}},
\end{equation}
where $\{\hat{\cdot},\hat{\cdot}\}$ is the anticommutator. The only non-vanishing relations among them are therefore
\begin{equation}
\{\hat{a}_n,\hat{a}_m^{\dagger}\}=\{\hat{b}_n,\hat{b}_m^{\dagger}\}=\delta_{n,m}.
\end{equation}
Furthermore, we define the Fock vacuum as the state which is annihilated by all annihilation operators $\hat{a}_n$ and $\hat{b}_n$. Finally, the quantum field operator $\hat{\Psi}$ on the Fock space is defined in the Heisenberg picture by simply substituting the coefficients $a_n$ and $b_n$ by their corresponding operators in \eqref{Psipap}; i.e.,
\begin{equation}
\hat{\Psi}(x)=\sum_n \big[ \hat{a}_n\psi_n^{\text{p}}(x)+\hat{b}_n^{\dagger}\psi_n^{\text{ap}}(x) \big].
\label{Psipapquantized}
\end{equation}
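For readers who prefer an explicit matrix realization, the anticommutation relations and the action of the annihilation operators on the Fock vacuum can be checked in a hypothetical two-mode truncation of the theory, using the standard Jordan--Wigner construction (a minimal Python sketch, not part of the formalism above):
\begin{verbatim}
import numpy as np

# Two fermionic modes on a 4-dimensional Fock space, with basis states
# labelled by the occupation numbers (0,0), (0,1), (1,0), (1,1).
s_minus = np.array([[0.0, 1.0], [0.0, 0.0]])  # single-mode annihilator
Z = np.diag([1.0, -1.0])                      # parity (Jordan-Wigner) string
a_hat = np.kron(s_minus, np.eye(2))           # annihilates a "particle"
b_hat = np.kron(Z, s_minus)                   # annihilates an "antiparticle"

def anticomm(X, Y):
    return X @ Y + Y @ X

assert np.allclose(anticomm(a_hat, a_hat.conj().T), np.eye(4))
assert np.allclose(anticomm(b_hat, b_hat.conj().T), np.eye(4))
assert np.allclose(anticomm(a_hat, b_hat), np.zeros((4, 4)))
assert np.allclose(anticomm(a_hat, b_hat.conj().T), np.zeros((4, 4)))

vac = np.array([1.0, 0.0, 0.0, 0.0])          # Fock vacuum
assert np.allclose(a_hat @ vac, 0.0) and np.allclose(b_hat @ vac, 0.0)
\end{verbatim}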
In summary, every complex structure $J$ defines a split of the space of solutions $\mathcal S$ which leads to two one-particle Hilbert spaces, namely $\mathcal{H}^{\text{p}}$ and $\mathcal{H}^{\text{ap}}$. Therefore, the definitions of what we call particles (elements of $\mathcal{H}^{\text{p}}$) and antiparticles (elements of $\mathcal{H}^{\text{ap}}$) are characterized by the complex structure, which keeps all the information of the particular quantization chosen for the classical theory.
\section{Bogoliubov transformations} \label{sec_bogoliubov}
In the definition of the one-particle Hilbert space $\mathcal{H}^{\text{p}}$ and, therefore, in the selection of the annihilation and creation operators of the quantum theory, there is an ambiguity based on the choice of the complex structure $J$. We are interested in finding whether some complex structures are more adequate than others in order to quantize our theory.
One requirement which is reasonable to impose is that $J$ defines a quantization which unitarily implements the symmetries of the classical system. This condition is extremely restrictive in certain cases, for instance for free quantum fields in Minkowski spacetime: if the Poincaré symmetry is unitarily implemented in the quantum theory, the ambiguity in the selection of the complex structure disappears and a unique complex structure is selected. Thereby, there is a preferred vacuum and well-defined notions of particle and antiparticle can be provided. However, for general geometries, or even for Minkowski spacetime with a non-trivial background (such as an electromagnetic potential coupled to the fields), the classical theory is not invariant under such a group of transformations, and consequently, imposing symmetries in the quantum theory can reduce the choice of the complex structure but, in general, does not suffice to fix it unequivocally. Then, the concepts of particle and antiparticle inherit the residual ambiguity.
\subsection{Classical Bogoliubov transformations} \label{sec_classicalbog}
With the aim of reducing the ambiguity in the choice of the complex structure, we need to compare different representations of the canonical anticommutation relations and understand whether they can be physically related in some sense under certain conditions. In order to mathematically formulate this problem, we are going to consider canonical transformations of the fields.
Let us consider a vector space of classical solutions $\mathcal{S}$ and two complex structures on it, $J$ and $\tilde{J}$, determining Hilbert spaces $\mathcal{H}$ and $\tilde{\mathcal{H}}$, respectively, with the corresponding orthonormal bases $\{\psi^{\text{p}}_n,(\psi^{\text{ap}}_n)^*\} \subset \mathcal{H}$ and $\{\tilde{\psi}^{\text{p}}_n,(\tilde{\psi}^{\text{ap}}_n)^*\}\subset \tilde{\mathcal{H}}$. We can write any classical solution $\Psi\in\mathcal{S}$ in terms of both bases:
\begin{equation}
\Psi=\sum_n \big( a_n\psi_n^{\text{p}}+b_n^*\psi_n^{\text{ap}} \big)
=\sum_n \big( \tilde{a}_n\tilde{\psi}_n^{\text{p}}+\tilde{b}_n^*\tilde{\psi}_n^{\text{ap}}\big).
\label{Psicctilde}
\end{equation}
Classically, $\Psi$ is the same solution expressed in different bases. Nevertheless, since what we quantize are the annihilation and creation variables $\{a_n,b^*_n\}$, $\{\tilde{a}_n,\tilde{b}^*_n\}$, which are different for $J$ and $\tilde{J}$, both representations will lead to different field operators.
A canonical transformation can be written as
\begin{equation}
\mqty(\tilde{a}_n\\\tilde{b}^*_n)=\sum_m\mqty(\alpha_{nm}^{\text{p}} & \beta_{nm}^{\text{p}} \\ \beta_{nm}^{\text{ap}} & \alpha_{nm}^{\text{ap}})\mqty(a_m \\ b_m^*),
\label{cndnbogoliubov}
\end{equation}
with the matrix entries satisfying appropriate relations so that the Poisson algebra of the creation and annihilation variables is preserved. Any canonical transformation of this sort is called a classical Bogoliubov transformation \cite{wald1994quantum}.
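For a single pair of modes, the constraints on the matrix entries become transparent: preservation of the bracket algebra forces the $2\times2$ Bogoliubov matrix to be unitary, and a non-vanishing $\beta$ visibly mixes the particle and antiparticle sectors. A Python sketch with a hypothetical mixing angle and phase, reusing \texttt{a\_hat}, \texttt{b\_hat} and \texttt{anticomm} from the sketch of the previous section:
\begin{verbatim}
import numpy as np

# Single-mode fermionic Bogoliubov transformation
#   a~ = alpha a + beta b^dag,   b~^dag = -beta* a + alpha* b^dag.
th, ph = 0.3, 1.1                      # hypothetical angle and phase
alpha, beta = np.cos(th), np.sin(th) * np.exp(1j * ph)
M = np.array([[alpha, beta], [-np.conj(beta), np.conj(alpha)]])
assert np.allclose(M @ M.conj().T, np.eye(2))   # unitary <=> CAR preserved

a_new = alpha * a_hat + beta * b_hat.conj().T
b_new_dag = -np.conj(beta) * a_hat + np.conj(alpha) * b_hat.conj().T
assert np.allclose(anticomm(a_new, a_new.conj().T), np.eye(4))
assert np.allclose(anticomm(a_new, b_new_dag), np.zeros((4, 4)))
# For beta != 0 the old vacuum is not annihilated by a_new:
# the transformation mixes particles and antiparticles.
\end{verbatim}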
The concepts of particle and antiparticle are in general different for both complex structures: the coefficients $\beta_{nm}^{\text{p}}$ mix the states in $\tilde{\mathcal{H}}^{\text{p}}$ with the states in $\mathcal{H}^{\text{ap}}$; and $\beta^{\text{ap}}_{nm}$ mix $\tilde{\mathcal{H}}^{\text{ap}}$ with $\mathcal{H}^{\text{p}}$. Only if these $\beta$-coefficients vanished would the definitions of particle and antiparticle be the same for $J$ and $\tilde{J}$. In fact, this trivial transformation would correspond to independent changes of bases in the one-particle and in the one-antiparticle Hilbert spaces, not modifying the quantization of the classical theory.
Note that, in general, a classical time-dependent Bogoliubov transformation $B$ can connect two isomorphic vector spaces of solutions $\mathcal{S}$ and $\mathcal{S}'$. For example, as we are going to see in section \ref{sec_gauge}, the Dirac equation is not invariant under a $U(1)$ gauge transformation, but covariant. Nevertheless, when choosing a complex structure $\tilde{J}'$ on $\mathcal{S}'$, it is always possible to write the corresponding complex structure on $\mathcal{S}$:
\begin{equation}
\tilde{J}=B^{-1}\tilde{J}'B.
\end{equation}
Then, without loss of generality we will assume in the following that every complex structure is defined on the domain of the Bogoliubov transformation.
\subsection{Quantum Bogoliubov transformations} \label{sec_unitary}
Let $B:\mathcal{S}\rightarrow\mathcal{S}'$ be a classical (time-dependent) Bogoliubov transformation. Two complex structures $J$ and $\tilde{J}=B^{-1}\tilde{J}'B$ on $\mathcal{S}$ provide quantum field operators $\hat{\Psi}$ and $\tilde{\hat{\Psi}}$, respectively, associated with a classical solution $\Psi\in\mathcal{S}$. $B$ is said to be unitarily implementable in the quantum theory if and only if it can be represented by a unitary operator $\hat{U}_B:\mathcal{F} \rightarrow \tilde{\mathcal{F}}$ such that
\begin{equation}
\tilde{\hat{\Psi}}=\hat{U}_B\hat{\Psi}\hat{U}_B^{-1}.
\end{equation}
In this case, $J$ and $\tilde{J}$ are said to be unitarily equivalent, and the states $\ket{\phi}\in \mathcal{F}$ and $|\tilde{\phi}\rangle=\hat{U}_B\ket{\phi}\in \tilde{\mathcal{F}}$ lead to the same transition amplitudes; i.e., $\langle\tilde{\phi}_1|\tilde{\hat{\Psi}}(x)|\tilde{\phi}_2\rangle=\bra{\phi_1}\hat{\Psi}(x)\ket{\phi_2}$.
A rather useful characterization of the condition of unitary equivalence between two complex structures was given by Shale \cite{Shale:1962,RUIJSENAARS1978105}: the necessary and sufficient condition for a classical Bogoliubov transformation to be unitarily implementable in the quantum theory is that its $\beta$-Bogoliubov coefficients are Hilbert-Schmidt; namely, that they satisfy
\begin{equation}
\sum_{n,m} \left( |\beta_{nm}^{\text{p}}|^2+|\beta_{nm}^{\text{ap}}|^2 \right) < \infty.
\label{betaconvergent}
\end{equation}
As we are considering general time-dependent Bogoliubov transformations, this condition has to be satisfied at all finite times.
For systems with finite number of degrees of freedom, the sum in \eqref{betaconvergent} is trivially finite, which proves that all quantizations are unitarily equivalent, in agreement with the Stone-von Neumann theorem \cite{wald1994quantum}. However, in the infinite dimensional case, as we are going to see later, there exist Bogoliubov transformations which cannot be implemented as unitary operators, so there exist unitarily inequivalent quantizations. Consequently, the selection of the complex structure plays a relevant role in the process of quantization, and some effort should be made to distinguish which ones are more appropriate in each case.
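A quick numerical illustration of Shale's criterion, with two hypothetical discrete families of $\beta$-coefficients, one Hilbert--Schmidt and one not (Python sketch):
\begin{verbatim}
import numpy as np

# Truncated double sums sum_(n,m<=N) |beta_nm|^2 for two toy families:
# beta_nm = 1/(n*m) is Hilbert-Schmidt (sum -> (pi^2/6)^2 ~ 2.705),
# beta_nm = 1/(n+m) is not (the partial sums grow like log N).
for N in (100, 400, 1600):
    n = np.arange(1, N + 1, dtype=float)
    hs = np.sum(np.outer(1 / n, 1 / n) ** 2)
    not_hs = np.sum(1.0 / np.add.outer(n, n) ** 2)
    print(N, hs, not_hs)
\end{verbatim}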
Thus, we need to impose physical criteria on the complex structure in order to reduce the ambiguity in the quantization. Motivated by previous studies in cosmology \cite{Cortez:2019orm,Cortez:2020rla} and in the Schwinger effect for a charged scalar field \cite{Garay2020}, our central task in section \ref{sec_uniquenessquantization} will be to characterize the complex structures which preserve the symmetries of the system and unitarily implement the dynamical evolution of a charged fermionic field in the presence of a homogeneous time-dependent electromagnetic background. Therefore, let us describe how time evolution can be treated as a Bogoliubov transformation.
\subsection{Time evolution as a Bogoliubov transformation} \label{timeevolution}
Let us review here some results which will be useful in our later discussion. For more details, see \cite{Cortez:2019orm,Cortez:2020rla,CORTEZ201536}.
Because of the well-posed initial value formulation of the Dirac equation, time evolution from time $t_0$ to time $t$ of a classical solution $\Psi\in\mathcal{S}$ is described by a time-dependent classical Bogoliubov transformation $T(t_0,t):\mathcal{S}\to \mathcal{S}$. With this transformation we are allowing the annihilation and creation operators to be time-dependent, distributing the dynamics between them and the elements of the considered orthonormal bases in the expansion \eqref{Psipapquantized}. This is analogous to working in different pictures in Quantum Mechanics.
Given a complex structure $J_{t_0}$, $T(t_0,t)$ defines the one-parameter family of complex structures
\begin{equation}
J_t=T(t_0,t)J_{t_0}T^{-1}(t_0,t).
\end{equation}
If we impose that the Bogoliubov transformation $T(t_0,t)$ is unitarily implementable for all $t$, then $J_t$ will be unitarily equivalent to $J_{t_0}$ for all $t$. In other words, the quantizations will be unitarily equivalent and, consequently, will provide the same physics during the evolution of the fields. This unitary implementation of the dynamics is exactly what we are going to demand of the quantization of a charged fermionic field in an electromagnetic background in section \ref{sec_unitarydynamics}. Later, in section \ref{sec_uniqueness} we will prove that this reduces the ambiguity in the selection of complex structures to a unique class that defines unitarily equivalent quantizations.
\subsection{Gauge transformations} \label{sec_gauge}
Let us consider local $U(1)$ gauge transformations. They define a time-dependent classical Bogoliubov transformation $G(g):\mathcal{S}_A \rightarrow \mathcal{S}_{A^g}$ given by
\begin{equation}
\Psi \mapsto G(g)\Psi=e^{iqg(x)}\Psi,
\end{equation}
where $\mathcal{S}_A$ is the vector space of classical solutions to the Dirac equation \eqref{eqdirac} with the four-vector potential $A_{\mu}$, and $g$ is a general function. The Dirac equation is gauge covariant but not invariant since $A_{\mu}$ transforms as $
A_{\mu} \rightarrow A_{\mu}^g=A_{\mu}-\partial_{\mu}g$.
This is due to the fact that $A_{\mu}$ is a nondynamical gauge field, externally imposed on the equations of motion for the matter fields $\psi$. Therefore, as the Dirac equation depends on the particular choice of the gauge field $A_{\mu}$, so does its space of solutions, meaning that, in general, $\mathcal{S}_A$ and $\mathcal{S}_{A^g}$ do not coincide.
Given a complex structure $J$ on $\mathcal{S}_A$, a gauge transformation $G(g)$ defines another complex structure
\begin{equation}
J_g=G(g)JG(g)^{-1}
\end{equation}
on $\mathcal{S}_{A^g}$
that trivially implements $G(g)$.
Indeed, gauge transformations simply act by multiplication by a phase. This translates into a diagonal Bogoliubov matrix with null $\beta$-coefficients. Thus, gauge transformations can always be unitarily implemented in the quantum theory by an operator which acts on the quantum fields simply by multiplication by~$e^{iqg(x)}$. In particular, as we are going to see in section \ref{sec_symmetries}, the homogeneity that we will impose on the electric field applied to the Dirac field will select a privileged gauge (the temporal gauge) which preserves the homogeneity of the Dirac equation. We will work in this fixed gauge. To translate the results obtained to other gauges, we simply need to classically transform the fields.
\section{Homogeneous electric field} \label{section_fermionic}
Taking into account all the previous results regarding the canonical quantization of a charged fermionic field in Minkowski spacetime coupled to a general electromagnetic field, let us particularize our study to the case in which this external field is a homogeneous electric field, so no magnetic component is present.
We would like to benefit from this property, so we will choose a gauge explicitly exhibiting these symmetries in the action. This is the case of the temporal gauge, in which the four-potential takes the form $A_\mu(t,\textbf{x})=(0,\textbf{A}(t))$. Without loss of generality, we also choose the potential to be parallel to the $z$-axis, i.e., the third spatial direction: $\textbf{A}(t)=(0,0,A(t))$. Finally, to completely fix the gauge we are going to set $A(t_0)=0$.
\subsection{Mode decomposition}
Since the temporal gauge makes the equation of motion invariant under spatial translations, we can expand its solution in plane wave modes $\psi_{\textbf{p}}(t)$, one for each wave vector $\textbf{p}\in \mathbb{R}^3$:
\begin{equation}
\psi(t,\textbf{x})=\int_{\mathbb{R}^3} \frac{d^3\textbf{p}}{(2\pi)^{3/2}} \ e^{i\textbf{p}\cdot \textbf{x}}\psi_\textbf{p}(t).
\label{modedecomposition}
\end{equation}
These modes have decoupled actions,
\begin{equation}
S=\int_{\mathbb{R}^3} d^3\textbf{p} \ S_{\textbf{p}},
\end{equation}
with
\begin{equation}
S_{\textbf{p}}=\int_{\mathbb{R}} dt \ \Bar{\psi}_{\textbf{p}}(t)\gamma^0 \Big[ i\partial_t -\left(p_1\gamma^0\gamma^1+p_2\gamma^0\gamma^2+m\gamma^0\right)-\left(p_3+qA(t)\right)\gamma^0\gamma^3 \Big] \psi_{\textbf{p}}(t).
\label{actionmodedecomposition}
\end{equation}
It is now convenient to decompose the solution into eigenvectors of $\gamma^0\gamma^3$. The eigenvalues of $\gamma^0\gamma^3$ are $+1$ and $-1$, each of them with double degeneracy. Two eigenvectors which form an orthonormal basis of the subspace associated with the eigenvalue $+1$ (in the Dirac representation of the Dirac matrices) are
\begin{equation}
R_1= \frac{1}{\sqrt{2}}(1,0,1,0)^\textsc{t}, \quad R_2=\frac{1}{\sqrt{2}}(0,-1,0,1)^\textsc{t},
\end{equation}
where $\textsc{t}$ denotes transposition. Besides, the vectors
\begin{equation}
-\omega_{\perp}^{-1}\left(p_1\gamma^0\gamma^1+p_2\gamma^0\gamma^2-m\gamma^0\right)R_r,
\label{basisR}
\end{equation}
with $r=1,2$, form an orthonormal basis on the subspace with eigenvalue $-1$, where
\begin{equation} \label{omegaperp}
\omega_{\perp}=\sqrt{p_1^2+p_2^2+m^2}.
\end{equation}
Thus, we can write each mode $\psi_{\textbf{p}}$ as a linear combination of these four vectors,
\begin{equation}
\psi_{\textbf{p}}(t)=\gamma^0\sum_{r=1,2}\big[\sigma^*_{r,\textbf{p}}(t) -\omega_{\perp}^{-1}\left(p_1\gamma^0\gamma^1+p_2\gamma^0\gamma^2-m\gamma^0\right)\chi_{r,\textbf{p}}(t)\big]R_r,
\label{eachmodedecomposition}
\end{equation}
for some time-dependent scalar fields $\sigma^*_{r,\textbf{p}}(t)$, $\chi_{r,\textbf{p}}(t)$. Since $\psi$ and $\Bar{\psi}$ are Grassmann variables, their anti-commuting nature is now inherited by the scalar functions $\sigma_{r,\textbf{p}}$ and $\chi_{r,\textbf{p}}$. The prefactor $\gamma^0$ has been introduced for convenience.
Using this form for each mode in expression \eqref{modedecomposition} and inserting everything in \eqref{actionmodedecomposition}, the action of the system becomes
\begin{equation}
S=\sum_{r=1,2}\int_{\mathbb{R}^3}d^3\textbf{p} \ S_{r,\textbf{p}},
\end{equation}
with
\begin{equation} \label{actionsigmachi}
\begin{split}
S_{r,\textbf{p}}&=\int_{\mathbb{R}} dt \ \big[ i\left(\chi_{r,\textbf{p}}^*\dot{\chi}_{r,\textbf{p}}+\sigma_{r,\textbf{p}}\dot{\sigma}_{r,\textbf{p}}^*\right)
+(p_3+qA)\left(\sigma_{r,\textbf{p}}\sigma_{r,\textbf{p}}^*-\chi_{r,\textbf{p}}^*\chi_{r,\textbf{p}}\right)
\\
&-\omega_{\perp}\left(\chi_{r,\textbf{p}}^*\sigma_{r,\textbf{p}}^*+\sigma_{r,\textbf{p}}\chi_{r,\textbf{p}}\right)\big].
\end{split}
\end{equation}
The only non-vanishing symmetric Poisson brackets among the canonical variables are:
\begin{equation}
\poissonbracket{\sigma_{r,\textbf{p}}}{\sigma_{s,\textbf{q}}^*}_{\text{G}}=\poissonbracket{\chi_{r,\textbf{p}}}{\chi^*_{s,\textbf{q}}}_{\text{G}}=-i\delta_{r,s}\delta^{(3)}(\textbf{p}-\textbf{q}).
\label{poissonbrackets}
\end{equation}
In addition, we see from \eqref{actionsigmachi} that the modes $\sigma^*_{r,\textbf{p}}(t)$ have the same dynamics for $r=1$ and $r=2$, which allows us to drop the index $r$. Furthermore, the modes labelled by $\textbf{p}$ are decoupled from one another, which allows us to drop the index $\textbf{p}$ (keeping in mind that the equations of motion depend on the momentum). This also applies to $\chi_{r,\textbf{p}}(t)$. Using left Grassmann variational derivatives we obtain that both $\chi$ and $\sigma$ satisfy harmonic oscillator equations with time-dependent complex squared frequency $\omega(t)^2+iq\dot{A}(t)$, where
\begin{equation}
\omega(t)=\sqrt{\omega_{\perp}^2+[p_3+qA(t)]^2}.
\label{omega}
\end{equation}
Furthermore, $\chi$ determines the value of $\sigma^*$. The equations of motion can then be written as
\begin{align}
&\ddot{\chi}+\left(\omega^2+iq\dot{A}\right)\chi=0,
\label{soeqchi} \\
& \sigma^*=\omega_{\perp}^{-1}\left[ i\dot{\chi}-(p_3+qA)\chi \right]. \label{sigma}
\end{align}
Note that at $t=t_0$, due to the condition $A(t_0)=0$, we get that
\begin{equation}
\omega(t_0)=\sqrt{m^2+p^2}=\omega_0,
\end{equation}
where $p=\abs{\mathbf{p}}$.
\subsection{Classical time evolution}
We are interested in obtaining the time evolution of the coupled modes $\chi(t)\equiv \chi_{r,\textbf{p}}(t)$ and $\sigma^*(t)\equiv\sigma^*_{r,\textbf{p}}(t)$ in terms of the initial conditions $\chi(t_0)$ and $\sigma^*(t_0)$. According to section \ref{timeevolution}, this evolution is given by a classical Bogoliubov transformation:
\begin{equation}
\mqty(\chi(t)\\ \sigma^*(t))={T}(t_0,t)\mqty(\chi(t_0)\\ \sigma^*(t_0)).
\label{evolutionmodeschi}
\end{equation}
On the other hand, equation \eqref{soeqchi} is a linear second order differential equation with complex coefficients, which means that we can choose a basis of solutions to be
$e^{i(-1)^l\Theta^l(t)}$, with $\Theta^l$ ($l=1,2$) complex functions to be determined. Then, any solution to this equation can be expressed in terms of them as
\begin{equation}
\chi(t)=C^1 e^{-i\Theta^1(t)}+C^2 e^{i\Theta^2(t)},
\label{chienfunciondetheta}
\end{equation}
where $C^l\in\mathbb{C}$ are uniquely determined by the initial conditions $\Theta^l(t_0)$ and $\dot{\Theta}^l(t_0)$. By inserting \eqref{chienfunciondetheta} in \eqref{evolutionmodeschi} and using \eqref{sigma} we can deduce the expressions for the entries of the matrix $T(t_0,t)$:
\begin{equation}
\begin{split}
T^{11}(t_0,t)&=W(t_0)\big[\Omega^2(t_0)e^{-i\delta^1(t)}+\Omega^1(t_0)e^{i\delta^2(t)}\big], \\
T^{12}(t_0,t)&=W(t_0) \omega_{\perp} \big[e^{-i\delta^1(t)}-e^{i\delta^2(t)}\big], \\
T^{21}(t_0,t)&=W(t_0)\omega_{\perp}^{-1} \big[\Omega^2(t_0)\Omega^1(t)e^{-i\delta^1(t)} -\Omega^1(t_0)\Omega^2(t)e^{i\delta^2(t)}\big], \\
T^{22}(t_0,t)&=W(t_0) \big[\Omega^1(t)e^{-i\delta^1(t)}+\Omega^2(t)e^{i\delta^2(t)}\big], \label{elementsmatrix}
\end{split}
\end{equation}
where
\begin{align}
\Omega^l(t)=&\dot{\Theta}^l(t)+(-1)^l(p_3+qA(t)), \label{Omegal} \\
\delta^l(t)=&\Theta^l(t)-\Theta^l(t_0), \\
W(t_0)=&[\Omega^1(t_0)+\Omega^2(t_0)]^{-1}.
\end{align}
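These entries can be cross-checked numerically. Equations \eqref{soeqchi} and \eqref{sigma} are equivalent to the first-order system $i\,\partial_t(\chi,\sigma^*)^{\textsc{t}}=H(t)\,(\chi,\sigma^*)^{\textsc{t}}$, with $H(t)$ the Hermitian matrix with diagonal entries $\pm[p_3+qA(t)]$ and off-diagonal entries $\omega_{\perp}$; Hermiticity then forces $T(t_0,t)$ to be unitary, in agreement with the time independence of the inner product $Q$. A Python sketch for a hypothetical Sauter-type pulse (all parameter values are merely illustrative):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical Sauter pulse E(t) = E0/cosh^2(t/tau), temporal gauge
# with A(t0) = 0, and a fixed sample mode (p1, p2, p3).
q, m, E0, tau = 1.0, 1.0, 2.0, 1.0
p1, p2, p3 = 0.4, 0.0, 0.7
t0, t1 = -8.0, 8.0
w_perp = np.sqrt(p1**2 + p2**2 + m**2)

def A(t):
    return -E0 * tau * (np.tanh(t / tau) - np.tanh(t0 / tau))

def rhs(t, y):          # i d/dt (chi, sigma*) = H(t) (chi, sigma*)
    k = p3 + q * A(t)
    H = np.array([[k, w_perp], [w_perp, -k]])
    return -1j * (H @ y)

# Build T(t0, t1) column by column from the canonical initial data
cols = [solve_ivp(rhs, (t0, t1), y0, rtol=1e-10, atol=1e-12).y[:, -1]
        for y0 in (np.array([1, 0], complex), np.array([0, 1], complex))]
T = np.column_stack(cols)

assert np.allclose(T.conj().T @ T, np.eye(2), atol=1e-6)  # unitary
print(T)
\end{verbatim}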
\subsection{Classical solutions in the ultraviolet regime} \label{sec_asymptotic}
In the following we study the behaviour of the modes $\chi$ and $\sigma^*$ in the ultraviolet (UV) regime, that is, in the asymptotic limit in which the wave number $p$ is large, since the Hilbert-Schmidt condition does not depend on the lower energy scales for a massive Dirac field. For convenience, we write
\begin{equation}
\Theta^l(t)=\int_{t_0}^t d\tau \ \left[\omega(\tau)+\Lambda^l(\tau)\right].
\label{formtheta}
\end{equation}
Here we have chosen $\Theta^l(t_0)=0$ without loss of generality. In addition, taking into account that $A(t_0)=0$, we impose $\dot{\Theta}^l(t_0)=\omega_0$, which implies $\Lambda^l(t_0)=0$.
We assume now that $\Lambda^l$ are complex functions which behave asymptotically as $\order{p^{-1}}$, provided that the time-dependence of the electromagnetic potential is controlled in a sense that will be specified below. With the objective of checking the self-consistency of this fact, we insert the solutions $e^{i(-1)^l\Theta^l(t)}$ in \eqref{soeqchi}. We then obtain a Riccati equation for the functions $\Lambda^l$:
\begin{equation}
\dot{\Lambda}^l=i(-1)^l\left[(\Lambda^l)^2+2\omega\Lambda^l\right]-\dot{\omega}+(-1)^lq\dot{A}.
\label{ricatti}
\end{equation}
As $\omega=\order{p}$, $\dot{\omega}=\order{1}$, and as we are assuming $\Lambda^l=\order{p^{-1}}$, the term $i(-1)^l(\Lambda^l)^2$ is negligible at large $p$. The solution in the UV regime to the resulting linear equation, after integrating by parts, is
\begin{align}
\Lambda^l(t)&=\frac{i}{2}(-1)^l\bigg[-\Gamma^l(t)+e^{2i(-1)^l\theta(t)}\left(\Gamma^l(t_0)+\int^t_{t_0} d\tau \ e^{-2i(-1)^l\theta(\tau)}\dot{\Gamma}^l(\tau)\right)\bigg],
\label{Lambda}
\end{align}
where
\begin{equation}
\theta(t)= \int_{t_0}^t d\tau \ \omega(\tau),\quad \Gamma^l(t)=\frac{1}{\omega(t)}\left[\dot{\omega}(t)-(-1)^lq\dot{A}(t)\right].
\end{equation}
Given that $\Gamma^l(t)=\order{p^{-1}}$, it is easy to check that $\Lambda^l(t)=\order{p^{-1}}$ in the UV regime, as assumed, as long as $\Gamma^l(t)$ and the integral in \eqref{Lambda} are finite. This is a restriction on the time dependence of the possible electromagnetic potentials for which \eqref{formtheta} is valid. From now on we will assume that we are dealing with potentials satisfying this mild requirement. It can be seen that if $\dot{\Gamma}^l$ has a finite number of sign changes in $[t_0,t]$ and $\Gamma^l(t_0)$ and $\Gamma^l(t)$ are finite, this condition is verified. Indeed, with these hypotheses we have that
\begin{equation}
\begin{split}
&\left|\int^t_{t_0} d\tau \ e^{-2i(-1)^l\theta(\tau)}\dot{\Gamma}^l(\tau)\right| \leq \int^t_{t_0} d\tau \left| \dot{\Gamma}^l(\tau) \right| \\
&=\left|\sum_{i=1}^n \int_{t_{i-1}}^{t_i} d\tau \ (-1)^{s_i} \dot{\Gamma}^l(\tau)\right|=\left|(-1)^{s_n}\Gamma^l(t)-(-1)^{s_1}\Gamma^l(t_0)\right|
\end{split}
\end{equation}
is finite, where $[t_0,t]=\cup_{i=1}^n[t_{i-1},t_i]$, $\dot{\Gamma}^l(t_j)=0$ ($j=2,...,n-1$), and $s_i$ denotes the sign of $\dot{\Gamma}^l$ in $[t_{i-1},t_i]$. In particular, potentials which turn on and off asymptotically, remaining finite at all times, are of this type. Then, our formalism applies to a very general family of potentials, including as a particular case those associated with electromagnetic fields localized in time. These are the ones usually found in the literature with analytical solutions of the Dirac equation \cite{Gavrilov:1996pz}.
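The assumed $\order{p^{-1}}$ behaviour is also easy to probe numerically: at fixed $t$ and fixed angle, $p\,\Gamma^l(t)$ approaches a constant as $p$ grows. A Python sketch with a hypothetical pulse and sample values of the angle and the time:
\begin{verbatim}
import numpy as np

# Gamma^l = [w_dot - (-1)^l q A_dot]/w, with w_dot = (p3+qA) q A_dot / w.
q, m, E0, tau, theta, t = 1.0, 1.0, 2.0, 1.0, 1.2, 0.5
A = -E0 * tau * np.tanh(t / tau)   # additive gauge constant irrelevant here
A_dot = -E0 / np.cosh(t / tau) ** 2

for p in (1e1, 1e2, 1e3, 1e4):
    p3 = p * np.cos(theta)
    w = np.sqrt((p * np.sin(theta))**2 + m**2 + (p3 + q * A)**2)
    w_dot = (p3 + q * A) * q * A_dot / w
    for l in (1, 2):
        Gamma = (w_dot - (-1)**l * q * A_dot) / w
        print(p, l, p * Gamma)     # p*Gamma^l tends to a constant
\end{verbatim}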
\section{Reduction of the ambiguity in the quantization} \label{sec_uniquenessquantization}
In section \ref{sec_unitary} we discussed that, in general, there exist unitarily inequivalent quantizations. In order to reduce this ambiguity, we also commented that it is reasonable to restrict our study to complex structures which preserve the symmetries of the system and which unitarily implement the dynamics. This is precisely what we are doing in this section for the case of a massive charged fermionic field in presence of a homogeneous time-dependent electric field.
\subsection{Preservation of the symmetries} \label{sec_symmetries}
In the case under study, the preservation of the symmetries implies two conditions. First, our quantization should be explicitly invariant under spatial translations due to the homogeneity of the electromagnetic field. Second, the complex structure should not mix modes with different $(r,\textbf{p})$, which are decoupled in the equations of motion. Both conditions require the temporal gauge fixing condition that we have used, namely, $A_{\mu}(t)=(0,0,0,A(t))$.
Nevertheless, $\chi_{r,\textbf{p}}$ and $\sigma^*_{r,\textbf{p}}$ remain dynamically coupled by equation \eqref{sigma}. Therefore, the annihilation and creation variables $a\equiv a_{r,\textbf{p}}$ and $b^*\equiv b^*_{r,\textbf{p}}$, which are in general time-dependent, will be given by a linear combination of the modes $\chi\equiv \chi_{r,\textbf{p}}$ and $\sigma^*\equiv\sigma^*_{r,\textbf{p}}$ exclusively:
\begin{equation}
\mqty(a(t)\\b^*(t))=\mathfrak{J}(t) \mqty(\chi(t)\\\sigma^*(t)), \quad \mathfrak{J}(t)=\mqty(f_1(t)&f_2(t)\\g_1(t)&g_2(t)).
\label{abest}
\end{equation}
$\mathfrak{J}(t)$ is a matrix of time-dependent functions parametrizing all the possible complex structures compatible with our requirements. These functions must satisfy certain conditions among themselves in order to represent a complex structure. Indeed, the modes $\chi$ and $\sigma^*$ and the creation and annihilation variables $a$ and $b$ must satisfy \eqref{poissonbrackets} and the classical symmetric Poisson bracket relations \eqref{poissoncndn}, rewritten as
\begin{equation}
\poissonbracket{a_{r,\textbf{p}}}{a^*_{s,\textbf{q}}}_G=\poissonbracket{b_{r,\textbf{p}}}{b^*_{s,\textbf{q}}}_G=-i\delta_{r,s}\delta^{(3)}(\textbf{p}-\textbf{q}).
\end{equation}
This requirement implies the following restrictions:
\begin{gather}
|f_1(t)|^2+|f_2(t)|^2=1,
\label{idf1f2.a} \\
g_1(t)=f^*_2(t)e^{i\kappa(t)}, \quad g_2(t)=-f^*_1(t)e^{i\kappa(t)},
\label{relationsbea}
\end{gather}
where $\kappa(t)$ is an arbitrary real function. The choice of particular $f_i$ and $g_i$ ($i=1,2$) determines the definition of the creation and annihilation variables via \eqref{abest}, and therefore, selects the quantization prescription unequivocally.
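Conditions \eqref{idf1f2.a} and \eqref{relationsbea} simply state that $\mathfrak{J}(t)$ is a unitary $2\times2$ matrix at each time, which is straightforward to verify. A Python sketch with hypothetical sample values of $f_1$, $f_2$ and $\kappa$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=2) + 1j * rng.normal(size=2)
f1, f2 = z / np.linalg.norm(z)       # enforces |f1|^2 + |f2|^2 = 1
kappa = 0.7                          # arbitrary real phase
g1 = np.conj(f2) * np.exp(1j * kappa)
g2 = -np.conj(f1) * np.exp(1j * kappa)

Jfrak = np.array([[f1, f2], [g1, g2]])
assert np.allclose(Jfrak @ Jfrak.conj().T, np.eye(2))  # unitary
# Hence (a, b*) = Jfrak (chi, sigma*) preserves the bracket algebra.
\end{verbatim}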
\subsection{Unitary implementation of the dynamics} \label{sec_unitarydynamics}
The second requirement that we are imposing on the physically relevant complex structures is that they unitarily implement the time evolution. This will restrict the choice of $f_i$ and $g_i$.
As said in section \ref{timeevolution}, the evolution from time $t_0$ to time $t$ of the modes $(\chi,\sigma^*)$, for each $(r,\textbf{p})$, is determined by a time-dependent classical Bogoliubov transformation of the type \eqref{cndnbogoliubov}:
\begin{equation}
\mqty(a(t)\\b^*(t))=\mathfrak{B}(t_0,t) \mqty(a(t_0)\\b^*(t_0)), \quad\mathfrak{B}(t_0,t)=\mqty(\alpha^f(t_0,t)&\beta^f(t_0,t)\\\beta^g(t_0,t)&\alpha^g(t_0,t)).
\label{bog_dynamics}
\end{equation}
From the equation describing the time evolution of the modes in terms of the initial conditions and from the definition of $\mathfrak{J}(t)$ we directly obtain that
\begin{equation}
\mathfrak{B}(t_0,t)=\mathfrak{J}(t)T(t_0,t)\mathfrak{J}^{-1}(t_0),
\end{equation}
with the $\beta$-coefficients given by
\begin{equation}
\beta^f(t_0,t)=\frac{e^{-i\kappa(t)}}{2\omega_0\omega_{\perp}}
\left[ e^{i\Theta^2(t)}\Delta^2(t)\Delta^1(t_0) -e^{-i\Theta^1(t)}\Delta^1(t)\Delta^2(t_0) \right], \label{betacoeffs}
\end{equation}
where
\begin{equation} \label{Deltapl}
\Delta^l(t) = \omega_{\perp}f_1(t)-(-1)^l\Omega^l(t)f_2(t)
\end{equation}
and recall that $\omega_{\perp}$ and $\Omega^l(t)$ are given in \eqref{omegaperp} and in \eqref{Omegal}, respectively.
Similarly, $\beta^g(t_0,t)$ satisfies an equation analogous to \eqref{betacoeffs} except for a global minus sign, and with $f_i$ replaced by $g_i$. Due to this symmetry and to the relations between the functions $f_i$ and $g_i$, from now on we will focus on the study of $\beta^f(t_0,t)$. The results obtained can also be applied in a straightforward manner to $\beta^g(t_0,t)$.
As discussed in section \ref{sec_unitary}, the condition for the $\beta$-coefficients to be associated with a unitarily implementable Bogoliubov transformation is that they be Hilbert-Schmidt for each fixed finite time $t$. This condition translates into square integrability (with respect to $\textbf{p}$) at all times, i.e., that
\begin{equation}
\int d^3\textbf{p}\ |\beta^f(t_0,t)|^2 =\int_0^{2\pi} d\phi \int_0^{\pi} d\theta \ \sin{\theta} \int_0^{\infty} dp \ p^2|\beta^f(t_0,t)|^2
\end{equation}
be finite for each $t$. Provided that $|\beta^f(t_0,t)|^2$ does not diverge in the angular coordinates $(\theta,\phi)$, as their domains are bounded, the integrability of $|\beta^f(t_0,t)|^2$ will be satisfied if and only if the integral in $p$ converges for almost all fixed directions. In particular, as we are considering a massive Dirac field, the $\beta$-coefficients \eqref{betacoeffs} do not diverge in the infrared. Thus, we are only interested in the UV regime, where the integrability condition is satisfied if and only if $\beta^f(t_0,t)= \order{p^{-\alpha}}$,
for some $\alpha>3/2$ at all times $t$.
The next step is to analyze the UV behaviour of $\beta^f(t_0,t)$ in equation \eqref{betacoeffs} for a fixed angular direction. First, we note that
\begin{equation}
\omega_{\perp}=\sqrt{p^2\sin^2{\theta}+m^2},
\end{equation}
where $\theta \in [0,\pi]$ is the polar angle between $\textbf{p}$ and $\textbf{A}$. For $\theta \in (0,\pi)$ we have $\omega_{\perp}=\order{p}$, while for $\theta=0,\pi$ we have $\omega_{\perp}=m=\order{1}$. As the subspace of $\mathbb{R}^3$ which corresponds to $\theta=0,\pi$ (i.e., the $z$-axis) has zero measure, we will consider $\theta\in (0,\pi)$ from now on. Taking also into account that $\omega=\order{p}$, it is direct to see that the prefactor of \eqref{betacoeffs} is $\order{p^{-2}}$ for $\theta\in (0,\pi)$.
Consequently, the integrability of the $\beta$-coefficients is assured if and only if the expression in brackets in \eqref{betacoeffs} is $\order{p^{-\alpha+2}}$ for all $t$ and in almost all the angular directions. In addition, we will discard the possibility of cancelling the first term of \eqref{betacoeffs} against the second one. The reason is that $\Delta^l$ (and, therefore, the complex structure $\mathfrak{J}$) would need to absorb the leading order terms of the dynamical solutions $\Theta^l$. This would imply a trivialization of the Bogoliubov transformation \eqref{bog_dynamics} (see section \ref{sec_classicalbog}), allowing a trivial unitary implementation of the dynamics. Then, we will assume the independence of both terms in \eqref{betacoeffs} throughout the time evolution, so that this integrability condition becomes
\begin{align}
\Delta^2(t)\Delta^1(t_0)=\order{p^{-\alpha+2}}, \quad \Delta^1(t)\Delta^2(t_0)=\order{p^{-\alpha+2}},
\label{integrabilitycondition}
\end{align}
with $\alpha>3/2$ for each finite value of $t$ and for almost all directions.
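The threshold $\alpha>3/2$ is easy to visualize: for $|\beta^f|\sim p^{-\alpha}$ in the UV, the radial integral $\int dp\,p^{2-2\alpha}$ saturates with the cutoff only when $\alpha>3/2$. A Python sketch with hypothetical power-law profiles:
\begin{verbatim}
import numpy as np

def tail_integral(alpha, P):
    # trapezoid rule for int_1^P dp p^(2-2*alpha) on a log-spaced grid
    p = np.logspace(0.0, np.log10(P), 4001)
    f = p ** (2.0 - 2.0 * alpha)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(p))

for alpha in (1.4, 1.6):
    for P in (1e2, 1e4, 1e6):
        print(alpha, P, tail_integral(alpha, P))
# alpha = 1.4: the integral keeps growing (~ P^0.2);
# alpha = 1.6: it saturates near the exact value 1/(2*1.6 - 3) = 5.
\end{verbatim}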
The relation \eqref{idf1f2.a} forces $f_1(t)$ and $f_2(t)$ to converge, for each finite value of time $t$, to complex numbers whose modulus is less than or equal to $1$ for arbitrarily large values of $\textbf{p}$, i.e., in the UV regime. It can be easily seen that if that limit is $0$ for $f_1(t)$ or $f_2(t)$, then both $\Delta^1(t)$ and $\Delta^2(t)$ behave as $\order{p}$ and the condition \eqref{integrabilitycondition} is not satisfied for any $\alpha > 3/2$. Therefore, the dynamics cannot be unitarily implemented.
In the most general case, where both $f_1(t)$ and $f_2(t)$ are $\order{1}$ and neither of them converges to 0, $\Delta^1$ and $\Delta^2$, given by \eqref{Omegal} and \eqref{Deltapl}, would generically
be $\order{p}$. Consequently, we need to cancel the leading orders of $\Delta^s$, for $s=1$ or $s=2$, so that the $\beta$-coefficients have the right UV behaviour. Before we do so, it is useful to write the ratio of $f_1$ to $f_2$ as
\begin{equation}
\frac{f_1}{f_2}=u^s, \quad u^s= (-1)^s\frac{\Omega^s}{\omega_{\perp}}+h
\label{f1/f2}.
\end{equation}
This parametrization, using \eqref{idf1f2.a}, can be rewritten as
\begin{equation}
f_1=\frac{u^s}{\sqrt{|u^s|^2+1}}e^{i\varphi}, \hspace{0.5cm} f_2=\frac{1}{\sqrt{|u^s|^2+1}}e^{i\varphi},
\label{f1f2phase}
\end{equation}
where $\varphi=\varphi(t)\in \mathbb{R}$ is an arbitrary phase. Let us stress that, at this point, $u^s$ and $h$ are arbitrary. However, by requiring that the leading orders of $\Delta^s$ vanish for some $s$, we find in \eqref{f1/f2} that $h$ must be a function of order $\order{p^{-\sigma}}$, for some $\sigma \geqslant 0$, in order to have $\Delta^s=\order{p^{-\sigma+1}}$. For the other function $\Delta^{l\neq s}$ no cancellation occurs and $\Delta^{l\neq s}=\order{p}$. As a result, $\Delta^2(t)\Delta^1(t_0)$ and $\Delta^1(t)\Delta^2(t_0)$ are $\order{p^{-\sigma+2}}$. This satisfies condition \eqref{integrabilitycondition} if and only if $\sigma > 3/2$. In particular, this shows that the first two leading orders of $\Omega^s/\omega_{\perp}$ must cancel.
The coefficient $\beta^f(t_0,t)$ does not diverge in the angular variables, as can be checked by substituting \eqref{f1f2phase} into \eqref{betacoeffs}, provided that $h$ is chosen with a smooth dependence on $\theta$. It is important to emphasize that, in order to translate the symmetries of the equations of motion to the complex structure, $h$ should not depend on the angular coordinate $\phi$. Considering all of the requirements set out above, the integrability of the $\beta$-coefficients is assured.
In summary, the temporal evolution can be (non-trivially) unitarily implemented if and only if neither $f_1$ nor $f_2$ vanish in the UV and \eqref{f1f2phase} is verified for $s=1$ or $s=2$ in the UV regime, where
\begin{equation}
h=\order{p^{-\sigma}}, \quad \sigma>3/2,
\label{orderh}
\end{equation}
is a function that depends smoothly on $\theta$. It is important to remember that we assumed that the electromagnetic potential verifies the mild conditions discussed in section \ref{sec_asymptotic} so that \eqref{formtheta} holds.
The requirements of the preservation of the symmetries of the equations of motion and the unitary implementation of the dynamics have reduced the possible complex structures to a unique family, characterized by the integrability of the $\beta$-coefficients associated with the time-evolution Bogoliubov transformation. As the density of the number of created particles and antiparticles throughout the evolution of the vacuum is given by
\begin{equation}
N(t)=\frac{1}{2}\int_{\mathbb{R}^3} d^3\textbf{p} \ \left(|\beta^f(t_0,t)|^2+|\beta^g(t_0,t)|^2\right),
\end{equation}
this family of complex structures is equivalently characterized by a well-defined number of created particles at finite times.
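To illustrate how such a number is assembled in practice, the asymptotic $|\beta|^2$ of a single mode can be estimated with the usual in/out construction: evolve the positive-frequency eigenvector of the initial Hermitian matrix $H(t_0)$ of the first-order system used above, and project onto the negative-frequency eigenvector of $H(t_1)$ once the pulse is over. In the Python sketch below the Sauter pulse and all parameters are hypothetical, and the conventions and normalizations are only indicative; $N$ then follows by integrating $p^{2}\sin\theta\,|\beta_{\textbf{p}}|^{2}$ over momenta:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

q, m, E0, tau = 1.0, 1.0, 2.0, 1.0
t0, t1 = -12.0, 12.0

def H(k, w_perp):
    return np.array([[k, w_perp], [w_perp, -k]])

def beta2(p, theta):
    p3 = p * np.cos(theta)
    w_perp = np.sqrt((p * np.sin(theta))**2 + m**2)
    def rhs(t, y):
        A = -E0 * tau * (np.tanh(t / tau) - np.tanh(t0 / tau))  # A(t0)=0
        return -1j * (H(p3 + q * A, w_perp) @ y)
    A1 = -E0 * tau * (np.tanh(t1 / tau) - np.tanh(t0 / tau))
    _, v_in = np.linalg.eigh(H(p3, w_perp))      # columns: [-w, +w]
    _, v_out = np.linalg.eigh(H(p3 + q * A1, w_perp))
    y1 = solve_ivp(rhs, (t0, t1), v_in[:, 1].astype(complex),
                   rtol=1e-9, atol=1e-11).y[:, -1]
    return abs(v_out[:, 0].conj() @ y1) ** 2     # overlap with out "-w"

for p in (0.2, 0.5, 1.0, 2.0, 4.0):
    print(p, beta2(p, np.pi / 2))
\end{verbatim}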
However, it is essential to emphasize that there is still some freedom in the selection of the particular complex structure within the family, encoded in the choice of $h$. This translates into an ambiguity in the particular value of $N(t)$ at finite times, which means that we need more criteria in order to characterize it completely. In particular, recent studies in hybrid loop quantum cosmology have already proposed additional requirements so that the Dirac Hamiltonian has nice mathematical properties. They further reduce the ambiguity by requiring for example that the fermionic backreaction is finite and that the Hamiltonian is diagonal asymptotically \cite{PhysRevD.98.063535,PhysRevD.99.063535,Elizaga_Navascu_s_2019}.
Let us note that when there is no electromagnetic field, the general quantization scheme followed here is unitarily equivalent to the usual Minkowski quantization. Indeed, in this case with $A=0$, the anisotropy introduced when working with $\gamma^0\gamma^3$ eigenvectors is irrelevant when compared to an isotropic treatment, since both descriptions are related through a unitary transformation which does not mix positive and negative frequencies.
\subsection{Unitary implementation of the dynamics for different potentials}
Let us now prove that a complex structure defined by $(f'_1,f'_2)$ which unitarily implements the dynamics for an electromagnetic potential $A'_{\mu}$ cannot unitarily implement the temporal evolution for a different potential $A_{\mu}$. In fact, if that happened we would have
\begin{equation}
\frac{f'_1}{f'_2}=(-1)^s\frac{\Omega^s}{\omega_{\perp}}+h',
\end{equation}
with $\Omega^s$ satisfying \eqref{Omegal} for the potential $A_{\mu}$ (with $s=s'$; for $s\neq s'$ the result will be the same) and $h'=\mathcal{O}(p^{-\sigma'})$, $\sigma'>3/2$. Since $(f_1,f_2)$ verify \eqref{f1/f2}, it would imply that
\begin{equation}
\frac{f_1}{f_2}-\frac{f'_1}{f'_2}=\order{p^{-\gamma}},
\end{equation}
with $\gamma>3/2$. However, this cannot hold, as the leading order of this difference of quotients can be easily seen to be
\begin{equation}
\frac{f_1}{f_2}-\frac{f'_1}{f'_2}=\frac{q(A-A')}{\omega_{\perp}}[1+(-1)^s\cos{\theta}]+\cdots,
\end{equation}
which is $\mathcal{O}(p^{-1})$ for $\theta\in (0,\pi)$ unless $A_{\mu}=A'_{\mu}$. Note that we fixed in section \ref{section_fermionic} the temporal gauge $A_{\mu}(t_0)=0$. Had we not done so, then we would have obtained that for $A_{\mu}'=A_{\mu}+\text{constant}$ the quantizations should be equivalent.
In particular, this result states that the Minkowski quantization, obtained when $A_{\mu}=0$, does not unitarily implement the temporal evolution as long as there is a non-vanishing electromagnetic field.
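The leading order quoted above can also be verified symbolically, replacing $\dot{\Theta}^s$ by its dominant term $\omega$ and expanding in $\varepsilon=1/p$ (a sympy sketch for $s=2$; the case $s=1$ is analogous):
\begin{verbatim}
import sympy as sp

m, q, A, Ap, th, eps = sp.symbols('m q A Aprime theta epsilon',
                                  positive=True)
s = 2
p = 1 / eps                          # UV regime: p -> infinity
p3 = p * sp.cos(th)
w_perp = sp.sqrt(p**2 * sp.sin(th)**2 + m**2)

def u(Afield):                       # u^s with Theta_dot^s ~ omega(t)
    w = sp.sqrt(w_perp**2 + (p3 + q * Afield)**2)
    return (-1)**s * (w + (-1)**s * (p3 + q * Afield)) / w_perp

lead = sp.series(u(A) - u(Ap), eps, 0, 2).removeO()
print(sp.simplify(lead))
# -> epsilon*q*(A - Aprime)*(cos(theta) + 1)/sin(theta), i.e. O(1/p)
\end{verbatim}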
\subsection{Uniqueness of the quantization} \label{sec_uniqueness}
To what extent do the requirements of preservation of the symmetries and of unitary implementation of the temporal evolution reduce the ambiguity in the selection of the complex structure? In particular, we will study whether the quantum representations which admit a unitary implementation of the dynamics (characterized in the previous section) are unitarily equivalent.
With this objective in mind, let $(a,b^*)$ and $(\tilde{a},\tilde{b}^*)$ be two sets of time-dependent annihilation and creation variables which allow for a unitary implementation of the dynamics. They satisfy relations of the form \eqref{abest}, with $\mathfrak{J}$ and $\tilde{\mathfrak{J}}$ the corresponding matrices defining their complex structures, respectively. It is not difficult to see that both sets of variables are related by a Bogoliubov transformation given by
\begin{equation}
\mqty(a\\b^*)=\mathfrak{H} \mqty(\tilde{a}\\\tilde{b}^*),\hspace{0.5cm} \mathfrak{H}=\mathfrak{J}\tilde{\mathfrak{J}}^{-1}=
\mqty(\kappa^f&\lambda^f\\\lambda^g&\kappa^g). \label{JJ-1}
\end{equation}
Using again Shale's theorem \cite{Shale:1962,RUIJSENAARS1978105}, both Fock representations $(a,b^*)$ and $(\tilde{a},\tilde{b}^*)$ will be unitarily equivalent if and only if $|\lambda^f|^2$ and $|\lambda^g|^2$ are integrable with respect to $\textbf{p}$. It is straightforward to see from \eqref{relationsbea} and \eqref{JJ-1} that
\begin{equation}
|\lambda^f|=|\lambda^g|=|f_1 \tilde{f}_2 - f_2\tilde{f}_1|,
\label{lambda}
\end{equation}
where we followed the notation introduced in \eqref{abest}, with a tilde for the components of $\tilde{\mathfrak{J}}$. By hypothesis, $f_i$ and $\tilde{f}_i$ satisfy \eqref{f1/f2} in the UV regime, for some $s,\tilde{s}=1,2$ and $h=\order{p^{-\alpha}}$, $\tilde{h}=\order{p^{-\tilde{\alpha}}}$, with $\alpha,\tilde{\alpha} >3/2$. Substituting these relations into \eqref{lambda} we obtain
\begin{equation}
|\lambda^f|=|\lambda^g|= |f_2\tilde{f}_2 ( u^s-\tilde{u}^{\tilde{s}} ) |.
\end{equation}
On the one hand, if $s=\tilde{s}$, then $|\lambda^f|=|\lambda^g|=\order{p^{-\min\{\alpha,\tilde{\alpha}\}}}$ and both representations are unitarily equivalent. On the other hand, if $s\neq \tilde{s}$, then $|\lambda^f|=|\lambda^g|=\order{1}$ (for $\theta\in(0,\pi)$), so both representations would not be unitarily equivalent. This is due to the fact that the relative sign between the functions $f_1$ and $f_2$ is different from the relative sign between $\tilde{f}_1$ and $\tilde{f}_2$. However, according to \eqref{relationsbea}, if we exchanged $\tilde{f}_i$ and $\tilde{g}_i$, then the relative sign between $\tilde{g}_1$ and $\tilde{g}_2$ would be the same as the one between $f_1$ and $f_2$. According to \eqref{abest}, this exchange between $\tilde{f}_i$ and $\tilde{g}_i$ is equivalent to a change in the convention of what we define as particles and antiparticles. Therefore, we can interpret that the inequivalence between these two representations is due exclusively to artificially choosing two different conventions ($s\neq \tilde{s}$) for the concepts of particle and antiparticle. Analogous results also appear in the study of the uniqueness of the Fock quantization of free Dirac fields in non-stationary curved spacetimes \cite{Cortez:2020rla}.
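Let us make explicit the simple power counting behind this threshold: for $s=\tilde{s}$, the square $|\lambda^f|^2=\order{p^{-2\min\{\alpha,\tilde{\alpha}\}}}$ is integrable with respect to $\textbf{p}$ precisely when its decay beats the growth of the momentum-space volume element,
\begin{equation}
\int \mathrm{d}^3p\, |\lambda^f|^2 \propto \int^{\infty}\mathrm{d}p\, p^{2-2\min\{\alpha,\tilde{\alpha}\}}<\infty \quad\Longleftrightarrow\quad \min\{\alpha,\tilde{\alpha}\}>\frac{3}{2},
\end{equation}
which is the condition required by Shale's criterion for the unitary equivalence of the two representations.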
\section{Conclusions} \label{sec_conclusions}
In the study of a massive charged fermionic field coupled to a spatially homogeneous electric field, we have dealt with the reduction of the ambiguities in the process of canonical quantization. In particular, we consider that the physically relevant complex structures should preserve the symmetries of the system (the translational invariance due to the homogeneity of the external field and the decoupling between the modes in the equations of motion). In addition, we also impose that they should allow the unitary implementation of the dynamics with two main objectives: assuring throughout the evolution of the vacuum both the physical equivalence of the quantizations and the finiteness of the number of created particles at finite times.
This work aims to translate the analysis carried out in non-stationary curved spacetimes for free fields \cite{Cortez:2019orm,Cortez:2020rla} to the Schwinger effect. This was first done for a charged scalar field in \cite{Garay2020}. Other approaches with similar purposes are also found in the literature \cite{Dabrowski_2016}. However, our approach allows us to obtain a characterization of the Fock representations assuring the compatibility with the requirements listed above at all finite times and not only asymptotically. Another strong point of the formalism followed here is the generality of the electromagnetic fields for which it applies. This family of external fields should satisfy certain mild time-dependence conditions and includes as a particular case those vanishing asymptotically.
The main result of our analysis is that the physically reasonable requirements imposed on the quantization succeed in restricting the ambiguities to one unique equivalence class, which has been characterized. More precisely, when asking the complex structure to preserve the symmetries of the classical system and unitarily implement the dynamics, the freedom in its selection is reduced to just a choice of a function (for each mode) which has to decay sufficiently fast in the UV regime. The infinite possibilities for the selection of this function generate a family of unitarily equivalent complex structures characterized by a well-defined number of created particles at finite times. Thus, in this work we do not propose a unique candidate for this observable, but a selection of unitarily equivalent ones. Additional theoretical requirements, possibly based on experimental data, might help to reduce this residual ambiguity even further. Such criteria have already been studied in recent works for Dirac fields in cosmology, requiring the Hamiltonian to have nice physical and mathematical properties \cite{PhysRevD.98.063535,PhysRevD.99.063535,Elizaga_Navascu_s_2019}. This issue will be addressed elsewhere.
The choice of a privileged gauge is a direct consequence of the homogeneity of the electric field coupled to the matter field, so that the spatial translational invariance is preserved in the quantization. If the electric field had spatial inhomogeneities, other procedures would need to be considered. The inclusion of perturbative electromagnetic inhomogeneities in the external field, including magnetic components, would provide a more realistic analysis of the Schwinger effect, allowing us to establish a connection between theory and potential experiments. This issue has been recently analyzed numerically using the Dirac-Heisenberg-Wigner formalism and the Furry-picture quantization in \cite{PhysRevD.101.096003,PhysRevD.101.096009} and will be of central interest in our future works.
We have also shown that complex structures satisfying these criteria explicitly depend on the specific external electromagnetic field. As a consequence, a quantization based on the Minkowski vacuum (i.e., the one corresponding to a vanishing electromagnetic field) does not unitarily implement the dynamics in the presence of an electromagnetic field, in agreement with \cite{Ruijsencharged}. On the other hand, we have shown that the usual Minkowski vacuum is equivalent to a vacuum from our distinguished equivalence class in the case of a vanishing electric field. Consequently, in the case, frequently analyzed in the literature, of electromagnetic fields vanishing for $t\rightarrow \pm \infty$, the asymptotic behaviour of the complex structure (and then of the number of created particles) is the same both for the usual Minkowski quantization and for every quantization from our distinguished equivalence class.
Finally, some other approaches to the Schwinger effect, such as the quantum kinetic approach \cite{Smolyansky1997DynamicalDO,Fedotov_2011}, usually make use of a particular quantization which diagonalizes the Dirac Hamiltonian. We leave a thorough comparison between these approaches and our proposal for future work.
\acknowledgments
The authors are grateful to G. Garc\'ia-Moreno for useful discussions. This work has been supported by Project No.
MICINN FIS2017-86497-C2-2-P from Spain (with the extension Project No. MICINN PID2020-118159GB-C44 under
evaluation).
AAD acknowledges financial
support from IPARCOS through ``Ayudas para la realizaci\'on de
Trabajos Fin de M\'aster del
Instituto de F\'isica de Part\'iculas y del Cosmos''.
\bibliographystyle{JHEP}
\section{Introduction}
\label{sec:introduction}
\begin{figure*}[t]
\sidecaption
\includegraphics[width=6cm]{13072f1a.eps}
\includegraphics[width=6cm]{13072f1b.eps}
\caption{\label{fig:ds9Image}
Model IBIS/ISGRI images of the sky with (right) and without (left)
instrumental vignetting effects (see \reffig{insEffects}). They show the
geometry and relative intensity (on a logarithmic scale from dark and blue to
red and bright areas) of the various components during the first EO at
IJD\,=\,2216.04 as derived for channel 3 ($\sim$27\,keV). The bluish circle
shows emission of the Earth occulting the diffuse sky background (purple),
the Galactic ridge (red strip), and the point sources (bright dots). The
instrumental background is ignored for clarity. The images extend over the
partially coded FoV ($28.8\degr \times 29.2\degr$). \modif{Point sources} are
convolved with a circular Gaussian typical for the instrumental resolution
($\sigma\!=\!0.1\degr$).
}
\end{figure*}
Although the cosmic X-ray background (CXB) was discovered before the cosmic
microwave background \citep{GGP62}, it is known in much less detail and its
spectral shape and normalization are still subjects of debate. This diffuse
emission is thought to be mainly due to unresolved active galactic nuclei (AGN)
extending to cosmological distances with a contribution from Type Ia supernovae
in the low-energy gamma-rays \citep{Z96}. Evidence for the AGN origin of the CXB
at energies below 10 keV comes from various X-ray mirror telescopes -- in
particular \emph{Chandra} and \emph{XMM-Newton} -- that were able to resolve up
to 80\,\% of the diffuse emission into point sources \citep[e.g.][]{BH05,GCH07}.
The fraction of resolved sources decreases rapidly with energy, however, so that only
2.5\,\% of the diffuse background is resolved by the deepest survey yet in the
20--60\,keV range, at the peak of the CXB emission \citep{PWM08}. The
characterization of the actual spectral shape and normalization around this
emission bump is crucial to estimate the fraction of heavily absorbed
Compton-thick AGN thought to contribute significantly in this hard X-ray
spectral range \citep{UAO03,GCH07,SKR08,TUV09}.
The High Energy Astronomical Observatory 1 (\emph{HEAO-1}) is hitherto the only
satellite which had a dedicated mechanism to disentangle the CXB from the
instrumental background. By using a movable CsI crystal with a thickness of
5\,cm to cover part of the field of view (FoV), the \emph{HEAO-1} observations
of the mid-1970s are still the most accurate and reliable measurements of the
CXB spectral shape in the hard X-rays \citep{MBH80,KJG97,GMP99}. Without such a
masking mechanism in recent space missions, a practical way to study this
diffuse hard X-ray emission is to use the Earth as a screen occulting part of
the background sky. For pointing satellites in low orbits around the Earth, our
planet often crosses part of the field of view during normal operations. Such
events can be analyzed in detail to evaluate the CXB spectrum. This was done by
\citet{FOL07} for the \emph{BeppoSAX} mission and by \citet{AGS08} for the data
of the Burst Alert Telescope (BAT) aboard the \emph{Swift} spacecraft.
With its eccentric three-days orbit, the \emph{INTEGRAL} satellite \citep{WCD03}
is close to the Earth only during the perigee passage when the instruments are
not operating because of excessive background in the radiation belts. In order
to study the X-ray background, a series of four dedicated observations were
performed in January and February 2006. The Earth was allowed to pass through
the FoV of the instruments shortly after radiation-belt exit while the
spacecraft was aimed to point towards a fixed position in the sky. \citet{CSR07}
described these observations by all four instruments aboard \emph{INTEGRAL} in
great detail and studied how the passage of the Earth modulates the detector
counts by occulting part of the CXB.
The difficulty of using the Earth to shield the CXB comes from the fact that the
Earth is not dark in the hard X-rays. The emission from the Earth in the
20--200\,keV range consists of two major contributions: \modif{the reflection of
the CXB by the atmosphere and its Compton emission under the bombardment by
cosmic rays (CR)}. Disentangling the CXB occultation from the Earth emission is
challenging. \citet{CSR07} assumed the spectral shape of the CXB and of the two
Earth emission components and fitted their normalizations to the observed
amplitude of the Earth modulation in the data. The studies of the
\emph{BeppoSAX}/PDS measurements \citep{FOL07} and the \emph{Swift}/BAT
observations \citep{AGS08} also relied on a priori assumptions on the spectral
shape of the CXB and the Earth emission.
We present here a completely different approach for the analysis of the same
\emph{INTEGRAL} observations as were used by \citet{CSR07}, but only focusing on
the data of the IBIS/ISGRI instrument \citep{ULD03}. Instead of fixing the
spectral shapes of the CXB and the Earth emission components, \modif{we aim to
derive them based on a detailed modeling of the spatial distribution of these
components and of all instrumental effects.} If the Earth surface brightness
differs significantly from the uniform CXB occultation, it is possible to
disentangle these components based on the recorded modulation when the Earth is
crossing the FoV. This has the potential to simultaneously derive the spectral
shape of the CXB and of the Earth emission from the observations.
Another difficulty of the analysis is the presence of the Galactic plane in the
border of the wide FoV of IBIS (see \reffig{ds9Image}). An empty extragalactic
field would have been ideal to study the CXB, but this was not possible due to
various scheduling constraints. This complication is however an opportunity to
study in addition the diffuse Galactic ridge X-ray emission (GRXE)
\citep[e.g.][]{RSG06,KRC07,BJR08}, which is also occulted by the passage of the
Earth.
The observational material and the analysis method are described in
\refsecs{data}{method}, respectively. We present the obtained spectra in
\refsec{results} and discuss them in comparison to previous results in
\refsec{discussion}, before concluding in \refsec{conclusion}. \modif{Unless
otherwise stated, the quoted errors are 1-$\sigma$ uncertainties, i.e. at the
68\,\% confidence level (CL).}
\section{Data}
\label{sec:data}
The data used here are the four Earth-occultation observations (EOs) conducted
by \emph{INTEGRAL} in January and February 2006 at the start of satellite
revolutions number 401, 404, 405 and 406. We refer the reader to the detailed
description of these observations in \citet{CSR07}. We focus our analysis on the
data of the IBIS/ISGRI gamma-ray imager that is best suited to study the
emission in the $\sim$20--200\,keV range.
Our work is based on the analysis of the modulation in full detector lightcurves
induced by the passage of the Earth through the FoV. These lightcurves are
obtained with the latest version of the \texttt{ii\_light} executable that will
be included in a forthcoming release of the Off-line Scientific Analysis (OSA)
software package provided by the \emph{INTEGRAL} Science Data Centre
\citep[ISDC,][]{CWB03}. They are corrected for instrumental dead time and the
effect of dead and noisy pixels, which amount to typically 5\,\% of all ISGRI
pixels. For each of the four similar observations, we extracted detector
lightcurves with a time binning of 300\,s in a series of 16 energy bins (see
\reftab{results}), carefully chosen to isolate instrumental emission features,
in particular the broad lines at 26 and 31\,keV from CdTe and the narrow lines
at 59\,keV from W, 75--77\,keV from Pb and 82--84\,keV from Bi \citep{TLB03}.
The detector lightcurves originally expressed in units of \mbox{count\,cm$^{-2}$\,s$^{-1}$}\ were
multiplied by 0.5\,($128\times 0.4$\,cm)$^2$ = 1310.72\,cm$^2$, which is the
area of the detector assumed by the standard ISGRI ancillary response file
(ARF), describing the energy dependence of the effective area of the instrument.
The factor 0.5 refers to the fraction of open coded mask elements, and 0.4\,cm
is the size of the $128\times 128$ ISGRI detector elements. Apart from this
change of unit to have full detector lightcurves, the only other manipulation of
the data was a cleaning of the lightcurves. This was done by removing points
with uncertainties of more than twice the average uncertainty and by iteratively
removing a few isolated outstanding points lying more than $3\,\sigma$ away from
the smoothed lightcurve with a smoothing window of 30 minutes (i.e. 6 time
bins).
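For illustration, the cleaning procedure can be summarized by the following minimal Python sketch (the function and variable names are ours and this is not the actual OSA implementation; \texttt{rate} and \texttt{err} are the 300\,s-binned count rates and their uncertainties):
\begin{verbatim}
import numpy as np

def clean_lightcurve(rate, err, n_iter=5, window=6):
    """Return a boolean mask of retained time bins:
    (1) drop points with uncertainties > 2x the average uncertainty;
    (2) iteratively drop points lying > 3 sigma away from the
        lightcurve smoothed with a 30-minute (6-bin) boxcar."""
    good = err < 2.0 * err.mean()
    for _ in range(n_iter):
        smooth = np.convolve(rate[good], np.ones(window) / window,
                             mode="same")
        outlier = np.abs(rate[good] - smooth) / err[good] > 3.0
        if not outlier.any():
            break
        idx = np.flatnonzero(good)
        good[idx[outlier]] = False
    return good
\end{verbatim}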
In order to subtract from the lightcurves the contribution of point sources in
the FoV, we performed an image analysis separately for the four Earth
observations. This was possible since the drift of the satellite was moderate
despite the absence of star trackers during the pointings. The image analysis
was done in a standard way with the default background maps of OSA~7.0. We
searched for all sources previously detected by ISGRI including the new source
IGR~J17062--6143 already reported by \citet{CSR07}. We then selected all sources
that were detected with a significance of \modif{more than $2\,\sigma$} in the
22--60\,keV band. We chose this low significance threshold to minimize the
contribution of the known point sources to the GRXE and the CXB. Sources with
even less significance are more likely to be spurious and their global
contribution will mostly cancel out with fake negative sources. \modif{We tested
both a simple powerlaw and a bremsstrahlung model to fit the data. We found that
for most sources the bremsstrahlung model gives a better phenomenological
description of the data than a powerlaw because many sources have a convex
spectral shape in this energy range. The fluxes derived for the brightest
($>3\,\sigma$) sources in our sample are listed in \reftab{sources}.}
\begin{table}[tb]
\caption{\label{tab:sources}
List of point sources detected in at least one of the four EOs.
}
\begin{flushleft}
\addtolength{\tabcolsep}{-1pt}
\begin{tabular}{@{}l@{~~~}cc@{~~~}r@{~~~}r@{~~~}r@{~~~}r@{}}
\hline
\hline
\rule[-0.5em]{0pt}{1.6em}
Source name& RA & Dec & EO\,1 & EO\,2 & EO\,3 & EO\,4 \\
& \multicolumn{2}{c}{(~deg~)$^a$} & \multicolumn{4}{c}{(~$10^{-11}$\mbox{erg\,cm$^{-2}$\,s$^{-1}$}~)$^b$} \\
\hline
\rule{0pt}{1.2em}%
IGR J14471--6319 & 221.81 & $-$63.29 & -- & -- & -- & 1.9\\
IGR J14515--5542 & 222.89 & $-$55.68 & 18.4 & -- & -- & --\\
IGR J14532--6356 & 223.31 & $-$63.93 & -- & -- & -- & 2.2\\
IGR J15094--6649 & 227.36 & $-$66.82 & 4.1 & -- & -- & 4.4\\
PSR B1509--58 & 228.48 & $-$59.14 & 7.2 & 7.5 & -- & 5.6\\
IGR J15359--5750 & 233.97 & $-$57.83 & 2.9 & -- & -- & 1.4\\
H 1538--522 & 235.60 & $-$52.39 & 14.2 & 22.8 & -- & 25.0\\
4U 1543--624 & 236.98 & $-$62.57 & -- & 4.5 & 4.4 & --\\
IGR J16167--4957 & 244.16 & $-$49.98 & -- & 2.8 & -- & 7.1\\
IGR J16207--5129 & 245.19 & $-$51.50 & 2.6 & 6.4 & -- & --\\
SWIFT J1626.6--5156 & 246.65 & $-$51.94 & 12.4 & 11.1 & 10.1 & 8.3\\
IGR J16283--4838 & 247.04 & $-$48.65 & 8.2 & -- & -- & --\\
IGR J16318--4848 & 247.95 & $-$48.82 & 37.7 & -- & 11.6 & 7.1\\
IGR J16320--4751 & 248.01 & $-$47.87 & 8.1 & -- & -- & 12.8\\
4U 1626--67 & 248.07 & $-$67.46 & 13.0 & 12.3 & 8.9 & 8.7\\
IGR J16377--6423 & 249.57 & $-$64.35 & -- & 1.1 & -- & --\\
IGR J16393--4643 & 249.77 & $-$46.70 & -- & 11.9 & -- & 1.6\\
H 1636--536 & 250.23 & $-$53.75 & 7.1 & 2.3 & -- & --\\
IGR J17008--6425 & 255.20 & $-$64.43 & -- & -- & -- & 0.9\\
XTE J1701--462 & 255.24 & $-$46.19 & -- & 26.8 & -- & --\\
GX 339--4 & 255.71 & $-$48.79 & 9.5 & 28.7 & 40.9 & 44.3\\
IGR J17062--6143 & 256.57 & $-$61.71 & -- & -- & -- & 4.5\\
NGC 6300 & 259.25 & $-$62.82 & 2.5 & 3.1 & 3.4 & 2.8\\
ESO 103--35 & 279.58 & $-$65.43 & -- & -- & 8.3 & --\\
\hline
\end{tabular}\\
\rule{0pt}{1.0em}%
\textbf{Notes.}
$^{(a)}$ source catalog position in right ascension (RA) and declination (Dec).
$^{(b)}$ derived model fluxes in the 20--50\,keV band for each EO.
\end{flushleft}
\end{table}
\begin{table}[tb]
\caption{\label{tab:results}
Numerical values of the obtained spectra shown in \reffig{plteeufs}.
}
\begin{flushleft}
\begin{tabular}{@{}rrrrrrrr@{}}
\hline
\hline
\rule[-0.5em]{0pt}{1.6em}
$E$~~~ & $\Delta E$ & $F\dmrm{sky}$ & $\Delta F\dmrm{sky}$ & $F\dmrm{ear}$ & $\Delta F\dmrm{ear}$ & $F\dmrm{gal}$$^c$ & $\Delta F\dmrm{gal}$$^c$ \\
\multicolumn{2}{c}{(~keV~)$^a$} & \multicolumn{6}{c}{(~\mbox{keV\,cm$^{-2}$\,s$^{-1}$\,sr$^{-1}$}~)$^b$} \\
\hline
\rule{0pt}{1.2em}%
21.14 & 1.44 & 39.57 & 2.53 & 7.05 & 1.58 & 18.09 & 2.07 \\
24.01 & 1.44 & 42.79 & 1.92 & 8.63 & 1.47 & 19.94 & 0.55 \\
27.36 & 1.91 & 43.84 & 1.81 & 11.23 & 1.31 & 19.26 & 4.88 \\
31.19 & 1.91 & 42.90 & 1.44 & 13.79 & 1.21 & 16.62 & 2.75 \\
35.02 & 1.91 & 41.08 & 2.45 & 17.02 & 1.60 & 14.35 & 4.26 \\
38.85 & 1.91 & 39.08 & 2.55 & 20.31 & 1.88 & 12.12 & 2.63 \\
43.16 & 2.39 & 35.42 & 2.42 & 23.87 & 2.46 & 9.80 & 0.84 \\
47.95 & 2.39 & 38.67 & 1.95 & 26.82 & 2.16 & 9.70 & 1.35 \\
52.73 & 2.39 & 36.24 & 2.42 & 29.30 & 2.00 & 8.33 & 1.40 \\
58.00 & 2.87 & 30.88 & 3.41 & 30.85 & 2.14 & 7.72 & 1.37 \\
63.74 & 2.87 & 28.86 & 2.81 & 31.18 & 2.48 & 7.17 & 1.78 \\
71.88 & 5.27 & 28.92 & 3.34 & 32.47 & 3.11 & 6.46 & 1.31 \\
83.37 & 6.22 & 30.01 & 6.35 & 33.37 & 3.23 & 6.17 & 0.80 \\
94.86 & 5.27 & 27.12 & 4.89 & 34.01 & 2.71 & 6.64 & 0.84 \\
112.57 & 12.45 & 24.82 & 5.94 & 35.04 & 3.90 & 6.68 & 0.99 \\
162.35 & 37.34 & 30.65 & 9.55 & 37.85 & 7.46 & 7.93 & 2.86 \\
\hline
\end{tabular}\\
\rule{0pt}{1.0em}%
\textbf{Notes.}
$^{(a)}$ Central energy, $E$, and half-width, $\Delta E$, of the bins.
$^{(b)}$ Fluxes, $F$, and statistical uncertainties, $\Delta F$, for the sky
background (sky), the Earth (ear) and the Galaxy (gal).
$^{(c)}$ On average over the region $320\degr<l<340\degr$ and $|b|<5\degr$.
\end{flushleft}
\end{table}
\section{Method}
\label{sec:method}
The detector lightcurves described above were modulated by the passage of the
Earth through the FoV of IBIS. Our approach was to model these observations in
detail to derive the expected modulation of the detector counts for each
emission component on the sky. This resulted in a series of model lightcurves in
different energy bins for each emission component and each of the four EOs. We
then fitted the normalizations of these model lightcurves to the observed
detector counts to derive the actual contribution of the diffuse emission
components.
This method requires the knowledge of the spacecraft position and attitude with
respect to the Earth and to the background sky at any time, a description of the
spatial distribution on the sky of the various emission components and also an
accurate description of the IBIS/ISGRI instrumental characteristics. These
aspects are described in the three subsections below. The generation of the
model lightcurves is described in \refsec{lightcurves}, whereas the actual
spectral fitting procedure is the subject of \refsec{spectral}.
\subsection{Satellite position and attitude}
\label{sec:satellite}
To construct the images of the sky corresponding to each of the four Earth
observations as illustrated in \reffig{ds9Image}, we needed to know the exact
attitude of the satellite and its distance to the Earth at any time. This
information can be extracted from auxiliary files provided by the mission
operation centre (MOC) in Darmstadt. It was used to compute the position of the
Earth center, the position of the geographic and magnetic poles, and the
apparent radius of the Earth \modif{as a function of time, all expressed in
degrees, using} the IBIS/ISGRI instrument coordinates $(Y,Z)$. \modif{The
spacecraft was close enough to the Earth at the beginning of the observation for
the planet's sphericity to slightly affect its apparent radius. This was
properly taken into account, as well as the $\sim$100\,km of obscuring
atmosphere in the hard X-rays mentioned by \citet{CSR07}. The magnetic pole in
the Northern hemisphere is set to its 2005 position of 82.7\degr\,N,
114.4\degr\,W.}
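As a minimal illustration of this geometry (a Python sketch under our own naming conventions, with a nominal mean Earth radius assumed), the apparent angular radius of the occulting disk, including the opaque atmosphere, reads:
\begin{verbatim}
import numpy as np

R_EARTH = 6371.0   # mean Earth radius (km); nominal value assumed here
H_ATM   = 100.0    # opaque atmosphere in the hard X-rays (km)

def apparent_radius_deg(distance_km):
    """Apparent angular radius of the occulting disk as seen from
    the spacecraft; the arcsine matters close to the Earth."""
    return np.degrees(np.arcsin((R_EARTH + H_ATM) / distance_km))

# e.g. at a distance of 40000 km the disk radius is ~9.3 deg
\end{verbatim}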
\subsection{Spatial distribution of components}
\label{sec:spatial}
\modif{Although we were only interested in the temporal modulation of counts on
the full detector area, we needed a sufficiently precise} description of the
spatial distribution of the emission components. We chose to \modif{define all
of} them by analytical functions that are described below and are illustrated in
\reffig{ds9Image}.
The simplest component is the CXB that we assumed to be uniform on the sky.
Although there is evidence that the CXB has some large-scale intensity
variations \citep[e.g.][]{BCK02,RMS08}, they are small in amplitude
(\raisebox{-.5ex}{$\;\stackrel{<}{\sim}\;$}\,2\,\%) and it would be very difficult to evaluate and account for a
possible non-uniformity so close to the Galactic bulge.
The GRXE was modeled with two perpendicular Lorentzian functions aligned with
the Galactic coordinates. The full-width at half maximum (FWHM) of the
Lorentzians are of $21\degr$ \citep[Fig.~7]{KRC07} and $1.2\degr$
\citep[Fig.~5]{RSG06} respectively along the Galactic longitude, $l$, and
latitude, $b$. The Lorentzians' maxima are at the Galactic center with a slight
latitude displacement of $b=-0.15\degr$ as measured by \citet{RSG06}. \modif{As
shown by \citet{KRC07}, this distribution matches well the \emph{COBE}/DIRBE map
at 4.9\,$\mu$m, which was used by \citet{BJR08} as a template for the GRXE below
120\,keV.}
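A sketch of this analytical surface-brightness model is given below (Python; the normalization is arbitrary at this stage, see \refsec{lightcurves}):
\begin{verbatim}
import numpy as np

FWHM_L, FWHM_B = 21.0, 1.2   # FWHM (deg) along longitude and latitude
B0 = -0.15                   # latitude of the maximum (deg)

def grxe_brightness(l, b):
    """Product of two perpendicular Lorentzians in Galactic
    coordinates; l is wrapped to [-180, 180) deg."""
    l = (np.asarray(l) + 180.0) % 360.0 - 180.0
    lor_l = 1.0 / (1.0 + (2.0 * l / FWHM_L) ** 2)
    lor_b = 1.0 / (1.0 + (2.0 * (b - B0) / FWHM_B) ** 2)
    return lor_l * lor_b
\end{verbatim}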
\modif{We took special care to define the spatial distribution of the Earth's
emission. There are two distinct components to be taken into account: the CXB
reflection by the Earth \citep{CSS08} and the CR-induced atmospheric emission
\citep{SCS07}. The authors of these studies performed Monte-Carlo simulations to
derive both the spectrum and the surface brightness of the Earth emission. We
used the latter results as a precise determination of the expected image of the
Earth at hard X-rays.}
\modif{\citet{CSS08} found that the X-ray albedo of the Earth is limb-darkened
at lower energies and limb-brightened at higher energies. At energies below
$\sim$100\,keV -- where this component dominates the Earth emission -- the
emission is found to be limb-darkened, but slightly less than for a sphere
emitting black-body radiation. Such an object would have a linear dependence of
the flux with $\mu\!\equiv\!\cos{\theta}$, where $\theta$ is the zenith angle,
i.e. the angle between the line-of-sight and the normal to the surface. For the
Earth albedo below $\sim$100\,keV they instead found an angular dependence of
the reflected flux that can be approximated by $F(\mu)\propto
\mu\,(1-0.5\,\mu)$. We used this equation to define the Earth albedo component.}
\modif{We modeled the CR-induced emission of the Earth's atmosphere
according to \citet[Eq.~(7)]{SCS07}. By setting the solar modulation potential
to $\phi\!=\!0.5$ -- corresponding to the solar minimum during the EOs of 2006
-- we can simplify this equation as:}
\begin{equation}
\label{eq:cr_emis}
C\propto \mu\,(1+\mu)\,\left(\,1+(R\dmrm{cut}/3.2)^2\,\right)^{-0.5},
\end{equation}
\modif{where $\mu$ is as defined above and $R\dmrm{cut}$ is the geomagnetic
cut-off rigidity. In the dipole approximation of the Earth's magnetic field,
the latter depends mainly on the geomagnetic latitude $\lambda\dmrm{m}$ as
$R\dmrm{cut}\simeq 14.5\,\cos^4{\lambda\dmrm{m}}$\,GV \citep{SS05}. The
resulting atmospheric emission of the Earth is a combination of relatively
strong limb-darkening from the $\mu$-dependence in \refeq{cr_emis} with enhanced
emission at the magnetic poles from the $\lambda\dmrm{m}$-dependence.}
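The angular laws adopted for the two Earth emission components can be condensed into the following Python sketch (relative brightness only; the normalizations are left free and fitted later, and the dipole approximation is assumed):
\begin{verbatim}
import numpy as np

def albedo_brightness(mu):
    """CXB reflection below ~100 keV: F(mu) ~ mu * (1 - 0.5*mu),
    with mu the cosine of the zenith angle."""
    return mu * (1.0 - 0.5 * mu)

def cr_brightness(mu, lat_m_deg):
    """CR-induced atmospheric emission for a solar modulation
    potential phi = 0.5, with the dipole approximation of the
    geomagnetic cut-off rigidity (in GV)."""
    r_cut = 14.5 * np.cos(np.radians(lat_m_deg)) ** 4
    return mu * (1.0 + mu) / np.sqrt(1.0 + (r_cut / 3.2) ** 2)
\end{verbatim}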
\modif{We note that we took into account for both Earth emission components the
distortion of the surface brightness related to the fact that only a portion of
the Earth's hemisphere can be seen when the spacecraft is relatively close to
the planet.}
\begin{figure}[t]
\includegraphics[bb=16 150 430 700,clip,width=\hsize]{13072f2.eps}
\caption{\label{fig:insEffects}
Surfaces representing the five IBIS/ISGRI vignetting effects that affect the
incoming radiation until it reaches the detector plane. The effects are those
corresponding to channel 12 ($\sim$72\,keV) and are:
(a) the IBIS tube transparency to off-axis radiation,
(b) the energy independent exposure map,
(c) the effective coded-mask transparency,
(d) the transmission of the Nomex structure supporting the mask, and
(e) the absorption of the ISGRI spider beams.
The total vignetting effect is the product of the effects (b) to
(e) with the addition of effect (a), which extends well outside
the partially coded FoV.
}
\end{figure}
\subsection{Instrumental characteristics}
\label{sec:instrumental}
As we wanted to fit real detector lightcurves with model lightcurves we needed
to take into account the instrumental characteristics of the telescope in the
\modif{modeling}. This does not include detector responses, but all effects
attenuating the incoming photon field on its path from outside the telescope
until reaching the detector plane. We identified five effects that affected the
detector illumination depending on the direction of the incoming radiation and
sometimes on its energy. The most obvious effect is the attenuation due to the
coded-mask elements which block out about half of the incoming radiation. The
second effect is the non-uniform exposure map which is caused by a partial
illumination of the detector for a source outside of the fully coded FoV.
Another vignetting effect is induced by the Nomex honeycomb structure that
supports the coded mask. The two last effects are due to the IBIS/ISGRI spider
beams separating the eight detector modules and to the lead shielding of the
IBIS telescope tube. The aluminum spider results in opacity at the lowest
energies, and the lead shielding of the tube becomes transparent at the highest
energies.
The five effects mentioned above are included in the standard IBIS/ISGRI
software for image reconstruction and spectral extraction, but as we
worked directly with the detector lightcurves, we needed to account for these
effects independently. Their \modif{modeling} as used in this work is described
below and is illustrated in \reffig{insEffects}, while the overall vignetting
effect is shown in \reffig{ds9Image}.
The coded-mask transparency is ideally of $0.5$ since there are as many elements
open as closed. This is, however, only true at the center of the FoV. For
off-axis sources there is an additional attenuation due to the thickness of
16\,mm of the mask elements, which project a wider shadow on the detector for
increasing off-axis angles. As the elements are made of tungsten -- a strongly
absorbing material -- it is fair to assume the elements to be completely opaque
in the energy range considered here. As we were only interested in the net
effect over the full detector plane and in a simple analytical description we
approximated the mask pattern as a giant chessboard of 46 equally-sized square
elements on a side of 1064\,mm. We then properly computed the additional shadow
from radiation that crossed the border of the mask elements and did not fall
onto the shadow of other elements.
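A crude first-order estimate of this effect -- not the full geometrical computation used in the analysis -- is sketched below in Python for a projection along one of the mask axes, where each opaque element projects an extra shadow of width $t\tan{\theta}$ into the neighbouring open element:
\begin{verbatim}
import numpy as np

MASK_SIDE = 1064.0           # mm
N_ELEM    = 46               # chessboard elements per side
CELL      = MASK_SIDE / N_ELEM
THICK     = 16.0             # mm, thickness of the W elements

def open_fraction(off_axis_deg):
    """Small-angle estimate of the effective open mask fraction."""
    extra = THICK * np.tan(np.radians(off_axis_deg)) / CELL
    return 0.5 * np.clip(1.0 - extra, 0.0, 1.0)
\end{verbatim}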
The exposure map of IBIS/ISGRI is basically very simple with a value of 1 in the
fully coded FoV and a linear decrease to zero in the partially coded FoV, except
in the corners of the image where the decrease is quadratic. \modif{When
we modeled this, we properly took into account the disposition of the eight
modules of the IBIS/ISGRI detector and the two-pixel wide space between them.}
The Nomex structure supporting the coded mask of IBIS is absorbing part of the
incoming photons. This is corrected for in the OSA software by off-axis
efficiency maps depending on energy. \modif{In 2006, at the time of the EOs,
these maps were still an approximation with only a dependence on the off-axis
angle. We used the new maps introduced in the OSA 6.0 release that do include an
additional azimuthal dependence due to the alignment of the walls of the
hexagonal tubes that the honeycomb structure is made of and also a correction
for the tubes pointing $\sim0.5\degr$ away from the center of the FoV; a
misalignment likely due to on-ground manipulations of the spacecraft.} We note
that the attenuation by the cosine of the off-axis angle is included in these
maps.
The eight IBIS/ISGRI detector modules are separated by an aluminum structure
called the ISGRI spider. It is made of one beam along the $Z$ axis and three
perpendicular beams. According to the IBIS experiment interface document part B
(EID-B) the beams have a trapezoidal section with a height of $48$\,mm, a base of
$9.5$\,mm, and a wall angle of $4\degr$, resulting in an upper width of $2.8$\,mm.
The wall angle ensures that the spider is not casting a shadow in the
fully coded FoV. However, in the partially coded FoV, the spider can mask up to
two rows of ISGRI pixels. We modeled this effect in detail for each energy bin
based on the corresponding attenuation length of Al. We found that for some
specific off-axis directions, the ISGRI spider can result in an attenuation of
the radiation on the detector plane of up to $\sim 5$\,\% in the lower energy
bins.
The last instrumental effect we considered is the IBIS telescope tube
transparency. At higher energies, the tube becomes transparent to radiation from
outside the fully coded FoV, giving an additional contribution to the detector
lightcurves. The tube is made of two vertical walls perpendicular to the $Y$
axis and of two walls inclined with an angle of $3.47\degr$ transverse to the
$Z$ axis. The walls are shielded with glued lead foils. The thickness of the Pb
sheets for each wall is reported in Table~3.2.8.1 of the EID-B (issue 7.0). We
used the values for the four upper sheets of the wall that are relevant for the
calculation of the tube transparency for an off-axis angle up to about 45\degr.
The calculation was done carefully, avoiding radiation further blocked by other
parts of the ISGRI collimating system: the 1\,mm thick W shield of the ISGRI
hopper and the 1.2\,mm thick W-strips of the side shielding of the mask. The
tube transparency is negligible at the lower energies, but reaches $\sim 1\,\%$
in channel 12 (see \reffig{insEffects}) near the Pb attenuation length edge at
about 75--80\,keV and $\sim 6\,\%$ in the last energy bin. Although this might
seem irrelevant, the cumulative effect on the detector plane from a wide region
outside the FoV is actually far from negligible at these energies.
\begin{figure}[t]
\includegraphics[width=\hsize]{13072f3.eps}
\caption{\label{fig:pltLC}
Model lightcurves of each component for channel 3 ($\sim$\,27\,keV) of the
first EO. The detector count rate values are the effective contribution of
the point sources (orange stars). The other components are the sky background
(red circles), the GRXE (blue squares), and the Earth \modif{CXB reflection}
(cyan triangles, long-dashed) and the \modif{CR-induced emission} (magenta
triangles, short-dashed). For these diffuse components the detector count
rate corresponds to an incident radiation of 40\,count\,s$^{-1}$\,sr$^{-1}$.
For the instrumental background (black diamonds) we show the relative
modulation, $M\dmrm{ins}(t)/\overline{M\dmrm{ins}}$, derived from the SPI/ACS
corresponding to a detector count rate of 10\,count\,s$^{-1}$.
}
\end{figure}
\subsection{Construction of model lightcurves}
\label{sec:lightcurves}
The next step was to construct model lightcurves describing the modulation of
the radiation of each component described in \refsec{spatial} as induced by the
passage of the Earth through the IBIS/ISGRI FoV. A complete set of model
lightcurves is shown in \reffig{pltLC}.
\subsubsection{Extended components}
\label{sec:extended}
For the diffuse components -- the CXB, the GRXE and the two different Earth
emission components -- we constructed the model lightcurves by generating a
series of images of the sky at different times based on the attitude of the
spacecraft and the position of the Earth in the FoV (see \refsec{satellite}).
For each individual component, we considered only its own contribution and the
instrumental vignetting effects (see \refsec{instrumental}) attenuating the
count rates on the detector plane. The sum of the pixels in the images generated
for different times during the Earth occultation defines the model lightcurve
for a given component. Because the instrumental attenuation is energy dependent,
we constructed these model lightcurves for each energy bin and for each of the
four EOs because of the slightly different pointing directions with respect to
the Galactic ridge and Earth positions. As high-energy radiation from outside
the field of view also contributes to the detector counts (see
\refsec{instrumental}), the simulated images were defined on a wide area
extending 30\degr outside of the actual FoV of IBIS/ISGRI ($|Y|<44.4\degr$ and
$|Z|<44.6\degr$).
The normalization of the diffuse components in the input images -- without
vignetting effects -- was set to 10\,count\,s$^{-1}$\,sr$^{-1}$. For the Earth
emission components, this is the average intensity over the Earth disk, while it
is the average in the area defined by $320\degr<l<340\degr$ and $|b|<5\degr$ for
the GRXE. After attenuation by the instrumental effects described in
\refsec{instrumental} the actual detector count rate was typically reduced by an
order of magnitude (see \reffig{pltLC}).
\subsubsection{Point sources}
\label{sec:src}
In addition to the lightcurves constructed for the extended components we also
generated one lightcurve in each energy bin for the point sources in the FoV.
The time modulation is step-like in this case due to the abrupt disappearance of
a source when it gets occulted by the Earth. The count rate used for each
\modif{source detected with a significance of more than $2\,\sigma$ was derived
from a bremsstrahlung} fit to its observed IBIS/ISGRI spectrum (see
\refsec{data}) divided by the fraction of time during which the source was not
occulted. These counts were then assigned to the corresponding source position
in the simulated sky images, and detector counts were obtained by summing-up the
image pixels after application of the instrumental vignetting effects. We did
this at different times during the passage of the Earth through the FoV to get
model detector lightcurves. As the set of model lightcurves for point sources at
different energies was based on the actual data collected during each EO, they
were considered as a fixed contribution to the detector lightcurves.
\subsubsection{Instrumental background}
\label{sec:ins}
The last but dominant contributor to the observed detector lightcurves is
the instrumental background. The time variability of this component depends on
the cosmic particle environment \modif{and the induced radioactive decay of the
spacecraft's material}. The particle environment is well monitored by the
anti-coincidence shield (ACS) of the spectrometer SPI, the other gamma-ray
instrument of \emph{INTEGRAL} \citep{VRS03}. We found \modif{good} evidence that
the IBIS/ISGRI detector lightcurves are indeed \modif{following} the variations
recorded by the SPI ACS. To estimate the actual relationship between the count
rates in the ACS and in the IBIS/ISGRI detector in each of the considered
spectral bins, we used the extragalactic observations of revolution 342 (Her X-1
and XMM LSS). These observations, away from bright hard X-ray sources, were
taken about six months before the EOs and have the particularity of including a
solar flare at the start of the revolution, resulting in important correlated
variations in the SPI ACS and the ISGRI detector counts during the 12-hour decay
of the flare between \emph{INTEGRAL} Julian dates (IJD = JD $-\ 2\,451\,544.5$)
of 2040.0 and 2040.5. This relationship is characterized by the slope $\alpha$
of a linear fit of the ISGRI counts versus the SPI/ACS counts. This slope is
likely to change from one observation to the other because the orientation of
the spacecraft with respect to the solar radiation and particle flux will change
the effective areas of both the SPI/ACS and the IBIS/ISGRI detectors in a
complex manner. However, the energy dependence of the slope $\alpha(E)$ for
different ISGRI energy bins is expected to be rather stable. \modif{We used this
energy dependence of $\alpha$ as an indication of the amount of SPI/ACS
modulation expected in the ISGRI detector lightcurves of the EOs. The model
lightcurve for the variations of the instrumental background was thus
constructed based on those of the SPI/ACS as:}
\begin{equation}
\label{eq:ins}
M\dmrm{ins}(E,t) = \alpha(E)\,\left( C\dmrm{ACS}(t)-\overline{ C\dmrm{ACS}} \right)\,,
\end{equation}
where $C\dmrm{ACS}(t)$ is the SPI/ACS count rate lightcurve measured during the
EOs.
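In practice, this amounts to a straight-line fit per energy bin; a minimal Python sketch (assuming simultaneous, binned count-rate arrays for ISGRI and the SPI/ACS; the names are ours) is:
\begin{verbatim}
import numpy as np

def acs_slope(isgri_counts, acs_counts):
    """Least-squares slope alpha of the ISGRI versus SPI/ACS
    count rates; the intercept absorbs the mean background."""
    slope, intercept = np.polyfit(acs_counts, isgri_counts, deg=1)
    return slope

def m_ins(alpha, acs_counts):
    """Model lightcurve: alpha * (C_ACS - <C_ACS>)."""
    return alpha * (acs_counts - acs_counts.mean())
\end{verbatim}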
\begin{figure}[t]
\includegraphics[width=\hsize]{13072f6.eps}
\caption{\label{fig:pltSpectra}
IBIS/ISGRI count rate spectra of each model component derived from the
detector lightcurves of the \emph{INTEGRAL} EOs. The values are
vignetting-corrected count rates per channel in units of
10\,count\,s$^{-1}$\,sr$^{-1}$, except for the instrumental background (black
diamonds) for which they are actual detector count rates in count\,s$^{-1}$.
For the CXB (red circles), the GRXE (blue squares) and the total Earth
emission (green triangles), the dotted lines of the same color show the
average spectra obtained for the four independent EOs. The relative
contributions to the Earth emission from the \modif{CXB reflection} (cyan
triangles, long-dashed) \modif{and the CR scattering in the atmosphere}
(magenta triangles, short-dashed) are also shown.
}
\end{figure}
\begin{figure*}[t]
\includegraphics[width=0.33\hsize]{13072f5a.eps}
\includegraphics[width=0.33\hsize]{13072f5b.eps}
\includegraphics[width=0.33\hsize]{13072f5c.eps}
\caption{\label{fig:pltFit}
Examples of detector lightcurve fits (\emph{upper panels}) and associated residuals
(\emph{lower panels}) for the first EO at three representative energies:
$\sim$27\,keV (channel 3), $\sim$52\,keV (channel 9) and $\sim$112\,keV
(channel 15), \emph{from left to right}. The reduced $\chi^2$ value of the best-fit
curve (red line) is \modif{0.97, 1.08 and 1.06} respectively. The
contribution of different components is shown by different colors with the
same coding as in \reffigs{pltLC}{pltSpectra}. From bottom to top we add to
the instrumental background (grey) -- modulated by the SPI/ACS lightcurve and
a possible trend with time -- the sky background (red), the GRXE (blue), the
\modif{albedo} (cyan) and \modif{atmospheric} (magenta) Earth emissions, and
the fixed contribution from point sources (orange).
}
\end{figure*}
\subsection{Spectral fitting}
\label{sec:spectral}
We described above the construction of the model lightcurves $M_i(t)$ shown in
\reffig{pltLC} for each component $i$, which are the \modif{SPI/ACS-related}
variations of the instrumental background (ins), the sky background (sky), the
GRXE (gal), \modif{the Earth's albedo (alb), the atmospheric CR-induced emission
(atm)}, and the point sources (src) in the FoV. The next step is to adjust these
model lightcurves to the observed detector lightcurve $D(t)$ in a given energy
band with a least-square fit. This is done by the following linear relation:
\begin{eqnarray}
\label{eq:fit}
D(t) \approx & a\dmrm{ins} + b\dmrm{ins}\,\frac{t\,-\,\overline{t}}{t\dmrm{end}\,-\,\overline{t}} + c\dmrm{ins}\,M\dmrm{ins}(t) + c\dmrm{sky}\,M\dmrm{sky}(t) + \nonumber\\
& c\dmrm{gal}\,M\dmrm{gal}(t) + c\dmrm{alb}\,M\dmrm{alb}(t) + c\dmrm{atm}\,M\dmrm{atm}(t) + M\dmrm{src}(t) \,,
\end{eqnarray}
\modif{where $a\dmrm{ins}$, $b\dmrm{ins}$ and the five $c_i$ are the seven free
parameters of the fit, scaling the model lightcurves $M_i(t)$ to best match the
observations $D(t)$. The $a\dmrm{ins}$ parameter describes the average value of
the instrumental background, whereas the $b\dmrm{ins}$ and $c\dmrm{ins}$
parameters model its variations. The parameter $b\dmrm{ins}$ allows us to
account for a linearly increasing or decreasing trend of the instrumental
background during the observation, centered on a time $\overline{t}$ and ending
at a time $t\dmrm{end}$. This turned out to have an important effect at energies
above 60\,keV (see \refsec{degeneracy}). Short-term variations of the
instrumental background $M\dmrm{ins}(t)$ are derived from the simultaneous
SPI/ACS lightcurve according to \refeq{ins} and are scaled by the parameter
$c\dmrm{ins}$. The four other $c_i$ parameters} are the count rates of the
various diffuse emission components on the sky, already corrected for
instrumental vignetting effects. Finally, the model lightcurve for the point
sources in the FoV, $M\dmrm{src}(t)$, was not scaled as it already represents
effective count rates in the detector.
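In its unconstrained form this is an ordinary weighted linear least-squares problem; a minimal Python sketch (array names are ours; \texttt{sigma} holds the measurement uncertainties) is:
\begin{verbatim}
import numpy as np

def fit_channel(D, t, M_ins, M_sky, M_gal, M_alb, M_atm, M_src, sigma):
    """Weighted linear fit of the detector lightcurve to a constant,
    a linear trend and the five scaled model lightcurves, with the
    point-source lightcurve M_src held fixed."""
    trend = (t - t.mean()) / (t[-1] - t.mean())
    A = np.column_stack([np.ones_like(t), trend,
                         M_ins, M_sky, M_gal, M_alb, M_atm])
    w = 1.0 / sigma
    coef, *_ = np.linalg.lstsq(A * w[:, None], (D - M_src) * w,
                               rcond=None)
    return coef  # [a_ins, b_ins, c_ins, c_sky, c_gal, c_alb, c_atm]
\end{verbatim}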
By fitting the observed lightcurves $D(E,t)$ in different energy bands $E$, one
derives count rate spectra $c_i(E)$ for the five components $i$. When we did
this independently for each of the 16 energy bins, we obtained quite noisy
spectra with a divergence towards implausible values in some channels. This is
due to significant degeneracy between the various components that we discuss in
\refsec{degeneracy}. A way to overcome this problem was to include a link in the
fitting between the values obtained in one energy bin and in some others. We did
this by adding an additional constraint to the $\chi^2$ minimization of the fit
so that the fitted parameter value $c_i$ would not be too far from an expected
value $c_i\umrm{exp}$ according to:
\begin{equation}
\label{eq:chi2}
\chi\dmrm{fit}^2 = \chi\dmrm{red}^2 + \frac{1}{\xi}\sqrt{
\sum_{i=1}^{5}{\left(\log{c_i}-\log{c_i\umrm{exp}}\right)^2} }\,,
\end{equation}
where $\chi\dmrm{red}^2\!\equiv\!\chi^2/\mbox{d.o.f.}$ is the $\chi^2$ divided
by the number of degrees of freedom (d.o.f.) and $\xi$ is a factor to be chosen
to get an appropriate balance between the quality of the fit and additional
constraints. We calculated the difference with respect to the expected value on
a logarithmic scale to avoid a dependence on the actual count rate values from
one component to the other.
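The modified figure of merit can be sketched as follows (Python; a base-10 logarithm is assumed here, and the expected values $c_i\umrm{exp}$ are those defined below):
\begin{verbatim}
import numpy as np

def constrained_chi2(chi2_red, c, c_exp, xi=15.0):
    """Reduced chi^2 plus a penalty keeping each fitted count
    rate close (in log) to its expected value; xi balances fit
    quality against the smoothness constraint."""
    c, c_exp = np.asarray(c, float), np.asarray(c_exp, float)
    penalty = np.sqrt(np.sum((np.log10(c) - np.log10(c_exp)) ** 2))
    return chi2_red + penalty / xi
\end{verbatim}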
The choice of the expected values $c_i\umrm{exp}$ in this constraint fit can of
course have strong implications on the results. We therefore took great care to
define them without including wrong assumptions and systematic effects. For the
parameter $c\dmrm{ins}$, the expected value was set to be the mean value
obtained over the 16 energy bins. This was motivated by our discussion in
\refsec{ins}, where we concluded that this factor can differ from unity, but is
expected to be rather constant from one energy bin to the other. For the four
other $c_i$ parameters, we defined the expected value based on the assumption
that the final, unfolded spectrum of each component is supposed to be smooth.
For each component spectrum, $c_i(E)$, the expected count rate in a given
channel, $c_i\umrm{exp}$, was set to be the linear interpolation between the
values in the two adjacent energy bins corrected for the effects of different
energy widths of the channels and of the detector response\footnote{To take into
account the effect of the ISGRI detector response, we used XSpec \citep{A96}
with the standard ISGRI ARF and redistribution matrix file (RMF) of OSA~7.0 to
derive the relative strength of the instrumental modulation of a powerlaw model
spectrum from one channel to the other. This modulation follows basically that
of the ARF, with some additional smoothing and distortion towards lower energies
due to the RMF, with only a slight dependence on the photon index taken to be
typically of $\Gamma\!=\!2$.}. For the first and last energy bins, the spectral
smoothness was similarly constrained, by setting the expected value to the
linear extrapolation of the two closest channels. \modif{We note that we did not
constrain the spectrum of the instrumental background level $a\dmrm{ins}$ and
its increasing or decreasing trend with time, $b\dmrm{ins}$, because both can
change rapidly from one energy bin to the other due to the presence of narrow
emission lines (see \refsec{data}).} For the five other parameters, the
introduced interdependence of the values obtained in adjacent energy bins
typically reduces the number of free parameters of the fit by a factor of 2.
This was taken into account when calculating the d.o.f. of the fit.
The actual fitting began with a set of input spectra and minimized the modified
$\chi^2$ of \refeq{chi2}, one channel after the other. We then reran this
process up to \modif{four} times until the overall $\chi^2$ -- computed on the
lightcurves in all energy bins -- did not significantly improve anymore. This
iterative spectral fitting was done independently for each of the four
Earth-observation datasets, \modif{and we combined the results to get the final
spectra.}
\modif{An issue in the fitting process is the choice of the parameter $\xi$
that} defines the strength of the additional constraint on the fit in
\refeq{chi2}. If $\xi$ is low the spectral smoothness constraint becomes
important and it then does not leave enough freedom to fit the actual data,
whereas if $\xi$ is too high, the fit might diverge in some energy bins towards
unrealistic values. \modif{We tested different values and chose $\xi\!=\!15$,
which leaves a lot of freedom to the fit, while limiting strong divergence to
only a few energy bins in one of the four EOs, namely EO\,3 in the 70--100\,keV
range (see \reffig{pltSpectra}).}
\modif{Another important issue is the choice of the input parameters, since we
experienced that they can have a significant influence on the final results.
This is due to degeneracy between some components that we discuss in
\refsec{degeneracy}. It is therefore safe to start with values corresponding
roughly to the expectations for the CXB and also for one of the two Earth
emission components.} We chose the analytical formula of \citet{GMP99} as the
basis for the input CXB spectrum. \modif{A first guess of the spectral values of
the other components was obtained} by doing a fit with the input CXB spectrum
fixed and imposing equal contributions for the two Earth emission components.
\modif{We obtained a global Earth emission that is much lower at the highest
energies than derived by \citet{CSR07}. As the CR-induced emission of the Earth
is the dominant component at these energies, this suggests that its
normalization has to be scaled down by a factor of $\sim$0.4 (see
\reffig{compEarIntegral}). The final set of unperturbated input spectra was
obtained by fitting the lightcurves again, but this time with fixed values for
both the CXB and the CR-induced Earth emission. For the latter, the spectrum was
defined by the analytical formula proposed by \citet[Eq.~(1)]{SCS07} with a
normalization $C$ of 13.2\,\mbox{keV\,cm$^{-2}$\,s$^{-1}$\,sr$^{-1}$}, i.e. 0.4 times that derived by
\citet{CSR07}.}
To smear-out \modif{the dependence of the final results on these input spectra
and to estimate the uncertainties}, we made a series of fits with different
input parameter values. We did this by perturbing each channel value of the
initial spectra by a random deviation following a Gaussian distribution with a
$\sigma$ of 30\,\%. This was done independently for all five component spectra,
\modif{namely the CXB, the GRXE, the two Earth emission components and the
average level of the instrumental background.} We performed the whole fitting
process described above starting from 30 different sets of perturbed input
spectra. \modif{For the four EOs, this resulted in 120 spectral fits. Instead of
taking the average on the obtained values, we took the median in each energy bin
as a robust estimator of the mean. This has the advantage of being independent
of whether the actual results or their logarithms are used, and it is not influenced by
outstanding values. We estimated the 1-$\sigma$ (68\,\% CL) statistical
uncertainties on the median taking the two results at
$\pm34\,\%/\sqrt{4}\!=\!\pm17\,\%$ in rank order away from the median, where the
factor of four stands for} the four EOs being independent measurements.
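Per energy bin, this combination reduces to taking the median of the 120 fitted values and reading the uncertainties in rank order; a minimal Python sketch:
\begin{verbatim}
import numpy as np

def combine_fits(values, n_indep=4):
    """Median with 1-sigma (68% CL) errors taken at
    +/- 34%/sqrt(n_indep) in rank order away from the median,
    with n_indep = 4 independent EOs."""
    v = np.sort(np.asarray(values, float))
    med = np.median(v)
    step = 0.34 / np.sqrt(n_indep)      # = 17% of the sample
    lo = v[int(round((0.5 - step) * (len(v) - 1)))]
    hi = v[int(round((0.5 + step) * (len(v) - 1)))]
    return med, med - lo, hi - med
\end{verbatim}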
\section{Results}
\label{sec:results}
\begin{figure}[t]
\includegraphics[width=\hsize]{13072f4.eps}
\caption{\label{fig:plteeufs}
Unfolded IBIS/ISGRI spectra of the sky background (red circles), the Earth
emission (green triangles) and the GRXE (blue squares) with their best fit
\modif{(dashed lines) and the more physical (dotted lines) spectral models
(see \reftabs{results}{fitparam}). The contribution of the sum of the
considered point sources averaged over the four EOs is also shown (orange
stars).}
}
\end{figure}
\begin{table}[tb]
\caption{\label{tab:fitparam}
Spectral fit parameters for the CXB, the GRXE and the Earth emission.
}
\begin{flushleft}
\begin{tabular}{@{}lcccccc@{}}
\hline
\hline
\rule[-0.5em]{0pt}{1.6em}
Spectra (model$^a$) & $\Gamma_1$$^b$ & $N_1$$^c$ & $E\dmrm{0}$$^d$ & $\Gamma_2$$^b$ & $N_2$$^c$ & $\chi\dmrm{red}^2$ \\
\hline
\rule{0pt}{1.2em}%
CXB (bknpow) & 1.68 & 15.2 & 28.7 & 2.42 & -- & 0.51 \\
CXB (cutoff)$^e$ & 1.95$^f$ & 44.1 & 127 & -- & -- & 0.85 \\
GRXE (cutoff) & 0.0$^f$ & 0.43 & 8.83 & 1.55$^f$ & 0.82 & 0.25 \\
GRXE (bremss)$^e$ & -- & 13.9 & 14.7 & 1.55$^f$ & 0.79 & 0.34 \\
Earth (bknpow) & 0.37 & 0.050 & 49.4 & 1.78 & -- & 0.05 \\
\hline
\end{tabular}\\
\rule{0pt}{1.0em}%
\textbf{Notes.}
$^{(a)}$ Model is either a broken powerlaw (bknpow), a cutoff powerlaw
(cutoff) or bremsstrahlung (bremss), with or without an extra powerlaw.
$^{(b)}$ Photon index of low- (1) or high-energy (2) powerlaw.
$^{(c)}$ Normalization at 1\,keV in \mbox{ph~cm$^{-2}$\,s$^{-1}$\,keV$^{-1}$\,sr$^{-1}$}.
$^{(d)}$ Characteristic energy in keV. Either the break energy (bknpow),
the cut-off energy (cutoff) or the $kT$ energy (bremss).
$^{(e)}$ the more physical model.
$^{(f)}$ fixed parameter value.
\end{flushleft}
\end{table}
The count rate spectra and uncertainties obtained with the iterative spectral
fitting process described in \refsec{spectral} are shown in \reffig{pltSpectra}.
\modif{We note that the large scatter from one EO to the other is clearly
dominating the uncertainties, which suggests that performing additional EOs in
the future will allow us to significantly improve the statistics of the
results.} We obtained overall average reduced $\chi^2$ values of
$\chi\dmrm{red}^2$\,=\,\modif{1.20, 1.11, 1.15 and 1.13} for EO\,1 to EO\,4,
respectively. These values, only slightly above unity, show that we obtain a fair
description of all the lightcurves without overinterpreting the data by using
too many free parameters. As an example we show the match between the model
corresponding to the final results for EO\,1 and the observed lightcurves in
three representative channels in \reffig{pltFit}.
The count rate spectra $c_i(E)$ shown in \reffig{pltSpectra} correspond to the
values before entering the telescope, i.e. they are corrected for the
instrumental vignetting effects described in \refsec{instrumental}. We could
thus directly use them for spectral fitting with XSpec to get unfolded spectra
in physical units. We did the spectral fitting with the standard IBIS/ISGRI ARF
and RMF detector response files distributed with OSA~7.0. We did not consider
Galactic hydrogen absorption in the fit because even along the galactic plane
the hydrogen column density is small enough -- $N\dmrm{H}\!\approx\!2\times
10^{22}$\,cm$^{-2}$ at Galactic coordinates of ($l$,$b$)\,=\,(330\degr,0\degr)
-- to have only a negligible effect. The spectral fit parameters are given in
\reftab{fitparam} and the resulting unfolded spectra are shown in
\reffig{plteeufs} with the numerical values given in \reftab{results}.
\modif{The CXB spectrum is best fitted by a broken powerlaw model with a break
energy at $E\dmrm{b}\!=\!28.7$\,keV and a high-energy photon index of
$\Gamma_2\!=\!2.42\pm0.09$, giving a reduced chi-squared of
$\chi\dmrm{red}^2\!=\!0.51$ for 12 d.o.f. A more physical model for the CXB
emission -- considered as the superimposition of the emission of unresolved
Seyfert galaxies -- is to take a cut-off powerlaw model. The degeneracy between
spectral slope and cut-off energy was solved by fixing the photon index to the
value of $\Gamma\!=\!1.95$ derived by \citet{BSR09} on average for all Seyfert
galaxies detected by \emph{INTEGRAL}. We then obtained a good description of the
CXB spectrum $\chi\dmrm{red}^2/\mbox{d.o.f.}\!=\!0.85/14$ with a cut-off energy
of $E\!=\!127\pm20$\,keV, at slightly higher energy than $E\!=\!86$\,keV derived
for Seyfert~1 galaxies \citep{BSR09}.}
We found that the GRXE spectrum was best fitted by a cut-off powerlaw plus a
second powerlaw to account for the hard tail at energies above $\sim$\,80\,keV.
\modif{As the indices of the powerlaws are poorly constrained, we fixed their
values to $\Gamma_1\!=\!0.0$ for the cut-off powerlaw and to $\Gamma_2\!=\!1.55$
for the hard tail, as derived by \citet{BJR08}. This gives a very good
description of the data with a $\chi\dmrm{red}^2\!=\!0.25$ for 13 d.o.f.
According to \citet{RSG06}, the main population contributing to the low-energy
part of the GRXE are intermediate polar cataclysmic variables. The accretion
column onto the magnetic poles of such types of accreting white dwarfs is
emitting optically thin thermal emission. The best fit for this more physical
bremsstrahlung model is almost indistinguishable from the cut-off powerlaw (see
\reffig{plteeufs}) and gives a typical average temperature of
$kT\!=\!14.7\pm1.4$\,keV.}
\modif{The spectrum of the total Earth emission is best fitted by a broken
powerlaw with a break at $E\!=\!49.4\pm4.9$\,keV and a high-energy photon index
of $\Gamma_2\!=\!1.78\pm0.13$. This break energy is slightly higher than derived
by the recent analysis of the \emph{Swift}/BAT data by \citet{AGS08}, while the
obtained spectral slope is in remarkable agreement with their result of
$\Gamma_2\!=\!1.72\pm0.08$ (90\,\% CL errors). We found a different
normalization of the Earth emission spectrum however, which we discuss in
\refsec{earth}, where we also discuss the separate spectra obtained for the
albedo and the CR-induced emission.}
\modif{The spectrum of the sum of all point sources detected at more than
$2\,\sigma$ on average among the 4 EOs is added in \reffig{plteeufs} for
comparison. The impression that point sources contribute much less than the GRXE
is misleading. This is related to the arbitrary area we chose for the
normalization of the GRXE. If we had normalized it to the area actually covered
by the partially coded FoV of IBIS, we would have had a GRXE spectrum scaled
down by a factor of $\sim4$, depending slightly on the EO. This would then lead to
a higher contribution of the point sources compared to the GRXE in qualitative
agreement with the SPI results by \citet{BJR08}.}
\section{Discussion}
\label{sec:discussion}
The spectra in the $\sim$\,20--200\,keV range presented above will now be
compared to previously published \emph{INTEGRAL} results and to spectra obtained
by other satellites. In the subsections below, we discuss this separately for
the CXB spectrum, the GRXE and the Earth emission.
\begin{figure}[t]
\includegraphics[bb=16 144 600 510,clip,width=\hsize]{13072f7.eps}
\caption{\label{fig:compIntegral}
Comparison of the IBIS/ISGRI CXB spectrum obtained here (red circles) with
the previous \emph{INTEGRAL} results \modif{of IBIS/ISGRI (black diamonds),
JEM-X (blue squares) and SPI (green triangles) published by \citet{CSR07}.}
}
\end{figure}
\subsection{Sky background spectrum}
\label{sec:sky}
It is interesting to compare the CXB spectrum obtained by the thorough analysis
of the IBIS/ISGRI detector lightcurves presented here with the \emph{INTEGRAL}
results previously published by \citet{CSR07}. The comparison is shown in
\reffig{compIntegral}. Our approach significantly extends the useful
energy range of the IBIS/ISGRI data towards higher energies. \modif{The new
results fall slightly below the previous IBIS/ISGRI spectrum, while we get a
good agreement with the SPI results of \citet{CSR07}, except possibly for the
first energy bin.}
\begin{figure}[t]
\includegraphics[bb=16 144 600 510,clip,width=\hsize]{13072f8.eps}
\caption{\label{fig:compHEAO}
\modif{Comparison of the CXB spectrum obtained by \emph{INTEGRAL} -- JEM-X
measurements (blue squares) from \citet{CSR07} and our IBIS/ISGRI results
(red circles) -- with the \emph{HEAO-1} spectra and analytical model by
\citet{GMP99} (error bars and dashed line). The data of the A-4 instrument of
\emph{HEAO-1} are shown in green with original normalization, while we show
in orange the spectrum of the A-2 instrument and of the model, both increased
by 10\,\% in intensity.}
}
\end{figure}
\begin{figure}[t]
\includegraphics[bb=16 144 600 510,clip,width=\hsize]{13072f9.eps}
\caption{\label{fig:compSwift}
Comparison of the \emph{INTEGRAL} IBIS/ISGRI (red circles, this work) and
JEM-X \citep[magenta diamonds,][]{CSR07} spectra with the other recent CXB
measurements by \emph{Swift} and \emph{BeppoSAX}. The \emph{Swift}/XRT error
box (orange shaded area) and the \emph{Swift}/BAT results (green triangles)
are from \citet{MPC08} and \citet{AGS08}, respectively. \modif{The best-fit
model of \citet{MPC08} for the combined \emph{Swift} dataset is shown with a
black line and grey uncertainty area.} The original \emph{BeppoSAX}/PDS
measurements of \citet{FOL07} were scaled by +13\,\% in intensity (blue
squares) to correct for the difference in Crab normalization with respect to
\emph{INTEGRAL}. \modif{The analytical model we propose in \refeq{cxb} is
shown as a purple dashed line.}
}
\end{figure}
As shown in \reffig{compHEAO}, the slightly lower emission we obtain now with
\modif{IBIS/ISGRI} is consistent with the \emph{HEAO-1}
measurements and its analytical approximation by \citet{GMP99}. The flux scaling
of the \emph{HEAO-1} spectrum by $+10$\,\% as suggested by \citet{CSR07} is
actually not required anymore in the IBIS/ISGRI energy range. The discrepancy
appears only below 20\,keV for the \emph{INTEGRAL}/JEM-X data that indicate a
higher CXB intensity than the \emph{HEAO-1} measurements. It seems therefore
that a simple scaling in intensity of the historic \emph{HEAO-1} spectrum is not
able to consistently match the combined CXB measurements of \emph{INTEGRAL},
both below and above the turnover.
\modif{However, as illustrated in \reffig{compHEAO}, there is some freedom
within the uncertainties to scale up by $\sim10$\,\% the intensity of the
spectrum of the A-2 instrument of \emph{HEAO-1} without changing that of the A-4
experiment. This would better match the JEM-X measurements and other results by
recent X-ray instruments, which all suggest a higher intensity below 20\,keV
than obtained by \emph{HEAO-1}/A-2 \citep[e.g.][Fig.~15]{GCH07}. The net effect
would be a broadening of the CXB hump and a slight shift of its maximum towards
lower energies. The expected qualitative consequence for an AGN population
synthesis of the CXB would be a reduction of the contribution of the most highly
obscured AGN, in particular the Compton-thick ones \citep[e.g.][]{TUV09}.
Alternatively, it could also indicate a slightly stronger contribution from a
population of distant (redshifted) luminous AGN compared to the local population
\citep[e.g.][]{TU05}.}
\modif{Another consistency check of our results is to compare them to the recent
\emph{Swift} and \emph{BeppoSAX} measurements. This is illustrated in
\reffig{compSwift} where the \emph{Swift}/XRT and the \emph{Swift}/BAT spectra
are from \citet{MPC08} and \citet{AGS08}, respectively. Our data are consistent
with the \emph{Swift}/BAT results and the combined XRT--BAT spectral model
proposed by \citet{MPC08}, although they tend to be at a significantly lower
intensity. Our data agree very well} with the \emph{BeppoSAX}/PDS data
\citep[Fig.~6 \emph{Bottom}]{FOL07} provided that they are scaled by a factor of
1.13 in intensity to account for the difference in the Crab normalization in the
20--50\,keV band between \emph{BeppoSAX}/PDS
\citep[$F\dmrm{Crab}\!=\!9.22\!\times\!10^{-9}$\mbox{erg\,cm$^{-2}$\,s$^{-1}$}]{FOL07} and
\emph{INTEGRAL} \citep[$F\dmrm{Crab}\!=\!10.4\!\times\!10^{-9}$\mbox{erg\,cm$^{-2}$\,s$^{-1}$}]{CSR07}.
We note that the latter estimation for \emph{INTEGRAL} is fully consistent with
the value obtained with OSA~7.0. The measured fluxes are
$F\dmrm{Crab}\!=\!10.30\!\times\!10^{-9}$\mbox{erg\,cm$^{-2}$\,s$^{-1}$}\ and
$10.46\!\times\!10^{-9}$\mbox{erg\,cm$^{-2}$\,s$^{-1}$}\ for the Crab observations of revolutions 365
and 422, respectively.
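For reference, the adopted $+13$\,\% scaling simply follows from the ratio of the two Crab calibrations,
\[
\frac{F\dmrm{Crab}(\mbox{\emph{INTEGRAL}})}{F\dmrm{Crab}(\mbox{\emph{BeppoSAX}})}
=\frac{10.4\times 10^{-9}}{9.22\times 10^{-9}}\simeq 1.13\,.
\]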
\modif{The obtained spectrum seems to be also very consistent in the peak region
with the recent CXB synthesis model by \citet{TUV09}. It thus gives additional
evidence for a small Compton-thick AGN fraction in the CXB spectrum, close to
9\,\%, instead of the 30--40\,\% postulated before \citep[e.g.][]{TU05}. Our data
cannot constrain a possible hardening of the CXB spectrum above 100\,keV, but
are consistent with an additional contribution of flat-spectrum radio quasars,
which have been found to dominate the CXB in the MeV range \citep{ACS09}.}
\modif{Based on the considerations above, we can tentatively suggest a slight
adaptation of the analytical description of the CXB proposed by
\citet[Eq.~(4)]{MPC08}, as:}
\begin{equation}
\label{eq:cxb}
E^2~\frac{dN_{\gamma}}{dE}=E^2~\frac{0.109~\mbox{ph~cm$^{-2}$\,s$^{-1}$\,keV$^{-1}$\,sr$^{-1}$}}{(E/28\,\mbox{keV})^{1.40}+(E/28\,\mbox{keV})^{2.88}}\,,
\end{equation}
\modif{where the only difference -- apart from the correction of a typo in the units --
is a change of the break energy from 29\,keV to 28\,keV. The corresponding
spectral shape is at the lower limit of the uncertainty area of the \emph{Swift}
model as shown in \reffig{compSwift}.}
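For convenience, the proposed analytical form is straightforward to evaluate numerically; a minimal Python sketch (function and variable names are ours) is:
\begin{verbatim}
import numpy as np

def cxb_model(E_keV):
    # E^2 dN/dE of the analytical CXB model above,
    # in keV cm^-2 s^-1 sr^-1 (E in keV)
    x = E_keV / 28.0
    return E_keV**2 * 0.109 / (x**1.40 + x**2.88)

E = np.logspace(np.log10(20.0), np.log10(200.0), 50)
print(cxb_model(E))   # model curve over the ~20-200 keV band
\end{verbatim}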
\subsection{Galactic ridge emission}
\label{sec:galactic}
\begin{figure}[t]
\includegraphics[bb=16 144 600 510,clip,width=\hsize]{13072f10.eps}
\caption{\label{fig:compGal}
Comparison of the obtained GRXE spectrum (blue squares)
with recent other determinations, all renormalized to the central radian of
the Milky Way defined by $|l|\!<\!30\degr$ and $|b|\!<\!15\degr$.
The previous \emph{INTEGRAL}/IBIS data (red circles) and the \emph{RXTE}/PCA
measurements (black diamonds) are from \citet[Fig.~14]{KRC07}. The
\emph{INTEGRAL}/SPI spectrum (green triangles) is from \citet[Fig.~9]{BJR08}.
}
\end{figure}
It is not easy to compare results on the GRXE from one publication to the other,
because the emission is often defined in different regions of the Galaxy. As the
region covered by our observations is away from the Galactic bulge where most
determinations have been made, we have to rescale them to a more commonly used
area. We choose the central radian of the Milky Way defined in Galactic
longitude $l$ and latitude $b$ by $|l|\leq 30\degr$ and $|b|\leq 15\degr$ as the
reference area for a comparison of the various measurements. As we do have an
analytical model of the GRXE (see \refsec{spatial}), it is possible to determine
the scaling factor from any region in the Galaxy to the chosen area. The
resulting renormalized spectra are compared in \reffig{compGal}. Our
measurements had to be scaled by a factor of $0.16$ to correspond to the chosen
area. The \emph{INTEGRAL}/IBIS spectrum from \citet[Fig.~14]{KRC07}
corresponding to the IBIS FoV area centered on the Galactic bulge was
multiplied by a calculated factor of $2.77$. We used the same factor for the
\emph{RXTE}/PCA data that have been scaled by \citet{KRC07} to match the IBIS
measurement at 20\,keV. For the \emph{INTEGRAL}/SPI spectrum of
\citet[Fig.~9]{BJR08}, which already corresponds to the area chosen here, we only
had to convert the units.
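To illustrate the rescaling procedure, the scaling factor between two sky regions follows from integrating the spatial model over each region. The following Python sketch uses hypothetical Lorentzian widths and an assumed latitude coverage for the observed region; the actual values are those fitted in \refsec{spatial}:
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad

GAMMA_L, GAMMA_B = 20.0, 4.0      # hypothetical widths (deg)

def grxe(l, b):
    # double-Lorentzian GRXE surface brightness (arbitrary units)
    return 1.0 / ((1.0 + (l / GAMMA_L)**2) * (1.0 + (b / GAMMA_B)**2))

def region_flux(l1, l2, b1, b2):
    # integrate over the region, with the cos(b) solid-angle factor
    f = lambda b, l: grxe(l, b) * np.cos(np.radians(b))
    val, _ = dblquad(f, l1, l2, lambda l: b1, lambda l: b2)
    return val

# factor rescaling a region at 320 < l < 340 deg (assumed |b| < 15 deg)
# to the central radian |l| < 30 deg, |b| < 15 deg
scale = region_flux(-30, 30, -15, 15) / region_flux(-40, -20, -15, 15)
\end{verbatim}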
In general, \reffig{compGal} shows a good agreement between the results of the
various instruments. This is quite remarkable for data that were not arbitrarily
renormalized, but were rescaled based on our very simple double-Lorentzian model
of the GRXE. This suggests that the model provides a fair description of the
overall emission of the inner Galaxy in the hard X-ray range. All three
independent \emph{INTEGRAL} measurements reveal a minimum at about 80\,keV. This
was only suggested by the 2-$\sigma$ upper limits of \citet{KRC07}, but is
confirmed now by our Earth occultation results and the latest SPI results of
\citet{BJR08}. \modif{Our IBIS/ISGRI results do, however, suggest that the
minimum is shallower than previously found.} The diffuse GRXE below 80\,keV is
thought to be due to a population of accreting white dwarfs too faint to be
resolved into discrete sources in the hard X-rays \citep{RSG06,KRC07}. It is
only at energies of $\sim$\,6--7\,keV that the diffuse emission could finally be
resolved using deep Chandra observations \citep{RSC09}. \modif{The
bremsstrahlung temperature of $kT\!=\!14.7\pm1.4$\,keV that we derived in
\refsec{results} for the accretion column onto the pole of the white dwarfs
agrees well with the measurements of individual intermediate polar systems
detected by \emph{Swift}/BAT \citep{BGA09}. Based on Table~2 in the latter
publication, we note that this temperature would correspond to a typical white
dwarf mass of $M\dmrm{wd}\!\simeq\!0.60\pm0.05$\,M$_{\odot}$ according to the
model of \citet{SRR05}. This estimate is just slightly above the expected
average mass of white dwarfs in the Galaxy
($\overline{M\dmrm{wd}}\!\sim\!0.5$\,M$_{\odot}$) that was found to agree well
with the previous IBIS/ISGRI results on the GRXE \citep[and references
therein]{KRC07}.}
\begin{figure}[t]
\includegraphics[bb=16 144 600 510,clip,width=\hsize]{13072f11.eps}
\caption{\label{fig:compGal2}
\modif{Comparison of the obtained GRXE spectrum (blue error bars and
long-dashed line) with the higher-energy emission of the Galactic ridge in
the region defined by $|l|\!<\!30\degr$ and $|b|\!<\!15\degr$ as observed by
\emph{INTEGRAL}/SPI (green triangles) from \citet[Fig.~9]{BJR08} and by
\emph{CGRO}/COMPTEL (black diamonds) and \emph{CGRO}/EGRET (red error bars).
The \emph{CGRO} data are from \citet[Fig.~3]{PMS08}, which is based on the
analysis by \citet{SBD99,SMR04}. The green solid line shows the model fitting
the SPI observations above 100\,keV \citep{BJR08}. It is the sum of a
powerlaw continuum (green dotted line), the positronium continuum
(short-dashed line) and a narrow electron-positron annihilation line at
511\,keV. A scaling by 20\,\% of the high-energy powerlaw is indicated by the
blue dotted-line.}
}
\end{figure}
Above 80\,keV the GRXE spectrum is likely dominated by inverse-Compton emission
from the interstellar medium \citep{PMS08}. We derived \modif{an intensity at
the level of} the 2-$\sigma$ upper limits of \citet{KRC07}, \modif{in excellent
agreement with} the latest SPI observations \citep{BJR08}. \modif{The photon
index of the high-energy powerlaw derived by these authors ($\Gamma\!=\!1.55$)
also agrees very well with our results, although it is too poorly constrained by
our data alone to be fitted independently.}
\modif{The overall shape of the high-energy spectrum of the GRXE including the
observations of the Compton gamma-ray observatory (\emph{CGRO}) up to 10\,GeV is
shown in \reffig{compGal2}. As our data are scaled from a region at
$320\degr<l<340\degr$ lying outside the galactic bulge where the bulk of the
positronium annihilation is emitted, they should be almost unaffected by the
positronium continuum. This implies that the $\sim20$\,\% higher normalization
of the high-energy powerlaw suggested by our data should be intrinsic. This
would agree well with the discussion of \citet{PMS08} concerning a possible
higher normalization of up to 40\,\% for this powerlaw.}
\subsection{Earth emission}
\label{sec:earth}
\begin{figure}[t]
\includegraphics[bb=16 144 600 510,clip,width=\hsize]{13072f12.eps}
\caption{\label{fig:compEarSwift}
Comparison of the obtained Earth emission spectrum (green triangles) with
previous determinations by various missions. The thin grey line is the
\emph{INTEGRAL} spectrum of \citet{CSR07} as described in
\reffig{compEarIntegral}. The obtained IBIS/ISGRI spectrum lies well between
the \emph{OSO-3} (black diamonds) and the \emph{Swift}/BAT (red circles)
measurements of \citet{SP74} and \citet{AGS08}, respectively. The values of
the \emph{BeppoSAX}/PDS measurements of \citet{FOL07} were increased by
13\,\% (blue squares) to be consistent with \reffig{compSwift}.
}
\end{figure}
\modif{Before discussing the relative contributions of the two Earth emission
components, we first compare, in \reffig{compEarSwift}, the obtained spectrum of
the Earth with other determinations. The Earth emission is found to be very
consistent with the spectra obtained previously, although there is a big scatter
among the various determinations. This is at least partially due to the
modulation of the Earth emission by the solar cycle and a dependence of the
observed flux on the spacecraft altitude and the geomagnetic latitude
\citep{SCS07}. For instance, the difference in normalization between the
\emph{Swift}/BAT spectrum and our determination can be related to
\emph{INTEGRAL} drifting towards an almost polar orbit, while \emph{Swift}
has a more equatorial orbit. The difference of roughly a factor of two, depending
on the energy, is consistent with the difference found by the polar-orbiting
satellite 1972-076B between the equatorial and the polar regions \citep{INR76}.
We did not include these spectra here for the sake of clarity, but they would be
compatible with} the other determinations provided that they are corrected for
unsubtracted CXB emission as shown by \citet[Figs.~15 and 16]{AGS08}.
\modif{A discrepancy we cannot ascribe to a different observation epoch or a
different viewpoint is the inconsistency of our results with the spectrum
derived from the same \emph{INTEGRAL} observations by \citet{CSR07}. We derive a
higher Earth emission at low energies and a lower intensity at high energies.
The discrepancy at the highest energies could somehow be due to the fact that
our results are based on IBIS data and their results on SPI, although both
instruments are well cross-calibrated. It is also possible that the difference
comes from our more detailed modeling of several instrumental effects described
in \refsec{instrumental}. At the lowest energies, the neglect of point-source
emission by \citet{CSR07} is a likely cause of the discrepancy, since for a
given CXB we need more Earth emission to compensate for the occultation of point
sources.}
\begin{figure}[t]
\includegraphics[bb=16 144 600 510,clip,width=\hsize]{13072f13.eps}
\caption{\label{fig:compEarIntegral}
\modif{Resulting spectrum of the total Earth emission (green triangles, solid
line) with separated contributions from the Earth reflection of the CXB (cyan
triangles, short-dashed line) and the CR-induced atmospheric emission
(magenta triangles, long-dashed line). The thin grey curves are normalized as
derived by \citet{CSR07}, whereas the thick colored lines are normalized to
match our measurements.}
}
\end{figure}
\modif{To better characterize the difference between the two determinations of
the Earth spectrum, we show in \reffig{compEarIntegral} the decomposition of the
overall Earth emission in the two distinct components considered in both
studies. Those are the reflection of the CXB by the Earth -- the albedo -- and
the emission induced by CR interactions in the atmosphere. The spectra of both
components can be described by analytical functions fitted to the results of
Monte-Carlo simulations published by \citet{CSS08} for the albedo, and by
\citet{SCS07} for the atmospheric emission. In order to fit the overall spectrum
of the Earth with these two components, we need to increase the albedo component
by $\sim40$\,\% and decrease the atmospheric emission by $\sim60$\,\% compared
to the normalizations suggested by \citet{CSR07}. We thus get a normalization of
the atmospheric emission \citep[Eq.~(1)]{SCS07} of $13.2\,\mbox{keV\,cm$^{-2}$\,s$^{-1}$\,sr$^{-1}$}$ at the
break energy of $E\!=\!44$\,keV, instead of $32.9\,\mbox{keV\,cm$^{-2}$\,s$^{-1}$\,sr$^{-1}$}$ as derived by
\citet{CSR07}. This difference in normalization is much more important than the
few percent of likely overestimation mentioned by \citet{SCS07} related to the
inclusion of the Compton-scattering from particles that would have intersected
the surface of the Earth. Further speculations on the origin of this discrepancy
are beyond the scope of this paper.}
\modif{Concerning the albedo, we note that the increase by a factor of $1.4$ we
derive here is not relative to the Earth reflection expected for the \emph{HEAO-1}
analytic approximation of \citet{GMP99}, but relative to this spectrum already scaled by a
factor of $1.1$ according to the results of \citet{CSR07}. As the CXB spectrum
obtained here agrees well with the original \emph{HEAO-1} spectrum, our
results suggest a reflection efficiency of the Earth atmosphere $\sim50$\,\%
higher than obtained by the Monte-Carlo simulations of \citet{CSS08}. According
to these authors, the shape of the input CXB spectrum has only a very limited
effect on the reflected spectrum, especially at energies below $\sim30$\,keV, so
there must be another reason for the substantial difference we observe. One
possibility is related to the delicate modeling of the composition of the
atmosphere. Could the presence of clouds have a significant effect by increasing
the amount of hydrogen atoms in the upper atmosphere, resulting in a more
``Sun-like'' albedo with more reflection at the lower energies
\citep[see][Fig.~6]{CSS08}? Another possibility would be the presence of
another Earth emission component emerging at the lowest energies, in particular
the potentially strong X-ray emission of aurorae \citep[e.g.][]{OSB01}. We note
that \citet{CSR07} found clear evidence for auroral emission in the
\emph{INTEGRAL}/JEM-X data during EO\,2 and EO\,3.}
\modif{Finally, it is fair to mention that the almost perfect agreement we show
in \reffig{compEarIntegral} between the data and the model for the two distinct
Earth emission components is mainly due to the choice of the input spectrum for
the atmospheric emission (see \refsec{spectral}). The strong degeneracy between
the two Earth emission components discussed in \refsec{degeneracy} does not
allow us to get such a good distinction of the two components when starting from
an arbitrary set of input parameters. We would just get a tendency for the
CR-induced emission to dominate at the higher energies and vice-versa at low
energy.}
\subsection{Degeneracy issues}
\label{sec:degeneracy}
Degeneracy is the reason why it is so difficult to determine the CXB spectrum
with Earth-occultation data. The basic problem is that a spatially uniform Earth
emission cannot be distinguished from the CXB occultation when the instrumental
background level is unknown. It is because of this that all previous studies
using the Earth as occultator had to assume a priori the spectrum of the CXB and
to a large extent also that of the Earth emission
\citep[see][]{CSR07,FOL07,AGS08}. Here, we tried to solve the degeneracy issue
by fixing the spatial distribution of the Earth emission components rather than
its spectral shape and by introducing a spectral smoothness constraint as
explained in \refsec{spectral}. However, because of the noise in the data and a
possible \modif{deviation} of the actual instrumental background variations
compared to those assumed based on the SPI/ACS lightcurve (see \refsec{ins}) it
is not possible to completely solve the degeneracy issues.
\modif{To overcome this problem, we also had to incorporate some a priori
assumptions on the spectral shape of the CXB and of one of the two Earth
emission components, chosen to be the atmospheric CR-induced emission because of
its strong drop at low energies that cannot be easily determined otherwise. As
explained in \refsec{spectral}, this is however only used to define the set of
input spectra that we then perturb randomly before fitting the parameters to
the data. Despite this, we still keep a dependence on the input parameters in
the results. This is very obvious for the two Earth emission components that are
strongly degenerate (see \refsec{earth}).}
Another important degeneracy is between the instrumental background and the sum
of the sky background plus Earth emission. A higher instrumental background will
imply lower sky background and Earth emission, and vice-versa. Actually, the
data primarily constrain the \emph{difference} between the Earth and the CXB
emissions. This difference is basically the height of a bump in the lightcurve
in case the Earth emission dominates or, alternatively, the depth of a trough,
when the sky background occultation dominates (see \reffig{pltFit}). \modif{To
test the effect of this degeneracy on our results, we fitted the data with the
same procedure as explained in \refsec{spectral}, but with an input spectrum for
the CXB increased by 10\,\%. This resulted in a CXB spectrum with a similar
shape, but a higher normalization by about the same factor. However, to
compensate for this, the Earth albedo is then found to be higher than derived by
\citet{CSR07} by a factor of $\sim2$, instead of $\sim1.4$ (see \refsec{earth}).
Although this cannot be completely excluded, the discrepancy on the
normalization of the albedo is judged to be unrealistically high. We thus favor
the more conservative results obtained with the original normalization of the
CXB for the input parameters.}
The presence of the GRXE in the observed region of the sky can also add some
degeneracy, but its location to the side of the FoV was actually rather optimal
as its maximal occultation effect occurred earlier in the lightcurves than
for the CXB (see \reffig{pltLC}). \modif{We only noted a slight degeneracy
between the GRXE and the sum of the CXB and the polar-enhanced atmospheric
emission.}
\modif{We identified an instrumental effect that affects the results of
both the GRXE and the Earth emission at energies above $\sim60$\,keV. By
ignoring the possibility of an increasing or decreasing trend of the
instrumental background in addition to the SPI/ACS modulation (see \refeq{fit}),
we obtained a high-energy drop of the Earth emission together with an
unrealistically steep rise of the GRXE spectrum. This behavior aims at
compensating a decreasing trend of the instrumental counts in the spectral
region of the emission lines of W, Pb and Bi (60--80\,keV, see \refsec{data})
and an increasing trend of the counts at even higher energies. This is likely
due to radioactive decay at the exit of the radiation belts and illustrates the
sensitivity of the method to a very accurate description of all instrumental
effects.}
\modif{Finally, we note that an underestimation of the contribution of the point
sources will tend to increase both the GRXE and the CXB intensity because point
sources are distributed all over the FoV with increased density in the galactic
plane.}
\section{Conclusion}
\label{sec:conclusion}
We presented the results of an original analysis of the four consecutive
Earth-occultation observations by the IBIS/ISGRI instrument aboard the
\emph{INTEGRAL} satellite. \modif{Our approach is complementary to the previous
study of these data by \citet{CSR07}, because instead of fixing the spectral
shape of the CXB and fitting its normalization, we attempt to derive the
complete spectral information from the observed detector lightcurves in
different energy bins. This requires a deep understanding of the instrumental
effects and a careful modeling of the spatial distribution of the various
contributions from the Galaxy, the Earth and point sources.} Despite inherent
degeneracy issues that forced us to fit the data with an additional spectral
smoothness constraint and adequate input parameters, the approach used here
results in a coherent set of spectra for the CXB, the GRXE and the Earth
emission.
The obtained IBIS/ISGRI results for the CXB are consistent with the historic
\emph{HEAO-1} spectrum, without any scaling in intensity. \modif{The scaling by
$+10$\,\% in intensity proposed by \citet{CSR07} is not incompatible with the
actual dataset, but is disfavored as it implies a CXB reflection by the Earth
twice as strong as the one derived by the Monte-Carlo simulations of
\citet{CSS08}. The obtained spectrum also agrees well with recent \emph{Swift}
and \emph{BeppoSAX} determinations. We propose a slight adaptation of the CXB
model spectrum suggested by \citet{MPC08}, which is based on \emph{Swift} data
alone, that implies a reduced fraction of strongly absorbed AGN, compared to the
\emph{HEAO-1} spectrum of \citet{GMP99}.}
\modif{The spectrum of the Earth emission is very well described by the
contribution of two distinct components: the reflection of the CXB that is
dominant at lower energies, and the CR-induced atmospheric emission. The derived
normalizations for these two components are, however, found to be very different
from what was suggested by the study of \citet{CSR07}.}
With a total observation time of only about a day, these special types of
\emph{INTEGRAL} observations yield a spectrum of the GRXE with statistics
comparable to those obtained by combining all available \emph{INTEGRAL}/SPI
observations. This allows us to observationally estimate the average mass of
white dwarfs in the Galaxy. Conducting similar observations in different regions
of the Galactic plane would be useful to characterize the \modif{longitudinal}
distribution of the GRXE.
\modif{However, it would be even more important to conduct Earth observations
away from the Galactic plane to lift any degeneracy related to the presence of
the GRXE and point sources. This would lead to a determination of the CXB with
improved statistics and less systematics and thus fully exploit INTEGRAL's
unique capability to observe the entire Earth from a high-altitude orbit.}
\begin{acknowledgements}
Based on observations with \emph{INTEGRAL}, an ESA mission with instruments and
science data centre funded by ESA member states (especially the PI countries:
Denmark, France, Germany, Italy, Switzerland, Spain), Czech Republic and Poland,
and with the participation of Russia and the USA.
PL has been supported in part by the Polish MNiSW grants NN203065933 and
362/1/N-INTEGRAL/2008/09/0, and the Polish Astroparticle Network
621/E-78/BWSN-0068/2008.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Estimating nodal functions/signals over networks is a task emerging in various domains, such as social, brain, and power networks, to name a few. Functions of nodes can represent certain attributes or classes of these nodes. In Facebook for instance, each node represents a person, and the presence of an edge indicates that two persons are friends, while nodal attributes can be age, gender or movie ratings of each person. In financial networks, where each node is a company, with links denoting trade between two companies, the function of the node can represent the category that each company belongs to, e.g., technology-, fashion-, or education-related.
\textcolor{black}{In real-world networks, nodal function values are often unavailable, due to, e.g., privacy issues. Hence, a topic of great practical importance
is to interpolate missing nodal values (class, ranking or function), based on the function values at a subset of observed nodes. Interpolation of nodal function values often relies on the assumption of ``smoothness'' over the graphs, which implies that neighboring nodes will have similar nodal function values. For example, in social networks, people tend to rate e.g., movies similar to their friends, and in financial networks, companies that trade with each other usually belong to the same category.
%
From this point of view, }
function estimation over graphs based on partial observations has been investigated extensively; see, e.g., \cite{kolaczyk2009statistical,kondor2002diffusion,belkin2006manifold,wasserman2008statistical,lu2003link,giannakis2018pieee}.
Function estimation has been also pursued in the context of semi-supervised learning, e.g., for transductive regression or classification, see e.g., \cite{chapelle2009semi,chapelle2000transductive,cortes2007transductive,berberidis2018adaptive}. The same task has been studied recently as signal reconstruction over graphs, see e.g., \cite{narang2013signal,wang2015local,romero2017kernel,marques2016sampling,shuman2013emerging}, where signal values on unobserved nodes can be estimated by properly introducing a graph-aware prior.
Kernel-based methods for learning over graphs offer a unifying framework that includes linear and nonlinear function estimators \cite{romero2017kernel,smola2003kernels,ioannidis2018inference}. The nonlinear methods outperform the linear ones but suffer from the curse of dimensionality \cite{wahba1990spline}, rendering them less attractive for large-scale networks.
To alleviate this limitation, a scalable kernel-based approach will be introduced in the present paper, which leverages the random feature approximation to ensure \emph{scalability} while also allowing \emph{real-time} evaluation of the functions over large-scale dynamic networks. In addition, the novel approach incorporates a data-driven scheme for \emph{adaptive} kernel selection.
Adaptive learning over graphs has been also investigated for tracking and learning over possibly dynamic networks, e.g., \cite{di2018adaptive,ioannidis2018inference}. Least mean-squares and recursive least-squares adaptive schemes have been developed in \cite{di2018adaptive}, without explicitly accounting for evolving network topologies. In contrast, \cite{ioannidis2018inference} proposed a
kernel-based reconstruction scheme to track time-varying
signals over time-evolving topologies, but assumed that the kernel function is selected a priori. All these prior works assume that the network size is fixed.
In certain applications, however, new nodes may join the network over time. For example, hundreds of new users are joining Facebook or Netflix every day, and new companies are founded in financial networks regularly. Real-time and scalable estimation of the desired functions on these newly-joining nodes is of great importance. While simple schemes such as averaging over one- or multi-hop neighborhoods scale well with the network size, since they predict the value of each newly-joining node as a weighted combination of its multi-hop neighbors~\cite{altman1992introduction}, they do
not capture global information over the network. In addition, existing rigorous approaches are in general less efficient in accounting for newly-joining nodes, and need to solve the problem over all nodes, every time new nodes join the network, which incurs complexity $\mathcal{O}(N^3)$, where $N$ denotes the network size~\cite{romero2017kernel,smola2003kernels}. As a result, these methods are not amenable to real-time evaluation over newly-joining nodes. To this end, the present paper develops a scalable \emph{online graph-adaptive} algorithm that can efficiently estimate nodal functions on newly-joining nodes `on the fly.'
Besides scalability and adaptivity, nodes may have firm \emph{privacy} requirements, and may therefore not be willing to reveal who their neighbors are. However, most graph-based learning methods require knowing the entire connectivity pattern, and thus cannot meet the privacy requirements. The novel random feature based approach on the other hand, only requires an encrypted version of each node's connectivity pattern, which makes it appealing for networks with stringent privacy constraints.
In short, we put forth a novel online multikernel learning (MKL) framework for effectively learning and tracking nonlinear functions over graphs. Our contributions are as follows.
\noindent\textbf{c1)} A scalable MKL approach is developed to efficiently estimate the nodal function values both on the observed and un-observed nodes of a graph;\\
\noindent\textbf{c2)} The resultant algorithm is capable of estimating the function value of newly incoming nodes with high accuracy without having to solve the batch problem over all nodes, making it highly scalable as the network size grows, and suitable for nodal function estimation in dynamic networks;
\\
\noindent\textbf{c3)} Unlike most existing methods that rely on nodal feature vectors in order to learn the function, the proposed scheme simply capitalizes on the connectivity pattern of each node, while at the same time, nodal feature vectors can be easily incorporated if available; and,\\
\noindent\textbf{c4)} The proposed algorithm does not require nodes to share connectivity patterns. Instead, a privacy-preserving scheme is developed for estimating the nodal function values based on an encrypted version of the nodal connectivity patterns, hence respecting node privacy.
The rest of this paper is organized as follows. Preliminaries are in Section~\ref{sec:pre}, while Section~\ref{sec:online} presents an online kernel-based algorithm that allows sequential processing of nodal samples. Section~\ref{sec:gradraker} develops an online MKL scheme for sequential data-driven kernel selection, which allows graph-adaptive selection of kernel functions to best fit the learning task of interest.
Finally, results of corroborating numerical tests on both synthetic and real data are presented in Section~\ref{sec:test}, while concluding remarks along with a discussion of ongoing and future directions are given in Section~\ref{sec:con}.
\noindent\textit{Notation}. Bold uppercase (lowercase) letters denote matrices (column vectors), while $(\cdot)^{\top}$ and $\lambda_i(.)$ stand for matrix transposition, and the $i$th leading eigenvalue of the matrix argument, respectively. The identity matrix will be represented by $\mathbf{I}$, while $\mathbf{0}$ will denote the matrix of all zeros, and their dimensions will be clear from the context. Finally, the $\ell_p$ and Frobenius norms will be denoted by $\|\cdot\|_p$, and $\|\cdot\|_F$, respectively.
\section{Kernel-based learning over graphs}\label{sec:pre}
{\color{black}
Consider a graph $\mathcal{G}(\mathcal{V},\mathcal{E})$ of $N$ nodes, whose topology is captured by a known adjacency matrix $\bbA\in\mathbb{R}^{N\times N}$. Let $a_{nn'}\in\mathbb{R}$ denote the $(n,n')$ entry of $\bbA$, which is nonzero only if an edge is present from node $n'$ to $n$.
A real-valued function (or signal) on a graph is a mapping $f: {\cal V} \rightarrow \mathbb{R}$, where ${\cal V}$ is the set of vertices. The value $f(v)=x_v$ represents an attribute of $v \in {\cal V}$, e.g., in the context of brain networks, $x_{v_n}$ could represent the sample of an electroencephalogram (EEG), or functional magnetic resonance imaging (fMRI) measurement at region $n$. In a social network, $x_{v_n}$ could denote the age, political alignment, or annual income of the $n$th person. \textcolor{black}{Suppose that a collection of noisy samples $\{y_m=x_{v_{n_m}}+e_m\}_{m=1}^M$
is available, where $e_m$ models noise, and $M\leq N$ represents the number of measurements. Given $\{y_m\}_{m=1}^M$, and with the graph topology known, the goal is to estimate $f(v)$, and thus reconstruct the graph signal at unobserved vertices. Letting
$ \bby:= [y_1, \ldots, y_M ]^\top$, the observation vector obeys
\begin{align}
\label{eq:measure}
\bby=\bbPsi\bbx+\bbe
\end{align}
where $\bbx:=[x_{v_1}, \dots, x_{v_N}]^\top$, $\bbe:=[e_1, \dots, e_M]^\top$, and $\bbPsi\in \{0,1\}^{M\times N}$ is a sampling matrix with binary entries $[\bbPsi]_{m,n_m}=1$ for $m=1, \dots, M$, and $0$, elsewhere.}
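For instance, with $N=6$ nodes and observations at the first, third, and sixth node, $\bbPsi$ can be formed as in the following minimal Python sketch (all names are ours):
\begin{verbatim}
import numpy as np

N, sampled = 6, [0, 2, 5]            # 0-based indices of observed nodes
Psi = np.zeros((len(sampled), N))
Psi[np.arange(len(sampled)), sampled] = 1.0
# y = Psi @ x + e retains only the observed entries of x
\end{verbatim}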
Given $\bbPsi$, $\bby$, and $\bbA$, the goal is to estimate $\bbx$ over the entire network.
{To tackle the under-determined system \eqref{eq:measure},} consider function $f$ belonging to
a reproducing kernel Hilbert space (RKHS) defined as~\cite{smola2003kernels,romero2017kernel}
\begin{align}
\label{eq:def:grkhs}
\mathcal{H}:=\{f:f(v)=\sum_{n=1}^N\alpha_n \kappa(v,v_n), \alpha_n \in \mathbb{R}\}
\end{align}
where $\kappa: \mathcal{V}\times \mathcal{V}\rightarrow \mathbb{R}$ is a pre-selected kernel function. Hereafter, we will let $n_m=m$ for notational convenience, and without loss of generality (wlog). Given $\bby$, the RKHS-based estimate is formed as
\begin{equation}\label{opt0}
\hat{f}= \arg \min_{f\in \mathcal{H}}~\frac{1}{M}\sum_{m=1}^M{\cal C}(f(v_{m}),y_m)+\mu\Omega\left(\|f\|_{\mathcal{H}}^2\right)
\end{equation}
where the cost ${\cal C}(\cdot,\cdot)$ can be selected depending on the learning task, e.g., the least-squares (LS) for regression, or the logistic loss for classification; $\|f\|_{\mathcal{H}}^2:=\sum_{n}\sum_{n'} \alpha_n\alpha_{n'}\kappa(v_n,v_{n'})$ is the RKHS norm; $\Omega(\cdot)$ is an increasing function; and, $\mu>0$ is a regularization parameter that {copes with} overfitting. According to the definition of graph RKHS in \eqref{eq:def:grkhs}, the function estimate can be written as $\hat{f}(v)=\sum_{n=1}^N \alpha_n \kappa(v, v_n):=\bar{\bbalpha}^\top \bar{\mathbf{k}}(v)$, where $\bar{\bm{\alpha}}:=[\alpha_1,\ldots,\alpha_N]^{\top}\!\in\mathbb{R}^N$ collects the basis coefficients, and $\bar{\mathbf{k}}({v}):=[\kappa(v,v_1),\ldots,\kappa(v,v_N)]^{\top}\!$.
Substituting into the RKHS norm, we find $\|f\|_{\mathcal{H}}^2:=\sum_{n}\sum_{n'} \alpha_n\alpha_{n'}\kappa(v_n,v_{n'})=\bar{\bm{\alpha}}^{\top}\bar{\mathbf{K}}\bar{\bm{\alpha}}$, where the $N\times N$ kernel matrix $\bar{\mathbf{K}}$ has entries $[\bar{\mathbf{K}}]_{n,n'}:=\kappa(v_n,v_{n'})$; thus, the functional problem \eqref{opt0} boils down to
{\color{black}
\begin{equation}\label{eq:opt1}
\min_{\bar{\bm{\alpha}}\in\mathbb{R}^N}\!\frac{1}{M}\sum_{m=1}^M{\cal C}(\bar{\bm{\alpha}}^{\top}\bar{\mathbf{k}}(v_{m}),y_m)+\mu \Omega\left(\bar{\bm{\alpha}}^{\top}\bar{\mathbf{K}}\bar{\bm{\alpha}}\right).
\end{equation}
}
According to the representer theorem, the optimal solution of \eqref{opt0} admits the finite-dimensional form given by \cite{smola2003kernels,romero2017kernel}
\begin{equation}\label{eq:sol0}
\hat{f}(v)=\sum_{m=1}^M\alpha_m \kappa(v,v_{m})
:=\bm{\alpha}^{\top}{\mathbf{k}}(v).
\end{equation}
where $\bm{\alpha}:=[\alpha_1,\ldots,\alpha_M]^{\top}\!\in\mathbb{R}^M$, and $\mathbf{k}(v):=[\kappa(v,v_1),\ldots,\kappa(v,v_M)]^{\top}\!$.
This means that the coefficients corresponding to the unobserved nodes are all zero; hence, the function over the graph can be estimated by optimizing over the $M\times 1$ vector $\bbalpha$ [cf. \eqref{opt0}]
\begin{equation}\label{eq:opt2}
\min_{{\color{black}\bm{\alpha}\in\mathbb{R}^M}}~\frac{1}{M}\sum_{m=1}^M{\cal C}({\bm{\alpha}}^{\top}{\mathbf{k}}(v_m),y_m)+\mu \Omega\left({\bm{\alpha}}^{\top}\bbK{\bm{\alpha}}\right)
\end{equation}
where $\bbK:=\bbPsi\bar{\mathbf{K}}\bbPsi^\top$. For general kernel-based learning tasks, $\bar{\bbK}$ is formed using nonlinear functions of pairwise correlations $\kappa(v_n, v_{n'})=\bbphi_{n}^\top\bbphi_{n'}$, where $\bbphi_{n}$ denotes the feature vector of node $n$, which can collect, for example, the buying history of users on Amazon, or the trading history of companies in financial networks. However, such information may not be available in practice, due to, e.g., privacy concerns. This has motivated the graph-kernel based approaches in \cite{romero2017kernel} and \cite{smola2003kernels}, which reconstruct the graph signal when only the network structure is available, and the kernel matrix is selected as a nonlinear function of the graph Laplacian matrix. Specifically, these works mostly consider undirected networks, $\bbA=\bbA^\top$.
Given the normalized Laplacian matrix $\bbL:=\bbI-\bbD^{-1/2}\bbA\bbD^{-1/2}$, with $\bbD:={\rm diag}(\bbA\mathbf{1})$, and letting $\bbL:=\bbU\bbLambda\bbU^\top$, the family of graphical kernels is}
\begin{align}
\label{eq:gk}
\bar{\bbK}:=r^\dagger(\bbL):=\bbU r^\dagger(\bbLambda)\bbU^\top
\end{align}
where $r(.)$ is a non-decreasing scalar function of the eigenvalues, and $^\dagger$ denotes pseudo-inverse. By selecting $r(.)$, different graph properties can be accounted for, including smoothness, band-limitedness, the random walk~\cite{smola2003kernels}, and diffusion~\cite{kondor2002diffusion}.
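As an illustration, the following Python sketch forms such a graphical kernel for the diffusion map $r(\lambda)=e^{\sigma^2\lambda/2}$ on a toy random graph; the adjacency matrix and parameter values are placeholders of our choosing:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = (rng.random((50, 50)) < 0.1).astype(float)   # toy adjacency
A = np.triu(A, 1); A = A + A.T                   # symmetric, no self-loops

d = A.sum(axis=1); d[d == 0] = 1.0               # guard isolated nodes
Dinv = np.diag(1.0 / np.sqrt(d))
L = np.eye(len(A)) - Dinv @ A @ Dinv             # normalized Laplacian

lam, U = np.linalg.eigh(L)                       # O(N^3) eigendecomposition
sigma2 = 1.0                                     # diffusion parameter
K = U @ np.diag(np.exp(-sigma2 * lam / 2)) @ U.T # K = r^dagger(L)
\end{verbatim}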
Although graph-kernel based methods are effective in reconstructing signals over graphs, it can be observed from \eqref{eq:gk} that formulating $\bar{\bbK}$ generally requires an eigenvalue decomposition of $\bbL$, which incurs complexity ${\cal O}(N^3)$ that can be prohibitive for large-scale networks. Moreover, even though nodal feature vectors $\{ {\mathbf \phi}_n\}$ are not necessary to form $\bar{\bbK}$, the graph-kernel-based scheme requires knowledge of the topology, meaning $\bbA$, in order to estimate the nodal function of each node. However, in networks with strict privacy requirements, nodes may not be willing to share such information with others. In Facebook, for example, most people do not make their friend list public. In addition, solving \eqref{eq:opt1} assumes that all sampled nodes are available in batch, which may not be true in scenarios where nodes are sampled in a sequential fashion.
In response to these challenges, an online scalable kernel-based method will be developed in the ensuing section to deal with sequentially obtained data samples, over generally dynamic networks. The resultant algorithm only requires encrypted versions of the nodal connectivity patterns of other nodes, and hence it offers privacy.
\section{Online kernel-based learning over graphs}\label{sec:online}
Instead of resorting to a graph kernel that requires an eigenvalue decomposition of $\bbL$ in \eqref{eq:gk}, the present section advocates treating the \emph{connectivity pattern of each node as its feature vector}, which can be the $n$th column $\bba_n^{(c)}$ and possibly the $n$th row $(\bba_n^{(r)})^\top$ of the adjacency (if $\bbA$ is nonsymmetric). We will henceforth term this the \emph{connectivity pattern} of $v_n$, and denote it as $\bba_n$, for brevity. Given $\bba_n$, we will interpolate unavailable nodal function values $\hat{f}(v_n)$ using a nonparametric approach, that is different and scalable relative to \cite{smola2003kernels} and \cite{romero2017kernel}. The kernel matrix is now
\begin{align}\label{eq:gka}
[\bar{\bbK}]_{n,n'}=\kappa(v_n,v_{n'})=\kappa(\bba_n,\bba_{n'}).
\end{align}
{Again, with $M$ nodes sampled,}
the representer theorem asserts that the sought function estimator has the form \cite{wahba1990spline}
\begin{align}
\label{eq:f:graph}
\hat{f}(v_n)=\hat{f}(\bba_n)=\sum_{m=1}^M\alpha_m \kappa(\bba_m,\bba_n):=\bm{\alpha}^{\top}\mathbf{k}(\mathbf{a}_n)
\end{align}
where $\bbk(\bba_n):=[\kappa(\bba_n,\bba_1) \dots \kappa(\bba_n,\bba_M)]^\top$.
It can be observed from \eqref{eq:f:graph} that $\hat{f}(v_n)$ involves the adjacency of the entire network, namely $\{\bba_m\}_{m=1}^M$, which leads to potentially growing complexity ${\cal O}(M^3)$ as the number of sampled nodes increases~\cite{wahba1990spline}.
\subsection{Batch RF-based learning over graphs}
To bypass this growing complexity, we will resort to the so-called random feature approximation \cite{rahimi2007} in order to reduce the original functional learning task in \eqref{eq:opt1} to a problem with the number of unknown parameters not growing with $M$.
We first approximate $\kappa$ in \eqref{eq:sol0} using random features (RFs) \cite{rahimi2007,shen2018aistats} that are obtained from a shift-invariant kernel satisfying $\kappa(\bba_n,\bba_{n'})=\kappa(\bba_n-\bba_{n'})$. For $\kappa(\bba_n-\bba_{n'})$ absolutely integrable,
its Fourier transform $\pi_{\kappa} (\bf v)$ exists and represents the power spectral density, which upon normalizing to ensure $\kappa(\mathbf{0})=1$, can also be
viewed as a probability density function (pdf); hence,
\begin{align}
\label{ieq.kx1}
\kappa(\bba_n-\bba_{n'}) &= \int \!\!\pi_{\kappa}(\bbv)e^{j\bbv^\top(\bba_n-\bba_{n'})} d\bbv \nonumber\\
&:=\mathbb{E}_{\bbv}\big[e^{j\bbv^\top(\bba_n-\bba_{n'})}\big]
\end{align}
where the last equality is due to the definition of the expected value. Drawing a sufficiently large number $D$ of independent and identically distributed samples $\{\bbv_i\}_{i=1}^D$ from $\pi_{\kappa}(\bbv)$, the ensemble mean \eqref{ieq.kx1} can be approximated by the sample average
\begin{equation}\label{eq.ker-quad}
\hat{\kappa}(\bba_n,\bba_{n'})=\bbz_{\bbV}^\top(\bba_n)\bbz_{\bbV}(\bba_{n'})
\end{equation}
where $\bbV:=[\bbv_1, \dots, \bbv_D]^\top \in \mathbb{R}^{D\times N}$, and $\bbz_{\bbV}$ denotes the $2D\times 1$ \emph{real-valued} RF vector
\begin{align}\label{rep:z}
\bbz_{\bbV}(\bba)&=D^{-\frac{1}{2}}\\
&\times\left[\sin(\bbv_1^\top \bba),\ldots, \sin(\bbv_D^\top \bba), \cos(\bbv_1^\top \bba), \ldots, \cos(\bbv_D^\top \bba)\right]^{\top}\!.\nonumber
\end{align}
Taking expectations in \eqref{eq.ker-quad} and using \eqref{ieq.kx1}, one can verify that
$\mathbb{E}_{\bbv}[\hat{\kappa}(\bba_n,\bba_{n'})]=\kappa(\bba_n,\bba_{n'})$, which means
${\hat \kappa}$ is unbiased. Note that finding $\pi_{\kappa}(\bbv)$ requires an $N$-dimensional Fourier transform of $\kappa$, which in general requires numerical integration. Nevertheless, it has been shown that for a number of popular kernels, $\pi_{\kappa}(\bbv)$ is available in closed form \cite{rahimi2007}. Taking the Gaussian kernel as an example, where $\kappa(\bba_n,\bba_{n'})=\exp\big(-\|\bba_n-\bba_{n'}\|_2^2/(2\sigma^2)\big)$, it has a Fourier transform corresponding to the pdf $\mathcal{N}(0,\sigma^{-2}\bbI)$.
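As a quick sanity check, the following Python sketch draws RFs for the Gaussian kernel and verifies the approximation \eqref{eq.ker-quad} numerically (all names and parameter values are ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, D, sigma = 100, 2000, 3.0
V = rng.normal(scale=1.0 / sigma, size=(D, N))   # v_i ~ N(0, sigma^{-2} I)

def z_V(a):
    Va = V @ a
    return np.concatenate([np.sin(Va), np.cos(Va)]) / np.sqrt(D)

a1 = (rng.random(N) < 0.1).astype(float)         # toy connectivity patterns
a2 = (rng.random(N) < 0.1).astype(float)
k_hat  = z_V(a1) @ z_V(a2)                       # RF estimate of the kernel
k_true = np.exp(-np.linalg.norm(a1 - a2)**2 / (2 * sigma**2))
print(abs(k_hat - k_true))                       # error shrinks as D grows
\end{verbatim}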
Hence, the function that is optimal in the sense of \eqref{opt0} can be cast to a function approximant over the $2D$-dimensional RF space (cf. \eqref{eq:f:graph} and \eqref{eq.ker-quad})
\begin{align}
\label{eq:rf:fx}
\hat{f}^{\rm RF}(\bba)=\sum_{m=1}^M \alpha_m \bbz_{\bbV}^\top(\bba_m)\bbz_{\bbV}(\bba):=\bbtheta^\top\bbz_{\bbV}(\bba)
\end{align}
where $\bbtheta^{\top}:=\sum_{m=1}^M \alpha_m \bbz_{\bbV}^{\top}(\bba_m)$.
While $\hat{f}$ in \eqref{eq:sol0} is the superposition of nonlinear functions $\kappa$, its RF approximant $\hat{f}^{\rm RF}$ in \eqref{eq:rf:fx} is a linear function of $\bbz_{\bbV}(\bba_i)$.
As a result, \eqref{opt0} reduces to
\begin{equation}\label{opt2}
\min_{\bbtheta\in \mathbb{R}^{2D}}~\frac{1}{M}\sum_{m=1}^M{\cal C}(\bbtheta^\top\bbz_{\bbV}(\bba_m),y_m)+\mu\Omega\left(\|\bbtheta\|^2\right)
\end{equation}
where $\|\bbtheta\|^2:=\sum_{t}\sum_{\tau}\alpha_t\alpha_{\tau}\bbz_{\bbV}^\top(\bba_t)\bbz_{\bbV}(\bba_{\tau}):=\|f\|_{\cal H}^2$. A batch solver of \eqref{opt2} has complexity $\mathcal{O}(MD^3)$ that does not grow with $N$. This batch RF-based approach scales linearly with the number of measured nodes $M$, and the number of variables is $2D$, which does not depend on $M$. This allows us to pursue an online implementation as elaborated next.
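For concreteness, with the LS cost and $\Omega(x)=x$, such a batch solver reduces to ridge regression in the RF space; a minimal Python sketch on synthetic data (all names are ours) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, M, D, mu, sigma = 100, 40, 200, 1e-3, 3.0
V = rng.normal(scale=1.0 / sigma, size=(D, N))

def z_V(a):
    Va = V @ a
    return np.concatenate([np.sin(Va), np.cos(Va)]) / np.sqrt(D)

A_obs = (rng.random((M, N)) < 0.1).astype(float)  # sampled patterns a_m
y = rng.normal(size=M)                            # observed nodal values

Z = np.stack([z_V(a) for a in A_obs])             # M x 2D design matrix
# minimize (1/M)||y - Z theta||^2 + mu ||theta||^2
theta = np.linalg.solve(Z.T @ Z + mu * M * np.eye(2 * D), Z.T @ y)
\end{verbatim}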
{\color{black}
\subsection{Online RF-based learning over graphs}
Here, we will further leverage RF-based learning over graphs to enable real-time learning and reconstruction of signals evolving over possibly dynamic networks. A scalable online algorithm will be introduced, which can adaptively handle sequentially sampled nodal features and update the sought function estimates.
\noindent\textbf{Training sequentially.} In the training phase, we are given a network of $N$ nodes, and the nodal function is sampled in a sequential fashion.
Letting $v_t$ denote the node sampled at the $t$th time slot, and having available $\{\bba_t,y_t\}$ at $v_t$, the online inference task can be written as [cf. \eqref{opt2}]
\begin{align}\label{eq:rf-task}
\hspace{-10mm}&\min_{\bm{\theta}\in\mathbb{R}^{2D}}\, \sum_{\tau=1}^t{\cal L}\left(\bbtheta^\top\bbz_{\bbV}(\bba_{\tau}),y_{\tau}\right)\\
& {\cal L}\big(\bbtheta^\top\bbz_{\bbV}(\bba_t),y_t\big):={\cal C}\big(\bbtheta^\top\bbz_{\bbV}(\bba_t),y_t\big)+\mu \Omega\big(\|\bbtheta\|^2\big).\nonumber
\end{align}
We will solve \eqref{eq:rf-task} using online gradient descent \cite{hazan2016}. Obtaining $v_t$ per slot $t$, the RF of its connectivity pattern $\bbz_{\bbV}(\bba_t)$ is formed as in \eqref{rep:z}, and $\bbtheta_{t+1}$ is updated `on the fly,' as
%
\begin{align}\label{eq:weit-rf}
\bbtheta_{t+1}=\bbtheta_t-\eta_t \nabla{\cal L}(\bbtheta_t^\top\bbz_{\bbV}(\bba_t),y_t)
\end{align}
where $\{\eta_t\}$ is the sequence of stepsizes that can tune learning rates.
\textcolor{black}{In this paper, we will adopt $\eta_t=\eta$ for simplicity}.
Iteration \eqref{eq:weit-rf} provides \emph{a functional update} since $\hat{f}^{\rm RF}_t(\bba)=\bbtheta_t^\top\bbz_{\bbV}(\bba)$. The per-iteration complexity of \eqref{eq:weit-rf} is $\mathcal{O}(D)$, and $\mathcal{O}(MD)$ for the entire training process, which scales better than~$\mathcal{O}(MD^3)$ that is required for a batch solver of \eqref{opt2}.
\noindent\textbf{Inferring unavailable nodal values.}
After the training phase, the nodal function value over the un-sampled nodes can be readily estimated by [cf. \eqref{eq:rf:fx}]
\begin{align}\label{eq:fx:inter}
\hat{f}(v_i)=\hat{\bbtheta}^\top \bbz_{\bbV}(\bba_i), ~~\forall i \in {\cal S}^c
\end{align}
\textcolor{black}{where $\hat{\bbtheta}$ is the final estimate after the training phase, i.e., $\hat{\bbtheta}=\bbtheta_{M+1}$, and} ${\cal S}^c$ denotes the index set of the nodes whose signal values have not been sampled in the training phase.
\noindent\textbf{Newly-joining nodes.} When new nodes join the network, batch graph-kernel based approaches must expand $\bar{\bbK}$ in \eqref{eq:gk} by one row and one column, and re-solve \eqref{eq:opt2} in order to form signal estimates for the newly-joining nodes. Hence, each newly joining node will incur complexity $\mathcal{O}(N^3)$. The novel online RF method on the other hand, can simply estimate the signal on the newly coming node via $\hat{f}(v_{\rm new})=\hat{\bbtheta}\bbz_{\bbV}(\bba_{\rm new})$, where $\bba_{\rm new}\in\mathbb{R}^N$ denotes the connectivity pattern of the new node with the \emph{existing} nodes in the network. This leads to a complexity of $\mathcal{O}(ND)$ per new node. If in addition, $y_{\rm new}$ is available, then the function estimate can also be efficiently updated via \eqref{eq:weit-rf} and \eqref{eq:rf:fx} using $\bba_{\rm new}$ and $y_{\rm new}$.
The steps of our online RF-based method are summarized in Algorithm~\ref{algo:okl}. A couple of standard learning tasks where Algorithm~\ref{algo:okl} comes handy are now in order.
\emph{Nonlinear regression over graphs.}
Consider first nonlinear regression over graphs, where the goal is to find a nonlinear function $f\in{\cal H}$, such that $y_n=f(v_n)+e_n=f(\bba_n)+e_n$ given the graph adjacency matrix $\bbA$. The criterion is to minimize the regularized prediction error of $y_n$, typically using the online LS loss ${\cal L}(f(\bba_t),y_t):=[y_t - f(\bba_t)]^2+\mu\|f\|_{\cal H}^2$ in \eqref{eq:rf-task}, whose gradient is (cf. \eqref{eq.klp-weight})
{\color{black}
\begin{align}
& \nabla{\cal L}\left(\bbtheta_{t}^\top\bbz_{\bbV}(\bba_t),y_t\right)=2[\bbtheta_{t}^\top\bbz_{\bbV}(\bba_t)-y_t]\bbz_{\bbV}(\bba_t)+2\mu \bbtheta_{t}.\nonumber
\end{align}
}
In practice, $y_t$ can represent a noisy version of each node's real-valued attribute, e.g., temperature in a certain city, and the graph can be constructed based on Euclidean distances among cities.
For a fair comparison with alternatives, only the regression task will be tested in the numerical section of this paper.
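A minimal Python sketch of this online LS update, including the estimate on a newly-joining node (synthetic data; all names are ours), is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
N, D, eta, mu, sigma = 100, 200, 0.5, 1e-4, 3.0
V = rng.normal(scale=1.0 / sigma, size=(D, N))

def z_V(a):
    Va = V @ a
    return np.concatenate([np.sin(Va), np.cos(Va)]) / np.sqrt(D)

theta = np.zeros(2 * D)
for _ in range(50):                        # sequentially sampled nodes
    a_t = (rng.random(N) < 0.1).astype(float)
    y_t = rng.normal()                     # noisy nodal sample
    z = z_V(a_t)
    grad = 2 * (theta @ z - y_t) * z + 2 * mu * theta
    theta -= eta * grad                    # online gradient step

a_new = (rng.random(N) < 0.1).astype(float)
f_new = theta @ z_V(a_new)                 # estimate on a newly-joining node
\end{verbatim}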
\emph{Nonlinear classification over graphs.} We can also handle kernel-based perceptron and kernel-based logistic regression, which aim at learning a nonlinear classifier that best approximates either $y_n$, or, the pdf of $y_n$ conditioned on $\bba_n$.
With binary labels $\{\pm 1\}$, the perceptron solves \eqref{opt0} with ${\cal L}(f(\bba_t),y_t)=\max(0,1-y_t f(\bba_t))+\mu\|f\|_{\cal H}^2$, which vanishes when $y_t f(\bba_t)\geq 1$, and grows linearly with $-y_t f(\bba_t)$ otherwise. In this case, the gradient of the presented online RF-based method is (cf. \eqref{eq.klp-weight})
{\color{black}
\begin{align}
\nabla {\cal L}&\left(\bbtheta_{t}^\top\bbz_{\bbV}(\bba_t),y_t\right)=-2y_t{\cal C}(\bbtheta_{t}^\top\bbz_{\bbV}(\bba_t),y_t)\bbz_{\bbV}(\bba_t)
+2\mu \bbtheta_{t}.\nonumber
\end{align}
}
Accordingly, given $\bba_t$, logistic regression postulates that ${\rm Pr}(y_t=1|\bba_t )=1/(1+\exp(f(\bba_t)))$.
Here the gradient takes the form (cf. \eqref{eq.klp-weight})
\begin{align}
{\color{black}
\nabla {\cal L}\!\!\left(\bbtheta_{t}^\top\bbz_{\bbV}(\bba_t),y_t\right)\!=\!\frac{2y_t\exp(y_t\bbtheta_{t}^\top\bbz_{\bbV}(\bba_t))}{1+\exp(y_t\bbtheta_{t}^\top\bbz_{\bbV}(\bba_t))}\bbz_{\bbV}(\bba_t)+2\mu \bbtheta_{t}.\nonumber
}
\end{align}
Classification over graphs arises in various scenarios, where $y_n$ may represent categorical attributes such as gender, occupation or, nationality of users in a social network.
\noindent\textbf{Remark 1 (Privacy).} Note that the update in \eqref{eq:weit-rf} does not require access to $\bba_t$ directly. Instead, the only information each node needs to reveal is $\bbz_{\bbV}(\bba_t)$ for each $\bba_t$, which involves $\{\sin(\bba_t^\top \bbv_{j}),~ \cos(\bba_t^\top \bbv_{j})\}_{j=1}^D$. Being noninvertible, these co-sinusoidal functions involved in generating $\mathbf{z}_{\mathbf{V}}(\mathbf{a}_t)$ can be viewed as an encryption of the nodal connectivity pattern, which means that given $\mathbf{z}_{\mathbf{V}}(\mathbf{a}_t)$, vector $\mathbf{a}_t$ cannot be uniquely deciphered. Hence, Algorithm \ref{algo:okl} preserves privacy. }
\noindent\textcolor{black}{\textbf{Remark 2 (Directed graphs).} It can be observed from \eqref{eq:gk} that for $\bar{\bbK}$ to be a valid kernel, graph-kernel based methods require $\bbA$, and henceforth $\bbL$ to be symmetric, which implies they can only directly deal with symmetric/undirected graphs. Such a requirement is not necessary for our RF-based method.}
\noindent\textcolor{black}{\textbf{Remark 3 (Dynamic graphs).}
Real-world networks may vary over time, as edges may disappear or appear. To cope with such changing topologies, the original graph-kernel method needs to recalculate the kernel matrix, and resolve the batch problem whenever one edge changes. In contrast, our online RF-based method can simply re-estimate the nodal values on the two ends of the (dis)appeared edge using \eqref{eq:rf:fx} with their current $\{\bba_n\}$.
}
Evidently, the performance of Algorithm \ref{algo:okl} depends on $\kappa$, which has so far been considered known. As the ``best'' performing $\kappa$ is generally unknown and application dependent,
it is prudent to adaptively select kernels by superimposing multiple kernels from a prescribed dictionary, as we elaborate next.
\begin{algorithm}[t]
\caption{Online kernel based learning over graphs}\label{algo:okl}
\begin{algorithmic}[1]
\State\textbf{Input:} step size $\eta>0$, and number of RFs $D$.
\State\textbf{Initialization:}~$\bbtheta_{1}=\mathbf{0}$.
\State\textbf{Training:}
\For {$t = 1, 2,\ldots, M$}
\State Obtain the adjacency vector $\bba_t$ of sampled node $v_t$.
\State Construct $\bbz_{\bbV}(\bba_t)$ via \eqref{rep:z} using $\kappa$.
\State Update $\bbtheta_{t+1}$ via \eqref{eq:weit-rf}.
\EndFor
\State \textbf{Inference: }\\
\hspace{6mm}Construct random feature vector $\bbz_{\bbV}(\bba_j)$ via \eqref{rep:z}\\
\hspace{6mm}Infer
$\hat{f}(v_j)=\bbtheta_{M+1}^\top\bbz_{\bbV}(\bba_j),~~j\in \Omega.$
\State \textbf{Accounting for a newly joining node}\\
\hspace{6mm}Construct random feature vector $\bbz_{\bbV}(\bba_{\rm new})$ via \eqref{rep:z}\\
\hspace{6mm}Estimate
$\hat{f}(v_{\rm new})=\bbtheta_{M+1}^\top\bbz_{\bbV}(\bba_{\rm new}).$\\
\hspace{6mm}If $y_{\rm new}$ is available, update $\bbtheta$ via \eqref{eq:weit-rf}.
\end{algorithmic}
\end{algorithm}
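A compact Python rendering of the training loop in Algorithm \ref{algo:okl} is sketched next, reusing \texttt{rf\_map} and \texttt{ls\_gradient} from the earlier snippet; the Gaussian spectral sampling is an assumption tied to the kernel choice, not part of the algorithm itself.
\begin{verbatim}
import numpy as np

def train_single_kernel(samples, D, sigma2, eta, mu, seed=0):
    # `samples` is a list of (a_t, y_t) pairs for the M sampled nodes.
    # Spectral samples for the Gaussian kernel exp(-||x||^2 / sigma2)
    # are drawn from its Fourier (Gaussian) density.
    rng = np.random.default_rng(seed)
    N = samples[0][0].shape[0]
    V = rng.normal(scale=np.sqrt(2.0 / sigma2), size=(N, D))
    theta = np.zeros(2 * D)
    for a_t, y_t in samples:
        z = rf_map(a_t, V)
        theta -= eta * ls_gradient(theta, z, y_t, mu)
    return theta, V   # theta_{M+1} and the frozen RFs, used for inference
\end{verbatim}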
\section{Online Graph-adaptive MKL}\label{sec:gradraker}
In the present section, we develop an online \textbf{gr}aph-\textbf{ad}aptive learning approach that relies on \textbf{ra}ndom features, and leverages multi-\textbf{ker}nel approximation to estimate the desired $f$ based on sequentially obtained nodal samples over the graph. The proposed method is henceforth abbreviated as \textbf{Gradraker}.
The choice of $\kappa$ is critical for the performance of single kernel based learning over graphs, since different kernels capture different properties of the graph, and thus lead to function estimates of variable accuracy \cite{romero2017kernel}. To deal with this, combinations of kernels from a preselected dictionary $\{\kappa_p\}_{p=1}^P$ can be employed in \eqref{opt0}; see also \cite{romero2017kernel,shen2018aistats}. Each combination belongs to the convex hull $\bar{{\cal K}}:=\{\bar{\kappa}=\sum_{p=1}^P \bar{\alpha}_p \kappa_p,\, \bar{\alpha}_p\geq 0,\,\sum_{p=1}^P\bar{\alpha}_p=1\}$. With $\bar{\cal H}$ denoting the RKHS induced by $\bar{\kappa}\in \bar{{\cal K}}$, one then solves \eqref{opt0} with ${\cal H}$ replaced by
$\bar{\cal H}:={\cal H}_1\bigoplus\cdots\bigoplus{\cal H}_P$,
where $\{{\cal H}_p\}_{p=1}^P$ represent the RKHSs corresponding to $\{\kappa_p\}_{p=1}^P$~\cite{cortes2009}.
The candidate function ${\bar f} \in \bar{\cal H}$ is expressible in a separable form as $\bar{f}(\bba): =\sum_{p=1}^P {\bar f}_p(\bba)$, where ${\bar f}_p(\bba)$ belongs to $\mathcal{H}_p$, for $p\in{\cal P}:=\{1, \ldots, P\}$. To add flexibility per kernel in our ensuing online MKL scheme, we let w.l.o.g. $\{{\bar f}_p = {w}_p f_p\}_{p=1}^P$, and seek functions of the form
\begin{align}\label{eq:fp}
f(v)=f(\bba): =\sum_{p=1}^P \bar{w}_p f_p(\bba)\in\bar{\cal H}
\end{align}
where $f:={\bar f}/\sum_{p=1}^P w_p$, and the normalized weights $\{\bar{w}_p:=w_p/\sum_{p=1}^P w_p\}_{p=1}^P$ satisfy $\bar{w}_p\geq 0$, and $\sum_{p=1}^P\bar{w}_p=1$.
Exploiting separability jointly with the
RF-based function approximation per kernel, the MKL task can be reformulated, after letting ${\cal L}_t(\hat{f}_p^{\rm RF}(\bba_t)):={\cal L}\big(\bbtheta^\top\bbz_{\bbV_p}(\bba_t),y_t\big)$ in \eqref{eq:rf-task}, as
\begin{subequations}\label{eq:raker-task}
\begin{align}
\!&\min_{\{\bar{w}_p\}, \{\hat{f}_p^{\rm RF}\}}\, \sum_{t=1}^T\sum_{p=1}^P \bar{w}_p\, {\cal L}_t\left(\hat{f}_p^{\rm RF}(\bba_t)\right)\label{eq:raker-taska}\\
{\rm s.~to}&~~\sum_{p=1}^P\bar{w}_p=1,~\bar{w}_p\geq 0,~p\in{\cal P},\\
&~~\hat{f}_p^{\rm RF}\!\in\!\left\{\hat{f}_p(\bba_t)\!=\!\bbtheta^{\top}\bbz_{\bbV_p}(\bba_t)\right\},~p\in{\cal P}\label{eq:raker-taskc}
\end{align}
\end{subequations}
which can be solved efficiently `on-the-fly.' Relative to \eqref{opt2}, we replaced $M$ by $T$ to introduce the notion of time, and stress the fact that the nodes are sampled sequentially.
Given the connectivity pattern $\bba_t$ of the $t$th sampled node $v_t$, an RF vector $\bbz_p(\bba_t)$ is generated per $p$ from the pdf $\pi_{\kappa_p}(\bbv)$ via \eqref{rep:z}, where $\bbz_p(\bba_t):=\bbz_{\bbV_p}(\bba_t)$ for notational brevity. Hence, per kernel $\kappa_p$ and node sample $t$, we have [cf. \eqref{eq:rf:fx}]
\begin{align}
\hat{f}_{p,t}^{\rm RF}(\bba_t)=\bbtheta_{p,t}^\top \bbz_p(\bba_t)
\end{align}
and as in \eqref{eq:weit-rf}, $\bbtheta_{p,t}$ is updated via
%
\begin{align}\label{eq.klp-weight}
\bbtheta_{p,t+1}=\bbtheta_{p,t}-\eta \nabla{\cal L}(\bbtheta_{p,t}^\top\bbz_p(\bba_t),y_t)
\end{align}
with $\eta\in(0,1)$ chosen constant to effect the adaptation.
As far as $\bar{w}_{p,t}$ is concerned, since it resides on the probability simplex, a multiplicative update is well motivated as discussed also in, e.g.,~\cite{hazan2016, shen2018aistats}. For the un-normalized weights, this update is available in closed form as \cite{shen2018aistats}
\begin{align}\label{eq.mkl-weight}
w_{p,t+1}=w_{p,t}\exp\left(-\eta{\cal L}_t\left(\hat{f}_{p,t}^{\rm RF}(\bba_t)\right)\right).
\end{align}
Having found $\{w_{p,t}\}$ as in \eqref{eq.mkl-weight}, the normalized weights in \eqref{eq:fp} are obtained as $\bar{w}_{p,t}:=w_{p,t}/\sum_{p=1}^P w_{p,t}$.
Note from \eqref{eq.mkl-weight} that when $\hat{f}_{p,t}^{\rm RF}$ has a larger loss relative to other $\hat{f}_{p',t}^{\rm RF}$ with $p' \neq p$ for the $t$th sampled node, the corresponding $w_{p,t+1}$ decreases more than the other weights. In other words, a more accurate approximant tends to play a more important role in predicting the ensuing sampled node.
In summary, our Gradraker for online graph MKL is listed as Algorithm \ref{algo:omkl:rf}.
{\color{black}
\noindent\textbf{Remark 4 (Comparison with batch MKL).} A batch MKL based approach for signal reconstruction over graphs was developed in \cite{romero2017kernel}. It entails an iterative algorithm whose complexity grows with $N$ in order to jointly estimate the nodal function, and to adaptively select the kernel function. When new nodes join the network, \cite{romero2017kernel} re-calculates the graphical kernels and re-solves the overall batch problem, which does not scale with the network size. In addition, \cite{romero2017kernel} is not privacy preserving in the sense that in order to estimate the function at any node, one needs to have access to the connectivity pattern of the entire network.
\noindent\textbf{Remark 5 (Comparison with $k$-NN).} An intuitive yet efficient way to predict function values of a newly joining node is to simply combine the values of its $k$ nearest neighbors ($k$-NN) \cite{altman1992introduction,chen2009fast}. Efficient as it is, $k$-NN faces several challenges: a) At least one of the neighbors must be labeled, which does not always hold in practice, and is not required by the Gradraker; and b) $k$-NN can only account for local information, while the Gradraker takes also into account the global information of the graph.}
\begin{algorithm}[t]
\caption{Gradraker algorithm}\label{algo:omkl:rf}
\begin{algorithmic}[1]
\State\textbf{Input:}~Kernels $\kappa_p, ~p=1,\ldots, P$, step size $\eta>0$, and number of RFs $D$.
\State\textbf{Initialization:}~$\bbtheta_{p,1}=\mathbf{0}$.
\State\textbf{Training:}
\For {$t = 1, 2,\ldots, T$}
\State Obtain the adjacency vector $\bba_t$ of node $v_t$.
\State Construct $\bbz_{p}(\bba_t)$ via \eqref{rep:z} using $\kappa_p$ for $p=1,\dots, P$.
\State Predict $\hat{f}_t^{\rm RF}(\bba_t)=\sum_{p=1}^P \bar{w}_{p,t} \hat{f}_{p,t}^{\rm RF}(\bba_t)$
\State Observe loss function ${\cal L}_t$, incur ${\cal L}_t(\hat{f}_t^{\rm RF}(\bba_t))$.
\hspace{1.cm} \For {$p=1, \ldots, P$}
\State Obtain loss ${\cal L}(\bbtheta_{p,t}^\top\bbz_p(\bba_t),y_t)$ or ${\cal L}_t(\hat{f}_{p,t}^{\rm RF}(\bba_t))$.
\State Update $\bbtheta_{p,t+1}$ and $w_{p,t+1}$ via \eqref{eq.klp-weight} and \eqref{eq.mkl-weight}.
\EndFor
\EndFor
\State \textbf{Inference: }\\
\hspace{6mm}Construct RF vector $\{\bbz_{p}(\bba_j)\}$ using $\{\kappa_p\}$.\\
\hspace{6mm}Infer
$\hat{f}(v_j)=\sum_{p=1}^P \bar{w}_{p,T+1} \bbtheta_{p,T+1}^\top\bbz_{p}(\bba_j).$
\State \textbf{Accounting for a newly joining node}\\
\hspace{6mm}Construct RF vector $\{\bbz_{p}(\bba_{\rm new})\}$ using $\{\kappa_p\}$.\\
\hspace{6mm}Estimate
$\hat{f}(v_{\rm new})=\sum_{p=1}^P\bar{w}_{p,T+1} \bbtheta_{p,T+1}^\top\bbz_{p}(\bba_{\rm new}).$\\
\hspace{6mm}If $y_{\rm new}$ is available, update $\{\bbtheta_p, w_{p}\}$ via \eqref{eq.klp-weight} and \eqref{eq.mkl-weight}.
\end{algorithmic}
\end{algorithm}
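For readers who prefer code to pseudocode, one pass of the loop body of Algorithm \ref{algo:omkl:rf} can be sketched as follows, assuming the regression loss and the helper functions from the earlier snippets; all names are ours.
\begin{verbatim}
import numpy as np

def gradraker_step(thetas, ws, a_t, y_t, Vs, eta, mu):
    zs = [rf_map(a_t, V) for V in Vs]         # one RF vector per kernel p
    w_bar = ws / ws.sum()                     # normalized weights
    f_hat = sum(wb * (th @ z) for wb, th, z in zip(w_bar, thetas, zs))
    for p in range(len(thetas)):
        # per-kernel loss, evaluated before updating theta_p
        loss_p = (thetas[p] @ zs[p] - y_t) ** 2 + mu * (thetas[p] @ thetas[p])
        thetas[p] = thetas[p] - eta * ls_gradient(thetas[p], zs[p], y_t, mu)
        ws[p] = ws[p] * np.exp(-eta * loss_p) # multiplicative weight update
    return f_hat, thetas, ws
\end{verbatim}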
\subsection{Generalizations}
So far, it has been assumed that each node $n$ only has available its own connectivity feature vector $\bba_n$. This allows Gradraker to be applied even when limited information is available about the nodes, a setting that many existing algorithms relying on nodal features cannot directly handle.
If additional feature vectors $\{ \boldsymbol{\phi}_{i,n}\}_{i=1}^I$ are actually available per node $n$ other than its own $\bba_n$, it is often not known a priori which set of features is the most informative for estimating the signal of interest on the graph. To this end, the novel Gradraker can be adapted by treating the functions learned from different sets of features as \emph{an ensemble of learners}, and combine them in a similar fashion as in \eqref{eq:fp}, that is,
\begin{align}
f(v_n)=\sum_{i=1}^I \beta_i f_i(\bbphi_{i,n})\label{eq:f:fad}
\end{align}
Applications to several practical scenarios are discussed in the following.
\noindent\textbf{Semi-private networks.}
In practice, a node may tolerate sharing its links with its neighbors; e.g., users of Facebook may share their friends-list with friends. In this scenario, each node not only knows its own neighbors, but also has access to its neighbors' neighbors, i.e., its two-hop neighbors. Specifically, node $n$ has access to $\bba_n$, as well as to the $n$th column of \textcolor{black}{$\bbA^{(2)}:=\bbA\bbA$} \cite{kolaczyk2009statistical}, and a learner $f_2(\bbphi_{2,n})$ can then be introduced and combined in \eqref{eq:f:fad}. Moreover, when nodes are less strict about privacy, e.g., when a node is willing to share its multi-hop neighbors, more learners can be introduced and combined `on the fly' by selecting $\bbphi_{i,n}$ as the $n$th column of $\bbA^{(i)}$ in \eqref{eq:f:fad}; a sketch of this construction follows.
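The multi-hop features just described never require forming powers of $\bbA$ explicitly, since the $n$th column of $\bbA^{(i)}$ follows from repeated matrix-vector products, as the following sketch (with illustrative names) shows.
\begin{verbatim}
import numpy as np

def multihop_features(A, n, hops):
    # Returns phi_{i,n} for i = 1, ..., hops: the n-th column of A^(i).
    feats, col = [], A[:, n].astype(float)
    for _ in range(hops):
        feats.append(col)
        col = A @ col        # advance one hop: n-th column of A^(i+1)
    return feats
\end{verbatim}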
\noindent\textbf{Multilayer networks.}
Despite their popularity, ordinary networks are often inadequate to describe increasingly complex systems. For instance, modeling interactions between two individuals using a single edge can be a gross simplification of reality. Generalizing their \emph{single-layer} counterparts, \emph{multilayer networks} allow nodes to belong to $N_g$ groups, called layers~\cite{kivela2014multilayer,traganitis2017}. These layers could represent different attributes or characteristics of a complex system, such as temporal snapshots of the same network, or different types of groups in social networks (family, soccer club, or work related). Furthermore, multilayer networks are able to model systems that typically cannot be represented by traditional graphs, such as heterogeneous information networks~\cite{zhou2007co,sun2013mining}.
To this end, Gradraker can readily incorporate the information collected from heterogeneous sources, e.g., connectivity patterns $\{\bbA_i\}_{i=1}^{N_g}$ from different layers, by adopting a kernel based learner $f_i(\bba_{i,n})$ on the $i$th layer and combining them as in \eqref{eq:f:fad}.
\noindent\textbf{Nodal features available.}
In certain cases, nodes may have nodal features \cite{kolaczyk2009statistical} in addition to their $\{\bba_n\}$. For example, in social networks, other than the users' connectivity patterns, we may also have access to their shopping history on Amazon. In financial networks, in addition to the knowledge of trade relationships with other companies, there may be additional information available per company, e.g., the number of employees, the category of products the company sells, or the annual profit.
Gradraker can also incorporate this information by introducing additional learners based on the nodal feature vectors, and combine them as in \eqref{eq:f:fad}.
\section{Performance analysis}
To analyze the performance of the novel Gradraker algorithm, we assume that the following are satisfied.
\vspace{0.1cm}
\noindent\textbf{(as1)}
\emph{For all sampled nodes $\{v_t\}_{t=1}^T$, the loss function ${\cal L}(\bbtheta^\top\bbz_{\bbV}(\bba_t),y_t)$ in \eqref{eq:rf-task} is convex w.r.t. $\bbtheta$.}
\vspace{0.1cm}
\noindent\textbf{(as2)}
\emph{For $\bm{\theta}$ belonging to a bounded set ${\bbTheta}$ with $\|\bbtheta\|\leq C_{\theta}$, the loss is bounded; that is, ${\cal L}(\bbtheta^\top\bbz_{\bbV}(\bba_t),y_t)\in[-1,1]$, and has bounded gradient, meaning, $\|\nabla {\cal L}(\bbtheta^\top\bbz_{\bbV}(\bba_t),y_t)\|\leq L$.}
\vspace{0.1cm}
\noindent\textbf{(as3)}
\emph{The kernels $\{\kappa_p\}_{p=1}^P$ are shift-invariant, standardized, and bounded, that is, $\kappa_p(\mathbf{a}_n,\mathbf{a}_{n'})\!\leq\! 1,\,\forall \mathbf{a}_n,\mathbf{a}_{n'}$; and w.l.o.g. the nodal feature vectors are bounded, meaning $\|\mathbf{a}_n\|\leq 1, \forall n$.}
\vspace{0.1cm}
Convexity of the loss under (as1) is satisfied by the popular loss functions including the square loss and the logistic loss.
As for (as2), it ensures that the losses, as well as their gradients, are bounded; the latter means the losses are $L$-Lipschitz continuous.
While boundedness of the losses commonly holds since $\|\bbtheta\|$ is bounded, Lipschitz continuity is not restrictive either. Taking kernel-based regression as an example, the gradient is $2(\bm{\theta}^{\top} \mathbf{z}_{\bbV}(\bba_t)-y_t) \mathbf{z}_{\bbV}(\bba_t)+2\mu\bbtheta$. Since the residual is bounded, e.g., $|\bm{\theta}^{\top} \mathbf{z}_{\bbV}(\bba_t)-y_t| \leq 1$, and the RF vector in \eqref{rep:z} satisfies $\|\mathbf{z}_{\bbV}(\bba_t)\|\leq 1$, the triangle and Cauchy-Schwarz inequalities yield the constant $L:= 2(1+\mu C_{\theta})$.
Kernels satisfying the conditions in (as3) include Gaussian, Laplacian, and Cauchy \cite{rahimi2007}.
In general, (as1)-(as3) are standard in online convex optimization (OCO) \cite{shalev2011,hazan2016}, and in kernel-based learning \cite{micchelli2005,rahimi2007,lu2016large}.
In order to quantify the performance of Gradraker, we resort to the static regret metric, which quantifies the difference
between the aggregate loss of an OCO algorithm, and that of the best
fixed function approximant in hindsight, see also e.g.,~\cite{shalev2011,hazan2016}. Specifically, for a sequence $\{\hat{f}_t\}$ obtained by an online algorithm ${\cal A}$, its static regret is
\begin{align}\label{eq.sta-reg}
{\rm Reg}_{\cal A}^{\rm s}(T):=\sum_{t=1}^T {\cal L}_t(\hat{f}_t(\bba_t))-\sum_{t=1}^T{\cal L}_t(f^*(\bba_t))
\end{align}
where $\hat{f}_t^{\rm RF}$ will henceforth be replaced by $\hat{f}_t$ for notational brevity; and, $f^*(\cdot)$ is defined as the batch solution
\begin{align}\label{eq.slot-opt}
f^*(\cdot) & \in\arg\min_{\{f_p^*,\,p\in{\cal P}\}}\,\sum_{t=1}^T {\cal L}_t(f_p^*(\bba_t))\nonumber\\& ~~~{\rm with}~~~f_p^*(\cdot)\in\arg\min_{f\in{\cal F}_p} \,\sum_{t=1}^T {\cal L}_t(f(\bba_t))
\end{align}
where ${\cal F}_p:={\cal H}_p$, with ${\cal H}_p$ representing the RKHS induced by $\kappa_p$.
We establish the regret of our Gradraker approach in the following lemma.
\begin{lemma}
\label{lemma4}
Under (as1), (as2), and with $\hat{f}_p^*$
{\color{black} defined as
$
\hat{f}_p^*(\cdot)\in\arg\min_{f\in
\hat{\cal F}_p} \,\sum_{t=1}^T {\cal L}_t(f(\bba_t))
$,
}
with $\hat{\cal F}_p:=\{\hat{f}_p|\hat{f}_p(\bba)=\bbtheta^{\top}\mathbf{z}_p(\bba),\,\forall \bbtheta\in\mathbb{R}^{2D}\}$, for any $p$, the sequences $\{\hat{f}_{p,t}\}$ and $\{\bar{w}_{p,t}\}$ generated by Gradraker satisfy the following bound
\begin{align}
\label{eq.mkl.sreg}
&\sum_{t=1}^T{\cal L}_t\bigg(\sum_{p=1}^P \bar{w}_{p,t} \hat{f}_{p,t}(\bba_t)\bigg)-\sum_{t=1}^T{\cal L}_t(\hat{f}_p^*(\bba_t))\nonumber\\
\leq & \frac{\ln P}{\eta}+\frac{\|\bbtheta_p^*\|^2}{2\eta}+\frac{\eta L^2T}{2}+\eta T
\end{align}
where $\bbtheta_p^*$ is associated with the best RF function approximant $\hat{f}_p^*(\bba)=\left(\bbtheta_p^*\right)^{\top}\mathbf{z}_p(\bba)$.
\end{lemma}
\begin{proof}
See Appendix \ref{app.pf.lemma4}
\end{proof}
In addition to bounding the regret in the RF space, the next theorem compares the Gradraker loss
relative to that of the best functional estimator in the original RKHS.
\begin{theorem}\label{theorem0}
Under (as1)-(as3), and with $f^*$ defined as in \eqref{eq.slot-opt}, for a fixed $\epsilon>0$, the following bound holds with probability at least $1-2^8\big(\frac{\sigma_p}{\epsilon}\big)^2 \exp \big(\frac{-D\epsilon^2}{4N+8}\big)$
\begin{align}\label{eq.sreg.f}
&\sum_{t=1}^T{\cal L}_t\left(\sum_{p=1}^P \bar{w}_{p,t} \hat{f}_{p,t}(\bba_t)\right)-\!\!\!\sum_{t=1}^T{\cal L}_t\left(f^*(\bba_t)\right)\nonumber\\
\leq &\frac{\ln P}{\eta}+\frac{(1+\epsilon)C^2}{2\eta}\!+\!\frac{\eta L^2T}{2}+\eta T\!+\!\epsilon LTC
\end{align}
where $C$ is a constant, while $\sigma_p^2:=\mathbb{E}_{\pi_{\kappa_p}}[\|\bbv\|^2]$ is the second-order moment of the RF vector norm. Setting $\eta=\epsilon={\cal O}(1/\sqrt{T})$ in \eqref{eq.sreg.f}, the static regret in \eqref{eq.sta-reg} leads to
\begin{align}
\label{eq:sreg:11}
{\rm Reg}_{\rm Gradraker}^{\rm s}(T)= {\cal O}(\sqrt{T}).
\end{align}
\end{theorem}
\begin{proof}
See Appendix \ref{app.pf.theorem0}
\end{proof}
\begin{figure*}[t]
\centering
\begin{minipage}[b]{.49\textwidth}
\centering
\includegraphics[width=9cm]{syn_nmse.pdf}
\centerline{(a) Generalization NMSE}
\end{minipage}
\begin{minipage}[b]{.49\textwidth}
\centering
\includegraphics[width=9cm]{syn_rt.pdf}
\centerline{(b) Testing runtime }
\end{minipage}
\caption{{Inference performance versus number of nodes for synthetic dataset generated from graph diffusion kernel } }
\label{fig1}
\end{figure*}
Observe that the probability that \eqref{eq.sreg.f} holds grows as $D$ increases, and one can always find a $D$ that ensures a positive probability for a given $\epsilon$. Theorem \ref{theorem0} establishes that, with a proper choice of parameters, Gradraker achieves sub-linear regret relative to the best static function approximant in \eqref{eq.slot-opt}, which means the novel Gradraker algorithm is capable of capturing the nonlinear relationship among nodal functions accurately, as long as enough nodes are sampled sequentially.
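Spelling out the substitution makes the scaling explicit: with $\eta=\epsilon=c/\sqrt{T}$ for some constant $c>0$, the right-hand side of \eqref{eq.sreg.f} becomes
\begin{align*}
\underbrace{\frac{\ln P}{\eta}+\frac{(1+\epsilon)C^2}{2\eta}}_{{\cal O}(\sqrt{T})}
+\underbrace{\frac{\eta L^2T}{2}+\eta T}_{{\cal O}(\sqrt{T})}
+\underbrace{\epsilon LTC}_{{\cal O}(\sqrt{T})}
\end{align*}
so all three groups of terms scale as $\sqrt{T}$, which is precisely \eqref{eq:sreg:11}.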
In addition, it is worth noting that Theorem \ref{theorem0} holds true regardless of the sampling order of the nodes $\{v_1, \dots, v_T\}$. However, optimizing over the sampling pattern is possible, and constitutes one of our future research directions.
\begin{figure*}[t]
\centering
\begin{minipage}[b]{.49\textwidth}
\centering
\includegraphics[width=9cm]{syn_nmse_gaussian.pdf}
\centerline{(a) Generalization NMSE}
\end{minipage}
\begin{minipage}[b]{.49\textwidth}
\centering
\includegraphics[width=9cm]{syn_rt_gaussian.pdf}
\centerline{(b) Runtime}
\end{minipage}
\caption{Inference performance versus number of nodes for synthetic dataset generated from Gaussian kernel }
\label{fig2}
\end{figure*}
\section{Numerical tests}
\label{sec:test}
In this section, Gradraker is tested on both synthetic and real datasets to corroborate its effectiveness. The tests will mainly focus on regression tasks for a fair comparison with existing alternatives.
\begin{figure*}[t]
\centering
\begin{minipage}[b]{.49\textwidth}
\centering
\includegraphics[width=9cm]{temp_nmse.pdf}
\centerline{(a) Generalization NMSE}
\end{minipage}
\begin{minipage}[b]{.49\textwidth}
\centering
\includegraphics[width=9cm]{temp_rt.pdf}
\centerline{(b) Runtime}
\end{minipage}
\caption{Inference performance versus number of sampled nodes in temperature dataset }
\label{fig:temp}
\end{figure*}
\subsection{Synthetic data test}
\noindent\textbf{Data generation.}
An Erd{\"o}s-R{\'e}nyi graph \cite{erdos1959random} with binary adjacency matrix $\bbA_0\in \bbR^{N\times N}$ was generated with probability of edge presence $\pi=0.2$, and its adjacency was symmetrized as ${\bbA}=\bbA_0+\bbA_0^\top$. This symmetrization is not required by Gradraker, but it is necessary for alternative graph kernel based methods. A function over this graph was then generated with
each entry of the coefficient vector $\bbalpha\in\mathbb{R}^{N}$ drawn uniformly from $[0.5,1]$, and each entry of the noise $\bbe$ drawn from $\mathcal{N}(0, 0.01\bbI)$. In each experiment, the sampling matrix $\bbPsi$ is randomly generated so that $M=0.05 N$ of the nodes are randomly sampled, and the remaining $N-M$ nodes are treated as newly-joining nodes, whose function values and connectivity patterns are both unknown at the training phase, and whose nodal function values are estimated based on their connectivity with existing nodes in the network during the testing phase. All algorithms are carried out on the training set of $M$ nodes, and the obtained model is used to estimate the function value on the newly arriving nodes. The runtime for estimating the function value on the newly-joining nodes, as well as the generalization ${\rm NMSE}:=\frac{1}{|{\cal S}^c|}\|\hat{\bbx}_{{\cal S}^c}-\bbx_{{\cal S}^c}\|_2^2/\|\bbx_{{\cal S}^c}\|_2^2$ performance is evaluated, with ${\cal S}^c$ denoting the index set of new nodes. The Gradraker \textcolor{black}{adopts a dictionary consisting of $2$ Gaussian kernels with parameters $\sigma^2=1,5$, using $D=10$ random features, and it is compared with: a) the $k$NN algorithm, with $k$ selected as the maximum number of neighbors a node has in a specific network, and with the combining weights set to $1/k$ in unweighted graphs, and $a_{il}/\sum_{j\in\mathcal{N}_i}a_{ij}$ for the $l$th neighbor in weighted graphs;} b) the graph kernel (GK) based method using diffusion kernels with different bandwidths (named as GK-DF), or band-limited kernels with different bandwidths (GK-BL); and c) kernel based learning without RF approximation (KL) with a Gaussian kernel of $\sigma^2=5$. Results are averaged over $100$ independent runs. The regularization parameter for all algorithms is selected from the set $\mu=\{10^{-7},10^{-6}, \dots, 10^0\}$ via cross validation.
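The data-generation step can be summarized by the following Python sketch; the spectral form of $\bar{\bbK}$ below reflects our reading of the diffusion-kernel construction, and the numerical safeguard on isolated nodes is ours.
\begin{verbatim}
import numpy as np

def make_synthetic(N, p_edge=0.2, sigma2=5.0, seed=0):
    rng = np.random.default_rng(seed)
    A0 = (rng.random((N, N)) < p_edge).astype(float)
    A = A0 + A0.T                                       # symmetrized adjacency
    deg = np.maximum(A.sum(axis=1), 1e-12)              # guard isolated nodes
    L = np.eye(N) - A / np.sqrt(np.outer(deg, deg))     # normalized Laplacian
    lam, U = np.linalg.eigh(L)
    K = U @ np.diag(np.exp(-sigma2 * lam / 2.0)) @ U.T  # diffusion kernel
    alpha = rng.uniform(0.5, 1.0, N)
    x = K @ alpha + rng.normal(0.0, 0.1, N)             # noise variance 0.01
    return A, x
\end{verbatim}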
\noindent\textbf{Testing results.}
Figure \ref{fig1} illustrates the performance in terms of the average runtime and NMSE versus the number of nodes (size) of the network.
In this experiment, $\bar{\bbK}$ in \eqref{eq:gk} is generated from the normalized graph Laplacian $\bbL$, using the diffusion kernel $r(\lambda)=\exp (\sigma^2\lambda/2)$. A bandwidth of $\sigma^2=5$ was used to generate the data.
It is observed that GK attains the best generalization accuracy when the ground-truth model is known, but its computational complexity grows rapidly with the network size. However, GK does not perform as well when a mismatched kernel is applied. The Gradraker method on the other hand, is very efficient, while at the same time it can provide reasonable estimates of the signal on the newly arriving nodes, even without knowledge about the kernels. The k-NN method is very efficient, but does not provide as reliable performance as the Gradraker.
Figure \ref{fig2} depicts the performance of competitive algorithms. Matrix $\bar{\bbK}$ for data generation is formed based on \eqref{eq:gka} using the Gaussian kernel $\kappa(\bba_i-\bba_j)= \exp(-\|\bba_i-\bba_j\|^2/\sigma^2)$, with $\sigma^2=5$. In this case, KL exactly matches the true model, and hence it achieves the best performance. However, it is the most complex in terms of runtime. Meanwhile, GK-based methods suffer from model mismatch, and are also relatively more complex than Gradraker. The novel Gradraker is capable of estimating the nodal function on the newly joining nodes with high accuracy at very low computational complexity.
Note that in real-world scenarios, accurate prior information about the underlying model is often unavailable, in which case Gradraker can be a more reliable and efficient choice.
\begin{figure*}[t]
\centering
\begin{minipage}[b]{.49\textwidth}
\centering
\includegraphics[width=9cm]{email_nmse.pdf}
\centerline{(a) Generalization NMSE}
\end{minipage}
\begin{minipage}[b]{.49\textwidth}
\centering
\includegraphics[width=9cm]{email_rt.pdf}
\centerline{(b) Runtime }
\end{minipage}
\caption{{Inference performance versus number of sampled nodes in email dataset} }\label{fig:email}
\end{figure*}
\subsection{Reconstruction of the temperature data}
This subsection tests the performance of Gradraker on a real temperature dataset. The dataset comprises $24$ signals corresponding to the average
temperature per month in the intervals $1961-1980$ and $1991-2010$
measured by $89$ stations in Switzerland {\cite{tempdata}}. The training set contains the first $12$ signals, corresponding to the interval $1961-1980$, while the test set contains the remaining $12$. Each station is represented by a node, and the graph was constructed using the algorithm in \cite{dong2016learning} based on the training signals. Given the test signal on a randomly chosen subset of $M$ vertices, the values at the remaining $N-M$ vertices are estimated by treating those vertices as newly joining nodes. \textcolor{black}{The generalization NMSE over the $N-M$ nodes is averaged across the test signals.}
Fig. \ref{fig:temp} compares
the performance of Gradraker with those of competing alternatives. Gradraker adopts a dictionary consisting of $3$ Gaussian kernels with parameters $\sigma^2=1,5,10$, using $D=100$ random features.
It is clear from Fig. \ref{fig:temp} that Gradraker outperforms GK in both generalization NMSE and runtime. On the other hand, even though KL achieves lower generalization NMSE, it incurs a much higher complexity.
\begin{figure*}[t]
\centering
\begin{minipage}[b]{.49\textwidth}
\centering
\includegraphics[width=9cm]{cora_nmse.pdf}
\centerline{(a) Generalization NMSE}
\end{minipage}
\begin{minipage}[b]{.49\textwidth}
\centering
\includegraphics[width=9cm]{cora_rt.pdf}
\centerline{(b) Runtime }
\end{minipage}
\caption{{Inference performance versus number of sampled nodes in Cora dataset} }\label{fig:cora}
\end{figure*}
\subsection{Reconstruction of the Email-Eu-core data}
The Eu-core network was generated using email data from a large European research institution \cite{leskovec2007tkdd}, where each node represents a person, and an edge $(i,j)$ is present if person $i$ sent person $j$ at least one email. The e-mails only represent communication between institution members (the core), and the dataset does not contain incoming messages from or outgoing messages to the rest of the world. The dataset also contains ``ground-truth'' community memberships of the nodes. Each individual belongs to one of 42 departments at the research institute. During the experiment, the department labels are considered to be $y_n$ that are to be sampled and estimated. The graph consists of $N=1,005$ nodes, and $25,571$ edges. Gradraker adopts a dictionary consisting of $2$ Gaussian kernels with parameters $\sigma^2=1,10$, from which $D=10$ random features are generated. The test results were averaged over $100$ independent runs with randomly sampled nodes.
Fig. \ref{fig:email} compares
the performance of Gradraker with those of alternative algorithms when different numbers of nodal labels are observed.
It is clear that the RF-based approach outperforms the GK-based method in both reconstruction accuracy and runtime. While the batch KL method without RF approximation outperforms the RF method, it incurs considerably higher computational complexity.
\subsection{Reconstruction of the Cora data}
This subsection tests the Gradraker algorithm on the Cora citation dataset \cite{lu2003link}. Gradraker adopts a dictionary consisting of $2$ Gaussian kernels with parameters $\sigma^2=1,10$, using $D=20$ random features. The results were averaged over $100$ independent runs.
The Cora dataset consists of $2,708$ scientific publications classified into one of seven classes. The citation network consists of $5,429$ links. The network is constructed so that a link connects node $i$ to node $j$ if paper $i$ cites paper $j$, and the category ID of each paper is to be reconstructed. It can be observed again from Figure \ref{fig:cora} that the Gradraker markedly outperforms the GK algorithms in terms of generalization NMSE, and is much more computationally efficient than all other algorithms except the kNN method, which however does not perform as well.
It can be readily observed from our numerical results over synthetic and real datasets that the Gradraker provides reliable performance in terms of NMSE in all tests, while at the same time, it scales much better than all kernel based alternatives. This is because the alternative kernel-based algorithms require re-computing the kernel matrix whenever a new node joins the network. It is worth noting that all kernel-based alternatives require exact knowledge of the entire network topology, which is not necessary for Gradraker, which only requires $\{\bbz_{\bbV}(\bba_n)\}$. These tests corroborate the potential of Gradraker for application settings where the graphs grow and nodes have privacy constraints.
\section{Conclusions}\label{sec:con}
The present paper deals with the problem of reconstructing signals over graphs, from samples over a subset of nodes. An online MKL based algorithm is developed, which is capable of estimating and updating the nodal functions even when samples are collected sequentially. The novel online scheme is highly scalable and can estimate the unknown signals on newly joining nodes. Unlike many existing approaches, it only relies on encrypted nodal connectivity information, which is appealing for networks where nodes have strict privacy constraints.
This work opens up a number of interesting directions for future research, including: a) exploring distributed
implementations that are well motivated in large-scale networks; b) graph-adaptive learning when multiple sets of features are available; and c) developing adaptive sampling strategies for Gradraker.
\appendices
\section{Proof of Lemma \ref{lemma4}}\label{app.pf.lemma4}
To prove Lemma \ref{lemma4}, we introduce two intermediate lemmata.
\begin{lemma}\label{lemma3}
Under (as1), (as2), and with $\hat{f}_p^*$ defined as in Lemma \ref{lemma4} over the RF-based class $\hat{\cal F}_p:=\{\hat{f}_p|\hat{f}_p(\bba)=\bbtheta^{\top}\mathbf{z}_p(\bba),\,\forall \bbtheta\in\mathbb{R}^{2D}\}$, let $\{\hat{f}_{p,t}(\bba_t)\}$ denote the sequence of estimates generated by Gradraker with a pre-selected kernel $\kappa_p$. Then the following bound holds true w.p.1
%
\begin{align}
\sum_{t=1}^T {\cal L}_t(\hat{f}_{p,t}(\bba_t))\!-\!\sum_{t=1}^T{\cal L}_t(\hat{f}_p^*(\bba_t))\!\leq\! \frac{\|\bbtheta_p^*\|^2}{2\eta}\!+\!\frac{\eta L^2T}{2}
\end{align}
%
where $\eta$ is the learning rate, $L$ is the Lipschitz constant in (as2), and $\bbtheta_p^*$ is the corresponding parameter (or weight) vector supporting the best estimator $\hat{f}_p^*(\bba)=(\bbtheta_p^*)^{\top}\mathbf{z}_p(\bba)$.
\end{lemma}
\begin{proof}
The proof is similar to the regret analysis of online gradient descent, see e.g., \cite{shen2018aistats}.
\end{proof}
In addition, we will bound the difference between the loss of the solution obtained from Algorithm \ref{algo:omkl:rf} and the loss of the best single kernel-based online learning algorithm. Specifically, the following lemma holds.
\begin{lemma}
\label{lemma10}
Under (as1) and (as2), with $\{\hat{f}_{p,t}\}$ generated from Gradraker, it holds for any $p\in{\cal P}$ that
\begin{equation}
\label{eq:lemm10}
\sum_{t=1}^T \sum_{p=1}^P \bar{w}_{p,t} {\cal L}_{t}(\hat{f}_{p,t}(\bba_t))- \sum_{t=1}^T {\cal L}_{t}(\hat{f}_{p,t}(\bba_t))\leq\eta T+\frac{\ln P}{\eta}
\end{equation}
where $\eta$ is the learning rate in \eqref{eq.mkl-weight}, and $P$ is the number of kernels in the dictionary.
\end{lemma}
\begin{proof}
Letting $W_{t}:=\sum_{p=1}^P w_{p,t}$, the weight recursion in \eqref{eq.mkl-weight} implies that
\begin{eqnarray}
\label{eq:sreg:W1}
W_{t+1}&=&\!\!\!\sum_{p=1}^P w_{p,t+1}=\sum_{p=1}^P w_{p,t} \exp\left(-\eta{\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)\right)\\
&\leq&\!\!\!\sum_{p=1}^P w_{p,t}\left(1-\eta{\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)+\eta^2{\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)^2\right)\nonumber
\end{eqnarray}
where the last inequality holds because $\exp(-\eta x)\leq 1-\eta x+\eta^2 x^2$ whenever $|\eta x|\leq 1$, which is the case here since $\eta\leq 1$ and ${\cal L}_t\in[-1,1]$ by (as2).
Furthermore, substituting $\bar{w}_{p,t}:=w_{p,t}/\sum_{p=1}^P w_{p,t}=w_{p,t}/W_t$ into \eqref{eq:sreg:W1} leads to
\begin{align}
\label{eq:sreg:W2-0}
W_{t+1}&\leq \sum_{p=1}^P W_t\bar{w}_{p,t}\!\left(\!1-\eta{\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)\!+\!\eta^2{\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)^2\!\right)\nonumber\\
&= W_t\Bigg(1-\eta\sum_{p=1}^P\bar{w}_{p,t} {\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)\nonumber\\
&\hspace{2cm}+\eta^2\sum_{p=1}^P\bar{w}_{p,t} {\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)^2\Bigg).
\end{align}
Since $1+x\leq e^x,\,\forall x$, it follows that
\begin{align}
\label{eq:sreg:W2}
W_{t+1}\leq& W_t \exp \Bigg(-\eta \sum_{p=1}^P\bar{w}_{p,t} {\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)\nonumber\\
&+\eta^2 \sum_{p=1}^P\bar{w}_{p,t} {\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)^2\Bigg).
\end{align}
Telescoping \eqref{eq:sreg:W2} from $t=1$ to $T$ yields
\begin{align}\label{eq:sreg:W3}
W_{T+1}\leq& \exp \Bigg(-\eta \sum_{t=1}^T\sum_{p=1}^P\bar{w}_{p,t} {\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)\nonumber\\
&+\eta^2 \sum_{t=1}^T\sum_{p=1}^P\bar{w}_{p,t} {\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)^2\Bigg).
\end{align}
On the other hand, for any $p$, it holds that
\begin{eqnarray}
\label{eq:sreg:W4}
W_{T+1}&\geq & w_{p,T+1}\nonumber\\
&=&w_{p,1}\prod_{t=1}^T \exp(-\eta{\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right))\nonumber\\
&= & w_{p,1}\exp\Bigg(-\eta\sum_{t=1}^T{\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)\Bigg).
\end{eqnarray}
Combining \eqref{eq:sreg:W3} with \eqref{eq:sreg:W4}, we arrive at
\begin{align}
\label{eq:sreg:6}
&\exp \Bigg(\!-\!\eta \sum_{t=1}^T\sum_{p=1}^P\bar{w}_{p,t} {\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)\nonumber\\
&\hspace{2cm}+\eta^2 \sum_{t=1}^T\sum_{p=1}^P\bar{w}_{p,t} {\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)^2\!\Bigg)\nonumber\\
& \geq\, w_{p,1} \exp\Bigg(\!-\!\eta\sum_{t=1}^T{\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)\!\Bigg).
\end{align}
Taking the logarithm on both sides of \eqref{eq:sreg:6}, and recalling that $w_{p,1}=1/P$, we obtain
\begin{align}
\label{eq:sreg:7}
&-\eta \sum_{t=1}^T\sum_{p=1}^P\bar{w}_{p,t} {\cal L}_t\!\left(\hat{f}_{p,t}(\bba_t)\right)\!+\eta^2 \sum_{t=1}^T\sum_{p=1}^P\bar{w}_{p,t} {\cal L}_t\!\left(\hat{f}_{p,t}(\bba_t)\right)^2\nonumber\\
\!\geq\!&-\eta\sum_{t=1}^T{\cal L}_t\!\left(\hat{f}_{p,t}(\bba_t)\right)\!-\ln P.
\end{align}
Re-organizing the terms leads to
\begin{align}\label{eq:sreg:8}
&\sum_{t=1}^T\sum_{p=1}^P\bar{w}_{p,t} {\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)\\
\leq &\sum_{t=1}^T{\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)+\eta \sum_{t=1}^T\sum_{p=1}^P\bar{w}_{p,t} {\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)^2+\frac{\ln P}{\eta}\nonumber
\end{align}
and the proof is complete, since ${\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right)^2\leq 1$ and $\sum_{p=1}^P\bar{w}_{p,t}=1$.
\end{proof}
Since ${\cal L}_t(\cdot)$ is convex under (as1), Jensen's inequality implies
\begin{align}
\label{eq:sreg:9}
{\cal L}_t\bigg(\sum_{p=1}^P \bar{w}_{p,t} \hat{f}_{p,t}(\bba_t)\bigg)\leq \sum_{p=1}^P \bar{w}_{p,t} {\cal L}_t\left(\hat{f}_{p,t}(\bba_t)\right).
\end{align}
Combining \eqref{eq:sreg:9} with Lemma \ref{lemma10}, one arrives readily at
%
\begin{align}
&\sum_{t=1}^T{\cal L}_t\bigg(\sum_{p=1}^P \bar{w}_{p,t} \hat{f}_{p,t}(\bba_t)\bigg)\nonumber\\
\leq &\sum_{t=1}^T {\cal L}_{t}\left(\hat{f}_{p,t}(\bba_t)\right)+\eta T+\frac{\ln P}{\eta}\nonumber\\
\stackrel{(a)}{\leq} &\sum_{t=1}^T{\cal L}_t\left(\hat{f}_p^*(\bba_t)\right)+\frac{\ln P}{\eta}+\frac{\|\bbtheta_p^*\|^2}{2\eta}+\frac{\eta L^2T}{2}+\eta T
\end{align}
where (a) follows due to Lemma \ref{lemma3} and because $\bbtheta_p^*$ is the optimal solution for any given $\kappa_p$. This proves Lemma \ref{lemma4}.
\section{Proof of Theorem \ref{theorem0}}\label{app.pf.theorem0}
To bound the performance relative to the best estimator $f^*(\bba_t)$ in the RKHS, the key step is to bound the approximation error.
For a given shift-invariant $\kappa_p$, the maximum point-wise error of the RF kernel approximant is bounded with probability at least
$
1-2^8\big(\frac{\sigma_p}{\epsilon}\big)^2 \exp \big(\frac{-D\epsilon^2}{4N+8}\big),
$ by \cite{rahimi2007}
\begin{align}
\label{ieq:1}
\sup_{\bba_i,\bba_j\in{\cal X}} \left|\bbz_p^\top(\bba_i)\bbz_p(\bba_j)-\kappa_p(\bba_i,\bba_j)\right|<\epsilon
\end{align}
where $\epsilon>0$ is a given constant, $D$ is the number of random features, $N$ is the dimension of the adjacency vectors, and $\sigma_p^2:=\mathbb{E}_{\pi_{\kappa_p}}[\|\bbv\|^2]$ is the second-order moment of the RF vector norm induced by $\kappa_p$.
Henceforth, for the optimal function estimator \eqref{eq.slot-opt} in ${\cal H}_p$ denoted by $f_p^*(\bba):=\sum_{t=1}^T\alpha_{p,t}^* \kappa_p(\bba,\bba_t)$, and its RF-based approximant $\check{f}^*_p:=\sum_{t=1}^T\alpha_{p,t}^* \bbz_p^\top(\bba)\bbz_p(\bba_t)\in{\cal F}_p$, we have
\begin{align}
\label{eq:sreg:5.a}
&\left|\sum_{t=1}^T{\cal L}_t\left(\check{f}^*_p(\bba_t)\right)-\sum_{t=1}^T{\cal L}_t\left(f_p^*(\bba_t)\right)\right|\nonumber\\
\stackrel{(a)}{\leq}\, &\sum_{t=1}^T\left|{\cal L}_t\left(\check{f}^*_p(\bba_t)\right)-{\cal L}_t(f_p^*(\bba_t))\right|\nonumber\\
\stackrel{(b)}{\leq}\, &\sum_{t=1}^T L\left|\sum_{t'=1}^T\alpha_{p,t'}^*\bbz_p^\top(\bba_{t'})\bbz_p(\bba_t)-\sum_{t'=1}^T\alpha_{p,t'}^*\kappa_p(\bba_{t'},\bba_t)\right|\nonumber\\
\stackrel{(c)}{\leq}\, &\sum_{t=1}^T L\sum_{t'=1}^T|\alpha_{p,t'}^*|\left|\bbz_p^\top(\bba_{t'})\bbz_p(\bba_t)-\kappa_p(\bba_{t'},\bba_t)\right|
\end{align}
where (a) and (c) are due to the triangle inequality, while (b) uses the Lipschitz continuity of the loss under (as2).
Combining with \eqref{ieq:1}, yields
\begin{eqnarray}
\label{eq:sreg:5}
&&\left|\sum_{t=1}^T{\cal L}_t(\check{f}^*_p(\bba_t))-\sum_{t=1}^T{\cal L}_t(f_p^*(\bba_t))\right|\nonumber\\
&\leq &\sum_{t=1}^T L\epsilon\sum_{t'=1}^T |\alpha_{p,t'}^*|\leq \epsilon L T C,~{\rm w.h.p.}
\end{eqnarray}
where we used that $C:=\max_p \sum_{t=1}^T |\alpha_{p,t}^*|$.
Under the kernel bounds in (as3), the uniform convergence in \eqref{ieq:1} implies that $\sup_{\bba_t,\bba_{t'}\in{\cal X}} \bbz_p^\top(\bba_t)\bbz_p(\bba_{t'})\leq 1+\epsilon$, w.h.p., which leads to
\begin{align}
\label{eq:sreg:4}
\!\!\!\left\|\bbtheta_p^*\right\|^2:=&\left\|\sum_{t=1}^T\alpha_{p,t}^*\bbz_p(\bba_t)\right\|^2\nonumber\\
=&\left|\sum_{t=1}^T\sum_{t'=1}^T\alpha_{p,t}^*\alpha_{p,t'}^*\bbz_p^{\top}(\bba_t)\bbz_p(\bba_{t'})\right|
\leq (1+\epsilon)C^2
\end{align}
where for the last inequality we used the definition of $C$.
Lemma \ref{lemma4} together with \eqref{eq:sreg:5} and \eqref{eq:sreg:4} lead to the regret of the proposed Gradraker algorithm relative to the best static function in ${\cal H}_p$, that is given by
\begin{align}
&\sum_{t=1}^T{\cal L}_t\bigg(\sum_{p=1}^P \bar{w}_{p,t} \hat{f}_{p,t}(\bba_t)\bigg)-\sum_{t=1}^T{\cal L}_t(f_p^*(\bba_t))\nonumber\\
=&\sum_{t=1}^T{\cal L}_t\bigg(\sum_{p=1}^P \bar{w}_{p,t} \hat{f}_{p,t}(\bba_t)\bigg)-\sum_{t=1}^T{\cal L}_t\left(\check{f}^*_p(\bba_t)\right)\nonumber\\
&+\sum_{t=1}^T{\cal L}_t\left(\check{f}^*_p(\bba_t)\right)-\sum_{t=1}^T{\cal L}_t(f_p^*(\bba_t))\nonumber\\
\leq\, &\frac{\ln P}{\eta}+\frac{\eta L^2T}{2}+\eta T+ \frac{(1+\epsilon)C^2}{2\eta}+\epsilon L T C,~{\rm w.h.p.}
\end{align}
which completes the proof of Theorem \ref{theorem0}.
\section{Introduction}
The Kuramoto model has been used to study the dynamics of synchronization in a wide variety of physical, chemical, and biological systems~\cite{kuramoto75, kuramoto84, strogatz00, pikovsky03, strogatz03, acebron05, dorfler14, pikovsky15, rodrigues16}. The model's governing equations can be written (unconventionally, but most usefully for our purposes) in the following dimensionless form:
\begin{alignat}{1} \label{GeneralKura}
\dot\theta_k = \Gamma \eta_k + \sum_{j\in\mathcal{N}(k)} \sin(\theta_j - \theta_k),
\end{alignat}
for $k = 1, \ldots, N$, where $\theta_k$ is the phase of oscillator $k$, and the sum is over all of $k$'s neighbors $\mathcal{N}(k)$, as determined by the coupling graph. By rescaling time in Eq.~\eqref{GeneralKura}, we have normalized the coupling strength to unity without loss of generality. The term $\Gamma \eta_k$ can then be interpreted as the scaled natural frequency of oscillator $k$.
The motivation for this unusual notation is that we are going to regard $\eta = (\eta_1, \ldots, \eta_N)$ as a fixed frequency vector and $\Gamma \ge 0$ as an adjustable parameter controlling the spread of the natural frequencies. For instance, the components of $\eta$ could be chosen independently at random from a prescribed probability distribution. Then increasing $\Gamma$ would allow us to increase the ``width" of the set of frequencies $\left\{\Gamma \eta_1, \ldots,\Gamma \eta_N \right\}$. We will occasionally write $\omega_k := \Gamma \eta_k$ for brevity.
In the simple case where $\Gamma=0$ and all the oscillators have $\omega_k=0$, the model has a stable fixed state with $\theta_k = 0$ for all $k$, for a broad class of coupling graphs. Now imagine increasing $\Gamma$ slightly to produce some variation among the $\omega_k$. Starting from an initial condition $\theta_k(0)=0$ and assuming a sufficiently small but nonzero $\Gamma$, the system will asymptotically approach a stable periodic solution of Eq.~\eqref{GeneralKura} in which all the oscillators run at the same constant frequency $\dot\theta_k \equiv \Omega$ for all $k$, for some constant $\Omega$. We call such a solution a stable phase-locked state. But when $\Gamma$ gets too large, the natural frequencies $\omega_k = \Gamma \eta_k$ will become too disparate for the coupling to lock the oscillators to a common $\Omega$. So as $\Gamma$ increases, we eventually lack any stable phase-locked solution.
This desynchronization transition occurs at what we call the {\it locking threshold}, at a parameter value given by the {\it critical value} of $\Gamma$ (alternatively, the critical width). Its calculation has been a focus of many prior studies of the Kuramoto model. Among these, a major point of variation has come from the choice of coupling topologies. The manner in which the oscillators are connected can have drastic effects on the behavior of the critical $\Gamma$, as has been demonstrated in work on complete graphs, one-dimensional chains and rings, two-dimensional square grids, three-dimensional cubic lattices, $d$-dimensional hypercubic lattices, random graphs, small-world and scale-free networks, and so on. For recent reviews, see Refs.~\cite{dorfler14, pikovsky15, rodrigues16}.
In this paper, we analyze a tractable situation where the dependence of the critical $\Gamma$ on topology, as opposed to dimension, can be well characterized. Namely, if we have a one-dimensional lattice of oscillators with nearest-neighbor coupling, how does the critical $\Gamma$ depend on the choice of boundary condition? If oscillators 1 and $N$ are coupled, we call this the {\it ring} topology and denote its locking threshold by $\Gamma_R$. Alternatively, if oscillators 1 and $N$ are not connected, we call this the {\it chain} topology and write its corresponding locking threshold as $\Gamma_C$.
Intuitively, one might expect a ring and a chain to have similar locking thresholds, especially when $N$ becomes large. After all, the two topologies differ only by a single edge. On the other hand, that single edge is responsible for a topological (and hence \emph{qualitative}, not merely quantitative) change in the lattice's connectivity structure. For that reason it could conceivably have a very potent effect.
Although the setting of one-dimensional lattices may seem overly simplistic, it has the advantage that both rings and chains of oscillators have been studied extensively, using various techniques to analyze their dynamics and bifurcations \cite{cohen82, ermentrout84, ermentrout85, kopell86, sakaguchi87, sakaguchi88, strogatz88a, strogatz88b, wiley06, muruganandam08, elnashar09, ochab09, kogan09, lee09, giver11, tilles11, tilles13a, tilles13b}.
The main question is this: If we have a chain and a ring subject to the same initial condition $\theta_k(0)=0$ and the same vector of base frequencies $\eta = (\eta_1, ..., \eta_N)$, what limits can be placed on the ratio $\Gamma_R/\Gamma_C$? In particular, must a ring always be ``more stable'' than a chain, leading to $\Gamma_R/\Gamma_C \geq 1$?
\subsection{Telescopic Coupling}
In addition to variation in the connectivity structure, another source of variation in Kuramoto-like systems comes from altering the coupling function. For instance, we could replace the pure sine function in Eq.~\eqref{GeneralKura} with a more general periodic function. As we will see, the following analysis allows for such a generalization, though at the cost of introducing a different type of special structure.
To motivate this structure, let us look at the governing equation for an internal oscillator $k$ (meaning an oscillator with $1< k < N$) in a one-dimensional Kuramoto chain or ring:
\begin{alignat}{1} \label{StandardSine}
\dot\theta_k= \omega_k + \sin(\theta_{k-1} - \theta_k) + \sin(\theta_{k+1} - \theta_k).
\end{alignat}
Because sine is odd, Eq.~\eqref{StandardSine} can be rewritten as
\begin{alignat}{1} \label{TelescopeSine}
\dot\theta_k= \omega_k + \sin(\theta_{k-1} - \theta_k) - \sin(\theta_{k} - \theta_{k+1}).
\end{alignat}
So if we want to generalize from sine to a more general function $f$, mathematically speaking we have two plausible choices: Either
\begin{equation}
\dot\theta_k = \omega_k + f(\theta_{k-1} - \theta_k) + f(\theta_{k+1} - \theta_k), \label{StandardEq}
\end{equation}
or
\begin{equation}
\dot\theta_k = \omega_k + f(\theta_{k-1} - \theta_k) - f(\theta_k - \theta_{k+1}). \label{TelescopicEq}
\end{equation}
Equation~\eqref{StandardEq} is a generalization of the Kuramoto model that has often been studied in the past, motivated by its physical and biological applications~\cite{ermentrout84, ermentrout85, kopell86, ostborn04}. However, we believe it is instructive to consider the alternative Eq.~\eqref{TelescopicEq} as well, and will devote most of our attention to it below. Where the distinction becomes important, we will say Eq.~\eqref{StandardEq} represents {\it standard coupling}, and Eq.~\eqref{TelescopicEq} represents {\it telescopic coupling}, thanks to some convenient cancellation properties it enjoys. We will restrict attention to continuously differentiable coupling functions $f$ that are $2\pi$-periodic, and will also demand that $f$ is nonconstant and has at least one zero.
Although telescopic coupling is unconventional, it coincides with standard coupling when $f$ is an odd function, as commonly assumed in the physics literature. In that sense, telescopic and standard coupling schemes are on equal footing as generalizations of Kuramoto's sinusoidal coupling. Actually, considering the vast literature that focuses on pure sine coupling, even that special case remains of interest. Our results for telescopic coupling will include the traditional sine case while extending it to a new and wider family of models.
One possible objection is that telescopic coupling injects a directionality to a chain or ring. To see this, note that swapping the oscillators and natural frequencies ``left to right'' ($j \to N-j+1$ for $j=1,\ldots, N$) changes the governing equations for telescopic coupling, but not for standard coupling. But such a directionality may be reasonable in some contexts. For example, there are a number of physical and biological systems which have been modeled as directed chains of oscillators, such as central pattern generators for the swimming rhythm of lamprey~\cite{cohen82, kopell86, cohen92, ren00}.
A related point in favor of telescopic coupling is that it eases the analysis of oscillator arrays whose coupling functions $f$ lack odd symmetry. Although some results have been obtained for non-odd coupling on a chain ~\cite{kopell86, ostborn04, strogatz88a}, these are rare. Much of the existing research in this field has relied on the oddness of the coupling function and struggled otherwise. As we will see, telescopic coupling handles non-odd functions without difficulty.
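As a concrete illustration, the right-hand side of a telescopically coupled open chain can be evaluated in a few vectorized lines (a sketch; the function and variable names are ours):
\begin{verbatim}
import numpy as np

def telescopic_chain_rhs(theta, omega, f=np.sin):
    # Interior nodes obey the telescopic rule; the two boundary nodes each
    # lack one of the flux terms.
    flux = f(theta[:-1] - theta[1:])  # f(theta_k - theta_{k+1}), k = 1..N-1
    dtheta = np.array(omega, dtype=float)
    dtheta[:-1] -= flux               # node k loses  f(theta_k - theta_{k+1})
    dtheta[1:] += flux                # node k+1 gains f(theta_k - theta_{k+1})
    return dtheta
\end{verbatim}
Summing the components of the output telescopes the flux terms and returns $\sum_k\omega_k$, foreshadowing the calculation of $\Omega$ in the next section.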
\section{Critical Width $\Gamma_C$ for a Chain}
To begin the analysis, we calculate the critical width $\Gamma_C$ above which the chain has no phase-locked solutions \cite{strogatz88a, strogatz88b}. After including the chain boundary terms, and assuming telescopic coupling as in Eq.~\eqref{TelescopicEq}, the dynamics are given by
\begin{alignat*}{1}
\dot \theta_1 =& \omega_1 - f(\theta_1 - \theta_2), \\
\dot \theta_k =& \omega_k +f(\theta_{k-1}- \theta_k) - f(\theta_{k}- \theta_{k+1}), \text{for} \ 1 < k < N, \\
\dot \theta_N =& \omega_N +f(\theta_{N-1}- \theta_N).
\end{alignat*}
By definition, for $\Gamma \leq \Gamma_C$, the system evolves to a stable locked state, and conversely, locking is impossible for $\Gamma > \Gamma_C$. So if we find a condition on the existence of a locked state, we get a condition on $\Gamma_C$.
Recall that locking occurs when $\dot\theta_k \equiv \Omega$ for all $k$, for some $\Omega$. If we simply sum all $N$ of the differential equations above and then divide by $N$, we find
\begin{alignat*}{1}
\Omega = \frac{1}{N}\sum_{k=1}^N \omega_k,
\end{alignat*}
where we took advantage of the telescoping nature of the coupling. Let
\begin{equation}
\bar\omega = \frac{1}{N}\sum_{k=1}^N \omega_k.
\end{equation}
This allows us to rewrite our condition for locking as
\begin{alignat*}{1}
\omega_1 - \bar \omega =& f(\theta_1 - \theta_2), \\
\omega_k - \bar \omega =& -f(\theta_{k-1}- \theta_k) + f(\theta_{k}- \theta_{k+1}), 1<k<N, \\
\omega_N - \bar \omega =& -f(\theta_{N-1}- \theta_N).
\end{alignat*}
Sum the first $k$ equations and telescope them to obtain
\begin{alignat*}{1}
\sum_{j=1}^k \left(\omega_j - \bar \omega\right) = f(\theta_k - \theta_{k+1}).
\end{alignat*}
Let us define
\begin{equation}
\phi_{k} = \theta_k - \theta_{k+1}
\end{equation}
and
\begin{equation}
D_k = \sum_{j=1}^k (\eta_j - \bar \eta)
\end{equation}
for $k = 1, \ldots, N-1$, where $\bar\eta := \frac{1}{N}\sum_{j=1}^N \eta_j$ denotes the mean of the $\eta_j$, so that $\bar\omega = \Gamma\bar\eta$. This yields
\begin{alignat}{1} \label{ChainEq}
f(\phi_k) = \Gamma D_k,
\end{alignat}
which is an exact condition for a locked state in the chain topology. In particular, this means that $\Gamma_C$ corresponds to the supremum of all $\Gamma$ for which the above equation is satisfied and the solution $\phi = (\phi_1, \ldots, \phi_{N-1})$ is stable. This condition is equivalent to one found previously for sine coupling~\cite{strogatz88a, strogatz88b}.
\subsection{Existence and Stability of the Locked State}
Next we check that condition \eqref{ChainEq} is satisfiable for the class of $f$ under consideration. Our biggest demand on $f$ was that it be continuously differentiable and periodic. Continuous periodic real-valued functions are bounded and attain their maximum and minimum, so we know that both $f$ and $f'$ attain their upper and lower bounds. Let us define the bounds $f_u := \max_x f(x),$ and $f_l := \min_x f(x).$ We also requested that $f$ be non-constant and cross zero, so $f_u > 0 > f_l.$
Given a particular realization of $\eta_k$'s, we can define $D_u := \max(0, \max_k(D_k))$ and $D_l := \min(0, \min_k(D_k))$. So $D_u$ represents the largest positive value of $D_k$ if it exists and 0 otherwise, with $D_l$ similarly defined for negative values, enforcing $D_l \leq 0 \leq D_u$. Therefore, we know that all locked states disappear at the critical value
\begin{equation} \label{ChainCrit} \Gamma_C = \min(f_u/D_u, f_l/D_l ).\end{equation}
We formally take $1/0 = \infty$; notice that $\Gamma_C = \infty$ if and only if $D_k =0$ for all $k$, which is only possible if all the $\eta_k$ are identical. Also note that since $f_u$ and $f_l$ represent global bounds on $f$, then no equilibrium {\it at all} can exist when $\Gamma > \Gamma_C.$ However, for $\Gamma <\Gamma_C$ we can always find a set of $\phi_k$ that will satisfy the prior equations. This makes $\Gamma_C$ the true point between a locked state existing and disappearing.
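Eq.~\eqref{ChainCrit} translates directly into a short computation; the following sketch (with illustrative names) evaluates $\Gamma_C$ for a given realization of the $\eta_k$ and the bounds $f_u$, $f_l$:
\begin{verbatim}
import numpy as np

def critical_width_chain(eta_vec, f_u, f_l):
    # Gamma_C = min(f_u / D_u, f_l / D_l), with 1/0 read as infinity.
    D = np.cumsum(eta_vec - np.mean(eta_vec))[:-1]  # D_1, ..., D_{N-1}
    D_u = max(0.0, D.max())
    D_l = min(0.0, D.min())
    term_u = f_u / D_u if D_u > 0 else np.inf
    term_l = f_l / D_l if D_l < 0 else np.inf       # f_l, D_l < 0: ratio > 0
    return min(term_u, term_l)
\end{verbatim}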
\begin{figure}
\includegraphics[width = 0.5\textwidth]{LambdaPlot.pdf}
\caption{Example showing a choice of $\Lambda$ for a specific coupling function $f(x) = \sin(x)+\cos(3x)$, as indicated by the shaded region on the $x$-axis. The dashed lines illustrate how $\mbox{image}(f|_\Lambda)$ $= \mbox{image}(f)$, up to global extrema, while having $f'|_\Lambda(x) > 0 $. }
\label{LambdaPlot}
\end{figure}
However, knowing a phase-locked state exists does not ensure it is stable. Fortunately, it is not hard to show if $\Gamma < \Gamma_C$, then a stable locked state exists. For any $y \in (f_l, f_u)$ there exists some point $x$ where $f(x) = y$ and $f'(x) >0$; otherwise $f$ could never climb from $y$ to $f_u$. Moreover, since $f'$ is bounded, there are only finitely many $x$ which could work for each $y$ in the bounded domain $(-\pi, \pi]$. And since $f'$ is continuous, then $f'$ will be positive in a neighborhood of $x$, so our point selection can take advantage of this. Ergo there exists some open set $\Lambda$ where $f$ restricted to $\Lambda$ always has positive derivative and is surjective onto $(f_l, f_u).$ For a visual example, see Fig.~\ref{LambdaPlot}.
Returning to our original question, if $\Gamma < \Gamma_C,$ we can select a set of $\phi_k$ out of $\Lambda$ in a well-defined way, where $f'(\phi_k) >0$ and $f(\phi_k) = \Gamma D_k$. A theorem of Ermentrout ~\cite{ermentrout92} then guarantees that such a solution is asymptotically stable. Therefore, Eq.~\eqref{ChainCrit} really does define $\Gamma_C$, below which at least one stable locked state exists and above which none do.
\section{An upper bound on $\Gamma_R/\Gamma_C$}
The next step is to obtain an upper bound on $\Gamma_R$, the locking threshold for a ring. Although the interior of a chain looks the same as a ring, the two differ in their boundary terms, as seen in the following equations:
\begin{alignat*}{1}
\dot \theta_1 =& \omega_1 + f(\theta_N - \theta_1) - f(\theta_1 - \theta_2), \\
\dot \theta_k =& \omega_k +f(\theta_{k-1}- \theta_k) - f(\theta_{k}- \theta_{k+1}), 1<k<N, \\
\dot \theta_N =& \omega_N +f(\theta_{N-1}- \theta_N) - f(\theta_N - \theta_1).
\end{alignat*}
Nevertheless, several steps in the following argument will be the same as for the chain. For example, locked states still satisfy $\dot\theta_k = \Omega$ for some $\Omega$, and we can still telescope the equations, yielding $\Omega = \bar\omega$ again. Similarly,
\begin{alignat*}{1}
\omega_1 - \bar \omega =& -f(\theta_N - \theta_1) + f(\theta_1 - \theta_2), \\
\omega_k - \bar \omega =& -f(\theta_{k-1}- \theta_k) + f(\theta_{k}- \theta_{k+1}), 1<k<N, \\
\omega_N - \bar \omega =& -f(\theta_{N-1}- \theta_N) + f(\theta_N - \theta_1)
\end{alignat*}
which can be telescoped into
\begin{equation}
\Gamma D_k = f(\phi_k) - f(\theta_N - \theta_1). \nonumber
\end{equation}
Here, $D_k$ and $\phi_k$ are defined exactly as in the last section. Hence, if we put the same choice of $\eta$'s on a ring and a chain, they would have the same vectors $D = (D_1, \ldots, D_{N-1})$. Also notice that $-\sum_{j=1}^{N-1} \phi_j = \sum_{j=1}^{N-1} (\theta_{j+1} - \theta_{j}) = \theta_N - \theta_1.$ Therefore, we can write
\begin{equation}\label{RingEq}
\Gamma D_k = f(\phi_k) - f\left(-\sum_{j=1}^{N-1} \phi_j\right).
\end{equation}
Equations like this have been found before for the special case of sine coupling~\cite{ochab09, tilles11, tilles13b}. Although Eq.~\eqref{RingEq} has a compact form, demonstrating that solutions to it exist and calculating them explicitly is a difficult endeavor; hence our more modest goal is to establish a bound on $\Gamma_R$.
Let us define $f_u, f_l, D_u$, and $D_l$ as before. Since $f_l$ and $f_u$ are the global extrema of $f$, the ring can have a locked state only if $f_l - f_u \leq \Gamma D_k \leq f_u - f_l$ for all $k$. This yields
\begin{equation}\label{RingCrit}
\Gamma_R \leq \min\left(\frac{f_u - f_l}{D_u}, \frac{f_l - f_u}{D_l} \right).
\end{equation}
Because we have been careful to use the same $D_k$ here as in the chain case, Eq.~\eqref{RingCrit} can be directly compared to Eq.~\eqref{ChainCrit} to give the bound
\begin{alignat*}{1}
\Gamma_R/\Gamma_C \leq& \min\left(\frac{f_u - f_l}{D_u}, \frac{f_l - f_u}{D_l} \right) / \min\left(\frac{f_u}{D_u},\frac{f_l}{D_l} \right) \\
=& \min\left(\frac{f_u - f_l}{D_u}, \frac{f_l - f_u}{D_l} \right)\max\left(\frac{D_u}{f_u}, \frac{D_l}{f_l} \right). \notag
\end{alignat*}
If $D_u/f_u > D_l/f_l$, then we have that
\begin{alignat*}{1}
\Gamma_R/\Gamma_C \leq& \min\left(\frac{f_u - f_l}{D_u}, \frac{f_l - f_u}{D_l} \right) \left( \frac{D_u}{f_u} \right) \\
\leq & \left(\frac{f_u - f_l}{D_u}\right) \left( \frac{D_u}{f_u}\right) \\
=& 1 + \left|\frac{f_l}{f_u} \right|.
\end{alignat*}
If otherwise $D_u/f_u < D_l/f_l$, then
\begin{alignat*}{1}
\Gamma_R/\Gamma_C \leq& \min\left(\frac{f_u - f_l}{D_u}, \frac{f_l - f_u}{D_l} \right) \left( \frac{D_l}{f_l} \right) \\
\leq& \left(\frac{f_l - f_u}{D_l}\right) \left( \frac{D_l}{f_l}\right)\\
=& 1 + \left|\frac{f_u}{f_l} \right|.
\end{alignat*}
Together these facts imply
\begin{equation} \label{RatioBound}
\Gamma_R/\Gamma_C \leq 1 + \max\left(\left| \frac{f_l}{f_u} \right|, \left|\frac{f_u}{f_l} \right| \right).
\end{equation}
Thus we have found a rigorous upper bound on the ``advantage'' of a ring over a chain, with the bound depending exclusively on the shape of the coupling function $f$. Also notice that the arguments of the $\max$ function are a positive real number and its reciprocal, so this upper bound is always at least 2.
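To make the bound concrete, here are two illustrative evaluations (worked out by us). For pure sine coupling, $f_u = -f_l = 1$, so the right-hand side of Eq.~\eqref{RatioBound} is exactly 2. For the non-odd coupling $f(x) = \sin(x+0.6) - \sin(0.6)$ used later in Fig.~\ref{WedgePlot}, we have $f_u = 1-\sin(0.6) \approx 0.435$ and $f_l = -1-\sin(0.6) \approx -1.565$, so the bound evaluates to
\begin{equation*}
1 + \left|\frac{f_l}{f_u}\right| \approx 1 + \frac{1.565}{0.435} \approx 4.59.
\end{equation*}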
In fact, if $f$ is odd and $N=2$ then $\Gamma_R = 2 \Gamma_C,$ so Eq.~\eqref{RatioBound} is tight in certain cases. We will discuss the linear stability of states on the ring later, but since we need a solution to exist before it can be stable, the bound is valid.
\section{Upper and lower bounds}
Now that we have the upper bound \eqref{RatioBound} on the ratio of the critical widths, it is natural to ask how sharp it is. The results shown in Fig.~\ref{WedgePlot} do exactly that. We generate many different realizations of the base frequency vectors $\eta$, plot the numerically obtained $\Gamma_C$ and $\Gamma_R$ on a scatterplot, and draw a solid line to denote our predicted boundary \eqref{RatioBound}. We first test an odd coupling function, namely sine; then we test several non-odd coupling functions. For the regimes being tested, many points congregate near our upper bound but, as expected, none actually cross it.
\begin{figure*}
\includegraphics[width = \textwidth]{ScatterPlot.pdf}
\caption{Scatterplot comparing the critical width $\Gamma$ for a chain and a ring of $N=25$ oscillators for a variety of coupling schemes and random realizations of the natural frequencies. For each data point, the corresponding ring and chain were matched, meaning that both were subject to the same initial conditions and natural frequencies. Initial phases $\theta_k(0)$ were chosen to be identically zero, and natural frequencies $\eta_k$ were drawn at random from a uniform distribution on $[-1,+1]$. Lines here represent a 1:1 ratio (dashed green line) and our theoretically predicted upper bound (solid red line defined by Eq.~\eqref{RatioBound}). Panel (a) has $f(x) = \sin(x)$ and panel (b) has $f(x) = \sin(x)+\cos(3x)$, both under the telescopic coupling scheme of Eq.~\eqref{TelescopicEq}. Panels (c) and (d) on the right both have $f(x) = \sin(x+0.6) - \sin(0.6)$; however, (c) uses telescopic coupling, whereas (d) follows the standard coupling equations of \eqref{StandardEq}. Notice that the upper bound is always obeyed, but some data points lie below the lower dashed line, showing that it is not a strict bound. Values of $\Gamma$ were estimated via a bisection technique combined with numerical integration, using a fourth-order Runge-Kutta method with a timestep of 0.125, a transient time between $5 \times 10^2$ and $2 \times 10^3$ time units, and observation times of $5 \times 10^2$. }
\label{WedgePlot}
\end{figure*}
But what about the lower dashed line representing $\Gamma_C = \Gamma_R$? It is tempting to think that this line should also be respected; after all, a ring has an additional coupling connection, and it has no free ends. With this extra edge to provide more coupling between the oscillators, one intuitively expects that a ring should always lock more easily than a chain. Moreover, the difference in boundary conditions means that the ring permits topologically twisted states~\cite{ermentrout85, wiley06} that would be impossible for the chain. This too would naively suggest that the ring is always more susceptible to locking than the chain is.
However, Fig.~\ref{WedgePlot} indicates that some cases lie below the dashed line. In such cases the chain locks when its matched ring does not. Apparently the naive intuition above is wrong. We now confirm this surprising result by constructing a counterexample.
\subsection{Counterexample to $\Gamma_C \leq \Gamma_R$}
In fact, the critical width of a chain is \emph{not} always less than that of a matched ring. Here is a counterexample. Say we have $N = 4$, $f = \sin$, and we have obtained a realization of $\eta$'s such that $D = (+1, -1, -1)^T$. Then Eq.~\eqref{ChainCrit} immediately implies that $\Gamma_C = 1$, and we can satisfy this system with $\phi_1 = -\phi_2 = -\phi_3 = \pi/2$.
Now consider what the corresponding locked state would be for the ring. If $\Gamma_C \leq \Gamma_R$ were true, we should be able to produce a locked solution to the ring equations~\eqref{RingEq} with $\Gamma=1$. Such a solution would then satisfy the following system:
\begin{alignat}{1} \label{Countereqs}
\sin(\phi_1) + \sin(\phi_1+\phi_2+\phi_3) &= +1, \\
\sin(\phi_2) + \sin(\phi_1+\phi_2+\phi_3) &= -1, \\
\sin(\phi_3) + \sin(\phi_1+\phi_2+\phi_3) &= -1.
\end{alignat}
Notice that if we subtract the second or third equation from the first, we get
\begin{alignat*}{1}
\sin(\phi_1) - \sin(\phi_2) &= +2, \\
\sin(\phi_1) - \sin(\phi_3) &= +2.
\end{alignat*}
From here, we realize we have no choice. It must be that $\phi_1 = \pi/2$ and $\phi_2 = \phi_3 = -\pi/2,$ which yields the desired contradiction, since it gives
\begin{equation}
\sin(\phi_1) + \sin(\phi_1+\phi_2+\phi_3) = \sin(\pi/2) + \sin(-\pi/2) = 0, \nonumber
\end{equation}
which violates the first of Eqs.~\eqref{Countereqs}.
The contradiction shows that even though we have a locked state for a chain, none exists for the ring. So sometimes $\Gamma_C \not\leq \Gamma_R$. Because this counterexample uses the sine function, it works for both the standard and telescopic coupling models.
Unfortunately, obtaining a genuine lower bound for $\Gamma_R/\Gamma_C$ is surprisingly involved, given that such pathologies arise even in the small-$N$ cases.
\section{Asymptotic existence}
Although small $N$ is problematic, the large-$N$ regime is more tractable. Let us fix $N$ to be large but finite, and choose some realization of $\eta$. If we start with $\Gamma < \Gamma_C$, we are guaranteed a phase-locked solution $\phi^{(C)}$ to the chain equation \eqref{ChainEq}, satisfying $f\left(\phi_{k}^{(C)}\right) = \Gamma D_k$ for all $k = 1, \dots, N-1$. Moreover, we are guaranteed to be able to choose these $\phi_k$ from the set $\Lambda$ as defined earlier.
We seek to construct an approximate phase-locked solution to the ring based on this chain solution. The coupling function $f$ is $2\pi$-periodic, so let us define
\[\Psi := \left( \sum_{j=1}^{N-1} \phi_{j}^{(C)}\right) \bmod 2\pi. \]
Thus $0 \leq \Psi < 2\pi$. Since we insisted that $f$ cross zero and be both nonconstant and periodic, there exists some point $x_0 \in (-\pi,\pi]$ such that $f(x_0) = 0$ and $f'(x_0) > 0$. We can then define $ \phi_{k}^{(R)} := \phi_{k}^{(C)} - (x_0 + \Psi)/(N-1)$ as a value close to $\phi_{k}^{(C)}$. This will represent our attempted solution to the ring equation \eqref{RingEq}.
First notice that
\begin{alignat*}{1}
f\left(-\sum_{j=1}^{N-1} \phi_{j}^{(R)}\right) &= f\left(-\sum_{j=1}^{N-1} \left(\phi_{j}^{(C)} - \frac{x_0 + \Psi}{N-1} \right) \right) \\
&= f\left( -\sum_{j=1}^{N-1} \phi_{j}^{(C)} + x_0 + \Psi \right)\\
&= f(x_0)\\
&= 0.
\end{alignat*}
And so we find
\begin{alignat*}{1}
&f\left(\phi_{k}^{(R)}\right) - f\left(-\sum_{j=1}^{N-1} \phi_{j}^{(R)}\right) \\
&= f\left(\phi_{k}^{(C)} - \frac{x_0 + \Psi}{N-1}\right).
\end{alignat*}
But recall that $f$ is continuously differentiable, so its derivative has a finite upper bound $f_u' = \max_x |f'(x)|$. In other words, for any $x$ and $\delta$, $|f(x) - f(x+\delta)| \le f_u' |\delta|.$ Therefore,
\begin{equation*}
\left| f\left(\phi_{k}^{(C)} - \frac{x_0 + \Psi}{N-1} \right) - f\left(\phi_{k}^{(C)}\right) \right| < f_u' \frac{|\Psi+x_0|}{N-1},
\end{equation*}
which implies
\begin{alignat*}{1}
f\left(\phi_{k}^{(R)}\right) - f\left(-\sum_{j=1}^{N-1} \phi_{j}^{(R)}\right) =\Gamma D_k + O(N^{-1}).
\end{alignat*}
So $\phi^{(R)}$ is an approximate solution to the ring equations that becomes exact as $N$ approaches infinity.
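This construction is easy to test numerically. The following sketch (our illustration, not code from the original study; it assumes $f=\sin$, for which we may take $x_0=0$) builds $\phi^{(R)}$ from a chain solution and evaluates the residual of Eq.~\eqref{RingEq}:
\begin{verbatim}
import numpy as np

def ring_from_chain(phi_C, x0=0.0):
    # phi_C: array of the N-1 chain phase differences phi_k^(C)
    N = len(phi_C) + 1
    Psi = np.mod(np.sum(phi_C), 2 * np.pi)
    return phi_C - (x0 + Psi) / (N - 1)

def ring_residual(phi_R, Gamma_D, f=np.sin):
    # residual of Eq. (RingEq); Gamma_D holds Gamma*D_k = f(phi_k^(C))
    return f(phi_R) - f(-np.sum(phi_R)) - Gamma_D

phi_C = 0.1 * np.ones(99)                # a toy chain solution, N = 100
phi_R = ring_from_chain(phi_C)
res = ring_residual(phi_R, np.sin(phi_C))
print(np.max(np.abs(res)))               # decays like O(1/N)
\end{verbatim}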
Figure~\ref{NormPlot} shows the convergence of this approximate solution for the ring to that for the chain. We numerically construct pairs of solutions that get closer as $N$ gets large. This all makes sense, since an infinitely long chain should be identical to an infinitely long ring.
Concerning stability, remember that the set $\Lambda$ is open, so for any $x\in \Lambda$ and sufficiently small $\delta$ we have $x+\delta \in \Lambda$. So this ring solution $\phi^{(R)}$ also lies entirely in $\Lambda$ for large enough $N$. This is almost enough to cite Ermentrout and establish the stability of this solution~\cite{ermentrout92}. However, we have an additional phase difference in our dynamics, $\theta_N - \theta_1 = -\sum_{j=1}^{N-1} \phi_{j}.$ In our proposed solution this quantity equals $x_0$ modulo $2\pi$, and by construction $f'(x_0) > 0$, so stability is secured.
To summarize, if we have a stable locked solution to the chain of oscillators for large $N$, then there is a nearby stable locked solution for the ring of oscillators. Hence, the naive lower bound $\Gamma_R \geq \Gamma_C$ is valid in the asymptotic case $N \gg 1$.
\begin{figure}
\centering
\includegraphics[width = 0.5\textwidth]{LogNormScatterPlot.pdf}
\caption{A plot showing the log of the separation (as measured by the infinity norm) between the vector of $\phi_k$'s for a chain and the same vector for a ring, given that they are subject to the same natural frequencies $\eta_k$, which were randomly drawn from a uniform distribution on $[-1,+1]$. We first calculate the locked solution for a chain, using an initial condition of all zeros. Then we use the final result of that calculation as the initial condition of the ring to allow for direct comparison. A coupling function $f(x) = -\sin(x)$ was used. The straight line shows the best linear fit to the log-log plot, indicating that we are seeing a decay comparable to $O(N^{-1})$. Values of the phases were computed by numerical integration, using a fourth-order Runge-Kutta method with a timestep of 0.125, a transient time between $5 \times10^2$ and $10^6$ time units and observation times of $10^3$.}
\label{NormPlot}
\end{figure}
\subsection{Partial Results for Standard Coupling}
Our prior argument relied very little on telescopic coupling. In fact, we can use a similar method to show an equivalent result using the standard coupling~\eqref{StandardEq} instead of telescopic coupling~\eqref{TelescopicEq}. However, this requires the additional constraint of $x_0 = 0$ or $\pi$ (where $f(x_0) = 0$ and $f'(x_0) > 0$).
To derive the relevant results, suppose that we have some set of $\phi_{k}^{(C)} = \theta_{k} - \theta_{k+1}$ which satisfy the standard coupling equations for the chain and are locked at $\dot\theta_k \equiv \Omega$. Then
\begin{alignat*}{2}
& \Omega = \omega_1 + f\left(-\phi_{1}^{(C)}\right), \\
& \Omega = \omega_k +f\left(\phi_{k-1}^{(C)}\right) + f\left(-\phi^{(C)}_{k} \right), 1< k <N, \\
& \Omega = \omega_N + f\left(\phi^{(C)}_{N-1}\right).
\end{alignat*}
Using the fact that $\theta_N - \theta_1 = -\sum_{j=1}^{N-1} \phi_{j}$, the condition for locking on a ring becomes
\begin{alignat*}{2}
& \Omega = \omega_1 + f\left(-\phi_{1}^{(R)}\right) + f\left(-\sum_{j=1}^{N-1} \phi_{j}^{(R)} \right), \\
& \Omega = \omega_k +f\left(\phi_{k-1}^{(R)}\right) + f\left(-\phi^{(R)}_{k} \right), 1<k <N, \\
& \Omega = \omega_N + f\left(\phi^{(R)}_{N-1}\right) + f\left(\sum_{j=1}^{N-1} \phi_{j}^{(R)} \right).
\end{alignat*}
If we try plugging in $ \phi_{k}^{(R)} := \phi_{k}^{(C)} - (\Psi+x_0)/(N-1)$, with $\Psi$ defined the same as before, then the sum terms will evaluate to $x_0$ modulo $2\pi$. If we use the continuity arguments from before for $1 < k <N$, then
\begin{alignat*}{1}
& \omega_k +f\left(\phi_{k-1}^{(C)}- \frac{\Psi + x_0}{N-1} \right) + f\left(-\phi^{(C)}_{k} + \frac{\Psi + x_0}{N-1} \right) \\
&= \Omega + O(N^{-1}).
\end{alignat*}
For $k = 1$,
\begin{alignat*}{2}
& \omega_1 + f\left(-\phi_{1}^{(R)}\right) + f\left(-\sum_{j=1}^{N-1} \phi_{j}^{(R)} \right) \\
&= \omega_1 + f\left(-\phi_{1}^{(C)} +\frac{\Psi + x_0}{N-1} \right)+ f(x_0) \\
&= \Omega + O(N^{-1}),
\end{alignat*}
and for $k=N$,
\begin{alignat*}{1}
& \omega_N + f\left(\phi^{(R)}_{N-1} \right) + f\left(\sum_{j=1}^{N-1} \phi_{j}^{(R)} \right)\\
&= \omega_N + f\left(\phi_{N-1}^{(C)} - \frac{\Psi + x_0}{N-1} \right)+ f(-x_0) \\
&= \Omega + O(N^{-1}).
\end{alignat*}
Hence, as $N \rightarrow \infty$ this solution becomes exact. We restricted $x_0$ because we needed $f(x_0) = 0 = f(-x_0)$, which is guaranteed only if $x_0 \equiv -x_0 \pmod{2\pi}$.
By periodicity and continuity, if $f_l < f(\phi_k) < f_u$, then there is always some $\phi_k'$ such that $f(\phi_k') = f(\phi_k)$ and $f'(\phi_k') > 0$. So without loss of generality, if we had a chain solution $\phi_{k}^{(C)}$, we could pick another solution where all the phase differences have positive slope in $f$. And for sufficiently large $N$, the same would hold true for the ring, since we are perturbing only slightly and we already assumed $f'(x_0)>0$ for the boundary term.
Therefore, given any existing locked solution for the chain with standard coupling (even an unstable solution), this argument guarantees the existence of a \emph{stable} locked solution for the chain and a stable approximate solution for the ring, also with standard coupling. This means the large-$N$ limit gives $\Gamma_R \geq \Gamma_C$ for standard coupling, just as it did for telescopic coupling. But unlike the more convenient case of telescopic coupling, we can no longer construct a locked solution in the first place nor can we put clean upper bounds on $\Gamma_R$ or $\Gamma_C$.
\section{Summary and future directions}
\begin{figure}
\centering
\includegraphics[width = 0.5\textwidth]{Models.png}
\caption{Schematic illustration of the relationship between standard coupling, telescopic coupling, and their agreement for odd coupling functions $f$.}
\label{ModelSchematic}
\end{figure}
Our main results have put a limit on the relative behavioral difference between a ring and a chain of phase oscillators. As noted in the introduction, it is generically hard to predict the conditions for synchronization. As we have shown, even a single additional connection can cause a \emph{doubling} of the locking threshold, emphasizing its sensitivity to topology. However, simply putting limits on the synchronization criterion is often good enough for practical purposes. This is especially true for our particular comparison, since an analytic criterion is exactly known for a chain, but no equivalent has been demonstrated for a ring.
Our analysis was facilitated by the introduction of the telescopic coupling scheme \eqref{TelescopicEq}. Thanks to its convenient analytic properties, a large collection of different results which typically require sine, odd, or some other heavily restricted coupling function has been generalized to a new family of $f$'s. And as we noted earlier, telescopic coupling \eqref{TelescopicEq} and standard coupling \eqref{StandardEq} have equally legitimate mathematical claims to being a generalization of the sine-based Kuramoto model. Moreover, as illustrated by Fig.~\ref{ModelSchematic}, these two coupling schemes exactly overlap in the case of odd $f$.
Regarding future directions, one possibility is to ask whether the results generalize to higher dimensions. Telescopic coupling introduces a directionality to a one-dimensional chain. The natural extension to higher-dimensional lattices would be to introduce a directionality along each axis. It is not hard to do such a thing, and when we do, we extend the same cancellation properties enjoyed by odd $f$ to generic $f$. What results might come of this?
Although we distinguished between the two coupling schemes \eqref{StandardEq} and \eqref{TelescopicEq}, we have not made much effort to connect their behaviors for $f$ non-odd. But hints of a connection are present. For example, the two plots on the right side of Fig.~\ref{WedgePlot} both seemingly obey our predictions, even though only one of them used our preferred telescopic coupling. The other came from standard coupling, about which we were unable to make comparably strong statements. Unfortunately, we made little progress in computing exact relationships between the two schemes. Given that the telescoping scheme is much easier to work with, any general connection could potentially shed a lot of light on the standard case.
Finally, the fundamental question of this paper could be generalized in an ambitious way. Given a network of oscillators on a connectivity graph $G$, how does the locking threshold $\Gamma$ change with the addition or removal of a single edge? We chose both the graph and the edge very carefully in this paper, but some of our logic might still be relevant to this larger problem. Considering the close relationship between the power grid and the Kuramoto model, this question might bear on current issues of power grid resilience~\cite{dorfler13, dorfler12, motter13, wang09, kinney05, albert04}.
This research was supported by a Sloan Fellowship and NSF Graduate Research Fellowship grant DGE-1650441 to Bertrand Ottino-L\"{o}ffler in the Center for Applied Mathematics at Cornell, as well as by NSF grants DMS-1513179 and CCF-1522054 to Steven Strogatz.
\section{Introduction}
\label{sec:introduction}
\IEEEPARstart{C}{lick}-through rate (CTR) prediction is an important application of machine learning algorithms,
which aims to predict the ratio of clicks to impressions of a specific link.
It has high commercial value for real-world use such as online advertising and recommendation systems \cite{Graepel10, Covington16, Cheng16, Guo17, Zhou18}.
But it is challenging to make a good prediction of CTR for several reasons.
For example, the input of CTR prediction tasks usually contains a number of categorical features,
which will be extremely sparse and high-dimensional if one-hot encoding is applied \cite{Shan16, He17}.
And not only the original features but also their interactions are of high importance,
which makes the problem more difficult \cite{Cheng16, Guo17, Wang17, Lian18}.
What's more, the useful interactions may be time-varying due to drifts in trends or individual interests.
The performance of a CTR prediction model can be significantly improved if appropriate interactions are added into the input \cite{McMahan13, He14, Beutel18, Luo19, Liu20}.
But as the total number of interactions grows rapidly with the number of input features,
it is difficult or infeasible to take all the interactions into consideration.
Many efforts have been devoted to detecting the useful interactions.
For example, some interactions are manually added to the input of Deep Neural Networks \cite{Covington16} and Wide\&Deep \cite{Cheng16}.
This could be laborious and may miss some useful interactions.
Many researchers and practitioners turn to algorithms that can detect interactions automatically.
Several automatic feature interaction methods have been proposed in the last few years,
such as AutoCross \cite{Luo19}, AutoInt \cite{Song19}, AutoCTR \cite{Song20}, AutoFIS \cite{Liu20} and GLIDER \cite{Tsang20}.
A shortcoming of these methods lies in their high computational cost.
For example, AutoFIS assigns a gate to each feature interaction to control whether it's selected.
So a network taking all the interactions as its input should be trained in the search stage.
AutoCTR has to train and evaluate a large number of networks with different architectures to perform NAS research.
The process of GLIDER may take several hours and significant RAM ($>$150GB) to detect the interactions.
For AutoInt and AutoCTR, another disadvantage is that it's inconvenient to interpret the discovered interactions.
As the interactions are represented by some non-additive functions of the embedding of features inside the model,
it's hard to interpret which interactions are discovered unless one checks the weights layer by layer.
In practical applications of recommendation systems, new data is collected every day because users continuously interact with the system.
How to make good use of both the latest and historical data is of great significance.
Training a new model only on the latest data is the simplest way, but it's a waste of the historical information.
Fine-tuning the model at hand with the new data is memory- and time-efficient.
However, it ignores the historical data that contains long-term preference signal, thus can easily cause overfitting and forgetting issues \cite{Zhang20}.
Full retraining, which trains the model on both the historical and new data, usually has the best performance but results in a heavy computational burden.
A better way to deal with this problem is online or streaming recommendation, which aims to refresh recommendation models based on real-time user interactions \cite{He16, Subbian16, Chang17}.
A related topic is called sequential or session-based recommendation \cite{Hidasi16, Tang18}, which takes a sequence of items that a user has interacted with as the input, and aims to predict the items that a user will likely interact in the future.
These methods are suitable for sequential data, but may still need retraining when new data comes.
Much attention has been paid to interaction detection or online algorithms for recommendation systems.
To the best of our knowledge, however, the topic of online interaction detection is relatively unexplored.
The above-mentioned interaction detection models should be retrained when a new set of data arrives.
Full retraining is computationally expensive, and adding an interaction which was only useful long ago may be harmful to current prediction.
So a practical concern is to perform interaction detection continuously, with more recent data making a greater contribution than less recent data.
Based on the idea of Random Intersection Chains \cite{lin21}, we propose an approach that can continuously detect the useful interactions without retraining on the historical data.
Random Intersection Chains detect interactions by taking some random samples from the database, then estimating the frequency of patterns in the final intersection by maximum likelihood estimation and calculating their confidence by Bayes' theorem.
In this work, the estimated frequency for a pattern is derived from maximum posterior estimation rather than maximum likelihood estimation.
The historical estimation of frequency could be used as the prior probability, so the method retains the historical knowledge well.
By adjusting the formulation of the prior probability, the relative importance of historical data and the latest data can be controlled.
Also, a framework for integrating the time-varying interactions is designed, with which almost any existing CTR prediction model can be applied after interaction detection.
To summarize, our main contributions in this paper are listed as follows:
\begin{itemize}
\item We propose an algorithm, named Online Random Intersection Chains, that can detect the meaningful high-order feature interactions explicitly and continuously.
\item We provide a framework to integrate the discovered interactions into the input of existing CTR prediction models.
Thus these models can make use of time-varying interactions.
\item We analyze the convergence and computational complexity of the proposed interaction detection algorithm. So the effectiveness and efficiency of the algorithm can be guaranteed.
\item We conduct a series of experiments on three large-scale datasets.
The results demonstrate that ORIC is efficient and consistent.
The found interactions are interpretable and helpful for CTR prediction.
\end{itemize}
The rest of the paper is organized as follows.
In Section~\ref{sec:preliminaries}, we give the definition of some related concepts, such as CTR prediction, categorical feature interaction and online interaction detection.
Then we introduce some related work about CTR prediction, interaction detection and online recommendation in Section~\ref{sec:related_work}.
In Section~\ref{sec:methods}, we formally introduce the algorithm for online interaction detection, named Online Random Intersection Chains, and a framework to make use of the detected interactions in online scenario.
Some theoretical analyses of the proposed algorithm are then given in Section~\ref{sec:theoretical_analysis}.
In Section~\ref{sec:experiment} we report the results of experiments on three benchmark datasets to verify the efficiency and effectiveness of the algorithm.
Finally Section~\ref{sec:conclusion} concludes this paper.
\section{Preliminaries}
\label{sec:preliminaries}
In this section, we formally give the definition of the related concepts,
such as CTR prediction, categorical feature interaction and online interaction detection.
\begin{definition}
\textbf{CTR prediction}:
Given a dataset $D=\{\boldsymbol{X}, \boldsymbol{y}\}$,
where $\boldsymbol{X}\in \mathbb{R}^{N\times p}$ contains the features of users and items,
$\boldsymbol{y}\in \{0, 1\}^N$ indicates the clicks of users to items.
CTR prediction aims to predict the probability of a user clicking a specific item.
\end{definition}
The $i$-th row of $\boldsymbol{X}$ is denoted by $\boldsymbol{X}_i$ and the $j$-th element of $\boldsymbol{X}_i$ is denoted by $\boldsymbol{X}_{i, j}$.
$\boldsymbol{X}_i$ contains the features of a user and an item.
Each feature is either numerical or categorical, where the categorical features are label-encoded.
So $\boldsymbol{X}_i$ is made up of real numbers and integers.
Usually only a small portion of the recommended items will be clicked by the users.
Therefore, relatively few samples in the dataset are belonging to the positive class, which makes CTR prediction an unbalanced binary classification task.
\begin{definition}
\textbf{Categorical feature interaction}:
If $C_1$, $C_2$, ..., $C_k$ are $k$ categorical features,
and $c_1$, $c_2$, ..., $c_k$ are specific categories for corresponding features,
then ($C_1=c_1$, $C_2=c_2$, ..., $C_k=c_k$) is called a $k$-order categorical feature interaction.
\end{definition}
Categorical feature interactions could be viewed in two different ways.
Firstly, an interaction is a binary feature, which will be assigned 1 if and only every element in the expression is satisfied.
For example, if an interaction is ($C_1$=0, $C_2$=1), then the interaction is 1 for the sample [$C_1$=0, $C_2$=1, $C_3$=1], but 0 for the sample [$C_1$=0, $C_2$=0, $C_3$=1].
This definition captures the interactions between different features, and helps the succeeding model to learn the nonlinear relationships.
One of the advantages of this definition lies in its interpretability.
As an interaction can be regarded as a logical expression, its practical meaning is quite obvious.
This definition coincides the term ``interaction'' used in \cite{Shah14},
and will reduce to the latter if all the input categorical features are binary.
The interaction defined here is also a non-additive interaction \cite{Friedman08, Sorokina08, Song19, Tsang18}, since it can not be represented by a linear combination of lower-order interactions.
Another way is to treat an interaction as a set of items, in which every constituent component $(C_i=c_i)$ is regarded as an item.
For instance, the interaction ($C_1$=0, $C_2$=1) is an itemset consisting of two items, and this interaction is contained in the sample [$C_1$=0, $C_2$=1, $C_3$=1], but not in the sample [$C_1$=0, $C_2$=0, $C_3$=1].
From this viewpoint, association rule mining methods can be adopted to detect the interactions \cite{lin21tkde}.
In this paper, we use the terms ``interaction'', ``itemset'' and ``pattern'' interchangeably.
\begin{definition}
\textbf{Online interaction detection}:
For $T\ge 1$, let $\boldsymbol{D}_{T}$ be the data collected at time $T$, $\mathcal{I}_{T}$ be the interaction detection model fitted on \{$\boldsymbol{D}_{t}:t\le T$\}.
Online interaction detection aims to find a map $f$ such that $\mathcal{I}_{T}=f(\mathcal{I}_{T-1}, \boldsymbol{D}_{T})$ for $T\ge 2$.
\end{definition}
Because essential interactions may change as time goes by, the interaction detection model should be refreshed with new data continuously.
Intuitively, more recent data has a higher value than less recent one.
But it's also unwise to ignore the information in the historical data.
So it's necessary to have an approach that can detect interactions from both latest and historical data,
while the relative importance of new data and the histories can be controlled.
In the rest of this paper, subscript ``$t$'' or ``$T$'' is used to identify the time period.
Subscript ``$s$'' means the symbol is corresponding to the pattern $s$, and superscript ``$(c)$'' stands for the class label $c$.
For example, we use $p_{s,t}^{(c)}$ to represent the frequency of pattern $s$ among the samples labeled with $c$ in the data collected at time $t$.
\section{Related Work}
\label{sec:related_work}
\subsection{CTR Prediction}
The CTR prediction problem could be regarded as a special binary classification task, where the data is highly unbalanced.
Various algorithms have been developed to deal with this problem.
For instance, Factorization machine (FM) associates each feature with a low-dimensional vector, and models all possible interactions by the inner product of the corresponding vectors \cite{Rendle10}.
Field-aware Factorization Machine (FFM) allows each feature to associate with different vectors when interacting with features from different fields \cite{Juan16, Juan17}.
There are also other factorization models, such as Attention Factorization Machine (AFM) \cite{Xiao17}, Neural Factorization Machine (NFM) \cite{He17}, Product-Based Neural Networks (PNN) \cite{Qu18} and Field-weighted Factorization Machine (FwFM) \cite{Pan18}.
Recently deep learning models are quite popular.
Deep Neural Networks (DNN) are used for CTR prediction \cite{Zhang16, Covington16}.
Wide\&Deep is proposed to jointly train wide linear models and deep neural networks \cite{Cheng16}.
To make use of the feature interactions, DeepFM uses FM as its ``wide'' part rather than a generalized linear model,
and it has a shared input to its ``wide'' and ``deep'' parts.
Replacing the FM in DeepFM with a Compressed Interaction Network (CIN) yields xDeepFM \cite{Lian18}.
\subsection{Interaction Detection}
Many automatic interaction detection methods for CTR prediction are proposed in the past few years.
AutoInt \cite{Song19} models the feature interactions by a multi-head self-attentive neural network with residual connections.
AutoCTR \cite{Song20} is based on Neural Architecture Search (NAS).
It modularizes interactions as virtual building blocks, wires them into a space of directed acyclic graphs, and then performs evolutionary architecture exploration to select a good architecture.
AutoFIS \cite{Liu20} introduces a gate for each feature interaction to control whether its output should be passed to the next layer, and retrains the model with the essential interactions.
GLIDER \cite{Tsang20} detects interactions by training a lasso-regularized multilayer perceptron (MLP) on a dataset, then identifying the features that have high-magnitude weights to common hidden units.
Fitting models on interactions is also a well-established practice among statisticians.
There exist a number of works dealing with interactive feature selection, especially for pairwise interactions.
Many of them select interactions by hierarchy principle \cite{bien13, hao14, hao18, agrawal19}.
Some works are free of the hierarchy assumption.
For instance, Thanei et al. proposed the xyz algorithm,
where the underlying idea is to transform interaction search into a closest pair problem which can be solved efficiently in subquadratic time \cite{Thanei18}.
Instead of the hierarchy principle, Yu et al. come up with the reluctant principle, which says that one should prefer main effects over interactions given similar prediction performance \cite{yu19}.
Most of the above-mentioned works concentrate on regression task and numerical features.
On the contrary, random intersection trees \cite{Shah14} detect interactions of binary predictors.
It works by detecting the frequent patterns in the positive class based on random intersections, and estimating the patterns' frequency in negative class based on min-wise hashing.
The idea that detecting the patterns frequent in positive class but infrequent in negative class coincides with association rule mining \cite{agrawal93, agrawal98}.
Random intersection chains \cite{lin21} detect frequent patterns by random intersection as well.
But the frequency in both positive class and negative class is estimated by maximum likelihood estimation, so a more careful selection can be conducted.
\subsection{Online Recommendation}
In practice, new data is continuously collected, so recommendation systems need to be retrained with this new data periodically.
An approach to deal with this problem is called online or streaming recommendation, which aims to update recommendations based on real-time user interactions \cite{He16, Subbian16, Chang17, Zhang20}.
For example, \cite{Chang17} proposes a framework termed sRec to provide explicit continuous-time random process models, and a variational Bayesian approach called recursive mean-field approximation to permit online inference.
\cite{He16} proposes an MF method aimed at learning from implicit feedback effectively and develops an incremental update strategy that instantly refreshes model parameters given new incoming data.
\cite{Zhang20} designs a neural network-based transfer component, which transforms the old model to a new model that is tailored for future recommendations.
Rather than CTR prediction, most of these models aim to predict the rating of a user on an item, and it's the rank that finally matters.
Usually the prediction is mainly based on users' history activities, with few or even no additional features about users or items.
A related topic is called sequential or session-based recommendation \cite{Hidasi16, Tang18}, which takes a sequence of items that a user has interacted with as the input, and aims to predict the items that a user will likely interact in the future.
\cite{Hidasi16} comes up with an RNN for each user’s interaction sequence to capture the interest evolution, and \cite{Tang18} uses CNN as a solution to address the sequential patterns.
These methods are suitable for sequential data, but may still need retraining when new data comes.
\section{Methods}
\label{sec:methods}
\subsection{Online Random Intersection Chains}
A random intersection chain is a linked list.
The head node contains the items in a random sample,
while each of the other nodes contains the intersection of the itemset in its previous node and another random sample.
The most intuitive way to store a random intersection chain is to record the itemset of each node.
But it's worth noting that the chain is nonincreasing.
In other words, the itemset in a node is a subset of that in the previous node (except for the head node).
Thus storing all the itemsets explicitly is wasteful.
To handle this problem, we come up with a special representation for chains.
We use two vectors to represent a chain, denoted by [$item$, $count$].
The first vector $item$ records the items in the head node and the other vector $count$ records how many times the corresponding item occurs in the chain.
In most cases, the input features as well as their order remain unchanged.
So $item$ can be further simplified as a copy of the first sample, and $count$ is a vector consisting of integers.
The detail of chain generation can be summarized as follows.
First randomly sample an instance $\boldsymbol{X}_{i_1}$,
set $item$=$\boldsymbol{X}_{i_1}$, and $count$=$\mathbf{1}_p$, where $\mathbf{1}_p$ is an all-ones vector of dimension $p$.
After choosing the $k$-th sample $\boldsymbol{X}_{i_k}$,
if $count_j$=($k$-1) and $item_j$=$\boldsymbol{X}_{{i_k}, j}$, then set $count_j$=$k$.
These operations will be repeated until the maximum length of a chain is reached or the number of items in the tail node is sufficiently small.
A typical process for chain generation is illustrated in Figure~\ref{fig:chain_generation}.
In this example, three randomly chosen samples are ($A$=$a_1$, $B$=$b_1$, $C$=$c_1$),
($A$=$a_1$, $B$=$b_2$, $C$=$c_1$) and ($A$=$a_1$, $B$=$b_1$, $C$=$c_2$).
The generated intersection chain is ($A$=$a_1$, $B$=$b_1$, $C$=$c_1$)$\to$($A$=$a_1$, $C$=$c_1$)$\to$($A$=$a_1$),
which could be represented by [($A$=$a_1$, $B$=$b_1$, $C$=$c_1$), (3, 1, 2)]
and further simplified as [($a_1$, $b_1$, $c_1$), (3, 1, 2)].
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{chain_generation}
\caption{An illustration of chain generation.}
\label{fig:chain_generation}
\end{figure}
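To make the procedure concrete, a minimal sketch of chain generation over label-encoded data is given below (our illustration; it assumes the rows of \texttt{X} are the samples of one class and uses NumPy for brevity):
\begin{verbatim}
import numpy as np

def generate_chain(X, L, rng):
    # head node: the items of one randomly drawn sample
    item = X[rng.integers(len(X))].copy()
    count = np.ones(len(item), dtype=int)
    for k in range(2, L + 1):
        sample = X[rng.integers(len(X))]
        # an item survives into node k iff it survived node k-1
        # and also occurs in the newly drawn sample
        survive = (count == k - 1) & (item == sample)
        count[survive] = k
        if not survive.any():   # the intersection became empty
            break
    return item, count

rng = np.random.default_rng(0)
X = np.array([[0, 1, 0], [0, 2, 0], [0, 1, 1]])  # toy data
print(generate_chain(X, L=3, rng=rng))
\end{verbatim}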
After the generation of chains, the itemsets in the tail nodes will be treated as frequent patterns.
For a pattern with frequency $p$, the distribution of its number of occurrences $k$ in a chain of length $L$ is
\begin{equation}
\mathbb{P}(k|p)=\begin{cases}
p^{k}(1-p), & \mbox{if~} k<L,\\
p^{k}, & \mbox{if~} k=L
\end{cases}.
\end{equation}
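As a quick sanity check, these probabilities sum to one:
\begin{equation*}
\sum_{k=0}^{L-1}p^{k}(1-p)+p^{L}=(1-p^{L})+p^{L}=1.
\end{equation*}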
And for $M$ such chains, the distribution is
\begin{equation}
\mathbb{P}(k_1, k_2, ..., k_M|p)=\mathbb{P}(K, I|p)=p^K(1-p)^I,
\end{equation}
where $k_m$ is the number of this pattern's occurrences in the $m$-th chain,
$K=\sum_{m=1}^Mk_m$ is the total number of occurrences of this pattern over the $M$ chains,
$I=\sum_{m=1}^M\chi_{[k_m<L]}$ is the number of chains that don't contain this pattern in its tail node.
Thus for $M$ chains, the log of likelihood is
\begin{equation}
\label{eq:loglikelihood}
\log \mathbb{P}(K, I|p)=K\log p+I\log (1-p).
\end{equation}
Setting the derivative equalling to zero and rearranging, the maximum likelihood estimation of frequency is
\begin{equation}
\label{eq:frequency_likelihood}
\hat{p}=\frac{K}{K+I}.
\end{equation}
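For instance (a worked example of ours), if a pattern occurs $K=30$ times in total and $I=10$ chains drop it before their tail nodes, then (\ref{eq:frequency_likelihood}) gives $\hat{p}=30/(30+10)=0.75$.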
If we have some prior knowledge about the frequency beforehand, e.g. the historical records of a pattern's frequency,
then the frequency could be estimated by maximum posterior estimation.
Assume the prior distribution of a pattern's frequency is subject to a beta distribution, which means
\begin{equation}
{\rm Beta}(p|a, b)=\frac{\Gamma (a+b)}{\Gamma (a)\Gamma (b)}p^{a-1}(1-p)^{b-1},
\end{equation}
where $\Gamma(\cdot)$ is the Gamma function, $a$ and $b$ are two parameters.
Then the posterior distribution has the form
\begin{equation}
\mathbb{P}(p|K, I)\propto \mathbb{P}(K, I|p){\rm Beta}(p) \propto p^{K+a-1}(1-p)^{I+b-1}.
\end{equation}
So the posterior distribution $\mathbb{P}(p|K, I)={\rm Beta}(p|K+a, I+b)$ has the same functional form as the prior.
This means beta distribution is indeed a conjugate prior,
thus the calculations can be greatly simplified.
Similar to maximum likelihood estimation, the maximum posterior estimation of frequency is
\begin{equation}
\label{eq:frequency_posterior}
\hat{p}=\frac{K+a-1}{K+a+I+b-2}.
\end{equation}
When $T$=1, we have no prior knowledge about the frequency.
The parameters $a$ and $b$ are simply set as 1 and the beta distribution reduces to a uniform distribution between 0 and 1.
The maximum posterior estimation in this case therefore coincides with the maximum likelihood estimation.
For $T\ge 2$, the posterior distribution at time $T$-1 can be used as the prior for time $T$.
So the posterior distribution at time $T\ge 2$ is ${\rm Beta}(p|\sum_{t=1}^{T}K_{t}+1, \sum_{t=1}^{T}I_{t}+1)$, where $K_{t}$ and $I_{t}$ are the corresponding statistics for chains generated at time $t$.
This is the same as if all the generated chains are used for the current estimation.
What's more, both $K_{t}$ and $I_{t}$ can be weighted by a coefficient $\gamma\in [0, 1]$, as shown in (\ref{eq:weighted_K_I}).
\begin{equation}
\label{eq:weighted_K_I}
\begin{aligned}
\hat{K}_{T}&=K_{T}+\gamma \hat{K}_{T-1}=\sum_{t=1}^{T}{\gamma^{T-t}K_{t}},\\
\hat{I}_{T}&=I_{T}+\gamma \hat{I}_{T-1}=\sum_{t=1}^{T}{\gamma^{T-t}I_{t}}.
\end{aligned}
\end{equation}
The corresponding maximum posterior estimation at time $T$ is (\ref{eq:frequency_posterior_weighted})
\begin{equation}
\label{eq:frequency_posterior_weighted}
\hat{p}_{T}=\frac{\hat{K}_{T}}{\hat{K}_{T}+\hat{I}_{T}}.
\end{equation}
For $\gamma<1$, the earlier historical records of $K$ and $I$ have less influence on the current estimation.
Therefore the impact of historical data is limited while some information is still acquired from it.
Once frequency is estimated, it's not difficult to calculate the confidence according to the Bayes' theorem as
\begin{equation}
\label{eq:confidence}
q_s^{(1)}=\mathbb{P}(Y=1|s\subset X)=\frac{p_s^{(1)}p^{(1)}}{p_s^{(0)}p^{(0)}+p_s^{(1)}p^{(1)}},
\end{equation}
where $p_s^{(c)}$ is the proportion of samples containing pattern $s$ among those with label $c$,
$p^{(c)}$ is the proportion of samples with label $c$.
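For a single pattern and class label, the bookkeeping behind (\ref{eq:weighted_K_I}), (\ref{eq:frequency_posterior_weighted}) and (\ref{eq:confidence}) can be sketched as follows (a minimal illustration of ours, not code from a released implementation):
\begin{verbatim}
class PatternStats:
    # weighted statistics of one pattern for one class label
    def __init__(self):
        self.K_hat = 0.0  # weighted occurrences, cf. eq:weighted_K_I
        self.I_hat = 0.0  # weighted count of chains missing the pattern

    def update(self, K_t, I_t, gamma):
        self.K_hat = K_t + gamma * self.K_hat
        self.I_hat = I_t + gamma * self.I_hat

    def frequency(self):
        # cf. eq:frequency_posterior_weighted
        return self.K_hat / (self.K_hat + self.I_hat)

def confidence(p_s1, p_s0, p1):
    # Bayes' theorem, cf. eq:confidence; p1 is the class-1 proportion
    return p_s1 * p1 / (p_s0 * (1.0 - p1) + p_s1 * p1)
\end{verbatim}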
The procedure of interaction selection is summarized as follows.
(i) Divide the database according to the label.
(ii) Generate chains for positive and negative class separately.
(iii) Estimate the frequency of patterns in the tail nodes of positive class by (\ref{eq:frequency_posterior_weighted}).
(iv) Collect the most frequent patterns as frequent patterns.
(v) Estimate the frequency of frequent patterns in negative class by (\ref{eq:frequency_posterior_weighted}) and calculate their confidence by (\ref{eq:confidence}).
(vi) Select the most confident patterns as useful interactions.
The hyperparameters in ORIC include the maximum length of a chain $L$, the number of chains $M$, the number of frequent interactions $d_{\rm freq}$, the number of confident interactions $d_{\rm conf}$, and the time decay parameter $\gamma$.
Denoting the set of ever detected patterns before time $T$ by $S_T$, the learning parameters at time $T$ are $\hat{K}_{s}^{(c)}$ and $\hat{I}_{s}^{(c)}$ for $c\in \{0,1\}$ and $s\in S_T$.
During the initialization of ORIC, the hyperparameters should be set by hand, and learning parameters are all assigned 0.
When new data is collected, ORIC is updated according to Algorithm~\ref{alg:update_oric}.
\begin{algorithm}
\caption{Update Online Random Intersection Chains}
\label{alg:update_oric}
\begin{algorithmic}[1]
\REQUIRE
$D_{T}$(newly collected data); \\
$\gamma$(time decay parameter);\\
\{$\hat{K}_{s,T-1}^{(c)}, \hat{I}_{s,T-1}^{(c)}$\}(current learning parameters)
\ENSURE
\{$\hat{K}_{s,T}^{(c)}, \hat{I}_{s,T}^{(c)}$\}(updated learning parameters)
\FORALL{$s\in S_T$}
\FOR{c in $\{0, 1\}$}
\STATE {$\hat{K}_{s,T-1}^{(c)}\leftarrow \gamma \hat{K}_{s,T-1}^{(c)}$}
\STATE {$\hat{I}_{s,T-1}^{(c)}\leftarrow \gamma \hat{I}_{s,T-1}^{(c)}$}
\ENDFOR
\ENDFOR
\STATE {Divide $D_{T}$ into $D_{T}^{(0)}$ and $D_{T}^{(1)}$}
\FOR{c in $\{0, 1\}$}
\STATE {Generate chains for $D_{T}^{(c)}$}
\FORALL{$s$ in tail nodes}
\STATE {$\hat{K}_{s,T}^{(c)}\leftarrow \hat{K}_{s,T-1}^{(c)}+K_{s,T}^{(c)}$}
\STATE {$\hat{I}_{s,T}^{(c)}\leftarrow \hat{I}_{s,T-1}^{(c)}+I_{s,T}^{(c)}$}
\ENDFOR
\ENDFOR
\STATE {Return \{$\hat{K}_{s,T}^{(c)}, \hat{I}_{s,T}^{(c)}$\}}
\end{algorithmic}
\end{algorithm}
\subsection{Streaming Integrated Model}
Since the proposed interaction detection method is model-agnostic,
almost all existing models for CTR prediction can be applied after the detection of interactions.
The interactions found by ORIC can be added to the original input directly as binary features.
But this approach has difficulty for online use.
Since the generated interactions evolve with time, training the model with current informative interactions but on the historical data seems redundant and problematic.
On the contrary, if the model is trained only with the latest data,
it doesn't make full use of the historical information.
Inspired by Wide\&Deep learning, we design a generic model framework as illustrated in Figure~\ref{fig:integrated_model}.
This model consists of a base part as well as an interaction part.
The base part takes the original features as input, while the interaction part takes the interactions as input.
Just like the setting in Wide\&Deep, the base part and interaction part are combined using a weighted sum of their output log odds as the prediction.
Both the base part and the interaction part can be any existing CTR prediction model, such as Wide\&Deep, DeepFM, xDeepFM, Deep\&Cross or AutoInt.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{combined_model}
\caption{Architecture of the streaming integrated model}
\label{fig:integrated_model}
\end{figure}
Because the input interactions change over time, the interaction part should be initialized every time new data is collected.
However, the input features for base part stay the same.
To make better use of historical information, we design a workflow analogous to transfer learning, as follows:
(1) Copy the weights from the previous streaming integrated model or an independent base model to the base part.
(2) Freeze the base part.
(3) Randomly initialize the weights of interaction part.
(4) Train the integrated model on the latest data.
An optional step is fine-tuning, which consists of unfreezing the base part and re-training the whole model on the new data with a low learning rate.
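A minimal sketch of this workflow in PyTorch is given below (our illustration; the attribute names \texttt{base} and \texttt{interaction} are hypothetical, and any CTR model exposing its two parts in this way would do):
\begin{verbatim}
import torch

def refresh_integrated_model(model, base_state, new_loader, lr=1e-3):
    model.base.load_state_dict(base_state)  # step (1): copy weights
    for p in model.base.parameters():       # step (2): freeze base
        p.requires_grad = False
    # step (3): the interaction part keeps its random initialization
    optim = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    for x, inter, y in new_loader:          # step (4): train on new data
        loss = torch.nn.functional.binary_cross_entropy(
            model(x, inter), y)
        optim.zero_grad()
        loss.backward()
        optim.step()
    return model
\end{verbatim}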
\subsection{Algorithm Overview}
We now provide the general procedure of applying Online Random Intersection Chains and building the streaming integrated model in practical online situations.
Algorithm~\ref{alg:evaluation_update} shows how we evaluate and update the model when new data comes at time $T$.
First the interactions for the newly collected data are generated according to the current interaction detection model.
Then the current integrated model can be evaluated with these interactions.
Next the base model and the interaction detection model are updated on the newly collected data.
Finally a new integrated model is built and trained on the latest data, which will be evaluated when new data is available in the future.
\begin{algorithm}
\caption{Model evaluation and update}
\label{alg:evaluation_update}
\begin{algorithmic}[1]
\REQUIRE
$D_{T}$(newly collected data); \\
$M_{T-1}$(current base model); \\
$\tilde{M}_{T-1}$(current integrated model); \\
$\mathcal{I}_{T-1}$(current interaction detection model)
\ENSURE
$M_{T}$(updated base model);\\
$\tilde{M}_{T}$(updated integrated model);\\
$\mathcal{I}_{T}$(updated interaction detection model)
\STATE {Generate interactions $I_{test,T}$ for $D_{T}$ according to $\mathcal{I}_{T-1}$}
\STATE {Evaluate $\tilde{M}_{T-1}$ with $D_{T}$ and $I_{test,T}$}
\STATE {Fine-tune $M_{T-1}$ with $D_{T}$ to be $M_{T}$}
\STATE {Update $\mathcal{I}_{T-1}$ with $D_{T}$ to be $\mathcal{I}_{T}$}
\STATE {Generate interactions $I_{train,T}$ for $D_{T}$ according to $\mathcal{I}_{T}$}
\STATE {Initialize the integrated model $\tilde{M}_{T}$ with $M_{T}$ as its base part.
Then fine-tune $\tilde{M}_{T}$ with $D_{T}$ and $I_{train,T}$}
\STATE{Return $M_{T}$, $\tilde{M}_{T}$ and $\mathcal{I}_{T}$}
\end{algorithmic}
\end{algorithm}
\section{Theoretical Analysis}
\label{sec:theoretical_analysis}
In this section, we theoretically analyze some properties of ORIC, such as convergence, the existence of appropriate hyperparameters and its computational complexity.
Most of the analyses are analogous to those in \cite{lin21} and the proofs are relegated to the Appendix.
One may question the rationality of (\ref{eq:frequency_posterior_weighted}), because at first glance $\hat{p}_{s,T}$ seems heuristic and lacking in practical meaning.
However, Theorem~\ref{thm:freq} shows that $\hat{p}_{s,T}$ is an estimator of an adjusted frequency $\tilde{p}_{s,T}$, which is a weighted average of all historical frequency.
The weight of historical frequency depends on the time stamp and the quantity of itself.
If $0<\gamma<1$, earlier records contribute less to the adjusted frequency.
And the influence of a larger frequency lasts longer than that of a smaller one.
\begin{theorem}
\label{thm:freq}
\it $\hat{p}_{s,T}$ calculated by (\ref{eq:frequency_posterior_weighted}) satisfies:
\begin{equation}
\sqrt{M}[\hat{p}_{s,T}-\tilde{p}_{s,T}]\stackrel{d}{\longrightarrow}n(0, \tau^2),
\end{equation}
where
\[\tilde{p}_{s,T}=\frac{1}{\sum_{t=0}^{T}\alpha_t} \sum_{t=0}^{T}\alpha_tp_{s,t},\]
\[\alpha_t=\frac{1-p_{s,t}^L}{1-p_{s,t}}\gamma^{T-t},\]
$\tau^2$ is a positive number depending on $\gamma$ and $p_{s,t}(1\le t\le T)$.
\hfill
\end{theorem}
Another concern is whether the frequent patterns can be detected by ORIC.
Due to the randomness of sampling, the algorithm is heuristic.
But according to Theorem~\ref{thm:exist_M_L}, it is guaranteed that the frequent patterns can be detected with arbitrarily high probability, as long as the hyperparameters are appropriately set.
\begin{theorem}
\label{thm:exist_M_L}
\it Given $\eta_1, \eta_2 \in (0,1]$, for any $\theta \in (0,1]$, there exist choices of the number of chains $M$ and the length of a chain $L$ such that the set of ever detected patterns $S_T$ contains a pattern $s$ with probability at least $1-\eta_1$ if $\tilde{p}_{s,T}^{(c)}\ge \theta$, and with probability at most $\eta_2T$ if $p_{s,t}< \theta$ for all $1\le t\le T$.
\hfill
\end{theorem}
Because infrequent patterns in the positive class will not be selected, keeping an eye on such patterns is useless.
As can be seen from Theorem~\ref{thm:exist_M_L}, there is a small chance for infrequent patterns to be detected by ORIC.
Together with the fact that only two numbers (namely $\hat{K}_{s}$ and $\hat{I}_{s}$) are stored for each pattern $s$, Theorem~\ref{thm:exist_M_L} ensures that ORIC requires little storage space.
During the update phase, additional space for $M$ chains is required.
Since a chain is represented by two vectors, the space complexity of an update is $O(M)$ and independent of the length $L$.
According to Theorem~\ref{thm:freq}, the estimation will be more precise for larger $M$.
Thus there is a trade-off between accuracy and efficiency.
If there are many patterns having similar frequency, $M$ should be sufficiently large to obtain accurate frequency.
Contrarily, a few chains are enough when the gap between frequent and infrequent patterns is wide.
\section{Experiments}
\label{sec:experiment}
In this section, experiments are conducted to answer the following questions:\\
(1) Is ORIC efficient enough for large-scale data? How much memory and time will it take to select the interactions?\\
(2) Are the estimations accurate and consistent enough to detect the informative interactions?\\
(3) Is integrating these interactions into the input helpful for existing CTR prediction models in online scenario?\\
(4) Are the detected interactions comprehensible? Do their meanings make sense for human beings?\\
\subsection{Experimental Settings}
We conduct experiments on three public real-world datasets, named Avazu, Criteo and Taobao.
The addresses for downloading the datasets and the experimental codes are given in the Appendix.
\textbf{Avazu: }
This dataset contains the records of whether a displayed mobile ad is clicked by a user or not.
Click-through data of 10 days, ordered chronologically, is provided.
And the total number of samples is above 40 million.
It has 23 features, all of which are categorical.
\textbf{Criteo: }
This is a benchmark dataset for CTR prediction, which consists of a portion of Criteo's traffic over a period of 7 days.
There are 45 million users’ clicking records on displayed ads, and the rows are chronologically ordered.
It contains 26 categorical features and 13 numerical features.
\textbf{Taobao: }
This is a dataset of click rate prediction about display Ad, which is displayed on the website of Taobao.
1,140,000 users from the website of Taobao are randomly sampled for 8 days of ad display/click logs to form the original sample skeleton.
There are 27 million records in the dataset.
13 categorical features and 1 numerical feature are used for making a prediction.
Some important statistics for these datasets are summarized in Table~\ref{tab:statistics}.
\begin{table}
\renewcommand{\arraystretch}{1.3}
\caption{Statistics for the datasets.}
\label{tab:statistics}
\centering
\begin{tabular}{ccccc}
\hline
Dataset & \#Sample & \#DenseFeat & \#SparseFeat & \#Category\\
\hline
Avazu & 40,428,967 & 0 & 23 & 1,544,488 \\
Criteo & 45,840,617 & 13 & 26 & 998,960 \\
Taobao & 26,557,961 & 1 & 13 & 2,667,994\\
\hline
\end{tabular}
\end{table}
\subsubsection{Data Preprocessing}
Both Avazu and Criteo are processed in the same way as provided in \cite{Song19}, which is also adopted by \cite{Song20, Tsang20}.
We ignore the infrequent categories whose number of appearances is below a threshold, and label them by a single integer ``0'', which stands for ``others''.
The threshold is 5 for Avazu and 10 for Criteo.
And a numerical value $z$ will be transformed to $(\log z)^2$ if $z>2$.
As for Taobao, we simply label encode the categorical features and standardize the numerical features.
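For concreteness, these transformations might be implemented along the following lines (a sketch of ours, assuming pandas-style feature columns):
\begin{verbatim}
import numpy as np
import pandas as pd

def transform_numeric(z: pd.Series) -> pd.Series:
    # z -> (log z)^2 for z > 2, following the protocol above
    out = np.where(z > 2, np.log(z.clip(lower=1)) ** 2, z)
    return pd.Series(out, index=z.index)

def bucket_rare(col: pd.Series, threshold: int) -> pd.Series:
    # relabel categories appearing fewer than `threshold` times
    # with the shared integer 0, standing for "others"
    counts = col.value_counts()
    rare = counts[counts < threshold].index
    return col.where(~col.isin(rare), other=0)
\end{verbatim}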
\subsubsection{Baselines}
To form a streaming integrated model, a basic prediction model is required.
We use a popular CTR prediction model, namely DeepFM, as the basic model.
It is worth mentioning that, other CTR prediction models, such as Wide\&Deep or xDeepFM, can also serve as the basic model.
It is also not necessary for the base part and the interaction part to belong to the same kind of model.
But in this paper, both the base part and the interaction part of the streaming integrated model are DeepFM.
We compare the proposed method with two automatic feature interaction methods for CTR prediction, namely AutoInt and DCN.
\textbf{DCN: }
While keeping the benefits of a DNN model, it introduces a novel cross network that is more efficient in learning certain bounded-degree feature interactions \cite{Wang17}.
\textbf{AutoInt: }
It maps the features into a low-dimensional space, and explicitly models the feature interactions in this low-dimensional space by a multi-head self-attentive neural network with residual connections \cite{Song19}.
Most online recommendation models are aimed at top-N recommendation rather than CTR prediction.
So we do not compare ORIC with existing online recommendation models.
Instead, two different retraining strategies are adopted to show the effectiveness and efficiency of ORIC:
\textbf{Fine-tuning: }
This method fine-tunes the CTR prediction model only on the newly collected data.
\textbf{Retraining-with-reservoir: }
This method maintains a reservoir of historical samples.
When new data comes, we fine-tune the CTR prediction model on both the reservoir and the newly collected data.
\subsubsection{Evaluation Protocols}
To simulate a real-world online scenario, we split each dataset into 10 equally sized parts $\{D_1, D_2, ..., D_{10}\}$ based on their temporal information.
The first 5 parts $\{D_1, D_2, ..., D_{5}\}$ are treated as the ``base training set'', which can be used to train an initial model and perform parameter selection.
The last 5 parts $\{D_6, D_7, ..., D_{10}\}$ are used to evaluate the online algorithms, as if each part were newly collected in a new period.
The procedure of hyperparameter selection and model evaluation can be summarized as follows.
At first, a base model is pre-trained on $D_1,...,D_4$.
Then ORIC with different parameters is fitted on $D_1,...,D_4$, after which a streaming integrated model is built and trained on $D_4$.
The streaming integrated model is then evaluated and updated on $D_5$ according to Algorithm~\ref{alg:evaluation_update}.
The model with the best parameter is preserved, which will be evaluated and updated on $D_6,...,D_{10}$ according to Algorithm~\ref{alg:evaluation_update}.
Two common metrics for CTR prediction are adopted to evaluate the models, namely AUC (Area Under ROC) and logloss (cross-entropy).
It is noticeable that a small increase of AUC or a slight decrease of logloss at the 0.001 level is regarded as significant for CTR prediction tasks, according to existing works \cite{Cheng16, Guo17, Wang17, Song19}.
\subsubsection{Implementation Details}
The structure of all the recommender models is the same as reported in \cite{Tsang20}.
And we use Adam \cite{Kingma14} with learning rate of 0.001 to optimize all deep neural network-based models.
We set all embedding sizes to 16, and the batch size is 8192 for all the cases.
The training is regularized by early stopping to prevent over-fitting.
For ORIC, we set the number of chains $M=10000$, and the number of frequent patterns $d_{\rm freq}=100$.
In order to control the order of the discovered interactions directly, we use the size of the tail node, rather than the chain length, as the stopping criterion.
A chain stops growing once its tail node contains no more than 4 components.
Moreover, according to the reluctant interaction selection principle, one should prefer a lower-order component over a higher-order interaction if all else is equal \cite{yu19, lin21tkde}.
We therefore remove from the result any interaction that is less confident than at least one of its constituents, so the number of finally selected interactions may be smaller than $d_{\rm conf}$.
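A minimal Python sketch of this filtering step is as follows, where each pattern is represented as a \texttt{frozenset} of components mapped to its estimated confidence (an illustrative implementation, not necessarily the exact one used in our experiments):
\begin{verbatim}
def reluctant_filter(conf):
    # conf: dict mapping frozenset(pattern) -> estimated confidence.
    # Drop a pattern if some strict sub-pattern is at least as
    # confident; ties are resolved toward the lower-order pattern,
    # following the reluctant interaction selection principle.
    kept = {}
    for s, c in conf.items():
        dominated = any(t < s and conf[t] >= c for t in conf)
        if not dominated:
            kept[s] = c
    return kept
\end{verbatim}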
The number of confident interactions $d_{\rm conf}$ and time decay parameter $\gamma$ are determined by grid search, where the searching range is $[10,20,...,100]$ for $d_{\rm conf}$ and $[0.0,0.1,...,1.0]$ for $\gamma$.
We also introduce another hyperparameter $\lambda$: for $\lambda>0$ it is the learning rate used after unfreezing the base part, while $\lambda=0$ means the base part is never unfrozen.
The searching range for $\lambda$ is $[0, 10^{-5}, 10^{-4}, 10^{-3}]$.
We first set $\gamma=1.0$ and $\lambda=0$, and test ORIC with different $d_{\rm conf}$ on the ``base training set''.
Fixing $d_{\rm conf}$ with the best value, we select the best $\gamma$.
After $d_{\rm conf}$ and $\gamma$ are assigned with the best values, $\lambda$ is finally determined.
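This coordinate-wise search can be summarized by the following Python sketch, where \texttt{fit\_and\_score} is a hypothetical routine that trains on the base training set with the given hyperparameters and returns a validation score (higher is better):
\begin{verbatim}
def sequential_search(fit_and_score):
    d_grid = range(10, 101, 10)
    gamma_grid = [i / 10 for i in range(11)]
    lambda_grid = [0, 1e-5, 1e-4, 1e-3]
    # Tune d_conf with gamma = 1.0 and lambda = 0 fixed.
    d_best = max(d_grid, key=lambda d: fit_and_score(d, 1.0, 0))
    # Tune gamma with the best d_conf fixed.
    g_best = max(gamma_grid, key=lambda g: fit_and_score(d_best, g, 0))
    # Finally tune lambda.
    l_best = max(lambda_grid,
                 key=lambda l: fit_and_score(d_best, g_best, l))
    return d_best, g_best, l_best
\end{verbatim}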
The best hyperparameters we found in this paper are listed in Table~\ref{tab:hyperparameter}.
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Hyperparameters for the datasets.}
\label{tab:hyperparameter}
\centering
\begin{tabular}{cccc}
\hline
Dataset & $d_{\rm conf}$ & $\gamma$ & $\lambda$\\
\hline
Avazu & 60 & 0.5 & $10^{-3}$ \\
Criteo & 30 & 0.0 & $10^{-5}$ \\
Taobao & 50 & 0.4 & 0 \\
\hline
\end{tabular}
\end{table}
The data is preprocessed on an Intel(R) Xeon(R) Gold 6148 CPU @2.40GHz,
and the deep models run on a single NVIDIA RTX 2080ti GPU card.
We adopt the implementation of the CTR models from the public repository DeepCTR.
\subsection{Efficiency of ORIC}
In this section, we show that ORIC is both time- and memory-efficient by the analysis of its procedure and the experimental results on three datasets.
As can be seen from the procedure of chain generation, the running time is mainly affected by the length and total number of chains as well as the number of categorical features.
The number of finally selected interactions $d_{\rm conf}$, the time decay parameter $\gamma$, and whether ORIC was previously trained have little influence on the running time.
Once the number of chains $M$ is determined, the running time mainly depends on statistics of the dataset, e.g., the number of categories and the differences between the frequencies of different patterns.
The update time on the 10 parts of each dataset is shown in Figure~\ref{fig:time_rico}.
The average time for updating ORIC is about 40 seconds for Avazu, 2 minutes for Criteo and 8 minutes for Taobao.
We can see that the update of ORIC is slowest on Taobao, although Taobao is the smallest dataset.
This is because Taobao contains the most categories, and many new patterns are detected when new data arrives, which requires much time to estimate their frequencies.
The demand for memory comes from two sources: one is storing the parameters of ORIC, the other is generating the chains.
As analyzed in Section~\ref{sec:theoretical_analysis}, only two integers are stored per pattern.
In fact, the average size of the ORIC model on the benchmark datasets is about 40 KB for Avazu, 300 KB for Criteo and 3 MB for Taobao.
The size of ORIC model on each period is shown in Figure~\ref{fig:size_rico}.
Not surprisingly, it is Taobao that consumes the most space due to its large number of categories.
While the size of ORIC on Avazu and Criteo is relatively stable, it increases constantly on Taobao, which indicates that many new patterns are found in every period.
Nevertheless, the space for storing ORIC is almost negligible.
As for chain generation, each chain is represented by two vectors whose dimension equals the number of categorical features, so the memory for generating and storing the chains is the same as that of $2M$ samples.
As stated earlier, $M$ is assigned 10,000 in this paper.
It is as if there is an additional dataset containing 20,000 samples during chain generation, which is very small compared with the original dataset consisting of tens of millions of samples.
\begin{figure}[!t]
\centering
\subfloat[Update time]{\includegraphics[height=1.7in]{time_rico.png}
\label{fig:time_rico}}
\hfil
\subfloat[File size]{\includegraphics[height=1.7in]{size_rico.png}
\label{fig:size_rico}}
\caption{Update time and size of ORIC on benchmark datasets.}
\label{fig:computational_complexity}
\end{figure}
\subsection{Consistency of ORIC}
To demonstrate the consistency of ORIC, we exhibit the evolution of the discovered interactions on the validation set $D_5$.
If the detected interactions are exactly the $d_{\rm conf}$ most confident ones, they should also be included in the resulting sets when $d_{\rm conf}$ becomes larger, and the result should change only slightly when $\gamma$ varies slightly.
We use Jaccard-index to evaluate the similarity of interactions found by different values of $d_{\rm conf}$ or $\gamma$.
The Jaccard-index is defined as
\begin{equation}
\label{eq:Jaccard_index}
{\rm J}(S, S')=\frac{|S\cap S'|}{|S\cup S'|},
\end{equation}
where $S$ and $S'$ are two sets, and $|\cdot|$ stands for cardinality of the set.
The larger Jaccard-index indicates the greater similarity between two sets.
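In code, the comparison reduces to a small helper such as the following Python sketch, with the detected interactions represented as Python sets:
\begin{verbatim}
def jaccard_index(s1, s2):
    # Jaccard-index of two sets of detected interactions.
    s1, s2 = set(s1), set(s2)
    if not (s1 or s2):
        return 1.0
    return len(s1 & s2) / len(s1 | s2)
\end{verbatim}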
We first fit ORIC with $\gamma=1.0$ and $d_{\rm conf}$ varying from 10 to 100.
Figure~\ref{fig:jaccard_index_nconf} exhibits the Jaccard-indices of adjacent $d_{\rm conf}$, where the y-axis denotes the Jaccard-index of interactions found with $d_{\rm conf}$ and $(d_{\rm conf}-10)$.
We can see great similarity between the interactions detected with close values of $d_{\rm conf}$.
The Jaccard-index of a set of cardinality $d_{\rm conf}$ and a set of cardinality $(d_{\rm conf}-10)$ is at most $(d_{\rm conf}-10)/d_{\rm conf}$, yet the Jaccard-indices in Figure~\ref{fig:jaccard_index_nconf} can even exceed this nominal upper bound.
This is because, among the $d_{\rm conf}$ most confident patterns, some high-order interactions are less confident than their components and are dropped in the final filtering step, so the number of finally selected interactions is smaller than $d_{\rm conf}$.
In fact, for $d_{\rm conf}\ge 60$, no new interactions are detected even when $d_{\rm conf}$ is enlarged.
We checked the interactions and confirmed that those found with smaller $d_{\rm conf}$ are always contained in the results for larger $d_{\rm conf}$.
Since each ORIC is built independently, these results verify the consistency of ORIC.
Fixing $d_{\rm conf}$ with the best value in Table~\ref{tab:hyperparameter}, Figure~\ref{fig:jaccard_index_gamma} shows the Jaccard-indices between the interactions found by similar $\gamma$, where the y-axis denotes the Jaccard-index of interactions found with $\gamma$ and $(\gamma-0.1)$.
The Jaccard-indices are large in general, which again verifies the consistency of ORIC.
There is a sudden fall at $\gamma=0.8$ on Criteo, meaning that the detected interactions differ considerably between $\gamma=0.7$ and $\gamma=0.8$.
For $\gamma=0.9$ and $\gamma=1.0$, however, large Jaccard-indices appear again, indicating that the patterns found with $\gamma=0.8$, 0.9 and 1.0 are similar.
A possible explanation is that there may be many patterns that are only frequent in one or a few periods.
They will be selected when $\gamma$ is sufficiently large but abandoned otherwise.
Another observation is that the detected interactions change considerably when the difference in $\gamma$ is large.
For example, the Jaccard-indices of interactions found with $\gamma=0$ and $\gamma=1$ are 0.533, 0.017 and 0.212 on Avazu, Criteo and Taobao, respectively.
This observation indicates the drift of meaningful interactions.
\begin{figure}[!t]
\centering
\subfloat[Number of interactions]{\includegraphics[height=1.7in]{jaccard_index_nconf.png}
\label{fig:jaccard_index_nconf}}
\hfil
\subfloat[Time decay]{\includegraphics[height=1.7in]{jaccard_index_gamma.png}
\label{fig:jaccard_index_gamma}}
\caption{Jaccard-indices for adjacent parameters.}
\label{fig:jaccard_index}
\end{figure}
\subsection{Effectiveness of ORIC}
We adopt DeepFM as the base model, while both fine-tuning and retraining-with-reservoir are used as the online updating methods.
Each part of the test data is divided into two halves: the first half serves as the training set, and the remaining half is used for validation to prevent overfitting.
For fine-tuning, the models are fine-tuned on the training set.
As for retraining-with-reservoir, we employ the technique of random sampling proposed in \cite{Vitter85}, which is also adopted in SPMF \cite{wang2018}.
A reservoir $R$ of size 2,000,000 is used to maintain some historical data.
The $i$-th instance in the stream is included with probability $|R|/i$, and replaces a random instance in reservoir $R$ if it is included.
After the reservoir is built, we retrain the models on the combination of the training set and the reservoir.
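This update rule is the classical Algorithm R; a minimal Python sketch is given below, where \texttt{seen} counts the instances observed so far so that the reservoir can be maintained across periods:
\begin{verbatim}
import random

def reservoir_update(reservoir, stream, capacity, seen):
    # Keep a uniform random sample of the whole stream in `reservoir`.
    for x in stream:
        seen += 1
        if len(reservoir) < capacity:
            reservoir.append(x)
        else:
            j = random.randrange(seen)  # keep x with prob capacity/seen
            if j < capacity:
                reservoir[j] = x
    return seen
\end{verbatim}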
The results of AutoInt and DCN, two CTR prediction models that can automatically learn feature interactions, are also given.
Experimental results on the benchmark datasets are shown in Table~\ref{tab:performance}.
As can be seen from the table, the performance of SIM relative to the compared methods verifies that adding the interactions found by ORIC and building the integrated model yields an improvement.
SIM outperforms the other methods in all cases except Avazu with the reservoir.
The comparison between the basic DeepFM model and SIM shows that the improvement exceeds the 0.001 level for the Criteo and Taobao datasets, which can be regarded as practically significant.
Despite the encouraging results, it should be noted that adding interactions or building an integrated model may cause overfitting.
For the Taobao dataset, the loss on the training data drops rapidly if we unfreeze the base part and fine-tune the whole SIM, but the loss on the validation data and on the data of the next period increases; the test result of the fine-tuned SIM is then usually worse than that of the basic DeepFM model.
In contrast, fine-tuning the whole SIM is beneficial for Avazu and Criteo.
Another difficulty in the online scenario is that the validation set may not represent the future test set well.
For the Avazu dataset, the validation loss is smaller if we fine-tune SIM with a smaller learning rate, but the test performance is worse even though the validation loss is lower.
These observations indicate that, although integrating interactions is usually helpful, it should be applied with caution.
\begin{table}
\caption{CTR prediction performance on the benchmark datasets.}
\label{tab:performance}
\centering
\begin{tabular}{ccccc}
\hline
Dataset & Method & Model & AUC & logloss\\
\hline
\multirow{8}{*}{Avazu} & \multirow{4}{*}{Fine-tune} & DeepFM & 0.7518 & 0.3918 \\
& & DCN & 0.7506 & 0.3935 \\
& & AutoInt & 0.7486 & 0.3933 \\
& & SIM & \textbf{0.7523} & \textbf{0.3911} \\
\cline{2-5}
& \multirow{4}{*}{Reservoir} & DeepFM & 0.7535 & 0.3902 \\
& & DCN & 0.7538 & 0.3903 \\
& & AutoInt & 0.7533 & 0.3902 \\
& & SIM & 0.7534 & 0.3904 \\
\hline
\multirow{8}{*}{Criteo} & \multirow{4}{*}{Fine-tune} & DeepFM & 0.8019 & 0.4514 \\
& & DCN & 0.8002 & 0.4525 \\
& & AutoInt & 0.8029 & 0.4505 \\
& & SIM & \textbf{0.8046} & \textbf{0.4485} \\
\cline{2-5}
& \multirow{4}{*}{Reservoir} & DeepFM & 0.8009 & 0.4524 \\
& & DCN & 0.8001 & 0.4528 \\
& & AutoInt & 0.8020 & 0.4516 \\
& & SIM & \textbf{0.8046} & \textbf{0.4486} \\
\hline
\multirow{8}{*}{Taobao} & \multirow{4}{*}{Fine-tune} & DeepFM & 0.6024 & 0.2007 \\
& & DCN & 0.5981 & 0.2010 \\
& & AutoInt & 0.6005 & 0.2010 \\
& & SIM & \textbf{0.6093} & \textbf{0.2002} \\
\cline{2-5}
& \multirow{4}{*}{Reservoir} & DeepFM & 0.6088 & 0.2015 \\
& & DCN & 0.6078 & 0.2014 \\
& & AutoInt & 0.6073 & 0.2011 \\
& & SIM & \textbf{0.6116} & \textbf{0.2009} \\
\hline
\end{tabular}
\end{table}
\subsection{Interpretability of ORIC}
Due to the comprehensibility of association rules, interactions detected by ORIC are highly interpretable.
Since Avazu contains non-anonymous features, we can understand the meaning of the interactions.
The 10 most confident patterns detected by ORIC in $D_1$ are listed in Table~\ref{tab:detected_interactions}.
To keep the notation uncluttered, we list only the names of the features and omit the specific value of each feature.
We can see that most of the patterns found by ORIC are pairwise or higher-order interactions, which reflects the fact that there are indeed many essential interactions between different features.
A closer look shows that many interactions combine a feature associated with ``app'' and a feature associated with ``device'', reflecting the relationship between an item and a user.
\begin{table}
\caption{Detected interactions for Avazu.}
\label{tab:detected_interactions}
\centering
\begin{tabular}{cc}
\hline
RID & Related Features \\
\hline
1 & app\_domain, app\_category, device\_conn\_type \\
2 & app\_category, device\_conn\_type \\
3 & app\_domain, app\_category \\
4 & app\_category \\
5 & app\_id \\
6 & app\_id, app\_domain \\
7 & app\_id, device\_conn\_type \\
8 & app\_id, app\_domain, device\_conn\_type \\
9 & C1, app\_domain, device\_id \\
10 & C1, app\_domain, device\_id, device\_type \\
\hline
\end{tabular}
\end{table}
For $D_1$, ORIC is not pre-trained and has no historical records, so the estimated frequency and confidence should approximate the exact frequency and confidence in $D_1$.
The estimated frequency in the negative class $\hat{p}^{(0)}$, the estimated frequency in the positive class $\hat{p}^{(1)}$, and the estimated confidence $\hat{q}^{(1)}$ are given in Table~\ref{tab:freq_conf}, together with the corresponding exact values.
As can be seen from the table, the estimates are very close to the exact values, with the numerical order well preserved.
This observation partially verifies that ORIC can find the most frequent and confident interactions.
\begin{table}
\caption{Statistics of Detected interactions for Avazu.}
\label{tab:freq_conf}
\centering
\begin{tabular}{ccccccc}
\hline
RID & $\hat{p}^{(0)}$ & $p^{(0)}$ & $\hat{p}^{(1)}$ & $p^{(1)}$ & $\hat{q}^{(1)}$ & $q^{(1)}$ \\
\hline
1 & 0.5908 & 0.5949 & 0.7112 & 0.7126 & 0.2022 & 0.2014 \\
2 & 0.5908 & 0.5949 & 0.7112 & 0.7126 & 0.2022 & 0.2014 \\
3 & 0.6188 & 0.6225 & 0.7417 & 0.7464 & 0.2015 & 0.2016 \\
4 & 0.6188 & 0.6225 & 0.7417 & 0.7464 & 0.2015 & 0.2016 \\
5 & 0.6064 & 0.6097 & 0.7243 & 0.7281 & 0.2010 & 0.2009 \\
6 & 0.6064 & 0.6097 & 0.7243 & 0.7281 & 0.2010 & 0.2009 \\
7 & 0.5803 & 0.5837 & 0.6933 & 0.6945 & 0.2010 & 0.2003 \\
8 & 0.5803 & 0.5837 & 0.6933 & 0.6945 & 0.2010 & 0.2003 \\
9 & 0.5747 & 0.5763 & 0.6757 & 0.6806 & 0.1985 & 0.1992 \\
10 & 0.5747 & 0.5763 & 0.6757 & 0.6806 & 0.1985 & 0.1992 \\
\hline
\end{tabular}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we propose a method to select categorical feature interactions for click-through rate prediction in an online scenario.
Common patterns among the positive samples are detected by random intersections, and their frequencies are then estimated by maximum posterior estimation, where the historical estimates can serve as prior knowledge.
Their confidence is then calculated by Bayes' theorem, and the most confident patterns are finally selected.
To make full use of the interactions, we construct a streaming integrated model that consists of two parts.
The base part takes the original features as input, while the interaction part is fed with the discovered interactions.
Experimental results on three benchmark datasets demonstrate the effectiveness of the proposed interaction detection methods and the integration approach.
One opportunity for future work is to adopt a more advanced time-series model to predict the future frequency and confidence of an interaction, rather than simply estimating their current values by maximum posterior estimation.
We are also trying to extend ORIC to numerical features.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
This work was supported by the National Natural Science Foundation of China under Grant No. 12071428 and 11671418, and the Zhejiang Provincial Natural Science Foundation of China under Grant No. LZ20A010002.
\else
\section*{Acknowledgment}
\fi
\bibliographystyle{IEEEtran}
\section{Proofs}
In this appendix we give the proofs omitted earlier in the paper.
To keep the notation uncluttered, we omit the subscript ``$s$'' and superscript ``$(c)$'' unless otherwise stated.
\subsection{Proof of Theorem~1}
\begin{IEEEproof}
For the $m$-th chain generated at time $t$,
\begin{equation}
\mathbb{P}(k_{m,t}|p_t)=\begin{cases}
p_t^{k_{m,t}}(1-p_t), \hfill \mbox{if~} k_{m,t}<L\\
p_t^{k_{m,t}}, \hfill \mbox{if~} k_{m,t}=L
\end{cases}.
\end{equation}
For the indicator variable $\chi_{m,t}=\chi_{(k_{m,t}<L)}$,
\begin{equation}
\mathbb{P}(\chi_{m,t}|p_t)=\begin{cases}
p_t^L, \hfill \mbox{if~} \chi_{m,t}=0\\
1-p_t^{L}, \hfill \mbox{if~} \chi_{m,t}=1
\end{cases}.
\end{equation}
According to the definition of expectation, we have
\begin{equation}
\begin{aligned}
{\rm E}[k_{m,t}]&=\frac{p_t(1-p_t^L)}{1-p_t}\\
{\rm E}[\chi_{m,t}]&=1-p_t^{L}.
\end{aligned}
\end{equation}
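For completeness, the first expectation follows from the tail-sum formula for non-negative integer-valued variables,
\[{\rm E}[k_{m,t}]=\sum_{j=1}^{L}\mathbb{P}(k_{m,t}\ge j)=\sum_{j=1}^{L}p_t^{j}=\frac{p_t(1-p_t^L)}{1-p_t},\]
and the second follows directly from ${\rm E}[\chi_{m,t}]=\mathbb{P}(k_{m,t}<L)=1-p_t^L$.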
Define
\[\hat{k}_{m,T}=\sum_{t=1}^T\gamma^{T-t}k_{m,t},\]
\[\hat{\chi}_{m,T}=\sum_{t=1}^T\gamma^{T-t}\chi_{m,t},\]
then we have
\begin{equation}
\begin{aligned}
{\rm E}[\hat{k}_{m,T}]&=\sum_{t=1}^T\gamma^{T-t}{\rm E}[k_{m,t}]=\sum_{t=1}^T\alpha_tp_t,\\
{\rm E}[\hat{\chi}_{m,T}]&=\sum_{t=1}^T\gamma^{T-t}{\rm E}[\chi_{m,t}]=\sum_{t=1}^T\alpha_t(1-p_t),
\end{aligned}
\end{equation}
${\rm Var}[\hat{k}_{m,T}]$, ${\rm Var}[\hat{\chi}_{m,T}]$ and ${\rm Cov}[\hat{k}_{m,T}, \hat{\chi}_{m,T}]$ could also be calculated by definition, but we omit their actual formulas for simplicity.
$\hat{k}_{m,T}$ can be treated as a random sample of the variable $\hat{k}_{T}$, and $\hat{\chi}_{m,T}$ as a random sample of the variable $\hat{\chi}_{T}$.
Define
\[g(k,\chi)=\frac{k}{k+\chi},\]
then we have
\[\hat{p}_T=g(\hat{K}_T,\hat{I}_T)=g(M\bar{\hat{k}}_T,M\bar{\hat{\chi}}_T)=g(\bar{\hat{k}}_T,\bar{\hat{\chi}}_T),\]
\[\tilde{p}_T=g({\rm E}[\hat{k}_T],{\rm E}[\hat{\chi}_T])=\frac{1}{\sum_{t=1}^{T}\alpha_t} \sum_{t=1}^{T}\alpha_tp_{t}.\]
Denote
\[g_k':=\frac{\partial g}{\partial k}\bigl({\rm E}[\hat{k}_{T}],{\rm E}[\hat{\chi}_{T}]\bigr), \]
\[g_\chi':= \frac{\partial g}{\partial \chi}\bigl({\rm E}[\hat{k}_{T}],{\rm E}[\hat{\chi}_{T}]\bigr), \]
\[\tau^2:= (g_k')^2{\rm Var}[\hat{k}_T]+(g_\chi')^2{\rm Var}[\hat{\chi}_T]+2g_k'g_\chi'{\rm Cov}[\hat{k}_T,\hat{\chi}_T], \]
then by the Multivariate Delta Method, we have
\[\sqrt{M}(\hat{p}_T-\tilde{p}_T)\to \mathcal{N}(0,\tau^2) \]
in distribution.
\end{IEEEproof}
\subsection{Proof of Theorem~2}
To prove Theorem 2, we restate Theorem 1 of \cite{lin21} here as Lemma \ref{lm:exist_M_L}.
\begin{lemma}
\label{lm:exist_M_L}
\it Given $\eta_1, \eta_2 \in (0,1]$, for any $\theta \in (0,1]$, there exist choices of $M$, $L$ such that $s$ appears in at least one of the tail nodes with probability at least $1-\eta_1$ if $P(s\subseteq X|Y=c)\ge \theta$, and with probability at most $\eta_2$ if $P(s\subseteq X|Y=c)< \theta$.
\hfill
\end{lemma}
\begin{IEEEproof}[Proof of Lemma \ref{lm:exist_M_L}]
Define $p_1$ as the smallest pattern frequency no less than $\theta$ and $p_2$ as the largest pattern frequency below $\theta$; that is,
\[ p_1=\min\{p_s:p_s\ge \theta\},\]
\[ p_2=\max\{p_s:p_s<\theta\}.\]
For a pattern $s$ and a chain of length $L$,
\[\mathbb{P}(s\subseteq S_{L,1})=p_s^L. \]
For a pattern $s$ and $M$ chains of length $L$,
\begin{equation}
\begin{aligned}
g(p_s;L,M)&=\mathbb{P}(s\subseteq S_{L,M})\\
&=1-[1-\mathbb{P}(s\subseteq S_{L,1})]^M\\
&=1-[1-p_s^L]^M.
\nonumber
\end{aligned}
\end{equation}
We can see that $g(p_s;L,M)$ is monotonically increasing in $p_s$ and $M$, and monotonically decreasing in $L$.
For $p_s\ge \theta$, if $M\ge \frac{{\log}\eta_1}{\log(1-p_1^L)}$, then
\begin{equation}
\begin{aligned}
\mathbb{P}(s\subseteq S_{L,M})&=g(p_s;L,M)\\
&\ge g(p_1;L,\frac{{\log}\eta_1}{\log(1-p_1^L)})\\
&=1-[1-p_1^L]^{\frac{{\log}\eta_1}{\log(1-p_1^L)}}\\
&=1-\eta_1.
\nonumber
\end{aligned}
\end{equation}
Define
\[M^*(L)=\lceil \frac{\log\eta_1}{\log(1-p_1^L)}\rceil, \]
\[\bar{M}(L)=\frac{\log\eta_1}{\log(1-p_1^L)}+1. \]
Thus $\bar{M}(L)\ge M^*(L)\ge \frac{\log\eta_1}{\log(1-p_1^L)}$. Then for $p_s\ge \theta$, we have $\mathbb{P}(s\subseteq S_{L,M})\ge 1-\eta_1$ if $M\ge M^*(L)$.
Next we give conditions under which the tail nodes contain $s$ with probability at most $\eta_2$ when $P(s\subseteq X|Y=c)< \theta$.
Fixing $M=M^*(L)$, for $p_s< \theta$ we have
\begin{equation}
\begin{aligned}
\mathbb{P}(s\subseteq S_{L,M^*})&=g(p_s;L,M^*)\\
&< g(p_2;L,\bar{M})\\
&=1-[1-p_2^L]^{\frac{{\log}(\eta_1)}{\log(1-p_1^L)}+1}\\
&=1-\eta_1^{\frac{{\log}(1-p_2^L)}{{\log}(1-p_1^L)}}(1-p_2^L).
\end{aligned}
\nonumber
\end{equation}
Define
\[f(L)=\frac{{\log}(1-p_2^L)}{{\log}(1-p_1^L)}.\]
Taking the derivative of $f$ with respect to a real-valued $L$, and dropping the positive factors $[\log(1-p_1^L)]^2$ (from the quotient rule) and $L$ (from $\log p_i^L=L\log p_i$), which do not affect the sign, we have
\begin{equation}
\begin{aligned}
f'(L)&\propto \frac{-p_2^L\log p_2}{1-p_2^L}\log(1-p_1^L)+\frac{p_1^L\log p_1}{1-p_1^L}\log(1-p_2^L)\\
&\propto \log(1-p_1^L)\log(1-p_2^L)[f_1(p_1^L)-f_1(p_2^L)],
\end{aligned}
\nonumber
\end{equation}
where
\[f_1(x)= \frac{x\log x}{(1-x)\log(1-x)}.\]
So the corresponding derivative is
\[f_1'(x)= \frac{(1+\log x-x)\log(1-x)+x\log x}{[(1-x)\log(1-x)]^2}.\]
Denote the numerator as $f_2(x)$, and take the derivative, then we have
\[f_2(x)=(1+\log x-x)\log(1-x)+x\log x,\]
\[f_2'(x)= \frac{(1-x)^2\log(1-x)-x^2\log x}{x(1-x)}.\]
Again denoting the numerator as $f_3(x)$ and taking the derivative, we have
\[f_3(x)=(1-x)^2\log(1-x)-x^2\log x,\]
\[f_3'(x)=-2(1-x)\log(1-x)-2x\log x-1.\]
Denoting $f_4(x)=f_3'(x)$, we have
\[f_4'(x)=2\log(1-x)-2\log x=2\log(\frac{1}{x}-1). \]
Therefore, for $x\in (0,1)$, noticing that $f_4(\tfrac{1}{2})=2\log 2-1>0$ while $\lim_{x\to 0}f_4(x)=\lim_{x\to 1}f_4(x)=-1<0$,
\begin{equation}
\begin{aligned}
&f_4'(x)>0~{\rm for}~x\in (0, \tfrac{1}{2}),~f_4'(x)<0~{\rm for}~x\in (\tfrac{1}{2}, 1)\\
\Rightarrow &f_3'(x)= f_4(x)~\mbox{is negative, then positive, then negative}\\
\Rightarrow &f_3(x)\le 0~{\rm on}~(0,\tfrac{1}{2}],~f_3(x)\ge 0~{\rm on}~[\tfrac{1}{2},1),\\
&\mbox{using}~f_3(1-x)=-f_3(x)~\mbox{and}~\lim_{x\to 0}f_3(x)=f_3(\tfrac{1}{2})=0\\
\Rightarrow &f_2'(x)\le 0~{\rm on}~(0,\tfrac{1}{2}],~f_2'(x)\ge 0~{\rm on}~[\tfrac{1}{2},1)\\
\Rightarrow &f_2(x)\le 0,~\mbox{since}~\lim_{x\to 0}f_2(x)=\lim_{x\to 1}f_2(x)=0\\
\Rightarrow &f_1'(x)\le 0.
\end{aligned}
\nonumber
\end{equation}
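As a numerical sanity check of the sign of $f_2$, at $x=1/2$ one indeed finds
\[f_2(\tfrac{1}{2})=\bigl(\tfrac{1}{2}-\log 2\bigr)(-\log 2)-\tfrac{1}{2}\log 2=(\log 2)^2-\log 2\approx -0.21<0.\]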
Noticing that $0\le p_2<p_1\le 1$, we have $f_1(p_1^L)<f_1(p_2^L)$, and thus $f'(L)<0$.
So $g(p_2;L,\bar{M})$ is a monotone decreasing function of $L$.
With the domain of $f$ extended to the real numbers, L'H\^{o}pital's rule gives
\begin{equation}
\begin{aligned}
\lim_{x\to \infty}f(x)&=\lim_{x\to \infty}{(\frac{-p_2^x\log p_2}{1-p_2^x}/\frac{-p_1^x\log p_1}{1-p_1^x})}\\
&=\frac{\log p_2}{\log p_1} \lim_{x\to \infty}(\frac{p_2}{p_1})^x \lim_{x\to \infty}\frac{1-p_1^x}{1-p_2^x}\\
&=\frac{\log p_2}{\log p_1}\cdot 0 \cdot 1 = 0.
\end{aligned}
\end{equation}
Then, by the Heine theorem,
\[\lim_{L\to \infty}f(L)=0.\]
Thus we have
\[\lim_{L\to \infty} g(p_2;L,\bar{M})=1-\lim_{L\to \infty}\eta_1^{f(L)}(1-p_2^L)=0.\]
So for any $\eta_2\in (0,1)$, there exists $L^*\in \mathbb{N}$ such that $\mathbb{P}(s\subseteq S_{L,M^*}^{(c)})\le \eta_2$ if $L\ge L^*$.
\end{IEEEproof}
\begin{IEEEproof}[Proof of Theorem~2]
Since $\tilde{p}_{s,T}$ is a weighted average of $(p_{s,1}, ..., p_{s,T})$, $\tilde{p}_{s,T}\ge \theta$ implies that there exists a $t\in \{1,2,...,T\}$ such that $p_{s,t}\ge \theta$.
According to Lemma \ref{lm:exist_M_L}, $s$ will be detected with probability at least $1-\eta_1$ at time $t$.
As for a pattern with $p_{s,t}<\theta$ for all $1\le t\le T$, Lemma \ref{lm:exist_M_L} indicates that the probability of not detecting it at one update is larger than $1-\eta_2$. Thus the probability of not detecting it in all $T$ updates is at least $(1-\eta_2)^T$. That is, it will be included in $S_T$ with probability less than $1-(1-\eta_2)^T\le \eta_2T$, where the last inequality follows from Bernoulli's inequality.
\end{IEEEproof}
\section{Related Links}
\textbf{Avazu dataset:}\\
https://www.kaggle.com/c/avazu-ctr-prediction/data.\\
\textbf{Criteo dataset:}\\
https://www.kaggle.com/c/criteo-display-ad-challenge.\\
\textbf{Taobao dataset:} \\
https://tianchi.aliyun.com/dataset/dataDetail?dataId=56.\\
\textbf{DeepCTR:}\\
https://github.com/shenweichen/DeepCTR.\\
\textbf{Repeatable experiment code:}\\
https://github.com/Lin-John/ORIC
|
1,116,691,500,293 | arxiv | \section{Introduction}
As high quality samples of graphene are now readily available,\cite{Morozov:08,Du:08,Bolotin:08} an exciting prospect is to observe signatures of strong electron correlations in this system. Graphene's distinctive feature with respect to a more conventional two-dimensional (2D) electron gas such as the one realized in GaAs-based structures is that its charge carriers are relativistic in character, having a linear energy dispersion.\cite{Geim:07, Neto:09} From a theoretical perspective it is interesting to understand the interplay between the Dirac character of the carriers and the effects of interactions between the electrons.\cite{Peres:11,Goerbig:10,Kotov:10} While the linearity of the spectrum has already been demonstrated by different experiments,\cite{Neto:09} the effects of interactions have been believed to be quite small in standard graphene samples. Recently, however, the fractional quantum Hall effect has been observed in suspended graphene,\cite{Du:09,Bolotin:09} demonstrating that interactions can play an important role in graphene in a strong magnetic field with sufficiently high mobility.
The analysis of fractional quantum Hall (FQH) states arising at high perpendicular magnetic field in a two-dimensional system has revealed most prominently the effects of electronic interactions.\cite{Tsui:82,Laughlin:83,Pinczuk:96} In GaAs-based 2D electron gases in the FQH regime, important information has been obtained from transport and shot noise experiments.\cite{Picciotto:97,Saminadaya:97} However, it is known that various 2D correlated states do not show anomalous transport signatures.\cite{Jain:89} Moreover, due to the manner of construction of the quantum well structures, direct access to their bulk properties is in general not straightforward. For example, only a few measurements of the local density of states have been performed in 2D gases,\cite{Chan:97,Dial:07,hashim:08} in general via indirect methods.
Compared to other 2D gases, graphene has the advantage that, being a surface electronic system, it is directly accessible, in addition to standard transport measurements, to local density of states (DOS) measurements such as scanning tunneling microscopy (STM).\cite{Li} This opens the perspective of using such measurements to obtain information about the electronic interactions. Here we study the DOS in graphene exposed to a strong magnetic field, focusing on the effects of the electronic interactions, taken into account via the method of Haldane pseudopotentials.\cite{Haldane:83}
Our analysis has been motivated by a recent experimental analysis of high-resolution time-domain capacitance spectroscopy for a 2D quantum well in the quantum Hall regime.\cite{Dial:07} The observation of unexpected peaks in the high-energy spectra for LL filling factors near $\nu=0$ and $\nu=1$, which the authors referred to as ``sashes'', has called for new perspectives in the problem of 2D correlation physics in strong magnetic fields. The filling factor $\nu=n_{el}/n_B$ is the ratio between the electronic density $n_{el}$ and that $n_B=eB/h$ of flux quanta threading the 2D surface of the system. Soon after the experimental observation, a few different approaches have been proposed for the description of these peaks in relation to the electronic interactions.\cite{MacDonald:10,Barak:10} In particular, in Ref.~\onlinecite{MacDonald:10} these peaks have been attributed to the strong correlations between electrons, which have been modeled using Haldane's pseudopotentials.
In the present paper, we generalize this calculation for 2D electrons in graphene. In this case, the total filling factor is defined as $\nu=\nu_n+\bar{\nu}$, where $\nu_n=\pm 2(2n+1)=\pm2,\pm6,\ldots$ is the filling at which relativistic integer quantum Hall effect occurs\cite{Peres:11,Goerbig:10} and $\bar{\nu}$ is the partial filling factor of the $n$-th LL. In this approach, at very low partial filling factors $\bar{\nu}\rightarrow 0$, besides a number of completely filled and inert LLs, only one extra particle is considered present in the ground state. Whereas this theoretical limit allows for an important simplification in the calculation of the tunneling density of states, it is experimentally relevant as long as the average distance between particles $d\sim l_B/\sqrt{\bar{\nu}}$ of electrons in the $n$-th LL is larger than the cyclotron radius $R_C=l_B\sqrt{2n+\delta_{n,0}}$ in graphene, in terms of the magnetic length $l_B=\sqrt{\hbar c/eB}\simeq 26\,\tr{nm}/\sqrt{B[T]}$. Our theoretical analysis is therefore applicable in the limit
$\bar{\nu}\ll 1/(2n+\delta_{n,0})$.
Assuming thus that the state of the system is a one-particle state, when a second particle tunnels in from the STM tip, the measured STM signal is proportional to the overlap between the resulting state, which is not an eigenstate of the high-magnetic field two-particle Hamiltonian, and the two-particle eigenfunctions. These eigenfunctions are characterized by a quantum number associated with the two-particle relative angular momentum. As a result, discrete peaks arise in the density of states \cite{lectures} corresponding to the energy differences between the two-body interacting eigenfunctions of the system and the one-particle state. These energy differences are given by Haldane's pseudopotentials, and their measurement via the peaks in the spectrum yields information about the interaction parameters in the system.
Similarly to non-relativistic 2D electron systems, such as in GaAs heterostructures, we find that the high-field DOS, close to the filling $\nu_n=\pm 2(2n+1)$, allows for a determination of Haldane's pseudopotentials and thus of the effective Coulomb interaction in graphene. However, contrary to
non-relativistic 2D electrons in a strong magnetic field, the center-of-mass (CM) is not separable from the relative coordinate as
a consequence of the Lorentz invariance. The extraction of Haldane's pseudopotentials in higher LLs ($n\neq 0$) is therefore more
involved than in $n=0$, where the two-body problem is equivalent to that in non-relativistic 2D electron systems.
Our results may be tested in high-field STM that has already been applied successfully in the past to graphene\cite{highBSTM},
and which has also been proposed as a tool for the study of high-field electron-solid phases\cite{popl}, as well as of the role of impurity scattering.\cite{bena:08,bena:10,cheianov:06}
The paper is organized as follows: in Sec. \ref{sec:2}, we review the recent theoretical interpretation of the ``sash" features observed in high-quality 2DEG samples via Haldane's pseudopotentials. In Sec. \ref{sec:3}, we generalize this theory to graphene by solving the two-body problem for Dirac quasiparticles in a strong magnetic field. With the help of the exact two-body eigenstates of the interacting system, in Secs. \ref{sec:4} and \ref{sec:5} we calculate the DOS, and we describe how it can be used to extract information about the pseudopotentials for the LLs $n=0$ and $n\neq 0$, respectively. The last section (Sec. \ref{sec:6}) of the paper presents a discussion of the results and the conclusions.
\section{Theoretical interpretation of the DOS under high magnetic field using Haldane pseudopotentials}
\label{sec:2}
Here we review the main aspects of the theory based on Haldane's pseudopotentials proposed in Ref.~\onlinecite{MacDonald:10} to explain the unexpected sashes that have been recently observed in the DOS of a 2D gas at high magnetic field.\cite{Dial:07} In standard STM experiments, the measured differential conductance is taken to be proportional to the tunneling DOS of the system being probed. At zero temperature, this quantity is given by
\begin{eqnarray}
A(\omega)= \sum_{\alpha} \bigl|\langle \Psi_{\alpha}(N+1)|c_{\beta}^\dag|\Psi_0(N)\rangle\bigr|^2 \delta\bigl(\omega-E_{\alpha,0}\bigr)\nonumber\\
\tr{\ }+\sum_{\alpha}\bigl|\langle \Psi_{\alpha}(N-1)|c_{\beta}|\Psi_0(N)\rangle\bigr|^2 \delta\bigl(\omega+E_{\alpha,0}\bigr),
\end{eqnarray}
where $|\Psi_0(N)\rangle$ is the ground state of the $N$-particle system and $c_{\beta}^{(\dagger)}$ is a fermionic operator
that removes (adds) a particle from (to) the one-particle state labeled by a set of quantum numbers $\beta$. In the high-field case for graphene, which we consider here, the set of quantum numbers is given by $\beta=\{n,m,\sigma,K/K'\}$, where $n$ is the LL index, $m$ labels the degenerate one-particle states inside the level, $\sigma$ is the spin index and $K/K'$ is the valley index.\cite{lectures} Thus, the first term describes an electron added to the system, whereas the second one corresponds to removing an electron. Here, $|\Psi_{\alpha}(N\pm1)\,\rangle$ are the exact eigenstates of the interacting $N\pm1$-particle system labeled by another set of quantum numbers $\alpha$. Their energy difference is given by $E_{\alpha,0}=[E_{\alpha}(N\pm1)-E_0(N)]$. Because the quantum number $m$ is associated with the center of the electronic cyclotron motion, which is a constant of motion, the DOS is independent of $m$ in a translationally invariant system.\cite{lectures} Since the DOS yields information about the spectral properties of the many-body system, STM experiments can thus provide important information on the correlation physics in the quantum Hall regime.
To evaluate the above formula, in Ref.~\onlinecite{MacDonald:10}, one considers the extremely dilute limit $(\bar{\nu}\simeq0)$, where the ground state simply consists of a single particle in the lowest LL $|\Psi_0(N=1)\,\rangle$. Due to particle-hole symmetry, these results also hold for the electron-removal part of the DOS when the system is close to an almost filled LL state. The tunneling experiment in this regime is then described by adding an extra particle instantaneously to the ground state (without perturbing it). The resulting state is not an eigenstate of the Hamiltonian, the exact two-body eigenstates $|\Psi_{\alpha}(N=2)\,\rangle$ of a 2D gas in a strong magnetic field being labeled by their relative angular momentum. Moreover, the eigenvalues corresponding to the two-particle interacting states are shifted by the Coulomb interactions, the shifts being described by Haldane's pseudopotentials.\cite{Haldane:83} The resulting spectral peaks in the DOS spectrum occur at energies matching the difference between the energy of the interacting two-particle states and that of the one-particle states; this difference is precisely given by Haldane's pseudopotentials. Thus the DOS spectrum provides detailed information about the values of the pseudopotentials in a system, which is crucial for understanding the formation of various correlated ground states.
\section{Two-body eigenstates of Dirac particles under magnetic field}
\label{sec:3}
As discussed in the previous section, knowledge of the exact two-body interacting eigenstates and their corresponding eigenvalues, as well as of the non-interacting state resulting from the addition of an extra particle to the one-particle state, is required in order to calculate the DOS in the low-filling-factor limit. Given that the two-particle eigenstates of the non-interacting Hamiltonian are also eigenstates of the Coulomb interaction potential, the two-particle interacting eigenstates have the same form as the non-interacting ones, but correspond to different eigenvalues. They are slightly more complicated for graphene, for which the relevant charge excitations are Dirac-like particles, than for a conventional 2D gas. Besides, one needs to take into account the extra spin and valley degrees of freedom. To begin, in Sect.~IIIA, we focus on the orbital part of the interacting wavefunction for two Dirac particles in a magnetic field. In Sect.~IIIB we write down the wavefunction obtained when adding an extra Dirac particle to a one-particle state. The effects of the extra degrees of freedom (spin, valley) will be touched upon in Sect.~IIIC.
\subsection{Two-particle interacting eigenstates and the corresponding eigenvalues}
As mentioned above, the two-particle interacting eigenstates are the same as the non-interacting ones. In order to calculate them, one notes that at low energy (up to fractions of an eV), the electronic properties of graphene can be described using a continuum model, with electrons localized around the two Dirac cones, conventionally called the $K$ and $K'$ valleys.\cite{Neto:09} The Schr\"{o}dinger equation for a Dirac particle in the $K$ valley in a magnetic field can be written as
\begin{eqnarray}
H \Psi = v\vec{\sigma}\cdot\vec{\Pi}\,\Psi=E\Psi,
\end{eqnarray}
where the Fermi velocity $v\simeq 10^6$ m/s plays the role of the speed of light. Furthermore, $\vec{\sigma}=(\sigma^x,\sigma^y)$ in terms of Pauli matrices, and $\vec{\Pi}=(\Pi_x,\Pi_y)$ is the canonical momentum operator, after Peierls substitution, which obeys $[\Pi_x,\Pi_y]=-i\hbar^2/l_B^2$.
By defining the ladder operators $a=l_B(\Pi_x-i\Pi_y)/\sqrt{2}\hbar$ and $a^\dag=l_B(\Pi_x+i\Pi_y)/\sqrt{2}\hbar$ that obey the harmonic oscillator algebra, $[a,a^\dag]=1$, the Schr\"{o}dinger equation becomes
\begin{eqnarray}
\sqrt{2}\hbar \frac{v}{l_B}\begin{pmatrix} 0&a\\a^\dag&0\end{pmatrix}\Psi=E\Psi.
\end{eqnarray}
Diagonalizing the Hamiltonian yields the energy spectrum of relativistic LLs
\begin{eqnarray}
E=\lambda \hbar \frac{v}{l_B}\sqrt{2 n},
\end{eqnarray}
where $\lambda=\pm$, and $n=0,1,\ldots$ denotes the LL. The associated eigenstates are given by
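For orientation, using $l_B\simeq 26\,\mathrm{nm}/\sqrt{B[\mathrm{T}]}$ and $v\simeq 10^6$ m/s, the basic energy scale is $\hbar v/l_B\simeq 25\,\mathrm{meV}\times\sqrt{B[\mathrm{T}]}$, so that the first LL lies at
\[E_{n=1}=\sqrt{2}\,\frac{\hbar v}{l_B}\simeq 36\,\mathrm{meV}\times\sqrt{B[\mathrm{T}]}\simeq 0.11\,\mathrm{eV}\quad\mathrm{at}\ B=10\,\mathrm{T},\]
a much larger cyclotron gap than in non-relativistic 2D electron systems at comparable fields.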
\begin{eqnarray}
&&\Psi_{n=0,m}=\begin{pmatrix} 0 \\ |n=0,m\rangle \end{pmatrix},\ \text{for}~n=0\nonumber\\
&&\Psi_{\lambda n,m}=\frac{1}{\sqrt{2}}\begin{pmatrix} |n-1,m\rangle \\\lambda |n,m\rangle \end{pmatrix},\ \text{for}~n\neq 0,
\end{eqnarray}
in terms of the (non-relativistic) LL states $|n,m\rangle=\frac{(a^\dag)^n (b^\dag)^m}{\sqrt{n!m!}}|0\rangle$. Here, we have implicitly introduced the cyclotron-orbit-center operator $b=[x+iy +\frac{i}{m\omega}(\Pi_x+i\Pi_y)]$ that has the associated quantum number $m$ with $b^\dag b \ \mathbb{I}\ \Psi_{n,m} = m \Psi_{n,m} $, where $\mathbb{I}$ is the $2\times 2$ identity matrix. Since $[a,b]=[a^\dag,b]=0$, and $[H, b^\dag b \ \mathbb{I}]=0 $, the additional quantum number $m$ labels the macroscopic degeneracy $N_{B}=n_B\mathcal{A}$ for each Landau level $n$, in terms of the total surface $\mathcal{A}$.
The solutions of the same problem for the second valley $K'$ can be obtained by the transformation $\Psi\rightarrow \sigma_x \Psi$, since the Hamiltonian around the $K'$ valley is related to Eq.~(2) via $H\rightarrow \sigma_x H \sigma_x^\dag$. This simple covariance property permits us to focus the discussion in the rest of this paper on a single ($K$) valley (assuming that the interaction is invariant under the above transformation; for more details see Sect.~IIIC).
For the two-body problem of Dirac particles in the absence of magnetic field, it was shown \cite{Sabio:10} that due to the coupling of sublattice and orbital degrees of freedom, the CM degree of freedom cannot in general be separated from the relative coordinates degree of freedom. In the presence of a magnetic field, we have
\begin{eqnarray}
v\bigl[ \sigma\cdot\vec{\Pi}_1\otimes \mathbb{I}+\mathbb{I}\otimes\sigma\cdot\vec{\Pi}_2\bigr]\Psi&=&E_0\,\Psi,
\end{eqnarray}
or explicitly
\begin{eqnarray}
\biggl(\frac{\sqrt{2}\hbar v}{l_B}\biggr) \begin{pmatrix} 0&a_2&a_1&0\\ a_2^\dag&0&0&a_1\\ a_1^\dag&0&0&a_2\\ 0&a_1^\dag&a_2^\dag&0\end{pmatrix}\Psi&=&E_0\,\Psi,
\end{eqnarray}
where the index $1,2$ labels the two particles. By defining the CM and relative coordinates, $z_R=(z_1+z_2)/2$ and $z_r=z_1-z_2$, respectively, the ladder operators become $a_R=( a_1+a_2)/\sqrt{2}, \tr{\ } a_r=( a_1-a_2)/\sqrt{2}$. The Schr\"{o}dinger equation then takes the form
\begin{eqnarray}
\biggl(\frac{\hbar v}{l_B}\biggr)
\begin{pmatrix} 0&a_R-a_r&a_R+a_r&0\\ a_R^\dag-a_r^\dag&0&0&a_R+a_r\\ a_R^\dag+a_r^\dag &0&0&a_R-a_r\\ 0&a_R^\dag+a_r^\dag&a_R^\dag-a_r^\dag&0\end{pmatrix}\Psi=E_0\,\Psi,\nonumber\\
\end{eqnarray}
which shows that this is also the case in the presence of a magnetic field.
We denote
\begin{eqnarray}
\dr{N,M}_R\dr{n,m}_r\equiv\frac{(a_R^\dag)^N (a_r^\dag)^{n} (b_R^\dag)^M (b_r^\dag)^{m}}{\sqrt{N!n!M!m!}}|0,0\rangle_R|0,0\rangle_r,\nonumber\\
\end{eqnarray}
where we have defined $b_R=( b_1+b_2)/\sqrt{2}, \tr{\ } b_r=( b_1-b_2)/\sqrt{2}$ as the LL ladder operators, with the subscripts $R$ and $r$ in $\dr{N,M}_R\dr{n,m}_r$ indicating the CM and relative coordinates subspaces. Using this notation one may write down the eigenstates in a 4-spinor form $\dr{\vec{\Psi}_{M,m}(N=2)\,}$, see Table~\ref{table1} for the exact form of the lowest energy eigenstates. We remark that the macroscopic degeneracies in both the CM and relative angular momenta, $M$ and $m$, are similar to the single-particle LL case.
While the eigenstates of the non-interacting and interacting problems are the same, the corresponding eigenvalues are different. In order to compute them, we now take into account the interactions
\begin{eqnarray}
\bigl[ v\sigma\cdot\vec{\Pi}_1\otimes \mathbb{I}+\mathbb{I}\otimes v\sigma\cdot\vec{\Pi}_2+\hat{V}_{1,2}\mathbb{I}\otimes\mathbb{I}
\bigr]\Psi &=&E\,\Psi,
\end{eqnarray}
where we consider the interaction potential to be isotropic $\dl r_1,r_2| \hat{V}_{1,2}\dr{r_1,r_2}= V(|r_1-r_2|)$. The eigenvalues $E$ of the fully interacting Hamiltonian in Eq.~(10) can be obtained by sandwiching it between the eigenstates $\dr{\vec{\Psi}_{M,m}(N=2)\,}$ described above, with the help of Haldane's pseudopotentials,\cite{lectures}
\begin{eqnarray}
V_{m}^{n}\equiv {}_{r}\langle n,m| \hat{V}_{1,2} |n,m \rangle_r.
\end{eqnarray}
These eigenvalues are summarized in the third column of Table~\ref{table1}.
We can see that the interaction partially lifts the degeneracy in the relative angular momentum quantum number $m$ within one Landau level.
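As an illustration, for the unscreened Coulomb interaction $V(|r_1-r_2|)=e^2/\epsilon|r_1-r_2|$, a standard lowest-LL calculation (performed with the relative-coordinate states $|0,m\rangle_r$) yields
\[V_m^{n=0}=\frac{e^2}{\epsilon l_B}\,\frac{\Gamma(m+1/2)}{2\,m!},\]
i.e., $V_0\simeq 0.886$, $V_1\simeq 0.443$ and $V_2\simeq 0.332$ in units of $e^2/\epsilon l_B$, decreasing monotonically with $m$.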
To determine the parity of the 4-spinor under particle interchange, we perform the following operation:\cite{Sabio:10}
\begin{eqnarray}
\dl{z_R,z_r}\dr{\vec{\Psi}} \equiv\begin{pmatrix}
\Psi_{AA}(z_R,z_r)\\\Psi_{AB}(z_R,z_r)\\\Psi_{BA}(z_R,z_r)\\\Psi_{BB}(z_R,z_r)\end{pmatrix} \rightarrow \begin{pmatrix} \Psi_{AA}(z_R,-z_r)\\\Psi_{BA}(z_R,-z_r)\\\Psi_{AB}(z_R,-z_r)\\\Psi_{BB}(z_R,-z_r)\end{pmatrix},
\end{eqnarray}
where $A,B$ denote the sublattice degree of freedom. We thus see that the parity of the two-Dirac-electron eigenstates depends not only on the total relative angular momentum $(m+n)$, which fixes the exponent in the variables $z_r,\bar{z}_r$ in the wavefunction $\dl{z_r,\bar{z}_r}\dr{n,m}_r$, but also on the second and third components of the 4-spinor. From Table~I, we find that, for $\dr{\Psi_{M,m}(N=2)\,}$ with even $m$, the states (I), (III), (VI), (VII) and (VIII) are symmetric under particle exchange; whereas the states (II), (IV) and (V) are antisymmetric under particle exchange.
\subsection{Wavefunctions resulting by addition of an extra particle to the one-particle state}
In addition to calculating the eigenstates and the eigenvalues $E$ of the two-particle interacting problem, in order to compute the DOS in Eq.~(1), we also need to construct the wavefunction resulting when a particle is added to the single-particle state $|\Psi_0(N=1)\,\rangle$, while taking into account the overall symmetry. Since the particle-addition process is assumed to be instantaneous, and not to perturb the host state, the wavefunction resulting by the addition of one particle to the one-particle state can be constructed by taking the product of two single-particle states and (anti-)symmetrizing it with the symmetrization operator $\mathcal{P}_{S}$ or antisymmetrization operator $\mathcal{P}_{AS}$. Notice that, here, we need to take into account both the symmetrization and the antisymmetrization of the orbital wavefunctions. This is because of the spin-valley component that can also be antisymmetric or symmetric, such that the total wavefunction satisfies fermion statistics.
Close to $\nu=-2$, we take the single-particle state to be the $n=0$ LL wavefunction $(0,\dr{0,m_1})$, see Eq.~(5). The addition of an extra electron in the same LL results in the two-body state
\begin{eqnarray}
\dr{\vec{\Psi}_{S(AS)}^{\tilde{n}=0}}&=&\mathcal{P}_{S(AS)} \biggl[ \begin{pmatrix} 0 \\ \dr{0,m_1} \end{pmatrix}\otimes \begin{pmatrix} 0 \\ \dr{0,m_2} \end{pmatrix}\biggr]\\
&=&\frac{1}{\sqrt{2}}\begin{pmatrix} 0 \\0\\0\\\dr{0,m_1}_1\dr{0,m_2}_2\pm\dr{0,m_2}_1\dr{0,m_1}_2\end{pmatrix},\nonumber
\end{eqnarray}
where the subscripts 1 and 2 in $\dr{n,m}_1\dr{n',m'}_2$ denote the subspaces for the respective particles.
For a generalization to any integer filling $n$, it is justifiable to take the $(n-1)$ LLs to be inert. The two-body state $\dr{\vec{\Psi}_{S(AS)}^{\tilde{n}}} $ can then be constructed in a similar manner as for the $n=0$ case. For example, take the $n=1$ LL case just above the filling $\nu=2$, the instantaneous addition of an electron to the $n=1$ LL wavefunction $(\dr{0,m_1},\dr{1,m_1})/\sqrt{2}$ results in
\begin{eqnarray}
\dr{\vec{\Psi}_{S(AS)}^{\tilde{n}=1}}&=&\frac{1}{2}\mathcal{P}_{S(AS)} \biggl[ \begin{pmatrix} \dr{0,m_1} \\ \dr{1,m_1} \end{pmatrix}\otimes \begin{pmatrix} \dr{0,m_2} \\ \dr{1,m_2} \end{pmatrix}\biggr]\\
&=& \frac{1}{2\sqrt{2}}\begin{pmatrix} \dr{0,m_1}_1\dr{0,m_2}_2\pm\dr{0,m_2}_1\dr{0,m_1}_2 \\ \dr{0,m_1}_1\dr{1,m_2}_2 \pm\dr{0,m_2}_1\dr{1,m_1}_2 \\ \dr{1,m_1}_1\dr{0,m_2}_2\pm\dr{1,m_2}_1\dr{0,m_1}_2 \\ \dr{1,m_1}_1\dr{1,m_2}_2\pm\dr{1,m_2}_1\dr{1,m_1}_2\end{pmatrix}.\nonumber
\end{eqnarray}
For a translationally invariant system, the DOS calculation does not depend on the angular momentum $m_2$ of the added particle, such that we may set $m_2=0$.\cite{MacDonald:10} In terms of ladder operators, the two wavefunctions Eq.~(13) and Eq.~(14) are then given by
\begin{eqnarray} \label{eqn op}
\dr{\vec{\Psi}_{S(AS)}^{\tilde{n}=0}}&=&\frac{1}{\sqrt{2 m_1!}} \begin{pmatrix} 0 \\ 0\\0 \\\bigl[ (b_1^\dag)^{m_1}\pm(b_2^\dag)^{m_1}\bigr]\end{pmatrix}\dr{0,0}_1\dr{0,0}_2, \nonumber\\
\\
\dr{\vec{\Psi}_{S(AS)}^{\tilde{n}=1}}&=&\frac{1}{2\sqrt{2 m_1!}} \begin{pmatrix} \bigl[ (b_1^\dag)^{m_1}\pm(b_2^\dag)^{m_1}\bigr] \\ a_2^\dag\bigl[ (b_1^\dag)^{m_1}\pm(b_2^\dag)^{m_1} \bigr] \\a_1^\dag\bigl[ (b_1^\dag)^{m_1}\pm(b_2^\dag)^{m_1}\bigr] \\a_1^\dag a_2^\dag\bigl[ (b_1^\dag)^{m_1}\pm(b_2^\dag)^{m_1}\bigr] \end{pmatrix}\dr{0,0}_1\dr{0,0}_2,\nonumber\\
\end{eqnarray}
respectively. By substituting the CM and relative-coordinate operators $a_{1,2}=(a_R\pm a_r)/\sqrt{2}$, $b_{1,2}=(b_R\pm b_r)/\sqrt{2}$, and using the binomial expansion, the wavefunctions in the new basis become
\begin{eqnarray}
\dr{\vec{\Psi}_{S(AS)}^{\tilde{n}=0}}&=&\sum_{l=0}^{m_1} F_{l,m_1}^{S(AS)}\begin{pmatrix} 0\\ 0 \\0\\1 \end{pmatrix} \dr{0,l}_R\dr{0,m_1-l}_r,\\
\dr{\vec{\Psi}_{S(AS)}^{\tilde{n}=1}}&=&\frac{1}{2}\sum_{l=0}^{m_1}F_{l,m_1}^{S(AS)} \begin{pmatrix} 1\\ \frac{1}{\sqrt{2}}(a_R^\dag-a_r^\dag) \\\frac{1}{\sqrt{2}}(a_R^\dag+a_r^\dag)\\\frac{1}{2}((a_R^\dag)^2 -(a_r^\dag)^2) \end{pmatrix} \nonumber\\
&&\tr{\ \ \ }\times \dr{0,l}_R\dr{0,m_1-l}_r,
\end{eqnarray}
with the coefficients
\begin{eqnarray}
F_{l,m_1}^{S(AS)}\equiv [1\pm(-1)^{m_1-l}]\, \sqrt{\frac{l!\,(m_1-l)!}{2\, m_1! \,2^{m_1}}} \,\binom{m_1}{l}.
\end{eqnarray}
We see that the product state constructed in this way is a superposition of the two-body eigenstates of the relevant energy level listed in Table~\ref{table1}.
\subsection{Total wavefunction}
Having obtained the orbital part of the wavefunction for two Dirac particles, we now take into account the remaining internal degrees of freedom relevant for graphene. The first one is the intrinsic spin $1/2$ of the electron. The second is the valley index $K,K'$ associated with the two inequivalent Dirac cones in the graphene energy spectrum. These are separated by a large reciprocal wavevector, and even though the Coulomb interaction can in principle induce inter-valley scattering, the matrix elements involving atomic-scale high-momentum exchanges are typically small.\cite{Goerbig:06} Scattering between different valleys can thus be neglected in the low-energy regime, such that the valley index may be described by a pseudo-spin-$1/2$, with respect to which the Coulomb interaction is approximately SU(2)-symmetric. The total wavefunction $\dr{\Phi}$ is then a direct product of three parts:
\begin{eqnarray}
\dr{\Phi}=\dr{\vec{\Psi}}\otimes \dr{\tr{spin, valley}}.
\end{eqnarray}
The parity of the orbital part $\dr{\vec{\Psi}}$ has been discussed in Sect.~IIIA and IIIB, whereas the spin and valley parts can separately form either a singlet or a triplet state, respectively. The total two-body wavefunction must be antisymmetric under particle exchange.
\section{The DOS of graphene for $n=0$}\label{sec:4}
In this section, we compute the electron-addition DOS close to $\bar{\nu}\simeq 0$ in the $n=0$ LL, that is just above the filling $\nu=-2$ (or the electron-removal DOS close to $\nu=2$ by particle-hole symmetry). We first note that from Eq.~(13), while we fix $m_2=0$ by invoking translational invariance, the groundstate $\dr{\Psi_0(N=1)\,}$ remains macroscopically degenerate in the quantum number $m_1$. Therefore, the local DOS needs to be averaged over the $N_{B}$-fold degeneracy, e.g.,
\begin{eqnarray}
&&A_{+}^{\tilde{n}}(\omega)= \frac{1}{N_{B}}\sum_{m_1=0}^{N_{B}-1} \sum_{\alpha}\delta\bigl(\omega-E_{\alpha,0}\bigr)\nonumber\\
&& {\tr \ \ \ }\bigl|\langle \Psi_{\alpha}(N=2)| c_{\tilde{n},m_1,\sigma,K/K'}^{\dagger}|\Psi_{0}(N=1)\,\rangle\bigr|^2,
\end{eqnarray}
for the electron-addition part of the DOS.
In the presence of a high magnetic field, the $N=1$ groundstate is a spin-polarized state with a Zeeman energy $-\Delta_z/2$, where $\Delta_z$ is the energy splitting between the majority and minority spin states. However, the groundstate electron can belong to either the $K$ or $K'$ valley. Now, when an extra electron is injected, the latter can also have a spin pointing either parallel or anti-parallel to the ground state electron, and it can reside on either the $K$ or $K'$ valley.
We first consider the groundstate electron residing on the $K$ valley and the added electron being spin parallel ($S_z=1$, where $S_z$ is the total spin component along the magnetic field direction) but belonging to either of the valleys. It then follows that the total wavefunction with an added particle can be either $\dr{\Phi}=c_{m_1,\uparrow,K}^{\dagger}|\Psi_{0,\uparrow,K}(N=1)\,\rangle$ that is
\begin{eqnarray}
\dr{\vec{\Psi}_{AS}^{\tilde{n}=0}}\otimes\dr{\tr{spin-triplet, valley-triplet}},
\end{eqnarray}
or $\dr{\Phi}=c_{m_1,\uparrow,K'}^{\dagger}|\Psi_{0,\uparrow,K}(N=1)\,\rangle$ that is given by either
\begin{eqnarray}
&\dr{\vec{\Psi}_{S}^{\tilde{n}=0}}\otimes\dr{\tr{spin-triplet, valley-singlet}},& \nonumber\\
&\tr{or}&\nonumber\\
&\dr{\vec{\Psi}_{AS}^{\tilde{n}=0}}\otimes\dr{\tr{spin-triplet, valley-triplet}}.&
\end{eqnarray}
Here, $\dr{\vec{\Psi}_{S(AS)}^{\tilde{n}=0}}$ are given by Eq.~(17). Substituting them into Eq.~(21), and summing over all $N=2$ interacting eigenstates $\dr{\Psi_{\alpha}(N=2)\,}$ (partly summarized in Table~I), we obtain
\begin{eqnarray}
&&A_{+,S_z=1}^{\tilde{n}=0}(\omega)=\frac{2}{N_{B}}\sum_{m\in even}\delta\bigl(\omega-V_m^{n=0}+\Delta_z/2\bigr)\nonumber\\
&&{\ \ \ \ \ \ }+\frac{6}{N_{B}}\sum_{m\in odd}\delta\bigl(\omega-V_m^{n=0}+\Delta_z/2\bigr).
\end{eqnarray}
This is the result which one also obtains in the case of 2D electrons in GaAs heterostructures, namely that the weight of the peaks corresponding to odd pseudopotentials is 3 times larger than that for even ones. However, as we shall see, the four-component structure of graphene LLs eventually yields a different result than the two-component structure in non-relativistic LLs, when $S_z=0$ two-particle states are taken into account.
In principle, even though the eigenstate summation is performed over all two-body eigenstates, the only one yielding a non-trivial contribution is the eigenstate (I) of Table~I. We also note that there is an extra Zeeman energy cost of $-\Delta_z$ associated with the interacting eigenstate because of the spin-triplet component. Furthermore, to write down the above expression we have employed the summation formula
\begin{eqnarray}
\sum_{m_1=0}^{N_B-1} \frac{2}{2^{m_1}}\frac{m_1!}{(m_1-m)!\,m!}\,\rightarrow\, 4
\end{eqnarray}
in the thermodynamic limit $N_B\rightarrow \infty$, for any integer $m$.\cite{MacDonald:10} A parallel analysis can be made for the ground state electron residing on the $K'$ valley, which leads to the same result.
On the other hand, for the addition of an electron with opposite spin, the resulting state with an identical valley $\dr{\Phi}=c_{m_1,\downarrow,K}^{\dagger}|\Psi_{0,\uparrow,K}(N=1)\,\rangle$ gives rise to either
\begin{eqnarray}
&\dr{\vec{\Psi}_{S}^{\tilde{n}=0}}\otimes\dr{\tr{spin-singlet, valley-triplet}},&\nonumber\\
&\tr{or}&\nonumber\\
&\dr{\vec{\Psi}_{AS}^{\tilde{n}=0}}\otimes\dr{\tr{spin-triplet, valley-triplet}};&
\end{eqnarray}
and the resulting state with an opposite valley $\dr{\Phi}=c_{m_1,\downarrow,K'}^{\dagger}|\Psi_{0,\uparrow,K}(N=1)\,\rangle$ gives rise to either of the states:
\begin{eqnarray}
&\dr{\vec{\Psi}_{AS}^{\tilde{n}=0}}\otimes\dr{\tr{spin-singlet, valley-singlet}},&\nonumber\\
&\dr{\vec{\Psi}_{S}^{\tilde{n}=0}}\otimes\dr{\tr{spin-triplet, valley-singlet}},&\nonumber\\
&\dr{\vec{\Psi}_{S}^{\tilde{n}=0}}\otimes\dr{\tr{spin-singlet, valley-triplet}},&\nonumber\\
&\tr{\ \ or}&\nonumber\\
&\dr{\vec{\Psi}_{AS}^{\tilde{n}=0}}\otimes\dr{\tr{spin-triplet, valley-triplet}}.
\end{eqnarray}
The resulting DOS contribution is
\begin{eqnarray}
&&A_{+,S_z=0}^{\tilde{n}=0}(\omega)=\frac{4}{N_{B}}\sum_{m\in even}\delta\bigl(\omega-V_m^{n=0}-\Delta_z/2\bigr)\nonumber\\
&&{\ \ \ \ \ \ }+\frac{4}{N_{B}}\sum_{m\in odd}\delta\bigl(\omega-V_m^{n=0}-\Delta_z/2\bigr).
\end{eqnarray}
When putting together the contributions from adding a parallel-spin electron and an opposite-spin electron to the DOS
\begin{eqnarray}
A_{+}^{\tilde{n}=0}(\omega)&=&A_{+,S_z=0}^{\tilde{n}=0}(\omega)+A_{+,S_z=1}^{\tilde{n}=0}(\omega),
\end{eqnarray}
the total weight of the odd-$m$ peaks becomes $5/3$ times as large as that of the even-$m$ peaks (provided that the Zeeman energy difference can be neglected).
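Explicitly, collecting the spectral weights of $A_{+,S_z=1}^{\tilde{n}=0}$ and $A_{+,S_z=0}^{\tilde{n}=0}$, the even-$m$ peaks carry a weight $2+4=6$ while the odd-$m$ peaks carry $6+4=10$, whence the ratio $10/6=5/3$.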
As we have already mentioned above, the relative weight $5/3\simeq 1.67$ between the spectral weight of the odd pseudopotentials with respect to the even ones is a benchmark of the underlying four-component structure of graphene LLs, due to the spin-valley degeneracy. In the case of a two-component system, such as in a conventional 2DEG in GaAs heterostructures, the ratio would be 3.\cite{MacDonald:10} This result is retrieved in the $n=0$ graphene LL close to the charge-neutrality point ($\nu\simeq 0$), where one of the spin-valley components (say the spin component in the case of a dominant Zeeman effect) is completely frozen. The relative spectral weight between the odd and even pseudopotentials therefore yields insight into the multi-component structure of LLs.
\section{The DOS of graphene for general $n$}
\label{sec:5}
It is now straightforward to generalize our study to other LLs. For a filling factor $\nu\simeq 2$, we start by considering a groundstate which is fully occupied for all $n<1$ LLs and a single spin-polarized electron at $n=1$. The addition of an electron results in the eigenstate described in Eq.~(18). Following the same procedure to compute the DOS as in the previous section, we obtain
\begin{eqnarray}
&&A_{+,S_z=1}^{\tilde{n}=1}(\omega)=\frac{2}{N_{B}}\sum_{m\in even}\delta\bigl(\omega-\Omega_{1,m}+\Delta_z/2\bigr)\nonumber\\
&&{\ \ \ \ \ \ }+\frac{6}{N_{B}}\sum_{m\in odd}\delta\bigl(\omega-\Omega_{1,m}+\Delta_z/2\bigr)
\end{eqnarray}
and
\begin{eqnarray}
&&A_{+,S_z=0}^{\tilde{n}=1}(\omega)=\frac{4}{N_{B}}\sum_{m\in even}\delta\bigl(\omega-\Omega_{1,m}-\Delta_z/2\bigr)\nonumber\\
&&{\ \ \ \ \ \ }+\frac{4}{N_{B}}\sum_{m\in odd}\delta\bigl(\omega-\Omega_{1,m}-\Delta_z/2\bigr).
\end{eqnarray}
where now, only the eigenstate (VII) contributes to the DOS, and $\Omega_{1,m}=2\sqrt{2}\hbar v/l_B +(5/8)\,V_{m}^{n=0}+ (1/4)\,V_{m}^{n=1}+ (1/8)\,V_{m}^{n=2}$. Compared to the usual 2DEG system, we see that the position of the peak does not only contain information about the $n=0$ LL pseudopotential $V_{m}^{n=0}$ but also on higher LL pseudopotentials $V_m^{n=1,2}$. This is due to the fact that a general two-body eigenstate of the interacting problem consists of spinorial components that occupy at the same time different LLs $n$ in the relative coordinate subspace.
Let us also write down the solution for the DOS of electron addition close to $\nu\simeq 6$. The two-body state for an $n=2$ LL electron with an added particle is given by
\begin{widetext}
\begin{eqnarray}
\dr{\vec{\Psi}_{S(AS)}^{\tilde{n}=2}}&=&\frac{1}{2}\sum_{l=0}^{m_1}F_{l,m_1}^{S(AS)} \begin{pmatrix} \frac{1}{2}((a_R^\dag)^2 -(a_r^\dag)^2)\\ \frac{1}{4}((a_R^\dag)^3+(a _r^\dag)^3-(a_R^\dag)^2a_r^\dag -a_R^\dag (a_r^\dag)^2)\\\frac{1}{4}((a_R^\dag)^3-(a _r^\dag)^3+(a_R^\dag)^2a_r^\dag -a_R^\dag (a_r^\dag)^2)\\\frac{1}{8}((a_R^\dag)^4 +(a_r^\dag)^4-2 (a_R^\dag)^2 (a_r^\dag)^2 ) \end{pmatrix} \dr{0,l}_R\dr{0,m_1-l}_r
\end{eqnarray}
\end{widetext}
and, taking the overlap with the interacting eigenstate (VIII) from Table~\ref{table1}, one finds that the electron-addition parts of the DOS are given by
\begin{eqnarray}
&&A_{+,S_z=1}^{\tilde{n}=2}(\omega)=\frac{2}{ N_{B}}\sum_{m\in even}\delta\bigl(\omega-\Omega_{2,m}+\Delta_z/2\bigr)\nonumber\\
&&{\ \ \ \ \ \ }+\frac{6}{ N_{B}}\sum_{m\in odd}\delta\bigl(\omega-\Omega_{2,m}+\Delta_z/2\bigr)
\end{eqnarray}
and
\begin{eqnarray}
&&A_{+,S_z=0}^{\tilde{n}=2}(\omega)=\frac{4}{ N_{B}}\sum_{m\in even}\delta\bigl(\omega-\Omega_{2,m}-\Delta_z/2\bigr)\nonumber\\
&&{\ \ \ \ \ \ }+\frac{4}{ N_{B}}\sum_{m\in odd}\delta\bigl(\omega-\Omega_{2,m}-\Delta_z/2\bigr).
\end{eqnarray}
Here, $\Omega_{2,m}=4\hbar v/l_B+(13/32)\,V_{m}^{n=0}+ (1/16)\,V_{m}^{n=1}+(1/4)\,V_{m}^{n=2}+(3/16)\,V_{m}^{n=3}+(3/32)\,V_{m}^{n=4}$. Thus, the DOS at this filling factor contains rich information about the pseudopotentials $V_m^{n}$ of several different LLs.
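To make the structure of these spectra concrete, the following Python sketch (an illustration added here, not part of the derivation) assembles the electron-addition DOS from the peak positions $\Omega_{\tilde{n},m}$ and the spectral weights quoted above; the Zeeman shifts are omitted, and the numerical values of the $V_m^n$ are placeholders rather than actual graphene pseudopotentials, which must be computed from the effective interaction in the $n$-th LL.
\begin{verbatim}
import numpy as np

# Placeholder pseudopotentials V[n][m] (NOT actual graphene values;
# they must be computed from the effective interaction in LL n).
V = {n: {m: 1.0 / (m + n + 1) for m in range(6)} for n in range(5)}

# Coefficients of V_m^n in Omega_{ntilde,m}, read off from the text,
# and the kinetic-energy offsets in units of hbar*v/l_B.
coeff = {1: {0: 5/8, 1: 1/4, 2: 1/8},
         2: {0: 13/32, 1: 1/16, 2: 1/4, 3: 3/16, 4: 3/32}}
kinetic = {1: 2.0 * np.sqrt(2.0), 2: 4.0}

def peak_positions(ntilde, m_max=6):
    """Peak positions Omega_{ntilde,m} (Zeeman shifts omitted)."""
    return [kinetic[ntilde]
            + sum(c * V[n][m] for n, c in coeff[ntilde].items())
            for m in range(m_max)]

def dos(omega, ntilde, gamma=0.01, m_max=6, NB=1.0):
    """Electron-addition DOS with the delta peaks broadened into
    Lorentzians; weights 2 (even m) / 6 (odd m) for S_z = 1,
    plus 4 / 4 for S_z = 0, as in the formulas above."""
    total = np.zeros_like(omega)
    for m, pos in enumerate(peak_positions(ntilde, m_max)):
        w = (2.0 if m % 2 == 0 else 6.0) + 4.0
        total += (w / NB) * (gamma / np.pi) / ((omega - pos)**2 + gamma**2)
    return total

omega = np.linspace(2.0, 5.0, 2000)
print(dos(omega, ntilde=1).max())
\end{verbatim}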
\begin{table}[t]
\caption{Two-body eigenvalues and eigenstates}
\begin{scriptsize}
\centering
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{ |c | c | c | c|}
\hline
&$\dr{\Psi_{M,m}(N=2)}$ & $E_0/(\hbar v/l_B)$ & $E-E_0$\\
\hline
\hline
(I) & $\begin{pmatrix} 0\\0\\0\\ \dr{0,M}_R\dr{0,m}_r\end{pmatrix}$ & 0 & $V_{m}^{n=0}$ \\
\hline
(II) & $\begin{pmatrix} 0\\\frac{1}{2}[\dr{0,M}_R\dr{1,m}_r- \dr{1,M}_R\dr{0,m}_r] \\\frac{1}{2}[\dr{0,M}_R\dr{1,m}_r+\dr{1,M}_R\dr{0,m}_r]\\0\end{pmatrix}$ & 0 & $\frac{1}{2}\bigl( V_{m}^{n=0}+V_{m}^{n=1}\bigr)$ \\
\hline
(III) & $\begin{pmatrix} 0\\\frac{1}{2}\dr{0,M}_R\dr{0,m}_r\\\frac{1}{2}\dr{0,M}_R\dr{0,m}_r\\\frac{1}{\sqrt{2}} \dr{1,M}_R\dr{0,m}_r\end{pmatrix}$ & $\sqrt{2}$ & $V_{m}^{n=0}$ \\
\hline
(IV) & $\begin{pmatrix} 0\\\frac{1}{2}\dr{0,M}_R\dr{0,m}_r\\-\frac{1}{2}\dr{0,M}_R\dr{0,m}_r\\\frac{1}{\sqrt{2}} \dr{0,M}_R\dr{1,m}_r\end{pmatrix}$ & $\sqrt{2}$ & $\frac{1}{2}\bigl( V_{m}^{n=0}+V_{m}^{n=1}\bigr)$ \\
\hline
(V) & $\begin{pmatrix} 0\\\frac{1}{2\sqrt{2}}\bigl[\dr{0,M}_R\dr{1,m}_r+ \dr{1,M}_R\dr{0,m}_r\bigr] \\\frac{1}{2\sqrt{2}}\bigl[\dr{0,M}_R\dr{1,m}_r-\dr{1,M}_R\dr{0,m}_r\bigr]\\\frac{\sqrt{2}}{2} \dr{1,M}_R\dr{1,m}_r\end{pmatrix}$ & $2$ & $\frac{1}{4}V_{m}^{n=0}+ \frac{3}{4}V_{m}^{n=1}$ \\
\hline
(VI) & $\begin{pmatrix} 0\\\frac{1}{2\sqrt{2}}\bigl[\dr{1,M}_R\dr{0,m}_r+ \dr{0,M}_R\dr{1,m}_r\bigr] \\\frac{1}{2\sqrt{2}}\bigl[\dr{1,M}_R\dr{0,m}_r-\dr{0,M}_R\dr{1,m}_r\bigr]\\ \frac{1}{2}\bigl[\dr{2,M}_R\dr{0,m}_r+\dr{0,M}_R\dr{2,m}_r\bigr] \end{pmatrix}$ & $2$ & $\frac{1}{2}V_{m}^{n=0}+\frac{1}{4} V_{m}^{n=1}+\frac{1}{4}V_{m}^{n=2}$\\
\hline
(VII) & $\begin{pmatrix} \frac{1}{2} \dr{0,M}_R\dr{0,m}_r\\\frac{1}{2\sqrt{2}}\bigl[\dr{1,M}_R\dr{0,m}_r- \dr{0,M}_R\dr{1,m}_r\bigr]\\\frac{1}{2\sqrt{2}}\bigl[\dr{1,M}_R\dr{0,m}_r+\dr{0,M}_R\dr{1,m}_r\bigr]\\ \frac{1}{2\sqrt{2}}\bigl[\dr{2,M}_R\dr{0,m}_r-\dr{0,M}_R\dr{2,m}_r\bigr]\end{pmatrix}$ & $2 \sqrt{2}$ & $\frac{5}{8}V_{m}^{n=0}+ \frac{1}{4}V_{m}^{n=1}+\frac{1}{8}V_{m}^{n=2}$ \\
\hline
(VIII) & $ \begin{pmatrix} \frac{\sqrt{2}}{4} \bigl[\dr{2,M}_R\dr{0,m}_r-\dr{0,M}_R\dr{2,m}_r\bigr]\\
\frac{\sqrt{6}}{8}\bigl[\dr{3,M}_R\dr{0,m}_r+ \dr{0,M}_R\dr{3,m}_r\bigr]-\frac{\sqrt{2}}{8}\bigl[\dr{2,M}_R\dr{1,m}_r+ \dr{1,M}_R\dr{2,m}_r\bigr]\\
\frac{\sqrt{6}}{8}\bigl[\dr{3,M}_R\dr{0,m}_r- \dr{0,M}_R\dr{3,m}_r\bigr]+\frac{\sqrt{2}}{8}\bigl[\dr{2,M}_R\dr{1,m}_r- \dr{1,M}_R\dr{2,m}_r\bigr]\\
\frac{\sqrt{6}}{8}\bigl[\dr{4,M}_R\dr{0,m}_r+ \dr{0,M}_R\dr{4,m}_r\bigr]-\frac{1}{4}\dr{2,M}_R\dr{2,m}_r\end{pmatrix}$ & $4$ & $\substack{\frac{13}{32}V_{m}^{n=0}+ \frac{1}{16}V_{m}^{n=1}+\frac{1}{4}V_{m}^{n=2}\\ \\
+\frac{3}{16}V_{m}^{n=3}+\frac{3}{32}V_{m}^{n=4}}$
\\
\hline
\end{tabular}
\label{table1}
\end{scriptsize}
\end{table}
\section{Discussions and conclusions}
\label{sec:6}
We have calculated the tunneling DOS in graphene in high magnetic fields when the filling factor is close to $\nu_n=\pm 2(2n+1)$. In order to describe the electronic interactions, we have used the method of Haldane's pseudopotentials, which describes the two-particle interacting eigenstates of a system in a strong magnetic field. The method is valid for a system in the very close proximity of a completely filled LL, such that, besides an integer number of filled and inert LLs, only a single electron or hole is present. Although this limit may seem, at first sight, extremely theoretical, it describes
the experimental situation of a very sparsely electron- or hole-filled LL, in which the average distance between the particles is larger than the cyclotron radius $R_C=l_B\sqrt{2n+\delta_{n,0}}$ in graphene, i.e.\ for $\bar{\nu}\ll 1/(2n+\delta_{n,0})$. Tunneling from the STM tip, which injects a second particle into the system, can thus measure the overlap between the resulting state and the two-particle interacting eigenfunctions of the system. It also allows one to measure the difference in energy between the two-particle interacting states of the system and the one-particle state, thus yielding information about the strength of the interactions.
Our calculations revealed that the DOS spectrum exhibits peaks, the energies of which can be related directly to Haldane's pseudopotentials. While the $n=0$ state is quite similar to the $n=0$ LL in non-relativistic 2D electron systems with a parabolic band dispersion, the higher-LL DOS structures are different, in that the energies in the spectrum do not result from a single pseudopotential value, but involve combinations of these values, corresponding to states with different angular momenta. This is a direct consequence of two graphene-specific properties. The first one is that the spinorial eigenstates have (sublattice) components
in different non-relativistic LLs. The second one is that, as a consequence of the Lorentz invariance of the underlying Dirac equation, the
center of mass and the relative degrees of freedom are intimately coupled. Finally, the relative spectral weight between the peaks corresponding to odd and even pseudopotentials yields insight into the multi-component structure of graphene LLs.
It would be interesting to generalize our results to larger partial fillings, eventually moving towards the regime of the fractional quantum Hall effect. Such an analysis would allow one to make predictions about the experimental spectroscopic signatures of highly delicate quantum Hall states, such as the $\nu=1/2$ state, that do not have distinct anomalous transport signatures; understanding the nature of such states has been a long-standing question in the study of the FQHE. While the fractional quantum Hall effect has only recently been measured in graphene in transport experiments,
spectroscopic measurements may yield additional information about the relevant electronic interactions in graphene LLs and thus about the nature of strongly-correlated phases in partially filled levels.
\acknowledgements
This work was supported by the ANR project NANOSIM GRAPHENE under Grant No. ANR-09-NANO-016, and by the FP7 ERC Starting Independent Researcher Grant NANO-GRAPHENE 256965.
\section{Introduction}
Let $k$ be a commutative ring with unit. There are various equivalent definitions of the notion of representation of a poset over $k$. One can look at representations of the \emph{Hasse diagram} of the poset viewed as a quiver with the relations of total commutativity, or at modules over the so-called \emph{incidence algebra} of the poset. An alternative definition is to use a \emph{functor category}. To a poset $(Y,\leqslant)$ one can associate a finite category $\mathcal{C}_Y$ where the objects are the elements of $Y$ and there is a unique morphism between $y$ and $y'$ if and only if $y\leqslant y'$. It is well-known that the category of covariant functors from $\mathcal{C}_Y$ to the category of $k$-vector spaces is equivalent to the category of right modules over the incidence algebra of $Y$. Moreover, it is also classical that if $k\mathcal{C}_Y$ is the $k$-linearization of $\mathcal{C}_Y$, then the category of functors from $\mathcal{C}_Y$ to the category of all $k$-modules $k\Mod$ is equivalent to the category of \emph{$k$-linear functors} from $k\mathcal{C}_Y$ to $k\Mod$.
\begin{de}
Let $k$ be a commutative ring and $(Y,\leqslant )$ be a poset. The category of $k$-linear functors from $k\mathcal{C}_Y$ to $k\Mod$ is denoted $\mathcal{F}_{Y,k}$. \end{de}
Note that the study of these functor categories is very
different from another kind of ``representation of posets", that
was considered by Nazarova, Kleiner and others (See \cite{simson} for more details), which involves a
non-abelian subcategory of the category of modules over a
one-point extension of the Hasse diagram.
Since the category $k\Mod$ is abelian, the category $\FYk$ inherits an abelian structure. In particular, one can consider the bounded derived category $D^b(\FYk)$ of this abelian category. With a slight abuse of notation, we call it the derived category of the poset. Numerous invariants of the poset can be read off inside the derived category, such as its cardinality or its number of connected components. It is also the natural setting for the study of the Coxeter transformation and the Coxeter polynomial of finite posets (See \cite{ladkani_derived_poset} for more details). If two finite posets share the same derived category, then they have the same Coxeter polynomial. Using a computer, it is then easy to find many examples of finite posets with the same Coxeter polynomial, and one can wonder if they also share the same derived category.
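Since this invariance is the basis of the computer searches just mentioned, let us record how the computation goes. The following Python sketch (ours, for illustration) computes the Coxeter polynomial of a finite poset from the zeta matrix $C$ of the poset, which is the Cartan matrix of its incidence algebra, using the standard convention $\Phi=-C^{-T}C$ for the Coxeter matrix; the characteristic polynomial of $\Phi$ does not depend on this choice of convention. As a consistency check, the two posets compared below are derived equivalent by a result proved later in this article, so their Coxeter polynomials must agree.
\begin{verbatim}
from itertools import product
import sympy as sp

def coxeter_polynomial(elements, leq):
    """Characteristic polynomial of the Coxeter transformation of
    the incidence algebra of a finite poset."""
    n = len(elements)
    # Cartan matrix of the incidence algebra = zeta matrix of the poset.
    C = sp.Matrix(n, n, lambda i, j: 1 if leq(elements[i], elements[j]) else 0)
    Phi = -C.T.inv() * C          # Coxeter matrix, convention -C^{-T} C
    x = sp.symbols('x')
    return sp.expand(Phi.charpoly(x).as_expr())

# The product poset A_5 x A_2 ...
A5xA2 = list(product(range(5), range(2)))
leq_pair = lambda p, q: p[0] <= q[0] and p[1] <= q[1]

# ... and the poset of intervals of A_4, as pairs (a, b) with a <= b,
# also ordered componentwise.
IntA4 = [(a, b) for a in range(4) for b in range(a, 4)]

p1 = coxeter_polynomial(A5xA2, leq_pair)
p2 = coxeter_polynomial(IntA4, leq_pair)
assert p1 == p2   # consistent with the derived equivalence (case n = 2)
print(p1)
\end{verbatim}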
In this spirit, there is an interesting conjectural example in the theory of Tamari lattices. It is conjectured by the first author that the Tamari lattice is derived equivalent to the poset of Dyck paths (See \cite{chapoton_derived_tamari} for more details). In the same context, we propose another conjecture involving the derived category of the poset of Dyck paths (See Conjecture \ref{conj_dyck}).
There are various tools that can be used to check whether two posets share the same derived category (see \cite{ladkani_derived_poset} or \cite{ladkani_universal} for some explicit constructions). Unfortunately, there is no known algorithm, and it is most of the time difficult to build such derived equivalences.
One of the difficulties comes from the fact that the derived category of a finite poset may also be equivalent to the derived category of a ring with a priori no relation to posets. For example, the poset $1<2 < 3$ is derived equivalent to the quotient of the path algebra of the quiver $1\to 2 \to 3$ by the ideal generated by the path of length two.
In this article we will focus on the set of intervals of finite posets. We start with two possible definitions of categories of intervals of a poset. The first definition is a poset, denoted $\Gamma$ and viewed as a category. On the other hand, the second category, denoted by $k\Gamma_0$, is not the category of a finite poset. It is, in some sense, the category of a poset with some extra zero-relations. More formally, the category of $k$-linear representations of $k\Gamma_0$ is equivalent to the category of modules over an algebra which is a quotient of the incidence algebra of $\Gamma$ by zero-relations. The main result of the article is that the categories of $k$-linear representations of $\Gamma$ and $k\Gamma_0$ are derived equivalent. This result is obtained as a special case of a slightly more general construction, that we illustrate with a few examples.
This general construction takes as starting point a pair of posets
$X,Y$ and a morphism from $Y$ to the distributive lattice of lower
ideals in $X$. In fact, we think that it might be seen as a very special case of some derived equivalences obtained by Asashiba, who has considered the so-called Grothendieck construction in \cite{asashiba}. Our results are much more elementary and concrete, with a shorter proof, and they provide an explicit and simple tilting complex. One can hope to apply them in many combinatorial contexts.
The category of intervals $k\Gamma_0$ seems to be a good intermediate object when one wants to produce derived equivalences between finite posets. As applications, we prove that the Auslander algebra of a linear order $A_n$ is derived equivalent to the incidence algebra of the poset of intervals of $A_n$. Together with results of the second author, this proves that the \emph{rectangle} poset $A_{2n+1}\times A_{n}$ is derived equivalent to the \emph{triangle} poset of intervals of $A_{2n}$. Finally, we investigate the relations between the derived category of the poset of $(a,b)$-\emph{rational Dyck} paths and the poset of \emph{lattice paths} in the $(a,b)$-rectangle.
\begin{notations}
If $\mathcal{A}$ is an abelian category, we denote by $D^b(\mathcal{A})$ its bounded derived category. We denote by $\mathrm{proj}(\mathcal{A})$ the full subcategory of $\mathcal{A}$ consisting of the finitely generated projective objects. If $\mathcal{B}$ is an additive category, we denote by $K^{b}(\mathcal{B})$ the homotopy category of bounded complexes of $\mathcal{B}$.
\newline If $n\in\mathbb{N}$, we denote by $\overrightarrow{A_n}$ the set $\{1,\cdots, n\}$. Unless specified otherwise, we endow it with the total order $1<2 <\cdots < n$.
\end{notations}
\section{Two categories of generalized intervals of a poset}
\subsection{Categories of intervals of a finite poset}
Let $k$ be a commutative ring with unit. Let $(X,\leqslant)$ be a finite poset. For $a,b\in X$, we set
\[ [a,b]:= \{z\in X; a\leqslant z \hbox{ and } z\leqslant b \}.\]
As usual, the set $[a,b]$ is called an interval of $X$. We let $\Int(X)$ be the set of intervals $[a,b]$ of $X$ with $a\leqslant b$. It has a natural partial order defined by
\[ [a,b] \leqslant [c,d] \hbox{ if and only if } a\leqslant c \hbox{ and } b\leqslant d.\]
\begin{de}
The set $\Int(X)$ with this particular partial order is called the poset of intervals of $X$.
\end{de}
\begin{re}
There is another natural partial order on the set $\Int(X)$ which is given by the inclusion of the intervals. However, with this partial order, the resulting endofunctor of the category of finite posets behaves less nicely. For example, it does not commute with duality. One can see that this poset is not equivalent to the poset that we consider in this article, even at the level of derived categories.
\end{re}
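For computations, it is convenient to represent intervals by their endpoint pairs. Here is a small Python sketch (added for illustration) that builds $\mathrm{Int}(X)$ and its order from the order relation of $X$:
\begin{verbatim}
def intervals(elements, leq):
    """The intervals [a, b] of a finite poset, represented by their
    endpoint pairs (a, b) with a <= b."""
    return [(a, b) for a in elements for b in elements if leq(a, b)]

def leq_intervals(leq):
    """The order of Int(X): [a, b] <= [c, d] iff a <= c and b <= d."""
    return lambda I, J: leq(I[0], J[0]) and leq(I[1], J[1])

# Example: X = {1, 2, 3} with 1 < 3 and 2 < 3.
X = [1, 2, 3]
leq_X = lambda x, y: x == y or (x in (1, 2) and y == 3)

Int_X = intervals(X, leq_X)
print(Int_X)   # [(1, 1), (1, 3), (2, 2), (2, 3), (3, 3)]
\end{verbatim}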
If $[a,b]$ is an interval of $X$, then we have an indecomposable functor $M_{a,b}$ in $\mathcal{F}_{X,k}$ defined on the objects by:
\[ M_{a,b}(x) = \left\{\begin{array}{c}\ \ k \hbox{\ \ \ \ if $a\leqslant x\leqslant b$,}\\ 0\ \ \ \hbox{ otherwise.} \end{array}\right.\]
If $\alpha : x\to y$ is a morphism in $k\mathcal{C}_X$, then $M_{a,b}(\alpha) = \alpha$ if $a\leqslant x\leqslant y \leqslant b$ and $M_{a,b}(\alpha)=0$ otherwise.
\begin{de}
Let $X$ be a finite poset and $k$ be a commutative ring. We let $\mathrm{Int}^{0}_k(X)$ be the category where the objects are the intervals of $X$ and, \[\Hom_{\mathrm{Int}^{0}_k(X)}([c,d],[a,b]):=\Hom_{\mathcal{F}_{X,k}}\big(M_{a,b},M_{c,d}\big).\]
\end{de}
Note that we applied an `op'-functor on each set of morphisms.
\begin{lemma}
Let $X$ be a finite poset, let $k$ be a commutative ring. Then
\[ \Hom_{\mathcal{F}_{X,k}}(M_{a,b},M_{c,d}) = \left\{\begin{array}{c}k \hbox{ if $c\leqslant a\leqslant d\leqslant b$,} \\0 \hbox{ otherwise. }\end{array}\right.\]
\end{lemma}
\begin{proof}
If there is a non-zero morphism $\phi$ between $M_{a,b}$ and $M_{c,d}$, then the intersection $[a,b]\cap [c,d]$ is non-empty. Let $x$ be an element of this intersection. Since $\phi$ is a natural transformation, the following diagram commutes:
\[ \xymatrix{
M_{a,b}(x) = k \ar[r]^{\phi_x} & M_{c,d}(x)=k \\
M_{a,b}(a) = k \ar[r]^{\phi_a}\ar[u] & M_{c,d}(a). \ar[u]
}\]
This implies that $M_{c,d}(a)\neq 0$. So we have $c\leqslant a\leqslant d$. Similarly, the following diagram commutes:
\[
\xymatrix{
M_{a,b}(d)\ar[r]^{\phi_d} & M_{c,d}(d)=k \\
M_{a,b}(x)=k \ar[r]^{\phi_x}\ar[u]& M_{c,d}(x)=k.\ar[u]}
\]
So, $M_{a,b}(d)\neq 0$. This implies that $a\leqslant d \leqslant b$. Conversely, if $c\leqslant a\leqslant d\leqslant b$, then the morphism $\phi$ defined by $\phi_{z}=\operatorname{Id}_k$ for every $a\leqslant z\leqslant d$ is a natural transformation from $M_{a,b}$ to $M_{c,d}$.
\end{proof}
In other terms, we have a combinatorial description of $\mathrm{Int}^{0}_k(X)$:
\begin{coro}
Let $X$ be a finite poset. Let $k$ be a commutative ring. Then $\mathrm{Int}^{0}_k(X)$ is the category where the objects are the intervals of $X$ and the morphisms are:
\[\Hom_{\mathrm{Int}^{0}_k(X)}([a,b],[c,d])= \left\{\begin{array}{c}k \hbox{ if $a\leqslant c\leqslant b\leqslant d$,} \\0 \hbox{ otherwise. }\end{array}\right.\]
The composition is given by scalar multiplication.
\end{coro}
It is easy to see that the categories of $k$-linear representations of $k\mathrm{Int}(X)$ and $\mathrm{Int}^{0}_k(X)$ are not equivalent. This is already the case when $X = \overrightarrow{A_2}$. However, we will see that they share the same derived category.
\subsection{Generalized intervals of a finite poset}
Let $(X,\leqslant)$ be a finite poset. For an element $x\in X$, we let $[.,x] = \{ x'\in X\ ; \ x'\leqslant x\}$. A subset $Z\subseteq X$ is \emph{closed} if $[.,x] \subseteq Z$ for every $x\in Z$. We denote by $\mathcal{J}(X)$ the poset of closed subsets of $X$ partially ordered by inclusion. Note that a closed subset of $X$ is also called an \emph{ideal} of $X$.
\newline\indent Let $X$ and $Y$ be two finite posets. Let $F : Y \to \mathcal{J}(X)$ be an order-preserving map. In other words, for $y\in Y$ there is a closed subset $F(y)$ of $X$ such that $F(y)\subseteq F(y')$ whenever $y\leqslant y'$. Consider $\Gamma$ the poset defined by
\[ \Gamma = \bigsqcup_{y\in Y}\big(F(y)\times \{y\} \big) \subseteq X\times Y\]
with the partial order induced from that of the product $X\times Y$. In other terms, the elements of $\Gamma$ are pairs $(x,y)$ where $y\in Y$ and $x\in F(y)$, with
\[ (x,y)\leqslant (x',y') \Leftrightarrow x \leqslant x' \hbox{ and } y\leqslant y'. \]
The elements of $\Gamma$ are called \emph{generalized intervals} for the data $(X,Y,F)$.
\newline\indent Let us consider the $k$-linear category $k\Gamma_0$ where the objects are the pairs $(x,y)$ such that $y\in Y$ and $x\in F(y)$ and the morphisms are given by
\[\Hom_{k\Gamma_0}\big((x,y),(x',y')\big) =\left\{\begin{array}{c}k \hbox { if $x\leqslant x'$, $y\leqslant y'$ and $x'\in F(y)$}, \\0 \hbox{ otherwise.} \end{array}\right. \]
The composition is given by the scalar multiplication.
\begin{de}
Let $X$ and $Y$ be two finite posets and $F : Y \to \mathcal{J}(X)$ be an order preserving map. Let $k$ be a commutative ring. Then, we denote by $\mathcal{F}_{\Gamma_0,k}$ the category of functors from $k\Gamma_0$ to $k\Mod$.
\end{de}
\begin{re}
This setting is a generalization of the two previous constructions for the intervals of a given poset $X$. Indeed,
if $Y=X$ and $F : X\to \mathcal{J}(X)$ is the map defined by $F(x)=[\cdot, x]$, then $\Gamma = \mathrm{Int}(X)$ and $k\Gamma_0 \cong \mathrm{Int}^0_k(X)$.
\end{re}
\section{Main result and applications}
\subsection{Main Theorem}
\begin{theo}\label{main_thm}
Let $k$ be a commutative ring. Let $X$ and $Y$ be two finite posets and $F : Y \to \mathcal{J}(X)$ be an order preserving map. Then, there is a triangulated equivalence between $D^{b}\big(\mathcal{F}_{\Gamma,k}\big)$ and $D^{b}\big(\mathcal{F}_{\Gamma_0,k}\big)$.
\end{theo}
\begin{re}
We postpone the proof until Sections $4$ and $5$.
\end{re}
Let us give an equivalent formulation of $\Gamma$ and the category $k\Gamma_0$ which is easier to manipulate. Let $X$, $Y$ and $Z$ be three finite posets with order preserving maps $f : X \to Z$ and $g : Y \to Z$. Consider the poset
\[\Gamma = \{ (x,y) \in X\times Y\ ;\ f(x)\leqslant_{Z} g(y)\} \]
with partial order induced from $X\times Y$.
\newline\indent Let $k\Gamma_0$ be the category where the objects are the elements of $\Gamma$ and the morphisms are given by
\[\Hom_{k\Gamma_0}\big((x,y),(x',y')\big) =\left\{\begin{array}{c}k \hbox { if $x\leqslant_X x'$, $y\leqslant_Y y'$ and $f(x')\leqslant_Z g(y)$}, \\0 \hbox{ otherwise.} \end{array}\right. \]
Let $X$ and $Y$ be two finite posets and $F : Y\to \mathcal{J}(X)$ be an order preserving map. We set $Z= \mathcal{J}(X)$; the map $f : X\to \mathcal{J}(X)$ is defined by $f(x)= [\cdot,x]$, and $g=F$. Then, the condition on a pair $(x,y)\in X\times Y$ that $f(x)\leqslant_{Z} g(y)$ means that $[\cdot,x] \subseteq F(y)$. Since $F(y)$ is closed, this is equivalent to the condition $x\in F(y)$.
\newline Conversely, let $X$, $Y$ and $Z$ be three finite posets. Let $f : X \to Z$ and $g : Y\to Z$ be two order preserving maps. Then, for $y\in Y$, we let $F(y) = \{ x\in X\ ;\ f(x)\leqslant_Z g(y) \}$. Since $f$ is order preserving, the set $F(y)$ is closed and since $g$ is order preserving, we have $F(y)\subseteq F(y')$ whenever $y\leqslant_Y y'$. The condition on a pair $(x,y)\in X\times Y$ that $f(x)\leqslant_Z g(y)$ is equivalent to the condition that $x\in F(y)$.
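This second formulation is easy to implement. The following Python sketch (ours, for illustration) computes $\Gamma$ and the nonvanishing condition for the morphisms of $k\Gamma_0$, and reproduces the first example below:
\begin{verbatim}
from itertools import product

def gamma(X, Y, Z_leq, f, g):
    """The poset Gamma = {(x, y) in X x Y : f(x) <=_Z g(y)}."""
    return [(x, y) for x, y in product(X, Y) if Z_leq(f(x), g(y))]

def hom_nonzero(X_leq, Y_leq, Z_leq, f, g):
    """Hom_{k Gamma_0}((x, y), (x', y')) is nonzero iff
    x <= x', y <= y' and f(x') <=_Z g(y)."""
    return lambda p, q: (X_leq(p[0], q[0]) and Y_leq(p[1], q[1])
                         and Z_leq(f(q[0]), g(p[1])))

# Data of the first example: X = {1,2,3} with 1,2 < 3,
# Y = {a < b < c < d} and Z = {i < j < k}.
X, Y, Z = [1, 2, 3], ['a', 'b', 'c', 'd'], ['i', 'j', 'k']
X_leq = lambda x, y: x == y or (x in (1, 2) and y == 3)
Y_leq = lambda x, y: Y.index(x) <= Y.index(y)
Z_leq = lambda x, y: Z.index(x) <= Z.index(y)
f = {1: 'i', 2: 'j', 3: 'k'}.get
g = {'a': 'i', 'b': 'j', 'c': 'k', 'd': 'k'}.get

G = gamma(X, Y, Z_leq, f, g)
assert len(G) == 9                  # the nine vertices of the Hasse diagram
hom = hom_nonzero(X_leq, Y_leq, Z_leq, f, g)
assert not hom((1, 'b'), (3, 'c'))  # the two dotted zero-relations below
assert not hom((2, 'b'), (3, 'c'))
\end{verbatim}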
\begin{ex}
As first application, we consider some simple cases.
\begin{enumerate}
\item Let $X = \{1,2,3\}$ such that $1< 3$ and $2< 3$. Let $Y = \{a,b,c,d\}$ such that $a < b < c < d$. Let $Z = \{i,j,k\}$ such that $i< j < k$. The morphism $f$ is defined by $f(1)= i$, $f(2)=j$ and $f(3)=k$. The morphism $g$ is defined by $g(a)=i$, $g(b)=j$, $g(c)=g(d)=k$. Then the Hasse diagram of $\Gamma$ is
\[
\xymatrix{
(1,a) \ar[r] & (1,b) \ar[r] & (1,c) \ar[r]\ar[d]\ar@{}[rd]|{\circlearrowleft} & (1,d)\ar[d] \\
& & (3,c) \ar[r]\ar@{}[rd]|{\circlearrowleft} & (3,d) \\
& (2,b) \ar[r] &(2,c) \ar[r]\ar[u] &(2,d) \ar[u]
}
\]
For $k\Gamma_0$ we have the following presentation of the category by generators and relations
\[
\xymatrix{
(1,a) \ar[r] & (1,b) \ar[r]\ar@{..>}[rd]_{0} & (1,c) \ar[r]\ar[d]\ar@{}[rd]|{\circlearrowleft} & (1,d)\ar[d] \\
& & (3,c) \ar[r]\ar@{}[rd]|{\circlearrowleft} & (3,d) \\
& (2,b) \ar[r]\ar@{..>}[ru]^{0} &(2,c) \ar[r]\ar[u] &(2,d) \ar[u]
}
\]
where the dotted arrows are zero relations.
\item It is particularly interesting to consider the more symmetric case where $Y=Z$ and $g= Id_{Y}$. Then
\[ \Gamma = \{ (x,y)\in X\times Y\ ;\ f(x)\leqslant_{Y} y \}\]
and the category $k\Gamma_0$ has morphisms:
\[\Hom_{k\Gamma_0}\big((x,y),(x',y')\big) =\left\{\begin{array}{c}k \hbox { if $x\leqslant_X x'$, $y\leqslant_Y y'$ and $f(x')\leqslant_Y y$}, \\0 \hbox{ otherwise.} \end{array}\right. \]
Let $P = \{1,2,3\}$ where $1<3$ and $2<3$. Let $X$ be the poset of $2$-chains of $P$ and $Y$ be the poset of $3$-chains of $P$. We let $f : X\to Y$ be the morphism that sends a chain $i\leqslant j$ to $i\leqslant i \leqslant j$. Then, the Hasse diagram of $\Gamma$ is
\[
\xymatrix{
\bullet\ar[r]\ar@{}[rd]|{\circlearrowleft} & \bullet \ar[r] & \bullet & \bullet\ar[l]\ar@{}[rd]|{\circlearrowleft} & \bullet \ar[l] \\
\bullet\ar[r]\ar@{}[rd]|{\circlearrowleft}\ar[u] & \bullet\ar[u] & & \bullet\ar[u] \ar@{}[rd]|{\circlearrowleft}& \bullet\ar[u]\ar[l]\\
\bullet\ar[r]\ar[u] & \bullet\ar[u] & & \bullet\ar[u] & \bullet\ar[u]\ar[l]\\
\bullet\ar[u] & & & & \bullet\ar[u]}
\]
For $k\Gamma_0$ we have a presentation by generators and relations
\[
\xymatrix{
\bullet\ar[r]\ar@{}[rd]|{\circlearrowleft} & \bullet \ar[r] & \bullet & \bullet\ar[l]\ar@{}[rd]|{\circlearrowleft} & \bullet \ar[l] \\
\bullet\ar[r]\ar[u]\ar@{}[rd]|{\circlearrowleft} & \bullet\ar[u]\ar@{..>}[ur]_{0} & & \bullet\ar[u]\ar@{..>}[ul]^{0} \ar@{}[rd]|{\circlearrowleft}& \bullet\ar[u]\ar[l]\\
\bullet\ar[r]\ar[u] & \bullet\ar[u] & & \bullet\ar[u] & \bullet\ar[u]\ar[l]\\
\bullet\ar[u]\ar@{..>}[ur]_{0} & & & & \bullet\ar[u]\ar@{..>}[ul]^{0}
}
\]
More generally, if $l \in \mathbb{N}$, one can define simplicial morphisms between the poset of $l$-chains of $P$ and the poset of $(l+1)$-chains of $P$, by duplicating or forgetting the element at a fixed position of the chains. This will give similar diagrams.
\end{enumerate}
\end{ex}
\subsection{Special case of intervals}
For the specific case of the intervals of a finite poset, we have
\begin{coro}
Let $X$ be a finite poset. Let $k$ be a commutative ring. Then, the category $\Fintk$ is derived equivalent to the category $\mathcal{F}_{\mathrm{Int}_k^{0}(X),k}$.
\end{coro}
It was shown by the second author that the poset $A_{2n+1} \times A_n$ is derived equivalent to the stable Auslander algebra of the quiver $\overrightarrow{A_{2n+1}}$. The poset $A_{2n+1} \times A_n$ can be viewed as a \emph{rectangle}, and the stable Auslander algebra of the quiver $\overrightarrow{A_{2n+1}}$ with linear order can be seen as a \emph{triangle}. However, there are some zero-relations in the Auslander--Reiten quiver that come from the almost split sequences whose left and right terms are simple modules. Using Theorem \ref{main_thm}, we can remove these zero-relations.
\begin{coro}\label{triangle}
Let $n\in \mathbb{N}$ and $k$ be an algebraically closed field. Then, the poset $A_{2n+1} \times A_n$ is derived equivalent to the poset $\mathrm{Int}(\overrightarrow{A_{2n}})$.
\end{coro}
\begin{proof}
By Corollary $1.12$ of \cite{ladkani_rectangles}, the poset $A_{2n+1}\times A_n$ is derived equivalent to the stable Auslander algebra of $k\overrightarrow{A_{2n+1}}$. Because we consider a linear order on $A_{2n+1}$, it is easy to see that the stable Auslander algebra of $k\overrightarrow{A_{2n+1}}$ is isomorphic to the usual Auslander algebra of $k\overrightarrow{A_{2n}}$. Now, for $m\in \mathbb{N}^{*}$, we consider the Auslander algebra of $A_m$ with ordering $m < m-1 < \cdots < 1$. Let $Q$ be the Auslander--Reiten quiver of $kA_m$ viewed as a category. Let $I$ be the category $\mathrm{Int}_{k}^{0}(\overrightarrow{A_m})$. There is a functor $\phi$ from $I$ to $Q$ which can be described as follows. The interval $[i,j]$ is sent to the indecomposable $kA_m$-module with support $[i,j]$, denoted $M_{[i,j]}$. If $[i,j] \leqslant [k,l]$, then the corresponding basis element is sent to the irreducible morphism between the indecomposable modules $M_{[i,j]}$ and $M_{[k,l]}$.
\newline\indent If $i\neq j$, then the equality of the morphisms $[i,j] \to [i+1,j] \to [i+1,j+1]$ and $[i,j] \to [i,j+1] \to [i+1,j+1]$ corresponds via $\phi$ to the mesh relation
{\small \[
\xymatrix{ & M_{[i+1,j]}\ar[rd] & \\ M_{[i,j]}\ar[ru]\ar[rd] & & M_{[i+1,j+1]}=\tau^{-1}(M_{[i,j]}) \\ & M_{[i,j+1]}\ar[ru]
}
\]}
and the zero-relation $[i,i] \to [i,i+1] \to [i+1,i+1]$ in $\mathrm{Int}_k^{0}(\overrightarrow{A_m})$ corresponds to the mesh relation
{\small \[ \xymatrix{ 0 \ar[r] & M_{[i,i]} \ar[r] & M_{[i,i+1]} \ar[r] & M_{[i+1,i+1]} = \tau^{-1}(M_{[i,i]})\ar[r] & 0.}\]}
It is now easy to see that this functor is an equivalence of categories. In particular, the category of modules over the Auslander algebra of $A_m$ is equivalent to the category of $k$-linear functors from $\mathrm{Int}_k^{0}(\overrightarrow{A_m})$ to $k\Mod$.
\newline\indent In conclusion, the poset $A_{2n+1} \times A_{n}$ is derived equivalent to $\mathrm{Int}_{k}^{0}(\overrightarrow{A_{2n}})$. The result follows from Theorem \ref{main_thm}.
\end{proof}
\begin{re}
It is well-known that any two different orientations of a Dynkin diagram of type $A$ are derived equivalent. The stable Auslander algebras of two different orientations are also derived equivalent (see Section $1.5$ of \cite{ladkani_rectangles}). This implies that the poset of intervals $\mathrm{Int}(A_{2n})$ of a linear orientation of $A_{2n}$ is derived equivalent to the stable Auslander algebra of $A_{2n+1}$ for any orientation. However, it is not true that two different orientations of $A_{2n}$ lead to derived equivalent posets of intervals.
\end{re}
\subsection{Application to the poset of rational Dyck paths}
Let $a$ and $b$ be two co-prime integers. A rational $(a,b)$-Dyck path is a lattice path in an $(a\times b)$-rectangle that stays above and never crosses the diagonal. We denote by $\mathrm{Dyck}_{a,b}$ the set of rational Dyck paths. It is well known that there are $\frac{1}{a+b} {{a+b}\choose{b}}$ elements in $\mathrm{Dyck}_{a,b}$ (see \cite{bizley} for more details). The usual proof of this formula is to consider the set of all lattice paths in the $a\times b$ rectangle, denoted by $\mathcal{L}_{a,b}$. This is a set with ${{a+b}\choose{b}}$ elements. The cyclic group $\mathbb{Z}/{(a+b)\mathbb{Z}}$ acts on this set by the so-called cycling of paths. Each orbit contains $a+b$ elements and exactly one rational Dyck path.
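Both counts are easy to verify by brute force; the following Python sketch (ours, added for illustration) enumerates the monotone lattice paths in the $a\times b$ rectangle and keeps those lying weakly above the diagonal:
\begin{verbatim}
from itertools import combinations
from math import comb, gcd

def lattice_paths(a, b):
    """Monotone lattice paths in the a x b rectangle, encoded by the
    positions (among the a + b steps) of the a north steps."""
    return list(combinations(range(a + b), a))

def is_dyck(path, a, b):
    """Stays weakly above the diagonal y = (a/b) x, i.e. b*y >= a*x."""
    north, x, y = set(path), 0, 0
    for step in range(a + b):
        if step in north:
            y += 1
        else:
            x += 1
        if b * y < a * x:
            return False
    return True

for a, b in [(2, 3), (2, 5), (3, 4), (3, 5)]:
    assert gcd(a, b) == 1
    L = lattice_paths(a, b)
    D = [p for p in L if is_dyck(p, a, b)]
    assert len(L) == comb(a + b, b)
    assert len(D) == comb(a + b, b) // (a + b)
\end{verbatim}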
\newline\indent The sets $\mathcal{L}_{a,b}$ and $\mathrm{Dyck}_{a,b}$ can be naturally viewed as posets. A lattice path $l_1$ is smaller than another $l_2$ if $l_1$ lies below $l_2$. The formula for the cardinality of $\mathrm{Dyck}_{a,b}$ suggests a relation between the poset $A_{a+b}\times \mathrm{Dyck}_{a,b}$ and the poset $\mathcal{L}_{a,b}$. It is easy to see that these two posets are not isomorphic. Still, we think that they may share the same derived category.
\begin{conj}\label{conj_dyck}
Let $a,b$ be two co-prime integers. Let $\mathcal{L}_{a,b}$ be the poset of lattice paths in the rectangle $a\times b$ and $\mathrm{Dyck}_{a,b}$ be the poset of $(a,b)$-rational Dyck paths. Then, the poset $A_{a+b} \times \mathrm{Dyck_{a,b}}$ is derived equivalent to the poset $\mathcal{L}_{a,b}$.
\end{conj}
In the particular case where $a=2$, the tools developed here together with results of the second author can be used in order to check this conjecture.
\begin{prop}
Let $k$ be an algebraically closed field. Let $b$ be an \emph{odd} integer. Then, there is a derived equivalence over the field $k$ between the posets $A_{b+2} \times \mathrm{Dyck}_{2,b}$ and $\mathcal{L}_{2,b}$.
\end{prop}
\begin{proof}
If $\lambda$ is a lattice path in the rectangle $2\times b$, we denote by $I(\lambda)$ the pair $(j,i)$, where $i$ is the abscissa of the first vertical move of the path and $j$ is the abscissa of the second vertical move. The path is characterized by the pair $I(\lambda)$. There are $\frac{b+1}{2}$ different $(2,b)$-Dyck paths, which correspond to the pairs $(0,0), (1,0),\cdots ,(\frac{b-1}{2},0)$. Moreover, the partial order of the paths is given by $(\frac{b-1}{2},0) < \cdots < (1,0) < (0,0)$. In other words, $\mathrm{Dyck}_{2,b} \cong A_{\frac{b+1}{2}}$.
\newline\indent If $(j,i)$ is the pair $I(\lambda)$ of a lattice path, it is clear that $i\leqslant j$. In particular $I(\lambda)$ can be seen as an interval of $A_{b+1}$ ordered by decreasing order. If $\lambda_1$ and $\lambda_2$ are two paths, it is easy to see that $\lambda_1 \leqslant \lambda_2$ if and only if $I(\lambda_1)\leqslant I(\lambda_2)$ in the poset of intervals. This shows that $\mathcal{L}_{2,b}$ is isomorphic to the poset of intervals of $A_{b+1}$. The result follows from Corollary \ref{triangle}.
\end{proof}
\section{Representations of a finite poset}
Let $k$ be a commutative ring. Let $Y$ be a finite poset. The category $\FYk$ is abelian. The abelian structure is point-wise. More precisely, it is defined on the evaluations of the functors. For $y\in Y$, there is an obvious functor, denoted by $\ev_y$, from $\FYk$ to $k\Mod$ that sends a functor $F$ to its value $F(y)$. This functor is clearly exact.
\newline\indent For $y \in Y$, we let $\mathrm{P}_y:= \Hom_{k\mathcal{C}_Y}(y,-)$. By Yoneda's Lemma, we have
\[ \Hom_{\FYk}\Big(\Hom_{k\mathcal{C}_Y}\big(y,-\big),-\Big) \cong \mathrm{ev}_y.\] In particular, the functor $\mathrm{P}_y$ is projective. Similarly, the functor $\mathrm{I}_y:=\Hom_{k\C_Y}(-,y)^{*}$ is an injective functor. More precisely, the evaluations of these functors are
\[ \mathrm{P}_y(z) = \left\{\begin{array}{c}k \hbox{ \ \ if $y \leqslant z$, } \\0\hbox{ \ otherwise,}\end{array}\right. \]
\[ \mathrm{I}_y(z) = \left\{\begin{array}{c}k \hbox{ \ \ if $z \leqslant y$, } \\0\hbox{ \ otherwise.}\end{array}\right. \]
\begin{lemma}\label{hom_proj}
Let $Y$ be a finite poset and $k$ be a commutative ring. Let $x$ and $y\in Y$. Then,
\[ \Hom_{\FYk}\big(\mathrm{P}_x, \mathrm{P}_y\big) = \Hom_{\FYk}\big(\mathrm{I}_x, \mathrm{I}_y\big) = \left\{\begin{array}{c}k \hbox{ if $y \leqslant x$, } \\0 \hbox{ otherwise.}\end{array}\right.\]
\end{lemma}
\begin{proof}
These are straightforward applications of Yoneda's Lemma.
\end{proof}
Let $X$ and $Y$ be two finite posets and $F : Y\to \mathcal{J}(X)$ be an order preserving map. For $y\in Y$, we let $i_{y} : F(y) \to \Gamma$ be the map that sends $x\in F(y)$ to $(x,y)\in \Gamma$. This is an order preserving map, so pre-composition by $i_{y}$ gives a functor $i_{y}^{-1} : \FGk \to \Fyk$. More explicitly, if $\phi \in \FGk$, then $i_{y}^{-1}(\phi)$ is the functor that sends $x\in F(y)$ to the $k$-module $\phi(x,y)$. The functor $i_{y}^{-1}$ is clearly exact, and by usual arguments it has a left and a right adjoint, which can be described as a particular coend and end (for more details, see Theorem $1$, Section $4$ of Chapter $X$ of \cite{cftwm}). One can explicitly compute this end in order to find the right adjoint. Alternatively, and for the convenience of the reader, we give the formula and check that it indeed gives a right adjoint of $i_{y}^{-1}$.
\newline\indent Let $\phi \in \Fyk$. Then, for $(a,b)\in \Gamma$ we set:
\[ (i_{y})_{\star}\phi(a,b):=\left\{\begin{array}{c}\Hom_{k}\Big(\Hom_{k\Gamma}\big((a,b),(a,y)\big), \phi(a)\Big) \hbox{ if $b \leqslant y$, } \\0 \hbox{ otherwise. }\end{array}\right. \]
Since $\Hom_{k\Gamma}\big((a,b),(a,y)\big)$ is isomorphic to $k$ when $b\leqslant y$, this formula can be simplified as
\[ (i_{y})_{\star}\phi(a,b)\cong \left\{\begin{array}{c}\phi(a) \hbox{ if $b \leqslant y$, } \\0 \hbox{ otherwise. }\end{array}\right. \]
However, we feel that it is more natural to describe this functor in this way.
\newline\indent Let $0\neq f : (a,b) \to (c,d)$ be a morphism in $k\Gamma$ such that $b\leqslant d \leqslant y$. Let $0\neq g \in \Hom_{k\Gamma}\big((c,d),(c,y) \big)$. Then, there exist $h\in \Hom_{k\Gamma}\big((a,b),(a,y)\big)$ and $\alpha \in \Hom_{kF(y)}(a,c)$ such that the following diagram commutes
\[
\xymatrix{
(a,b)\ar[r]^{h}\ar[d]^{f} & (a,y)\ar[d]^{i_{y}(\alpha)} \\
(c,d) \ar[r]^{g} & (c,y)
}
\]
Note that $h$ and $\alpha$ are not unique. However, the different choices are of the form $\lambda \times h$ and $\lambda^{-1}\times \alpha$ for $\lambda \in k^{\times}$.
\newline\indent Then $(i_{y})_{\star}(\phi)(f)$ is the map that sends $\rho\in \Hom_{k}\Big(\Hom_{k\Gamma}\big((a,b),(a,y)\big), \phi(a)\Big)$ to the $k$-linear morphism that sends $g\in \Hom_{k\Gamma}\big((c,d),(c,y)\big)$ to $\phi(\alpha) \circ \rho(h)\in \phi(c)$. Since $\phi$ and $\rho$ are $k$-linear morphisms, we see that the value of $(i_{y})_{\star}(\phi)(f)$ does not depend on the choice of $\alpha$ and $h$.
\newline\indent Let $\eta : \phi \Rightarrow \psi$ be a morphism between two functors of $\Fyk$. Then $(i_{y})_{\star}(\eta)$ is the natural transformation defined by $(i_{y})_{\star}(\eta)_{(a,b)}(\rho) = \eta_{a} \circ \rho$ for $(a,b)\in \Gamma$ such that $b\leqslant y$ and $\rho \in (i_{y})_{\star}(\phi)(a,b)$.
\begin{lemma}
Let $y\in Y$. Then, the functor $(i_{y})^{-1} : \FGk\to \Fyk$ is a left adjoint to the functor $(i_y)_{\star} : \Fyk \to \FGk$.
\end{lemma}
\begin{proof}
We give the unit and the co-unit of the adjunction.
\newline\indent Let $F\in \Fyk$ and $x\in F(y)$. The co-unit of the adjunction at $F$ and $x$ is the $k$-linear morphism
\[\epsilon_F(x) : \Hom_{k}\Big(\Hom_{k\Gamma}\big((x,y),(x,y)\big),F(x)\Big) \to F(x),\]
that sends $\alpha \in \Hom_{k}\Big(\Hom_{k\Gamma}\big((x,y),(x,y)\big),F(x)\Big)$ to $\alpha(Id_{(x,y)}) \in F(x)$.
\newline\indent Let $G\in \FGk$ and $(a,b)\in\Gamma$ such that $b\leqslant y$. The unit of the adjunction at $G$ and $(a,b)$ is the $k$-linear morphism
\[\eta_{G}(a,b) : G(a,b) \to \Hom_{k}\Big(\Hom_{k\Gamma}\big((a,b),(a,y)\big),G(a,y)\Big) \]
that sends $\gamma \in G(a,b)$ to the $k$-linear morphism that sends $\alpha \in \Hom_{k\Gamma}\big((a,b),(a,y)\big)$ to $G(\alpha)(\gamma)$. It is now straightforward to check that these two morphisms are the unit and the co-unit of the adjunction.
\end{proof}
Here, we summarise the main properties of the functors $i_{y}^{-1}$ and $(i_y)_{\star}$.
\begin{lemma}\label{adj}
Let $y\in Y$.
\begin{enumerate}
\item The functor $(i_y)_{\star}$ sends the injective $I_{x} \in \Fyk$ to the injective $I_{(x,y)} \in \FGk$.
\item The two functors $i_y^{-1}$ and $(i_y)_{\star}$ are exact.
\end{enumerate}
\end{lemma}
\begin{proof}
Since $(i_y)_{\star}$ is a right-adjoint to an exact functor, it sends $I_{x}\in \Fyk$ to an injective object of $\FGk$. Moreover, one can explicitly compute $(i_y)_{\star}(I_{x})$. Let $(a,b)\in \Gamma$, then we have
\[(i_y)_{\star}(I_x)(a,b) = \left\{\begin{array}{c} I_{x}(a) \hbox{ if $b\leqslant y$} \\0 \hbox{ otherwise } \end{array}\right. = \left\{\begin{array}{c} k \hbox{ if $a\leqslant x$ and $b\leqslant y$} \\0 \hbox{ otherwise. } \end{array}\right. \]
It is clear that $i_y^{-1}$ is an exact functor. For the functor $(i_{y})_{*}$ the exactness follows easily from the description of this adjoint. Let
\[\xymatrix{
0 \ar[r] & F_1 \ar[r]^{\alpha_1} & F_2 \ar[r]^{\alpha_2} & F_3 \ar[r] &0,
} \]
be an exact sequence of functors of $\Fyk$. Let $(a,b)\in \Gamma$. If $b \nleqslant y$, then for $i=1,2,3$ we have $(i_{y})_{\star}(F_i)(a,b) = 0$ and $(i_{y})_{\star}(\alpha_i)_{(a,b)}=0$ for $i=1,2$, so the sequence is exact. If $b\leqslant y$, then $(i_{y})_{\star}(\alpha_i)_{(a,b)} = \Hom_{k}\Big(\Hom_{k\Gamma}\big((a,b),(a,y)\big),(\alpha_i)_{a}\Big)$ for $i=1,2$. Since $\Hom_{k\Gamma}\big((a,b),(a,y)\big)\cong k$, the result follows.
\end{proof}
Since the functors $(i_y)_{\star}$ and $i_y^{-1}$ are both exact, they can be extended as a pair of adjoint triangulated functors between the derived categories $D^{b}\big(\Fyk\big)$ and $D^b\big(\FGk\big)$.
\newline\indent Let us remark that, in general, the functor $(i_{y})_{\star}$ does not send a projective functor to a projective functor. However, it behaves relatively nicely for these projective functors.
\begin{lemma}
Let $y$ and $y'\in Y$. Let $x\in F(y)$ and let $P_x$ be the corresponding projective functor of $\mathcal{F}_{F(y),k}$. Then,
\[ \big(i_{y'}^{-1} \circ (i_{y})_{\star}\big)(P_x)\cong \left\{\begin{array}{c}P_x \in \mathcal{F}_{F(y'),k} \hbox{ if $ y'\leqslant y$ and $x\in F(y')$, } \\0 \hbox{ otherwise. }\end{array}\right. \]
\end{lemma}
\begin{proof}
Let $x'\in F(y')$. Then, we have
\begin{align*}
\big(i_{y'}^{-1} \circ (i_{y})_{\star}\big)(P_x)(x') &=(i_{y})_{\star}(P_x)(x',y') \\
&= \left\{\begin{array}{c}P_x(x') \hbox{ if $ y'\leqslant y$, } \\0 \hbox{ otherwise. }\end{array}\right. \\
&= \left\{\begin{array}{c}k \hbox{ if $ y'\leqslant y$, and $x \leqslant x'$ } \\0 \hbox{ otherwise. }\end{array}\right.
\end{align*}
Since $F(y')$ is closed, the condition $x\leqslant x'$ implies that $x\in F(y')$. The result follows.
\end{proof}
\section{Proof of Theorem \ref{main_thm}}
First, let us recall Rickard's famous Morita theorem for derived categories.
\begin{theo}
Let $A$ and $B$ be two rings. Then, the following are equivalent
\begin{enumerate}
\item $D^b(A\Mod) \cong D^b(B\Mod)$.
\item $B$ is isomorphic to $\End_{D^b(A)}(T)^{op}$ where $T$ is an object of $K^{b}(\mathrm{proj}(A))$ satisfying
\begin{itemize}
\item $\Hom_{D^{b}(A)}(T,T[i]) = 0 $ for $i\neq 0$,
\item $\mathrm{add}(T)$, the category of direct summands of finite direct sums of copies of $T$, generates $K^b(\mathrm{proj}(A))$ as triangulated category.
\end{itemize}
\end{enumerate}
\end{theo}
\begin{proof}
See Theorem $6.4$ of \cite{rickard_morita}, or Theorem $6.5.1$ of \cite{zimmermann_rep} for a proof following Keller's approach. Note that Keller's proof is stronger: it shows that it is possible to realise the derived equivalence as the tensor product with a bounded complex of bimodules. Moreover, it holds for two algebras over a commutative ring $k$ that are projective over $k$.
\end{proof}
The complex $T$ is called a tilting complex for the ring $A$. In the present paper, we work with categories of functors. However, it is easy to see that all our functor categories are equivalent to categories of modules over an algebra. Moreover, these algebras are free over the ring $k$ and of finite rank. In particular, in order to build derived equivalences, we can use Rickard's Morita theorem. Since the algebras are free over $k$, the stronger form of this theorem due to Keller also holds in our context.
\newline\indent Let $y\in Y$ and $x\in F(y)$. We denote by $P_{x}$ the projective functor of $\mathcal{F}_{F(y),k}$ that corresponds to the element $x$.
\begin{prop}\label{tilting}
Let $k$ be a commutative ring. Let $X$ and $Y$ be two finite posets and let $F : Y \to \mathcal{J}(X)$ be an order preserving map. Then, the complex
\[\mathrm{T} := \bigoplus_{y\in Y} \bigoplus_{x\in F(y)} (i_{y})_{\star} (P_{x}) \]
is a tilting complex for $D^{b}\big(\mathcal{F}_{\Gamma,k}\big)$.
\end{prop}
\begin{proof}
Strictly speaking, the complex $\mathrm{T}$ is not a tilting complex since its terms are not projective. However, since the category $\mathcal{F}_{\Gamma,k}$ has finite global dimension, we can find a bounded complex of projective objects which is quasi-isomorphic to $\mathrm{T}$.
\begin{enumerate}
\item Let $y \in Y$. Let $x\in F(y)$ and let $I_{x}$ be the corresponding finitely generated injective functor of $\Fyk$. It has a finite projective resolution, so there is a bounded complex $P_{\bullet}$ of elements of $\mathrm{add}(\bigoplus_{x \in F(y)} P_x)$ and a quasi-isomorphism $\phi : P_{\bullet} \to I_x$. Since the functor $(i_{y})_{\star}$ is exact, we have a quasi-isomorphism between $(i_{y})_{\star}(P_{\bullet})$ and $(i_y)_{\star}(I_x)$. By Lemma \ref{adj}, we have $(i_y)_{\star}(I_{x}) \cong I_{(x,y)}$. So for every $(x,y)\in \Gamma$, the injective functor $I_{(x,y)}$ belongs to the smallest triangulated subcategory of $D^{b}(\FGk)$ that contains $\mathrm{add}(\mathrm{T})$. Since every finitely generated projective functor has a finite injective co-resolution, we conclude that this category is equivalent to $K^{b}\big(\mathrm{proj}(\FGk)\big)$.
\item Let $i\in \mathbb{Z}$. Then, using the fact that $(i_{y'})_{\star}$ is a triangulated functor and the adjunction, we have
\begin{align*}
\Hom_{D^b(\FGk)}\big(\mathrm{T},\mathrm{T}[i]\big) &= \bigoplus_{y,y'\in Y} \Hom_{D^b(\FGk)}\Big((i_{y})_{\star}\big(\bigoplus_{x\in F(y)}P_{x}\big),(i_{y'})_{\star}\big(\bigoplus_{x'\in F(y')}P_{x'}\big)[i]\Big)\\
&\cong \bigoplus_{y,y'\in Y} \Hom_{D^b(\mathcal{F}_{F(y'),k})}\Big(i_{y'}^{-1}\circ (i_{y})_{\star}\big( \bigoplus_{x\in F(y)}P_{x}\big),\bigoplus_{x'\in F(y')}P_{x'}[i]\Big) \\
& \cong \bigoplus_{y'\leqslant y} \Hom_{D^b(\mathcal{F}_{F(y'),k})}\Big(\bigoplus_{x\in F(y')}P_{x},\bigoplus_{x'\in F(y')}P_{x'}[i]\Big)
\end{align*}
Since $P_x$ and $P_{x'}$ are projective objects in $\mathcal{F}_{F(y'),k}$, there are no non-trivial extensions between them. This implies that $\Hom_{D^b(\FGk)}\big(\mathrm{T},\mathrm{T}[i]\big) = 0$ if $i\neq 0$.
\end{enumerate}
\end{proof}
\begin{proof}[Proof of Theorem \ref{main_thm}]
By Proposition \ref{tilting}, there is an equivalence between $D^b(\FGk)$ and $D^b(\End(T)^{op})$, where $T$ is the tilting complex of the Proposition. By usual Morita theory, the category $\mathcal{F}_{\Gamma_0,k}$ is equivalent to the category of modules over $\End_{\mathcal{F}_{\Gamma_{0},k}}\big(\bigoplus_{(x,y)\in k\Gamma_0} P_{(x,y)}\big)^{op}$, where $P_{(x,y)}$ is the representable functor $\Hom_{k\Gamma_0}\big((x,y),-\big)$. Using the Yoneda Lemma, we have
\[ \Hom_{\mathcal{F}_{\Gamma_{0},k}}\big(P_{(x,y)},P_{(x',y')}\big) = \left\{\begin{array}{c} k \hbox{ if $x'\leqslant x$, $y'\leqslant y$ and $x\in F(y')$, } \\0 \hbox{ otherwise.}\end{array}\right. \]
Since $(i_{y})_{\star}(P_x)$ and $(i_{y'})_{\star}(P_{x'})$, for $x\in F(y)$, $x'\in F(y')$ and $y,y'\in Y$, are functors, i.e.\ complexes concentrated in degree zero, we can do the computation of $\End(\mathrm{T})$ in the category of functors instead of the derived category. Then, we have
\begin{align*}
\Hom_{\FGk}\big( (i_{y})_{\star}(P_x), (i_{y'})_{\star}(P_{x'})\big) & \cong \Hom_{\mathcal{F}_{F(y'),k}}\big(i_{y'}^{-1} \circ (i_{y})_{\star}(P_x), P_{x'}\big)\\
& \cong \left\{\begin{array}{c} \Hom_{\mathcal{F}_{F(y'),k}}(P_{x},P_{x'}) \hbox{ if $y'\leqslant y$ and $x\in F(y')$} \\0 \hbox{ otherwise,}\end{array}\right.\\
& \cong \left\{\begin{array}{c} k \hbox{ if $x'\leqslant x$, $y'\leqslant y$ and $x\in F(y')$, } \\0 \hbox{ otherwise.}\end{array}\right.
\end{align*}
This implies that $\End(\mathrm{T}) \cong \End_{\mathcal{F}_{\Gamma_{0},k}}\big(\bigoplus_{(x,y)\in k\Gamma_0} P_{(x,y)}\big)$. Applying the `op' functor, we obtain the derived equivalence between $\FGk$ and $\mathcal{F}_{\Gamma_0,k}$.
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}
A fascinating family of \emph{pleated} origami models use extremely simple
crease patterns---repeated concentric shapes, alternating mountain and
valley---yet automatically fold into interesting 3D shapes.
The most well-known is the \emph{pleated hyperbolic paraboloid},
shown in Figure~\ref{hypar}, where the crease pattern is concentric squares
and their diagonals.
As the name suggests, it has long been conjectured, but never formally
established, that this model approximates a hyperbolic paraboloid.
More impressive (but somewhat harder to fold) is the \emph{circular pleat},
shown in Figure~\ref{circular}, where the crease pattern is simply
concentric circles, with a circular hole cut out of the center.
Both of these models date back to the Bauhaus, from a preliminary course in
paper study taught by Josef Albers in 1927--1928 \cite[p.~434]{Wingler-1969},
and taught again later at Black Mountain College in 1937--1938
\cite[pp.~33, 73]{Adler-2004}; see \cite{curved}.
These models owe their popularity today to origamist Thoki Yenn,
who started distributing the model sometime before 1989.
Examples of their use and extension for algorithmic sculpture
include \cite{BRIDGES99,AAG08}.
\begin{figure}
\centering
\subfigure[Standard mountain-valley pattern.
\label{hypar cp} \label{hypar mv}]
{\includegraphics[scale=0.6]{hypar_cp}}\hfil
\subfigure[Photograph of physical model. {[Jenna Fizel]} \label{hypar photo}]
{\includegraphics[scale=0.2]{hypar_photo}}
\caption{Pleated hyperbolic paraboloid.}
\label{hypar}
\end{figure}
\begin{figure}
\centering
\subfigure[Mountain-valley pattern.
\label{circular cp} \label{circular mv}]
{\includegraphics[scale=0.6]{circular_cp}}\hfil
\subfigure[Photograph of physical model. {[Jenna Fizel]} \label{circular photo}]
{\includegraphics[scale=0.2,trim=100 0 0 0,clip]{circular_photo}}
\caption{Circular pleat.}
\label{circular}
\end{figure}
The magic of these models is that most of the actual folding happens by
the physics of paper itself; the origamist simply puts all the creases in
and lets go. Paper is normally elastic: try wrapping a paper sheet
around a cylinder, and then letting go---it returns to its original state.
But \emph{creases} plastically deform the paper beyond its yield point,
effectively resetting the elastic memory of paper to a nonzero angle.
Try creasing a paper sheet and then letting go---it stays folded at the crease.
The harder you press the crease, the larger the desired fold angle.
What happens in the pleated origami models is that the paper tries to
stay flat in the uncreased portions, while trying to stay folded at the
creases, and physics computes a configuration that balances these forces
in equilibrium (with locally minimum free energy).
But some mathematical origamists have wondered over the years \cite{nytimes}:
do these models actually \emph{exist}? Is it really possible to fold
a piece of paper along exactly the creases in the crease pattern of
Figures~\ref{hypar} and~\ref{circular}?
The first two authors have always suspected that both models existed,
or at least that one existed if and only if the other did.
But we were wrong.
\paragraph{Our results.}
We prove that the hyperbolic-paraboloid crease pattern of Figure~\ref{hypar cp}
does not fold using exactly the given creases,
even with a hole cut out of the center.
In proving the impossibility of folding the pleated hyperbolic
paraboloid, we develop a structural characterization of how uncreased
paper can fold (hence the title of this paper). Surprisingly, such a
characterization has not been obtained before. An intuitive
understanding (often misquoted) is that paper folds like a ruled
surface, but that claim is only true locally (infinitesimally) about
every point. When the paper is not smooth or has zero principal
curvature at some points, the truth gets even subtler. We correct
both of these misunderstandings by handling nonsmooth (but uncreased)
surfaces, and by stating a local structure theorem flexible enough to
handle zero curvatures and all other edge cases of uncreased surfaces.
In contrast, we conjecture that the circular-pleat crease pattern of
Figure~\ref{circular cp} folds using exactly the given creases, when
there is a hole cut out of the center. A proof of this would be the
first proof of existence of a curved-crease origami model (with more
than one crease) of which we are aware. Existing work characterizes
the local folding behavior in a narrow strip around a curved crease,
and the challenge is to extend this local study to a globally consistent
folding of the entire crease pattern.
Another natural remaining question is what actually happens to a real
pleated hyperbolic paraboloid like Figure~\ref{hypar photo}.
One conjecture is that the paper uses extra creases (discontinuities
in the first derivative), possibly many very small ones.
We prove that, indeed, simply triangulating the crease pattern,
and replacing the four central triangles with just two triangles,
results in a foldable crease pattern.
Our proof of this result is quite different in character,
in that it is purely computational instead of analytical.
We use interval arithmetic to establish with certainty that the
exact object exists for many parameter values, and its coordinates
could even be expressed by radical expressions in principle,
but we are able only to compute arbitrarily close approximations.
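For readers unfamiliar with the technique: interval arithmetic replaces every real quantity by an interval certified to contain it, so that an inequality verified on intervals is a rigorous statement about the exact values. The following Python sketch (illustrative only, and not the code used for our computation; a rigorous implementation must also round the interval endpoints outward) conveys the idea.
\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A closed interval [lo, hi] certified to contain a real value.
    (A rigorous version would round lo down and hi up.)"""
    lo: float
    hi: float

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))

# Certify that x^2 - 2 < 0 for every x in [1.0, 1.4]:
x = Interval(1.0, 1.4)
value = x * x - Interval(2.0, 2.0)
assert value.hi < 0.0   # since 1.4^2 = 1.96 < 2
\end{verbatim}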
\section{Structure of Uncreased Flat Surfaces}
Our impossibility result rests on an understanding of how it is possible
to fold the faces of the crease pattern, which by definition are regions
folded without creases. The geometric crux of the proof therefore relies
on a study of uncreased intrinsically flat (paper) surfaces.
This section gives a detailed analysis of such surfaces.
Our analysis allows discontinuities all the way down to the second derivative
(but not the first derivative---those are creases),
provided those discontinuities are somewhat tame.
We begin with some definitions, in particular to nail down the notion
of creases.
\begin{definition}
For us a \term{surface} is a compact 2-manifold embedded in $\R^3$.
The surface is $C^k$ if the manifold and its embedding are $C^k$.
The surface is \term{piecewise-$C^k$} if it can be decomposed as a
complex of $C^k$ regions joined by vertices and $C^k$ edges.
\end{definition}
\begin{definition}
A \term{good surface} is a piecewise-$C^2$ surface. A good surface
$S$ therefore decomposes into a union of $C^2$ surfaces $S_i$,
called \term{pieces}, which share $C^2$ edges $\gamma_j$, called
\term{semicreases}, whose endpoints are \term{semivertices}.
Isolated points of $C^2$ discontinuities are also \term{semivertices}.
If $S$ is itself $C^1$ everywhere on a semicrease,
we call it a \term{proper semicrease}; otherwise it is a \term{crease}.
Similarly a semivertex $v$ is a \term{vertex} if $S$ is not $C^1$ at~$v$.
Accordingly an \term{uncreased surface} is a $C^1$ good surface
(with no creases or vertices),
and a \term{creased surface} is a good surface not everywhere~$C^1$
(with at least one crease or vertex).
\end{definition}
\begin{definition}
A surface is \term{(intrinsically) flat} if every point $p$ has a
neighborhood isometric to a region in the plane.%
%
\footnote{Henceforth we use the term ``flat'' for this intrinsic notion
of the surface metric, and the term ``planar'' for the extrinsic notion
of (at least locally) lying in a 3D plane.}
\end{definition}
In order to understand the uncreased flat surfaces that are our chief
concern, we study the $C^2$ flat surfaces that make them up.
On a $C^2$ surface, the well-known \term{principal curvatures}
$\kappa_1 \geq \kappa_2$ are defined for each interior point as the
maximum and minimum (signed) curvatures for geodesics through the
point. A consequence of Gauss's celebrated Theorema Egregium
\cite{Gauss-1902} is that, on a $C^2$ flat surface, the \term{Gaussian
curvature} $\kappa_1\kappa_2$ must be everywhere zero. Thus every
interior point of a $C^2$ flat surface is either \term{parabolic} with
$\kappa_2 \neq \kappa_1 = 0$ or \term{planar} with $\kappa_2 = \kappa_1 = 0$.
Each interior point $p$ on a $C^2$ flat surface therefore either
\begin{enumerate}
\item[(a)] is planar, with a planar neighborhood;
\item[(b)] is planar and the limit of parabolic points; or
\item[(c)] is parabolic, and has a parabolic neighborhood by continuity,
\end{enumerate}
and an interior point on an uncreased flat surface may additionally
\begin{enumerate}
\item[(d)] lie on the interior of a semicrease; or
\item[(e)] (a priori) be a semivertex.
\end{enumerate}
For points of type (a), it follows by integration that the
neighborhood has a constant tangent plane and indeed lies in this
plane. Types (b) and (c) are a bit more work to classify, but the
necessary facts are set forth by
Spivak \cite[vol.~3, chap.~5, pp.~349--362]{Spivak-1979}
and recounted below.
(In Spivak's treatment the regularity condition is left unspecified,
but the proofs go through assuming only~$C^2$.)
We address type (d) farther below. From our results it will become clear that
the hypothetical type (e) does not occur in uncreased flat surfaces.
\begin{proposition}
{\rm \cite[Proposition III.5.4 et seq.]{Spivak-1979}} \label{prp:1}
For every point $p$ of type (c) on a surface~$M$, a
neighborhood $U \subset M$ of $p$ may be parametrized as
$$ f(s,t) = c(s) + t \cdot \delta(s) $$
where $c$ and $\delta$ are $C^1$ functions; $c(0) = p$;
$|\delta(s)| = 1$; $c'(s)$, $\delta(s)$, and $\delta'(s)$ are coplanar
for all $s$; and every point of $U$ is parabolic.
\end{proposition}
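As a concrete illustration (our own numerical sketch, not part of the proof), one can take the tangent developable of a helix, a standard parabolic flat surface, and check numerically that the surface normal is constant along each rule segment, i.e.\ that the ruling is torsal:
\begin{verbatim}
import numpy as np

def c(s):
    """A helix; its tangent developable is a flat, parabolic surface."""
    return np.array([np.cos(s), np.sin(s), s])

def delta(s):
    """Unit ruling direction: the normalized tangent of the helix."""
    d = np.array([-np.sin(s), np.cos(s), 1.0])
    return d / np.linalg.norm(d)

def normal(s, t, h=1e-6):
    """Unit normal of f(s,t) = c(s) + t*delta(s) via central differences."""
    f = lambda s, t: c(s) + t * delta(s)
    fs = (f(s + h, t) - f(s - h, t)) / (2.0 * h)
    n = np.cross(fs, delta(s))
    return n / np.linalg.norm(n)

# Away from the singular edge at t = 0, the normal should not depend
# on t along a fixed ruling:
s0 = 0.7
ns = [normal(s0, t) for t in (0.1, 0.5, 1.0, 2.0)]
assert all(np.allclose(n, ns[0], atol=1e-4) for n in ns[1:])
print(ns[0])
\end{verbatim}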
Write $\interior(M)$ for the interior of a surface~$M$.
\begin{proposition}
{\rm \cite[Corollaries III.5.6--7]{Spivak-1979}} \label{prp:2}
For every point $p$ of type (b) or (c) on a surface~$M$,
there is a unique line $L_p$ passing through $p$ such that
the intersection $L_p \cap M$ is open in $L_p$ at~$p$.
The component $C_p$ containing $p$ of the
intersection $L_p \cap \interior(M)$ is an open segment,
and every point in $C_p$ is also of type (b) or (c) respectively.
\end{proposition}
Following the literature on flat surfaces, we speak of a segment like
the $C_p$ of Proposition~\ref{prp:2} as a \term{rule segment}.
The \term{ruling} of a surface is the family of rule segments of all
surface points, whose union equals the surface.
A ruling is \term{torsal} if all points along each rule segment
have a common tangent plane.
To characterize points of type (d), lying on semicreases, we require
the following two propositions.
\begin{proposition}\label{prp:6}
Consider a point $q$ of type (d) on a surface~$M$.
Then $q$ is not the endpoint of the rule segment $C_p$
for any point $p \in M$ of type (b) or (c).
\end{proposition}
\begin{proof}
It suffices to show the conclusion for $p$ of type (c), because a
rule segment of type (b) is a limit of rule segments of type (c).
Let $\gamma$ be the interior of the semicrease on which $q$ lies.
Because $M$ is $C^1$, it has a tangent plane $M_q$ at each point $q$ of $\gamma$, which
is common to the two $C^2$ pieces bounded by $\gamma$. Parametrize
$\gamma$ by arclength with $\gamma(0) = q$, and write $n(s)$ for the
unit normal to the tangent plane $M_{\gamma(s)}$. Parametrize the
two pieces as torsal ruled surfaces by the common curve $c_1(s) =
c_2(s) = \gamma(s)$ and lines $\delta_1(s)$ and $\delta_2(s)$.
Then, because each piece is torsal, $\dot n(s) \perp \delta_1(s)$ and
$\dot n(s) \perp \delta_2(s)$. But both $\delta_1(s)$ and $\delta_2(s)$
lie in the tangent plane at $s$, perpendicular to $n(s)$, and so too
does $\dot n(s)$ because $n(s)$ is always a unit vector. Therefore,
for each $s$, either $\dot n(s) = 0$ or $\delta_1(s)$ and $\delta_2(s)$
are collinear.
Let $A$ be the subset of $\gamma$ on which $\dot n(s) = 0$, and $B$
the subset on which $\delta_1(s)$ and $\delta_2(s)$ are collinear.
Then we have shown that $A \cup B = \gamma$. By continuity, both
$A$ and $B$ are closed. Therefore any point of $\gamma$ which does
not belong to $A$ is in the interior of $B$, and any point not in
the interior of $A$ is in the closure of the interior of $B$.
If an open interval $I$ along $\gamma$ is contained in $A$ so that
$\dot n(s) = 0$, then a neighborhood in $M$ of $I$ is planar by
integration because each rule segment has a single common tangent
plane in a torsal ruled surface. On the other hand if $I$ is
contained in $B$ so that $\delta_1(s)$ and $\delta_2(s)$ are
collinear, then a neighborhood is a single $C^2$ ruled surface. In
either case, the rule segments from one surface that meet $I$
continue into the other surface. That is, each rule segment meeting
a point in the interior of $A$ or $B$ continues into the other
surface.
Now we conclude. By continuity, each rule segment meeting a point
in the closure of the interior of $A$ or $B$ continues into the
other surface; but these two closures cover $\gamma$. So no rule
segment ends on $\gamma$, including at $q$.
\end{proof}
\begin{proposition}\label{prp:3}
For every point $p$ of type (d) on a surface~$M$,
there is a unique line $L_p$ passing through $p$
such that the intersection $L_p \cap M$ is open in $L_p$ at~$p$.
The component $C_p$ containing $p$ of the
intersection $L_p \cap \interior(M)$ is an open segment,
the limit of rule segments through points neighboring $p$,
and every point of $C_p$ is also of type~(d).
\end{proposition}
\begin{proof}
Let $B_r(p)$ be a radius-$r$ disk in $M$ centered at $p$,
small enough that no point of the disk is a semivertex.
By Proposition~\ref{prp:6}, the rule segment
through any point $q$ of type (b) or (c) in the
half-size disk $B_{r/2}(p)$ cannot end in $B_r(p)$,
so it must be of length at least $r/2$ in each direction.
Further, $p$ must be a limit of such points,
or else a neighborhood of $p$ would be planar.
By a simple compactness argument, provided in \cite{Spivak-1979} for the
type-(b) case of Proposition~\ref{prp:2}, $C_p$ is the limit of (a
subsequence of) rule segments $C_q$ through points $q$ of type (b)
and (c) approaching $p$ and is an open segment. Because each $C_q$
has a single tangent plane, the discontinuity in second derivatives
found at $p$ is shared along $C_p$.
\end{proof}
Two corollaries follow immediately from Proposition~\ref{prp:3}.
\begin{corollary}\label{cory:2}
Every (proper) semicrease in an uncreased flat surface is a line segment,
and its endpoints are boundary points of the surface.
\end{corollary}
\begin{corollary}\label{cory:3}
An uncreased flat surface has no interior semivertices;
every interior point is in the interior of a $C^2$ piece or a semicrease.
\end{corollary}
Another corollary summarizes much of Propositions~\ref{prp:2}
and~\ref{prp:3} combined.
\begin{corollary}\label{cory:1}
Every interior point $p$ of an uncreased flat surface $M$ not
belonging to a planar neighborhood belongs to a unique rule segment
$C_p$. The rule segment's endpoints are on the boundary of $M$, and
every interior point of $C_p$ is of the same type (b), (c), or (d).
\end{corollary}
Finally, we unify the treatment of all types of points in the
following structure theorem for uncreased flat surfaces. The theorem
is similar to Proposition~\ref{prp:1}, which concerns only points of
type~(c).
\begin{theorem}
Every interior point of an uncreased flat surface has a neighborhood that is
a ruled surface. In each rule segment, every interior point is of the same
type (a), (b), (c), or (d). The ruled surface may be parametrized
as
$$ f(s, t) = c(s) + t \cdot \delta(s), $$
where $c$ is $C^1$, $\delta$ is $C^0$, and $\delta$ is $C^1$ whenever
$c(s)$ is of type (a), (b), or (c).
\end{theorem}
\begin{proof}
Let $p$ be a point on an uncreased flat surface $M$. If $p$ is of
type (a), then we may parametrize its planar neighborhood as a ruled
surface almost arbitrarily. Otherwise, $p$ is of type (b), (c), or
(d) and has a unique rule segment $C_p$.
Embed a neighborhood $U \subset M$ of $C_p$ isometrically in the
plane, by a map $\phi : U \to \R^2$. Let $\gamma$ be a line segment
in the plane perpendicularly bisecting $\phi(C_p)$, parametrized by
arclength with $\gamma(0) = \phi(p).$ Every point $\phi^{-1}(\gamma(s))$
of type (b), (c), or (d) has a unique rule segment $C_{\phi^{-1}(\gamma(s))}$;
for such $s$, let $\eps(s)$ be the unit vector pointing along
$\phi(C_{\phi^{-1}(\gamma(s))})$, picking a consistent orientation.
Now the remaining $s$ are those for which $\phi^{-1}(\gamma(s))$ is of type
(a). These $s$ form an open subset, so that for each such $s$ there
is a previous and next $s$ not of type (a). For each such~$s$,
we can determine an $\eps(s)$ by interpolating angles linearly between
the $\eps(s)$ defined for the previous and next $s$ not of type (a).
The resulting function $\eps(s)$ is continuous and identifies a
segment through every point in $\gamma$, giving a parametrization of
a neighborhood of $\gamma$ as a ruled surface by $g(s, t) =
\gamma(s) + t \cdot \eps(s)$.
Finally, write $f(s, t) = \phi^{-1}(g(s, t))$ to complete the construction.
\end{proof}
\section{How Polygonal Faces Fold}
If all edges of the crease pattern are straight,
every face of the crease pattern is a polygon.
We first show that, if the edges of such a polygon remain
straight (or even piecewise straight) in space,
then the faces must remain planar.
\begin{theorem}\label{thm:5}
If the boundary of an uncreased flat surface $M$ is piecewise
linear in space, then $M$ lies in a plane.
\end{theorem}
\begin{proof}
Let $p$ be a parabolic point in the interior of $M$, a point of type (c). We will
derive a contradiction. It then follows that every point of $M$ is of
type (a), (b), or (d), so planar points of type (a) are dense and by
integration $M$ lies in a plane.
Because $p$ is parabolic, it has by Proposition~\ref{prp:1} a
neighborhood consisting of parabolic points which is a ruled
surface. By Corollary~\ref{cory:1},
the rule segment through each point in this neighborhood can be
extended to the boundary of $M$. Let $U$ be the neighborhood so
extended.
Now the boundary of $U$ consists of a first rule segment $ab$, a last
rule segment $cd$, and arcs $bd$ and $ac$ of which at least one must be
nontrivial, say $bd$. Because we extended $U$ to the boundary of
$M$ and the boundary of $M$ is piecewise linear, $bd$ consists of a
chain of segments. Let $b'd'$ be one of these segments.
Let $q$ be any point interior to the segment $b'd'$, and consider
the normal vector $n(q)$ to $M$ at $q$. The normal is perpendicular
to $b'd'$ and to the rule segment $C_q$ meeting $q$. Because $U$ is
torsal, its derivative $n'(q)$ along $b'd'$ is also perpendicular to
$C_q$, and because the normal is always perpendicular to $b'd'$ the
derivative is perpendicular to $b'd'$. But this forces $n'(q)$ to
be a multiple of $n(q)$, therefore zero, which makes the points of
$C_q$ planar and is a contradiction.
\end{proof}
\section{Straight Creases Stay Straight}
Next we show that straight edges of a crease pattern
must actually fold to straight line segments in space.
\begin{theorem} \label{straight creases stay straight}
If $\gamma$ is a geodesic crease in a creased flat surface~$M$
with fold angle distinct from $\pm 180^\circ$,
then $\gamma$ is a segment in~$\R^3$.
\end{theorem}
\begin{proof}
The creased surface $M$ decomposes by definition into a complex of
uncreased surfaces, creases, and vertices.
A point $p$ in the interior of $\gamma$ is therefore on the boundary of two
uncreased pieces; call them $S$ and~$T$. Let $S_p$ and $T_p$ be the tangent
planes to $S$ and $T$ respectively at~$p$.
Because $\gamma$ is by hypothesis not a proper semicrease,
has no semivertices along it,
and has a fold angle distinct from $\pm 180^\circ$,
there is some $p \in \gamma$ where $S_p \neq T_p$.
By continuity, the same is true for a neighborhood in $\gamma$ of~$p$;
let $U$ be the maximal such neighborhood.
Now parametrize $\gamma$ by arclength and let $p = \gamma(s)$. At
each $q = \gamma(t)$, the tangent vector $\gamma'(t)$ lies in the
intersection $S_q \cap T_q$; in~$U$, where $S_q \neq T_q$, this determines $\gamma'(t)$ up
to sign. Because $S$ and $T$ are $C^2$, the tangent planes $S_q$ and $T_q$
are $C^1$, hence so is $\gamma'(t)$, and the curvature $\gamma''(t)$
exists and is continuous.
Now around any $q \in U$ project $\gamma$ onto the tangent plane
$S_q$. Because $\gamma$ is a geodesic, we get a curve of zero
curvature at $q$, so $\gamma''(t)$ must be perpendicular to $S_q$.
Similarly $\gamma''(t) \perp T_q$. But certainly $\gamma''(t) \perp
\gamma'(t)$. So $\gamma''(t) = 0$.
We have $\gamma''(t) = 0$ for $t$ in a neighborhood of $s$, so
$\gamma$ is a segment on $U$. Further, by the considerations of
Theorem~\ref{thm:5}, the tangent planes $S_q$ and $T_q$ are constant
on~$U$. Therefore they remain distinct at the endpoints of~$U$, and
because $U$ is maximal, these must be the endpoints of $\gamma$ and
$\gamma$ is a segment.
\end{proof}
Combining the previous two theorems, we deduce that
polygonal faces of the crease pattern with no boundary edges
must indeed stay planar:
\begin{corollary}\label{planar dammit}
If an uncreased region of a creased flat surface $M$ is piecewise
geodesic and entirely interior to $M$, then the region lies in a plane.
\end{corollary}
\section{Nonexistence of Pleated Hyperbolic Paraboloid}
Now we can apply our theory to prove nonfoldability of crease patterns.
First we need to formally define what this means.
\begin{definition}
A \term{piece of paper} is a planar compact 2-manifold.
A \term{crease pattern} is a graph embedded into a piece of paper,
with each edge embedded as a non-self-intersecting curve.
A \term{proper folding} of a crease pattern is an isometric embedding
of the piece of paper into 3D whose image is a good surface such that
the union of vertices and edges of the crease pattern map onto
the union of vertices and creases of the good surface.
A \term{rigid folding} is a proper folding that maps each face of the
crease pattern into a plane (and thus acts as a rigid motion on each face).
\end{definition}
Note that a proper folding must fold every edge of the crease pattern
by an angle distinct from $0$ (to be a crease) and from $\pm 180^\circ$
(to be an embedding). We call such fold angles \term{nontrivial}.
Also, one edge of a crease pattern may map to multiple creases in 3D,
because of intervening semivertices.
The key property we need from the theory developed in the previous sections
is the following consequence of Corollary~\ref{planar dammit}:
\begin{corollary} \label{interior rigid}
For any crease pattern made up of just straight edges,
any proper folding must fold the interior faces rigidly.
\end{corollary}
We start by observing that the center of the standard crease pattern
for a pleated hyperbolic paraboloid has no proper folding.
\begin{lemma} \label{four triangles}
Any crease pattern containing four right triangular faces,
connected in a cycle along their short edges,
has no rigid folding.
\end{lemma}
\begin{proof}
This well-known lemma follows from,
e.g., \cite[Lemma~9]{Donoso-O'Rourke-2002}.
For completeness, we give a proof.
Let $v_1$, $v_2$, $v_3$, and $v_4$ denote the direction vectors
of the four short edges of the triangular faces, in cyclic order.
By the planarity of the faces,
the angle between adjacent direction vectors is kept at~$90^\circ$.
Thus the fold angle of edge $i$ equals the angle
between $v_{i-1}$ and $v_{i+1}$ (where indices are treated modulo~$4$).
If edge $2$ is folded nontrivially, then
$v_1$ and $v_3$ are nonparallel and define a single plane~$\Pi$.
Because $v_2$ is perpendicular to both $v_1$ and~$v_3$,
$v_2$~is perpendicular to~$\Pi$.
Similarly, $v_4$ is perpendicular to~$\Pi$.
Thus $v_2$ and $v_4$ are parallel,
and hence edge $3$ is folded trivially.
Therefore two consecutive creases cannot both be folded nontrivially.
\end{proof}
\begin{corollary} \label{hypar center}
The standard crease pattern for a pleated hyperbolic paraboloid
(shown in Figure~\ref{hypar cp}), with $n \geq 2$ rings,
has no proper folding.
\end{corollary}
\begin{proof}
With $n \geq 2$ rings, the four central triangular faces are completely
interior. By Corollary~\ref{interior rigid}, any proper folding
keeps these faces planar. But Lemma~\ref{four triangles} forbids
these faces from folding rigidly.
\end{proof}
The standard crease pattern for a pleated hyperbolic paraboloid
cannot fold properly for a deeper reason than the central ring.
To prove this, we consider the \term{holey crease pattern}
in which the central ring of triangles has been cut out,
as shown in Figure~\ref{holey hypar}.
If there were $n$ rings in the initial crease pattern
(counting the central four triangles as one ring), then $n-1$ rings remain.
\begin{figure}
\centering
\subfigure[Holey mountain-valley pattern
for the pleated hyperbolic paraboloid. \label{holey hypar}]
{\includegraphics[scale=0.6]{hypar_holey_cp}}\hfil\hfil
\subfigure[Holey concentric pleat mountain-valley pattern.
\label{holey concentric pleat}]
{\includegraphics[scale=0.6]{concentric_holey_cp}}
\caption{Holey mountain-valley patterns which have no proper foldings.}
\label{holey}
\end{figure}
\begin{theorem}
The holey crease pattern for a pleated hyperbolic paraboloid
(shown in Figure~\ref{holey hypar}), with $n-1 \geq 3$ rings,
has no proper folding.
\end{theorem}
\begin{proof}
Consider any nonboundary square ring of the crease pattern.
By Corollary~\ref{interior rigid}, the four trapezoidal faces
each remain planar. Any folding of these four faces in fact
induces a folding of their extension to four meeting right triangles.
But Lemma~\ref{four triangles} forbids these faces from folding rigidly.
\end{proof}
A different argument proves nonfoldability of a more general
pleated crease pattern.
Define the \term{concentric pleat crease pattern}
to consist of $n$ uniformly scaled copies of a convex polygon~$P$,
in perspective from an interior point~$p$, together with the ``diagonals''
connecting $p$ to each vertex of each copy of $P$.
The outermost copy of the polygon $P$ is the boundary of the piece of paper,
and in the \term{holey concentric pleat mountain-valley pattern}
we additionally cut out a hole bounded by the innermost copy of~$P$.
Thus $n-1$ rings remain;
Figure~\ref{holey concentric pleat} shows an example.
First we need to argue about which creases can be mountains and valleys.
\begin{definition}
A \term{mountain-valley pattern} is a crease pattern together with
an assignment of signs ($+1$ for ``mountain'' and $-1$ for ``valley'')
to the edges of a crease pattern.
A \term{proper folding} of a mountain-valley pattern,
in addition to being a proper folding of the crease pattern,
must have the signs of the fold angles (relative to some canonical
orientation of the top side of the piece of paper) match the signs of the
mountain-valley assignment.
\end{definition}
\begin{lemma} \label{not all mountains}
A single-vertex mountain-valley pattern consisting of entirely mountains
or entirely valleys has no proper rigid folding.
\end{lemma}
\begin{proof}
If we intersect the piece of paper with a (small) unit sphere centered
at the vertex, we obtain a spherical polygon whose edge lengths match the
angles between consecutive edges of the crease pattern.
(Here we rely on the fact that the vertex is intrinsically flat,
so that the polygon lies in a hemisphere and thus no edges go
the ``wrong way'' around the sphere.)
The total perimeter of the spherical polygon is $360^\circ$.
Any rigid folding induces a non-self-intersecting spherical polygon,
with mountain folds mapping to convex angles and
valley folds mapping to reflex angles, or vice versa
(depending on the orientation of the piece of paper).
To be entirely mountain or entirely valley,
the folded spherical polygon must be locally convex,
and by non-self-intersection, convex.
But any convex spherical polygon (that is not planar) has perimeter
strictly less than $360^\circ$ \cite[page~265, Theorem~IV]{Halsted-1885},
a contradiction.
\end{proof}
\begin{lemma} \label{degree 4}
Consider a rigidly foldable degree-$4$ single-vertex mountain-valley
pattern with angles $\theta_1$, $\theta_2$, $\theta_3$, and $\theta_4$
in cyclic order.
Then exactly one edge of the mountain-valley pattern has sign different
from the other three, and if $\theta_2 + \theta_3 \geq 180^\circ$,
then the unique edge cannot be the one between $\theta_2$ and~$\theta_3$.
\end{lemma}
\begin{proof}
Again we intersect the piece of paper with a (small) unit sphere centered
at the vertex to obtain a spherical polygon, with edge lengths $\theta_1$,
$\theta_2$, $\theta_3$, and $\theta_4$, and whose convex angles correspond
to mountain folds and whose reflex angles correspond to valley folds,
or vice versa. By Lemma~\ref{not all mountains}, at least one vertex
is reflex, and thus the remaining vertices must be convex.
(The only non-self-intersecting spherical polygons with only two convex
vertices lie along a line, and hence have no reflex vertices.
Here we rely on the fact that the vertex is intrinsically flat,
so that the polygon lies in a hemisphere, to define the interior.)
The two edges incident to this vertex form a triangle,
by adding a closing edge.
The other two edges of the quadrilateral also form a triangle,
with the same closing edge, that strictly contains the previous triangle.
The latter triangle therefore has strictly larger perimeter than the
former triangle, as any convex spherical polygon has larger perimeter than
that of any convex spherical polygon it contains
\cite[page~264, Theorem~III]{Halsted-1885}.
The two triangles share an edge which appears in both perimeter sums,
so we obtain that the two edges incident to the reflex angle sum to less
than half the total perimeter of the quadrilateral, which is $360^\circ$.
Therefore they cannot be the edges corresponding to angles
$\theta_2$ and~$\theta_3$.
\end{proof}
Now we can prove the nonfoldability of a general concentric pleat:
\begin{theorem}
The holey concentric pleat crease pattern
(shown in Figure~\ref{holey concentric pleat}), with $n-1 \geq 4$ rings,
has no proper folding.
\end{theorem}
\begin{proof}
First we focus on two consecutive nonboundary rings of the crease pattern,
which by Corollary~\ref{interior rigid} fold rigidly.
Each degree-$4$ vertex between the two rings
has a consecutive pair of angles summing to more than $180^\circ$
(the local exterior of~$P$), and two consecutive pairs of angles summing to
exactly $180^\circ$ (because the diagonals are collinear).
By Lemma~\ref{degree 4}, the interior diagonal must be the unique crease
with sign different from the other three.
Thus all of the creases between the rings have the same sign,
which is the same sign as all of the diagonal creases in the outer ring.
Now focus on the outer ring, whose diagonal creases all have the same sign.
Any folding of the faces of a ring in fact induces a folding of their
extension to meeting triangles.
(In the unfolded state, the central point is the center $p$ of scaling.)
Thus we obtain a rigid folding of a crease pattern with a single vertex $p$
and one emanating edge per vertex of~$P$, all with the same sign.
But such a folding contradicts Lemma~\ref{not all mountains}.
\end{proof}
\section{Existence of Triangulated Hyperbolic Paraboloid}
In contrast to the classic hyperbolic paraboloid model,
we show that triangulating each trapezoidal face
and retriangulating the central ring permits folding:
\begin{theorem}
The two mountain-valley patterns in Figure~\ref{hypar triangulations},
with mountains and valleys matching the hyperbolic paraboloid
of Figure~\ref{hypar mv}, have proper foldings,
uniquely determined by the fold angle $\theta$ of the central diagonal,
that exist at least for $n = 100$ and
$\theta \in \{2^\circ,4^\circ,6^\circ,\dots,178^\circ\}$
for the alternating asymmetric triangulation,
and at least for $n$ and $\theta$ shown in Table~\ref{n vs theta}
for the asymmetric triangulation.
For each $\theta \in \{2^\circ,4^\circ,\dots,178^\circ\}$,
the asymmetric triangulation has no proper folding for $n$ larger
than the limit values shown in Table~\ref{n vs theta}.
\end{theorem}
\begin{figure}
\centering
\subfigure[Asymmetric triangulation. \label{asymmetric triangulation}]
{\includegraphics[scale=0.6]{hypar_triangulation_a}}
\hfil
\subfigure[Alternating asymmetric triangulation.
\label{alternating asymmetric triangulation}]
{\includegraphics[scale=0.6]{hypar_triangulation_b}}
\caption{Two foldable triangulations of the hyperbolic paraboloid
crease pattern (less one diagonal in the center).}
\label{hypar triangulations}
\end{figure}
\begin{table}
\centering
\begin{tabular}{c|cccccccccccccccccccc}
$\theta$ & $2^\circ$ & $4^\circ$ & $6^\circ$ & $8^\circ$ & $10^\circ$ & $12^\circ$ & $14^\circ$ & $16^\circ$ & $18^\circ$ & $20^\circ$ & $22^\circ$ & $24^\circ$ & $26^\circ$ & $28^\circ$ & $30^\circ$ & $32^\circ$
\\ \hline
$n$ & $133$ & $67$ & $45$ & $33$ & $27$ & $23$ & $19$ & $17$ & $15$ & $13$ & $13$ & $11$ & $11$ & $9$ & $9$ & $9$
\bigskip \\
$\theta$ & $34^\circ$ & $36^\circ$ & $38^\circ$ & $40^\circ$ & $42^\circ$ & $44^\circ$ & $46^\circ$ & $48^\circ$ & $50^\circ$ & $\cdots$ & $72^\circ$ & $74^\circ$ & $76^\circ$ & $\cdots$ & $176^\circ$ & $178^\circ$
\\ \hline
$n$ & $9$ & $7$ & $7$ & $7$ & $7$ & $7$ & $7$ & $5$ & $5$ & $\cdots$ & $5$ & $3$ & $3$ & $\cdots$ & $3$ & $3$
\end{tabular}
\caption{The largest $n$ for which the asymmetric triangulation has a proper
folding, for each $\theta \in \{2^\circ, 4^\circ, \dots, 178^\circ\}$.
(By contrast, the alternating asymmetric triangulation
has a proper folding for $n=100$ for all such~$\theta$.)
Interestingly, for $\theta$ not too large,
$n \cdot \theta$ is roughly $270^\circ$.}
\label{n vs theta}
\end{table}
\begin{proof}
The proof is by construction: we give a construction which implies
uniqueness, and then use the resulting algorithm to construct the explicit
3D geometry using interval arithmetic and a computer program.
To get started, we are given the fold angle $\theta$ between the two
triangles of the central square. By fixing one of these triangles in
a canonical position, we obtain the coordinates of the central square's
vertices by a simple rotation.
We claim that all other vertices are then determined by a sequence of
intersection-of-three-spheres computations from the inside out.
In the asymmetric triangulation of Figure~\ref{asymmetric triangulation},
the lower-left and upper-right corners of each square have three known
(creased) distances to three vertices from the previous square.
Here we use Theorem~\ref{straight creases stay straight} which guarantees that
the creases remain straight and thus their endpoints have known distance.
Thus we can compute these vertices as the intersections of three spheres
with known centers and radii. Afterward, the lower-right and upper-left
corners of the same square have three known (creased) distances to three
known vertices, one from the previous square and two from the current square.
Thus we can compute these vertices also as the intersection of three spheres.
In the alternating asymmetric triangulation of
Figure~\ref{alternating asymmetric triangulation}, half of the squares behave the same,
and the other half compute their corners in the opposite order
(first lower-right and upper-left, then lower-left and upper-right).
The intersection of three generic spheres is zero-dimensional, but in general
may not exist and may not be unique. Specifically, the intersection of
two distinct spheres is either a circle, a point, or nothing;
the further intersection with another sphere whose center is not
collinear with the first two spheres' centers is either two points,
one point, or nothing. When there are two solutions, however,
they are reflections of each other through the plane containing the
three sphere centers. (The circle of intersection of the first two
spheres is centered at a point in this plane, and thus the two
intersection points are equidistant from the plane.)
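This is the standard trilateration computation, whose derivation
\cite{Trilateration-wiki} leads to a short closed form.
A minimal Python sketch (illustrative only; our actual implementation is the
Mathematica code of Appendix~\ref{mathematica}), assuming the three centers
are not collinear:
\begin{verbatim}
import numpy as np

def sphere_intersections(c1, r1, c2, r2, c3, r3):
    # Build a local frame: ex along c1->c2, ey in the plane of the
    # three centers, ez normal to that plane.
    ex = (c2 - c1) / np.linalg.norm(c2 - c1)
    i = np.dot(ex, c3 - c1)
    ey = c3 - c1 - i * ex
    ey = ey / np.linalg.norm(ey)
    ez = np.cross(ex, ey)
    d = np.linalg.norm(c2 - c1)
    j = np.dot(ey, c3 - c1)
    # Coordinates of the solutions in the local frame.
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
    z2 = r1**2 - x**2 - y**2
    if z2 < 0:
        return []          # the three spheres do not meet
    z = np.sqrt(z2)
    base = c1 + x * ex + y * ey
    # The two solutions are mirror images through the plane of the
    # centers, as noted in the text.
    return [base + z * ez, base - z * ez]
\end{verbatim}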
For the hyperbolic paraboloid, we can use the mountain-valley assignment
(from Figure~\ref{hypar mv}) to uniquely determine which intersection to
choose. In the first intersection of three spheres, one solution would make
two square creases mountains (when the solution is below the plane) and the
other solution would make those creases valleys. Thus we choose whichever
is appropriate for the alternation. In the second intersection of three
spheres, one solution would make a diagonal crease mountain, and the other
solution would make that crease valley. Again we choose whichever is
appropriate for the alternation. Therefore the folding is uniquely
determined by $\theta$ and by the mountain-valley assignment of the
original hyperbolic paraboloid creases.
This construction immediately suggests an algorithm to construct the
proper folding. The coordinates of intersection of three spheres can be
written as a radical expression in the center coordinates and radii
(using addition, subtraction, multiplication, division, and square roots).
See \cite{Trilateration-wiki} for one derivation;
Mathematica's fully expanded solution for the general case
(computed with \texttt{Solve}) uses over 150,000 leaf expressions
(constant factors and variable occurrences).
Thus, if the coordinates of the central square can be represented
by radical expressions (e.g., $\theta$ is a multiple of $15^\circ$),
then all coordinates in the proper folding can be so represented.
Unfortunately, we found this purely algebraic approach to be
computationally infeasible beyond the second square; the expressions
immediately become too unwieldy to manipulate (barring some
unknown simplification which Mathematica could not find).
Therefore we opt to approximate the solution, while still guaranteeing
that an exact solution exists, via \emph{interval arithmetic}
\cite{Hayes-2003-interval,Moore-Kearfott-Cloud-2009,Alefeld-Herzberger-1983}.
The idea is to represent every coordinate $x$ as an interval $[x_L,x_R]$
of possible values, and use conservative estimates in every arithmetic
operation and square-root computation to guarantee that the answer
is in the computed interval.
For example, $[a,b]+[c,d] = [a+c,b+d]$ and
$[a,b] \cdot [c,d] = [\min\{a \cdot c, a \cdot d, b \cdot c, b \cdot d\},
\max\{a \cdot c, a \cdot d, b \cdot c, b \cdot d\}]$,
while $\sqrt{[a,b]}$ requires a careful implementation of a square-root
approximation algorithm such as Newton's Method.
The key exception is that $\sqrt{[a,b]}$ is undefined when $a < 0$.
A negative square root is the only way that the intersection of three
spheres, and thus the folding, can fail to exist.
If we succeed in computing an approximate folding using interval
arithmetic without attempting to take the square root of a partially
negative interval, then an exact folding must exist.
Once constructed, we need only check that the folding does not
intersect itself (i.e., forms an embedding).
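A minimal sketch of these interval primitives, in Python with illustrative
names (a faithful implementation must additionally round lower endpoints
down and upper endpoints up in floating point):
\begin{verbatim}
from math import sqrt

class Interval:
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def sqrt(self):
        # A partially negative interval is exactly the failure mode
        # by which the folding construction can cease to exist.
        if self.lo < 0:
            raise ValueError("sqrt of a partially negative interval")
        return Interval(sqrt(self.lo), sqrt(self.hi))
\end{verbatim}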
We have implemented this interval-arithmetic construction in Mathematica;
refer to Appendix~\ref{mathematica}.
Using sufficiently many (between 1,024 and 2,048)
digits of precision in the interval computations,
the computation succeeds for the claimed ranges of $n$ and~$\theta$
for both triangulations.
Table~\ref{precision} shows how the required precision grows with~$n$
(roughly linearly), for a few different instances.
Figure~\ref{triangulated hypar} shows some of the computed structures
(whose intervals are much smaller than the drawn line thickness).
The folding construction produces an answer
for the asymmetric triangulation even for $n=100$ and
$\theta \in \{2^\circ, 4^\circ, \dots, 178^\circ\}$,
but the folding self-intersects for $n$ larger
than the limit values shown in Table~\ref{n vs theta}.
\end{proof}
\begin{table}
\centering
\begin{tabular}{l|rrrrrrrr}
digits of precision & $16$ & $32$ & $64$ & $128$ & $256$ & $512$ & $1024$ & $2048$
\\ \hline
$n$ for $\theta=1^\circ$ & $3$ & $6$ & $12$ & $22$ & $41$ & $76$ & $\geq 100$
\\
$n$ for $\theta=1^\circ$ alt.& $3$ & $6$ & $12$ & $24$ & $43$ & $79$ & $\geq 100$
\\
$n$ for $\theta=45^\circ$ alt.& $3$ & $5$ & $10$ & $18$ & $32$ & $58$ & $\geq 100$
\\
$n$ for $\theta=76^\circ$ alt.& $2$ & $5$ & $9$ & $16$ & $29$ & $53$ & $95$ & $\geq 100$
\end{tabular}
\caption{Number $n$ of triangulated rings that can be successfully
constructed using various precisions (measured in digits) of interval
arithmetic.}
\label{precision}
\end{table}
We conjecture that this theorem holds for all $n$ and all
$\theta < 180^\circ$ for the alternating asymmetric triangulation,
but lack an appropriately general proof technique.
If the construction indeed works for all $\theta$ in some interval
$[0,\Theta)$, then we would also obtain a continuous folding motion.
\begin{figure}
\centering
\subfigure[Asymmetric triangulation, $\theta = 8^\circ$, $n=16$.]
{\quad\includegraphics[scale=0.3,clip,trim=15 260 10 255]{example_asymmetric}\quad}\hfil
\subfigure[Alternating asymmetric triangulation, $\theta = 30^\circ$, $n=16$.
\label{example alternating}]
{\includegraphics[scale=0.4,clip,trim=130 20 130 20]{example_alternating}}
\caption{Proper foldings of triangulated hyperbolic paraboloids.}
\label{triangulated hypar}
\end{figure}
Interestingly, the diagonal cross-sections of these structures seem to
approach a parabola in the limit. Figure~\ref{zigzagall} shows the
$x = y \geq 0$ cross-section of the example from
Figure~\ref{example alternating}, extended out to $n=100$.
The parabolic fit we use for each parity class is the unique quadratic
polynomial passing through the three points in the parity class farthest
from the center. The resulting error near the center is significant, but
still much smaller than the diagonal crease length,~$\sqrt 2$.
Least-squares fits reduce this error
but do not illustrate the limiting behavior.
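The fit itself is elementary; a short sketch (illustrative) of the
three-point quadratic interpolation used for each parity class:
\begin{verbatim}
import numpy as np

def quadratic_through(points):
    # Unique quadratic through three (x, y) points; with exactly
    # three points, polyfit interpolates rather than approximates.
    xs, ys = zip(*points)
    return np.polyfit(xs, ys, 2)   # coefficients for np.polyval
\end{verbatim}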
\begin{figure}
\centering
\begin{minipage}{1.9in}
\subfigure[Actual zig-zag and parabolic fits.]
{\quad\includegraphics[width=1.8in]{zigzagfit}\quad}
\end{minipage}\hfill
\begin{minipage}{4in}
\subfigure[Absolute difference: fit minus actual.]
{\includegraphics[width=4in]{zigzagabs}}
\subfigure[Relative difference: fit over actual.]
{\includegraphics[width=4in]{zigzagrel}}
\end{minipage}
\caption{Planar cross-section of alternating asymmetric triangulation,
$\theta = 30^\circ$, $n=100$, with parabolic fits of each parity
class based on the last three vertices.}
\label{zigzagall}
\end{figure}
\section{Smooth Hyperbolic Paraboloid}
Given a smooth plane curve $\Gamma$ and an embedding of $\Gamma$ in
space as a smooth space curve $\gamma$, previous work
\cite{FuchsTabachnikov} has studied the problem of folding a strip of
paper so that a crease in the form of $\Gamma$ in the plane follows
the space curve $\gamma$ when folded. The main theorem from this work
is that such a folding always exists, at least for a sufficiently
narrow strip about $\Gamma$, under the condition that the curvature of
$\gamma$ be everywhere strictly greater than that of $\Gamma$.
Further, with some differential geometry described in
\cite{FuchsTabachnikov}, it is possible to write down exactly how the
strip folds in space; there are always exactly two possible choices,
and additionally two ways to fold the strip so that $\Gamma$ lies
along $\gamma$ but remains uncreased.
Based on some preliminary work using these techniques, we conjecture
that the circular pleat indeed folds, and that so too does any similar crease
pattern consisting of a concentric series of convex smooth curves.
Unfortunately a proof remains elusive. Such a proof would be the first
proof to our knowledge of the existence of any curved-crease origami model,
beyond the local neighborhood of a single crease.
\section*{Acknowledgments}
We thank sarah-marie belcastro, Thomas Hull, and Ronald Resch for
helpful related discussions over the years.
We also thank Jenna Fizel for folding and photographing the models
in Figures~\ref{hypar photo} and~\ref{circular photo}.
\section{Introduction}
When making predictions in the framework of the Minimal
Supersymmetric Extension of the Standard Model (MSSM), one encounters
parameter freedom which is mainly due to the so-called soft SUSY
breaking terms.
To restrict this freedom and get more
predictive power, one usually adopts the universality hypothesis,
which assumes equality of the soft terms at
some high energy scale (most common is the GUT scale, at which the
gauge couplings are unified).
Under this assumption one is left with 5 free parameters,
and a thorough analysis of the MSSM mass spectrum in the universal case
has been performed \cite{uniMSSM}.
However, the latest experimental results, first of all the recent LEP II
limits on the lightest Higgs boson mass \cite{LEP_new_results},
suggest that this minimal scenario might not work in practice.
In the present paper we try to clarify
what one expects to gain by relaxing the universality
conditions in the MSSM, allowing more freedom for the soft terms,
with emphasis on the Higgs boson mass predictions.
In order to see how the non-universality at the GUT scale can
change the predictions at the low energy scale, we use recently
obtained analytical solutions to the renormalization group (RG)
equations for the Yukawa couplings \cite{moultaka,KM}.
By means of Grassmannian expansion they allow one to derive
analytical expressions for soft term evolution
\cite{kaz_avd_kond,kaz_physlettB} and trace analytically the
dependence of the soft terms at the $M_Z$ scale on their boundary
values.
In this way, one can assess both the expediency of various
simplifying hypotheses concerning the GUT scale values of the soft
parameters and the role of non-universality in the context of the
MSSM.
The parameter space of the MSSM can further be narrowed using the
so-called infrared quasi-fixed point (IRQFP) behaviour of some
parameters \cite{Hill}, i.e. the independence of the low energy values
from the initial conditions at the GUT scale.
Using the analytical results along with
the numerical ones, one can keep under control how the
IRQFP strategy works. In what follows we adopt the method
advocated in Refs.~\cite{YJK,JK} and apply the above-mentioned tips
in making predictions for the lightest Higgs boson mass in the MSSM.
The paper is organized as follows.
In Sec.~\ref{ir_analytic}, we analyze the IRQFP behaviour of soft
parameters with the help of analytical solutions.
In Sec.~\ref{lowtan}, the low $\tan\beta$ scenario with
non-universality is investigated, and prediction for Higgs boson
mass is given.
The same analysis is conducted in
Sec.~\ref{bigtan} for the case of large $\tan\beta$.
Our concluding remarks are presented in Sec.~\ref{the_end}.
\section{
Analytical solutions of RG equations for the soft terms and IRQFPs
}
\label{ir_analytic}
As has been recently shown \cite{moultaka}, the RGEs for Yukawa
couplings can be solved analytically by means of an iteration
procedure.
Following the recipe advocated in \cite{KM,kaz_physlettB},
the expressions for soft parameters can be
derived from analytical solutions of Yukawa coupling RGEs.
Below we provide a brief analysis of analytical solutions for the
soft parameters, emphasizing the difference between universal and
non-universal cases.
For low $\tan\beta$ an exact analytical solution is known
\cite{exactlow,bottom-up,kaz_cod}; so we discuss just the case of
large $\tan\beta$ when all the Yukawa couplings are essential.
Consider solutions to the RG equations for the soft terms.
Since the triple scalar couplings $A_{t,b,\tau}$ and gaugino masses
$M_i$ have a dimension of a mass and the squark, slepton and Higgs
mass terms have a dimension of a mass squared, the corresponding
RG equations are linear with respect to these parameters, and
their solutions can be represented in the form \cite{KM}
\begin{eqnarray}
\hspace*{-10mm}
A_{l}(t)&=&\!\sum_{j=t,b,\tau}a_{lj}(t)A_{0j}+
\!\!\!\sum_{k=1,2,3}b_{lk}(t)M_{0k}\, ,\qquad l=t,b,\tau
\label{AS} \\
m^2_{n}(t)&=&\!
\sum_{i,j=1,2,3}\!c_{ij(n)}(t)M_{0i}M_{0j}+
\!\!\!\!\!\!\sum_{i,j=t,b,\tau}\!d_{ij(n)}(t)A_{0i}A_{0j}
+\!\!\!\!\!
\sum_{\displaystyle\stackrel{ i=t,b,\tau}{\stackrel{j=1,2,3}{}}}
\!e_{ij(n)}(t)A_{0i}M_{0j}
+\!\sum_{q}k_{q(n)}(t)m^2_{0q}\nonumber
\end{eqnarray}
where $m^2_{n}$ represent the squark, slepton and
Higgs mass terms,
$n,q=Q_3,U_3,D_3,H_1,H_2,E_3,L_3$ and $m^2_{0n}$, $A_{0k}$, $M_{0j}$ ,
($k=t,b,\tau ,\,j=1,2,3$) are the initial values of the parameters.
In one loop order the coefficients of eq.(\ref{AS}) can be
calculated within the iteration procedure described in \cite{KM}.
We have calculated them up to the sixth iteration, which yields
an accuracy of 1\%.
Evaluated at the $M_Z$ scale, which corresponds to
$t=\log M_{GUT}^2/M_Z^2\approx 66$, they depend on the
initial values of Yukawa couplings.
In what follows we use the
notation $Y_k\equiv h_k^2/16\pi^2 \ (k=t,b,\tau)$ and ${\mbox a}_i
\equiv \alpha_i/4\pi \equiv g_i^2/16\pi^2 \ (i=1,2,3)$.
To test the behaviour of the coefficients, we take several sets
of Yukawa couplings $Y^0_{t,b,\tau}$ at the GUT scale in the range
$(0.5\div25){\mbox a}_0$ with some arbitrary ratios of
$Y_b/Y_t$ and $Y_\tau/Y_t$ (${\mbox a}_0$ is the common value of
the gauge couplings at the GUT scale).
The upper bound for $Y^0_i$ is taken to preserve
perturbativity up to the GUT scale; the lower one keeps us in
the large $\tan\beta$ regime.
In Fig.\ref{a_mssmbig} we plot the coefficients of eq.(\ref{AS})
as functions of $Y^0_t/{\mbox a}_0$.
For fixed $Y^0_t$ different points for a given coefficient
correspond to different relative ratios $Y^0_t/Y^0_{b,\tau}$.
In this section for illustration we consider three extreme cases:
$Y^{0}_{t}=Y^{0}_{b}=Y^{0}_{\tau}$,
$Y^{0}_{t}=Y^{0}_{b}=10Y^{0}_{\tau}$ and
$Y^{0}_{b}=Y^{0}_{\tau}=(1/10)Y^{0}_{t}$,
to demonstrate the relative insignificance of the non-universality
of the Yukawa couplings.
\begin{figure*}[p]
\hspace*{-.02\textwidth}
\includegraphics[width=.52\textwidth]{hatopc.eps}
\hspace*{-.036\textwidth}
\includegraphics[width=.52\textwidth]{hatauc.eps}\\
\hspace*{-.02\textwidth}
\includegraphics[width=.52\textwidth]{hmqc.eps}
\hspace*{-.036\textwidth}
\includegraphics[width=.52\textwidth]{hmdc.eps} \\
\hspace*{-.02\textwidth}
\includegraphics[width=.52\textwidth]{hmh1c.eps}
\hspace*{-.036\textwidth}
\includegraphics[width=.52\textwidth]{hmec.eps}
\caption{
The dependence of the coefficients of eq.(\ref{AS}) on
the values of Yukawa couplings.
The coefficients are computed at $t=66$ ($M_Z$ scale).
For a given $Y^{0}_{t}$ the plotted points
correspond to three different sets of $Y^{0}_{b},Y^{0}_{\tau}$, namely
1): $Y^{0}_{t}=Y^{0}_{b}=Y^{0}_{\tau}$,
2): $Y^{0}_{t}=Y^{0}_{b}=10Y^{0}_{\tau}$,
3): $Y^{0}_{b}=Y^{0}_{\tau}=(1/10)Y^{0}_{t}$.
Some points for a given $Y^0_{t}$ may overlap on the plot.
To keep the plots readable, no connecting lines between points
are drawn; they can be easily recovered due to the smooth behaviour.
For the same reason the coefficients that are close to zero
are not shown. }
\label{a_mssmbig}
\end{figure*}
Indeed, one can see that for $Y^0_{t}\ge 2{\mbox a}_0$ and regardless of the
ratios $Y^0_t/Y^0_{b,\tau}$ (but still remaining in the range corresponding
to large $\tan\beta$) the coefficients of soft breaking parameters in
$A_{t,b,\tau}$ approach the asymptotic values equal to their IRQFPs.
Since at the same scale one has
$M_1(M_Z)=0.412\ M_{01},\,\, M_2(M_Z)=0.822\
M_{02},\,\,M_3(M_Z)=2.85 \ M_{03}$, we conclude that the ratios
$A_{t,b}/M_3$ exhibit the proper IRQFP behaviour as used in Ref.\cite{JK}.
Hence, non-universality of the soft terms changes almost nothing
in the IRQFP behaviour for $A_t$ and $A_b$ because the coefficient
of $M_{03}$ is bigger than the others by a factor of 5 or more.
One should also note that non-universality of the Yukawa couplings
has a weak effect on the values of the soft terms coefficients.
On the plot for $A_t$ (the same is true for $A_b$) for a given
coefficient the three points corresponding to various ratios of
Yukawa couplings almost overlap.
For $A_\tau$ all the coefficients but those of $A_{0t}$ and $M_{01}$ have
comparable non-vanishing values and the IRQFP is less stringent.
There is also a stronger dependence on the relative ratios of the
Yukawa couplings.
Fortunately, $A_\tau$ does not play any significant role in the
Higgs mass prediction.
The same observations hold true for the soft masses.
Taking the values of the Yukawa couplings at the GUT scale as above,
we have found that for $m^2_{Q_3}$, $m^2_{U_3}$ and $m^2_{D_3}$
the dominance of the gluino mass $M^2_{03}$ is obvious and
non-universality changes nothing compared with the universal case
(see the $m^2_{Q_3}$ and $m^2_{D_3}$ plots in Fig.\ref{a_mssmbig}).
In the expression for $m^2_{H_1}$ the coefficients of some scalar
masses are opposite in sign and of the same magnitude ($m^2_{0H_1}$
comes with a '+' sign while $m^2_{0Q_3}$ and $m^2_{0D_3}$ come with a '$-$' sign),
and the same is true for $m^2_{H_2}$ (with $D \to U$, $H_1 \to H_2$).
In the case of universal boundary
conditions, for the scalar masses $m^2_{H_1}$ and $m^2_{H_2}$ the only
dependence on the initial conditions left is that on the gluino mass
$M^2_{03}$ since the scalar mass $m^2_{0n}$
contributions cancel each other.
In the non-universal case, for some particular choices of
initial conditions, a residual dependence on the scalar masses
may appear. Nevertheless, one can still rely on the asymptotic
plateau of the coefficients at large Yukawa couplings.
Again, one observes that for $Y_t^0> 2{\mbox a}_0$
the coefficients approach some asymptotic values and the
dependence on the non-universality of the Yukawa couplings is rather feeble
for $m^2_{Q_3}$, $m^2_{U_3}$ and $m^2_{D_3}$, and small enough
for $m^2_{H_1}$ and $m^2_{H_2}$. The residual dependence of
the coefficients of $M^2_{03}$ in the latter case arises because
we leave the large Yukawa region:
for $Y^0_t=2{\mbox a}_0$ we have $Y^0_{b,\tau}=0.2{\mbox a}_0$,
which does not ensure the IRQFP regime.
The masses of the sleptons $m^2_{E_3}$ and $m^2_{L_3}$ exhibit
a rather fuzzy picture (see the last plot in Fig.\ref{a_mssmbig}).
The coefficient of $M^2_{03}$ is no longer the leading one; instead
we have large contributions from $m^2_{0E_3}$ and $m^2_{0L_3}$.
Here some coefficients are negative,
thus leading to negative values of e.g. $m^2_{E_3}$ if
$m^2_{0L_3}$ and $m^2_{0H_1}$ are big enough.
The requirement that the slepton masses be positive imposes
bounds on the choice of non-universality.
In our analysis below we take the relative ratios of
the soft masses at the GUT scale in the range $0.5\div 2$,
which ensures $m^2_{E_3}>0$ for most regions
of the parameter space.
These bounds are in agreement with those
obtained in \cite{bottom-up} in the bottom-up approach.
In the universal case both $m^2_{E_3}$
and $m^2_{L_3}$ are positive due to the cancellations between
different soft terms.
On the plots in Fig.\ref{a_mssmbig}, only those parameters which
have non-negligible coefficients are shown.
Thus, we come to the following conclusions:
i) if the Yukawa couplings at the GUT scale are large enough
($> 2 {\mbox a}_0$) the coefficients of eqs.(\ref{AS}) for the soft
terms at the low energy scale approach their asymptotic values for both
the universal and non-universal boundary conditions, independently
of the relative ratios of the Yukawa couplings;\\[2mm]
ii) for $A_t,A_b,m^2_{Q3},m^2_{U3},m^2_{D3},m^2_{H1}$ and
$m^2_{H2}$ at the $M_Z$ scale the coefficient of $M_{03}\ (M^2_{03})$
dominates, the IRQFP behaviour is substantiated and can be used
for both the universal and non-universal boundary conditions.
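As a numerical cross-check of the gaugino mass ratios quoted above,
recall that at one loop $M_i(t)/\alpha_i(t)$ is RG-invariant, so
$M_i(M_Z)/M_{0i}=\alpha_i(M_Z)/\alpha_{GUT}$. A short Python sketch
(here $\alpha_{GUT}\approx 1/24$ is an assumed illustrative value,
while $t\approx 66$ is as in the text):
\begin{verbatim}
from math import pi

alpha_gut = 1.0 / 24.0
t = 66.0                               # log(M_GUT^2 / M_Z^2)
b = {1: 33.0 / 5.0, 2: 1.0, 3: -3.0}   # MSSM one-loop coefficients

for i in (1, 2, 3):
    inv_alpha = 1.0 / alpha_gut + b[i] * t / (4.0 * pi)
    ratio = (1.0 / inv_alpha) / alpha_gut   # = M_i(M_Z) / M_0i
    print(i, round(ratio, 3))
# prints roughly 0.41, 0.82 and 2.9, in line with the ratios
# quoted above (the small offset in M_3 is a two-loop effect)
\end{verbatim}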
\section{Low $\tan\beta$ Scenario}\label{lowtan}
We begin our analysis of the influence of non-universality on the
mass of the lightest Higgs boson in the low $\tan\beta$ case.
The present
approach is based on our previous papers \cite{YJK,JK}, where we
investigated the mass spectrum of sparticles and Higgs
bosons using the concept of infrared quasi-fixed points
(IRQFPs) under the
assumption of universality of the soft supersymmetry
breaking parameters. The concept of IRQFPs, introduced
in \cite{Hill}, has been widely employed in the literature
\cite{YJK,JK,WinnyPuh_i_vse_vse_vse,nath_all,HLR}. It allows one
to find the values of the relevant parameters at the $M_Z$ scale
without exact knowledge of their initial values. The validity of
the fixed points is clearly demonstrated in the previous section.
This analysis gives us important information about the weight
of the various initial parameters in the low energy
calculations and, finally, in the calculation of the mass spectrum.
In our previous papers \cite{YJK,JK}, we concluded that all
Higgs bosons in the MSSM, except the lightest CP-even one, are
too heavy to play an important role in near-future experiments;
therefore, in the present paper we concentrate on the
lightest Higgs boson only.
As input parameters we take the known value of the top-quark pole
mass, $m_t^{pole}=173.8 \pm 5.2$ GeV \cite{top}, the experimental
values of the gauge couplings \cite{top} $\alpha_3=0.120 \pm
0.005, \ \alpha_2=0.034,\ \alpha_1=0.017$ and the sum of the
Higgs vev's squared $v^2 = v_1^2+v_2^2 \approx (174.1\ \mbox{GeV})^2$. We
use the approximate and/or numerical solutions of the relevant RG
equations to evaluate the fixed point values of the mass
parameters. To calculate the mass of the lightest Higgs boson, one
also needs to know the ratio $v_2/v_1$ known as $\tan\beta$ and
the mass parameter $\mu$. To determine $\tan\beta$, we use the
well-known relation between the top-quark running mass, top Yukawa
coupling and $\sin\beta$
\begin{equation}
m_t=h_t v \sin{\beta}. \label{tm}
\end{equation}
The top-quark running mass is found from the pole mass taking into
account QCD and SUSY corrections \cite{mtop1,mtop2} as (for
details see Refs.\cite{YJK,JK})
\begin{equation}
m_t(m_t)=\frac{m_t^{pole}}{1+ \left({\displaystyle\frac{\Delta
m_t}{m_t}}\right)_{QCD} + \left({\displaystyle\frac{\Delta
m_t}{m_t}}\right)_{SUSY}}. \label{pole}
\end{equation}
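As an illustration of eq.(\ref{pole}), the following sketch keeps only the
leading one-loop QCD term $(\Delta m_t/m_t)_{QCD}=4\alpha_3/3\pi$ and treats
the SUSY correction as an adjustable placeholder; the full corrections are
those of Refs.\cite{mtop1,mtop2}:
\begin{verbatim}
from math import pi

m_t_pole = 173.8
alpha_3 = 0.108                   # alpha_3(m_t), illustrative
qcd = 4.0 * alpha_3 / (3.0 * pi)  # leading one-loop QCD term
susy = 0.025                      # placeholder SUSY correction
print(round(m_t_pole / (1.0 + qcd + susy), 1))   # ~162 GeV
\end{verbatim}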
The results depend on the sign of the $\mu$ parameter which enters the
mixing terms in the stop sector. For $\mu>0$, one obtains
$m_t(m_t) = 162 \pm 5$ GeV. Negative values of $\mu$ lead to a Higgs
boson that is too light, and we do not consider this case here.
Now we can estimate the value of $\tan\beta$. We assume that
the top Yukawa coupling is close to its IRQFP. This is realized
when $ \rho_{t0}=Y_t^0/{\mbox a}_0 >2$. Then one gets
$h_t(M_Z)=1.09\div 1.14$ for $2< \rho_{t0} <25$.
As the central value of the top Yukawa coupling we take
$h_t(M_Z)=1.12$, which corresponds
to $\rho_{t0}=5$. This gives the following value of $\tan\beta$:
$$\tan\beta = 1.47 \pm 0.1 \pm 0.15 \pm 0.05, \ \ \ \ \mu>0 . $$
The deviations from the central value are connected with the
deviation from the fixed point value of the Yukawa coupling and
the experimental uncertainties in the top-quark mass and
$\alpha_3(M_Z)$, respectively.
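For illustration, a minimal numerical sketch of this determination
from eq.(\ref{tm}), using the central values quoted above:
\begin{verbatim}
from math import asin, tan

h_t, v, m_t = 1.12, 174.1, 162.0   # central values from the text
beta = asin(m_t / (h_t * v))
print(round(tan(beta), 2))         # ~1.49, consistent with 1.47
                                   # within the quoted uncertainties
\end{verbatim}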
The Higgs mixing parameter $\mu$ can be determined from the
requirement of radiative EWSB and can be found from the Higgs
potential minimization condition. In contrast with our previous
paper \cite{YJK} (where we took into account only the tree-level
minimization condition), we include here the one-loop corrections.
This makes a difference of about 2 GeV for the lightest Higgs
boson mass. The one-loop minimization condition reads
\begin{equation}
\frac{M_Z^2}{2}+\mu^2=\frac{m_{H_1}^2+\Sigma_1-
(m_{H_2}^2+\Sigma_2) \tan^2\beta}{\tan^2\beta-1}\,,
\label{MZC}
\end{equation}
where $\Sigma_1$ and $\Sigma_2$ are the one-loop corrections
\cite{Gl}, $M_Z$ is the $Z$-boson mass and $m_{H_1}^2$ and
$m_{H_2}^2$ are the Higgs soft mass parameters which are
determined by solutions of the RG equations. The latter possess
the IRQFPs which we use in our analysis. The above equation
allows one to obtain the absolute value of $\mu$. The sign of
$\mu$ remains a free parameter; however, as has already been
mentioned, negative values of $\mu$ give too small values of the
lightest Higgs boson mass and are excluded experimentally
\cite{LEP_new_results}.
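A sketch of this determination at tree level (i.e. with
$\Sigma_1=\Sigma_2=0$); the soft Higgs masses supplied here would come
from the RG solutions, and any concrete inputs are placeholders:
\begin{verbatim}
from math import sqrt

def mu_abs(m2_H1, m2_H2, tan_beta, M_Z=91.19):
    # Tree-level version of the minimization condition above.
    t2 = tan_beta**2
    mu2 = (m2_H1 - m2_H2 * t2) / (t2 - 1.0) - M_Z**2 / 2.0
    if mu2 < 0:
        raise ValueError("no radiative EWSB for these inputs")
    return sqrt(mu2)   # the sign of mu remains a free parameter
\end{verbatim}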
In the MSSM, the Higgs sector consists of five physical states: two
neutral CP-even scalars $h$ and $H$, one neutral CP-odd scalar
$A$, and two charged Higgs scalars $H^{\pm}$. We concentrate on
the mass of the lightest Higgs boson $h$. At the tree level, the
mass of $h$ is smaller than the mass of $Z$-boson, $M_Z$, but the
loop corrections increase it. In general, the mass matrix for the
CP-even neutral Higgs bosons looks like
\begin{eqnarray}
{\mathcal M}\!\!&=&\!\!\left(\!\!\begin{array}{cc}\tan\beta & -1\\ -1 &
\cot\beta
\end{array}\!\! \right)m^2_A\cos\beta\sin\beta
+ \left(\!\!\begin{array}{cc}\cot\beta & -1\\ -1 & \tan\beta
\end{array}\!\!\right)M^2_Z\cos\beta\sin\beta
+ \left(\!\!\begin{array}{cc} \Delta_{11} & \Delta_{12}\\
\Delta_{12} & \Delta_{22}
\end{array}\!\!\right) \label{h}
\end{eqnarray}
where $m_A$ is the mass of the CP-odd Higgs boson and the $\Delta$'s
are the radiative corrections~\cite{cor}. These corrections depend
on stop masses which are given by the following equation:
\begin{eqnarray}
\tilde m_{t_{1,2}}^{2}&=&\frac{1}{2} \big[\tilde m_{t_L}^{2}
+\tilde m_{t_R}^{2} \mp \sqrt{(\tilde m_{t_L}^{2} - \tilde
m_{t_R}^{2})^2 +4 m_{t}^{2} (A_t-\mu \, \cot \beta)^2} \big] \,,
\label{stop}
\end{eqnarray}
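At tree level (all $\Delta$'s set to zero) the diagonalization of the
matrix (\ref{h}) can be sketched as follows; the one-loop $\Delta$'s,
which depend on the stop masses of eq.(\ref{stop}), must be added to
obtain a realistic $m_h$:
\begin{verbatim}
import numpy as np

def mh_tree(m_A, tan_beta, M_Z=91.19):
    beta = np.arctan(tan_beta)
    cs = np.cos(beta) * np.sin(beta)
    M = (m_A**2 * cs * np.array([[tan_beta, -1.0],
                                 [-1.0, 1.0 / tan_beta]])
         + M_Z**2 * cs * np.array([[1.0 / tan_beta, -1.0],
                                   [-1.0, tan_beta]]))
    # The lightest CP-even mass is the smaller eigenvalue; at tree
    # level it is bounded by M_Z |cos(2 beta)|.
    return np.sqrt(min(np.linalg.eigvalsh(M)))
\end{verbatim}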
\begin{figure}[t]
\includegraphics[width=.5\textwidth]{cfig1a.eps}
\hspace*{-.015\textwidth}
\includegraphics[width=.5\textwidth]{cfig1b.eps} \\
\vspace*{-10mm}
\caption{ The mass of the lightest Higgs boson as a function of
$M_{SUSY}$. The dashed (central) line corresponds to the central
values of the parameters, the dash-dotted lines correspond to the upper
and lower limits in the case of universal boundary conditions and
the solid lines are absolute upper and lower limits on the mass of
the lightest Higgs boson in the non-universal case (left).
The influence of variations from central
values of the individual parameters as well as their collective
effect on the mass of the lightest Higgs boson in both the
universal and non-universal cases at a typical scale $M_{SUSY}=1$
TeV (right). \label{fm}}
\end{figure}
To find the Higgs boson mass, one has to diagonalize the mass
matrix (\ref{h}). In our previous paper \cite{YJK} we
estimated the mass of the lightest Higgs boson exploiting the
IRQFPs in the case of universal boundary conditions. In the present
paper, we also use the concept of IRQFPs but relax the
universality conditions and allow moderate deviations from
universality. In view of the analysis of the previous section, we
expect that the main influence of non-universality comes from the
initial values of the Higgs masses $m^2_{H_1}$ and $m^2_{H_2}$ and
that of $m^2_{U}$ while those of $m^2_{Q}$ and gauginos are of
minor importance. In the numerical analysis,
we consider the following intervals for the top Yukawa coupling
and soft breaking parameters at the GUT scale:
$Y_{t}^0/{\mbox a}_0 \in <2,25>$, $A_{0t}/M_{03} \in
<-1,1>$, $m_{0i}^2/M_{03}^2 \in <0.25,4>$ and $M_{0j}/M_{03} \in
<0.5,2>$, where $i=(Q_3, U_3, H_1, H_2)$ and $j=1,2$.
In Fig.\,\ref{fm}, the dependence of the mass of the
lightest Higgs boson $m_h$ on the geometric mean of stop masses
$\sqrt{\tilde m_{t_1} \tilde m_{t_2}}$ is shown,
which is often identified
with the supersymmetry breaking scale $M_{SUSY}$. One can see
obvious saturation of the Higgs mass when $M_{SUSY} \geq 500$
GeV. The central (dashed) line corresponds to the central values of
the parameters. We take them as follows: $Y_{t}^0/{\mbox a}_0 =5$,
$A_{0t}/M_{03}=0$, $m_{0}^2/M_{03}^2=1$ for all scalar masses and
$M_{0j}/M_{03}=1$ for gaugino masses $M_{01}$ and $M_{02}$. If
one assumes the universality, the mass of the lightest Higgs boson
at a typical scale $M_{SUSY}=1$ TeV ($\mu>0$) is
\begin{equation}
m_h=92.7 \ ^{\displaystyle +3.8}_{\displaystyle -1.9}\ \pm5\
\pm0.4 \ \mbox{GeV}, \ \ \ \mbox{ for} \ M_{SUSY}=1 \ TeV.
\label{mass}
\end{equation}
The first uncertainty is given by the deviations from central
values of the top Yukawa coupling and soft breaking parameters
($+3.8\ (-1.9)$), the second one by the
uncertainty of top-quark mass
and the third one by uncertainty in the strong coupling constant
$\alpha_3(M_Z)=0.120 \pm 0.005$.
If the parameters are non-universal at the GUT scale, the range of
possible values of the lightest Higgs boson mass becomes wider:
\begin{equation}
m_h=92.7 \ ^{\displaystyle +10.1}_{\displaystyle -4.9}\ \pm5 \
\pm0.4 \ \mbox{GeV}, \ \ \ \mbox{ for} \ M_{SUSY}=1 \ TeV.
\label{mass2}
\end{equation}
{}From Fig.\,\ref{fm}, one can see that the main deviations from
the universal case for the lightest Higgs boson mass is due to the
soft mass parameters, especially $m_U^2$.
One can see that the
restrictions on the lightest Higgs boson mass in the non-universal
case are not as strict as in the universal one.
In the non-universal
case, in the MSSM with low $\tan\beta$, it is possible for the
lightest Higgs boson to have a mass slightly above $100$ GeV,
contrary to the universal case.
However, it is still too light in
view of recent experimental data, which set the lower limit on
the Higgs mass at 103 GeV \cite{LEP_new_results}.
Of course, if one allows the parameters to have larger deviations than
we use in the present analysis
(we have imposed the following restrictions on the soft masses:
${\displaystyle m^2_{0i}/M^2_{03} \in <.25, 4> }$ and
${\displaystyle m^2_{0i}/m^2_{0j}}$ $ \in <1/16, 16> $ where
$i,j=Q_3,U_3,D_3,H_1,H_2$), it is possible to find a mass
of the lightest Higgs boson even larger than our upper bound.
For instance, if one allows the soft masses to lie in the interval
${\displaystyle m^2_{0i}/m^2_{0j}\in <1/100, 100> }$,
then the upper bound for $m_h$ increases by about 3 GeV.
However, we consider such a large non-universality to be unnatural.
\begin{figure}[t]
\includegraphics[width=.5\textwidth]{cfig2a.eps}
\hspace*{-.015\textwidth}
\includegraphics[width=.5\textwidth]{cfig2b.eps} \\
\vspace*{-10mm}
\caption{ The mass of the lightest Higgs boson as
a function of $M_{SUSY}$ for both the cases
$\mu>0$ and $\mu<0$.
The dashed (central) line corresponds to the central values of the
parameters, the dash-dotted lines correspond to the upper and lower
limits for the universal boundary conditions and the solid lines
are the absolute upper and lower limits on the mass of the
lightest Higgs boson in the non-universal case.
The line intersection is related to a steep fall of $m^2_h$
at low values of $M_{SUSY}$ where $m^2_h$ becomes negative.
The position of the ``switch'' depends on the choice of
parameters.
Physically relevant parts of the plots start at
approximately $M_{SUSY}\ge 400$ GeV ($\mu >0$) and
$M_{SUSY}\ge 600$ GeV ($\mu <0$). \label{fm1} }
\end{figure}
\section{Large $\tan\beta$ scenario}\label{bigtan}
Consider now the large $\tan\beta$ case. The situation is more
complicated because the space of input parameters is larger. We
follow the same strategy as in Ref.\,\cite{JK}, but with some
modifications. To take into account the non-universality, we keep
the initial values of top and bottom Yukawa couplings within the
whole interval $\rho_{t0},\rho_{b0} \in <2, 25>$ (see the previous
section for notation). In Ref.\,\cite{JK}, we restricted this
interval by imposing constraints on $\sin\beta$ and
the $\tau$-lepton mass $m_{\tau}$. Here, the only restriction is the
attraction to the IRQFPs, which defines the above-mentioned
interval. To determine $\tan\beta$, we use equation (\ref{tm}) and
a similar one for the bottom-quark running mass
\begin{equation}
m_b=h_b v \cos{\beta}, \label{tb}
\end{equation}
so that $\tan\beta$ is defined from
\begin{equation}
\tan\beta=\frac{m_t}{m_b}\frac{h_b}{h_t}. \label{tan}
\end{equation}
The top-quark running mass has been calculated in
the previous section. As for the bottom-quark running mass,
the
situation is more complicated because the bottom mass $m_b$
is considerably smaller than the scale $M_Z$; so we have to take
into account the running of this mass from $m_b$ to the $M_Z$ scale.
The bottom-quark pole mass is $m_b^{pole}=4.94\pm0.15$ GeV
\cite{dev}.
To calculate the running mass, we use the well-known
procedure given e.g. in Refs.\,\cite{JK,mtop2,ar,gr}.
Since we
assume non-universality of the Yukawa couplings, the value of
$\tan\beta$ obtained from eq.(\ref{tan}) belongs to a wider
interval.
For the central values of the Yukawa couplings and the
mass parameters (see the previous section) we find the following
values of $\tan\beta$: $\tan\beta=69.3$ for $\mu>0$ and
$\tan\beta=38.1$ for $\mu<0$.
When the parameters vary around their
central values, $\tan\beta$ varies within the intervals:
$$\tan\beta \in <41.2, 130.2> \ \ (\mu>0), \ \quad \ \
\ \tan\beta \in <35.0, 40.6>\ \ (\mu<0).$$
The parameter $\mu$ is
calculated in the same manner as in the low $\tan\beta$ case.
The same is true for the stop and sbottom masses.
\begin{figure}[t]
\includegraphics[width=.5\textwidth]{cfig3a.eps}
\hspace*{-0.015\textwidth}
\includegraphics[width=.5\textwidth]{cfig3b.eps}
\caption{ The influence of the variations from central values of
the individual parameters and their collective effect on the mass
of the lightest Higgs boson in both the universal and
non-universal cases at a typical scale $M_{SUSY}=1$ TeV.
\label{fm2}
}
\end{figure}
In Fig.\,\ref{fm1}, we present the dependence of the mass of the
lightest Higgs boson on $M_{SUSY}$ for both the cases $\mu>0$ and
$\mu<0$.
One can immediately see the very sharp increase of $m_h$
to the plateau starting from $M_{SUSY} \geq 500$ GeV.
In the non-universal case,
the interval of masses is slightly wider than
in the universal one.
Fig.\,\ref{fm2} shows the dependence of the lightest Higgs
boson mass on the deviations of
the individual parameters from their central values for both the
cases $\mu>0$ and $\mu<0$.
The major influence on the Higgs mass
comes from the top and bottom Yukawa couplings.
The influence of the
other parameters is negligible.
One can immediately see the big difference between the universal
and non-universal cases.
If one assumes the universality of the Yukawa couplings and soft
parameters, the mass of the lightest Higgs boson at the typical
scale $M_{SUSY}=1$ TeV is given by
\begin{eqnarray}
m_h&=& 125.7 \ ^{\displaystyle +2.2}_{\displaystyle -3.5}\ \pm5 \
\pm0.4 \ \ \ \mbox{GeV for} \ \ \ \mu>0 \,, \nonumber \\
m_h&=&125.4 \ ^{\displaystyle +2.0}_{\displaystyle -3.6}\ \pm5 \
\pm0.4 \ \ \ \mbox{GeV for} \ \ \ \mu<0 \,. \nonumber
\end{eqnarray}
The first uncertainty is connected with the deviations of the
Yukawa couplings and soft parameters from their central values in
the universal case, the second one is due to the experimental
uncertainty in the top-quark mass, and the third one is connected
to that of the strong coupling constant.
When one does not assume universality, the allowed interval of
the Higgs boson mass is wider. For $M_{SUSY}=1$ TeV
we get
\begin{eqnarray}
m_h&=& 125.7 \ ^{\displaystyle +6.4}_{\displaystyle -9.0}\ \pm5 \
\pm0.4 \ \ \ \mbox{GeV for} \ \ \ \mu>0 \,, \nonumber \\
m_h&=&125.4 \ ^{\displaystyle +6.6}_{\displaystyle -9.0}\ \pm5 \
\pm0.4 \ \ \ \mbox{GeV for} \ \ \ \mu<0 \,. \nonumber
\end{eqnarray}
One can see that in the case of large $\tan\beta$ the mass of the
lightest Higgs boson typically belongs to the interval $<115,
130>$ GeV. The upper bound on $m_h$ is reached for $Y^0_t$
close to its perturbative limit ($Y^0_t/a_0\approx 25$).
The influence of the soft parameters is small as one can see in
Fig. \ref{fm2} and is also restricted by the assumption
that the soft masses for sleptons $m^2_{E3}$ and $m^2_{L3}$
are positive at the $M_Z$ scale.
The lower bound on $m_h$ decreases by about 3 GeV when
the assumption of IRQFP for the Yukawa couplings is relaxed.
There is still a constraint on $Y^0_t$ ($Y^0_t/a_0 > 1.2$)
given by the condition
$\sin\beta\le 1$ in the relation for the top mass (\ref{tm}).
Experiments are still far away from these values,
though the lower boundary may be within the reach of LEP II.
\section{Conclusion}\label{the_end}
We have analyzed the influence of non-universality of the Yukawa
couplings and soft SUSY breaking parameters on the mass of the
lightest Higgs boson $h$ in the MSSM. Possible values of the Higgs
mass are obtained. This may be important for the Higgs searches in
the nearest future.
In the low $\tan\beta$ case, the main role is played by
non-universality of the soft mass parameters. Assuming a moderate
deviation from universality, one gets the mass of the lightest
Higgs boson below 103 GeV, which is almost excluded by recent
experimental data \cite{LEP_new_results}.
For high $\tan\beta$ the situation is different.
Here, the main role is played by non-universality of the
Yukawa couplings;
the variations of the soft terms are of minor importance.
The mass of the lightest Higgs boson in this case is much larger.
Here the lower bound of the Higgs mass is more interesting.
The effect of non-universality is a decrease in this bound, which
may become as low as 115 GeV, leaving hope for the imminent
observation of the Higgs boson.
\vglue 0.5cm
{\bf Acknowledgments}
\vglue 0.5cm
We are grateful to G. Moultaka for useful discussions. Financial
support from RFBR grants \# 99-02-16650 and \# 96-15-96030 is
kindly acknowledged.
\section{Applications of Rhetorical Roles Prediction Task}
The purpose of creating a rhetorical role corpus is to enable automated understanding of legal documents by segmenting them into topically coherent units. This can be helpful in various applications such as legal document summarization \cite{bhattacharya2019comparative} and legal judgment prediction \cite{malik-etal-2021-ildc}. In this paper, we explore both use-cases. We experimented with how rhetorical role prediction could help create abstractive and extractive summaries of Indian court judgments and predict the judgment outcome based on the judgment text.
\subsection{Extractive Summarization of Court Judgments using Rhetorical Roles}
\label{subsection:extractive_summarization}
We explored the task of extractive summarization. For a given legal document, the task requires extracting the salient sentences that would summarize the document. We experimented with the LawBriefs corpus consisting of 285 extractive summaries of Indian court judgments prepared by law students from a National Law University in India. The corpus was created by providing judgment documents to law students, followed by a questionnaire that required them to pick salient sentences that would answer the questions and, in the process, create the summaries. The questions pertained to facts, arguments, issues, ratio, and decisions. We wanted to experiment with how rhetorical roles could be helpful in extracting summaries.
We finetuned the BERTSUM \cite{liu2019text} model on the LawBriefs data to pick the top 20\% of the sentences as summaries. Since the judgments are much longer than the 512-token limit of BERTSUM, we created non-overlapping chunks of 512 tokens, yielding 3151 training chunks from 235 judgments and 827 test chunks from 50 judgments. We then trained another model, which also takes as input a rhetorical role for each sentence: we concatenated the 768-dimensional sentence vector from the CLS token with the one-hot encoded sentence rhetorical role. The idea is that if certain rhetorical roles are more important than others for creating summaries, then the model will learn this. We call this model BERTSUM RR. Discussions with legal experts revealed that ISSUE, RATIO, and RPC are important in a summary and must always be selected without the need for summarization. So we copied all the sentences with predicted rhetorical roles ISSUE, RATIO, and RPC regardless of whether they are present in the top 20\% of sentences. Model performance, evaluated using ROUGE scores \cite{lin2004rouge}, is compared in Table \ref{table:ExtractiveSummarizationResults}. The results indicate that rhetorical roles are useful in selecting better summary sentences.
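The feature construction behind BERTSUM RR can be sketched as follows; this is a simplified stand-in for the full BERTSUM scoring head, and the class and variable names are ours rather than those of the released implementation.
\begin{verbatim}
import torch
import torch.nn as nn

NUM_ROLES = 13  # 12 rhetorical roles + NONE

class SentenceScorer(nn.Module):
    """Scores sentences for extractive selection from the 768-dim CLS
    vector concatenated with a one-hot rhetorical-role encoding."""
    def __init__(self, hidden=768):
        super().__init__()
        self.classifier = nn.Linear(hidden + NUM_ROLES, 1)

    def forward(self, cls_vectors, role_ids):
        # cls_vectors: (num_sentences, 768); role_ids: (num_sentences,)
        roles = nn.functional.one_hot(role_ids, NUM_ROLES).float()
        feats = torch.cat([cls_vectors, roles], dim=-1)
        return torch.sigmoid(self.classifier(feats)).squeeze(-1)

# Sentences predicted as ISSUE, RATIO or RPC are copied into the summary
# regardless of their score; the remaining sentences are ranked and the
# top 20% are kept.
\end{verbatim}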
\begin{table}[t]
\small
\begin{center}
\begin{tabularx}{\columnwidth}{|l|X|X|X|}
\hline
Model & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \hline
BERTSUM & 0.60 & 0.42 & 0.59\\ \hline
BERTSUM RR & 0.62 & 0.46 & 0.61\\ \hline
\end{tabularx}
\caption{Extractive Summarization Results}
\label{table:ExtractiveSummarizationResults}
\end{center}
\vspace{-7mm}
\end{table}
\subsection{Abstractive Summarization of Court Judgments using Rhetorical Roles}
The task of abstractive summarization requires generating concise text summaries of legal documents. For our experiments, we considered 50 randomly selected documents from the LawBriefs dataset (as described in \S \ref{subsection:extractive_summarization}) as test data. For this task, we used the pre-trained Legal Pegasus model.\footnote{\url{https://huggingface.co/nsi319/legal-pegasus}} Legal Pegasus is a version of Pegasus \cite{zhang2020pegasus} fine-tuned on a US securities litigation dataset.\footnote{\url{https://www.sec.gov/litigation/litreleases.htm}} We used the pre-trained Legal Pegasus model for generating abstractive summaries for the baseline. In particular, we split the document into non-overlapping chunks of 1024 tokens, and each chunk was passed through the model to generate a summary. The final summary was obtained by concatenating the summaries of the chunks. This constituted the baseline model. We wanted to see how RRs could help generate better summaries. Towards this goal, we segmented the document in terms of rhetorical roles, and each of the segments was passed separately through the Legal Pegasus model to generate summaries. The final summary was obtained by concatenating the summaries corresponding to each of the rhetorical roles in the order they appear in the document. This corresponds to the Legal Pegasus RR model. Both models are compared on the test set, and ROUGE scores for both models are shown in Table \ref{table:AbstractiveSummarizationResults}. As can be observed in Table \ref{table:AbstractiveSummarizationResults}, the use of rhetorical roles helps to improve performance on the task of abstractive summarization.
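The chunk-wise (baseline) and segment-wise (Legal Pegasus RR) generation can be sketched as follows; the generation parameters (beam size, output length) are illustrative assumptions rather than tuned values.
\begin{verbatim}
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("nsi319/legal-pegasus")
model = AutoModelForSeq2SeqLM.from_pretrained("nsi319/legal-pegasus")

def summarize(text, max_input=1024, max_output=256):
    inputs = tok(text, return_tensors="pt",
                 truncation=True, max_length=max_input)
    ids = model.generate(inputs["input_ids"],
                         num_beams=4, max_length=max_output)
    return tok.decode(ids[0], skip_special_tokens=True)

# Baseline: summarize fixed 1024-token chunks and concatenate the pieces.
# Legal Pegasus RR: summarize each rhetorical-role segment instead and
# concatenate the segment summaries in document order.
def summarize_by_role(segments):
    # segments: list of (role, text) pairs in document order
    return " ".join(summarize(text) for _, text in segments)
\end{verbatim}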
\begin{table}[ht]
\small
\begin{center}
\begin{tabularx}{\columnwidth}{|l|X|X|X|}
\hline
Model & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \hline
Legal Pegasus & 0.55 & 0.34 & 0.47 \\ \hline
Legal Pegasus RR & 0.56 & 0.36 & 0.48 \\ \hline
\end{tabularx}
\caption{Abstractive Summarization Results}
\label{table:AbstractiveSummarizationResults}
\vspace{-5mm}
\end{center}
\end{table}
\subsection{Court Judgment Prediction using Rhetorical Roles}
\newcite{malik-etal-2021-ildc} created the corpus (ILDC: Indian Legal Documents Corpus) and the task (CJPE: Court Judgment Prediction and Explanation) for predicting and explaining court judgments based on legal judgment texts. It is essential for the judgment prediction task to identify which sentences provide hints about the final decision and to use that filtered data as input for prediction. We predicted the rhetorical role for each sentence of the train and test data using the baseline rhetorical role model. In the ILDC dataset, we removed the sentences with RPC and RATIO tags, making the task more challenging. We also removed the judgments for which no ANALYSIS was predicted. Note that the ILDC dataset is already anonymized and takes care of the biases and ethical concerns associated with the task of judgment prediction. Moreover, we use judgment prediction only as a use case and do not believe that an automated system could replace a human judge; rather, such a system could augment a human and expedite legal processes, especially in highly populated countries like India.
For the task of judgment prediction, the training data had 5044 judgments, and the test data had 977 judgments. The idea is to filter the training data using rhetorical roles to check the impact on model performance, keeping the model architecture the same. We used the XLNet-based ILDC single model proposed in \newcite{malik-etal-2021-ildc} to predict the judgment outcome from the last 512 tokens of the judgment text. We call this approach XLNet\_last512. The model ran for 13 epochs, and then it was early stopped. In another experiment, we trained the same architecture to predict the judgment outcome from the last 512 tokens of ANALYSIS role sentences. We call this model XLNet\_last512\_Analysis. The model ran for 12 epochs, and then it was early stopped. The model performance comparison is given in Table \ref{table:judgmentPrediction}. As observed from the results, filtering the input text for the ANALYSIS role improves the prediction.
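The input filtering used for XLNet\_last512\_Analysis can be sketched as follows, assuming sentence-level role predictions are available (helper names are ours):
\begin{verbatim}
def analysis_tail(sentences, roles, tokenizer, max_tokens=512):
    """Keep only sentences predicted as ANALYSIS (RPC and RATIO have
    already been removed from the corpus) and return the last 512
    tokens as input for the classifier."""
    text = " ".join(s for s, r in zip(sentences, roles)
                    if r == "ANALYSIS")
    ids = tokenizer(text)["input_ids"]
    return ids[-max_tokens:]
\end{verbatim}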
\begin{table}[ht]
\small
\begin{center}
\begin{tabularx}{\columnwidth}{|X|c|c|c|}
\hline
\textbf{Model} & \textbf{Precision} & \textbf{Recall} & \textbf{F1} \\ \hline
XLNet\_last512 & 0.76 & 0.49 & 0.59 \\ \hline
XLNet\_last512\_Analysis & 0.71 & 0.55 & 0.62 \\ \hline
\end{tabularx}
\caption{Judgment prediction Results}
\label{table:judgmentPrediction}
\vspace{-5mm}
\end{center}
\end{table}
\section{Conclusion and Future Directions}
In this paper, we proposed a new corpus of legal judgment documents annotated with 13 different Rhetorical Roles. The corpus was created via crowdsourcing involving law students. We also proposed baseline models for automatic rhetorical role prediction in a legal document. For some of the roles, the model shows trends in predicting the roles similar to those of human annotators. Nevertheless, there is scope for further improvement, and we have created a leaderboard for the task so that researchers from the community can contribute towards improving the RR prediction system. We also showed two applications of rhetorical roles: summarization and judgment prediction. For both use-cases, the use of rhetorical roles helps to improve results. We have released the corpus and the baseline models and encourage the community to use these to develop other legal applications as well.
\section{Rhetorical Roles Corpus} \label{sec:rr}
As outlined earlier, legal documents are typically long, and information is spread throughout the document. In order to make the automatic processing of documents easier, documents are divided into topically coherent segments referred to as Rhetorical Roles \cite{malik-rr-2021}. In this paper, we propose the use of 12 RRs and a NONE label. We started with the list of RR labels proposed by \newcite{bhattacharya2019identification}; however, we found some of the RRs to be ambiguous; hence, after elaborate discussions with law professors, we split some of the RRs (arguments and precedents) to arrive at the list of 12 main roles. Details and definitions for each of the RRs are as follows:
\begin{itemize}[noitemsep,nolistsep]
\item \textbf{Preamble (PREAMBLE):} This covers the metadata related to the legal judgment document. A typical judgment would start with the court name, the details of parties, lawyers and judges' names, headnote (summary). This section typically would end with a keyword like (JUDGMENT or ORDER). Some documents also have HEADNOTES, ACTS sections in the beginning. These are also part of the Preamble.
\item \textbf{Facts (FAC):} This corresponds to the facts of the case. It refers to the chronology of events that led to filing the case and how it evolved (e.g., First Information Report (FIR) at a police station, filing an appeal to the Magistrate, etc.). It also covers depositions and proceedings of the current court, and summaries of lower court proceedings.
\item \textbf{Ruling by Lower Court (RLC):} Cases are not directly filed in the higher courts but are appealed from lower courts. Consequently, the documents contain judgments given by the lower courts (Trial Court, High Court) on which the present appeal was made (to the Supreme Court or a high court). The lower court's verdict, analysis, and the ratio behind its judgment are annotated with this label.
\item \textbf{Issues (ISSUE):} Some judgments mention the key points on which the verdict needs to be delivered. Such Legal Questions Framed by the Court are ISSUES.
\item \textbf{Argument by Petitioner (ARG\_PETITIONER):} Arguments by petitioners' lawyers. Precedent cases argued by petitioner lawyers fall under this category, but when the court discusses them later, they belong to either the relied or not relied upon category.
\item \textbf{Argument by Respondent (ARG\_RESPONDENT):} Arguments by respondents' lawyers. Precedent cases argued by respondent lawyers fall under this, but when the court discusses them later, they belong to either the relied / not relied category.
\item \textbf{Analysis (ANALYSIS):} These are views of the court. This includes the court's discussion of the evidence, facts presented, prior cases, and statutes, discussions on how the law is applicable or not applicable to the current case, and observations (non-binding) from the court. It is the parent tag for three tags: PRE\_RELIED, PRE\_NOT\_RELIED, and STATUTE, i.e., every statement which belongs to these three tags should also be marked as ANALYSIS.
\item \textbf{Statute (STA):} This includes texts in which the court discusses established laws, which can come from a mixture of sources: Acts, Sections, Articles, Rules, Orders, Notices, Notifications, and quotations directly from the bare act. A statute will have both the tags Analysis + Statute.
\item \textbf{Precedent Relied (PRE\_RELIED):} Texts in which the court discusses prior case documents, discussions and decisions which were relied upon by the court for final decisions. Precedent will have both the tags Analysis + Precedent.
\item \textbf{Precedent Not Relied (PRE\_NOT\_RELIED):} Texts in which the court discusses prior case documents, discussions and decisions which were not relied upon by the court for final decisions. It could be due to the fact that the situation, in that case, is not relevant to the current case.
\item \textbf{Ratio of the decision (RATIO):} This includes the main reason given for the application of any legal principle to the legal issue. It is the result of the analysis by the court. It typically appears right before the final decision. It is not the same as the "Ratio Decidendi" taught in the legal academic curriculum.
\item \textbf{Ruling by Present Court (RPC):} Final decision + conclusion + order of the Court following from the natural/logical outcome of the rationale.
\item \textbf{NONE:} If a sentence does not belong to any of the above categories, it is labeled as NONE.
\end{itemize}
\subsection{Corpus Documents}
The corpus consists of legal judgment documents from the Supreme Court of India, High Courts in different Indian states, and some district-level courts. Raw judgment text files were scraped from Indian court websites.\footnote{\url{https://main.sci.gov.in/}; \url{https://ecourts.gov.in/ecourts_home/static/highcourts.php}} The data is a mix of Supreme Court judgments (40\%), High Court judgments (40\%), and district court judgments (20\%). To develop baseline models, we divided the dataset into train, validation, and test sets; the test set was further divided into in-domain and out-of-domain parts. The train, validation, and test (in-domain) datasets contain annotated judgments belonging to tax and criminal cases. The test (out-domain) set contains annotated judgments from three domains: Motor Vehicles Act (9), Industrial and Labour law (8), and Land and Property law (10). The statistics of the corpus are shown in Table \ref{table:Datasets summary statistics}. Table \ref{table:roleNumbers} gives the number of sentences for each role in the entire corpus. Qualified law experts annotated the test data with cross-checks.
\begin{table}[t]
\small
\begin{center}
\begin{tabularx}{\columnwidth}{|l|X|X|X|X|}
\hline
\textbf{Dataset} & \textbf{Docs} & \textbf{Sentences} & \textbf{Tokens} & \textbf{Avg Tokens} \\ \hline
Train & 247 & 28986 & 938K & 3797 \\ \hline
Validation & 30 & 2879 & 88K & 2947 \\ \hline
Test (in-domain) & 50 & 4158 & 134K & 2681 \\ \hline
Test (out-domain) & 27 & 4292 & 127K & 4722\\\hline
\textbf{Total} & \textbf{354} & \textbf{40315} & \textbf{1.3M} & \textbf{3638} \\ \hline
\end{tabularx}
\caption{Corpus Statistics: The corpus is split into train, val and test. The table shows number of documents, sentences, tokens and average number of tokens per document.}
\label{table:Datasets summary statistics}
\end{center}
\vspace{-5mm}
\end{table}
\begin{table}[ht]
\small
\begin{center}
\begin{tabularx}{0.65\columnwidth}{|l|X|}
\hline
\textbf{Rhetorical Role} & \textbf{Sentences} \\ \hline
ANALYSIS & 14300\\ \hline
ARG PETITIONER & 1771\\ \hline
ARG RESPONDENT & 1068\\ \hline
FAC & 8045\\ \hline
ISSUE & 535\\ \hline
NONE & 2037\\ \hline
PREAMBLE & 6116\\ \hline
PRE NOT RELIED & 217\\ \hline
PRE RELIED & 1934\\ \hline
RATIO & 1014\\ \hline
RLC & 1081\\ \hline
RPC & 1562\\ \hline
STA & 625\\ \hline \hline
Overall & 40305\\ \hline
\end{tabularx}
\caption{Role-wise sentence count in the entire corpus}
\label{table:roleNumbers}
\end{center}
\vspace{-5mm}
\end{table}
\subsection{Annotation Process}
The annotation process was designed in consultation with legal experts (law professors and legal practitioners). Given the nature of the task, the RR annotations require a deep understanding of the law and the legal process. Consequently, we involved law students and legal practitioners in annotating the documents. The process involved annotating each sentence in a given document with one of the 12 RR + NONE labels described earlier. We experimented with different levels of granularity (phrase level, sentence level, paragraph level, etc.) for annotating the documents with RRs. Pilot experiments indicated sentence-level RR annotation to be appropriate, as it maintains the balance (with regard to semantic coherence) between too short and too long texts. The legal documents were split into sentences using the spaCy library \cite{spacy2021}. Rhetorical role annotation is not a trivial task; we faced two main challenges in the annotation activity: first, the availability of a large group of legal experts and, secondly, motivating the legal experts to perform annotation consistently while maintaining quality. We performed the annotation activity via crowdsourcing as described next.
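The sentence segmentation step can be reproduced as below; the choice of the small English pipeline is our assumption, as the particular spaCy model is not fixed above.
\begin{verbatim}
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed pipeline

def split_sentences(judgment_text):
    doc = nlp(judgment_text)
    return [sent.text.strip() for sent in doc.sents]
\end{verbatim}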
\subsection{Data Annotation Pipeline}
Corpus documents were annotated via a crowdsourcing activity. We invited law students from various law schools across the country to volunteer for the data annotation exercise. We created processes to onboard student volunteers and introduced them to the entire activity and its goal. Filtering was carried out at multiple stages to retain the most motivated and consistent (from the perspective of quality of the annotations) students. The entire pipeline is shown in Figure \ref{fig:DataAnnotationPipeline}. We describe each stage of the pipeline in the next sections.
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.6]{images/RR_Annotation_Process.pdf}
\caption{Data Annotation Pipeline}
\label{fig:DataAnnotationPipeline}
\end{center}
\vspace{-5mm}
\end{figure}
\subsubsection{Student Selection} \label{sec:studentSelection}
We did a nationwide call for volunteers through a network of law students. The application required students to describe their motivation. A basic screening was done to eliminate applications that were partially filled. Finally, after filtering, we selected an initial group of 50 students. The selected students were then on-boarded and were motivated by explaining the big picture of the impact of their contribution. The data annotations were done voluntarily by law students from multiple Indian law universities. Interaction with the law students revealed that they were motivated to learn more about AI and contribute towards the development of the AI field, and hence they volunteered for the activity. In order to smoothly conduct the annotation activity via crowdsourcing, we organized the volunteers in a hierarchical structure based on their experience and performance during a pilot study. The organizational structure for this exercise is shown in Figure \ref{fig:OrganizationStructure}.
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.15]{./images/organisation_structure_clipped.pdf}
\caption{Organization Structure}
\label{fig:OrganizationStructure}
\end{center}
\vspace{-5mm}
\end{figure}
\noindent\textbf{Project Administrators:} They designed data collection and communication processes, built tools for data collection, and supervised the overall activity. This group included law experts and authors of the paper.
\noindent\textbf{Project Coordinators:} They mentored and resolved the doubts of the students. They were responsible for assuring the quality of the data. Coordinators identified and rectified conceptual errors among the students. Further, the coordinators assisted the administrators during the adjudication process.
\noindent\textbf{Student Volunteers:} They annotated the data and also provided feedback on the entire process. Volunteers were in constant communication with the coordinators. At later stages of annotation, some of the best-performing students assisted in the adjudication process (\S \ref{sec:adjudication}). Best-performing students were selected based on two criteria: timely submissions and ground truth agreement score. Students were assessed on whether they completed the task within a stipulated time at each annotation stage. Furthermore, each batch of annotation documents consisted of sentences for which true (gold) RR labels were known a priori (see also \S \ref{sec:dataAnnotation}). Students were assessed on their performance on the ground truth (sentences with gold RR labels), and students who were correct on at least 90\% of ground truth sentences were considered for the best-performing category.
Before beginning the entire activity, we conducted a small pilot to assess the feasibility of crowdsourcing with student volunteers. Volunteers who completed MOOC, calibration and annotation exercises with satisfactory performance were then invited to become project coordinators for the subsequent data collection phase. The chance to become coordinator further provided positive reinforcement for the efforts, thus keeping the students well motivated. In the end, we selected eight students as project coordinators.
\subsubsection{MOOC}
Law students typically do not have an understanding of the workings of AI. We therefore designed a MOOC (Massive Open Online Course)\footnote{\url{https://www.youtube.com/playlist?list=PL1z52lLL6eWnDnc3Wgfcu6neczrU3fFw0}} for the annotators. The MOOC explained AI technologies to the law students, described the process of building datasets for AI algorithms, and explained the concept of a rhetorical role. Students were expected to complete the MOOC in a stipulated amount of time and complete the associated quiz, which checked for a basic understanding of the rhetorical role definitions.
\subsubsection{Calibration}
Since students' understanding of RRs can differ in the initial stages, we calibrated the students to bring them to common ground. Calibration focused on shaping a common understanding of the definitions among students. Students were asked to annotate three judgments that experts had already annotated. The sentences that differed from the expert (gold) annotations were highlighted, and students were asked to calibrate their annotations. Calibration was an iterative process, carried out until the students reached the level of the expert annotations.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4]{images/Ground_truth_score_hist.pdf}
\caption{Ground Truth Score Histogram}
\label{fig:groundTruth}
\end{center}
\vspace{-6mm}
\end{figure}
\subsubsection{Data Annotation} \label{sec:dataAnnotation}
In the end, 35 out of the 50 selected students qualified through the calibration stage, and this was the final pool that annotated the entire corpus. Each student annotated 24 documents, and three students annotated each document. We did not observe any student dropout after the calibration stage. On average, it took about 40 minutes to annotate a single document. The entire annotation activity took around six weeks. Students annotated the train and validation documents (277 in total), and experts annotated the 77 test documents. As described earlier, during the annotation process, each student was also assigned four documents (chosen randomly with replacement from the test set) for which gold (ground truth) annotations were known to coordinators and administrators but not to the students. The performance of students (referred to as the Ground Truth Score) on these gold documents was assessed. The ground truth score is the percentage of sentences in the gold documents that are correctly annotated. The average ground truth score across all students was 85\%. Figure \ref{fig:groundTruth} shows the histogram of per-document ground truth scores. It shows that the majority of documents are in the 90 to 100 percent range, indicating annotations consistent with the ground truth documents. Note that the documents shown in Figure \ref{fig:groundTruth} (y-axis) are chosen randomly (with replacement) from the test set, and hence there is overlap between documents across different batches. Furthermore, coordinators provided feedback to students with lower scores to improve their overall annotation quality.
\subsubsection{Adjudication}\label{sec:adjudication}
A majority voting scheme was used to decide the final RR label. However, in some instances, annotators assigned three different labels; such documents were further sent for adjudication. The adjudication was done by experts, project coordinators, and some of the best-performing students (\S \ref{sec:studentSelection}).
\subsubsection{Annotation Quality Assessment}
Final annotation quality was evaluated using Fleiss Kappa \cite{fleiss2013statistical}. The overall Fleiss Kappa score was 0.59, pointing towards moderate agreement. We saw high agreement amongst annotators on PREAMBLE, RPC, NONE, and ISSUE. There was medium agreement on FACTS, RLC, ANALYSIS, PRECEDENT, and ARGUMENTS. RATIO was the most ambiguous role. ANALYSIS was very often confused with FACTS and ARGUMENTS. In a judgment, a judge emphasizes some of the facts, which, as per the definition, are considered the analysis role; however, annotators often confused them with the facts role. Moreover, sometimes the judge may mention arguments and give their opinion on them; this, as per the definition, is the analysis role, but annotators sometimes confused it with the argument role. FACTS was sometimes confused with RLC (Ruling by Lower Court).
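The agreement computation itself can be reproduced with standard tooling, sketched here on hypothetical toy labels:
\begin{verbatim}
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# labels[i, r] = role assigned to sentence i by annotator r (toy data).
labels = np.array([
    ["FAC",      "FAC",      "ANALYSIS"],
    ["PREAMBLE", "PREAMBLE", "PREAMBLE"],
    ["RATIO",    "ANALYSIS", "ANALYSIS"],
])
table, _ = aggregate_raters(labels)   # sentences x categories counts
print(fleiss_kappa(table, method="fleiss"))
\end{verbatim}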
\section{Introduction}
In populous countries (e.g., India), pending legal cases have been growing exponentially. For example, according to India's National Judicial Data Grid, as of December 2021, there are approximately 40 million cases pending in various courts of the country \cite{njdc-district}. India follows a common-law system; consequently, due to the subjectivity involved in the legal process, it may not be possible to automate the entire judicial pipeline completely; nevertheless, many intermediate tasks can be automated to augment legal practitioners and hence expedite the system. For example, legal documents can be processed with the help of Natural Language Processing (NLP) techniques to organize and structure the data so that it is amenable to automatic search and retrieval. However, legal texts are different from the commonly occurring texts typically used to train NLP models. Legal documents are quite long, running into tens (sometimes hundreds) of pages. Long documents make automatic processing challenging, as information is spread throughout the document \cite{malik-etal-2021-ildc}. Another challenge with legal documents is the use of a different lexicon. Though legal documents use natural language (e.g., English), many commonly occurring words/terms have different legal connotations. The use of a different lexicon makes it challenging to adapt existing NLP models to legal texts \cite{malik-etal-2021-ildc}. Moreover, in countries like India, legal documents are manually typed and are highly unstructured and noisy (e.g., spelling and grammatical mistakes). The above-mentioned challenges make it difficult to apply existing NLP models and techniques directly, which calls for the development of legal domain-specific techniques.
\begin{figure*}[h]
\begin{center}
\includegraphics[scale=0.35]{./images/Paper_Graphics_2.pdf}
\caption{Example of document segmentation via Rhetorical Roles labels. On the left is excerpt from a legal document and on the right is document segmented and labelled with rhetorical role labels.}
\label{fig:example}
\end{center}
\end{figure*}
Existing state-of-the-art models in NLP are data-driven and are trained on annotated corpora. However, the legal domain suffers from the deficiency of availability of annotated corpora. It has hindered the growth of the Legal NLP domain. For example, much of the recent success in the computer vision community can be owed to the creation and availability of annotated vision corpora such as ImageNet \cite{imagenet_cvpr09,ILSVRCanalysis_ICCV2013,ILSVRC15}. In this paper, we contribute to creating annotated legal text corpora. In particular, we create a new corpus of Indian legal judgments in English that are structured and annotated with topically coherent semantic units. Since legal documents are long and unstructured, these can be divided into topically coherent parts (e.g., facts, arguments) referred to as \textit{Rhetorical Roles} \cite{saravanan2008automatic,bhattacharya2019identification,malik-rr-2021}. In this paper, with the help of legal experts, we annotate legal documents with 12 different Rhetorical Roles (RRs) (details in \S \ref{sec:rr}). An example text annotated with some of the RRs is shown in Figure \ref{fig:example}. As shown in the figure, an unstructured legal judgment document is segmented into semantically coherent parts, and each part is annotated with a rhetorical role label such as preamble, fact, ratio, etc. We experimented with different levels of granularity (phrase level, sentence level, paragraph level) for annotating RRs and decided to go for sentence-level RR annotations based on initial experiments. Each sentence in a legal document is annotated with a rhetorical role label in the proposed corpus. Typically, consecutive sentences can have a similar role in a judgment document. The rhetorical role corpus is part of a general open-source effort of creating various legal corpora for promoting the development and bench-marking of legal NLP systems. This project is called BUILDNyAI.\footnote{The word BUILDNyAI is a code-mixed (English+Hindi) term having English word BUILD and Hindi word nyAI (short for nyayi, which means justice). The project is hosted at \url{https://legal-nlp-ekstep.github.io/Competitions/Rhetorical-Role/}} We make the following contributions in this paper:
\begin{itemize}[noitemsep,nolistsep]
\item We create a corpus of 354 Indian legal documents annotated with rhetorical roles. The corpus has 40,305 sentences annotated with 12 different RRs. To the best of our knowledge, this is the largest corpus of legal documents annotated with RRs.
\item In order to be of practical value, using the corpus, we develop a transformer-based baseline model for automatically annotating legal documents with sentence-level RR.
\item We show two use-cases for RRs. In particular, we show applications of RRs to the task of legal case summarization and legal judgment prediction.
\item We release the corpus and the model implementations: \url{https://legal-nlp-ekstep.github.io/Competitions/Rhetorical-Role/}
\end{itemize}
\section*{Acknowledgements}
We thank EkStep Foundation for funding this work. We thank all the law experts, student volunteers, and coordinators for contributing to data annotation. We thank LawBriefs for sharing the summaries. The author Ashutosh Modi would like to acknowledge the support of Google Research India via the Faculty Research Award Grant 2021.
\section{Bibliographical References}\label{reference}
\bibliographystyle{lrec2022-bib}
\section{RR Prediction Baseline Models}
The end goal behind this work has been to encourage the development of systems that can segment a new legal document automatically in terms of rhetorical roles. Towards this goal, we experimented with some baseline models. Since transformer-based models \cite{wolf-etal-2020-transformers} have shown state-of-the-art (SOTA) performance on most NLP tasks, including tasks in the legal NLP domain \cite{malik-etal-2021-ildc}, we mainly experimented with them. In the RR prediction task, given a legal document, the task is to predict the RR label for each sentence in the document. We pose this as a multi-class sequence prediction problem. We initially experimented with variants of the model by \newcite{bhattacharya2019identification}. In particular, we use a CRF (Conditional Random Field) model for RR prediction. The features for this CRF model come from a transformer, i.e., the BERT-BASE \cite{DBLP:journals/corr/abs-1810-04805} model is used to get sentence embeddings corresponding to the CLS token. These sentence embeddings are then passed through the CRF layer to get the final predictions. We call this model BERT\_CRF. We also tried the architecture proposed by \newcite{cohan-2019}, which captures contextual dependencies using only BERT, without the need for hierarchical encoding using a CRF. We call this model BERT\_only. After experiments with vanilla transformer models, we finally created the baseline system using the SciBERT-HSLN architecture \cite{brack2021sequential}. Figure \ref{fig:BaselineModel} shows the overall architecture of the proposed model. In the proposed model, each sentence is passed through the BERT-BASE model to get word embeddings; these embeddings are further processed by a Bi-LSTM layer followed by an attention-based pooling layer to get sentence representations $\{s_1,s_2,\dots,s_n\}$. The context enrichment layer encodes contextual information by taking the sequence of sentence representations, resulting in contextualized sentence representations $\{c_1,c_2,\dots,c_n\}$. This is followed by MLP layers and a CRF layer that leverage the distributed representation features to predict the RR label for each sentence.
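A schematic PyTorch sketch of this pipeline is given below; the hidden sizes are illustrative, batching is omitted for brevity, and the CRF layer comes from the pytorch-crf package rather than the original implementation.
\begin{verbatim}
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class HSLN(nn.Module):
    def __init__(self, word_dim=768, hidden=256, num_roles=13):
        super().__init__()
        self.word_lstm = nn.LSTM(word_dim, hidden,
                                 bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)   # attention-based pooling
        self.ctx_lstm = nn.LSTM(2 * hidden, hidden,
                                bidirectional=True, batch_first=True)
        self.mlp = nn.Linear(2 * hidden, num_roles)
        self.crf = CRF(num_roles, batch_first=True)

    def forward(self, word_embs, tags=None):
        # word_embs: (num_sentences, num_words, word_dim) BERT outputs
        h, _ = self.word_lstm(word_embs)
        w = torch.softmax(self.attn(h), dim=1)
        sents = (w * h).sum(dim=1)                  # s_1, ..., s_n
        ctx, _ = self.ctx_lstm(sents.unsqueeze(0))  # c_1, ..., c_n
        emissions = self.mlp(ctx)
        if tags is not None:        # training: negative log-likelihood
            return -self.crf(emissions, tags.unsqueeze(0))
        return self.crf.decode(emissions)[0]  # inference: best path
\end{verbatim}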
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.35]{images/SciBERT-HSLN.pdf}
\caption{RR Prediction Baseline model inspired by~\protect\newcite{brack2021sequential}}
\label{fig:BaselineModel}
\end{center}
\end{figure}
\begin{table}[t]
\small
\begin{center}
\begin{tabularx}{0.85\columnwidth}{|X|c|c|c|}
\hline
\textbf{Model} & \textbf{Precision} & \textbf{Recall} & \textbf{F1}\\ \hline
BERT\_CRF & 0.24 & 0.24 & 0.23\\ \hline
BERT\_only & 0.67 & 0.68 & 0.67\\ \hline
SciBERT-HSLN & 0.79 & 0.80 & 0.79\\ \hline
\end{tabularx}
\caption{Performance of models on test (in-domain) data}
\label{Comparison of Rhetorical Roles Prediction models}
\end{center}
\vspace{-5mm}
\end{table}
\textbf{Results:} The performance of the different models was tested on the test (in-domain) data, and the results are given in Table \ref{Comparison of Rhetorical Roles Prediction models}. We use the standard weighted F1 score metric for evaluation. As can be observed, the BERT\_CRF model performs the worst, and the BERT\_only model performs worse than the proposed model SciBERT-HSLN, which achieved a weighted F1 score of 0.79. This is perhaps because SciBERT-HSLN, being a sequential model, can capture longer-range dependencies between sentences in a document. The results of the model on the test set for each of the RR labels are shown in Table \ref{table:modelScores}. Figure \ref{fig:confusion_matrix} shows the confusion matrix for the SciBERT-HSLN model. As can be observed from Table \ref{table:modelScores} and Figure \ref{fig:confusion_matrix}, ARGUMENT-based roles are misclassified very often: they are confused between the two types of ARGUMENTS and also sometimes confused with FACTS and ANALYSIS. PREAMBLE is almost perfectly classified. PRECEDENT NOT RELIED is completely misclassified and confused with PRECEDENT RELIED and ANALYSIS. RATIO is often confused with ANALYSIS, a trend similar to what was observed for the annotators. Also similar to the annotators, RPC, PREAMBLE, NONE and ISSUE are classified with decent F1 scores. STATUTES are also not well classified: a judge often mentions laws in their opinion, and the model tends to learn these patterns as analysis and misclassifies actual statutes as analysis. We have also created a leaderboard\footnote{\url{https://legal-nlp-ekstep.github.io/Competitions/Rhetorical-Role/}} for the task of RR prediction where other researchers can experiment with various approaches.
\textbf{Results on test (out-domain) data:}
In order to check whether the baseline model trained on criminal and tax cases generalizes to other domains, we tested the baseline model on 27 judgments from Motor Vehicles, Industrial and Labour, and Land and Property cases. The weighted F1 reduced to 0.70. This degradation in performance is mainly due to the different style of writing in these judgments.
\begin{table}[t]
\small
\begin{center}
\begin{tabularx}{0.95\columnwidth}{|X|c|c|c|}
\hline
\textbf{Rhetorical Role} & \textbf{Precision} & \textbf{Recall} & \textbf{F1}\\ \hline
ANALYSIS & 0.77 & 0.89 & 0.83 \\ \hline
ARG\_PETITIONER & 0.60 & 0.64 & 0.62 \\ \hline
ARG\_RESPONDENT & 0.84 & 0.41 & 0.55 \\ \hline
FAC & 0.80 & 0.84 & 0.82 \\ \hline
ISSUE & 0.93 & 0.87 & 0.90 \\ \hline
NONE & 0.85 & 0.84 & 0.85 \\ \hline
PREAMBLE & 0.96 & 0.98 & 0.97 \\ \hline
PRE\_NOT\_RELIED & 0.00 & 0.00 & 0.00 \\ \hline
PRE\_RELIED & 0.79 & 0.60 & 0.68 \\ \hline
RATIO & 0.53 & 0.56 & 0.54 \\ \hline
RLC & 0.75 & 0.45 & 0.57 \\ \hline
RPC & 0.78 & 0.87 & 0.82 \\ \hline
STA & 0.77 & 0.54 & 0.64 \\ \hline
\hline
Overall & 0.79 & 0.80 & 0.79\\ \hline
\end{tabularx}
\caption{F1 scores of RR baseline model for each of the rhetorical role on test data}
\label{table:modelScores}
\end{center}
\end{table}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.45]{images/baseline_confusion_matrix.pdf}
\caption{Confusion Matrix for SciBERT-HSLN model predictions on the test data}
\label{fig:confusion_matrix}
\end{center}
\vspace{-5mm}
\end{figure}
\section{Related Work}
In recent times, there has been a lot of work in the area of legal text processing, and many different tasks and techniques have been proposed: for example, Prior Case Retrieval \cite{jackson2003information}, Summarization \cite{moens1999abstracting,saravanan2007using}, Case Prediction \cite{malik-etal-2021-ildc,chalkidis-etal-2019-neural,strickson-legal-2020,sulea-etal-2017-predicting,kapoor-etal-2022-HLDC}, Argument Mining \cite{wyner2010approaches,moens2007automatic},
Information Extraction and Retrieval \cite{tran2019building,grabmair2011toward}, and
Event Extraction \cite{lagos2010event,maxwell2009evaluation}.
Recently, efforts have been made to develop corpora that could aid various legal NLP tasks; for example, \newcite{malik-etal-2021-ildc} have released a corpus of 35K Indian Supreme Court documents for the task of judgment prediction and explanation. \newcite{chalkidis-etal-2019-neural} have released 11,478 legal documents corresponding to the European Court of Human Rights (ECHR). \newcite{strickson_legal_2020} have proposed a corpus of 4,959 UK Supreme Court documents. \newcite{xiao2018cail2018} have created a large-scale corpus of 2.68 million criminal case documents and released CAIL (Chinese AI and Law Challenge) dataset for judgment prediction. A new multilingual dataset of European Union (EU) legal documents has been recently released by \newcite{chalkidis-etal-2021-multieurlex}.
Research in rhetorical roles for legal text processing has been active in the past few years. \newcite{Farzindar2004LetSumAA,hachey2006extractive} have leveraged rhetorical roles to create summaries of legal texts. \newcite{saravanan2008automatic} proposed a CRF-based model using hand-crafted features for segmenting documents using seven different roles. \newcite{bhatia2014analysing} used Genre Analysis of Legal Texts to create seven rhetorical categories. \newcite{bhattacharya2019identification} proposed a CRF-BiLSTM model for automatically assigning rhetorical roles to sentences in Indian legal documents. \newcite{malik-rr-2021} created an RR corpus annotated with 13 fine-grained roles and further developed a multi-task learning based model for predicting RRs. In this paper, we also propose a corpus of English Indian legal judgment documents annotated with Rhetorical Roles; however, we annotate the documents with a more extensive set of 12 rhetorical role labels and a NONE label (in case none of the 12 labels is applicable). Moreover, to the best of our knowledge, we create the largest such corpus, with 354 documents (vs. 100 documents in the previous RR corpus by \newcite{malik-rr-2021}) and 40,315 sentences annotated with 13 (12 + NONE) different types of rhetorical role labels. We propose state-of-the-art transformer models for RR prediction and show the use case of RRs for case summarization and legal judgment prediction.
Recent success in almost every area in NLP has been due to transformer-based neural architectures \cite{wang2018glue}. We do not discuss the details of transformer architectures here and refer the reader to the survey on transformers by \newcite{tay2020efficient}. We develop transformer-based baseline models for automatically segmenting legal documents into RRs units.
\section{Introduction}
\subsection{Curried algebra}
Let $V$ be a finite-dimensional vector space. The general linear Lie algebra on $V$, denoted $\fgl(V)$, can be identified with the tensor product $V \otimes V^*$. A representation of $\fgl(V)$ on a vector space $M$ is a linear map
\begin{displaymath}
\mu \colon \fgl(V) \to \End(M)
\end{displaymath}
satisfying the equation
\begin{equation} \label{eq:gl1}
\mu([X,Y]) = [\mu(X), \mu(Y)]
\end{equation}
for all $X,Y \in \fgl(V)$. By currying (also known as tensor-hom adjunction), giving the linear map $\mu$ is equivalent to giving a linear map
\begin{displaymath}
a \colon V \otimes M \to V \otimes M.
\end{displaymath}
A natural problem, then, is to determine what condition \eqref{eq:gl1} corresponds to in terms of $a$. In Proposition~\ref{prop:glid}, we find that it amounts to the identity
\begin{equation} \label{eq:gl2}
\tau a \tau a - a \tau a \tau = a \tau - \tau a
\end{equation}
in $\End(V \otimes V \otimes M)$, where $\tau$ is the map that switches the first two tensor factors and we have written $a$ for $\id \otimes a$. We thus have two equivalent ways of viewing representations of $\fgl(V)$: as linear maps $\mu$ satisfying \eqref{eq:gl1}, or as a linear map $a$ satisfying \eqref{eq:gl2}.
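As a quick sanity check, \eqref{eq:gl2} can be verified numerically: with one natural currying convention, $a(v \otimes m) = \sum_i e_i \otimes \mu(v \otimes e_i^*)(m)$ with respect to a basis $\{e_i\}$ of $V$, i.e., $a = \sum_{i,j} E_{ij} \otimes \mu(E_{ji})$ in coordinates (the normalization in Proposition~\ref{prop:glid} may differ by a harmless relabeling), the identity holds for any honest representation $\mu$ of $\fgl(V)$, e.g., for $M = V \otimes V$:
\begin{verbatim}
import numpy as np

n = 3
I = np.eye(n)

def E(i, j):
    m = np.zeros((n, n)); m[i, j] = 1.0
    return m

def mu(X):                    # representation of gl(V) on M = V (x) V
    return np.kron(X, I) + np.kron(I, X)

dM = n * n
# curried action a = sum_{i,j} E_ij (x) mu(E_ji) on V (x) M
a = sum(np.kron(E(i, j), mu(E(j, i)))
        for i in range(n) for j in range(n))

swap = np.zeros((n * n, n * n))  # switches the two V factors
for i in range(n):
    for j in range(n):
        swap[j * n + i, i * n + j] = 1.0

tau = np.kron(swap, np.eye(dM))  # tau on V (x) V (x) M
A = np.kron(I, a)                # "a" acting on the last two factors

lhs = tau @ A @ tau @ A - A @ tau @ A @ tau
rhs = A @ tau - tau @ A
print(np.allclose(lhs, rhs))     # True
\end{verbatim}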
The advantage of the second point of view is that it makes sense in contexts where we may not have duals. Indeed, suppose that $V$ is an object in a tensor category $\cC$. We define the \defi{curried general linear Lie algebra} on $V$, denoted $\ul{\fgl}(V)$, in a Tannakian sense: a $\ul{\fgl}(V)$-module is a map $a \colon V \otimes M \to V \otimes M$ satisfying \eqref{eq:gl2}. We emphasize that $\ul{\fgl}(V)$ is not actually an object of $\cC$: only the notion of $\ul{\fgl}(V)$-module is defined. If $V$ is dualizable then one can form the Lie algebra $\fgl(V)=V \otimes V^*$ in $\cC$, and $\ul{\fgl}(V)$-modules are equivalent to $\fgl(V)$-modules. However, one can consider $\ul{\fgl}(V)$-modules even if $V$ is not dualizable.
The above process can be applied to many algebras built out of a vector space and its dual, and we examine a number of cases in detail. Our primary motivation for developing this theory lies with its applications to representations of combinatorial categories, which we now explain.
\subsection{Representations of combinatorial categories}
Let $\fG$ be a category and let $\bk$ be a commutative ring. A \defi{$\fG$-module} is a functor $\fG \to \Mod_\bk$. Representations of categories, especially those of a combinatorial flavor, have received extensive attention in the last decade, and (for certain $\fG$'s) form the main subject of this series of papers. We now describe how curried algebras can be used to better understand these objects.
Let $\mathbf{FB}$ be the category of finite sets and bijections. An $\mathbf{FB}$-module, also known as a linear species, is simply a sequence of symmetric group representations. Given two $\mathbf{FB}$-modules $M$ and $N$, we define their tensor product $M \otimes N$ to be the $\mathbf{FB}$-module given by
\[
(M \otimes N)(S) = \bigoplus_{S=A \amalg B} M(A) \otimes N(B).
\]
This gives the category of $\mathbf{FB}$-modules a symmetric monoidal structure. The motivating problem for this paper is the following: given a diagram category $\fG$, express $\fG$-modules as $\mathbf{FB}$-modules with extra structure, defined in terms of the tensor product. The curried perspective will help us understand this extra structure.
Here is the simplest case (which does not require currying). Following Church, Ellenberg, and Farb \cite{fimodule}, let $\mathbf{FI}$ be the category of finite sets and injections. An $\mathbf{FI}$-module is a sequence of symmetric group representations (i.e., an $\mathbf{FB}$-module) with some transition maps. Let $\bV$ be the \defi{standard} $\mathbf{FB}$-module: this is $\bk$ on sets of size~1 and~0 on all other sets. It turns out that the transition maps in $M$ can be encoded as a map of $\mathbf{FB}$-modules $a \colon \bV \otimes M \to M$. Not every such map $a$ defines an $\mathbf{FI}$-module: the key condition is that $a$ should give $M$ the structure of a $\Sym(\bV)$-module. This perspective led to a rich analysis of the category of $\mathbf{FI}$-modules in \cite{symc1}.
We now look at a slightly more complicated case. Consider the category $\mathbf{FI}\sharp$, also introduced by Church, Ellenberg, and Farb. Its objects again are finite sets, but now a morphism $S \to T$ is a pair $(S_0, i)$ where $S_0$ is a subset of $S$ and $i \colon S_0 \to T$ is an injection. An $\mathbf{FI}\sharp$-module $M$ is an $\mathbf{FB}$-module equipped with transition maps $M([n]) \to M([n+1])$, corresponding to the standard inclusion $[n] \to [n+1]$, and $M([n+1]) \to M([n])$, corresponding to the standard partial injection $[n+1] \to [n]$ defined on $[n]$. These transition maps can be encoded as maps of $\mathbf{FB}$-modules
\begin{displaymath}
a \colon \bV \otimes M \to M, \qquad b \colon M \to \bV \otimes M
\end{displaymath}
Not every pair $(a,b)$ defines an $\mathbf{FI}\sharp$-module structure on $M$: there are a few conditions that must be satisfied (see \cite{curried}). Initially, these conditions do not seem to have much meaning. This is where the curried perspective comes in: it turns out that the conditions for $(a,b)$ to define an $\mathbf{FI}\sharp$-module are (nearly) the conditions needed for it to define a representation of the curried Weyl algebra.
This phenomenon occurs throughout this paper. In particular, we find that representations of all of the Brauer-like categories of interest in this series of papers can be viewed as representations of the curried forms of familiar Lie algebras. For example, representations of the Brauer category itself are equivalent to representations of the curried symplectic Lie algebra $\ul{\fsp}$ in $\Mod_{\mathbf{FB}}$. See Figure~\ref{fig1} for a summary. The details for the examples not appearing in the current article can be found in \cite{curried}.
\begin{figure}[!h]
\begin{tabular}{ll}
\thickhline \\[-11pt]
Diagram category & Curried algebra \\[2pt]
\hline \\[-11pt]
Brauer & Symplectic Lie algebra on $\bV \oplus \bV^*$ \\
Signed Brauer & Orthogonal Lie algebra on $\bV \oplus \bV^*$ \\
Spin Brauer & Orthosymplectic Lie superalgebra on $\bV[1] \oplus \bk \oplus \bV^*[1]$\\
Signed spin Brauer & Orthogonal Lie algebra on $\bV \oplus \bk \oplus \bV^*$\\
Periplectic Brauer & Periplectic Lie superalgebra on $\bV \oplus \bV^*[1]$ \\
Partition & Weyl Lie algebra on $\bV \oplus \bV^*$ \\
Degenerate partition & Hamiltonian Lie algebra on $\bV \oplus \bV^*$ \\
$\mathbf{FI}\sharp(\delta)$ & Heisenberg Lie algebra on $\bV \oplus \bV^*$\\
$\mathbf{FI}$ & Symmetric algebra on $\bV$ \\
$\mathbf{FA}$ & Witt Lie algebra on $\bV^*$ \\
$\mathbf{FA}^{\op}$ & Witt Lie algebra on $\bV$ \\[2pt]
\thickhline
\end{tabular}
\caption{Diagram categories and corresponding curried algebras in $\Mod_{\mathbf{FB}}$.} \label{fig1}
\end{figure}
\subsection{Uses}
There are a few reasons that the curried perspective on diagram categories is useful. First, it provides intuition: e.g., knowing that $\mathbf{FI}\sharp$-modules are modules for a Heisenberg algebra can help one guess how they should behave (though for $\mathbf{FI}\sharp$ itself this is not really necessary, since they are well understood). Second, it suggests new directions: for example, the curried Hamiltonian Lie algebra led us to a novel variant of the partition category that we expect to be interesting.
Finally, the curried perspective helps in applying Schur--Weyl duality as in \cite{infrank} (and this was our main motivation). For us, Schur--Weyl duality is the statement that, in characteristic~0, the category $\Mod_{\mathbf{FB}}$ is equivalent to the category $\Rep^{\pol}(\GL)$ of polynomial representations of the infinite general linear group. This equivalence is a tensor equivalence, so anything stated using the tensor structure on $\Mod_{\mathbf{FB}}$ will transfer nicely to $\Rep^{\pol}(\GL)$. Using this, we find that the Schur--Weyl dual of a module for the Brauer category belongs to parabolic category $\cO$ for an infinite rank symplectic Lie algebra. Furthermore, due to the existence of specialization functors from $\Rep^\pol(\GL)$ to $\Rep^\pol(\GL_n)$ for all finite $n$, we immediately get specialization functors from this parabolic category $\cO$ in the infinite rank case to the finite rank case. This will be the focus of the next paper in this series.
\subsection{Method}
Establishing an equivalence between a curried algebra and a diagram category is entirely elementary, but it can get somewhat complicated. We have therefore developed the following method to treat this problem systematically and keep different concerns isolated:
\begin{enumerate}
\item We first carry out the currying process. We start with a ``model algebra'' $A$ built out of a vector space and its dual, and write down exactly what an $A$-module is without using duals, by the currying procedure. We extrapolate from this a general definition of curried $A$-module in a tensor category.
\item We then specialize this notion to the tensor category $\Mod_{\mathbf{FB}}$, and write down exactly what a curried $A$-module is in terms of $\mathbf{FB}$-modules equipped with certain operations.
\item Finally, we match the above description to a diagram category; this typically involves finding a presentation for the diagram category.
\end{enumerate}
Here is how this process works for relating the symplectic Lie algebra and the Brauer category:
\begin{enumerate}
\item Let $V$ be a finite dimensional vector space. Then $V \oplus V^*$ carries a canonical symplectic form. We take $\fsp(V \oplus V^*)$ to be our model algebra. We have a natural decomposition
\begin{displaymath}
\fsp(V \oplus V^*) = \Div^2(V^*) \oplus \fgl(V) \oplus \Div^2(V).
\end{displaymath}
We thus see that giving a $\fsp(V \oplus V^*)$-module $M$ amounts to giving linear maps
\begin{displaymath}
a \colon V \otimes M \to V \otimes M, \qquad b \colon \Div^2(V) \otimes M \to M, \qquad b' \colon M \to \Sym^2(V) \otimes M.
\end{displaymath}
satisfying certain conditions, which we determine explicitly. Given an object $V$ in a tensor category, we define a module for the \defi{curried symplectic algebra} $\ul{\fsp}(V \oplus V^*)$ to be an object $M$ with maps as above satisfying the conditions we just alluded to.
\item We now examine the curried symplectic algebra in linear species. Thus suppose that $M$ is a $\ul{\fsp}(\bV \oplus \bV^*)$-module, where $\bV$ is the standard $\mathbf{FB}$-module. Giving the map $b$ amounts to giving natural maps $\beta \colon M(S \setminus \{i,j\}) \to M(S)$, where $S$ is a finite set and $i$ and $j$ are distinct elements of it; this is what we mean by an operation on the $\mathbf{FB}$-module $M$. We can similarly describe $a$ and $b'$ in terms of operations. We explicitly write down the conditions on these operations that correspond to the defining conditions of $\ul{\fsp}(\bV \oplus \bV^*)$.
\item Finally, we show that an $\mathbf{FB}$-module with operations as above is the same thing as a module for the Brauer category. The basic idea is that $\beta$ gives the action of a single cap, while the operation corresponding to $b'$ gives the action of a single cup. To prove this, one must show that the identities from the previous step give all the defining relations between cups and caps in the Brauer category, which we do.
\end{enumerate}
\subsection{Open problems}
One broad class of open problem is to examine the currying procedure in other situations. There are other algebras that could be interesting to curry, such as the exceptional Lie algebras (see \cite{jun} for some work on $\fg_2$ and $\fe_6$), quantum groups, or truncated Cartan algebras in positive characteristic. Similarly, there are some diagram categories that would be interesting to see from the curried perspective, such as the simplex category, the category $\mathbf{OS}^{\op}$ from \cite[\S 8]{grobner}, or various linear analogues of $\mathbf{FI}$ like $\mathbf{VI}$ or $\mathbf{VIC}$. Finally, we have mainly focused on currying in the tensor category $\Mod_{\mathbf{FB}}$. What about other categories? (See Remark~\ref{rmk:VA} for a comment on $\Mod_{\mathbf{VB}}$.)
In \S \ref{s:abs}, we define an abstract notion of curried algebra. It would be helpful if this notion were better developed. In particular, is there a way to obtain the results of this paper with less casework?
\subsection{Relation to other papers in this series}
This paper can be read independently of the other papers in this series. The following paper \cite{brauercat3} will make essential use of this paper. Many more examples of curried algebras can be found in \cite{curried}.
There is an extensive literature related to Brauer categories; see \cite{brauercat1} for a more detailed discussion.
\subsection{Notation and conventions}
Throughout, $\bk$ denotes a fixed commutative ring. Unless otherwise stated, a tensor category is a $\bk$-linear category with a $\bk$-bilinear symmetric monoidal structure.
\subsection{Outline}
In \S \ref{s:bg}, we review linear species, and in \S \ref{s:tri}, we review the theory of triangular categories. In \S \ref{s:gl}, we look at the general linear Lie algebra from the curried perspective; this is the most important example. In \S \ref{s:symp}, \S \ref{s:witt}, and \S \ref{s:weyl}, we look at the symplectic, Witt, and Weyl Lie algebras from the curried perspective. These examples are important cases, and also representative of the currying process in general. Finally, in \S \ref{s:abs}, we make a few comments on abstract curried algebras.
\subsection*{Acknowledgments}
Some of the ideas in \S \ref{ss:fa} came out of joint discussions with Phil Tosteson; we thank him for letting us include this material here.
\section{Linear species} \label{s:bg}
\subsection{$\mathbf{FB}$-modules}
Let $\mathbf{FB}$ be the category of finite sets and bijections. An \defi{$\mathbf{FB}$-module} (also called a \defi{linear species}) is a functor $\mathbf{FB} \to \Mod_{\bk}$. A \defi{morphism} (or \defi{map}) between $\mathbf{FB}$-modules is a natural transformation of functors. We let $\Mod_{\mathbf{FB}}$ be the category of $\mathbf{FB}$-modules. It is a Grothendieck abelian category. Note that $\mathbf{FB}$-modules are equivalent to sequences $(M_n)_{n \ge 0}$ where $M_n$ is a representation of the symmetric group $\fS_n$.
Given two $\mathbf{FB}$-modules $M$ and $N$, we define their tensor product by
\begin{displaymath}
(M \otimes N)(S) = \bigoplus_{T \subseteq S} M(T) \otimes N(S \setminus T).
\end{displaymath}
From the sequence point of view, we have
\begin{displaymath}
(M \otimes N)_n = \bigoplus_{i+j=n} \Ind_{\fS_i \times \fS_j}^{\fS_n}(M_i \otimes N_j).
\end{displaymath}
The above tensor product gives $\Mod_{\mathbf{FB}}$ the structure of a symmetric monoidal category.
We define the \defi{standard $\mathbf{FB}$-module}, denoted $\bV$, to be the $\mathbf{FB}$-module that is $\bk$ on sets of cardinality~1, and 0~on all other sets. If $S$ is a finite set of cardinality $n$ then $\bV^{\otimes n}(S)$ is the free $\bk$-module with basis given by all total orderings $(s_1,\dots,s_n)$ of the elements of $S$, and $\bV^{\otimes n}(T) = 0$ if $|T| \ne n$. There is an additional action of $\sigma \in \fS_n$ on $\bV^{\otimes n}$ given by
\[
\sigma \cdot (s_1 ,\dots,s_n) = (s_{\sigma^{-1}(1)}, \dots, s_{\sigma^{-1}(n)}).
\]
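For instance, if $S=\{x,y\}$ then the only subsets $T \subseteq S$ contributing to $(\bV \otimes \bV)(S)$ are the two singletons, so
\begin{displaymath}
(\bV \otimes \bV)(S) = \bV(\{x\}) \otimes \bV(\{y\}) \oplus \bV(\{y\}) \otimes \bV(\{x\})
\end{displaymath}
is free of rank~2, with basis the two orderings $(x,y)$ and $(y,x)$; in the sequence description, this recovers $(\bV^{\otimes 2})_2 = \Ind_{\fS_1 \times \fS_1}^{\fS_2}(\bk \otimes \bk)$, the regular representation of $\fS_2$.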
The $n$th symmetric power $\Sym^n(\bV)$ is the $\fS_n$-coinvariants of $\bV^{\otimes n}$. From the above description, we see that this $\mathbf{FB}$-module is free of rank~1 when evaluated on a set $S$ of cardinality $n$; we write $t^S$ for the distinguished basis vector. The symmetric algebra is
\[
\Sym(\bV)=\bigoplus_{n \ge 0} \Sym^n(\bV).
\]
It admits both a multiplication map
\[
m \colon \Sym(\bV) \otimes \Sym(\bV) \to \Sym(\bV),
\]
and a comultiplication map
\begin{displaymath}
\Delta \colon \Sym(\bV) \to \Sym(\bV) \otimes \Sym(\bV).
\end{displaymath}
In terms of bases, these maps are given by
\begin{displaymath}
m(t^A \otimes t^B)=t^{A \cup B}, \qquad
\Delta(t^S) = \sum_{S = A \sqcup B} t^A \otimes t^B,
\end{displaymath}
where the second sum is over all decompositions of $S$ as a union of two disjoint subsets.
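For example, if $S=\{1,2\}$ then
\begin{displaymath}
\Delta(t^{\{1,2\}}) = t^{\emptyset} \otimes t^{\{1,2\}} + t^{\{1\}} \otimes t^{\{2\}} + t^{\{2\}} \otimes t^{\{1\}} + t^{\{1,2\}} \otimes t^{\emptyset}.
\end{displaymath}
One checks directly from the above formulas that $m$ is associative and commutative, and that $\Delta$ is coassociative and cocommutative.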
We can also consider the $n$th divided power $\Div^n(\bV)$, which is the $\fS_n$-invariants of $\bV^{\otimes n}$. Again, on a finite set $S$ of cardinality $n$, this space is free of rank~1, and we let $t^{[S]}$ be a basis vector. There is an averaging map
\begin{displaymath}
\avg \colon \Sym^n(\bV) \to \Div^n(\bV).
\end{displaymath}
On basis vectors, this takes $t^S$ to $t^{[S]}$, and so it is an isomorphism. This isomorphism is compatible with the multiplication and comultiplication on $\Div(\bV)=\bigoplus_{n \ge 0} \Div^n(\bV)$. For this reason, we will not really need divided powers in the context of $\mathbf{FB}$-modules.
\begin{remark}
This contrasts with the usual situation for vector spaces. Roughly speaking, the difference is that here we are dealing with sets rather than multisets, so that the action of $\fS_n$ on $\bV^{\otimes n}(S)$ is free; in the vector space setting, the complications arise from the existence of monomials with exponents greater than~1.
\end{remark}
\subsection{Operations on $\mathbf{FB}$-modules} \label{ss:fbstruc}
Let $S$ be a finite set. We write $S^{[n]}$ for the subset of $S^n$ consisting of tuples with distinct coordinates. We let $S^{[*]}=\coprod_{n \ge 0} S^{[n]}$. Given $\ul{x} \in S^{[n]}$, we write $S \setminus \ul{x}$ in place of $S \setminus \{x_1, \ldots, x_n\}$. We say that two elements $\ul{x} \in S^{[n]}$ and $\ul{y} \in S^{[m]}$ are \defi{disjoint} if $\{x_1,\dots,x_n\} \cap \{y_1,\dots,y_m\} = \emptyset$.
Let $M$ be an $\mathbf{FB}$-module. An \defi{operation} on $M$ is a rule $\phi$ that assigns to every finite set $S$ and elements $\ul{x},\ul{y} \in S^{[*]}$ a linear map
\begin{displaymath}
\phi^S_{\ul{x}, \ul{y}} \colon M(S \setminus \ul{y}) \to M(S \setminus \ul{x})
\end{displaymath}
that is natural, in the sense that if $i \colon S \to T$ is a bijection then the diagram
\begin{displaymath}
\xymatrix@C=6em{
M(S \setminus \ul{y}) \ar[r]^{\phi^S_{\ul{x},\ul{y}}} \ar[d]_i &
M(S \setminus \ul{x}) \ar[d]^i \\
M(T \setminus i(\ul{y})) \ar[r]^{\phi^T_{i(\ul{x}),i(\ul{y})}} &
M(T \setminus i(\ul{x})) }
\end{displaymath}
commutes. It is useful to picture operations diagrammatically; see Figure~\ref{fig:op}.
\begin{figure}
\begin{displaymath}
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node at (0,.5) (a1) {$\bullet$};
\node at (0,1) (a2) {$\bullet$};
\node at (0,1.5) (a3) {$\bullet$};
\node at (0,3) (a6) {$\bullet$};
\node at (0,3.5) (a7) {$\bullet$};
\node at (5.5,2) (b4) {$\bullet$};
\node at (5.5,2.5) (b5) {$\bullet$};
\node at (5.5,3) (b6) {$\bullet$};
\node at (5.5,3.5) (b7) {$\bullet$};
\node[xshift=-8pt] at (a1.center) {\tiny 1};
\node[xshift=-8pt] at (a2.center) {\tiny 2};
\node[xshift=-8pt] at (a3.center) {\tiny 3};
\node[xshift=-8pt] at (a6.center) {\tiny 6};
\node[xshift=-8pt] at (a7.center) {\tiny 7};
\node[xshift=8pt] at (b4.center) {\tiny 4};
\node[xshift=8pt] at (b5.center) {\tiny 5};
\node[xshift=8pt] at (b6.center) {\tiny 6};
\node[xshift=8pt] at (b7.center) {\tiny 7};
\draw (a1.center) to (2,.5);
\draw (a2.center) to (2,1);
\draw (a3.center) to (2,1.5);
\draw (b4.center) to (3.5,2);
\draw (b5.center) to (3.5,2.5);
\filldraw [fill=blue!20!white, draw=black] (2,.25) rectangle (3.5,2.75);
\draw (a6.center) to (b6.center);
\draw (a7.center) to (b7.center);
\end{tikzpicture}
\end{displaymath}
\caption{Diagrammatic view of $\phi^S_{\ul{x},\ul{y}}$ where $S=\{1,\ldots,7\}$, $\ul{x}=(1,2,3)$, and $\ul{y}=(4,5)$. We picture the operation as the box that takes input on the $\ul{x}$ strands and produces output on the $\ul{y}$ strands.}
\label{fig:op}
\end{figure}
The definition of operation is quite general; in practice, our operations will be constrained in various ways. We mention a few of the important constraints here. Fix an operation $\phi$ for what follows.
\begin{itemize}
\item We say that $\phi$ is \defi{symmetric} if $\phi^S_{\ul{x},\ul{y}}$ is invariant under permutations of $\ul{x}$ and $\ul{y}$. In this case, we can simply regard $\ul{x}$ and $\ul{y}$ as subsets $A$ and $B$ of $S$, and we typically write $\phi^S_{A,B}$ instead.
\item Similarly, we say that $\phi$ is \defi{skew-symmetric} if $\phi^S_{\ul{x},\ul{y}}$ transforms under the sign character when $\ul{x}$ or $\ul{y}$ is permuted.
\item We say that $\phi$ is an \defi{$(m,n)$-operation} if $\phi^S_{\ul{x},\ul{y}}=0$ unless $\ul{x}$ has length $m$ and $\ul{y}$ has length $n$. In this case, we typically regard $\phi^S_{\ul{x},\ul{y}}$ as only defined on such tuples.
\item We say that $\phi$ is \defi{simple} if $\phi^S_{\ul{x},\ul{y}}=0$ unless $\ul{x}$ and $\ul{y}$ are disjoint. Again, in this case we typically regard $\phi^S_{\ul{x},\ul{y}}$ as only being defined on disjoint tuples.
\end{itemize}
Every operation can be expressed in terms of simple operations. We explain this in the case where $\phi$ is symmetric, as this somewhat simplifies the situation. For $n \in \bN$ define a simple operation $\phi[n]$ by $\phi[n]^S_{A,B}=\phi^{S \amalg [n]}_{A \amalg [n], B \amalg [n]}$ if $A$ and $B$ are disjoint. The naturality of $\phi$ implies that
\begin{displaymath}
\phi^S_{A,B}=\phi[n]^{S \setminus (A \cap B)}_{A \setminus B, B \setminus A}
\end{displaymath}
where $n=\#(A \cap B)$. Thus $\phi$ determines, and is determined by, the sequence of simple operations $(\phi[n])_{n \ge 0}$.
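For instance, suppose $\phi$ is a symmetric $(1,1)$-operation. Then $\phi$ is determined by the simple $(1,1)$-operation $\alpha=\phi[0]$ together with the $(0,0)$-operation $\omega=\phi[1]$ (which is just an endomorphism of the $\mathbf{FB}$-module $M$): for distinct $i,j \in S$ we have $\phi^S_{\{i\},\{j\}}=\alpha^S_{\{i\},\{j\}}$, while $\phi^S_{\{i\},\{i\}}=\omega^{S \setminus i}$. This is exactly the decomposition that appears in \eqref{eq:gl-fb} below.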
Operations are closely related to the tensor product on $\mathbf{FB}$-modules. For example, giving a symmetric $(m,n)$-operation $\phi$ on $M$ is equivalent to giving a map of $\mathbf{FB}$-modules
\begin{displaymath}
a \colon \Sym^n(\bV) \otimes M \to \Sym^m(\bV) \otimes M.
\end{displaymath}
Indeed, given a finite set $S$, a subset $B$ of $S$ of cardinality $n$, and an element $x \in M(S \setminus B)$, we can write
\begin{displaymath}
a(t^B \otimes x) = \sum_{\substack{A \subseteq S\\ \# A=m}} t^A \otimes \phi^S_{A,B}(x)
\end{displaymath}
where $\phi^S_{A,B}(x)$ belongs to $M(S \setminus A)$. This defines a map
\begin{displaymath}
\phi^S_{A,B} \colon M(S \setminus B) \to M(S \setminus A),
\end{displaymath}
and these maps define an $(m,n)$-operation $\phi$.
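For instance, a symmetric $(0,2)$-operation on $M$ is the same as a map $\Sym^2(\bV) \otimes M \to M$, and a symmetric $(2,0)$-operation is the same as a map $M \to \Sym^2(\bV) \otimes M$ (here $\Sym^0(\bV)$ is the unit object $\mathbf{1}$). Operations of exactly this kind appear in \S \ref{s:symp}.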
Let $\phi$ and $\psi$ be operations. We say that $\phi$ and $\psi$ \defi{commute} if the following condition holds: given a finite set $S$ and tuples $\ul{x}, \ul{y}, \ul{w}, \ul{z} \in S^{[*]}$ such that $\ul{x}$ and $\ul{w}$ are disjoint and $\ul{y}$ and $\ul{z}$ are disjoint, the diagram
\begin{displaymath}
\xymatrix@C=8em{
M(S \setminus (\ul{y} \cup \ul{z})) \ar[r]^{\phi^{S \setminus \ul{z}}_{\ul{x},\ul{y}}} \ar[d]_{\psi^{S \setminus \ul{y}}_{\ul{w},\ul{z}}} &
M(S \setminus (\ul{x} \cup \ul{z})) \ar[d]^{\psi^{S \setminus \ul{x}}_{\ul{w},\ul{z}}} \\
M(S \setminus (\ul{y} \cup \ul{w})) \ar[r]^{\phi^{S \setminus \ul{w}}_{\ul{x},\ul{y}}} &
M(S \setminus (\ul{x} \cup \ul{w})) }
\end{displaymath}
commutes. See Figure~\ref{fig:comm-op} for a diagrammatic interpretation of this condition. Similarly, we say that $\phi$ and $\psi$ \defi{skew-commute} if the two paths above are negatives of each other. We note that an operation need not commute with itself, and can skew-commute with itself while still being non-trivial (even in characteristic~0).
\begin{figure}
\begin{displaymath}
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node at (0,.5) (a1) {$\bullet$};
\node at (0,1) (a2) {$\bullet$};
\node at (0,1.5) (a3) {$\bullet$};
\draw (a1.center) to (1,.5);
\draw (a2.center) to (1,1);
\draw (a3.center) to (1,1.5);
\filldraw [fill=orange!20!white, draw=black] (1,.25) rectangle (2.5,1.75);
\node at (0,2.5) (a4) {$\bullet$};
\node at (0,3) (a5) {$\bullet$};
\draw (a4.center) to (3.5,2.5);
\draw (a5.center) to (3.5,3);
\filldraw [fill=blue!20!white, draw=black] (3,2.25) rectangle (4.5,3.25);
\node at (5.5,.75) (b1) {$\bullet$};
\node at (5.5,1.25) (b2) {$\bullet$};
\node at (5.5,2.5) (b3) {$\bullet$};
\node at (5.5,3) (b4) {$\bullet$};
\draw (b1.center) to (2.5,.75);
\draw (b2.center) to (2.5,1.25);
\draw (b3.center) to (4.5,2.5);
\draw (b4.center) to (4.5,3);
\node[xshift=-8pt] at (a1.center) {\tiny 1};
\node[xshift=-8pt] at (a2.center) {\tiny 2};
\node[xshift=-8pt] at (a3.center) {\tiny 3};
\node[xshift=-8pt] at (a4.center) {\tiny 4};
\node[xshift=-8pt] at (a5.center) {\tiny 5};
\node[xshift=8pt] at (b1.center) {\tiny 6};
\node[xshift=8pt] at (b2.center) {\tiny 7};
\node[xshift=8pt] at (b3.center) {\tiny 8};
\node[xshift=8pt] at (b4.center) {\tiny 9};
\end{tikzpicture}
\qquad=\qquad
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node at (0,.5) (a1) {$\bullet$};
\node at (0,1) (a2) {$\bullet$};
\node at (0,1.5) (a3) {$\bullet$};
\draw (a1.center) to (3,.5);
\draw (a2.center) to (3,1);
\draw (a3.center) to (3,1.5);
\filldraw [fill=orange!20!white, draw=black] (3,.25) rectangle (4.5,1.75);
\node at (0,2.5) (a4) {$\bullet$};
\node at (0,3) (a5) {$\bullet$};
\draw (a4.center) to (1,2.5);
\draw (a5.center) to (1,3);
\filldraw [fill=blue!20!white, draw=black] (1,2.25) rectangle (2.5,3.25);
\node at (5.5,.75) (b1) {$\bullet$};
\node at (5.5,1.25) (b2) {$\bullet$};
\node at (5.5,2.5) (b3) {$\bullet$};
\node at (5.5,3) (b4) {$\bullet$};
\draw (b1.center) to (4.5,.75);
\draw (b2.center) to (4.5,1.25);
\draw (b3.center) to (2.5,2.5);
\draw (b4.center) to (2.5,3);
\node[xshift=-8pt] at (a1.center) {\tiny 1};
\node[xshift=-8pt] at (a2.center) {\tiny 2};
\node[xshift=-8pt] at (a3.center) {\tiny 3};
\node[xshift=-8pt] at (a4.center) {\tiny 4};
\node[xshift=-8pt] at (a5.center) {\tiny 5};
\node[xshift=8pt] at (b1.center) {\tiny 6};
\node[xshift=8pt] at (b2.center) {\tiny 7};
\node[xshift=8pt] at (b3.center) {\tiny 8};
\node[xshift=8pt] at (b4.center) {\tiny 9};
\end{tikzpicture}
\end{displaymath}
\caption{Commuting operations.}
\label{fig:comm-op}
\end{figure}
\section{Triangular categories} \label{s:tri}
Most of the diagram categories considered in this paper are \emph{triangular categories}, a notion introduced in \cite{brauercat1} (and similar to the notion of semi-infinite highest weight category in the sense of \cite{BrundanStroppel}). We will use this structure to aid us in establishing presentations for these categories. We recall the definition here and establish a few properties of these categories that will be useful.
Let $\fG$ be a $\bk$-linear category satisfying the following condition:
\begin{itemize}
\item[(T0)] The category $\fG$ is essentially small, and all $\Hom$ spaces are finite dimensional.
\end{itemize}
We denote the set of isomorphism classes in $\fG$ by $\vert \fG \vert$. Recall that a subcategory is \defi{wide} if it contains all objects.
\begin{definition}
A \defi{triangular structure} on $\fG$ is a pair $(\fU, \fD)$ of wide subcategories of $\fG$ such that the following axioms hold:
\begin{itemize}
\item[(T1)] We have $\End_{\fU}(x)=\End_{\fD}(x)$ for all objects $x$.
\item[(T2)] There exists a partial order $\le$ on $\vert \fG \vert$ such that:
\begin{enumerate}
\item For all $x \in \vert \fG \vert$ there are only finitely many $y \in \vert \fG \vert$ with $y \le x$.
\item The category $\fU$ is upwards with respect to $\le$, i.e., if there exists a non-zero morphism $x \to y$, then $x \le y$.
\item The category $\fD$ is downwards with respect to $\le$, i.e., if there exists a non-zero morphism $x \to y$, then $y \le x$.
\end{enumerate}
\item[(T3)] For all $x,z \in \fG$, the natural map
\begin{displaymath}
\bigoplus_{y \in \vert \fG \vert} \Hom_{\fU}(y,z) \otimes_{\End_{\fU}(y)} \Hom_{\fD}(x,y) \to \Hom_{\fG}(x,z)
\end{displaymath}
is an isomorphism.
\end{itemize}
A \defi{triangular category} is a $\bk$-linear category satisfying (T0) equipped with a triangular structure.
\end{definition}
\begin{remark}
In \cite{brauercat1}, we required the rings $\End_{\fU}(x)$ to be semisimple; we do not make that assumption here.
\end{remark}
Fix a triangular category $\fG$, and set
\[
\fM = \fU \cap \fD.
\]
Note that all non-zero morphisms in $\fM$ are between isomorphic objects; in our applications $\fM$ will almost always be the $\bk$-linearization of $\mathbf{FB}$. Recall that if $\fC$ is a $\bk$-linear category then a \defi{$\fC$-module} is a $\bk$-linear functor $\fC \to \Mod_{\bk}$. We are interested in modules over the categories $\fG$, $\fU$, $\fD$, and $\fM$. Suppose that $\fC$ is one of these categories. Then $\fC$ has the same objects as $\fM$ and contains $\fM$. Thus a $\fC$-module can be regarded as an $\fM$-module equipped with extra structure; we refer to this extra structure as a \defi{$\fC$-structure}. By (T3) it follows that a $\fG$-structure on an $\fM$-module is determined by its restrictions to $\fD$ and $\fU$. We say that a $\fD$-structure and a $\fU$-structure on an $\fM$-module are \defi{compatible} if they come from a $\fG$-structure. We now investigate compatibility in more detail.
Fix an $\fM$-module $M$ equipped with a $\fD$-structure and a $\fU$-structure. Let $\alpha$ be a morphism in $\fG$. Write
\begin{displaymath}
\alpha = \sum_{i=1}^n \phi_i \circ \psi_i
\end{displaymath}
with $\phi_i$ in $\fU$ and $\psi_i$ in $\fD$, which is possible by (T3). We then define
\begin{displaymath}
\alpha_* = \sum_{i=1}^n (\phi_i)_* (\psi_i)_*.
\end{displaymath}
This is well-defined by (T3) and the fact that the $\fU$- and $\fD$-structures agree on $\fM$. Suppose that $\beta$ is a second morphism such that $\beta \circ \alpha$ is defined. We say that $(\alpha, \beta)$ is \defi{compatible} if $(\beta \circ \alpha)_*=\beta_* \alpha_*$. We note that $(\alpha, \beta)$ is automatically compatible if $\beta$ belongs to $\fU$, or if $\alpha$ belongs to $\fD$. Clearly, the $\fU$- and $\fD$-structures on $M$ are compatible if and only if $(\alpha, \beta)$ is compatible for all $\alpha$, $\beta$ such that $\beta \circ \alpha$ is defined. In fact, one has the following:
\begin{proposition}
The $\fU$- and $\fD$-structures on $M$ are compatible if and only if every pair $(\phi, \psi)$, with $\phi$ in $\fU$ and $\psi$ in $\fD$ such that $\psi \circ \phi$ is defined, is compatible.
\end{proposition}
\begin{proof}
Let $\alpha$ and $\beta$ be morphisms in $\fG$ such that $\beta \circ \alpha$ is defined. Write
\begin{displaymath}
\alpha = \sum_{i=1}^n \phi_i \circ \psi_i, \qquad
\beta = \sum_{j=1}^m \phi_j' \circ \psi_j'
\end{displaymath}
with $\phi_i$ and $\phi'_j$ in $\fU$ and $\psi_i$ and $\psi'_j$ in $\fD$. For each $(i,j)$, write
\begin{displaymath}
\psi_j' \circ \phi_i = \sum_{k=1}^{N_{i,j}} \phi''_{i,j,k} \circ \psi''_{i,j,k}
\end{displaymath}
where, again, the $\phi''$ belong to $\fU$ and the $\psi''$ belong to $\fD$. Then
\begin{displaymath}
\beta \circ \alpha= \sum_{i=1}^n \sum_{j=1}^m \sum_{k=1}^{N_{i,j}} (\phi'_j \circ \phi''_{i,j,k}) \circ (\psi''_{i,j,k} \circ \psi_i).
\end{displaymath}
We thus have
\begin{align*}
(\beta \circ \alpha)_*
&= \sum_{i,j,k} (\phi'_j)_* (\phi''_{i,j,k})_* (\psi''_{i,j,k})_* (\psi_i)_* \\
&= \sum_{i,j} (\phi'_j)_* (\psi'_j)_* (\phi_i)_* (\psi_i)_* \\
&= \beta_* \alpha_*,
\end{align*}
where in the first step we used the definition of $(\beta \circ \alpha)_*$, in the second step we used the compatibility of $(\phi_i, \psi'_j)$ for all $i$ and $j$, and in the third step we used the definitions of $\alpha_*$ and $\beta_*$. Thus $(\alpha, \beta)$ is compatible, and the proof is complete.
\end{proof}
We now give a refinement of the above criterion. Let $\fC$ be a $\bk$-linear category. We say a class of morphisms $\cC$ in $\fC$ \defi{generates} if every morphism in $\fC$ can be expressed as a $\bk$-linear combination of finite compositions of morphisms in $\cC$.
\begin{proposition} \label{prop:tri-comp}
Let $\cU$ generate $\fU$ and let $\cD$ generate $\fD$. Suppose that $(\phi, \psi)$ are compatible whenever $\phi \in \cU$ and $\psi \in \cD$, and $\psi \circ \phi$ is defined. Then the $\fU$- and $\fD$-structures are compatible.
\end{proposition}
\begin{proof}
We write $s(\phi)$ and $t(\phi)$ for the source and target of a morphism $\phi$. For $x \in \vert \fG \vert$, consider the following statements:
\begin{itemize}
\item[$S(x)$:] Let $\alpha$ and $\beta$ be morphisms in $\fG$ such that $\beta \circ \alpha$ is defined and $t(\alpha) =x$. Then $(\alpha, \beta)$ is compatible.
\item[$S_{\le x}$:] Statement $S(y)$ holds for all $y \le x$.
\item[$S_{<x}$:] Statement $S(y)$ holds for all $y<x$.
\end{itemize}
Clearly, it suffices to prove $S(x)$ for all $x$. We prove that $S_{<x}$ implies $S_{\le x}$ for all $x$. This implies $S(x)$ for all $x$ by an inductive argument, which is enabled by the condition (T2a). Thus let $x \in \vert \fG \vert$ be given and suppose $S_{<x}$ holds.
First suppose that $\phi$ is a morphism in $\fU$ and $\psi$ is a morphism in $\fD$ such that $\psi \circ \phi$ is defined and $t(\phi) \le x$. We show that $(\phi, \psi)$ is compatible. If $\phi$ or $\psi$ belongs to $\fM$, the statement is trivial, so assume this is not the case. We can express $\phi$ as a linear combination of compositions of morphisms in $\cU$. Since compatibility interacts well with linear combinations, it suffices to treat the case where $\phi$ is a composition of morphisms in $\cU$. We can thus write $\phi=\phi^1 \phi^2$ where $\phi^1$ belongs to $\cU$ but not to $\fM$, and $\phi^2$ belongs to $\fU$. Similarly, we can assume $\psi=\psi^2 \psi^1$ where $\psi^1$ belongs to $\cD$ but not to $\fM$, and $\psi^2$ belongs to $\fD$. Write
\begin{displaymath}
\psi^1 \circ \phi^1 = \sum_{i=1}^n \phi_i^3 \circ \psi_i^3
\end{displaymath}
with $\phi_i^3$ in $\fU$ and $\psi_i^3$ in $\fD$. Since $(\phi^1, \psi^1)$ is compatible by assumption, we have
\begin{displaymath}
\psi^1_* \phi^1_* = (\psi^1 \circ \phi^1)_* = \sum_{i=1}^n (\phi_i^3)_* (\psi_i^3)_*.
\end{displaymath}
We thus have
\begin{align*}
\psi^2_* \psi^1_* \phi^1_* \phi^2_*
&= \sum_{i=1}^n \psi^2_* (\phi_i^3)_*(\psi_i^3)_* \phi^2_*
= \sum_{i=1}^n \psi^2_* (\phi_i^3)_* (\psi_i^3 \circ \phi^2)_* \\
&= \sum_{i=1}^n \psi^2_* (\phi_i^3 \circ \psi_i^3 \circ \phi^2)_*
= \sum_{i=1}^n (\psi^2 \circ \phi_i^3 \circ \psi_i^3 \circ \phi^2)_*
= (\psi^2 \circ \psi^1 \circ \phi^1 \circ \phi^2)_*
\end{align*}
where we have repeatedly used $S_{<x}$. Note that
\begin{align*}
t(\phi^2)=s(\phi^1)<t(\phi^1) &\le x \\
t(\psi_i^3) \le s(\psi_i^3)=t(\phi^2) &<x \\
t(\phi_i^3)=t(\psi^1)<s(\psi^1)=t(\phi^1) &\le x
\end{align*}
which justifies applying $S_{<x}$ in each case. We thus see that $\psi_* \phi_*=(\psi \circ \phi)_*$, and so $(\phi, \psi)$ is compatible.
We now treat the general case. Thus let $\alpha$ and $\beta$ be morphisms in $\fG$ such that $\beta \circ \alpha$ is defined and $t(\alpha) \le x$. We show that $(\alpha, \beta)$ is compatible. Write
\begin{displaymath}
\alpha = \sum_{i=1}^n \phi_i \circ \psi_i, \qquad
\beta = \sum_{j=1}^m \phi'_j \circ \psi_j'
\end{displaymath}
where $\phi_i$ and $\phi'_j$ belong to $\fU$ and the $\psi_i$ and $\psi'_j$ belong to $\fD$.
We have
\begin{align*}
\beta_* \alpha_*
&= \sum_{i,j} (\phi'_j)_* (\psi'_j)_* (\phi_i)_* (\psi_i)_*
=\sum_{i,j} (\phi'_j)_* (\psi'_j \circ \phi_i)_* (\psi_i)_* \\
&= \sum_{i,j} (\phi'_j \circ \psi'_j \circ \phi_i \circ \psi_i)_*
= (\beta \circ \alpha)_*.
\end{align*}
In the second step we used the previous paragraph, and in the third step we used the automatic compatibility for morphisms in $\fU$ and $\fD$. This completes the proof.
\end{proof}
\section{The general linear Lie algebra} \label{s:gl}
\subsection{Currying}
Let $V$ be a finite-dimensional vector space, and consider the Lie algebra $\fgl(V)$. A \defi{representation} of $\fgl(V)$ consists of a vector space $M$ equipped with a linear map
\begin{displaymath}
\mu \colon \fgl(V) \to \End(M)
\end{displaymath}
such that
\begin{equation} \label{eq:rep}
\mu([X,Y])=[\mu(X),\mu(Y)]
\end{equation}
holds for all $X,Y \in \fgl(V)$, where $[X,Y]=XY-YX$ denotes the commutator. Now, $\fgl(V)$ is canonically isomorphic to $V \otimes V^*$. Thus giving a linear map $\mu$ as above is equivalent to giving a linear map
\begin{displaymath}
a \colon V \otimes M \to V \otimes M,
\end{displaymath}
and the following proposition determines the condition that \eqref{eq:rep} imposes on $a$. We first introduce some notation. For a linear map $a$ as above, we define maps
\begin{displaymath}
a_1, a_2, \tau \colon V \otimes V \otimes M \to V \otimes V \otimes M
\end{displaymath}
as follows. First, $\tau$ switches the first two tensor factors, i.e., $\tau(v \otimes w \otimes x)=w \otimes v \otimes x$. Next, $a_2$ is $\id \otimes a$, i.e., $a_2(v \otimes w \otimes x) = v \otimes a(w \otimes x)$. Finally, $a_1 = \tau \circ a_2 \circ \tau$.
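Concretely, if $a(v \otimes x) = \sum_i v_i \otimes x_i$ then
\begin{displaymath}
a_2(w \otimes v \otimes x) = \sum_i w \otimes v_i \otimes x_i, \qquad
a_1(v \otimes w \otimes x) = \sum_i v_i \otimes w \otimes x_i,
\end{displaymath}
that is, $a_1$ applies $a$ to the first and third tensor factors, while $a_2$ applies it to the second and third. We now have: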
\begin{proposition} \label{prop:glid}
Let $\mu$ and $a$ be corresponding linear maps as above. Then $\mu$ satisfies \eqref{eq:rep} if and only if $a$ satisfies the equation
\addtocounter{equation}{-1}
\begin{subequations}
\begin{equation} \label{eq:glid}
[a_1,a_2]=\tau(a_1-a_2).
\end{equation}
\end{subequations}
\end{proposition}
\begin{proof}
Assume $\mu$ defines a representation of $\fgl(V)$. Let $\{v_i\}_{1 \le i \le n}$ be a basis for $V$, let $\{v_i^*\}$ be the dual basis, and write $v_iv_j^*$ for the element of $\fgl(V)$ corresponding to $v_i \otimes v_j^*$. The map $a$ is given by
\begin{displaymath}
a(v_i \otimes x) = \sum_{j=1}^n v_j \otimes (v_iv_j^*) x,
\end{displaymath}
where here $(v_iv_j^*) x$ denotes $\mu(v_iv_j^*)(x)$. We have
\begin{displaymath}
a_1(a_2(v_i \otimes v_k \otimes x)) = \sum_{1 \le j,\ell \le n} v_j \otimes v_{\ell} \otimes (v_iv_j^*) (v_kv_{\ell}^*) x.
\end{displaymath}
The formula for $a_2(a_1(v_i \otimes v_k \otimes x))$ is the same, except that the order of $v_iv_j^*$ and $v_kv_{\ell}^*$ on the right is reversed. We thus find
\begin{displaymath}
[a_1, a_2](v_i \otimes v_k \otimes x) = \sum_{1 \le j,\ell \le n} v_j \otimes v_{\ell} \otimes [v_iv_j^*,v_kv_{\ell}^*] x.
\end{displaymath}
Using the formula
\begin{displaymath}
[v_iv_j^*,v_kv_{\ell}^*] = \delta_{j,k} (v_iv_{\ell}^*) - \delta_{i,\ell} (v_kv_j^*),
\end{displaymath}
we find
\begin{displaymath}
[a_1, a_2](v_i \otimes v_k \otimes x) = \left( \sum_{1 \le \ell \le n} v_k \otimes v_{\ell} \otimes (v_iv_{\ell}^*) x \right) - \left( \sum_{1 \le j \le n} v_j \otimes v_i \otimes (v_kv_j^*) x \right).
\end{displaymath}
The first term on the right is $(\tau a_1)(v_i \otimes v_k \otimes x)$, while the second is $(\tau a_2)(v_i \otimes v_k \otimes x)$. We thus see that $a$ satisfies \eqref{eq:glid}. The reasoning is reversible, and so if $a$ satisfies \eqref{eq:glid} then $\mu$ defines a representation of $\fgl(V)$.
\end{proof}
\begin{remark}
The identity \eqref{eq:glid} can be expressed equivalently in the form
\begin{displaymath}
\tau a \tau a - a \tau a \tau = a \tau - \tau a,
\end{displaymath}
where here we have written $a$ in place of $a_2=\id_V \otimes a$. Indeed, $a_1=\tau a_2 \tau$ and $\tau^2=\id$, so $[a_1,a_2]=\tau a \tau a - a \tau a \tau$ and $\tau(a_1-a_2)=a\tau-\tau a$.
\end{remark}
We now extrapolate a general definition from Proposition~\ref{prop:glid}:
\begin{definition}
Let $V$ be an object of a tensor category $\cC$. We define the \defi{curried general linear Lie algebra} on $V$, denoted $\ul{\fgl}(V)$, as follows. A representation of $\ul{\fgl}(V)$ consists of an object $M$ of $\cC$ together with a morphism $a \colon V \otimes M \to V \otimes M$ such that the equation $[a_1,a_2]=\tau (a_1-a_2)$ holds, using notation as in Proposition~\ref{prop:glid}.
\end{definition}
If $M$ and $N$ are $\ul{\fgl}(V)$-modules, with action maps $a$ and $b$, then a \defi{morphism} of $\ul{\fgl}(V)$-modules $\phi \colon M \to N$ is a morphism in $\cC$ such that the diagram
\begin{displaymath}
\xymatrix{
V \otimes M \ar[d]_{\id \otimes \phi} \ar[r]^a & V \otimes M \ar[d]^{\id \otimes \phi} \\
V \otimes N \ar[r]^b & V \otimes N }
\end{displaymath}
commutes. We write $\Rep(\ul{\fgl}(V))$ for the category of $\ul{\fgl}(V)$-modules. It is easily verified to be an abelian category.
\subsection{General observations} \label{ss:glgen}
We now discuss some basic aspects of $\ul{\fgl}(V)$-modules.
\textit{Trivial representations.} Given any object $M$ of $\cC$, we can define a $\ul{\fgl}(V)$-module structure on $M$ by taking the structure map $V \otimes M \to V \otimes M$ to be the zero map. We refer to this as the \defi{trivial representation} of $\ul{\fgl}(V)$ on $M$. By \emph{the} trivial representation, we mean the one on the unit object $\mathbf{1}$.
\textit{The standard representation.} Let $M=V$ and take $a=\tau$. We verify \eqref{eq:glid}, in the form given in the remark above: this is an identity among endomorphisms of $V^{\otimes 3}$, in which $\tau$ is $\tau_{12}$ and $a$ (that is, $a_2$) is $\tau_{23}$. Using the braid relation, we have
\begin{displaymath}
\tau_{12} \tau_{23} \tau_{12} \tau_{23} = \tau_{23} \tau_{12} \tau_{23}^2 = \tau_{23} \tau_{12}.
\end{displaymath}
Similarly, $\tau_{23} \tau_{12} \tau_{23} \tau_{12}=\tau_{12} \tau_{23}$. The identity \eqref{eq:glid} follows. We call $V$ with this action the \defi{standard representation} of $\ul{\fgl}(V)$.
\textit{Tensor products.} Suppose that $M$ and $N$ are two $\ul{\fgl}(V)$-modules, with action maps $a$ and $b$. Regard $\End(V \otimes M)$ and $\End(V \otimes N)$ as subalgebras of $\End(V \otimes M \otimes N)$ in the obvious way. We give $M \otimes N$ the structure of a $\ul{\fgl}(V)$-module by taking the action map to be $a+b$. To see that this satisfies \eqref{eq:glid}, note that $a_1$ and $b_2$ commute in $\End(V^{\otimes 2} \otimes M \otimes N)$, since $a_1$ uses the first and third factors and $b_2$ the second and fourth, and similarly for $b_1$ and $a_2$. Therefore,
\begin{displaymath}
[a_1+b_1,a_2+b_2]
= [a_1,a_2]+[b_1,b_2]+[a_1,b_2]+[b_1,a_2]
= \tau(a_1-a_2)+\tau(b_1-b_2) = \tau\bigl((a+b)_1-(a+b)_2\bigr).
\end{displaymath}
The operation $\otimes$ endows $\Rep(\ul{\fgl}(V))$ with the structure of a tensor category.
\textit{Tensor powers of the standard representation.} Let $M=V^{\otimes n}$ be the $n$th tensor power of the standard representation. The action map is the endomorphism $\sum_{i=2}^{n+1} \tau_{1,i}$ of $V^{\otimes (n+1)}$.
\textit{Twisting by trace.} Let $M$ be a $\ul{\fgl}(V)$-module with structure map $a \colon V \otimes M \to V \otimes M$, and let $\delta$ be an element of the coefficient ring $\bk$. Then the map $a+\delta \cdot \id_{V \otimes M}$ defines a new $\ul{\fgl}(V)$-representation on $M$. We denote the resulting $\ul{\fgl}(V)$-module by $M(\delta)$. We have $M(\delta)=M \otimes \mathbf{1}(\delta)$ and $\mathbf{1}(\delta_1) \otimes \mathbf{1}(\delta_2) = \mathbf{1}(\delta_1+\delta_2)$. The representation $\mathbf{1}(\delta)$ is analogous to the representation of $\fgl_n$ given by $X \mapsto \delta \tr(X)$.
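As a quick check that the twist is well-defined, write $\delta$ also for $\delta$ times the identity; then $(a+\delta)_1=a_1+\delta$ and $(a+\delta)_2=a_2+\delta$, so
\begin{displaymath}
[(a+\delta)_1,(a+\delta)_2] = [a_1,a_2] = \tau(a_1-a_2) = \tau\bigl((a+\delta)_1-(a+\delta)_2\bigr),
\end{displaymath}
and thus $a+\delta \cdot \id_{V \otimes M}$ again satisfies \eqref{eq:glid}.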
\textit{Behavior under tensor functors.} We have defined $\ul{\fgl}(V)$-modules purely in terms of the tensor structure of the ambient category. It follows that if $\Phi \colon \cA \to \cB$ is a tensor functor (with no exactness properties assumed) then $\Phi$ carries $\ul{\fgl}(V)$-modules to $\ul{\fgl}(\Phi(V))$-modules. This will remain true for the other curried Lie algebras we define, and will be a useful observation later on.
\textit{Some unexpected behavior.} There are examples where an ``actual'' $\fgl(V)$ exists in $\cC$, but where representations of $\fgl(V)$ and $\ul{\fgl}(V)$ are not the same. For example, let $\cC$ be the category of abelian groups, let $V=(\bZ/p\bZ)^n$, and let $M=\bZ^n$. Then $M$ does not admit a non-trivial representation of $\fgl(V)$: since $\End(M)$ is torsion-free under addition, there are no non-zero maps $\fgl(V) \to \End(M)$. However, $M$ does admit a non-trivial representation of $\ul{\fgl}(V)$: one can take the switching of factors map on $V \otimes M \cong V \otimes V$. The source of this discrepancy is that $V$ is not a dualizable object in $\cC$, and so $\fgl(V)$ is not isomorphic to $V \otimes V^*$; the curried algebra $\ul{\fgl}(V)$ always behaves as if it were $V \otimes V^*$.
\subsection{In species} \label{ss:gl-fb}
Let $\bV$ be the standard $\mathbf{FB}$-module and let $M$ be an arbitrary $\mathbf{FB}$-module. Suppose we have a map of $\mathbf{FB}$-modules
\begin{displaymath}
a \colon \bV \otimes M \to \bV \otimes M.
\end{displaymath}
Given a finite set $S$, an element $j \in S$, and an element $x \in M(S \setminus j)$, we can write
\begin{equation} \label{eq:gl-fb}
a(t^j \otimes x) = t^j \otimes \omega^{S \setminus j}(x) + \sum_{i \in S \setminus j} t^i \otimes \alpha^S_{i,j}(x).
\end{equation}
Thus $\omega$ is a $(0,0)$-operation on $M$, i.e., an endomorphism of $M$ as an $\mathbf{FB}$-module, and $\alpha$ is a simple $(1,1)$-operation on $M$.
\begin{proposition} \label{prop:gl-fb}
The map $a$ defines a representation of $\ul{\fgl}(\bV)$ on $M$ if and only if the following conditions hold:
\begin{enumerate}
\item The operations $\alpha$ and $\omega$ commute with themselves and each other.
\item Given a finite set $S$ and three distinct elements $i,j,k \in S$, we have $\alpha^{S \setminus i}_{j,k} \alpha^{S \setminus k}_{i,j} = \alpha^{S \setminus j}_{i,k}$.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $\phi$ be the (non-simple) $(1,1)$-operation on $M$ corresponding to $a$. Thus
\begin{displaymath}
a(t^j \otimes x) = \sum_{i \in S} t^i \otimes \phi^S_{i,j}(x).
\end{displaymath}
We have $\alpha=\phi[0]$ and $\omega=\phi[1]$ in the notation of \S \ref{ss:fbstruc}. Let $S$ be a finite set, let $j,k \in S$ be distinct, and let $x \in M(S \setminus \{j,k\})$. A simple computation gives
\begin{align*}
a_1(a_2(t^j \otimes t^k \otimes x)) &= \sum_{\ell \in S \setminus j} \sum_{i \in S \setminus \ell} t^i \otimes t^{\ell} \otimes \phi^{S \setminus \ell}_{i,j}(\phi^{S \setminus j}_{\ell,k}(x)) \\
a_2(a_1(t^j \otimes t^k \otimes x)) &= \sum_{i \in S \setminus k} \sum_{\ell \in S \setminus i} t^i \otimes t^{\ell} \otimes \phi^{S \setminus i}_{\ell,k}(\phi^{S \setminus k}_{i,j}(x)) \\
\tau a_1(t^j \otimes t^k \otimes x) &= \sum_{\ell \in S \setminus k} t^k \otimes t^{\ell} \otimes \phi^{S \setminus k}_{\ell,j}(x) \\
\tau a_2(t^j \otimes t^k \otimes x) &= \sum_{i \in S \setminus j} t^i \otimes t^j \otimes \phi^{S \setminus j}_{i,k}(x)
\end{align*}
Now, consider the equation $[a_1,a_2]=\tau(a_1-a_2)$. Letting $i,\ell \in S$ be distinct elements and examining the coefficients of $t^i \otimes t^{\ell}$, we obtain the following equations:
\begin{align*}
\phi^{S \setminus \ell}_{i,j} \circ \phi^{S \setminus j}_{\ell,k} &=
\phi^{S \setminus i}_{\ell,k} \circ \phi^{S \setminus k}_{i,j} & \text{if $i \ne k$ and $\ell \ne j$}, \\
\phi^{S \setminus i}_{j,k} \circ \phi^{S \setminus k}_{i,j} &=
\phi^{S \setminus j}_{i,k} & \text{if $i \ne k$ and $\ell=j$}, \\
\phi^{S \setminus \ell}_{k,j} \circ \phi^{S \setminus j}_{\ell,k} &=
\phi^{S \setminus k}_{\ell,j} & \text{if $i=k$ and $\ell \ne j$}, \\
\phi^{S \setminus k}_{j,j} &= \phi^{S \setminus j}_{k,k} & \text{if $i=k$ and $\ell=j$}.
\end{align*}
The first equation above is equivalent to condition (a). The second and third equations above are equivalent to each other, and to (b). The final equation above is automatic: it follows from the naturality of $\phi$. This completes the proof.
\end{proof}
For a finite set $S$ and distinct elements $i,j \in S$, let $\iota^S_{i,j} \colon S \setminus \{j\} \to S \setminus \{i\}$ be the bijection given by
\begin{displaymath}
\iota^S_{i,j}(k) = \begin{cases} j & \text{if $k=i$} \\ k & \text{if $k \ne i$} \end{cases}.
\end{displaymath}
Let $M$ be an $\mathbf{FB}$-module and let $\delta \in \bk$. We define the \defi{$\delta$-standard $\ul{\fgl}(\bV)$-structure} on $M$ to be the representation of $\ul{\fgl}(\bV)$ on $M$ given by Proposition~\ref{prop:gl-fb} with $\omega=\delta\cdot \id$ and $\alpha^S_{i,j}=(\iota^S_{i,j})_*$. (One easily verifies the conditions of Proposition~\ref{prop:gl-fb}.) Explicitly, for $j \in S$ and $x \in M(S \setminus j)$, we have
\begin{displaymath}
a(t^j \otimes x) = \delta \, t^j \otimes x + \sum_{i \in S \setminus j} t^i \otimes (\iota_{i,j}^S)_*(x).
\end{displaymath}
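For instance, condition (b) of Proposition~\ref{prop:gl-fb} amounts to the identity $\iota^{S \setminus i}_{j,k} \circ \iota^{S \setminus k}_{i,j} = \iota^{S \setminus j}_{i,k}$ of bijections $S \setminus \{j,k\} \to S \setminus \{i,j\}$: both sides send $i$ to $k$ and fix every other element.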
One easily verifies that this construction is functorial: any map of $\mathbf{FB}$-modules induces a map between the corresponding $\delta$-standard $\ul{\fgl}(\bV)$-modules.
\begin{remark} \label{rmk:gl-seq}
Let $M$ be a $\ul{\fgl}(\bV)$-module. Given a finite set $S$ and an element $i \in S$, define $\rho_i$ to be the composition
\begin{displaymath}
\xymatrix@C=4em{
M(S) \ar[r]^-{\alpha^{S \amalg \{\ast\}}_{i,\ast}} &
M(S \cup \{\ast\} \setminus i) \ar[r]^-{(\iota^{S \amalg \{\ast\}}_{\ast,i})_*} &
M(S), }
\end{displaymath}
where $\{\ast\}$ is a one-point set. One easily verifies that $\rho_i^2=\rho_i$, and that for $i \ne j$, the operators $\rho_i$ and $\rho_j$ commute. Furthermore, for $\pi \in \Aut(S)$ we have $\pi \rho_i \pi^{-1}=\rho_{\pi(i)}$. Let $\fA_n$ be the monoid freely generated by $n$ commuting idempotents. We thus see that $M([n])$ carries a representation of the monoid $\bN \times (\fS_n \ltimes \fA_n)$, where the generator of $\bN$ acts by the $\omega$ operation. In fact, a $\ul{\fgl}(\bV)$-module $M$ exactly corresponds to a sequence $(M_n)_{n \ge 0}$ where $M_n$ is a representation of $\bN \times (\fS_n \ltimes \fA_n)$. From this point of view, a $\delta$-standard $\ul{\fgl}(\bV)$-module is one where $\fA_n$ acts trivially and the generator of $\bN$ acts by $\delta$.
\end{remark}
\begin{remark} \label{rmk:pieri}
Assume that $\bk$ is a field of characteristic $0$. Consider the standard $\ul{\fgl}(\bV)$ action $a$ on the irreducible Specht module $M=\bM_{\lambda}$. Since $\bV \otimes M$ (recall this is the induction product) is multiplicity-free by the Pieri rule, $a$ is simply multiplication by a scalar on each piece $\bM_\mu$. We claim that this scalar is the content of the box in the Young diagram $\mu \setminus \lambda$, where the content of a box is its column index minus its row index, i.e., if $i$ is the unique index such that $\mu_i > \lambda_i$, the content is $\mu_i - i$.
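For example, take $\lambda=(1)$, so that $\bV \otimes \bM_{(1)} = \bM_{(2)} \oplus \bM_{(1,1)}$. In the language of Schur functors (see below), this corresponds to $a=\tau$ acting on $V \otimes V$, which has eigenvalue $+1$ on $\Sym^2(V)=\bS_{(2)}(V)$ and $-1$ on $\bigwedge^2(V)=\bS_{(1,1)}(V)$; these are the contents $2-1=1$ and $1-2=-1$ of the added boxes.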
To prove this, we can first use Schur--Weyl duality to translate this into a statement about Schur functors $\bS_\lambda$. The advantage is that we can evaluate on vector spaces of different dimensions to deduce the following:
\begin{enumerate}[\indent (1)]
\item The value of $a$ on $\bS_\mu(\bk^n)$ can be computed on a highest weight vector, so it is independent of $n$ as long as $n \ge \ell(\mu)$. So we may as well assume $n = \ell(\mu)$.
\item To compute $a$ on the tensor power $(\bk^n)^{\otimes d}$, we have
\[
a(e_i \otimes (e_{j_1} \otimes \cdots \otimes e_{j_d})) = \sum_{k=1}^d e_{j_k} \otimes (e_{j_1} \otimes \cdots \otimes e_i \otimes \cdots \otimes e_{j_d})
\]
where the sum is over all ways of swapping $e_i$ with some $e_{j_k}$. In particular, tensoring with the determinant character increases the eigenvalues of $a$ by $1$, so using this and (1), we may as well assume that we are adding a box to the first column of $\lambda$.
\item As can be seen with the tensor power $(\bk^n)^{\otimes d}$ in (2), applying the transpose duality to $a$ multiplies its eigenvalues by $-1$ since this affects Schur--Weyl duality by tensoring the usual $\fS_d$-action on tensor powers with the sign character. So to add a box to the first column, we just need to understand adding a box to the first row.
\item Iterating, we reduce to the case that $\lambda=\emptyset$ and $\mu=(1)$. In that case, it follows immediately that $a$ is the $0$ map, which is the content of the box that we added. \qedhere
\end{enumerate}
\end{remark}
\subsection{In $\mathbf{OB}$-modules}
Let $\mathbf{OB}$ be the category of finite totally ordered sets and order-preserving bijections, and let $\Mod_{\mathbf{OB}}$ denote the category of $\mathbf{OB}$-modules. Given $\mathbf{OB}$-modules $M$ and $N$, their \defi{shuffle tensor product} is
\begin{displaymath}
(M \otimes_{\rm shuff} N)(S) = \bigoplus_{S=A \amalg B} M(A) \otimes N(B),
\end{displaymath}
where the sum is over all partitions of $S$ into two disjoint sets $A$ and $B$, and $A$ and $B$ are given the induced order. The shuffle tensor product gives the category $\Mod_{\mathbf{OB}}$ of $\mathbf{OB}$-modules the structure of a symmetric monoidal category. (Note: $\mathbf{OB}$-modules are equivalent to graded vector spaces, but the shuffle tensor product does not correspond to the usual tensor product of graded vector spaces.)
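For instance, if $M$ and $N$ are each $\bk$ placed in degree~1, then for $\#S=2$ the space $(M \otimes_{\rm shuff} N)(S)$ has rank~2, one summand for each of the two decompositions $S=A \amalg B$ with $\#A=\#B=1$, whereas the usual tensor product of the corresponding graded vector spaces has rank~1 in degree~2.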
Let $V$ be the $\mathbf{OB}$-module that is $\bk$ in degree~1 and~0 in other degrees. We make one comment on $\ul{\fgl}(V)$-modules. Recall that $\fA_n$ is the monoid generated by $n$ commuting idempotents $e_1, \ldots, e_n$. The symmetric group $\fS_n$ acts on $\fA_n$, and so we can form the semi-direct product $\fS_n \ltimes \fA_n$. Define $\fM_n$ to be the submonoid of $\fS_n \ltimes \fA_n$ generated by the elements $s_i e_i$ and $e_i s_i=s_i e_{i+1}$ for $1 \le i \le n-1$, where $s_i$ is the transposition of $\fS_n$ that swaps $i$ and $i+1$. Then one can show that giving a $\ul{\fgl}(V)$-module is equivalent to giving a sequence of representations of the monoids $\bN \times \fM_n$ (compare with Remark~\ref{rmk:gl-seq}). Details and other examples in $\Mod_{\mathbf{OB}}$ can be found in \cite{jun}.
\subsection{Braidings} \label{ss:braidedgl}
Suppose that $\cC$ is a (not necessarily braided) tensor category and $V$ is a braided object of $\cC$, that is, we are given an isomorphism $\beta \colon V \otimes V \to V \otimes V$ such that the endomorphisms $\id \otimes \beta$ and $\beta \otimes \id$ of $V^{\otimes 3}$ satisfy the braid relation. We can define $\ul{\fgl}(V)$ in this setting, as follows: a $\ul{\fgl}(V)$-module is an object $M$ equipped with a map $a \colon V \otimes M \to V \otimes M$ satisfying
\begin{displaymath}
\beta^{-1} a \beta a - a \beta a \beta^{-1} = a \beta - \beta a.
\end{displaymath}
Here we have written $\beta$ for $\beta \otimes \id$ and $a$ for $\id \otimes a$. The inverses are included on some factors so that $M=V$ with $a=\beta$ defines a $\ul{\fgl}(V)$-module (the standard representation).
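Indeed, take $M=V$ and $a=\beta$, so that the identity becomes an identity among endomorphisms of $V^{\otimes 3}$ with $\beta=\beta_{12}$ and $a=\beta_{23}$. The braid relation $\beta_{12}\beta_{23}\beta_{12}=\beta_{23}\beta_{12}\beta_{23}$ gives
\begin{displaymath}
\beta_{12}^{-1}\beta_{23}\beta_{12}\beta_{23}=\beta_{23}\beta_{12}, \qquad
\beta_{23}\beta_{12}\beta_{23}\beta_{12}^{-1}=\beta_{12}\beta_{23},
\end{displaymath}
and the difference of these two terms is $\beta_{23}\beta_{12}-\beta_{12}\beta_{23}=a\beta-\beta a$, as required.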
\begin{remark} \label{rmk:VA}
Let $\mathbf{VA}$ be the category of finite dimensional vector spaces over the finite field $\bF$, and let $\mathbf{VB}$ be the subcategory where the morphisms are isomorphisms. We assume $\operatorname{char}(\bF)$ is invertible in $\bk$. Given two $\mathbf{VB}$-modules $M$ and $N$, we define their \defi{parabolic tensor product} by
\begin{displaymath}
(M \otimes_{\rm par} N)(X) = \bigoplus_{Y \subseteq X} M(Y) \otimes N(X/Y)
\end{displaymath}
where the sum is over all subspaces $Y \subseteq X$. This tensor product has a natural braiding, first considered by Joyal--Street \cite{joyal-street}. We expect that $\mathbf{VA}$-modules can be expressed as a curried structure in the braided category of $\mathbf{VB}$-modules. It would be interesting to understand $\ul{\fgl}(V)$-modules where $V$ is the standard $\mathbf{VB}$-module (i.e., $V(X)=\bk$ if $X$ is one-dimensional and $V(X)=0$ otherwise).
\end{remark}
\section{The symplectic Lie algebra} \label{s:symp}
\subsection{Currying}
Let $V$ be a finite-dimensional vector space. The space $V \oplus V^*$ carries a natural symplectic form, and so we can consider the corresponding symplectic Lie algebra $\fsp(V \oplus V^*)$. This algebra admits a decomposition
\begin{displaymath}
\fsp(V \oplus V^*) = \Div^2(V^*) \oplus \fgl(V) \oplus \Div^2(V).
\end{displaymath}
We thus see that giving a linear map
\begin{displaymath}
\mu \colon \fsp(V \oplus V^*) \otimes M \to M
\end{displaymath}
is equivalent to giving linear maps
\begin{displaymath}
a \colon V \otimes M \to V \otimes M, \qquad b \colon \Div^2(V) \otimes M \to M, \qquad b' \colon M \to \Sym^2(V) \otimes M.
\end{displaymath}
Here $a$ encodes the action of the $\fgl(V)$ component (via the currying of \S \ref{s:gl}), $b$ encodes the action of the $\Div^2(V)$ component, and $b'$ encodes the action of the $\Div^2(V^*)$ component, using the canonical isomorphism $\Div^2(V^*)^* \cong \Sym^2(V)$.
\begin{proposition} \label{prop:sp-curry}
Let $\mu$ and $(a,b,b')$ correspond as above. Then $\mu$ defines a representation of $\fsp(V \oplus V^*)$ if and only if $(a,b,b')$ satisfy the following conditions:
\begin{enumerate}
\item $a$ satisfies \eqref{eq:glid}, that is, it defines a $\ul{\fgl}(V)$ structure on $M$.
\item $b b_2=b b_1$ holds as maps $\Div^2 V \otimes \Div^2 V \otimes M \to M$ and $b'_1 b'=b'_2 b'$ holds as maps $M \to \Sym^2 V \otimes \Sym^2 V \otimes M$; that is, the multiplication defined by $b$ is commutative and the comultiplication defined by $b'$ is cocommutative.
\item $b$ and $b'$ are maps of $\ul{\fgl}(V)$-modules, where $\Div^2(V)$ and $\Sym^2(V)$ are equipped with their natural $\ul{\fgl}(V)$ actions.
\item $b'b-b_1 b'_2 = (m \otimes 1)(1 \otimes a)(\Delta \otimes 1)$ holds as maps $\Div^2 V \otimes M \to \Sym^2 V \otimes M$. Here $\Delta$ is comultiplication and $m$ is multiplication.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $M$ be a $\fsp(V\oplus V^*)$-module. Conditions (a) and (b) are translations of the relations holding within the subalgebras $\fgl(V)$, $\Div^2(V^*)$, and $\Div^2(V)$, while (c) is a translation of the relations between $\fgl(V)$ and the two components $\Div^2(V)$ and $\Div^2(V^*)$.
For (d), let $v_1,\dots,v_n$ be a basis for $V$. Pick $x \in M$ and $v_iv_j \in \Div^2 V$. Then
\begin{align*}
b'(b(v_iv_j \otimes x)) = \sum_k v_k^2 \otimes (v_k^*)^{[2]}(v_iv_j) x + \sum_{k < \ell} v_kv_\ell \otimes (v^*_kv^*_\ell) (v_iv_j) x
\end{align*}
and
\begin{align*}
b_1(b'_2(v_iv_j \otimes x)) = \sum_k v_k^2 \otimes (v_iv_j)(v_k^*)^{[2]} x + \sum_{k<\ell} v_kv_\ell \otimes (v_iv_j)(v^*_kv^*_\ell) x.
\end{align*}
Next
\begin{displaymath}
[v_k^*v_\ell^*, v_iv_j] = \delta_{i,k} v^*_\ell v_j + \delta_{j,k} v^*_\ell v_i + \delta_{i,\ell} v_k^* v_j + \delta_{j,\ell} v_k^* v_i,
\end{displaymath}
and so
\begin{displaymath}
(b'b-b_1b'_2)(v_iv_j \otimes x) = \sum_k v_i v_k \otimes (v_k^* v_j) x + \sum_k v_j v_k \otimes (v_k^* v_i) x.
\end{displaymath}
This is clearly the same as $(m \otimes 1)(1 \otimes a)(\Delta \otimes 1)(v_iv_j \otimes x)$. If $2$ is invertible in $\bk$, then we are done. To finish the remaining case, we also need to show that these maps agree on the elements $v_i^{[2]} \otimes x$. The calculation is similar to what we have explained above, with the final result being
\[
(b'b-b_1b'_2)(v_i^{[2]} \otimes x) = \sum_k v_i v_k \otimes (v_k^* v_i) x = (m \otimes 1)(1 \otimes a)(\Delta \otimes 1)(v_i^{[2]} \otimes x).
\]
Conversely, if $M$ is equipped with three maps $a,b,b'$ satisfying (a)--(d), then reversing the above computations shows that the corresponding action of $\fsp(V\oplus V^*)$ on $M$ respects the Lie bracket.
\end{proof}
\begin{definition} \label{def:curried-sp}
Let $V$ be an object of a tensor category $\cC$. We define a module over the \defi{curried symplectic algebra} $\ul{\fsp}(V \oplus V^*)$ to be an object $M$ of $\cC$ equipped with maps
\begin{displaymath}
a \colon V \otimes M \to V \otimes M, \qquad b \colon \Div^2V \otimes M \to M, \qquad b' \colon M \to \Sym^2V \otimes M
\end{displaymath}
satisfying \cref{prop:sp-curry}{a}--\cref{prop:sp-curry}{d}.
\end{definition}
\subsection{In species} \label{ss:sp-sp}
Let $M$ be an $\mathbf{FB}$-module equipped with maps
\begin{displaymath}
a \colon \bV \otimes M \to \bV \otimes M, \qquad b \colon \Div^2(\bV) \otimes M \to M, \qquad b' \colon M \to \Sym^2(\bV) \otimes M.
\end{displaymath}
Let $\alpha$ and $\omega$ be the simple $(1,1)$- and $(0,0)$-operations corresponding to $a$ as in \eqref{eq:gl-fb}. Let $\beta$ and $\beta'$ be the symmetric $(0,2)$- and $(2,0)$-operations corresponding to $b$ and $b'$. Thus we have
\begin{displaymath}
b(t^{\{i,j\}} \otimes x) = \beta^S_{i,j}(x), \qquad
b'(y) = \sum_{\{i,j\} \subset S} t^{\{i,j\}} \otimes (\beta')^S_{i,j}(y)
\end{displaymath}
for $x \in M(S \setminus \{i,j\})$ and $y \in M(S)$.
\begin{proposition} \label{prop:sp-gl}
The triple $(a,b,b')$ defines a representation of $\ul{\fsp}(\bV \oplus \bV^*)$ on $M$ if and only if the following conditions hold (for all finite sets $S$):
\begin{enumerate}
\item $\alpha$, $\omega$, $\beta$, and $\beta'$ pairwise commute (and each commutes with itself).
\item Given $i,j,k \in S$ distinct, we have $\alpha^{S \setminus i}_{j,k} \circ \alpha^{S \setminus k}_{i,j} = \alpha^{S \setminus j}_{i,k}$.
\item Given $i,j,k \in S$ distinct, we have $\alpha^S_{i,j} \circ \beta^{S \setminus j}_{i,k}=\beta^{S \setminus i}_{j,k}$, and similarly $(\beta')^{S \setminus i}_{j,k} \circ \alpha_{i,j}^S = (\beta')^{S \setminus j}_{i,k}$.
\item Given $i,j,k \in S$ distinct, we have $(\beta')^S_{i,j} \circ \beta^S_{j,k}=\alpha_{i,k}^{S \setminus j}$.
\item Given $i,j \in S$ distinct, we have $(\beta')^S_{i,j} \circ \beta^S_{i,j}=2\omega^{S \setminus \{i,j\}}$.
\end{enumerate}
\end{proposition}
\begin{proof}
Suppose that conditions \cref{prop:sp-gl}{a}--\cref{prop:sp-gl}{e} above hold. We verify that $(a,b,b')$ satisfy conditions \cref{prop:sp-curry}{a}--\cref{prop:sp-curry}{d}. Condition \cref{prop:sp-curry}{a} follows from \cref{prop:sp-gl}{a}, \cref{prop:sp-gl}{b}, and Proposition~\ref{prop:gl-fb}. Condition \cref{prop:sp-curry}{b} follows easily from \cref{prop:sp-gl}{a}.
We now verify \cref{prop:sp-curry}{c}. Let
\begin{displaymath}
a' \colon \bV \otimes \Div^2(\bV) \to \bV \otimes \Div^2(\bV)
\end{displaymath}
be the $\ul{\fgl}(\bV)$-action on $\Div^2(\bV)$. To show that $b$ is $\ul{\fgl}(\bV)$-equivariant, we must show that the diagram
\begin{displaymath}
\xymatrix@C=4em{
\bV \otimes \Div^2(\bV) \otimes M \ar[r]^-{1 \otimes b} \ar[d]_{a+a'} &
\bV \otimes M \ar[d]^a \\
\bV \otimes \Div^2(\bV) \otimes M \ar[r]^-{1 \otimes b} &
\bV \otimes M }
\end{displaymath}
commutes; recall from \S \ref{ss:glgen} that $a+a'$ defines the tensor product representation. Let $i,j,k \in S$ be distinct and let $x \in M(S \setminus \{i,j,k\})$. We have
\begin{displaymath}
a'(t^i \otimes t^{\{j,k\}})=t^j \otimes t^{\{i,k\}} + t^k \otimes t^{\{i,j\}},
\end{displaymath}
and so
\begin{align*}
(a+a')(t^i \otimes t^{\{j,k\}} \otimes x)
=& t^j \otimes t^{\{i,k\}} \otimes x + t^k \otimes t^{\{i,j\}} \otimes x + \\
&t^i \otimes t^{\{j,k\}} \otimes \omega^{S \setminus \{i,j,k\}}(x) + \sum_{\ell \in S \setminus \{i,j,k\}} t^{\ell} \otimes t^{\{j,k\}} \otimes \alpha_{\ell,i}^{S \setminus \{j,k\}}(x),
\end{align*}
and so
\begin{align*}
(1 \otimes b)(a+a')(t^i \otimes t^{\{j,k\}} \otimes x)
=& t^j \otimes \beta^{S \setminus j}_{i,k}(x) + t^k \otimes \beta^{S \setminus k}_{i,j}(x) + \\
& t^i \otimes \beta^{S \setminus i}_{j,k}(\omega^{S \setminus \{i,j,k\}}(x)) +
\sum_{\ell \in S \setminus \{i,j,k\}} t^{\ell} \otimes \beta^{S \setminus \ell}_{j,k}(\alpha^{S \setminus \{j,k\}}_{\ell,i}(x)).
\end{align*}
On the other hand, we have
\begin{align*}
a(1 \otimes b)(t^i \otimes t^{\{j,k\}} \otimes x)
&= a(t^i \otimes \beta^{S \setminus i}_{j,k}(x)) \\
&= t^i \otimes \omega^{S \setminus i}(\beta^{S \setminus i}_{j,k}(x)) + \sum_{\ell \in S \setminus i} t^{\ell} \otimes \alpha^S_{\ell,i}(\beta^{S \setminus i}_{j,k}(x)).
\end{align*}
The above two expressions coincide if and only if the following equations hold (for $\ell \in S \setminus \{i,j,k\}$):
\begin{align*}
\beta^{S \setminus i}_{j,k}(\omega^{S \setminus \{i,j,k\}}(x)) &= \omega^{S \setminus i}(\beta^{S \setminus i}_{j,k}(x)) &
\beta^{S \setminus \ell}_{j,k}(\alpha^{S \setminus \{j,k\}}_{\ell,i}(x)) &= \alpha^S_{\ell,i}(\beta^{S \setminus i}_{j,k}(x)) \\
\beta^{S \setminus j}_{i,k}(x) &= \alpha^S_{j,i}(\beta^{S \setminus i}_{j,k}(x)) &
\beta^{S \setminus k}_{i,j}(x) &= \alpha^S_{k,i}(\beta^{S \setminus i}_{j,k}(x))
\end{align*}
The two equalities on the first line follow since $\beta$ commutes with $\alpha$ and $\omega$ by \cref{prop:sp-gl}{a}. The equalities on the second line are \cref{prop:sp-gl}{c}. This shows that $b$ is $\ul{\fgl}(\bV)$-equivariant. The proof for $b'$ is similar.
We now verify \cref{prop:sp-curry}{d}. We have
\begin{align*}
b'(b(t^{\{i_1,i_2\}} \otimes x)) &= \sum_{\{i_3, i_4\} \subset S} t^{\{i_3,i_4\}} \otimes \beta'_{i_3,i_4}(\beta_{i_1,i_2}(x)),\\
b_1b'_2(t^{\{i_1, i_2\}} \otimes x) &= \sum_{\{i_3, i_4\} \subset S \setminus \{i_1,i_2\}} t^{\{i_3, i_4\}} \otimes \beta_{i_1,i_2}(\beta'_{i_3,i_4}(x)).
\end{align*}
Now, for $\{i_3, i_4\} \subseteq S \setminus \{i_1,i_2\}$, we have $\beta_{i_1,i_2} \beta'_{i_3,i_4}=\beta'_{i_3,i_4} \beta_{i_1,i_2}$ since $\beta$ and $\beta'$ commute. We thus see that
\begin{displaymath}
(b'b-b_1b'_2)(t^{\{i_1, i_2\}} \otimes x) = \sum_{\substack{\{i_3,i_4\} \subseteq S\\ \{i_3,i_4\} \cap \{i_1,i_2\} \ne \emptyset}} t^{\{i_3, i_4\}} \otimes \beta'_{i_3,i_4}(\beta_{i_1,i_2}(x)).
\end{displaymath}
On the other hand,
\begin{align*}
& (m \otimes 1)(1 \otimes a)(\Delta \otimes 1)(t^{\{i_1,i_2\}} \otimes x) \\
=& 2 t^{\{i_1,i_2\}} \otimes \omega(x) + \sum_{j \in S \setminus \{i_1,i_2\}} (t^{\{j,i_2\}} \otimes \alpha_{j,i_1}(x) + t^{\{j,i_1\}} \otimes \alpha_{j,i_2}(x)).
\end{align*}
We claim these last two expressions coincide. The coefficient of $t^{\{i_1, i_2\}}$ in the first expression is $\beta'_{i_1,i_2}(\beta_{i_1,i_2}(x))$ and in the second expression is $2 \omega(x)$. These are equal by \cref{prop:sp-gl}{e}. Suppose now that $j \not\in \{i_1,i_2\}$. The $t^{\{i_1,j\}}$ component in the first expression is $\beta'_{i_1,j}(\beta_{i_1,i_2}(x))$ and in the second expression is $\alpha_{j,i_2}(x)$. These are equal by \cref{prop:sp-gl}{d}. The other components are similar.
This verifies the conditions \cref{prop:sp-curry}{a}--\cref{prop:sp-curry}{d}. This reasoning is completely reversible, and so the result follows.
\end{proof}
\subsection{The Brauer category} \label{ss:brauer}
Let $\fG=\fG(\delta)$ be the Brauer category with parameter $\delta \in \bk$. The objects of this category are finite sets. The space $\Hom_{\fG}(S,T)$ of morphisms is the vector space spanned by Brauer diagrams from $S$ to $T$; such a diagram is simply a perfect matching on the set $S \amalg T$. For the definition of composition (and additional details), see \cite[\S 5]{brauercat1}. We note that the composition law depends on the parameter $\delta$.
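For example, if $i,j \in S$ are distinct, then composing the morphism $S \setminus \{i,j\} \to S$ whose diagram has an edge between $i$ and $j$ in the target (and is the identity elsewhere) with the morphism $S \to S \setminus \{i,j\}$ given by the opposite diagram produces a closed loop; deleting the loop costs a factor of $\delta$, so the composite is $\delta$ times the identity of $S \setminus \{i,j\}$.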
A Brauer diagram $S \to T$ is called upwards if it has no edges with both endpoints in $S$. The upwards Brauer category $\fU$ is the subcategory of $\fG$ containing all objects and where $\Hom_{\fU}(S,T)$ is spanned by upwards diagrams. There is a similarly defined downwards Brauer category $\fD$. The intersection $\fM$ of $\fU$ and $\fD$ is the linearization of $\mathbf{FB}$: that is, $\Hom_{\fM}(S,T)$ is the vector space spanned by bijections $S \to T$. The pair $(\fU, \fD)$ is a triangular structure on $\fG$; see \cite[Proposition~5.5]{brauercat1}. (Note that \cite{brauercat1} works in characteristic~0, but this statement and its proof hold in general.)
Suppose that $M$ is a $\fG$-module. Restricting to $\mathbf{FB} \subset \fG$, we can regard $M$ as an $\mathbf{FB}$-module. Let $S$ be a finite set and let $i,j \in S$ be distinct elements. We have a morphism $\eta^S_{i,j} \colon S \setminus \{i,j\} \to S$ in $\fG$ corresponding to the diagram with an edge between $i$ and $j$ in the target, and that is the identity elsewhere. This induces a linear map
\begin{displaymath}
\beta^S_{i,j} \colon M(S \setminus \{i,j\}) \to M(S).
\end{displaymath}
One easily sees that $\beta$ is a symmetric $(0,2)$-operation on $M$. Similarly, we have a morphism $(\eta')^S_{i,j} \colon S \to S \setminus \{i,j\}$ in $\fG$ using the opposite diagram, and this induces a linear map
\begin{displaymath}
(\beta')^S_{i,j} \colon M(S) \to M(S \setminus \{i,j\}).
\end{displaymath}
As above, $\beta'$ is a symmetric $(2,0)$-operation on $M$. Using the rule for composition in $\fG$, one easily sees that $\beta$ and $\beta'$ satisfy the following conditions (in what follows, $S$ is a finite set):
\begin{enumerate}
\item $\beta$ and $\beta'$ commute with themselves and with each other.
\item Let $i,j,k \in S$ be distinct. Then $(\beta')^S_{i,j} \beta^S_{j,k}=(\iota^{S \setminus j}_{i,k})_*$.
\item Let $i,j \in S$ be distinct. Then $(\beta')^S_{i,j} \beta^S_{i,j}=\delta \cdot {\rm id}$.
\end{enumerate}
Since the $\eta$ and $\eta'$ morphisms, together with the morphisms in $\mathbf{FB}$, generate $\fG$, we see that the operations $\beta$ and $\beta'$ completely determine the $\fG$-structure on $M$. The following proposition shows that the above conditions exactly characterize the operations arising in this manner:
\begin{proposition} \label{prop:brauer-op}
Let $M$ be an $\mathbf{FB}$-module equipped with a symmetric $(0,2)$-operation $\beta$ and a symmetric $(2,0)$-operation $\beta'$ satisfying (a), (b), and (c) above. Then $M$ carries a unique $\fG$-structure inducing $\beta$ and $\beta'$.
\end{proposition}
\begin{proof}
We claim that giving a $\fU$-structure on $M$ is equivalent to giving a self-commuting symmetric $(0,2)$-operation. First, suppose that $M$ has a $\fU$-structure. For any set $S$ and distinct elements $i,j \in S$, we define $\beta^S_{i,j}$ to be the action of $\eta^S_{i,j}$ on $M$. Then $\beta$ is a symmetric operation by construction and commutes with itself (if we compose such morphisms, the result does not depend on the order in which we pair off the elements).
Conversely, suppose that $M$ has a self-commuting symmetric $(0,2)$-operation $\beta$. We use $\beta$ to construct a $\fU$-structure on $M$. A $\fU$-morphism $\phi \colon S \to T$ can be factored as a bijection $\sigma \colon S \to \phi(S)$ followed by morphisms of the form $\eta^U_{i,j}$ where $i,j$ are distinct. We define $M_{\phi} \colon M(S) \to M(T)$ to be the composition of $M_\sigma \colon M(S) \to M(\phi(S))$ with the corresponding composition of maps given by $\beta$ coming from the factorization; since $\beta$ is self-commuting, the order of the factorization does not affect the result, and since it is symmetric, the order of the elements $i,j$ at each stage also does not affect the result. Given another $\fU$-morphism $\psi \colon T \to U$, the functoriality $M_{\psi \phi} = M_{\psi} M_{\phi}$ follows from the naturality condition on operations (we omit the details).
Similarly, giving a $\fD$-structure on $M$ is equivalent to giving a self-commuting symmetric $(2,0)$-operation. Thus $\beta$ and $\beta'$ define $\fU$- and $\fD$-structures on $M$.
Let $\cU$ be the class of morphisms in $\fU$ isomorphic to $\eta^S_{i,j}$ for some $S$, $i$, and $j$, and define $\cD$ similarly using $\eta'$. One easily sees that $\cU$ generates $\fU$ and $\cD$ generates $\fD$. Thus, by Proposition~\ref{prop:tri-comp}, it suffices to show that $(\phi, \psi)$ is compatible for $\phi \in \cU$ and $\psi \in \cD$ with $\psi \circ \phi$ defined. Let $\phi = \eta^S_{i,j}$ and $\psi = (\eta')^S_{k,\ell}$. There are three cases to consider depending on the cardinality $n$ of $\{i,j\} \cap \{k,\ell\}$.
First suppose $n=0$. Then
\begin{displaymath}
\psi \circ \phi = (\eta')^S_{k,\ell} \circ \eta^S_{i,j} = \eta^{S \setminus \{k,\ell\}}_{i,j} \circ (\eta')^{S \setminus \{i,j\}}_{k,\ell},
\end{displaymath}
where the second equality comes from the following composition of Brauer diagrams:
\[
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}]
\node at (2,2) (a3) {\tiny $i$};
\node at (3,2) (a4) {\tiny $j$};
\node [node] at (0,1) (b1) {};
\node [node] at (1,1) (b2) {};
\node [node] at (2,1) (b3) {};
\node [node] at (3,1) (b4) {};
\node at (0,0) (c1) {\tiny $k$};
\node at (1,0) (c2) {\tiny $\ell$};
\draw[thick, orange] (c1) to (b1);
\draw[thick, orange] (c2) to (b2);
\draw[thick, orange] (b3) to (a3);
\draw[thick, orange] (b4) to (a4);
\draw[thick, orange] (b1) to[out=20, in=160] (b2);
\draw[thick, orange] (b3) to[out=-20, in=-160] (b4);
\end{tikzpicture}
\quad = \quad
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}]
\node at (0,2) (a3) {\tiny $i$};
\node at (1,2) (a4) {\tiny $j$};
\node at (0,0) (c1) {\tiny $k$};
\node at (1,0) (c2) {\tiny $\ell$};
\draw[thick, orange] (c1) to[out=20, in=160] (c2);
\draw[thick, orange] (a3) to[out=-20, in=-160] (a4);
\end{tikzpicture}.
\]
We thus have
\begin{displaymath}
(\psi \circ \phi)_*=\beta^{S \setminus \{k,\ell\}}_{i,j} \circ (\beta')^{S \setminus \{i,j\}}_{k,\ell}=(\beta')^S_{k,\ell} \circ \beta^S_{i,j} = \psi_* \circ \phi_*,
\end{displaymath}
where the first equality uses the above computation and the second uses condition~(a). Thus $(\phi,\psi)$ is compatible.
Now suppose $n=1$, and, without loss of generality, $j=k$. Then
\begin{displaymath}
\psi \circ \phi = (\eta')^S_{j,\ell} \circ \eta^S_{i,j} = \iota_{i,\ell}^{S \setminus j}
\end{displaymath}
where the second equality comes from the following composition of Brauer diagrams:
\[
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}]
\node at (2,2) (a3) {\tiny $i$};
\node [node] at (0,1) (b1) {};
\node [node] at (1,1) (b2) {};
\node [node] at (2,1) (b3) {};
\node at (0,0) (c1) {\tiny $\ell$};
\draw[thick, orange] (c1) to (b1);
\draw[thick, orange] (b3) to (a3);
\draw[thick, orange] (b1) to[out=20, in=160] (b2);
\draw[thick, orange] (b2) to[out=-20, in=-160] (b3);
\end{tikzpicture}
\quad = \quad
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}]
\node at (0,2) (a3) {\tiny $i$};
\node at (0,0) (c1) {\tiny $\ell$};
\draw[thick, orange] (c1) to (a3);
\end{tikzpicture}.
\]
We thus have
\begin{displaymath}
(\psi \circ \phi)_* = \iota_{i,\ell}^{S \setminus j} = (\beta')^S_{j,\ell} \circ \beta^S_{i,j} = \psi_* \circ \phi_*
\end{displaymath}
by (b), which establishes the compatibility.
Finally, suppose $n=2$. Then
\begin{displaymath}
\psi \circ \phi = (\eta')^S_{i,j} \circ \eta^S_{j,i} = \delta \cdot {\rm id}
\end{displaymath}
using the composition
\[
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}]
\node [node] at (0,0) (a1) {};
\node [node] at (1,0) (a2) {};
\draw[thick, orange] (a1) to[out=20, in=160] (a2);
\draw[thick, orange] (a1) to[out=-20, in=-160] (a2);
\end{tikzpicture}
\quad = \quad \delta,
\]
and so
\begin{displaymath}
(\psi \circ \phi)_* = \delta = (\beta')^S_{i,j} \circ \beta^S_{j,i} = \psi_* \circ \phi_*
\end{displaymath}
by (c), which establishes the compatibility.
\end{proof}
\subsection{The comparison theorem}
Fix $\delta \in \bk$. If $M$ is a representation of $\ul{\fsp}(\bV \oplus \bV^*)$ given by data $(a,b,b')$ then $a$ defines a representation of $\ul{\fgl}(\bV)$ on $M$. We say that $M$ is \defi{$\delta$-standard} if the representation of $\ul{\fgl}(\bV)$ is $\delta$-standard (see \S \ref{ss:gl-fb}). We let $\Rep_{\delta}(\ul{\fsp}(\bV \oplus \bV^*))$ be the category of $\delta$-standard representations.
We define a functor
\[
\Phi \colon \Mod_{\fG(2\delta)} \to \Rep_\delta(\ul{\fsp}(\bV \oplus \bV^*))
\]
as follows. Let $M$ be a representation of $\fG(2\delta)$. To define $\Phi(M)$, we only need to define the operations $\alpha$, $\omega$, $\beta$ and $\beta'$. First, $M$ is an $\mathbf{FB}$-module by restriction, and we choose $\alpha$ and $\omega$ as in \S\ref{ss:gl-fb} so that the result is $\delta$-standard. The operations $\beta$ and $\beta'$ are defined using the morphisms $\eta$ and $\eta'$ as in \S\ref{ss:brauer}. Then $\Phi(M)$ is indeed an object of $\Rep_\delta(\ul{\fsp}(\bV \oplus \bV^*))$ by Proposition~\ref{prop:sp-gl}. For a morphism $f \colon M \to N$, we let $\Phi(f)$ be the same morphism of the underlying $\mathbf{FB}$-modules.
The following is the main result of this section:
\begin{theorem} \label{thm:fB=B}
The functor $\Phi$ defines a natural isomorphism of categories
\begin{displaymath}
\Mod_{\fG(2\delta)} \cong \Rep_{\delta}(\ul{\fsp}(\bV \oplus \bV^*)).
\end{displaymath}
\end{theorem}
\begin{proof}
The inverse of $\Phi$ is defined by reversing the steps in the definition of $\Phi$. This is well-defined by Proposition~\ref{prop:brauer-op}.
\end{proof}
\begin{remark}
If 2 is a zerodivisor in $\bk$, then $\delta$ cannot generally be recovered from $2\delta$. In particular, if $2\delta=2\delta'$, we see that there is an equivalence of the form
\[
\Rep_\delta(\ul{\fsp}(\bV \oplus \bV^*)) \cong \Rep_{\delta'}(\ul{\fsp}(\bV \oplus \bV^*)). \qedhere
\]
\end{remark}
\section{The Witt algebra} \label{s:witt}
\subsection{Currying} \label{ss:witt-curry}
Let $V$ be a finite-dimensional vector space with basis $\{\xi_i\}$. The \defi{Witt algebra} on $V$, denoted $W(V)$, is the Lie algebra of $\bk$-linear derivations of the polynomial ring $\bk[\xi_i]$. Thus it is spanned by elements $f \partial_i$, where $f$ is a polynomial in the $\xi_j$ and $\partial_i$ is the partial derivative with respect to $\xi_i$, and the bracket is given by
\[
[f \partial_i, g\partial_j] = f \frac{\partial g}{\partial \xi_i} \partial_j - g \frac{\partial f}{\partial \xi_j} \partial_i.
\]
We have a canonical isomorphism of vector spaces $W(V) = \bigoplus_{n \ge 0} \Sym^{n}(V) \otimes V^*$.
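For instance (a routine check from the bracket formula above), in two variables one has
\begin{displaymath}
[\xi_1 \partial_2, \xi_2 \partial_1] = \xi_1 \partial_1 - \xi_2 \partial_2, \qquad
[\partial_1, \xi_1^2 \partial_2] = 2 \xi_1 \partial_2;
\end{displaymath}
in particular, the summand $\Sym^1(V) \otimes V^* = V \otimes V^*$ is a Lie subalgebra of $W(V)$ isomorphic to $\fgl(V)$.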
\begin{remark}
The algebra of derivations of the ring $\bk[z,z^{-1}]$ of Laurent polynomials is also sometimes referred to as the Witt algebra. We do not know of a good analogue of this Lie algebra in the multivariate case.
\end{remark}
A linear map $\mu \colon W(V) \otimes M \to M$ is the same data as linear maps
\[
a^{(n)} \colon \Sym^{n+1} V \otimes M \to V \otimes M, \qquad n \ge -1.
\]
For notational simplicity, we package these together for all $n \ge -1$ into a single map
\[
a \colon \Sym V \otimes M \to V \otimes M.
\]
Fix a map $a$ as above. We define a map
\[
a' \colon \Sym V \otimes \Sym V \otimes M \to V \otimes V \otimes M
\]
as the composition
\begin{align*}
\Sym V \otimes \Sym V \otimes M &\xrightarrow{{\rm id} \otimes \Delta \otimes {\rm id}} \Sym V \otimes V \otimes \Sym V \otimes M\\
&\xrightarrow{\tau \otimes {\rm id} \otimes {\rm id}} V \otimes \Sym V \otimes \Sym V \otimes M\\
&\xrightarrow{{\rm id} \otimes m \otimes {\rm id}} V \otimes \Sym V \otimes M\\
&\xrightarrow{{\rm id} \otimes a} V \otimes V \otimes M,
\end{align*}
where $\Delta \colon \Sym V \to V \otimes \Sym V$ is the component $\Sym V \to V \otimes \Sym V$ of the comultiplication, given explicitly by $f \mapsto \sum_{i=1}^n \xi_i \otimes \frac{\partial f}{\partial \xi_i}$, $m \colon \Sym V \otimes \Sym V \to \Sym V$ is the multiplication map, and $\tau$ is the usual switching map. We define $a'' = \tau a' \tau$.
\begin{proposition} \label{prop:witt-curry}
Let $\mu$ and $a$ be corresponding linear maps as above. Then $\mu$ defines a representation of $W$ if and only if $[a_1,a_2]=a'-a''$ holds as maps $\Sym V \otimes \Sym V \otimes M \to V \otimes V \otimes M$.
\end{proposition}
\begin{proof}
Pick $\xi^\alpha,\xi^\beta \in \Sym V$ and $x \in M$. Let $\eps_i$ be the exponent vector which is 1 in position $i$ and 0 elsewhere. Then
\[
[a_1,a_2](\xi^\alpha \otimes \xi^\beta \otimes x) = \sum_{i,j} \xi_j \otimes \xi_i \otimes [\xi^\alpha \partial_j, \xi^\beta \partial_i] x,
\]
and
\[
[\xi^\alpha \partial_j, \xi^\beta \partial_i] = \beta_j \xi^{\alpha+\beta-\eps_j} \partial_i - \alpha_i \xi^{\alpha+\beta-\eps_i} \partial_j.
\]
Next, we compute the effect of $a'$ via the maps it is a composition of:
\begin{align*}
\xi^\alpha \otimes \xi^\beta \otimes x &\mapsto \sum_j \beta_j \xi^\alpha \otimes \xi_j \otimes \xi^{\beta-\eps_j} \otimes x\\
&\mapsto \sum_j \beta_j \xi_j \otimes \xi^{\alpha+\beta-\eps_j} \otimes x\\
&\mapsto \sum_{i,j} \beta_j \xi_j \otimes \xi_i \otimes (\xi^{\alpha+\beta-\eps_j} \partial_i) x.
\end{align*}
So
\[
a''(\xi^\alpha \otimes \xi^\beta \otimes x) = \sum_{i,j} \alpha_j \xi_i \otimes \xi_j \otimes (\xi^{\alpha + \beta - \eps_j} \partial_i) x.
\]
Comparing these expressions, if $\mu$ defines a representation of $W(V)$ (so that the bracket appearing in the formula for $[a_1,a_2]$ is realized by the commutator of the corresponding operators on $M$), then $[a_1,a_2]=a'-a''$. On the other hand, if $[a_1,a_2]=a'-a''$, then the above calculations show that
\[
[\xi^\alpha \partial_j, \xi^\beta \partial_i] x = (\xi^{\alpha} \partial_j)(\xi^\beta \partial_i)x - (\xi^\beta \partial_i) (\xi^{\alpha} \partial_j) x
\]
which shows that $\mu$ defines a Lie algebra action on $M$.
\end{proof}
\begin{definition}
Given an object $V$ of a tensor category $\cC$, we define a module over the \defi{curried Witt algebra} $\ul{W}(V)$ to be an object $M$ together with a map $a \colon \Sym V \otimes M \to V \otimes M$ such that $[a_1,a_2] = a'-a''$ with notation as in Proposition~\ref{prop:witt-curry}.
\end{definition}
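As a basic example (a sketch, with the verification via Proposition~\ref{prop:witt-curry} left as a routine check): take $M=\Sym(V)$ in the category of vector spaces, with $a$ the currying of the tautological action of $W(V)$ on polynomials, i.e.,
\begin{displaymath}
a(f \otimes g) = \sum_i \xi_i \otimes f \frac{\partial g}{\partial \xi_i}
\end{displaymath}
for $f \in \Sym(V)$ and $g \in M$. Then $(M,a)$ is a module over the curried Witt algebra $\ul{W}(V)$.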
\begin{remark}
There are several variants of the above definition one can consider. For instance, one can consider the Lie subalgebra $W^+(V)$ of $W(V)$ consisting of derivations $f \partial_i$ where $f$ has no constant term; it has a curried form $\ul{W}^+(V)$ similar to that for $W(V)$. One can also define a curried algebra $\ul{W}(V^*)$ by considering maps $V \otimes M \to \Div(V) \otimes M$.
\end{remark}
\begin{proposition} \label{prop:witt-gl}
Let $(M,a)$ be a representation of $\ul{W}(V)$. Then $a^{(0)}$ defines a representation of $\ul{\fgl}(V)$ on $M$.
\end{proposition}
\begin{proof}
The restrictions of $a'$ and $a''$ to $V\otimes V \otimes M$, respectively, are $\tau a^{(0)}_1$ and $\tau a^{(0)}_2$, so the identity $[a^{(0)}_1,a^{(0)}_2] = \tau( a^{(0)}_1 - a^{(0)}_2 )$ follows immediately from the definition of a representation of $\ul{W}(V)$.
\end{proof}
\subsection{In species}
Let $M$ be an $\mathbf{FB}$-module equipped with a map
\begin{displaymath}
a \colon \Sym(\bV) \otimes M \to \bV \otimes M.
\end{displaymath}
Let $\phi$ be the symmetric $(1,\ast)$-operation associated to $a$; by $(1,\ast)$ we mean that $\phi_{A,B}=0$ unless $A$ has cardinality~1. Explicitly, for a finite set $S$, a subset $B$ of $S$, and $x \in M(S \setminus B)$, we have
\begin{displaymath}
a(t^B \otimes x) = \sum_{i \in S} t^i \otimes \phi^S_{i,B}(x).
\end{displaymath}
Let $\alpha=\phi[0]$ and $\omega=\phi[1]$ be the simple operations associated to $\phi$. Explicitly, for $S$, $B$, and $x$ as above, we have
\begin{displaymath}
a(t^B \otimes x) = \sum_{i \in B} t^i \otimes \omega^{S \setminus i}_{B \setminus i}(x) + \sum_{i \in S \setminus B} t^i \otimes \alpha^S_{i,B}(x).
\end{displaymath}
In general, $\omega^S_B$ is a map $M(S \setminus B) \to M(S)$.
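For instance, unwinding the notation with $S=\{1,2,3\}$, $B=\{1,2\}$, and $x \in M(\{3\})$, we have
\begin{displaymath}
a(t^B \otimes x) = t^1 \otimes \omega^{\{2,3\}}_{\{2\}}(x) + t^2 \otimes \omega^{\{1,3\}}_{\{1\}}(x) + t^3 \otimes \alpha^S_{3,B}(x).
\end{displaymath}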
\begin{proposition} \label{prop:witt-fb}
With notation as above, $a$ defines a representation of $\ul{W}(\bV)$ if and only if the following conditions hold ($S$ is a finite set and $A$ and $B$ are disjoint subsets of $S$):
\begin{enumerate}
\item The operations $\alpha$ and $\omega$ commute with themselves and each other.
\item Let $j \in B$ and $i \in S \setminus (A \cup B)$. Then $\alpha^{S \setminus i}_{j,A} \circ \alpha^{S \setminus A}_{i,B}=\alpha^{S \setminus j}_{i, A \cup B \setminus j}$.
\item Let $j \in B$. Then $\alpha^S_{j,A} \circ \omega^{S \setminus A}_B=\omega^{S \setminus j}_{A \cup B \setminus j}$.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $A$ and $B$ be disjoint subsets of $S$ and let $x \in M(S \setminus (A \cup B))$. Then we have
\begin{align*}
a_1a_2(t^A \otimes t^B \otimes x)
&= \sum_{i \in S \setminus A} \sum_{j \in S \setminus i} t^j \otimes t^i \otimes \phi^{S \setminus i}_{j, A}(\phi^{S \setminus A}_{i, B}(x)) \\
a_2a_1(t^A \otimes t^B \otimes x)
&= \sum_{j \in S \setminus B} \sum_{i \in S \setminus j} t^j \otimes t^i \otimes \phi^{S \setminus j}_{i, B}(\phi^{S \setminus B}_{j, A}(x)).
\end{align*}
Next, we compute $a'$ as a composition of maps:
\begin{align*}
t^A \otimes t^B \otimes x
&\mapsto \sum_{j \in B} t^A \otimes t^j \otimes t^{B \setminus j} \otimes x\\
&\mapsto \sum_{j \in B} t^j \otimes t^{A \cup B \setminus j} \otimes x\\
&\mapsto \sum_{j \in B} \sum_{i \in S \setminus j} t^j \otimes t^i \otimes \phi^{S \setminus j}_{i, A \cup B \setminus j}(x).
\end{align*}
Similarly,
\begin{align*}
a''(t^A \otimes t^B \otimes x)
&= \sum_{i \in A} \sum_{j \in S \setminus i} t^j \otimes t^i \otimes \phi^{S \setminus i}_{j, A \cup B \setminus i}(x).
\end{align*}
Equating coefficients in the equation $[a_1,a_2]=a'-a''$ we find the following (for distinct $i, j\in S$):
\begin{enumerate}[(i)]
\item If $i \not\in A$ and $j \not\in B$ then $\phi^{S \setminus i}_{j,A} \circ \phi^{S \setminus A}_{i,B}=\phi^{S \setminus j}_{i,B} \circ \phi^{S \setminus B}_{j,A}$.
\item If $i \not\in A$ and $j \in B$ then $\phi^{S \setminus i}_{j,A}\circ \phi^{S \setminus A}_{i,B}=\phi^{S \setminus j}_{i,A \cup B \setminus j}$.
\item If $i \in A$ and $j \not\in B$ then $\phi^{S \setminus j}_{i,B}\circ \phi^{S \setminus B}_{j,A}=\phi^{S \setminus i}_{j, A \cup B \setminus i}$.
\item If $i \in A$ and $j \in B$ then $\phi^{S \setminus j}_{i,A \cup B \setminus j}=\phi^{S \setminus i}_{j, A \cup B \setminus i}$.
\end{enumerate}
Statement (i) is equivalent to (a); statement (ii) is equivalent to the conjunction of (b) and (c); statement (iii) is equivalent to statement (ii); and statement (iv) is automatic. The result follows.
\end{proof}
\subsection{The restricted partition category} \label{ss:res-part}
Let $\fG=\fG(\delta)$ be the partition category with parameter $\delta$. The objects of this category are finite sets. The space $\Hom_{\fG}(S,T)$ of morphisms is the vector space spanned by partition diagrams from $S$ to $T$; such a diagram is a set-partition of $S \amalg T$. For the definition of composition (and additional details), see \cite[\S 6]{brauercat1}. We note that the composition law depends on the parameter $\delta$.
A partition diagram $S \to T$ is called upwards if each part contains at least one element of $T$ and at most one element of $S$. The upwards partition category $\fU$ is the subcategory of $\fG$ containing all objects and where $\Hom_{\fU}(S,T)$ is spanned by upwards diagrams. There is a similarly defined downwards partition category $\fD$. The intersection $\fM$ of $\fU$ and $\fD$ is the linearization of $\mathbf{FB}$. The pair $(\fU, \fD)$ is a triangular structure on $\fG$, see \cite[Proposition~6.3]{brauercat1}. (Once again, note that \cite{brauercat1} works in characteristic~0, but this statement and its proof hold in general.)
We say that a partition diagram from $S$ to $T$ is \defi{restricted} if each part contains at most one element of $S$. We define the \defi{restricted partition category} $\fG^r=\fG^r(\delta)$ to be the subcategory of $\fG$ with all objects and where the $\Hom$ spaces are spanned by restricted partition diagrams. One readily verifies that this is indeed a subcategory of $\fG$. We let $\fU^r$ and $\fD^r$ be the intersections of $\fU$ and $\fD$ with $\fG^r$. One easily verifies that $(\fU^r, \fD^r)$ is a triangular structure on $\fG^r$.
Suppose that $M$ is a $\fG$-module. Restricting to $\mathbf{FB} \subset \fG$, we can regard $M$ as an $\mathbf{FB}$-module. Let $S$ be a finite set, let $A$ be a subset of $S$, and let $i \in S \setminus A$. We have a morphism $\eta^S_{i,A} \colon S \setminus A \to S \setminus i$ in $\fG$ corresponding to the diagram in which $A \cup \{i\}$ forms a single part, and the remaining diagram is the identity. This induces a linear map
\begin{displaymath}
\alpha^S_{i,A} \colon M(S \setminus A) \to M(S \setminus i).
\end{displaymath}
One easily sees that $\alpha$ is a simple symmetric $(1,\ast)$-operation on $M$. Similarly, we have a morphism $\zeta^S_A \colon S \setminus A \to S$ in $\fG$ in which $A$ forms a single part and the remaining diagram is the identity, and this induces a linear map
\begin{displaymath}
\omega^S_A \colon M(S \setminus A) \to M(S).
\end{displaymath}
Again, one verifies that $\omega$ is a symmetric $(0,\ast)$-operation on $M$. Using the rule for composition in $\fG$, one sees that $\alpha$ and $\omega$ satisfy conditions (a), (b), and (c) from Proposition~\ref{prop:witt-fb}, as well as the following:
\begin{enumerate} \setcounter{enumi}{3}
\item Let $i,j \in S$ be distinct, and put $A=\{j\}$. Then $\alpha^S_{i,A}=(\iota^S_{i,j})_*$.
\item We have $\omega^S_{\emptyset}=\delta$.
\end{enumerate}
Since the $\eta$ and $\zeta$ morphisms generate $\fG^r$, we see that the operations $\alpha$ and $\omega$ completely determine the $\fG^r$-structure on $M$. The following proposition shows that the above conditions exactly characterize the operations arising in this manner:
\begin{proposition} \label{prop:res-part-op}
Let $M$ be an $\mathbf{FB}$-module equipped with a simple symmetric $(1,\ast)$-operation $\alpha$ and a symmetric $(0,\ast)$-operation $\omega$ satisfying (a), (b), and (c) from Proposition~\ref{prop:witt-fb} and (d) and (e) above. Then $M$ carries a unique $\fG^r$-structure inducing $\alpha$ and $\omega$.
\end{proposition}
\begin{proof}
Suppose $M$ is given with $\alpha$ and $\omega$ as in the statement of the proposition. We first show how to construct a $\fU^r$-structure on $M$. Let $\phi \colon S \to T$ be a $\fU^r$-morphism corresponding to a partition diagram. Let $B_1,\dots,B_r$ be the blocks of this diagram such that $|B_i \cap S|=1$ and let $B_1',\dots,B_s'$ be the blocks of this diagram such that $|B_i'\cap S|=0$. Let $X_i = B_i \cap T$ and let $x_i$ be the unique element of $B_i \cap S$, and let $Y_i = B_i'$ (thought of as a subset of $T$). Also set $T' = T \setminus (Y_1 \cup \cdots \cup Y_s)$. We have the factorization
\[
\phi = \eta^{(T'\setminus (X_r \cup \cdots \cup X_2))\amalg \{x_1\} }_{x_1, X_1} \cdots \eta^{T' \amalg \{x_r\}}_{x_r, X_r} \zeta^{T \setminus (Y_s \cup \cdots \cup Y_2)}_{Y_1} \cdots \zeta^{T \setminus Y_s}_{Y_{s-1}} \zeta^T_{Y_s}
\]
We define $M_{\phi} \colon M(S) \to M(T)$ by replacing each $\zeta$ above by $\omega$ and each $\eta$ by $\alpha$. By (a), $\alpha$ and $\omega$ self-commute, so the order of the blocks does not affect the definition of $M_{\phi}$. Furthermore, since $\alpha$ and $\omega$ commute with each other, we could have alternatively factored $\phi$ as a product of $\eta$'s and $\zeta$'s in any order. This fact, together with conditions (b) and (c), implies that for any other restricted upwards partition diagram $\psi \colon T\to U$, we have $M_{\psi} M_{\phi} = M_{\psi \phi}$.
We can do the same to give a $\fD^r$-structure on $M$. This is like the above case but we only use $\eta$ such that the $X_i$ have size at most 1. Condition (d) tells us that the restriction of the $\fD^r$ and $\fU^r$ structures to $\mathbf{FB}$ agree with the usual $\mathbf{FB}$-action, so in particular they agree with each other.
Let $\cU$ be the class of morphisms in $\fU$ isomorphic to $\eta^S_{i, A}$ for some $S$, $i$, and $A$ (with $|A|>1$), or to $\zeta^S_A$ for some $S$ and $A$, and define $\cD$ similarly using $\eta^S_{i, \emptyset}$. One easily sees that $\cU$ generates $\fU$ and $\cD$ generates $\fD$. Thus, by Proposition~\ref{prop:tri-comp}, it suffices to show that $(\phi, \psi)$ is compatible for $\phi \in \cU$ and $\psi \in \cD$ with $\psi \circ \phi$ defined.
Let $\psi = \eta^S_{j,\emptyset}$. First suppose $\phi = \eta^{S\amalg \{i\}}_{i, A}$ for $i \notin S$. If $j \in A$, then compatibility follows from (b), and if $j \notin A$, then compatibility follows from (a) since $\alpha$ self-commutes. Now suppose that $\phi = \zeta^S_A$. If $j \in A$, then compatibility follows from (c) and if $j \notin A$, then compatibility follows again from (a) since $\alpha$ and $\omega$ commute with each other.
\end{proof}
\subsection{The comparison theorem}
Recall from Proposition~\ref{prop:witt-gl} that if $M$ is a $\ul{W}(\bV)$-module then, restricting the action map to $\bV \subset \Sym(\bV)$, we obtain a representation of $\ul{\fgl}(\bV)$ on $M$. We say that $M$ is \defi{$\delta$-standard} if this representation of $\ul{\fgl}(\bV)$ is $\delta$-standard. We write $\Rep_{\delta}(\ul{W}(\bV))$ for the full subcategory of $\Rep(\ul{W}(\bV))$ spanned by the $\delta$-standard representations. The following is the main result of this section:
\begin{theorem} \label{thm:witt}
We have a natural isomorphism of categories:
\begin{displaymath}
\Mod_{\fG^r(\delta)} \cong \Rep_{\delta}(\ul{W}(\bV)).
\end{displaymath}
\end{theorem}
\begin{proof}
This follows from combining Propositions~\ref{prop:witt-fb} and~\ref{prop:res-part-op}. We note that conditions (d) and (e) in the latter correspond to the $\delta$-standard condition.
\end{proof}
\subsection{Application to $\mathbf{FA}$} \label{ss:fa}
We now consider $\fG^r=\fG^r(0)$ with parameter $\delta=0$. Let $\mathbf{FA}$ be the category of finite sets and all functions. A function $f \colon T \to S$ can be viewed as a restricted partition diagram from $S$ to $T$: the parts are $\{x\} \cup f^{-1}(x)$ with $x \in S$. Furthermore, this identification is compatible with composition. We thus see that the linearized category $\bk[\mathbf{FA}^{\op}]$ is equivalent to the subcategory of $\fG^r$ spanned by the $\eta$ morphisms. Since $\delta=0$, the $\zeta$ morphisms form an ideal of $\fG^r$, and we have a $\bk$-linear functor $\fG^r \to \bk[\mathbf{FA}^{\op}]$ that kills the $\zeta$ morphisms. We therefore see that an $\mathbf{FA}^{\op}$-module is the same as a $\fG^r$-module in which the $\zeta$ morphisms act by zero. Let $\Rep_0'(\ul{W}(\bV))$ be the full subcategory of $\Rep(\ul{W}(\bV))$ spanned by representations that are 0-standard and in which the $\omega$ operation vanishes. Since the $\zeta$ morphisms correspond to the $\omega$ operation, Theorem~\ref{thm:witt} immediately implies the following:
\begin{proposition}
There is a natural isomorphism of categories
\begin{displaymath}
\Mod_{\mathbf{FA}^{\op}} \cong \Rep_0'(\ul{W}(\bV)).
\end{displaymath}
\end{proposition}
\begin{remark}
Let $\mathbf{FS}$ be the category of finite sets and surjections. There is an analog of the above proposition in which $\mathbf{FA}^{\op}$ is replaced with $\mathbf{FS}^{\op}$ and $\ul{W}(\bV)$ is replaced with $\ul{W}^+(\bV)$. This point of view will be pursued further in \cite{witt}.
\end{remark}
\section{The Weyl Lie algebra} \label{s:weyl}
\subsection{Currying}
Let $(U, \omega)$ be a finite-dimensional symplectic space. Recall that the \defi{Weyl algebra} $A=A(U)$ is the quotient of the tensor algebra $T(U)$ by the 2-sided ideal generated by the relations $xy-yx=\omega(x,y)$ with $x,y \in U$. We define the \defi{Weyl Lie algebra}, denoted $\fa=\fa(U)$, to be the Lie algebra of this associative algebra. Thus $\fa=A$ as a vector space, and the bracket in $\fa$ is the commutator bracket in $A$. Note that $\fa$-modules are vastly more complicated than $A$-modules; indeed an $\fa$-module is the same as a module over the universal enveloping algebra $\cU(\fa)$, which is much larger than $A$.
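For example, if $U$ has symplectic basis $\{\xi, \eta\}$ with $\omega(\eta,\xi)=1$ (the normalization implicitly used in the proof of Proposition~\ref{prop:part-curry} below), then the defining relation reads $\eta \xi = \xi \eta + 1$, so $A(U)$ is the first Weyl algebra, with $\eta$ acting as $d/d\xi$ on $\bk[\xi]$.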
Let $V$ be a finite-dimensional vector space, and let $\fa=\fa(V \oplus V^*)$, where we regard $U=V \oplus V^*$ as a symplectic space in the usual manner. As a vector space, we have $\fa=\Sym(V \oplus V^*)$. We thus see that, for a vector space $M$, giving a map
\begin{displaymath}
\mu \colon \fa \otimes M \to M
\end{displaymath}
is equivalent to giving maps
\begin{displaymath}
\mu_{r,s} \colon \Sym^r(V) \otimes \Sym^s(V^*) \otimes M \to M
\end{displaymath}
for all $r,s \in \bN$, which, in turn, is equivalent to giving maps
\begin{displaymath}
a_{r,s} \colon \Sym^r(V) \otimes M \to \Div^s(V) \otimes M
\end{displaymath}
for all $r,s \in \bN$. We assume, for simplicity, that for $f \in \Sym^r(V)$ and $x \in M$ we have $a_{r,s}(f \otimes x)=0$ for all but finitely many $s$; this will automatically hold in the main case of interest to us. We can therefore package the $a_{r,s}$'s into a single map
\begin{displaymath}
a \colon \Sym(V) \otimes M \to \Div(V) \otimes M.
\end{displaymath}
Given a map $a$ as above, we define $a'$ to be the composition in the following diagram (set $S=\Sym(V)$ and $D=\Div(V)$):
\begin{displaymath}
\resizebox{\textwidth}{!}{
\xymatrix@C=6em{
S \otimes S \otimes M \ar[r]^-{\id \otimes \Delta \otimes \id} \ar@{..>}[d]_{a'} &
S \otimes S \otimes S \otimes M \ar[r]^-{\tau \otimes \id \otimes \id} &
S \otimes S \otimes S \otimes M \ar[r]^-{\id \otimes m \otimes \id} &
S \otimes S \otimes M \ar[d]^{\id \otimes a} \\
D \otimes D \otimes M &
D \otimes D \otimes D \otimes M \ar[l]_-{m \otimes \id \otimes \id} &
S \otimes D \otimes D \otimes M \ar[l]_-{\avg \otimes \id \otimes \id \otimes \id} &
S \otimes D \otimes M \ar[l]_-{\id \otimes \Delta \otimes \id} } }
\end{displaymath}
Here $m$ and $\Delta$ are multiplication and comultiplication, $\tau$ is the symmetry of the tensor product, and $\avg \colon S \to D$ is the averaging (symmetrization) map; in the monomial notation introduced in the proof below, $\avg(\xi^{\alpha})=\alpha! \cdot \xi^{[\alpha]}$. We define $a'' = \tau_{1,2} a' \tau_{1,2}$.
\begin{proposition} \label{prop:part-curry}
Let $\mu$ and $a$ be corresponding linear maps as above. Then $\mu$ defines a representation of $\fa$ if and only if $[a_1,a_2]=a'-a''$.
\end{proposition}
\begin{proof}
Let $\xi_1, \ldots, \xi_n$ be a basis for $V$ and let $\eta_1,\dots,\eta_n$ be the dual basis for $V^*$. We identify $\Sym(V)$ with the polynomial ring $\bk[\xi_1, \ldots, \xi_n]$, and $\Sym(V^*)$ with the polynomial ring $\bk[\eta_1, \ldots, \eta_n]$. For an exponent vector $\alpha \in \bN^n$, we let $\xi^{\alpha}$ be the monomial $\xi_1^{\alpha_1} \cdots \xi_n^{\alpha_n}$. We also define the divided power $\xi^{[\alpha]}=\frac{\xi^{\alpha}}{\alpha!}$, where $\alpha!=(\alpha_1!) \cdots (\alpha_n!)$. We define $\eta^{\alpha}$ and $\eta^{[\alpha]}$ similarly. For $1 \le i \le n$, we let $\delta_i \in \bN^n$ be the exponent vector that is~1 in the $i$th coordinate and~0 elsewhere.
The identity map $\Sym^r(V^*) \to \Sym^r(V^*)$ curries to the map $\bk \to \Div^r(V) \otimes \Sym^r(V^*)$ taking~1 to $\sum_{\vert \sigma \vert=r} \xi^{[\sigma]} \otimes \eta^{\sigma}$. It follows that we have
\begin{displaymath}
a(\xi^{\alpha} \otimes x) = \sum_{\sigma} \xi^{[\sigma]} \otimes \xi^{\alpha} \eta^{\sigma} x,
\end{displaymath}
where the sum is over all exponent vectors, and $\xi^{\alpha} \eta^{\sigma}$ is regarded as an element of $\fa$. As usual, we thus have
\begin{displaymath}
[a_1,a_2](\xi^{\alpha} \otimes \xi^{\beta} \otimes x) = \sum_{\sigma,\tau} \xi^{[\sigma]} \otimes \xi^{[\tau]} \otimes [\xi^{\alpha} \eta^{\sigma}, \xi^{\beta} \eta^{\tau}] x.
\end{displaymath}
Now, in the Weyl algebra $A$ we have
\begin{displaymath}
\eta_i^r \xi_i^s = \sum_{\epsilon_i \in \bN} \binom{r}{\epsilon_i} \binom{s}{\epsilon_i} \epsilon_i! \cdot \xi_i^{s-\epsilon_i} \eta_i^{r-\epsilon_i},
\end{displaymath}
and so
\begin{displaymath}
[\xi^{\alpha} \eta^{\sigma}, \xi^{\beta} \eta^{\tau}] = \sum_{\epsilon \in \bN^n} \left( \binom{\beta}{\epsilon} \binom{\sigma}{\epsilon} - \binom{\alpha}{\epsilon} \binom{\tau}{\epsilon} \right) \epsilon! \cdot \xi^{\alpha+\beta-\epsilon} \eta^{\sigma+\tau-\epsilon},
\end{displaymath}
where $\binom{\alpha}{\epsilon} = \prod_{i=1}^n \binom{\alpha_i}{\epsilon_i}$ and $\epsilon! = \prod_{i=1}^n \epsilon_i!$.
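(As a quick sanity check of this formula, take $n=1$, $\alpha=\sigma=\beta=(1)$, and $\tau=(0)$: only the term $\epsilon=(1)$ survives, giving $[\xi\eta,\xi]=\xi$, in agreement with a direct computation from $\eta\xi=\xi\eta+1$.)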
Thus, we have
\begin{displaymath}
[a_1,a_2](\xi^{\alpha} \otimes \xi^{\beta} \otimes x) = \sum_{\sigma,\tau,\epsilon} \left( \binom{\beta}{\epsilon} \binom{\sigma}{\epsilon} - \binom{\alpha}{\epsilon} \binom{\tau}{\epsilon} \right) \epsilon! \cdot \xi^{[\sigma]} \otimes \xi^{[\tau]} \otimes \xi^{\alpha+\beta-\epsilon} \eta^{\sigma+\tau-\epsilon} x.
\end{displaymath}
We now compute $a'(\xi^{\alpha} \otimes \xi^{\beta} \otimes x)$. The map $a'$ is defined as the composition of seven maps. The effect of each is worked out in turn in the following derivation
\begingroup
\allowdisplaybreaks
\begin{align*}
\xi^{\alpha} \otimes \xi^{\beta} \otimes x
&\mapsto \sum_{\epsilon} \binom{\beta}{\epsilon} \xi^{\alpha} \otimes \xi^{\epsilon} \otimes \xi^{\beta-\epsilon} \otimes x \\
&\mapsto \sum_{\epsilon} \binom{\beta}{\epsilon} \xi^{\epsilon} \otimes \xi^{\alpha} \otimes \xi^{\beta-\epsilon} \otimes x \\
&\mapsto \sum_{\epsilon} \binom{\beta}{\epsilon} \xi^{\epsilon} \otimes \xi^{\alpha+\beta-\epsilon} \otimes x \\
&\mapsto \sum_{\epsilon, \rho} \binom{\beta}{\epsilon} \xi^{\epsilon} \otimes \xi^{[\rho]} \otimes \xi^{\alpha+\beta-\epsilon} \eta^{\rho} x \\
&\mapsto \sum_{\epsilon,\nu,\mu} \binom{\beta}{\epsilon} \xi^{\epsilon} \otimes \xi^{[\nu]} \otimes \xi^{[\mu]} \otimes \xi^{\alpha+\beta-\epsilon} \eta^{\nu+\mu} x \\
&\mapsto \sum_{\epsilon,\nu,\mu} \binom{\beta}{\epsilon} \epsilon! \cdot \xi^{[\epsilon]} \otimes \xi^{[\nu]} \otimes \xi^{[\mu]} \otimes \xi^{\alpha+\beta-\epsilon} \eta^{\nu+\mu} x \\
&\mapsto \sum_{\epsilon,\nu,\mu} \binom{\beta}{\epsilon} \binom{\epsilon+\nu}{\epsilon} \epsilon! \cdot \xi^{[\epsilon+\nu]} \otimes \xi^{[\mu]} \otimes \xi^{\alpha+\beta-\epsilon} \eta^{\nu+\mu} x \\
&= \sum_{\epsilon,\sigma,\tau} \binom{\beta}{\epsilon} \binom{\sigma}{\epsilon} \epsilon! \cdot \xi^{[\sigma]} \otimes \xi^{[\tau]} \otimes \xi^{\alpha+\beta-\epsilon} \eta^{\sigma+\tau-\epsilon} x.
\end{align*}
\endgroup
We thus see that $a'$ gives the first term in $[a_1,a_2]$. A similar computation shows that $a''$ gives the second, which completes the proof.
\end{proof}
\begin{definition}
Let $\cC$ be a tensor category and let $V$ be an object of $\cC$. We define the \defi{curried Weyl Lie algebra} $\ul{\fa}(V \oplus V^*)$ as follows. A representation of $\ul{\fa}(V \oplus V^*)$ is an object $M$ of $\cC$ equipped with maps
\begin{displaymath}
a_{n,m} \colon \Sym^n(V) \otimes M \to \Div^m(V) \otimes M
\end{displaymath}
for all $n,m \ge 0$, such that $[a_1,a_2]=a'-a''$, where $a'$ and $a''$ are defined as above.
\end{definition}
\begin{proposition} \label{prop:weyl-to-witt}
Let $V$ be an object of $\cC$ and let $(M,\alpha)$ be a representation of $\ul{\fa}(V \oplus V^*)$. Let $a$ be the composition
\begin{displaymath}
\xymatrix{
\Sym(V) \otimes M \ar[r]^\alpha & \Div(V) \otimes M \ar[r]^-{\pi \otimes \id} & V \otimes M}
\end{displaymath}
where the second map comes from the projection $\pi \colon \Div(V) \to V$. Then $a$ is a representation of the curried Witt algebra $\ul{W}(V)$. In particular, the composition
\begin{displaymath}
\xymatrix{
V \otimes M \ar[r] & \Sym(V) \otimes M \ar[r]^\alpha & \Div(V) \otimes M \ar[r] & V \otimes M}
\end{displaymath}
is a representation of $\ul{\fgl}(V)$.
\end{proposition}
\begin{proof}
First, we have
\[
[a_1,a_2] = (\pi \otimes \pi \otimes \id) \circ [\alpha_1,\alpha_2].
\]
Second, $a'$ is equal to the following composition:
\begin{align*}
\Sym V \otimes \Sym V \otimes M &\xrightarrow{{\rm id} \otimes \Delta \otimes {\rm id}} \Sym V \otimes \Sym V \otimes \Sym V \otimes M\\
&\xrightarrow{\tau \otimes {\rm id} \otimes {\rm id}} \Sym V \otimes \Sym V \otimes \Sym V \otimes M\\
&\xrightarrow{{\rm id} \otimes m \otimes {\rm id}} \Sym V \otimes \Sym V \otimes M\\
&\xrightarrow{\id \otimes \alpha} \Sym V \otimes \Div V \otimes M\\
&\xrightarrow{\pi \otimes \pi \otimes \id} V \otimes V\otimes M.
\end{align*}
The first four maps agree with the first four maps of the definition of $\alpha'$. It is straightforward to verify that the map
\[
\pi \otimes \pi \otimes \id \colon \Sym V \otimes \Div V \otimes M \to V \otimes V \otimes M
\]
agrees with the composition
\begin{align*}
\Sym V \otimes \Div V \otimes M &\xrightarrow{\id \otimes \Delta \otimes \id} \Sym V \otimes \Div V \otimes \Div V \otimes M\\
&\xrightarrow{\avg \otimes \id \otimes \id \otimes \id}\Div V \otimes \Div V \otimes \Div V \otimes M\\
&\xrightarrow{\id \otimes m \otimes \id} \Div V \otimes \Div V \otimes M\\
&\xrightarrow{\pi \otimes \pi \otimes \id} V \otimes V \otimes M.
\end{align*}
In particular, we have $a' = (\pi \otimes \pi \otimes \id) \circ \alpha'$, and by applying $\tau$ we conclude that $a'' = (\pi \otimes \pi \otimes \id) \circ \alpha''$. This means that $[a_1,a_2]=a'-a''$ results from applying $\pi \otimes \pi \otimes \id$ to the identity $[\alpha_1,\alpha_2] = \alpha'-\alpha''$.
\end{proof}
\subsection{In species}
Let $M$ be an $\mathbf{FB}$-module and consider a map
\begin{displaymath}
a \colon \Sym(\bV) \otimes M \to \Div(\bV) \otimes M.
\end{displaymath}
Let $\phi$ be the corresponding symmetric operation on $M$. Thus if $S$ is a finite set, $B$ is a subset of $S$, and $x$ is an element of $M(S \setminus B)$, then
\begin{displaymath}
a(t^B \otimes x) = \sum_{A \subseteq S} t^{[A]} \otimes \phi^S_{A,B}(x).
\end{displaymath}
We consider the following conditions on $\phi$. Let $A$, $B$, $C$, and $D$ be subsets of a finite set $S$, with $A \cap B = \emptyset$ and $C \cap D = \emptyset$.
\begin{itemize}
\item[(B1)] If $A \cap C=B \cap D=\emptyset$ then
\begin{displaymath}
\phi^{S \setminus C}_{D, A} \circ \phi^{S \setminus A}_{C,B}
= \phi^{S \setminus D}_{C,B} \circ \phi^{S \setminus B}_{D,A}.
\end{displaymath}
In other words, $\phi$ commutes with itself.
\item[(B2)] If $A \cap C=\emptyset$ and $B \cap D \ne \emptyset$ then
\begin{displaymath}
\phi^{S \setminus C}_{D, A} \circ \phi^{S \setminus A}_{C,B}
= \sum_{\substack{X \subseteq B \cap D\\ X \ne \emptyset}} \phi^{S \setminus X}_{(D \setminus X) \cup C,A \cup (B \setminus X)}.
\end{displaymath}
There is also a mirror version of this condition for the case $A \cap C \ne \emptyset$ and $B \cap D = \emptyset$; since (B2) is imposed for all choices of $A,B$ and $C,D$, and the two pairs play symmetric roles, the mirror version is implied by the one above, so we do not list it separately.
\item[(B3)] If $A \cap C \ne \emptyset$ and $B \cap D \ne \emptyset$ then
\begin{displaymath}
\sum_{X \subseteq B \cap D} \phi^{S \setminus X}_{(D \setminus X) \cup C,A \cup (B \setminus X)}
= \sum_{X \subseteq A \cap C} \phi^{S \setminus X}_{(C \setminus X) \cup D, B \cup (A \setminus X)}.
\end{displaymath}
\end{itemize}
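To orient the reader, here is the smallest instance of (B2): taking $A=C=\emptyset$ and $B=D=\{x\}$, the condition reads
\begin{displaymath}
\phi^S_{x,\emptyset} \circ \phi^S_{\emptyset,x} = \phi^{S \setminus x}_{\emptyset,\emptyset},
\end{displaymath}
where both sides are maps $M(S \setminus x) \to M(S \setminus x)$.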
We then have the result:
\begin{proposition}
The map $a$ defines a representation of $\ul{\fa}(\bV \oplus \bV^*)$ if and only if the operation $\phi$ satisfies (B1), (B2), and (B3).
\end{proposition}
\begin{proof}
Let $A$ and $B$ be disjoint subsets of $S$ and let $x \in M(S \setminus (A \cup B))$. Then
\begin{align*}
a_1(a_2(t^A \otimes t^B \otimes x)) &= \sum_{\substack{C \amalg D \subseteq S\\ A \cap C = \emptyset}} t^{[D]} \otimes t^{[C]} \otimes \phi^{S \setminus C}_{D, A}(\phi^{S \setminus A}_{C,B}(x)),\\
a_2(a_1(t^A \otimes t^B \otimes x)) &= \sum_{\substack{C \amalg D \subseteq S\\ B \cap D = \emptyset}} t^{[D]} \otimes t^{[C]} \otimes \phi^{S \setminus D}_{C,B}(\phi^{S \setminus B}_{D,A}(x)),\\
a'(t^A \otimes t^B \otimes x) &= \sum_{C \amalg D \subseteq S} \sum_{X \subseteq B \cap D} t^{[D]} \otimes t^{[C]} \otimes \phi^{S \setminus X}_{(D \setminus X) \cup C,A \cup (B \setminus X)}(x),\\
a''(t^A \otimes t^B \otimes x) &= \sum_{C \amalg D \subseteq S} \sum_{X \subseteq A \cap C} t^{[D]} \otimes t^{[C]} \otimes \phi^{S \setminus X}_{(C \setminus X) \cup D, B \cup (A \setminus X)}(x).
\end{align*}
Equating coefficients, one sees that $[a_1,a_2]=a'-a''$ if and only if (B1), (B2), and (B3) hold.
\end{proof}
Recall that a symmetric operation $\phi$ corresponds to a sequence $(\phi[n])_{n \ge 0}$ of simple symmetric operations. The correspondence is given by $\phi[n]^S_{A,B}=\phi^{S \amalg [n]}_{A \amalg [n], B \amalg [n]}$. We now wish to translate the conditions (B1), (B2), and (B3) to the $\phi[n]$. We begin with the following observation:
\begin{proposition}
Condition {\rm (B3)} is equivalent to the following condition:
\begin{itemize}
\item[\rm (B3$'$)] We have $\phi[n]=(-1)^{n+1} \phi[1]$ for all $n \ge 1$.
\end{itemize}
\end{proposition}
\begin{proof}
Suppose (B3) holds. Let $P$ and $Q$ be disjoint subsets of a set $S$. Let $r \ge 0$ and put
\begin{displaymath}
\tilde{S}=S \amalg \{i_1, \ldots, i_r, j_1, j_2, k \}
\end{displaymath}
where the $i$'s, $j$'s, and $k$ are distinct from each other and from all elements of $S$. Put
\begin{displaymath}
A = Q \cup \{i_1, \ldots, i_r, k \}, \quad
B = \{j_1, j_2\}, \quad
C = P \cup \{k\}, \quad
D = \{i_1,\ldots,i_r,j_1,j_2\}.
\end{displaymath}
We have
\begin{align*}
\sum_{X \subseteq B \cap D} \phi^{\tilde{S} \setminus X}_{(D \setminus X) \cup C, A \cup (B \setminus X)} &= \phi[r+3]^S_{P,Q}+2\phi[r+2]^S_{P,Q}+\phi[r+1]^S_{P,Q} \\
\sum_{X \subseteq A \cap C} \phi^{\tilde{S} \setminus X}_{D \cup (C \setminus X), (A \setminus X) \cup B} &= \phi[r+3]^S_{P,Q}+\phi[r+2]^S_{P,Q}.
\end{align*}
By (B3), the above two expressions are equal. We thus find
\begin{displaymath}
\phi[r+2]=-\phi[r+1].
\end{displaymath}
As this holds for all $r \ge 0$, we find $\phi[n]=(-1)^{n+1} \phi[1]$ for $n \ge 1$, and so (B3$'$) holds.
Now suppose (B3$'$) holds. This implies
\begin{displaymath}
\phi^{S \amalg Y}_{P \amalg Y, Q \amalg Y}=(-1)^{\# Y} \phi^S_{P,Q}
\end{displaymath}
provided that $P$ and $Q$ are not disjoint. Let $A$, $B$, $C$, and $D$ be as in (B3), and note that the sets $(D \setminus B) \cup C$ and $A \cup (B \setminus D)$ are not disjoint, since $A \cap C \ne \emptyset$ in the setting of (B3). Put $m=\# (B \cap D)$, and suppose $X$ is a subset of $B \cap D$ of size $k$. Then applying the above equation with $Y=(B \cap D) \setminus X$, we find
\begin{displaymath}
\phi^{S \setminus X}_{(D \setminus X) \cup C, A \cup (B \setminus X)}=
(-1)^{m-k} \phi^{S \setminus (B \cap D)}_{(D \setminus B) \cup C, A \cup (B \setminus D)}.
\end{displaymath}
It follows that
\begin{displaymath}
\sum_{X \subseteq B \cap D} \phi^{S \setminus X}_{(D \setminus X) \cup C, A \cup (B \setminus X)}
=\sum_{k=0}^m \binom{m}{k} (-1)^{m-k} \phi^{S \setminus (B \cap D)}_{(D \setminus B) \cup C, A \cup (B \setminus D)} =0,
\end{displaymath}
since $m \ge 1$.
Similarly, the other sum in (B3) vanishes, and so (B3) holds.
\end{proof}
The above proposition shows that we just need to understand the operations $\phi[0]$ and $\phi[1]$. To this end, we introduce some notation. Let $\sB_M$ denote the set of symmetric operations $\phi$ on $M$ satisfying (B1), (B2), and (B3), and let $\sC_M$ denote the set of pairs $(\alpha, \omega)$ of simple symmetric operations on $M$ satisfying the following conditions (C1) and (C2). In what follows, $A$, $B$, $C$, and $D$ are subsets of a finite set $S$.
\begin{itemize}
\item[(C1)] The operations $\alpha$ and $\omega$ commute with themselves and each other. Precisely, assuming that $A$, $B$, $C$ and $D$ are pairwise disjoint, we have
\begin{displaymath}
\alpha^{S\setminus C}_{D,A} \circ \omega^{S \setminus A}_{C,B} = \omega^{S \setminus D}_{C,B} \circ \alpha^{S \setminus B}_{D,A},
\end{displaymath}
and similarly with $\omega$ replaced by $\alpha$, or $\alpha$ replaced by $\omega$.
\item[(C2)] Suppose $B \cap D \ne \emptyset$, but all other pairs disjoint. Put
\begin{align*}
\alpha_1 &= \alpha^{S \setminus C}_{D,A} &
\alpha_2 &= \alpha^{S \setminus A}_{C,B} &
\alpha_3 &= \alpha^{S \setminus (B \cap D)}_{C \cup (D \setminus B),A \cup (B \setminus D)} \\
\omega_1 &= \omega^{S \setminus C}_{D,A} &
\omega_2 &= \omega^{S \setminus A}_{C,B} &
\omega_3 &= \omega^{S \setminus (B \cap D)}_{C \cup (D \setminus B),A \cup (B \setminus D)}
\end{align*}
and let $m=\# (B \cap D)$. Then
\begin{displaymath}
\alpha_1 \alpha_2=\alpha_3, \qquad \alpha_1 \omega_2=\omega_1 \alpha_2=0, \qquad \omega_1 \omega_2=(-1)^{m+1} \omega_3.
\end{displaymath}
\end{itemize}
We now have the following:
\begin{proposition} \label{prop:B-C-bijec}
We have a bijection
\begin{displaymath}
\Theta \colon \sB_M \to \sC_M, \qquad \phi \mapsto (\phi[0]+\phi[1],-\phi[1]).
\end{displaymath}
The inverse to $\Theta$ can be described as follows: $\phi=\Theta^{-1}(\alpha,\omega)$ is the unique symmetric operation on $M$ satisfying $\phi[0]=\alpha+\omega$ and $\phi[n]=(-1)^n \omega$ for $n \ge 1$.
\end{proposition}
We first prove a lemma.
\begin{lemma}
Suppose that $\phi$ satisfies {\rm (B3)}. Then {\rm (B2)} is equivalent to the following condition:
\begin{itemize}
\item[\rm (B$2'$)] Let $A$, $B$, $C$, and $D$ be subsets of a finite set $S$ such that $B \cap D \ne \emptyset$, but all other pairs are disjoint. Put $m=\# (B \cap D)$. Then we have
\begin{align*}
\phi[0]^{S \setminus C}_{D,A} \circ \phi[0]^{S \setminus A}_{C,B} &= \phi[0]^{S \setminus (B \cap D)}_{(D \setminus B) \cup C, A \cup (B \setminus D)} + (1+(-1)^m) \phi[1]^{S \setminus (B \cap D)}_{(D \setminus B) \cup C, A \cup (B \setminus D)}, \\
\phi[p]^{S \setminus C}_{D,A} \circ \phi[q]^{S \setminus A}_{C,B} &= (-1)^{p+q+m} \phi[1]^{S \setminus (B \cap D)}_{(D \setminus B) \cup C, A \cup (B \setminus D)},
\end{align*}
where in the second equation $p$ and $q$ are non-negative and not both zero.
\end{itemize}
\end{lemma}
\begin{proof}
Suppose $\phi$ satisfies (B2). Let $A$, $B$, $C$, and $D$ be as above. Let $P$ and $Q$ be sets disjoint from each other and from $S$ of cardinalities $p$ and $q$. Put
\begin{displaymath}
S'=S \cup P \cup Q, \quad
A'=A \cup P, \quad
B'=B \cup Q, \quad
C'=C \cup Q, \quad
D'=D \cup P.
\end{displaymath}
Let $m=\# (B \cap D)$. Applying (B2) to the prime sets, we find
\begin{displaymath}
\phi[p]^{S \setminus C}_{D,A} \circ \phi[q]^{S \setminus A}_{C,B} = \sum_{k=1}^m \binom{m}{k} \phi[p+q+m-k]^{S \setminus (B \cap D)}_{(D \setminus B) \cup C, A \cup (B \setminus D)}.
\end{displaymath}
Now, if $p+q>0$ then by (B3$'$) $\phi[p+q+m-k]=(-1)^{p+q+m-k+1} \phi[1]$, and we obtain the second equation in (B2$'$). If $p+q=0$ then (B3$'$) only gives this identity for $1 \le k<m$, and so the final term in the above sum must be handled differently; this gives the first equation in (B2$'$). Thus (B2$'$) holds. The same reasoning yields the reverse implication.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:B-C-bijec}]
Let $\phi \in \sB_M$ be given, and put $\alpha=\phi[0]+\phi[1]$ and $\omega=-\phi[1]$. Condition (C1) follows immediately from (B1). We now examine condition (C2); use the notation from there. Translating (B2$'$) to this notation gives
\begin{align*}
(\alpha_1+\omega_1)(\alpha_2+\omega_2) &= (\alpha_3+\omega_3)-(1+(-1)^m) \omega_3, \\
(\alpha_1+\omega_1) \omega_2 &= (-1)^{m+1} \omega_3, \\
\omega_1 (\alpha_2+\omega_2) &= (-1)^{m+1} \omega_3, \\
\omega_1 \omega_2 &= (-1)^{m+1} \omega_3.
\end{align*}
Here the first equation is the first equation from (B2$'$), while the final three equations come from taking $(p,q)$ to be $(0,1)$, $(1,0)$, and $(1,1)$ in (B2$'$). One easily sees that the above equations yield those from (C2). Thus $(\alpha, \omega)$ belongs to $\sC_M$. The above reasoning is reversible.
\end{proof}
\subsection{The partition category} \label{ss:part}
Let $\fP=\fP(\delta)$ be the partition category with parameter $\delta$ (see \S \ref{ss:res-part}). Suppose that $M$ is a $\fP$-module. Then $M$ restricts to an $\mathbf{FB}$-module. Given a finite set $S$ and disjoint subsets $A$ and $B$ such that at least one is non-empty, let $\eta^S_{A,B} \colon S \setminus B \to S \setminus A$ be the morphism in $\fP$ in which $A \cup B$ forms a single block, and the remaining diagram is the identity. By convention, $\eta^S_{\emptyset,\emptyset}=\delta \cdot {\rm id}_S$. Let
\begin{displaymath}
\alpha^S_{A,B} \colon M(S \setminus B) \to M(S \setminus A)
\end{displaymath}
be the induced map. One easily sees that $\alpha$ is a simple symmetric operation on $M$. Using the rule for composition in $\fP$, we find that the following conditions hold:
\begin{itemize}
\item[(D1)] $\alpha$ commutes with itself in the sense of (C1).
\item[(D2)] We have $\alpha_1 \alpha_2=\alpha_3$ in the notation of (C2).
\item[(D3)] $\alpha^S_{\emptyset,\emptyset}=\delta$ for all finite sets $S$.
\item[(D4)] For distinct elements $i,j \in S$, we have $\alpha^S_{\{i\},\{j\}}=(\iota^S_{i,j})_*$.
\end{itemize}
\begin{proposition} \label{prop:part-op}
Let $M$ be an $\mathbf{FB}$-module equipped with a simple symmetric operation $\alpha$ satisfying the above conditions. Then $M$ carries a unique $\fP$-structure inducing $\alpha$.
\end{proposition}
\begin{proof}
First, we use $\alpha$ to construct $\fU$- and $\fD$-structures on $M$. The operation $\alpha$ gives, by restriction, a simple $(1,*)$-operation and a simple $(0,*)$-operation. We can use these to define an action of the upwards restricted partition category $\fU^r$ on $M$ as in the proof of Proposition~\ref{prop:res-part-op}, and we note that $\fU^r=\fU$. Similarly, we get, by restriction, a simple $(*,1)$-operation and a simple $(*,0)$-operation, which together give a $\fD$-structure on $M$. Condition (D4) tells us that the restrictions of the $\fD$- and $\fU$-structures to $\mathbf{FB}$ agree with the usual $\mathbf{FB}$-action, so in particular they agree with each other.
Let $\cU$ be the class of morphisms in $\fU$ isomorphic to $\eta^S_{A,B}$ for some $S$, $A$, and $B$ with $|A| \le 1$, and define $\cD$ similarly using $\eta^S_{A,B}$ with $|B| \le 1$. One easily sees that $\cU$ generates $\fU$ and $\cD$ generates $\fD$. Thus, by Proposition~\ref{prop:tri-comp}, it suffices to show that $(\phi, \psi)$ is compatible for $\phi \in \cU$ and $\psi \in \cD$ with $\psi \circ \phi$ defined.
Let $A,B,C,D$ be subsets of $S$ such that $|C|\le 1$ and $|A|\le 1$ and all pairs of subsets are disjoint, except possibly $B$ and $D$. We set $\phi = \eta^{S\setminus A}_{C,B}$ and $\psi = \eta^{S \setminus C}_{D,A}$. If $B \cap D = \emptyset$, then compatibility follows from (D1), i.e., the condition that $\alpha$ is self-commuting. Otherwise, suppose that $B \cap D \ne \emptyset$. Then compatibility follows from (D2) if at least one of $C \cup (D \setminus B)$ and $A \cup (B\setminus D)$ is non-empty. If both are empty, then we also need to use (D3).
\end{proof}
Suppose that $\cC$ and $\cD$ are two $\bk$-linear categories whose objects are finite sets and which contain all bijections. We define a new $\bk$-linear category $\cC \star \cD$ whose objects are finite sets as follows. The Hom spaces are defined by
\begin{displaymath}
\Hom_{\cC \star \cD}(S,T)=\bigoplus_{\substack{S=S_1 \sqcup S_2 \\ T= T_1 \sqcup T_2}} \Hom_{\cC}(S_1, T_1) \otimes \Hom_{\cD}(S_2, T_2).
\end{displaymath}
Suppose that $S \to T$ and $T \to U$ are morphisms in $\cC \star \cD$ corresponding to decompositions $S=S_1 \sqcup S_2$ and $T=T_1 \sqcup T_2$, and $T=T_1' \sqcup T_2'$ and $U=U_1 \sqcup U_2$. If $T_1=T_1'$ and $T_2=T_2'$ the composition is defined in the obvious manner, using the composition laws in $\cC$ and $\cD$; otherwise, the composition is defined to be 0. We have a functor $\mathbf{FB} \to \cC \star \cD$ that is the identity on objects and takes a bijection $\phi \colon S \to T$ to
\begin{displaymath}
\sum_{S=S_1 \sqcup S_2} (\phi \colon S_1 \to \phi(S_1)) \otimes (\phi \colon S_2 \to \phi(S_2)).
\end{displaymath}
In particular, the identity morphism of $S$ in $\cC \star \cD$ is $\sum_{S=A \sqcup B} \id_{\cC,A} \otimes \id_{\cD,B}$. There is no natural functor $\cC \to \cC \star \cD$ (the obvious attempt does not preserve identity morphisms), but there is a natural functor $\cC \star \cD \to \cC$ which kills all morphisms in $\cD$. Similarly, there is a natural functor $\cC \star \cD \to \cD$ which kills all morphisms in $\cC$. We will apply this construction with $\cC=\fP(\delta)$ and $\cD=\fP(\epsilon)$ below.
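For instance, since partition diagrams form bases of the $\Hom$ spaces, a direct count for a one-element set gives
\begin{displaymath}
\dim \Hom_{\fP(\delta) \star \fP(\epsilon)}(\{x\}, \{x\}) = 2 \cdot 1 + 1 \cdot 1 + 1 \cdot 1 + 1 \cdot 2 = 6,
\end{displaymath}
the four summands corresponding to the four ways of distributing $x$ in the source and target between the two factors.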
\subsection{The comparison theorem}
Let $M$ be an $\ul{\fa}(\bV \oplus \bV^*)$-module. We say that $M$ is \defi{$\delta$-standard} if its restriction to $\ul{\fgl}(\bV)$ (see Proposition~\ref{prop:weyl-to-witt}) is $\delta$-standard. We say that $M$ has \defi{central character} $\chi \in \bk$ if the composition
\begin{displaymath}
M \to \Sym(\bV) \otimes M \to \Div(\bV) \otimes M \to M
\end{displaymath}
is $\chi$ times the identity, where the first map is the natural isomorphism $M \to \Sym^0(\bV) \otimes M$ followed by the inclusion into $\Sym(\bV) \otimes M$, while the second map is the projection $\Div(\bV) \otimes M \to \Div^0(\bV) \otimes M$ followed by the natural isomorphism with $M$.
We let $\Rep^{\chi}_{\delta}(\ul{\fa}(\bV \oplus \bV^*))$ be the full subcategory of $\Rep(\ul{\fa}(\bV \oplus \bV^*))$ spanned by the $\delta$-standard modules with central character $\chi$.
\begin{theorem}
We have an equivalence of categories
\begin{displaymath}
\Rep_{\epsilon}^{\delta-\epsilon}(\ul{\fa}(\bV \oplus \bV^*)) = \Rep(\fP(\delta) \star \fP(\epsilon)).
\end{displaymath}
\end{theorem}
\begin{proof}
Let $M$ be a representation of $\fP(\delta) \star \fP(\epsilon)$. Then $M$ restricts to an $\mathbf{FB}$-module via the functor $\mathbf{FB} \to \fP(\delta) \star \fP(\epsilon)$ defined above. Let $S$ be a finite set with disjoint subsets $A$ and $B$. If $A \cup B \ne \emptyset$, we define
\[
\alpha^S_{A,B} \colon M(S \setminus B) \to M(S \setminus A)
\]
using the morphism in $\fP(\delta) \star \fP(\epsilon)$ that is the identity on $S \setminus (A \cup B)$ and the morphism $A \to B$ in $\fP(\delta)$ given by a single block. We define $\alpha^S_{\emptyset, \emptyset}$ to be $\delta$ times the identity on $M(S)$. Similarly, if $A \cup B \ne \emptyset$, we define
\[
\omega^S_{A,B} \colon M(S \setminus B) \to M(S\setminus A)
\]
to be $(-1)^{|A|+1}$ times the morphism which is the identity on $S \setminus (A\cup B)$ and the morphism $A \to B$ in $\fP(\epsilon)$, and we define $\omega^S_{\emptyset, \emptyset}$ to be $-\epsilon$ times the identity on $M(S)$. It follows from the above discussion that $\alpha$ and $\omega$ satisfy conditions (C1) and (C2) and thus define a representation of $\ul{\fa}(\bV \oplus \bV^*)$ on $M$.
Let $\phi \in \sB_M$ be the operation associated to $(\alpha,\omega)$. Thus $\phi[0]=\alpha+\omega$ and $\phi[1]=-\omega$. We have
\begin{displaymath}
\phi^S_{\emptyset,\emptyset}=\alpha^S_{\emptyset,\emptyset}+\omega^S_{\emptyset,\emptyset}=\delta - \epsilon
\end{displaymath}
and so we see that $M$ has central character $\delta - \epsilon$. Let $i,j \in S$. For $i=j$, we have
\begin{displaymath}
\phi^S_{i,i} = \phi[1]^{S \setminus i}_{\emptyset,\emptyset} = \epsilon.
\end{displaymath}
For $i \ne j$, we have
\begin{displaymath}
\phi^S_{i,j} = \alpha^S_{i,j}+\omega^S_{i,j} = (\iota^S_{i,j})_*,
\end{displaymath}
where $\iota^S_{i,j} \colon S \setminus j \to S \setminus i$ is the natural bijection; the second equality follows from the definition of the $\mathbf{FB}$-structure on $M$. We thus see that the $\ul{\fgl}(V)$-module structure on $M$ is given by
\begin{displaymath}
t^i \otimes x \mapsto \epsilon \, t^i \otimes x + \sum_{j \ne i} t^j \otimes (\iota_{i,j}^S)_*(x)
\end{displaymath}
where $x \in M(S \setminus i)$. Hence the action is $\epsilon$-standard. Reversing these steps, as in the proofs of Theorems~\ref{thm:fB=B} and~\ref{thm:witt}, one checks that every $\epsilon$-standard representation with central character $\delta-\epsilon$ arises in this way, which gives the asserted equivalence.
\end{proof}
Let $\Rep^{\delta}(\ul{\fa}(\bV \oplus \bV^*))'$ be the full subcategory of $\Rep(\ul{\fa}(\bV \oplus \bV^*))$ spanned by representations that have central character $\delta$, are 0-standard, and for which the $\omega$ operations vanish.
\begin{corollary}
We have a natural isomorphism of categories
\begin{displaymath}
\Rep^{\delta}(\ul{\fa}(\bV \oplus \bV^*))' \cong \Rep(\fP(\delta)).
\end{displaymath}
\end{corollary}
\section{Abstract curried algebras} \label{s:abs}
\subsection{The definition}
We have defined the notion of representation for several curried algebras. However, we have not given a general definition of curried algebra. We now briefly (and informally) give such a definition. It would be interesting to explore this idea in more detail.
Let $\cC$ be a symmetric monoidal $\bk$-linear category. We assume that the monoidal structure $\otimes$ is $\bk$-bilinear. Let $M$ be an object of $\cC$. Given two other objects $V$ and $W$ of $\cC$, a \defi{$(V,W)$-operation} on $M$ is a map
\begin{displaymath}
a \colon V \otimes M \to W \otimes M.
\end{displaymath}
Intuitively, giving a curried algebra should amount to giving some $(V,W)$-operations (for various $V$ and $W$) satisfying some relations. ``Relations'' will mean that certain operations built out of these operations vanish. Given a $(V,W)$-operation $a$, there are four ways to build new operations:
\begin{enumerate}
\item Tensor with an arbitrary object $X$ to obtain an $(X \otimes V, X \otimes W)$-operation:
\begin{displaymath}
\xymatrix@C=4em{
X \otimes V \otimes M \ar[r]^-{\id \otimes a} & X \otimes W \otimes M. }
\end{displaymath}
\item Pre-compose with a morphism $f \colon V' \to V$ and post-compose with a morphism $g \colon W \to W'$ to obtain a $(V',W')$-operation:
\begin{displaymath}
\xymatrix@C=4em{
V' \otimes M \ar[r]^{f \otimes \id} & V \otimes M \ar[r]^-a & W \otimes M \ar[r]^{g \otimes \id} & W' \otimes M. }
\end{displaymath}
\item Given a $(U,V)$-operation $b$, compose to obtain a $(U,W)$-operation:
\begin{displaymath}
\xymatrix@C=4em{
U \otimes M \ar[r]^b & V \otimes M \ar[r]^a & W \otimes M.}
\end{displaymath}
\item Given a finite collection of $(V,W)$-operations $\{a_i\}$, any $\bk$-linear combination $\sum_i \lambda_i a_i$ is also a $(V,W)$-operation.
\end{enumerate}
This suggests the following definition:
\begin{definition}
A \defi{curried algebra} $A$ in $\cC$ consists of the following primary data:
\begin{itemize}
\item For each pair of objects $(V,W)$ in $\cC$, a $\bk$-vector space $A(V,W)$. This is called the space of $(V,W)$-operations in $A$. For $V=W$, there is a distinguished ``identity operation'' in $A(V,V)$.
\end{itemize}
Additionally, we require the following operations on $A$:
\begin{enumerate}
\item For objects $V$, $W$, and $X$, a $\bk$-linear map $A(V, W) \to A(X \otimes V, X \otimes W)$.
\item For morphisms $V' \to V$ and $W \to W'$, a $\bk$-linear map $A(V,W) \to A(V',W')$.
\item For objects $U$, $V$, and $W$, a $\bk$-linear map $A(U,V) \otimes A(V,W) \to A(U,W)$.
\end{enumerate}
A number of conditions should hold (that we do not specify).
\end{definition}
\begin{remark} \label{rmk:curcat}
Let $A$ be a curried algebra in $\cC$. One can then define a $\bk$-linear category $\cD$ with the same objects as $\cC$ and with $\Hom_{\cD}(V,W)=A(V,W)$. Composition is given by the operation in (c). The operations in (a) and (b) define an action of the monoidal category $\cC$ on $\cD$, i.e., a functor $\cC \times \cD \to \cD$ satisfying certain conditions. In fact, giving $A$ is equivalent to giving $\cD$ (together with this action), and so one can view curried algebras as certain kinds of categories.
\end{remark}
\subsection{Constructions}
Assuming $\cC$ satisfies some mild conditions, there are two general constructions of curried algebras:
\begin{itemize}
\item Given a collection $\{(V_i,W_i)\}_{i \in I}$ of pairs of objects in $\cC$, there is a free curried algebra containing a distinguished $(V_i,W_i)$-operation for each $i \in I$.
\item Given a curried algebra $A$ and a collection of elements $\{x_i \in A(V_i,W_i)\}_{i \in I}$, there is a quotient curried algebra $B$ in which each $x_i$ maps to~0 (and which is universal subject to this).
\end{itemize}
These two constructions allow one to build curried algebras by generators and relations. For instance, one can build the curried algebra $\ul{\fgl}(V)$ by taking the free curried algebra on a single $(V,V)$-operation $a$ and quotienting by the $(V \otimes V, V \otimes V)$-operation $[a_1,a_2]-\tau(a_1-a_2)$.
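In the same way, directly re-expressing the definition in \S\ref{ss:witt-curry}, one can present $\ul{W}(V)$ as the free curried algebra on a single $(\Sym V, V)$-operation $a$ modulo the $(\Sym V \otimes \Sym V, V \otimes V)$-operation
\begin{displaymath}
[a_1,a_2]-a'+a''
\end{displaymath}
(assuming the ambient category admits the object $\Sym V$), and similarly for $\ul{\fa}(V \oplus V^*)$.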
There is one additional important construction of curried algebras. Let $M$ be an object of $\cC$. We define the \defi{endomorphism curried algebra} of $M$, denoted $E_M$, by
\begin{displaymath}
E_M(V,W) = \Hom_{\cC}(V \otimes M, W \otimes M).
\end{displaymath}
If $A$ is an arbitrary curried algebra, then a \defi{representation} of $A$ on $M$ is a homomorphism of curried algebras $A \to E_M$. For example, a representation of $\ul{\fgl}(V)$ on $M$ amounts to an element $a \in E_M(V,V)$ satisfying $[a_1,a_2]=\tau(a_1-a_2)$, recovering the earlier definition. The representations of various curried algebras that we have discussed above all fit into this general framework.
\subsection{Diagram categories}
We have seen several examples of diagram categories in this paper, such as the Brauer category and the partition category. We now propose a precise definition of ``diagram category.''
To motivate the definition, suppose that $\fG$ is one of the familiar $\bk$-linear diagram categories, such as the Brauer category. In particular, the objects of $\fG$ are finite sets. If $T$ is a finite set, then there is a functor $\fG \to \fG$ given on objects by $S \mapsto S \amalg T$. On morphisms, this functor corresponds to adding vertices indexed by $T$ to the source and target, and connecting these vertices by lines. In other words, we see that $\fG$ admits an action by the monoidal category $\mathbf{FB}$. We now give our definition:
\begin{definition}
A \defi{diagram category} is a $\bk$-linear category whose objects are finite sets equipped with a $\bk$-linear action by the monoidal category $\mathbf{FB}$ that lifts disjoint union.
\end{definition}
Note that a diagram category in the above sense is just a curried algebra in $\mathbf{FB}$, from the point of view of Remark~\ref{rmk:curcat} (with the caveat that $\mathbf{FB}$ is not a $\bk$-linear category). The point of this paper can now be rephrased as follows: for many diagram categories $\fG$, we associated a curried algebra $A$ in $\Mod_{\mathbf{FB}}$ such that $\fG$-modules and $A$-modules coincide.
\addtocontents{toc}{\vskip 6pt}
\section{Introduction}
\label{introduction}
Shot noise is a nonequilibrium fluctuation of the
current in mesoscopic conductors caused by random flow
of the charge. It can be thought of as an uncorrelated
Poisson process \cite{Schottky} giving rise to a simple formula
for the spectral density of the shot noise,
$S^c=eI$, where
$I$ is the current through the conductor
and $e$ is the electron charge.
Being the result of charge quantization, the
shot noise is an interesting and highly nontrivial
physical phenomenon.\cite{Jongrev} In contrast to the
thermal fluctuations of the current, the shot noise
provides important information about microscopic
transport properties of the conductors beyond
the linear response coefficients such as the conductance.
For instance,
the shot noise serves
as a sensitive tool to study correlations
in conductors:
while shot noise assumes the Poissonian value in the absence of correlations,
it becomes suppressed when correlations set in, as e.g.\
imposed by the Pauli principle.
\cite{Khlus,Landauer,Lesovik,Yurke,Buttiker1}
In particular, the shot noise is completely suppressed
in ballistic conductors,\cite{Kulik}
and thus it appears only in the presence of disorder.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=7.0cm
\epsffile{noisefig.eps}
\end{center}
\caption{
a) Multiterminal diffusive conductor of arbitrary 2D or 3D shape
and with arbitrary impurity distribution.
There are $N$ leads with metallic contacts of area $L_n$,
and $I_m$ is the $m$-th outgoing current.
$S$ denotes the remaining surface of the conductor where no current
can pass through.
b) Conductor of a star geometry with $N$ long leads
which join each other at a small crossing
region. The resistance of this region is assumed to be
much smaller than the resistance of the leads.
c) Wide conductor: the contacts are connected through a wide
region, so that the resistance of the conductor comes mainly from
the regions near the contacts, while the resistance of the wide
region is negligible.
d) H-shaped conductor with four leads of equal
conductances, $G/4$, connected
by a wire in the middle of conductance $G_0$.}
\label{noisefig}
\end{figure}
In diffusive mesoscopic
two-terminal conductors where the inelastic scattering lengths
exceed the system
size the shot-noise suppression factor for ``cold'' electrons
(i.e. for vanishing electron temperature)
was predicted
\cite{Beenakker1,Nagaev1,Jong1,Nazarov,Altshuler,Jong}
to be $1/3$. The suppression of shot noise in diffusive conductors
is now experimentally
confirmed.
\cite{Liefrink,Steinbach,Schoelkopf,Schoenenberger,Schoenenberger2}
While some derivations
are based on a scattering matrix approach \cite{Beenakker1,Jong1}
or conventional Green's function technique \cite{Nazarov,Altshuler}
and thus a priori include
quantum phase coherence, no such effects are included in the
semiclassical Boltzmann-Langevin equation approach,
which nevertheless leads to the same result.\cite{Nagaev1,Jong}
However,
while in the quantum approach for a two-terminal conductor
the factor $1/3$ was even shown to be universal,\cite{Nazarov}
the semiclassical derivations given so far \cite{Nagaev1,Jongrev}
are restricted to quasi-one-dimensional conductors.
Thus, although phase coherence is believed not to be essential for
the suppression of shot noise,\cite{Shimitzu}
the equivalence of different approaches for calculating
noise in mesoscopic conductors is not evident.
In the regime of hot electrons the noise suppression factor
was found\cite{Nagaev2,Kozub} to be $\sqrt{3}/4$.
Again, this result, which is based on a Boltzmann-Langevin
equation approach, is restricted to quasi-one-dimensional conductors.
The generalization of these results
to the case of arbitrary multiterminal conductors is not obvious.
We present here a systematic study of transport and noise
in multiterminal diffusive conductors.
This problem has been recently addressed by
Blanter and B\"{u}ttiker in Ref.\ \onlinecite{Blanter}, where they use
the scattering matrix formulation followed by an impurity averaging
procedure. Having the advantage of including quantum phase coherence,
this approach is somewhat cumbersome to generalize to an arbitrary
geometry and arbitrary disorder.
In contrast to this,
our approach is based on semiclassical Boltzmann-Langevin equation,
which greatly
simplifies the calculations.
We consider a multiterminal mesoscopic
diffusive conductor (see Fig.\ \ref{noisefig}a) connected to
an arbitrary number $N$ of perfect metallic reservoirs at the
contact surfaces
$L_n$, $n=1,\ldots ,N$, where the voltages $V_n$ or outgoing currents $
I_n$
are measured. The reservoirs are maintained at equilibrium
and have in general different lattice temperatures $T_n$.
Unless specified otherwise the conductor has an arbitrary
3D or 2D geometry with an arbitrary disorder distribution.
Our goal is to calculate the multiterminal spectral densities
of current fluctuations $\delta I_n(t)$
at zero frequency, $\omega =0$,
\begin{equation}
S^c_{nm}=\int\limits_{-\infty}^{\infty}dt\langle\delta I_n(t)\delta
I_m(0)\rangle,
\label{spectr}
\end{equation}
where the brackets $\langle\ldots\rangle$ indicate an ensemble average.
We consider the effects of purely elastic
scattering and those of energy relaxation due to
electron-electron and electron-phonon scattering on the
same basis.
Starting our analysis with a brief summary of the
Boltzmann-Langevin kinetic equation approach,\cite{Kadomtsev,Kogan}
we then apply the standard diffusion approximation and
reduce the problem of evaluating Eq.\ (\ref{spectr})
to the solution of a diffusion equation.
First, we solve the diffusion equation for the distribution function
to obtain the multiterminal conductance matrix and energy transport
coefficients in terms of well defined
``characteristic potentials''.\cite{Buttiker3}
We formulate the Wiedemann-Franz law for the case
of a multiterminal conductor.
Then we turn to the calculation of the noise spectrum.
We derive the exact general formula (\ref{MSD}) for the multiterminal
spectral density of the noise, which together with Eqs.\
(\ref{temp},\ref{temp3},\ref{Pi2},\ref{MSD4a}) is the central result
of our paper.
Using this formula we demonstrate
that the shot-noise suppression
factor of $1/3$ is {\it universal}
also in the semiclassical Boltzmann-Langevin approach,
in the sense that
it holds for a
multiterminal diffusive conductor of arbitrary shape,
electron spectrum and disorder distribution.
We first prove this for cold electrons and then for the case of
hot electrons where the suppression factor is $\sqrt{3}/4$.
Thereby we extend previous semiclassical
investigations\cite{Nagaev2,Kozub} for
two-terminal conductors
to an arbitrary multiterminal geometry.
This allows us then to compare our semiclassical approach
with the scattering matrix approach
for multiterminal conductors,\cite{Buttiker1,Buttiker2,Martin1}
in particular with some explicit results recently obtained
for diffusive conductors.\cite{Blanter}
The universality of shot noise proven here gives further
support to the suggestion
\cite{Jong2} that phase coherence is not essential for the
suppression of shot noise in diffusive conductors.\cite{Landauer2}
Another remarkable property of shot noise in mesoscopic conductors is the
exchange effect introduced by B\"uttiker.\cite{Buttiker2} Although this effect
is generally believed to be phase-sensitive, we will show that this need not be
so. Indeed, for the particular case of an H-shaped conductor (see Fig.\
\ref{noisefig}d) we show that exchange effects can be of the same order as the
shot noise itself even in the framework of the semiclassical Boltzmann approach.
We prove that while the exchange effect measured in different contacts
(cross-correlations) can change the sign, it is always negative when measured in
the same contact (auto-correlations). Thus, the auto-correlations are always
suppressed, in agreement with the Pauli principle. Formally, these exchange
effects are shown to come from a non-linear dependence on the local distribution
function. Similarly we show that the same non-linearities are responsible for
non-local effects such as the suppression of shot noise by open leads even at
zero electron temperature.
Finally, we discuss a new phenomenon, namely the current noise in multiterminal
diffusive conductors induced by thermal transport. We consider the cases of hot
and cold electrons and prove the universality of noise in the presence of
thermal transport. We also propose a possible experiment which would allow one to
measure locally the effective noise temperature. Throughout the paper we
illustrate the general formalism introduced here by concrete numbers for various
conductor shapes that are of direct experimental interest.
We note that some of the results of the present paper have been published
in Ref.\ \onlinecite{sukhor} in a less general form. Here we present the
details of the derivation of these results and generalize them to
finite temperature and an arbitrary electron spectrum (band structure).
\section{Boltzmann-Langevin equation: diffusive regime}
\label{bl}
To calculate the spectral density of current fluctuations
we use the Boltzmann-Langevin kinetic
equation \cite{Kadomtsev,Kogan}
for the fluctuating distribution function $F({\bf p},{\bf r},t)=f(
{\bf p},{\bf r})+\delta f({\bf p},{\bf r},t)$,
which depends on the momentum ${\bf p}$, position ${\bf r}$, and time $
t$,
\begin{equation}
\left(\partial_t+{\bf v}\!\cdot\!\partial_{{\bf r}}+e{\bf E}\!\cdot\!
\partial_{{\bf p}}\right)F-I[F]-I_{im}[F]=\delta F^s,
\label{kineq}
\end{equation}
where ${\bf E}({\bf r},t)={\bf E}({\bf r})+\delta {\bf E}({\bf r},t)$
is the fluctuating electric field, ${\bf v}
=\nabla_{{\bf p}}\varepsilon$ is the velocity
of the electron and $\varepsilon$ is its kinetic energy.
$I[F]=I_{ee}[F]+I_{e-ph}[F]$ contains
the electron-electron and electron-phonon collision integrals, respectively
(we do not need to specify them here),
and $I_{im}[F]$ is the impurity collision integral,
\begin{eqnarray}
\lefteqn{
I_{im}[F]=\sum_{{\bf p}^{\prime}}\left(J_{{\bf p}^{\prime}{\bf p}}
-J_{{\bf p}{\bf p}^{\prime}}\right),
}\nonumber \\
& &
J_{{\bf p}{\bf p}^{\prime}}
({\bf r},t)=W_{{\bf p}{\bf p}^{\prime}}({\bf r})F({\bf p},{\bf r},
t)[1-F({\bf p}^{\prime},{\bf r},t)],
\label{coll}
\end{eqnarray}
where the elastic scattering rate from ${\bf p}$ into ${\bf p}^{\prime}$,
$W_{{\bf p}{\bf p}^{\prime}}({\bf r})$,
depends on the position ${\bf r}$ in the case
of disorder considered here.
The Langevin source of fluctuations $\delta F^s({\bf p},{\bf r},t)$
is induced
by the random (stochastic) process of the electron scattering
which is also responsible
for the momentum relaxation of the electron gas.
On the other hand, electron-electron scattering
conserves total momentum of the electron gas and therefore does not
contribute to $\delta F^s$.
Furthermore, we neglect the momentum relaxation due to
electron-phonon scattering
and electron-electron Umklapp process, assuming that they are weak
compared to the scattering by impurities in diffusive
conductors (phonon induced shot noise in ballistic wires has been
studied in Ref.\ \onlinecite{Gurevich}).
In other words, we assume that the collision integrals $I_{ee}[F]$ and
$I_{e-ph}[F]$ describe only energy relaxation processes in the electron
gas, while it is only the impurity scattering which gives rise to
momentum relaxation and thus to the shot noise in
diffusive conductors.
To describe the fluctuations $\delta F^s$ we make use of the
Langevin formulation introduced by
Kogan and Shul'man (Ref.\ \onlinecite{Kogan}).
In this approach
there are two contributions to the fluctuations
of the impurity collision integral.
First, there is the contribution $I_{im}[\delta
f]$
due to the fluctuations of the distribution function, which has
already been included in Eq.\ (\ref{coll}).
The second contribution, $\delta I_{im}[f]$,
stems from the random character
of the electron scattering, which is
the extra source
of fluctuation $\delta F^s$ occurring on the {\it rhs\/} of
Eq.\ (\ref{kineq}), i.e.,
\begin{equation}
\delta F^s=\sum_{{\bf p}^{\prime}}\left(\delta J_{{\bf p}^{\prime}
{\bf p}}-\delta J_{{\bf p}{\bf p}^{\prime}}\right),
\label{source}
\end{equation}
where the random variables $\delta J_{{\bf p}{\bf p}^{\prime}}$
are intrinsic fluctuations of the incoming and
outgoing fluxes $J_{{\bf p}{\bf p}^{\prime}}$.
Assuming now
that the flow of electrons, say, from state ${\bf p}$
to state ${\bf p}^{\prime}$ is described by a Poisson process
we can write\cite{Kogan}
\begin{eqnarray}
\lefteqn{
\left<\delta J_{{\bf p}{\bf p}^{\prime}}({\bf r},t)\delta J_{{\bf p}_
1{\bf p}^{\prime}_1}({\bf r}_1,t_1)\right>
\frac{}{}}\ \ \ \ \ \ \nonumber \\
& &
=\delta (t-t_1)\delta ({\bf r}
-{\bf r}_1)\delta_{{\bf p}{\bf p}_1}\delta_{{\bf p}^{\prime}{\bf p}_
1^{\prime}}\left<J_{{\bf p}{\bf p}^{\prime}}({\bf r},t)\right>,
\label{correlator}
\end{eqnarray}
where
\begin{equation}
\left<J_{{\bf p}{\bf p}^{\prime}}({\bf r},t)\right>=W_{{\bf p}{\bf p}^{
\prime}}({\bf r})f({\bf p},{\bf r})[1-f({\bf p}^{\prime},{\bf r})].
\label{flux}
\end{equation}
Using the preceding two equations together with Eq.\ (\ref{source}),
we obtain the correlator of the Langevin sources,
\begin{eqnarray}
\lefteqn{
\left<\delta F^s({\bf p},{\bf r},t)
\delta F^s({\bf p}^{\prime},{\bf r}^{\prime},t^{\prime})\right>
=\delta (t-t^{\prime})\delta ({\bf r}-{\bf r}^{\prime})
\frac{}{}
}\nonumber \\
& &
\times\sum_{{\bf p}^{\prime\prime}}
(\delta_{{\bf p}{\bf p}^{\prime}}-
\delta_{{\bf p}^{\prime\prime}{\bf p}^{\prime}})
W_{{\bf p}{\bf p}^{\prime\prime}}
\left[
f(1-f^{\prime\prime})+f^{\prime\prime}(1-f)
\right].
\label{corr}
\end{eqnarray}
Here $f^{\prime\prime}\equiv f({\bf p}^{\prime\prime},{\bf r})$,
and $W_{{\bf p}{\bf p}^{\prime\prime}}=W_{{\bf p}^{\prime\prime}{\bf p}}$.
Next, we consider the {\it lhs\/} of Eq.\ (\ref{kineq}).
Since we are only interested in the $\omega =0$ limit
of the spectral density
(the effect of screening on frequency-dependent shot noise in
quasi-one-dimensional diffusive conductors has been studied recently
in Refs.\ \onlinecite{Naveh} and \onlinecite{Nagaev3}),
we may drop the first term $\partial F/\partial t$ in Eq.\ (\ref{kineq}).
The term $e{\bf E}\!\cdot\!\partial_{{\bf p}}F$ can be rewritten as follows:
$e{\bf E}\!\cdot\!\partial_{{\bf p}_F}F+e{\bf v}\!\cdot\!{\bf E}\partial_{
\varepsilon}F$, where ${\bf p}_F$ is the momentum at the
Fermi surface. From this we see that
the electric field ${\bf E}$ induced by an applied voltage plays
a twofold role: it effects the trajectories and changes
the energy of electrons. The first effect, $e{\bf E}\!\cdot\!\partial_{
{\bf p}_F}F\sim eE/p_F$,
is weak compared to ${\bf v}\!\cdot\partial_{{\bf r}}F\sim v_F/L$ ($
L$ is the size of the conductor)
and gives contribution of order $eV/\varepsilon_F$, which can be
neglected.\cite{estimate}
The second effect
can be taken into account by the replacement $\varepsilon\to\varepsilon
-eV({\bf r},t)$ in the
argument of the distribution function $F$, so that $\varepsilon$ now
is the total (kinetic $+$ potential) energy of the electron.
Then, the two terms ${\bf v}\!\cdot\!\partial_{{\bf r}}F+e{\bf E}\!\cdot\!
\partial_{{\bf p}}F$ in Eq.\ (\ref{kineq})
can be replaced by the total derivative ${\bf v}\!\cdot\!\nabla F$.
In a next step we apply the standard diffusion approximation
to the kinetic equation
\cite{Landau} where
the distribution function is split into two parts,
\begin{equation}
F({\bf p},{\bf r},t)=F_0(\varepsilon ,{\bf r},t)+{\bf l}({\bf p}_F,
{\bf r})\!\cdot\!{\bf F}_1(\varepsilon ,{\bf r},t),
\label{approx}
\end{equation}
where the vector ${\bf l}$ obeys the equation,
\begin{equation}
\sum_{{\bf p}^{\prime}}W_{{\bf p}{\bf p}^{\prime}}({\bf r})
[{\bf l}({\bf p}_F,{\bf r})
-{\bf l}({\bf p}^{\prime}_F,{\bf r})]={\bf v}\, .
\label{eqnl}
\end{equation}
The choice of the distribution function $F$ in the form (\ref{approx})
is dictated by the fact that the impurity collision integral
$I_{im}[F]$ does not affect the energy dependence of the distribution
function.
Inserting this ansatz into Eq.\ (\ref{kineq}) and subsequently averaging
over the momentum, first with weight one and then with weight ${\bf l}$,
we arrive
at
\begin{equation}
\nabla\!\cdot\!\hat {D}{\bf F}_1-
\overline{I[F]}=0,\label{kineq2a}
\end{equation}
\begin{equation}
\hat{D}(\nabla F_0+{\bf F}_1)=\overline {
{\bf l}\delta F^s}.
\label{kineq2b}
\end{equation}
Here the overbar means averaging over ${\bf p}_F$ at the Fermi surface
inside the Brillouin zone,
$\overline{\left(\ldots\right)}=
\int d{\bf p}_Fv_F^{-1}\left(\ldots\right)/\int d{\bf p}_Fv_F^{-1}$,
and we introduced the diffusion tensor,
\begin{equation}
\hat {D}({\bf r})\equiv D_{\alpha\beta}({\bf r})=\overline {v_{\alpha}
l_{\beta}({\bf p}_F,{\bf r})}.
\label{tensor}
\end{equation}
We also
used $\overline {\delta F^s}=0$, which follows from Eq.\ (\ref{corr})
and which reflects the
conservation of the number of electrons in the scattering process.
Using the distribution function (\ref{approx}) we can calculate
the current density ${\bf j}+\delta {\bf j}=e
\nu_F\hat {D}\int d\varepsilon {\bf F}_1$
and due to charge neutrality
(neglecting accumulation of charge) we get the potential,
$eV+e\delta V=\int_{\varepsilon_
c}^{\infty}d\varepsilon F_0$,
where $\varepsilon_c$ is a constant energy near the
Fermi level and chosen so that $F|_{\varepsilon_c}=1$,
and $\nu_F=\int d{\bf p}_Fv_F^{-1}$
is the density of states at the Fermi level. Upon integration
of Eqs.\ (\ref{kineq2a}) and (\ref{kineq2b})
over the energy $\varepsilon$ the collision integrals
vanish and we arrive at the diffusion equations for the potential and
density of current, resp.,
\begin{equation}
\nabla\!\cdot\!\hat{\sigma}\nabla V=0,\quad {\bf j}=-\hat{\sigma}
\nabla V,\label{diffa}
\end{equation}
\begin{equation}
\delta {\bf j}+\hat{\sigma}\nabla\delta V=
\delta {\bf j}^s,\quad\nabla\!\cdot\!\delta {\bf j}=0,
\label{diffb}
\end{equation}
where the conductivity tensor $\hat{\sigma }({\bf r})=e^2\nu_F\hat {
D}({\bf r})$ depends in general on the
position ${\bf r}$, and $\delta {\bf j}^s=e\nu_F\int
d\varepsilon\overline {
{\bf l}\delta F^s}$
is the Langevin
source of fluctuations of the current density. After
integrating over $\varepsilon$ in Eq.\ (\ref{corr}) and averaging over
${\bf p}$ (at the Fermi surface) we use then Eqs.\ (\ref{eqnl})
and (\ref{tensor}) to
obtain the correlation function of
the Langevin sources
\begin{eqnarray}
\langle\delta j^s_{\alpha}({\bf r},t)\delta j^s_{\beta}({\bf r}^{
\prime},t^{\prime})\rangle =\delta (t-t^{\prime})\delta ({\bf r}-{\bf r}^{
\prime})\sigma_{\alpha\beta}({\bf r})\Pi ({\bf r}),\nonumber\\\Pi
({\bf r})=2\int d\varepsilon f_0(\varepsilon ,{\bf r})[1-f_0(\varepsilon
,{\bf r})],
\label{corr2}
\end{eqnarray}
where $f_0$ is the symmetric part of the average distribution function
$f=f_0+{\bf l}\!\cdot\!{\bf f}_1$.
The physical interpretation of
Eq.\ (\ref{corr2})
is now transparent: the function $\Pi$ describes the local broadening
of the distribution function and can be thought of as
an effective (noise) temperature. Then we see that
the correlator (\ref{corr2})
takes an equilibrium-like form
of the fluctuation-dissipation theorem. This is a direct
consequence of our diffusion approximation.
In the diffusive regime all microscopic details of the
transport and fluctuation mechanisms are hidden in the same
conductivity matrix, which appears in the correlator of the fluctuation
sources (\ref{corr2}) as well as in the diffusion equations
(\ref{diffa}, \ref{diffb}).
It is this
fact which leads to the universality of the shot noise, i.e.\ its
independence of the microscopic mechanisms of the noise.
Next, subtracting the fluctuating part from
Eqs.\ (\ref{kineq2a}) and (\ref{kineq2b})
we get the equations for the average distribution function $f$,
\begin{equation}
\nabla \!\cdot\!\hat\sigma\nabla f_0
+e^2\nu_F\overline{I[f]}=0,\quad
f=f_0-{\bf l}\!\cdot\!\nabla f_0,
\label{diff2}
\end{equation}
which complete the set of coupled equations to be solved.
Now we specify the boundary conditions to be imposed on
Eqs.\ (\ref{diffa}, \ref{diffb}, \ref{diff2}).
First, we
assume that
for a given energy there is no current
through the surface $S$ (see Fig.\ \ref{noisefig}a).
Second, since the contacts with area $
L_n$
are perfect conductors the average potential $V$ and its
fluctuations $\delta V$ are independent of position ${\bf r}$ on
$L_n$. Third,
the contacts are assumed to be in thermal equilibrium
with outside reservoirs.\cite{contactheat}
Then we write the boundary conditions for (\ref{diffa}) and (\ref{diffb}),
respectively, as
\begin{equation}
d{\bf s}\!\cdot\!{\bf j}({\bf r})|_S=0,\quad V({\bf r}
)|_{L_n}=V_n,
\label{bounda}
\end{equation}
\begin{equation}
d{\bf s}\!\cdot\!\delta {\bf j}({\bf r},t)|_S=0,\quad
\delta V({\bf r},t)|_{L_n}=\delta V_n(t),
\label{boundb}
\end{equation}
and for (\ref{diff2}),
\begin{equation}
f_0(\varepsilon ,{\bf r})|_{L_n}\!\! =\! f_{T_n}(\varepsilon\! -\! eV_
n),\;\; d{\bf s}\!\cdot\!\hat\sigma({\bf r})\nabla f_0
(\varepsilon ,{\bf r})|_S\! =\! 0,
\label{bound2}
\end{equation}
where $f_{T_n}(\varepsilon )=\left[1+\exp(\varepsilon /T_n)\right]^{-1}$
is the equilibrium distribution function at
temperature $T_n$, and
$d{\bf s}$ is a vector area element perpendicular to the surface.
Eqs.\ (\ref{diffa}, \ref{diffb}, \ref{diff2}) with the boundary conditions
(\ref{bounda}, \ref{boundb}, \ref{bound2})
are now a complete set of equations. In principle,
these equations can be solved exactly, which would allow us
to evaluate $S_{nm}^c$ for an arbitrary multiterminal geometry
of the conductor and for an arbitrary disorder distribution.
\section{Solution of the diffusion equations}
\label{solution}
\subsection{Multiterminal conductance matrix}
\label{conduct}
The multiterminal conductance matrix is defined as follows:
$I_n=\sum_mG_{nm}V_m$ (throughout the paper
the sum over the contacts $m$ runs
from $m=1$ to $m=N$, and we omit the limits for convenience).
To calculate $G_{nm}$ we need to solve
Eqs.\ (\ref{diffa}) with boundary conditions (\ref{bounda}).
Following B\"uttiker \cite{Buttiker3} we introduce
characteristic potentials $\phi_n({\bf r})$, $n=1,\ldots ,N$, associated
with
the corresponding contacts. These functions
satisfy the diffusion equation and the boundary conditions:
\begin{equation}
\nabla\!\cdot\!\hat\sigma\nabla\phi_n=0, \label{character}
\end{equation}
\begin{equation}
d{\bf s}\!\cdot\!\hat\sigma\nabla\phi_n\left
|\right._S=0,\quad\phi_n|_{L_m}=\delta_{nm},
\label{character2}
\end{equation}
so that they are always nonnegative, $\phi_n({\bf r})\geq 0$, $n=1,\ldots
,N$,
and obey the sum rule (see Appendix \ref{A}),
\begin{equation}
\sum_n\phi_n({\bf r})=1.
\label{sum}
\end{equation}
The potential $V$ can be expressed in terms of characteristic
potentials
\begin{equation}
V({\bf r})=\sum_n\phi_n({\bf r})V_n
\label{potential}
\end{equation}
to satisfy the diffusion equation
(\ref{diffa}) and boundary conditions (\ref{bounda}).
Then the outgoing current through the $m$-th
contact is $I_m=\int_{L_m}d{\bf s}\!\cdot\! {\bf j}=-\sum_n\int_{L_m}
d{\bf s}\!\cdot\!\hat\sigma\nabla\phi_n V_n$, and using the
definition of the conductance matrix we get
\begin{equation}
G_{mn}=-\int\limits_{L_m}d{\bf s}\!\cdot\!\hat\sigma\nabla\phi_n.
\label{conductance}
\end{equation}
We note here that multiplying the
integrand by $\phi_m$ does not change the integral on the {\em rhs} of this
equation. Moreover, the boundary conditions (\ref{character2}) for
the characteristic potentials allow us to extend the integral to the
entire surface. Doing so and taking into account Eq.\ (\ref{character}),
we replace the surface integral by
an integral over the volume of the conductor. We are thus left
with another
useful formula for $G_{nm}$,
\begin{equation}
G_{mn}=-\int d{\bf r}\nabla\phi_m\!\cdot\!\hat\sigma\nabla\phi_n.
\label{conductance2}
\end{equation}
From this expression and from the sum rule for $\phi_n$
it immediately follows that $G_{nm}=G_{mn}$, $\sum_nG_{nm}=0$,
and $G_{nn}<0$, as it should be. In Appendix \ref{A} we use a similar
procedure
to prove another quite natural property of the conductance matrix:
$G_{nm}>0$ for $n\neq m$.
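As a simple illustration (an idealized homogeneous wire, which is not
required for any of the general results below), consider a two-terminal
quasi-one-dimensional conductor of length $L$, cross-section $A$, and
constant isotropic conductivity $\sigma$, with the contacts at $x=0$ and
$x=L$. Eqs.\ (\ref{character}, \ref{character2}) then give linear
characteristic potentials,
\begin{displaymath}
\phi_1(x)=1-{x\over L},\quad\phi_2(x)={x\over L},
\end{displaymath}
and Eq.\ (\ref{conductance2}) reproduces the Drude result,
$G_{12}=-G_{11}=\sigma A/L$.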
\subsection{Energy transport coefficients}
\label{energy}
We have already seen that the local source of noise is defined
by the effective noise temperature $\Pi$ (see Eq.\ (\ref{corr2})),
which describes the broadening of the distribution function.
Another important quantity is given by the energy density
$\Upsilon ({\bf r})$ acquired by the electron gas due to the
broadening of the distribution function (effective heat density).
It is given explicitly by the integral
\begin{equation}
\Upsilon =\nu_F\int\limits_{\varepsilon_c}^{\infty}d\varepsilon\varepsilon\left[
f_0-\theta\left(\varepsilon -eV\right)\right]
=\Lambda -{1\over 2}\nu_F(eV)^2,
\label{edensity}
\end{equation}
where $\theta\left(\varepsilon -eV\right)$ is the local
equilibrium distribution function
at zero temperature, and
$\Lambda ({\bf r})=\nu_F\int_{\varepsilon_c}^{\infty}d\varepsilon\varepsilon
f_0(\varepsilon ,{\bf r})-\nu_F\varepsilon_c^2/2$ is the total energy
density
(up to an irrelevant constant).
To calculate $\Upsilon$
we integrate the first of Eqs.\ (\ref{diff2}) over $\varepsilon$ with the
weight
of $\varepsilon$ and use the expression (\ref{edensity}) for $\Lambda$.
Then the electron-electron collision integral vanishes, and
we arrive at the following equation,\cite{comment0}
\begin{equation}
\nabla\!\cdot\hat {D}\nabla\Lambda =\nabla\!\cdot\hat {D}\nabla\Upsilon
+{\bf j}\!\cdot\!{\bf E}=q,
\label{econserv}
\end{equation}
where we introduced the rate of energy relaxation (or absorption) due to
phonons,
$q({\bf r})=-\nu_F\int_{\varepsilon_c}d\varepsilon\varepsilon
\overline{
I_{e-ph}[f]}$.
Eq.\ (\ref{econserv}) expresses energy conservation:
the work done on the system by the electric field, ${\bf j}\!\cdot\!
{\bf E}$,
is equal to
the energy flux to the lattice, $q$, plus
the heating of the electron gas, $-\nabla\!\cdot\hat {D}
\nabla\Upsilon$.
Integration of Eqs.\ (\ref{bound2}) gives us the boundary
conditions for $\Lambda$,
\begin{eqnarray}
\Lambda |_{L_n}=\Lambda_n=\nu_F\left[{\pi^2\over 6} T_n^2+{1\over
2}(eV_n)^2\right], \nonumber \\
d{\bf s}\!\cdot\!\hat {D}\nabla\Lambda |_S=0.
\label{bound3}
\end{eqnarray}
We assume now that the electron-phonon interaction is
weak (the general case is discussed
in Sec.\ \ref{density}).
Then the energy exchange between the electron gas and the lattice
occurs in the metallic reservoirs far away
from the conductor, and inside the conductor we have $q=0$.
Eq.\ (\ref{econserv}) for $\Lambda$ with the boundary conditions (\ref{bound3})
can be solved in terms of $\phi_n$: $\Lambda ({\bf r})=\sum_n\phi_
n({\bf r})\Lambda_n$. Substituting
this expression into Eq.\ (\ref{edensity}) and
using Eq.\ (\ref{potential}) for $V$, we obtain $\Upsilon$,
\begin{equation}
\Upsilon =\nu_F
\sum_{n,m}\phi_n\phi_m
\left[{\pi^2\over 6}T_n^2+{e^2\over 4}(V_n-V_m)^2\right].
\label{edensity2}
\end{equation}
On the other hand, in perfect metallic reservoirs (where $\sigma\to
\infty$)
the term ${\bf j}\!\cdot\!{\bf E}\sim {\bf j}^2/\sigma$ can be neglected in
Eq.\ (\ref{econserv}).
Integration of this equation over the volume of the $n$th metallic
reservoir gives the total amount of energy transferred to (or absorbed from)
the
lattice in this reservoir, $Q_n=\int d{\bf r}q({\bf r})=-\int_{L_n}
d{\bf s}\!\cdot\!\hat {D}\nabla\Upsilon$.
In the particular case of thermal equilibrium between the reservoirs, i.e.,
$T_n=T$, $n=1,\ldots , N$,
we can use Eq.\ (\ref{edensity2}) to get the Joule heat
in the $n$th reservoir,
\begin{equation}
Q_n={1\over 2}\sum_mG_{nm}(V_n-V_m)^2.
\label{Jheat}
\end{equation}
For a two-terminal conductor, $(V_1-V_2)^2=V^2$,
$G_{12}=G_{21}=G$, we have $Q_1=Q_2=GV^2/2$, while the total Joule
heat is $Q_1+Q_2=IV$. We see in this case that the heat contributions
released on each side of the two-terminal conductor are
equal.\cite{Levinson}
This general conclusion holds for
an arbitrary shape of the conductor and arbitrary disorder distribution.
This fact is a consequence of
electron-hole symmetry.
The following simple analysis of Eq.\ (\ref{Jheat})
exhibits its physical meaning. On one hand,
the total amount of Joule heat,
${1\over 2}\sum_{nm}G_{nm}(V_n-V_m)^2=-\sum_nI_nV_n
=\int d{\bf r}{\bf j}\!\cdot\!{\bf E}$, is
simply equal to the total work done by the electric field on the system.
On the other hand,
the value $\frac{1}{2}e^2\nu_{F}(V_n-V_m)^2$
can be thought of as the gauge invariant difference of energy
densities $\Lambda$ (i.\ e.\ minus the density of the potential energy,
$e^2\nu_F(V_m-V_n)V_n$) applied to the contacts of the conductor.
Then the energy transport coefficients, $G_{nm}/e^2\nu_F$,
are determined by the conductance matrix. The last fact
is a manifestation of the Wiedemann-Franz law, which holds for diffusive
conductors (together with Eqs.\ (\ref{edensity2}) and (\ref{Jheat}))
in the cases of cold and hot electrons, as soon as the
electron-phonon interaction is weak enough. To show
the Wiedemann-Franz law in its usual form, we consider the thermal
transport in multiterminal conductors in the absence of charge
transport, $V_n=0,$ $n=1,\ldots ,N$. In this case we can use again
Eq.\ (\ref{edensity2}) to calculate the thermal current $Q_n$,
\begin{equation}
Q_n={{\pi^2}\over {6e^2}}\sum_mG_{nm}T_m^2.
\label{Q1}
\end{equation}
In particular, close to thermal equilibrium, $T_m=T+\Delta T_m$,
we have
\begin{equation}
Q_n={{\pi^2T}\over {3e^2}}\sum_mG_{nm}\Delta T_m,\quad\Delta T_m
\ll T,
\label{Q2}
\end{equation}
where ${{\pi^2T}\over {3e^2}}G_{nm}$ is the thermal conductance matrix.
This is now the Wiedemann-Franz law in its usual form.
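For a two-terminal conductor, for instance, Eq.\ (\ref{Q2}) reduces to
$Q_1=-Q_2=\kappa (\Delta T_2-\Delta T_1)$ with the thermal conductance
$\kappa =\pi^2TG/3e^2$, i.e.\ the Lorenz ratio $\kappa /GT=\pi^2/3e^2$
(in conventional units this is the standard value $\pi^2k_B^2/3e^2$;
recall that we measure temperature in energy units).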
\subsection{Multiterminal spectral density of noise}
\label{density}
In this section we derive the general formula
for the multiterminal spectral density of shot noise
in the case of arbitrary electron-phonon interaction.
We multiply the first of Eqs.\ (\ref{diffb}) by
$\nabla\phi_n$
and integrate it over the volume of the conductor. Then we evaluate
the first term on the {\em lhs} of the equation by integrating by parts and
using the second of Eqs.\ (\ref{diffb}),
$\int d{\bf r}\nabla\phi_n\!\cdot\!\delta {\bf j}=\oint d{\bf s}\!
\cdot\!\delta {\bf j}\phi_n$. Taking into account the boundary
conditions (\ref{boundb}) for $\delta {\bf j}$ and (\ref{character2}) for $
\phi_n$ we get
$\int d{\bf r}\nabla\phi_n\!\cdot\!\delta {\bf j}=\delta I_n$.
Integration by parts in the second term of the {\em lhs} of this
equation gives $\int d{\bf r}\nabla\phi_n\!\cdot\!\hat\sigma\nabla\delta
V=\oint d{\bf s}\!\cdot\!\hat\sigma\nabla\phi_n\delta V=-\sum_kG_{nk}\delta
V_k(t)$,
where we used Eqs.\ (\ref{character}, \ref{character2})
for $\phi_n$, the boundary condition (\ref{boundb})
for $\delta V$, and
(\ref{conductance}) for the conductance matrix $G_{nm}$.
This leads us to the solution of the Langevin equation
(\ref{diffb}) in terms of characteristic potentials:
\begin{equation}
\delta\tilde {I}_n\equiv\delta I_n-\sum_mG_{nm}\delta V_m=
\int d{\bf r}\nabla\phi_n\!\cdot\!\delta {\bf j}^s.
\label{aux}
\end{equation}
Now, using the correlator (\ref{corr2}) for the Langevin sources $\delta
{\bf j}^s$,
we express the generalized multiterminal spectral density $S_{nm}$
defined as
\begin{equation}
S_{nm}=\int\limits_{-\infty}^{\infty}dt\langle\delta\tilde {I}_n(t)\delta
\tilde {I}_m(0)\rangle
\label{spectr2}
\end{equation}
in terms of characteristic potentials,
\begin{equation}
S_{nm}=\int d{\bf r}\nabla\phi_n\!\cdot\!\hat\sigma\nabla\phi_m\Pi ,
\label{MSD}
\end{equation}
with the properties: $S_{nm}=S_{mn}$, $\sum_nS_{nm}=0$, and $S_{nn}>0$.
In equilibrium $\Pi({\bf r})=2T$, and (\ref{MSD}) together with Eq.\
(\ref{conductance2}) lead to the result for the thermal noise,
\begin{equation}
S_{nm}=-2G_{nm}T,
\label{thermal}
\end{equation}
which is again a manifestation of the fluctuation-dissipation theorem.
The formula (\ref{MSD}) is one of the central results of the paper.
It is valid for elastic and inelastic scattering
and for an arbitrary multiterminal diffusive conductor.
The relation of $S_{nm}$ to the measured noise
is now as follows.
If, say, the voltages are fixed, then $\delta I_n(t)=\delta
\tilde {I}_n(t)$, and
the matrix $S_{nm}=S_{nm}^c$ is directly measured. On the other
hand, when currents are fixed, $S_{nm}$ can be obtained
from the measured voltage correlator
$S^v_{nm}=\int_{-\infty}^{\infty}dt\langle
\delta V_n(t)\delta V_m(0)\rangle$ by tracing it with conductance matrices:
$S_{nm}=\sum_{n^{\prime}m^{\prime}}G_{nn^{\prime}}G_{mm^{\prime}}S_{
n^{\prime}m^{\prime}}^v$. The physical interpretation of
(\ref{MSD}) becomes now transparent:
$\Pi$ describes the broadening of the distribution
function (effective temperature) that is induced by the voltage applied
to the conductor and $\hat\sigma\Pi$
can thus be thought of as a local noise {\it source} (see the discussion
following the Eq.\ (\ref{corr2})), while $\phi_n$
can be thought of as the {\it probe} of this local noise.
In particular, this means that only $S_{nm}$ is of physical
relevance but not the current or voltage correlators themselves.
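(For a two-terminal conductor this relation reduces to the familiar
conversion between current and voltage noise, $S=G^2S^v$.)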
Let us consider now one important application of Eq.\ (\ref{MSD}).
In an experiment one can measure the local broadening of the
nonequilibrium distribution function $f_0$ (the effective noise
temperature $\Pi$)
at some point ${\bf r}=
{\bf r}_0$ on the
surface of the conductor by measuring the voltage fluctuations
in a noninvasive voltage probe. This is an open contact with a
small area on the surface of the
conductor around the point ${\bf r}={\bf r}_0$. The contact is not attached
to a reservoir,
so that it does not cause equilibration of the electron
gas, and as a result $\Pi ={\rm const}$ around the point ${\bf r}_0$. Then,
(\ref{MSD}) can be rewritten as follows:
$S=\int d{\bf r}\nabla\phi\!\cdot\!\hat{\sigma}\nabla\phi\Pi =\Pi
({\bf r}_0)\int d{\bf r}\nabla\phi\!\cdot\!\hat{\sigma}\nabla\phi$, where $
\phi$ is the characteristic
potential corresponding to the noninvasive probe.
Using Eqs.\ (\ref{conductance2})
we get $S=R^{-1}\Pi ({\bf r}_0)$, where $R$ is the resistance of the contact
which comes from the volume around ${\bf r}_0$. Finally, taking into account
(\ref{aux}, \ref{spectr}) and the fact that there is no current through
the voltage probe, $\delta I=0$, we obtain
\begin{equation}
S^v=R\Pi ({\bf r}_0).
\label{measure}
\end{equation}
This means that $\Pi$ can be directly measured, which gives important
information about nonequilibrium processes in the conductor.
Eq.\ (\ref{measure}) resembles the fluctuation-dissipation theorem.
This is so because there is no transport through the noninvasive
probe, and therefore one can think of the probe as being in local
equilibrium
with the effective temperature $\Pi$.
For this reason our treatment, though restricted to the diffusive regime,
can in principle be applied to the case of tunnel coupling between
the probe and the conductor. A possible experiment that could measure
shot noise at local tunneling contacts is discussed in detail in
Ref.\ \onlinecite{Gram}.
The above result can be easily generalized to take into account
the equilibration by the contact (see Sec. \ref{thermtrans}).
There will be then an additional
noise suppression factor in Eq.\ (\ref{measure}).
We note that (\ref{MSD}) together with
Eqs.\ (\ref{kineq2a}, \ref{kineq2b}, \ref{bound2}) for the average
distribution
function $f$ and Eqs.\ (\ref{character}, \ref{character2}) for the
characteristic potentials can serve as a starting point for
numerical evaluations of $S_{nm}$.
For purely elastic scattering as well as for hot electrons
it is even possible
to get closed analytical expressions for $S_{nm}$, as we will show next.
The physical conditions for different transport regimes are
discussed in Ref.\ \onlinecite{Nagaev2}.
In Sec.\ \ref{elastic} and \ref{hot} we will consider
the charge transport ($T_n=T$, $n=1\ldots N$), and in the Sec.\
\ref{thermtrans} we will discuss the thermal transport
($V_n=0$, $n=1\ldots N$).
\section{Elastic scattering}
\label{elastic}
In the case of purely elastic scattering, $I[f]=0$,
the average distribution function satisfies the diffusion equation
\begin{equation}
\nabla\!\cdot\!\hat\sigma\nabla f_0=0,
\label{diff3}
\end{equation}
and the boundary conditions (\ref{bound2}) with $T_n =T$
(i.\ e.\ in the charge transport regime). Using this equation
one can prove (see Appendix \ref{A}) that for elastic scattering
cross correlations ($n\neq m$) are always negative,
in agreement with the general conclusion of
Ref.\ \onlinecite{Buttiker2}.
Eq.\ (\ref{diff3}) can be solved in terms of $\phi_n$:
$f_0=\sum_n\phi_nf_T(\varepsilon -eV_n)$.
Substituting this solution
into Eq.\ (\ref{corr2}) and using the sum rule (\ref{sum}) for $\phi_n$,
we can express $\Pi$ in the following form,\cite{comparison}
\begin{equation}
\Pi =2\int d\varepsilon\sum_{k,l}\phi_k\phi_lf_T(\varepsilon -eV_
k)[1-f_T(\varepsilon -eV_l)].
\label{Pi}
\end{equation}
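The energy integration here is elementary: for equilibrium Fermi functions
one has
\begin{displaymath}
\int d\varepsilon f_T(\varepsilon -eV_k)\left[1-f_T(\varepsilon -eV_l)
\right]={{e(V_k-V_l)}\over {1-\exp\left[-e(V_k-V_l)/T\right]}},
\end{displaymath}
and symmetrizing the double sum over $k\leftrightarrow l$ combines the two
orderings into $e(V_k-V_l)\coth\left[e(V_k-V_l)/2T\right]$.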
Performing the integration
over $\varepsilon$ we obtain,
\begin{equation}
\Pi =e\sum_{k,l}\phi_k\phi_l\left(V_k-V_l\right)\coth\left[{{e\left
(V_k-V_l\right)}\over {2T}}\right],
\label{temp}
\end{equation}
which in combination with Eq.\ (\ref{MSD}) gives the final expression
for $S_{nm}$ which is valid for purely elastic scattering.
Eq.\ (\ref{temp}) describes
the crossover from the shot noise in multiterminal diffusive
conductors ($T\to 0$),
\begin{equation}
\Pi =e\sum_{k,l}\phi_k\phi_l|V_k-V_l|,
\label{temp2}
\end{equation}
to the equilibrium Johnson-Nyquist noise given by (\ref{thermal}).
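Before turning to the general proof of universality, it is instructive to
check Eq.\ (\ref{temp2}) for the idealized homogeneous wire introduced at
the end of Sec.\ \ref{conduct}. There $\phi_1=1-x/L$ and $\phi_2=x/L$, so
that for $V_1-V_2=V$ Eq.\ (\ref{temp2}) gives
$\Pi (x)=2e|V|(x/L)(1-x/L)$, and Eq.\ (\ref{MSD}) yields
\begin{displaymath}
S_{11}={{2e|V|\sigma A}\over {L^2}}\int\limits_0^Ldx\,{x\over L}
\left(1-{x\over L}\right)={1\over 3}\,{{\sigma A}\over L}\,e|V|=
{1\over 3}e|I|,
\end{displaymath}
i.e.\ the familiar $1/3$-suppression of the Poisson value.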
\subsection{Universality of noise}
\label{universality1}
Now we are in the position to generalize the proof of universality
of the $1/3$-suppression of shot noise
\cite{Beenakker1,Nagaev1,Jong1,Nazarov} to the case of
an arbitrary {\it multiterminal\/} diffusive conductor.
To be specific we choose
$V_n=0$ for $n\neq 1$, i.e.\ only contact $n=1$ has a non-vanishing voltage.
Then, using the sum rule (\ref{sum}) for $\phi_n$,
we get
\begin{eqnarray}
\Pi =2e\phi_1(1-\phi_1)V_1\coth\left(eV_1/2T\right) \nonumber \\
+2T(1-\phi_1)^2+2T\phi_1^2 \, .\nonumber
\end{eqnarray}
To get $S_{1n}$ we substitute this equation into
(\ref{MSD}) and evaluate the first term as follows:
$\int d{\bf r}\nabla\phi_n\!\cdot\!\hat\sigma\nabla\phi_1\phi_1(1-\phi_1)=
\oint d{\bf s}\!\cdot\!\hat\sigma \nabla\phi_n(\phi_1^2/2-\phi_1^3/3)=
-G_{1n}/6$, where
we used (\ref{character}, \ref{character2}).
Similarly, for the integrals in the second and third term
we get: $\int d{\bf r}\nabla\phi_n\!\cdot\!\hat\sigma\nabla\phi_1(1-\phi_1
)^2=\int d{\bf r}\nabla\phi_n\!\cdot\!\hat\sigma\nabla\phi_1\phi_1^2=-G_{1
n}/3$.
Combining these results we arrive at
\begin{equation}
S_{1n}=-{1\over 3}G_{1n}\left[4T+eV_1\coth\left(eV_1/2T\right)\right].
\label{unicold}
\end{equation}
When $V_1=0$ we get $S_{1n}=-2G_{1n}T$, and the formula
for the Johnson-Nyquist noise is recovered. When $T=0$, we
express $S_{1n}$ in terms of outgoing currents, $I_n = G_{1n}V_1$:
\begin{eqnarray}
S_{1n} & = & -{1\over 3}e|I_n|,\quad n\neq 1,\nonumber \\
S_{11} & = & {1\over 3}e|I_1|.
\label{unicold2}
\end{eqnarray}
We note that the above derivation is valid for an arbitrary impurity
distribution and shape of the conductor, and for an arbitrary electron spectrum
(band structure). In this sense the suppression
factor ${1\over 3}$ is indeed universal. This generalizes the known
universality of a two--terminal conductor
\cite{Nazarov} to a multiterminal geometry.
Finally, we mention here some
inequalities (derived in Appendix \ref{A}), which can be used to estimate
the spectral density $S_{nm}$ in the $T=0$ limit.
First, the correlations are bounded from below,
\begin{equation}
S_{nn}\geq{1\over 3}e|I_n|,
\label{below}
\end{equation}
but due to the nonlocality of the noise (see the discussion in
Sec.\ \ref{nonlocality}) there can be no upper bound
in terms of the current $I_n$ through the
same contact. In other words, the current $I_n$ flowing through
the $n$-th contact creates the noise $\frac{1}{3}eI_n$ in this contact.
However, other contacts also contribute to the noise in the $n$-th contact,
and
this contribution is not universal and
can make the noise arbitrarily large compared to the value $\frac{1}{3}eI_n$.
Nevertheless, we can write: $\Pi<e\max\{|V_k-V_l|\}$, $k,l=1,\ldots ,N$,
which gives the rough
estimate
\begin{equation}
S_{nn} < e|G_{nn}|\max\{|V_k-V_l|\}.
\label{rough}
\end{equation}
In contrast, the cross correlations possess an upper bound,
\begin{equation}
|S_{nm}|\leq{1\over 2}(S_{nn}+S_{mm}).
\label{above}
\end{equation}
$S_{nm}$ vanishes when the $n$-th and $m$-th contacts are completely
disconnected.
\subsection{Wide and star-shaped conductors}
\label{wide}
Next we specialize to two experimentally important cases.
First we consider a multiterminal conductor
of a star geometry with $N$ long leads
(but with otherwise arbitrary shape)
which join each other at a small crossing
region (see Fig.\ \ref{noisefig}b).
The resistance of this region is assumed to be
much smaller
than the resistance of the leads.
In the second case the contacts are connected through a wide
region (see Fig.\ \ref{noisefig}c),
where again the
resistance of the conductor comes mainly from the regions near
the contacts, while the resistance of the wide region is
negligible.
Both shapes are characterized by the requirement
that $w/L\ll 1$, where $w$ and $L$ are the characteristic
sizes of the contact and of the entire conductor, resp.
In both cases the conductor can be divided (more or less arbitrarily)
into $N$ subsections $\Gamma_k$, $k=1,\ldots ,N$,
associated
with a particular contact so that the potential $V$ is
approximately constant (for $w/L\ll 1$) on the dividing surfaces
$C_k$. Each subsection can then be thought of as a two-terminal
conductor with the corresponding characteristic potential
$\theta_k({\bf r})$,
\begin{equation}
\nabla\!\cdot\!\hat\sigma\nabla\theta_k\! =\! 0,\quad
d{\bf s}\!\cdot\!\hat\sigma\nabla\theta_
k|_S\! =\! \theta_k|_{L_k}\! =\! 0,\;\;\theta_k|_{C_k}\! =\! 1.
\label{theta}
\end{equation}
We will now show that both the multiterminal
conductance matrix $G_{nm}$ and the spectral densities $S_{nm}$ can be
expressed in terms of the conductances $G_k$ of these subsections,
\begin{equation}
G_k=-\int\limits_{L_k}d{\bf s}\!\cdot\!\hat\sigma\nabla\theta_k =
\int\limits_{C_k}d{\bf s}
\!\cdot\!\hat\sigma\nabla\theta_k .
\label{subsect}
\end{equation}
Since each potential $\phi_n$ is approximately constant in the central
region
of the multiterminal conductor, we can write,
\begin{equation}
\phi_n({\bf r})|_{C_k}=\alpha_n=const.,\quad\sum_n\alpha_n=1,
\label{alpha}
\end{equation}
for an arbitrary $k=1,\ldots ,N$, where the second equation follows from
the sum rule for $\phi_n$.
Comparing Eqs.\ (\ref{character}, \ref{character2}, \ref{alpha})
with the definition of $\theta_k$ (\ref{theta}), we
immediately obtain
\begin{equation}
\phi_n({\bf r})|_{{\bf r}\in\Gamma_k}=\alpha_n\theta_k({\bf r})+
[1-\theta_k({\bf r})]\delta_{nk}.
\label{phi}
\end{equation}
The calculation of $G_{nm}$ and $S_{nm}$ is now straightforward.
We substitute (\ref{phi}) into (\ref{conductance}) and use
Eq.\ (\ref{subsect}) to get
\begin{equation}
G_{nm}=(\alpha_m-\delta_{nm})G_n,\quad \alpha_m=G_m/G,
\label{conductance3}
\end{equation}
where $G\equiv\sum_nG_n$, and
the equation for $\alpha_m$ follows from $\sum_nG_{nm}=0$.
Substituting (\ref{phi}) into (\ref{MSD}) and
applying similar arguments as above in the proof
of the 1/3-suppression we find the explicit expressions
(for details of the derivation see Appendix \ref{B})
\begin{eqnarray}
S_{nm}&=&{1\over 3}e\sum\limits_k\alpha_n\alpha_k(J_k+J_n)(\delta_{
nm}-\delta_{km})-{2\over 3}G_{nm}T,\nonumber\\J_n&=&\sum\limits_lG_
l(V_n-V_l)\coth\left[{{e(V_n-V_l)}\over {2T}}\right].
\label{MSD2}\end{eqnarray}
We note that this result is a consequence of the above approximation (\ref{alpha}).
Comparing the resistance of the subsections to the resistance
of the central region of the conductor (which is neglected) we find
that the corrections to Eq.\ (\ref{alpha}) and consequently to
Eq.\ (\ref{MSD2}) are of order $w/L$ in 3D and for a star geometry in 2D,
and of order $[\ln (L/w)]^{-1}$
for wide conductors in 2D.
In principle, (\ref{MSD2}) and (\ref{conductance3}) allow us
to calculate the noise for arbitrary voltages and temperature, but
for illustrative purposes we consider
the simple case of a cross-shaped conductor
with four equivalent leads
(i.e. $\alpha_n=1/4$) and $T=0$.
Suppose the voltage is applied to only one contact, say
$V_1>0$, $V_{n\neq 1}=0$, and $I=-I_1=3I_{n\neq 1}>0$.
Then, from (\ref{MSD2})
we obtain: $S_{11}={1\over 3}eI$, $S_{12}=S_{13}=S_{14}=-{1\over 9}
eI$, all being in agreement with
the universal $1/3$-suppression proven above. Then,
$S_{22}=S_{33}=S_{44}={2\over 9}eI$, and $S_{23}=S_{24}=S_{34}=-{1\over {
18}}eI$.
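As a consistency check, these values satisfy the sum rule $\sum_nS_{nm}=0$;
e.g.\ $S_{11}+S_{12}+S_{13}+S_{14}={1\over 3}eI-3\cdot {1\over 9}eI=0$ and
$S_{12}+S_{22}+S_{23}+S_{24}=-{1\over 9}eI+{2\over 9}eI
-2\cdot {1\over {18}}eI=0$.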
These numbers seem to be new \cite{comment1} and it would be interesting
to test them experimentally.
\subsection{Nonlocality and exchange effect}
\label{nonlocality}
We are now in the position to address the issue
of {\em non-locality} and {\em exchange} effect in shot
noise ($T=0$) in multiterminal conductors.
For this we consider for instance a star geometry
and assume that the
current enters the conductor through
the $n$-th contact, i.e. $I_n=-I$, and leaves it through
the $m$-th contact, i.e. $I_m=I$,
while the other contacts are open, i.e. $I_k=0$ for $k\neq n,m$.
{}From (\ref{conductance3}) we obtain for the conductance
$G_nG_m/(G_n+G_m)$ (two contacts are in series), and we see that
it does not depend on the other leads, which simply reflects
the {\em local} nature of diffusive transport.
However, contrary to one's first expectation,
this locality does {\em not} carry over to the noise
in general. Indeed, from (\ref{MSD2})
it follows that
$S_{nm}=-{1\over 3}(\alpha_n+\alpha_m)eI$.
The additional
suppression factor $0<\alpha_n+\alpha_m<1$ for $N>2$ reflects the
{\em non-locality} of the current noise.
For instance, for a cross with $N=4$ equivalent leads we have
$\alpha_m=\alpha_n=1/4$, and thus $S_{nm}=-{1\over 6}eI$.
An analogous reduction factor was obtained in
Ref.\ \onlinecite{Beenakker1} from a different point of view.
Hence, one cannot disregard
open contacts simply because no current is flowing through them;
on the contrary, these open contacts which are still connected to
the reservoir induce equilibration of the electron gas
and thereby reduce its current noise.
We emphasize that this non-locality is a classical
effect in the sense that no quantum phase interference is involved
(phase coherent effects are {\it not\/} contained
in our Boltzmann approach). On the other hand,
the origin of this non-locality
can be traced back to the non-linear dependence
of $\Pi$ on the distribution $f$ in (\ref{corr2}),
which is a consequence of the Pauli exclusion principle.
Next we discuss exchange effects \cite{Buttiker2}
in a four terminal conductor.
According to Blanter and B\"uttiker \cite{Blanter} they can be
probed by measuring $S_{13}$ in three ways:
$V_n=V_0\delta_{n2}$ (A), $V_n=V_0\delta_{n4}$ (B), and
$V_n=V_0\delta_{n2}+V_0\delta_{n4}$ (C). Then we take
$\Delta S_{13}=S_{13}^C-S_{13}^A-S_{13}^B$
as a measure of the exchange effect.
This experiment is analogous to the experiment of
Hanbury Brown and Twiss in optics. \cite{Brown}
It measures the interference of electrons coming
from mutually incoherent sources, which is caused by the
indistinguishability of the electrons.
Naively, one might expect that this interference
effect averages to zero in diffusive conductors.
However, it may come as a surprise that in our semiclassical
Boltzmann approach
$\Delta S_{13}$ turns out to be non-zero in general
and can even be of the order of the shot noise itself.
Again, the reason for that is that $\Pi$ is non-linear in $f_0$
(see (\ref{corr2})), which is the consequence of the Pauli exclusion
principle. So, the value $\Pi^C-\Pi^A-\Pi^B$
which enters $\Delta S_{13}$ is not necessarily zero.
Indeed, while exchange effects vanish for cross
shaped conductors (in agreement with Ref.\ \onlinecite{Blanter}
up to corrections
of order $w/L$ which are neglected in our approximation),
this is not so for an H-shaped conductor (see Fig.\ \ref{noisefig}d).
Calculations similar to those leading to
(\ref{MSD2}) give for this case:
\begin{equation}
\Delta S_{13}={1\over {24}}{{eV_0G^2G_0}\over {(G+4G_0)^2}},
\label{exchange}
\end{equation}
where $G_n=G/4$ are the conductances (all being equal) of the
outer four leads, while the conductance of the connecting wire
in the middle
is denoted by $G_0$.
This exchange term $\Delta S_{13}$ vanishes
for $G_0\to\infty$, because then the case of a simple cross is
recovered, and also
for $G_0\to 0$, because then the $1$-st and $3$-rd
contacts are disconnected.
$\Delta S_{13}$ takes on its maximum value
for $G_0=G_n$ and becomes equal to ${1\over {60}}eI^A$,
where $I^A$ is the current
through the $2$-nd contact for case (A).
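(The arithmetic behind this number is elementary: for $G_0=G/4$,
Eq.\ (\ref{exchange}) gives $\Delta S_{13}=eV_0G/384$, while the
conductance seen from contact 2 is the lead $G/4$ in series with
$G/4+[1/G_0+2/G]^{-1}=5G/12$, i.e.\ $5G/32$, so that $I^A=5GV_0/32$ and
$\Delta S_{13}/eI^A=1/60$.)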
Although $\Delta S_{13}$ is positive in the example considered above,
this is not the case in general.
For an arbitrary four--terminal geometry of the conductor
the exchange effect can be expressed in terms of characteristic
potentials:
\begin{equation}
\Delta S_{nm}=-4eV_0\int d{\bf r}\nabla\phi_n\!\cdot\!\hat\sigma\nabla\phi_m
\phi_k\phi_l,
\label{exchange2}
\end{equation}
where all indices
are different. From this general formula it follows that
$\Delta S_{nm}=\Delta S_{kl}$, and $\Delta S_{nm}+\Delta S_{nl}+\Delta
S_{nk}=0$. The last equation means
that the exchange effect can change sign, i.e.\ cross correlations can be
either suppressed or enhanced.
On the other
hand, the set-up can be slightly modified: instead of cross
correlations,
the noise density in one of the contacts of a multiterminal ($N>2$) diffusive
conductor is measured, say $S_{11}$, while the
electrons are injected through the contacts 2 (A), 3 (B),
and 2 and 3 (C). Again,
$\Delta S_{11}=S_{11}^C-S_{11}^A-S_{11}^B$ is a measure of the exchange
effect.
Then, it follows from (\ref{exchange2}) that
\begin{equation}
\Delta S_{11}=-4eV_0\int d{\bf r}\nabla\phi_1\!\cdot\!\hat\sigma\nabla\phi_1
\phi_2\phi_3<0,
\label{exchange3}
\end{equation}
i.e.\ the correlations are always suppressed due to the
interference effect, which is a direct manifestation of the Pauli principle.
In the particular case of star-shaped conductors we have
\begin{equation}
\Delta S_{11}=-\frac{4}{3}eV_0\frac{G_1G_2G_3}{G^2},\quad
G=\sum_{n=1}^{N}G_n.
\label{exchange4}
\end{equation}
The suppression of noise due to the interference of mutually
incoherent electrons was recently observed in an experiment with a
ballistic electron beam splitter. \cite{Tarucha}
We have shown here that this
effect is also observable in mesoscopic diffusive conductors.
\section{Hot electrons}
\label{hot}
We consider now the case of ``hot'' electrons
where $I_{ee}\neq 0$, but still $I_{e-ph}=0$, and we assume that
electron-electron scattering is sufficiently strong
to cause thermal equilibration of
the electron gas
(i.e. $l_{ee}=\sqrt {D\tau_{ee}}\ll L$,
where $D$ is the diffusion coefficient and $\tau_{ee}$
the electron-electron relaxation time).
The average distribution then assumes the Fermi-Dirac
form:
\begin{equation}
f_0(\varepsilon ,
{\bf r})=\left\{1+\exp\left[{{\varepsilon -eV({\bf r})}\over {T_e({\bf r})}}
\right]\right\}^{-1},
\label{distribution}
\end{equation}
with the local electron temperature $T_e({\bf r})$.
Substituting this $f_0$ into (\ref{corr2}) we immediately get
$\Pi ({\bf r})=2T_e({\bf r})$. On the other hand, from Eq.\ (\ref{edensity})
it follows that $(T_e({\bf r}))^2=(6/\pi^2\nu_F)\Upsilon ({\bf r})$,
where $\Upsilon ({\bf r})$ is given by (\ref{edensity2}) with $T_n =T$
(i.\ e.\ in the charge transport regime).
Thus, we finally obtain
\begin{equation}
\Pi =2T\left[1+2\sum_{n,m}\phi_n\phi_m(\beta_n\! -\!\beta_m)^2\right]^{{
1\over 2}},\;
\beta_n\! =\! {{\sqrt 3eV_n}\over {2\pi T}},
\label{temp3}
\end{equation}
which in combination with (\ref{MSD}) gives the general solution
for the case of hot electrons.
We would like to note here that
the cross correlations are always negative
also in the case of hot electrons (the proof is given in Appendix
\ref{A}).
\subsection{Universality of noise}
\label{universality2}
Next we show that the shot-noise suppression factor
$\sqrt 3/4$ (Refs.\ \onlinecite{Nagaev2,Kozub})
for hot electrons in a multiterminal conductor
is also {\it universal\/}.
As before
we can consider
the case where the voltage is applied to only one contact:
$V_n=V_1\delta_{n1}$. Then
\begin{equation}
\Pi =2T\sqrt {1+4\beta^2\left(\phi_1-\phi_1^2\right)},
\label{temp4}
\end{equation}
where $\beta\equiv\beta_1$.
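In terms of the variable $\Phi$ defined in Eq.\ (\ref{relation}) below one
has the identity $1+4\beta^2(\phi_1-\phi_1^2)=(1+\beta^2)(1-\Phi^2)$, so
that $\Pi =2T\left[(1+\beta^2)(1-\Phi^2)\right]^{1/2}$.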
Using the relation
\begin{eqnarray}
2\sqrt {1-\Phi^2}\nabla\Phi =\nabla\left\{\arcsin \Phi +\Phi\sqrt {
1-\Phi^2}\right\},\nonumber \\
\Phi =\beta\left(1+\beta^2\right)^{-1/2}(2\phi_1-1),
\label{relation}
\end{eqnarray}
we transform the volume integral in
(\ref{MSD}) into a surface integral
and obtain the spectral density of noise:
\begin{equation}
S_{1n}=\! -G_{1n}T\left[1+\left(\beta\! +\!\frac{1}{\beta}\right)\arctan
\beta \right],\;
\beta ={{\sqrt 3eV_1}\over {2\pi T}}.
\label{unihot}
\end{equation}
This expression describes the crossover from
the thermal noise
($\beta\ll 1$) given by (\ref{thermal})
to the transport noise $(\beta\gg 1$)
\begin{eqnarray}
S_{1n} & = & -{{\sqrt 3}\over 4}e|I_n|,\quad n\neq 1,\nonumber \\
S_{11} & = & {{\sqrt 3}\over 4}e|I_1|.
\label{unihot2}
\end{eqnarray}
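(Explicitly, for $\beta\ll 1$ one finds
$(\beta +\beta^{-1})\arctan\beta =1+{2\over 3}\beta^2+O(\beta^4)$, so that
Eq.\ (\ref{unihot}) reduces to the thermal noise $S_{1n}=-2G_{1n}T$, while
for $\beta\gg 1$, $\arctan\beta\to\pi /2$ reproduces Eq.\ (\ref{unihot2}).)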
This general result shows that
in the case of hot electrons
the shot-noise suppression factor $\sqrt 3/4$ is indeed universal,
i.e. it does not depend on the shape of the multiterminal diffusive
conductor nor on its disorder distribution.\cite{comment2}
The origin of this universality becomes clear from the following
argument.
We have seen that the distribution of the
effective noise temperature $\Pi ({\bf r})$ for the case of hot electrons is
controlled by the transport equations for the energy, (\ref{econserv}, \ref{bound3})
through the heat density $\Upsilon ({\bf r})$.
The spectral density of noise, in turn,
is given by $\Pi ({\bf r})$ through the transport equations for charge,
(\ref{diffb}, \ref{boundb}). On the other hand, according to
the Wiedemann-Franz law both the energy and the charge transport are
determined by the same kinetic coefficients, namely by $\hat\sigma ({\bf
r})$.
Thus, the physical origin of the universality of the
suppression factor $\sqrt 3/4$ can be traced back to the
Wiedemann-Franz law.
Conversely, a violation of the Wiedemann-Franz law will
cause deviations from universality.
We would like to note here that
the universality of the noise (for cold and hot electrons)
has been proven here for the case where the voltage is applied to
only one contact of a multiterminal diffusive conductor which
made it possible to express the spectral densities $S_{nm}$
in terms of conductances $G_{nm}$. This is no longer possible in general.
Nevertheless, in the case of a 2D geometry and isotropic
conductivity,
$\sigma_{\alpha\beta}({\bf r})=\sigma ({\bf r})\delta_{\alpha\beta}$,
both $G_{nm}$ and $S_{nm}$
are of the same universality class. Indeed, one can easily see that
they are invariant
under conformal transformations of the coordinates.
$G_{nm}$ and $S_{nm}$ can be expressed in terms of characteristic
potentials $\phi_n$, which satisfy the conformal invariant
diffusion equation (\ref{character}) and boundary conditions
(\ref{character2}).
Moreover, the combination $d{\bf r}\nabla\phi_n\!\cdot\!\nabla\phi_
m$ does not change under a
conformal transformation of the coordinates, which finally makes
the integrals for $G_{nm}$ (\ref{conductance2}) and for $S_{nm}$
(\ref{MSD}) conformal invariant.
We close this section with another
illustrative example. Let us consider again
a cross-shaped conductor with four equivalent leads,
\cite{comment3}
$G_n=G/4$, at $T=0$ and where we choose
$V_n=V_1\!\delta_{1n}$, $I=-I_1=3I_{n\neq 1}>0$. We then find
$S_{11}={{\sqrt 3}\over 4}eI$,
and $S_{1n}=-{1\over {4\sqrt 3}}eI$,
for $n\neq 1$,
while
$S_{nn}=\left({{35\sqrt 3}\over {108}}-{2\over {3\pi}}\right)eI$,
and
$S_{n\neq m}=-\left({{13\sqrt 3}\over {108}}-{1\over {3\pi}}\right)eI$,
for $n,m\neq 1$.
These new numbers are consistent with the universal factor $\sqrt 3/4$.
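As a consistency check (added here), these coefficients obey the
current-conservation sum rule $\sum_m S_{nm}=0$:
\begin{eqnarray}
S_{11}+3S_{1n} & = & \left({{\sqrt 3}\over 4}-{3\over {4\sqrt 3}}\right)eI=0,
\nonumber \\
S_{nn}+S_{n1}+2S_{n\neq m} & = & \left({{35\sqrt 3}\over {108}}
-{{\sqrt 3}\over {12}}-{{26\sqrt 3}\over {108}}\right)eI
+\left({2\over {3\pi}}-{2\over {3\pi}}\right)eI=0,\nonumber
\end{eqnarray}
where we used $1/(4\sqrt 3)=\sqrt 3/12$.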
\section{Noise induced by thermal transport}
\label{thermtrans}
In this section we address a new phenomenon, namely the current
noise in multiterminal diffusive conductors in the presence of
thermal transport. We assume no energy relaxation
in the conductor due to phonons, $I_{e-ph}=0$, and no voltage is applied
to the contacts, $V_n=0$, $n=1,\ldots ,N$, which are kept
in equilibrium at different temperatures $T_n$. The thermal transport
is considered in Sec. \ref{energy}, where the outgoing thermal
currents $Q_n$ are calculated (see Eqs.\ (\ref{Q1}) and (\ref{Q2})).
We turn now to the calculation of the spectral density of noise.
\subsection{Elastic scattering}
\label{coldheat}
To calculate $\Pi$ we need to know the distribution function
$f_0$. It obeys the Eq.\ (\ref{diff3}) with the boundary conditions
(\ref{bound2}) in the contacts. The solution then
reads explicitly,
\begin{equation}
f_0=\sum_n\phi_nf_{T_n},
\label{distribution2}
\end{equation}
and with the help of (\ref{sum}) we get,
\begin{eqnarray}
\lefteqn{\Pi =\sum_{kl}\phi_k\phi_lZ_{kl},}\nonumber \\
& & Z_{kl}=T_kT_l\int\limits^{\infty}_{
-\infty}ds\left[1-\tanh (T_ks)\tanh (T_ls)\right].
\label{Pi2}
\end{eqnarray}
This together with Eq.\ (\ref{MSD}) gives the spectral density
of noise $S_{nm}$.
In equilibrium, $T_n=T$ for all $n$, so $Z_{kl}=2T$, and (\ref{Pi2})
gives the equilibrium noise $S_{nm}=-2G_{nm}T$.
On the other hand, if for example $T_k\gg T_l$, then $Z_{kl}=(2\ln 2)T_k$.
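Both limits follow from elementary integrals (a check we add): for
$T_k=T_l=T$,
\begin{equation}
Z_{kl}=T^2\int\limits^{\infty}_{-\infty}{{ds}\over {\cosh^2(Ts)}}=2T,
\nonumber
\end{equation}
while for $T_k\gg T_l$ the integrand is close to $1-\tanh (T_l|s|)$ over the
relevant range, and $\int^{\infty}_{0}\left[1-\tanh (T_ls)\right]ds=\ln 2/T_l$
gives $Z_{kl}=(2\ln 2)T_k$.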
We then consider two situations: either
$T_1=T$ and $T_{n\neq 1}=0$, or $T_1=0$ and $T_{n\neq 1}=T$.
In other words, one contact is either heated up to a sufficiently high
temperature $T$ or cooled down to zero temperature, while the other
contacts are kept at a common temperature.
Using the sum rule for $\phi_n$
and carrying out the integration in (\ref{MSD}) we obtain for both
cases,
\begin{equation}
S_{1n}=-{2\over 3}(1+\ln 2)G_{1n}T.
\label{MSD3a}
\end{equation}
$S_{1n}$ for these two situations can be expressed in terms of the
thermal currents $Q_n$,
\begin{equation}
S_{1n}=\pm 4(1+\ln 2)(e/\pi )^2T^{-1}Q_n,
\label{MSD3b}
\end{equation}
with the sign depending on the
sign of $Q_n$.
\subsection{Hot electrons}
\label{hotheat}
We consider now the case of hot electrons, where
$\Pi =2T_e$, while $T_e^2=\sum_k\phi_kT_k^2$
(see Eq.\ (\ref{edensity2})). Substituting this into
(\ref{MSD}) we get,
\begin{equation}
S_{nm}=2\int d{\bf r}\nabla\phi_n\!\cdot\!\hat{\sigma}\nabla\phi_m
\sqrt{\sum_k\phi_kT_k^2}.
\label{MSD4a}
\end{equation}
In particular, if the electron gas in the conductor is pushed
out of equilibrium
by heating (or cooling)
one of the contacts (with $n=1$) while the other contacts are kept at the
temperature $T_n=T_2$, $n\neq 1$, the integral can be calculated explicitly,
and we have
\begin{equation}
S_{1n}=-{4\over 3}G_{1n}{{T_1^2+T_2^2+T_1T_2}\over {T_1+T_2}}.
\label{MSD4b}
\end{equation}
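Note that (\ref{MSD4b}) correctly recovers the equilibrium limit: for
$T_1=T_2=T$ the temperature factor equals $3T/2$ and we regain
$S_{1n}=-2G_{1n}T$.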
In the cases $T_1=T\gg T_2$
and $T_2=T\gg T_1$, we obtain with the help of Eq.\ (\ref{Q2}) that
the spectral density $S_{1n}$ can be expressed in terms of thermal
currents:
\begin{equation}
S_{1n}=\pm 8(e/\pi )^2T^{-1}Q_n.
\label{MSD4c}
\end{equation}
Expressions (\ref{MSD3b}, \ref{MSD4c}) are analogous to
Eqs.\ (\ref{unicold2}, \ref{unihot2})
and reflect the universality of the noise in the presence of thermal
transport.
\section{Conclusion}
\label{conclusion}
In conclusion, we have systematically studied the transport and noise in
multiterminal diffusive conductors. Applying a diffusion approximation to the
Boltzmann-Langevin kinetic equation we have derived the diffusion equations for
the distribution function and its fluctuations. We then solved these equations in
general terms of well defined ``characteristic potentials'' and we derived exact
formulas for the conductance matrix, energy transport coefficients and the
multiterminal spectral density of noise. In this way we have obtained the
following results. In both regimes of cold and hot electrons the shot noise turns
out to be universal in the sense that it depends neither on the geometry of the
multiterminal conductor nor on the spectrum of carriers or the disorder
distribution. We have studied the noise in the presence of thermal transport and
find that, when expressed in terms of thermal currents, it is also universal. We
believe that the origin of this universality lies in the fact that in the
diffusive regime the correlator of the local current densities (Langevin sources)
takes an equilibrium-like form of the fluctuation-dissipation theorem involving
an effective noise temperature. Thus, the transport and noise properties are
determined by the same conductivity tensor. One can then surmise that the proven
universality holds as long as the energy transport is governed by the
Wiedemann-Franz law.
The exchange effect is shown to be non-zero even within our semiclassical
Boltzmann approach. It can change sign when measured in cross-correlations,
and (in agreement with the Pauli principle) it always gives a negative
contribution to the auto-correlations. The exchange effect stems from a
non-linear dependence on the local distribution function. Similarly, we show
that the same non-linearities are responsible for non-local effects such as
the suppression of shot noise by open leads even at zero electron temperature.
Finally, we have proposed a possible experiment which would allow one to locally
measure the effective noise temperature, and we have given new suppression
factors for shot noise in various geometries which can be tested experimentally.
\acknowledgments
We would like to thank M.\ B\"uttiker and Ch.\ Sch\"onenberger
for helpful discussions.
This work is supported by
the Swiss National Science
Foundation.
\section*{Appendix}
\label{sec:appx}
\section{More Comparisons on GLUE}
\label{apx:more_compar}
\begin{table*}[ht]
\begin{center}
\scalebox{0.9}{
\begin{tabular}{lcccccclcc}
\hline
System & CoLA & MNLI-m & MNLI-mm & MRPC & QNLI & QQP & RTE & SST-2 & STS-B \\
& (8.5k) & (393k) & (393k) & (3.7k) & (105k) & (364k) & (2.5k) & (67k) & (5.7k) \\
& Mcc & Acc & Acc & F1/Acc & Acc & F1/Acc & Acc & Acc & Pear/Spea \\ \hline
\multicolumn{10}{l}{\it Same Student Architecture ($M$=6;$d'$=768;$d'_i$=3072)} \\ \hline
DistilBERT$_{6}$ & 51.3 & 82.2 & - & 87.5/- & 89.2 & -/88.5 & 59.9 & 92.7 & -/86.9 \\
Poor Man’s BERT$_{6}$ & - & 81.1 & - & -/80.2 & 87.6 & -/90.4 & 65.0 & 90.3 & -/88.5 \\
BERT-of-Theseus & 51.1 & 82.3 & - & 89.0/- & 89.5 & -/89.6 & 68.2 & 91.5 & -/88.7 \\
MiniLM$_{6}$ & 49.2 & 84.0 & - & 88.4/- & 91.0 & -/91.0 & 71.5 & 92.0 & - \\
TinyBERT$_{6}$ & \textbf{54.0} & \textbf{84.5} & \textbf{84.5} & \textbf{90.6/86.3} & \textbf{91.1} & \textbf{88.0/91.1} & \textbf{73.4} & \textbf{93.0} & \textbf{90.1/89.6} \\ \hline
\end{tabular}}
\caption{Comparisons between TinyBERT with other baselines on the dev set of GLUE tasks. Mcc refers to Matthews correlation and Pear/Spea refer to Pearson/Spearman.}
\label{tab:compare_dev}
\end{center}
\end{table*}
Since some prior works on BERT compression evaluate their models only on the GLUE dev set, for an easy and direct comparison we here compare our TinyBERT$_6$ with the reported results of these prior works. All the compared methods have the same model architecture as TinyBERT$_6$ (i.e. $M$=6, $d'$=768, $d'_i$=3072).
The direct comparison results are shown in Table~\ref{tab:compare_dev}. We can see that TinyBERT$_6$ outperforms all the baselines under the same settings of architecture and evaluation method, which further confirms the effectiveness of TinyBERT.
\section{Results on SQuAD v1.1 and v2.0}
\label{apx:qa_tasks}
We also demonstrate the effectiveness of TinyBERT on the question answering (QA) tasks: SQuAD v1.1~\citep{rajpurkar2016squad} and SQuAD v2.0~\citep{rajpurkar2018know}. Following the learning procedure in the previous work~\citep{devlin2019bert}, we treat these two tasks as the problem of sequence labeling which predicts the possibility of each token as the start or end of answer span.
One small difference from the GLUE tasks is that we perform the prediction-layer distillation on the original training dataset instead of the augmented dataset, which yields better performance.
The results show that TinyBERT consistently outperforms both the 4-layer and 6-layer baselines, which indicates that the proposed framework also works for the tasks of token-level labeling. Compared with sequence-level GLUE tasks, the question answering tasks depend on more subtle knowledge to infer the correct answer, which increases the difficulty of knowledge distillation. We leave how to build a better QA-TinyBERT as future work.
\begin{table}
\begin{center}
\scalebox{0.85}{
\begin{tabular}{l|cccc}
\hline
System & \multicolumn{2}{c}{\textbf{SQuAD 1.1}} & \multicolumn{2}{c}{\textbf{SQuAD 2.0}} \\
& EM & F1 & EM & F1 \\ \hline
BERT$_{\rm BASE}$ (Teacher) & 80.7 & 88.4 & 74.5 & 77.7 \\ \hline
\multicolumn{5}{l}{\it 4-layer student models} \\ \hline
BERT$_{4}$-PKD & 70.1 & 79.5 & 60.8 & 64.6 \\
DistilBERT$_{4}$ & 71.8 & 81.2 & 60.6 & 64.1 \\
MiniLM$_{4}$ & - & - & - & 69.7 \\
TinyBERT$_{4}$ & \textbf{72.7} & \textbf{82.1} & \textbf{68.2} & \textbf{71.8} \\ \hline
\multicolumn{5}{l}{\it 6-layer student models} \\ \hline
BERT$_{6}$-PKD & 77.1 & 85.3 & 66.3 & 69.8 \\
DistilBERT$_{6}$ & 78.1 & 86.2 & 66.0 & 69.5 \\
MiniLM$_{6}$ & - & - & - & 76.4 \\
TinyBERT$_{6}$ & \textbf{79.7} & \textbf{87.5} & \textbf{74.7} & \textbf{77.7} \\ \hline
\end{tabular}}
\caption{Results (dev) of baselines and TinyBERT on question answering tasks. The architecture of MiniLM$_{4}$ is ($M$=4, $d$=384, $d_i$=1536), which is wider than TinyBERT$_{4}$; the architecture of MiniLM$_{6}$ is the same as TinyBERT$_6$ ($M$=6, $d$=768, $d_i$=3072).}
\label{exp:squad}
\end{center}
\end{table}
\section{Initializing TinyBERT with BERT$_{\rm TINY}$}
\label{apx:bert_small}
In the proposed two-stage learning framework, to make TinyBERT effectively work for different downstream tasks, we propose the General Distillation~(GD) to capture the general domain knowledge, through which the TinyBERT learns the knowledge from intermediate layers of teacher BERT at the pre-training stage. After that, a general TinyBERT is obtained and used as the initialization of student model for Task-specific Distillation~(TD) on downstream tasks.
\begin{table}[t]
\begin{center}
\scalebox{0.66}{
\begin{tabular}{@{}l|cccc|c}
\hline
System & MNLI-m & MNLI-mm & MRPC & CoLA & { Avg} \\
& (392k) & (392k) & (3.5k) & (8.5k) & \\ \hline
BERT$_{\rm TINY}$ & 75.9 & 76.9 & 83.2 & 19.5 & 63.9 \\
BERT$_{\rm TINY}$(+TD) & 79.2 & 79.7 & 82.9 & 12.4 & 63.6 \\ \hline
TinyBERT~(GD) & 76.6 & 77.2 & 82.0 & 8.7 & 61.1 \\
TinyBERT~(GD+TD) & 80.5 & 81.0 & 82.4 & 29.8 & 68.4 \\ \hline
\end{tabular}}
\caption{Results of different methods at pre-training stage. TD and GD refers to Task-specific Distillation (without data augmentation) and General Distillation, respectively. The results are evaluated on dev set.}
\label{tab:general_distill}
\end{center}
\end{table}
In our preliminary experiments, we have also tried to initialize TinyBERT with the directly pre-trained BERT$_{\rm TINY}$, and then conduct the TD on downstream tasks. We denote this compression method as BERT$_{\rm TINY}$(+TD). The results in Table~\ref{tab:general_distill} show that BERT$_{\rm TINY}$(+TD) performs even worse than BERT$_{\rm TINY}$ on the MRPC and CoLA tasks. We conjecture that, without imitating the behaviors of BERT$_{\rm BASE}$ at the pre-training stage, BERT$_{\rm TINY}$ derives intermediate representations (e.g., attention matrices and hidden states) whose distributions are mismatched with those of the BERT$_{\rm BASE}$ model. The subsequent task-specific distillation under the supervision of the fine-tuned BERT$_{\rm BASE}$ then further disturbs the learned distribution/knowledge of BERT$_{\rm TINY}$, finally leading to poor performances on data-scarce tasks. For a data-rich task~(e.g. MNLI), TD has enough training data for BERT$_{\rm TINY}$ to acquire the task-specific knowledge very well, even though the pre-trained distributions have already been disturbed.
From the results of Table~\ref{tab:general_distill}, we find that GD can effectively transfer the knowledge from the teacher BERT to the student TinyBERT, achieving results comparable to BERT$_{\rm TINY}$~(61.1 vs. 63.9) even without performing the MLM and NSP tasks. Furthermore, the task-specific distillation boosts the performance of TinyBERT by continuing to learn the task-specific knowledge from the fine-tuned teacher BERT$_{\rm BASE}$.
\section{GLUE Details}
\label{apx:glue}
The GLUE datasets are described as follows:
\noindent\textbf{MNLI.} Multi-Genre Natural Language Inference is a large-scale, crowd-sourced entailment classification task \citep{williams2018broad}. Given a pair of $\langle premise, hypothesis \rangle$, the goal is to predict whether the $hypothesis$ is an entailment, contradiction, or neutral with respect to the $premise$.
\noindent\textbf{QQP.} Quora Question Pairs is a collection of question pairs from the website Quora. The task is to determine whether two questions are semantically equivalent \citep{chen2018quora}.
\noindent\textbf{QNLI.} Question Natural Language Inference is a version of the Stanford Question Answering Dataset which has been converted to a binary sentence pair classification task by \citet{wang2018glue}. Given a pair $\langle question, context \rangle$, the task is to determine whether the $context$ contains the answer to the $question$.
\noindent\textbf{SST-2.} The Stanford Sentiment Treebank is a binary single-sentence classification task, where the goal is to predict the sentiment of movie reviews~\cite{socher2013recursive}.
\noindent\textbf{CoLA.} The Corpus of Linguistic Acceptability is a task to predict whether an English sentence is a grammatically correct one~\citep{warstadt2019neural}.
\noindent\textbf{STS-B.} The Semantic Textual Similarity Benchmark is a collection of sentence pairs drawn from news headlines and many other domains \citep{cer2017semeval}. The task aims to evaluate how similar two pieces of texts are by a score from 1 to 5.
\noindent\textbf{MRPC.} Microsoft Research Paraphrase Corpus is a paraphrase identification dataset where systems aim to identify if two sentences are paraphrases of each other~\cite{dolan2005automatically}.
\noindent\textbf{RTE.} Recognizing Textual Entailment is a binary entailment task with a small training dataset~\citep{bentivogli2009fifth}.
\section{Conclusion and Future Work}
In this paper, we introduced a new method for Transformer-based distillation, and further proposed a two-stage framework for TinyBERT. Extensive experiments show that TinyBERT achieves competitive performances meanwhile significantly reducing the model size and inference time of BERT$_{\rm BASE}$, which provides an effective way to deploy BERT-based NLP models on edge devices.
In future work, we would study how to effectively transfer the knowledge from wider and deeper teachers (e.g., BERT$_{\rm LARGE}$) to student TinyBERT. Combining distillation with quantization/pruning would be another promising direction to further compress the pre-trained language models.
\label{sec:con}
\section*{Acknowledgements}
This work is supported in part by NSFC NO.61832020, No.61821003, 61772216, National Science and Technology Major Project No.2017ZX01032-101, Fundamental Research Funds for the Central Universities.
\section{Experiments}\label{sec:exp}
\begin{table*}[tbp]
\begin{center}
\scalebox{0.72}{
\begin{tabular}{@{}l|ccc|cccccccc|c@{}}
\textbf{System} & \textbf{\#Params} & \textbf{\#FLOPs} & \textbf{Speedup} & \textbf{MNLI-(m/mm)} & \textbf{QQP} & \textbf{QNLI} & \textbf{SST-2} & \textbf{CoLA} & \textbf{STS-B} & \textbf{MRPC} & \textbf{RTE} & \textbf{Avg} \\ \hline
BERT$_{\rm BASE}$ (Teacher) & 109M & 22.5B & 1.0x & 83.9/83.4 & 71.1 & 90.9 & 93.4 & 52.8 & 85.2 & 87.5 & 67.0 & 79.5 \\ \hline
BERT$_{\rm TINY}$ & 14.5M & 1.2B & 9.4x & 75.4/74.9 & 66.5 & 84.8 & 87.6 & 19.5 & 77.1 & 83.2 & 62.6 & 70.2 \\
BERT$_{\rm SMALL}$ & 29.2M & 3.4B & 5.7x & 77.6/77.0 & 68.1 & 86.4 & 89.7 & 27.8 & 77.0 & 83.4 & 61.8 & 72.1 \\
BERT$_{4}$-PKD & 52.2M & 7.6B & 3.0x & 79.9/79.3 & 70.2 & 85.1 & 89.4 & 24.8 & 79.8 & 82.6 & 62.3 & 72.6 \\
DistilBERT$_{4}$ & 52.2M & 7.6B & 3.0x & 78.9/78.0 & 68.5 & 85.2 & 91.4 & 32.8 & 76.1 & 82.4 & 54.1 & 71.9 \\
MobileBERT$_{\rm TINY} \dagger$ & 15.1M & 3.1B & - & 81.5/81.6 & 68.9 & \textbf{89.5} & 91.7 & \textbf{46.7} & 80.1 & \textbf{87.9} & 65.1 & \textbf{77.0} \\
TinyBERT$_{4}$ (ours) & 14.5M & 1.2B & 9.4x & \textbf{82.5}/\textbf{81.8} & \textbf{71.3} & 87.7 & \textbf{92.6} & 44.1 & \textbf{80.4} & 86.4 & \textbf{66.6} & \textbf{77.0} \\ \hline
BERT$_{6}$-PKD & 67.0M & 11.3B & 2.0x & 81.5/81.0 & 70.7 & 89.0 & 92.0 & - & - & 85.0 & 65.5 & - \\
PD & 67.0M & 11.3B & 2.0x & 82.8/82.2 & 70.4 & 88.9 & 91.8 & - & - & 86.8 & 65.3 & - \\
DistilBERT$_{6}$ & 67.0M & 11.3B & 2.0x & 82.6/81.3 & 70.1 & 88.9 & 92.5 & 49.0 & 81.3 & 86.9 & 58.4 & 76.8 \\
TinyBERT$_{6}$ (ours) & 67.0M & 11.3B & 2.0x & \textbf{84.6}/\textbf{83.2} & \textbf{71.6} & \textbf{90.4} & \textbf{93.1} & \textbf{51.1} & \textbf{83.7} & \textbf{87.3} & \textbf{70.0} & \textbf{79.4}
\end{tabular}
}
\caption{Results are evaluated on the test set of GLUE official benchmark. The best results for each group of student models are in-bold. The architecture of TinyBERT$_{4}$ and BERT$_{\rm TINY}$ is ($M$=4, $d$=312, $d_i$=1200), BERT$_{\rm SMALL}$ is ($M$=4, $d$=512, $d_i$=2048), BERT$_{4}$-PKD and DistilBERT$_{4}$ is ($M$=4, $d$=768, $d_i$=3072) and the architecture of BERT$_{6}$-PKD, DistilBERT$_{6}$ and TinyBERT$_{6}$ is ($M$=6, $d$=768, $d_i$=3072). All models are learned in a single-task manner. The inference speedup is evaluated on a single NVIDIA K80 GPU. $\dagger$~denotes that the comparison between MobileBERT$_{\rm TINY}$ and TinyBERT$_4$ may not be fair, since the former has 24 layers and is task-agnostically distilled from IB-BERT$_{\rm LARGE}$ while the latter is a 4-layer model task-specifically distilled from BERT$_{\rm BASE}$.}
\label{tab:glue_main}
\vspace{-0.1in}
\end{center}
\end{table*}
\iffalse
\begin{table}[!t]
\centering
\scalebox{0.62}{
\begin{tabular}{@{}l|ccccc@{}}
\hline
System & Layers & Hidden & Feed-forward & Model & Inference \\
& & Size & Size & Size & Time \\
\hline
BERT$_{\rm BASE}$ (Teacher) & 12 & 768 & 3072 & 109M($\times 1.0$) & 188s($\times 1.0$) \\
BERT-PKD/DistilBERT & 4 & 768 & 3072 & 52.2M($\times 2.1$) & 63.7s($\times 3.0$)\\
\hline
TinyBERT/BERT$_{\rm SMALL}$ & 4 & 312 & 1200 & 14.5M($\times 7.5$) & 19.9s($\times 9.4$)\\
\hline
\end{tabular}}
\caption{The model sizes and inference time for baselines and TinyBERT. The number of layers does not include the embedding and prediction layers.}
\label{tab:model_efficiency}
\end{table}
\fi
In this section, we evaluate the effectiveness and efficiency of TinyBERT on a variety of tasks with different model settings.
\subsection{Datasets}\label{subsec:datasets}
We evaluate TinyBERT on the General Language Understanding Evaluation (GLUE)~\cite{wang2018glue} benchmark, which consists of 2 single-sentence tasks: CoLA~\cite{warstadt2019neural}, SST-2~\cite{socher2013recursive}, 3 sentence similarity tasks: MRPC~\cite{dolan2005automatically}, STS-B~\cite{cer2017semeval}, QQP~\cite{chen2018quora}, and 4 natural language inference tasks: MNLI~\cite{williams2018broad}, QNLI~\cite{rajpurkar2016squad}, RTE~\cite{bentivogli2009fifth} and WNLI~\cite{levesque2012winograd}. The metrics for these tasks can be found in the GLUE paper~\cite{wang2018glue}.
\subsection{TinyBERT Settings}\label{subsec:tinybert_setup}
We instantiate a tiny student model (the number of layers $M$=4, the hidden size $d'$=312, the feed-forward/filter size $d'_i$=1200 and the head number $h$=12) that has a total of 14.5M parameters. This model is referred to as TinyBERT$_{4}$. The original BERT$_{\rm BASE}$ ($N$=12, $d$=768, $d_i$=3072 and $h$=12) is used as the teacher model and contains 109M parameters. We use $g(m)=3\times m$ as the layer mapping function, so TinyBERT$_{4}$ learns from every 3 layers of BERT$_{\rm BASE}$. The learning weight $\lambda$ of each layer is set to 1. Besides, for a direct comparison with baselines, we also instantiate a TinyBERT$_{6}$~($M$=6, $d'$=768, $d'_i$=3072 and $h$=12) with the same architecture as BERT$_{6}$-PKD~\cite{sun2019patient} and DistilBERT$_{6}$~\cite{sanh2019distilbert}.
TinyBERT learning includes the general distillation and the task-specific distillation. For the general distillation, we set the maximum sequence length to 128 and use English Wikipedia (2,500M words) as the text corpus and perform the {\it intermediate layer distillation} for 3 epochs with the supervision from a pre-trained BERT$_{\rm BASE}$ and keep other hyper-parameters the same as BERT pre-training~\cite{devlin2019bert}. For the task-specific distillation, under the supervision of a fine-tuned BERT, we firstly perform {\it intermediate layer distillation} on the augmented data for 20 epochs\footnote{For large datasets MNLI, QQP, and QNLI, we only perform 10 epochs of the {\it intermediate layer distillation}, and for the challenging task CoLA, we perform 50 epochs at this step.} with batch size 32 and learning rate 5e-5, and then perform {\it prediction layer distillation} on the augmented data~\footnote{For regression task STS-B, the original train set is better.} for 3 epochs with choosing the batch size from \{16, 32\} and learning rate from \{1e-5, 2e-5, 3e-5\} on dev set. At task-specific distillation, the maximum sequence length is set to 64 for single-sentence tasks, and 128 for sequence pair tasks.
\subsection{Baselines}\label{baselines}
We compare TinyBERT with BERT$_{\rm TINY}$, BERT$_{\rm SMALL}$\footnote{\url{https://github.com/google-research/bert}}~\cite{turc2019well} and several state-of-the-art KD baselines including BERT-PKD~\cite{sun2019patient}, PD~\cite{turc2019well}, DistilBERT~\cite{sanh2019distilbert} and MobileBERT~\cite{sun2020mobilebert}. BERT$_{\rm TINY}$ means directly pre-training a small BERT that has the same model architecture as TinyBERT$_{4}$. When training BERT$_{\rm TINY}$, we follow the same learning strategy as described in the original BERT~\cite{devlin2019bert}. To make a fair comparison, we use the released code to train a 4-layer BERT$_{4}$-PKD\footnote{\url{https://github.com/intersun/PKD-for-BERT-Model-Compression}} and a 4-layer DistilBERT$_{4}$\footnote{\url{https://github.com/huggingface/transformers/tree/master/examples/distillation}}, and fine-tune these 4-layer baselines with the suggested hyper-parameters. For the 6-layer baselines, we use the reported numbers or evaluate the results on the test set of GLUE with the released models.
\subsection{Experimental Results on GLUE}
We submitted our model predictions to the official GLUE evaluation server to obtain results on the test set\footnote{\url{https://gluebenchmark.com}}, as summarized in Table~\ref{tab:glue_main}.
The experiment results from the 4-layer student models demonstrate that: 1) There is a large performance gap between BERT$_{\rm TINY}$ (or BERT$_{\rm SMALL}$) and BERT$_{\rm BASE}$ due to the dramatic reduction in model size. 2) TinyBERT$_{4}$ is consistently better than BERT$_{\rm TINY}$ on all the GLUE tasks and obtains a large improvement of 6.8\% on average. This indicates that the proposed KD learning framework can effectively improve the performances of small models on a variety of downstream tasks. 3) TinyBERT$_{4}$ significantly outperforms the 4-layer state-of-the-art KD baselines (i.e., BERT$_{4}$-PKD and DistilBERT$_{4}$) by a margin of at least 4.4\%, with $\sim$28\% of the parameters and 3.1x inference speedup. 4) Compared with the teacher BERT$_{\rm BASE}$, TinyBERT$_{4}$ is 7.5x smaller and 9.4x faster in model efficiency, while maintaining competitive performances. 5) For the challenging CoLA dataset~(the task of predicting linguistic acceptability judgments), all the 4-layer distilled models have big performance gaps compared to the teacher model, while TinyBERT$_{4}$ achieves a significant improvement over the 4-layer baselines. 6) We also compare TinyBERT with the 24-layer MobileBERT$_{\rm TINY}$, which is distilled from the 24-layer IB-BERT$_{\rm LARGE}$. The results show that TinyBERT$_4$ achieves the same average score as the 24-layer model with only 38.7\% of the FLOPs. 7) When we increase the capacity of our model to TinyBERT$_{6}$, its performance is further elevated and outperforms the baselines of the same architecture by a margin of 2.6\% on average, achieving comparable results with the teacher. 8) Compared with the other two-stage baseline PD, which first pre-trains a small BERT and then performs distillation on a specific task with this small model, TinyBERT initializes the student in the task-specific stage via general distillation. We analyze these two initialization methods in Appendix~\ref{apx:bert_small}.
In addition, BERT-PKD and DistilBERT initialize their student models with some layers of a pre-trained BERT, which forces the student models to keep the same Transformer-layer (and embedding-layer) sizes as their teacher. In our two-stage distillation framework, TinyBERT is initialized through general distillation, making it more flexible in choosing the model configuration.
\noindent{\bf More Comparisons.} We demonstrate the effectiveness of TinyBERT by including more baselines such as Poor Man’s BERT~\cite{sajjad2020poor}, BERT-of-Theseus~\cite{xu2020bert} and MiniLM~\cite{wang2020minilm}, some of which only report results on the GLUE dev set. In addition, we evaluate TinyBERT on SQuAD v1.1 and v2.0. Due to the space limit, we present our results in the Appendix~\ref{apx:more_compar} and \ref{apx:qa_tasks}.
\iffalse
\begin{table}
\begin{center}
\scalebox{0.62}{
\begin{tabular}{l|cccc|c}
\hline
System & MNLI-m & MNLI-mm & MRPC & CoLA & { Average} \\
\hline
BERT$_{\rm BASE}$ (Teacher) & 84.2 & 84.4 & 86.8 & 57.4 & 78.2 \\
\hline
BERT-PKD ($M$=6;$d'$=768;$d'_i$=3072) & 80.9 & 80.9 & 83.1 & 43.1 & 72.0 \\
DistilBERT ($M$=6;$d'$=768;$d'_i$=3072) & 81.6 & 81.1 & 82.4 & 42.5 & 71.9 \\
\hline
TinyBERT ($M$=4;$d'$=312;$d'_i$=1200) & 82.8 & 82.9 & 85.8 & 49.7 & 75.3 \\
TinyBERT ($M$=4;$d'$=768;$d'_i$=3072) & 83.8 & 84.1 & 85.8 & 50.5 & 76.1 \\
TinyBERT ($M$=6;$d'$=312;$d'_i$=1200) & 83.3 & 84.0 & 86.3 & 50.6 & 76.1 \\
TinyBERT ($M$=6;$d'$=768;$d'_i$=3072) & 84.5 & 84.5 & 86.3 & 54.0 & 77.3 \\
\hline
\end{tabular}}
\caption{Results~(dev) of wider or deeper TinyBERT variants and baselines.}
\label{exp:model_size}
\end{center}
\end{table}
\subsection{Effects of Model Size}
We evaluate how much improvement can be achieved when increasing the model size of TinyBERT on several typical GLUE tasks, where MNLI and MRPC are used in the ablation studies of~\cite{devlin2018bert}, and CoLA is the most difficult task in GLUE. Specifically, three wider and deeper variants are proposed and their evaluation results on development set are displayed in Table~\ref{exp:model_size}. We can observe that: 1) All the three TinyBERT variants can consistently outperform the original smallest TinyBERT, which indicates that the proposed KD method works for the student models of various model sizes. 2) For the CoLA task, the improvement is slight when only increasing the number of layers~(from 49.7 to 50.6) or hidden size~(from 49.7 to 50.5). To achieve more dramatic improvements, the student model should become deeper and wider~(from 49.7 to 54.0). 3) Another interesting observation is that the smallest 4-layer TinyBERT can even outperform the 6-layers baselines, which further confirms the effectiveness of the proposed KD method.
\fi
\subsection{Ablation Studies}
In this section, we conduct ablation studies to investigate the contributions of : a) different procedures of the proposed two-stage TinyBERT learning framework in Figure~\ref{figure:tinybert_learning}, and b) different distillation objectives in Equation~\ref{eq:model_loss}.
\subsubsection{Effects of Learning Procedure} The proposed two-stage TinyBERT learning framework consists of three key procedures: GD~(General Distillation), TD~(Task-specific Distillation) and DA~(Data Augmentation). The performances of removing each individual learning procedure are analyzed and presented in Table~\ref{tab:learning_procedures}. The results indicate that all three procedures are crucial for the proposed method. TD and DA have comparable effects on all four tasks. We note that the task-specific procedures (TD and DA) are more helpful than the pre-training procedure (GD) on all of the tasks. Another interesting observation is that GD contributes more to CoLA than to MNLI and MRPC. We conjecture that the ability of linguistic generalization~\cite{warstadt2019neural} learned by GD plays an important role in the task of linguistic acceptability judgments.
\begin{table}
\begin{center}
\scalebox{0.75}{
\begin{tabular}{l|cccc|c}
System & MNLI-m & MNLI-mm & MRPC & CoLA & { Avg} \\
\hline
TinyBERT$_{4}$ & 82.8 & 82.9 & 85.8 & 50.8 & 75.6 \\
\hline
w/o GD & 82.5 & 82.6 & 84.1 & 40.8 & 72.5 \\
w/o TD & 80.6 & 81.2 & 83.8 & 28.5 & 68.5 \\
w/o DA & 80.5 & 81.0 & 82.4 & 29.8 & 68.4 \\
\end{tabular}}
\caption{Ablation studies of different procedures~(i.e., TD, GD, and DA) of the two-stage learning framework. The variants are validated on the dev set.}
\label{tab:learning_procedures}
\vspace{-0.05in}
\end{center}
\end{table}
\begin{table}
\begin{center}
\scalebox{0.68}{
\begin{tabular}{l|cccc|c}
System & MNLI-m & MNLI-mm & MRPC & CoLA & { Avg} \\ \hline
TinyBERT$_{4}$ & 82.8 & 82.9 & 85.8 & 50.8 & 75.6 \\ \hline
w/o Embd & 82.3 & 82.3 & 85.0 & 46.7 & 74.1 \\
w/o Pred & 80.5 & 81.0 & 84.3 & 48.2 & 73.5 \\
w/o Trm & 71.7 & 72.3 & 70.1 & 11.2 & 56.3 \\
~~~~~~w/o Attn & ~~~~~~79.9 & ~~~~~~80.7 & ~~~~~~82.3 & ~~~~~~41.1 & ~~~~~~71.0 \\
~~~~~~w/o Hidn & ~~~~~~81.7 & ~~~~~~82.1 & ~~~~~~84.1 & ~~~~~~43.7 & ~~~~~~72.9
\end{tabular}}
\caption{Ablation studies of different distillation objectives in the TinyBERT learning. The variants are validated on the dev set.}
\label{exp:distill_ablation}
\vspace{-0.15in}
\end{center}
\end{table}
\subsubsection{Effects of Distillation Objective} We investigate the effects of distillation objectives on the TinyBERT learning. Several baselines are proposed including the learning without the Transformer-layer distillation (w/o Trm), the embedding-layer distillation (w/o Emb) or the prediction-layer distillation (w/o Pred)\footnote{The prediction-layer distillation performs soft cross-entropy as Equation~\ref{eq:pred_loss} on the augmented data. ``w/o Pred'' means performing standard cross-entropy against the ground-truth of original train set.} respectively. The results are illustrated in Table~\ref{exp:distill_ablation} and show that all the proposed distillation objectives are useful.
The performance w/o Trm\footnote{Under the ``w/o Trm'' setting, we actually 1) conduct embedding-layer distillation at the pre-training stage; 2) perform embedding-layer and prediction-layer distillation at the fine-tuning stage.} drops significantly from 75.6 to 56.3. The reason for the significant drop lies in the initialization of the student model. At the pre-training stage, obtaining a good initialization is crucial for the distillation of Transformer-based models, while there is no supervision signal from upper layers to update the parameters of the Transformer layers at this stage under the w/o Trm setting. Furthermore, we study the contributions of attention (Attn) and hidden states (Hidn) in the Transformer-layer distillation. We find that attention-based distillation has a greater impact than hidden-states based distillation. Meanwhile, these two kinds of knowledge distillation are complementary to each other, which makes them the most important distillation techniques for Transformer-based models in our experiments.
\subsection{Effects of Mapping Function}
\label{exp:distill_layers}
We also investigate the effects of different mapping functions $n=g(m)$ on the TinyBERT learning. Our original TinyBERT as described in section~\ref{subsec:tinybert_setup} uses the uniform strategy, and we compare with two typical baselines including top-strategy $\left(g(m)=m+N-M; 0<m \leq M \right)$ and bottom-strategy $\left(g(m)=m; 0<m \leq M \right)$.
The comparison results are presented in Table~\ref{table:distill_layers}. We find that the top-strategy performs better than the bottom-strategy on MNLI, while being worse on MRPC and CoLA, which confirms the observations that different tasks depend on the knowledge from different BERT layers. The uniform strategy covers the knowledge from bottom to top layers of BERT$_{\rm BASE}$, and it achieves better performances than the other two baselines in all the tasks. Adaptively choosing layers for a specific task is a challenging problem and we leave it as future work.
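For reference, the three strategies can be written as the following mapping
functions (a trivial Python sketch we add; $N$ and $M$ default to the
TinyBERT$_{4}$ setting, and the function names are ours):
\begin{verbatim}
def g_uniform(m, N=12, M=4):
    # uniform strategy: g(m) = m * N/M, i.e. every 3rd teacher layer here
    return m * (N // M)

def g_top(m, N=12, M=4):
    # top strategy: the last M teacher layers, g(m) = m + N - M
    return m + N - M

def g_bottom(m, N=12, M=4):
    # bottom strategy: the first M teacher layers, g(m) = m
    return m
\end{verbatim}
with $0<m\leq M$ in all three cases.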
\begin{table}
\begin{center}
\scalebox{0.8}{
\begin{tabular}{l|cccc|c}
System & MNLI-m & MNLI-mm & MRPC & CoLA & {Avg} \\
\hline
Uniform & 82.8 & 82.9 & 85.8 & 50.8 & 75.6 \\
\hline
Top & 81.7 & 82.3 & 83.6 & 35.9 & 70.9 \\
Bottom & 80.6 & 81.3 & 84.6 & 38.5 & 71.3 \\
\end{tabular}}
\caption{Results (dev) of different mapping strategies for TinyBERT$_{4}$.}
\label{table:distill_layers}
\vspace{-0.15in}
\end{center}
\end{table}
\iffalse
\begin{table}
\begin{center}
\scalebox{0.78}{
\begin{tabular}{l|cc}
System & SST-2 $\rightarrow$ SST-2 &SST-2 $\rightarrow$ IMDB \\
\hline
BERT$_{\rm BASE}$ & 93.4 & 85.2 \\
\hline
BERT$_{4}$-PKD & 89.4 & 80.8 \\
DistilBERT$_{4}$ & 91.4 & 80.2 \\
TinyBERT$_{4}$ & 92.6 & 76.5 \\
\hline
BERT$_{6}$-PKD & 92.0 & 80.1 \\
DistilBERT$_{6}$ & 92.5 & 82.6 \\
TinyBERT$_{6}$ & 93.1 & 81.3 \\
\end{tabular}}
\caption{OOD Results of different compressed models. we first train models on train set of SST-2 and then evaluate on the test set of SST-2 and IMDB respectively.}
\label{table:ood_exp}
\end{center}
\end{table}
\subsection{Out-of-Distribution Generalization}
\yyc {We also compare the Out-of-Distribution~(OOD) generalization of compressed models. Following the OOD setting~\cite{hendrycks2020pretrained}, we train the models in one dataset and evaluate them in other one which has a different distribution. The datasets of SST2~\cite{socher-etal-2013-recursive} and IMDB~\cite{maas2011learning} are used in our experiment and the results are displayed in Table~\ref{table:ood_exp} }
\fi
\section{Introduction}\label{sec:intro}
Pre-training language models then fine-tuning on downstream tasks has become a new paradigm for natural language processing~(NLP). Pre-trained language models~(PLMs), such as BERT~\cite{devlin2019bert}, XLNet~\cite{yang2019xlnet}, RoBERTa~\cite{liu2019roberta}, ALBERT~\cite{Lan2020ALBERT}, T5~\cite{raffel2019exploring} and ELECTRA~\cite{clark2020electra}, have achieved great success in many NLP tasks (e.g., the GLUE benchmark~\cite{wang2018glue} and the challenging multi-hop reasoning task~\cite{ding2019cognitive}).
However, PLMs usually have a large number of parameters and long inference times, which makes them difficult to deploy on edge devices such as mobile phones. Recent studies~\cite{kovaleva2019revealing,michel2019sixteen,voita2019analyzing} demonstrate that there is redundancy in PLMs. Therefore, it is crucial and feasible to reduce the computational overhead and model storage of PLMs while retaining their performances.
There have been many model compression techniques~\cite{han2015deep} proposed to accelerate deep model inference and reduce model size while maintaining accuracy. The most commonly used techniques include quantization~\cite{gong2014compressing}, weights pruning~\cite{han2015learning}, and knowledge distillation~(KD)~\cite{romero2014fitnets}. In this paper, we focus on knowledge distillation, an idea originated from~\citet{hinton2015distilling}, in a {\it teacher-student} framework. KD aims to transfer the knowledge embedded in a large teacher network to a small student network where the student network is trained to reproduce the behaviors of the teacher network. Based on the framework, we propose a novel distillation method specifically for the Transformer-based models~\cite{vaswani2017attention}, and use BERT as an example to investigate the method for large-scale PLMs.
KD has been extensively studied in NLP~\cite{kim2016sequence,hu2018attention} as well as for pre-trained language models \cite{sanh2019distilbert,sun2019patient,sun2020mobilebert,wang2020minilm}. The {\it pre-training-then-fine-tuning} paradigm firstly pre-trains BERT on a large-scale unsupervised text corpus, then fine-tunes it on task-specific dataset, which greatly increases the difficulty of BERT distillation. Therefore, it is required to design an effective KD strategy for both training stages.
To build a competitive TinyBERT, we firstly propose a new {\it Transformer distillation} method to distill the knowledge embedded in teacher BERT. Specifically, we design three types of loss functions to fit different representations from BERT layers: 1) the output of the embedding layer; 2) the hidden states and attention matrices derived from the Transformer layer; 3) the logits output by the prediction layer. The attention based fitting is inspired by the recent findings~\cite{clark2019does} that the attention weights learned by BERT can capture substantial linguistic knowledge, and it thus encourages the linguistic knowledge to be well transferred from teacher BERT to student TinyBERT. Then, we propose a novel {\it two-stage learning} framework including the {\it general distillation} and the {\it task-specific distillation}, as illustrated
in Figure~\ref{figure:tinybert_learning}. At the general distillation stage, the original BERT without fine-tuning acts as the teacher model. The student TinyBERT mimics the teacher's behavior through the proposed Transformer distillation on a general-domain corpus. After that, we obtain a general TinyBERT that is used as the initialization of the student model for the further distillation. At the task-specific distillation stage, we first perform data augmentation, then conduct the distillation on the augmented dataset using the fine-tuned BERT as the teacher model. It should be pointed out that both stages are essential to improve the performance and generalization capability of TinyBERT.
\begin{figure}
\centering
\includegraphics[scale=0.4]{KD_figures/TinyBERT_LP.pdf}
\caption{The illustration of TinyBERT learning.}
\label{figure:tinybert_learning}
\end{figure}
\iffalse
A detailed comparison between the proposed method and other existing methods is summarized in Table~\ref{tab:summary_methods}.
\begin{table*}[!htbp]
\setlength{\tabcolsep}{4.5pt}
\centering
\begin{tabular}{|c|c|cccc||cccc|c|}
\hline
\multirow{2}{*}{ KD Methods} &
\multicolumn{5}{c||}{KD at Pre-training Stage} &
\multicolumn{5}{c|}{KD at Fine-tuning Stage} \\
\cline{2-11}
& \small INIT & \small Embd & \small Attn &\small Hidn &\small Pred & \small Embd & \small Attn &\small Hidn &\small Pred & \small DA \\
\hline
Distilled BiLSTM$_{\tiny \hbox{SOFT}}$ & & & & & & & & & \checkmark & \checkmark \\
\hline
BERT-PKD & \checkmark & & & & & & & \checkmark \tablefootnote{The student learns from the {\tt [CLS]}~(a special classification token of BERT) hidden states of the teacher.} & \checkmark & \\
\hline
DistilBERT & \checkmark & & & & \checkmark \tablefootnote{The output of pre-training tasks (such as dynamic masking) is used as the supervision signal.} & & & & \checkmark & \\
\hline
TinyBERT~(our method) & & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\
\hline
\end{tabular}
\caption{A summary of KD methods for BERT. Abbreviations: {INIT}(initializing student BERT with some layers of pre-trained teacher BERT), {DA}(conducting data augmentation for task-specific training data). {Embd}, {Attn}, {Hidn}, and {Pred} represent the knowledge from embedding layers, attention matrices, hidden states, and final prediction layers, respectively.
}
\label{tab:summary_methods}
\end{table*}
\fi
The main contributions of this work are as follows: 1) We propose a new Transformer distillation method to encourage that the linguistic knowledge encoded in teacher BERT can be adequately transferred to TinyBERT; 2) We propose a novel two-stage learning framework with performing the proposed Transformer distillation at both the pre-training and fine-tuning stages, which ensures that TinyBERT can absorb both the general-domain and task-specific knowledge of the teacher BERT. 3) We show in the experiments that our TinyBERT$_{4}$ can achieve more than 96.8\% of the performance of the teacher BERT$_{\rm BASE}$ on GLUE tasks, while having much fewer parameters ($\sim$13.3\%) and less inference time ($\sim$10.6\%), and significantly outperforms other state-of-the-art baselines with 4 layers on BERT distillation; 4) We also show that a 6-layer TinyBERT$_{6}$ can perform on-par with the teacher BERT$_{\rm BASE}$ on GLUE.
\section{Preliminaries}\label{sec:background}
In this section, we describe the formulation of Transformer~\cite{vaswani2017attention} and Knowledge Distillation~\cite{hinton2015distilling}. Our proposed {\it Transformer distillation} is a specially designed KD method for Transformer-based models.
\subsection{Transformer Layer}
Most of the recent pre-trained language models (e.g., BERT, XLNet and RoBERTa) are built with Transformer layers, which can capture long-term dependencies between input tokens by self-attention mechanism. Specifically, a standard Transformer layer includes two main sub-layers: {\it multi-head attention}~(MHA) and {\it fully connected feed-forward} network~(FFN).
\noindent \textbf{Multi-Head Attention (MHA)}. The calculation of attention function depends on the three components of queries, keys and values, denoted as matrices $\bm{Q}$, $\bm{K}$ and $\bm{V}$ respectively. The attention function can be formulated as follows:
\begin{align}
\label{eq:attention_score}
\!\!\bm{A} \! & = \! \frac{\bm{Q}\bm{K}^{T}}{\sqrt{d_k}}, \\
\!\!\texttt{Attention}(\bm{Q},\bm{K},\bm{V}) \! & = \! \texttt{softmax}(\bm{A})\bm{V},
\end{align}
where $d_k$ is the dimension of keys and acts as a scaling factor, and $\bm{A}$ is the attention matrix calculated from the compatibility of $\bm{Q}$ and $\bm{K}$ by a dot-product operation. The final output is calculated as a weighted sum of the values $\bm{V}$, with the weights computed by applying the {\tt softmax()} operation to each row of the matrix $\bm{A}$. According to Clark et al.~\shortcite{clark2019does}, the attention matrices in BERT can capture substantial linguistic knowledge, and thus play an essential role in our proposed distillation method.
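For concreteness, here is a minimal NumPy sketch of a single attention head
(the code and all names are ours, added for illustration, not taken from any
released implementation):
\begin{verbatim}
import numpy as np

def attention(Q, K, V):
    # A = Q K^T / sqrt(d_k); Q, K: (l, d_k), V: (l, d_v)
    d_k = Q.shape[-1]
    A = Q @ K.T / np.sqrt(d_k)
    # row-wise softmax turns each row of A into attention weights
    P = np.exp(A - A.max(axis=-1, keepdims=True))
    P = P / P.sum(axis=-1, keepdims=True)
    # weighted sum of values; A itself is the distillation target below
    return P @ V, A
\end{verbatim}
The unnormalized matrix $\bm{A}$ is returned as well, since it is the fitting
target of the attention based distillation introduced below.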
Multi-head attention is defined by concatenating the attention heads from different representation subspaces as follows:
\begin{align}
\label{muti-head-attention}
\texttt{MHA}(\bm{Q},\bm{K},\bm{V}) \! &= \! \texttt{Concat}({\rm h}_1,\ldots,{\rm h}_k)\bm{W},
\end{align}
where $k$ is the number of attention heads, and ${\rm h}_i$ denotes the $i$-th attention head, which is calculated by the $\texttt{Attention}()$ function with inputs from different representation subspaces. The matrix $\bm{W}$ acts as a linear transformation.
\noindent \textbf{Position-wise Feed-Forward Network (FFN)}. Transformer layer also contains a fully connected feed-forward network, which is formulated as follows:
\begin{align}
\label{eq:FFN}
\texttt{FFN}(x) = \max(0, x\bm{W}_1 + b_1)\bm{W}_2 +b_2.
\end{align}
We can see that the FFN contains two linear transformations and one ReLU activation.
\subsection{Knowledge Distillation}
KD aims to transfer the knowledge of a large teacher network $T$ to a small student network $S$. The student network is trained to mimic the behaviors of the teacher network. Let $f^{T}$ and $f^{S}$ represent the {\it behavior} functions of the teacher and student networks, respectively. A behavior function transforms network inputs into informative representations, and it can be defined as the output of any layer in the network. In the context of Transformer distillation, the output of the MHA layer or the FFN layer, or some intermediate representations (such as the attention matrix $\bm{A}$), can be used as behavior functions. Formally, KD can be modeled as minimizing the following objective function:
\begin{align}
\label{eq:point_distillation}
\mathcal{L}_{\text{KD}} = \sum_{x \in \mathcal{X}} L\big(f^S(x), f^T(x)\big),
\end{align}
where $L(\cdot)$ is a loss function that evaluates the difference between teacher and student networks, $x$ is the text input and $\mathcal{X}$ denotes the training dataset. Thus the key research problem becomes how to define effective behavior functions and loss functions. Different from previous KD methods, we also need to consider how to perform KD at the pre-training stage of BERT in addition to the task-specific training stage.
\begin{figure}
\centering
\includegraphics[scale=0.38]{KD_figures/transformer_layer_detail.pdf}
\caption{The details of Transformer-layer distillation consisting of Attn$_{loss}$({attention based distillation}) and Hidn$_{loss}$({hidden states based distillation}). }
\label{figure:different_knowledge}
\end{figure}
\section{Method}\label{sec:method}
In this section, we propose a novel distillation method for Transformer-based models, and present a {\it two-stage learning} framework for our model distilled from BERT, which is called TinyBERT.
\subsection{Transformer Distillation}
The proposed {\it Transformer distillation} is a specially designed KD method for Transformer networks. In this work, both the student and teacher networks are built with Transformer layers. For a clear illustration, we formulate the problem before introducing our method.
\noindent{\bf Problem Formulation}. Assuming that the student model has $M$ Transformer layers and the teacher model has $N$ Transformer layers, we start by choosing $M$ out of the $N$ teacher layers for the {\it Transformer-layer distillation}. Then a function $n=g(m)$ is defined as the mapping function between indices from student layers to teacher layers, which means that the $m$-th layer of the student model learns the information from the $g(m)$-th layer of the teacher model. To be precise, we set 0 to be the index of the embedding layer and $M + 1$ to be the index of the prediction layer, and the corresponding layer mappings are defined as $0=g(0)$ and $N+1=g(M+1)$, respectively. The effect of the choice of different mapping functions on the performances is studied in the experiment section. Formally, the student can acquire knowledge from the teacher by minimizing the following objective:
\begin{align}
\label{eq:kd_loss}
\!\!\!\mathcal{L}_{\text{model}} \! = \! \sum_{x \in \mathcal{X}} \sum^{M+1}_{m=0} \!\lambda_{m} \mathcal{L}_{\text{layer}}(f^S_m(x), f^T_{g(m)}(x)),
\end{align}
where $\mathcal{L}_{\text{layer}}$ refers to the loss function of a given model layer~(e.g., Transformer layer or embedding layer), $f_m(x)$ denotes the behavior function induced from the $m$-th layers and $\lambda_m$ is the hyper-parameter that represents the importance of the $m$-th layer's distillation.
\noindent{\bf Transformer-layer Distillation}. The proposed Transformer-layer distillation includes the {\it attention based distillation} and {\it hidden states based distillation}, which is shown in Figure~\ref{figure:different_knowledge}. The attention based distillation is motivated by the recent findings that attention weights learned by BERT can capture rich linguistic knowledge~\cite{clark2019does}. This kind of linguistic knowledge includes the syntax and coreference information, which is essential for natural language understanding. Thus we propose the attention based distillation to encourage that the linguistic knowledge can be transferred from teacher (BERT) to student (TinyBERT). Specifically, the student learns to fit the matrices of multi-head attention in the teacher network, and the objective is defined as:
\begin{align}
\label{eq:att_loss}
\mathcal{L}_{\text{attn}} = \frac{1}{h}\sum\nolimits^{h}_{i=1} \texttt{MSE}(\bm{A}_i^{S}, \bm{A}_i^{T}),
\end{align}
where $h$ is the number of attention heads, $\bm{A}_i \in \mathbb{R}^{l\times l} $ refers to the attention matrix corresponding to the $i$-th head of teacher or student, $l$ is the input text length, and {\tt MSE()} means the {\it mean squared error} loss function. In this work, the (unnormalized) attention matrix $\bm{A}_i$ is used as the fitting target instead of its softmax output $\texttt{softmax}(\bm{A}_i)$, since our experiments show that the former setting has a faster convergence rate and better performances.
\comments{In this work, we use logits that is argument of the softmax function instead of softmax weight, as the fitting target. Because the experiments show that the logits fitting has faster convergence and better performances than the weight fitting. }
In addition to the attention based distillation, we also distill the knowledge from the output of Transformer layer, and the objective is as follows:
\begin{align}
\label{eq:hid_loss}
\mathcal{L}_{\text{hidn}} = \texttt{MSE}(\bm{H}^{S}\bm{W}_h, \bm{H}^{T}),
\end{align}
where the matrices $\bm{H}^{S} \in \mathbb{R}^{l\times d'}$ and $\bm{H}^{T} \in \mathbb{R}^{l \times d}$ refer to the hidden states of student and teacher networks respectively, which are calculated by Equation~\ref{eq:FFN}. The scalar values $d$ and $d'$ denote the hidden sizes of teacher and student models, and $d'$ is often smaller than $d$ to obtain a smaller student network. The matrix $\bm{W}_h \in \mathbb{R}^{d' \times d} $ is a learnable linear transformation, which transforms the hidden states of student network into the same space as the teacher network's states.
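A minimal PyTorch sketch of these two Transformer-layer objectives (our
paraphrase; tensor shapes follow the notation above, and $\bm{W}_h$ is the
learnable projection just introduced):
\begin{verbatim}
import torch
import torch.nn.functional as F

def transformer_layer_loss(A_S, A_T, H_S, H_T, W_h):
    # A_S, A_T: (h, l, l) unnormalized attention matrices of student/teacher;
    # H_S: (l, d'), H_T: (l, d); W_h: learnable (d', d) projection matrix.
    attn_loss = F.mse_loss(A_S, A_T)        # averages the MSE over the h heads
    hidn_loss = F.mse_loss(H_S @ W_h, H_T)  # map student states into teacher space
    return attn_loss, hidn_loss
\end{verbatim}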
\noindent{\bf Embedding-layer Distillation}. Similar to the hidden states based distillation, we also perform embedding-layer distillation and the objective is:
\begin{align}
\label{eq:emb_loss}
\mathcal{L}_{\text{embd}} = \texttt{MSE}(\bm{E}^{S}\bm{W}_e, \bm{E}^{T}),
\end{align} where the matrices $\bm{E}^{S}$ and $\bm{E}^{T}$ refer to the embeddings of student and teacher networks, respectively. In this paper, they have the same shapes as the hidden state matrices. The matrix $\bm{W}_e$ is a linear transformation playing a similar role as $\bm{W}_h$.
\comments{Following the teacher BERT's setting, we make the hidden size equal to the embedding size in student TinyBERT. Thus $\bm{E}^{S/T}$ have the same shapes with $\bm{H}^{S/T}$ respectively, and the learnable parameters $\bm{W}_e$ has the same shape with $\bm{W}_h$.}
\noindent{\bf Prediction-layer Distillation}. In addition to imitating the behaviors of intermediate layers, we also use the knowledge distillation to fit the predictions of teacher model as in~\citet{hinton2015distilling}. Specifically, we penalize the soft cross-entropy loss between the student network's logits against the teacher's logits:
\begin{align}
\label{eq:pred_loss}
\mathcal{L}_{\text{pred}} = \texttt{CE}(\bm{z}^{T}/t, \bm{z}^{S}/t),
\end{align}
where ${\bm z}^{S}$ and ${\bm z}^{T}$ are the logits vectors predicted by the student and teacher respectively, \texttt{CE} means the \textit{cross entropy} loss, and $t$ means the temperature value. In our experiment, we find that $t=1$ performs well.
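Concretely, the soft cross-entropy can be sketched as follows (our code,
added for illustration; the paper's setting corresponds to $t=1$):
\begin{verbatim}
import torch.nn.functional as F

def soft_cross_entropy(z_S, z_T, t=1.0):
    # CE(z_T / t, z_S / t): teacher soft targets vs. student log-probabilities
    p_T = F.softmax(z_T / t, dim=-1)
    log_p_S = F.log_softmax(z_S / t, dim=-1)
    return -(p_T * log_p_S).sum(dim=-1).mean()
\end{verbatim}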
Using the above distillation objectives~(i.e. Equations~\ref{eq:att_loss}, \ref{eq:hid_loss}, \ref{eq:emb_loss} and \ref{eq:pred_loss}), we can unify the distillation loss of the corresponding layers between the teacher and the student network:
\begin{eqnarray}
\mathcal{L}_{\text{layer}} \!\! = \!\!
\begin{cases}
\!\mathcal{L}_{\text{embd}}, \!\!\!\!\! & m \!= \!0 \\
\!\mathcal{L}_{\text{hidn}} \! + \! \mathcal{L}_{\text{attn}}, \!\!\!\!\! & M \!\geq \!\!m \!> \!0 \\
\!\mathcal{L}_{\text{pred}}, \!\!\!\!\! & m \!=\! M + 1
\end{cases}
\label{eq:model_loss}
\end{eqnarray}
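In code, Equation~\ref{eq:model_loss} becomes a simple dispatch over the layer
index, reusing the sketches above ($\bm{W}_e$ and $\bm{W}_h$ are the learnable
projections; this is our schematic, not released code):
\begin{verbatim}
import torch.nn.functional as F

def layer_loss(m, M, S, T, W_e, W_h):
    # S, T: outputs of the m-th student layer and the g(m)-th teacher layer,
    # as dicts with embeddings 'E', attention 'A', hidden states 'H', logits 'z'
    if m == 0:                 # embedding-layer distillation
        return F.mse_loss(S['E'] @ W_e, T['E'])
    if m <= M:                 # Transformer-layer distillation (Attn + Hidn)
        return sum(transformer_layer_loss(S['A'], T['A'], S['H'], T['H'], W_h))
    return soft_cross_entropy(S['z'], T['z'])   # prediction-layer distillation
\end{verbatim}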
\subsection{TinyBERT Learning}
The application of BERT usually consists of two learning stages: pre-training and fine-tuning. The wealth of knowledge learned by BERT in the pre-training stage is of great importance and should be transferred to the compressed model. Therefore, we propose a novel two-stage learning framework including the {\it general distillation} and the {\it task-specific distillation}, as illustrated in Figure~\ref{figure:tinybert_learning}. General distillation helps TinyBERT learn the rich knowledge embedded in pre-trained BERT, which plays an important role in improving the generalization capability of TinyBERT. The task-specific distillation further teaches TinyBERT the knowledge from the fine-tuned BERT. With the two-step distillation, we can substantially reduce the gap between teacher and student models.
{\bf General Distillation}. We use the original BERT without fine-tuning as the teacher and a large-scale text corpus as the training data. By performing the Transformer distillation~\footnote{In the general distillation, we do not perform prediction-layer distillation as Equation~\ref{eq:pred_loss}. Our motivation is to make the TinyBERT primarily learn the intermediate structures of BERT at pre-training stage. From our preliminary experiments, we also found that conducting prediction-layer distillation at pre-training stage does not bring extra improvements on downstream tasks, when the Transformer-layer distillation (Attn and Hidn distillation) and Embedding-layer distillation have already been performed.} on the text from general domain, we obtain a general TinyBERT that can be fine-tuned for downstream tasks. However, due to the significant reductions of the hidden/embedding size and the layer number, general TinyBERT performs generally worse than BERT.
{\bf Task-specific Distillation}. Previous studies show that the complex models, such as fine-tuned BERTs, suffer from over-parametrization for domain-specific tasks~\cite{kovaleva2019revealing}. Thus, it is possible for smaller models to achieve comparable performances to the BERTs. To this end, we propose to produce competitive fine-tuned TinyBERTs through the task-specific distillation. In the task-specific distillation, we re-perform the proposed Transformer distillation on an augmented task-specific dataset. Specifically, the fine-tuned BERT is used as the teacher and a data augmentation method is proposed to expand the task-specific training set. Training with more task-related examples, the generalization ability of the student model can be further improved.
\begin{algorithm}[tb]
\algsetup{linenosize=\small} \small
\caption{Data Augmentation Procedure for Task-specific Distillation}
\label{alg:algorithm}
\textbf{Input}: $\mathbf{x}$ is a sequence of words\\
\textbf{Params}: $p_{t}$: the threshold probability\\
\hspace*{1.1cm} $N_{a}$: the number of samples augmented per example\\
\hspace*{1.1cm} $K$: the size of candidate set \\
\textbf{Output}: ${D^{\prime}}$: the augmented data
\begin{algorithmic}[1]
\STATE $n \gets 0\ ;\ \ D^{\prime} \gets [\ ]$
\WHILE{$n < N_{a}$}
\STATE $\mathbf{x}_{m} \gets \mathbf{x}$
\FOR{$i \gets $1\ to\ len$(\mathbf{x})$}
\IF{$\mathbf{x}[i]$ \ is\ a\ single-piece\ word}
\STATE Replace $\mathbf{x}_{m}[i]$ with $\texttt{[MASK]}$
\STATE $C \gets K$ most probable words of $\texttt{BERT}(\mathbf{x}_{m})[i]$
\ELSE
\STATE $C \gets K$ most similar words of $\mathbf{x}[i]$ from GloVe
\ENDIF
\STATE Sample $ p \sim$ Uniform(0, 1)
\IF{$p \leq p_{t}$}
\STATE Replace $\mathbf{x}_{m}[i]$ with a word in $C$ randomly
\ENDIF
\ENDFOR
\STATE Append $\mathbf{x}_{m}$ to ${D^{\prime}}$
\STATE $n \gets n + 1$
\ENDWHILE
\STATE \textbf{return} $D^{\prime}$
\end{algorithmic}
\end{algorithm}
{\bf Data Augmentation}. We combine a pre-trained language model BERT and GloVe~\cite{pennington2014glove} word embeddings to do word-level replacement for data augmentation. Specifically, we use the language model to predict word replacements for single-piece words~\cite{wu2019conditional}, and use the word embeddings to retrieve the most similar words as replacements for multiple-piece words\footnote{A word is tokenized into multiple word-pieces by the tokenizer of BERT.}. Some hyper-parameters are defined to control the replacement ratio of a sentence and the amount of augmented data. More details of the data augmentation procedure are shown in Algorithm~\ref{alg:algorithm}. We set $p_{t}$ = 0.4, $N_{a}$ = 20, $K$ = 15 for all our experiments.
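A compact Python paraphrase of Algorithm~\ref{alg:algorithm} (the BERT
masked-language-model and GloVe lookups are abstracted into callables, since
their exact interfaces are implementation details; when no replacement is
sampled we keep the original word):
\begin{verbatim}
import random

def augment(x, predict_masked, glove_neighbors, is_single_piece,
            p_t=0.4, N_a=20, K=15):
    # x: list of words; returns the augmented data D' of Algorithm 1
    D = []
    for _ in range(N_a):
        x_m = list(x)
        for i, w in enumerate(x):
            if is_single_piece(w):
                # mask position i and take BERT's K most probable predictions
                cand = predict_masked(x, i, K)
            else:
                # K most similar words to w from GloVe embeddings
                cand = glove_neighbors(w, K)
            if random.random() <= p_t:
                x_m[i] = random.choice(cand)
        D.append(x_m)
    return D
\end{verbatim}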
The above two learning stages are complementary to each other: the general distillation provides a good initialization for the task-specific distillation, while the task-specific distillation on the augmented data further improves TinyBERT by focusing on learning the task-specific knowledge. Although there is a significant reduction of model size, with the data augmentation and by performing the proposed Transformer distillation method at both the pre-training and fine-tuning stages, TinyBERT can achieve competitive performances in various NLP tasks.
\section{Related Work}\label{sec:related}
\noindent{\bf Pre-trained Language Models Compression}
Generally, pre-trained language models~(PLMs) can be compressed by low-rank approximation~\cite{ma2019tensorized,Lan2020ALBERT}, weight sharing~\cite{dehghani2018universal,Lan2020ALBERT}, knowledge distillation~\cite{tang2019distilling,sanh2019distilbert,turc2019well,sun2020mobilebert,liu2020fastbert,wang2020minilm}, pruning~\cite{cui2019fine,mccarley2019pruning,Fan2020Reducing,Elbayad2020Depth-Adaptive,gordon2020compressing,hou2020dynabert} or quantization~\cite{shen2019q,zafrir2019q8bert}. In this paper, our focus is on knowledge distillation.
\noindent{\bf Knowledge Distillation for PLMs}
There have been some works trying to distill pre-trained language models~(PLMs) into smaller models. BiLSTM$_{\tiny \hbox{SOFT}}$~\cite{tang2019distilling} distills task-specific knowledge from BERT into a single-layer BiLSTM. BERT-PKD~\cite{sun2019patient} extracts knowledge not only from the last layer of the teacher, but also from intermediate layers at the fine-tuning stage. DistilBERT~\cite{sanh2019distilbert} performs distillation at the pre-training stage on a large-scale corpus. Among concurrent works, MobileBERT~\cite{sun2020mobilebert} distills a BERT$_{\rm LARGE}$ augmented with bottleneck structures into a 24-layer slimmed version by progressive knowledge transfer at the pre-training stage, and MiniLM~\cite{wang2020minilm} conducts deep self-attention distillation, also at the pre-training stage. By contrast, we propose a new {\it two-stage learning} framework to distill knowledge from BERT at both the pre-training and fine-tuning stages by a novel Transformer distillation method.
\noindent{\bf Pretraining Lite PLMs}
Other related works aim at directly pre-training lite PLMs. \citet{turc2019well} pre-trained 24 miniature BERT models and showed that pre-training remains important in the context of smaller architectures, and that fine-tuning pre-trained compact models can be competitive. ALBERT~\cite{Lan2020ALBERT} incorporates embedding factorization and cross-layer parameter sharing to reduce model parameters. Since ALBERT reduces neither the hidden size nor the number of Transformer layers, it still requires a large amount of computation. Another concurrent work, ELECTRA~\cite{clark2020electra}, proposes a sample-efficient task called replaced token detection to accelerate pre-training, and it also presents a 12-layer ELECTRA$_{\rm small}$ with performance comparable to TinyBERT$_{4}$. Different from these small PLMs, TinyBERT$_{4}$ is a 4-layer model, which can achieve a higher speedup.
\section{Introduction}
Let $p$ be an odd prime and $\mathbb{K} \supset \mathbb{Q}[ \zeta ]$ be a galois
extension containing the \nth{p} roots of unity, while $(\mathbb{K}_n)_{n \in
\mathbb{N}}$ are the intermediate fields of its cyclotomic $\mathbb{Z}_p$-extension
$\mathbb{K}_{\infty}$. Let $A_n = (\id{C}(\mathbb{K}_n))_p$ be the $p$-parts of the
ideal class groups of $\mathbb{K}_n$ and $A = \varprojlim_n A_n$ be their
projective limit. The subgroups $B_n \subset A_n$ are generated by
($p$-free powers of) the classes containing ramified primes above
$p$. If $\wp \subset \mathbb{K}$ is such a prime and $\rg{p} = [ \wp ]$ is its
class, having order $t \cdot p^q$ in the ideal class group, then
$\rg{p}^t \in B(\mathbb{K})$. We shall assume for simplicity that the orders
of the classes of primes above $p$ in $\mathbb{K}_n$ are all $p$-powers and we let
\begin{eqnarray}
\label{bram}
A'_n & = & A_n/B_n, \\
B & = & \varprojlim_n B_n, \quad A' = A/B.\nonumber
\end{eqnarray}
The quotients $A'_n$ arise also as ideal class groups of the ring of
$p$-integers $\id{O}(\mathbb{K}_n)[ 1/p ]$, see \cite{Iw}, \S 4. We let $E_n =
(\id{O}(\mathbb{K}_n))^{\times}$ be the global units of $\mathbb{K}_n$ and $E'_n =
(\id{O}(\mathbb{K}_n)[ 1/p ])^{\times}$ be the $p$-units.
We denote as usual the galois group $\Gamma = \mbox{ Gal }(\mathbb{K}_{\infty}/\mathbb{K})$ and
$\Lambda = \mathbb{Z}_p[ \Gamma ] \cong \mathbb{Z}_p[[ \tau ]] \cong \mathbb{Z}_p[[ T ]]$, where $\tau
\in \Gamma$ is a topological generator and $T = \tau-1$. With this, the module
$A$ is a finitely generated $\Lambda$-torsion module. We let
\[ \omega_n = (T+1)^{p^{n-1}} - 1 \in \Lambda, \quad \nu_{n+1,n} =
\omega_{n+1}/\omega_n \in \Lambda. \] The groups $A, A', B$ are
endowed with an action of $\Lambda$. The groups are multiplicative and
we write the action of $\Lambda$ accordingly, so $a^T = \tau(a)/a$,
etc. It may be useful in places to switch, for simplicity of notation, to additively written groups, and this shall be indicated in the text;
moreover, generic, abstract $\Lambda$-modules will be always written
additively, for the same reasons. We use the same notation for the
action of other group rings, such as $\mathbb{Z}_p[ \Delta ], \mathbb{Z}[ \Delta ]$,
etc.
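For later use we record the explicit form of the relative norm elements, immediate from the definitions of $\omega_n$ and $\nu_{n+1,n}$:
\[ \nu_{n+1,n} = \frac{(T+1)^{p^{n}}-1}{(T+1)^{p^{n-1}}-1} = \sum_{i=0}^{p-1} (T+1)^{i p^{n-1}} \equiv p \bmod \omega_n \Lambda . \]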
Complex conjugation $\jmath \in \mbox{ Gal }(\mathbb{K}/\mathbb{Q})$ acts on any module $X$
attached to $\mathbb{K}$ and its extensions, for instance $X = \id{O}(\mathbb{K}_n),
A_n, B_n$ but also the galois groups $\mbox{ Gal }(\mathbb{H}_n/\mathbb{K}_n)$, etc.,
inducing a decomposition in plus and minus parts: $X^+ = (1+\jmath) X,
X^- = (1-\jmath) X$. Note that $p$ is odd, and since we shall be
concerned only with $p$-groups, division by $2$ is obsolete. If
$\mathbb{L}_n/\mathbb{K}_n$ is a $p$-abelian extension which is galois over $\mathbb{Q}$ and
with $\mbox{ Gal }(\mathbb{K}/\mathbb{Q})$ acting on $\mbox{ Gal }(\mathbb{L}_n/\mathbb{K}_n)$, then we define
\begin{eqnarray}
\label{hilpm}
\mathbb{L}_n^+ = \mathbb{L}_n^{\hbox{\tiny Gal}(\mathbb{L}_n/\mathbb{K}_n)^-}, \quad \mathbb{L}_n^- =
\mathbb{L}_n^{\hbox{\tiny Gal}(\mathbb{L}_n/\mathbb{K}_n)^+} .
\end{eqnarray}
If $\mathbb{K}/\mathbb{Q}$ is galois with group $\Delta = \mbox{ Gal }(\mathbb{K}/\mathbb{Q})$ and $\wp
\subset \mathbb{K}$ is a prime above $p$, we denote by $C = \Delta/D(\wp)$ a
set of coset representatives of the decomposition group of $\wp$. If
$\wp^+$ is the prime of $\mathbb{K}^+$ above $\wp$, then we write $C^+ =
\Delta^+/D(\wp^+)$; if $\wp^+$ splits in $\mathbb{K}/\mathbb{K}^+$, the set $C^+$ acts
transitively on the pairs of conjugate primes above $p$ in $\mathbb{K}$. We
let $s = | C^+ |$ be the number of such pairs.
If $M$ is a Noetherian $\Lambda$-torsion module and $f \in \mathbb{Z}_p[ T ]$
is a distinguished polynomial, we define the $f$-part and the
$f$-torsion of $M$ by
\begin{eqnarray}
\label{parts}
M(f) & = & \{ x \in M \ : \ \exists n > 0 : f^n x = 0 \}, \\
M[ f ] & = & \{ x \in M \ : \ f x = 0 \} \subset M(f) .\nonumber
\end{eqnarray}
Since $M$ is finitely generated, there is a minimal $n$ such that $f^n
M(f) = 0$, and we denote this by ${\rm ord}_f(M) = n$, the
$f$-order. Moreover, there is an exact sequence of pseudoisomorphisms
\begin{eqnarray}
\label{frk}
0 \rightarrow M[ f ] \rightarrow M(f) \rightarrow M(f) \rightarrow M(f)/(f M(f)) \rightarrow 0,
\end{eqnarray}
in which the middle arrow is induced by the map $x \mapsto f x$. We
define herewith, in analogy to the $p$-rank of finite abelian
$p$-groups, the $f$-rank of $M$ as the common number of elements of a
minimal set of generators, up to pseudoisomorphism, of $M[ f ]$ and
$M/(f M(f))$, as $\Lambda$-modules.
If $X$ is a finite abelian $p$-group, its exponent is $\exp(X) =
\max\{ {\rm ord}(x) : x \in X\}$, where the order ${\rm ord}(x)$ is defined as
the smallest power of $p$ which annihilates $x$ in $X$. The \textit{subexponent}
is defined in this case by
\[ {\rm sexp} (X) = \min\{ {\rm ord}(x) \ : \ x \in X \setminus p X \} . \] For
instance, if $X = C_p \times C_{p^3}$, then $\exp(X) = p^3$, but
${\rm sexp} (X) = p$. We have $\exp(X) = {\rm sexp} (X)$ iff $X$ is the direct
product of cyclic groups of the same order.
Leopoldt formulated in 1962 the hypothesis that the $p$-adic regulator of
the units $E(\mathbb{K})$ should be non-vanishing. His initial conjecture
referred to abelian fields $\mathbb{K}$, but it was soon accepted that the same
should be expected for arbitrary number fields $\mathbb{K}$. The statement for
abelian fields was proved in 1967 by Brumer \cite{Br}, using a $p$-adic
variant of Baker's fundamental result on linear forms in logarithms and
an earlier argument of Ax \cite{Ax}. Greenberg showed in 1973 \cite{Gr}
how to define the $p$-adic regulator\footnote{Although Greenberg did not
use the term of $p$-adic regulator of the $p$-units, his construction
was later used by Gross for defining this term.} $R(E'(\mathbb{K}))$ of the
$p$-units, and proved, using the same argument of Ax and the
Baker-Brumer result on linear forms in $p$-adic logarithms, that the
regulator $R(E'(\mathbb{K}))$ does not vanish for abelian extensions
$\mathbb{K}/\mathbb{Q}$. Several years later, in 1981, Federer and Gross \cite{FG}
considered the question of the vanishing of $R(E'(\mathbb{K}))$ for arbitrary
CM extensions $\mathbb{K}/\mathbb{Q}$. Unlike Greenberg, they did not provide a proof
of this assumption; instead, they proved that $R(E'(\mathbb{K})) \neq 0$ is
equivalent to $B^- = A^-(T)$. Carroll and Kisilevski then gave an
example in \cite{CK}, showing that $(A')^-(T) = \{ 1 \}$ need not hold
for $\mathbb{Z}_p$-extensions of $\mathbb{K}$ other than the cyclotomic one.
The description in \cite{FG} yields a useful translation of the
Diophantine statement about the regulator into a class field
theoretical statement about the vanishing of $(A')^-(T)$. Quite at the
same time as Greenberg, and just around the Curtain, L. Kuz'min had
formulated in a lengthy paper \cite{Ku} on Iwasawa theory the {\em
Hypothesis H}, which contains the statement $| A'(T) | < \infty$ for
all number fields $\mathbb{K}$. The connection to regulators is not considered
in Kuz'min's paper, but we have here an adequate generalization of
Gross's conjecture to arbitrary number fields $\mathbb{K}$. In the case of CM
fields, the Hypothesis H also contains the statement $| (A^+)'(T) | <
\infty$. The conjecture of Leopoldt also has a simple class field
theoretical equivalent, which was proved by Iwasawa already in 1973,
in his seminal paper \cite{Iw}: for CM fields $\mathbb{K}$, this amounts to
the fact that the maximal $p$-abelian $p$-ramified extension
$\Omega(\mathbb{K}^+)$ is a finite extension of $\mathbb{K}^+_{\infty}$.
We stress here the dual Diophantine and class field theoretical
aspects of the conjectures of Gross-Kuz'min and Leopoldt by using the
common term of {\em regulator conjectures of classical Iwasawa
theory}. In 1986, L. J. Federer undertook the task of generalizing the
classical results of Iwasawa theory \cite{Fe}. These are results on
the asymptotic behavior of $A_n, A'_n$, and Federer considers
generalized class groups of $S$-integers. She thus considers the
structure of the galois groups of the maximal abelian $p$-extensions
$\mathbb{L}_n/\mathbb{K}_n$ which are ray class fields for some fixed ray, and in
addition split the primes contained in a separate fixed set of places
of $\mathbb{K}$. The paper is algebraic in nature with little reference to the
of $\mathbb{K}$. The paper is algebraic in nature with little reference to the
field theoretic background, but it confirms the general nature of
Iwasawa theory. In this flavor, one may ask in what way the regulator
conjectures of classical Iwasawa theory generalize to Federer's ray
class fields, and whether these generalizations also afford equivalent
formulations, in Diophantine and in class field theoretical forms. It
is likely that one may encounter a proper embedding of Jaulent's
conjecture -- which is a purely Diophantine generalization of the
Leopoldt conjecture, see \cite{Ja} -- in a systematic context of class
field theory.
The purpose of these brief remarks was to situate the questions and
methods that we shall deploy below in their broader context. One can
find in \cite{LMN} or in Seo's recent paper \cite{Seo} a good overview
of further conjectures related to the ones discussed above, as well as
an extensive literature on recent research related to the
Gross-Kuz'min conjecture. Jaulent established connections to
$K$-theory, e.g. in \cite{Ja1}. In this paper we prove the following
particular case of the conjecture:
\begin{theorem}
\label{gc}
Let $p$ be an odd prime and $\mathbb{K}$ a CM extension of $\mathbb{Q}$. Then
\begin{eqnarray}
\label{gross}
(A')^-(T) = \{ 1 \}.
\end{eqnarray}
Here $A'(T) \subset A'$ is the maximal submodule of the Noetherian
$\Lambda$-torsion module $A'$, which is annihilated by some finite
power of $T$.
\end{theorem}
The fact that $(A')^+(T) \sim B^+ \sim \{ 1 \}$ was established for
arbitrary CM fields in a separate paper concerning the conjecture of
Leopoldt \cite{Mi}.
\subsection{Notations and plan of the paper}
Unless otherwise specified, the fields $\mathbb{K}$ in this paper verify the following
\begin{assum}
\label{kassum}
The field $\mathbb{K}$ is a galois CM extension of $\mathbb{Q}$ which contains
the \nth{p} roots of unity and such that the primes above $p$ are
totally ramified in the cyclotomic $\mathbb{Z}_p$-extension
$\mathbb{K}_{\infty}/\mathbb{K}$ and split in $\mathbb{K}/\mathbb{K}^+$.
There is an integer $k$ such that $\mu_{p^k} \subset
\mathbb{K}$ but $\mu_{p^{k+1}} \not \subset \mathbb{K}$ and we use the numeration
$\mathbb{K}_1 = \mathbb{K}_2 = \ldots = \mathbb{K}_k \neq \mathbb{K}_{k+1}$; consequently, for $n > k$
we have $[ \mathbb{K}_n : \mathbb{K}_1 ] = p^{n-k}$. The base field will be chosen
such that for all $a = (a_n)_{n \in \mathbb{N}} \in A^-(T) \setminus
(A^-(T))^p$ we have $a_1 \neq 1$. As consequence of the numbering of
fields, we also have $a_1 = a_2 = \ldots = a_k$, etc. There is an
integer $z(a) > -k$ which does not depend on $n$, and such that
${\rm ord}(a_n) = p^{n+z(a)}$ for all $n \geq k$.
\end{assum}
We let $\mathbb{H}_n \supset \mathbb{K}_n$ be the maximal $p$-abelian unramified
extensions of $\mathbb{K}_n$ -- the $p$-Hilbert class fields of $\mathbb{K}_n$ -- and
$X_n := \mbox{ Gal }(\mathbb{H}_n/\mathbb{K}_n) \cong A_n$, via the Artin Symbol, which we
shall denote by $\varphi$. Let $\mathbb{H} = \cup_n \mathbb{H}_n$ be the maximal
unramified $p$-extension of $\mathbb{K}_{\infty}$ and $X =
\mbox{ Gal }(\mathbb{H}/\mathbb{K}_{\infty})$. The isomorphisms $\varphi: A_n \rightarrow X_n$ are
norm compatible and yield an isomorphism in the projective limit,
which we shall also denote by $\varphi$:
\begin{eqnarray}
\label{plim}
\varphi(A) = \varphi(\varprojlim_n A_n) = \varprojlim_n(\varphi(A_n)) =
\varprojlim_n(X_n) = X.
\end{eqnarray}
The maximal subextension of $\mathbb{H}_n$ which splits all the primes above $p$ is
denoted by $\mathbb{H}'_n \subset \mathbb{H}_n$ and we have
\[ \mbox{ Gal }(\mathbb{H}'_n/\mathbb{K}_n) \cong A'_n, \quad \mbox{ Gal }(\mathbb{H}_n/\mathbb{H}'_n) = \varphi(
B_n ). \] The injective limit is $\mathbb{H}' \subset \mathbb{H}$, with
$\mbox{ Gal }(\mathbb{H}/\mathbb{H}') = \varphi(B)$ (see e.g. \cite{Iw}, \S\S 3--4).
The maximal $p$-abelian $p$-ramified extension of $\mathbb{K}_n$ is denoted by
$\Omega_n = \Omega(\mathbb{K}_n)$ and $\Omega = \cup_n \Omega(\mathbb{K}_n)$. Note
that $\Omega_n \supset \mathbb{K}_{\infty} \cdot \mathbb{H}_n$, for all $n$. It will
be useful to restrict to products of $\mathbb{Z}_p$-extensions. Therefore we
define $\tilde{\Omega}_n \subset \Omega_n$ as the maximal product of
$\mathbb{Z}_p$-extensions of $\mathbb{K}_n$. The units and $p$-units of $\mathbb{K}_n$ are
$E(\mathbb{K}_n)$, resp. $E'(\mathbb{K}_n)$ and they generate the following Kummer
extensions contained in $\Omega_n$:
\begin{eqnarray}
\label{omes}
\Omega_{E}(\mathbb{K}_n) & = & \Omega_n \ \bigcap \ \left( \bigcap_{m \geq n} \mathbb{K}_m[ E(\mathbb{K}_m)^{1/p^m} ] \right), \\
\Omega_{E'}(\mathbb{K}_n) & = & \Omega_n \ \bigcap \ \left( \bigcap_{m \geq n} \mathbb{K}_m[ E'(\mathbb{K}_m)^{1/p^m} ] \right). \nonumber
\end{eqnarray}
The extensions $\mathbb{H}^-, \Omega^-, (\mathbb{H}')^-$, etc. are defined according
to \rf{hilpm}. We denote by $N_{m,n}$ the norms $\mbox{\bf N}_{\mathbb{K}_m/\mathbb{K}_n}$,
for $m > n > 0$. In particular, $N_{n,1} = \mbox{\bf N}_{\mathbb{K}_n/\mathbb{K}}$. Note also
that $\mathbb{K}$ being a galois extension, $\Delta = \mbox{ Gal }(\mathbb{K}/\mathbb{Q})$ acts
transitively upon the primes $\wp \subset \mathbb{K}$ above $p$. This implies
in particular that the modules $B_n$ are $\mathbb{Z}_p[ \Delta ]$-cyclic,
generated by the classes $b_n = [ \wp_n ]$ of one prime above $p$. In
order to obtain norm coherent sequences, it suffices to choose the
sequence of primes $\wp_n \subset \mathbb{K}_n$ with $N_{m,n} \wp_m = \wp_n$.
We let $\eu{K}_n = \mathbb{K}_n \otimes_{\mathbb{Q}} \mathbb{Q}_p = \prod_{\eu{p}}
\mathbb{K}_{n,\eu{p}}$, where $\eu{p}$ runs through all the ramified places
above $p$. If $\eu{p}_n \subset \mathbb{K}_n$ is a sequence of such places
with $\eu{p}_n^{p^{n-k}} = \wp \subset \mathbb{K}$, then $\mathbb{K}_{n,
\eu{p}_n}/\mathbb{K}_{\wp}$ is an intermediate field of the compositum of
the local cyclotomic $\mathbb{Z}_p$-extension of $\mathbb{Q}_p$ and $\mathbb{K}_{\wp}$. We
denote by $U_n = \id{O}^{\times}(\eu{K}_n)$ the product of the local
units in the various completions and $E_n = \id{O}^{\times}(\mathbb{K}_n)$ are
the global units. Let $P = \{ (\eu{P}_i, \overline{\eu{P}}_i) \subset
\mathbb{K} \ : \ i = 1, 2, \ldots, s \}$ be the set of pairs of conjugate
primes above $p$ in $\mathbb{K}$. If $\mathbb{K}$ is galois, then $P = \{ (\nu \wp,
\nu \overline{\wp}) \ : \ \nu \in C^+ \}$, with $C^+$ defined
above. For $\eu{P} \in P$ we let $\iota_{\eu{P}} : \eu{K}_n \rightarrow
\mathbb{K}_{\eu{P}}[ \mu_{p^n} ]$ be the natural projection.
We show below that the assumptions \ref{kassum} are not restrictive
and imply the claim of the Theorem \ref{gc} in full generality. Note
that if the primes above $p$ are not split in $\mathbb{K}/\mathbb{K}^+$, then $B^- =
\{ 1 \}$ and the Lemma \ref{t2} below implies that $A^-(T) = \{ 1 \}$,
so this case is trivial.
The proof of Theorem \ref{gc} is based on the following two
observations, which are proved in chapter 2:
\begin{itemize}
\item[ 1. ] There is an injective map $\hat{T} : (A')^-[ T ] \rightarrow B^-$.
\item[ 2. ] The module $A^+(T^*)$ is finite for every CM extension of $\mathbb{Q}$.
\end{itemize}
Following the construction of Greenberg in \cite{Gr} and the
generalization given by Gross and Federer, we prove in Chapter 3 that,
if $(A')^-[ T ]$ is infinite, so the Theorem \ref{gc} is false, then
the map $\hat{T}$ extends to a map $\Theta : (A')^-[ T ] \rightarrow \mathbb{Z}_p[
\Delta ]$. If $\mathbb{L} = \mathbb{K}_{\infty}[ p^{1/p^{\infty}} ]$, this map leads
for all $a \in (A')^-[ T ]$ to non trivial, totally unramified
extensions $\mathbb{F}_a = \mathbb{L}[ b^{\Theta(a)/p^{\infty}} ]/\mathbb{L}$. A detailed
investigation of the finite levels of $\mathbb{F}_a$ leads in Chapter 4 to the
consequence that there exists a sequence $a'(a) \in A^+[ T^* ]$ of
infinite order. This contradicts fact 2 above, and provides a proof of
the Theorem \ref{gc}.
\subsection{List of symbols}
We give here a list of the notations introduced below in connection with
Iwasawa theory
\begin{eqnarray*}
\begin{array}{l c l}
p & = & \hbox{An odd rational prime}, \\
\zeta_{p^n} & = & \hbox{Primitive
\nth{p^n} roots of unity with $\zeta_{p^n}^p = \zeta_{p^{n-1}}$ for all $n >
0$.},\\
\mu_{p^n} & = & \{ \zeta_{p^n}^k, k \in \mathbb{N} \},
\end{array}
\end{eqnarray*}
\begin{eqnarray*}
\begin{array}{l c l}
\mathbb{K} & = & \hbox{A galois CM extension determined by Assumption \ref{kassum}}, \\
\mathbb{K}_{\infty}, \mathbb{K}_{n}& = & \hbox{The cyclotomic $\mathbb{Z}_p$ - extension of $\mathbb{K}$, and
intermediate fields,}\\
\Delta & = & \mbox{ Gal }(\mathbb{K}/\mathbb{Q}), \\
A(\rg{K}) & = &
\hbox{$p$-part of the ideal class group of the field $\rg{K}$}, \\
\Gamma & = & \mbox{ Gal }( \mathbb{K}_{\infty}/\mathbb{K} ) = \mathbb{Z}_p \tau, \quad \hbox{$\tau$ a
topological generator of $\Gamma$},\\
T & = & \tau -1, \\ * & = & \hbox{Iwasawa's
involution on $\Lambda$ induced by $T^* = (p-T)/(T+1)$},\\
\jmath & = & \hbox{The image of complex conjugation
in $\mbox{ Gal }(\mathbb{K}_{\infty}/\mathbb{Q})$}, \\
s & = & \hbox{The
number of primes above $p$ in $\mathbb{K}^+$}, \\
C & = & \hbox{Coset representatives for $\Delta/D(\wp)$}, \\
A'_n = A'(\mathbb{K}_n) & = & \hbox{The $p$ - part of the ideal
class group of the $p$ - integers of $\mathbb{K}_n$}, \\
A' & = & \varprojlim A'_n, \\
B & = & \langle \{ b = (b_n)_{n \in \mathbb{N}} \in A : b_n = [ \wp_n ], \wp_n \supset
(p) \} \rangle_{\mathbb{Z}_p}, \\
\varphi & = & \hbox{The
Artin symbol, see also \rf{plim} }, \\
\mathbb{H} & = & \hbox{The maximal $p$ - abelian unramified
extension of $\mathbb{K}_{\infty}$},\\
\mathbb{H}' \subset \mathbb{H}_{\infty} & & \hbox{The maximal
subextension that splits the primes above $p$}, \\
\Omega(\mathbb{K}) & = & \hbox{The
maximal $p$-abelian $p$-ramified extension of $\mathbb{K}$},\\
\Omega_n & = & \Omega(\mathbb{K}_n) = \hbox{The
maximal $p$-abelian $p$-ramified extension of $\mathbb{K}_n$},\\
\tilde{\Omega}_n & \subseteq & \Omega_n: \hbox{
The maximal product of $\mathbb{Z}_p$-extensions of $\mathbb{K}_n$ in $\Omega_n$},\\
\Omega_E(\mathbb{K}_n) & = & \Omega_n \cap \bigcap_{m > n} \mathbb{K}_m[ E(\mathbb{K}_m)^{1/p^m} ], \hbox{ see \rf{omes},} \\
\Omega_{E'}(\mathbb{K}_n) & = & \Omega_n \cap \bigcap_{m > n} \mathbb{K}_m[ E'(\mathbb{K}_m)^{1/p^m} ], \hbox{ see \rf{omes}.} \\
E_n & = & \id{O}(\mathbb{K}_n)^{\times}, \\
E'_n & = & (\id{O}(\mathbb{K}_n)[ 1/p ])^{\times}, \\
U_n & = & U(\mathbb{K}_n) = \id{O}\left(\mathbb{K}_n \otimes_{\mathbb{Q}} \mathbb{Q}_p\right) = \prod_{\nu \in
C} U(\mathbb{K}_{n, \nu \wp}), \\
U^{(1)}_n & = & \prod_{\nu \in C} U^{(1)}(\mathbb{K}_{n, \nu \wp}) \\
\overline{E}_n & = & \left(\cap_N E_n \cdot U_n^{p^N} \right) \cap U^{(1)}_n.
\end{array}
\end{eqnarray*}
\section{General results}
Let now $\mathbb{K}'$ be an arbitrary CM extension of $\mathbb{Q}$. Then there is a
finite algebraic extension $\mathbb{K}/\mathbb{K}'$, which is galois, CM and satisfies
all the conditions for $\mathbb{K}$ which were defined above. Let thus $\mathbb{K} =
\mathbb{K}'[ \theta ]$, as a simple algebraic extension and let $\mathbb{K}'_{\infty}
= \mathbb{K}' \cdot \mathbb{B}$ be the cyclotomic $\mathbb{Z}_p$-extension of $\mathbb{K}'$, so
$\mathbb{K}_{\infty} = \mathbb{K}'_{\infty}[ \theta ]$.
If the Gross conjecture does not hold for $\mathbb{K}'$, then $(A'(\mathbb{K}'))^-(T)$ is
infinite. Since $\mbox{ Ker }(\iota: A(\mathbb{K}') \rightarrow A(\mathbb{K}))$ is finite and the ideal lift
map commutes with the action of $\Lambda$, it follows that $(A'(\mathbb{K}))^-(T)$
must also be infinite. Therefore it suffices to prove the conjecture for
extensions verifying the Assumptions \ref{kassum}.
\subsection{Class groups}
The following simple result, noted by Greenberg in \cite{Gr}, implies
that if $a \in A^-(T)$ represents the class $a' \in (A')^-(T)$, then
$a = a' \cdot b$ for some non trivial $b \in B^-$. By identifying $a
\in A^-$ with the class $a' = a B^- \in (A')^-$ we may consider the
interesting module $B' = ((A')^-[ T ])^T$. This is of course the
trivial module in $A'$, but the identification yields
\[ B' = \{ \ a^T \ : \ a \in A^- \hbox{ and } a B \in (A')^-[ T ] \ \}
\subset B^- , \] which is a well defined submodule of $B^-$.
\begin{lemma}
\label{t2}
Let $\mathbb{K}$ be a CM extension of $\mathbb{Q}$ and suppose that $(A')^-(T) \neq \{
1 \}$. Let $B' = ((A')^-[ T ])^T \subset B^-$. Then $B' \cong (A')^-[
T ]$ as $\mathbb{Z}_p$-modules and there is an injective map $\widehat{T} : (A')^-[ T ]
\hookrightarrow B^-$ given by the restriction of $T : A^- \rightarrow A^-$.
\end{lemma}
\begin{proof}
Assuming that $(A')^-(T) \neq \{ 1 \}$, there is some $a = (a_n)_{n \in \mathbb{N}}
\in A^-$ with non trivial image $a' \in (A')^-[ T ]$. We show that $(a')^T
\neq 1$; since this holds for arbitrary $a' \in (A')^-[ T ]$, it follows
that the map $\widehat{T} : (A')^-[ T ] \rightarrow B^-$ is injective.
Let $\eu{Q} \in a_n$ be a prime, let $n$ be sufficiently large
and ${\rm ord}(a_n) = p^{n+z}$, for some $z \in \mathbb{Z}$. Let $(\alpha_0) =
\eu{Q}^{p^{n+z}}$ and $\alpha =
\alpha_0/\overline{\alpha_0}$. Since $a' \in (A')^-[ T ]$, it also
follows that $a_n^T \in B_n^-$ and thus $\eu{Q}^T = \eu{R}_n$
with $b_n := [ \eu{R}_n ] \in B_n^-$. If $b_n \neq 1$, then we are
done.
We thus assume that $b_n = 1$ and derive a contradiction. In this case
$\eu{R}_n^{1-\jmath} = (\rho_n)$ is a $p$-unit and $(\alpha^T) =
(\rho_n^{p^{n+z}})$, so
\[ \alpha^T = \delta \rho_n^{p^{n+z}}, \quad \delta \in
\mu_{p^n}. \] Taking the norm $N = N_{\mathbb{K}_n/\mathbb{K}}$ we obtain $1 =
N(\delta) N(\rho_n)^{p^{n+z}}$. The unit $N(\delta) \in
E(\mathbb{K})^{1-\jmath} = \mu(\mathbb{K}) = \langle \zeta_{p^k} \rangle$. It follows that
$\rho_1 := N(\rho_n)$ verifies $\rho_1^{p^{n+z}} = \delta_1$. Thus
$\rho_1$ must be a root of unity of $\mathbb{K}$ and thus $\rho_1^{p^k} =
\pm 1$. By Hilbert 90 we deduce that $\rho_n^{p^k} = \pm x^T, x \in
\mathbb{K}_n^{\times}$. In terms of ideals, we have then
\begin{eqnarray*}
\eu{Q}^{(1-\jmath) T p^{n+z}} & = & (\alpha^T) = (x^{T p^{n+z-k}}), \quad
\hbox{hence} \\
\left(\eu{Q}^{(1-\jmath) p^k}/(x)\right)^{T p^{n+z-k}} & = & (1) \quad
\Rightarrow \quad (\eu{Q}^{(1-\jmath) p^k}/(x))^T = (1).
\end{eqnarray*}
The ideal $\eu{B} = \eu{Q}^{(1-\jmath) p^k}/(x) \in a_n^{2 p^k}$
verifies thus $\eu{B}^T = (1)$; since $a_n \in A'_n$, it follows that
$\eu{B} = \id{O}(\mathbb{K}_n) \eu{B}_1$ is the lift of an ideal $\id{B}_1
\subset \mathbb{K}$. But then ${\rm ord}(a'_n) \leq p^k \exp(A_1^-)$, so the orders
of $a'_n$ are uniformly bounded, which is impossible since $a'_n \in
A_n^-$.
\end{proof}
The next result is a known fact of class field theory which will be
used for deriving the final contradiction for the proof of the Gross
conjecture.
\begin{lemma}
\label{noT*}
Let $\mathbb{K}$ be a CM extension of $\mathbb{Q}$. Then $A^+(T^*)$ is a finite
module.
\end{lemma}
\begin{proof}
Suppose that $A^+(T^*)$ is infinite. Then
$\mbox{ Gal }(\Omega^+/\mathbb{K}_{\infty})(T^*)$ is a fortiori infinite and since
$E^-(\mathbb{K}_n) = \mu(\mathbb{K}_n)$, we have $\Omega^+ = \mathbb{K}_{\infty}[
(A^-)^{1/p^{\infty}} ]$. Let $\id{B} \subset A^-$ be the submodule of sequences
of classes with $\mathbb{K}_{\infty}[ b^{1/p^{\infty}} ] \subset \mathbb{H}^+$; it follows
also that $\id{B}(T)$ is infinite too, being -- up to possible finite
cokernel -- the radical of the maximal subextension $\mathbb{L} \subset
\mathbb{H}^+$ with $\mbox{ Gal }(\mathbb{L}/\mathbb{K}_{\infty})^{{T^*}^m} = \{ 1 \}$ for some $m >
0$. Thus $\id{B}_n \subset A^-_n$ is the submodule of classes $b'_n \in
A^-_n$ such that there is an $\eu{B} \in b'_n$ and $\beta \in
\eu{B}^{{\rm ord}(b'_n)}$ for which $\mathbb{K}_{\infty}[ \beta^{1/{\rm ord}(b'_n)} ] \subset
\mathbb{H}_{\infty}$ is totally unramified over $\mathbb{K}_{\infty}$, and thus
\[ \mathbb{K}_{\infty}[ (\id{B})^{1/p^{\infty}} ] = \mathbb{H}^+. \]
Since $B^- \cap \id{B} = \{ 1 \}$ -- we shall give a detailed proof of
this fact below -- it follows that $\id{B} \cap (A')^-(T) \neq \{ 1
\}$. Let thus $b' = (b'_n)_{n \in \mathbb{N}} \in \id{B} \cap (A')^-(T) $ and
$\eu{B}_n \in b'_n, \beta \in \eu{B}_n^{(1-\jmath) {\rm ord}(b'_n)}$. Then
$\beta \in U(\mathbb{K}_n)^{{\rm ord}(b'_n)}$. In view of Lemma \ref{t2}, there is
a $j > 0$ such that $(b')^{T^j} = b \in B^-$ and $b \neq 1$.
Therefore $\eu{B}_n^{T^j} \in b_n$ while
\[ \beta^{T^j} \in b_n^{{\rm ord}(b_n)} \cap U(\mathbb{K}_n)^{{\rm ord}(b_n)}. \] We
reach thus a contradiction to $B^- \cap \id{B} := \{ 1 \}$, which
shows that the hypothesis that $A^+(T^*)$ is infinite is untenable.
\end{proof}
\begin{remark}
\label{Jau}
I owe the following observations to a question raised by F. Jaulent:
let $a' \in (A')^-(T) \setminus (A^-)^p$. Then there is an $a \in A^-
\setminus (A^-)^p$ with $a' = a \cdot B^-$. For $a' \in (A')^-[ T ]
\setminus (A^-)^p$ we let $a' = a B$ and assume that $a = d^p$ for some
$d \in A^-$; by the above lemma $(a')^T = a^T = d^{p T} \in B^- \cap
A^p$. But for $x \in A^-$ with $x^p \in A^p \cap B^-$ we have $x^{T
p} = 1$, so $x^T = 1$, since $x \in A^-$ has infinite order. If $x
\not \in B^-$, then $x^T \neq 1$ by the above lemma - so we must have
$x \in B^-$ and thus
\[ (A^-)^p \cap B^- = (B^-)^p. \]
Finally, we claim that for $a \in A[ T^{p-2} ]$ we have ${\rm ord}(a_n) = p
{\rm ord}(a_{n-1})$ for all $n > 1$. Indeed, since the ideal lift map is
injective on the minus part, we have
\[ {\rm ord}(a_{n-1}) = {\rm ord}(N_{n,n-1}(a_n)) = {\rm ord}\left(\left(p
f_n(\omega_{n-1}) + \omega_{n-1}^{p-1}\right) a_n\right) = {\rm ord}(p
a_n), \] as claimed.
\end{remark}
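For the convenience of the reader, we note that the identity $N_{n,n-1} = p f_n(\omega_{n-1}) + \omega_{n-1}^{p-1}$ used in the remark follows from an elementary binomial computation: writing $x = \omega_{n-1}$,
\[ \nu_{n,n-1} = \sum_{i=0}^{p-1} (1+x)^{i} = \frac{(1+x)^p - 1}{x} = x^{p-1} + p f_n(x), \quad f_n \in \mathbb{Z}[ x ], \]
since $p$ divides the binomial coefficients $\binom{p}{i}$ for $0 < i < p$.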
Next, we investigate the group structure of $B^-$:
\begin{lemma}
\label{bstruct}
The module $B^-$ is spanned by the classes $\nu b, \nu \in C^+$ and
$B^- = \mathbb{Z}_p[ \Delta ] b$ is a cyclic $\mathbb{Z}_p[ \Delta ]$-module of
$\mathbb{Z}_p$-rank $s = | C^+ |$.
\end{lemma}
\begin{proof}
We recall that $\eu{R}_n = \wp_n^{1-\jmath}$ and $b_n = [ \eu{R}_n ]
\in B_n^-$; then $b = \varprojlim_n b_n$ and it follows from the
definition of $B^-$ that $C^+ b = \{ \nu b \ : \ \nu \in C^+\}$
generate $B^-$ as a $\mathbb{Z}_p$-module. Note that since $b^T = 1$, it
follows that the structure of $B^-$ as $\mathbb{Z}_p$- and as
$\Lambda$-modules coincide. It remains to show that the $C^+ b$ are
linearly independent over $\mathbb{Z}_p$. If this were not the case, then
there is a $\mathbb{Z}_p$-linear dependence and we can assume without loss
of generality that
\[ b = \prod_{\nu \in C^+; \nu \neq 1} \nu (b^{n_{\nu}}) =
b^{\theta}, \quad n_{\nu} \in \mathbb{Z}_p . \]
Let $n > 0$ be fixed and let $z_{\nu} \in \mathbb{Z}$ approximate $n_{\nu}$
modulo $q p^{n-k}$. Then
\[ \eu{R}_n = (x) \cdot \prod_{\nu \in C^+; \nu \neq 1} \nu
(\eu{R}_n^{z_{\nu}}) , \quad x \in \mathbb{K}_n^{1-\jmath}. \] By applying $T$
to the above identity, it follows that $(x^T) = (1)$, so $x^T = \delta
\in \id{O}^{\times}(\mathbb{K}_n)$. But $x \overline x = 1$ so a fortiori
$x^T \in \mathbb{K}_n^{1-\jmath}$, thus $\delta$ is a root of unity and there
is an integer $0 \leq a < p^n$ such that $x^T =
\zeta_{p^n}^a$. Moreover, taking the norm to $\mathbb{K}$ implies that
$N_{n,1}(\zeta_{p^n}^a) = \zeta^a = 1$ and thus $a \equiv 0 \bmod
p^k$. Since $\mu_{p^n}^p = \mu_{p^n}^T$, there is a root of unity $\xi
\in \mu_{p^n}$ such that $x^T = \xi^T$. By replacing $x$ with
$\overline{\xi} x \in (x)$, we see that we may choose $x \in
\mathbb{K}^{1-\jmath}$. The primes $\nu(\wp_n)$, for $\nu \neq 1$, are coprime to $\wp_n$;
taking the valuation at $\wp_n$ in the previous identity implies
$v_{\wp_n}(x) = 1$. Thus $x \in \wp_n \cap \mathbb{K} =
\wp$ and since the latter prime is totally ramified, we have
$v_{\wp_n}(\wp) = p^{n-k}$ and a fortiori $p^{n-k} | v_{\wp_n}(x)$, in
contradiction with the condition $v_{\wp_n}(x) = 1$ derived above. It
follows that $\nu b_n$ are indeed linearly independent, which
completes the proof in this case.
\end{proof}
\subsection{Norm residues and local uniformizors}
We consider the set of formal $\mathbb{Z}_p$-linear combinations
\begin{eqnarray}
\label{zpc}
\eu{ C } = [ C^+ ]_{\mathbb{Z}_p} = \left\{ t = \sum_{\nu \in C^+ } c_{\nu} \nu \ : \ c_{\nu} \in \mathbb{Z}_p \right\} \subset
\mathbb{Z}_p[ \Delta ].
\end{eqnarray}
This is a $\mathbb{Z}_p$-submodule of $\mathbb{Z}_p[ \Delta ]$, but it is in general
not a ring. If $M$ is a $\mathbb{Z}_p[ \Delta ]$-module and $x \in M$, we shall write
\[ \eu{C} x = \left\{ t x = \prod_{\nu \in C^+} \nu(x)^{c_{\nu}} \ :
\ t = \sum_{\nu \in C^+} c_{\nu} \nu \in \eu{C} \right\} \subset M.
\]
If $x$ is in addition fixed by $D(\wp)$, then $\eu{C} x \subset M$ is
a canonical module in the sense that it does not depend on the choice
of $C^+$. Through the rest of this paper, $\eu{C}$ will be applied to
$D(\wp)$-invariant elements. As an important instance, $B^- = \eu{C} b$
in a canonical way, since $b$ is fixed by $D(\wp)$; Lemma
\ref{bstruct} implies that in fact $C^+ b$ forms a basis of the
$\mathbb{Z}_p$-module $B^-$.
We let $\rho \in \mathbb{K}^{1-\jmath}$ be fixed, such that
$\wp^{(1-\jmath) q} = \eu{R}^q = (\rho)$: for $e \in \mathbb{Z}$ divisible by
the order $w = | W |$ of the group of roots of unity $W \subset E(\mathbb{K})$,
$\rho^e$ is uniquely defined by $\wp$. We also have:
\begin{eqnarray}
\label{rho}
\left(\eu{R}_n^{p^{n-k}}\right)^q = \eu{R}^q = (\rho), \quad \forall n > 0.
\end{eqnarray}
If $c_n = b_n^{\theta} \in B_n^-$ is an arbitrary class of $B_n^-$, with
$\theta = \sum_{\nu \in C^+} c_{\nu} \nu \in \eu{C}$, and for $\eu{B}
\in c_n$ an arbitrary ideal, we have the following useful
relation:
\begin{eqnarray}
\label{bn}
\eu{B}^{q p^{n-k}} = (\gamma^{q p^{n-k}} \rho^{\theta} ), \quad
\gamma \in \mathbb{K}_n^{1-\jmath} \quad
\hbox{ with } \quad \eu{B} = (\gamma) \eu{R}_n^{\theta}.
\end{eqnarray}
We next investigate some useful norm coherent sequences of
uniformizors in local fields. Let $\rg{K}_m = \mathbb{Q}_p[ \mu_{p^m} ]$ be
the \nth{p^m} cyclotomic extension of $\mathbb{Q}_p$ and $\rg{B}_m \subset
\rg{K}_m$ be the subfield fixed by the unique cyclic subgroup of order
$p-1$; thus $\rg{B}_m \subset \rg{B} := \cup_m \rg{B}_m$, which is the
cyclotomic $\mathbb{Z}_p$-extension of $\mathbb{Q}_p$. The numbers $1-\zeta_{p^m} \in
\rg{K}_m$ are uniformizors with the property of being norm coherent
with respect to the norms $N_{m',m} : \rg{K}_{m'} \rightarrow \rg{K}_m$. Then
$\varsigma_m := \mbox{\bf N}_{\rg{K}_m/\rg{B}_m}(1-\zeta_{p^m})$ form a
fortiori a norm coherent sequence of uniformizors for $\rg{B}_m$, with
\varsigma_1 = p$. Let $\delta_m = \varsigma_m^T \in
\id{O}^{\times}(\rg{B}_m)$; then the $\delta_m$ form a norm coherent sequence
of units with $N_{m,1} (\delta_m) = 1$. Moreover, $\delta_m \not \in
\rg{B}_m^p$; in order to see this, we consider the cyclotomic
$\mathbb{Z}_p$-extension $\mathbb{B}/\mathbb{Q}$ with $\mathbb{B}_m = \mathbb{B} \cap \rg{B}_m$. Since
$\delta_m \in E(\mathbb{B}_m)$, the assumption $\delta_m \in \rg{B}_m^p$,
would imply that $\mathbb{B}_m[ \zeta, \delta_m^{1/p} ]$ is an unramified
extension of $\mathbb{Q}[ \zeta_{p^m} ]$ and Kummer duality leads to a
contradiction to Herbrand's Theorem (\cite{Wa}, p. 100). Let now
$\rg{L} \supset \mathbb{Q}_p$ be any finite extension containing the \nth{p}
roots of unity, let $\rg{L}_m = \rg{L} \cdot \rg{K}_m$ and let $e$ be
the ramification index in $\rg{L}/(\rg{L} \cap \rg{B}_{\infty})$; this
ramification index is constant for all extensions $\rg{L}_m/(\rg{L}_m
\cap \rg{B}_{\infty})$. If $\eu{M}_m \subset \rg{L}_m$ is the maximal
ideal, then $\varsigma_m \in \eu{M}_m^e$ for all sufficiently large
$m$. Note the identity
$N_{\mathbb{B}_m/\mathbb{Q}} = p^{m-1} + T f(T) \in \mathbb{Z}[ T ]$, which holds for some
(distinguished) polynomial $f \in \mathbb{Z}[ T ]$. It implies that
\[ p = N_{\mathbb{B}_m/\mathbb{Q}}(\varsigma_m) = \varsigma_{m}^{p^{m-1}} \cdot
\delta_m^{f(T)} ; \] In particular, $p^{1/p^m} \in \mathbb{B}_{m+1}[
\delta_{m+1}^{1/p^m} ] \subset \Omega_E(\mathbb{B}_{m+1})$.
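The identity $N_{\mathbb{B}_m/\mathbb{Q}} = p^{m-1} + T f(T)$ used above is seen as follows: the group $\mbox{ Gal }(\mathbb{B}_m/\mathbb{Q})$ is cyclic of order $p^{m-1}$, generated by the restriction of $\tau$, so
\[ N_{\mathbb{B}_m/\mathbb{Q}} = \sum_{i=0}^{p^{m-1}-1} \tau^{i} = \sum_{i=0}^{p^{m-1}-1} (1+T)^{i} , \]
and evaluation at $T = 0$ gives the constant term $p^{m-1}$, so that the difference is divisible by $T$.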
We note for future reference:
\begin{lemma}
\label{locuni}
If $\rg{B} = \cup_m \rg{B}_m$ is the $\mathbb{Z}_p$-cyclotomic extension of
$\mathbb{Q}_p$, then
\[ \varsigma_m = N_{\rg{B}_m[ \zeta ]/ \rg{B}_m} (1-\zeta_{p^m}),
\quad \delta_m = \varsigma_m^T , \] are norm coherent sequences of
(global) uniformizors, resp. units. Moreover $\delta_m \not \in
\rg{B}_m^p$ for all $m$ and
\begin{eqnarray}
\label{p1m}
p^{1/p^m} \in \mathbb{B}_{m+1}[ \delta_{m+1}^{1/p^m} ] \subset \Omega_E(\mathbb{B}_{m+1}) .
\end{eqnarray}
\end{lemma}
For $\rg{K}$ a local field containing the \nth{p} roots of unity and
$\rg{K}_n = \rg{K}[ \mu_{p^n} ]$, the norm defect is given by
\begin{lemma}
\label{deco}
Let $N_{\infty}= \cap_n N_{n,1} (\rg{K}^{\times}_n)$ and $K =
U^{(1)}(\mathbb{Q}_p) = \{ u \in \mathbb{Z}_p : u \equiv 1 \bmod p \mathbb{Z}_p \}$. Then
$N_{\infty}, K$ are canonical $\mathbb{Z}_p[ \mbox{ Gal }(\rg{K}/\mathbb{Q}_p) ]$-modules with
$\rg{K}^{\times} = K \oplus N_{\infty}$.
Moreover
\begin{eqnarray}
\label{unitdeco}
U(\rg{K}) = U^{(1)}(\mathbb{Q}_p) \oplus (N_{\infty} \cap U(\rg{K})).
\end{eqnarray}
\end{lemma}
\begin{proof}
By local class field theory, $\mbox{ Gal }(\rg{K}_n/\rg{K}) \cong
\rg{K}^{\times} / (N_{n,1} (\rg{K}^{\times}_n))$, the isomorphism
being one of $\mbox{ Gal }(\rg{K}/\mathbb{Q}_p)$-modules. Since $\mbox{ Gal }(\rg{K}/\mathbb{Q}_p)$
acts on $\Gamma = \mbox{ Gal }(\rg{K}_{\infty}/\rg{K})$ by conjugation,
fixing $\Gamma$, it follows that the \textit{norm defect}
$\rg{K}^{\times} / (N_{n,1} (\rg{K}^{\times}_n))$ is a cyclic group
of order $p^n$ which is invariant under $\mbox{ Gal }(\rg{K}/\mathbb{Q}_p)$. In the
limit, $\rg{K}^{\times}/N_{\infty} \cong \mathbb{Z}_p$ is fixed by
$\mbox{ Gal }(\rg{K}/\mathbb{Q}_p)$. Let $U^{(n)} = \{ u \in \mathbb{Z}_p \ : \ u - 1 \in
p^n \mathbb{Z}_p \}$. Then $N_{n,1}(U^{(1)}) = U^{(n-c)}$, for some
constant $c$ depending on $\rg{K}$, and thus $N_{\infty,1}(U^{(1)})
= \{ 1 \}$. The module $U^{(1)}$ being fixed by $\mbox{ Gal }(\rg{K}/\mathbb{Q}_p)$,
it follows that
\[ \rg{K}^{\times}/N_{\infty} \cong U^{(1)}(\mathbb{Q}_p), \quad U^{(1)}(\mathbb{Q}_p) \subseteq
\mbox{ Ker }\left(N_{\infty,1} : \rg{K}^{\times}_{\infty} \rightarrow \rg{K}^{\times}\right).
\]
Moreover $N_{\infty} \cap U^{(1)}(\mathbb{Q}_p) = \{ 1 \}$ by definition and
both are canonical $\mathbb{Z}_p$-modules. Let $\sigma \in \mbox{ Gal }(\rg{K}/\mathbb{Q}_p)$
and $x \in N_{\infty}$, so there is a norm coherent sequence $(x_n)_{n
\in \mathbb{N}}, \ x_n \in \rg{K}^{\times}_n$ with $N_{n,1}(x_n) = x$. We
claim that $\sigma x \in N_{\infty}$; indeed, let $\tilde{\sigma} \in
\mbox{ Gal }(\rg{K}_{\infty}/\mathbb{Q}_p)$ be a lift of $\sigma$ and $y_n = \tilde{\sigma}(x_n)
\in \rg{K}_n^{\times}$. Since $\sigma N_{n,1} = N_{n,1} \sigma$, it
follows that $N_{n,1}(y_n) = \sigma N_{n,1}(x_n) = \sigma x$, and thus
$\sigma x \in N_{n,1}(\rg{K}_n^{\times})$ for all $n$, thus $\sigma x
\in N_{\infty}$. Consequently, $N_{\infty}$ is a canonical $\mathbb{Z}_p[
\mbox{ Gal }(\rg{K}/\mathbb{Q}_p) ]$-module; the same claim is trivial for
$U^{(1)}(\mathbb{Q}_p)$, which is fixed by the galois group. Finally, since
$\rg{K}^{\times}/N_{\infty} \cong U^{(1)}(\mathbb{Q}_p)$, it follows that
$\rg{K}^{\times} \cong U^{(1)}(\mathbb{Q}_p) \oplus N_{\infty}$. The identity
\rf{unitdeco} follows from this, by restriction to the submodule of
the units.
\end{proof}
Finally, we consider the case of a global extension $\mathbb{K}$ verifying the Assumptions \rf{kassum}
and let $\wp \subset \mathbb{K}$ be a prime above $p$, with $\rg{K} \cong \mathbb{K}_{\wp}$ the local field
obtained by completion of $\mathbb{K}$ at the prime $\wp$. Let $s$ denote the
number of conjugate primes above $p$ and $C \subset \mbox{ Gal }(\mathbb{K}/\mathbb{Q})$ act
transitively on these primes. We let
\[ \eu{Z} = \prod_{\nu \in C} U^{(1)}(\mathbb{Q}_p), \] where the various
copies of $U^{(1)}(\mathbb{Q}_p)$ are identified with submodules of the
embeddings $\mathbb{K} \hookrightarrow \rg{K}$ induced by completion at the
prime $\nu \wp$. By definition, $\eu{Z}$ is $C^+$ invariant.
Let $N_{\infty} \subset \rg{K}$ be defined like in the above lemma and
$\id{N} = \prod_{\nu \in C} (N_{\infty, \nu} \cap U(\rg{K}))$, where
$N_{\infty, \nu}$ are identified with submodules of embeddings of
$\mathbb{K}$, as in the case of $\eu{Z}$. Then \rf{unitdeco} yields the global
identity:
\begin{eqnarray}
\label{gldeco}
U(\mathbb{K}) = \id{N} \oplus \eu{Z}.
\end{eqnarray}
Note that this decomposition can be obtained for all $\mathbb{K}_n$ as base
fields. Moreover, if $N_{\infty,n}$ is defined naturally, we have
$N_{n,1}(N_{\infty,n}) = N_{\infty}$.
\section{The maps $\psi$}
Let $\mathbb{K}$ verify Assumption \rf{kassum}, fix a prime $\wp \subset \mathbb{K}$
above $p$ and let $C, C^+$ be defined in \S 1.2; by a previous
observation, we know that complex conjugation $\jmath \in C$, since
otherwise the Gross conjecture is trivially true.
We consider in this chapter $a = (a_n)_{n \in \mathbb{N}} \in A^-$, such that
$a' = a B \in (A')^-[ T ]$ and let $c = a^T$. Then by Lemma \ref{t2},
there is a $\theta \in \eu{C}$ such that $a^T = c = b^{\theta}$. In
fact, we obtain a map
\begin{eqnarray}
\label{Theta}
\Theta : (A')^-[ T ] \rightarrow \eu{C}
\end{eqnarray}
which is an homomorphism of $\Lambda$-modules and verifies $a^T =
b^{\Theta(a)}$ for all $a \in (A')^-[ T ]$. In the rest of this
chapter, we investigate the action of $\Theta$ on $\rho$, defined by
\rf{rho}. We shall show that
\begin{proposition}
\label{mred}
The map $\Theta : (A')^-[ T ] \rightarrow \eu{C}$ given by $a^T =
b^{\Theta(a')}$ for $a' = a B$ verifies
\begin{eqnarray}
\rho^{w \theta_n} \in \left( \prod_{\nu \in C} p^{\mathbb{Z}} \right) \cdot (\mathbb{K}^{\times})^{p^n}
\end{eqnarray}
for all $\theta_n \in \mathbb{Z}[ \Delta ]$ with $\theta_n - \Theta(a) \in p^n
\mathbb{Z}_p[ \Delta ]$.
\end{proposition}
The proof of the proposition will take the rest of this chapter. We
fix a sequence $a \in A^-$ with $a' = aB \in (A')^-[ T ]$, and
generalize a representation used by Greenberg for the proof of the
abelian case.
\subsection{Pseudouniformizors}
Recall the definition $\eu{K}_n = \mathbb{K}_n \otimes_{\mathbb{Q}} \mathbb{Q}_p = \prod_{\nu
\in C} \mathbb{K}_{n, \nu \wp}$, where $\mathbb{K}_{n, \nu \wp}$ is the completion
of $\mathbb{K}$ at the prime $\nu \wp_n$, which is determined by $\nu
\wp$. The local units are
\[ U(\mathbb{K}_n) = \id{O}^{\times}(\eu{K}_n) = \prod_{\nu \in C}
\id{O}^{\times}(\mathbb{K}_{n,\nu \wp}). \] The projections $\iota_{\nu} :
\eu{K}_n \rightarrow \mathbb{K}_{n, \nu \wp}$ yield the \textit{components} of
elements $x \in \eu{K}_n$ in the various completions. Note that
we use no index $n$ for the projection, the index being determined by
the context.
Let $\eu{M}_n \subset \eu{K}_n$ be the maximal ideal and let $e = |
D(\wp) |$ be the ramification index of a prime above $p$ in $\mathbb{K}$. By
choice of $\mathbb{K}$, all the completions $\mathbb{K}_{n, \nu \wp_n} \supset
\rg{B}_n$ are isomorphic. There is thus a constant ramification index
$e$ which is the same for all extensions $\mathbb{K}_{n, \nu \wp}/(\mathbb{K}_{n,\nu
\wp} \cap \rg{B})$. We denote the least common multiple of $e, q$ by
$e(\mathbb{K})$; if $w = | W |$ is the order of the roots of unity of $\mathbb{K}$, we
denote the least common multiple of $e(\mathbb{K}), w$ by $w(\mathbb{K})$, a multiple
of $q$.
Let now $\rg{K}_n = \mathbb{K}_{n, \wp_n}$ be a finite extension of $\mathbb{Q}_p$ and
$\pi_n \in \rg{K}_n$ be a uniformizor for $\rg{K}_n$. Then for all $x
\in \rg{K}_n$ there are unique $a \in \mathbb{Z}$ and $u \in
\id{O}^{\times}(\rg{K}_n)$ such that $x = \pi_n^a \cdot u$. We extend
this decomposition to $\eu{K}_n$ as follows: let $\pi'_n = (\pi_n, 1,
\ldots, 1)$ in the usual Chinese Remainder decomposition of
$\eu{K}_n$. Since $\nu \in C$ permute the primes above $p$, the
projection $\iota_{\nu}(\nu \pi'_n)$ is a uniformizor for $\mathbb{K}_{n, \nu
\wp_n}$. Therefore, if $x \in \eu{K}_n$, there are unique $t \in [
C ]_{\mathbb{Z}}$ and $u \in U(\mathbb{K}_n)$ such that $x = (\pi'_n)^t \cdot u$.
\begin{definition}
\label{unif}
In order to bring the uniformizors $\varsigma_n$ in the game, we
define $S_n = (\varsigma_n^{e(\mathbb{K})/e}, 1, 1, \ldots, 1) \in \eu{K}_n$
with $\iota_1(S_n) \in \eu{M}(\mathbb{K}_{n,\wp})^{e(\mathbb{K})}$. We note that $S =
(S_n)_{n \in \mathbb{N}}$ form by definition a norm coherent sequence. Let now
$U = (u_n)_{n \in \mathbb{N}}$ be a norm coherent sequence of units $u_n \in
U(\mathbb{K}_n)$. A \textit{pseudouniformizor sequence} is a norm coherent
sequence $\Pi = S \cdot U = (\pi_n)_{n \in \mathbb{N}}$, for some sequence $U$
as before. For each $n$, the projection $\iota_1(\pi_{n}) \in
\eu{M}(\mathbb{K}_{n,\wp})^{e(\mathbb{K})}$ is a generator of this power of the
maximal ideal, while for all $1 \neq \nu \in C$ we have
$v_p(\iota_{\nu}(\pi_{n})) = 0$. Moreover, $v_p(\iota_{\nu}(\nu
\pi_n)) = e(\mathbb{K})/e$ for all $\nu \in C$. We call the individual
elements $\pi_n$ of such a sequence pseudouniformizors for $\eu{K}_n$.
If $\pi_n \in \eu{K}_n$ is a pseudouniformizor, we have
$\eu{K}_n^{\times} = \prod_{\nu \in C} U(\mathbb{K}_{n,\wp_n}) \cdot
\nu(\pi_n)^{\mathbb{Z}}$ and we let $\widehat{\eu{K}}_n = \eu{K}_n^{\times}
\otimes_{\mathbb{Z}} \mathbb{Z}_p = \prod_{\nu \in C} U(\mathbb{K}_{n,\wp_n}) \cdot
\nu(\pi_n)^{\mathbb{Z}_p}$, a completion which does not depend upon the choice
of $\pi_n$.
Finally, we let $\id{N}_n = N_{n,1}(\eu{K}_n) \subset \eu{K}$ and
$\id{N} = \cap_n \id{N}_n$. The images under tensoring with $\mathbb{Z}_p$ are
denoted by $\widehat{\id{N}_n} \subset \widehat{\eu{K}}$.
\end{definition}
\subsection{Fundamental maps}
This leads to the definition of our maps. We fix $\Pi = (\pi_n)_{n \in
\mathbb{N}}$, a pseudouniformizor sequence, with $\pi_n \in \eu{K}_n$. Let
$x \in \widehat{\eu{K}_n}$ be such that $e \cdot v_p(\iota_{\nu}(x)) \equiv
0 \bmod q$ for all $\nu \in C$; then there is a unit $u \in U(\mathbb{K}_n)$ and
a $t \in \eu{C}$ such that $x^{w(\mathbb{K})/q} = \pi_n^{ t w(\mathbb{K})/e(\mathbb{K})} \cdot
u$; both $t$ and $u$ are uniquely determined by $x$, for any fixed
$\Pi$. We define herewith the map
\begin{eqnarray}
\label{psidef}
\psi_{n, \Pi} : \widehat{\eu{K}_n} \rightarrow U(\mathbb{K}_n), \quad x \mapsto u
\end{eqnarray}
By definition, $\psi_{n,\Pi}$ acts on $U(\mathbb{K}_n)$ by $y \mapsto
y^{w(\mathbb{K})/q}$; moreover, if $x \in (\eu{K}_n)^{p^m}$, then
$\psi_{n,\Pi}(x) \in U(\mathbb{K}_n)^{w(\mathbb{K}) p^m}$. It can be verified from
the definition that $\psi_{n, \Pi}$ are homomorphisms of
$\mathbb{Z}_p$-modules; they are not homomorphisms of $\mathbb{Z}_p[ \Delta
]$-modules, since $\Delta$ may act on $\pi_n$. However, we have the
following useful fact:
\begin{lemma}
\label{euc}
Let $\psi = \psi_{n,\Pi}$ be the map defined above; then $\psi(\nu
\rho) = \nu (\psi(\rho))$ for all $\nu \in C^+$. For all $m > n \geq 1$
and all $x \in \widehat{\eu{K}_m}$ we have
\begin{eqnarray}
\label{normcoh}
N_{m,n}(\psi_{m,\Pi}(x)) = \psi_{n,\Pi}(N_{m,n}(x)).
\end{eqnarray}
Moreover, for arbitrary $\theta \in \eu{C}$ and arbitrary $n$ we have
\begin{eqnarray}
\label{equic}
\psi_{n,\Pi}\left( \rho^{\theta}\right) & = & \psi_{n,\Pi}(\rho)^{\theta}, \quad \hbox{and} \\
x \in \mathbb{K}_n^{p^m}, \ y \in U(\mathbb{K}_n) \ & \Rightarrow & \
\psi_{n,\Pi}(x) \in U(\mathbb{K}_n)^{w(\mathbb{K}) p^m}, \ \psi_{n,\Pi}(y) = y^{w(\mathbb{K})}.\nonumber
\end{eqnarray}
\end{lemma}
\begin{proof}
By definition of $\rho$ we have $v_{\wp_n}(\rho) = q p^{n-k}$ and
$v_{\overline{\wp_n}}(\rho) = - q p^{n-k}$ while $v_{\nu
\wp_n}(\rho) = 0$ for all $\nu \in C \setminus \{1, \jmath\}$. Let
$\pi_n = (\pi, 1, \ldots, 1) \in \eu{K}$ be the pseudouniformizor
with respect to which $\psi_{n,\Pi}$ is defined. By comparing
valuations, we see that there is a unit $u \in U(\mathbb{K}_n)$ with $\rho =
(\pi_n/\overline{\pi_n})^{q p^{n-k}} \cdot u$ and the definition of
$\psi_{n,\Pi}$ yields $\psi_{n,\Pi}(\rho) = u$. For arbitrary $t =
\sum_{\nu \in C^+} c_{\nu} \nu \in \eu{C}$ we have
\[ \rho^{t} = \prod_{\nu \in C^+} \nu(\pi_n/\overline{\pi_n})^{q
p^{n-k} c_{\nu}} \cdot \nu(u)^{c_{\nu}} \in \widehat{\eu{K_n}}. \] The
definition of $\psi_{n,\Pi}$ yields $\psi_{n,\Pi}\left(
\rho^t\right) = u^t = \psi_{n,\Pi}(\rho)^t$, which is the first
claim in \rf{equic}. If $y \in U(\mathbb{K}_n)$ then $\psi_{n,\Pi}(y)$ is
the unit in the decomposition of $y^{w(\mathbb{K})}$, and since the
projections $\iota_{\nu}(y)$ are coprime to $p$, it follows that
$\psi_{n,\Pi}(y) = y^{w(\mathbb{K})}$. If $x = z^{p^m} \in \eu{K}_n^{p^m}$,
then there is a uniformizor $\pi_n \in \mathbb{K}_{n,\wp_n}$, a $t \in [ C
]_{\mathbb{Z}}$ and $u \in U(\mathbb{K}_n)$, such that $x = \pi_n^{p^m t} \cdot
u^{p^m}$. Thus $\psi_{n,\Pi}(x) = u^{w(\mathbb{K}) p^m} \in U(\mathbb{K}_n)^{w(\mathbb{K})
p^m}$.
For \rf{normcoh}, let $x^{w(\mathbb{K})} = \pi_m^t \cdot u_x, \ t \in
\eu{C}, u_x \in U(\mathbb{K}_m)$. Since the uniformizors are norm coherent,
we have $N_{m,n}(x)^{w(\mathbb{K})} = \pi_n^t N_{m,n}(u_x)$ with
$N_{m,n}(u_x) \in U(\mathbb{K}_n)$. By applying $\psi$, we find
\[ N_{m,n}(\psi_{m,\Pi}(x)) = N_{m,n}(u_x) = \psi_{n,\Pi}(N_{m,n}(x)).\]
This completes the proof.
\end{proof}
The use of $\widehat{\eu{K}_n}$ is temporary and it is introduced in order
to have a $\mathbb{Z}_p$-homomorphism. We will derive later maps which are
defined on modules endowed with their own $\mathbb{Z}_p$-module structure,
using only restrictions of $\psi_{n,\Pi}$. Since the $\pi_n$ build a
norm coherent sequence, it follows from Lemma \ref{euc} that the maps
$\psi_{n,\Pi}$ form themselves a norm coherent sequence.
Let $\Pi = (\pi_n)_{n \in \mathbb{N}}$ be an arbitrary pseudouniformizors
sequence and $\psi_{n, \Pi} : \widehat{\eu{K}_n} \rightarrow U(\mathbb{K}_n)$ be the
sequence of $\mathbb{Z}_p$-homomorphisms defined in \rf{psidef}. Let $a =
(a_n)_{n \in \mathbb{N}} \in A^-$ be as in the statement of the Proposition
\ref{mred} and $\theta = \Theta(a B)$, so $a^T = b^{\theta}$.
We fix a large integer $n > 1$ and let $m \gg n$, say $m > n^2$. Let
$\eu{A}_{2m} \in a_{2m}$ be a prime ideal coprime to $p$, which is
totally split in $\mathbb{K}_{2m}$ and verifies
\[ \eu{A}_{2m}^{(1-\jmath) q p^{2m-k}} = (\alpha_{2m}), \quad
\alpha_{2m} \in \mathbb{K}_{2m}^{1-\jmath}. \] At finite levels, using \rf{bn},
we find that there are $\gamma_{2m} \in \mathbb{K}_{2m}^{1-\jmath}$ and
$t_{2m} \in [ C^+ ]_{\mathbb{Z}} \subset \eu{C}$ approximating $\theta$
modulo $q p^{2m-k}$, such that
\begin{eqnarray}
\label{ideals}
\eu{A}_{2m}^{(1-\jmath) T} = (\gamma_{2m}) \cdot \eu{R}_n^{t_{2m}}.
\end{eqnarray}
Raising this relation to the power $q p^{2m-k}$, and using \rf{rho},
we obtain $ (\alpha_{2m}^T) = (\gamma_{2m}^{q p^{2m-k}}) \cdot
(\rho^{t_{2m}})$. Thus $\alpha_{2m}^T \cdot \gamma_{2m}^{-q p^{2m-k}}
= \xi \rho^{t_{2m}}$ for some root of unity $\xi$; taking norms to
$\mathbb{K}_1$ on both sides yields $\gamma_1^{-q p^{2m-k}} = N(\xi) \cdot
\rho^{p^{2m-k}}$ and thus $N(\xi) \in \mathbb{K}^{p^{2m-k}} \cap \mu_{p^k}
= \{ 1 \}$. In particular
\[ \xi \in N_{n,1}^{-1}(1) \cap \mu_{p^{2m}} \subset \mu_{p^{2m}}^T. \]
An adequate choice of $\alpha_{2m}$ allows the assumption that $\xi =
1$. We thus obtain the fundamental identities:
\begin {eqnarray}
\label{val}
\alpha_{2m}^T \cdot \gamma_{2m}^{-q p^{2m-k}} & = & \rho^{t_{2m}}, \nonumber \\
\alpha_{m}^T \cdot \gamma_{m}^{-q p^{m-k}} & = & \rho^{t_{2m}}, \\
\alpha_{n}^T \cdot \gamma_{n}^{-q p^{n-k}} & = & \rho^{t_{2m}}.\nonumber
\end {eqnarray}
The lower identities are obtained from the first one by taking the
norms $N_{2m, m}, N_{2m,n}$ and then extracting the \nth{p^{2m-j}}
root, with $j = m,n$. Here $\alpha_m^{p^m} = \alpha_{2m}$ so that
$(\alpha_m) = \eu{A}_m^{(1-\jmath) q p^{m-k}}$ with $\eu{A}_m =
N_{2m,m} \eu{A}_{2m}$, etc. The value $\alpha_m =
(N_{2m,m}\alpha_{2m})^{1/p^m}$ is determined only up to roots of
unity, and it will be chosen such that the two sides of the equation
agrees. The case $\alpha_n$ is treated similarly.
Taking in addition the norm $N_{2m,1}$ we see that $\gamma_1^{-q
p^{2m-k}} = \rho^{t_{2m} p^{m-2k}}$ and after taking roots we have
$\gamma_1^{-q} = \zeta^c \rho^{t_{2m}}$. It follows that
$\rho^{t_{2m}} \in N_{2m,1}(\mathbb{K}_{2m}^{\times})$. We claim that
$\rho^{\theta} \in \widehat{\id{N}} = \cap_m
N_{m,1}(\widehat{\mathbb{K}_{m}}^{\times})$. In order to see this, we note that we
may modify $\gamma$ in one of the equations in \rf{val} so as to
obtain, for some arbitrarily large $M > m$:
\[ \alpha_m^T = \gamma_{M,m}^{q p^{m-k}} \rho^{t_M} . \] Upon taking
norms, we obtain $\rho^{t_M} \in N_{m,1}(\mathbb{K}_{m}^{\times})$, for all $M
> m$. By passing to the limit, we have $\rho^{\theta} = \lim_{M \rightarrow
\infty}(\rho^{t_M}) \in \widehat{N_{m,1}(\mathbb{K}_{m}^{\times})}$, the
completion being taken in $\widehat{\eu{K}_1}$. This holds for all $m$ and
thus
\begin{eqnarray}
\label{univnorm}
\rho^{\theta} \in \bigcap_m \widehat{N_{m,1}(\mathbb{K}_{m}^{\times})} =: \widehat{\id{N}}.
\end{eqnarray}
We show next:
\begin{lemma}
\label{rho1}
We can choose $\Pi$ such that $\psi_{1,\Pi}(\rho^{\theta}) = 1$. If $U
= (u_n)_{n \in \mathbb{N}}, u_n \in U(\mathbb{K}_n)$ is any norm coherent sequence of
units with $u_1 = 1$, then $\Pi \cdot U$ is also a norm coherent
sequence of pseudouniformizors verifying the same property.
\end{lemma}
\begin{proof}
We have shown in Lemma \ref{deco} and the comments following its
proof, that $\eu{K}^{\times} = \id{N} \oplus \eu{Z}$; the relation
is maintained upon completion, so
\[ \widehat{\eu{K}} = \widehat{\id{N}} \oplus \eu{Z}. \] Let
$\psi_{1,\Pi}(\rho) = u^{\bot} \cdot u^{\top}$ with $u^{\bot} \in
\eu{Z}$ and $u^{\top} \in \widehat{\id{N}}$ be the according
decomposition. We may choose a norm coherent sequence of units $u_n
\in U(\mathbb{K}_n)$ such that $u_1^{1-\jmath} = u^{\top}$. By letting
$\pi'_n = \pi_n u_n$ we obtain a new norm coherent sequence of
uniformizors $\Pi'$. By definition,
\[ \rho = (u_1 \pi_1)^{(1-\jmath)} u^{\bot} =
\pi'_1/\overline{\pi'_1} \cdot u^{\bot} . \] Raising to $\theta$ and
using the fact that $\eu{Z}$ is $C^+$ invariant, as established at
the end of the previous chapter, it follows that
\[ \psi_{1,\Pi'}(\rho^{\theta}) = \psi_{1,\Pi'}(\rho)^{\theta} = (u^{\bot})^{\theta}
\in \widehat{\id{N}} \cap \eu{Z} = \{ 1 \}.
\]
It is obvious that $\Pi'$ is defined up to norm coherent sequences
with $u_1 = 1$. This completes the proof.
\end{proof}
We fix from now on $\psi_n = \psi_{n, \Pi'}$, a family of maps
verifying $\psi_1(\rho)^{\theta} = 1$ and $N_{j,l}(\psi_j(x)) =
\psi_l(N_{j,l}(x))$ for all $x \in \widehat{\eu{K}_j}, j > l > 1$; thus we
complete the proof of Proposition \ref{mred}:
\begin{proof}
Let $\psi_{n}$ be chosen like above. Then we have shown that
$\psi_1(\rho)^{\Theta(a)} = 1$ and thus, for $\theta_n \in \mathbb{Z}[
\Delta ]$ approximating $\Theta(a)$ by $\theta_n - \Theta(a) \in p^n
\mathbb{Z}_p[ \Delta ]$, we have indeed $\rho^{w(\mathbb{K}) \theta_n} \in
\left(\prod_{\nu \in C} p^{\mathbb{Z}}\right) \cdot (\mathbb{K}^{\times})^{p^n}$. Since $w \mid w(\mathbb{K})$, it
follows that $\rho^{e(\mathbb{K})}$ does not depend on the choice of
$\rho \in \eu{R}^q$.
\end{proof}
Let $\mathbb{L} = \mathbb{K}_{\infty}[ p^{1/p^{\infty}} ] \subset
\Omega_E(\mathbb{K}_{\infty})$, as mentioned in Lemma \ref{locuni}. As a
consequence of Proposition \ref{mred} we have
\begin{corollary}
\label{unramif}
Suppose that $(A')^-[ T ] \neq \{ 1 \}$. Then $\mathbb{F}_a := \mathbb{L}[
b^{\Theta(a)/p^{\infty}} ]$ is a totally unramified extension of
$\mathbb{L}$, for any $a \in (A')^-[ T ]$.
\end{corollary}
\begin{proof}
Let $\mathbb{F}'_a = \mathbb{K}_{\infty}[ b^{\Theta(a)/p^{\infty}} ]$ and $\eu{p}$
be any prime above $p$ in $\mathbb{K}_{\infty}$, and $\eu{P} \subset \mathbb{L},
\eu{P}' \subset \mathbb{F}'_a$ the ramified primes of $\mathbb{L}$ resp. $\mathbb{F}'_a$,
above it. By Proposition \ref{mred},
$\iota_{\wp_n}(\rho^{\theta_n(a)}) = p^{c(\wp)}$ and thus
$\mathbb{K}_{\infty, \eu{p}}[ b_n^{1/p^n} ] \subset \mathbb{K}_{\infty, \eu{p}}[
p^{1/p^{\infty}} ]$ for all $n$. Consequently
\[ \mathbb{F}'_{a, \eu{P}'} = \mathbb{K}_{\infty, \eu{p}}[ p^{1/p^{\infty}} ] =
\mathbb{L}_{\eu{P}} . \] Since the two completions coincide, it follows
that $\mathbb{F}_a = \mathbb{L} \cdot \mathbb{F}'_a$ is unramified at $\eu{P}$. This holds
for all (of the finitely many) primes $\eu{p}$ above $p$ in
$\mathbb{K}_{\infty}$, showing that $\mathbb{F}_a/\mathbb{L}$ is unramified at $p$. By
construction, it is unramified outside $p$, so it is a totally
unramified extension, as claimed.
\end{proof}
Note that $\Omega_{E'}(\mathbb{K}_{\infty})/\Omega_{E}(\mathbb{K}_{\infty})$ is a
$p$-ramified extension with galois group isomorphic to $\mathbb{Z}_p^s$. The
statement of the above corollary is an equivalence, and it implies
that the maximal unramified extension $\mathbb{F}/\Omega_{E}(\mathbb{K}_{\infty})$
contained in $\Omega_{E'}(\mathbb{K}_{\infty})$ has group of essential
$p$-rank equal to $\mathbb{Z}_p\hbox{-rk}((A')^-[ T ])$. The Gross conjecture
thus states that $\Omega_{E'}(\mathbb{K})$ is totally ramified -- up to finite
subextensions -- over $\Omega_{E}(\mathbb{K}_{\infty})$.
At finite levels, we have seen that ${\rm ord}(b_n) = q p^{n-k}$ for a
fixed $q$ which depends only on $\mathbb{K}$, provided $b \not \in
(B^-)^p$. Let thus $z, z' \in \mathbb{Z}$ such that ${\rm ord}(b_n) = p^{n+z}$ for
all $n$ and $\mathbb{K}_{n+z'}$ is the smallest extension which contains the
\nth{p^{n+z}} roots of unity, for $n > k + | z |$, say. Let $\eu{R}_n
\in b_n^{1-\jmath}$ and $\theta_n \in \mathbb{Z}[ \Delta ], \theta_n - \Theta(a) \in
p^{n+z} \mathbb{Z}_p[ \Delta ]$. Then
\[ \eu{R}_n^{q p^{n-k} \theta_n} = \eu{R}^{q \theta_n} =
\wp^{(1-\jmath) q \theta_n} = (\rho^{\theta_n}) .\] Consequently,
\[ \mathbb{F}_{n,a} := \mathbb{K}_{n+z'}[ b_n^{1/{\rm ord}(b_n)} ] = \mathbb{K}_{n+z'}[
\rho^{\theta_n/p^{n+z}} ] , \] and this is a $p$-ramified extension
which has no cyclic continuation. We note also that, by Proposition
\ref{mred},
\[ \iota_{\nu}\left(\psi_1(\rho^{\theta_n})\right) \in
U(\mathbb{K})^{p^{n+z}}, \] and consequently, $\iota_{\nu}(\rho^{\theta_n
\cdot w(\mathbb{K})}) = p^{c(\nu)} x^{p^{n+z}}$, for some $c(\nu) \in \mathbb{Z}$ and
$x \in U(\mathbb{K}_{\nu \wp})$. But since $\iota_{\nu}(\rho^{\theta_n }) \in
\mathbb{K}_{\nu \wp}$ too, it follows that $p^{c(\nu)} \in (\mathbb{K}_{\nu
\wp})^{e(\mathbb{K})}$. Consequently,
\begin{corollary}
\label{finlev}
Using the notations above, there are well defined extensions
$\mathbb{F}'_{a,n} = \mathbb{K}_{n+z'}[ \rho^{\theta_n(a)/p^{n+z}} ]$ which are
$p$-ramified, maximal cyclic over $\mathbb{K}_{n+z'}$ and such that, letting
$\mathbb{L}_n = \mathbb{K}_{n+z'}[ p^{1/p^{n+z}} ]$, the extension $\mathbb{L}_n[
\rho^{\theta_n(a)/p^{n+z}} ]/\mathbb{L}_n = (\mathbb{L}_n \cdot \mathbb{F}'_{a,n})/\mathbb{L}_n$ is
totally unramified.
\end{corollary}
\begin{proof}
The proof is, at finite levels, identical with the proof of the
previous corollary.
\end{proof}
In the next chapter, we shall investigate the extensions $(\mathbb{L}_n \cdot
\mathbb{F}'_{a,n})/\mathbb{L}_n$ in more detail and deduce that their existence
implies that $A^+[ T^* ]$ must be infinite, which contradicts
Lemma \ref{noT*}.
\section{Proof of the Theorem \ref{gc}}
We let $a = (a_n)_{n \in \mathbb{N}} \in A^-$ be as in the previous section,
so $a B \in (A')^-[ T ]$ and $a \not \in A^p$. Let $\theta_n \in \mathbb{Z}[
\Delta ]$ with $\Theta - \theta_n \in p^n \mathbb{Z}_p[ \Delta ]$ and let
\[ (\rho^{\theta}) = \eu{R}_n^{p^n(1-\jmath) \theta_n} \cdot
(w^{p^n(1-\jmath)}), \quad w \in \mathbb{K}_n. \] Let $\mathbb{L}_n = \mathbb{K}_n[ p^{1/p^n}
]$ and $\mathbb{F}'_n = \mathbb{K}_n[ \beta^{1/p^n} ]$ and $\mathbb{F}_n = \mathbb{F}'_n \cdot \mathbb{L}_n$:
we assume that $z = 0$ and thus ${\rm ord}(b_n^{\theta}) = p^n$ for all $n
> k$; this allows a simplification of the notation throughout this
chapter. We shall explain at the end why this choice entails no
restriction of generality. Let $\nu \in \mbox{ Gal }(\mathbb{L}_n/\mathbb{K}_n)$ be a
generator of this cyclic galois group and $s = \nu-1$. The extensions
$\mathbb{F}_n, \mathbb{F}'_n$ depend on $a$ and we let
\[ \mathbb{M}'_n = \prod_{a_n \in \left((A')^-[ T ]\right)_n} \mathbb{F}'_n(a), \quad
\mathbb{M}_n = \mathbb{M}'_n \cdot \mathbb{L}_n
\]
be the compositum of all the extensions when $a$ ranges through
$(A')^-[ T ]$. We deduce from Corollary \ref{unramif} that
$\mathbb{F}_n/\mathbb{L}_n$ and $\mathbb{M}_n/\mathbb{L}_n$ are totally unramified extensions. We
shall now prove more, namely that ${\rm sexp} (\mbox{ Gal }(\mathbb{M}_n/\mathbb{L}_n)) = p^n$ and
that $\mathbb{F}_n$ is maximal cyclic in the maximal unramified abelian
$p$-extension $\mathbb{H}(\mathbb{L}_n)$. First we describe the extension $\mathbb{F}_n$:
\begin{lemma}
\label{fields}
Notations being as above, $\mathbb{F}_n$ is unramified maximal cyclic over
$\mathbb{L}_n$ of degree $[ \mathbb{F}_n : \mathbb{L}_n ] = p^n$. Moreover $\mathbb{F}_n(a) \cap
\mathbb{F}_n(a') = \mathbb{L}_n$ for $a B, a' B \in (A')^-[ T ]$ with $\Lambda a \cap \Lambda
a' = \{ 1 \}$, and ${\rm sexp} (\mbox{ Gal }(\mathbb{M}_n/\mathbb{L}_n)) = p^n$. There is a
subgroup $\id{C} \subset A(\mathbb{L}_n)$ which is a $\mathbb{Z}_p[ \Delta, s
]$-module and such that
\[ \varphi(\id{C}) \vert_{\mathbb{M}_n} = \mbox{ Gal }(\mathbb{M}_n/\mathbb{L}_n) \quad \hbox{and} \quad
\mathbb{H}(\mathbb{L}_n)^{\varphi(\id{C})} \cap \mathbb{M}_n = \mathbb{L}_n . \]
\end{lemma}
\begin{proof}
We have shown in Corollary \ref{unramif} that $\mathbb{F}_n/\mathbb{L}_n$ is
totally unramified, so $\mathbb{F}_n \subset \mathbb{H}(\mathbb{L}_n)$, the maximal $p$
abelian unramified extension of $\mathbb{L}_n$. Since ${\rm ord}(b_n) = p^n$, it
follows that $[ \mathbb{F}_n : \mathbb{L}_n ] = [ \mathbb{F}'_n : \mathbb{K}_n ] = p^n$. We show
that the extension is maximal cyclic.
Suppose this is not the case and let $\rg{L}_n \subset \mathbb{H}(\mathbb{L}_n)$ be
a cyclic extension of $\mathbb{L}_n$ such that $\mathbb{F}_n \subsetneq
\rg{L}_n$; we may assume without
restriction of generality that $[ \rg{L}_n : \mathbb{F}_n ] = p$. Then
$\rg{L}_n \cdot \mathbb{K}_{n+1} = \mathbb{L}_n[ \zeta_{p^{n+1}} ][ x^{1/p^{n+1}} ]$
for some radical $x \in \mathbb{L}_n[ \zeta_{p^{n+1}} ]$. Kummer theory
implies that $x = \rho^{c \theta_n} \cdot y^{p^n}$, with $y \in \mathbb{L}_n[
\zeta_{p^{n+1}} ], (c, p) = 1$. Let $\pi = p^{1/p^n} \in \mathbb{L}_n$ and
fix a prime $\eu{P} \subset \mathbb{L}_n$ above $p$. This will be a totally
ramified prime above some $\wp \subset \mathbb{K}$, so the completion
$\rg{K} := \mathbb{L}_{n,\eu{P}} = \mathbb{K}_{\wp}[ \zeta_{p^{n}}, \pi ]$. Since
$(\rho^T) = (1)$, we have $\iota_{\eu{P}}(\rho^{c \theta_n}) = \pi^{c p^n}
\cdot w^{p^{n+1}}$ with $w \in \rg{K}$ and $(c,p) = 1$: indeed, $\rho^{c
\theta_n}$ is locally a \nth{p^n} but not a \nth{p^{n+1}}
power. Since $\rg{K}[ \zeta_{p^{n+1}}][ x^{1/p^{n+1}} ]$ is
unramified, it follows that
\[ \iota_{\eu{P}}(x) = \iota_{\eu{P}}(\rho^{c \theta_n} y^{p^n}) = (\pi^{c}
\iota_{\eu{P}}(y))^{p^n} . \] Consequently $\rg{K}[ \zeta_{p^{n+1}}
][ (\pi^c \cdot y_{\eu{P}})^{1/p} ]$ is either a totally split or an
unramified extension. In the first case, it follows that $\pi^c
\cdot y_{\eu{P}} = w^p \in \rg{K}[ \zeta_{p^{n+1}} ]$; moreover
$\mathbb{L}_n[ \zeta_{p^{n+1}} ][ (y^{p^n} \rho^{c \theta_n})^{1/p^{n+1}} ]$ is
abelian over $\mathbb{L}_n$, and therefore $(y^{p^n} \rho^{c \theta_n})^{\omega_n^*}
= z^{p^{n+1}}, z \in \mathbb{L}_n[ \zeta_{p^{n+1}} ]$.
We show that $N_{n+1,n}(y) \in \mathbb{L}_n^p$. Since $\omega_n^* = p^n
-\omega_n$ up to units in $\Lambda$, and using $N_{n+1,n} = p +
\omega_n f(\omega_n) = p + p \frac{p-1}{2} \omega_n + g(\omega_n)$
and $\rho^{c \theta_n T} \in \mathbb{K}_{n}^{p^{n+1}}$, we have
\begin{eqnarray*}
z^{p^{n+1}} & = & (y^{p^n} \rho^{c \theta_n})^{\omega_n^*} = y^{-p^n \omega_n} \rho^{c \theta_n p^n} \in
(\mathbb{L}_n[ \zeta_{p^{n+1}} ])^{p^{n+1}}\quad
\hbox{hence} \\
y^{\omega_n} & = & \rho^{c \theta_n} \cdot z_1^{p}, \hbox{and} \\
N_{n+1,n}(y) & = & y^{\omega_n f(\omega_n) + p} = y^p \cdot z_1^{p f(\omega_n)} \cdot \rho^{c \theta_n p(p-1)/2}.
\end{eqnarray*}
This shows that $N_{n+1,n}(y) \in (\mathbb{L}_{n}[ \zeta_{p^{n+1}} ])^p \cap
\mathbb{L}_n$. Since $\mathbb{L}_n[ \zeta_{p^{n+1}} ] = \mathbb{L}_n[ \zeta_{p^n}^{1/p} ]$
is a cyclic Kummer extension, it follows that $y = \zeta_{p^{n+1}}^d
\cdot w^p, w \in \mathbb{L}_{n}[ \zeta_{p^{n+1}} ]$. Therefore the radical
\[ y^{p^n} \rho^{c \theta_n} \in \zeta_p \rho^{c \theta_n} \cdot (\mathbb{L}_n[
\zeta_{p^{n+1}} ])^{p^{n+1}} . \] Consequently, $\rg{L}_n[
\zeta_{p^{n+1}} ] = \mathbb{L}_n[ \zeta_{p^{n+1}} ][ \rho^{c
\theta_n/p^{n+1}} ]$; however this extension is ramified at $p$
since $p^{1/p^{n+1}} \not \in \rg{K}[ \zeta_{p^{n+1}} ]$, so we
obtain a contradiction which shows that $\mathbb{F}_n$ is maximal cyclic.
By definition, $b_n$ generates $B_n^-$ and thus $\mathbb{M}_n = \mathbb{L}_n[
(B_n^-)^{1/p^n} ]$ is an abelian, unramified subextension of
$\mathbb{H}(\mathbb{L}_n)$. Moreover, $\mbox{ Gal }(\mathbb{M}_n/\mathbb{L}_n) \cong B_n^-$ and we have in
particular
\[ {\rm sexp} (\mbox{ Gal }(\mathbb{M}_n/\mathbb{L}_n)) = {\rm sexp} (B_n^-) = {\rm sexp} \left( \left[ \nu b_n
: \nu \in C \right]_{\mathbb{Z}} \right) = p^n. \] Since $\mathbb{F}_n$ is maximal
cyclic, the sequence
\[ 1 \rightarrow \mbox{ Gal }(\mathbb{H}(\mathbb{L}_n)/\mathbb{M}_n) \rightarrow \mbox{ Gal }(\mathbb{H}(\mathbb{L}_n)/\mathbb{L}_n) \rightarrow
\mbox{ Gal }(\mathbb{M}_n/\mathbb{L}_n) \rightarrow 1 \] is split. It follows that $A(\mathbb{L}_n) =
\id{C} \oplus \id{C}'$, with $\varphi(\id{C}')$ fixing $\mathbb{M}_n$, while
$\id{C} \cong \mbox{ Gal }(\mathbb{M}_n/\mathbb{L}_n)$ is a subgroup fixing the field
$(\mathbb{H}(\mathbb{L}_n))^{\varphi(\id{C})}$, which is linearly disjoint from $\mathbb{M}_n$
over $\mathbb{L}_n$. In particular, $\varphi(\id{C}) \vert_{\mathbb{M}_n} =
\mbox{ Gal }(\mathbb{M}_n/\mathbb{L}_n)$. We have $\id{C} \cong (B_n^-)^{\bullet}$, which
implies that $\id{C}$ is indeed a cyclic $\mathbb{Z}_p[ \Delta, s ]$-module.
This completes the proof of the lemma.
\end{proof}
The next lemma describes the module $\id{C}$:
\begin{lemma}
\label{obst}
Suppose that $c \in \id{C}$ generates $\id{C}$ as a cyclic $\mathbb{Z}_p[ \Delta
]$-module, let $\tilde{\tau} \in \mbox{ Gal }(\mathbb{L}_n/\mathbb{K})$ be any lift of $\tau \in
\mbox{ Gal }(\mathbb{K}_n/\mathbb{K})$. Then $N_{\mathbb{L}_n/\mathbb{K}_n}(c) = c^{\tilde{T}^*} = c^s =
c^{p^n} = 1$ and $c \not \in A(\mathbb{L}_n)^{(s, \tilde{T}^*, p)}$.
\end{lemma}
\begin{proof}
The proof is an application of the Kummer pairing. Since $\mathbb{F}_n$ is
complementable in $\mathbb{H}(\mathbb{L}_n)$, we may apply the Kummer pairing of
$\mathbb{H}$ by restriction to $\mathbb{F}_n$, the only subfield of $\mathbb{H}(\mathbb{L}_n)$ on
which the Artin symbol $\varphi(c)$ acts non trivially. We note that
$\nu$ fixes $\mu_{p^n}$, so
\begin{eqnarray*}
\langle \rho^{\theta x}, c^s \rangle & = & \langle (\rho^{\theta x})^s, c \rangle = 1, \\
\langle \rho^{p^n \theta x}, c \rangle & = & \langle \rho^{\theta x }, c^{p^n} \rangle = 1, \\
\langle N_{\mathbb{L}_n/\mathbb{K}_n} \rho^{\theta x}, c \rangle & = & \langle (\rho^{\theta x})^s, N_{\mathbb{L}_n/\mathbb{K}_n} c \rangle \langle \rho^{p^n \theta x}, c \rangle = 1,
\end{eqnarray*}
and we conclude from the non-degeneracy of the Kummer pairing and the three
identities, which hold for arbitrary $x \in \mathbb{Z}_p[\Delta]$, that $c^s =
c^{p^n} = N_{\mathbb{L}_n/\mathbb{K}_n}(c) = 1$. Since $c^s = 1$, it follows that
$c^{T^*}$ does not depend on the lift of $\tau$ chosen; then
\[ \langle \rho^{\theta x T }, c \rangle = \langle 1, c \rangle = 1 = \langle
\rho^{\theta x}, c^{T^*} \rangle \] and we conclude, as before, that
$c^{T^*} = 1$.
Let $\rho' = \rho^{\theta x} \in {\rm Rad } (\mathbb{F}_n/\mathbb{L}_n)$ be such that $\langle
\rho', c \rangle = \xi$ is a primitive \nth{p^n} root of unity; such an
element exists, since $c$ generates $\mbox{ Gal }(\mathbb{F}_n/\mathbb{L}_n)$ as a
$\mathbb{Z}_p[\Delta]$-module. Assume now that $c = d^s$ for some $d \in
A(\mathbb{L}_n)$. Then
\[ \xi = \langle \rho' , c \rangle = \langle \rho', d^s \rangle = \langle (\rho')^s,
d \rangle = \langle 1, d \rangle = 1. \] It follows that $c \not \in
A(\mathbb{L}_n)^s$. Assume in general that there is an $f \in (s,
\tilde{T}^*,p)$ and $d \in A(\mathbb{L}_n)$ such that $c = d^f$. Arguing
like above, we find
\[ \xi = \langle \rho' , c \rangle = \langle \rho', d^f \rangle = \langle
(\rho')^{f^*}, d \rangle = \langle (\rho')^p, d \rangle = (\langle \rho', d
\rangle)^p. \] However, $\xi$ is not a \nth{p} power in $\mathbb{L}_n$, so we
obtain again a contradiction, which confirms the last claim of the
lemma.
\end{proof}
It is natural to consider the canonical extension $\Omega_{E'}(\mathbb{L}_n)[
\id{C}^{1/p^n}]$, which is a $p$-ramified extension. Using
the various galois actions on $\id{C}$, which were established in the
previous lemma, we show that this induces the existence of a
$p$-ramified extension of $\Omega_{E'}(\mathbb{K}_n)$ with isomorphic galois
group. This will then be used to derive a contradiction.
\begin{lemma}
\label{omfields}
The module $\id{C}$ induces an extension $\mathbb{U}''/\mathbb{K}_n$ with
$(\mathbb{U}'')^s \subset (\Omega_{E'}(\mathbb{K}_n))^-$ and $\mathbb{U}'' \cap
\Omega_{E'}(\mathbb{K}_n) = \mathbb{K}_n$, which is unramified outside $p$ and has
\[ \mbox{ Gal }(\mathbb{U}''/\Omega_{E'}(\mathbb{K}_n)) \cong \id{C}^{\bullet} \cong
\rho^{\theta \mathbb{Z}_p[ \Delta ]}. \] Here $\nu$ acts on extensions by
acting on radicals.
\end{lemma}
\begin{proof}
The extension $\mathbb{U}_n = \Omega_{E'}(\mathbb{L}_n)[ \id{C}^{1/p^n} ]$ is well
defined; indeed, if $\eu{c} \in c$ is a prime and $(\gamma) =
\eu{c}^{p^n}$, then
\[ \Omega[ \id{C}^{1/p^n} ] := \Omega[ \gamma^{\mathbb{Z}_p[ \Delta ]/p^n} ] \]
does not depend on the choice of a generator $c \in \id{C}$ or of primes in
this class. Since $c$ is annihilated by $s$ and $T^*$ we have
\[ \gamma^s, \gamma^{T^*} \in ({\Omega_{E'}(\mathbb{L}_n)}^{\times})^{p^n}
\cdot E(\mathbb{L}_n) . \] Let $\eu{c}^s = (\gamma_s), \gamma_s \in
\mathbb{L}_n$. Then $(\gamma^s) = (\gamma_s^{p^n})$ and there is a unit
$\varepsilon_s \in E(\mathbb{L}_n)$ with $\gamma^s = \varepsilon_s
\gamma^{p^n}_s$. If $\id{N} = \mbox{\bf N}_{\mathbb{L}_n/\mathbb{K}_n}$, we have
$(\id{N}(\gamma_s)) = \id{N}(\eu{c}^s) = (1)$ and there is thus a unit
$\varepsilon_0 \in \mathbb{K}_n$ such that $\id{N}(\gamma_s) = \varepsilon_0$.
Writing $\nu = 1 + s$, the norm operator expands as $\id{N} =
\sum_{i=0}^{p^n-1} (1+s)^i = p^n + s F(s)$ for some $F \in \mathbb{Z}[ s ]$.
Let $\alpha_0 \in \mathbb{K}_n$ generate the norm ideal, $(\alpha_0) = \id{N}
(\eu{c})$, so $\alpha_0 \id{O}(\mathbb{L}_n) = \eu{c}^{p^n + s F(s)} =
(\gamma \cdot \gamma_s^{F(s)})$ and there is a unit $e$ with $e
\alpha_0 = \gamma \cdot \gamma_s^{F(s)}$. Therefore
\[ \gamma^s = e^s \gamma_s^{-s F(s)} = e^s \cdot
\gamma_s^{-\id{N}+p^n} = e^s \varepsilon_0 \gamma_s^{p^n} . \] After
replacing $\gamma$ by $\gamma/e$, we see that
\begin{eqnarray}
\label{ats}
\gamma^s = \varepsilon_0 \gamma_s^{p^n} \quad \hbox{with} \quad (\gamma_s) = \eu{c}^s \quad \hbox{and} \quad
\varepsilon_0 = \id{N}(\gamma_s).
\end{eqnarray}
Let now $\mathbb{U}'_n = \mathbb{L}_n[ \gamma^{1/p^n} ]$ for the value of $\gamma$
defined in \rf{ats}; by construction, we have $\mathbb{U}'_n \cap
\Omega_{E'}(\mathbb{L}_n) = \mathbb{L}_n$ and $(\mathbb{U}'_n)^s \subset
\Omega_{E'}(\mathbb{L}_n)$. Moreover the extension $\mathbb{U}'_n/\mathbb{L}_n$ is
unramified outside $p$ and is $\mathbb{Z}_p[ \Delta ]$-cyclic, with
subexponent $p^n$. Finally, we have $\mbox{ Gal }(\mathbb{U}'_n/\mathbb{L}_n)^s = \{ 1 \}$,
under the action of $\nu$ by conjugation. Consequently, $\mathbb{U}'_n/\mathbb{K}_n$
is abelian and there is a field $\mathbb{U}''_n \supset \mathbb{K}_n$ fixed by a lift
of $\nu$ to $\mbox{ Gal }(\mathbb{U}'_n/\mathbb{K}_n)$. Since $\gamma^s =\varepsilon_0
\gamma_s^{p^n}$, we conclude that $(\mathbb{U}''_n)^s \subset
\Omega_{E'}(\mathbb{K}_n)$, while the properties of $\mathbb{U}'_n$ imply that $\mathbb{U}''_n
\cap \Omega_{E'}(\mathbb{K}_n) = \mathbb{K}_n$ and $\mbox{ Gal }(\mathbb{U}''_n/\mathbb{K}_n) \cong
(\ZM{p^n})^t$ for some $t > 0$.
Finally,
\[ \mbox{ Gal }(\mathbb{U}''_n/\Omega_{E'}(\mathbb{K}_n)) = \mbox{ Gal }(\mathbb{U}'_n/\mathbb{L}_n) =
\mbox{ Gal }(\mathbb{U}_n/\Omega_{E'}(\mathbb{L}_n)) \cong \id{C}^{\bullet} , \] which
completes the proof of the lemma.
\end{proof}
We may now complete the proof of Theorem \ref{gc}.
\begin{proof}
Assuming that $(A')^-[ T ]$ is non-trivial, we began by showing
in the previous chapter that there is a $\theta \in \mathbb{Z}_p[ \Delta ]$
such that $\psi_1(\rho^{\theta}) = 1$. Based on this fact, we
construct the extensions $\mathbb{L}_n = \mathbb{K}_n[ p^{1/p^n} ]$ and show that
the radical $\rho^{\theta \mathbb{Z}_p[ \Delta ]} $ lifts over $\mathbb{L}_n$ to
the radical of some unramified extension $\mathbb{F}_n/\mathbb{L}_n$, which
possesses a direct complement in $\mathbb{H}(\mathbb{L}_n)$; moreover, its galois
group is a $\mathbb{Z}_p[ s ]$-submodule $\id{C} \subset A^+(\mathbb{L}_n)$, via
the action of the Artin symbol -- these facts are proved in Lemma
\ref{fields}. We then used properties of Kummer pairings for the
investigation of $\id{C}$ and concluded in Lemma \ref{omfields}
that the canonical extension $\mathbb{U}_n = \Omega_{E'}(\mathbb{L}_n)[
\id{C}^{1/p^n} ]$ induces, under the given properties of $\id{C}$, an
extension $\mathbb{U}''_n$ which is $p$-ramified, linearly disjoint from
$\Omega_{E'}(\mathbb{K}_n)$ and such that $U^T :=
\mbox{ Gal }(\mathbb{U}''_n/\Omega^-_{E'}(\mathbb{K}_n))^T = \{ 1 \}$. Since the galois
group $U$ has subexponent $p^n$, it follows by Kummer duality that
there is a class $a \in A_n^+$ of order $p^n$ such that
\[ \Omega^-_{E'}(\mathbb{K}_n)[ a^{1/p^n} ] \subset \mathbb{U}''_n \cdot
\Omega^-_{E'}(\mathbb{K}_n) \setminus \Omega^-_{E'}(\mathbb{K}_n). \] By Kummer
duality, $U^T = 1$ implies that $a^{T^*} = 1$. We have shown in
Lemma \ref{noT*} that $A^+[ T^* ]$ is finite for $\mathbb{K}$. We may thus
choose $n$ such that $p^n > \exp(A^+[ T^* ])$, obtaining a
contradiction with the fact that ${\rm ord}(a) = p^n$. This completes
the proof of the Theorem.
\end{proof}
Suppose that ${\rm ord}(b_n) = p^{n+z}$ for some constant $z$, depending
only on $b = (b_n)_{n \in \mathbb{N}}$. If $z > 0$, then we see that $\mathbb{F}_n$
has no unramified cyclic continuation, so the construction made under
the assumption $z = 0$ still holds. If $z < 0$, we replace $\mathbb{K}_n$ by
$\mathbb{K}_{n-z}$ and have ${\rm ord}(b_{n-z}) = p^n$, the rest of the proof being
identical to the one above.
\begin{remark}
We may also consider the extension $\mathbb{L}'_n = \mathbb{L}_n[ (1 + p)^{1/p^n}
]$ which is abelian over $\mathbb{K}_n$ and such that $\beta_n \in
U(\mathbb{L}'_n)^{p^n}$, thus giving rise to unramified extensions of
$\mathbb{L}'_n$ which have similar properties to $\mathbb{F}_n$. However, $\mathbb{L}'_n$
is ramified outside $p$, so descent will only work down to $\mathbb{K}_n[
(1+p)^{1/p^n} ]$. These must indeed be extensions which have an
important $T^*$-part of exponent $p^n$.
\end{remark}
\begin{remark}
As in the case of Leopoldt's conjecture, it is obvious that the
Gross-Kuz'min conjecture follows from the Conjecture of Schanuel. It
would also suffice to have a generalization of Baker's result to
homogeneous forms in $p$-adic logarithms for arbitrary degrees.
\end{remark}
\textbf{Acknowledgment:} I thank Ralph Greenberg, Vicen\c{t}iu
Pa\c{s}ol, Inder Passi and Machiel van Frankenhuijser for helpful
discussions and comments during the writing of preliminary versions of
this paper. I am particularly grateful to Grzegorz Banaszak who,
through detailed discussions on the choice of uniformizers, helped
me find the proof of the general case presented here.
\bibliographystyle{abbrv}
Attempts thus far to combine general relativity and quantum mechanics,
the two cornerstones of our description of nature, have led to
difficulties, an example of which is the black hole information
parardox (BHIP) \cite{haw74,pre93}. BHIP suggests that the process by
which an object collapses into a black hole and then evaporates by
emitting Hawking radiation is not unitary. The effect apparently
leads to a new kind of unpredictability, quite apart from the
conventional one associated with Heisenberg uncertainty. The
derivation of the paradox employs a semiclassical treatment of quantum
fields localized close to the event horizon of a black hole, which
would seem to leave open the possibility of resolution through a more
detailed treatment of quantum gravity. However, as the problem can be
posed of a region near the horizon of a large black hole, it need not
invoke a strong gravitational field, which suggests that the problem
is amenable to a local quantum field theoretic treatment \cite{boku}.
On the other hand, from the string theory standpoint it has been
argued that the detailed knowledge of the Planck-scale physics cannot
be ignored even if there is no strong curvature or other
coordinate-invariant manifestation of the event horizon. Arguably, the
issue is still open, and continues to attract efforts at resolving
\cite{pre93,boku,hrv05,smo06,hormal03,gott03} and clarifying it
\cite{bp07}. In particular, in the loop quantum gravity approach,
quantum effects eliminate black hole singularities. As a result, one
can in principle track information to the future of a would-be
singularity \cite{smo2006}, thereby preserving information. It has
also been argued that BHIP may be avoided by attributing Hawking
radiation solely to quantum decoherence, considering that pure states
remain pure under unitary, closed-system evolution
\cite{kie01,gam06}. This is consistent with the viewpoint that pure
quantum states do not form black holes \cite{mye95}.
In the present work, we propose that BHIP, in particular the question
of localization of information in an evaporating black hole, may be
indicative of an inconsistent self-reference occuring in the
semiclassical treatment of black hole evolution \cite{bhilos}.
Admittedly, a rigorous study of this claim would require an
axiomatization of the semiclassical theory. Nevertheless, we believe
there are plausible grounds for believing that there are features,
presented here, that any such axiomatic theory should satisfy. In spite
of the very abstract nature of this approach to black hole evolution,
we will be led below to concrete, nontrivial consequences for black
hole formation. This work may be primarily regarded as a plea for
injecting metamathematical considerations into the study of fundamental
physics such as quantum gravity, and BHIP in particular. A similar
case can be made for applying quantum information theoretic and
computation theoretic insights to understanding the basic mathematical
structure of physical laws \cite{srigruska}.
The remainder of this article is arranged as follows. In Sections
\ref{sec:bhip} and \ref{sec:godel}, we briefly review the black hole
information paradox and G\"odel's first incompleteness theorem,
respectively. The ambiguity in the localization of information
falling into an evaporating black hole is introduced in Section
\ref{sec:bhiloc}. An argument that the localization problem may be a
formal inconsistency in the semiclassical theory is presented in
Section \ref{sec:sriambi}, and that this inconsistency can be viewed
as self-referential in origin is presented in Section
\ref{sec:srincons}. The question of restoring consistency by invoking
various ways to account for the G\"odel incompleteness obtained by
imposing consistency is considered in Section \ref{sec:srincomp}.
That of restoring consistency by means of avoiding self-reference is
considered in Section \ref{sec:avsf}. Finally, we conclude in Section
\ref{sec:konklu}.
\subsection{The black hole information paradox \label{sec:bhip}}
A brief introduction to BHIP is as follows. We denote by $H_M$ the
Hilbert space of a collapsing body $M$, of dimension $N$, where
$N=e^{\cal S}$, and ${\cal S}$ is the black hole's entropy. In the
semiclassical treatment of quantum field fluctuations on the
background spacetime determined by the collapse and evaporation of a
black hole, the Hilbert space of the fluctuations can be separated
into two subsystems, given by Hilbert spaces, respectively, $H_{\rm
in}$ and $H_{\rm out}$ (each also of dimension $N$), located inside
and outside the horizon. The correlations between the two fields is
characterized by the Unruh quantum state $|\Phi\rangle_{\rm in +
out}$, which looks like the vacuum in the far past, a maximally
entangled pure state \cite{gott03}
\begin{equation}
|\Phi\rangle_{\rm in + out} = \frac{1}{\sqrt{N}}\sum_{j=1}^N
|j\rangle_{\rm in}|j\rangle_{\rm out},
\label{eq:unru}
\end{equation}
where $|j\rangle_{\rm in}$ and $|j\rangle_{\rm out}$ are orthonormal
bases for $H_{\rm in}$ and $H_{\rm out}$, respectively. The Unruh
state contains a flux of particles in $H_{\rm out}$, that constitutes
the Hawking radiation.
To the outside observer $H_{\rm in}$ is inaccessible, and the field
localized outside is in the maximally mixed state $\sum_j
(1/N)|j\rangle_{\rm out}\langle j|_{\rm out}$ containing no detailed
information about $M$. When back-reaction is included in the
semiclassical approximation, the black hole will slowly lose its mass
through Hawking radiation, and disappear. From the classical black
hole geometry, the information about what formed the black hole cannot
come out without violating causality. So at late times, we obtain a
mixed state, even though $M$ began in a pure state. Clearly, this
process cannot be described as a unitary evolution, and suggests that
black holes destroy information. This problem is often referred to as
BHIP. However, it is convenient for us to regard it as one aspect of
the full paradox, which (aspect) we shall call the black hole
information loss problem. There is another aspect of the paradox,
introduced in Section \ref{sec:bhiloc}, which we call the black hole
information localization problem.
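To see concretely why tracing out the interior leaves Bob with no
detailed information about $M$, the following minimal Python sketch
(entirely ours and purely illustrative; a toy dimension $N=4$ stands
in for $e^{\cal S}$) computes the reduced state of $H_{\rm out}$ for
the state (\ref{eq:unru}) and checks that it is maximally mixed:
\begin{verbatim}
import numpy as np

N = 4  # toy dimension standing in for e^S
# coefficient matrix of |Phi> = (1/sqrt(N)) sum_j |j>_in |j>_out
c = np.eye(N) / np.sqrt(N)
# reduced density matrix of the exterior field:
# rho_out[k,l] = sum_j c[j,k] * conj(c[j,l])
rho_out = np.einsum('jk,jl->kl', c, c.conj())
print(np.allclose(rho_out, np.eye(N) / N))  # True: maximally mixed
print(np.trace(rho_out @ rho_out).real)     # purity 1/N < 1: a mixed state
\end{verbatim}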
\subsection{G\"odel Incompleteness\label{sec:godel}}
A formalization or axiomatization of arithmetic (in general, any
deductive theory) is the reduction of arithmetic to a small set of
initial formulas and rules of symbolic manipulation, such that a chain
of formulas obtained by manipulation in the formal system corresponds
to and represents deductions in arithmetic. By looking at the
correspondence between the formal system and the deductive theory in
reverse, Hilbert originated metamathematics, his name for the study of
rigorous proof in mathematics and symbolic logic. Here a formal
system is a `game' constructed, independently of its interpretation,
as a sequence of formulas obtained mechanically according to the rules
of symbolic manipulation, starting from the initial formulas. The
formal system is interpreted as representing the deductive system if
the initial formulas can be interpreted as expressing the axioms of
the theory, and the rules of symbolic manipulation, its logical rules
of inference. Then, a metamathematical proof that a formula occurs in
a sequence of formulas of the formal system yields a proof that the
proposition which is the interpretation of this formula is a theorem
of the deductive theory.
From the standpoint of mathematical logic, it is important to
distinguish between statements in a deductive theory from
meta-statements in the metatheory, which studies concepts, proofs and
truths in the theory. Failure to do so can lead to inconsistency
through self-reference, of which a well-known example is the liar's
paradox: ``this statement is false''. Here the statement acts as its
own metastatement. If the statement is true, then it is false, and
conversely: a contradiction. From a syntactic viewpoint, a formal
system is consistent if for any proposition $\alpha$, at most one of
$\alpha$ and its negation $\neg\alpha$ is provable. The formal system
is complete if for any proposition $\alpha$, at least one of $\alpha$
and $\neg\alpha$ is provable \cite{semantic}.
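These syntactic notions are elementary enough to be modelled in a few
lines of Python; the following toy model (entirely our own, with a
finite set of `provable' sentences standing in for a formal system)
is meant only to fix the definitions:
\begin{verbatim}
# Toy model: a "formal system" given by its set of provable sentences.
provable = {'a'}  # suppose only the atom 'a' is provable

def neg(s):
    return s[1:] if s.startswith('~') else '~' + s

def consistent(P, sentences):
    return all(not (s in P and neg(s) in P) for s in sentences)

def complete(P, sentences):
    return all(s in P or neg(s) in P for s in sentences)

sentences = ['a', 'b']
print(consistent(provable, sentences))  # True: never both s and ~s
print(complete(provable, sentences))    # False: neither 'b' nor '~b'
\end{verbatim}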
G\"odel's (first) incompleteness theorem, perhaps the most celebrated
result in metamathematics, states that any formal system that (1) is
rich enough to encompass arithmetic, (2) is finitely specified, and
(3) is consistent, contains a proposition that can neither be proved nor
refuted within the system \cite{God}, and is thus incomplete. Here
`finitely specified' means that there is an algorithm to list all
axioms (initial formulas) and rules of inference (rules for symbolic
manipulation), which may be countably infinite. Regarding (3),
G\"odel actually requires the stronger condition of
$\omega$-consistency \cite{Omega}, a subtlety we may ignore here.
Every deductive theory that includes elementary arithmetic (the
notions of natural numbers, and of the operations of addition and
multiplication) also inherits this incompleteness. Only theories with
sufficiently simple logical structure, such as propositional or
sentential calculus, Presburger arithmetic and elementary geometry,
are complete.
G\"odel's theorem is a consequence of the fact that arithmetic has
enough expressive power to allow meta-arithmetic statements to be
mirrored into it, thus making some sort of self-reference unavoidable.
Crucial to G\"odel's proof is the observation that the symbols of a
formal arithmetic system, and hence formulas and proofs constructed in
it, can be assigned a unique number, now called the G\"odel number.
Any other method of assigning numbers to these objects in one-to-one
fashion will also work. As a result, meta-arithmetical statements
about arithmetic can be paraphrased arithmetically as statements about
their G\"odel numbers. The meta-arithmetic statement that a sequence
$\alpha$ of formulas is a proof of the formula $\beta$, or that
formula $\gamma$ is provable, can be expressed, respectively, as an
arithmetical relation between the G\"odel numbers for $\alpha$ and
$\beta$, or an arithmetic property of the G\"odel number of $\gamma$,
and thus expressed in the formal system. This isomorphic mapping of
meta-arithmetic into arithmetic opens up the danger of self-reference.
If one takes care to set up blocks that prohibit inconsistency of the
liar's paradox type, one is left with incompleteness as a side-effect,
as it were.
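As a minimal sketch of the numbering mechanism (the prime-power
scheme is the classical one; the particular symbol codes below are an
assumption made purely for illustration):
\begin{verbatim}
PRIMES = [2, 3, 5, 7, 11, 13]  # enough primes for short formulas

def godel_number(codes):
    """Encode a finite sequence of symbol codes injectively
    as the product prime_i ** codes[i]."""
    g = 1
    for p, c in zip(PRIMES, codes):
        g *= p ** c
    return g

# e.g. with an assumed alphabet code {'0': 1, '=': 2},
# the formula "0 = 0" becomes 2**1 * 3**2 * 5**1 = 90
print(godel_number([1, 2, 1]))  # 90
\end{verbatim}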
Let us briefly present a simplified, illustrative but unrigorous sketch
of G\"odel's proof. Let system $P$ be a formalization of ordinary
arithmetic, whose alphabet ${\bf P}$ consists of the symbols ``0''
(zero), ``$s$'' (successor), the logical constants $\forall$, $\neg$
(negation), $\vee$, and variables of the first type $x_1, y_1, \cdots$
(for individuals, the numbers including 0), variables of the second
type $x_2,y_2,\cdots$ (for classes of individuals), and so on.
Metamathematically, it is immaterial what symbols we choose to
represent these basic signs, and we may choose natural numbers for
them. Accordingly, a formula is a finite series of natural numbers,
and a particular proof is a finite series of a finite series of
natural numbers. Metamathematical concepts and propositions thereby
become concepts and propositions concerning natural numbers, and
therefore, at least partially expressible in the symbols of $P$
itself. In particular, G\"odel shows that the metamathematical
concepts ``formula'', ``axiom'', ``variable'', ``proof-schema'',
``provable formula'', etc., are definable within $P$.
We call formulas involving a single free variable class-signs. If
$\alpha$ is a class-sign, and $t$ a number, we designate by $[\alpha;
t]$ the formula obtained by substituting the sign for $t$ in place of
the free variable in $\alpha$. Let every class-sign be somehow
ordered, e.g., lexicographically. The concept class-sign and ordering
$R$ can be defined in $P$. Let $R(n)$ denote the $n$th class-sign.
We define a set $K$ of whole numbers by:
\begin{equation}
\label{eq:clasgn}
n \in K ~\equiv~ [R(n);n] {\rm ~is~}{\rm not~}
{\rm provable~in~} P.
\end{equation}
As the r.h.s.\ of Eq. (\ref{eq:clasgn}) is definable in $P$, so is the
concept $K$ in the l.h.s. That is, there is a class-sign $W$ in $P$
such that $[W;n] \equiv n \in K$. For some positive integer $q$, $W =
R(q)$. We will find that the string, a G\"odel sentence for $P$,
\begin{equation}
\label{eq:God}
[R(q);q],
\end{equation}
is {\em undecidable} in $P$. If proposition (\ref{eq:God}) were
provable in $P$, then so would be the proposition $q \in K$, by
definition. The latter would imply $[R(q);q]$ is {\em not} provable in
$P$ according to Eq. (\ref{eq:clasgn}). This is a contradiction. On
the other hand, if proposition (\ref{eq:God}) were refutable in $P$,
i.e., $\neg[R(q);q]$ were provable in $P$, this would supply the proof
that $\neg(q \in K)$, so that, by Eq. (\ref{eq:clasgn}), $[R(q);q]$ is
provable in $P$. Again, we obtain a contradiction. Therefore,
assuming $P$ is consistent, $[R(q);q]$ is undecidable in $P$
\cite{sriform}. Thus $P$ is incomplete. Proposition (\ref{eq:God}),
which involves supplying a formula with its own serial number as its
argument, is an instance of the diagonal argument \cite{linz},
pioneered by the mathematician Cantor \cite{sricantor}. Clearly,
(\ref{eq:God}) is true, since if it were false, it would be provable
in $P$, thereby contradicting itself. We thus have the curious
situation that (\ref{eq:God}) is known to be true metamathematically,
even though it is unprovable in $P$ \cite{add}.
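The diagonal argument itself can be displayed in miniature; in the
following sketch (a toy table of our own choosing) the complement of
the diagonal differs from every enumerated row:
\begin{verbatim}
# Cantor-style diagonalization in miniature: given any enumeration
# rows[n] of 0/1 sequences, the diagonal complement differs from
# every enumerated row at position n.
rows = [
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
]
diag = [1 - rows[n][n] for n in range(len(rows))]
assert all(diag != rows[n] for n in range(len(rows)))
print(diag)  # [1, 0, 0, 1]: absent from the enumeration
\end{verbatim}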
An existential proof of G\"odel's theorem is obtained by noting that
the set $\Pi$ of provable propositions in a formalization of
arithmetic is recursively enumerable (r.e.) \cite{re}, in fact,
recursive \cite{rec}, whereas the set $T$ of truths expressible in
arithmetic is not r.e.\ \cite{usp}. In a (semantically) consistent
formalization, clearly, $\Pi \subseteq T$. Since $T$ is not r.e.,
there should be truths that are unprovable in the given formalization.
G\"odel's incompleteness theorem is related to Turing uncomputability,
the unsolvability of certain problems algorithmically
\cite{tur36,sricantor}. If every proposition in an arithmetic system
$P$ were decidable by an algorithm $G$, this could serve as the basis
to solve the halting problem for Turing machines, which is known to be
undecidable \cite{halt}. The unsolvability of the halting problem
thus implies the existence of undecidable propositions in $P$.
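A hedged sketch of this reduction, in Python: the total decider
\verb|halts| is hypothetical, and the point of the construction is
precisely that no such total function can exist:
\begin{verbatim}
# Hypothetical total decider: halts(prog, arg) -> True/False.
# If it existed, the following diagonal program would defeat it.
def make_paradox(halts):
    def paradox(prog):
        if halts(prog, prog):  # if prog halts on its own code...
            while True:        # ...loop forever,
                pass
        return 0               # ...otherwise halt.
    return paradox

# Running make_paradox(halts) on its own code contradicts halts
# either way, so no total halts can exist.
\end{verbatim}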
Turing machines and the system $P$ derive their power of
self-reference from their universality: the existence of universal
Turing machines in the case of the former, and the ability to mirror
meta-arithmetic statements in the case of the latter. However, we will
find that one can construct simpler systems in which self-reference
occurs, leading to incompleteness or inconsistency. As an informal
example, we note, disentangling proposition (\ref{eq:God}), that it
asserts its own unprovability in $P$. It may thus be regarded as the
consistent, and hence incomplete, version of the liar's paradox,
which, by asserting its own falsehood, is complete, but inconsistent.
\section{Black hole information localization problem \label{sec:bhiloc}}
A problem in BHIP closely related to the information loss problem
concerns the situation that as a radiating semiclassical Schwarzschild
black hole shrinks, and possibly fully evaporates, there seems to be
no unique prediction of where the information about $M$ is localized.
The event horizon being a globally defined property of a spacetime, to
a freely falling observer (called Alice), matter falling towards a
black hole encounters nothing unusual while crossing the horizon. She
finds the quantum information contained in the initial matter $M$ pass
freely into the interior of the black hole. In contrast, according to
an observer Bob, who is outside the event horizon, at a large distance
from the black hole and approximately at rest with respect to it, the
collapsing or infalling object appears to increasingly slow down and
to freeze asymptotically at the horizon. He finds that it never quite
crosses the event horizon during the semiclassical stage, and perhaps
even later, as the black hole evaporates, possibly entirely. This
lack of a unique prediction of the localization of the
infalling/collapsing matter in the semiclassical theory is the black
hole information localization problem. This can be made clearer as
follows.
\subsection{The black hole information localization problem viewed
as formal inconsistency \label{sec:sriambi}}
We consider Alice falling towards a Schwarzschild black hole of mass
$m > 0$. To Bob, the coordinate time $t$ of the Schwarzschild metric
corresponds approximately to his proper time. Alice initially stands
close to Bob before propelling herself forward and then allowing
herself to fall freely into the black hole. The black hole mass $m$
is assumed to be sufficiently large that, from her viewpoint, all
events in her worldline segment up to her infall into the singularity
can be regarded to good accuracy as happening in a region of spacetime
endowed with a classical, time-independent metric. Let $\epsilon$
denote the event of this worldline of Alice intersecting the horizon.
Consider the Kretschmann scalar $K^A(\tau) =
R_{abcd}(\tau)R^{abcd}(\tau)$, where $R_{abcd}(\tau)$ is the Riemann
tensor along Alice's worldline, parametrized by her proper time
$\tau$. For convenience, we set $\tau=0$ at $\epsilon$. For a
Schwarzschild metric (\ref{eq:schs}), $K^A=48m^2/r^6$ \cite{hen00}.
In particular, event $\epsilon$, at which $r = 2m$, is marked by the
scalar value $K^A(0)=3/(4m^4)$.
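A one-line symbolic check of this value (a sketch using sympy;
substituting $r=2m$ locates the horizon):
\begin{verbatim}
import sympy as sp

m, r = sp.symbols('m r', positive=True)
K = 48 * m**2 / r**6                 # Kretschmann scalar, Schwarzschild
print(sp.simplify(K.subs(r, 2*m)))   # 3/(4*m**4), the value at epsilon
\end{verbatim}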
According to Bob, because of gravitational redshift, Alice is falling
ever more slowly, but never quite getting to the horizon. Throughout
the semiclassical regime, we may assume that the evaporating black
hole is approximately static and spherically symmetric, with a
shrinking horizon, and that Bob remains the distant, stationary
observer. Thus, even when substantial black hole mass has evaporated,
Alice will not yet have crossed through the horizon, as viewed from
his perspective. A quantum gravity scenario in which she never does
so as seen by Bob is not inconceivable. Indeed, this is the accepted
situation in string theory \cite{boku}. Even if Alice's crossing the
horizon does eventually happen, this event (denoted $\eta$) would
presumably have to occur in the strongly quantum gravity regime. We
expect that the Kretschmann scalar $K^B(0)$ at $\eta$ would be far
larger. Even if $K^A(0) = K^B(0)$ by an extraordinary conspiracy, the
functions $K^A(\tau)$ and $K^B(\tau)$ could not be the same for $\tau
\ge 0$, as the former occurs in a purely classical spacetime, whereas
the latter in a full quantum gravity regime. In a scenario where
Alice does not cross the horizon in Bob's perspective, $\eta$ is
termed a ``null event''.
Either way, we are led to conclude that $\epsilon \ne \eta$, and that
Alice's and Bob's perspectives are mutually incompatible. To the
extent that the assumptions made are realistic, this inability of the
semiclassical theory to assign a unique spacetime location to a
physical event \cite{nota} may be regarded as a formal inconsistency
in the theory. In particular, let ${\bf g}$ be the proposition that
the quantum information pertaining to an infalling body passes through
the horizon at event $\epsilon$. From Alice's perspective, one
predicts ${\bf g}$, and from Bob's perspective, one predicts $\neg{\bf
g}$. The inconsistency is that the theory does not (seem to) assign a
unique destiny to the infalling information.
It is worth stressing that this incompatibility is fundamental, and
should not be thought of as a manifestation of non-Boolean quantum
logic defined in the state space $H_M \otimes H_A \otimes H_B$, where
$H_A$ and $H_B$ are the Hilbert spaces of Alice and Bob, respectively.
In particular, the incompatibility cannot be accounted for by an
entanglement between the black hole, Alice and Bob, where $\epsilon$
($\eta$) occurs relative to Alice (Bob). The simple reason for this is
that distant Bob need not interact, and thus need not become
entangled, with $M$ and Alice. But the basic reason is that such a
`relative state' interpretation is possible only if Alice and Bob are
two states of the {\em same} system, rather than two different
systems, as is the case here \cite{2sys}.
In retrospect, the inconsistency stems from the fact that general
relativity permits the co-existence of observers of very disparate
powers: on the one hand, Bob, and on the other, Alice, who is
infinitely more powerful than him in the sense that there are events
(such as $\epsilon$) to his infinite future that happen in finite
proper time according to Alice (but none vice-versa). In fact, a
disparity of this kind may serve as a basis to construct a general
relativistic hypercomputer to compute Turing uncomputable functions
\cite{maho}. The origin of the localization problem, then, seems to
be the combined effect of the existence and diminution of the domain
of the `infinitely powerful' observer, given by the interior of the
Schwarzschild black hole.
To our knowledge, of the various efforts to resolve BHIP, currently
string theory alone seems to acknowledge the localization problem.
According to this view, an infalling object carries its information
intact through the horizon as seen in Alice's perspective. Bob
perceives the black hole draped by a heated membrane that is the
source of the Hawking radiation, and situated just above the event
horizon. He considers any infalling information to become disrupted
upon approaching this membrane, and re-emitted as radiation to the
exterior universe, keeping the late-time state pure. While this
offers the prospect of solving the information loss problem, that of
information localization still remains.
The standard understanding among string theorists \cite{boku} is that
this perspective-dependent nonlocal existence of infalling matter can
be accounted for by the principle of black hole complementarity
\cite{suss}, the idea that Alice's description is complementary to
Bob's, somewhat in the same way that the description of a quantum
particle in terms of position and momentum are complementary.
Historically, this notion of nonlocality has served to motivate the
holographic principle \cite{hooft}. Thus the incompatibility of
Alice's and Bob's perspectives is treated as a new principle of
relativity, rather than as an inconsistency.
The idea that the observation or non-observation of an event can
depend on the the observer's reference frame occurs also in the
context the Unruh effect \cite{unru76}, where the Unruh radiation is
observed in an accelerated frame but not in the corresponding inertial
frame. Indeed, the existence of Unruh radiation can be linked to the
apparent event horizon perceived by an accelerated observer, thus
putting it in the same conceptual framework as Hawking radiation
\cite{hara}. It has long been argued that the appearance of particle
events is observer dependent \cite{ful73}. Thus, the observation or
non-observation of an event of horizon-crossing or not crossing can
reasonably depend on the frame of the observer and does not
necessarily signal inconsistency.
In string theory, this remarkable position is justified by appeal to a
verificationist philosophy: one can choose not to be bothered by the
`cloning' of information because it cannot be verified to have occurred
by an observer within the semiclassical effective theory regime
\cite{suss}, inasmuch as any attempt to do so would require energy far
beyond the Planck scale. Thus, this standpoint defers a treatment of
the problem from the semiclassical regime to a full theory of quantum
gravity.
\subsection{BHIP viewed as an inconsistent self-reference in quantum gravity
\label{sec:srincons}}
Since physics is described in the language of mathematics, it is a
deep yet natural question to ask what, if any, is the impact of
G\"odel incompleteness and Turing uncomputability on physics, a
question that has elicited varied responses
\cite{cas96,svo03,perzur,haw02}. Recently, Heisenberg uncertainty has
been related \cite{cal04} to G\"odel incompleteness through the notion
of information theoretic incompleteness \cite{cha}. An interesting
survey of the possible impact of G\"odel's incompleteness theorems on
physics may be found in Ref. \cite{bar02}.
We expect that the laws of physics possess sufficiently rich logical
structure to express elementary arithmetic, as evidenced for example
by the simple fact of existence of electronic computers. Hence a
formalization of physics should be able to express metatheoretic
propositions, such as provability in itself. If consistent, this
formal system (and by extension, physics itself) will then be
incomplete. Intuitively, we regard physical systems as `computing'
their own evolution \cite{llo00}, and thus believe that there is an
algorithm to compute any property of a system. The Church-Turing
thesis of computer science \cite{linz} may then be invoked to suggest
that no physical process, interpreted as a procedure for computing a
function, can be more powerful than Turing machines. Thus the
incompleteness of physics could manifest in the physical
uncomputability of Turing uncomputable problems \cite{sri06}.
On the other hand, a candidate theory of physics could be inconsistent
through self-reference. It may not be straightforward to detect
inconsistency in the laws of the theory. G\"odel's second
incompleteness theorem shows that the formal system $P$ can prove its
own consistency if and only if it is inconsistent \cite{go2}. One way
to deal with the issue of consistency of a physical theory would be to
axiomatize it and then try to demonstrate its consistency or
inconsistency, failing which one may hope that it is consistent; or,
one may try to prove its consistency metatheoretically. Even in the
absence of a formalization of the theory, an inconsistency might be
revealed through the prediction of some genuine paradox. The
detection of an inconsistency would imply that the theory in question
is inadequate, or, what is less likely, that Nature herself harbors an
inconsistency \cite{bar02}. Here we propose that the localization
paradox of BHIP may signal an inconsistent self-reference in the
semiclassical approach to black hole evolution.
To view the formal inconsistency of the BHIP localization problem as
self-referential, it may be re-cast as a time paradox. Suppose Alice
freefalls for a while, but before reaching the horizon, switches on a
rocket and returns to Bob. In this case, no paradox arises
because Alice's and Bob's perspectives will coincide, and their
respective observations will be continuously transformable into each
other, within the semiclassical theory. However, if Alice chooses to
continue to freefall, and enters the black hole at event $\epsilon$,
then Bob's perspective does not register $\epsilon$, but instead the
incompatible (possibly null) event $\eta$.
\begin{figure}
\includegraphics[width=7.0cm]{sriganesh0.eps}
\caption{Bird's eye `meta-view' of the BHIP localization paradox as a
self-referential classical information circuit. Negative time travel
is indicated through the curve labeled by the encircled ``$\nu$''.
(a) The lines $s$ and $h$ represent the singularity and the shrinking
horizon, respectively. The dash-dotted and dashed curves are the
worldlines of an infalling object, according to Alice's and Bob's
perspectives, respectively. We note that $\eta$, if it happens,
presumably occurs in the fully quantum gravity regime, and if it is a
null event, the `$\eta$-worldline' will not intercept the horizon. The
negative-time feed-back is triggered if and only if event $\epsilon$
happens. Prior to the feed-back induced nonlocal split, the object's
worldline exists in an unambiguous past.}
\label{fig:sriganesh0}
\end{figure}
This situation may be described in the following somewhat fanciful
language. The information about Alice (or $M$) propagates towards the
black hole initially from an unambiguous past. If Alice does not fire
her rocket but freefalls into the black hole, a signal time-travels
from her future self at the event $\epsilon$ to her past self occuring
earlier, instructing the latter to shift to the incompatible worldline
leading to $\eta$ in Bob's perspective. Thus, if an infalling body
enters the black hole at $\epsilon$, it `will not have entered' the
black hole during that event. And if $\eta$ is a null event, the
paradox is that if the body enters the black hole, then it will not
have entered. In this sense, the BHIP localization problem may be
viewed as a physical version of the liar's paradox. If Alice fires
her rocket to return back to Bob, no such time-traveling signal
occurs. More generally, this signal is generated, and an infalling
object `experiences' a nonlocal splitting of the self into the two
mutually incompatible perspectives, if and only if it crosses the
horizon in Alice's perspective. Since this time-traveling signal
occurs across the perspectives, the self-reference is perceived,
strictly speaking, in the `meta-perspective' that has a bird's eye
view of both perspectives, rather than in the Alice perspective or Bob
perspective alone (cf. Figure \ref{fig:sriganesh0}).
Although this self-reference is tied in a complicated way to the
causal structure of spacetime in general relativity modified by
quantum mechanics, the essential idea of the inconsistency as being
due to a temporal self-reference, and of the incompleteness obtained
by imposing consistency, can be roughly demonstrated using simple
`self-referential circuits' that compute a one-bit partial function
(cf. Appendix \ref{sec:sritysk}).
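The flavour of such circuits can be conveyed by the following minimal
sketch (our own toy model, not the construction of the appendix):
consistency of the feedback loop demands a fixed point $x = g(x)$ of
the gate $g$, and the liar-type gate has none:
\begin{verbatim}
# One-bit "circuit" whose output is fed back through a gate g;
# a consistent assignment is a fixed point x = g(x).
def fixed_points(g):
    return [x for x in (0, 1) if g(x) == x]

print(fixed_points(lambda x: 1 - x))  # []    : liar-type, inconsistent
print(fixed_points(lambda x: x))      # [0, 1]: underdetermined, "incomplete"
\end{verbatim}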
A quick way to impose consistency in the BHIP localization problem is
to proscribe objects from falling into the inconsistent zone which is
the evaporating black hole. Applied to any infalling body, this would
suggest that the horizon never forms in finite time in the first
place. G\"odel incompleteness would then correspond to the situation
that, if the theory is consistent, it could somehow not allow the
dynamic formation of a solution (the Schwarzschild black hole) that it
nevertheless allows to exist, because if this solution were formed in
finite time, it would be inconsistent. One could then require that
the detailed dynamics of the infalling matter, possibly involving new
physics, would somehow conspire to prevent an object's collapsing into
the horizon. We will return to this point in detail in the next
Section, where we consider this and various other proposals to resolve
BHIP in this light.
\section{Towards resolving BHIP via G\"odel incompleteness
\label{sec:srincomp}}
Even if we admit that the semiclassical theory of black hole
evaporation may be inconsistent in the above sense, nonrelativistic
quantum mechanics and classical general relativity are arguably
consistent in their own domains.
This suggests that the axioms of quantum mechanics are not compatible
with those of general relativity, and that BHIP may be a manifestation
of this incompatibility. Again, in the absence of a rigorous
axiomatization of semiclassical general relativity, three broad
operational responses to the situation may be considered in order to
eliminate the inconsistency, either by averting self-reference, or by
invoking possibly new physics that would explain the G\"odel-like
incompleteness corresponding to an imposition of consistency: (A) to
somehow thwart the full collapse of $M$ into a black hole from
happening in finite time; (B) to modify the semiclassical theory, with
the modifications being understood as coming from the full theory of
quantum gravity; (C) to modify standard non-relativistic quantum
mechanics, and/or classical general relativity.
The first two options are considered sequentially in the following two
subsections. Option (C) is considered in the next Section. Option
(A) is concerned with the introduction of G\"odel incompleteness in
the form of prohibiting the formation of black holes in finite time.
Option (B) admits the inconsistency in the initial phase of infall
through the horizon, but enforces late-time consistency, by means of a
black hole remnant or a black hole final state. Option (C) aims to
modify one or both of the ingredient theories of semiclassical quantum
gravity, i.e., quantum mechanics and general relativity, so that no
BHIP-like self-reference occurs in putting them together.
\subsection{Eternally collapsing objects instead of evaporating
black holes.\label{sec:ever}}
As noted briefly at the end of Section \ref{sec:srincons}, the
simplest way of imposing consistency on the evolution of infalling
bodies implies that non-zero mass black holes should never form in
finite time. If we believe in the consistency of the semiclassical
approach, we may then `predict' the existence of a dynamical mechanism
that would explain how a body may be prevented from collapsing into a
black hole of non-zero mass in finite time. The physics of such a
mechanism would supply the required G\"odel incompleteness. It is
possible that we would require new physics to fulfil this purpose, but
we expect that there would be little departure from semiclassical
theory near the horizon.
Remarkably, such a no-go mechanism may already exist in {\em
classical} general relativity. In this scenario, what are
conventionally considered to be black hole candidates are proposed to
be eternally collapsing objects (ECOs) \cite{mitra}, with various
initial mass distributions lead to $m=0$ eventually (cf. Ref.
\cite{mitra} and references therein). Further, in Ref.
\cite{tanmay1}, the inherently quantum functional Schr\"odinger
formalism applied to the quantum collapse of and radiation from a 3+1
dimensional shell of matter finds that the event horizon may never
form in finite time. In addition, the radiation as seen from the
outside observer's perspective turns out to be non-thermal
\cite{tanmay2}, which allows for information to be emitted out.
There is some supporting observational evidence in terms of a quasar
containing an intrinsic magnetic moment \cite{schild}, indicative of the
absence of an event horizon.
Conventional belief in the existence of black holes rests on the exact
Oppenheimer-Snyder solution to the general relativistic spherical
collapse equations \cite{opsny}, in which the collapsing body is
modelled as ``pressureless dust", and is shown to form a black hole in
finite proper time that goes as $m^{-1/2}$. The point of departure to
ECOs is to note that this solution, in which the collapsing fluid is
implicitly of zero internal energy and zero heat flux, does not
correspond to any physical fluid, whereas in a realistic situation,
the collapsing fluid will have finite pressure and finite density
gradient. In particular, radiation density and heat flux should
increase sharply as the horizon is approached, and the build-up of
gravitationally trapped radiation pressure is predicted to keep
slowing down the collapse as the object becomes sufficiently compact.
Accordingly, ECO theory predicts that massive objects suffering
spherical gravitational collapse never actually form non-zero mass
black holes. In evolving towards a black hole, an ECO burns its
entire mass into radiation, so that the black hole that eventually
forms at infinite proper time has mass $m=0$ \cite{mitra}.
Obviously, such a Schwarzschild black hole that lacks any entropy
would resolve the BHIP information loss problem.
In situations where ECOs are shown to be unavoidable, they could be a
manifestation of G\"odel incompleteness in the following sense:
acceptance of ECOs leads us to the situation that, even though a
Schwarzschild black hole occurs as a solution to the general
relativity field equations, there are no initial conditions on $M$
that can collapse into the black hole in finite time. The existence
of such a dynamically unattainable solution can furnish a G\"odel-like
incompleteness corresponding to the theory's consistency. This is
analogous to the expressibility of an unprovable proposition in the
formal system $P$, assumed to be consistent. ECOs may then be a
purely classical effect that anticipates the quantum effect of black
hole evaporation, just as the classical black hole area theorem
\cite{bek98} anticipates the notion of black hole temperature.
To clarify the formal character of the incompleteness, we may regard
the collection of physical bodies such as $M$ as a formal system
representing the deductive theory of general relativity (just as
Turing machines or physical computers may serve as a formal system
representing arithmetic). By direct physical evolution, this formal
system `proves' theorems of general relativity. More precisely,
physical evolution drives a celestial body into various states, which
can be interpreted metatheoretically as representing theorems in
general relativity, just as a series of symbolic manipulations in $P$
produces new formulas, which can be interpreted as theorems in
arithmetic. We know `metatheoretically', by direct mathematical
insight, that a Schwarzschild solution of finite mass $m$ exists.
However, ECO theory implies that our formal system is constrained not
to find this out in finite time. G\"odel incompleteness would then
correspond to the situation that the semiclassical Schwarzschild black
hole is a true solution that cannot be detected by any consistent
formalization of the semiclassical theory of gravity.
A formalized proof of the existence of black holes would be an
interpretation of the formation of a Schwarzschild black hole in
finite time within the semiclassical theory, which would make the
formalization inconsistent through BHIP. This is analogous to the
situation that, according to G\"odel's first incompleteness theorem, a
proof of proposition (\ref{eq:God}) within $P$ would make the formal
system inconsistent through self-reference.
If the ECO scenario holds true generally, and is not restricted to
isolated bodies that collapse with (approximately) spherical symmetry,
this would offer further support for the view that the semiclassical
approach is consistent but incomplete. The eternal-collapse scenario
potentially provides the most conservative resolution of BHIP, since
no new physics would be needed. If the BHIP localization problem can
be shown to arise in the formation of any horizon, one could `predict'
ECOs as a generic consequence of the consistency of the semiclassical
theory of gravity.
However, if it turns out that there are certain mass distributions for
which the finite proper time formation of non-zero mass black holes
cannot be avoided, we would be led to conclude that semiclassical
gravity is probably inconsistent, and new physics, such as the
possible alternatives discussed below, may have to be invoked in order
to resolve BHIP.
\subsection{Consistency through selection of an unambiguous future}
Unlike the approach in the preceding subsection, the present one,
which implements option (B), involves matter passing into the black
hole, and thus the BHIP localization problem is unavoidable in the
events pertaining to the initial phase of the infall through the
horizon. However, one may restore consistency at late time events, by
having the information localizations in the Alice and Bob versions
somehow merge eventually. Although this does not avert the
inconsistency in toto, we may be satisfied with restricting it to a
finite measure and avoiding the prospect of `eternal inconsistency',
in which the two versions diverge forever.
There seems to be little freedom to alter the Alice version during the
events of the initial phase of infall through the horizon of a large
black hole, since \mbox{(semi-)classical} physics presumably holds at
those events. Relatively speaking, there is some room to maneuver the
Bob version, depending on the theory of quantum gravity. Accordingly,
three broad scenarios of late-time resolution of the localization
problem are available. First is: (a) that the event $\eta$ occurs
eventually after the breakdown of the semiclassical approximation,
with both perspectives being agreed thereafter that the information is
localized inside the black hole, which, in the Bob perspective, is now
a black hole remnant or a naked singularity.
An alternative possibility is that the event $\eta$ does not occur, as
in the string theoretic description. This would mean that the
Alice and Bob perspectives remain `eternally incompatible'. Therefore,
one option is: (b) nonlocally transferring the information as seen in
the Alice perspective to the Hawking radiation. As for Bob, he
perceives the information of the infalling object somehow pass into
the Hawking radiation without going through the horizon. Another
option is: (c) eventually destroying the information in both Alice's
and Bob's perspectives. This would make the localization problem
redundant, but at the cost of unitarity.
Simple examples of the above three scenarios (a), (b) and (c) are
presented under the following three headings in this subsection,
respectively. They had originally been proposed in connection with
the BHIP information loss problem, rather than the localization
problem. Here we point out that they can be adapted to address the
latter at late-time events. In (b) and (c), we expect that this
enforcement of consistency will produce G\"odel-like incompleteness,
as indeed confirmed below. \\
\paragraph{The information is localized in a naked singularity or
black hole remnant.} ~\\
In the first example, which illustrates scenario (a), during Hawking
radiation the black hole retains all the initial matter together with
negative energy quanta entangled with the Hawking radiation without
mutual annihilation \cite{hrv05}. As the black hole's aggregate mass
drops to zero in the semiclassical limit, its Hawking temperature
($\propto 1/m$) rises to infinity. It is further assumed that such a
zero-mass, information-bearing object is somehow without detectable
impact on low-energy experiments \cite{pre93}. Crucially, it is argued
that the horizon does not vanish but recedes to $r=0$. To see this,
consider the Schwarzschild metric
\begin{equation}
\label{eq:schs}
ds^2 = \left(1-\frac{2m}{r}\right)dt^2 -
\left(1-\frac{2m}{r}\right)^{-1}dr^2 - r^2d\Omega^2,
\end{equation}
where we have used natural units in which $G=c=1$. At first sight,
the metric when the mass $m$ drops to zero seems to correspond to flat
spacetime, endowed with Minkowski metric $ds^2 = dt^2 - dr^2 -
r^2d\Omega^2$. However, a more careful calculation shows
\begin{equation}
\lim_{m\to 0^+} g_{00}(r=2m)=0;\hspace{0.5cm}
\lim_{m\to 0^+} g_{rr}(r=2m)=\infty.
\end{equation}
This corresponds to a kind of an `informationally dense' naked
singularity, with horizon at $r=0$. Information about $M$ exists in
the full entangled state encompassing the singularity and the Hawking
radiation. This provides a resolution to the information loss problem.
In regard to the localization problem, there is an initial
incompatibility between the Alice and Bob perspectives because
$\epsilon$ occurs close to $r=2m$, whereas $\eta$ happens
presumably at about $r=0$, when the evaporating black hole's size is
of the order of Planck length. Thereafter an unambiguous future
localization, and hence restoration of consistency, is assumed to
occur, with unequivocal agreement between both perspectives that the
information is localized at the quantum black hole singularity. A
detailed quantum gravity treatment should presumably replace the
singularity with a black hole remnant. \\
\paragraph{The information becomes localized in the Hawking radiation.} ~\\
The second example, exemplifying scenario (b), is based on an
interesting recent proposal to reconcile the unitarity of the black
hole S-matrix with Hawking's semiclassical arguments \cite{hormal03}.
It aims to resolve the BHIP information loss problem by imposing a
final-state boundary condition at the spacelike singularity inside the
black hole, which causes the information inside it to be `ejected'
into the Hawking radiation. Here we will note that it also reconciles
the perspectives of Alice and Bob, because the information is now
unambiguously localized: outside the black hole.
The final state boundary condition imposed at the singularity requires
the quantum state of $H_{M} \otimes H_{\rm in}$ to be the maximally
entangled state \cite{gott03}
\begin{equation}
_{M+{\rm in}}\langle\Phi|(S \otimes I)
\label{eq:finsta}
\end{equation}
where $_{M+{\rm in}}\langle\Phi| = N^{-1/2}\sum_{j=1}^{N} {_M}\langle
j| _{\rm in}\langle j|$, $S$ is a unitary transformation, and
$\{|j\rangle_M\}$ is an orthonormal basis for $H_M$. The effective
transformation from $H_M$ to $H_{\rm out}$ is seen to be \cite{gott03}
\begin{equation}
T \equiv~ _{M + {\rm in}}\langle\Phi|(S \otimes I)|\Phi\rangle_{\rm in
+ out} = \frac{1}{N}S,
\label{eq:T}
\end{equation}
the effectively unitary black hole S-matrix. The $1/N$ factor
accounts for post-selection and indicates that a conventional
measurement would have resulted in the final state (\ref{eq:finsta})
with probability $1/N$. This process may be viewed as a sort of
quantum teleportation \cite{qtele} that consumes the Unruh state
entanglement in order to nonlocally transmit $M$'s state to the
outgoing Hawking radiation, but without the concomitant classical
communication. This enables the two versions to agree on the late
time localization of the information.
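As a side check, the algebra leading to Eq.~(\ref{eq:T}) can be
verified numerically. The sketch below is added purely for
illustration and is not part of the original argument: the use of
NumPy, the dimension $N=4$, and the random unitary $S$ are our own
choices.
\begin{verbatim}
# Numerical check that post-selection on the final state yields T = S/N.
import numpy as np

N = 4
rng = np.random.default_rng(0)

# Random unitary S on H_M (QR decomposition of a complex Gaussian matrix).
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
S, _ = np.linalg.qr(A)

# Maximally entangled state: Phi[a, b] = <a|<b|Phi> = delta_{ab}/sqrt(N).
Phi = np.eye(N) / np.sqrt(N)

# T[o, m] = sum_{a,b} conj(Phi[a, b]) S[a, m] Phi[b, o].
T = np.einsum('ab,am,bo->om', Phi.conj(), S, Phi)
assert np.allclose(T, S / N)
\end{verbatim}
The assertion confirms that the projection implements the map $S/N$,
that is, a unitary black hole S-matrix up to the post-selection
factor.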
The application of this picture to a semiclassical theory of black
hole evaporation in which $\eta$ does not happen could be as follows.
For concreteness, we consider a string-theory-like description of
black hole evaporation. In Bob's perspective, the information about
$M$ does not pass through the horizon, but is somehow transferred into
the Hawking radiation after being disrupted at the heated membrane.
In Alice's perspective, the information does initially pass through
the horizon, after which the projection into the black hole final
state `teleports' the information into the Hawking radiation. Thus,
an unambiguous future eventually appears, providing a late-time
resolution to the localization problem. This imposition of
consistency produces a G\"odel-like incompleteness.
The event of projection of the state in $H_M \otimes H_{\rm in}$ to
$_{M + {\rm in}}\langle\Phi|(S \otimes I)$ occurs after $M$ enters
into the horizon, as seen from Alice's perspective. However, because
in Bob's perspective $M$ does not enter the black hole, the scattering
process characterized by $T$ would have to be attributed in his
perspective to some fundamentally indeterminate quantum gravity effect
at the heated membrane, that destroys $M$, recreating it in $H_{\rm
out}$. We identify G\"odel incompleteness with this inability of the
theory to provide a detailed external-based description of the black
hole S-matrix.\\
\paragraph{The information is destroyed.} ~\\
The third example, a concrete instance of scenario (c), is based on a
careful critique \cite{gott03} of the above black hole final state
proposal, where it is pointed out that departures from unitarity of
$T$ can arise due to interactions $U$ between the collapsing body and
the infalling part of the Hawking radiation. Because of the black hole
final state, the resulting loss of information outside is not
compensated for by any information available inside the black hole.
This automatically reconciles the perspectives of Alice and Bob,
because now that the information is destroyed,
obviously both can assert the loss of information
unparadoxically. This provides a late-time resolution to the BHIP
localization problem but a negative resolution to the information loss
problem.
According to this proposal, the effective modified transformation on
the infalling body is, in place of Eq. (\ref{eq:T}),
\begin{equation}
T ~\equiv~ _{M + {\rm in}}\langle\Phi|(S \otimes I)U|\Phi\rangle_{\rm
in + out} ~\equiv~ _{M + {\rm in}}\langle\Phi|V|\Phi\rangle_{\rm in +
out}.
\label{eq:TU}
\end{equation}
If and only if $_{M + {\rm in}}\langle\Phi|V$ is a maximally entangled
state is $T$ unitary (after renormalization). If $V$ is chosen to be
a maximally (dis)entangling interaction, such as the controlled-sum
gate \mbox{$V|j,k\rangle = |j,(j+k)\mod N\rangle$}, then from
Eq. (\ref{eq:TU}), one has
\begin{equation}
T = \sum_n \frac{1}{N}|0\rangle\langle n|,
\label{eq:Tnon}
\end{equation}
i.e., the state of the outgoing radiation is $|0\rangle_{\rm out}$,
irrespective of the incoming state $|m\rangle$ of the collapsing body.
Interestingly, since the final state of the radiation is a fixed pure
state $|0\rangle_{\rm out}$, predictability is not lost. Such a
nonunitary black hole evaporation would serve as a novel `quantum
deletion' mechanism \cite{srienv}.
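The deletion map in Eq.~(\ref{eq:Tnon}) can be checked in the same
way. The following sketch is again only illustrative; representing
$V$ as a four-index tensor on $H_M \otimes H_{\rm in}$ is a
convenience we introduce here.
\begin{verbatim}
# Numerical check that the controlled-sum gate deletes the input state.
import numpy as np

N = 4
Phi = np.eye(N) / np.sqrt(N)

# Controlled-sum V|j,k> = |j,(j+k) mod N>, stored as V[j', k', j, k].
V = np.zeros((N, N, N, N))
for j in range(N):
    for k in range(N):
        V[j, (j + k) % N, j, k] = 1.0

# T[o, m] = sum_{a,b,c} conj(Phi[a, b]) V[a, b, m, c] Phi[c, o].
T = np.einsum('ab,abmc,co->om', Phi.conj(), V, Phi)

expected = np.zeros((N, N))
expected[0, :] = 1.0 / N          # T = (1/N) sum_n |0><n|
assert np.allclose(T, expected)
\end{verbatim}
Every input is mapped to the fixed state $|0\rangle_{\rm out}$, in
agreement with Eq.~(\ref{eq:Tnon}).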
As before, to apply this picture to a semiclassical theory of black
hole evaporation in which $\eta$ does not happen, we consider for
concreteness a string-theory-like description of black hole
evaporation. In Bob's perspective, the information about $M$ does not
pass through the horizon, but is somehow disrupted and destroyed at
the heated membrane. As a result, the Hawking radiation is a truly
thermal mixture. In Alice's perspective, the information passes
through the horizon, and remains inside until the final state
projection. The interaction $U$ impairs the fidelity of the
`teleportation' of the inside information into the Hawking radiation,
thereby destroying the information. Thus, an unambiguous future
eventually appears, providing a late-time resolution to the
localization problem.
For a reason similar to that in the preceding scenario (b), the
imposition of consistency brings G\"odel-like incompleteness
corresponding to the fact that in Bob's perspective the origin of the
process represented by the operator $T$ in Eq. (\ref{eq:Tnon}) is
indeterminate. This is because the actions of $U$ and $V$, as well as
the final state projection, happen behind the horizon, a region
inaccessible to the infalling object in his perspective.\\
\section{Towards resolving BHIP via avoidance of self-reference
\label{sec:avsf}}
Finally, under option (C), we first consider the possibility that
departure from standard quantum theory can help avoid the
self-reference that leads to the BHIP localization problem. A naive
way to do so would be to somehow `turn off' Hawking radiation. This
would prevent the evaporation of the black hole, and thus ensure that
an {\em evaporating} black hole does not exist in the theory, with the
event $\epsilon$ lying eternally to Bob's future, as in classical
general relativity. One problem with this approach is that it
requires the new physics to apply at the horizon, where the
semiclassical description of spacetime is expected to be reasonably
valid. This may not be insurmountable, since the suppression of
Hawking radiation would hardly be noticeable. Another problem is that
of giving a covariant specification of a condition that forbids
pair-production near the horizon, given that there is no (necessarily)
strong curvature or other coordinate-invariant manifestation of the
event horizon. A breakdown in covariance might be one price to pay.
A further possibility under option (C) is that standard general
relativity is inaccurate in the classical domain. We will consider an
extreme realization of this option. Since the relativity of spacetime
is essential to the localization problem of BHIP, the paradox vanishes
if space and time are not relativistic, but are absolute, somewhat in
the sense of the philosopher Immanuel Kant \cite{kant}. Most
physicists would probably consider this approach unwarranted, but
given the seriousness of BHIP, we think it worth at least a passing
mention.
In his theory of transcendental idealism, Kant maintained that time
and space are pure intuitions and {\em a priori} forms of intuition.
Considered from the empirical perspective, they form the absolute
context to objects in experience, i.e., phenomenal objects open to
scientific study. In this respect, his view of space and time is
Newtonian. However, the absoluteness is epistemological rather than
ontological in the sense that space and time are not objects of
perception, and do not exist for objects in themselves. Considered
from the transcendental perspective, space and time are pure, and
exist subjectively as conditions of knowledge, i.e., as cognitive
structuring imposed on perception by the mind \cite{sripen}. Kant
also maintained that the axioms of Euclidean geometry were {\em
synthetic} and known {\em a priori}. The former qualification means
that they are not true in any logically necessary way, and could be
denied without contradiction, as in non-Euclidean geometries.
Nevertheless the latter qualification indicates that knowledge of the
axioms of geometry precedes our experience of objects, depending only
on our pure intuition (imaginative visualization) of space and time.
Strictly speaking, this description applies to perceptual space rather
than physical space, but Kant may not have intended them to be
different.
It turns out that the proposition of absolute space and time is not as
difficult to implement as may at first seem. For example, it is known
that the special theory of relativity can apparently be reproduced in
Newtonian spacetime by assuming Lorentz length contraction of metric rods
and slowing down of clocks moving with respect to a putative absolute
rest frame. In a model of gravitation in absolute space and time, all
events in spacetime, not merely causally connected ones, would be
assumed to possess an absolute chronological ordering, with each event
being assigned a unique spacetime point. Thus time paradoxes like the
BHIP localization problem are automatically forbidden. For example,
in Ref. \cite{schm02}, a detailed model of this kind has been
proposed, in which black holes are replaced by stable frozen objects
of the type discussed in Section \ref{sec:ever}. Not producing
Hawking radiation, they do not evaporate, which eliminates BHIP.
\section{Conclusions \label{sec:konklu}}
Various attempts have been made to resolve BHIP, mostly aimed at
understanding how information may be preserved during black hole
evaporation. Here we focussed on the problem of information
localization in BHIP, and argued that it signals an inconsistent
self-reference in semiclassical gravity. This may be regarded as
evidence of the inadequacy of the semiclassical treatment of black
hole evolution, or that standard quantum mechanics and general
relativity are incompatible. To restore consistency, we must either
avert self-reference by modifying one or both of the latter theories,
or introduce (new) physics that imparts to semiclassical gravity the
incompleteness that would correspond to imposing consistency. Various
scenarios have been discussed under this rubric.
\section{Introduction}
Causal inference is impossible without assumptions, and the predominant assumption to facilitate causal inference is unconfoundedness.
Unconfoundedness goes under other names, such as ignorability and causal exchangeability, but they all describe the setting where treatment is independent of potential outcomes, perhaps after conditioning on covariates.
A common assumption is that randomization ensures that unconfoundedness holds, and randomization is a common way to motivate the unconfoundedness assumption.
This idea is prevalent throughout most subfields of causal inference, as the following quotes illustrate.\footnote{Some of these quotes have been mildly edited for clarity, primarily by substituting mathematical notation with words.}
\citet{Angrist2009Mostly} write ``Random assignment of treatment solves the selection problem because random assignment makes treatment independent of potential outcomes.''
\citet{Wooldridge2010Econometric} writes ``Suppose that the treatment is statistically independent of potential outcomes, as would occur when treatment is randomized across agents.''
\citet{Pearl2018Book} write ``Randomization eliminates confounder bias.''
Finally, \citet{Hernan2020Causal} write ``Randomization is so highly valued because it is expected to produce exchangeability.''
The purpose of this paper is to highlight that randomization does not imply unconfoundedness and that unconfoundedness does not imply that treatment was randomized.
Randomization does allow investigators to estimate treatment effects without systematic errors, but this is true only when they have sufficient knowledge about the assignment mechanism.
The fact that treatment was randomized does not, by itself, ensure that we can estimate treatment effects accurately; knowledge about \emph{how} it was randomized is crucial.
Indeed, provided that the assignment mechanism is randomized and known, inference is possible regardless of whether unconfoundedness holds.
This is useful because randomized assignment mechanisms that are confounded may improve precision over mechanisms that are unconfounded.
Parts of what will be said in this paper are already known by some people working on causal inference.
Presumably, this group includes some of the authors quoted above.
However, practitioners appear to not fully appreciate the fact that randomization and unconfoundedness are distinct concepts.
The confusion likely stems from the fact that some of the most popular assignment mechanisms happen to ensure that unconfoundedness holds, and the term ``randomization'' is often used as shorthand for this subset of mechanisms.
As a consequence, practitioners have been led to believe that any type of randomization will produce unconfoundedness, and they estimate their treatment effects accordingly.
This confusion provides the motivation for the discussion in this paper.
\section{Preliminaries}
A few stylized examples will suffice to make the points of this paper, but the points apply more widely.
Consider an infinite population of units indexed by the unit interval.
Each unit $i$ has two potential outcomes $y_i(1), y_i(0)$, two covariates $x_i, u_i$, and a treatment indicator $w_i$.
All these variables are binary.
The usual assumptions are made on the potential outcomes to ensure that they are well-defined, including no interference and no hidden version of treatment.
The population is such that all units have $y_i(0) = 0$.
The population is then equally divided among the eight possible combinations of values of $y_i(1)$, $x_i$ and $u_i$.
For example, $12.5\%$ of the units have $(y_i(1), u_i, x_i) = (1, 0, 1)$, and so on.
Treatment $w_i$ is potentially randomly assigned in the population, and we will discuss its assignment in detail later.
We draw a random sample of $n$ units from the population uniformly and independently.
The sample observations are indexed by the set $\braces{1, \dotsc, n}$.
To differentiate between the units in the population and in the sample, we will denote variables pertaining to sample observations with capital letters.
That is, $Y_i(1)$, $Y_i(0)$, $X_i$, $U_i$ and $W_i$ are the variables for the $i$th sample observation, which has no connection to unit $i$ in the population.
The index $i$ does not carry any information in the sample, so it will be dropped when convenient.
But the index could potentially carry information in the population, so it will never be dropped when referring to population variables.
We do not observe all variables in the sample.
We observe $Y_i$, $X_i$ and $W_i$, where $Y_i = W_i Y_i(1) + (1 - W_i) Y_i(0)$ is the realized outcome.
In other words, one of the potential outcomes and the covariate $U_i$ are unobserved.
We are now ready to formally define the relevant concepts.
\begin{definition}\label{def:randomized}
A study is \emph{randomized} if $\gamma < \Pr{w_i = 1} < 1 - \gamma$ for all units $i$ in the population and some $\gamma > 0$.
\end{definition}
\begin{definition}
A \emph{treatment probability} is the probability that an individual unit $i$ in the population is treated $\Pr{w_i = 1}$.
\end{definition}
\begin{definition}\label{def:unconfounded}
A study is \emph{unconditionally unconfounded} if the potential outcomes are independent of treatment assignment in the sample: $(Y(1), Y(0)) \protect\mathpalette{\protect\@indep}{\perp} W$. A study is \emph{conditionally unconfounded} if potential outcomes are conditionally independent of treatment assignment given observed covariates: $(Y(1), Y(0)) \protect\mathpalette{\protect\@indep}{\perp} W \mid X$.
\end{definition}
\begin{definition}\label{def:p-score}
A \emph{propensity score} is the probability that a sample observation with a certain covariate value is treated $\Pr{W = 1 \given X = x}$.
\end{definition}
Note that probability operators on population variables, such as $\Pr{w_i = 1}$, do not involve sampling variability.
However, probability operators on sample variables, such as $\Pr{W_i = 1}$, involve variability both from treatment assignment in the population and sampling.
Therefore, $\Pr{w_i = 1}$ and $\Pr{W_i = 1}$ will generally not take the same value.
Unconfoundedness is here defined as full independence between both potential outcomes and treatment.
In many cases, this can be weakened to independence of conditional expectation functions.
The arguments in this paper are unaffected by such a weakening.
The current setup has treatment assignment taking place in the population before sampling.
This was chosen to conform with the usual i.i.d.\ setting.
An alternative setup is to first sample units and then assign treatment in the sample.
As long as treatment is assigned independently between units, which is the case for all assignment mechanisms considered here, the difference between assignment in the population or sample is immaterial.
For more intricate assignment mechanisms that induce dependence between units' treatments, the difference can be important, but such an investigation is beyond the scope of this paper.
Nevertheless, a similar argument would still apply.
\section{Confounded random assignment}\label{sec:conf-rand}
Consider an assignment mechanism in the population that assigns treatment independently at random with the following treatment probabilities:
\begin{equation}
\Pr{w_i = 1} = \frac{3 - y_i(1)}{4}.\label{eq:conf-rand-prob}
\end{equation}
That is, a unit with $y_i(1) = 0$ will have a $75\%$ probability of being treated, and a unit with $y_i(1) = 1$ will have a $50\%$ probability.
Clearly, this satisfies Definition~\ref{def:randomized}, so the study is randomized.
But this study is not unconfounded.
To see this, note that unconfoundedness requires that $\Pr{Y(1) = 1 \given W = 1}$ is the same as $\Pr{Y(1) = 1 \given W = 0}$.
However, by Bayes' theorem, the probability $\Pr{Y(1) = 1 \given W = w}$ is equal to
\begin{equation}
\frac{\Pr{W = w \given Y(1) = 1} \Pr{Y(1) = 1}}{\Pr{W = w}},
\end{equation}
which in turn is equal to $2 / (3 + 2w)$.
In other words, a treated sample observation has a $40\%$ probability of having $Y(1) = 1$, while an untreated sample observation has a $66.7\%$ probability of having $Y(1) = 1$.
Clearly, this does not satisfy unconditional unconfoundedness.
Conditioning on $X$ does not change matters, and the study remains confounded.
This setting also highlights that a propensity score is different from a treatment probability.
As given by Definition~\ref{def:p-score}, the propensity score is $\Pr{W = 1 \given X = x}$.
Using the law of total probability and the fact that $Y(1)$ and $X$ are independent, we can write the score as
\begin{multline}
\Pr{Y(1) = 1} \Pr{W = 1 \given Y(1) = 1}
\\
+ \Pr{Y(1) = 0} \Pr{W = 1 \given Y(1) = 0},
\end{multline}
which evaluates to $5/8$.
Equation~\eqref{eq:conf-rand-prob} tells us that the treatment probabilities are either $1/2$ or $3/4$, but they are never $5/8$.
Clearly, propensity scores are not the same as treatment probabilities.
Standard estimation techniques will not accurately estimate the treatment effect in this setting.
The average treatment effect $\E{Y(1) - Y(0)}$ is here $1/2$, but the difference-in-means estimator,
\begin{equation}
\frac{\sum_{i=1}^n W_iY_i}{\sum_{i=1}^n W_i} - \frac{\sum_{i=1}^n (1-W_i)Y_i}{\sum_{i=1}^n (1-W_i)}, \label{eq:dif-est}
\end{equation}
will concentrate around the value $2/5$.
Estimators that make covariate adjustments will not change matters.
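A short simulation makes the bias concrete. The sketch below is not
part of the formal argument; the sample size and seed are arbitrary.
It draws $Y(1)$ as Bernoulli$(1/2)$, assigns treatment with the
probabilities in Equation~\eqref{eq:conf-rand-prob}, and evaluates the
difference-in-means estimator.
\begin{verbatim}
# Randomized but confounded assignment: Pr(w = 1) = (3 - y(1))/4.
import numpy as np

rng = np.random.default_rng(1)
n = 10**6
y1 = rng.integers(0, 2, size=n)      # Y(1) ~ Bernoulli(1/2); Y(0) = 0
p = (3 - y1) / 4                     # treatment probabilities
w = rng.random(n) < p                # independent randomized assignment
y = np.where(w, y1, 0)               # realized outcomes

print(y[w].mean() - y[~w].mean())    # about 0.4, not the true ATE of 0.5
\end{verbatim}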
\section{Unconfounded deterministic assignment}
Consider an assignment mechanism that assigns treatment so that $w_i = 1$ for all units with $u_i = 1$, and $w_i = 0$ for all units with $u_i = 0$.
That is, we have $w_i = u_i$ for all units in the population.
Definition~\ref{def:randomized} is not satisfied here, so this study is not randomized.
All treatment probabilities are either zero or one, meaning that we would not observe any variability in the treatments assigned to the units in the population upon repeated draws from the assignment mechanism.
This is sometimes referred to as deterministic assignment or lack of positivity.
The consequence is that conventional estimation methods for randomized experiments that use information about treatment probabilities cannot be used in this setting \citep[see, e.g.,][]{Aronow2013Class}.
Still, this study is unconfounded.
Because $Y(1)$ is binary and $Y(0)$ is zero for all units, it is enough to show that $\Pr{Y(1) = 1 \given W = w}$ is constant in $w$ to prove unconditional unconfoundedness.
Because $W = U$ with probability one, we can consider the conditional probability of $Y(1)$ given $U$ instead.
Recall that half of the units with $U = 1$ had $Y(1) = 1$, and half of the units with $U = 0$ also had $Y(1) = 1$.
Hence, irrespective of $w$, we have
\begin{equation}
\Pr{Y(1) = 1 \given W = w} = 1/2.
\end{equation}
The conclusion is that the study is unconfounded.
Furthermore, while all treatment probabilities are either zero or one, both the conditional and unconditional propensity scores are one half:
\begin{equation}
\Pr{W = 1 \given X = x} = \Pr{W = 1} = 1/2,
\end{equation}
for all $x$ in the support of $X$.
Hence, the usual overlap condition, which stipulates that the propensity score is bounded away from zero and one, is satisfied.
This shows that we can have overlap without positivity.
What this means is that the difference-in-means estimator in Equation~\eqref{eq:dif-est} is unbiased for the average treatment effect.
The only complication is when we happen to draw a sample that contains only treated units or only control units.
However, the probability of that event diminishes exponentially in the sample size, so it can safely be ignored as long as the sample is not very small.
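The claim is easy to illustrate. In the sketch below, which is purely
illustrative, $U$ and $Y(1)$ are independent Bernoulli$(1/2)$ draws,
matching the balanced population described above, and $W = U$
deterministically.
\begin{verbatim}
# Deterministic assignment w = u: overlap holds but positivity fails.
import numpy as np

rng = np.random.default_rng(2)
n = 10**5
u = rng.integers(0, 2, size=n)       # sampled U; equals W with probability one
y1 = rng.integers(0, 2, size=n)      # Y(1) independent of U in this population
w = u
y = np.where(w == 1, y1, 0)

print(y[w == 1].mean() - y[w == 0].mean())   # about 1/2, the true ATE
\end{verbatim}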
Some readers might question the practical relevance of this example.
Unconfoundedness rests on a knife edge here: exactly half of the units in each stratum of $y_i(1)$ have $u_i = 1$.
While it is unlikely that naturally occurring populations can sustain balancing acts like this, the purpose of the current example is to illustrate that it is possible, at least in principle, to have a deterministic assignment mechanism that is unconfounded.
In applied research, skepticism towards deterministic assignment mechanisms is sensible.
\section{Unconfounded random assignment}\label{sec:unconf-rand}
There are some randomized assignment mechanisms that do produce unconfoundedness.
One such mechanism is when the treatment probabilities are constant over the units in the population.
That is, $\Pr{w_i = 1} = p$ for all units in the population for some $p \in (0, 1)$.
In this case, the study is unconditionally unconfounded, and the propensity score equals the treatment probability $p$.
Presumably, most of the authors cited in the introduction had an assignment mechanism like this in mind.
A slightly more intricate assignment mechanism sets the unit-level treatment probability according to some function depending on the observable covariates.
That is, we have $\Pr{w_i = 1} = f(x_i)$ for all units in the population for some function $f$ bounded away from zero and one.
In this case, the study is unconfounded conditional on $X$, and the propensity score equals the function deciding treatment probabilities:
\begin{equation}
\Pr{W = 1 \given X = x} = f(x),
\end{equation}
for all $x$ in the support of $X$.
Of course, it is here important that the function deciding treatment probabilities only depends on observed covariates; the study might not be unconfounded if it depends on the unobserved covariate $u_i$.
While these assignment mechanisms ensure unconfoundedness, additional restrictions are required to ensure that we can estimate the treatment effect precisely.
In particular, nothing has yet been said about the joint assignment probabilities, but they are critical for the behavior of any estimator.
As an example, consider an assignment mechanism that flips one fair coin and assigns either all units to treatment or all units to control depending on the outcome of the coin flip.
Here, the unit-level treatment probabilities are all $\Pr{w_i = 1} = 1/2$, and the study is unconfounded.
Furthermore, overlap holds because both the unconditional and conditional propensity scores are $1/2$.
The issue is that we never observe a sample containing both treated and control units, because the population only ever contains units from one of the treatment groups.
We might have an unbiased estimator of the treatment effect here, but its precision will not noticeably improve with the sample size.
A simple solution is to require that the assignment mechanism is such that treatment is independently assigned in the population.
Provided that the propensity score is known or can be estimated well, an assignment mechanism like this would ensure that the treatment effect can be estimated with precision in large samples.
This is what \citet{Imbens2015Causal} refer to as a ``regular assignment mechanism,'' but there are many commonly occurring assignment mechanisms that do not fit this mold.
\section{Known assignment mechanism}
The fact that randomization does not imply unconfoundedness may seem to lessen the value of randomization, and indeed, it does.
As the example in Section~\ref{sec:conf-rand} shows, randomization alone is not very helpful.
It is when we have sufficient knowledge about the assignment mechanism that randomization shines.
To show this, let $p_i = \Pr{w_i = 1}$ denote the treatment probability for unit $i$ in the population, and let $P_i$ be the treatment probability of the $i$th sample observation.
Note that $P_i$ is not the same as the unconditional propensity score $\Pr{W_i = 1}$, nor is it the same as the conditional propensity score $\Pr{W_i = 1 \given X_i}$, although, as noted above, they sometimes coincide.
If we observe $P_i$, and it is bounded away from zero and one, we can use the following inverse probability weighting-type estimator to estimate the average treatment effect:
\begin{equation}
\frac{1}{n} \sum_{i=1}^n \frac{W_iY_i}{P_i} - \frac{1}{n} \sum_{i=1}^n \frac{(1-W_i)Y_i}{1-P_i}. \label{eq:ht-est}
\end{equation}
This is sometimes called the Horvitz--Thompson estimator after \citet{Horvitz1952Generalization}, but it was first described by \citet{Narain1951}.
The power of a randomized assignment mechanism is that it ensures that this estimator is unbiased for the average treatment effect no matter how the treatment probabilities were decided upon.
This is true even if the treatment probabilities are strongly, or even perfectly, correlated with the potential outcomes.
This can be shown by an application of the law of iterated expectation:
\begin{equation}
\E[\bigg]{\frac{WY}{P}}
= \E[\bigg]{ \frac{Y(1)}{P} \E[\big]{W \given P, Y(1)} }
= \E{ Y(1) },
\end{equation}
where we used $\E{W \given P, Y(1)} = P$.
Further restrictions on the assignment mechanism are needed to ensure concentration, but they are fairly mild.
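The following sketch (ours, and purely illustrative) applies the
estimator in Equation~\eqref{eq:ht-est} to the confounded mechanism of
Section~\ref{sec:conf-rand}, where $P_i = (3 - Y_i(1))/4$ is known.
\begin{verbatim}
# Inverse probability weighting with known treatment probabilities.
import numpy as np

rng = np.random.default_rng(3)
n = 10**6
y1 = rng.integers(0, 2, size=n)      # Y(1) ~ Bernoulli(1/2); Y(0) = 0
P = (3 - y1) / 4                     # known treatment probabilities
w = rng.random(n) < P
y = np.where(w, y1, 0)

tau = np.mean(w * y / P) - np.mean((1 - w) * y / (1 - P))
print(tau)                           # about 0.5, the true ATE
\end{verbatim}
The estimate concentrates around the true average treatment effect
$1/2$ even though the treatment probabilities are perfectly correlated
with $Y(1)$, in contrast with the difference-in-means estimator.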
Inverse probability weighting-type estimators can also be used with propensity scores, in which case we would replace $P_i$ in Equation~\eqref{eq:ht-est} with $\Pr{W_i = 1 \given X_i}$, or an estimate thereof.
The two estimators appear similar, but the similarities are deceptive.
Provided that $0 < p_i < 1$ for all units, inverse probability weighting using $P$ will be unbiased for the treatment effect no matter how treatment otherwise was assigned.
If we observe $P$, we do not need to impose any restriction on the assignment mechanism, nor do we need any more information about it.
However, inverse probability weighting using $\Pr{W_i = 1 \given X_i}$ will be successful only when conditional unconfoundedness given $X$ holds.
That is, propensity score weighting requires a specific type of assignment mechanism, and its applicability is therefore more limited.
In a sense, $P$ acts as a universal control variable, always providing the information we need to make credible causal inferences no matter what the assignment mechanism happens to be.
One might be tempted to interpret $P$ as just another covariate and say that the study is unconfounded conditional on it.
However, this perspective misses the critical point that $P$ does not primarily provide information about the units, as ordinary covariates do.
Instead, it provides information directly about the assignment mechanism.
Some readers may find this an interesting curiosity but ask about the practical relevance of the result.
Why would one ever pick an assignment mechanism other than the ones described in Section~\ref{sec:unconf-rand}?
One reason is that investigators may not be in control of treatment assignment.
There could be budgetary, ethical or practical considerations that make an unconfounded mechanism impossible to implement.
There are also studies that leverage naturally occurring assignment mechanisms that are known to be randomized, but are completely beyond the control of the investigator.
Another reason is that assignment mechanisms that do not produce unconfoundedness can potentially perform better than unconfounded mechanisms.
In particular, precision can often be improved by inducing exactly the type of correlation between treatment and potential outcomes that renders the study confounded.
We will consider a slightly different stylized setting to illustrate this.
Such a stylized setting is not necessary to make these points, but it greatly eases the exposition.
Suppose that $y_i(0) = 0$ and $0 < y_i(1) < 2 \E{Y(1)}$ for all units in the population.
We will compare two assignment mechanisms.
The first mechanism sets $p_i$ to be proportional to the potential outcome under treatment.
In particular, it sets $p_i = y_i(1) / 2 \E{Y(1)}$.
The second mechanism sets all treatment probabilities to one half: $p_i = 1/2$.
For both mechanisms, the assignments are independent between units.
Note that both mechanisms ensure that half of the units are treated in expectation, $\E{P} = 1/2$, so any differences cannot be explained by having access to more or less observations of treated units.
The difference is instead that the first mechanism induces a strong correlation between treatment and the potential outcome, rendering it confounded, while the second mechanism makes the study unconfounded.
The estimator given in Equation~\eqref{eq:ht-est} is unbiased under both assignment mechanisms, so we will focus on its precision.
Because the sample is drawn independently and treatment is assigned independently, the terms of the estimator are independent.
Hence, the normalized variance under both mechanisms is
\begin{equation}
n \Var[\big]{\widehat\tau} = \Var[\bigg]{ \frac{WY}{P} }.
\end{equation}
This is, however, where the similarities end.
For the first mechanism, we have $P = Y(1) / 2 \E{Y(1)}$ with probability one, so
\begin{equation}
\Var[\bigg]{ \frac{WY}{P} } = 4 \Var{ W } \braces[\big]{\E{Y(1)}}^2.
\end{equation}
We have $\Var{ W } = 1/4$ because $\E{P} = 1/2$, so the normalized variance is $n \Varsub[\big]{1}{\widehat\tau} = \braces[\big]{\E{Y(1)}}^2$ under the first mechanism.
The derivation is somewhat more intricate for the second mechanism.
First, use the law of total variance to write the variance as
\begin{equation}
\Var[\Big]{ 2 Y(1) \E{ W } } + \E[\Big]{ 4 (Y(1))^2 \Var{ W } }.
\end{equation}
Because $P$ is constant at one half here, we have $\E{W} = 1/2$ and $\Var{W} = 1/4$, and the normalized variance under the second mechanism is
\begin{equation}
n \Varsub[\big]{2}{\widehat\tau} = \Var[\big]{ Y(1) } + \E[\big]{ (Y(1))^2 }.
\end{equation}
The difference in variances for the two assignment mechanisms is therefore
\begin{equation}
n \Varsub[\big]{2}{\widehat\tau} - n \Varsub[\big]{1}{\widehat\tau} = 2 \Var[\big]{ Y(1) }.
\end{equation}
The variance is always greater under the second mechanism, and the difference can be substantial.
If our aim is to maximize precision, we should pick the confounded assignment mechanism.
The difference is even starker for other estimators or sampling procedures.
All uncertainty under the first mechanism can be ascribed to variability in the number of treated units in the sample, something which the estimator given in Equation~\eqref{eq:ht-est} does not account for.
If the sampling procedure is modified so that exactly half of the sampled units are treated, the variance is zero under the first mechanism.
That is, we learn the treatment effect without error.
The normalized variance is $2 \Var{ Y(1) }$ under the second mechanism, which is smaller than before, but typically larger than zero.
An estimator that adjusts for the number of sampled treated units will have a similar effect.
Examples of such estimators are described by \citet{Hajek1971} and \citet{Imbens2004Nonparametric}.
Of course, it is impossible to implement the first assignment mechanism in practice because it would require perfect knowledge of the potential outcomes.
But this is beside the point.
The point is instead to show that introducing correlation between the potential outcomes and treatment can be enormously beneficial.
While this particular mechanism is not possible to implement, we can hope to emulate it, and thereby possibly reap some of its benefits.
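A simulation comparing the two mechanisms is sketched below. The
population is hypothetical: $Y(1)$ is uniform on $(0.1, 0.9)$, which
satisfies $0 < y_i(1) < 2 \E{Y(1)}$ with $\E{Y(1)} = 1/2$, and
$Y(0) = 0$, so only the treated term of Equation~\eqref{eq:ht-est} is
nonzero.
\begin{verbatim}
# Compare the normalized variances of the two assignment mechanisms.
import numpy as np

rng = np.random.default_rng(4)
n, reps = 1000, 2000

def ht_estimates(p_fun):
    y1 = rng.uniform(0.1, 0.9, size=(reps, n))   # E[Y(1)] = 1/2
    P = p_fun(y1)
    w = rng.random((reps, n)) < P
    return np.mean(w * y1 / P, axis=1)           # Y(0) = 0 drops the control term

tau1 = ht_estimates(lambda y1: y1)               # p_i = y_i(1)/(2 E[Y(1)]) = y_i(1)
tau2 = ht_estimates(lambda y1: np.full_like(y1, 0.5))

print(n * tau1.var(), n * tau2.var())
# theory: (E[Y(1)])^2 = 0.25  versus  Var(Y(1)) + E[Y(1)^2] = 0.357 (approx.)
\end{verbatim}
The printed values are close to the theoretical normalized variances,
so the confounded mechanism is markedly more precise here.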
\section{Discussion}
At the heart of this paper is the insight that a study's sampling procedure is different from its treatment assignment mechanism.
Much of the causal inference literature treats them as one and the same.
This is a consequence of the practice of making i.i.d.\ assumptions by default.
The perspective that the actual characteristics of the sampling procedure and assignment mechanism should be taken into account in analysis is sometimes referred to as ``design-based.''
The design-based perspective has a long history in survey sampling, as described by for example \citet{Saerndal1992Model}.
The perspective is increasingly popular in causal inference \citep[see, e.g.,][]{Freedman2008Regression,Aronow2013Class,Lin2013Agnostic,Imbens2015Causal}.
One aim of this paper is to illustrate the usefulness, and often necessity, of this perspective.
The design-based perspective can be interpreted as an extension of the frequentist approach to statistical inference.
Probability statements are here not only seen as statements about limits of relative frequencies in an infinite sequence of trials, but investigators are also expected to explicitly specify what device generated those trials.
An abstract stream of observations from an unknown source will not cut it.
Therefore, investigators must ask from where the stochasticity in their studies comes, and analyze the data accordingly.
Doing so will avoid mistakes, including assuming that randomization always provides unconfoundedness.
Making the data generating process concrete tends to also make inferences clearer and more relevant.
Points related to the ones made here have recently been made by \citet{Abadie2020Sampling} and by \citet{Titiunik2021Natural}.
The first set of authors also highlights the difference between sampling and treatment assignment mechanism, but focuses on the precision of regression estimators when variability from treatment assignment is explicitly taken into account.
The discussion by \citet{Titiunik2021Natural} is largely parallel to the one in this paper, highlighting that random assignment does not imply unconfoundedness in the context of natural experiments.\footnote{I thank Peter Aronow for making me aware of the chapter by \citet{Titiunik2021Natural} shortly before the start of the workshop.}
These discussions complement the current paper, and interested readers will find them valuable.
\section*{Acknowledgements}
I thank Peter Aronow, Josh Kalla, Winston Lin and Jas Sekhon for helpful comments.
\bibliographystyle{icml2021}
\section{Introduction}
\label{sec:introduction}
Markov chain Monte Carlo (MCMC) algorithms are used to estimate expectations with respect to a probability distribution when independent sampling is difficult. Typically, interest is in estimating a vector of quantities. However, analysis of MCMC output routinely focuses on inference about complicated joint distributions only through their marginals. This, despite the fact that the assumption of independence across components holds only rarely in settings where MCMC is relevant. Thus standard univariate convergence diagnostics, sequential stopping rules for termination, effective sample size definitions, and confidence intervals all lead to an incomplete understanding of the estimation process. We overcome the drawbacks of univariate analysis by developing a methodological framework for multivariate analysis of MCMC output.
Let $F$ be a distribution with support $\mathcal{X}$ and $g: \mathcal{X} \to \mathbb{R}^{p}$ be an $F$-integrable function such that $\theta := \text{E}_F g$ is of interest. If $\{X_t\}$ is an $F$-invariant Harris recurrent Markov chain, set $\{Y_t\}=\{g(X_t)\}$ and estimate $\theta$ with $\theta_n = n^{-1} \sum_{t=1}^{n} Y_t$ since $\theta_n \to \theta$, with probability 1, as $n \to \infty$. Finite sampling leads to an unknown \textit{Monte Carlo error}, $\theta_n - \theta$, estimating which is essential to assessing the quality of estimation. If for $\delta > 0$, $g$ has $2 + \delta$ moments under $F$ and $\{X_t\}$ is polynomially ergodic of order $m > (2 + \delta)/\delta$, an approximate sampling distribution for the Monte Carlo error is available via a Markov chain central limit theorem (CLT). That is, there exists a $p \times p$ positive definite matrix, $\Sigma$, such that as $n \to \infty$,
\begin{equation} \label{eq:multi_clt}
\sqrt{n}(\theta_n - \theta) \overset{d}{\to} N_p(0, \Sigma) \; .
\end{equation}
Thus the CLT describes asymptotic behavior of the Monte Carlo error and the strong law for $\theta_n$ ensures that large $n$ leads to a small Monte Carlo error. But, how large is large enough? This question has not been adequately addressed in the literature since current approaches are based on the univariate CLT
\begin{equation} \label{eq:uni_clt}
\sqrt{n}(\theta_{n,i} - \theta_{i}) \overset{d}{\to} N(0, \sigma_i^2) \; \text{ as } n \to \infty,
\end{equation}
where $\theta_{n,i}$ and $\theta_i$ are the $i$th components of $\theta_n$ and $\theta$ respectively and $\sigma_i^2$ is the $i$th diagonal element of $\Sigma$. Notice that a univariate approach ignores cross-correlation across components, leading to an inaccurate understanding of the estimation process.
Many output analysis tools that rely on \eqref{eq:uni_clt} have been developed for MCMC (see \cite{atch:2011}, \cite{atch:2016}, \cite{fleg:jone:2010}, \cite{fleg:gong:2015}, \cite{gelm:rubi:1992a}, \cite{gong:fleg:2015}, and \cite{jone:hara:caff:neat:2006}). To determine termination, \cite{jone:hara:caff:neat:2006} implemented the \textit{fixed-width sequential stopping rule} where simulation is terminated the first time the width of the confidence interval for each component is small. More formally, for a desired tolerance of $\epsilon_i$ for component $i$, the rule terminates simulation the first time after some $n^* \ge 0$ iterations, for all components
\begin{equation*} \label{eq:absolute fixed}
t_{*} \dfrac{\sigma_{n,i}}{\sqrt{n}} +n^{-1} \le \epsilon_i,
\end{equation*}
where $\sigma^{2}_{n,i}$ is a strongly consistent estimator of $\sigma_i^2$, and $t_{*}$ is an appropriate $t$-distribution quantile. The role of $n^*$ is to ensure a minimum simulation effort (as defined by the user) so as to avoid poor initial estimates of $\sigma_i^2$. This rule laid the foundation for termination based on quality of estimation rather than convergence of the Markov chain. As a consequence, estimation is reliable in the sense that if the procedure is repeated again, the estimates will not be vastly different \citep{fleg:hara:jone:2008}. However, implementing the fixed-width sequential stopping rule can be challenging since (a) careful analysis is required for choosing $\epsilon_i$ for each $\theta_{n,i}$ which can be tedious or even impossible for large $p$; (b) to ensure the right coverage probability, $t_*$ is chosen to account for multiple confidence intervals (often by using a Bonferroni correction). Thus when $p$ is even moderately large, these termination rules can be aggressively conservative leading to delayed termination; (c) simulation stops when each component satisfies the termination criterion; therefore, all cross-correlations are ignored and termination is governed by the slowest mixing components; and (d) it ignores correlation in the target distribution.
To overcome the drawbacks of the fixed-width sequential stopping rule, we propose the \textit{relative standard deviation fixed-volume sequential stopping rule} that differs from the \cite{jone:hara:caff:neat:2006} procedure in two fundamental ways; (a) it is motivated by the multivariate CLT in \eqref{eq:multi_clt} and not by the univariate CLT in \eqref{eq:uni_clt}; and (b) it terminates simulation not by the absolute size of the confidence region, but by its size relative to the inherent variability in the problem. Specifically, simulation stops when the Monte Carlo standard error is small compared to the variability in the target distribution. Naturally, an estimate of the Monte Carlo standard error is required and for now, we assume that $\Sigma$ can be estimated consistently. Later we will discuss procedures for estimating $\Sigma$. The relative standard deviation fixed-volume sequential stopping rule terminates the first time after some user-specified $n^* \ge 0$ iterations
\begin{equation}
\label{eq:intro_rule}
\text{Volume of Confidence Region}^{1/p} + n^{-1} < \epsilon |\Lambda_n|^{1/2p} \; ,
\end{equation}
where $\Lambda_n$ is the sample covariance matrix, $| \cdot |$ denotes determinant, and $\epsilon$ is the tolerance level. As in the univariate setting, the role of $n^*$ is to avoid premature termination due to early bad estimates of $\Sigma$ or $\Lambda$; we will say more about how to choose $n^*$ in Section~\ref{sec:multivariate_effective_sample_size}.
\cite{wilks:1932} defines the determinant of a covariance matrix as the \textit{generalized variance}. Thus, an equivalent interpretation of \eqref{eq:intro_rule} is that simulation is terminated when the generalized variance of the Monte Carlo error is small relative to the generalized variance of $g$ with respect to $F$; that is, a scaled estimate of $|\Sigma|$ is small compared to the estimate of $|\Lambda| = |\text{Var}_{F}(Y_1)|$. We call $|\Lambda|^{1/2p}$ the \textit{relative metric}. For $p = 1$, our choice of the relative metric reduces \eqref{eq:intro_rule} to the relative standard deviation fixed-width sequential stopping rule of \cite{fleg:gong:2015}.
We show that if the estimator for $\Sigma$ is strongly consistent, the stopping rule in \eqref{eq:intro_rule} is asymptotically valid, in that the confidence regions created at termination have the right coverage probability as $\epsilon \to 0$. Our result of asymptotic validity holds for a wide variety of relative metrics. A different choice of the relative metric, leads to a fundamentally different approach to termination. For example, if instead of choosing $|\Lambda|^{1/2p}$ as the relative metric, we choose a positive constant, then our work provides a multivariate generalization of the absolute-precision procedure considered by \cite{jone:hara:caff:neat:2006}.
Another standard way of terminating simulation is to stop when the number of effective samples for each component reaches a pre-specified lower bound (see \cite{atk:gray:drum:2008}, \cite{drum:ho:phill:ramb:2006}, \cite{gior:brod:jord:2015}, \cite{gong:fleg:2015}, and \cite{krus:2014} for a few examples). We focus on a multivariate study of effective sample size (ESS) since univariate treatment of ESS ignores cross-correlations across components, thus painting an inaccurate picture of the quality of the sample. To the best of our knowledge, a multivariate approach to ESS has not been studied in the literature. We define
\[ \text{ESS} = n \left(\dfrac{|\Lambda|}{|\Sigma|} \right)^{1/p}.\]
When there is no correlation in the Markov chain, $\Sigma = \Lambda$ and ESS $ = n$. Notice that our definition of ESS involves the ratio of generalized variances. This ratio also occurs in \eqref{eq:intro_rule}, which helps us arrive at a key result: terminating according to the relative standard deviation fixed-volume sequential stopping rule is asymptotically equivalent to terminating when the estimated ESS satisfies
\[\widehat{\text{ESS}} \geq W_{p, \alpha, \epsilon}, \]
where $W_{p, \alpha, \epsilon}$ can be calculated \textit{a priori} and is a function only of the dimension of the estimation problem, the level of confidence of the confidence regions, and the relative precision desired. Thus, not only do we show that terminating via ESS is a valid procedure, we also provide a theoretically valid, practical lower bound on the number of effective samples required.
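To make the procedure concrete, the following sketch estimates
$\Sigma$ with a simple non-overlapping batch means estimator (a
simplified version of the mBM estimator discussed below), computes
$\widehat{\text{ESS}}$, and compares it with a threshold of the form
$W_{p, \alpha, \epsilon}$. Two simplifying assumptions are ours: the
batch size $b = \lfloor n^{1/2} \rfloor$, and the use of a $\chi^2$
quantile in place of the Hotelling quantile, with the threshold
obtained by equating the volume of the confidence ellipsoid with the
right side of \eqref{eq:intro_rule}.
\begin{verbatim}
# Multivariate ESS and the corresponding termination check.
import numpy as np
from scipy.stats import chi2
from scipy.special import gammaln

def mbm_cov(Y):
    """Batch means estimate of Sigma from an (n x p) chain Y."""
    n, p = Y.shape
    b = int(np.sqrt(n))                       # assumed batch size b = n^(1/2)
    a = n // b                                # number of batches
    means = Y[:a * b].reshape(a, b, p).mean(axis=1)
    return b * np.cov(means, rowvar=False)

def multi_ess(Y):
    n, p = Y.shape
    _, ld_lam = np.linalg.slogdet(np.cov(Y, rowvar=False))
    _, ld_sig = np.linalg.slogdet(mbm_cov(Y))
    return n * np.exp((ld_lam - ld_sig) / p)  # n (|Lambda|/|Sigma|)^(1/p)

def ess_threshold(p, alpha=0.05, eps=0.05):
    # assumed form: 2^(2/p) pi (p Gamma(p/2))^(-2/p) chi2_{1-alpha,p} / eps^2
    log_c = (2 / p) * (np.log(2) - np.log(p) - gammaln(p / 2)) + np.log(np.pi)
    return np.exp(log_c) * chi2.ppf(1 - alpha, p) / eps**2
\end{verbatim}
Simulation would then terminate the first time after $n^*$ iterations
that \texttt{multi\_ess(Y)} exceeds
\texttt{ess\_threshold(p, alpha, eps)}.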
Recall that we require a strongly consistent estimator of $\Sigma$. Estimating $\Sigma$ is a difficult problem due to the serial correlation in the Markov chain.
\cite{vats:fleg:jones:2017} demonstrated strong consistency for a class of multivariate spectral variance estimators while \cite{dai:jone:2017} introduced multivariate initial sequence estimators and established their asymptotic validity. However, both estimators are expensive to calculate and do not scale well with either $p$ or $n$. Instead, we use the \textit{multivariate batch means} (mBM) estimator of $\Sigma$ which is significantly faster to compute (see Section \ref{sec:discussion}) and requires weaker moment conditions on $g$ for strong consistency. Our strong consistency result weakens the conditions required in \cite{jone:hara:caff:neat:2006} for the univariate batch means (uBM) estimator. In particular, we do not require a one-step minorization and only require polynomial ergodicity (as opposed to geometric ergodicity). The condition is fairly weak since often the existence of the Markov chain CLT itself is demonstrated via polynomial ergodicity or a stronger result (see \cite{jone:2004} for a review). Many Markov chains used in practice have been shown to be at least polynomially ergodic. See \cite{acos:hube:jone:2015}, \cite{doss:hobe:2010}, \cite{hobe:geye:1998}, \cite{jarn:robe:2002}, \cite{jarn:hans:2000}, \cite{john:jone:neat:2013}, \cite{john:jone:2015}, \cite{jone:robe:rose:2014}, \cite{jone:hobe:2004}, \cite{khare:hobe:2013}, \cite{marc:hobe:2004}, \cite{robe:pols:1994}, \cite{tan:jone:hobe:2013}, \cite{tan:hobe:2012}, \cite{vats:2016}, among many others.
The multivariate stopping rules terminate earlier than univariate methods since (a) termination is dictated by the joint behavior of the components of the Markov chain and not by the component that mixes the slowest, (b) using the inherent multivariate nature of the problem and acknowledging cross-correlations leads to a more realistic understanding of the estimation process, and (c) avoiding corrections for multiple testing gives considerably smaller confidence regions even in moderate $p$ problems. There are also cases where univariate methods \emph{cannot} be implemented due to large memory requirements. On the other hand, the multivariate methods are inexpensive relative to the sampling time for the Markov chain and terminate significantly earlier. We present one such example in Section \ref{sec:spatio_example} through a Bayesian dynamic spatial-temporal model.
The rest of the paper is organized as follows. In the sequel we present a motivating Bayesian logistic regression model. In Section \ref{sec:termination_rules} we formally introduce a general class of relative fixed-volume sequential termination rules. In Section \ref{sec:multivariate_effective_sample_size} we define ESS and provide a lower bound on the number of effective samples required for simulation. Our theoretical results in these sections require a strongly consistent estimator for $\Sigma$; a result we show for the mBM estimator in Section \ref{sec:multivariate_batch_means}. In Section \ref{sec:examples} we continue our implementation of the Bayesian logistic regression model and consider additional examples. We choose a vector autoregressive process of order 1, where the convergence rate of the process can be manipulated. Specifically, we construct the process in such a way that one component mixes slowly, while the others are fairly well behaved. Such behavior is often seen in hierarchical models with priors on the variance components. The next example is
that of a Bayesian lasso where the posterior is in 51 dimensions. We also implement our output analysis methods for a fairly complicated Bayesian dynamic spatial-temporal model. For this example we do not know if our assumptions on the process hold, thus demonstrating the situation users often find themselves in. We conclude with a discussion in Section \ref{sec:discussion}.
\subsection{An Illustrative Example}
\label{sub:illust_example}
For $i = 1, \dots, K$, let $Y_i$ be a binary response variable and $X_i = (x_{i1}, x_{i2}, \dots, x_{i5})$ be the observed predictors for the $i$th observation. Assume $\tau^2$ is known,
\begin{equation}\label{eq:logistic model}
Y_i | X_i, \beta \overset{ind}{\sim} \text{Bernoulli} \left( \dfrac{1}{1 + e^{-X_i \beta}} \right)\, ,~~~\text{ and } ~~~\beta \sim N_5(0, \tau^2 I_5)\; .
\end{equation}
This simple hierarchical model results in an intractable posterior, $F$ on $\mathbb{R}^5$. The dataset used is the \texttt{logit} dataset in the \texttt{mcmc} R package. The goal is to estimate the posterior mean of $\beta$, $\text{E}_F\beta$. Thus $g$ here is the identity function mapping to $\mathbb{R}^5$. We implement a random walk Metropolis-Hastings algorithm with a multivariate normal proposal distribution $N_5( \;\cdot \;, 0.35^2 I_5)$ where $I_5$ is the $5 \times 5$ identity matrix and the $0.35$ scaling approximates the optimal acceptance probability suggested by \cite{rob:gel:gilks}.
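A minimal sketch of this sampler is given below. The design matrix and
responses are simulated placeholders, since the actual \texttt{logit}
data ship with the \texttt{mcmc} R package, and $\tau = 2$ is an
assumed value; the sketch illustrates the algorithm rather than
reproducing the analysis.
\begin{verbatim}
# Random walk Metropolis-Hastings for the Bayesian logistic posterior.
import numpy as np

rng = np.random.default_rng(5)
K, p, tau = 100, 5, 2.0
X = rng.normal(size=(K, p))                       # hypothetical predictors
y = rng.random(K) < 1 / (1 + np.exp(-X.sum(axis=1)))  # hypothetical responses

def log_post(beta):
    xb = X @ beta
    return np.sum(y * xb - np.logaddexp(0.0, xb)) - beta @ beta / (2 * tau**2)

n_iter = 10**5
beta = rng.normal(scale=tau, size=p)              # starting draw from the prior
lp = log_post(beta)
chain = np.empty((n_iter, p))
for t in range(n_iter):
    prop = beta + 0.35 * rng.normal(size=p)       # N_5(beta, 0.35^2 I_5) proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis accept/reject
        beta, lp = prop, lp_prop
    chain[t] = beta

theta_n = chain.mean(axis=0)                      # estimate of E_F(beta)
\end{verbatim}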
We calculate the Monte Carlo estimate for $\text{E}_F \beta$ from an MCMC sample of size $10^5$. The starting value for $\beta$ is a random draw from the prior distribution. We use the mBM estimator described in Section \ref{sec:multivariate_batch_means} to estimate $\Sigma$. We also implement the uBM methods described in \cite{jone:hara:caff:neat:2006} to estimate $\sigma_i^2$, which captures the autocorrelation in each component while ignoring the cross-correlation. This cross-correlation is often significant as seen in Figure \ref{fig:blog_trace_acf}, and can only be captured by multivariate methods like mBM. In Figure \ref{fig:ellipse_conf_region} we present $90\%$ confidence regions created using mBM and uBM estimators for $\beta_1$ and $\beta_3$ (for the purpose of this figure, we set $p = 2$). This figure illustrates why multivariate methods are likely to outperform univariate methods. The confidence ellipse is the smallest volume region for a particular level of confidence. Thus, these confidence ellipses are likely to be preferred over other confidence regions.
\begin{figure}[h]
\begin{center}
\subfloat[]{
\includegraphics[width = 2.5in]{blog_acf_trace_13} \label{fig:blog_trace_acf}}
\subfloat[]{
\includegraphics[width = 2.5in]{blog_beta13} \label{fig:ellipse_conf_region}}
\caption{\footnotesize(a) ACF plot for $\beta_1$, cross-correlation plot between $\beta_1$ and $\beta_3$, and trace plots for $\beta_1$ and $\beta_3$. (b) Joint 90\% confidence region for $\beta_1$ and $\beta_3$. The ellipse is made using mBM, the dotted line using uncorrected uBM, and the dashed line using the uBM corrected by Bonferroni. The Monte Carlo sample size is $10^5$ for both plots.}
\end{center}
\end{figure}
To assess the confidence regions, we verify their coverage
probabilities over 1000 independent replications with Monte Carlo
sample sizes in $\{10^4, 10^5, 10^6\}$. Since the true posterior mean is unknown, we use $(0.5706, 0.7516, 1.0559, 0.4517, 0.6545)$ obtained by averaging over $10^9$ iterations as a proxy. For each of the 1000 replications, it was
noted whether the confidence region contained the true posterior mean. We also recorded the $p$th root of the volume of each confidence region. Table \ref{tab:blog_coverage} summarizes the
results. Note that though the uncorrected univariate methods produce
the smallest confidence regions, their coverage probabilities are far
from desirable. For a large enough Monte Carlo sample size, mBM
produces 90\% coverage probabilities with systematically lower
volume than uBM corrected with Bonferroni (uBM-Bonferroni).
\begin{table}[h]
\footnotesize
\caption{ \footnotesize \label{tab:blog_coverage}Volume to the $p$th root ($p=5$) and coverage probabilities for 90\% confidence regions constructed using mBM, uncorrected uBM, and uBM corrected by Bonferroni. Replications = 1000 and standard errors are indicated in parentheses.}
\begin{center}
\begin{tabular}{c|ccc}
\hline
$n$& mBM & uBM-Bonferroni & uBM \\
\hline
\multicolumn{4}{c}{Volume to the $p$th root} \\
\hline
1e4 & 0.062 \tiny{(7.94e-05)} & 0.066 \tiny{(9.23e-05)} & 0.046 \tiny{(6.48e-05)}\\
1e5 & 0.020 \tiny{(1.20e-05)} & 0.021 \tiny{(1.42e-05)} & 0.015 \tiny{(1.00e-05)}\\
1e6 & 0.006 \tiny{(1.70e-06)} & 0.007 \tiny{(2.30e-06)} & 0.005 \tiny{(1.60e-06)}\\
\hline
\multicolumn{4}{c}{Coverage Probabilities} \\
\hline
1e4 & 0.876 \tiny{(0.0104)} & 0.889 \tiny{(0.0099)} & 0.596 \tiny{(0.0155)}\\
1e5 & 0.880 \tiny{(0.0103)} & 0.910 \tiny{(0.0090)} & 0.578 \tiny{(0.0156)}\\
1e6 & 0.894 \tiny{(0.0097)} & 0.913 \tiny{(0.0094)} & 0.627 \tiny{(0.0153)}\\ \hline
\end{tabular}
\end{center}
\end{table}
Thus even simple MCMC problems produce complex
dependence structures within and across components of the
samples. Ignoring this structure leads to an incomplete
understanding of the estimation process. Not only do we gain more information about the Monte Carlo error using multivariate methods, but we also avoid using conservative Bonferroni methods.
\section{Termination Rules}
\label{sec:termination_rules}
We consider multivariate sequential termination rules that lead to asymptotically valid confidence regions. Let $T^2_{1-\alpha, p, q} $ denote the $1-\alpha$ quantile of a Hotelling's T-squared distribution with dimensionality parameter $p$ and degrees of freedom $q$. Throughout this section and the next, we assume $\Sigma_n$ is a strongly consistent estimator of $\Sigma$. A $100(1- \alpha) \%$ confidence region for $\theta$ is the set
\[
C_\alpha(n) = \left\{ \theta \in \mathbb{R}^p: n(\theta_n - \theta)^T \Sigma^{-1}_{n} (\theta_n - \theta) < T^2_{1-\alpha, p, q} \right\}\, ,
\]
where $q$ is determined by the choice of $\Sigma_n$. Then $C_\alpha(n)$ forms an ellipsoid in $p$ dimensions oriented along the directions of the eigenvectors of $\Sigma_{n}$. The volume of $C_{\alpha}(n)$ is
\begin{equation}
\label{eq:vol_conf}
\text{Vol} (C_\alpha(n)) = \dfrac{2\pi^{p/2} }{p \Gamma(p/2)} \left( \dfrac{T^2_{1-\alpha, p, q}}{n} \right)^{p/2} | \Sigma_{n} |^{1/2}\, .
\end{equation}
Since $p$ is fixed and $\Sigma_{n} \to \Sigma$ with probability 1, Vol($C_\alpha(n)$) $ \to 0$, with probability 1, as $n \to \infty$. If $\epsilon > 0 $ and $s(n)$ is a positive real valued function defined on the positive integers, then a fixed-volume sequential stopping rule terminates the simulation at the random time
\begin{equation}
\label{eq:glynn_whitt_rule}
T(\epsilon) = \inf \left \{n \geq 0: \text{Vol}(C_\alpha(n))^{1/p} + s(n) \leq \epsilon \right\}\, .
\end{equation}
\cite{glyn:whit:1992} provide conditions so that terminating at $T(\epsilon)$ yields
confidence regions that are asymptotically valid in that, as $\epsilon
\to 0, \Pr\left[\theta \in C_\alpha(T(\epsilon) ) \right] \to 1 -
\alpha$. In particular, they let $s(n)=\epsilon I(n < n^*) + n^{-1}$, which ensures the simulation does not terminate before $n^* \geq 0$ iterations. The sequential stopping rule \eqref{eq:glynn_whitt_rule} can be difficult to implement in practice since the choice of $\epsilon$ depends on the units of $\theta$ and has to be chosen carefully for every application. We present an alternative to \eqref{eq:glynn_whitt_rule} which can be used more naturally and which, as we will show, connects nicely to the idea of ESS.
Let $\| \cdot \|$ denote the Euclidean norm. Let $K(Y,p)>0$ be an attribute of the estimation process and suppose $K_n(Y,p) > 0$ is an estimator of $K(Y,p)$; for example, take $\|\theta\| = K(Y,p)$ and $\|\theta_n\| = K_n(Y,p)$. Set $s(n) = \epsilon K_n(Y,p)I(n < n^*) + n^{-1}$ and define
\begin{equation*} \label{eq:universal_relative}
T^*(\epsilon) = \inf \left\{n \geq 0: \text{Vol}(C_\alpha(n))^{1/p} + s(n) \leq \epsilon K_n(Y,p) \right\}.
\end{equation*}
We call $K(Y,p)$ the relative metric. The following result establishes asymptotic validity of this termination rule. The proof is provided in the supplementary material.
\begin{theorem}
\label{thm:asymp_valid}
Let $g: \mathsf{X} \to {\mathbb R}^p$ be such that $\text{E}_F\|g\|^{2 + \delta} < \infty$ for some $\delta > 0$ and let $X$ be an $F$-invariant polynomially ergodic Markov chain of order $m > (1+ \epsilon_1)(1+ 2/\delta)$ for some $\epsilon_1 > 0$. If $K_n(Y,p) \to K(Y,p)$ with probability 1 and $\Sigma_n \to \Sigma$ with probability 1, as $n \to \infty$, then, as $\epsilon \to 0,\, T^*(\epsilon) \to \infty$ and $\Pr\left[\theta \in C_\alpha(T^*(\epsilon)) \right] \to 1 - \alpha$.
\end{theorem}
\begin{remark}
Theorem \ref{thm:asymp_valid} applies when $K(Y,p) = K_n(Y,p) = 1$. This choice of the relative metric leads to the absolute-precision fixed-volume sequential stopping rule; a multivariate generalization of the procedure considered by \cite{jone:hara:caff:neat:2006}.
\end{remark}
Suppose $K(Y,p)=|\Lambda|^{1/2p} =|\text{Var}_{F}Y_1|^{1/2p}$ and, with $\Lambda_n$ the usual sample covariance matrix for $\{Y_t\}$, set $K_n(Y,p)=|\Lambda_n|^{1/2p}$. Note that $\Lambda_n$ is positive definite as long as $n > p$, so $|\Lambda_n|^{1/2p} > 0$. Then $K_n(Y, p) \to K(Y,p)$, with probability 1, as $n\to \infty$, and $T^*(\epsilon)$ is the first time the variability in estimation (measured via the volume of the confidence region) is an $\epsilon$th fraction of the variability in the target distribution. The \textit{relative standard deviation fixed-volume sequential stopping rule} is formalized as terminating at the random time
\begin{equation}
\label{eq:rel_sd_rule}
T_{SD}(\epsilon) = \inf \left\{n \geq 0: \text{Vol}(C_\alpha(n))^{1/p} + \epsilon |\Lambda_n|^{1/2p} I(n < n^*) + n^{-1}\leq \epsilon |\Lambda_n|^{1/2p} \right\} \, .
\end{equation}
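In implementation terms, checking the defining inequality of \eqref{eq:rel_sd_rule} at a given sample size requires only two determinants and a Hotelling quantile, computed via $T^2_{1-\alpha, p, q} = pq\,F_{1-\alpha; p, q-p+1}/(q-p+1)$. A minimal R sketch follows, with $q$ supplied by the choice of $\Sigma_n$ (e.g.\ $q = a_n - p$ for the mBM estimator of Section \ref{sec:multivariate_batch_means}); log-determinants are used to avoid overflow.
\begin{verbatim}
# TRUE if the relative standard deviation rule stops at sample size n.
stop_now <- function(out, Sigma_n, q, eps, alpha = 0.10, nstar = 1000) {
  n <- nrow(out); p <- ncol(out)
  Tsq <- p * q / (q - p + 1) * qf(1 - alpha, p, q - p + 1)
  ldS <- determinant(Sigma_n, logarithm = TRUE)$modulus  # log|Sigma_n|
  vol <- (2 * pi^(p / 2) / (p * gamma(p / 2)))^(1 / p) *
         sqrt(Tsq / n) * exp(ldS / (2 * p))              # Vol(C)^{1/p}
  ldL <- determinant(cov(out), logarithm = TRUE)$modulus # log|Lambda_n|
  rsd <- exp(ldL / (2 * p))                              # |Lambda_n|^{1/2p}
  vol + eps * rsd * (n < nstar) + 1 / n <= eps * rsd
}
\end{verbatim}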
\section{Effective Sample Size}
\label{sec:multivariate_effective_sample_size}
\cite{gong:fleg:2015}, \cite{kass:carlin:gelman:neal:1998}, \cite{liu:2008}, and \cite{robe:case:2013} define ESS for the $i$th component of the process as
\[
\text{ESS}_i = \dfrac{n}{1 + 2\sum_{k=1}^{\infty} \rho(Y^{(i)}_1, Y^{(i)}_{1+k})} = n \dfrac{\lambda_i^2}{\sigma_i^2},
\]
where $\rho(Y^{(i)}_1, Y^{(i)}_{1+k})$ is the lag $k$ correlation for the $i$th component of $Y$, $\sigma_i^2$ is the $i$th diagonal element of $\Sigma$, and $\lambda_i^2$ is the $i$th diagonal element of $\Lambda$.
A strongly consistent estimator of ESS$_i$ is obtained through strongly consistent estimators of $\lambda_i^2$ and $\sigma_i^2$ via the sample variance $(\lambda^2_{n,i})$ and univariate batch means estimators $(\sigma^2_{n,i})$, respectively.
ESS$_i$ is then estimated for each component separately, and a conservative estimate of the overall ESS is taken to be the minimum of all ESS$_i$. This leads to the situation where the estimate of ESS is dictated by the components that mix the slowest, while ignoring all other components.
Instead of using the diagonals of $\Lambda$ and $\Sigma$ to define ESS, we use the matrices themselves. Let $S_p^+$ denote the set of all $p \times p$ positive definite matrices. Univariate quantification of the matrices requires a mapping $S_p^+ \to {\mathbb R}_+$ that captures the variability described by the covariance matrix. We use the determinant since for a random vector, the determinant of its covariance matrix is its generalized variance. The concept of generalized variance was first introduced by \cite{wilks:1932} as a univariate measure of spread for a multivariate distribution. \cite{wilks:1932} recommended the use of the $p$th root of the generalized variance. This was formalized by \cite{sengupta:1987} as the \textit{standardized generalized variance} in order to compare variability over different dimensions. We define
\begin{equation*}
\label{eq:multi_ESS}
\text{ESS} = n \left( \dfrac{|\Lambda|}{|\Sigma|} \right)^{1/p}\, .
\end{equation*}
When $p = 1$, the ESS reduces to the form of univariate ESS presented above. Let $\Lambda_n$ be the sample covariance matrix of $\{Y_t\}$ and $\Sigma_n$ be a strongly consistent estimator of $\Sigma$. Then a strongly consistent estimator of ESS is
\begin{equation*}
\label{eq:multi_ESS_est}
\widehat{\text{ESS}} = n \left(\dfrac{|\Lambda_n|}{|\Sigma_n|}\right)^{1/p}.
\end{equation*}
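Given the output matrix and any strongly consistent estimate of $\Sigma$, computing $\widehat{\text{ESS}}$ is a single line of linear algebra; a sketch in R (production implementations are available in the \texttt{mcmcse} package of \cite{fleg:hugh:vats:2015}):
\begin{verbatim}
# Estimated multivariate ESS from an n x p output matrix `out` and an
# estimate `Sigma_n` of Sigma, e.g. the mBM estimator of the next section.
multi_ess <- function(out, Sigma_n) {
  n <- nrow(out); p <- ncol(out)
  ldL <- determinant(cov(out), logarithm = TRUE)$modulus # log|Lambda_n|
  ldS <- determinant(Sigma_n,  logarithm = TRUE)$modulus # log|Sigma_n|
  n * exp((ldL - ldS) / p)
}
\end{verbatim}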
\subsection{Lower Bound for Effective Sample Size}
\label{subsec:ess}
Rearranging the defining inequality in \eqref{eq:rel_sd_rule} yields that when $n \ge n^*$
\begin{align*}
\widehat{\text{ESS}} & \geq \left[ \left(\dfrac{2 \pi^{p/2}}{p \Gamma(p/2)} \right)^{1/p} \left(T^2_{1-\alpha, p, q}\right)^{1/2} + \frac{ |\Sigma_n|^{-1/2p}}{n^{1/2}}\right]^{2} \dfrac{1}{\epsilon^2} \approx \dfrac{2^{2/p} \pi}{(p \Gamma(p/2))^{2/p}} \left(T^2_{1-\alpha, p, q}\right) \dfrac{1}{\epsilon^2}\, .
\end{align*}
Thus, the relative standard deviation fixed-volume sequential stopping rule is equivalent to terminating the first time $\widehat{\text{ESS}}$ is larger than a lower bound. This lower bound is a function of $n$ through $q$ and thus is difficult to determine before starting the simulation. However, as $n \to \infty$, the scaled $T^2_{p,q}$ distribution converges to a $\chi^2_p$, leading to the following approximation
\begin{equation}
\label{eq:lower_bound_ess}
\widehat{\text{ESS}} \geq \dfrac{2^{2/p} \pi}{(p \Gamma(p/2))^{2/p}} \dfrac{\chi^2_{1-\alpha, p}}{\epsilon^2}\, .
\end{equation}
One can \textit{a priori} determine the number of effective samples required for a given choice of $\epsilon$ and $\alpha$. As $p \to \infty$, the lower bound in \eqref{eq:lower_bound_ess} converges to $2\pi e/\epsilon^2$, since $\chi^2_{1-\alpha, p}/p \to 1$ and, by Stirling's formula, $(p\, \Gamma(p/2))^{2/p} \sim p/(2e)$. Thus for large $p$, the lower bound is determined mainly by the choice of $\epsilon$.
On the other hand, for a fixed $\alpha$, having obtained $W$ effective samples, the user can use the lower bound to determine the relative precision ($\epsilon$) in their estimation. In this way, \eqref{eq:lower_bound_ess} can be used to make informed decisions regarding termination.
\begin{example} Suppose $p=5$ (as in the logistic regression setting of Section~\ref{sub:illust_example}) and that we want a precision of $\epsilon = .05$ (so the Monte Carlo error is 5\% of the variability in the target distribution) for a $95\%$ confidence region. This requires $\widehat{\text{ESS}} \geq 8605$. On the other hand, if we simulate until $\widehat{\text{ESS}} = 10000$, we obtain a precision of $\epsilon = .0464$. \end{example}
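For the record, the arithmetic behind these numbers uses $\Gamma(5/2) \approx 1.329$ and $\chi^2_{0.95, 5} \approx 11.07$, so that
\begin{equation*}
\frac{2^{2/5}\, \pi}{\left(5\, \Gamma(5/2)\right)^{2/5}}\, \chi^2_{0.95, 5} \approx 1.943 \times 11.07 \approx 21.51\, ,
\end{equation*}
giving the bound $21.51/(0.05)^2 \approx 8605$; conversely, $\widehat{\text{ESS}} = 10000$ gives $\epsilon \approx (21.51/10000)^{1/2} \approx 0.0464$.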
\begin{remark}\label{rem:nstar} Let $n_{\text{pos}}$ be the smallest integer such
that $\Sigma_{n_{\text{pos}}}$ is positive definite; in the next section we will discuss how to choose $n_{\text{pos}}$ for the mBM estimator. In light of the lower bound in \eqref{eq:lower_bound_ess}, a natural choice of $n^*$ is
\begin{equation} \label{eq:nstar_bound}
n^* \geq \max \left\{ n_{\text{pos}}, \dfrac{2^{2/p} \pi}{(p \Gamma(p/2))^{2/p}} \dfrac{\chi^2_{1-\alpha, p}}{\epsilon^2} \right\}\,.
\end{equation}
\end{remark}
\section{Strong Consistency of Multivariate Batch Means Estimator}
\label{sec:multivariate_batch_means}
In Sections \ref{sec:termination_rules} and \ref{sec:multivariate_effective_sample_size} we assumed the existence of a strongly consistent estimator of $\Sigma$. A class of multivariate spectral variance estimators was shown to be strongly consistent by \cite{vats:fleg:jones:2017}. However, when $p$ is large, this class of estimators is expensive to compute, as we show in Section \ref{sec:discussion}. Thus, we present the relatively inexpensive mBM estimator and provide conditions for its strong consistency.
Let $n = a_n b_n$, where $a_n$ is the number of batches and $b_n$ is the batch size. For $k = 0, \dots, a_n-1$, define $\bar{Y}_k := b_n^{-1} \sum_{t = 1}^{b_n} Y_{kb_n + t}$. Then $\bar{Y}_k$ is the mean vector for batch $k$ and the mBM estimator of $\Sigma$ is given by \begin{equation} \label{eq:mbm}
\Sigma_{n} = \dfrac{b_n}{a_n - 1} \displaystyle \sum_{k=0}^{a_n - 1} \left(\bar{Y}_k - \theta_n \right) \left( \bar{Y}_k - \theta_n \right)^T.
\end{equation}
For the mBM estimator, $q$ in \eqref{eq:vol_conf} is $a_n - p$. In addition, $\Sigma_n$ is singular if $a_n \le p$; thus $n_{\text{pos}}$ is the smallest $n$ such that $a_{n} > p$.
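The estimator \eqref{eq:mbm} is inexpensive to compute directly; a base-R sketch is given below (any iterations beyond $a_n b_n$ are simply dropped, one of several reasonable conventions).
\begin{verbatim}
# mBM estimate of Sigma from an n x p output matrix `out` with batch
# size b_n; returns a p x p matrix.
mbm <- function(out, b_n) {
  a_n <- floor(nrow(out) / b_n)
  out <- out[seq_len(a_n * b_n), , drop = FALSE]  # force n = a_n * b_n
  theta_n <- colMeans(out)
  batch <- rep(seq_len(a_n), each = b_n)
  Ybar  <- apply(out, 2, function(v) tapply(v, batch, mean))
  dev   <- sweep(Ybar, 2, theta_n)                # centered batch means
  b_n * crossprod(dev) / (a_n - 1)
}
\end{verbatim}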
When $Y_t$ is univariate, the batch means estimator has been well studied for MCMC problems \citep{jone:hara:caff:neat:2006,fleg:jone:2010} and for steady state simulations \citep{dame:1994,glyn:igle:1990,glyn:whit:1991}. \cite{glyn:whit:1991} showed that the batch means estimator cannot be consistent for fixed batch size, $b_n$. \cite{dame:1994,damerdji:1995}, \cite{jone:hara:caff:neat:2006} and \cite{fleg:jone:2010} established its asymptotic properties including strong consistency and mean square consistency when \textit{both} the batch size and number of batches increases with $n$.
The multivariate extension as in \eqref{eq:mbm} was first introduced by \cite{chen:seila:1987}. For steady-state simulation, \cite{charnes:1995} and \cite{muno:glyn:2001} studied confidence regions for $\theta$ based on the mBM estimator, however, its asymptotic properties remain unexplored. In Theorem \ref{thm:mbm}, we present conditions for strong consistency of $\Sigma_n$ in estimating $\Sigma$ for MCMC, but our results hold for more general processes. Our main assumption on the process is that of a \textit{strong invariance principle} (SIP).
\begin{cond}
\label{cond:msip}
Let $\|\cdot\|$ denote the Euclidean norm and $\{B(t), t\geq 0\}$ be a $p$-dimensional standard Brownian motion. There exist a $p \times p$ lower triangular matrix $L$ with $LL^T = \Sigma$, a nonnegative increasing function $\gamma$ on the positive integers, a finite random variable $D$, and a sufficiently rich probability space such that, with probability 1, as $n \to \infty$,
\begin{equation}\label{eq:sip}
\| n(\theta_n - \theta) - LB(n)\| < D \gamma(n)\,.
\end{equation}
\end{cond}
\begin{cond}
\label{cond:bn_conditions}
The batch size $b_n$ satisfies the following conditions,
\begin{enumerate}
\item \label{cond:bn_to_n}
$b_n$ is an integer sequence such that $b_n \to \infty$ and $n/b_n \to \infty$ as $n\to \infty$, where $b_n$ and $n/b_n$ are monotonically increasing,
\item \label{cond:bn_by_n_c}
there exists a constant $c \geq 1$ such that $\sum_{n} \left({b_n}{n}^{-1}\right)^c < \infty$.
\end{enumerate}
\end{cond}
In Theorem \ref{thm:mbm} we show strong consistency of $\Sigma_n$; the proof is given in the supplementary material.
\begin{theorem}
\label{thm:mbm}
Let $g$ be such that $E_F \|g\|^{2 + \delta} < \infty$ for some $\delta > 0$. Let $X$ be an $F$-invariant polynomially ergodic Markov chain of order $m > (1 + \epsilon_1)(1+2/\delta)$ for some $\epsilon_1 > 0$. Then \eqref{eq:sip} holds with $\gamma(n) = n^{1/2 - \lambda}$ for some $\lambda >0 $. If Condition \ref{cond:bn_conditions} holds and $b_n^{-1/2} (\log n)^{1/2} n^{1/2 - \lambda} \to 0$ as $n \to \infty$, then $\Sigma_{n} \to \Sigma$, with probability 1, as $n \to \infty$.
\end{theorem}
\begin{remark}\label{rem:proof_general}
The theorem holds more generally outside the context of Markov chains for processes that satisfy Condition \ref{cond:msip}. This includes independent processes \citep{berk:phil:1979,einm:1989,zait:1998}, martingale sequences \citep{eber:1986}, renewal processes \citep{horv:1984}, and $\phi$-mixing and strongly mixing processes \citep{kuel:phil:1980,dehl:phil:1982}. The general statement of the theorem is provided in the supplementary material.
\end{remark}
\begin{remark}
\label{rem:lambda}
Using Theorem 4 from \cite{kuel:phil:1980}, \cite{vats:fleg:jones:2017} established Condition \ref{cond:msip} with $\gamma(n) = n^{1/2 - \lambda}$, for some $\lambda > 0$, for polynomially ergodic Markov chains. We use their result directly. \cite{kuel:phil:1980} show that $\lambda$ depends only on $p$, $\epsilon$, and $\delta$; however, the exact relationship remains an open problem.
For slow mixing processes $\lambda$ is closer to $0$ while for fast mixing processes $\lambda$ is closer to $1/2$ \citep{dame:1991, dame:1994}.
\end{remark}
\begin{remark}\label{rem:requiring_reg}
It is natural to consider $b_n = \lfloor n^{\nu} \rfloor$ for $ 0 < \nu < 1 $. Then $\lambda$ in the SIP must satisfy $\lambda > (1 - \nu)/2$ so that $b_n^{-1/2} (\log n)^{1/2} n^{1/2 - \lambda} \to 0$ as $n \to \infty$.
Since $\nu > 1 - 2\lambda$, smaller batch sizes suffice for fast mixing processes and slow mixing processes require larger batch sizes. This reinforces our intuition that higher correlation calls for larger batch sizes. Calibrating $\nu$ in $b_n = \lfloor n^{\nu} \rfloor$ is essential to ensuring the mBM estimates perform well in finite samples. Using mean square consistency of univariate batch means estimators, \cite{fleg:jone:2010} concluded that an asymptotically optimal batch size is proportional to $\lfloor n^{1/3} \rfloor$.
\end{remark}
\begin{remark}\label{rem:weaker_conditions}
For $p = 1$, \cite{jone:hara:caff:neat:2006} proved strong consistency of the batch means estimator under the stronger assumption of geometric ergodicity and a one-step minorization, which we do not make. Thus, in Theorem \ref{thm:mbm} while extending the result of strong consistency to $p \geq 1$, we also weaken the conditions for the univariate case.
\end{remark}
\begin{remark}\label{rem:eigen_con}
By Theorem 3 in \cite{vats:fleg:jones:2017}, strong consistency of the mBM estimator implies strong consistency of its eigenvalues.
\end{remark}
\section{Examples}
\label{sec:examples}
In each of the following examples we present a target distribution $F$, a Markov chain with $F$ as its invariant distribution, we specify $g$, and are interested in estimating $\text{E}_Fg$. We consider the finite sample performance (based on 1000 independent replications) of the relative standard deviation fixed-volume sequential stopping rules and compare them to the relative standard deviation fixed-width sequential stopping rules (see \cite{fleg:gong:2015} and the supplementary material). In each case we make 90\% confidence regions for various choices of $\epsilon$ and specify our choice of $n^*$ and $b_n$. The sequential stopping rules are checked at 10\% increments of the current Monte Carlo sample size.
\subsection{Bayesian Logistic Regression}
\label{sec:bayes_log}
We continue the Bayesian logistic regression example of
Section \ref{sub:illust_example}. Recall that a random walk
Metropolis-Hastings algorithm was implemented to sample from the
intractable posterior. We prove the chain is geometrically ergodic in
the supplementary material.
\begin{theorem}
\label{thm:geom_erg_logistic}
The random walk based Metropolis-Hastings algorithm with invariant distribution given by the posterior from \eqref{eq:logistic model} is geometrically ergodic.
\end{theorem}
As a consequence of Theorem \ref{thm:geom_erg_logistic} and the fact that $F$ has a moment generating function, the conditions of Theorems~\ref{thm:asymp_valid} and \ref{thm:mbm} hold.
Motivated by the ACF plot in Figure \ref{fig:blog_trace_acf}, $b_n$ was set to $\lfloor n^{1/2} \rfloor$ and $n^* = 1000$. For calculating coverage probabilities, we declare the ``truth'' as the posterior mean from an independent simulation of length $10^9$. The results are presented in Table \ref{tab:blog_all}. As before, the univariate uncorrected method has poor coverage probabilities. For $\epsilon = 0.02$ and $0.01$, the coverage probabilities for both the mBM and uBM-Bonferroni regions are at 90\%. However, termination for mBM is significantly earlier.
\begin{table}[h]
\footnotesize
\caption{\footnotesize \label{tab:blog_all}Bayesian Logistic Regression: Over 1000 replications, we present termination iterations, effective sample size at termination and coverage probabilities at termination for each corresponding method. Standard errors are in parentheses.}
\begin{center}
\begin{tabular}{c|ccc}
\hline
& mBM & uBM-Bonferroni & uBM\\ \hline
\multicolumn{4}{c}{Termination Iteration} \\
\hline
$\epsilon= 0.05$ & 133005 \tiny{(196)}& 201497 \tiny{(391)} & 100445 \tiny{(213)}\\
$\epsilon = 0.02$ & 844082 \tiny{(1158)} & 1262194 \tiny{(1880)} & 629898 \tiny{(1036)}\\
$\epsilon = 0.01$ & 3309526 \tiny{(1837)} & 5046449 \tiny{(7626)} & 2510673 \tiny{(3150)}\\
\hline
\multicolumn{4}{c}{Effective Sample Size} \\ \hline
$\epsilon = 0.05$ & 7712 \tiny{(9)} & 9270 \tiny{(13)} & 4643 \tiny{(7)}\\
$\epsilon = 0.02$ & 47862 \tiny{(51)} & 57341 \tiny{(65)} & 28768 \tiny{(36)}\\
$\epsilon = 0.01$ & 186103 \tiny{(110)} & 228448 \tiny{(271)}&113831 \tiny{(116)}\\ \hline
\multicolumn{4}{c}{Coverage Probabilities} \\ \hline
$\epsilon = 0.05$ & 0.889 \tiny{(0.0099)} & 0.909 \tiny{(0.0091)} & 0.569 \tiny{(0.0157)}\\
$\epsilon = 0.02$ & 0.896 \tiny{(0.0097)} & 0.912 \tiny{(0.0090)} & 0.606 \tiny{(0.0155)}\\
$\epsilon = 0.01$ & 0.892 \tiny{(0.0098)} & 0.895 \tiny{(0.0097)} &0.606 \tiny{(0.0155)}\\ \hline
\end{tabular}
\end{center}
\end{table}
Recall from Theorem \ref{thm:asymp_valid} that, as $\epsilon$ decreases to zero, the coverage probability of confidence regions created at termination using the relative standard deviation fixed-volume sequential stopping rule converges to the nominal level. This is demonstrated in Figure \ref{fig:blog_asymp_validity}, where we present the coverage probability over 1000 replications as $\epsilon$ decreases. Notice that the increase in coverage probabilities need not be monotonic due to the underlying randomness.
\begin{figure}[h]
\centering
\includegraphics[width = 4in]{blog_asymp_validity}
\caption{\footnotesize Bayesian Logistic: Plot of coverage probability with confidence bands as $\epsilon$ decreases at 90\% nominal rate. Replications = 1000.}
\label{fig:blog_asymp_validity}
\end{figure}
\subsection{Vector Autoregressive Process}
\label{sec:var}
Consider the vector autoregressive process of order 1 (VAR(1)). For $ t = 1, 2, \dots $,
\[ Y_t = \Phi Y_{t-1} + \epsilon_t, \]
where $Y_t \in \mathbb{R}^p$, $\Phi$ is a $p \times p$ matrix, $\epsilon_t \overset{iid}{\sim} N_p(0, \Omega)$, and $\Omega$ is a $p \times p$ positive definite matrix. The matrix $\Phi$ determines the nature of the autocorrelation. This Markov chain has invariant distribution $F = N_p(0, V)$, where $\text{vec}(V) = (I_{p^2} - \Phi \otimes \Phi)^{-1} \text{vec}(\Omega)$ and $\otimes$ denotes the Kronecker product, and is geometrically ergodic when the spectral radius of $\Phi$ is less than 1 \citep{tjos:1990}.
Consider the goal of estimating the mean of $F$, $\text{E}_FY = 0$ with $\bar{Y}_n$. Then
\[ \Sigma = (I_p - \Phi)^{-1}V + V (I_p - \Phi^T)^{-1} -V. \] Let $p = 5$, $\Phi = \text{diag}(.9, .5, .1, .1, .1)$, and $\Omega$ be the AR(1) covariance matrix with autocorrelation $0.9$. Since the first eigenvalue of $\Phi$ is large, the first component mixes slowest. We sample the process for $10^5$ iterations and in Figure \ref{fig:var_acf_trace} present the ACF plot for $Y^{(1)}$ and $Y^{(3)}$ and the cross-correlation (CCF) plot between $Y^{(1)}$ and $Y^{(3)}$, in addition to the trace plot for $Y^{(1)}$. Notice that $Y^{(1)}$ exhibits higher autocorrelation than $Y^{(3)}$ and there is significant cross-correlation between $Y^{(1)}$ and $Y^{(3)}$.
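The simulation itself takes only a few lines; the sketch below generates the chain and estimates $\Sigma$ with the \texttt{mbm} sketch of Section \ref{sec:multivariate_batch_means}, which can then be compared with the closed form above.
\begin{verbatim}
# Simulate the VAR(1) chain of this section and estimate Sigma by mBM.
set.seed(1)
p <- 5; n <- 1e5
Phi   <- diag(c(0.9, 0.5, 0.1, 0.1, 0.1))
Omega <- 0.9^abs(outer(1:p, 1:p, "-"))       # AR(1) covariance matrix
V     <- matrix(solve(diag(p^2) - kronecker(Phi, Phi),
                      as.vector(Omega)), p, p)
Sigma <- solve(diag(p) - Phi) %*% V + V %*% solve(diag(p) - t(Phi)) - V
R <- chol(Omega)                             # Omega = t(R) %*% R
Y <- matrix(0, n, p)
Y[1, ] <- drop(rnorm(p) %*% chol(V))         # start in stationarity
for (t in 2:n)
  Y[t, ] <- drop(Phi %*% Y[t - 1, ]) + drop(rnorm(p) %*% R)
Sigma_n <- mbm(Y, floor(sqrt(n)))            # compare with Sigma
\end{verbatim}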
\begin{figure}[h]
\centering
\subfloat[]{\includegraphics[width = 2.5in]{var_acf_trace_13} \label{fig:var_acf_trace}}
\subfloat[]{\includegraphics[width = 2.5in]{var_13.pdf} \label{fig:ellipse_conf_region_var}}
\caption{\footnotesize VAR: (a) ACF plot for $Y^{(1)}$ and $Y^{(3)}$, CCF plot between $Y^{(1)}$ and $Y^{(3)}$, and trace plot for $Y^{(1)}$. Monte Carlo sample size is $10^5$. (b) Joint 90\% confidence region for first the two components of $Y$. The solid ellipse is made using mBM, the dotted box using uBM uncorrected and the dashed line using uBM corrected by Bonferroni. The Monte Carlo sample size is $10^5$. }
\end{figure}
Figure \ref{fig:ellipse_conf_region_var} displays joint confidence regions for $Y^{(1)}$ and $Y^{(3)}$. Recall that the true mean is $(0,0)$, and is present in all three regions, but the ellipse produced by mBM has significantly smaller volume than the uBM boxes. The orientation of the ellipse is determined by the cross-correlations witnessed in Figure \ref{fig:var_acf_trace}.
We set $n^* = 1000$, $b_n = \lfloor n^{1/2} \rfloor$, and $\epsilon \in \{0.05, 0.02, 0.01\}$, and at termination of each method calculate the coverage probabilities and effective sample size. Results are presented in Table \ref{tab:var_all}. Note that as $\epsilon$ decreases, termination time increases and coverage probabilities tend to the 90\% nominal level for each method.
Also note that the uncorrected method produces confidence regions with undesirable coverage probabilities and thus is not of interest. Consider $\epsilon = .02$ in Table \ref{tab:var_all}. Termination for mBM is at about 8.8e4 iterations compared to about 1.1e6 for uBM-Bonferroni. However, the estimate of multivariate ESS at the 8.8e4 iterations is about 4.9e4 effective samples, compared to a univariate ESS of about 5.7e4 samples for the 1.1e6 iterations. This is because the leading component $Y^{(1)}$ mixes much slower than the other components and dictates the behavior of the univariate ESS.
\begin{table}[h]
\footnotesize
\caption{\footnotesize \label{tab:var_all}VAR: Over 1000 replications, we present termination iterations, effective sample size at termination and coverage probabilities at termination for each corresponding method. Standard errors are in parentheses.}
\begin{center}
\begin{tabular}{c|ccc}
\hline
& mBM & uBM-Bonferroni & uBM\\ \hline
\multicolumn{4}{c}{Termination Iteration} \\
\hline
$\epsilon= 0.05$ & 14574 \tiny{(27)} & 169890 \tiny{(393)} & 83910 \tiny{(222)}\\
$\epsilon = 0.02$ & 87682 \tiny{(118)} & 1071449 \tiny{(1733)} & 533377 \tiny{(1015)}\\
$\epsilon = 0.01$ & 343775 \tiny{(469)} & 4317599 \tiny{(5358)} & 2149042 \tiny{(3412)}\\
\hline
\multicolumn{4}{c}{Effective Sample Size} \\ \hline
$\epsilon = 0.05$ & 8170 \tiny{(11)} & 9298 \tiny{(13)} & 4658 \tiny{(7)}\\
$\epsilon = 0.02$ & 48659 \tiny{(50)} & 57392 \tiny{(68)} & 28756 \tiny{(37)}\\
$\epsilon = 0.01$ & 190198 \tiny{(208)} & 228772 \tiny{(223)} & 114553 \tiny{(137)}\\ \hline
\multicolumn{4}{c}{Coverage Probabilities} \\ \hline
$\epsilon = 0.05$ & 0.911 \tiny{(0.0090)} & 0.940 \tiny{(0.0075)} & 0.770 \tiny{(0.0133)}\\
$\epsilon = 0.02$ & 0.894 \tiny{(0.0097)} & 0.950 \tiny{(0.0069)} & 0.769 \tiny{(0.0133)}\\
$\epsilon = 0.01$ & 0.909 \tiny{(0.0091)} & 0.945 \tiny{(0.0072)} & 0.779 \tiny{(0.0131)}\\ \hline
\end{tabular}
\end{center}
\end{table}
A small study presented in Table \ref{tab:var_ess} elaborates on this behavior. We present the mean estimate of ESS using multivariate and univariate methods based on 100 replications of Monte Carlo sample sizes $10^5$ and $10^6$. The estimate of ESS for the first component is significantly smaller than all other components leading to a conservative univariate estimate of ESS.
\begin{table}[h]
\footnotesize
\caption{\footnotesize \label{tab:var_ess}VAR: Effective sample size (ESS) estimated using the proposed multivariate method and the univariate method of \cite{gong:fleg:2015} for Monte Carlo sample sizes of $n = 1e5, 1e6$ and 100 replications. Standard errors are in parentheses.}
\begin{center}
\begin{tabular}{c|c|ccccc}
\hline
$n$ & ESS & ESS$_1$ & ESS$_2$ & ESS$_3$ & ESS$_4$ & ESS$_5$\\ \hline
1e5 & 55190 \tiny{(200)} & 5432 \tiny{(41)} & 33707 \tiny{(280)} & 82485 \tiny{(728)} & 82903 \tiny{(731)} & 82370 \tiny{(726)}\\ \hline
1e6 & 551015 \tiny{(945)} & 53404 \tiny{(193)} & 334656 \tiny{(1345)} & 819449 \tiny{(3334)} & 819382 \tiny{(3209)} & 819840 \tiny{(2858)}\\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Bayesian Lasso}
\label{sub:bayesian_lasso}
Let $y$ be a $K \times 1$ response vector and $X$ be a $K \times r$ matrix of predictors. We consider the following Bayesian lasso formulation of \cite{park:cas:2008}.
\begin{align*}
y| \beta, \sigma^2, \tau^2 & \sim N_K(X\beta, \sigma^2 I_K) \\
\beta | \sigma^2, \tau^2 & \sim N_r(0, \sigma^2 D_{\tau}) \quad \text{where} \quad D_{\tau} = \text{diag}(\tau^2_1, \tau^2_2, \dots, \tau^2_r)\\
\sigma^2 & \sim \text{Inverse-Gamma}(\alpha, \xi)\\
\tau^2_{j} & \overset{iid}{\sim} \text{Exponential} \left( \dfrac{\lambda^2}{2} \right) \quad \text{ for } j = 1, \dots, r,
\end{align*}
where $\lambda$, $\alpha$, and $\xi$ are fixed, and the Inverse-Gamma$(a,b)$ distribution has density proportional to $x^{-a-1} e^{-b/x}$. We use a deterministic scan Gibbs sampler to draw approximate samples from the posterior; see \citet{khare:hobe:2013} for a full description of the algorithm. \cite{khare:hobe:2013} showed that for $K \geq 3$, this Gibbs sampler is geometrically ergodic for arbitrary $r$, $X$, and $\lambda$.
We fit this model to the cookie dough dataset of \cite{osb:fearn:mill:doug:1984}. The data was collected to test the feasibility of near infra-red (NIR) spectroscopy for measuring the composition of biscuit dough pieces. There are 72 observations. The response variable is the amount of dry flour content measured and the predictor variables are 25 measurements of spectral data spaced equally between 1100 and 2498 nanometers. Since we are interested in estimating the posterior mean for $(\beta, \tau^2, \sigma^2)$, $p = 51$. The data is available in the \texttt{R} package \texttt{ppls}, and the Gibbs sampler is implemented in function \texttt{blasso} in \texttt{R} package \texttt{monomvn}. The ``truth'' was declared by averaging posterior means from 1000 independent runs each of length 1e6. We set $n^* = $ 2e4 and $b_n = \lfloor n^{1/3} \rfloor$.
In Table \ref{tab:blasso_all} we present termination results from 1000 replications. With $p = 51$, the uncorrected univariate method produces confidence regions with low coverage probabilities. The uBM-Bonferroni and mBM methods provide competitive coverage probabilities at termination. However, termination for mBM is significantly earlier than for the univariate methods over all values of $\epsilon$. For $\epsilon = .05$ and $.02$ we observe zero standard error for termination using mBM, since termination is achieved at the same 10\% increment over all 1000 replications; the variability in those estimates is thus less than $10\%$ of the size of the estimate.
\begin{table}[h]
\footnotesize
\caption{\footnotesize \label{tab:blasso_all}Bayesian Lasso: Over 1000 replications, we present termination iterations, effective sample size at termination and coverage probabilities at termination for each corresponding method. Standard errors are in parentheses.}
\begin{center}
\begin{tabular}{c|ccc}
\hline
& mBM & uBM-Bonferroni & uBM\\ \hline
\multicolumn{4}{c}{Termination Iteration} \\
\hline
$\epsilon= 0.05$ & 20000 \tiny{(0)}& 69264 \tiny{(76)} & 20026 \tiny{(7)}\\
$\epsilon = 0.02$ & 69045 \tiny{(0)} & 445754 \tiny{(664)} & 122932 \tiny{(103)}\\
$\epsilon = 0.01$ & 271088 \tiny{(393)} & 1765008 \tiny{(431)} &508445 \tiny{(332)}\\
\hline
\multicolumn{4}{c}{Effective Sample Size} \\ \hline
$\epsilon = 0.05$ & 15631 \tiny{(4)} & 16143 \tiny{(15)}& 4778 \tiny{(6)}\\
$\epsilon = 0.02$ & 52739 \tiny{(8)} & 101205 \tiny{(122)} & 28358 \tiny{(24)}\\
$\epsilon = 0.01$ & 204801 \tiny{(283)} & 395480 \tiny{(163)}&115108 \tiny{(74)}\\ \hline
\multicolumn{4}{c}{Coverage Probabilities} \\ \hline
$\epsilon = 0.05$ & 0.898 \tiny{(0.0096)} & 0.896 \tiny{(0.0097)} & 0.010 \tiny{(0.0031)}\\
$\epsilon = 0.02$ & 0.892 \tiny{(0.0098)} & 0.905 \tiny{(0.0093)} & 0.009 \tiny{(0.0030)}\\
$\epsilon = 0.01$ & 0.898 \tiny{(0.0096)} & 0.929 \tiny{(0.0081)} & 0.009 \tiny{(0.0030)}\\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Bayesian Dynamic Spatial-Temporal Model}
\label{sec:spatio_example}
\cite{gelf:ban:gam:2005} propose a Bayesian hierarchical model for modeling univariate and multivariate dynamic spatial data viewing time as discrete and space as continuous. The methods in their paper have been implemented in the R package \texttt{spBayes}. We present a simpler version of the dynamic model as described by \cite{fin:ban:gel:2015}.
Let $s = 1, 2, \dots, N_s$ be location sites, $t = 1, 2, \dots, N_t$ be time-points, and the observed measurement at location $s$ and time $t$ be denoted by $y_t(s)$. In addition, let $x_t(s)$ be the $r \times 1$ vector of predictors, observed at location $s$ and time $t$, and $\beta_t$ be the $r \times 1$ vector of coefficients. For $t = 1, 2, \dots, N_t$,
\begin{align*}
y_t(s) & = x_t(s)^T \beta_t + u_t(s) + \epsilon_t(s), \quad \epsilon_t(s) \overset{ind}{\sim} N(0, \tau^2_t); \numberthis \label{eq:spatio_measurement}\\
\beta_t & = \beta_{t-1} + \eta_t, \quad \eta_t \overset{iid}{\sim} N(0, \Sigma_{\eta}); \numberthis \label{eq:spatio_transition}\\
u_t(s) & = u_{t-1}(s) + w_t(s), \quad w_t(s) \overset{ind}{\sim} GP(0, \sigma^2_t \rho(\cdot; \phi_t)),
\end{align*}
where $GP(0, \sigma^2_t \rho(\cdot; \phi_t))$ denotes a spatial Gaussian process with covariance function $\sigma^2_t \rho(\cdot; \phi_t)$. Here, $\sigma_t^2$ denotes the spatial variance component and $\rho(\cdot, \phi_t)$ is the correlation function with exponential decay.
Equation \eqref{eq:spatio_measurement} is referred to as the measurement equation and $\epsilon_t(s)$ denotes the measurement error, assumed to be independent of location and time. Equation \eqref{eq:spatio_transition} contains the transition equations which emulate the Markovian nature of dependence in time. To complete the Bayesian hierarchy, the following priors are assumed
\begin{align*}
\beta_0 \sim N(m_0, C_0) & \quad \text{ and } \quad u_0(s) \equiv 0;\\
\tau^2_t \sim \text{IG}(a_{\tau}, b_{\tau}) & \quad \text{ and } \quad \sigma^2_t \sim \text{IG}(a_s, b_s);\\
\Sigma_{\eta} \sim \text{IW}(a_{\eta}, B_{\eta}) & \quad \text{ and } \quad \phi_t \sim \text{Unif}(a_{\phi}, b_{\phi}) \, ,
\end{align*}
where IW is the Inverse-Wishart distribution with probability density function proportional to $|\Sigma_{\eta}| ^{-\frac{a_{\eta} + q + 1}{2}} e^{-\frac{1}{2} tr(B_{\eta} \Sigma_{\eta}^{-1} )} $ and IG$(a, b)$ is the Inverse-Gamma distribution with density proportional to $x^{-a -1} e^{-b/x}$. We fit the model to the \texttt{NETemp} dataset in the \texttt{spBayes} package. This dataset contains monthly temperature measurements from 356 weather stations on the east coast of the USA collected from January 2000 to December 2010. The elevation of the weather stations is also available as a covariate. We choose a subset of the data with 10 weather stations for the year 2000, and fit the model with an intercept. The resulting posterior has $p$ = 185 components.
A component-wise Metropolis-Hastings sampler \citep{john:jone:neat:2013, jone:robe:rose:2014} is described in \cite{gelf:ban:gam:2005} and implemented in the \texttt{spDynLM} function. Default hyperparameter settings were used. The posterior and the rate of convergence for this sampler have not been studied; thus we do not know if the conditions of our theoretical results are satisfied. Our goal is to estimate the posterior expectation of $\theta = (\beta_t, u_t(s), \sigma_t^2, \Sigma_{\eta}, \tau_t^2, \phi_t)$. The truth was declared by averaging over 1000 independent runs of length 2e6 MCMC samples. We set $b_n = \lfloor n^{1/2} \rfloor$ and $n^*$ = 5e4 so that $a_n > p$, ensuring positive definiteness of $\Sigma_n$.
Due to the Markovian transition equations in \eqref{eq:spatio_transition}, the $\beta_t$ and $u_t$ exhibit significant covariance in the posterior distribution. This is evidenced in Figure \ref{fig:spatio_conf_region}, where for Monte Carlo sample size $n = 10^5$ we present confidence regions for $\beta_1^{(0)}$ and $\beta_2^{(0)}$, the intercept coefficients for the first and second months, and for $u_1(1)$ and $u_2(1)$, the additive spatial coefficients for the first two weather stations. The thin ellipses indicate that the principal direction of variation is due to the correlation between the components. This significant reduction in volume, along with the conservative Bonferroni correction ($p$ = 185), results in increased delay in termination when using univariate methods. For smaller values of $\epsilon$ it was not possible to store the MCMC output in memory on an 8 gigabyte machine using uBM-Bonferroni methods.
\begin{figure}[h]
\centering
\includegraphics[width = 3.8in]{spatio_conf_region}
\caption{\footnotesize Bayesian Spatial: 90\% confidence regions for $\beta_1^{(0)}$ and $\beta_2^{(0)}$ and $u_1(1)$ and $u_2(1)$. The Monte Carlo sample size = $10^5$.}
\label{fig:spatio_conf_region}
\end{figure}
As a result (see Table \ref{tab:spatio_all}), the univariate methods could not be implemented for smaller $\epsilon$ values. For $\epsilon = .10$, termination for mBM was at $n^* = $ 5e4 for every replication. At these minimum iterations, the coverage probability for mBM is at 88\%, whereas both univariate methods have far lower coverage probabilities: 0.625 for uBM-Bonferroni and 0.007 for uBM. The coverage probabilities for the uncorrected method are quite small since we are making 185 simultaneous confidence statements.
\begin{table}[h]
\footnotesize
\caption{\footnotesize \label{tab:spatio_all}Bayesian Spatial: Over 1000 replications, we present termination iteration, effective sample size at termination and coverage probabilities at termination for each corresponding method at 90\% nominal levels. Standard errors are in parentheses.}
\begin{center}
\begin{tabular}{c|ccc}
\hline
& mBM & uBM-Bonferroni & uBM\\ \hline
\multicolumn{4}{c}{Termination Iteration} \\
\hline
$\epsilon= 0.10$ & 50000 \tiny{(0)}& 1200849 \tiny{(28315)} & 311856 \tiny{(491)}\\
$\epsilon= 0.05$ & 50030 \tiny{(12)}& - & 1716689 \tiny{(2178)}\\
$\epsilon = 0.02$ & 132748 \tiny{(174)} &- & - \\
\hline
\multicolumn{4}{c}{Effective Sample Size} \\ \hline
$\epsilon = 0.10$ & 55170 \tiny{(20)} & 3184 \tiny{(75)} & 1130 \tiny{(1)}\\
$\epsilon = 0.05$ & 55190 \tiny{(20)} & - & 4525 \tiny{(4)} \\
$\epsilon = 0.02$ & 105166 \tiny{(97)} & - & - \\ \hline
\multicolumn{4}{c}{Coverage Probabilities} \\ \hline
$\epsilon = 0.10$ & 0.882 \tiny{(0.0102)} & 0.625 \tiny{(0.0153)} & 0.007 \tiny{(0.0026)}\\
$\epsilon = 0.05$ & 0.881 \tiny{(0.0102)} & - & 0.016 \tiny{(0.0040)} \\
$\epsilon = 0.02$ & 0.864 \tiny{(0.0108)} & - & -\\ \hline
\end{tabular}
\end{center}
\end{table}
\section{Discussion}
\label{sec:discussion}
Multivariate analysis of MCMC output data has received little attention. \cite{seil:1982} and \cite{chen:seila:1987} built a framework for multivariate analysis for a Markov chain using regenerative simulation. Since establishing regenerative properties for a Markov chain requires a separate analysis for every problem and will not work well in component-wise Metropolis-Hastings samplers, the application of their work is limited. \cite{paul:mac:2012} used a specific multivariate Markov chain CLT for their MCMC convergence diagnostic. More recently, \cite{vats:fleg:jones:2017} showed strong consistency of the multivariate spectral variance (mSV) estimators of $\Sigma$, which could potentially substitute for the mBM estimator in applications to termination rules. However, outside of toy problems where $p$ is small, the mSV estimator is computationally demanding compared to the mBM estimator. To compare these two estimators, we extend the VAR(1) example discussed in Section \ref{sec:var}, to $p = 50$. Since in this case we know the true $\Sigma$, we assess the relative error in estimation $\|\hat{\Sigma} - \Sigma\|_F/\|\Sigma\|_F$ where $\hat{\Sigma}$ is either the mBM or mSV estimator and $\|\cdot\|_F$ is the Frobenius norm. The results are reported in Table \ref{tab:comparison_msve}. The mSV estimator overall has slightly smaller relative error, but as the Monte Carlo sample size increases, computation time is significantly higher than the mBM estimator. Also note that the relative error for both estimators decreases with an increase in Monte Carlo sample size. The mSV and mBM estimators along with multivariate ESS have been implemented in the \texttt{mcmcse} R package \citep{fleg:hugh:vats:2015}.
\begin{table}[h]
\footnotesize
\caption{\footnotesize \label{tab:comparison_msve} Relative error and computation time (in seconds) comparison between mSV and mBM estimators for a VAR(1) model with $p = 50$ over varying Monte Carlo samples sizes. Standard errors are in parentheses.}
\begin{center}
\begin{tabular}{lccc}
\hline
Method & $n$ & Relative Error & Computation Time\\ \hline
mBM & 1e3 & 0.373 \tiny{(0.00374)} & 0.00069 \tiny{(1.5e-05)} \\
mSV & 1e3 & 0.371 \tiny{(0.00375)} & 0.06035 \tiny{(2.1e-05)}\\
\hline
mBM & 1e4 & 0.177 \tiny{(0.00205)} & 0.00376 \tiny{(1.7e-05)} \\
mSV & 1e4 & 0.163 \tiny{(0.00197)}& 2.13193 \tiny{(1.8e-04)} \\
\hline
mBM & 1e5 & 0.095 \tiny{(0.00113)} & 0.04038 \tiny{(8.6e-05)} \\
mSV & 1e5 & 0.081 \tiny{(0.00100)}& 68.2796 \tiny{(0.11416)} \\
\hline
\end{tabular}
\end{center}
\end{table}
There are two aspects of the proposed methodology that will benefit from further research. First, the rate of convergence of the Markov chain affects the choice of $b_n$ through the $\lambda$ in the SIP. Aside from \cite{damerdji:1995} and \cite{fleg:jone:2010}, little work has been done on optimal batch size selection for batch means estimators. This area warrants further research, in both asymptotic and finite-sample optimal batch size selection. In the supplement we study the effect of different batch sizes on simulation termination using the relative standard deviation fixed-volume sequential stopping rule. We notice that a decrease in the tolerance level $\epsilon$ decreases the sensitivity of termination to the choice of $b_n$. We have found that using a large batch size such as $b_n = \lfloor n^{1/2} \rfloor$ often works well.
Second, when using the mBM estimator, a truly large $p$ requires a large Monte Carlo sample size to ensure a positive definite estimate of $\Sigma$. The mSV estimator might yield positive definite estimates at smaller sample sizes, but is far too computationally expensive (see Table \ref{tab:comparison_msve}). It could be interesting to investigate the use of dimension reduction techniques or high-dimensional asymptotics to address this problem.
\bibliographystyle{apalike}
\section{Introduction}
Graphene, a single layer of carbon atoms laid out in a honeycomb lattice, is one of the most
interesting electronic systems discovered in recent years \cite{ag,ah}. It differs from conventional
two dimensional electron gas (2DEG) systems in that the low energy physics is governed by a massless
Dirac Hamiltonian rather than the more common form used for semiconductors, characterized by an effective
mass and a band gap.
The tunneling of Dirac fermions in graphene has already been verified experimentally~\cite{n},
which in turn has spurred an extraordinary amount of interest in the investigation of the electronic
transport properties in graphene based quantum wells, barriers, $p$-$n$ junctions, transistors,
quantum dots, superlattices,
etc. Electrostatic barriers in graphene can be generated in various ways~\cite{ka,hs}: by applying a gate voltage, by cutting the sheet into finite-width nanoribbons, or by doping.
On the other hand, magnetic barriers could in principle~\cite{ka,mr,ma} be realized through the creation
of magnetic dots. In the case of graphene, results for the transmission coefficient and the tunneling
conductance have already been reported for electrostatic barriers~\cite{ka,hs,mr,ma,s},
magnetic barriers~\cite{mr,ch,mo} and potential barriers~\cite{ben,ad,nov}.
The fact that the carriers in an ideal graphene sheet are massless gives rise to the Klein paradox,
which allows particles to tunnel through any electrostatic potential barrier; that is, the wavefunction
has an oscillatory tail outside the barrier region. This property excludes the possibility of confining
electrons using electrostatic gates, as is done in usual semiconductors. Thus, to enable the fabrication of
confined structures such as quantum dots, one needs another type of potential coupling, such as the vector
potential coupling considered in the present work. More precisely, we will use two methods, one numerical
(the Poincar\'e map)~\cite{bahlouli} and the other analytical, to study the tunneling of Dirac fermions through
a biased graphene strip.
The paper is organized as follows. In section 2, we describe our theoretical model Hamiltonian and apply the Poincar\'e
map approach, based on
the spatial discretization of the effective 1D Dirac equation. In section 3, we present the direct analytical
approach to the same problem. In section 4, we discuss the numerical implementation of both
approaches for a specific model potential, the linear potential generating a static electric field,
and compare the two approaches.
\section{Poincar\'e map}
Before we embark on the two approaches mentioned above, we would like to describe mathematically our system
of massless Dirac fermions within a strip of graphene characterized by a very large length scale, and a
width $W$, in the presence of an applied linear potential $V(x)$ between $x=0$ and $x=L$. Our system is thus
composed of three major regions: the outer regions $(\sf I)$ and $(\sf III)$ contain intrinsic graphene free of
any external potential, while the intermediate region $(\sf II)$ is subject to the applied linear potential $V(x)$.
Graphene band structure has two Fermi points, each with a two-fold band degeneracy, and can be described by a
tight-binding Hamiltonian involving two interlacing honeycomb sublattices. At low energies this Hamiltonian
can be described by a continuum approximation to the original tight-binding model, which reduces to the
two-dimensional Dirac equation with a four-component envelope wavefunction whose components are labeled by a
Fermi-point pseudospin $\pm 1$. Specifically, the Hamiltonian for one pseudospin component of the present
system can be written as
\begin{eqnarray}\label{ak}
H=v_F \vec{\sigma}\cdot\vec{p}+V(x)
\end{eqnarray}
where $v_F\simeq 9.84\times10^{5}~\text{m/s}$ is the Fermi velocity
and $\vec{\sigma}=(\sigma_{x},\sigma_{y})$ are the Pauli matrices.
Hereafter we set our units such that $v_F=\hbar=1$. The linear potential $V(x)$ has the following form
\begin{equation} \label{vxx}
V(x)=\left\{\begin{array}{cc} {-Fx+V_0}, & \qquad {0<x<L} \\ {0}, & \qquad {\mbox{otherwise}} \end{array}\right.
\end{equation}
where
$F=\frac{V_0}{L}$ is the strength of the static electric field. This potential configuration is shown in Figure~1
below.
\begin{center}
\includegraphics [width=3in,keepaspectratio]
{fig01}\\
{\sf{Figure 1: Discretization of the linear potential $V(x)$. }}
\end{center}
Our system is assumed to have a finite width $W$ with infinite-mass boundary conditions for
the wavefunction at the boundaries $y = 0$ and $y = W$ along the $y$-direction~\cite{ben,ber}.
This boundary condition results in a quantization of the transverse momentum along the $y$-direction, which gives
\begin{equation}\lb{1}
k_{y}=\frac{\pi}{W}\left(l+\frac{1}{2}\right), \qquad l=0,1,2\cdots.
\end{equation}
One can therefore assume a spinor solution of the form $\psi^{j}(x,y)=\left(\phi_{{1}}^{j}(x),\phi_{{2}}^{j}(x)\right)^{T}e^{ik_{y}y}$, where the superscript $j={\sf I}, {\sf II}, {\sf III}$ indicates the space region while the subscripts label the two spinor components.
Thus our problem reduces to an effective 1D problem whose Dirac equation can be written as
\begin{equation} \label{1d}
\left(\begin{array}{cc} {V(x)-\varepsilon } & {\frac{d}{dx} +k_{y}} \\ {-\frac{d}{dx} +k_{y}} & {V(x)-\varepsilon } \end{array}\right)\left(\begin{array}{c} {\phi_{{1}}^{j}(x)} \\ {-i \phi_{{2}}^{j}(x) } \end{array}\right)=0.
\end{equation}
Due to the space dependence of the potential $V(x)$, and to obtain Schr\"odinger-like equations for each component, we make the following transformation on our spinor components: $\chi_{{1}}^{j}=\frac{1}{2}\left(\phi_{{1}}^{j}+\phi_{{2}}^{j}\right)$ and $\chi_{{2}}^{j}=\frac{1}{2i}\left(\phi_{{1}}^{j}-\phi_{{2}}^{j}\right)$, which obey the coupled stationary equations
\begin{eqnarray}\label{co}
\frac {d\chi_{{1,2}}^{j}(x)}{dx}\pm i \left( V(x)-\varepsilon \right)\chi_{{1,2}}^{j}(x)\mp ik_{{y}}\chi_{{{2,1}}}^{j}(x) =0.
\end{eqnarray}
Each spinor component $\chi_{{1,2}}^{j}$ can be shown to satisfy the following uncoupled second order differential equation
\begin{equation}\label{df}
{\frac {d^{2}}{d{x}^{2}}}\chi_{{{1,2}}}^{j} \left( x \right) + \left( \pm i{\frac
{d}{dx}}V \left( x \right) + \left[ V \left( x \right) -\varepsilon \right] ^{2}
-{k_y}^{2} \right) \chi_{{{1,2}}}^{j} \left( x \right)=0.
\end{equation}
In this section we will apply the Poincar\'e map approach to solve the above effective 1D Dirac equation.
In this approach we start by subdividing the potential interval $L$ into $N +1$ regions (Figure 1).
In every $n$-th region we approximate the linear potential by a constant value $V_n = V(x_n)$ where $x_n = nh$
and $h=\frac{L}{N+1}$. Hence, the Dirac equation in each region ($n$), defined by $h(n - 1) <x< hn$, can be easily
solved for the piece-wise constant potential.
For simplicity, we choose the incident wave propagating from right to left and apply the continuity of the
spinor wavefunctions at the boundary separating adjacent regions. The general solutions of equation (\ref{df})
in the $n$-th region where $V(x)=V_n$ are given by
\begin{equation} \label{GrindEQ5}
\psi_{{n}}= A_{n}\left(
\begin{array}{c}
1 \\
-z_{n}^{*} \\
\end{array}
\right) e^{-ik_{n}x}+B_n\left(
\begin{array}{c}
1 \\
z_{n} \\
\end{array}
\right)e^{ik_{n}x}
\end{equation}
with $k_{n}=\sqrt{(\varepsilon-V_{n})^{2}-k_{y}^{2}}$, the complex number $z_{n}$ is defined by $z_{n}=\frac{1}{z_{n}^{*}}={\mbox{sgn}} \left(\varepsilon-V_n\right)\frac{k_{n}+ik_{y}}{\sqrt{k_{n}^{2}+k_{y}^{2}}}$.
It is convenient to rewrite (\ref{GrindEQ5}) as $\psi_{n}(x)=M_{n}(x)\left(A_{n}, B_{n}\right)^{T}$, where
\begin{equation*}
M_{n}(x)=\left(
\begin{array}{cc}
e^{-ik_{n}x} & e^{ik_{n}x} \\
-z_{n}^{*}e^{-ik_{n}x} & z_{n}e^{ik_{n}x} \\
\end{array}
\right).
\end{equation*}
In order to obtain the relationship between $\psi_{n+1}$ and $\psi_{n}$ we apply continuity of $\psi$ at the boundary $x =x_n$ (Figure 2). This leads to
\begin{equation} \label{GrindEQ11}
M_{n}(x_n)\left(
\begin{array}{c}
A_{n} \\
B_{n} \\
\end{array}
\right)=M_{n+1}(x_n)\left(
\begin{array}{c}
A_{n+1} \\
B_{n+1} \\
\end{array}
\right).
\end{equation}
Also $M_{n+1}(x_{n+1})$ and $M_{n+1}(x_n)$ are related by
\begin{equation}\label{GrindEQ__12_}
M_{n+1}(x_{n+1})=M_{n+1}(x_n) S_{n+1}, \qquad S_{n+1}=\left(
\begin{array}{cc}
e^{-ihk_{n+1}} & 0 \\
0 & e^{ihk_{n+1}} \\
\end{array}
\right)
\end{equation}
\begin{center}
\includegraphics [width=10.10cm,keepaspectratio]
{figure02}\\
{\sf{Figure 2: Solutions of the 1D Dirac equation in two consecutive regions, continuity of spinors is applied at $x=x_n$.}}
\end{center}
Using the above results we can write the desired Poincar\'{e} map as
\begin{equation}\label{GrindEQ16}
\psi_{n+1}(x_{n+1})=\tau_{n}\psi_{n}(x_n)
\end{equation}
where we have defined a simplified notation by $\psi_{n}= \psi_{n}(x_n)$ and $\tau_{n}=M_{n+1}(x_{n+1})S_{n+1}M_{n+1}^{-1}(x_{n+1})$, or more explicitly
\begin{equation}
\tau _{n}= \frac{1} {z_{n+1}+z_{n+1}^{*}}
\left(
\begin{array}{cc}
z_{n+1}e^{-ihk_{n+1}} + z_{n+1}^{*}\, e^{ihk_{n+1}} & e^{ihk_{n+1}} - e^{-ihk_{n+1}} \\
e^{ihk_{n+1}} - e^{-ihk_{n+1}} & z_{n+1}^{*}e^{-ihk_{n+1}} + z_{n+1}\, e^{ihk_{n+1}}
\end{array}
\right).
\end{equation}
To make use of the above Poincar\'{e} map in solving our scattering problem we need to define our incident,
reflected and transmitted waves. For $x \leq 0$ where $V = 0$ (region {\sf I}), we can use for our transmitted spinor
evaluated at $n = 0$, the
suitably normalized form
\begin{equation}\label{GrindEQ6}
\psi_{0}=\left(
\begin{array}{c}
1 \\
-{z_{0}^*} \\
\end{array}
\right).
\end{equation}
This is just the value of the transmitted wave at the zeroth site, $x = 0$ $(n = 0)$,
which is given by
\begin{equation}\label{GrindEQ7}
\psi_{L}=A_{0} \left(
\begin{array}{c}
1\\
-{z_{0}^*} \\
\end{array}
\right)e^{-ik_{0}x}.
\end{equation}
On the other side, for $x \geq h(N+1)$ where $ V = 0 $ (region {\sf III}), we have both incident and reflected spinor
waves. Just outside the potential region on the right hand side in the $(N+2)$-th region the spinor wave
can be written as
\begin{equation}\label{GrindEQ8}
\psi_{R}= A_{N+2}\left(
\begin{array}{c}
1 \\
-{z_{0}^*} \\
\end{array}
\right) e^{-ik_{0}x}+B_{N+2}\left(
\begin{array}{c}
1 \\
z_{0} \\
\end{array}
\right)e^{ik_{0}x}.
\end{equation}
Hence to evaluate the transmission amplitude all we need is to find $A_{N+2}$ using the above recursive
scheme. Our strategy now is to express $A_{N+2}$ in terms of $\psi_{N+1}$ and $\psi_{N+2}$, the two end point spinors.
This can be easily done using our previous relationships and leads to
\begin{equation}\label{GrindEQ21}
A_{N+2}=\frac{e^{ihk_{0}(N+2)}}{2(1-e^{2ihk_{0}})}
\left(
\begin{array}{ccc}
1 & & -z_{0} \\
\end{array}
\right) \left(\psi_{N+2}-e^{ihk_{0}}\psi_{N+1}\right).
\end{equation}
From the above notation we can easily define the transmission amplitude as follows
\begin{equation}\label{GrindEQ9}
t=\frac{1}{A_{N+2}}.
\end{equation}
Summing up, our numerical procedure requires first that we iterate the Poincar\'{e} map (\ref{GrindEQ16}) to obtain the end point spinors, $\psi_{N+1}$ and $\psi_{N+2}$, in terms of the normalized transmitted spinor. These spinors will then be injected in (\ref{GrindEQ21}) and (\ref{GrindEQ9}) to determine the transmission amplitude. The transmission coefficient is given by $T=\left|t\right|^{2}$. The numerical implementation of this scheme in the case of linear vector potential will be done in section 4.
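As a concrete illustration, the whole scheme fits in a few lines of R complex arithmetic. The sketch below is ours rather than a published implementation: it uses the right-endpoint value $V(x_n)$ as the constant potential of each cell (one of several equivalent choices for large $N$) and assumes $\varepsilon \neq V_n$ on the grid so that $z_n$ is well defined.
\begin{verbatim}
# Transmission T through V(x) = V0 - (V0/L) x via the Poincare map.
transmission <- function(E, ky, V0 = 10, L = 3, N = 200) {
  h  <- L / (N + 1)
  kV <- function(V) sqrt(as.complex((E - V)^2 - ky^2))
  zV <- function(V) { k <- kV(V)
        sign(E - V) * (k + 1i * ky) / sqrt(k^2 + ky^2) }
  tauV <- function(V) {                  # Poincare map matrix tau_n
    k <- kV(V); z <- zV(V)
    ep <- exp(1i * h * k); em <- exp(-1i * h * k)
    matrix(c(z * em + Conj(z) * ep, ep - em,
             ep - em, Conj(z) * em + z * ep), 2, 2) / (z + Conj(z))
  }
  k0 <- kV(0); z0 <- zV(0)
  psi <- c(1, -Conj(z0))                 # transmitted spinor at x = 0
  for (n in 1:(N + 1)) psi <- tauV(V0 - V0 * n * h / L) %*% psi
  psi2 <- tauV(0) %*% psi                # one more step, free region
  A <- exp(1i * h * k0 * (N + 2)) / (2 * (1 - exp(2i * h * k0))) *
       sum(c(1, -z0) * (psi2 - exp(1i * h * k0) * psi))
  Mod(1 / A)^2                           # transmission coefficient
}
# e.g. transmission(E = 5, ky = 1), with the parameters of section 4
\end{verbatim}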
\section{Analytical method}
Let us now solve analytically the effective 1D Dirac equation, or equivalently equation (\ref{df}), in
the presence of a quantum barrier (region {\sf II}). Our objective is to find the transmission coefficient
for a Dirac fermion scattered by a linear potential and then compare our results with those found in
the previous section using the Poincar\'e map method. The solutions of equation (\ref{df}) in regions {\sf I}
and {\sf III} are given by
\begin{equation}
\phi^{{\sf I}}(x)= \left(
\begin{array}{c}
1 \\
z \\
\end{array}
\right)e^{ik_{x}x}+r\left(
\begin{array}{c}
1 \\
-z^{\ast} \\
\end{array}
\right) e^{-ik_{x}x}, \qquad \phi^{\sf III}(x)=t\left(
\begin{array}{c}
1 \\
z \\
\end{array}
\right)e^{ik_{x}x}
\end{equation}
where $r$ and $t$ are the reflection and transmission amplitudes, respectively.
The wave vector $k_{x}=\sqrt{\varepsilon^{2}-k_y^{2}}$ and the complex number $z$ is defined by $z={\mbox{sgn}} (\varepsilon) (k_{x}+ik_{y})/\sqrt{k_{x}^{2}+k_{y}^{2}}$. In region {\sf II} the general solution can be expressed in terms of the parabolic cylinder function~\cite{Ab,vi} as
\begin{equation}\label{aaaa}
\chi_{{1}}^{{\sf II}}(x)=\alpha D_{\nu-1}\left(\sqrt{\frac{2}{F}}e^{i\pi/4}(F x+E)\right)+\beta D_{-\nu}\left(-\sqrt{\frac{2}{F}}e^{-i\pi/4}(F x+E)\right)
\end{equation}
where
$\nu=\frac{ik_{y}^{2}}{2F}$, $E=\varepsilon-V_0$,
$\alpha$ and $\beta$ are constants. Substituting (\ref{aaaa}) in (\ref{co}) gives the other component
\begin{eqnarray}\label{cc}
\chi_{{2}}^{{\sf II}}(x)&=&- \frac{\beta}{k_y}\left[2(E+Fx) D_{-\nu}\left(-\sqrt{\frac{2}{F}}e^{-i\pi/4}(F x+E)\right)+\sqrt{2 F}e^{i\pi/4}D_{-\nu+1}\left(-\sqrt{\frac{2}{F}}e^{-i\pi/4}(F x+E)\right)\right]\nonumber\\
&&- \frac{\alpha}{k_y}\sqrt{2 F}e^{-i\pi/4} D_{\nu}\left(\sqrt{\frac{2}{F}}e^{i\pi/4}(F x+E)\right).
\end{eqnarray}
The components of the spinor solution of the Dirac equation (\ref{ak}) in region {\sf II} can be obtained from (\ref{aaaa}) and (\ref{cc}) where $\phi_{{1}}^{{\sf II}}(x)=\chi_{{1}}^{{\sf II}}+i\chi_{{2}}^{{\sf II}}$ and $\phi_{{2}}^{{\sf II}}(x)=\chi_{{1}}^{{\sf II}}-i\chi_{{2}}^{{\sf II}}$. This results in
\begin{equation}
\psi^{{\sf II}}(x)= \alpha\left(
\begin{array}{c}
  a^{+}(x) \\
  a^{-}(x) \\
\end{array}
\right)+\beta\left(
\begin{array}{c}
  b^{+}(x) \\
  b^{-}(x) \\
\end{array}
\right)
\end{equation}
where the functions $a^{\pm}(x)$ and $b^{\pm}(x)$ are given by
\begin{eqnarray}\label{}
a^{\pm}(x)&=&D_{\nu-1}\left(\sqrt{\frac{2}{F}}e^{i\pi/4}(F x+E)\right)\mp\frac{\sqrt{2 F}}{k_y}e^{i\pi/4} D_{\nu}\left(\sqrt{\frac{2}{F}}e^{i\pi/4}(F x+E)\right)\nonumber\\
b^{\pm}(x)&=&\pm\frac{1}{k_y}\sqrt{2 F}e^{-i\pi/4}D_{-\nu+1}\left(-\sqrt{\frac{2}{F}}e^{-i\pi/4}(F x+E)\right)\nonumber\\
&&\pm\frac{1}{k_y}(-2iE\pm k_y-2iFx)D_{-\nu}\left(-\sqrt{\frac{2}{F}}e^{-i\pi/4}(F x+E)\right).
\end{eqnarray}
The coefficients $r$, $\alpha$, $\beta$ and $t$ are determined from the continuity of the spinor wavefunctions at the boundaries $x = 0, L$, that is $\psi^{\sf I}(x=0)=\psi^{\sf II}(x=0)$ and $\psi^{\sf II}(x=L)=\psi^{\sf III}(x=L)$. The transmission coefficient through the linear potential is obtained from $T=\left|t\right|^{2}$ where the corresponding amplitude $t$ is obtained from the aforementioned boundary conditions.
It is given by
\begin{eqnarray}\label{tt}
t=\frac{e^{-ik_{x}L}\left[1+z^{2}\right]\left[b^{+}(L)a^{-}(L)-b^{-}(L)a^{+}(L)\right]}
{\left[b^{+}(0)+z b^{-}(0)\right]\left[a^{-}(L)-z a^{+}(L)\right]-
\left[a^{+}(0)+z a^{-}(0)\right]\left[b^{-}(L)-z b^{+}(L)\right]}.
\end{eqnarray}
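For illustration, equation (\ref{tt}) can be evaluated directly with the arbitrary-precision parabolic cylinder function $D_{\nu}(z)$ provided by \texttt{mpmath} (\texttt{mp.pcfd}). The following Python sketch is our own transcription of the formulas above, not part of the derivation:
\begin{verbatim}
import mpmath as mp

def t_analytic(eps, V0, L, ky):
    # Transmission amplitude of Eq. (tt), using the definitions of
    # a^{pm}, b^{pm}, nu, E, z and k_x given in this section.
    F, E = V0 / L, eps - V0
    nu = 1j * ky**2 / (2 * F)
    kx = mp.sqrt(eps**2 - ky**2)
    z = mp.sign(eps) * (kx + 1j * ky) / mp.sqrt(kx**2 + ky**2)
    wp = lambda x: mp.sqrt(2 / F) * mp.expjpi(0.25) * (F * x + E)
    wm = lambda x: -mp.sqrt(2 / F) * mp.expjpi(-0.25) * (F * x + E)
    def a(x, s):  # a^{+} for s = +1, a^{-} for s = -1
        return (mp.pcfd(nu - 1, wp(x))
                - s * mp.sqrt(2 * F) / ky * mp.expjpi(0.25)
                * mp.pcfd(nu, wp(x)))
    def b(x, s):  # b^{+} for s = +1, b^{-} for s = -1
        return s / ky * (mp.sqrt(2 * F) * mp.expjpi(-0.25)
                         * mp.pcfd(-nu + 1, wm(x))
                         + (-2j * E + s * ky - 2j * F * x)
                         * mp.pcfd(-nu, wm(x)))
    num = (mp.exp(-1j * kx * L) * (1 + z**2)
           * (b(L, 1) * a(L, -1) - b(L, -1) * a(L, 1)))
    den = ((b(0, 1) + z * b(0, -1)) * (a(L, -1) - z * a(L, 1))
           - (a(0, 1) + z * a(0, -1)) * (b(L, -1) - z * b(L, 1)))
    return num / den
\end{verbatim}
The transmission coefficient $T=|t|^{2}$ computed this way can then be compared directly against the Poincar\'e map result.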
\section{Results and discussion}
In this section we apply the Poincar\'e map and analytical approaches developed above to a nanoribbon system subject to an electric potential of strength $V_0 = 10, 20$ over a field region of length $L = 3, 10$, so that the resulting static electric field strength is $F = V_0/L = 10/3, 2$, respectively. In Figure 3 we show the transmission as a function of energy for a transverse momentum $k_y = 1$. The solid lines correspond to the exact transmission derived in section 3, given by equation (\ref{tt}), while the dashed lines are generated by our Poincar\'e map for $N = 200$ iterations; the agreement is excellent.\\
\begin{center}
\includegraphics [width=6.6cm,keepaspectratio]
{fig4}~~~~~~~~~~~~
\includegraphics [width=6.6cm,keepaspectratio]
{fig3}
\end{center}
\begin{center}
{\sf{Figure 3: Transmission coefficients $T$ versus energy $\varepsilon$ for
$L = 3,10$, $V_0 = 10,20$ and $k_y = 1$.}}
\end{center}
\noindent {{Figure 3}} shows the concordance between the results generated by the analytical and Poincar\'e map methods. We note that below the critical energy $\varepsilon = k_y$ the transmission is almost zero; it then starts to oscillate with a frequency that increases with $L$, the size of the region subject to the electric field.
The transmission increases with $L$ and reaches unity for energies above $V_0 + 2 k_y$.
\begin{center}
\includegraphics [width=6.6cm,keepaspectratio]
{fig6}~~~~~~~~~~~~
\includegraphics [width=6.6cm,keepaspectratio]
{fig5}
\end{center}
\begin{center}
{\sf{Figure 4: Transmission coefficients $T$ versus $V_0$ for
$L = 3,10$, $\varepsilon = 10,20$ and $k_y = 1$.}}
\end{center}
\noindent {{Figure 4}} shows the transmission as a function of the strength of the applied voltage. Total transmission is observed for values of $V_0$ smaller than the energy of the incident fermion; the transmission then decreases sharply for $V_0 > \varepsilon - 2 k_y$ until it reaches a relative minimum, after which it increases in an oscillatory manner.
We notice in both Figures 3 and 4 that the amplitude and period of the oscillations increase as we decrease the size of the electric field region, $L$.
\begin{center}
\includegraphics [width=6.6cm,keepaspectratio]
{fig8}
\end{center}
\begin{center}
{\sf{Figure 5: Transmission coefficients $T$ versus energy $\varepsilon$ for
$L = 3$, $V_0 = 20$ and different values of $k_{y}$.}}
\end{center}
{Figure 5 shows that the effect of the transverse momentum $k_y$ on the transmission is antagonistic to that of the length $L$. It should be pointed out, however, that the number of oscillations increases as $k_y$ decreases and that the curves for different values of $k_y$ do not intersect.}
We now point out that our effective 1D massless Dirac equation is equivalent to a massive one with an effective mass equal to the transverse quantized wave vector $k_{y}$. To see this, we consider a unitary transformation that enables us to map the effective 1D equation (\ref{1d}) into a 1D massive Dirac equation. Such a unitary transformation affects neither the energy spectrum nor the physics of the problem. We choose a rotation by $\pi/4$ about the $y$-axis, $U=e^{i{\textstyle\frac{\pi }{4}} \sigma _{2} }$. Thus, the transformed Hamiltonian and wavefunction
read
\begin{eqnarray} \label{masse}
\left(\begin{array}{cc} {V(x)-\varepsilon+k_y } & {\frac{d}{dx}} \\ {-\frac{d}{dx}} & {V(x)-\varepsilon-k_y } \end{array}\right)\left(\begin{array}{c} {\tilde{\psi}_{{1}}^{j}(x)} \\ {\tilde{\psi}_{{2}}^{j}(x) } \end{array}\right)=0, \qquad \tilde{\psi}^{j}_{1,2}(x)=U\psi^{j}_{1,2}(x),
\end{eqnarray}
which is identical to a 1D massive Dirac equation with an effective mass $m = k_y$. To check the validity of this assertion numerically, we show in Figure 6 the transmission as a function of energy as generated by the exact analytical result (\ref{tt}), the Poincar\'e map (\ref{GrindEQ9}), and the 1D massive Dirac equation with an effective mass $m^{\ast}=k_y$ in (\ref{masse}). We see from this figure that the three curves coincide to the point that we cannot distinguish between them. This led us to include an inset in Figure 6 showing each curve shifted for ease of comparison.
\begin{center}
\includegraphics [width=6.6cm,keepaspectratio]
{fig7}
\end{center}
\begin{center}
{\sf{Figure 6: Transmission coefficients $T(\varepsilon)$ for
$L = 1$, $V_0 = 40$ and $k_y = m^{\ast} = \frac{9\pi}{10}$.}}
\end{center}
This last figure confirms, numerically, the equivalence between a one-dimensional system of Dirac fermions with mass and a two-dimensional system of massless Dirac fermions constrained along the $y$-direction by an infinite mass boundary condition, hence forming a graphene nanoribbon. The transverse component of the wave vector, $k_y$, played the role of an effective mass \cite{Hai} in the resulting effective 1D Dirac equation.
\section*{{Acknowledgments}}
\noindent The generous support provided by the Saudi Center for Theoretical Physics (SCTP) is highly appreciated by all Authors. AJ and (EBC, AE) acknowledge partial support by King Faisal University and KACST, respectively. We also acknowledge the support of KFUPM under project RG1108-1-2.
\section{Introduction}
\label{sec:intro}
\vspace{-5pt}
Environmental sounds are essential for expressive media content, e.g., movies, video games, and animation, to make them immersive and realistic.
One way to prepare a desired sound is to obtain it from an environmental sound database.
However, the number of databases currently available is very limited \cite{Imoto_AST_2018}, so the desired sound is not always in the database.
On the other hand, there is a large amount of unlabeled environmental sounds on the Internet, but it is not easy to expand the database because it requires rich domain knowledge and taxonomy.
Even if the database became large, its usability might decrease because it would also require users to have domain knowledge.
Intuitive methods for sound retrieval have been proposed.
For example, vocal imitation \cite{zhang_CHIIR_2020, Zhang_ICASSP_2016,Kim_ICASSP_2019, Zhang_TASLP_2019} and onomatopoeic words \cite{Ikawa_DCASE_2018} were used as search queries in some sound retrieval systems.
It has also been reported that user satisfaction is high when using an intuitive sound-retrieval technique \cite{zhang_CHIIR_2020,Zhang_ICASSP_2016}.
Therefore, it would also be useful for content creators if they can extract a desired sound intuitively.
\begin{figure}[t!]
\centering
\includegraphics[scale=0.9]{Overview_soundExtraction_v02.eps}
\vspace{-2pt}
\caption{Overview of environmental sound extraction using an onomatopoeic word. The word ``kankan'' is often used in Japanese to represent hitting sounds.}
\label{fig:overview_method}
\end{figure}
\begin{figure*}[t!]
\centering
\resizebox{\linewidth}{!}{%
\includegraphics{OnomatopoeiaSeparation.eps}%
}
\caption{Detailed architecture of proposed environmental-sound-extraction method using an onomatopoeic word.}
\label{fig:propsed_method}
\end{figure*}
We propose an environmental-sound-extraction method using an onomatopoeic word, which is a character sequence that phonetically imitates a sound.
It has been shown that onomatopoeic words are effective in expressing the characteristics of sound \cite{Lemaitre_JASA_2018,Sundaram_AAAI_2006} such as sound duration, pitch, and timbre.
Onomatopoeic words are also advantageous in terms of low labeling cost since they do not require domain knowledge and taxonomy for labeling.
In our proposed method, therefore, an onomatopoeic word is used to specify the sound to extract from a mixture sound, as shown in Fig.~\ref{fig:overview_method}.
We used a U-Net architecture \cite{Ronneberger_MICCAI_2015}, which has been used in various source-separation and sound-extraction studies \cite{Mesequer_ISMIR_2019, Sudo_IROS_2019, Jansson_ISMIR_2017, Kong_ICASSP_2020}, to estimate the time-frequency mask of the target sound.
To the best of our knowledge, there has been no study on extracting only specific sound by using an onomatopoeic word.
The rest of the paper is organized as follows.
In Sec.~\ref{sec:conventional}, we describe related work on environmental sound extraction.
In Sec.~\ref{sec:propose}, we present our proposed method for extracting environmental sounds using an onomatopoeic word from an input mixture.
In Sec.~\ref{sec:experiments}, we discuss experiments we conducted on the effectiveness of our proposed method compared with baseline methods that use class labels to specify the target sound.
Finally, we summarize and conclude this paper in Sec.~\ref{sec:conclusion}.
\section{RELATED WORK}
\label{sec:conventional}
\vspace{-5pt}
Methods of environmental sound extraction and separation using deep learning have been developed \cite{Sudo_IROS_2019, ochiai_INTERSPEECH_2020, Lee_ISMIR_2019, Kavalerov_WASPPA_2019}.
Sudo et al. developed an environmental-sound-separation method based on U-Net architecture \cite{Sudo_IROS_2019}.
A similar method using U-Net was also proposed for source separation \cite{Mesequer_ISMIR_2019, Slizovskaia_ICASSP_2109}.
Ochiai et al. used Conv-TasNet \cite{Luo_TASLP_2019}, which was originally proposed for speech separation, to extract only the sounds of specific sound events \cite{ochiai_INTERSPEECH_2020}.
These methods use the sound-event class as the input to specify the desired sound.
However, environmental sounds have various characteristics that cannot be described as a sound class, such as sound duration, pitch, and timbre.
For example, if the ``whistle sound'' class is defined regardless of the pitch, it is not possible for a conventional method to extract only the sound of the desired pitch.
One possible solution is to define more fine-grained sound-event classes, e.g., ``high-pitched whistle sound'' and ``low-pitched whistle sound.''
However, this is impractical because the labeling cost will increase.
Even if we could define such fine-grained sound-event classes, there would always be intra-class variation, and we have no way to distinguish between them.
Therefore, the method of conditioning by sound-event class is not suitable to extract specific sounds.
Another possible method similar to that discussed in this paper is singing voice extraction using humming \cite{Smaradis_WASPAA_2019}.
In this case, the target sound is always a human voice, so humming is sufficient to represent it.
However, in the case of environmental sound extraction, humming is insufficient to determine the target sound because it cannot express timbre, and some kinds of sounds cannot be represented by humming, e.g., plosive sounds.
\section{PROPOSED METHOD}
\label{sec:propose}
\vspace{-5pt}
\subsection{Overview of environmental sound extraction using an onomatopoeic word}
\label{subsec:overview}
\vspace{-5pt}
Our purpose was to reconstruct a target sound $\bmit{y}$ from a mixture sound $\bmit{x}$, where the target is specified by an onomatopoeic word $w$.
We estimate $\hat{\bmit{y}}$ from $\bmit{x}$ and $w$ using a nonlinear transformation $\mathsf{Extractor}(\cdot,\cdot)$ as follows:
\begin{align}
\hat{{\bmit y}} = \mathsf{Extractor}({\bmit x}, w).
\label{eq:train}
\end{align}
We explain this $\mathsf{Extractor}(\cdot,\cdot)$ in Sec.~\ref{subsec:training_method}.
\subsection{Proposed sound extraction method}
\label{subsec:training_method}
\vspace{-5pt}
Figure \ref{fig:propsed_method} shows the detailed architecture of the proposed method.
The method involves time-frequency mask estimation using U-Net and feature vector extraction from an onomatopoeic word.
We condition the output of the U-Net encoder with an onomatopoeic word to specify the target environmental sound to extract.
In previous studies, the target sound to be extracted was conditioned by sound-event class \cite{Mesequer_ISMIR_2019}, or further conditioned by the estimated interval of the target sound \cite{Sudo_IROS_2019}.
These studies have shown that conditioning on intermediate features after passing through convolutional neural network layers can be effective.
Thus, we also use conditioning on the intermediate features of the U-Net encoder.
The proposed method takes the following as inputs, as shown in Fig.~\ref{fig:propsed_method}.
One is a $T$-length $F$-dimensional mixture spectrogram $\bm{X} \in \mathbb{R}^{F \times T}$ extracted from the input mixture sound ${\bmit x}$.
The other is a one-hot encoded phoneme sequence ${\bmit L}=(\bmit{l}_1,\dots,\bmit{l}_{N})$ extracted from $w$.
The extracted acoustic feature $\bm{X}$ is fed to the U-Net encoder, which consists of $K$-stacked convolutional layers.
In each layer of the U-Net encoder, the time-frequency dimension decreases by half and the number of channels doubles.
As a result, $C\left(=2^K\right)$ feature maps are calculated as
\begin{align}
\left[\bm{V}_1,\dots,\bm{V}_C\right]=\mathsf{UNetEncoder}\left(\bm{X}\right)\in\mathbb{R}^{F'\times T'\times C},
\end{align}
where $\bmit{V}_c\in\mathbb{R}^{F'\times T'} (c=1,\dots,C)$ denotes the feature map of the $c$-th channel.
At the same time, the phoneme sequence ${\bmit L}$ is fed to the bidirectional long short-term memory (BiLSTM) encoder.
As a result, a $D$-dimensional word-level embedding ${\bmit o}=\trans{\left[o_1,\dots,o_D\right]}\in\mathbb{R}^{D}$ that captures the entire onomatopoeic word is extracted as follows:
\begin{align}
\bmit{o}=\mathsf{BiLSTMEncoder}\left(\bmit{L}\right)\in\mathbb{R}^{D}.\label{eq:word_embedding}
\end{align}
The extracted embedding $\bm{o}$ is stretched in the time and frequency directions to form $D$ feature maps $\left[\bmit{O}_1,\dots,\bmit{O}_D\right]$, where $\bmit{O}_d\in\mathbb{R}^{F'\times T'}$ for $d\in\left\{1,\dots,D\right\}$ is the matrix whose elements are all $o_d$.
Finally, a time-frequency soft mask $M\in\left(0,1\right)^{F\times T}$ is estimated using the U-Net decoder, which consists of $K$-stacked deconvolutional layers.
The feature maps from the U-Net encoder and BiLSTM encoder are concatenated to be $C+D$ channels and fed to the U-Net decoder followed by the element-wise sigmoid function $\sigma\left(\cdot\right)$ as
\begin{align}
\bmit{Z}&=\mathsf{UNetDecoder}\left(\left[\bmit{V}_1,\dots,\bmit{V}_C,\bmit{O}_1,\dots,\bmit{O}_D\right]\right)\in\mathbb{R}^{F\times T},\\
\bm{M}&=\sigma\left(\bmit{Z}\right)\in\left(0,1\right)^{F\times T}.\label{eq:softmask}
\end{align}
The target signal in the time-frequency domain $\hat{\bm{Y}}$ is then recovered by masking the input $\bm{X}$ as
\begin{align}
\hat{\bm{Y}}=\bm{M}\odot\bm{X}\in\mathbb{R}^{F\times T},
\end{align}
where $\odot$ is the Hadamard product.
During training, the loss function is defined as the root mean square error between $\hat{\bm{Y}}$ and the target features $\bm{Y}\in\mathbb{R}^{F\times T}$, which are extracted from the target sound $\bmit{y}$:
\begin{align}
L\left(\bm{Y},\bm{\hat{Y}}\right)&=\sqrt{\frac{1}{TF} \norm{\bm{Y} - \bm{\hat{Y}}}_F^2},
\end{align}
where $\norm{\cdot}_F$ is the Frobenius norm.
In the inference phase, we reconstruct an environmental sound wave from the masked acoustic features $\bm{\hat{Y}}$ using the Griffin–Lim algorithm \cite{Griffin_TASSP_1984}.
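As a minimal sketch of the conditioning and masking steps above, the following PyTorch-style code stretches the word-level embedding over the bottleneck feature maps, concatenates it with the encoder output, and applies the estimated soft mask. The module names are placeholders for the $K$-stacked encoder and decoder, not our actual implementation:
\begin{verbatim}
import torch

def extract(X, o, unet_encoder, unet_decoder):
    # X: input amplitude spectrogram, shape (B, 1, F, T)
    # o: BiLSTM word-level embedding of the onomatopoeic word, shape (B, D)
    V = unet_encoder(X)                          # (B, C, F', T') feature maps
    B, C, Fp, Tp = V.shape
    # stretch o over time and frequency to form the D maps O_1, ..., O_D
    O = o[:, :, None, None].expand(B, o.shape[1], Fp, Tp)
    Z = unet_decoder(torch.cat([V, O], dim=1))   # (B, 1, F, T)
    M = torch.sigmoid(Z)                         # time-frequency soft mask
    return M * X                                 # masked spectrogram
\end{verbatim}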
\section{EXPERIMENTS}
\label{sec:experiments}
\vspace{-5pt}
\subsection{Dataset construction}
\label{sec:dataset}
\vspace{-5pt}
To construct the datasets for this task, we used environmental sounds extracted from RealWorld Computing Partnership-Sound Scene Database (RWCP-SSD) \cite{Nakamura_AST_1999}.
Some sound events in RWCP-SSD are labeled in the ``event entry + ID'' format, e.g., \textit{whistle1} and \textit{whistle2}.
We created hierarchical sound-event classes by grouping labels with the same event entry, e.g., \textit{whistle}.
We first selected 44 sound events from RWCP-SSD, which we call subclasses, and grouped them into 16 superclasses.
The superclasses and subclasses used in this study are listed in Table \ref{table:class_list}.
The sounds in each subclass were divided in a 7:2:1 ratio for training, validation, and evaluation, respectively.
The onomatopoeic words corresponding to each environmental sound were extracted from RWCP-SSD-Onomatopoeia \cite{okamoto_DCASE_2020}.
Each sound was annotated with more than 15 onomatopoeic words in RWCP-SSD-Onomatopoeia; we randomly selected three onomatopoeic words per sound for our experiments.
We constructed the following three evaluation datasets using the selected sound events:
\begin{itemize}
\item {\bf Inter-superclass dataset}: 
Each mixture sound in this dataset is composed of a target sound and interference sounds whose superclasses differ from that of the target sound.
\item {\bf Intra-superclass dataset}: 
Each mixture sound in this dataset is composed of a target sound and interference sounds whose superclass is the same as that of the target sound but whose subclasses are different.
\item {\bf Intra-subclass dataset}: 
Each mixture sound in this dataset is composed of a target sound and interference sounds whose subclass is the same as that of the target sound but whose onomatopoeic words are different.
\end{itemize}
The mixture sounds in each dataset were created by varying the signal-to-noise ratio (SNR) by $\{-10, -5, 0, 5, 10\}$ \si{\dB}.
The SNR between a target signal $\bm{s}_\text{target}$ and an interference signal $\bm{s}_\text{interference}$ is defined as
\begin{align}
\mbox{SNR} = 10\log_{10}\left(\frac{\norm{\bm{s_\text{target}}}^2}{\norm{\bm{s}_\text{interference}}^2}\right).
\end{align}
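For reference, a mixture at a prescribed SNR can be generated by rescaling the interference signal; the following short Python sketch is our own illustration and not part of the original dataset-construction code:
\begin{verbatim}
import numpy as np

def mix_at_snr(target, interference, snr_db):
    # scale the interference so the mixture has the requested SNR in dB
    scale = np.sqrt(np.sum(target**2)
                    / (np.sum(interference**2) * 10**(snr_db / 10)))
    return target + scale * interference
\end{verbatim}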
The training and validation sets consisted of 7,563 and 2,160 mixture sounds, respectively.
Each evaluation set consisted of 1,107 mixture sounds for each SNR.
The audio clips for these sets were randomly selected from RWCP-SSD.
\begin{table}[t!]
\caption{Experimental conditions}
\vspace{2pt}
\label{table:experiment}
\centering
\begin{tabular}{@{}ll@{}}
\wcline{1-2}
&\\[-8pt]
Mixture-sound length & \SI{5}{\second} \\
Sampling rate & \SI{16}{kHz}\\
Waveform encoding & 16-bit linear PCM \\
\cline{1-2}
&\\[-8pt]
\# of U-Net encoder blocks& 4\\
\# of U-Net decoder blocks& 4\\
\# of BiLSTM encoders & 1 \\
\# of units in BiLSTM encoder & 512\\
Batch size & 8 \\
Optimizer & RAdam \cite{RAdam_ICLR_2020}\\
\cline{1-2}
&\\[-8pt]
Acoustic feature & Amplitude spectrogram\\
Window length for FFT & \SI{128}{\ms} (2,048 samples) \\
Window shift for FFT & \SI{32}{\ms} (512 samples) \\
\wcline{1-2}
\end{tabular}
\end{table}
\begin{table}[t]
\caption{Superclass and subclass sound events used in this study}
\vspace{2pt}
\label{table:class_list}
\centering
\resizebox{\linewidth}{!}{%
\begin{tabular}{@{}ll|ll@{}}
\wcline{1-4}
&\\[-8pt]
Superclass & Subclass&Superclass&Subclass\\
\cline{1-4}
&\\[-8pt]
\textbf{metal}&metal05, metal10,&\textbf{bells}&bells1, bells2, bells3,\\
&metal15&&bells4, bells5\\
\textbf{dice}&dice1, dice2, dice3&\textbf{coin}&coin1, coin2, coin3\\
\textbf{bottle}&bottle1, bottle2&\textbf{coins}&coins1, coins2, coins3,\\
\textbf{cup}&cup1, cup2&&coins4, coins5\\
\textbf{particl}&particl1, particl2&\textbf{whistle}&whistle1, whistle2,\\
\textbf{cap}&cap1, cap2&&whistle3\\
\textbf{clap}&clap1, clap2&\textbf{phone}&phone1, phone2,\\
\textbf{claps}&claps1, claps2&&phone3, phone4\\
\textbf{clip}&clip1, clip2&\textbf{toy}&toy1, toy2\\
\textbf{bell} & bell1, bell2\\
\wcline{1-4}
\end{tabular}%
}
\end{table}
\begin{table*}[t!]
\caption{SDRi [dB] for extracted signals}
\vspace{2pt}
\label{table:result}
\centering
\resizebox{\linewidth}{!}{%
\begin{tabular}{@{}llccccc@{}}
\wcline{1-7}
& \\[-8pt]
& & \multicolumn{5}{c}{SNR} \\
\cline{3-7}\\[-8pt]
Dataset& Method&\SI{-10}{\dB} & \SI{-5}{\dB} & \SI{0}{\dB} & \SI{5}{\dB} & \SI{10}{\dB}\\
\cline{1-7}
& \\[-8pt]
\multirow{3}{*}{Inter-superclass dataset} & Superclass-conditioned method&$5.11 \pm 3.02$ & $4.72 \pm 2.75$ & $4.06 \pm 2.55$ & $2.70 \pm 2.13$ & $1.33 \pm 2.12$ \\
& Subclass-conditioned method&$5.06 \pm 2.97$ & $4.75 \pm 2.85$ & $4.04 \pm 2.52$ & $2.81 \pm 2.31$ & $1.25 \pm 2.09$\\
& {\bf Onomatopoeia-conditioned method} &$4.63 \pm 2.58$ & $4.57 \pm 2.69$ & $4.02 \pm 2.53$ & $2.77 \pm 2.22$ & $1.41 \pm 2.12$ \\
\cline{1-7}
& \\[-8pt]
\multirow{3}{*}{Intra-superclass dataset} & Superclass-conditioned method&$2.05 \pm 2.37$ & $1.97 \pm 2.40$ & $1.86 \pm 2.38$ & $1.50 \pm 2.19$ & $0.82 \pm 1.89$ \\
& Subclass-conditioned method&$5.03 \pm 2.56$ & $4.77 \pm 2.59$ & $4.19 \pm 2.45$ & $2.74 \pm 2.12$ & $1.26 \pm 2.06$\\
& {\bf Onomatopoeia-conditioned method} &$5.61 \pm 2.78$ & $5.36 \pm 2.75$ & $4.73 \pm 2.52$ & $3.10 \pm 2.27$ & $1.42 \pm 2.06$ \\
\cline{1-7}
& \\[-8pt]
\multirow{3}{*}{Intra-subclass dataset} & Superclass-conditioned method&$2.03 \pm 2.40$ & $2.06 \pm 2.54$ & $1.87 \pm 2.37$ & $1.49 \pm 2.09$ & $0.79 \pm 1.98$ \\
& Subclass-conditioned method&$3.14 \pm 2.78$ & $3.09 \pm 2.77$ & $2.84 \pm 2.63$ & $2.21 \pm 2.29$ & $1.01 \pm 2.12$\\
& {\bf Onomatopoeia-conditioned method} &$5.83 \pm 2.43$ & $5.68 \pm 2.53$ & $5.11 \pm 2.58$ & $3.34 \pm 2.24$ & $1.64 \pm 2.02$ \\
\wcline{1-7}
\end{tabular}%
}
\end{table*}
\begin{figure*}[t!]
\vspace{5pt}
\centering
\includegraphics[scale=0.75]{spectrogram_result_20211003_vol04.eps}
\vspace{-11pt}
\caption{Examples of environmental sound extraction using intra-subclass dataset. Mixture spectrogram (first row), results of subclass-conditioned sound extraction (second row), results of onomatopoeia-conditioned sound extraction (proposed) (third row), and ground truth spectrogram (fourth row).}
\label{fig:spectro_result}
\end{figure*}
\subsection{Training and evaluation setup}
\label{sec:setup}
\vspace{-5pt}
Table~\ref{table:experiment} shows the experimental conditions and parameters used for the proposed method (onomatopoeia-conditioned method).
As baselines, we also evaluated the methods with which the target sound is conditioned by the superclass or subclass sound-event class.
For these baselines, we used the one-hot representation of the class label for $\bmit{o}$ in (\ref{eq:word_embedding}) instead of the word embedding.
To evaluate each method, we used signal-to-distortion ratio improvement (SDRi) \cite{SDR_TASLP_2006} as an evaluation metric.
SDRi is defined as the difference between the SDR of the extracted sound and that of the input mixture, both measured with respect to the target sound, as follows:
\begin{align}
\mbox{SDRi} = 10\log_{10}\left(\frac{\norm{\bm{y}}^2}{\norm{\bm{y} - \hat{\bm{y}}}^2}\right) - 10\log_{10}\left(\frac{\norm{\bm{y}}^2}{\norm{\bm{y} - \bm{x}}^2}\right).
\end{align}
We conducted evaluations regarding SDRi on each of the three evaluation datasets introduced in Sec.~\ref{sec:dataset}.
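This metric translates directly into code; a short Python sketch (array names are ours) is
\begin{verbatim}
import numpy as np

def sdri(y, y_hat, x):
    # SDR improvement in dB: SDR of the extracted signal minus that of the
    # unprocessed mixture, both measured against the target y.
    sdr_out = 10 * np.log10(np.sum(y**2) / np.sum((y - y_hat)**2))
    sdr_in = 10 * np.log10(np.sum(y**2) / np.sum((y - x)**2))
    return sdr_out - sdr_in
\end{verbatim}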
\subsection{Experimental results}
\label{sec:results}
\vspace{-5pt}
Table~\ref{table:result} shows the SDRi on each evaluation dataset.
We observed that the superclass-conditioned method performed well on the inter-superclass dataset but performed poorly on the intra-superclass and intra-subclass datasets.
We also observed that the subclass-conditioned method performed well on the inter-superclass and intra-superclass datasets but did not on the intra-subclass dataset.
These results indicate that the performance of sound extraction using an event class as a condition is highly dependent on the fineness of the class definition.
The onomatopoeia-conditioned method showed almost the same SDRi on the three datasets.
This suggests that an onomatopoeic word can behave like a more fine-grained class than the subclasses, even though it does not require any special domain knowledge for labeling.
Figure \ref{fig:spectro_result} shows the spectrograms of the extracted sounds using the subclass-conditioned and onomatopoeia-conditioned methods.
For this visualization, we used five samples in the intra-subclass dataset with \SI{0}{\dB}.
We observed that the subclass-conditioned method left a significant amount of non-target sounds, while the onomatopoeia-conditioned method extracted only the target sound.
Although the onomatopoeia-conditioned method performed better than the superclass- and subclass-conditioned methods, it still did not perform well when the target sound was highly overlapped with interference sounds (cf. ``Subclass: Phone4'' in Fig.~\ref{fig:spectro_result}).
As a result, the mixtures with high overlap ratios resulted in small SDRi and the mixtures with low overlap ratios resulted in large SDRi, and thus the standard deviations in Table \ref{table:result} are large overall.
The extraction of overlapping sounds requires further study.
The extracted sounds are available on our web page\footnote{\url{https://y-okamoto1221.github.io/Sound_Extraction_Onomatopoeia/}}.
\section{CONCLUSION}
\label{sec:conclusion}
\vspace{-5pt}
We proposed an environmental-sound-extraction method using onomatopoeic words.
The proposed method estimates a time-frequency mask of the target sound specified by an onomatopoeic word with the U-Net encoder-decoder architecture.
The experimental results indicate that our proposed method extracts specific sounds from mixture sounds by using an onomatopoeic word as a condition.
Our proposed method outperformed conventional methods that use a sound-event class as a condition.
The results indicate that onomatopoeic words can behave like more fine-grained classes than sound-event classes, even though they do not require any special domain knowledge for labeling.
In the future, it will be necessary to verify the effectiveness of the proposed method for onomatopoeic words assigned by speakers of different languages.
\vfill\pagebreak
\bibliographystyle{IEEEtran}
\section{Introduction}
Single Particle Tracking (SPT) is a method to observe and classify the motion of individual particles as \textit{trajectories}: estimates of a particle's position in a sequence of discrete measurement times. In the field of biological microscopy, SPT has been used for finding and analyzing protein motion in heterogeneous environments like the cellular membrane~\cite{Saxton1997a, Saxton2010} and cytoplasm~\cite{sanamrad2014single,calderon2013quantifying}. The SPT trajectory information can be used to resolve variations in the individual motion of molecules that would otherwise be lost in ensemble imaging techniques.
In the analysis of trajectories, the pure Brownian motion model is often the first model used to describe a trajectory in the absence of prior information about the movement. The behavior of a single particle dominated by Brownian motion can be described by a normal distribution with the variance term proportional to a single physical scale parameter $D$, the diffusion constant; which makes Brownian motion the simplest model for describing stochastic motion. More complicated behavior could potentially be modeled as Brownian motion with discrete changes in the diffusion constant that could be identified with change point analysis~\cite{monnier2012bayesian}. Therefore, the estimation of the diffusion constant of a particle from discrete, noisy, and possibly short particle trajectories is a fundamental problem in single particle tracking.
In this manuscript, we focus on the likelihood distribution of $D$. We present a maximum-likelihood-based approach for estimating the diffusion constant of a particle given an SPT trajectory that includes the individual localization error for each position in the trajectory, the time of the observation, and the camera integration time. Our approach is based on a direct solution to the likelihood equation for the observation of a particular trajectory. The need for such an estimation procedure has evolved out of the rapid progress that has been made in SPT analysis techniques over the last few years~\cite{jaqaman2008robust, serge2008dynamic, chenouard2014objective, mont2014new, yoon2008bayesian}. In particular, some emitter localization techniques can not only accurately resolve the location of an emitter to tens of nanometers, but can also reliably estimate the localization error~\cite{Smith2010b}. Because the signal to noise ratio of a particle can vary significantly from frame to frame in an image sequence (e.g. from varying background, or photobleaching of the probe), the localization error reported for each observation in a trajectory can also vary significantly from frame to frame. We have therefore developed an estimator that takes into account this information.
\subsection{Background and Related Work}
Historically, one of the primary techniques for estimating the diffusion constant from trajectories relied on a linear regression of the mean-squared-displacement (MSD) of the tracked particle coordinates as a function of time lag~\cite{Qian1991a}. In the absence of measurement errors, the observed MSD for pure Brownian motion scales linearly with time lag and intersects at the origin, allowing the direct recovery of the diffusion constant from a linear regression on the well sampled data points. It has been shown that a regression of the MSD with an offset parameter can be interpreted to account for the cumulative effects of static~\cite{martin2002apparent} and dynamic measurement errors~\cite{Savin2005}. If the MSD is built using the same data points for multiple time lags, the correlation between MSD values must also be taken into account in the regression ~\cite{Qian1991a,Michalet2010,Michalet2012}. Although it seems theoretically possible to include individual localization error into the MSD regression, to date this has not been described.
A separate line of work has focused on maximum likelihood approaches to the estimation procedure. A maximum likelihood estimator works by finding the maximum of a likelihood function $\L(D)=\P{O}{D}$ that gives the probability of observing a particular trajectory $O$, given a diffusion constant $D$. Ideally this probability should incorporate both the variable localization errors of the trajectory and effect of motion-blur. The \textit{motion-blur} effect arises from the fact that each localization is performed on data that is acquired over some non-zero exposure time. Typically camera sensors integrate the signal over the exposure time resulting in a blurring of the particle image. This blurring has important numerical effects on the likelihood function~\cite{Montiel2006}. A specific solution to the likelihood function has been accurately derived that incorporates the effects of motion-blur but with the caveat that only a single global localization error estimate is used as an input or estimated~\cite{Berglund2010, Michalet2012}. This estimator is a more robust alternative to the MSD-based estimators because it can implement all trajectory information without incurring systematic error when the data is not well conditioned for a linear regression. Subsequent work has extended this approach to deal with non-uniformly spaced or intermittent trajectories~\cite{Shuang2013}, however the particular implementation in ~\cite{Shuang2013} did not account for motion blur. Maximum likelihood estimators are not the only class of diffusion estimators that have evolved recently; continued development on displacement-based estimators has resulted in an estimator that incorporates the effects of covariances between sequentially observed displacements~\cite{vestergaard2014optimal}.
In this work we provide a generalized solution to the likelihood function, incorporating variable localization errors and variable displacement periods, which results in an improvement in estimation accuracy for short trajectories, trajectories with large variations in localization accuracy, and trajectories with intermittently spaced measurements. In Sec.~\ref{sec:theory} we formulate the diffusion likelihood function to directly incorporate the effects of motion-blur, variable localization errors, and intermittent or non-uniformly spaced observations in time. We present three independent solutions to this likelihood function. The first derivation, the \textit{recursive method} (Sec.~\ref{sec:theory-recursive}), is a sequential integration of the nuisance parameters and provides the fastest numerical implementation. The second derivation, the \textit{Laplace method} (Sec.~\ref{sec:theory-laplace}), utilizes a second order Taylor expansion to express the likelihood as a multivariate Gaussian in the basis of integration. The Laplace method additionally returns the maximum likelihood values of the true positions given a sampled $D$. The third derivation, the \textit{Markov method} (Sec.~\ref{sec:theory-markov}), calculates the characteristic function in order to express the likelihood in the basis of displacements. The \textit{Markov method} allows us to verify that the generalized form of the expression derived in \cite{Berglund2010} is the same distribution as the expressions derived in this manuscript. The \textit{Markov method} was also instrumental in determining the coefficients necessary to reduce the computational complexity of all the methods (Sec.~\ref{sec:simplerforms}). Each of these derivations leads to an independent, numerically accurate computational algorithm for estimating the likelihood of $D$ (Sec.~\ref{sec:implementation}), making full use of all the information contained in a noisy trajectory. The resulting likelihood calculation allows for robust computations in specific problems, such as a maximum likelihood estimator, maximum a posteriori estimate, or change point analysis. We compare the results of our maximum likelihood estimator (MLE) to the current state of the art estimation software~\cite{Michalet2012} with the squared log loss function and demonstrate that the additional information provided from the localization errors allows for better estimates of $D$ with trajectories parameterized by any non-constant, but otherwise arbitrary distribution of localization variances.
\section{Theory}
\label{sec:theory}
If a diffusing particle is accurately and exactly observed at a discrete sequence of $N+1$ positions $\mathbf{X}=\{\mathbf{x}_i\}^{N+1}_{i=1}$ at times $t_i$, then $\P{\mathbf{X}}{D}$, the probability of sequence $\mathbf{X}$ given diffusion constant $D$, is
\begin{equation} \label{Eq:Act}
\P{\mathbf{X}}{D} = \prod^{N}_{i=1} \P{\mathbf{x}_{i+1}}{\mathbf{x}_i}.
\end{equation}
In Eq.~\ref{Eq:Act}, $\P{\mathbf{x}_{i+1}}{\mathbf{x}_i}=\P{\mathbf{x}_{i+1}}{\mathbf{x}_i,D}$ is the probability density of each discrete jump from $\mathbf{x}_i\to\mathbf{x}_{i+1}$ over time step $\delta t_i=t_{i+1}-t_i$, given diffusion constant $D$.
When measured experimentally, however, the true positions $\mathbf{X}$ are never known exactly, but are related to $N$ observed positions $\mathbf{O}$ by some distribution $\P{\mathbf{o}_i}{\mathbf{x}_i,{\mathbf{x}_{i+1} }}$, where the dependence on both $\mathbf{x}_i$ and $\mathbf{x}_{i+1}$ arises from the effects of exposure time integration by the observation apparatus which will be dealt with in detail later. Under this experimental model, $\P{\mathbf{O},\mathbf{X}}{D}$, the combined likelihood of the observed positions $\mathbf{O}$ and the actual positions $\mathbf{X}$ is a product of the observation probability densities $\P{\mathbf{o}_i}{\mathbf{x}_i,\mathbf{x}_{i+1}}$ and the diffusion transition probability densities $\P{\mathbf{x}_{i+1}}{\mathbf{x}_i}$ for each of the $N$ observed positions and displacements,
\begin{equation}
\label{eq:poxd}
\P{\mathbf{O},\mathbf{X}}{D} =
\prod^{N}_{i=1} \P{\mathbf{o}_i}{\mathbf{x}_i,\mathbf{x}_{i+1}} \P{\mathbf{x}_{i+1}}{\mathbf{x}_i}.
\end{equation}
Since $\mathbf{X}$ is unknown for experimental data, we integrate Eq.~\ref{eq:poxd} over all possible $\mathbf{X}$ to marginalize out the dependence on $\mathbf{X}$,
and write the diffusion likelihood as an integral over the space of all $\mathbf{X}$-values,
\begin{equation*} \label{ndintegral2}
\P{\mathbf{O}}{D}
=\int \d{\mathbf{X}} \P{\mathbf{O},\mathbf{X}}{D}
=\int \d{\mathbf{X}} \prod^{N}_{i=1} \P{\mathbf{o}_i}{\mathbf{x}_i,\mathbf{x}_{i+1}} \P{\mathbf{x}_{i+1}}{\mathbf{x}_i}.
\end{equation*}
Experimental data typically involves trajectories with two or three spatial dimensions. For diffusion in an isotropic medium and particle uncertainties given as normal distributions with no covariance among the spatial dimensions, the probability distribution of a particular displacement in each dimension is separable. Thus, if $\Upsilon$ is the number of dimensions, then
\begin{equation}
\label{eq:separable}
\P{\mathbf{O}}{D} = \prod_{n=1}^\Upsilon \P{O_n}{D}.
\end{equation}
Hence, it is sufficient to only consider the estimation problem in the one-dimensional (1D) case $O=\{o_i\}_{i=1}^N$, and
\begin{equation} \label{eq:pod}
\P{O}{D} = \intRNI \d{X} \prod^{N}_{i=1} \P{o_i}{x_i,x_{i+1}} \P{x_{i+1}}{x_i}.
\end{equation}
\subsection{Accounting for the effects of exposure time integration}
Equation~\ref{eq:pod} is the fundamental description of the likelihood of diffusion constant $D$ given observations $O$. Unfortunately, solving for this expression explicitly is difficult because every $o_i$ term is dependent on both $x_i$ and $x_{i+1}$. This is because the estimate of $o_i$'s position is typically made from data collected over an exposure time $0<t_\epsilon\leq t_{i+1}-t_i$. If the observational apparatus is a camera sensor, the signal will be integrated over the frame, resulting in a motion-blurred image of the moving particle, hence the observed location is conditional upon the particle's true position at the beginning ($x_i$) and end ($x_{i+1}$) of the frame.
In the case where exposure time $t_\epsilon$ goes to 0, but $\delta t_i=t_{i+1}-t_i$ remains constant, the
motion-blur effect is no longer present, so the observed location $o_i$ depends only on position $x_i$,
\begin{equation}
\label{eq:pod-pulsed}
\P{O}{D} = \intRN \d{X} \prod^{N}_{i=1} \P{o_i}{x_i} \prod^{N-1}_{j=1} \P{x_{j+1}}{x_j}.
\end{equation}
Without the additional dependence on $x_{i+1}$, the methods required to solve the integral in Eq.~\ref{eq:pod-pulsed} are simpler.
In order to use this simpler representation, we will transform Eq.~\ref{eq:pod} into a form which resembles Eq.~\ref{eq:pod-pulsed}, and seek functions
$\M(o_i,x_i)$ and $\T(x_{j+1},x_j)$ such that
\begin{equation} \label{eq:PtoM}
\P{O}{D} = \intRNI \d{X} \prod^{N}_{i=1} \P{o_i}{x_i,x_{i+1}} \P{x_{i+1}}{x_i} = \intRN \d{X} \prod^{N}_{i=1} \M(o_i,x_i) \prod^{N-1}_{j=1} \T(x_{j+1},x_j).
\end{equation}
The function $\T(x_{i+1},x_i)$ stands for the transition probability; it is simply the probability of a particle diffusing with constant $D$ moving from $x_i$ to $x_{i+1}$, over time $\delta t_i$. The function $\M(o_i,x_i)$ stands for the measurement probability and it encapsulates the net effect of both the measurement localization error and the motion-blur. The details of the representation equivalence of Eq.~\ref{eq:PtoM} are important for correctness, but they also unnecessarily complicate the exposition, and so can be found in Sec.~\ref{sec:ExpDeriv}. Other authors~\cite{Savin2005,Berglund2010} have investigated the motion-blur effects of exposure time integration, and found that the effect can be approximated by an effective decrease in variance of the measurement localization error, dependent on diffusion constant $D$ and exposure time $t_\epsilon$. Our derivations in Sec.~\ref{sec:ExpDeriv} agrees with the effective correction factor in \cite{Savin2005,Berglund2010}, and more importantly provides a form for the diffusion likelihood that is directly amenable to the solution techniques we employ in
Secs.~\ref{sec:theory-recursive},\ref{sec:theory-laplace},~and~\ref{sec:theory-markov}.
The result of the transformation of Eq.~\ref{eq:PtoM} is that the effective measurement function $\M_i$ and the transition function $\T_i$ take the form of normalized Gaussians. We use the notation
\begin{equation*}
\N(a,a_0,\eta)=\frac{1}{\sqrt{2\pi\eta}}\exp{\left[ -\frac{(a-a_0)^2}{2\eta} \right]}.
\end{equation*}
to represent the normalized Gaussian function with variance $\eta=\sigma^2$ centered around
mean $a_0$ considered as a function of $a,a_0,$ and $\eta$. Using this notation, we can succinctly represent the measurement and transition functions as,
\begin{align}
\label{eq:varT} \T_i = \T_i(x_{i+1},x_i) = & \N(x_{i+1},x_i,\omega_i(D)), \quad\textrm{for } 1\leq i \leq N-1, \textrm{ and }\\
\label{eq:varM} \M_i = \M_i(o_i,x_i) = & \N(o_i,x_i,\eps_i(D)),\quad\textrm{for } 1\leq i\leq N.
\end{align}
The transition functions (Eq.~\ref{eq:varT}), are unaffected by the motion blur transformation and their Gaussian representation follows directly from the normally distributed displacements of diffusive processes, hence the variance is
\begin{equation*}
\label{eq:var-diffusion}
\omega_i(D)=2D\delta t_i.
\end{equation*}
For the measurement functions (Eq.~\ref{eq:varM}), the variance $\eps_i(D)$, is the variance due to measurement error, $v_i$, combined with a correction for the effect of motion-blur that is dependent on diffusion constant $D$ and exposure time $t_\epsilon$,
\begin{equation*}
\label{eq:var-measurement}
\eps_i(D)=v_i- D t_{\eps}/3.
\end{equation*}
where the factor of $1/3$ comes from the continuous limit integration of photon emissions for averaged Brownian trajectories (Sec.~\ref{sec:TimeAv}). It is important to note that the independence of $t_{\eps}$ and $\delta t_i$ allows for gaps in the trajectories, where $\delta t_i$ could span a duration of multiple frames but $t_{\eps}$ is the exposure time of a single frame.
The result is that Eq.~\ref{eq:PtoM} allows us to express the likelihood function exactly in a simple form that deals directly with variable localization error, motion-blur effects, and missing or irregularly spaced trajectory localizations,
\begin{equation} \label{eq:Funk}
\P{O}{D} = \intRN \d{X} \prod^{N}_{i=1} \M_i \prod^{N-1}_{j=1} \T_j.
\end{equation}
\section{Recursive Method}
\label{sec:theory-recursive}
The notation for the transition and measurement functions allows us to define the likelihood function $\L(D)$, by writing Eq.~\ref{eq:Funk} in a form that emphasizes the dependencies on each marginalized position $x_i$,
\begin{equation} \label{eq:recursiveform}
\L(D)=\P{O}{D} = \int\d{x_N} \M_N \int \d{x_{N-1}} \M_{N-1} \T_{N-1} \ldots
\int\d{x_2} \M_2 \T_2 \int\d{x_1} \M_1 \T_1.
\end{equation}
The form of Eq.~\ref{eq:recursiveform} leads to a direct recursive solution, taking into account the properties of integrals over products of normalized Gaussian functions. Define $\L_i$ as the sub-integrand of $\L(D)$ considering only the first $i$ observations,
\begin{equation}\label{eq:LRecurseDef}
\begin{aligned}
\L_1(D,x_2) & = \intII \M_1 \T_1 \d{x_1}, \\
\L_i(D,x_{i+1}) & = \intII \M_i \T_i \mathcal{L}_{i-1} \d{x_i}, \quad 2\leq i\leq N-1 \\
\L(D) =\L_N(D) & = \intII \M_N \mathcal{L}_{N-1} \d{x_N}. \nonumber
\end{aligned}
\end{equation}
Now, consider that the integral of a product of two normalized Gaussians sharing a parameter $x$, with means and variances denoted by $c_i$ and $\varphi_i$ respectively, is itself a normalized Gaussian (Sec.~\ref{sec:NormalIdent}),
\begin{equation} \label{eq:convolve2}
\intII \d{x} \prod^2_{i=1} \N(x,c_i,\varphi_i) = \N(c_1,c_2,\varphi_1+\varphi_2).
\end{equation}
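This identity is easily verified numerically, e.g., with the following quick Python check (our own, purely for illustration):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def gauss(a, a0, eta):
    # normalized Gaussian N(a, a0, eta) as defined above
    return np.exp(-(a - a0)**2 / (2 * eta)) / np.sqrt(2 * np.pi * eta)

c1, c2, p1, p2 = 0.3, -1.2, 0.5, 2.0   # arbitrary means and variances
lhs, _ = quad(lambda x: gauss(x, c1, p1) * gauss(x, c2, p2),
              -np.inf, np.inf)
assert np.isclose(lhs, gauss(c1, c2, p1 + p2))
\end{verbatim}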
Hence, $\L_1$ is a normalized Gaussian in parameter $x_2$,
\[ \L_1(D,x_2) = \int\d{x_1} \M_1 \T_1 = \N(o_1 , x_2, \eps_1 + \omega_1). \]
This implies that $\L_2(D,x_3)$, which is an integral over position $x_2$, can now be written as an integral over three normalized Gaussians, all of which
share parameter $x_2$,
\begin{equation} \label{eq:L2recursive}
\L_2(D,x_3)=\int\d{x_2}\M_2\T_2 \int\d{x_1} \M_1 \T_1= \int\d{x_2} \M_2\T_2\L_1.
\end{equation}
Similarly, the integral of a product of three normalized Gaussians sharing the integrated parameter is itself a product of two normalized Gaussians (Sec.~\ref{sec:NormalIdent}),
\begin{equation} \label{eq:convolve3}
\intII \d{x} \prod^3_{i=1} \N(c_i,x,\varphi_i) = \N(c_1,c_2,\varphi_1+\varphi_2)\N(c_3,c',\gamma),
\end{equation}
where,
\begin{align} \label{gensubs}
c' = \frac{c_1 \varphi_2 + c_2 \varphi_1}{\varphi_1+\varphi_2}, \quad \text{and} \quad
\gamma = \frac{\varphi_1\varphi_2 + \varphi_1\varphi_3 + \varphi_2\varphi_3}{\varphi_1 + \varphi_2}. \nonumber
\end{align}
Hence, applying Eq.~\ref{eq:convolve3} to Eq.~\ref{eq:L2recursive}, we find that
\[\L_2=\int\d{x_2} \M_2\T_2\L_1 =\N(o_1,o_2,(\eps_1+\omega_1)+\eps_2)\N(x_3,\mu_2,\eta_2) \]
is a product of two normalized Gaussians, one of which depends on $x_3$ and the other is a constant with respect to $X$. The variables $\mu_2$ and $\eta_2$ follow from Eq.~\ref{eq:convolve2} and Eq.~\ref{eq:convolve3}, and are the second pair of values in a recursive solution. Since all subsequent integrals, barring the last integral, can be expressed as a product of three normalized Gaussians, we can express the recursion variables as
\begin{equation}
\label{eq:rec-vars1}
\mu_1 = o_1, \quad\ \eta_1 = \eps_1 + \omega_1, \quad\ \text{and} \quad \alpha_1 = \eta_1 + \eps_2,
\end{equation}
and for $2\leq i \leq N-1$,
\begin{equation}
\label{eq:rec-vars2}
\mu_i = \frac{\mu_{i-1} \eps_i + \eta_{i-1} o_i } {\alpha_{i-1} }, \quad \eta_i = \frac{ \eta_{i-1} \eps_i} {\alpha_{i-1}} + \omega_i, \quad \text{and} \quad \alpha_i = \eta_i+\eps_{i+1}.
\end{equation}
Finally, this allows us to express our integrands $\L_i$ as
\begin{equation}
\label{eq:recursive-sol}
\begin{aligned}
\L_1 & = \N(x_2,\mu_1, \eta_1) \\
\L_i & = \N(x_{i+1},\mu_i,\eta_i) \prod_{k=1}^{i-1} \N(o_{k+1}, \mu_k, \alpha_k) , \quad 2\leq i\leq N-1 \\
\L(D) = \L_N & = \prod_{k=1}^{N-1} \N(o_{k+1}, \mu_k, \alpha_k) .
\end{aligned}
\end{equation}
Equation~\ref{eq:recursive-sol} is the final form of the recursive solution for $\L(D)$ which is simply the product of $N-1$ normalized Gaussians each of which has parameters which come from a recursive relationship on $o_i$, $\eps_i$, and $\omega_i$. The value of $D$ that maximizes $\L(D)$ is the maximum likelihood estimate.
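The recursion translates directly into a short numerical routine. The following Python sketch (with our own variable and function names) evaluates the log-likelihood of $D$ for a 1D trajectory; for simplicity it assumes $D$ is small enough that every $\eps_i(D)=v_i-Dt_\eps/3$ remains positive:
\begin{verbatim}
import numpy as np

def log_likelihood(D, obs, var, dt, t_eps):
    # obs: N observed 1D positions o_i; var: per-localization variances v_i;
    # dt:  N-1 time steps between observations; t_eps: exposure time.
    eps = var - D * t_eps / 3.0      # motion-blur-corrected variances
    omega = 2.0 * D * dt             # diffusion transition variances
    mu, eta = obs[0], eps[0] + omega[0]
    logL = 0.0
    for i in range(1, len(obs)):
        alpha = eta + eps[i]         # alpha_i = eta_i + eps_{i+1}
        logL += -0.5 * np.log(2 * np.pi * alpha) \
                - (obs[i] - mu)**2 / (2 * alpha)
        if i < len(obs) - 1:         # recursion for mu_i and eta_i
            mu = (mu * eps[i] + eta * obs[i]) / alpha
            eta = eta * eps[i] / alpha + omega[i]
    return logL
\end{verbatim}
The maximum likelihood estimate of $D$ is then obtained by maximizing this function over $D$, e.g., with a bounded scalar optimizer.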
\section{Laplace Method}
\label{sec:theory-laplace}
An independent solution for Eq.~\ref{eq:Funk} can be obtained using the Laplace method which is based on integrating the second moment of the Taylor expansion of the exponential component of a function. Given that the second moment of a Taylor expansion is quadratic, this ensures that the function under the integral is always a Gaussian function \cite{de1774memoire}. Another caveat to the Laplace method is that the Taylor expansion has to occur about the peak of the exponential, so that the first moment of the Taylor expansion goes to 0. To perform the Laplace method, we express our likelihood $\L(D)=\P{O}{D}$ in terms of exponential and non-exponential components
\begin{equation*}
\label{eq:laplaceform}
\L(D) = \intII \d{X} f(X) = \intII \d{X} h(X) \Exp{-g(X)},
\end{equation*}
where $f(x)$ is simply the integrand of Eq.~\ref{eq:Funk},
\begin{equation}
\label{eq:LaplaceF}
f(X)=h(X) \Exp{-g(X)}=\prod_{i=1}^{N}\M_{i}\prod_{j=1}^{N-1}\T_{j}.
\end{equation}
Thus, using equations~\ref{eq:varM}~and~\ref{eq:varT}, we see that $h=h(X)$ is independent of $X$ and $g(X)$ is quadratic in $X$,
\begin{align*}
h = & \prod^N_{i=1} \frac{1}{\sqrt{2 \pi \eps_i}} \prod^{N-1}_{i=1} \frac{1}{\sqrt{2 \pi \omega_i}},\\
g(X) = & \sum^N_{i=1} \frac{(o_i-x_i)^2}{2 \eps_i} + \sum^{N-1}_{i=1} \frac{(x_{i+1}-x_i)^2}{2 \omega_i}.
\end{align*}
The maximum likelihood estimate $\Xhat$ of the actual positions $X$, given $D$ and $O$ will be wherever the integrand is maximized, and since $g(X)\geq0$,
\begin{equation*}
\Xhat = \argmax_X f(X) = \argmin_X g(X).
\end{equation*}
Now, given that $g(X)$ is quadratic, a second order Taylor expansion of $g$ about $\Xhat$ is exact and the Laplace method will provide an exact solution
for $\L(D)$ as the integral can be shown to take the form of a standard Gaussian integral. To see this, we write out the second order Taylor expansion
\begin{equation}
\label{eq:Dlaplaceform}
\intII \d{X} f(X) = h \intII \d{X} \Exp{-g(\Xhat) - \nabla g(\Xhat)(X-\Xhat) - \frac{1}{2} (X-\Xhat)^\transp \nabla \nabla g(\Xhat) (X-\Xhat) }.
\end{equation}
Since $\Xhat$ is the minima of $g(X)$, the gradient $\nabla g(\Xhat)=0$, and we can rearrange
Eq.~\ref{eq:Dlaplaceform} to extract all terms independent of $X$,
\begin{equation} \label{eq:laplaceint}
\intII \d{X} f(X) = f(\Xhat) \intII \d{X} \Exp{- \frac{1}{2} (X-\Xhat)^\transp \nabla \nabla g(\Xhat) (X-\Xhat) }.
\end{equation}
Furthermore, since $h$ is independent of $X$, we know that $-\nabla\nabla \ln{f(X)} = \nabla \nabla g(X)=M$, where $M$ can be thought of as the inverse of the covariance matrix for the multivariate Gaussian, or equivalently as the Hessian matrix of $-\ln{f(X)}$. Substituting $M$ for $\nabla \nabla g(X)$ in Eq.~\ref{eq:laplaceint} we are left with a Gaussian integral with the solution,
\begin{equation}
\label{eq:laplacelikelihood}
\L(D) = f(\Xhat) \intII \d{X} \Exp{- \frac{1}{2} (X-\Xhat)^\transp M (X-\Xhat) } = f(\Xhat) \sqrt{ \frac{(2\pi)^N}{\det{M}}}.
\end{equation}
The Hessian matrix $M$ is independent of $X$ and is symmetric tri-diagonal with non-zero elements,
\begin{equation}
\label{eq:LaplaceHessian}
\begin{aligned}
M_{1,1} &= -\frac{\partial^2\ln f}{\partial x_1^2} = \frac{1}{\eps_1}+\frac{1}{\omega_1} \\
M_{i,i} &= -\frac{\partial^2\ln f}{\partial x_i^2} = \frac{1}{\eps_i}+\frac{1}{\omega_i}+\frac{1}{\omega_{i-1}}, \quad 2\leq i \leq N-1 \\
M_{N,N} &= -\frac{\partial^2\ln f}{\partial x_N^2} = \frac{1}{\eps_N}+\frac{1}{\omega_{N-1}} \\
M_{i,i+1} = M_{i+1,i} &= -\frac{\partial^2\ln f}{\partial x_i\partial x_{i+1}} = -\frac{1}{\omega_i}, \quad 1\leq i \leq N-1.
\end{aligned}
\end{equation}
We also require $\Xhat$ to compute Eq.~\ref{eq:laplacelikelihood}, which can be solved for with the relation $-\nabla \ln f(\Xhat)=0$ (Sec.~\ref{sec:MethCom}),
giving
\begin{equation} \label{eq:MLE_X}
\Xhat = M^{-1} \Theta,
\end{equation}
where $\Theta=\{\theta_i = o_i/\eps_i\}_{i=1}^N$.
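For completeness, a corresponding Python sketch of the Laplace method (again with our own names, and assuming all $\eps_i(D)>0$) builds the tridiagonal Hessian, solves for $\Xhat$, and evaluates Eq.~\ref{eq:laplacelikelihood} in log form:
\begin{verbatim}
import numpy as np

def laplace_log_likelihood(D, obs, var, dt, t_eps):
    N = len(obs)
    eps = var - D * t_eps / 3.0
    omega = 2.0 * D * dt
    M = np.zeros((N, N))                 # tridiagonal Hessian of -ln f
    for i in range(N):
        M[i, i] = 1.0 / eps[i]
        if i < N - 1:
            M[i, i] += 1.0 / omega[i]
            M[i, i + 1] = M[i + 1, i] = -1.0 / omega[i]
        if i > 0:
            M[i, i] += 1.0 / omega[i - 1]
    xhat = np.linalg.solve(M, obs / eps)  # Xhat = M^{-1} Theta
    g = (np.sum((obs - xhat)**2 / (2 * eps))
         + np.sum(np.diff(xhat)**2 / (2 * omega)))
    log_h = -0.5 * (np.sum(np.log(2 * np.pi * eps))
                    + np.sum(np.log(2 * np.pi * omega)))
    sign, logdet = np.linalg.slogdet(M)
    return log_h - g + 0.5 * (N * np.log(2 * np.pi) - logdet), xhat
\end{verbatim}
Since this routine and the recursive one express the same likelihood, comparing their outputs provides a convenient numerical consistency check.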
\section{Markov Method}
\label{sec:theory-markov}
In this section we present a derivation of the likelihood function $\L(D)$ utilizing a technique developed by Andrey Markov \cite{markov1912} and generalized by Chandrasekhar \cite{Chandrasekhar1943}. Markov's method allows us to transform $\P{O}{D}$ (Eq.~\ref{eq:Funk}) from a function of the $N$ observed positions $O=\{o_i\}_{i=1}^N$ into a function of the $N-1$ discrete steps (displacements) between subsequent observations
\begin{equation*}
\Jump=\{\jump_i = o_{i+1}-o_i\}_{i=1}^{N-1}.
\end{equation*}
This is possible because the spatial invariance of diffusion means $\L(D)$ depends not on the absolute spatial positions $O$, but only their relative displacements, $\Jump$. Thus we should expect that $\P{O}{D}$ can also be expressed as $\P{\Jump}{D}$, however Eq.~\ref{eq:recursiveform}, which defines $\P{O}{D}$, cannot be directly transformed into a function on $\Jump$. This is where Markov's method allows us to solve for a function $\P{\Jump}{D}=\P{O}{D}$ for a given $D$ value. For a particular fixed $\Jump$ and $\d{\Jump}$ of interest, the value of $\P{\Jump}{D}\d{\Jump}$ gives the probability that variable $\Jump'$ is within the bounds
\begin{equation}
\label{eq:dS-bounds}
\Jump-\frac{1}{2}\d{\Jump}\leq \Jump' \leq \Jump+\frac{1}{2}\d{\Jump}.
\end{equation}
More formally, $\P{\Jump}{D}\d{\Jump}$ is the integral over the volume $\d{\Jump}$ around the point of interest $\Jump$, and we let $\Jump'$ represent
the variable integrated over,
\begin{equation*}
\P{\Jump}{D}\d{\Jump} = \int^{\Jump + \frac{1}{2}\d{\Jump}}_{\Jump - \frac{1}{2}\d{\Jump}} \d{\Jump'} \P{O}{D}.
\end{equation*}
The issue remains that $\P{O}{D}$ is expressed in a basis of $O$ rather than of $S$, and integrating with respect to bounds in a different basis is non-trivial. In order to circumvent this issue, Markov utilized a product of Dirichlet integrals, $\Xi(\Jump') = \prod^{N-1}_{k=1} \xi_k(\jump_k')$, to expand the limits of integration to all space
\begin{equation} \label{Markov1}
\P{\Jump}{D} \d{\Jump}= \intII \d{\Jump'} \Xi(\Jump') \P{O}{D}.
\end{equation}
The idea is that for each dimension of $S$, the Dirichlet integral $\xi_k(\jump_k')$ acts like a continuous indicator function determining if $\jump_k'$ is within the bounds of Eq.~\ref{eq:dS-bounds},
\begin{equation*} \label{dirichletInt}
\xi_k(\jump_k') = \frac{1}{\pi} \intII \d{\rho_k} \frac{\sin(\frac{1}{2}\d{\jump_k} \rho_k)}{\rho_k} \Exp{\imath \rho_k (\jump_k' - \jump_k) }, \nonumber
\end{equation*}
so that,
\begin{equation}
\xi_k(\jump_k') =
\begin{cases}
1 & \jump_k-\frac{1}{2}\d{\jump_k}\leq \jump_k' \leq \jump_k+\frac{1}{2}\d{\jump_k} \\
0 & \text{otherwise}
\end{cases}.\nonumber
\end{equation}
Therefore $\Xi(\Jump')$ is the indicator function acting over the whole space of $S'$, and determining if $S'$ is within the volume $\d{S}$ around our point of interest $S$,
\begin{equation}
\Xi(\Jump') = \prod_{k=1}^{N-1} \xi_k(\jump_k') =
\begin{cases}
1 & \displaystyle \bigwedge_{k=1}^{N-1} \textstyle s'_k \in \left[\jump_k-\frac{1}{2}\d{\jump_k}, \jump_k+\frac{1}{2}\d{\jump_k}\right] \\
0 & \text{otherwise}
\end{cases}.\nonumber
\end{equation}
This puts Eq.~\ref{Markov1} in the form
\begin{equation}
\label{eq:expanded-rho-S-int}
\P{\Jump}{D} \d{\Jump}= \int \d{\Jump'} \frac{1}{\pi^{N-1}} \int \d{\rhovec} \left[ \prod^{N-1}_{k=1} \frac{\sin(\frac{1}{2} \d{\jump_k} \rho_k)}{\rho_k} \right] \Exp { \imath \rhovec^\transp (\Jump' - \Jump) } \P{O}{D},
\end{equation}
where $\rhovec =\{\rho_k\}_{k=1}^{N-1}$ is a vector of conjugate coordinates to $\Jump'$. We can then rearrange Eq.~\ref{eq:expanded-rho-S-int}
to move the integral over $\d{\Jump'}$ and all factors dependent on $\Jump'$ into function $\Lambda({\rhovec})$,
\begin{equation} \label{eq:MarkovTrans}
\P{\Jump}{D} \d{\Jump}= \frac{1}{\pi^{N-1}} \int \d{\rhovec} \left[ \prod^{N-1}_{k=1} \frac{\sin(\frac{1}{2} \d{\jump_k} \rho_k)}{\rho_k} \right]
\Exp {-\imath \rhovec^\transp \Jump } \Lambda(\rhovec).
\end{equation}
Now, we can interpret $\Lambda({\rhovec})$ as the characteristic function of $\P{O}{D}$ in the $\Jump'$ basis and it has the form
\begin{equation} \label{fourierTerm}
\Lambda(\rhovec) = \int \d{\Jump'} \Exp{\imath\rhovec^\transp \Jump' } \P{O}{D}.
\end{equation}
The form of Eq.~\ref{fourierTerm} implies that $\Lambda(\rhovec)$ is the inverse Fourier transform of $\P{O}{D}$.
Due to the properties of the Fourier transform, $\Lambda(\rhovec)$ is a bounded function with a finite integral, because $\P{O}{D}$ is a finite product of Gaussians with non-zero variance and hence a continuous, integrable function \cite{cahill2013physical}.
Given that $\int \d{\rhovec} \Lambda(\rhovec)$ is bounded and $\d{\Jump}$ is small, we can approximate the product of sinc functions in Eq.~\ref{eq:MarkovTrans} as
\begin{equation*}
\prod_{k=1}^{N-1} \frac{\sin(\frac{1}{2}\d{\jump_k} \rho_k)}{\rho_k}\approx\prod_{k=1}^{N-1} \frac{\d{\jump_k}}{2}=\frac{\d{\Jump}}{2^{N-1}}.
\end{equation*}
Thus Eq.~\ref{eq:MarkovTrans} becomes the Fourier transform of $\Lambda(\rhovec)$
\begin{equation} \label{MarkovForm}
\P{\Jump}{D} \d{\Jump}= \frac{\d{\Jump}}{(2\pi)^{N-1}}\int \d{\rhovec} \Exp{-\imath\rhovec^\transp\Jump}\Lambda(\rhovec).
\end{equation}
We are now interested in evaluating $\Lambda(\rhovec)$ explicitly. To do so we note that $\int \d{\Jump'} \P{O}{D}=1$, since $\P{O}{D}$ is a probability
distribution that, as we have argued, can be equivalently expressed in the $\Jump$ basis as $\P{\Jump}{D}$ (Sec.~\ref{sec:MethCom}).
With this understanding we evaluate $\Lambda(\rhovec)$ by expanding the exponential under the integral in Eq.~\ref{fourierTerm} as a
Taylor series about the origin
\begin{equation}
\Exp{\imath \rhovec^\transp \Jump'} = 1 + \imath \left( \sum^{N-1}_{j=1} \rho_j \jump_j' \right)
-\frac{1}{2}\left( \left[\sum^{N-1}_{j=1} \rho_j^2 \jump_j'^{\,2} \right] + \left[ \sum^{N-2}_{j=1} \sum^{N-1}_{k=j+1} 2\rho_j\rho_k \jump'_j\jump'_k \right] \right) + \mathcal{O}(\rhovec^3). \nonumber
\end{equation}
Because $\P{O}{D}$ is a normalized probability density in $\Jump'$, integrating Eq.~\ref{fourierTerm} term by term with the Taylor-expanded exponential allows us to write $\Lambda(\rhovec)$ in terms of expected values of the displacements (using $\E{\cdot}$ to represent expectation),
\begin{equation} \label{sumofExpect}
\Lambda(\rhovec) = 1 + \imath \left( \sum^{N-1}_{j=1} \rho_j \E{\jump_j'} \right)
-\frac{1}{2}\left( \left[\sum^{N-1}_{j=1} \rho_j^2 \E{\jump_j'^{\,2}} \right] + \left[ \sum^{N-2}_{j=1} \sum^{N-1}_{k=j+1} 2\rho_j\rho_k \E{\jump'_j\jump'_k} \right] \right)
+ \mathcal{O}(\rhovec^3) .
\end{equation}
We next evaluate the expectation values over $\Jump'$ to see whether they simplify the expression in Eq.~\ref{sumofExpect}. Holding one of the observations, $o_i$, fixed, we can marginalize $\P{O}{D}$ over all variables except those represented in the basis of interest. In other words, we find that
\begin{align}
\E{\jump_i'} & = \int \d{\jump_i'} \jump_i' \N(\jump_i',0,\eps_i + \eps_{i+1} + \omega_i) = 0 \nonumber \\
\E{\jump_i'^{\,2}} & = \int \d{\jump_i'} \jump_i'^{\,2} \N(\jump_i',0,\eps_i + \eps_{i+1} + \omega_i) = \eps_i + \eps_{i+1} + \omega_i. \nonumber \\
\E{\jump_i' \jump_{i+1}'} & = \int \d{\jump_i'} \d{\jump_{i+1}'} \jump_i' \jump_{i+1}' \N(S_b,0,\Sigma_b) = -\eps_{i+1}. \nonumber
\end{align}
Here the substitution of variables required to integrate the expectation $\E{\jump_i' \jump_{i+1}'}$ induces a bivariate Gaussian with location parameters $S_b = [s_i, s_{i+1}]^\transp$ and covariance matrix
\begin{equation}
\Sigma_b = \begin{bmatrix} \omega_i + \epsilon_i + \epsilon_{i+1} & -\epsilon_{i+1} \\
-\epsilon_{i+1} & \omega_{i+1} + \epsilon_{i+1} + \epsilon_{i+2} \end{bmatrix}. \nonumber
\end{equation}
Furthermore, we find that the separability of two non-adjacent displacements results in the relation $\E{\jump_i' \jump_{i+k}'} = \E{\jump_i'}\E{\jump_{i+k}'} = 0 $ for $k>1$. Since $\P{O}{D}$ is a multivariate Gaussian with zero mean, we can apply Isserlis's theorem \cite{Isserlis1918} to obtain all moments of the distribution from its second moments, which can be assembled into a covariance matrix. This allows us to express Eq.~\ref{sumofExpect} as
\begin{align} \label{fourierCompact}
\Lambda(\rhovec) = \Exp {- \frac{1}{2} \rhovec^\transp \Sigma \rhovec }.
\end{align}
The covariance matrix $\Sigma$ is symmetric tri-diagonal, with non-zero elements
\begin{equation}
\label{eq:markov-cov-mat}
\begin{aligned}
& \Sigma_{i,i} = \omega_i + \eps_i + \eps_{i+1} \\
& \Sigma_{i,i+1} = \Sigma_{i+1,i}= - \eps_{i+1}.
\end{aligned}
\end{equation}
The expression in Eq.~\ref{fourierCompact} is well known as the characteristic function of a multivariate Gaussian. Substituting Eq.~\ref{fourierCompact} into Eq.~\ref{MarkovForm} and factoring out $\d{\Jump}$ gives
\begin{align} \label{eq:markov-final}
\L(D) = \P{\Jump}{D} = \frac{1}{\sqrt{(2\pi)^{N-1}\det{\Sigma}}} \Exp {-\frac{1}{2} \Jump^\transp \Sigma^{-1} \Jump }.
\end{align}
\section{Implementation}
\label{sec:implementation}
We have presented three independent solutions to the experimental diffusion likelihood $\L(D)$ (Eq.~\ref{eq:recursiveform}): the recursive method (Sec.~\ref{sec:theory-recursive}), the Laplace method (Sec.~\ref{sec:theory-laplace}), and the Markov method (Sec.~\ref{sec:theory-markov}). While each method requires separate consideration, several features are common to all of the implementations. The separability of the problem allows us to estimate diffusion constants for inputs of any dimension using the 1D algorithms (Eq.~\ref{eq:separable}). The inputs to the algorithms are: (1) the observed particle locations, $\mathbf{O}=\{\mathbf{o}_i\}_{i=1}^N$; (2) the observation times $T=\{t_i\}_{i=1}^N$;
(3) the measurement variance for each observation $\mathbf{V}=\{\mathbf{v}_i\}_{i=1}^N$; (4) the exposure time of each frame $t_{\epsilon}$; and (5) one or more diffusion constants $D$ at which to evaluate the likelihood. The output for each $D$ value is $\ln(\L(D))$. Working with the logarithm of the likelihood turns products and exponentials into sums, which is much faster to compute and avoids numerical underflow for very small values of $\L(D)$. Additionally, because the logarithm is a strictly monotonically increasing function, $\argmax_D{\L(D)}=\argmax_D{\ln(\L(D))}$, so the maximum likelihood estimate is identical for the log-likelihood.
\subsection{Recursive Method}
The recursive algorithm follows directly from the recursively defined variables
(Eqs.~\ref{eq:rec-vars1}~and~\ref{eq:rec-vars2}), and the expression of $\L(D)$
as a product of Gaussians (Eq.~\ref{eq:recursive-sol}). The recursive expressions for $\alpha_i$, $\eta_i$, and $\mu_i$, are causal (the $i$-terms depend only on the $(i-1)$-terms), enabling their computation in a simple for loop over $N$. Noting that the logarithm of a normalized Gaussian is
\begin{equation}
\label{eq:lognormal}
\ln\N(a,b,v)=-\frac{1}{2}\left[\ln(2\pi)+\ln(v)+\frac{(a-b)^2}{v}\right],
\end{equation}
we apply Eq.~\ref{eq:lognormal} directly to Eq.~\ref{eq:recursive-sol} to arrive at a computationally efficient form for the recursive solution of the log-likelihood
\begin{equation}
\ln\L(D)=\sum_{i=1}^{N-1} \ln\N(o_{i+1}, \mu_i, \alpha_i) = -\frac{1}{2}\left[(N-1)\ln(2\pi)+\sum_{i=1}^{N-1} \ln(\alpha_i)+\sum_{i=1}^{N-1} \frac{(o_{i+1}-\mu_i)^2}{\alpha_i} \right].\nonumber
\end{equation}
Of all the methods, the recursive method is the simplest to implement and the most computationally efficient and numerically stable.
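As a minimal illustration (not the reference implementation), the final sum can be evaluated in a few lines of Python once the recursion variables have been computed; the array names \texttt{alpha} and \texttt{mu} below are hypothetical placeholders for $\{\alpha_i\}$ and $\{\mu_i\}$ of length $N-1$:
\begin{verbatim}
import numpy as np

def recursive_log_likelihood(o, alpha, mu):
    # o: observed positions (length N); alpha, mu: recursion
    # variables (length N-1), precomputed by the causal loop.
    d = o[1:] - mu                 # residuals o_{i+1} - mu_i
    n = len(d)                     # N-1 Gaussian factors
    return -0.5 * (n * np.log(2 * np.pi)
                   + np.sum(np.log(alpha))
                   + np.sum(d**2 / alpha))
\end{verbatim}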
\subsection{Laplace Method}
The computational core of the Laplace method centers around the Hessian matrix $M$ (Eq.~\ref{eq:LaplaceHessian}). This matrix is symmetric tri-diagonal, which means all non-zero elements are on the main diagonal and the diagonals immediately above and below. Using $M$ we can solve the linear system $\Xhat = M^{-1} \Theta$ (Eq.~\ref{eq:MLE_X}) to obtain the maximum likelihood estimates $\Xhat$ for the true particle locations. Typically, solving large linear systems is expensive but since $M$ is tri-diagonal there are algorithms to solve this system in linear time~\cite{el2004inverse}. We refer the reader to Sec.~\ref{sec:Imp} for the details of tri-diagonal matrix algorithms and our implementation.
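As a rough sketch of such a routine (our actual implementation is detailed in Sec.~\ref{sec:Imp}), the standard Thomas algorithm below solves a symmetric tri-diagonal system in $\mathrm{O}(N)$ time; it assumes the matrix is positive definite so that no pivoting is required:
\begin{verbatim}
import numpy as np

def solve_tridiagonal(d, e, b):
    # Solve M x = b for symmetric tri-diagonal M with main
    # diagonal d (length n) and off-diagonal e (length n-1).
    n = len(d)
    c = np.zeros(n - 1)             # modified off-diagonal
    g = np.zeros(n)                 # modified right-hand side
    denom = d[0]
    g[0] = b[0] / denom
    for i in range(1, n):           # forward elimination
        c[i - 1] = e[i - 1] / denom
        denom = d[i] - e[i - 1] * c[i - 1]
        g[i] = (b[i] - e[i - 1] * g[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = g[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = g[i] - c[i] * x[i + 1]
    return x
\end{verbatim}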
Given a solution for $\Xhat$, we can use the definition of $f(X)$ in Eq.~\ref{eq:LaplaceF}
along with Eq.~\ref{eq:lognormal} to compute
\begin{equation}
\ln f(\Xhat)=-\frac{1}{2}\left[
\sum_{i=1}^{N} \ln(2\pi\eps_i)
+ \sum_{i=1}^{N} \frac{(o_i-\widehat{x}_i)^2}{\eps_i}
+ \sum_{i=1}^{N-1} \ln(2\pi\omega_i)
+ \sum_{i=1}^{N-1} \frac{(\widehat{x}_{i+1}-\widehat{x}_i)^2}{\omega_i} \right].
\end{equation}
Finally we can compute the log-likelihood using the Laplace solution of Eq.~\ref{eq:laplacelikelihood}, finding that
\begin{equation}
\label{eq:LaplaceLLH}
\begin{aligned}
\ln\L(D)&= \ln f(\Xhat)+\frac{N}{2}\ln(2\pi)-\frac{1}{2}\logdet M\\
&= -\frac{1}{2}\left[(N-1)\ln(2\pi)
+ \sum_{i=1}^{N-1} \ln(\omega_i)
+ \sum_{i=1}^{N-1} \frac{(\widehat{x}_{i+1}-\widehat{x}_i)^2}{\omega_i}
+ \sum_{i=1}^{N} \ln(\eps_i)
+ \sum_{i=1}^{N} \frac{(o_i-\widehat{x}_i)^2}{\eps_i}
+\logdet M \right]
\end{aligned}
\end{equation}
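Assuming $\Xhat$ and $\logdet M$ have been obtained with the linear-time tri-diagonal routines, the final assembly of Eq.~\ref{eq:LaplaceLLH} is straightforward. The following Python sketch (with hypothetical argument names) illustrates it:
\begin{verbatim}
import numpy as np

def laplace_log_likelihood(o, x_hat, eps, omega, logdet_M):
    # o, x_hat, eps: length N; omega: length N-1;
    # logdet_M: log-determinant of the tri-diagonal Hessian M.
    n = len(o)
    dx = np.diff(x_hat)            # x_hat_{i+1} - x_hat_i
    return -0.5 * ((n - 1) * np.log(2 * np.pi)
                   + np.sum(np.log(omega)) + np.sum(dx**2 / omega)
                   + np.sum(np.log(eps))
                   + np.sum((o - x_hat)**2 / eps)
                   + logdet_M)
\end{verbatim}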
\subsection{Markov Method}
Finally, the Markov method computation, like the Laplace method, is centered on matrix computations. In this case, the matrix of interest is the $(N-1)$-dimensional covariance
matrix $\Sigma$ (Eq.~\ref{eq:markov-cov-mat}), which is also symmetric tri-diagonal, so the
same linear-time algorithms used in the Laplace method are applicable (Sec.~\ref{sec:Imp}).
For the Markov method computation we first solve the linear system $\Sigma\Phi=\Jump$ for $\Phi=\Sigma^{-1} \Jump$,
then apply this solution along with the tri-diagonal log-determinant algorithm to compute the logarithm of the likelihood expression from
Eq.~\ref{eq:markov-final}, giving
\begin{equation}
\ln\L(D)=-\frac{1}{2}\left[(N-1)\ln(2\pi)
+ \Jump^\transp \Phi
+ \logdet{\Sigma}
\right].\nonumber
\end{equation}
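The entire Markov-method computation can be sketched compactly using an $LDL^\transp$ factorization of the tri-diagonal $\Sigma$, which yields both $\Phi$ and $\logdet{\Sigma}$ in $\mathrm{O}(N)$. The following Python code is an illustrative sketch rather than our reference implementation:
\begin{verbatim}
import numpy as np

def markov_log_likelihood(S, omega, eps):
    # S: displacements (length N-1); omega (length N-1) and
    # eps (length N) define the tri-diagonal covariance Sigma.
    d = omega + eps[:-1] + eps[1:]   # Sigma_{i,i}
    e = -eps[1:-1]                   # Sigma_{i,i+1}
    n = len(d)
    u = np.empty(n)                  # D in Sigma = L D L^T
    lsub = np.empty(n - 1)           # sub-diagonal of unit L
    u[0] = d[0]
    for i in range(1, n):
        lsub[i - 1] = e[i - 1] / u[i - 1]
        u[i] = d[i] - lsub[i - 1] * e[i - 1]
    y = S.astype(float)              # forward solve L y = S
    for i in range(1, n):
        y[i] -= lsub[i - 1] * y[i - 1]
    phi = y / u                      # diagonal solve D z = y
    for i in range(n - 2, -1, -1):   # back solve L^T phi = z
        phi[i] -= lsub[i] * phi[i + 1]
    return -0.5 * (n * np.log(2 * np.pi)
                   + S @ phi + np.sum(np.log(u)))
\end{verbatim}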
\section{Results}
\label{sec:results}
To demonstrate the benefits of including individual localization errors in the estimation, we need a loss function with which to evaluate the quality of the maximum likelihood estimators (MLE) under consideration. The mean squared error is a popular choice for evaluating estimators, but it is not a suitable loss function for the estimation of $D$: it over-penalizes overestimates, since its loss is bounded at $\hat{D}=0$ but unbounded as $\hat{D}\to\infty$. Instead, we will use the squared log loss function~\cite{brown1968inadmissibility}
\begin{equation*} \label{sqlogloss}
\ell(D,\hat{D}) = \left( \ln(D) - \ln(\hat{D}) \right)^2.
\end{equation*}
The squared log loss function is similar to the mean squared error, except that the squared distance is taken between the logarithms of $D$ and $\hat{D}$. From \cite{gelman2014bayesian}, we see that the variance term $D$ scales with the observed data as a logarithm; hence the squared log loss function provides a natural metric between the data expected under $D$ and under $\hat{D}$.
For one set of SPT simulations, the true trajectory coordinates were first generated with the pure diffusion model. Full-frame motion blur was accounted for, and localization errors were drawn independently and identically from either a Uniform or a Gamma distribution, to test the effects of variable localization errors without attributing success to a particular choice of distribution. As a control, a set of simulations with a constant localization error for all observations was generated to confirm that the likelihood distribution produced by the class of diffusion estimators that only recognize a Scalar Localization Error (SLE) is exactly the same as that produced by any of our three derived methods, which recognize a Vector of Localization Errors (VLE). We then used the Gamma- and Uniform-distributed localization errors to compare the new VLE estimators, which account for individually measured localization errors, against the SLE estimators, for which we input the square root of the mean localization variance as the best scalar representation of the error in all trajectory observations. We performed 10,000 trajectory trials for each data point to estimate the risk of using a particular estimator, where risk is defined as $R(D,\hat{D}) = \E{\ell(D,\hat{D})}$. The squared log risk is the variance of $\ln(\hat{D})$ for an unbiased estimator.
The simulations with the gamma distributed localization errors were generated according to
\begin{equation*}
\label{eq:variablegV}
\sqrt{V}=\sigma\sim \mathrm{Gamma}(4,\langle{\sqrt{V}}\rangle/4),
\end{equation*}
where the standard gamma distribution p.d.f. is $\mathrm{Gamma}(k,\theta)= \theta^{-k} \sqrt{V}^{k-1} \exp(-\sqrt{V}/\theta)/\Gamma(k)$, and $\langle{\sqrt{V}}\rangle$ represents the mean error. The simulations with the uniform distributed localization errors were generated according to
\begin{equation*}
\label{eq:variableuV}
\sqrt{V}=\sigma\sim \mathrm{Uniform}(\frac{1}{2} \langle{\sqrt{V}}\rangle, \frac{3}{2} \langle{\sqrt{V}}\rangle),
\end{equation*}
where the uniform distribution p.d.f. is $\mathrm{Uniform}(a,b) = 1/(b-a)$ for $\sqrt{V} \in [a,b]$ and $\mathrm{Uniform}(a,b) = 0$ for all other values of $\sqrt{V}$.
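For concreteness, the sampling step for the per-observation errors can be sketched as follows; the value of \texttt{mean\_sigma} is an arbitrary illustrative choice, not a parameter taken from our simulations:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
mean_sigma = 0.1     # hypothetical mean error <sqrt(V)>
N = 50               # observations per trajectory

# Gamma-distributed errors: shape k = 4, scale <sqrt(V)>/4.
sigma_gamma = rng.gamma(shape=4.0, scale=mean_sigma / 4, size=N)

# Uniform-distributed errors on [0.5, 1.5] * <sqrt(V)>.
sigma_uniform = rng.uniform(0.5 * mean_sigma,
                            1.5 * mean_sigma, size=N)

V = sigma_gamma**2   # per-observation localization variances
\end{verbatim}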
\begin{figure}[H]
\begin{center}
\begin{tabular}{ll}
(A) & (B)\\
\includegraphics[height=2.2in]{figure1a.pdf} & \includegraphics[height=2.2in]{figure1b.pdf} \\
(C) & (D)\\
\includegraphics[height=2.2in]{figure1c.pdf} & \includegraphics[height=2.2in]{figure1d.pdf}
\end{tabular}
\end{center}
\caption{Comparison of estimation risk of the Vector of Localization Errors (VLE) and Scalar Localization Error (SLE) MLEs with localization errors drawn from Gamma or Uniform distributions. The MLE was found with the fminbnd command from Matlab on the calculated likelihood distributions with a lower bound of $10^{-8}$. Log-log plots (A) and (C) show the risk of using a particular estimator on trajectories of various lengths, with the remaining parameters held at typical experimental values and standard errors drawn from gamma (A) or uniform (C) distributions. Log-log plots (B) and (D) show the risk of using a particular estimator on trajectories of length 50 with various standard errors drawn from gamma (B) or uniform (D) distributions; all other parameters were set to 1 to study the effects of relative localization error. In both (A) and (C) a fiducial line shows that after 30 observations the risk for both methods decreases approximately $\propto 1/N$ with trajectory length; in this regime the SLE estimators perform worse by a constant factor (to simulation precision) relative to the VLE estimators. In (B) and (D) the risk for the SLE estimators increases at a faster rate than for the VLE estimators, indicating that the localization errors provide valuable information for improving estimator reliability.}
\label{fig:risks}
\end{figure}
Figures \ref{fig:risks}A and C are plotted in log space to show that, given sufficient information, the risk of the VLE estimators is less than the risk of the SLE estimators by a constant proportionality factor whenever a scalar localization error is used to parameterize a continuous distribution of localization errors. In other words, for a given parameterized scalar localization error, the number of observations required to reach the same quality of $\hat{D}$ estimate is always smaller by a proportional factor for the VLE estimators than for the SLE estimators. Figures \ref{fig:risks}B and D are plotted in log space to show how each estimator begins to fail as the relative localization error grows for a fixed number of trajectory observations (50). In these subplots, the VLE estimators show a noticeable improvement in reliability when the relative error is equal to or greater than the true underlying $D$.
In an experimental trajectory, the distribution that parameterizes the localization error is typically a function of several environmental variables, so it can appear arbitrary or specific to a particular experimental trial. In this manuscript, we focus on two simple distributions to provide a fair metric for validation over thousands of simulated trajectories. It is worthwhile to further investigate the precision increase of the VLE estimators over their SLE counterparts for localization error distributions of varying variance. We do so by performing trials on trajectories with localization errors parameterized by the Gamma and Uniform distributions, but this time we vary the parameters that characterize the variance of these distributions without altering the mean localization variance. Specifically, we run simulations where the shape parameter, $k$, of the gamma distribution is varied so that our expression becomes
\begin{equation*}
\label{eq:variableKV}
\sqrt{V}=\sigma\sim \mathrm{Gamma}(k,\langle{\sqrt{V}}\rangle/k),
\end{equation*}
and the bounds of the uniform distribution are altered so the expression becomes
\begin{equation*}
\label{eq:variablebV}
\sqrt{V}=\sigma\sim \mathrm{Uniform}([1-b] \langle{\sqrt{V}}\rangle, [1+b] \langle{\sqrt{V}}\rangle).
\end{equation*}
\begin{figure}[H]
\begin{center}
\begin{tabular}{ll}
(A) & (B) \\
\includegraphics[height=2.2in]{figure2a.pdf} & \includegraphics[height=2.2in]{figure2b.pdf}
\end{tabular}
\end{center}
\caption{Risk ratio of the Scalar Localization Error (SLE) and Vector of Localization Errors (VLE) MLEs for trajectories with localization errors drawn from Gamma and Uniform distributions of a single varying parameter but the same mean $\langle \sqrt{V} \rangle$. Log-log plot (A) shows the risk ratio of SLE and VLE for 5 Gamma distributions with the same mean localization error and different shape parameters, $k$. Increasing $k$ reduces the variance of the gamma distribution; $k = 1$ is an exponential distribution and $k = 13$ approaches a Gaussian distribution. All of the gamma distributions in (A) have variance $\langle \sqrt{V} \rangle^2 / k$. Log-log plot (B) shows the risk ratio of SLE and VLE for 5 Uniform distributions with the same mean localization error and different sampling boundaries, $b$. Reducing $b$ reduces the variance of the uniform distribution; a given $b$ in plot (B) corresponds to a variance of $\left(\langle \sqrt{V} \rangle b\right)^2/3$, hence $b = 0.1$ is nearly a variance of 0. As long as the variance of $\sqrt{V}$ is greater than 0, the VLE estimators outperform the SLE estimators.}
\label{fig:varparam}
\end{figure}
We see in Fig.~\ref{fig:varparam} that increasing the variance relative to the mean of the localization errors in these test distributions results in a growing disparity between the two classes of estimators. From Fig.~\ref{fig:risks}, the effect of increasing $\langle \sqrt{V} \rangle$ is to set a minimum trajectory length at which estimates become reasonable; e.g., the linear decrease in risk for the gamma distribution of Fig.~\ref{fig:risks}A is seen at shorter trajectories than for the uniform distribution of Fig.~\ref{fig:risks}C, even though both distributions have the same variance, because the uniform distribution has a larger $\langle \sqrt{V} \rangle$. In Fig.~\ref{fig:varparam}, the constant precision improvement of the VLE estimators is seen in the regime where both estimators report risk values that scale linearly with trajectory length. Before that regime, the VLE estimators begin to perform significantly better at shorter trajectories, as shown by the peaks in Fig.~\ref{fig:varparam}. These simulations, while greatly simplified versions of real SPT trajectories, highlight the importance of assigning to each localization in a trajectory its own correctly characterized localization error.
\section{Discussion and Conclusion}
Starting from the fundamental diffusion likelihood expression (Eq.~\ref{eq:Funk}), we presented three independent solutions, each of which has different benefits and leads to a computational algorithm with different advantages. The recursive method is the simplest solution and is numerically more stable than the other methods when estimating likelihoods as $D$ approaches 0. The Laplace method has the advantage that the expected true positions $\hat{X}$ are computed along with the likelihood, which may be useful in some applications. The Markov method was crucial for deriving the terms $\epsilon_i$ in the components $\M_i$ given knowledge of the true underlying probability distribution; its generality is its main advantage. In terms of numerical accuracy and computational efficiency, the Markov method is better than the Laplace method, especially for very small $D$ values, but it remains computationally inferior to the recursive method. For practical implementations, we recommend the recursive method unless the MLE of the true positions is also desired.
The method described here naturally allows for trajectories with missing or irregularly spaced localizations by decoupling the concept of observation times $t_i$ from the exposure time $t_\epsilon$. This is important when some localizations are missed because the gap between $t_i$ and $t_{i+1}$ becomes larger, but the exposure time remains the same. Thus to correctly account for the motion-blur, the effective weighting of the dependence of observed position $o_i$ on $x_i$ and $x_{i+1}$ changes, and our technique directly incorporates this effect. Trajectory intermittency has been accounted for in prior studies~\cite{Shuang2013} and in those same studies extensions to dynamic errors were suggested, but a convenient computational framework for an estimator that seamlessly factors in both trajectory intermittencies and dynamic error had not been explicitly worked out until now.
Numerical implementations of the likelihood forms resulting from the three derivations were tested to verify the equivalence of the likelihood forms. Since all three derivations began from the same set of first principles, the three likelihood calculations are mathematically equivalent. We note, however, that our implementation remains unit agnostic, so a trajectory with a very small $D$ value can easily be rescaled to units in which $D$ is closer to 1, where the numerical calculations are more robust.
Variable localization uncertainties could occur in practice from variable background intensities or photobleaching of the fluorescent label. We compared the performance of our VLE estimator to the current state-of-the-art SLE estimator using the squared log loss function and found a clear performance benefit when trajectories had variable localization uncertainties.
\section{Author Contributions}
PKR and MJO contributed equally to this work. KAL, PJC and PKR conceived the project. PJC initiated the formulation of the estimation problem as a MLE given a set of observations. PKR derived the recursive, Markov, and Laplace methods. MJO derived the efficient algorithms for the three solution methods and the C++ and MATLAB implementations of the methods, generated the estimation accuracy results and helped to simplify the presentation. All authors contributed to the writing and editing of the manuscript.
\section{Acknowledgments}
We wish to acknowledge Stan Steinberg and Michael Wester for reading our manuscript and providing enlightening discussions and helpful comments. Financial support for this work was provided primarily by the National Science Foundation grant 0954836. Additional support was provided by The New Mexico Spatiotemporal Modeling Center: NIH P50GM085273 (KAL), NIH grant 1R01GM100114 (KAL,MJO) and NIH grant 1R01NS071116 (KAL, MJO).
\section{Introduction}
\label{sec:intro}
When multiple speech signals are observed by distant microphones (e.g., in a conference room),
they are contaminated with reverberation and background noise.
The problem of extracting each speech signal, with the reverberation and background noise removed, using only the observed signal
is called (convolutive) blind source separation or extraction
(BSE)~\cite{pedersen2008convolutive,comon2010handbook,cichocki2002adaptive}.
Here, we consider BSE in the short-term Fourier transform (STFT) domain
under the following two conditions:
\begin{itemize}
\item The reverberation time ($\mathrm{RT}_{60}$) is larger than the frame length of the STFT, and the mixture should be treated as a convolutive mixture in the STFT domain as well.
\item The number of microphones $M$ is greater than the number of speech signals $K$,
and there can be background noise.
\end{itemize}
To cope with reverberation,
one can apply a dereverberation method~\cite{naylor2010speech} such as weighted prediction error (WPE)~\cite{nakatani2010wpe,yoshioka2011wpe-ica,yoshioka2012wpe-mimo}
as preprocessing of BSE for instantaneous mixtures in the STFT domain (called BSE-inst in this paper).
We then apply some BSE-inst method
such as independent vector analysis (IVA)~\cite{kim2007,hiroe2006,li2009joint}
and independent vector extraction (IVE)~\cite{koldovsky2018ive,jansky2020adaptive,scheibler2019overiva,scheibler2020fast,scheibler2020ive,ike2020ive,ike2020overiva}
developed for less reverberant environments, to extract $K$ speech signals.
Such a cascade configuration of WPE and IVA/IVE has a low computational cost,
but the WPE dereverberation filter is estimated without considering the separation attained by IVA/IVE following WPE.
To jointly optimize the WPE dereverberation and separation filters through a unified optimization,
methods that integrate WPE and several BSE-inst methods have been proposed~\cite{yoshioka2011wpe-ica,yoshioka2012wpe-mimo,nakatani2020computationally,ike2019ilrma,kagami2018wpe-ilrma},
and it has been reported that these methods can give higher separation performance than the cascade configuration of WPE and BSE-inst (see, e.g.,~\cite{nakatani2020computationally}).
However, the computational cost of optimizing both WPE and BSE-inst models becomes huge when $M$ is large.
To reduce the computational cost of the conventional joint optimization methods while maintaining their separation performance,
we propose a new BSE method called \textit{IVE for convolutive mixtures (IVE-conv)}, which integrates WPE and IVE (Section~\ref{sec:model}).
We show that, given source power spectra,
the IVE-conv optimization problem can be reduced to the IVE optimization problem by exploiting the stationary condition,
and this reduction is not computationally intensive (Section~\ref{sec:reduction}).
The IVE optimization problem can be solved fast~\cite{scheibler2019overiva,scheibler2020fast,scheibler2020ive,ike2020ive,ike2020overiva},
and so can the IVE-conv optimization problem (Section~\ref{sec:alg:1}).
We also propose another new algorithm for IVE-conv that alternately optimizes WPE and IVE (Section~\ref{sec:alg:2}).
Similar algorithms have already been developed in~\cite{yoshioka2011wpe-ica,yoshioka2012wpe-mimo}, but our proposed one significantly reduces the computational time complexity of the conventional ones.
In a numerical experiment in which two speech signals are extracted from mixtures,
we show the effectiveness of our new approach.
\section{Blind source extraction problem}
\label{sec:conv-bse-problem}
Let $M$ be the number of microphones.
Suppose that an observed mixture $\bm{x} \coloneqq \{ \bm{x}(f,t) \}_{f,t} \subset \mathbb{C}^M$ in the STFT domain is a convolutive mixture of
$K$ nonstationary source signals and $N_{\z} \coloneqq M - K$ background noise signals:%
\footnote{
The assumption that the dimension of the noise signal is $M - K$ is needed for the rigorous development of the efficient algorithms
and can be violated to some extent when applied in practice (see numerical experiments in Section~\ref{sec:exp}).
}
\begin{align}
\nonumber
\bm{x}(f,t) = \sum_{\tau = 0}^{N_{\tau}} \left[
\sum_{i = 1}^{K} \bm{a}_i (f,\tau) s_i(f,t - \tau) + A_{\z}(f,\tau) \bm{z}(f, t - \tau)
\right],
\end{align}
\vspace{-3 mm}
\begin{alignat}{5}
\label{eq:ai}
& \bm{a}_i(f,\tau) &&\in \mathbb{C}^M,
\quad
& s_i(f,t) &\in \mathbb{C}, \quad i \in \{ 1,\ldots,K \},
\\
\label{eq:Az}
& A_{\z}(f,\tau) &&\in \mathbb{C}^{M \times N_{\z}},
\quad
& \bm{z}(f,t) &\in \mathbb{C}^{N_{\z}}.
\end{alignat}
Here, $f = 1,\ldots,F$ and $t = 1,\ldots,T$ denote the frequency bin and time frame indexes, respectively.
Also, $s_i(f,t) \in \mathbb{C}$ and $\bm{z}(f,t) \in \mathbb{C}^{N_{\z}}$ are the signals of the target source $i = 1,\ldots,K$ and the background noises, respectively.
$\{ \bm{a}_i(f,\tau) \}_{\tau = 0}^{N_\tau}$ and $\{ A_{\z}(f,\tau) \}_{\tau = 0}^{N_\tau}$ are the acoustic transfer functions (ATFs) for the corresponding sources,
where $N_\tau + 1$ is the length of the ATFs.
The BSE problem addressed in this paper is defined as the problem of estimating
the sources of interest, i.e.,
$\{ s_i(f,t) \}_{i,f,t}$.
We assume that $K$ is given and the background noises are more stationary than the sources of interest.
\section{Probabilistic model}
\label{sec:model}
We present the proposed IVE-conv model that integrates WPE~\cite{yoshioka2011wpe-ica,yoshioka2012wpe-mimo,nakatani2010wpe} and IVE~\cite{scheibler2019overiva,scheibler2020fast,scheibler2020ive,ike2020overiva,ike2020ive,koldovsky2018ive,jansky2020adaptive}.
Let $\hat{\bm{x}}(f,t) \in \mathbb{C}^{M + L}$ with $L = M(D_2 - D_1 + 1)$ and $0 < D_1 \leq D_2$ be given by
\begin{align}
\nonumber
\hat{\bm{x}}(f,t) = [\, \bm{x}(f,t)^\top, \bm{x}(f,t - D_1)^\top, \ldots, \bm{x}(f,t - D_2)^\top \,]^\top,
\end{align}
where $\empty^\top$ is the transpose of a vector.
Suppose that there exists a convolutional filter
$\hat{W}(f) \in \mathbb{C}^{(M + L) \times M}$ satisfying
\begin{align}
\label{eq:s=Px}
s_i(f,t) &= \hat{\bm{w}}_i(f)^h \hat{\bm{x}}(f,t) \in \mathbb{C}, \quad i \in \{1,\ldots,K\},
\\
\label{eq:z=Px}
\bm{z}(f,t) &= \hat{W}_{\z}(f)^h \hat{\bm{x}}(f,t) \in \mathbb{C}^{N_{\z}},
\\
\hat{W}(f) &= [\hat{\bm{w}}_1(f), \ldots, \hat{\bm{w}}_{K}(f), \hat{W}_{\z}(f)] \in \mathbb{C}^{(M + L) \times M},
\end{align}
where $\empty^h$ denotes the conjugate transpose.
As pointed out in~\cite{boeddeker2020jointly,nakatani2020cbf}, convolutional filter $\hat{W}(f)$ can be decomposed into
the WPE prediction matrix $G(f) \in \mathbb{C}^{L \times M}$ and the ICA separation matrix $W(f) \in \mathbb{C}^{M \times M}$:
\begin{align}
\label{eq:GW}
\hat{W}(f) &=
\begin{bmatrix}
W(f) \\
-G(f) W(f)
\end{bmatrix}
=
\begin{bmatrix}
I_M \\
-G(f)
\end{bmatrix}
W(f).
\end{align}
Here, $I_d \in \mathbb{C}^{d \times d}$ is the identity matrix.
We also assume that the original source signals are mutually independent and that
the target source (resp. noise) signals obey time-dependent (resp. time-independent) complex Gaussian distributions
in the same way as in IVE~\cite{ike2020ive,ike2020overiva,scheibler2020fast,scheibler2020ive,scheibler2019overiva,jansky2020adaptive,koldovsky2018ive}:
\begin{align}
\label{eq:si:vec}
\bm{s}_i(t) &\coloneqq [s_i(1,t), \ldots, s_i(F,t)]^\top \in \mathbb{C}^F,
\\
\label{eq:si:gauss}
\bm{s}_i(t) &\sim \mathbb{C}\mathcal{N} \left( \bm{0}_F, v_i(t) I_F \right), \quad v_i(t) \in \mathbb{R}_{> 0},
\\
\label{eq:z-pdf}
\bm{z}(f,t) &\sim \mathbb{C}\mathcal{N} \left( \bm{0}_{N_{\z}}, \Omega(f) \right),
\quad \Omega(f) \in \mathcal{S}_{++}^{N_{\z}},
\\
& \hspace{-10 mm}
\label{eq:iid}
\text{
$\{ \bm{s}_i(t), \bm{z}(f,t) \}_{i,f,t}$ are mutually independent.
}
\end{align}
Here, $\bm{0}_d \in \mathbb{C}^d$ is the zero vector,
$\mathcal{S}_{++}^d$ denotes the set of all Hermitian positive definite matrices of size $d \times d$,
and $\mathbb{R}_{> 0} = \mathcal{S}_{++}^1$.
Assumption \eqref{eq:z-pdf} that the background noise signal is stationary and Gaussian distributed is essential for developing computationally efficient algorithms.
In Section~\ref{sec:exp}, we will experimentally show that this assumption can be violated to some extent when applied in practice.
The IVE-conv model is defined by \eqref{eq:s=Px}--\eqref{eq:iid}.
The parameters $\hat{W} \coloneqq \{ \hat{W}(f) \}_f$,
$v \coloneqq \{ v_i(t) \}_{i,t}$,
and $\Omega \coloneqq \{ \Omega(f) \}_f$
can be estimated based on maximum likelihood,
which is equivalent to minimizing $\hat{g}(\hat{W},\Omega,v) \coloneqq - \frac{1}{T} \log p(\bm{x})$:
\begin{align}
\nonumber
\hat{g}(\hat{W}, \Omega, v)
&=
\sum_{f = 1}^F
\sum_{i = 1}^{K}
\Big[
\hat{\bm{w}}_i(f)^h \hat{R}_i(f) \hat{\bm{w}}_i(f) + \frac{1}{T} \sum_{t = 1}^T \log v_i(t)
\Big]
\\
\nonumber
&
+ \sum_{f = 1}^F \trace \big( \hat{W}_{\z}(f)^h \hat{R}_{\z}(f) \hat{W}_{\z}(f) \Omega(f)^{-1} \big)
\\
\label{eq:obj}
&
- \sum_{f = 1}^F \log \det \big( W(f)^h W(f) \Omega(f)^{-1} \big),
\\
\nonumber
\hat{R}_i(f) &=
\frac{1}{T} \sum_{t = 1}^T
\frac{\hat{\bm{x}}(f,t) \hat{\bm{x}}(f,t)^h }{v_i(t)}, \quad i \in \{1,\ldots,K,\z\},
\end{align}
where we define $v_{\z}(t) = 1$ for all $t = 1,\ldots,T$
(see, e.g.,~\cite{ike2019ilrma} for the derivation of $\hat{g}$).
If $L = 0$ and $\hat{W}(f) = W(f)$, then objective function $\hat{g}$ has the same form as the counterparts of ICA, IVA, and IVE,
which has been discussed extensively in the literature~\cite{pham2001,degerine2006maxdet,yeredor2012SeDJoCo,ono2010auxica,ono2011auxiva,ike2020ive,ike2020overiva,scheibler2019overiva,scheibler2020fast,scheibler2020ive}.
For $L \geq 1$, $K = M$, and $N_{\z} = 0$, the optimization problem has been discussed explicitly in~\cite{ike2019ilrma,nakatani2020computationally}
and implicitly in~\cite{yoshioka2011wpe-ica,yoshioka2012wpe-mimo,kagami2018wpe-ilrma}.
\begin{remark}
\label{remark:WPE-ICA}
The proposed IVE-conv is an integration of WPE and IVE.
If we replace IVE with ICA, IVA, or independent low-rank matrix analysis (ILRMA)~\cite{kitamura2016ilrma},
then the IVE-conv turns out to be the method that integrates
WPE with ICA~\cite{yoshioka2011wpe-ica},
WPE with IVA (IVA-conv)~\cite{nakatani2020computationally},
or WPE with ILRMA~\cite{kagami2018wpe-ilrma,ike2019ilrma}, respectively.
In this sense, the novelty of the IVE-conv model might seem limited.
However, if $M$ gets large, computationally efficient algorithms can be developed only for IVE-conv,
which is our main contribution.%
\footnote{
This letter is based on our work~\cite{ike2020asj} reported in a domestic workshop in which an algorithm similar to but less efficient than Algorithm 1 (proposed in Section~\ref{sec:alg:1}) was first presented.
Recently, as follow-up research of our previous work~\cite{ike2020asj}, a method has been developed~\cite{togami2020over}
that replaces the IVE-conv spectrum model~\eqref{eq:si:vec}--\eqref{eq:si:gauss} with a model using nonnegative matrix factorization (NMF)~\cite{lee1999nmf,fevotte2009,smaragdis2003NMF}.
In contrast, here, we develop a more efficient Algorithm 1 in a rigorous way by providing new insight into the IVE-conv optimization problem in Section~\ref{sec:reduction}.
In addition, Algorithm 2 proposed in Section~\ref{sec:alg:2} is completely new.
}
\end{remark}
\section{Optimization algorithm}
\label{sec:alg}
To obtain a local optimal solution for the minimization problem of \eqref{eq:obj},
two block coordinate descent (BCD~\cite{tseng2001convergence}) algorithms summarized in Table~\ref{table:alg} will be developed.
All the algorithms shown in Table~\ref{table:alg} update $v$ and $(\hat{W}, \Omega)$ alternately.
The flowchart of IVE-conv is shown in Figure~\ref{fig:process-flow}.
When $(\hat{W}, \Omega)$ are kept fixed, $v$ can be optimized as
\begin{align}
v_i(t) = \frac{1}{F} \| \bm{s}_i(t) \|_2^2 = \frac{1}{F} \bm{s}_i(t)^h \bm{s}_i(t).
\end{align}
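In code this update is a one-liner per source; the following Python sketch assumes \texttt{S\_i} holds the $F \times T$ array of the separated signal $s_i(f,t)$:
\begin{verbatim}
import numpy as np

def update_source_power(S_i):
    # S_i: F x T complex STFT of source i.
    F = S_i.shape[0]
    return np.sum(np.abs(S_i)**2, axis=0) / F   # v_i(t)
\end{verbatim}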
In what follows, we will develop two BCDs to optimize
$(\hat{W}, \Omega)$ while keeping $v$ fixed.
Because this subproblem can be addressed independently for each frequency bin,
we focus only on optimizing $\hat{W}(f)$ and $\Omega(f)$, and the frequency bin index $f$ is dropped to ease the notation.
Also, we will denote the submatrices of $\hat{W}$ and $\hat{R}_i$, $i \in \{1,\ldots,K,\z\}$ as
\begin{align}
\label{eq:W}
\hat{W} &=
\begin{bmatrix}
W \\
-G W
\end{bmatrix}
=
\begin{bmatrix}
W \\
\bar{W}
\end{bmatrix}
=
\left[
\begin{array}{c|c|c|c}
\bm{w}_1 & \cdots & \bm{w}_K & W_{\z}
\\
\bar{\bm{w}}_1 & \cdots & \bar{\bm{w}}_K & \bar{W}_{\z}
\end{array}
\right],
\\
\nonumber
\bm{w}_i &\in \mathbb{C}^{M},
\quad \bar{\bm{w}}_i \in \mathbb{C}^{L},
\quad W_{\z} \in \mathbb{C}^{M \times N_{\z}},
\quad \bar{W}_{\z} \in \mathbb{C}^{L \times N_{\z}},
\\
\nonumber
\hat{R}_i &= \begin{bmatrix}
R_i & \bar{P}_i^h \\
\bar{P}_i & \bar{R}_i
\end{bmatrix} \in \mathcal{S}_{++}^{M + L},
\quad
\bar{P}_i
\in \mathbb{C}^{L \times M},
\quad \bar{R}_i \in \mathcal{S}_{++}^L.
\end{align}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.99\linewidth]{fig/process_flow.pdf}
\end{center}
\vspace{-4 mm}
\caption{
Flowchart of IVE-conv
}
\label{fig:process-flow}
\vspace{-3 mm}
\end{figure}
\subsection{Reduction from IVE-conv to IVE when $v$ is kept fixed}
\label{sec:reduction}
Before developing the algorithms, we show that the problem of minimizing $\hat{g}$ with respect to $\hat{W}$ and $\Omega$ (when source power spectra $v = \{v_i(t)\}_{i,t}$ are kept fixed), i.e.,
\begin{align}
\label{problem:maxdet-conv}
(\hat{W}, \Omega) \in \argmin_{(\hat{W},\, \Omega)} \hat{g}(\hat{W}, \Omega, v),
\end{align}
can be reduced to problem \eqref{problem:maxdet} below
that has been addressed in the study of
IVE~\cite{ike2020ive,ike2020overiva,scheibler2019overiva,scheibler2020fast,scheibler2020ive}.
Every optimal $\bar{W}$ (the lower part of $\hat{W}$) in problem~\eqref{problem:maxdet-conv} satisfies the stationary condition~\cite{nocedal-Jorge2006optimization},
which is computed as
\begin{align}
\nonumber
\frac{\partial \hat{g}}{\partial \bar{\bm{w}}_i^\ast}
&= \bm{0}_L
&~~&\Longleftrightarrow&~~&
\bar{P}_i \bm{w}_i + \bar{R}_i \bar{\bm{w}}_i = \bm{0}_L \in \mathbb{C}^L,
\\
\label{eq:opt:wi-wpe}
&&~~&\Longleftrightarrow&~~&
\bar{\bm{w}}_i = -\bar{R}_i^{-1} \bar{P}_i \bm{w}_i \in \mathbb{C}^L,
\\
\nonumber
\frac{\partial \hat{g}}{\partial \bar{W}_{\z}^\ast}
&= O
&~~&\Longleftrightarrow&~~&
\bar{P}_{\z} W_{\z} + \bar{R}_{\z} \bar{W}_{\z} = O \in \mathbb{C}^{L \times N_{\z}},
\\
\label{eq:opt:Wz-wpe}
&&~~&\Longleftrightarrow&~~&
\bar{W}_{\z} = -\bar{R}_{\z}^{-1} \bar{P}_{\z} W_{\z} \in \mathbb{C}^{L \times N_{\z}},
\end{align}
where $\empty^\ast$ denotes the element-wise conjugate.
Eqs. \eqref{eq:opt:wi-wpe} and \eqref{eq:opt:Wz-wpe} imply that the optimal $\bar{W}$ is a function of $W$
and that the variable $\bar{W}$ can be removed from $\hat{g}$ by substituting \eqref{eq:opt:wi-wpe} and \eqref{eq:opt:Wz-wpe}.
In other words, problem \eqref{problem:maxdet-conv} is equivalent to the following problem through \eqref{eq:opt:wi-wpe} and \eqref{eq:opt:Wz-wpe}:
\begin{align}
\label{problem:maxdet}
&
(W, \Omega) \in \argmin_{(W,\, \Omega)} g(W, \Omega, v),
\\
\nonumber
g &=
\sum_{i = 1}^{K} \bm{w}_i^h V_i \bm{w}_i
+ \trace \big( W_{\z}^h V_{\z} W_{\z} \Omega^{-1} \big)
- \log \det \big( W^h W \Omega^{-1} \big),
\\
\label{eq:Vi}
V_i &\coloneqq R_i - \bar{P}_i^h \bar{R}_i^{-1} \bar{P}_i \in \mathcal{S}_{++}^M, \quad i \in \{ 1,\ldots,K,\z \}.
\end{align}
Since problem \eqref{problem:maxdet} is nothing but the problem
addressed in the study of IVE, we can directly apply efficient algorithms that have been developed for
IVE~\cite{ike2020ive,ike2020overiva,scheibler2019overiva,scheibler2020fast,scheibler2020ive}.
Our new algorithm developed in Section~\ref{sec:alg:1} is based on this observation.
\begin{table*}[t]
\begin{center}
{\footnotesize
\caption{Optimization process of BCD}
\label{table:alg}
\begin{tabular}{cccll} \hline
& Method & Reference & \multicolumn{1}{c}{Optimization process$\empty^{1)}$} & \multicolumn{1}{c}{Computational time complexity}
\\ \hline
\multirow{2}{*}{Conventional}
& \multirow{2}{*}{IVA-conv$\empty^{2)}$}
& \cite{ike2019ilrma,nakatani2020computationally}
& $v \rightarrow \hat{\bm{w}}_1 \rightarrow \cdots \rightarrow \hat{\bm{w}}_K \rightarrow \hat{\bm{w}}_{K + 1} \rightarrow \cdots \rightarrow \hat{\bm{w}}_M$
& $\mathrm{O} (M L^2 FT + M L^3 F )$
\\
&& \cite{yoshioka2011wpe-ica,yoshioka2012wpe-mimo,kagami2018wpe-ilrma,nakatani2020computationally}
& $v \rightarrow G \rightarrow \bm{w}_1 \rightarrow \cdots \rightarrow \bm{w}_K \rightarrow \bm{w}_{K + 1} \rightarrow \cdots \rightarrow \bm{w}_M$
& $\mathrm{O}( M L^2 FT + M^3 L^3 F )$ in~\cite{nakatani2020computationally}
\\
\hline
\multirow{2}{*}{Proposed}
& \multirow{2}{*}{IVE-conv$\empty^{2)}$}
& \S\ref{sec:alg:1} (Algorithm 1)
& $v \rightarrow \hat{\bm{w}}_1 \rightarrow (\hat{W}_{\z}, \Omega) \rightarrow \cdots \rightarrow \hat{\bm{w}}_{K} \rightarrow (\hat{W}_{\z}, \Omega)$
& \multirow{2}{*}{$\mathrm{O}( (K + 1) L^2 FT + (K + 1) L^3 F )$}
\\
&& \S\ref{sec:alg:2} (Algorithm 2)
& $v \rightarrow G \rightarrow \bm{w}_1 \rightarrow (W_{\z}, \Omega) \rightarrow \cdots \rightarrow \bm{w}_{K} \rightarrow (W_{\z}, \Omega)$
\\
\hline
\multicolumn{5}{l}{$\empty^{1)}$
We use the notations $\hat{W}_{\z} = [\hat{ \bm{w}}_{K + 1},\ldots,\hat{\bm{w}}_M ] \in \mathbb{C}^{(M + L) \times (M - K)}$
and $W_{\z} = [ \bm{w}_{K + 1}, \ldots, \bm{w}_M ] \in \mathbb{C}^{M \times (M - K)}$.
}
\\
\multicolumn{5}{l}{$\empty^{2)}$
The IVA and IVE source models can be freely changed to the ICA and ILRMA source models, and so we discuss only the IVA or IVE models.
}
\end{tabular}
}
\end{center}
\vspace{-2 mm}
\end{table*}
\subsection{Algorithm 1: Update each convolutional filter one by one}
\label{sec:alg:1}
To solve problem \eqref{problem:maxdet-conv},
we propose a cyclic BCD algorithm that updates
$\hat{\bm{w}}_1 \rightarrow (\hat{W}_{\z}, \Omega) \rightarrow \cdots \rightarrow \hat{\bm{w}}_K \rightarrow (\hat{W}_{\z}, \Omega)$ one by one by solving the following subproblems:
\begin{align}
\label{problem:ip1:wi}
\hat{\bm{w}}_i &\in \argmin_{\hat{\bm{w}}_i} \hat{g}(\hat{\bm{w}}_1,\ldots,\hat{\bm{w}}_K,\hat{W}_{\z}, \Omega, v),
\\
\label{problem:ip1:Wz}
(\hat{W}_{\z}, \Omega) &\in \argmin_{(\hat{W}_{\z},\, \Omega)} \hat{g}(\hat{\bm{w}}_1,\ldots,\hat{\bm{w}}_K,\hat{W}_{\z}, \Omega, v).
\end{align}
From the observation given in Section~\ref{sec:reduction},
these subproblems can be equivalently transformed to
\begin{align}
\label{problem:wi:alg1}
\bm{w}_i &\in \argmin_{\bm{w}_i} \bm{w}_i^h V_i \bm{w}_i - \log \det \big( W^h W \big),
\\
\label{problem:Wz:alg1}
(W_{\z}, \Omega) &\in \argmin_{(W_{\z},\, \Omega)} g_{\z}(W_{\z},\Omega),
\\
\nonumber
g_{\z}(W_{\z},\Omega) &= \trace \big( W_{\z}^h V_{\z} W_{\z} \Omega^{-1} \big) - \log \det \big( W^h W \Omega^{-1} \big)
\end{align}
through \eqref{eq:opt:wi-wpe} and \eqref{eq:opt:Wz-wpe}, respectively.
Here, $V_i$ and $V_{\z}$ are defined by \eqref{eq:Vi}.
As shown in~\cite{ono2011auxiva}, problem \eqref{problem:wi:alg1} can be solved as
\begin{align}
\label{eq:ip1:wi:1}
\bm{u}_i &\leftarrow ( W^h V_i )^{-1} \bm{e}_i \in \mathbb{C}^M,
\\
\label{eq:ip1:wi:2}
\bm{w}_i &\leftarrow \bm{u}_i ( \bm{u}_i^h V_i \bm{u}_i )^{-\frac{1}{2}} \in \mathbb{C}^M,
\end{align}
where $\bm{e}_i$ is the $i$-th column of $I_M$.
On the other hand, as shown in {\cite[Proposition 4]{ike2020ive}}, problem \eqref{problem:Wz:alg1} can be solved as
\begin{align}
\label{eq:ip1:Wz}
W_{\z} &\leftarrow \begin{bmatrix}
( W_{\s}^h V_{\z} E_{\s} )^{-1} (W_{\s}^h V_{\z} E_{\z} )
\\
-I_{N_{\z}}
\end{bmatrix} \in \mathbb{C}^{M \times N_{\z}},
\\
\label{eq:ip1:Omega}
\Omega &\leftarrow W_{\z}^h V_{\z} W_{\z} \in \mathcal{S}_{++}^{N_{\z}},
\end{align}
where
$W_{\s} \coloneqq [\bm{w}_1,\ldots,\bm{w}_K] \in \mathbb{C}^{M \times K}$,
$E_{\s} \in \mathbb{C}^{M \times K}$ is the first $K$ columns of $I_M$,
and $E_{\z} \in \mathbb{C}^{M \times N_{\z}}$ is the last $N_{\z}$ columns of $I_M$,
i.e., $[E_{\s}, E_{\z}] = I_M$.
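One sweep of these updates at a single frequency bin can be sketched as follows, assuming the weighted covariances $V_1,\ldots,V_K,V_{\z}$ of \eqref{eq:Vi} have already been formed; $\Omega$ is omitted because, as noted in the remark below, it need not be updated:
\begin{verbatim}
import numpy as np

def ip1_sweep(W, V, K):
    # W: M x M matrix [w_1, ..., w_K, W_z] (complex);
    # V: list [V_1, ..., V_K, V_z] of M x M weighted
    #    covariances (index K holds the noise block V_z).
    M = W.shape[0]
    V_z = V[K]
    for i in range(K):
        # Source update: u = (W^h V_i)^{-1} e_i, then scale
        # so that w_i^h V_i w_i = 1.
        e = np.zeros(M)
        e[i] = 1.0
        u = np.linalg.solve(W.conj().T @ V[i], e)
        W[:, i] = u / np.sqrt(np.real(u.conj() @ V[i] @ u))
        # Noise-subspace update in closed form.
        B = W[:, :K].conj().T @ V_z              # K x M
        top = np.linalg.solve(B[:, :K], B[:, K:])
        W[:, K:] = np.vstack([top, -np.eye(M - K)])
    return W
\end{verbatim}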
\begin{remark}
The update formula for $\bm{w}_i$,
i.e., \eqref{eq:opt:wi-wpe}, \eqref{eq:Vi}, \eqref{eq:ip1:wi:1}, and \eqref{eq:ip1:wi:2},
has already been developed in our previous papers \cite{nakatani2020computationally,ike2020asj} in a different manner.
In this subsection, we reveal that it can also be developed by exploiting the stationary condition.
The efficient update formula for $\hat{W}_{\z}$,
i.e., \eqref{eq:opt:Wz-wpe}, \eqref{eq:Vi}, and \eqref{eq:ip1:Wz}, is newly developed based on the stationary Gaussian assumption of the background noises.
There is no need to update $\Omega$ as it does not affect the behavior of the algorithm.
\end{remark}
\subsection{Algorithm 2: Alternate update of WPE and ICA}
\label{sec:alg:2}
In Section~\ref{sec:model}, we recalled by~\eqref{eq:GW} that convolutional filter $\hat{W}$ can be decomposed into WPE prediction matrix $G$ and ICA separation matrix $W$.
Here, we develop a new cyclic BCD that updates
$G \rightarrow \bm{w}_1 \rightarrow W_{\z} \rightarrow \cdots \rightarrow \bm{w}_K \rightarrow W_{\z}$
one by one by solving the following subproblems:
\begin{alignat}{3}
\label{problem:G}
G &\in \argmin_{G} \hat{g} (G, \bm{w}_1,\ldots,\bm{w}_K, W_{\z}, \Omega, v),
\\
\label{problem:wi}
\bm{w}_i &\in \argmin_{\bm{w}_i} \hat{g} (G, \bm{w}_1,\ldots,\bm{w}_K, W_{\z}, \Omega, v),
\\
\label{problem:Wz}
(W_{\z}, \Omega) &\in \argmin_{(W_{\z},\, \Omega)} \hat{g} (G, \bm{w}_1,\ldots,\bm{w}_K, W_{\z}, \Omega, v).
\end{alignat}
When $K = M$ and there are no noise components,
problems \eqref{problem:G} and \eqref{problem:wi} have already been discussed in \cite{kagami2018wpe-ilrma,nakatani2020computationally,yoshioka2011wpe-ica,yoshioka2012wpe-mimo}.
However, the conventional algorithms to solve \eqref{problem:G}
suffer from a huge computational cost as shown in Table~\ref{table:alg}.
We thus propose a more computationally efficient algorithm.
\subsubsection{Algorithm to solve problems \eqref{problem:wi} and \eqref{problem:Wz}}
We first explain how to solve problems \eqref{problem:wi} and \eqref{problem:Wz}.
By substituting Eq.~\eqref{eq:GW} into objective function $\hat{g}$,
these problems can be simply expressed as
problems \eqref{problem:wi:alg1} and \eqref{problem:Wz:alg1}, respectively,
except that $V_i$ is replaced by the following $V_i'$ for each $i \in \{ 1,\ldots,K, \z\}$:
\begin{align}
V_i' &=
\begin{bmatrix} I_M \\ -G \end{bmatrix}^h
\hat{R}_i
\begin{bmatrix} I_M \\ -G \end{bmatrix}
\in \mathcal{S}_{++}^M.
\end{align}
Thus, in the same way as in the previous subsection,
problem \eqref{problem:wi} can be solved as \eqref{eq:ip1:wi:1}--\eqref{eq:ip1:wi:2}, where
$V_i$ is replaced by $V_i'$.
Also, problem \eqref{problem:Wz} can be solved as \eqref{eq:ip1:Wz}--\eqref{eq:ip1:Omega}, where $V_{\z}$ is replaced by $V_{\z}'$.
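In code, forming these dereverberated covariances from the current prediction matrix $G$ is a single matrix congruence; the following Python sketch illustrates it:
\begin{verbatim}
import numpy as np

def dereverberated_covariance(R_hat, G):
    # R_hat: (M+L) x (M+L) weighted covariance; G: L x M WPE
    # prediction matrix.  Returns [I; -G]^h R_hat [I; -G].
    M = G.shape[1]
    B = np.vstack([np.eye(M), -G])
    return B.conj().T @ R_hat @ B
\end{verbatim}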
\subsubsection{Algorithm to solve problem \eqref{problem:G}}
We next propose an algorithm to solve \eqref{problem:G}
with lower computational time complexity than the conventional ones.
Every optimal $G \in \mathbb{C}^{L \times M}$ of problem \eqref{problem:G} (when $W$, $\Omega$, and $v$ are kept fixed) satisfies the stationary condition, which can be computed as
\begin{alignat}{3}
&&&&&~~ O_{L,M} = \frac{\partial \hat{g}}{\partial {G}^\ast}
= - \left. \frac{\partial \hat{g}}{\partial \bar{W}^\ast} \right|_{\bar{W} = -GW} W^h,
\\
\label{eq:G:1}
&&&\Longleftrightarrow&~&
\begin{cases}
G \bm{w}_i = \bar{R}_i^{-1} \bar{P}_i \bm{w}_i,
\quad i = 1,\ldots,K,
\\
G W_{\z} = \bar{R}_{\z}^{-1} \bar{P}_{\z} W_{\z},
\end{cases}
\\
&&&\Longleftrightarrow&~~~&
\label{eq:G}
G =
\begin{bmatrix}
\bar{R}_1^{-1} \bar{P}_1 \bm{w}_1 \mid \cdots \mid
\bar{R}_K^{-1} \bar{P}_K \bm{w}_K \mid
\bar{R}_{\z}^{-1} \bar{P}_{\z} W_{\z}
\end{bmatrix} W^{-1}.
\end{alignat}
Here, we used \eqref{eq:opt:wi-wpe} and \eqref{eq:opt:Wz-wpe} to derive \eqref{eq:G:1}.
Because problem \eqref{problem:G} is (strictly) convex, the update formula \eqref{eq:G} gives the (unique) global optimal solution.
The computational time complexity to calculate \eqref{eq:G} is shown in Table~\ref{table:alg}, which is much smaller than that of the conventional methods.
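A direct Python sketch of the closed-form update \eqref{eq:G} is given below; \texttt{Rbar} and \texttt{Pbar} are hypothetical containers for the blocks $\bar{R}_i$ and $\bar{P}_i$:
\begin{verbatim}
import numpy as np

def update_G(W, Rbar, Pbar, K):
    # W: M x M matrix [w_1, ..., w_K, W_z];
    # Rbar[i]: L x L and Pbar[i]: L x M blocks of R_hat_i,
    # with i = 0..K-1 for the sources and i = K for the noise.
    cols = [np.linalg.solve(Rbar[i], Pbar[i] @ W[:, i:i + 1])
            for i in range(K)]
    cols.append(np.linalg.solve(Rbar[K], Pbar[K] @ W[:, K:]))
    return np.hstack(cols) @ np.linalg.inv(W)
\end{verbatim}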
\section{Experiment}
\label{sec:exp}
In this experiment, we evaluated the signal extraction and runtime performance of the four methods described in Table~\ref{table:exp-alg}.
\begin{table}[t]
{
\footnotesize
\centering
\caption{Methods tested in experiment}
\label{table:exp-alg}
\vspace{-2 mm}
\begin{tabular}{c|l}
\hline
Method & \multicolumn{1}{c}{Description}
\\ \hline
IVE~\cite{scheibler2019overiva,ike2020ive} &
Identical to IVE-conv-(Alg1) with $L = 0$
\\
\hline
\multirow{2}{*}{IVA-conv~\cite{nakatani2020computationally}} &
An integration of WPE and IVA, which is identical to
\\
& IVE-conv-(Alg1) with $K = M$, $D_1 = 2$, and $D_2 = 5$.
\\
\hline
IVE-conv-(Alg1) & IVE-conv with $D_1 = 2$ and $D_2 = 5$ using Algorithm 1.
\\
\hline
\multirow{2}{*}{IVE-conv-(Alg2)} & IVE-conv with $D_1 = 2$ and $D_2 = 5$ using Algorithm 2.
\\
& For every five updates to $v$ and $W$, we updated $G$ once.
\\
\hline
\end{tabular}
}
\vspace{-0 mm}
\end{table}
\textit{Dataset}:
We generated synthesized convolutive noisy mixtures of two speech signals.
We obtained speech signals from the test set of the TIMIT corpus~\cite{timit}
and concatenated them so that the length of each signal exceeded 10 seconds.
We obtained point-source noise signals recorded in a cafe (\textsf{CAF}) and a pedestrian area (\textsf{PED})
from the third `CHiME' Speech Separation and Recognition Challenge (CHiME-3)~\cite{chime3}.
Note that the noise signals are nonstationary, but are considered to be more stationary than speech signals.
We obtained RIR data recorded in room \textsf{OFC} from the RWCP Sound Scene Database in Real Acoustical Environments~\cite{rwcp}.
The reverberation time ($\mathrm{RT}_{60}$) of room \textsf{OFC} is 780 ms.
The generated mixtures consisted of $K = 2$ speech signals and six noise signals randomly chosen from the above dataset.
The SNR of each mixture was adjusted to
$\mathrm{SNR} = 10 \log_{10} \frac{
(\lambda_1^{(\mathrm{s})} + \lambda_2^{(\mathrm{s})}) / 2
}{
\lambda_1^{(\mathrm{n})} + \cdots + \lambda_6^{(\mathrm{n})}
} = 5$ or 10 [dB],
where
$\lambda_i^{(\mathrm{s})}$
and
$\lambda_j^{(\mathrm{n})}$
denote the sample variances of the $i$-th speech signal ($i = 1,2$)
and the $j$-th noise signal ($j = 1,\ldots,6$).
\textit{Criteria}:
Using \textit{museval}~\cite{museval},
we measured the signal-to-distortion ratio (SDR)~\cite{vincent2006sdr}
between the separated and oracle spatial images of the speech signals at the first microphone.
The oracle spatial images were obtained by truncating the RIRs at 32 ms (i.e., the points after 32 ms were replaced by 0)
and convolving them with the speech signals.
\textit{Conditions}:
The sampling rate was 16 kHz,
the frame length was 2048 (128 ms),
and the frame shift was 512 (32 ms).
\textit{Initialization}:
For all methods, we initialized the convolutional filter as $W(f) = -I_M$ and $\bar{W}(f) = G(f) = O$,
and then updated $W_{\z}(f)$ once using~\eqref{eq:ip1:Wz} before the optimization.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{\linewidth}
\includegraphics[width=\linewidth]{fig/SDR.K2_M8_noi6_snr5_nonoverlap_trir32_CAF.pdf}
\end{subfigure}
\begin{subfigure}[b]{\linewidth}
\includegraphics[width=\linewidth]{fig/SDR.K2_M8_noi6_snr10_nonoverlap_trir32_PED.pdf}
\end{subfigure}
\vspace{-6 mm}
\caption{
SDR [dB] performance as a function of runtime.
The noise condition was \textsf{CAF} with $\mathrm{SNR} = 5$ [dB] (top) or \textsf{PED} with $\mathrm{SNR} = 10$ [dB] (bottom),
and the number of microphones was $M = 4$ (left) or 6 (right).
Results shown were averaged over 50 mixtures
and obtained by running the algorithms on a PC with ``Intel(R) Core(TM) i7-7820 CPU @ 3.60 GHz'' using a single thread.
The average length of the mixture signals is 12.51 sec.
The separated spatial image was obtained by $(W(f)^{-h} \bm{e}_i) (\hat{\bm{w}}_i(f)^h \hat{\bm{x}}(f,t)) \in \mathbb{C}^M$
for each source $i = 1,2$.
}
\label{fig:SDR}
\end{figure}
\subsection{Experimental results}
Figure~\ref{fig:SDR} shows the convergence of the SDR when each method was applied.
Compared to IVE, which does not handle reverberation, both IVA-conv and IVE-conv showed higher SDRs.
Although the SDR performance at the convergence points is comparable,
the convergence of the proposed IVE-conv was much faster than that of IVA-conv since the computational cost to update $\hat{W}_{\z}$ is much lower.
This fast convergence behavior is important in practice,
since using more microphones can improve the SDR at the expense of increased runtime as observed in Fig.~\ref{fig:SDR}.
IVE-conv-(Alg2) converged faster than IVE-conv-(Alg1), but gave a slightly lower SDR.
\section{Conclusion}
To achieve joint source separation and dereverberation with a small computational cost,
we proposed IVE-conv, which is an integration of IVE and WPE.
We also developed two efficient BCD algorithms for optimizing IVE-conv.
The experimental results showed that IVE-conv yields significantly faster convergence than the integration of IVA and WPE while maintaining its separation performance.
\vfill\pagebreak
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sect:intro}
Consider the problem of estimating an unknown parameter $x\in\mathbb{R}^n$ based on a linear measurement $y\in\mathbb{R}^m$ corrupted by additive noise $w \in \mathbb{R}^m$. This setup is formalized through the linear measurement model
\begin{equation}
\label{eq:linear:observation}
y = H x + w,
\end{equation}
where the observation matrix $H \in \mathbb{R}^{m \times n}$ is assumed to be known. We further assume that the distribution~$\mathbb{P}_w$ of~$w$ has finite second moments and that~$w$ is independent of~$x$. Thus, the conditional distribution~$\mathbb{P}_{y|x}$ of~$y$ given $x$ is obtained by shifting~$\mathbb{P}_w$ by~$Hx$.
We emphasize that none of the subsequent results rely on a particular ordering of the dimension~$n$ of the parameter~$x$ and the dimension~$m$ of the measurement~$y$.
The linear measurement model~\eqref{eq:linear:observation} is fundamental for numerous applications in engineering ({\em e.g.}, linear systems theory \cite{ref:golnaraghi2017automatic, ref:ogata2009modern}), econometrics ({\em e.g.}, linear regression \cite{ref:stock2015introduction, ref:wooldridge2010econometric}, time series analysis \cite{ref:chatfield2016analysis, ref:hamilton1994time}), machine learning and signal processing ({\em e.g.}, Kalman filtering \cite{ref:kay1993fundamentals, ref:murphy2012machine, ref:oppenheim2015signals}) or information theory ({\em e.g.}, multiple-input multiple-output systems \cite{ref:cover2006elements, ref:mackay2003information}) etc.
In addition, model~\eqref{eq:linear:observation} also emerges naturally in many applications in operations research such as traffic management and control~\cite{ref:lint2012applications}, inventory control~\cite{ref:aviv2003time}, advertising and promotion budgeting~\cite{ref:sriram2007optimal} or resource management~\cite{ref:rubel2017robust}.
An estimator of $x$ given $y$ is a measurable function $\psi: \mathbb{R}^m \rightarrow \mathbb{R}^n$ that grows at most linearly. Thus, there exists $C>0$ such that $|\psi(y)|\le C(1+\|y\|)$ for all $y\in\mathbb{R}^m$. The function value $\psi(y)$ is the prediction of $x$ based on the measurement $y$ under the estimator $\psi$. In the following we denote the family of all estimators by~${\mc F}$. The quality of an estimator is measured by a risk function $R:{\mc F}\times\mathbb{R}^n\rightarrow \mathbb{R}$, which quantifies the mismatch between the parameter $x$ and its prediction $\psi(y)$. A popular risk function is the mean square error~(MSE)
\begin{align*}
R (\psi, x) = \mathds{E}_{\mathbb{P}_{y|x}} \left[ \| x - \psi(y)\|^2 \right],
\end{align*}
which defines the estimation error as the expected squared Euclidean distance between $\psi(y)$ and $x$. If $x$ was known, then $R (\psi, x)$ could be minimized directly, and the constant estimator $\psi^\star(y)\equiv x$ would be optimal. In practice, however, $x$ is unobservable. Otherwise there would be no need to solve an estimation problem in the first place. With $x$ unknown, it is impossible to minimize the MSE directly. The statistics literature proposes two complementary workarounds for this problem: the Bayesian approach and the minimax approach.
The Bayesian statistician treats $x$ as a random vector governed by a {\em prior} distribution~$\mathbb{P}_x$ that captures her beliefs about $x$ before seeing~$y$ \cite[\S~11.4]{ref:kay1993fundamentals} and solves the minimum MSE (MMSE) estimation problem
\begin{align}\label{eq:Bayes}
\mathop{\text{minimize}}_{\psi\in{\mc F}} ~ \mathds{E}_{\mathbb{P}_x} \left[ R(\psi, x) \right] .
\end{align}
If the distribution $\mathbb{P}_x$ of~$x$ has finite second moments, then~\eqref{eq:Bayes} is solvable. In this case, the optimal estimator, which is usually termed the Bayesian MMSE estimator, is of the form $\psi^\star_{\mathcal B}(y)=\mathds{E}_{\mathbb{P}_{x|y}} [x]$, where the conditional distribution~$\mathbb{P}_{x|y}$ of~$x$ given $y$ is obtained from $\mathbb{P}_x$ and $\mathbb{P}_{y|x}$ via Bayes' theorem. However, the Bayesian MMSE estimator suffers from two conceptual shortcomings. First, $\psi^\star_{\mathcal B}$ is highly sensitive to the prior distribution~$\mathbb{P}_x$, which is troubling if the statistician has little confidence in her beliefs. Second, computing $\psi^\star_{\mathcal B}$ requires precise knowledge of the noise distribution $\mathbb{P}_w$, which is typically unobservable and thus uncertain at least to some extent. Moreover, $\psi^\star_{\mathcal B}$ may generically have a complicated functional form, and evaluating $\psi^\star_{\mathcal B}(y)$ to high precision for a particular measurement $y$ ({\em e.g.}, via Monte Carlo simulation) may be computationally challenging if the dimension of $x$ is high.
These shortcomings are mitigated if we restrict the space ${\mc F}$ of all measurable estimators in~\eqref{eq:Bayes} to the~space
\begin{align} \label{eq:affine}
\Ac = \big\{ \psi \in{\mc F}\;:\; \exists A \in \mathbb{R}^{n \times m}, \, b \in \mathbb{R}^n \text{ with } \psi(y) = A y + b ~\forall y\in\mathbb{R}^m \big\}
\end{align}
of all {\em affine} estimators. In this case the distributions $\mathbb{P}_x$ and $\mathbb{P}_w$ need not be fully known. Instead, in order to evaluate the optimal affine estimator $\psi_{\mathcal A}^\star(y)= A^\star y+b^\star$, it is sufficient to know the mean vectors $\mu_x$ and $\mu_w$ as well as the covariance matrices $\Sigma_x$ and $\Sigma_w$ of the distributions $\mathbb{P}_x$ and $\mathbb{P}_w$, respectively. If $H \Sigma_x H^\top + \Sigma_w \succ 0$, which is the case if the noise covariance matrix has full rank, then the coefficients of the best affine estimator can be computed in closed form. Using~\eqref{eq:linear:observation} together with the independence of $x$ and $w$ one can show that
\begin{align}
\label{eq:optimal:affine}
A^\star = \Sigma_x H^\top (H \Sigma_x H^\top + \Sigma_w)^{-1} \quad\text{and}\quad
b^\star = \mu_x - A^\star (H \mu_x + \mu_w).
\end{align}
If the random vector $(x,y)$ follows a normal distribution, then the best affine estimator is also optimal among all measurable estimators. In general, however, we do not know how much optimality is sacrificed by restricting attention to affine estimators. Moreover, the uncertainty about $\mathbb{P}_x$ and $\mathbb{P}_w$ propagates to their first- and second-order moments. As the coefficients~\eqref{eq:optimal:affine} tend to be highly sensitive to these moments, their uncertainty remains worrying.
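For concreteness, the following Python sketch (an illustration only: the dimensions, moments and noise level are assumptions, and \texttt{numpy} is taken to be available) evaluates the coefficients~\eqref{eq:optimal:affine} on synthetic data and estimates the resulting MSE by Monte Carlo simulation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 5                           # illustrative dimensions (assumption)
H = rng.standard_normal((m, n))       # known observation matrix

mu_x, mu_w = rng.standard_normal(n), np.zeros(m)   # assumed first moments
Sigma_x, Sigma_w = np.eye(n), 0.1 * np.eye(m)      # assumed covariances

# coefficients of the best affine estimator
A_star = Sigma_x @ H.T @ np.linalg.inv(H @ Sigma_x @ H.T + Sigma_w)
b_star = mu_x - A_star @ (H @ mu_x + mu_w)

# Monte Carlo estimate of the MSE of psi(y) = A_star y + b_star
N = 100000
x = rng.multivariate_normal(mu_x, Sigma_x, size=N)
w = rng.multivariate_normal(mu_w, Sigma_w, size=N)
y = x @ H.T + w
print(np.mean(np.sum((x - (y @ A_star.T + b_star))**2, axis=1)))
# for normal data this approximates trace(Sigma_x - A_star @ H @ Sigma_x)
\end{verbatim}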
The minimax approach models the statistician's prior knowledge concerning $x$ via a convex closed uncertainty set $\mathbb{X}\subseteq \mathbb{R}^n$ as commonly used in robust optimization. The minimax MSE estimation problem is then formulated as a zero-sum game between the statistician, who selects the estimator $\psi\in{\mc F}$ with the goal of minimizing the MSE, and nature, who chooses the parameter value $x\in\mathbb{X}$ with the goal of maximizing it.\begin{subequations}
\begin{align}
\label{eq:minimax}
\mathop{\text{minimize}}_{\psi\in{\mc F}} ~ \max_{x \in \mathbb{X}} ~ R(\psi, x)
\end{align}
By construction, any minimizer $\psi_{\mathcal{M}}^\star$ of~\eqref{eq:minimax} incurs the smallest possible estimation error under the worst parameter realization within the uncertainty set $\mathbb{X}$. For this reason $\psi_{\mathcal{M}}^\star$ is called a minimax estimator. Note that the MSE~$R(\psi, x)$ generically displays a complicated non-concave dependence on~$x$ for any fixed~$\psi$, which implies that nature's inner maximization problem in~\eqref{eq:minimax} is usually non-convex. Thus, we should not expect the zero-sum game~\eqref{eq:minimax} between the statistician and nature to admit a Nash equilibrium. However, the inner maximization problem can be convexified by allowing nature to play mixed (randomized) strategies, that is, by reformulating~\eqref{eq:minimax} as the (equivalent) convex-concave saddle point problem
\begin{align}
\label{eq:minimax-saddle}
\mathop{\text{minimize}}_{\psi\in{\mc F}} \max_{\mathbb{Q}_x \in \mathcal{M}(\mathbb{X})} \mathds{E}_{\mathbb{Q}_x} \left[ R(\psi, x) \right] ,
\end{align}
\end{subequations}
where $\mathcal{M}(\mathbb{X})$ stands for the family of all distributions supported on $\mathbb{X}$ with finite second-order moments. As $\mathds{E}_{\mathbb{Q}_x} [ R(\psi, x)]$ is convex in $\psi$ for any fixed $\mathbb{Q}_x$ and concave (linear) in $\mathbb{Q}_x$ for any fixed $\psi$, while ${\mc F}$ and $\mathcal{M}(\mathbb{X})$ are both convex sets, the zero-sum game~\eqref{eq:minimax-saddle} admits a Nash equilibrium $(\psi_{\mathcal{M}}^\star, \mathbb{Q}_x^\star)$ under mild technical conditions. Note that $\psi_{\mathcal{M}}^\star$ is again a minimax estimator. Moreover, $\psi_{\mathcal{M}}^\star$ is the statistician's best response to nature's choice $\mathbb{Q}_x^\star$ and vice versa. Using the terminology introduced above, this means that $\psi_{\mathcal{M}}^\star$ is the Bayesian MMSE estimator corresponding to the prior~$\mathbb{Q}_x^\star$. For this reason, $\mathbb{Q}_x^\star$ is usually referred to as the {\em least favorable prior}. Even though the minimax approach exonerates the statistician from narrowing down her beliefs to a single prior distribution $\mathbb{Q}_x$, it still requires precise information about $\mathbb{P}_w$, which may not be available in practice. On the other hand, as it robustifies the estimator against {\em any} distribution on $\mathbb{X}$, the minimax approach is often regarded as overly pessimistic. Moreover, as in the case of the Bayesian MMSE estimation problem, $\psi^\star_{\mathcal{M}}$ may generically have a complicated functional form, and evaluating $\psi^\star_{\mathcal{M}}(y)$ to high precision may be computationally challenging if the dimension of $x$ is high. A simple remedy to mitigate these computational challenges would be to restrict ${\mc F}$ to the family $\mathcal A$ of affine estimators. The loss of optimality incurred by this approximation for different choices of $\mathbb{X}$ is discussed in~\cite[\S~4]{ref:juditsky2018lectures} and the references therein.
In this paper we bridge the Bayesian and the minimax approaches by leveraging tools from distributionally robust optimization. Specifically, we study distributionally robust estimation problems of the form
\begin{align}
\label{eq:dro_estimator}
\mathop{\text{minimize}}_{\psi\in{\mc F}} \max_{\mathbb{Q}_x \in \mathcal Q_x} \mathds{E}_{\mathbb{Q}_x} \left[ R(\psi, x) \right] ,
\end{align}
where $\mathcal Q_x\subseteq \mathcal{M}(\mathbb{R}^n)$ is an {\em ambiguity set} of multiple (possibly infinitely many) plausible prior distributions of~$x$. Note that if the ambiguity set collapses to the singleton $\mathcal Q_x = \{\mathbb{P}_{x}\}$ for some $\mathbb{P}_x\in\mathcal{M}(\mathbb{R}^n)$, then the distributionally robust estimation problem~\eqref{eq:dro_estimator} reduces to the Bayesian MMSE estimation problem~\eqref{eq:Bayes}. Similarly, under the ambiguity set $\mathcal Q_x=\mathcal{M}(\mathbb{X})$ for some convex closed uncertainty set $\mathbb{X}\subseteq \mathbb{R}^n$, problem~\eqref{eq:dro_estimator} reduces to the minimax mean square error estimation problem~\eqref{eq:minimax-saddle}. By providing considerable freedom in tailoring the ambiguity set $\mathcal Q_x$, the distributionally robust approach thus allows the statistician to reconcile the specificity of the Bayesian approach with the conservativeness of the minimax approach.
The estimation model~\eqref{eq:dro_estimator} still relies on the premise that the noise distribution~$\mathbb{P}_w$ is precisely known, an assumption that is rarely tenable in practice. However, nothing prevents us from further robustifying~\eqref{eq:dro_estimator} against uncertainty in $\mathbb{P}_w$. To this end, we define $\mathcal{M}(\mathbb{R}^{n+m})$ as the family of all joint distributions of $x$ and $w$ with finite second-order moments. Moreover, we define the {\em average risk} $\mathcal{R}:{\mc F} \times \mathcal{M}(\mathbb{R}^{n+m}) \rightarrow \mathbb{R}$ through
\[
\mathcal{R}(\psi, \mathbb{P}) = \mathds{E}_{\mathbb{P}} [ \| x - \psi(H x + w) \|^2 ].
\]
If $\mathbb{P}=\mathbb{P}_x\times\mathbb{P}_w$ for some marginal distributions $\mathbb{P}_x\in\mathcal{M}(\mathbb{R}^n)$ and $\mathbb{P}_w\in\mathcal{M}(\mathbb{R}^m)$, which implies that $x$ and $w$ are independent under $\mathbb{P}$, and if $\mathbb{P}_{y|x}$ is defined as $\mathbb{P}_w$ shifted by $Hx$, then $\mathcal{R}(\psi, \mathbb{P})= \mathds{E}_{\mathbb{P}_x} [ R(\psi, x) ]$. Thus, the average risk $\mathcal{R}(\psi, \mathbb{P})$ corresponds indeed to the risk $R(\psi, x)$ averaged under the marginal distribution~$\mathbb{P}_x$. In the remainder of this paper we will study generalized distributionally robust estimation problems of the form
\begin{align}
\label{eq:dro}
\mathop{\text{minimize}}_{\psi\in{\mc F}} \sup\limits_{\mathbb{Q} \in \mathbb B(\wh \PP)} \mathcal{R}(\psi, \mathbb{Q}) ,
\end{align}
where the ambiguity set $\mathbb B(\wh \PP)\subseteq \mathcal{M}(\mathbb{R}^{n+m})$ captures distributional uncertainty in both $\mathbb{P}_x$ and $\mathbb{P}_w$. Specifically, we will model $\mathbb B(\wh \PP)$ as a set of factorizable distributions $\mathbb{Q}=\mathbb{Q}_x\times \mathbb{Q}_w$ close to a nominal distribution $\wh \PP = \wh \PP_x \times \wh \PP_w$ in the sense that $\mathbb{Q}_x$ and $\mathbb{Q}_w$ are close to $\wh \PP_x$ and $\wh \PP_w$ in Wasserstein distance, respectively.
\begin{definition}[Wasserstein distance]
\label{definition:wasserstein}
For any $d\in\mathbb N$, the type-2 Wasserstein distance between two distributions $\mathbb{Q}_1,\mathbb{Q}_2\in\mathcal{M}(\mathbb{R}^d)$ is defined as
\begin{equation}
\notag
\mathds{W}(\mathbb{Q}_1, \mathbb{Q}_2) = \inf\limits_{\pi\in\Pi(\mathbb{Q}_1,\mathbb{Q}_2)} \left( \int_{\mathbb{R}^d \times \mathbb{R}^d} \norm{\xi_1 - \xi_2}^2\, \pi({\rm d}\xi_1, {\rm d} \xi_2) \right)^{\frac{1}{2}},
\end{equation}
where $\Pi(\mathbb{Q}_1,\mathbb{Q}_2)$ denotes the set of all joint distributions or couplings $\pi\in \mathcal{M}(\mathbb{R}^d\times \mathbb{R}^d)$ of the random vectors $\xi_1\in\mathbb{R}^d$ and $\xi_2\in\mathbb{R}^d$ with marginal distributions $\mathbb{Q}_1$ and $\mathbb{Q}_2$, respectively.
\end{definition}
The dependence of the Wasserstein distance on $d$ is notationally suppressed to avoid clutter. Note that $\mathds{W}(\mathbb{Q}_1, \mathbb{Q}_2)^2$ is naturally interpreted as the optimal value of a transportation problem that determines the minimum cost of moving the distribution $\mathbb{Q}_1$ to $\mathbb{Q}_2$, where the cost of moving a unit probability mass from $\xi_1$ to $\xi_2$ is given by the squared Euclidean distance $\| \xi_1 - \xi_2\|^2$. For this reason, the optimization variable $\pi$ is sometimes referred to as a transportation plan and the Wasserstein distance as the earth mover's distance.
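When both distributions are discrete, the transportation problem behind Definition~\ref{definition:wasserstein} is a finite linear program. The following Python sketch (assuming \texttt{numpy} and \texttt{scipy} are available; the supports and weights are illustrative) computes the type-2 Wasserstein distance in this way.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def wasserstein2(xs, p, ys, q):
    # type-2 Wasserstein distance between the discrete distributions
    # sum_i p[i] delta_{xs[i]} and sum_j q[j] delta_{ys[j]}
    k, l = len(p), len(q)
    cost = np.sum((xs[:, None, :] - ys[None, :, :])**2, axis=-1).ravel()
    A_eq = np.zeros((k + l, k * l))
    for i in range(k):
        A_eq[i, i * l:(i + 1) * l] = 1.0     # row sums of pi equal p
    for j in range(l):
        A_eq[k + j, j::l] = 1.0              # column sums of pi equal q
    res = linprog(cost, A_eq=A_eq, b_eq=np.concatenate([p, q]),
                  bounds=(0, None), method="highs")
    return np.sqrt(res.fun)

xs = np.array([[0.0, 0.0], [1.0, 0.0]]); p = np.array([0.5, 0.5])
ys = np.array([[0.0, 1.0], [2.0, 0.0]]); q = np.array([0.5, 0.5])
print(wasserstein2(xs, p, ys, q))            # optimal matching gives 1.0
\end{verbatim}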
Formally, we define the {\em Wasserstein ambiguity set} as
\begin{align}\label{eq:Ambi}
\begin{aligned}
\mathbb B(\wh \PP) = \left\{ \mathbb{Q}_x \times \mathbb{Q}_w:
\def\arraystretch{1.2}
\begin{array}{l@{\;}ll@{\;}l}
\mathbb{Q}_x &\in \mathcal{M}(\mathbb{R}^n),& \, \mathds{W}(\mathbb{Q}_x, \wh \PP_x) &\leq \rho_x \\
\mathbb{Q}_w &\in \mathcal{M}(\mathbb{R}^m),& \, \mathds{W}(\mathbb{Q}_w, \wh \PP_w) &\leq \rho_w
\end{array}
\right\},
\end{aligned}
\end{align}
where $\wh \PP_x$ and $\wh \PP_w$ represent prescribed nominal distributions that could be constructed via statistical analysis or expert judgement, while the Wasserstein radii $\rho_x\ge 0$ and $\rho_w\ge 0$ constitute hyperparameters that quantify the statistician's uncertainty about the nominal distributions of $x$ and $w$. We emphasize that the distributionally robust estimation model~\eqref{eq:dro} generalizes all preceding models. Indeed, if $\rho_w=0$, then~\eqref{eq:dro} reduces to the first distributionally robust model~\eqref{eq:dro_estimator}, which in turn encompasses both the MMSE estimation problem~\eqref{eq:Bayes} (for $\rho_x=0$) and the minimax estimation problem~\eqref{eq:minimax-saddle} (for $\rho_x=\infty$) as special cases.
The distributionally robust estimation model~\eqref{eq:dro} is conceptually attractive because the hyperparameters~$\rho_x$ and~$\rho_w$ allow the statistician to specify her level of trust in the nominal prior distribution~$\wh \PP_x$ and the nominal noise distribution~$\wh \PP_w$. In the remainder of the paper we will show that~\eqref{eq:dro} is also computationally attractive. This is perhaps surprising because mixtures of factorizable distributions are generally not factorizable, which implies that the Wasserstein ambiguity set $\mathbb B(\wh \PP)$ is non-convex.
We remark that one could also work with an alternative ambiguity set of the form
\begin{align}
\label{eq:Ambi'}
\mathbb B'(\wh \PP) = \left\{ \mathbb{Q}_x \times \mathbb{Q}_w: \mathbb{Q}_x \in \mathcal{M}(\mathbb{R}^n),~ \mathbb{Q}_w \in \mathcal{M}(\mathbb{R}^m), ~\mathds{W}(\mathbb{Q}_x\times \mathbb{Q}_w, \wh \PP_x\times \wh \PP_w) \leq \rho \right\},
\end{align}
which involves only a single hyperparameter $\rho\ge 0$ and is therefore less expressive but perhaps easier to calibrate than~$\mathbb B(\wh \PP)$. The following lemma is instrumental to understanding the relation between~$\mathbb B(\wh \PP)$ and~$\mathbb B'(\wh \PP)$. The proof of this result is relegated to the appendix.
\begin{lemma}[Pythagoras' theorem for Wasserstein distances]
\label{lem:wasserstein-decomposition}
For any $\mathbb{Q}_x^1,\mathbb{Q}_x^2\in\mathcal{M}(\mathbb{R}^n)$ and $\mathbb{Q}_w^1,\mathbb{Q}_w^2\in\mathcal{M}(\mathbb{R}^m)$ we have $\mathds{W}(\mathbb{Q}_x^1\times \mathbb{Q}_w^1, \mathbb{Q}_x^2\times \mathbb{Q}_w^2)^2 = \mathds{W}(\mathbb{Q}_x^1, \mathbb{Q}_x^2)^2 + \mathds{W}(\mathbb{Q}_w^1, \mathbb{Q}_w^2)^2$.
\end{lemma}
If we denote the ambiguity sets \eqref{eq:Ambi} and \eqref{eq:Ambi'} temporarily by $\mathbb B_{\rho_x,\rho_w} (\wh \PP)$ and $\mathbb B'_\rho(\wh \PP)$ in order to make their dependence on the hyperparameters explicit, then Lemma~\ref{lem:wasserstein-decomposition} implies that
\begin{align*}
\mathbb B'_\rho(\wh \PP) =\bigcup_{\rho_x^2+\rho_w^2\le \rho^2} \mathbb B_{\rho_x,\rho_w} (\wh \PP).
\end{align*}
This relation suggests that $\mathbb B'_\rho(\wh \PP)$ could be substantially larger than $\mathbb B_{\rho_x,\rho_w} (\wh \PP)$ for any fixed $\rho,\rho_x,\rho_w\ge 0$ with $\rho_x^2+\rho_w^2= \rho^2$ and thus lead to substantially more conservative estimators.
In the following we summarize the key contributions of this paper.
\begin{enumerate}
\item \label{contribution1} We construct a safe approximation for the distributionally robust MMSE estimation problem~\eqref{eq:dro} by restricting attention to {\em affine} estimators and by maximizing the average risk over an {\em outer} approximation of the Wasserstein ambiguity set, which is described through first- and second-order moment conditions. We then prove that this safe approximation is equivalent to a tractable convex program.
\item \label{contribution2} We also study a {\em dual} estimation problem, which is obtained by interchanging the minimization and maximization operations in the {\em primal} problem~\eqref{eq:dro} and thus lower bounds the optimal value of~\eqref{eq:dro}. We then construct a safe approximation for this dual problem by restricting the Wasserstein ambiguity set to contain only {\em normal} distributions. Assuming that the nominal distribution is normal, we prove that this safe approximation is again equivalent to a tractable convex program.
\item By construction, the primal and dual estimation problems are upper and lower bounded by their respective safe approximations. We prove, however, that the optimal values of the safe approximations collapse if the nominal distribution is normal. This result has three important implications.
\begin{enumerate}
\item The primal and dual estimation problems and their safe approximations are all equivalent. This implies via contributions~(\ref{contribution1}) and~(\ref{contribution2}) that both original estimation problems are tractable.
\item The primal estimation problem is solved by an {\em affine} estimator, and the dual estimation problem is solved by a {\em normal} distribution. In other words, we have discovered a new class of adaptive distributionally robust optimization problems for which affine decision rules are optimal.
\item The affine estimator and the normal distribution that solve the primal and dual estimation problems, respectively, form a {\em Nash equilibrium} for the zero-sum game between the statistician and nature. Thus, the optimal normal distribution constitutes a {\em least favorable prior}, and the optimal affine estimator represents the corresponding {\em Bayesian MMSE estimator}.
\end{enumerate}
We leverage these insights to prove that the optimal affine estimator can be constructed easily from the least favorable prior without the need to solve another optimization problem.
\item We argue that our main results remain valid if the nominal distribution is any elliptical distribution.
\item We develop a tailor-made Frank-Wolfe algorithm that can solve the dual estimation problem orders of magnitude faster than state-of-the-art general purpose solvers. We show that this algorithm enjoys a linear convergence rate. Moreover, we prove that the direction-finding subproblems can be solved in quasi-closed form, which means that the algorithm offers a favorable iteration complexity.
\end{enumerate}
We highlight that the Wasserstein ambiguity set~\eqref{eq:Ambi} is non-convex as it contains only distributions under which the signal and the noise are independent. To the best of our knowledge, we describe the first distributionally robust optimization model with independence conditions that admits a tractable reformulation.
We also emphasize that some of our results hold only if the nominal distribution~$\wh \PP$ is normal or elliptical. While this is restrictive, there is strong evidence that normal distributions are natural candidates for~$\wh \PP$. One reason for this is that the normal distribution has maximum entropy among all distributions with prescribed first- and second-order moments~\cite[\S~12]{ref:cover2006elements}. Therefore, it has appeal as the least prejudiced baseline model. Similarly, if the parameter~$x$ in~\eqref{eq:linear:observation} is normally distributed, then a normal distribution minimizes the mutual information between~$x$ and the observation~$y$ among all noise distributions with bounded variance~\cite[Lemma~II.2]{ref:diggavi2001worst}. In this sense, normally distributed noise renders the observations least informative. Conversely, if the noise in~\eqref{eq:linear:observation} is normally distributed, then a normal distribution maximizes the MMSE across all distributions of~$x$ with bounded variance~\cite[Proposition~15]{ref:guo2011estimation}. In this sense, normally distributed parameters are the hardest to estimate. Using normal nominal distributions thus amounts to adopting a worst-case perspective.
In the following we briefly survey the landscape of existing MMSE estimation models that have a robustness flavor. Several authors have addressed the {\em minimax} MMSE estimation problem~\eqref{eq:minimax} from the perspective of classical robust optimization \cite{ref:beck2007regularization, ref:beck2007mean, ref:eldar2008minimax, ref:eldar2004linear, ref:eldar2004robust, ref:juditsky2018nearoptimality}. To guarantee computational tractability, in all of these papers the estimators are restricted to be affine functions of the measurements. In this case, the minimax MMSE estimation problem can be reformulated as a tractable semidefinite program (SDP) if the uncertainty set $\mathbb{X}$ is an ellipsoid~\cite{ref:eldar2004linear, ref:eldar2004robust} or an intersection of two ellipsoids~\cite{ref:beck2007regularization}. Similar SDP reformulations are available if the observation matrix $H$ is also subject to uncertainty and ranges over a spectral norm ball~\cite{ref:eldar2004robust} or displays a block circulant structure, with each block ranging over a Frobenius norm ball \cite{ref:beck2007mean}. If the uncertainty set is described by an intersection of several ellipsoids, then the minimax MMSE estimation problem admits an (inexact) SDP relaxation~\cite{ref:eldar2008minimax}. Even though the restriction to affine estimators may incur a loss of optimality, affine estimators are known to be near-optimal in all of the above minimax estimation models~\cite{ref:juditsky2018nearoptimality}.
Another stream of literature investigates the {\em distributionally robust} estimation model~\eqref{eq:dro_estimator} with an ambiguous signal distribution and a crisp noise distribution. When focusing on affine estimators only, this model can be reformulated as a tractable SDP if the uncertainty in the signal distribution is characterized through spectral constraints on its covariance matrix~\cite{ref:eldar2004competitive}. This tractability result also extends to uncertain observation matrices. Similar SDP reformulations are available for the distributionally robust estimation model~\eqref{eq:dro} when both the signal and the noise distribution are ambiguous and their covariance matrices are subject to spectral constraints~\cite{ref:eldar2006robust}. Extensions to uncertain block circulant observation matrices are discussed in~\cite{ref:beck2006robust}.
Some authors have studied less structured distributionally robust estimation problems where the signal~$x$ and the measurement~$y$ are governed by a distribution that may {\em not} obey the linear measurement model~\eqref{eq:linear:observation}. In this case, the zero-sum game between the statistician and nature admits a Nash equilibrium if nature may choose any distribution that has a bounded Kullback-Leibler divergence with respect to a {\em normal} nominal distribution~\cite{ref:levy2004robust}. Intriguingly, the (affine) Bayesian MMSE estimator for the nominal distribution is optimal in this model and thus enjoys strong robustness properties. On the downside, there is no hope to improve this estimator's performance by tuning the size of the Kullback-Leibler ambiguity set. The underlying distributionally robust estimation model also serves as a fundamental building block for a robust Kalman filter~\cite{ref:levy2012robust}. Extensions to general $\tau$-divergence ambiguity sets that contain only normal distributions are reported in~\cite{ref:zorzi2017robustness}. We emphasize that all papers surveyed so far merely derive SDP reformulations or SDP relaxations that can be addressed with general purpose solvers, but none of them develops a customized solution algorithm.
The present paper extends the distributionally robust MMSE estimation model introduced in~\cite{ref:shafieezadeh2018wasserstein}, which accommodates a simple Wasserstein ambiguity set for the distribution of the signal-measurement pairs and makes no structural assumptions about the measurement noise. Note, however, that the linear measurement model~\eqref{eq:linear:observation} abounds in the literature on control theory, signal processing and information theory, implying that there are numerous applications where the measurement noise is {\em known} to be additive and independent of the signal. As we will see in Section~\ref{sect:numerical}, ignoring this structural information may result in weak estimators that sacrifice predictive performance. In Sections~\ref{sect:approx}--\ref{sect:nash} we will further see that constructing an explicit Nash equilibrium is considerably more difficult in the presence of structural information. Finally, we describe here an accelerated Frank-Wolfe algorithm that improves the sublinear convergence rate established in~\cite{ref:shafieezadeh2018wasserstein} to a linear rate. We emphasize that, in contrast to the robust MMSE estimators derived in~\cite{ref:levy2004robust, ref:zorzi2017robustness} that are insensitive to the radii of the underlying divergence-based ambiguity sets, the estimators constructed here change with the Wasserstein radii~$\rho_x$ and~$\rho_w$. Thus, using a Wasserstein ambiguity set to robustify the nominal MMSE estimation problem has a regularizing effect and leads to a parametric family of estimators that can be tuned to attain maximum prediction accuracy. Similar connections between robustification and regularization have previously been discovered in the context of statistical learning~\cite{ref:shafieezadeh2019mass-transportation} and covariance estimation~\cite{ref:nguyen2018distributionally}.
The paper is structured as follows. Sections~\ref{sect:approx} and~\ref{sect:dual} develop conservative approximations for the primal and dual distributionally robust MMSE estimation problems, respectively, both of which are equivalent to tractable convex programs. Section~\ref{sect:nash} shows that if the nominal distribution is normal, then both approximations are exact and can be used to find a Nash equilibrium for the zero-sum game between the statistician and nature. Extensions to non-normal nominal distributions are discussed in Section~\ref{sect:elliptical}. Section~\ref{sect:algorithm} develops an efficient Frank-Wolfe algorithm for the dual MMSE estimation problem, and Section~\ref{sect:numerical} reports on numerical results.
\textbf{Notation.} For any $A \in \mathbb{R}^{d \times d}$ we use $\Tr{A}$ to denote the trace and $\|A\|=\sqrt{\Tr{A^\top A}}$ to denote the Frobenius norm of $A$. By slight abuse of notation, the Euclidean norm of $v\in\mathbb{R}^d$ is also denoted by $\|v\|$. Moreover, $I_d$ stands for the identity matrix in $\mathbb{R}^{d \times d}$. For any $A,B\in\mathbb{R}^{d\times d}$, we use $\inner{A}{B} = \Tr{A^\top B}$ to denote the inner product and $A\otimes B$ to denote the Kronecker product of $A$ and $B$. The space of all symmetric matrices in $\mathbb{R}^{d\times d}$ is denoted by $\mathbb S^d$. We use $\mathbb{S}_{+}^d$ ($\mathbb{S}_{++}^d$) to represent the cone of symmetric positive semidefinite (positive definite) matrices in $\mathbb S^d$. For any $A,B\in\mathbb S^d$, the relation $A\succeq B$ ($A\succ B$) means that $A-B\in\mathbb{S}_{+}^d$ ($A-B\in \mathbb{S}_{++}^d$). The unique positive semidefinite square root of a matrix $A \in \mathbb{S}_{+}^d$ is denoted by $A^\frac{1}{2}$. For any $A \in \mathbb S^d$, $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the minimum and maximum eigenvalues of $A$, respectively.
\section{The Gelbrich MMSE Estimation Problem}
\label{sect:approx}
The distributionally robust estimation problem~\eqref{eq:dro} poses two fundamental challenges. First, checking feasibility of the inner maximization problem in~\eqref{eq:dro} requires computing the Wasserstein distances $\mathds{W}(\wh \PP_x,\mathbb{Q}_x)$ and $\mathds{W}(\wh \PP_w,\mathbb{Q}_w)$, which is \#P-hard even if~$\wh \PP_x$ and~$\wh \PP_w$ are simple two-point distributions, while~$\mathbb{Q}_x$ and~$\mathbb{Q}_w$ are uniform distributions on hypercubes \cite{ref:taskesen2019complexity}. Efficient algorithms for computing Wasserstein distances are available only if both involved distributions are discrete~\cite{ref:cuturi2013sinkhorn, ref:peyre2019computational, ref:solomon2015convolutional}, and analytical formulas are only known in exceptional cases ({\em e.g.}, if both distributions are Gaussian~\cite{ref:givens1984class} or belong to the same family of elliptical distributions~\cite{ref:gelbrich1990formula}). The second challenge is that the outer minimization problem in~\eqref{eq:dro} constitutes an infinite-dimensional functional optimization problem. In order to bypass these computational challenges, we first seek a conservative approximation for~\eqref{eq:dro} by relaxing the ambiguity set~$\mathbb B(\wh \PP)$ and restricting the feasible set~${\mc F}$. We begin by constructing an outer approximation for the ambiguity set. To this end, we introduce a new distance measure on the space of mean vectors and covariance matrices.
\begin{definition}[Gelbrich distance]
\label{def:induced_distance}
For any $d\in\mathbb N$, the Gelbrich distance between two tuples of mean vectors and covariance matrices $(\mu_1, \Sigma_1),(\mu_2, \Sigma_2) \in \mathbb{R}^d \times \mathbb{S}_{+}^d$ is defined as
\[
\mathds{G} \big( (\mu_1, \Sigma_1), (\mu_2, \Sigma_2) \big) = \sqrt{\norm{\mu_1 - \mu_2}^2 + \Tr{\Sigma_1 + \Sigma_2 - 2 \left( \Sigma_2^\frac{1}{2} \Sigma_1 \Sigma_2^\frac{1}{2} \right)^\frac{1}{2}}}.
\]
\end{definition}
The dependence of the Gelbrich distance on $d$ is notationally suppressed in order to avoid clutter. One can show that $\mathds{G}$ constitutes a metric on $\mathbb{R}^d \times \mathbb{S}_{+}^d$, that is, $\mathds{G}$ is symmetric, non-negative, vanishes if and only if $(\mu_1,\Sigma_1)=(\mu_2,\Sigma_2)$ and satisfies the triangle inequality~\cite[p.~239]{ref:givens1984class}.
\begin{proposition}[{Commuting covariance matrices \cite[p.~239]{ref:givens1984class}}]
\label{prop:commuting}
If $\mu_1,\mu_2\in\mathbb{R}^d$ are identical and $\Sigma_1, \Sigma_2\in\mathbb{S}_{+}^d$ commute ($\Sigma_1 \Sigma_2 = \Sigma_2 \Sigma_1$), then the Gelbrich distance simplifies to $\mathds{G}\big( (\mu_1, \Sigma_1), (\mu_2, \Sigma_2)\big) = \| \Sigma_1^\frac{1}{2} - \Sigma_2^\frac{1}{2} \|$.
\end{proposition}
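For illustration, the following Python sketch (assuming \texttt{numpy} and \texttt{scipy} are available) evaluates the Gelbrich distance of Definition~\ref{def:induced_distance} via the matrix square root and checks it against the simplified formula of Proposition~\ref{prop:commuting} on a pair of commuting covariance matrices.
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

def gelbrich(mu1, S1, mu2, S2):
    # Gelbrich distance between (mu1, S1) and (mu2, S2)
    r2 = sqrtm(S2)
    cross = np.real(sqrtm(r2 @ S1 @ r2))   # (S2^{1/2} S1 S2^{1/2})^{1/2}
    val = np.sum((mu1 - mu2)**2) + np.trace(S1 + S2 - 2.0 * cross)
    return np.sqrt(max(val, 0.0))          # clip tiny negative round-off

mu = np.zeros(2)
S1, S2 = np.diag([4.0, 1.0]), np.diag([1.0, 9.0])    # commuting matrices
print(gelbrich(mu, S1, mu, S2))                      # general formula
print(np.linalg.norm(sqrtm(S1) - sqrtm(S2), "fro"))  # commuting case
\end{verbatim}
Both print statements should output $\sqrt{5}$.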
While the Gelbrich distance is non-convex, the squared Gelbrich distance is convex in all of its arguments.
\begin{proposition}[Convexity and continuity of the squared Gelbrich distance]
\label{prop:gelbrich-convexity}
The squared Gelbrich distance $\mathds{G}\big( (\mu_1, \Sigma_1), (\mu_2, \Sigma_2)\big)^2$ is jointly convex and continuous in $\mu_1 , \mu_2 \in \mathbb{R}^d$ and $\Sigma_1, \Sigma_2 \in \mathbb{S}_{+}^d$.
\end{proposition}
\begin{proof}[Proof of Proposition~\ref{prop:gelbrich-convexity}]
By \cite[Proposition~2]{ref:malago2018wasserstein}, the squared Gelbrich distance $\mathds{G}\big( (\mu_1, \Sigma_1), (\mu_2, \Sigma_2)\big)^2$ coincides with the optimal value of the semidefinite program
\begin{equation*}
\begin{array}{cl}
\min & \| \mu_1 - \mu_2 \|_2^2 + \Tr{\Sigma_1 + \Sigma_2 - 2 C} \\[1ex]
\st & C \in \mbb R^{d \times d} \\[1ex]
& \begin{bmatrix} \Sigma_1 & C \\ C^\top & \Sigma_2 \end{bmatrix} \succeq 0,
\end{array}
\end{equation*}
see also \cite[Section~3]{ref:dowson1982frechet}. Less general results that hold when one of the matrices $\Sigma_1$ or $\Sigma_2$ is positive definite are reported in~\cite{ref:givens1984class, ref:knott1984optimal, ref:olkin1982distance}.
The convexity of the squared Gelbrich distance then follows from~\cite[Proposition~3.3.1]{ref:bertsekas2009convex}, which guarantees that convexity is preserved under partial minimization. Moreover, the continuity of the squared Gelbrich distance follows from the continuity of the matrix square root established in Lemma~\ref{lemma:hoelder}.
\end{proof}
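The SDP characterization used in this proof can also be verified numerically. The following sketch (assuming \texttt{numpy}, \texttt{scipy} and \texttt{cvxpy} with a conic solver such as SCS are available; the inputs are randomly generated for illustration) solves the SDP and compares its optimal value against the closed-form squared Gelbrich distance.
\begin{verbatim}
import numpy as np
import cvxpy as cp
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
d = 3
M1 = rng.standard_normal((d, d)); S1 = M1 @ M1.T + np.eye(d)
M2 = rng.standard_normal((d, d)); S2 = M2 @ M2.T + np.eye(d)
mu1, mu2 = rng.standard_normal(d), rng.standard_normal(d)

# SDP from the proof: minimize over the cross-covariance block C
X = cp.Variable((2 * d, 2 * d), symmetric=True)
C = X[:d, d:]
obj = np.sum((mu1 - mu2)**2) + np.trace(S1 + S2) - 2 * cp.trace(C)
prob = cp.Problem(cp.Minimize(obj),
                  [X >> 0, X[:d, :d] == S1, X[d:, d:] == S2])
prob.solve()

cross = np.real(sqrtm(sqrtm(S2) @ S1 @ sqrtm(S2)))
closed = np.sum((mu1 - mu2)**2) + np.trace(S1 + S2 - 2 * cross)
print(prob.value, closed)   # should agree up to solver tolerance
\end{verbatim}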
Our interest in the Gelbrich distance stems mainly from the next proposition, which lower bounds the Wasserstein distance between two distributions in terms of their first- and second-order moments. We will later see that this bound becomes tight when $\mathbb{Q}_1$ and $\mathbb{Q}_2$ are normal or---more generally---elliptical distributions of the same type.
\begin{proposition}[Moment bound on the Wasserstein distance {\cite[Theorem~2.1]{ref:gelbrich1990formula}}]
\label{prop:wasserstein}
For any distributions $\mathbb{Q}_1$,~$\mathbb{Q}_2 \in \mathcal{M}(\mathbb{R}^d)$ with mean vectors $\mu_1$, $\mu_2 \in \mathbb{R}^d$ and covariance matrices $\Sigma_1$, $\Sigma_2 \in \mathbb{S}_{+}^d$, respectively, we have
\begin{align} \notag
\mathds{W}(\mathbb{Q}_1, \mathbb{Q}_2) \geq \mathds{G} \big( (\mu_1, \Sigma_1), (\mu_2, \Sigma_2) \big).
\end{align}
\end{proposition}
Proposition~\ref{prop:wasserstein} prompts us to construct an outer approximation for the Wasserstein ambiguity set $\mathbb B(\wh \PP)$ by using the Gelbrich distance. Specifically, we define the {\em Gelbrich ambiguity set} centered at $\wh \PP=\wh \PP_x\times \wh \PP_w$ as
\begin{align*}
\begin{aligned}
{\mbb G}(\wh \PP) = \left\{ \mathbb{Q}_x \times \mathbb{Q}_w:
\def\arraystretch{1.2}
\begin{array}{l}
\begin{array}{l@{\;}@{\;}l@{\;}l}
\mathbb{Q}_x \in \mathcal{M}(\mathbb{R}^n), & \mu_x\,= \mathds{E}_{\mathbb{Q}_x}[x], & \Sigma_x\,= \mathds{E}_{\mathbb{Q}_x} [x x^\top] -\mu_x \mu_x^\top \\
\mathbb{Q}_w \in \mathcal{M}(\mathbb{R}^m), & \mu_w=\mathds{E}_{\mathbb{Q}_w}[w], & \Sigma_w = \mathds{E}_{\mathbb{Q}_w} [w w^\top] - \mu_w \mu_w^\top
\end{array}\\
\begin{array}{ll}
\mathds{G} \big( (\mu_x, \Sigma_x), (\wh{\m}_x, \covsa_x) \big)\leq \rho_x, & \mathds{G} \big( (\mu_w, \Sigma_w), (\wh{\m}_w, \covsa_w) \big) \leq \rho_w
\end{array}
\end{array}
\right\},
\end{aligned}
\end{align*}
where $\wh{\m}_x$ and $\wh{\m}_w$ denote the mean vectors and $\covsa_x$ and $\covsa_w$ the covariance matrices of $\wh \PP_x$ and $\wh \PP_w$, respectively.
\begin{corollary}[Relation between Gelbrich and Wasserstein ambiguity sets]
\label{cor:wasserstein-in-gelbrich}
For any $\wh \PP =\wh \PP_x\times \wh \PP_w$ with $\wh \PP_x\in \mathcal{M}(\mathbb{R}^n)$ and $\wh \PP_w\in \mathcal{M}(\mathbb{R}^m)$ we have $\mathbb B(\wh \PP) \subseteq {\mbb G}(\wh \PP)$.
\end{corollary}
\begin{proof}[Proof of Corollary~\ref{cor:wasserstein-in-gelbrich}]
Select any $\mathbb{Q}=\mathbb{Q}_x\times\mathbb{Q}_w\in\mathbb B(\wh \PP)$ and define $\mu_x$ and $\mu_w$ as the mean vectors and $\Sigma_x$ and $\Sigma_w$ as the covariance matrices of $\mathbb{Q}_x$ and $\mathbb{Q}_w$, respectively. By Proposition~\ref{prop:wasserstein} we then have
\[
\mathds{G} \big( (\mu_x, \Sigma_x), (\wh{\m}_x, \covsa_x) \big) \le \mathds{W}(\mathbb{Q}_x, \wh \PP_x)\le \rho_x\quad\text{and}\quad
\mathds{G} \big( (\mu_w, \Sigma_w), (\wh{\m}_w, \covsa_w) \big) \le \mathds{W}(\mathbb{Q}_w, \wh \PP_w)\le \rho_w,
\]
which in turn implies that $\mathbb{Q}\in{\mbb G}(\wh \PP)$. We may thus conclude that $\mathbb B(\wh \PP) \subseteq {\mbb G}(\wh \PP)$.
\end{proof}
By restricting ${\mc F}$ to the set $\Ac$ of all affine estimators while relaxing $\mathbb B(\wh \PP)$ to the Gelbrich ambiguity set ${\mbb G}(\wh \PP)$, we obtain the following conservative approximation of the distributionally robust estimation problem~\eqref{eq:dro}.
\begin{align}
\label{eq:dro:approx}
\mathop{\text{minimize}}_{\psi\in\Ac} \sup\limits_{\mathbb{Q} \in {\mbb G}(\wh \PP)} \mathcal{R}(\psi, \mathbb{Q})
\end{align}
From now on we will call~\eqref{eq:dro} and~\eqref{eq:dro:approx} the Wasserstein and Gelbrich MMSE estimation problems, and we will refer to their minimizers as Wasserstein and Gelbrich MMSE estimators, respectively. As the average risk $\mathcal{R}(\psi, \mathbb{Q})$ of a fixed affine estimator $\psi\in\Ac$ is convex and quadratic in the mean vector $\mu$ and affine in the covariance matrix $\Sigma$ of the distribution~$\mathbb{Q}$, the inner maximization problem in~\eqref{eq:dro:approx} is non-convex. Thus, one might suspect that the Gelbrich MMSE estimation problem is intractable. Below we will show, however, that~\eqref{eq:dro:approx} is equivalent to a finite convex program that can be solved in polynomial time. To this end, we first show that, under mild conditions, problem~\eqref{eq:dro:approx} is stable with respect to changes of its input parameters.
\begin{proposition}[Regularity of the Gelbrich MMSE estimation problem]
\label{prop:Gelbrich}
The Gelbrich MMSE estimation problem~\eqref{eq:dro:approx} enjoys the following regularity properties.
\begin{enumerate}[label = $(\roman*)$]
\item {\bf Conservativeness:} \label{prop:Gelbrich:conservative}
Problem~\eqref{eq:dro:approx} upper bounds the Wasserstein MMSE estimation problem~\eqref{eq:dro}.
\item {\bf Solvability:} \label{prop:Gelbrich:solvability}
If $\rho_x>0$ and $\rho_w>0$, then the minimum of~\eqref{eq:dro:approx} is attained.
\item {\bf Stability:} \label{prop:Gelbrich:cont}
If $\rho_x>0$ and $\rho_w>0$, then the minimum of~\eqref{eq:dro:approx} is continuous in $(\wh{\m}_x,\wh{\m}_w,\covsa_x,\covsa_w)$.
\end{enumerate}
\end{proposition}
The proof of Proposition~\ref{prop:Gelbrich} is lengthy and technical and is therefore relegated to the appendix. We are now ready to prove the main result of this section.
\begin{theorem}[Gelbrich MMSE estimation problem]
\label{thm:conservative}
The Gelbrich MMSE estimation problem~\eqref{eq:dro:approx} is equivalent to the finite convex optimization problem
\begin{equation}
\label{eq:VA}
\begin{array}{cl}
\inf
& \dualvar_x \big( \rho_x^2 - \Tr{\covsa_x} \big) + \dualvar_x^2 \inner{[\dualvar_x I_n - (I_n- A H)^\top (I_n - AH)]^{-1}}{\covsa_x} \\
& \hspace{1cm} + \dualvar_w \big( \rho_w^2 - \Tr{\covsa_w} \big) + \dualvar_w^2 \inner{(\dualvar_w I_m - A^\top A)^{-1}}{\covsa_w} \\[1ex]
\st & A \in \mathbb{R}^{n \times m},\quad \dualvar_x, \dualvar_w \in \mathbb{R}_+ \\
& \dualvar_x I_n - (I_n-A H)^\top (I_n-A H) \succ 0, \quad \dualvar_w I_m - A^\top A \succ 0.
\end{array}
\end{equation}
Moreover, if $\rho_x> 0$ and $\rho_w > 0$, then~\eqref{eq:VA} admits an optimal solution\footnote{We say that $A^\star$ solves~\eqref{eq:VA} if adding the constraint $A=A^\star$ does not change the infimum of~\eqref{eq:VA}. Note that the infimum of the resulting problem over $(\dualvar_x,\dualvar_w)$ may not be attained, {\em i.e.}, the existence of a solution $A^\star$ does not imply that~\eqref{eq:VA} is solvable.}~$A^\star$, and the infimum of~\eqref{eq:dro:approx} is attained by the affine estimator $\psi^\star(y) = A^\star y +b^\star$, where $b^\star = \wh{\m}_x - A^\star (H \wh{\m}_x + \wh{\m}_w)$.
\end{theorem}
The \textit{strict} semidefinite inequalities in~\eqref{eq:VA} ensure that the inverse matrices in the objective function exist.
\begin{proof}[Proof of Theorem~\ref{thm:conservative}]
Throughout this proof we denote by $\psi_{A,b}\in\Ac$ the affine estimator $\psi_{A,b}(y)=Ay+b$ corresponding to the sensitivity matrix $A \in \mathbb{R}^{n \times m}$ and the vector $b\in\mathbb{R}^n$ of intercepts. In the following we fix some $A \in \mathbb{R}^{n \times m}$ and define $K = I_n-A H$ in order to simplify the notation. By the definitions of the average risk $\mathcal{R}(\psi, \mathbb{Q})$ and the Gelbrich ambiguity set ${\mbb G}(\widehat \mathbb{P})$, we then have
\begin{equation}
\label{eq:robust}
\inf_{\substack{b}} \sup\limits_{\mathbb{Q} \in {\mbb G}(\widehat \mathbb{P})} \mathcal{R}(\psi_{A,b}, \mathbb{Q}) = \left\{
\begin{array}{ccl}
\inf\limits_{b}& \sup\limits_{\substack{\mu_x, \mu_w \\ \Sigma_x, \Sigma_w\succeq 0}} & \inner{K^\top K}{\Sigma_x + \mu_x \mu_x^\top} + \inner{A^\top A}{\Sigma_w + \mu_w \mu_w^\top} + b^\top b \\[-2ex]
&& \hspace{1cm} - 2 \mu_x^\top K^\top A \mu_w - 2 b^\top ( K \mu_x - A \mu_w ) \\[1ex]
&\st & \mathds{G} \big( (\mu_x, \Sigma_x), (\wh{\m}_x, \covsa_x) \big)^2 \leq \rho_x^2 \\
&& \mathds{G} \big( (\mu_w, \Sigma_w), (\wh{\m}_w, \covsa_w) \big)^2 \leq \rho_w^2.
\end{array} \right.
\end{equation}
The outer minimization problem in~\eqref{eq:robust} is convex because the objective function of the minimax problem is convex in $b$ for any fixed $(\mu_x, \mu_w, \Sigma_x, \Sigma_w)$ and because convexity is preserved under maximization. Moreover, the inner maximization problem in~\eqref{eq:robust} is non-convex because its objective function is convex in~$(\mu_x, \mu_w)$. This observation prompts us to maximize over $(\mu_x,\mu_w)$ and $(\Sigma_x,\Sigma_w)$ sequentially and to reformulate~\eqref{eq:robust}~as
\begin{equation}
\label{eq:robust2}
\begin{array}{cccl}
\inf\limits_{b}& \sup\limits_{\substack{\mu_x, \mu_w\\ \| \mu_x - \wh{\m}_x \| \leq \rho_x \\\| \mu_w - \wh{\m}_w \| \leq \rho_w}}& \sup\limits_{\substack{ \Sigma_x, \Sigma_w\succeq 0}} & \inner{K^\top K}{\Sigma_x + \mu_x \mu_x^\top} + \inner{A^\top A}{\Sigma_w + \mu_w \mu_w^\top} + b^\top b \\[-4ex]
&&& \hspace{0.2cm} - 2 \mu_x^\top K^\top A \mu_w - 2 b^\top ( K \mu_x - A \mu_w ) \\[1ex]
&&\st & \mathds{G} \big( (\mu_x, \Sigma_x), (\wh{\m}_x, \covsa_x) \big)^2 \leq \rho_x^2 \\
&&& \mathds{G} \big( (\mu_w, \Sigma_w), (\wh{\m}_w, \covsa_w) \big)^2 \leq \rho_w^2.
\end{array}
\end{equation}
As $\| \mu_x - \wh{\m}_x \| \leq \mathds{G} ( (\mu_x, \Sigma_x), (\wh{\m}_x, \covsa_x) )$ and as this inequality is tight for $\Sigma_x=\covsa_x$, the extra constraint $\| \mu_x - \wh{\m}_x \| \leq \rho_x$ is actually redundant and merely ensures that the maximization problem over $\Sigma_x$ remains feasible for any admissible choice of $\mu_x$. An analogous statement holds for $\mu_w$ and $\Sigma_w$.
By the definition of the Gelbrich distance, the innermost maximization problem over $(\Sigma_x,\Sigma_w)$ in~\eqref{eq:robust2} admits the Lagrangian~dual
\begin{equation}
\label{eq:robust-inner}
\begin{array}{c@{\;}c@{}l}
\inf\limits_{\dualvar_x, \dualvar_w \geq 0} & \sup\limits_{\Sigma_x, \Sigma_w\succeq 0} & \inner{K^\top K}{\Sigma_x + \mu_x \mu_x^\top} + \inner{A^\top A}{\Sigma_w + \mu_w \mu_w^\top} + b^\top b - 2 \mu_x^\top K^\top A \mu_w - 2 b^\top ( K \mu_x - A \mu_w ) \\[-0.5ex]
&& \hspace{1cm} + \dualvar_x \big( \rho_x^2 - \| \mu_x - \wh{\m}_x \|^2 - \Tr{\Sigma_x + \covsa_x - 2 \big( \covsa_x^\frac{1}{2} \Sigma_x \covsa_x^\frac{1}{2} \big)^\frac{1}{2}} \big) \\[2ex]
&& \hspace{1cm} + \dualvar_w\! \big( \rho_w^2 - \| \mu_w - \wh{\m}_w \|^2 - \Tr{\Sigma_w + \covsa_w - 2 \big( \covsa_w^\frac{1}{2} \Sigma_w \covsa_w^\frac{1}{2} \big)^\frac{1}{2}} \big).
\end{array}
\end{equation}
Strong duality holds by \cite[Proposition~5.5.4]{ref:bertsekas2009convex}, which applies because the primal problem has a non-empty compact feasible set. Next, we observe that the inner maximization problem in~\eqref{eq:robust-inner} can be solved analytically by using Proposition~\ref{prop:lin_obj:Lag} in the appendix, and thus the dual problem~\eqref{eq:robust-inner} is equivalent to
\begin{align}
\label{eq:robust-inner2}
\begin{array}{cl}
\inf\limits_{\substack{\dualvar_x,\dualvar_w \\ \dualvar_x I_n \succ K^\top K \\ \dualvar_w I_m \succ A^\top A}} & \inner{K^\top K}{\mu_x \mu_x^\top} + \inner{A^\top A}{\mu_w \mu_w^\top} + b^\top b
- 2 \mu_x^\top K^\top A \mu_w - 2 b^\top ( K \mu_x - A \mu_w ) \\[-4ex]
& \hspace{1cm} + \dualvar_x \big(\rho_x^2 - \| \mu_x - \wh{\m}_x \|^2 -\Tr{ \covsa_x} + \dualvar_x \inner{(\dualvar_x I_n - K^\top K )^{-1}}{\covsa_x} \big) \\[2ex]
& \hspace{1cm} + \dualvar_w \big(\rho_w^2 - \| \mu_w - \wh{\m}_w \|^2 -\Tr{ \covsa_w} + \dualvar_w \inner{(\dualvar_w I_m - A^\top A)^{-1}}{\covsa_w} \big).
\end{array}
\end{align}
Substituting~\eqref{eq:robust-inner2} back into~\eqref{eq:robust2} then allows us to reformulate the Gelbrich MMSE estimation problem~\eqref{eq:dro:approx}~as
\begin{align}
\label{eq:robust3}
\begin{array}{c@{\,}l}
\inf\limits_{b } \, \sup\limits_{\substack{\mu_x, \mu_w \\ \| \mu_x - \wh{\m}_x \| \leq \rho_x \\ \| \mu_w - \wh{\m}_w \| \leq \rho_w}} \,
\inf\limits_{\substack{\dualvar_x,\dualvar_w \\ \dualvar_x I_n \succ K^\top K \\ \dualvar_w I_m \succ A^\top A }} & \inner{K^\top K}{\mu_x \mu_x^\top}
+ \inner{A^\top A}{\mu_w \mu_w^\top} + b^\top b - 2 \mu_x^\top K^\top A \mu_w - 2 b^\top ( K \mu_x - A \mu_w ) \\[-4ex]
& \hspace{1cm} + \dualvar_x \big(\rho_x^2 - \| \mu_x - \wh{\m}_x \|^2 -\Tr{ \covsa_x} + \dualvar_x \inner{(\dualvar_x I_n - K^\top K )^{-1}}{\covsa_x} \big) \\[2ex]
& \hspace{1cm} + \dualvar_w \big(\rho_w^2 - \| \mu_w - \wh{\m}_w \|^2 -\Tr{ \covsa_w} + \dualvar_w \inner{(\dualvar_w I_m - A^\top A)^{-1}}{\covsa_w} \big).
\end{array}
\end{align}
The infimum of the inner minimization problem over $(\dualvar_x, \dualvar_w)$ in~\eqref{eq:robust3} is convex quadratic in~$b$. Moreover, it is concave in~$(\mu_x,\mu_w)$ because $K^\top K - \dualvar_x I_n\prec 0$ and $A^\top A - \dualvar_w I_m\prec 0$ for any feasible choice of $(\dualvar_x,\dualvar_w)$ and because concavity is preserved under minimization. Finally, the feasible set for~$(\mu_x,\mu_w)$ is convex and compact. By Sion's classical minimax theorem, we may therefore interchange the infimum over $b$ with the supremum over $(\mu_x, \mu_w)$. The minimization problem over $b$ thus reduces to an unconstrained (strictly) convex quadratic program that has the unique optimal solution $ b = K \mu_x - A \mu_w$. Substituting this expression back into~\eqref{eq:robust3} then yields
\begin{align}
\label{eq:robust4}
\begin{array}{c@{\,}l}
\sup\limits_{\substack{\mu_x, \mu_w \\ \| \mu_x - \wh{\m}_x \| \leq \rho_x \\ \| \mu_w - \wh{\m}_w \| \leq \rho_w}} \,
\inf\limits_{\substack{\dualvar_x,\dualvar_w \\ \dualvar_x I_n \succ K^\top K \\ \dualvar_w I_m \succ A^\top A }} &
\dualvar_x \big( \rho_x^2 - \| \mu_x - \wh{\m}_x \|^2 - \Tr{\covsa_x} \big) + \dualvar_x^2 \inner{(\dualvar_x I_n - K^\top K)^{-1}}{\covsa_x} \\[-4ex]
& \hspace{0.5cm} + \dualvar_w \big( \rho_w^2 - \| \mu_w - \wh{\m}_w \|^2 - \Tr{\covsa_w} \big) + \dualvar_w^2 \inner{(\dualvar_w I_m - A^\top A)^{-1}}{\covsa_w}.
\end{array}
\end{align}
It is easy to verify that the resulting maximization problem over $(\mu_x, \mu_w)$ is solved by $\mu_x=\wh{\m}_x$ and $\mu_w= \wh{\m}_w$. Substituting the corresponding optimal value into~\eqref{eq:robust} finally yields
\[
\inf_{\substack{b}} \sup\limits_{\mathbb{Q} \in {\mbb G}(\widehat \mathbb{P})} \mathcal{R}(\psi_{A,b}, \mathbb{Q}) = \left\{
\begin{array}{c@{\,}l}
\inf\limits_{\substack{\dualvar_x,\dualvar_w \\ \dualvar_x I_n \succ K^\top K \\ \dualvar_w I_m \succ A^\top A }} &
\dualvar_x \big( \rho_x^2 - \Tr{\covsa_x} \big) + \dualvar_x^2 \inner{(\dualvar_x I_n - K^\top K)^{-1}}{\covsa_x} \\[-3ex]
& \hspace{0.5cm} + \dualvar_w \big( \rho_w^2 - \Tr{\covsa_w} \big) + \dualvar_w^2 \inner{(\dualvar_w I_m - A^\top A)^{-1}}{\covsa_w}.
\end{array}
\right.
\]
From the above equation and the definition of $K$ it is evident that the Gelbrich MMSE estimation problem
\begin{equation} \label{eq:dro:approx:2}
\inf_{\psi\in\Ac} \sup\limits_{\mathbb{Q} \in {\mbb G}(\wh \PP)} \mathcal{R}(\psi, \mathbb{Q}) = \inf_{A, b} \sup\limits_{\mathbb{Q} \in {\mbb G}(\wh \PP)} \mathcal{R}(\psi_{A,b}, \mathbb{Q})
\end{equation}
is indeed equivalent to the finite convex optimization problem~\eqref{eq:VA}.
Assume now that $\rho_x>0$ and $\rho_w > 0$. In this case we know from Proposition~\ref{prop:Gelbrich}~\ref{prop:Gelbrich:solvability} that the Gelbrich MMSE estimation problem~\eqref{eq:dro:approx:2} admits an optimal affine estimator $\psi^\star(y)=A^\star y+b^\star$ for some $A^\star\in\mathbb{R}^{n\times m}$ and $b^\star \in\mathbb{R}^n$. The reasoning in the first part of the proof then implies that $A^\star$ solves~\eqref{eq:VA}. Moreover, it implies that $b^\star$ is optimal in~\eqref{eq:robust} when we fix $A=A^\star$. As~\eqref{eq:robust} is equivalent to~\eqref{eq:robust3} and as the unique optimal solution of~\eqref{eq:robust3} for $A=A^\star$ is given by $b = \wh{\m}_x - A^\star (H \wh{\m}_x + \wh{\m}_w)$, we may finally conclude that
\[
b^\star = \wh{\m}_x - A^\star (H \wh{\m}_x + \wh{\m}_w).
\]
By reversing these arguments, one can further show that if $A^\star$ solves~\eqref{eq:VA} and $b^\star$ is defined as above, then the affine estimator $\psi^\star(y)=A^\star y+b^\star$ is optimal in~\eqref{eq:dro:approx:2}. This observation completes the proof.
\end{proof}
Using Schur complement arguments, the convex program~\eqref{eq:VA} can be further simplified to a standard semidefinite program (SDP), which can be addressed with off-the-shelf solvers.
\begin{corollary}[SDP reformulation] \label{cor:primal:refor}
The Gelbrich MMSE estimation problem~\eqref{eq:dro:approx} is equivalent to the SDP
\begin{equation}
\label{eq:program:SDP:linear}
\begin{array}{cl}
\inf & \dualvar_x \big( \rho_x^2 - \Tr{\covsa_x} \big) + \Tr{U_x} + \dualvar_w \big( \rho_w^2 - \Tr{\covsa_w} \big) + \Tr{U_w} \vspace{1mm}\\
\st & A \in \mathbb{R}^{n \times m}, \quad \dualvar_x, \dualvar_w \in \mathbb{R}_+ \\
& U_x \in \mathbb{S}_{+}^{n}, \quad V_x \in \mathbb{S}_{+}^{n} , \quad U_w \in \mathbb{S}_{+}^{m}, \quad V_w \in \mathbb{S}_{+}^{m} \\
& \begin{bmatrix}
U_x ~ & \dualvar_x \covsa_x^\frac{1}{2} \\
\dualvar_x \covsa_x^\frac{1}{2} ~ & V_x
\end{bmatrix} \succeq 0, \quad\,\,
\begin{bmatrix}
\dualvar_x I_n - V_x ~ & ~ I_n - H^\top A^\top \\
I_n - A H ~& ~ I_n
\end{bmatrix} \succeq 0 \\
& \begin{bmatrix}
U_w ~ & \dualvar_w \covsa_w^\frac{1}{2} \\
\dualvar_w \covsa_w^\frac{1}{2} ~ & V_w
\end{bmatrix} \succeq 0, \quad
\begin{bmatrix}
\dualvar_w I_m - V_w ~ & A^\top \\
A ~ & I_n
\end{bmatrix} \succeq 0.
\end{array}
\end{equation}
\end{corollary}
\begin{proof}[Proof of Corollary~\ref{cor:primal:refor}]
Define the extended real-valued function $h_w:\mathbb{R}^{n\times m}\times \mathbb{R}_+\rightarrow (-\infty,\infty]$ through
\[
h_w(A,\dualvar_w) = \left\{ \begin{array}{ll} \dualvar_w^2 \inner{(\dualvar_w I_m - A^\top A)^{-1}}{\covsa_w} &\text{if } \dualvar_w I_m - A^\top A \succ 0, \\
\infty & \text{otherwise.} \end{array}\right.
\]
If $\dualvar_w I_m - A^\top A \succ 0$, then we have
\begin{align}
h_w(A,\dualvar_w) =&\inf_{U_w\succeq 0} \left\{ \Tr{U_w}\; :\; U_w \succeq \dualvar_w^2 \covsa_w^\frac{1}{2} (\dualvar_w I_m - A^\top A)^{-1} \covsa_w^\frac{1}{2} \right\} \notag \\
=&\inf_{U_w\succeq 0, V_w\succ 0} \left\{ \Tr{U_w}\;:\; U_w \succeq \dualvar_w^2 \covsa_w^\frac{1}{2} V_w^{-1} \covsa_w^\frac{1}{2}, ~\dualvar_w I_m - A^\top A \succeq V_w
\right\} \notag \\
=&\inf_{U_w\succeq 0, V_w\succ 0} \left\{ \Tr{U_w}\;:\;
\begin{bmatrix}
\dualvar_w I_m - V_w & A^\top \\
A & I_n
\end{bmatrix} \succeq 0, ~
\begin{bmatrix}
U_w & \dualvar_w \covsa_w^\frac{1}{2} \\
\dualvar_w \covsa_w^\frac{1}{2} & V_w
\end{bmatrix} \succeq 0
\right\},
\label{eq:SDP-reformulation}
\end{align}
where the first equality holds due to the cyclicity of the trace operator and because $U_w \succeq \bar U_w$ implies $\Tr{U_w} \geq \Tr{\bar U_w}$ for all $U_w,\bar U_w\succeq 0$, the second equality holds because $V_w\succeq \bar V_w$ is equivalent to $V_w^{-1}\preceq \bar V_w^{-1}$ for all $V_w,\bar V_w\succ 0$, and the last equality follows from standard Schur complement arguments; see, {\text{\emph{e.g.}}}, \cite[\S~A.5.5]{ref:boyd2004convex}. If $\dualvar_w I_m - A^\top A \nsucc 0$, on the other hand, then the first matrix inequality in~\eqref{eq:SDP-reformulation} implies that $V_w$ must have at least one non-positive eigenvalue, which contradicts the constraint $V_w\succ 0$. The SDP~\eqref{eq:SDP-reformulation} is therefore infeasible, and its infimum evaluates to $\infty$. Thus, $h_w(A,\dualvar_w)$ coincides with the optimal value of the SDP~\eqref{eq:SDP-reformulation} for all $A\in\mathbb{R}^{n\times m}$ and $\dualvar_w\in\mathbb{R}_+$.
A similar SDP reformulation can be derived for the function $h_x:\mathbb{R}^{n\times m}\times \mathbb{R}_+\rightarrow (-\infty,\infty]$ defined through
\[
h_x(A,\dualvar_x)= \left\{ \begin{array}{ll} \dualvar_x^2 \inner{[\dualvar_x I_n - (I_n - A H)^\top (I_n - A H)]^{-1}}{\covsa_x} & \text{if } \dualvar_x I_n - (I_n - A H)^\top (I_n - A H) \succ 0,\\
\infty & \text{otherwise.}\end{array} \right.
\]
The claim now follows by substituting the SDP reformulations for $h_w(A,\dualvar_w)$ and $h_x(A,\dualvar_x)$ into~\eqref{eq:VA}. In doing so, we may relax the strict semidefinite inequalities $V_w\succ 0$ and $V_x\succ 0$ to weak inequalities $V_w\succeq 0$ and $V_x\succeq 0$, which amounts to taking the closure of the (non-empty) feasible set and does not change the infimum of problem~\eqref{eq:VA}. This observation completes the proof.
\end{proof}
\begin{remark}[Numerical stability]
The SDP~\eqref{eq:program:SDP:linear} requires the square roots of the nominal covariance matrices as inputs. Unfortunately, iterative methods for computing matrix square roots often suffer from numerical instability in high dimensions. As a remedy, one may replace those matrix inequalities in~\eqref{eq:program:SDP:linear} that involve $\covsa^\frac{1}{2}_x$ and $\covsa^\frac{1}{2}_w$ with
\[
\begin{bmatrix}
U_x ~ & \dualvar_x \Lambda_x^\top \\
\dualvar_x \Lambda_x ~ & V_x
\end{bmatrix} \succeq 0 \qquad \text{and} \qquad \begin{bmatrix}
U_w ~ & \dualvar_w \Lambda_w^\top \\
\dualvar_w \Lambda_w ~ & V_w
\end{bmatrix} \succeq 0,
\]
where $\Lambda_x$ and $\Lambda_w$ represent the lower triangular Cholesky factors of $\covsa_x$ and $\covsa_w$, respectively. Thus, we have $\covsa_x = \Lambda_x \Lambda_x^\top$ and $\covsa_w = \Lambda_w \Lambda_w^\top$. We emphasize that $\Lambda_x$ and $\Lambda_w$ can be computed reliably in high dimensions.
\end{remark}
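To illustrate, the following Python sketch (a minimal implementation under stated assumptions: \texttt{numpy} and \texttt{cvxpy} with a semidefinite solver are available, and all problem data are generated randomly) solves the SDP~\eqref{eq:program:SDP:linear} in the Cholesky form suggested by the remark above.
\begin{verbatim}
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, m = 2, 3                              # illustrative sizes (assumption)
H = rng.standard_normal((m, n))
Mx = rng.standard_normal((n, n)); Sx_hat = Mx @ Mx.T + np.eye(n)
Mw = rng.standard_normal((m, m)); Sw_hat = Mw @ Mw.T + np.eye(m)
Lx, Lw = np.linalg.cholesky(Sx_hat), np.linalg.cholesky(Sw_hat)
rho_x, rho_w = 0.5, 0.3                  # Wasserstein radii (assumption)

A = cp.Variable((n, m))
gx, gw = cp.Variable(nonneg=True), cp.Variable(nonneg=True)
Ux, Vx = cp.Variable((n, n), PSD=True), cp.Variable((n, n), PSD=True)
Uw, Vw = cp.Variable((m, m), PSD=True), cp.Variable((m, m), PSD=True)

def lmi(blocks):
    # encode "bmat(blocks) >> 0" via a symmetric slack variable
    B = cp.bmat(blocks)
    Z = cp.Variable(B.shape, symmetric=True)
    return [Z == B, Z >> 0]

cons  = lmi([[Ux, gx * Lx.T], [gx * Lx, Vx]])
cons += lmi([[gx * np.eye(n) - Vx, np.eye(n) - H.T @ A.T],
             [np.eye(n) - A @ H, np.eye(n)]])
cons += lmi([[Uw, gw * Lw.T], [gw * Lw, Vw]])
cons += lmi([[gw * np.eye(m) - Vw, A.T], [A, np.eye(n)]])

obj = (gx * (rho_x**2 - np.trace(Sx_hat)) + cp.trace(Ux)
       + gw * (rho_w**2 - np.trace(Sw_hat)) + cp.trace(Uw))
cp.Problem(cp.Minimize(obj), cons).solve()
print(A.value)   # sensitivity matrix A* of the Gelbrich MMSE estimator
\end{verbatim}
By Theorem~\ref{thm:conservative}, the optimal intercept can then be recovered as $b^\star = \wh{\m}_x - A^\star (H \wh{\m}_x + \wh{\m}_w)$.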
\section{The Dual Wasserstein MMSE Estimation Problem over Normal
Priors}
\label{sect:dual}
We now examine the dual Wasserstein MMSE estimation
problem
\begin{align}
\label{eq:dual-dro}
\mathop{\text{maximize}}_{\mathbb{Q} \in \mathbb B(\wh \PP)}\inf\limits_{\psi\in{\mc F}} \mathcal{R}(\psi, \mathbb{Q}) ,
\end{align}
which is obtained from~\eqref{eq:dro} by interchanging the order of minimization and maximization. Any maximizer $\mathbb{Q}^\star$ of this dual estimation problem, if it exists, will henceforth be called a {\em least favorable prior}. Unfortunately, problem~\eqref{eq:dual-dro} is
generically intractable. Below we will demonstrate, however,
that~\eqref{eq:dual-dro} becomes tractable if the nominal
distribution~$\wh \PP$ is normal.
\begin{definition}[Normal distributions]
\label{def:normal-dist}
We say that $\mathbb P$ is a normal distribution on $\mathbb{R}^d$ with mean $\mu\in\mathbb{R}^d$ and covariance matrix $\Sigma\in\mathbb{S}_{+}^d$, that is, $\mathbb P=\mathcal N(\mu,\Sigma)$, if $\mathbb P$ is supported on $\text{\rm supp}(\mathbb P)=\{\mu+Ev: v\in\mathbb{R}^k\}$, and if the density function of $\mathbb P$ with respect to the Lebesgue measure on $\text{\rm supp}(\mathbb P)$ is given by
\[
\varrho_{\mathbb P}(\xi) = \frac{1}{\sqrt{(2\pi)^k\det(D)}}\, e^{-\frac{1}{2}(\xi-\mu)^\top ED^{-1}E^\top(\xi-\mu)},
\]
where $k=\text{\rm rank}(\Sigma)$, $D\in \mathbb{S}_{++}^k$ is the diagonal matrix of the positive eigenvalues of $\Sigma$, and $E\in\mathbb{R}^{d\times k}$ is the matrix whose columns correspond to the orthonormal eigenvectors of the positive eigenvalues of $\Sigma$.
\end{definition}
Definition~\ref{def:normal-dist} also accounts for degenerate normal distributions with singular covariance matrices. We now recall some basic properties of normal distributions that are
crucial for the results of this paper.
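Before doing so, we note that the spectral representation in Definition~\ref{def:normal-dist} directly suggests a sampling scheme for possibly degenerate normal distributions: draw $v \sim \mathcal N(0, I_k)$ and return $\mu + E D^\frac{1}{2} v$. A minimal Python sketch (assuming \texttt{numpy} is available; the rank-one covariance matrix is an illustrative assumption) reads as follows.
\begin{verbatim}
import numpy as np

def sample_normal(mu, Sigma, size, rng):
    # spectral factors as in the definition: Sigma = E D E^T with D > 0
    evals, evecs = np.linalg.eigh(Sigma)
    keep = evals > 1e-12 * max(evals.max(), 1.0)
    E, D_half = evecs[:, keep], np.sqrt(evals[keep])
    z = rng.standard_normal((size, int(keep.sum())))
    return mu + (z * D_half) @ E.T       # supported on mu + range(E)

rng = np.random.default_rng(3)
Sigma = np.array([[1.0, 1.0], [1.0, 1.0]])   # rank one, hence degenerate
xs = sample_normal(np.zeros(2), Sigma, 5000, rng)
print(np.cov(xs.T))                          # approximately Sigma
\end{verbatim}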
\begin{proposition}[{Affine transformations
\cite[Theorem~2.16]{ref:fang1990symmetric}}]
\label{prop:normal-affine}
If $\xi \in\mathbb{R}^d$ follows the normal distribution
$\mathcal N(\mu, \Sigma)$, while $A \in \mathbb{R}^{k \times d}$ and
$b \in \mathbb{R}^k$, then $A \xi + b\in\mathbb{R}^k$ follows the
normal distribution $\mathcal N(A \mu + b, A
\Sigma A^\top)$.
\end{proposition}
\begin{proposition}[{Affine conditional expectations
\cite[Corollary~5]{ref:cambanis1981theory}}]
\label{prop:normal-cond-exp}
Assume that $\xi \in\mathbb{R}^d$ follows the normal distribution $\mathbb{P}=
\mathcal N(\mu, \Sigma)$ and that
\begin{align*}
\xi = \begin{bmatrix} \xi_1 \\ \xi_2 \end{bmatrix}, \qquad
\mu = \begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix} \qquad
\text{and} \qquad \Sigma = \begin{bmatrix}
\Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22}
\end{bmatrix},
\end{align*}
where $\xi_1,\mu_1\in\mathbb{R}^{d_1}$, $\xi_2,\mu_2\in\mathbb{R}^{d_2}$,
$\Sigma_{11}\in\mathbb{R}^{d_1\times d_1}$, $\Sigma_{22}\in\mathbb{R}^{d_2\times d_2}$ and
$\Sigma_{12}=\Sigma_{21}^\top \in\mathbb{R}^{d_1\times d_2}$ for some
$d_1,d_2\in\mathbb N$ with $d_1+d_2= d$. Then, there exist
$A\in\mathbb{R}^{d_1\times d_2}$ and $b\in\mathbb{R}^{d_1}$ such that
$\mathds{E}_\mathbb{P}[\xi_1| \xi_2]=A \xi_2 + b $ $\mathbb{P}$-almost surely.
\end{proposition}
Another useful but lesser known property of normal distributions is
that their Wasserstein distances can be expressed analytically in terms of the distributions' first- and second-order moments.
\begin{proposition}[Wasserstein distance between normal distributions {\cite[Proposition~7]{ref:givens1984class}}]
\label{prop:normal:distance}
The Wasserstein distance between two normal distributions
$\mathbb{Q}_1 = \mathcal N(\mu_1, \Sigma_1)$ and $\mathbb{Q}_2 = \mathcal N(\mu_2, \Sigma_2)$ equals the Gelbrich distance between their mean vectors and covariance matrices, that is, $\mathds{W}(\mathbb{Q}_1, \mathbb{Q}_2) = \mathds{G}( (\mu_1, \Sigma_1), (\mu_2,
\Sigma_2) )$.
\end{proposition}
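Combining Proposition~\ref{prop:normal:distance} with Lemma~\ref{lem:wasserstein-decomposition} yields a simple sanity check: for products of independent normal distributions the squared Wasserstein distance must decompose across the factors. The following Python sketch (assuming \texttt{numpy} and \texttt{scipy} are available; all moments are randomly generated for illustration) verifies this decomposition numerically.
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm, block_diag

def g2(mu1, S1, mu2, S2):
    # squared Gelbrich distance; equals the squared Wasserstein
    # distance between N(mu1, S1) and N(mu2, S2) for normals
    cross = np.real(sqrtm(sqrtm(S2) @ S1 @ sqrtm(S2)))
    return np.sum((mu1 - mu2)**2) + np.trace(S1 + S2 - 2 * cross)

rng = np.random.default_rng(4)
def rnd_cov(d):
    M = rng.standard_normal((d, d))
    return M @ M.T + np.eye(d)

n, m = 2, 3
mx1, mx2 = rng.standard_normal(n), rng.standard_normal(n)
mw1, mw2 = rng.standard_normal(m), rng.standard_normal(m)
Sx1, Sx2, Sw1, Sw2 = rnd_cov(n), rnd_cov(n), rnd_cov(m), rnd_cov(m)

# a product of independent normals is normal with block-diag covariance
lhs = g2(np.concatenate([mx1, mw1]), block_diag(Sx1, Sw1),
         np.concatenate([mx2, mw2]), block_diag(Sx2, Sw2))
rhs = g2(mx1, Sx1, mx2, Sx2) + g2(mw1, Sw1, mw2, Sw2)
print(lhs, rhs)   # the two values should agree up to round-off
\end{verbatim}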
Assume now that the nominal distributions of the parameter $x\in\mathbb{R}^n$ and the noise $w\in\mathbb{R}^m$ are normal, that is, assume that $\wh \PP_x = \mathcal N(\widehat
\mu_x, \covsa_x)$ and $\wh \PP_w = \mathcal N(\widehat \mu_w,
\covsa_w)$.
Thus, the joint nominal distribution $\wh \PP = \wh \PP_{x} \times
\wh \PP_{w}$ is also normal, that is,
\begin{align}
\label{eq:nominal:normal}
\wh \PP=\mathcal N(\widehat \mu, \covsa) \qquad \text{where}
\qquad \widehat \mu = \begin{bmatrix} \widehat \mu_x \\ \widehat \mu_w
\end{bmatrix} \qquad \text{and} \qquad \covsa = \begin{bmatrix}
\covsa_{x} & 0 \\ 0 & \covsa_{w} \end{bmatrix}.
\end{align}
Armed with the fundamental results on normal distributions summarized above, we are now ready to address the dual Wasserstein MMSE estimation problem~\eqref{eq:dual-dro} with a normal nominal distribution. In analogy to Section~\ref{sect:approx}, where we proposed the Gelbrich MMSE estimation problem as an easier conservative approximation for the original {\em primal} estimation problem~\eqref{eq:dro}, we will now construct an easier conservative approximation for the original {\em dual} estimation problem~\eqref{eq:dual-dro}. To this end, we define the restricted ambiguity set
\begin{align}
\notag
\mathbb B_{\mathcal N}(\wh \PP) = \left\{ \mathbb{Q}_x \times \mathbb{Q}_w \in \mathcal{M}(\mathbb{R}^{n})\times
\mathcal{M}(\mathbb{R}^{m}) :
\begin{array}{l}
\exists \Sigma_x\in\mathbb{S}_{+}^n,~ \Sigma_w\in\mathbb{S}_{++}^m \text{ with }\\
\mathbb{Q}_x = \mathcal N(\wh{\m}_x, \Sigma_x), ~\mathbb{Q}_w =
\mathcal N(\wh{\m}_w, \Sigma_w), \\
\mathds{W} (\mathbb{Q}_x, \wh \PP_x ) \leq \rho_x,~\mathds{W} (\mathbb{Q}_w, \wh \PP_w) \leq \rho_w
\end{array}
\right\}.
\end{align}
By construction, $\mathbb B_{\mathcal N}(\wh \PP)$ contains all {\em normal}
distributions $\mathbb{Q}=\mathbb{Q}_x \times \mathbb{Q}_w$ from within the original
Wasserstein ambiguity set $\mathbb B(\wh \PP)$ that have the same mean vector
$(\wh{\m}_x, \wh{\m}_w)$ as the nominal
distribution~$\wh \PP=\wh \PP_x\times\wh \PP_w$, and where the covariance matrix of $\mathbb{Q}_w$ is strictly positive definite.
Thus, we have $\mathbb B_{\mathcal N}(\wh \PP) \subseteq \mathbb B(\wh \PP)$. Note also that $\mathbb B_{\mathcal N}(\wh \PP)$ is non-convex because mixtures of normal distributions usually fail to be normal.
By restricting the original Wasserstein ambiguity set $\mathbb B(\wh \PP)$ to
its subset $\mathbb B_{\mathcal N}(\wh \PP)$, we obtain the following conservative
approximation for the dual Wasserstein MMSE estimation
problem~\eqref{eq:dual-dro}.
\begin{align}
\label{eq:dual-dro:conservative}
\mathop{\text{maximize}}_{\mathbb{Q} \in \mathbb
B_{\mathcal N}(\wh \PP)}\inf\limits_{\psi\in{\mc F}} \mathcal{R}(\psi, \mathbb{Q})
\end{align}
We will henceforth refer to~\eqref{eq:dual-dro:conservative} as the dual Wasserstein MMSE estimation problem {\em over normal priors}. The
following main theorem shows that~\eqref{eq:dual-dro:conservative} is
equivalent to a finite convex optimization problem.
\begin{theorem}[Dual Wasserstein MMSE estimation problem over
normal priors]
\label{thm:least-favorable-prior}
Assume that the Wasserstein ambiguity set $\mathbb B_{\mathcal N}(\wh \PP)$ is centered at a normal distribution $\wh \PP$ of the form~\eqref{eq:nominal:normal}. Then, the dual Wasserstein MMSE estimation problem over normal priors~\eqref{eq:dual-dro:conservative} is equivalent to the finite convex optimization problem
\begin{equation}
\label{eq:program:dual}
\begin{array}{cl}
\sup & \Tr{\Sigma_x - \Sigma_x H^\top \left( H \Sigma_x H^\top +
\Sigma_w \right)^{-1} H \Sigma_x} \\
\st & \Sigma_x \in \mathbb{S}_{+}^{n}, \quad \Sigma_w \in \mathbb{S}_{++}^{m} \\[0.5ex]
& \Tr{\Sigma_x + \covsa_x - 2 \big( \covsa_x^\frac{1}{2}
\Sigma_x \covsa_x^\frac{1}{2} \big)^\frac{1}{2}} \leq \rho_x^2 \\[0.5ex]
& \Tr{\Sigma_w + \covsa_w - 2 \big( \covsa_w^\frac{1}{2}
\Sigma_w \covsa_w^\frac{1}{2} \big)^\frac{1}{2}} \leq \rho_w^2.
\end{array}
\end{equation}
If $\covsa_w \succ 0$, then~\eqref{eq:program:dual} is solvable and admits a maximizer~$(\Sigma_x^\star, \Sigma_w^\star)$ satisfying $\Sigma_x^\star \succeq \lambda_{\min}(\covsa_x) I_n$ and $\Sigma_w^\star \succeq \lambda_{\min}(\covsa_w) I_m$. Moreover, the supremum of~\eqref{eq:dual-dro:conservative} is attained by the normal distribution $\mathbb{Q}^\star=\mathbb{Q}^\star_x\times\mathbb{Q}^\star_w$ defined through $\mathbb{Q}^\star_x = \mathcal N(\wh{\m}_x, \Sigma_x^\star)$ and $\mathbb{Q}^\star_w= \mathcal N(\wh{\m}_w, \Sigma_w^\star)$.
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:least-favorable-prior}]
If $(x,w)$ is governed by a normal distribution $\mathbb{Q} \in
\mathbb B_{\mathcal N}(\wh \PP)$, then the linear transformation $(x, y) = (x, Hx +
w)$ is also normally distributed by virtue of
Proposition~\ref{prop:normal-affine}, and the average risk $\mathcal{R}(\psi,
\mathbb{Q})$ is minimized by the Bayesian MMSE estimator $\psi^\star_{\mathcal
B}(y)=\mathds{E}_{\mathbb{P}_{x|y}} [x]$, which is affine due to
Proposition~\ref{prop:normal-cond-exp}. Thus, in the dual Wasserstein
MMSE estimation problem with normal priors, the set~${\mc F}$ of {\em
all} estimators may be restricted to the set~$\Ac$ of all {\em affine}
estimators without sacrificing optimality, that is,
\begin{equation}
\label{eq:dual-simplification}
\sup\limits_{\mathbb{Q} \in \mathbb B_{\mathcal N}(\wh \PP)} \inf\limits_{\psi\in{\mc F}} \mathcal{R}(\psi,
\mathbb{Q}) = \sup\limits_{\mathbb{Q} \in \mathbb B_{\mathcal N}(\wh \PP)} \inf\limits_{\psi\in\Ac} \mathcal{R}(\psi,
\mathbb{Q}).
\end{equation}
As the average risk $\mathcal{R}(\psi, \mathbb{Q})$ of an affine estimator
$\psi\in\Ac$ simply evaluates the expectation of a quadratic function in
$(x,w)$, it depends on $\mathbb{Q}$ only through its first and second moments.
Moreover, as $\mathbb{Q}$ and $\wh \PP$ are normal distributions, their Wasserstein distance coincides with
the Gelbrich distance between their mean vectors and covariance
matrices; see Proposition~\ref{prop:normal:distance}. Thus, the
maximization problem over $\mathbb{Q}$ on the right hand side
of~\eqref{eq:dual-simplification} can be recast as an equivalent
maximization problem over the first and second moments of~$\mathbb{Q}$.
Specifically, by the definitions of $\mathcal{R}(\psi, \mathbb{Q})$ and $\mathbb B_{\mathcal N}(\wh \PP)$ we~find
\begin{align*}
\sup\limits_{\mathbb{Q} \in \mathbb B_{\mathcal N}(\wh \PP)} \inf\limits_{\psi\in\Ac} \mathcal{R}(\psi, \mathbb{Q})
= & \left\{
\begin{array}{cl}
\sup\limits_{\Sigma_x, \Sigma_w} & \inf\limits_{\substack{A, K \\ K =
I_n-A H}} \, \inf\limits_{b} ~\; \inner{K^\top K}{\Sigma_x + \wh{\m}_x
\wh{\m}_x^\top} + \inner{A^\top A}{\Sigma_w + \wh{\m}_w \wh{\m}_w^\top} +
b^\top b \\[-2ex]
& \hspace{2.5cm} - 2 \wh{\m}_x^\top K^\top A \wh{\m}_w -
2b^\top (K\wh{\m}_x - A \wh{\m}_w)\\[1ex]
\st & \mathds{G}\big((\wh{\m}_x, \Sigma_x), (\wh{\m}_x,
\covsa_x)\big)\leq \rho_x,~\mathds{G}\big( (\wh{\m}_w, \Sigma_w), (\wh{\m}_w,
\covsa_w) \big) \leq \rho_w \\
& \Sigma_x \succeq 0,~\Sigma_w \succ
0, \end{array}
\right.
\end{align*}
where the auxiliary decision variable $K = I_n-A H$ has
been introduced to simplify the objective function. The innermost minimization problem over $b$ constitutes an unconstrained (strictly) convex quadratic program that has the unique optimal solution $b= K\wh{\m}_x - A \wh{\m}_w$. Substituting this minimizer back into the objective function of the above problem and recalling the definition of the Gelbrich distance then yields
\begin{align}
\sup\limits_{\mathbb{Q} \in \mathbb B_{\mathcal N}(\wh \PP)} \inf\limits_{\psi\in\Ac} \mathcal{R}(\psi,
\mathbb{Q}) =& \left\{
\begin{array}{cl}
\sup\limits_{\Sigma_x, \Sigma_w} & \inf\limits_{\substack{A, K \\ K =
I_n-A H}} \; \inner{K^\top K}{\Sigma_x} + \inner{A^\top
A}{\Sigma_w} \\
\st & \Tr{\Sigma_x + \covsa_x - 2 \big( \covsa_x^\frac{1}{2}
\Sigma_x \covsa_x^\frac{1}{2} \big)^\frac{1}{2}} \leq \rho_x^2 \\[2ex]
& \Tr{\Sigma_w + \covsa_w - 2 \big( \covsa_w^\frac{1}{2}
\Sigma_w \covsa_w^\frac{1}{2} \big)^\frac{1}{2}} \leq \rho_w^2 \\[2ex]
& \Sigma_x \succeq 0,~\Sigma_w \succ 0.
\end{array}
\right. \label{eq:least1}
\end{align}
By using the equality $K = I_n-A H$ to eliminate $K$, the inner minimization problem in~\eqref{eq:least1} can be reformulated as an unconstrained quadratic program in~$A$. As $\Sigma_w \succ 0$, this quadratic program is strictly convex, and an elementary calculation reveals that its unique optimal solution is given by
\[
A^\star = \Sigma_x H^\top \left( H {\Sigma_x} H^\top +
\Sigma_w\right)^{-1}.
\]
Substituting $A^\star$ as well as the corresponding auxiliary decision variable $K^\star=I_n-A^\star H$ into the objective function of~\eqref{eq:least1} finally yields the postulated convex program~\eqref{eq:program:dual}.
Assume now that $\covsa_w \succ 0$, and define
\[
\mathcal S_x = \left\{ \Sigma_x \in \mathbb{S}_{+}^n: \mathds{G}\big((\wh{\m}_x, \Sigma_x), (\wh{\m}_x,
\covsa_x)\big)\leq \rho_x \right\}\quad \text{and}\quad
\mathcal S_w = \left\{ \Sigma_w \in \mathbb{S}_{+}^m: \mathds{G}\big( (\wh{\m}_w, \Sigma_w), (\wh{\m}_w,
\covsa_w) \big) \leq \rho_w \right\}.
\]
Equations~\eqref{eq:dual-simplification} and~\eqref{eq:least1} imply that
\begin{align}
\sup\limits_{\mathbb{Q} \in \mathbb B_{\mathcal N}(\wh \PP)} \inf\limits_{\psi\in{\mc F}} \mathcal{R}(\psi,
\mathbb{Q}) \le & \sup\limits_{\Sigma_x \in \mathcal S_x} \sup\limits_{\Sigma_w \in \mathcal S_w} \inf\limits_{\substack{A, K \\ K =
I_n-A H}} \; \inner{K^\top K}{\Sigma_x} + \inner{A^\top
A}{\Sigma_w} \notag \\
=& \sup\limits_{\substack{\Sigma_x \in \mathcal S_x \\ \Sigma_x \succeq \lambda_{\min}(\covsa_x) I_n}} \sup\limits_{\substack{\Sigma_w \in \mathcal S_w \\ \Sigma_w \succeq \lambda_{\min}(\covsa_w) I_m}} \inf\limits_{\substack{A, K \\ K =
I_n-A H}} \; \inner{K^\top K}{\Sigma_x} + \inner{A^\top
A}{\Sigma_w}, \label{eq:least2}
\end{align}
where the inequality holds because we relax the requirement that $\Sigma_w$ be strictly positive definite, and the equality follows from applying Lemma~\ref{lemma:monotone loss} consecutively to each of the two maximization problems. If $\covsa_w \succ 0$, then problem~\eqref{eq:least2} constitutes a restriction of~\eqref{eq:least1} and therefore provides also a lower bound on the dual Wasserstein MMSE estimation problem. In summary, we thus have
\begin{align}
\sup\limits_{\mathbb{Q} \in \mathbb B_{\mathcal N}(\wh \PP)} \inf\limits_{\psi\in{\mc F}} \mathcal{R}(\psi,
\mathbb{Q}) =& \left\{
\begin{array}{cl}
\sup\limits_{\Sigma_x, \Sigma_w} & \inf\limits_{\substack{A, K \\ K =
I_n-A H}} \; \inner{K^\top K}{\Sigma_x} + \inner{A^\top
A}{\Sigma_w} \\
\st & \Tr{\Sigma_x + \covsa_x - 2 \big( \covsa_x^\frac{1}{2}
\Sigma_x \covsa_x^\frac{1}{2} \big)^\frac{1}{2}} \leq \rho_x^2 \\[2ex]
& \Tr{\Sigma_w + \covsa_w - 2 \big( \covsa_w^\frac{1}{2}
\Sigma_w \covsa_w^\frac{1}{2} \big)^\frac{1}{2}} \leq \rho_w^2 \\[2ex]
& \Sigma_x \succeq\lambda_{\min}(\covsa_x) I_n, ~\Sigma_w \succeq \lambda_{\min}(\covsa_w) I_m.
\end{array}
\right. \label{eq:least3}
\end{align}
This reasoning implies that if $\covsa_w \succ 0$, then the constraints $\Sigma_x \succeq \lambda_{\min}(\covsa_x) I_n$ and $\Sigma_w \succeq \lambda_{\min}(\covsa_w) I_m$ can be appended to problem~\eqref{eq:least1} and, consequently, to problem~\eqref{eq:program:dual} without altering their common optimal value. Problem~\eqref{eq:program:dual} with the additional constraints $\Sigma_x \succeq \lambda_{\min}(\covsa_x) I_n$ and $\Sigma_w \succeq \lambda_{\min}(\covsa_w) I_m$ has a continuous objective function over a compact feasible set and is thus solvable. Any of its optimal solutions is also optimal in problem~\eqref{eq:program:dual}, which has no redundant constraints. Thus, problem~\eqref{eq:program:dual} is solvable.
It remains to show that $\mathbb{Q}^\star$ as constructed in the theorem statement is optimal in~\eqref{eq:dual-dro:conservative}. The feasibility of $(\Sigma_x^\star, \Sigma_w^\star)$ in~\eqref{eq:program:dual} implies that $\mathbb{Q}^\star \in \mathbb B_{\mathcal N}(\wh \PP)$, and thus $\mathbb{Q}^\star$ is feasible in~\eqref{eq:dual-dro:conservative}. Moreover, we have
\begin{align}
\label{eq:q-optimality}
\sup\limits_{\mathbb{Q} \in \mathbb B_{\mathcal N} (\wh \PP)} \, \inf\limits_{\psi \in {\mc F}} \,
\mathcal{R}(\psi, \mathbb{Q}) &\ge \inf\limits_{\psi \in {\mc F}} \, \mathcal{R}(\psi, \mathbb{Q}^\star) =
\Tr{\Sigma_x^\star - \Sigma_x^\star H^\top \left( H \Sigma_x^\star H^\top +
\Sigma_w^\star \right)^{-1} H \Sigma_x^\star},
\end{align}
where the equality follows from elementary algebra, recalling that the affine estimator $\psi(y) = A^\star y+b^\star$ with
\[
A^\star = \Sigma_x^\star H^\top \left( H {\Sigma_x^\star}
H^\top + \Sigma_w^\star\right)^{-1}\quad \text{and} \quad b^\star=
\wh{\m}_x - A^\star (H \wh{\m}_x + \wh{\m}_w)
\]
is the Bayesian MMSE estimator for the normal distribution
$\mathbb{Q}^\star$. As the right hand side of~\eqref{eq:q-optimality} coincides
with the maximum of~\eqref{eq:program:dual} and as
problem~\eqref{eq:program:dual} is equivalent to the dual Wasserstein
MMSE estimation problem~\eqref{eq:dual-dro:conservative} over normal
priors, we may thus conclude that the
inequality in~\eqref{eq:q-optimality} is tight. Thus, we find
\[
\sup\limits_{\mathbb{Q} \in \mathbb B_{\mathcal N}(\wh \PP)} \, \inf\limits_{\psi \in {\mc F}} \,
\mathcal{R}(\psi, \mathbb{Q}) = \inf\limits_{\psi \in {\mc F}} \, \mathcal{R}(\psi, \mathbb{Q}^\star),
\]
which in turn implies that $\mathbb{Q}^\star$ is optimal
in~\eqref{eq:dual-dro:conservative}. This observation completes the proof.
\end{proof}
\begin{remark}[Singular covariance matrices]
A nonlinear SDP akin to~\eqref{eq:program:dual} has been derived in~\cite{ref:shafieezadeh2018wasserstein} under the stronger assumption that the covariance matrix of the nominal distribution $\wh \PP$ is non-degenerate, which implies that $\covsa_x \succ 0$ and $\covsa_w \succ 0$. However, the weaker condition $\covsa_w \succ 0$ is sufficient to ensure that the matrix inversion in the objective function of problem~\eqref{eq:program:dual} is well-defined. Therefore, Theorem~\ref{thm:least-favorable-prior} remains valid if the nominal covariance matrix $\covsa_x$ is singular, which occurs in many applications.
On the other hand, it is common to require that $\covsa_w = \sigma^2 I_m$ for some $\sigma>0$, see, e.g.,~\cite{ref:chang2000adaptive}.
\end{remark}
Corollary~\ref{corol:dual:refor} below asserts that the convex program~\eqref{eq:program:dual} admits a canonical linear SDP reformulation. The proof is omitted as it relies on standard Schur complement arguments familiar from the proof of Corollary~\ref{cor:primal:refor}.
\begin{corollary}[SDP reformulation] \label{corol:dual:refor}
Assume that the Wasserstein ambiguity set $\mathbb B_{\mathcal N}(\wh \PP)$ is centered at a normal distribution $\wh \PP$ of the form~\eqref{eq:nominal:normal} with noise covariance matrix $\covsa_w \succ 0$. Then, the dual Wasserstein MMSE estimation problem~\eqref{eq:dual-dro:conservative} over normal priors is equivalent to the SDP
\begin{equation} \label{eq:SDP:dual}
\begin{array}{cl}
\max & \Tr{\Sigma_x} - \Tr{U} \\[1ex]
\st & \Sigma_x \in \mathbb{S}_{+}^n, \, \Sigma_w \in \mathbb{S}_{+}^m, \, V_x \in
\mathbb{S}_{+}^n, \, V_w \in \mathbb{S}_{+}^m, \, U \in \mathbb{S}_{+}^n \\[1ex]
& \begin{bmatrix} \covsa_x^\frac{1}{2} \Sigma_x \covsa_x^\frac{1}{2} &
V_x \\ V_x & I_n \end{bmatrix} \succeq 0, \quad
\begin{bmatrix} \covsa_w^\frac{1}{2} \Sigma_w \covsa_w^\frac{1}{2} & V_w
\\ V_w & I_m \end{bmatrix} \succeq 0\\[3ex]
& \Tr{\Sigma_x + \covsa_x - 2V_x} \leq \rho_x^2, \quad
\Tr{\Sigma_w + \covsa_w - 2V_w} \leq \rho_w^2 \\[2ex]
& \begin{bmatrix} U & \Sigma_x H^\top \\ H \Sigma_x & H \Sigma_x
H^\top + \Sigma_w \end{bmatrix} \succeq 0, \quad \Sigma_x \succeq
\lambda_{\min}(\covsa_x) I_n, \quad \Sigma_w \succeq
\lambda_{\min}(\covsa_w) I_m.
\end{array}
\end{equation}
\end{corollary}
We emphasize that the lower bounds on $\Sigma_x$ and $\Sigma_w$ are redundant in the sense that removing them does not alter the optimal value (cf.\ the proof of Theorem~\ref{thm:least-favorable-prior}), but they have been made explicit in~\eqref{eq:SDP:dual}.
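For readers who wish to experiment with~\eqref{eq:SDP:dual}, the following Python sketch sets the problem up in CVXPY; the problem data ($H$, the nominal covariance matrices and the radii) are hypothetical placeholders, and for large instances the first-order method of Section~\ref{sect:algorithm} should be preferred.
\begin{verbatim}
import cvxpy as cp
import numpy as np
from scipy.linalg import sqrtm

n, m = 4, 3                                  # illustrative dimensions
rng = np.random.default_rng(0)
H = rng.standard_normal((m, n))              # observation matrix
Sx_hat, Sw_hat = np.eye(n), 0.5 * np.eye(m)  # nominal covariance matrices
rho_x, rho_w = 0.3, 0.1                      # Wasserstein radii

Sx_half, Sw_half = np.real(sqrtm(Sx_hat)), np.real(sqrtm(Sw_hat))
Sx, U = cp.Variable((n, n), PSD=True), cp.Variable((n, n), PSD=True)
Sw = cp.Variable((m, m), PSD=True)
Vx, Vw = cp.Variable((n, n), PSD=True), cp.Variable((m, m), PSD=True)

constraints = [
    cp.bmat([[Sx_half @ Sx @ Sx_half, Vx], [Vx, np.eye(n)]]) >> 0,
    cp.bmat([[Sw_half @ Sw @ Sw_half, Vw], [Vw, np.eye(m)]]) >> 0,
    cp.trace(Sx + Sx_hat - 2 * Vx) <= rho_x**2,
    cp.trace(Sw + Sw_hat - 2 * Vw) <= rho_w**2,
    cp.bmat([[U, Sx @ H.T], [H @ Sx, H @ Sx @ H.T + Sw]]) >> 0,
    Sx >> np.min(np.linalg.eigvalsh(Sx_hat)) * np.eye(n),
    Sw >> np.min(np.linalg.eigvalsh(Sw_hat)) * np.eye(m),
]
problem = cp.Problem(cp.Maximize(cp.trace(Sx) - cp.trace(U)), constraints)
problem.solve(solver=cp.SCS)
\end{verbatim}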
\section{Nash Equilibrium and Optimality of Affine Estimators}
\label{sect:nash}
If $\wh \PP$ is a normal distribution of the form~\eqref{eq:nominal:normal}, then we have
\begin{equation} \label{eq:strong:duality}
\inf\limits_{\psi\in\Ac} \sup\limits_{\mathbb{Q} \in \mathbb G(\wh \PP)} \mathcal{R}(\psi, \mathbb{Q}) \ge
\inf\limits_{\psi\in{\mc F}} \sup\limits_{\mathbb{Q} \in \mathbb B(\wh \PP)} \mathcal{R}(\psi, \mathbb{Q}) \ge
\sup\limits_{\mathbb{Q} \in \mathbb B(\wh \PP)}\inf\limits_{\psi\in{\mc F}} \mathcal{R}(\psi, \mathbb{Q}) \ge
\sup\limits_{\mathbb{Q} \in \mathbb B_{\mathcal N}(\wh \PP)}\inf\limits_{\psi\in{\mc F}} \mathcal{R}(\psi, \mathbb{Q}),
\end{equation}
where the first inequality follows from the inclusions $\Ac \subseteq {\mc F}$ and $\mathbb B(\wh \PP) \subseteq \mathbb G(\wh \PP)$, the second inequality exploits weak duality, and the last inequality holds due to the inclusion $\mathbb B_{\mathcal N}(\wh \PP) \subseteq \mathbb B(\wh \PP)$. Note that the leftmost minimax problem is the Gelbrich MMSE estimation problem~\eqref{eq:dro:approx} studied in Section~\ref{sect:approx}, and the rightmost maximin problem is the dual Wasserstein MMSE estimation problem~\eqref{eq:dual-dro:conservative} over normal priors studied in Section~\ref{sect:dual}. We also highlight that these restricted primal and dual estimation problems sandwich the original Wasserstein estimation problems~\eqref{eq:dro} and~\eqref{eq:dual-dro}, which coincide with the second and third problems in~\eqref{eq:strong:duality}, respectively. The following theorem asserts that all inequalities in~\eqref{eq:strong:duality} actually collapse to equalities.
\begin{theorem}[Sandwich theorem]
\label{thm:sandwich}
If $\wh \PP$ is a normal distribution of the form~\eqref{eq:nominal:normal}, then the optimal values of the restricted primal and dual estimation problems~\eqref{eq:dro:approx} and~\eqref{eq:dual-dro:conservative} coincide, i.e.,
\[
\inf\limits_{\psi\in\Ac} \sup\limits_{\mathbb{Q} \in \mathbb G(\wh \PP)} \mathcal{R}(\psi, \mathbb{Q}) = \sup\limits_{\mathbb{Q} \in \mathbb B_{\mathcal N}(\wh \PP)}\inf\limits_{\psi\in{\mc F}} \mathcal{R}(\psi, \mathbb{Q}).
\]
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:sandwich}]
By Theorem~\ref{thm:conservative}, the Gelbrich MMSE estimation problem~\eqref{eq:dro:approx} can be expressed as
\begin{align*}
\inf\limits_{\psi\in\Ac} \sup\limits_{\mathbb{Q} \in {\mbb G}(\wh \PP)} \mathcal{R}(\psi, \mathbb{Q}) =
\left\{
\begin{array}{ccl}
\inf\limits_{\substack{A, K \\ K = I_n - A H}} & \inf\limits_{\substack{\dualvar_x,\dualvar_w \\ \dualvar_x I_n \succ K^\top K \\ \dualvar_w I_m \succ A^\top A }} &
\dualvar_x \big( \rho_x^2 - \Tr{\covsa_x} \big) + \dualvar_x^2 \inner{(\dualvar_x I_n - K^\top K)^{-1}}{\covsa_x} \\[-4ex]
&& \hspace{0.5cm} + \dualvar_w \big( \rho_w^2 - \Tr{\covsa_w} \big) + \dualvar_w^2 \inner{(\dualvar_w I_m - A^\top A)^{-1}}{\covsa_w},
\end{array}
\right.
\end{align*}
where the auxiliary variable $K = I_n-A H$ has been introduced to highlight the problem's symmetries.
Next, we introduce the feasible sets
\[
\mathcal S_x = \left\{ \Sigma_x \in \mathbb{S}_{+}^n: \Tr{\Sigma_x + \covsa_x - 2 \big( \covsa_x^\frac{1}{2} \Sigma_x \covsa_x^\frac{1}{2} \big)^\frac{1}{2}} \leq \rho_x^2 \right\}
\]
and
\[
\mathcal S_w = \left\{ \Sigma_w \in \mathbb{S}_{+}^m: \Tr{\Sigma_w + \covsa_w - 2 \big( \covsa_w^\frac{1}{2} \Sigma_w \covsa_w^\frac{1}{2} \big)^\frac{1}{2}} \leq \rho_w^2 \right\},
\]
both of which are convex and compact by virtue of Lemma~\ref{lemma:compact:FS}.
Using Proposition~\ref{prop:quadratic}\,\ref{prop:quad:dual} to reformulate the inner minimization problem over $\gamma_x$ and $\gamma_w$, we then obtain
\begin{align}
\inf\limits_{\psi\in\Ac} \sup\limits_{\mathbb{Q} \in {\mbb G}(\wh \PP)} \mathcal{R}(\psi, \mathbb{Q}) &=
\inf\limits_{\substack{A, K \\ K = I_n - A H}} \sup\limits_{\substack{\Sigma_x \in \mathcal S_x \\ \Sigma_w \in \mathcal S_w}} ~\inner{K^\top K}{\Sigma_x} + \inner{A^\top A}{\Sigma_w} \notag
\\
&= \sup\limits_{\Sigma_x \in \mathcal S_x} \inf\limits_{\substack{A, K \\ K = I_n - A H}} \sup\limits_{ \Sigma_w \in \mathcal S_w} ~\inner{K^\top K}{\Sigma_x} + \inner{A^\top A}{\Sigma_w} \notag
\end{align}
where the second equality holds due to Sion's minimax theorem~\cite{ref:sion1958minimax}. Define now the auxiliary function
\[
f(A) = \sup\limits_{\Sigma_w \in \mathcal S_w}~\inner{A^\top A}{\Sigma_w}.
\]
As $\Sigma_w \succeq 0$ for any $\Sigma_w \in \mathcal S_w$, $f$ constitutes a pointwise maximum of convex functions and is therefore itself convex. In addition, as the set $\mathcal S_w$ is compact by Lemma~\ref{lemma:compact:FS}, $f$ is everywhere finite and thus continuous thanks to \cite[Theorem~2.35]{ref:rockafellar2010variational}. This allows us to conclude that
\begin{align}
\inf\limits_{\psi\in\Ac} \sup\limits_{\mathbb{Q} \in {\mbb G}(\wh \PP)} \mathcal{R}(\psi, \mathbb{Q})
&= \sup\limits_{\Sigma_x \in \mathcal S_x} \inf\limits_{\substack{A, K \\ K = I_n - A H}} ~\inner{K^\top K}{\Sigma_x} + f(A) \notag \\
&= \sup\limits_{\substack{\Sigma_x \in \mathcal S_x \\ \Sigma_x \succeq \lambda_{\min}(\covsa_x) I_n}} \inf\limits_{\substack{A, K \\ K = I_n - A H}} ~\inner{K^\top K}{\Sigma_x} + f(A) \notag \\
&= \inf\limits_{\substack{A, K \\ K = I_n - A H}} \sup\limits_{\substack{\Sigma_x \in \mathcal S_x \\ \Sigma_x \succeq \lambda_{\min}(\covsa_x) I_n}} \sup\limits_{ \Sigma_w \in \mathcal S_w} ~\inner{K^\top K}{\Sigma_x} + \inner{A^\top A}{\Sigma_w}, \notag
\end{align}
where the second equality exploits Lemma~\ref{lemma:monotone loss}, and the last equality follows from Sion's minimax theorem~\cite{ref:sion1958minimax}. Another (trivial) application of Lemma~\ref{lemma:monotone loss} then allows us to append the constraint $\Sigma_w \succeq \lambda_{\min}(\covsa_w) I_m$ to the maximization problem over $\Sigma_w$. Sion's minimax theorem~\cite{ref:sion1958minimax} finally implies that
\begin{align*}
\inf\limits_{\psi\in\Ac} \sup\limits_{\mathbb{Q} \in {\mbb G}(\wh \PP)} \mathcal{R}(\psi, \mathbb{Q})&= \sup\limits_{\substack{\Sigma_x \in \mathcal S_x \\ \Sigma_x \succeq \lambda_{\min}(\covsa_x) I_n}} \sup\limits_{\substack{ \Sigma_w \in \mathcal S_w \\ \Sigma_w \succeq \lambda_{\min}(\covsa_w) I_m}} \inf\limits_{\substack{A, K \\ K = I_n - A H}} ~\inner{K^\top K}{\Sigma_x} + \inner{A^\top A}{\Sigma_w} \notag \\
&= \sup\limits_{\mathbb{Q} \in \mathbb B_{\mathcal N}(\wh \PP)} \inf\limits_{\psi\in{\mc F}} \mathcal{R}(\psi, \mathbb{Q}), \notag
\end{align*}
where the last equality has already been established in the proof of Theorem~\ref{thm:least-favorable-prior}; see Equation~\eqref{eq:least3}. Thus, the claim follows.
\end{proof}
Theorem~\ref{thm:sandwich} suggests that solving any of the restricted estimation problems is tantamount to solving both original primal and dual estimation problems. This intuition is formalized in the following corollary.
\begin{corollary}[Nash equilibrium] \label{corol:nash}
If $\wh \PP$ is a normal distribution of the form~\eqref{eq:nominal:normal} with $\covsa_w \succ 0$, then the affine estimator~$\psi^\star$ that solves~\eqref{eq:dro:approx} is optimal in the primal Wasserstein MMSE estimation problem~\eqref{eq:dro}, while the normal distribution~$\mathbb{Q}^\star$ that solves~\eqref{eq:dual-dro:conservative} is optimal in the dual Wasserstein MMSE estimation problem~\eqref{eq:dual-dro}. Moreover, $\psi^\star$ and $\mathbb{Q}^\star$ form a Nash equilibrium for the game between the statistician and nature, that is,
\begin{equation}
\label{eq:nash}
\mathcal{R}(\psi^\star, \mathbb{Q}) \le \mathcal{R}(\psi^\star, \mathbb{Q}^\star) \le \mathcal{R}(\psi, {\mathbb{Q}}^\star) \quad \forall \psi \in {\mc F},~ \mathbb{Q} \in \mathbb B(\wh \PP) \,.
\end{equation}
\end{corollary}
\begin{proof}[Proof of Corollary~\ref{corol:nash}]
As $\covsa_w \succ 0$, the Gelbrich MMSE estimation problem~\eqref{eq:dro:approx} is solved by the affine estimator $\psi^\star$ defined in Theorem~\ref{thm:conservative}, and the dual Wasserstein MMSE estimation problem~\eqref{eq:dual-dro:conservative} over normal priors is solved by the normal distribution $\mathbb{Q}^\star$ defined in Theorem~\ref{thm:least-favorable-prior}. Thus, we have
\begin{align*}
\mathcal{R} (\psi^\star, \mathbb{Q}^\star) \ge \inf\limits_{\psi \in {\mc F}} \mathcal{R} (\psi, \mathbb{Q}^\star) = \max\limits_{\mathbb{Q} \in \mathbb B_{\mathcal N}(\wh \PP)} \inf\limits_{\psi\in{\mc F}} \mathcal{R}(\psi, \mathbb{Q}) = \min\limits_{\psi\in\Ac} \sup\limits_{\mathbb{Q} \in {\mbb G}(\wh \PP)} \mathcal{R}(\psi, \mathbb{Q}) = \sup\limits_{\mathbb{Q} \in {\mbb G}(\wh \PP)} \mathcal{R}(\psi^\star, \mathbb{Q}) \ge \mathcal{R} (\psi^\star, \mathbb{Q}^\star),
\end{align*}
where the three equalities follow from the definition of $\mathbb{Q}^\star$, Theorem~\ref{thm:sandwich} and the definition of $\psi^\star$, respectively. As the left and the right hand sides of the above expression coincide, we may then conclude that
\[
\mathcal{R}(\psi^\star, \mathbb{Q}) \le \mathcal{R}(\psi^\star, \mathbb{Q}^\star) \le \mathcal{R}(\psi, {\mathbb{Q}}^\star) \quad \forall \psi \in {\mc F},~ \mathbb{Q} \in \mathbb G(\wh \PP).
\]
Moreover, as $\mathbb B(\wh \PP) \subseteq \mathbb G(\wh \PP)$, the above relation implies~\eqref{eq:nash}.
It remains to be shown that $\psi^\star$ and $\mathbb{Q}^\star$ solve the primal and dual Wasserstein MMSE estimation problems~\eqref{eq:dro} and~\eqref{eq:dual-dro}, respectively. As for $\psi^\star$, we have
\[
\sup\limits_{\mathbb{Q} \in \mathbb B(\wh \PP)} \mathcal{R}(\psi^\star, \mathbb{Q}) \le \sup\limits_{\mathbb{Q} \in {\mbb G}(\wh \PP)} \mathcal{R}(\psi^\star, \mathbb{Q}) = \inf\limits_{\psi\in\Ac} \sup\limits_{\mathbb{Q} \in \mathbb G(\wh \PP)} \mathcal{R}(\psi, \mathbb{Q}) = \inf\limits_{\psi\in{\mc F}} \sup\limits_{\mathbb{Q} \in \mathbb B(\wh \PP)} \mathcal{R}(\psi, \mathbb{Q}),
\]
where the inequality holds because $\mathbb B(\wh \PP) \subseteq {\mbb G}(\wh \PP)$. The first equality follows from the definition of $\psi^\star$, while the second equality exploits Theorem~\ref{thm:sandwich}, which implies that all inequalities in~\eqref{eq:strong:duality} are in fact equalities. This reasoning shows that $\psi^\star$ is optimal in~\eqref{eq:dro}. The optimality of $\mathbb{Q}^\star$ in~\eqref{eq:dual-dro} can be proved similarly.
\end{proof}
Corollary~\ref{corol:nash} implies that $\psi^\star$ can be viewed as a Bayesian estimator for the least favorable prior $\mathbb{Q}^\star$ and that $\mathbb{Q}^\star$ represents a worst-case distribution for the optimal estimator $\psi^\star$. Next, we will argue that $\psi^\star$ can not only be constructed from the solution of the convex program~\eqref{eq:VA}, which is equivalent to the Gelbrich MMSE estimation problem~\eqref{eq:dro:approx}, but also from the solution of the convex program~\eqref{eq:program:dual}, which is equivalent to the dual MMSE estimation problem~\eqref{eq:dual-dro:conservative} over normal~priors. This alternative construction is useful because problem~\eqref{eq:program:dual} is amenable to highly efficient first-order methods to be derived in Section~\ref{sect:algorithm}.
\begin{corollary}[Dual construction of the optimal estimator]
\label{corol:alternative}
If $\wh \PP$ is a normal distribution of the form~\eqref{eq:nominal:normal} with $\covsa_w\succ 0$, and $(\Sigma_x^\star, \Sigma_w^\star)$ is a maximizer of~\eqref{eq:program:dual}, then the affine estimator $\psi^\star(y) = A^\star y + b^\star$ with
\begin{equation}
\label{eq:alternative:def}
A^\star = \Sigma_x^\star H^\top \left( H {\Sigma_x^\star} H^\top + \Sigma_w^\star\right)^{-1} \quad \text{and} \quad
b^\star= \wh{\m}_x - A^\star (H \wh{\m}_x + \wh{\m}_w)
\end{equation}
solves the Wasserstein MMSE estimation problem~\eqref{eq:dro}.
\end{corollary}
\begin{proof}[Proof of Corollary~\ref{corol:alternative}]
Define $\psi^\star$ as the affine estimator that solves~\eqref{eq:dro:approx} and $\mathbb{Q}^\star$ as the normal distribution that solves~\eqref{eq:dual-dro:conservative}. By Corollary~\ref{corol:nash}, the second inequality in~\eqref{eq:nash} holds for all admissible estimators $\psi\in{\mc F}$, which implies that $\psi^\star \in \arg\min_{\psi\in{\mc F}} \mathcal{R}(\psi,\mathbb{Q}^\star)$, that is, $\psi^\star$ solves the Bayesian MMSE estimation problem corresponding to $\mathbb{Q}^\star$. As any Bayesian MMSE estimator satisfies $\psi^\star(y)=\mathds{E}_{\mathbb{Q}^\star_{x|y}} [x]$ for $\mathbb{Q}^\star$-almost all $y$ and as $\Sigma^\star_w\succ 0$, we may use the known formulas for conditional normal distributions to conclude that the unique affine Bayesian MMSE estimator for $\mathbb{Q}^\star$ is of the form $\psi^\star(y) = A^\star y + b^\star$ with parameters defined as in~\eqref{eq:alternative:def}.
\end{proof}
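For completeness, we sketch how the optimal estimator of Corollary~\ref{corol:alternative} is assembled once a maximizer of~\eqref{eq:program:dual} is available, continuing the illustrative CVXPY example from Section~\ref{sect:dual}; the nominal means \texttt{mu\_x\_hat}, \texttt{mu\_w\_hat} and the observation \texttt{y\_obs} below are hypothetical placeholders.
\begin{verbatim}
# extract a maximizer (Sigma_x^*, Sigma_w^*) from the solved SDP
Sx_star, Sw_star = Sx.value, Sw.value
mu_x_hat, mu_w_hat = np.zeros(n), np.zeros(m)   # assumed nominal means

A_star = Sx_star @ H.T @ np.linalg.inv(H @ Sx_star @ H.T + Sw_star)
b_star = mu_x_hat - A_star @ (H @ mu_x_hat + mu_w_hat)

y_obs = rng.standard_normal(m)                  # placeholder observation
x_hat = A_star @ y_obs + b_star                 # robust MMSE estimate
\end{verbatim}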
\section{Non-normal Nominal Distributions}
\label{sect:elliptical}
We will first show that the results of Sections~\ref{sect:approx}--\ref{sect:nash} remain valid if $\wh \PP$ is an arbitrary elliptical (but maybe non-normal) distribution. To this end, we first review some basic results on elliptical distributions.
\begin{definition}[Elliptical distributions]
\label{definition:elliptical}
The distribution $\mathbb{P}$ of $\xi\in\mathbb{R}^d$ is called elliptical if the
characteristic function $\Phi_\mathbb{P}(t)= \mathds{E}_\mathbb{P}[ \exp(i t^\top \xi)]$
of $\mathbb{P}$ is given by $\Phi_\mathbb{P}(t)=\exp(i t^\top \mu) \phi(t^\top S
t)$ for some location parameter ${\mu \in \mathbb{R}^d}$, dispersion matrix
$S \in \mathbb{S}_{+}^{d}$ and characteristic generator $\phi:\mathbb{R}_+\! \rightarrow
\mathbb{R}$. In this case we write~${\mathbb{P}=\mathcal{E}^d_\phi(\mu, S)}$. The class of
all $d$-dimensional elliptical distributions with characteristic
generator $\phi$ is denoted by~$\mathcal{E}^{d}_\phi$.
\end{definition}
The class of elliptical distributions was introduced
in~\cite{ref:Kelker-1970} with the aim to generalize the family of
normal distributions, which are obtained by setting the characteristic
generator to~$\phi(u)=e^{-u/2}$. We emphasize that, unlike the
moment-generating function $M_\mathbb{P}(t)= \mathds{E}_\mathbb{P}[ \exp(t^\top \xi)]$,
the characteristic function $\Phi_\mathbb{P}(t)$ is always finite for all
$t\in\mathbb{R}^d$ even if some moments of $\mathbb{P}$ do not exist. Thus,
Definition~\ref{definition:elliptical} is general enough to cover also
heavy-tailed distributions with non-zero tail dependence coefficients
\cite{ref:hult2002advances}. Examples of elliptical distributions include the Laplace, the logistic and the Student~$t$-distribution. Useful theoretical properties
of elliptical distributions are discussed in~\cite{ref:cambanis1981theory,
ref:fang1990symmetric}. We also highlight that elliptical distributions are
central to a wide spectrum of diverse applications ranging from genomics
\cite{ref:posekany2011biological} and medical imaging \cite{ref:ruttimann1998statistical}
to finance \cite[\S~6.2.1]{ref:jondeau2007financial}, to name a few.
If the dispersion matrix $S\in\mathbb{S}_{+}^d$ has rank $r$, then there
exists $\Lambda\in\mathbb{R}^{d\times r}$ with $S=\Lambda\Lambda^\top$, and
there exists a generalized inverse $\Lambda^{-1}\in\mathbb{R}^{r\times d}$ with
$\Lambda^{-1} \Lambda=I_r=\Lambda^\top(\Lambda^{-1})^\top$. One easily
verifies that if $\xi\in\mathbb{R}^d$ follows an elliptical distribution
$\mathbb{P}=\mathcal{E}^d_\phi(\mu, S)$, then $\tilde \xi=
\Lambda^{-1}(\xi-\mu)\in\mathbb{R}^r$ follows the spherically symmetric
distribution $\tilde\mathbb{P}=\mathcal{E}^r_\phi(0, I_r)$ with characteristic function
$\Phi_{\tilde\mathbb{P}}(t)=\phi(\|t\|^2)$. Thus, the choice of the
characteristic generator $\phi$ is constrained by the implicit condition
that $\phi(\|t\|^2)$ must be an admissible characteristic function. For
instance, the normalization of probability distributions necessitates
that $\phi(0) = 1$, while the dominated convergence theorem implies that
$\phi$ must be continuous. As any distribution is uniquely
determined by its characteristic function, and as $\phi(\|t\|^2)$
depends only on the norm of~$t$, the spherical distribution $\tilde \mathbb{P}$
is indeed invariant under rotations. This implies that $\mathds{E}_{\tilde
\mathbb{P}}[\tilde \xi]=0$ and, via the linearity of the expectation, that
$\mathds{E}_{\mathbb{P}}[\xi]=\mu$ whenever $\tilde\xi$ and $\xi$ are
integrable.
elliptical distribution coincides with its mean vector whenever the mean
exists. By the definition of the characteristic function, the covariance
matrix of $\tilde \mathbb{P}$, if it exists, can be expressed as
\[
\tilde \Sigma=-\left.\nabla_t^2\Phi_{\tilde \mathbb{P}}(t)\right|_{t=0} =
-\left.\nabla_t^2\phi(\|t\|^2)\right|_{t=0}= -2 \phi'(0) I_r,
\]
where $\phi'(0)$ denotes the right derivative of $\phi(u)$ at $u=0$.
Hence, $\tilde \Sigma$ exists if and only if $\phi'(0)$ exists and is
finite. Similarly, the covariance matrix of $\mathbb{P}$ is given by $\Sigma=-2
\phi'(0)S$, if it exists \cite[Theorem~4]{ref:cambanis1981theory}. Below
we will focus on elliptical distributions with finite first- and
second-order moments ({\em i.e.}, we will only consider characteristic
generators with $|\phi'(0)|<\infty$), and we will assume that
$\phi'(0)=-\frac{1}{2}$, which ensures that the dispersion matrix $S$
equals the covariance matrix $\Sigma$. The latter assumption does not
restrict generality. In fact, changing the characteristic generator to
$\phi(\frac{-u}{2\phi'(0)})$ and the dispersion matrix to $-2
\phi'(0)S$ has no impact on the elliptical distribution $\mathbb{P}$ but
matches the dispersion matrix $S$ with the covariance matrix $\Sigma$.
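For instance, the normal distribution corresponds to the characteristic generator $\phi(u)=e^{-u/2}$, which satisfies $\phi'(0)=-\frac{1}{2}$, so that the dispersion matrix coincides with the covariance matrix without any rescaling.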
The elliptical distributions inherit many desirable properties from the normal distributions but are substantially more expressive as they include also heavy- and light-tailed distributions. For example, any class of elliptical distributions with a common characteristic generator is closed under affine transformations and affine conditional expectations; see {\em e.g.}, \cite[Theorem~1 and Corollary~5]{ref:cambanis1981theory}. Moreover, the Wasserstein distance between two elliptical distributions with the same characteristic generator equals the Gelbrich distance between their mean vectors and covariance matrices \cite[Theorem~2.4]{ref:gelbrich1990formula}. Thus, the Propositions~\ref{prop:normal-affine}, \ref{prop:normal-cond-exp} and~\ref{prop:normal:distance} extend verbatim from the class of normal distributions to {\em any} class of elliptical distributions that share the same characteristic generator. For the sake of brevity, we do not restate these results for elliptical distributions.
The above discussion suggests that the results of Sections~\ref{sect:approx}--\ref{sect:nash} carry over almost verbatim to MMSE estimation problems involving elliptical nominal distributions. In the following we will therefore assume that
\begin{align}
\label{eq:nominal:elliptical}
\wh \PP=\mathcal{E}^{n+m}_\phi(\wh{\m}, \covsa) \qquad \text{with}
\qquad \widehat \mu = \begin{bmatrix} \widehat \mu_x \\ \widehat \mu_w
\end{bmatrix} \qquad \text{and} \qquad \covsa = \begin{bmatrix}
\covsa_{x} & 0 \\ 0 & \covsa_{w} \end{bmatrix},
\end{align}
where $\phi$ denotes a prescribed characteristic generator. As the class of all elliptical distributions with characteristic generator $\phi$ is closed under affine transformations, the marginal distributions $\wh \PP_x$ and $\wh \PP_w$ of $x$ and $w$ under $\wh \PP$ are also elliptical distributions with the same characteristic generator $\phi$.
Note that while the signal~$x$ and the noise~$w$ are uncorrelated under $\wh \PP$ irrespective of~$\phi$, they fail to be independent unless~$\wh \PP$ is a normal distribution. When working with generic elliptical nominal distributions, we must therefore abandon any independence assumptions.
Otherwise, the ambiguity set would be empty for small radii $\rho_x$ and $\rho_w$. This insight prompts us to redefine the Wasserstein ambiguity set as
\begin{equation}
\label{eq:Ambi-elliptical}
\mathbb B(\wh \PP) = \left\{ \mathbb{Q}\in \mathcal{M}(\mathbb{R}^{n+m}):
\mathds{E}_\mathbb{Q}[xw^\top]= \mathds{E}_\mathbb{Q}[x]\cdot\mathds{E}_\mathbb{Q}[w]^\top,~\mathds{W}(\mathbb{Q}_x, \wh \PP_x) \leq \rho_x,~ \mathds{W}(\mathbb{Q}_w, \wh \PP_w) \leq \rho_w \right\},
\end{equation}
which relaxes the independence condition in~\eqref{eq:Ambi} and merely requires $x$ and $w$ to be uncorrelated. When using the new ambiguity set~\eqref{eq:Ambi-elliptical} to model the distributional uncertainty, we can again compute a Nash equilibrium between the statistician and nature by solving a tractable convex optimization problem.
\begin{theorem}[Elliptical distributions] \label{thm:nash-elliptical}
Assume that $\wh \PP$ is an elliptical distribution of the form~\eqref{eq:nominal:elliptical} with characteristic generator $\phi$ and noise covariance matrix $\covsa_w \succ 0$, and define the ambiguity set $\mathbb B(\wh \PP)$ as in~\eqref{eq:Ambi-elliptical}. If $(\Sigma_x^\star, \Sigma_w^\star)$ solves the finite convex program~\eqref{eq:program:dual}, then the affine estimator $\psi^\star(y) = A^\star y + b^\star$ with
\[
A^\star = \Sigma_x^\star H^\top \left( H {\Sigma_x^\star} H^\top + \Sigma_w^\star\right)^{-1} \quad \text{and} \quad
b^\star= \wh{\m}_x - A^\star (H \wh{\m}_x + \wh{\m}_w)
\]
solves the Wasserstein MMSE estimation problem~\eqref{eq:dro}, while the elliptical distribution
\[
\mathbb{Q}^\star=\mathcal E_\phi^{n+m}(\wh{\m},\Sigma^\star)\quad \text{with} \quad \Sigma^\star= \begin{bmatrix}
\Sigma_{x}^\star & 0 \\ 0 & \Sigma_{w}^\star \end{bmatrix}
\]
solves the dual Wasserstein MMSE estimation problem~\eqref{eq:dual-dro}. Moreover, $\psi^\star$ and $\mathbb{Q}^\star$ form a Nash equilibrium for the game between the statistician and nature, that is,
\[
\mathcal{R}(\psi^\star, \mathbb{Q}) \le \mathcal{R}(\psi^\star, \mathbb{Q}^\star) \le \mathcal{R}(\psi, {\mathbb{Q}}^\star) \quad \forall \psi \in {\mc F},~ \mathbb{Q} \in \mathbb B(\wh \PP) \,.
\]
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:nash-elliptical}]
The proof replicates the arguments used to establish Theorems~\ref{thm:conservative}, \ref{thm:least-favorable-prior} and \ref{thm:sandwich} as well as Corollary~\ref{corol:nash} with obvious minor modifications. Details are omitted for brevity.
\end{proof}
Theorem~\ref{thm:nash-elliptical} asserts that the optimal estimator depends only on the first and second moments of the nominal elliptical distribution~$\wh \PP$ but {\em not} on its characteristic generator. Whether~$\wh \PP$ displays heavier or lighter tails than a normal distribution has therefore no impact on the prediction of the signal. Note, however, that the characteristic generator of~$\wh \PP$ determines the shape of the least favorable prior.
If the nominal distribution fails to be elliptical, the minimum of the Gelbrich MMSE estimation problem~\eqref{eq:dro:approx} may strictly exceed the maximum of the dual Wasserstein MMSE estimation problem~\eqref{eq:dual-dro:conservative} over normal priors. Note that in this case the ambiguity set $\mathbb B_{\mathcal N}(\wh \PP)$ may even be empty. Moreover, while typically suboptimal for the original Wasserstein MMSE estimation problem~\eqref{eq:dro}, the usual affine estimator constructed from a solution of the nonlinear SDP~\eqref{eq:program:dual} still solves the Gelbrich MMSE estimation problem~\eqref{eq:dro:approx}.
\begin{proposition}[Non-elliptical nominal distributions] \label{prop:equivalence}
Suppose that $\wh \PP = \wh \PP_{x} \times \wh \PP_{w}$, where $\wh \PP_x$ and $\wh \PP_w$ are arbitrary signal and noise distributions with mean vectors $\wh{\m}_x$ and $\wh{\m}_w$ and covariance matrices $\covsa_x \succeq 0$ and $\covsa_w \succ 0$, respectively. Then, the nonlinear SDP~\eqref{eq:program:dual} is solvable, and for any optimal solution~$(\Sigma_x^\star, \Sigma_w^\star)$ of~\eqref{eq:program:dual} the affine estimator $\psi^\star(y) = A^\star y + b^\star$ with
\[
A^\star = \Sigma_x^\star H^\top \left( H {\Sigma_x^\star} H^\top + \Sigma_w^\star\right)^{-1} \quad \text{and} \quad
b^\star= \wh{\m}_x - A^\star (H \wh{\m}_x + \wh{\m}_w)
\]
solves the Gelbrich MMSE estimation problem~\eqref{eq:dro:approx}.
\end{proposition}
\begin{proof}[Proof of Proposition~\ref{prop:equivalence}]
Denote by~$\wh \PP' = \wh \PP'_{x} \times \wh \PP'_{w}$ the normal distribution with the same first and second moments as~$\wh \PP$. As $\covsa_w\succ 0$, the nonlinear SDP~\eqref{eq:program:dual} is then solvable by virtue of Theorem~\ref{thm:least-favorable-prior}. Theorem~\ref{thm:sandwich} further implies that the first inequality in~\eqref{eq:strong:duality} with $\wh \PP'$ instead of $\wh \PP$ collapses to the equality
\begin{equation}
\label{eq:gebrich-meets-wasserstein}
\inf\limits_{\psi\in\Ac} \sup\limits_{\mathbb{Q} \in \mathbb G(\wh \PP')} \mathcal{R}(\psi, \mathbb{Q})= \inf\limits_{\psi\in{\mc F}} \sup\limits_{\mathbb{Q} \in \mathbb B(\wh \PP')} \mathcal{R}(\psi, \mathbb{Q}).
\end{equation}
In addition, Corollary~\ref{corol:alternative} ensures that the affine estimator $\psi^\star$ defined in the proposition statement solves the modified Wasserstein MMSE estimation problem with normal nominal distribution $\wh \PP'$ on the right hand side of~\eqref{eq:gebrich-meets-wasserstein}. Because $\psi^\star$ is affine, it is also feasible in the modified Gelbrich MMSE estimation problem on the left hand side. In addition, the average risk of any affine estimator depends only on the mean vectors and covariance matrices of $x$ and $w$. If we denote by $T$ the mean-covariance projection that maps any distribution $\mathbb{Q} = \mathbb{Q}_x \times \mathbb{Q}_w$ of $(x,w)$ to the mean vectors and covariance matrices of $x$ and $w$ under $\mathbb{Q}$, then the images of the ambiguity sets $\mathbb G(\wh \PP')$ and $\mathbb B(\wh \PP')$ under $T$ coincide by Proposition~\ref{prop:normal:distance}.
These observations imply that the affine estimator $\psi^\star$ also solves the estimation problem on the left hand side of~\eqref{eq:gebrich-meets-wasserstein}.
As $\wh \PP$ and $\wh \PP'$ share the same first and second moments, the Gelbrich ball ${\mbb G}(\wh \PP)$ around the generic distribution~$\wh \PP$ coincides with the Gelbrich ambiguity set ${\mbb G}(\wh \PP')$ around the normal distribution~$\wh \PP'$. Thus, we find
\begin{align*}
\sup\limits_{\mathbb{Q} \in {\mbb G}(\wh \PP)} \mathcal{R}(\psi^\star, \mathbb{Q}) = \sup\limits_{\mathbb{Q} \in {\mbb G}(\wh \PP')} \mathcal{R}(\psi^\star, \mathbb{Q}) = \inf\limits_{\psi\in\Ac} \sup\limits_{\mathbb{Q} \in \mathbb G(\wh \PP')} \mathcal{R}(\psi, \mathbb{Q}) = \inf\limits_{\psi\in\Ac} \sup\limits_{\mathbb{Q} \in \mathbb G(\wh \PP)} \mathcal{R}(\psi, \mathbb{Q}),
\end{align*}
where the second equality holds because $\psi^\star$ solves the Gelbrich MMSE estimation problem with the normal nominal distribution $\wh \PP'$ on the left hand side of~\eqref{eq:gebrich-meets-wasserstein}. Hence, $\psi^\star$ solves the Gelbrich MMSE estimation problem~\eqref{eq:dro:approx} with the generic nominal distribution $\wh \PP$.
\end{proof}
\section{Numerical Solution of Wasserstein MMSE Estimation Problems}
\label{sect:algorithm}
By Corollaries~\ref{cor:primal:refor} and~\ref{corol:dual:refor}, the primal and dual Wasserstein MMSE estimation problems~\eqref{eq:dro} and~\eqref{eq:dual-dro} can be addressed with off-the-shelf SDP solvers. Unfortunately, however, general-purpose interior-point methods quickly run out of memory when the signal dimension~$n$ and the noise dimension~$m$ grow. It is therefore expedient to look for customized first-order algorithms that can handle larger problem instances.
In this section we develop a Frank-Wolfe method for the nonlinear SDP~\eqref{eq:program:dual}, which is equivalent to the dual Wasserstein MMSE estimation problem~\eqref{eq:dual-dro}. This approach is meaningful because any solution to~\eqref{eq:program:dual} allows us to construct both an optimal estimator as well as a least favorable prior that form a Nash equilibrium in the sense of Corollary~\ref{corol:nash}; see also Corollary~\ref{corol:alternative}. Addressing the nonlinear SDP~\eqref{eq:program:dual} directly with a Frank-Wolfe method has great promise because the subproblems that identify the local search directions can be shown to admit quasi-closed form solutions and can therefore be solved very quickly.
In Section~\ref{sec:fully-adaptive-FW} we first review three variants of the Frank-Wolfe algorithm corresponding to a static, an adaptive and a more flexible {\em fully} adaptive stepsize rule, and we prove that the fully adaptive rule offers a linear convergence guarantee under standard regularity conditions. In Section~\ref{sect:FW-MMSE} we then show that the nonlinear SDP~\eqref{eq:program:dual} is amenable to the fully adaptive Frank-Wolfe algorithm and can thus be solved efficiently.
\subsection{Frank-Wolfe Algorithm for Generic Convex Optimization Problems}
\label{sec:fully-adaptive-FW}
Consider a generic convex minimization problem of the form
\begin{equation}
\label{eq:generic-convex}
f^\star = \min_{s \in \mathcal S}~f(s)
\end{equation}
with a convex compact feasible set $\mathcal S \subseteq \mathbb{R}^d$ and a convex differentiable objective function $f:\mathcal S\rightarrow \mathbb R$. We assume that for each precision $\delta \in [0, 1]$ we have access to an inexact oracle $F: \mathcal S \rightarrow \mathcal S$ that maps any~$s\in\mathcal S$ to a $\delta$-approximate solution of an auxiliary problem linearized around~$s$. More precisely, we assume that
\begin{align}\label{oracle}
\left( F(s) - s \right)^\top \nabla f(s) \leq \delta \min\limits_{z \in \mathcal S}~ \left( z - s \right)^\top \nabla f(s).
\end{align}
By the standard optimality condition for convex optimization problems, the minimum on the right hand side of~\eqref{oracle} vanishes if and only if~$s$ solves the original problem~\eqref{eq:generic-convex}. Otherwise, the minimum is strictly negative. If $\delta = 1$, then the oracle returns an exact minimizer of the linearized problem. If $\delta=0$, on the other hand, then the oracle returns any solution that is weakly preferred to $s$ in the linearized problem. Given an oracle satisfying~\eqref{oracle}, one can design a Frank-Wolfe algorithm whose iterates obey the recursion
\begin{align}\label{frank-wolfe}
s_{t+1} = s_t + \eta_t (F(s_t) - s_t) \quad \forall t\in\mathbb N\cup\{0\},
\end{align}
where $s_0\in\mathcal S$ is an arbitrary initial feasible solution, $\delta$ is a prescribed precision, and $\eta_t \in[0,1]$ is a stepsize that may depend on the current iterate~$s_t$.
The Frank-Wolfe algorithm was originally developed for quadratic programs~\cite{ref:frank1956algorithm} and later extended to general convex programs with differentiable objective functions and compact convex feasible sets \cite{ref:levitin1966constrained, ref:demyanov1970approximate, ref:dunn1978conditional, ref:dunn1979rates, ref:dunn1980convergence}. Convergence guarantees for the Frank-Wolfe algorithm typically rely on the assumption that the gradient of $f$ is Lipschitz continuous \cite{ref:levitin1966constrained, ref:dunn1979rates, ref:dunn1980convergence, ref:garber2015faster, ref:freund2016new}, that $f$ has a bounded curvature constant \cite{ref:clarkson2010coresets, ref:jaggi2013revisiting}, or that the gradient of $f$ is H\"older continuous \cite{ref:nesterov2018complexity}.
Throughout this section we will assume that the decision variable can be represented as $s=(s^{[1]},\ldots, s^{[K]})$, where $s^{[k]}\in\mathbb R^{d_k}$ and $\sum_{k=1}^{K} d_k=d$. Moreover, we will assume that the feasible set~$\mathcal S = \times_{k=1}^K \mathcal S^{[k]}$ constitutes a $K$-fold Cartesian product, where the marginal feasible set~$\mathcal S^{[k]}\subseteq \mathbb R^{d_k}$ is convex and compact for each $k=1,\ldots, K$. This assumption is unrestrictive because we are free to set $K=1$ and $\mathcal S^{[1]}=\mathcal S$. For ease of notation, we use from now on $\nabla_{[k]}$ to denote the partial gradient with respect to the subvector~$s^{[k]}\in\mathcal S^{[k]}$,~$k=1,\ldots,K$.
The subsequent convergence analysis will rely on the following regularity conditions.
\begin{assumption} [Regularity conditions]
\label{a:FW} ~
\begin{enumerate}[label = $(\roman*)$]
\item \label{a:FW:smooth}
The objective function is $\beta$-smooth for some $\beta>0$, i.e.,
\[
\|\nabla f(s) - \nabla f(\bar s)\| \leq \beta \|s - \bar s\|\quad \forall s,\bar s\in\mathcal S.
\]
\item \label{a:FW:set}
The marginal feasible sets are $\alpha$-strongly convex with respect to $f$ for some $\alpha>0$, i.e.,
\[
\theta s^{[k]} + (1-\theta) \bar s^{[k]} - \theta(1-\theta)\frac{\alpha}{2}\left\|s^{[k]} - \bar s^{[k]} \right\|^2 \frac{\nabla_{[k]} f(s)}{\|\nabla_{[k]} f(s)\|} \in \mathcal S^{[k]} \quad \forall s, \bar s \in \mathcal S,~\theta \in [0,1],~k=1,\ldots, K.
\]
\item \label{a:FW:lower}
The objective function is $\varepsilon$-steep for some $\varepsilon > 0$, i.e.,
\[
\| \nabla_{[k]} f(s) \| \ge \varepsilon\quad \forall s \in \mathcal S,~k=1,\ldots, K.
\]
\end{enumerate}
\end{assumption}
Assumption~\ref{a:FW}\,\ref{a:FW:set} relaxes the standard strong convexity condition prevailing in the literature, which is obtained by setting $K=1$ and requiring that the condition stated here remains valid when the normalized gradient~${\nabla_{[1]} f(s) / \|\nabla_{[1]} f(s)\|}$ is replaced with any other vector in the Euclidean unit ball, see, {\em e.g.},~\cite[Equation~(25)]{ref:journee2010generalized}. We emphasize that our weaker condition remains sufficient for the standard convergence proofs of the Frank-Wolfe algorithm; at the same time, the relaxation is necessary for our purposes because the feasible set of problem~\eqref{eq:program:dual} fails to be strongly convex in the traditional sense. Similarly, Assumption~\ref{a:FW}\,\ref{a:FW:lower} generalizes the usual $\varepsilon$-steepness condition from the literature, which is recovered by setting~$K=1$, see, {\em e.g.}, \cite[Assumption~1]{ref:journee2010generalized}. Under this assumption the gradient never vanishes on $\mathcal S$, and the minimum of~\eqref{eq:generic-convex} is attained on the boundary of~$\mathcal S$.
In the following we will distinguish three variants of the Frank-Wolfe algorithm with different stepsize rules. The {\em vanilla Frank-Wolfe} algorithm employs the harmonically decaying static stepsize
\begin{equation*}
\eta_t = \frac{2}{2 + t},
\end{equation*}
which results in a sublinear $\mathcal O(1/t)$ convergence whenever Assumption~\ref{a:FW}\,\ref{a:FW:smooth} holds \cite{ref:frank1956algorithm, ref:dunn1978conditional}. The {\em adaptive Frank-Wolfe} algorithm uses the stepsize
\begin{align}\label{adp_step}
\eta_t = \min \left\{1, \frac{(s_t - F (s_t))^\top \nabla f(s_t)}{\beta \| s_t - F(s_t) \|^2 } \right\},
\end{align}
which adapts to the iterate $s_t$. If all of the Assumptions~\ref{a:FW}\,\ref{a:FW:smooth}--\ref{a:FW:lower} hold, then the adaptive Frank-Wolfe algorithm enjoys a linear~$\mathcal O (c^t)$ convergence guarantee, where $c\in(0,1)$ is an explicit function of the oracle precision~$\delta$, the smoothness parameter~$\beta$, the strong convexity parameter~$\alpha$ and the steepness parameter~$\varepsilon$~\cite{ref:levitin1966constrained,ref:garber2015faster}. Note that the stepsize~\eqref{adp_step} is constructed as the unique solution of the univariate quadratic program
\begin{equation*}
\min_{\eta \in [0, 1]} ~ f(s_{t}) - \eta \big(s_{t} - F(s_{t})\big)^\top \nabla f(s_{t}) + \frac{1}{2}{\beta \eta^2} \left\| s_{t} - F(s_{t}) \right\|^2,
\end{equation*}
which minimizes a quadratic majorant of the objective function $f$ along the line segment from $s_t$ to $F(s_t)$.
The adaptive stepsize rule~\eqref{adp_step} has undergone further scrutiny in \cite{ref:pedregosa2018stepsize}, where it was discovered that one may improve the algorithm's convergence behavior by replacing the global smoothness parameter~$\beta$ in~\eqref{adp_step} with an adaptive smoothness parameter~$\beta_t$ that captures the smoothness of $f$ along the line segment from~$s_t$ to~$F(s_t)$. This extra flexibility is useful because~$\beta_t$ can be chosen smaller than the unnecessarily conservative global smoothness parameter~$\beta$ and because $\beta_t$ is easier to estimate than~$\beta$, which may not even be accessible.
Following~\cite{ref:pedregosa2018stepsize}, we will henceforth only require that $\beta_t>0$ satisfies the inequality
\begin{align} \label{eq:stepsize_full}
f\Big( s_t - \eta_t(\beta_t) \big(s_t - F(s_t)\big) \Big) \le f(s_t) - \eta_t(\beta_t) \big(s_t - F(s_t)\big)^\top \nabla f(s_t) + \frac{1}{2}{\beta_t \eta_t(\beta_t)^2} \big\|s_t - F(s_t)\big\|^2,
\end{align}
where $\eta_t(\beta_t)$ is defined as the adaptive stepsize~\eqref{adp_step} with $\beta$ replaced by $\beta_t$. As it adapts both to $s_t$ and $\beta_t$, we will from now on refer to $\eta_t=\eta_t(\beta_t)$ as the {\em fully} adaptive stepsize. The above discussion implies that~\eqref{eq:stepsize_full} is always satisfiable if Assumption~\ref{a:FW}\,\ref{a:FW:smooth} holds, in which case one may simply set~$\beta_t$ to the global smoothness parameter~$\beta$. In practice, however, the inequality~\eqref{eq:stepsize_full} is often satisfiable for much smaller values~$\beta_t \ll \beta$ that may not even be related to the smoothness properties of the objective function. A close upper bound on the smallest~$\beta_t>0$ that satisfies~\eqref{eq:stepsize_full} can be found efficiently via backtracking line search. Specifically, the {\em fully adaptive Frank-Wolfe} algorithm sets $\beta_t$ to the smallest element of the discrete search space $\frac{\beta_{t-1}}{\zeta}\cdot\{1, \tau, \tau^2, \tau^3,\ldots\}$ that satisfies~\eqref{eq:stepsize_full}, where $\tau>1$ and $\zeta>1$ are prescribed line search parameters. A detailed description of the fully adaptive Frank-Wolfe algorithm in pseudocode is provided in Algorithm~\ref{algorithm:FAFW}.
It has been shown in~\cite{ref:pedregosa2018stepsize} that Algorithm~\ref{algorithm:FAFW} enjoys the same sublinear~$\mathcal O(1/t)$ convergence guarantee as the vanilla Frank-Wolfe algorithm when Assumption~\ref{a:FW}\,\ref{a:FW:smooth} holds. Below we will leverage techniques from~\cite{ref:levitin1966constrained, ref:garber2015faster} to show that Algorithm~\ref{algorithm:FAFW} offers indeed a linear convergence rate if all of the Assumptions~\ref{a:FW}\,\ref{a:FW:smooth}--\ref{a:FW:lower} hold.
\begin{table}[th]
\begin{minipage}{0.71\columnwidth}
\begin{algorithm}[H]
\caption{Fully adaptive Frank-Wolfe algorithm}
\label{algorithm:FAFW}
\begin{algorithmic}
\REQUIRE initial feasible point $s_0 \in \mathcal S$, initial smoothness parameter $\beta_{-1} > 0$ \\
\hspace{2.2em}
line search parameters $\tau > 1$, $\zeta > 1$, initial iteration counter $t=0$ \\[0.5ex]
\WHILE{stopping criterion is not met} \vspace{0.25em}
\STATE solve the oracle subproblem to find $\tilde s_t = F(s_t)$
\STATE set $d_t \leftarrow \tilde s_{t} - s_t$ and $ g_t \leftarrow -d_t^\top{\nabla f(s_{t})}$ \vspace{0.1em}
\STATE set $\beta_t \leftarrow \beta_{t-1} / \zeta$ and $\eta \leftarrow \min \{1, g_t/(\beta_t \| d_t \|^2) \} $
\WHILE{$ \displaystyle f(s_t + \eta d_t) > f(s_t) - \eta g_t + \frac{\eta^2 \beta_t}{2} \| d_t \|^2 $}
\STATE $\beta_t \leftarrow \tau \beta_t$ and $\eta \leftarrow \min \{1, g_t/(\beta_t \| d_t \|^2) \}$
\ENDWHILE
\STATE set $\eta_t \leftarrow \eta$ and $s_{t+1} \leftarrow s_t + \eta_t d_t$
\STATE set $t \leftarrow t + 1$
\ENDWHILE \vspace{0.5ex}
\ENSURE $s_t$
\end{algorithmic}
\end{algorithm}
\end{minipage}
\end{table}
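To complement the pseudocode, the following Python sketch implements Algorithm~\ref{algorithm:FAFW} for a generic vector-valued decision variable; the callables \texttt{f}, \texttt{grad\_f} and \texttt{oracle} must be supplied by the user, and we use the surrogate duality gap~$g_t$ as a simple (heuristic) stopping criterion.
\begin{verbatim}
import numpy as np

def fully_adaptive_frank_wolfe(f, grad_f, oracle, s0, beta_init=1.0,
                               tau=2.0, zeta=2.0, max_iter=200, tol=1e-8):
    s, beta = np.asarray(s0, dtype=float), beta_init
    for _ in range(max_iter):
        d = oracle(s) - s                  # search direction d_t
        g = -d @ grad_f(s)                 # surrogate duality gap g_t
        if g <= tol:                       # heuristic stopping criterion
            break
        beta /= zeta                       # optimistic smoothness estimate
        eta = min(1.0, g / (beta * (d @ d)))
        # backtracking: inflate beta until the quadratic majorant holds
        while f(s + eta * d) > f(s) - eta * g + 0.5 * beta * eta**2 * (d @ d):
            beta *= tau
            eta = min(1.0, g / (beta * (d @ d)))
        s = s + eta * d
    return s
\end{verbatim}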
\begin{theorem}[Linear convergence of the fully adaptive Frank-Wolfe algorithm]\label{theorem:algorithm}
If Assumption~\ref{a:FW} holds and $\overline{\beta} = \max \{ \tau \beta, \beta_{-1} \}$, then Algorithm~\ref{algorithm:FAFW} enjoys the linear convergence guarantee
\[f(s_t) - f^\star \le \max \left\{ 1 - \frac{\delta}{2} , 1 - \frac{(1-\sqrt{1 - \delta})\alpha \varepsilon}{4 \overline \beta} \right\}^t (f(s_{0}) - f^\star) \quad \forall t\in\mathbb N.\]
\end{theorem}
The proof of Theorem~\ref{theorem:algorithm} relies on the following preparatory lemma.
\begin{lemma}[Bounds on the surrogate duality gap]
\label{lem:FW}
The surrogate duality gap~$g_t = -d_t^\top \nabla f(s_t)$ corresponding to the search direction~$d_t = F(s_t) - s_t $ admits the following lower bounds.
\begin{enumerate}[label = $(\roman*)$,itemsep = 2mm]
\item \label{lem:FW:convex}
If the objective function $f$ is convex, then $g_t \ge \delta(f(s_t) - f^\star)$.
\item \label{lem:FW:set}
If the marginal feasible sets are $\alpha$-strongly convex with respect to $f$ for some $\alpha>0$ in the sense of Assumption~\ref{a:FW}\,\ref{a:FW:set}, then
\[
g_t \geq \min_{k \in \{ 1, \hdots, K \}} \frac{(1-\sqrt{1 - \delta}) \alpha} {2 \delta} \| d_t \|^2 \|\nabla_{[k]} f(s_t) \|.
\]
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:FW}]
By the definition of~$g_t$ we have
\begin{align}\label{eq:lem:gk}
g_t = \big(s_t - F(s_t)\big)^\top \nabla f(s_t) \ge \delta \big(s_t - s\big)^\top \nabla f(s_t) \qquad \forall s \in \mathcal S,
\end{align}
where the inequality follows from the defining property~\eqref{oracle} of the inexact oracle with precision~$\delta$. Setting~$s$ in~\eqref{eq:lem:gk} to a global minimizer~$s^\star$ of~\eqref{eq:generic-convex} then implies via the first-order convexity condition for~$f$ that
$$g_t \ge \delta \big(s_t - s^\star\big)^\top \nabla f(s_t) \ge \delta\big(f(s_t) - f^\star\big) \,.$$
This observation establishes assertion~\ref{lem:FW:convex}. To prove assertion~\ref{lem:FW:set}, we first rewrite the estimate~\eqref{eq:lem:gk} as
\begin{equation}
\label{eq:cartesian}
g_t \ge \delta (s_t - s )^\top \nabla f(s_t) = \delta \sum_{k=1}^K \big(s_t^{[k]} - s^{[k]} \big)^\top \nabla_{[k]} f(s_t) \quad \forall s \in \mathcal S \,.
\end{equation}
In the following, we denote by $F^{[k]}:\mathcal S\to \mathcal S^{[k]}$ the $k$-th suboracle for $k=1,\ldots, K$, which is defined through the identity $F=(F^{[1]},\ldots, F^{[K]})$. Similarly, for any $\theta\in[0,1]$, we define $s(\theta)=(s^{[1]}(\theta), \ldots,s^{[K]}(\theta))$ through
\begin{align*}
s^{[k]}(\theta) = \theta F^{[k]}(s_t) + (1-\theta)s_t^{[k]} - \frac{\alpha}{2}\theta(1-\theta)\left\|F^{[k]}(s_t) - s_t^{[k]}\right\|^2 \frac{\nabla_{[k]} f(s_t)}{\|\nabla_{[k]} f(s_t)\|}.
\end{align*}
By Assumption~\ref{a:FW}\,\ref{a:FW:set}, we have $s^{[k]}(\theta)\in\mathcal S^{[k]}$ for every $k=1,\ldots, K$. Thanks to the rectangularity of the feasible set this implies that $s(\theta)\in\mathcal S$. Setting $s$ in~\eqref{eq:cartesian} to $s(\theta)$, we thus find
\begin{align*}
g_t &\ge \delta \sum_{k=1}^K \Big(\theta \big(s_t^{[k]} - F^{[k]}(s_t)\big) + \frac{\alpha}{2} \theta(1-\theta)\left\|F^{[k]}(s_t) - s_t^{[k]}\right\|^2\frac{\nabla_{[k]} f(s_t)}{\|\nabla_{[k]} f(s_t)\|} \Big)^\top \nabla_{[k]} f(s_t) \\
& = \delta \left( \theta g_t + \frac{\alpha}{2} \theta (1-\theta) \left[ \sum_{k=1}^K \left\|F^{[k]}(s_t) - s_t^{[k]}\right\|^2 \|\nabla_{[k]} f(s_t)\| \right] \right) \\
&\geq \delta \left( \theta g_t + \frac{\alpha}{2} \theta (1-\theta) \|F(s_t) - s_t \|^2 \Big(\min_{k \in \{ 1, \hdots, K \} } \|\nabla_{[k]} f(s_t)\| \Big) \right) \quad \forall \theta \in [0,1] \,,
\end{align*}
where the equality follows from the definition of~$g_t$, and the last inequality exploits the Pythagorean theorem. Reordering the above inequality to bring $g_t$ to the left hand side yields
\begin{align}\label{eq:lem:gk2}
g_t \ge \min_{k \in \{ 1, \hdots, K \} } \frac{\alpha}{2} \|F(s_t) - s_t\|^2 \|\nabla_{[k]} f(s_t)\| \frac{\delta \theta(1-\theta)}{1 - \delta \theta} \qquad \forall \theta \in [0,1].
\end{align}
A tedious but straightforward calculation shows that the lower bound on the right hand side of~\eqref{eq:lem:gk2} is maximized by $\theta^\star=(1 - \sqrt{1 - \delta}) / \delta$. Assertion~\ref{lem:FW:set} then follows by substituting $\theta^\star$ into~\eqref{eq:lem:gk2}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theorem:algorithm}]
By Assumption~\ref{a:FW}\,\ref{a:FW:smooth} the function $f$ is $\beta$-smooth, and thus one can show that
\begin{align}
\label{eq:surrogate-objective}
f(s_t + \eta d_t) \leq f(s_t) - \eta g_t + \frac{\eta^2 \beta}{2} \| d_t \|^2 \quad \forall \eta \in [0,1] \,,
\end{align}
where the surrogate duality gap~$g_t\ge 0$ and the search direction~$d_t\in\mathbb R^d$ are defined as in Lemma~\ref{lem:FW}. We emphasize that~\eqref{eq:surrogate-objective} holds in fact for all $\eta\in\mathbb R$. However, the next iterate $s_{t+1}=s_t + \eta d_t$ may be infeasible unless $\eta \in [0,1]$. The inequality~\eqref{eq:surrogate-objective} implies that any $\beta_t\ge \beta$ satisfies the condition of the inner while loop of Algorithm~\ref{algorithm:FAFW}, and thus the loop must terminate at the latest after $\lceil \log(\zeta\beta/\beta_{-1})/\log(\tau)\rceil$ iterations, outputting a smoothness parameter~$\beta_t$ and a stepsize~$\eta_t$ that satisfy the inequality~\eqref{eq:stepsize_full}.
We henceforth denote by $h_t = f(s_t) - f^\star$ the suboptimality of the $t$-th iterate and note that
\begin{equation}
\label{eq:h_t-recursion}
h_{t+1} = f(s_t+\eta_t d_t)-f(s_t)+ h_t \le -g_t + \frac{1}{2}\beta_t\eta_t^2 \|d_t\|^2+h_t\,,
\end{equation}
where the inequality exploits~\eqref{eq:stepsize_full} and the definitions of $g_t$ and $d_t$. In order to show that $h_t$ decays geometrically, we distinguish the cases $(i)$ $g_t / (\beta_t \|d_t\|^2) \ge 1$ and $(ii)$ $g_t / (\beta_t \|d_t\|^2) < 1$. In case~$(i)$, the stepsize $\eta_t$ defined in~\eqref{eq:stepsize_full} satisfies $\eta_t = \min \{1, g_t / (\beta_t \|d_t\|^2) \} = 1$, and thus we have
\begin{equation}
\label{eq:h_t-recursion_i}
h_{t+1} \leq \left( \frac{\beta_t \| d_t \|^2}{2 g_t} - 1 \right) g_t +h_t \leq -\frac{g_t}{2}+h_t \leq \left( 1 - \frac{\delta}{2} \right) h_t,
\end{equation}
where the first inequality follows from~\eqref{eq:h_t-recursion} with $\eta_t=1$, the second inequality exploits the case~$(i)$ condition $g_t \ge \beta_t \| d_t \|^2$, and the third inequality holds due to Lemma~\ref{lem:FW}\,\ref{lem:FW:convex}.
In case~$(ii)$, the stepsize satisfies~$\eta_t = g_t / (\beta_t \|d_t\|^2) < 1$, and thus we find
\begin{align}
\nonumber
h_{t+1} &\leq - g_t + \frac{g_t^2}{2\beta_t \| d_t \|^2} + h_t \leq - \frac{g_t^2}{2 \beta_t \| d_t \|^2} + h_t \leq \left(1 - \frac{\delta g_t}{2 \beta_t \| d_t \|^2}\right) h_t\\
\label{eq:h_t-recursion_ii}
&\le \left( 1 - \min_{k \in \{ 1, \hdots, K \} } \frac{(1-\sqrt{1 - \delta}) \alpha}{4 \beta_t} \|\nabla_{[k]} f(s_t)\|\right) h_t
\le \left( 1 - \frac{(1-\sqrt{1 - \delta}) \alpha\varepsilon}{4 \overline\beta} \right) h_t\,,
\end{align}
where the first and the second inequalities follow from~\eqref{eq:h_t-recursion} and from multiplying $-g_t$ with $\eta_t<1$, respectively, while the third and the fourth inequalities exploit Lemmas~\ref{lem:FW}\,\ref{lem:FW:convex} and~\ref{lem:FW}\,\ref{lem:FW:set}, respectively. The last inequality in~\eqref{eq:h_t-recursion_ii} holds because of Assumption~\ref{a:FW}\,\ref{a:FW:lower} and because $\beta_t \leq \overline{\beta}$ for all $t\in\mathbb N$; see~\cite[Proposition~2]{ref:pedregosa2018stepsize}. By the estimates~\eqref{eq:h_t-recursion_i} and~\eqref{eq:h_t-recursion_ii}, the suboptimality of the current iterate decays at least by the factor
\[
\max \left\{ 1 - \frac{\delta}{2} , 1 - \frac{(1-\sqrt{1 - \delta})\alpha \varepsilon}{4 \overline \beta} \right\} <1
\]
in each iteration of the algorithm. This observation completes the proof.
\end{proof}
\subsection{Frank-Wolfe Algorithm for Wasserstein MMSE Estimation Problems}
\label{sect:FW-MMSE}
We now use the fully adaptive Frank-Wolfe algorithm of Section~\ref{sec:fully-adaptive-FW} to solve the nonlinear SDP~\eqref{eq:program:dual}, which is equivalent to the dual Wasserstein MMSE estimation problem over normal priors. Recall from Corollary~\ref{corol:alternative} that any solution of~\eqref{eq:program:dual} can be used to construct a least favorable prior and an optimal estimator that form a Nash equilibrium. Unlike the generic convex program~\eqref{eq:generic-convex}, the nonlinear SDP~\eqref{eq:program:dual} is a convex {\em maximization} problem. This prompts us to apply Algorithm~\ref{algorithm:FAFW} to the convex minimization problem obtained from problem~\eqref{eq:program:dual} by flipping the sign of the objective function.
Throughout this section we assume that $\covsa_x\succ 0$, $\covsa_w\succ 0$, $\rho_x>0$ and $\rho_w>0$, which implies via Theorem~\ref{thm:least-favorable-prior} that the nonlinear SDP~\eqref{eq:program:dual} is solvable and can be reformulated more concisely as
\begin{equation}
\label{eq:program:dual:concise}
\max_{\Sigma_x\in\mathcal S^+_x, \Sigma_w\in\mathcal S^+_w}~ f(\Sigma_x, \Sigma_w)\,,
\end{equation}
where the objective function $f:\mathcal S^+_x \times \mathcal S^+_w\to \mathbb R$ is defined through
\[
f(\Sigma_x, \Sigma_w) = \Tr{\Sigma_x - \Sigma_x H^\top \left( H \Sigma_x H^\top + \Sigma_w \right)^{-1} H \Sigma_x}\,,
\]
and where the separate feasible sets for $\Sigma_x$ and $\Sigma_w$ are given by
\[
\mathcal S^+_x = \left\{ \Sigma_x \in \mathbb{S}_{+}^n: \Tr{\Sigma_x + \covsa_x - 2 \big( \covsa_x^\frac{1}{2} \Sigma_x \covsa_x^\frac{1}{2} \big)^\frac{1}{2}} \leq \rho_x^2,~\Sigma_x\succeq \lambda_{\min}(\covsa_x)I_n \right\}
\]
and
\[
\mathcal S^+_w = \left\{ \Sigma_w \in \mathbb{S}_{+}^m: \Tr{\Sigma_w + \covsa_w - 2 \big( \covsa_w^\frac{1}{2} \Sigma_w \covsa_w^\frac{1}{2} \big)^\frac{1}{2}} \leq \rho_w^2,~\Sigma_w\succeq \lambda_{\min}(\covsa_w)I_m \right\}\,,
\]
respectively. One readily verifies that $f$ is concave and differentiable. Moreover, in the terminology of Section~\ref{sec:fully-adaptive-FW}, the feasible set of the nonlinear SDP~\eqref{eq:program:dual:concise} constitutes a Cartesian product of $K=2$ marginal feasible sets~$\mathcal S^+_x$ and~$\mathcal S^+_w$, both of which are convex and compact thanks to Lemma~\ref{lemma:compact:FS}. Note that~$\mathcal S_x^+$ and~$\mathcal S_w^+$ constitute restrictions of the feasible sets~$\mathcal S_x$ and~$\mathcal S_w$, respectively, which appeared in the proofs of Theorems~\ref{thm:least-favorable-prior} and~\ref{thm:sandwich}. The oracle problem that linearizes the objective function of the nonlinear SDP~\eqref{eq:program:dual:concise} around a fixed feasible solution $\Sigma_x\in\mathcal S^+_x$ and $\Sigma_w\in\mathcal S^+_w$ can now be expressed concisely as
\begin{equation}
\label{eq:oracle}
\max_{L_x\in\mathcal S^+_x, L_w\in\mathcal S^+_w} \inner{L_x - \Sigma_x}{D_x} + \inner{L_w - \Sigma_w}{D_w}\,,
\end{equation}
where $D_x = \nabla_{\Sigma_x} f\big(\Sigma_x, \Sigma_w\big)$ and $D_w =\nabla_{\Sigma_w} f\big(\Sigma_x, \Sigma_w\big)$ represent the gradients of $f$ with respect to~$\Sigma_x$ and~$\Sigma_w$. Lemma~\ref{lem:Taylor-f} offers analytical formulas for $D_x$ and $D_w$ and shows that they are both positive semidefinite.
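For concreteness, the objective value and the two partial gradients can be evaluated with a few lines of NumPy. The sketch below uses the analytical gradient formulas of Lemma~\ref{lem:Taylor-f}, which are restated in the proof of Proposition~\ref{prop:regularity}; the explicit inversion of~$G$ is for clarity only and would be replaced by linear solves in a careful implementation.
\begin{verbatim}
import numpy as np

def mmse_objective_and_gradients(Sigma_x, Sigma_w, H):
    # Objective f and partial gradients D_x, D_w (Lemma lem:Taylor-f).
    n = Sigma_x.shape[0]
    G = H @ Sigma_x @ H.T + Sigma_w
    G_inv = np.linalg.inv(G)
    f_val = np.trace(Sigma_x - Sigma_x @ H.T @ G_inv @ H @ Sigma_x)
    T = np.eye(n) - Sigma_x @ H.T @ G_inv @ H
    D_x = T.T @ T
    D_w = G_inv @ H @ Sigma_x @ Sigma_x @ H.T @ G_inv
    return f_val, D_x, D_w
\end{verbatim}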
The oracle problem~\eqref{eq:oracle} is manifestly separable in $L_x$ and $L_w$ and can therefore be decomposed into a sum of two structurally identical marginal subproblems. The Frank-Wolfe algorithm is an ideal method to address the nonlinear SDP~\eqref{eq:program:dual:concise} because these two marginal oracle subproblems admit quasi-closed form solutions. Specifically, Proposition~\ref{prop:quadratic}\,\ref{prop:quad:cov_min} in the appendix implies that problem~\eqref{eq:oracle} is uniquely solved by
\[
L_x^\star = (\gamma_x^\star)^2 (\gamma_x^\star I_n - D_x)^{-1} \covsa_x (\gamma_x^\star I_n - D_x)^{-1} \quad \text{and}\quad
L_w^\star = (\gamma_w^\star)^2 (\gamma_w^\star I_m - D_w)^{-1} \covsa_w (\gamma_w^\star I_m - D_w)^{-1},
\]
where $\gamma_x^\star\in(\lambda_{\max}(D_x),\infty)$ and $\gamma_w^\star\in(\lambda_{\max}(D_w),\infty)$ are the unique solutions of the algebraic equations
\begin{equation}
\label{eq:algebraic-equations}
\rho_x^2 - \inner{\covsa_x}{\big( I_n - \gamma_x^\star (\gamma_x^\star I_n - D_x)^{-1} \big)^2} = 0\quad \text{and}\quad
\rho_w^2 - \inner{\covsa_w}{\big( I_m - \gamma_w^\star (\gamma_w^\star I_m - D_w)^{-1} \big)^2} = 0,
\end{equation}
respectively. In practice, these algebraic equations need to be solved numerically.
However, the numerical errors in~$\gamma_x^\star$ and~$\gamma_w^\star$ must be contained to ensure that~$L_x^\star$ and~$L_w^\star$ give rise to a $\delta$-approximate solution for~\eqref{eq:oracle} in the sense of~\eqref{oracle}. In the following we will show that $\delta$-approximate solutions for each of the two oracle subproblems in~\eqref{eq:oracle} and for each $\delta\in(0,1)$ can be computed with an efficient bisection algorithm.
\begin{table}[th]
\begin{minipage}{0.60\columnwidth}
\begin{algorithm}[H]
\caption{Bisection algorithm for the oracle subproblem}
\label{algorithm:bisection}
\begin{algorithmic}
\REQUIRE nominal covariance matrix $\covsa \in \mathbb{S}_{++}^d$, radius $\rho \in\mathbb R_{++}$, \\ \hspace{2.3em} reference covariance matrix $\Sigma \in \mathbb{S}_{+}^d$ feasible in~\eqref{eq:oracle}, \\
\hspace{2.3em} gradient matrix $D \in \mathbb{S}_{+}^d$, $D\neq 0$, precision $\delta \in (0, 1)$,\\
\hspace{2.3em} dual objective function $\varphi(\gamma)$ defined in Theorem~\ref{thm:oracle}
\vspace{0.21em}
\STATE set $\lambda_1\leftarrow \lambda_{\max}(D)$, and let $v_1\in\mathbb R^d$ be an eigenvector for $\lambda_1$
\STATE set $\underline\gamma \leftarrow \lambda_1 ( 1 + (v_1^\top \covsa v_1)^\frac{1}{2} / \rho ) $ and $\overline\gamma \leftarrow \lambda_1 ( 1 + \Tr{\covsa}^{\frac{1}{2}} / \rho ) $
\REPEAT
\STATE Set $\tilde \gamma \leftarrow (\overline\gamma + \underline\gamma) / 2$ and $\tilde L \leftarrow (\tilde \gamma)^2 (\tilde \gamma I_d - D)^{-1} \covsa (\tilde \gamma I_d - D)^{-1}$
\STATE \textbf{if } $\frac{{\rm d} \varphi}{{\rm d}\gamma}(\tilde \gamma)<0$ \textbf{ then } Set $\underline\gamma \leftarrow \tilde \gamma$ \textbf{ else } Set $\overline\gamma \leftarrow \tilde \gamma$ \textbf{ endif}
\UNTIL{$\frac{{\rm d} \varphi}{{\rm d}\gamma}(\tilde \gamma)>0$ and $\inner{\tilde L - \Sigma}{D} \geq \delta\,\varphi(\tilde \gamma)$} \vspace{0.21em}
\ENSURE $\tilde L$
\end{algorithmic}
\end{algorithm}
\end{minipage}
\end{table}
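A direct Python transcription of Algorithm~\ref{algorithm:bisection} might read as follows. Here the inner product~$\langle\cdot,\cdot\rangle$ is taken to be the trace inner product, and the extreme eigenpair of~$D$ is obtained from a dense eigendecomposition; both are implementation choices rather than requirements of the algorithm.
\begin{verbatim}
import numpy as np

def bisection_oracle(Sigma_hat, rho, Sigma, D, delta):
    # Sketch of Algorithm 2 for one marginal oracle subproblem.
    d = D.shape[0]
    I = np.eye(d)
    lam, V = np.linalg.eigh(D)
    lam1, v1 = lam[-1], V[:, -1]             # largest eigenpair of D
    lo = lam1 * (1.0 + np.sqrt(v1 @ Sigma_hat @ v1) / rho)
    hi = lam1 * (1.0 + np.sqrt(np.trace(Sigma_hat)) / rho)

    def L_of(gamma):
        M = np.linalg.inv(gamma * I - D)
        return gamma ** 2 * M @ Sigma_hat @ M

    def phi(gamma):                          # dual objective
        M = np.linalg.inv(gamma * I - D)
        return gamma * (rho ** 2 + np.trace((gamma * M - I) @ Sigma_hat)) \
               - np.trace(Sigma @ D)

    def dphi(gamma):                         # derivative of phi
        R = I - gamma * np.linalg.inv(gamma * I - D)
        return rho ** 2 - np.trace(Sigma_hat @ R @ R)

    while True:                              # bisection over [lo, hi]
        gamma = 0.5 * (lo + hi)
        L = L_of(gamma)
        if dphi(gamma) < 0:
            lo = gamma
        else:
            hi = gamma
        if dphi(gamma) > 0 and np.trace((L - Sigma) @ D) >= delta * phi(gamma):
            return L
\end{verbatim}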
\begin{theorem}[Approximate oracle] \label{thm:oracle}
For any fixed $\rho\in \mathbb R_{++}$, $\covsa\in\mathbb{S}_{++}^d$ and $D \in\mathbb{S}_{+}^d$, $D\neq 0$, consider the generic oracle subproblem
\begin{equation}
\label{eq:oracle-primal}
\begin{array}{cl}
\displaystyle \max_{L \in \mathbb{S}_{+}^d} & \inner{L - \Sigma}{D}\\[-1.5ex]
\st & \Tr{L + \covsa - 2 \big( \covsa^\frac{1}{2} L \covsa^\frac{1}{2} \big)^\frac{1}{2}} \leq \rho^2,~L\succeq \lambda_{\min}(\covsa)I_d\,,
\end{array}
\end{equation}
where $\Sigma\in\mathbb{S}_{+}^d$ represents a feasible reference solution. Moreover, denote the feasible set of problem~\eqref{eq:oracle-primal} by~$\mathcal S^+$, let $\delta\in(0,1)$ be the desired oracle precision, and define $\varphi(\gamma)= \gamma(\rho^2 + \inner{\gamma(\gamma I_d - D)^{-1} - I_d}{\covsa})-\inner{\Sigma}{D}$ for any $\gamma>\lambda_{\max}(D)$. Then, Algorithm~\ref{algorithm:bisection} returns in finite time a matrix $\tilde L\in\mathbb{S}_{+}^d$ with the following~properties.
\begin{enumerate}[label = $(\roman*)$,itemsep = 2mm]
\item Feasibility: $\tilde L\in\mathcal S^+$
\item $\delta$-Suboptimality: $\inner{\tilde L - \Sigma}{D} \geq \delta \max_{L \in \mathcal S^+} ~ \inner{L - \Sigma}{D}$
\end{enumerate}
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:oracle}]
Proposition~\ref{prop:quadratic}\,\ref{prop:quad:cov_min} in the appendix guarantees that the lower bound on $L$ in~\eqref{eq:oracle-primal} is redundant and can be omitted without affecting the problem's optimal value. By Proposition~\ref{prop:quadratic}\,\ref{prop:quad:dual}, the oracle subproblem~\eqref{eq:oracle-primal}
thus admits the strong Lagrangian dual
\[
\min_{\gamma > \lambda_{\max}(D)}~\varphi(\gamma)\,,
\]
where the convex and differentiable function $\varphi(\gamma)$ is defined as in the theorem statement. In the following we denote by~$\lambda_1>0$ the largest eigenvalue of $D$ and let $v_1\in\mathbb R^d$ be a corresponding eigenvector. By Proposition~\ref{prop:quadratic}\,\ref{prop:quad:cov_min}, the dual oracle subproblem admits a minimizer $\gamma^\star\in[\underline \gamma,\overline\gamma]$ that is uniquely determined by the first-order optimality condition $\frac{{\rm d} \varphi}{{\rm d}\gamma}(\gamma^\star)=0$, where
\[
\underline\gamma= \lambda_1\left( 1+\sqrt{v_1^\top\covsa v_1}/\rho \right)\quad \text{and}\quad\overline\gamma= \lambda_1\left( 1+\sqrt{\Tr{\covsa}}/\rho\right),
\]
while the primal problem~\eqref{eq:oracle-primal} admits a unique maximizer $L^\star=L(\gamma^\star)$, where
\[
L(\gamma) = \gamma^2 (\gamma I_d - D)^{-1} \covsa (\gamma I_d - D)^{-1}.
\]
From the proof of Proposition~\ref{prop:quadratic}\,\ref{prop:quad:cov_min} it is evident that $L(\gamma)\succ \lambda_{\min}(\covsa)I_d$ for every $\gamma>\lambda_{\max}(D)$.
A direct calculation further shows that
\begin{align*}
\frac{{\rm d}\varphi}{{\rm d}\gamma}(\gamma)=\rho^2 - \inner{\covsa}{\big( I_d - \gamma (\gamma I_d - D)^{-1} \big)^2}
=\rho^2- \Tr{L(\gamma) + \covsa - 2 \big( \covsa^\frac{1}{2} L(\gamma) \covsa^\frac{1}{2} \big)^\frac{1}{2}}.
\end{align*}
Recalling that $\varphi(\gamma)$ is convex and that its derivative is non-negative for all~$\gamma\ge \gamma^\star$, the above reasoning implies that $L(\gamma)\in\mathcal S^+$ for all $\gamma\ge \gamma^\star$. Note also that the optimal value of the primal problem~\eqref{eq:oracle-primal} is non-negative because $\Sigma \in \mathcal S^+$. The continuity of $\inner{L(\gamma)-\Sigma}{D}$ at $\gamma=\gamma^\star$ thus ensures that there exists $\delta'>0$ with
\[
\inner{L(\gamma)- \Sigma}{D} \ge \delta \inner{L(\gamma^\star)- \Sigma}{D} = \delta \max_{L \in \mathcal S^+} ~ \inner{L - \Sigma}{D}\quad
\forall \gamma\in[\gamma^\star,\gamma^\star+\delta'].
\]
In summary, computing a feasible and $\delta$-suboptimal matrix $\tilde L\in\mathbb{S}_{+}^d$ is tantamount to finding $\tilde \gamma\in[\gamma^\star,\gamma^\star+\delta']$. Algorithm~\ref{algorithm:bisection} uses bisection over the interval $[\underline \gamma, \overline\gamma]$ to find a $\tilde\gamma$ with these properties.
\end{proof}
Theorem~\ref{thm:oracle} complements~\cite[Theorem~3.2]{ref:shafieezadeh2018wasserstein}, which constructs an approximate oracle for a nonlinear SDP similar to~\eqref{eq:program:dual:concise} that offers an {\em additive} error guarantee. The {\em multiplicative} error guarantee of the oracle constructed here is needed to ensure the linear convergence of the fully adaptive Frank-Wolfe algorithm. Next, we prove that the nonlinear SDP~\eqref{eq:program:dual:concise} satisfies all regularity conditions listed in Assumption~\ref{a:FW}.
\begin{proposition}[Regularity conditions of the nonlinear SDP~\eqref{eq:program:dual:concise}] \label{prop:regularity}
If $\rho_x \in \mbb R_{++}$, $\rho_w \in \mbb R_{++}$, $\covsa_x \in \mathbb{S}_{+}^n$ and $\covsa_w \in \mathbb{S}_{+}^m$, then the nonlinear SDP~\eqref{eq:program:dual:concise} obeys the following regularity conditions.
\begin{enumerate}[label = $(\roman*)$,itemsep = 2mm]
\item \label{prop:regularity-i} The objective function of problem~\eqref{eq:program:dual:concise} is $\beta$-smooth in the sense of Assumption~\ref{a:FW}\,\ref{a:FW:smooth}, where
\[
\beta = 2\lambda_{\min}^{-1}(\covsa_w) \left( C + C \; \lambda_{\max}^2(H^\top H) + \lambda_{\max}(H^\top H) \right),
\]
which depends on the auxiliary constant $C = \lambda_{\max} ( H^\top H ) \cdot \lambda_{\min}^{-2}(\covsa_w) \cdot ( \rho_x + \Tr{\covsa_x}^\frac{1}{2} )^4$.
\item \label{prop:regularity-ii} The marginal feasible sets $\mathcal S_x^+$ and $\mathcal S^+_w$ of problem~\eqref{eq:program:dual:concise} are $\alpha$-strongly convex with respect to $-f$ in the sense of Assumption~\ref{a:FW}\,\ref{a:FW:set}, where $\alpha = \min \, \{\alpha_x, \alpha_w \}$, which depends on the auxiliary constants
\[
\alpha_x = \frac{\lambda_{\min}^{\frac{5}{4}}(\covsa_x)}{2 \rho_x \big( \rho_x + \Tr{\covsa_x}^\frac{1}{2} \big)^{\frac{7}{2}} } \quad\text{and}\quad \alpha_w = \frac{\lambda_{\min}^{\frac{5}{4}}(\covsa_w)}{2 \rho_w \big( \rho_w + \Tr{\covsa_w}^\frac{1}{2} \big)^{\frac{7}{2}} }\,.
\]
\item \label{prop:regularity-iii}
The objective function of problem~\eqref{eq:program:dual:concise} is $\varepsilon$-steep in the sense of Assumption~\ref{a:FW}\,\ref{a:FW:lower}, where $\varepsilon = \min \, \{\varepsilon_x, \varepsilon_w \}$, which depends on the auxiliary constants
\begin{align*}
\varepsilon_x &= \left( \frac{\lambda_{\min}(\covsa_w)}{\big( \rho_x + \Tr{\covsa_x}^\frac{1}{2} \big)^2 \lambda_{\max}(H^\top H)+ \big(\rho_w + \Tr{\covsa_w}^{\frac{1}{2}} \big)^2} \right)^2
\end{align*}
and
\begin{align*}
\varepsilon_w &= \lambda_{\max}(H^\top H) \left( \frac{\lambda_{\min}(\covsa_x)}{\big( \rho_w + \Tr{\covsa_w}^\frac{1}{2} \big)^2 + \lambda_{\min}(\covsa_x) \lambda_{\max}(H^\top H)} \right)^2.
\end{align*}
\end{enumerate}
\end{proposition}
\begin{proof}[Proof of Proposition~\ref{prop:regularity}]
The proof repeatedly uses the fact that, for any $A \in \mbb R^{d_1 \times d_2}$ and $ B \in \mathbb{S}_{+}^{d_2}$, we have
\begin{equation}\label{eq:eig}
\lambda_{\max}(A B A^\top)
= \lambda_{\max}(A^\top A B)
\leq \lambda_{\max}(A^\top A) \, \lambda_{\max}(B).
\end{equation}
The equality in~\eqref{eq:eig} holds because all eigenvalues of $A B A^\top$ are non-negative and because the non-zero spectrum of $A B A^\top$ is identical to that of $A^\top A B$ due to \cite[Proposition~4.4.10]{ref:bernstein2009matrix}. The inequality follows from the observation that $\lambda_{\max}(A^\top A)$ and $\lambda_{\max}(B)$ coincide with the operator norms of the positive semidefinite matrices $A^\top A$ and $B$, respectively.
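The identity and the bound in~\eqref{eq:eig} are easy to check numerically; the following snippet, which is illustrative only and not part of the proof, verifies them on a random instance.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
B0 = rng.standard_normal((5, 5))
B = B0 @ B0.T                                    # B is positive semidefinite
lhs = np.linalg.eigvalsh(A @ B @ A.T).max()      # lambda_max(A B A^T)
mid = np.linalg.eigvals(A.T @ A @ B).real.max()  # lambda_max(A^T A B)
rhs = np.linalg.eigvalsh(A.T @ A).max() * np.linalg.eigvalsh(B).max()
assert np.isclose(lhs, mid) and lhs <= rhs + 1e-9
\end{verbatim}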
As for assertion~\ref{prop:regularity-i}, recall first that the objective function~$f$ of the nonlinear SDP~\eqref{eq:program:dual} is concave. In order to show that $f$ is $\beta$-smooth for some $\beta>0$, it thus suffices to prove that the largest eigenvalue of the positive semidefinite Hessian matrix of $-f$ admits an upper bound uniformly across $\mathcal S^+_x \times \mathcal S^+_w$. By Lemma~\ref{lem:Taylor-f}, the partial gradients of~$f$ evaluated at~$\Sigma_x\succ 0$ and~$\Sigma_w\succ 0$ are given by
\begin{align*}
D_x &= \nabla_{\Sigma_x} f\big(\Sigma_x, \Sigma_w\big) = (I_n - \Sigma_x H^\top G^{-1}H )^\top (I_n - \Sigma_x H^\top G^{-1}H)\\
D_w &= \nabla_{\Sigma_w} f\big(\Sigma_x, \Sigma_w\big) = G^{-1} H \Sigma_x^2 H^\top G^{-1}\,,
\end{align*}
where $G= H\Sigma_x H^\top +\Sigma_w$. Moreover, the Hessian matrix
\[
\mathcal H =
\begin{bmatrix}
\mathcal H_{xx} & \mathcal H_{xw} \\
\mathcal H_{xw}^\top & \mathcal H_{ww}
\end{bmatrix} \succeq 0
\]
of the convex function~$-f$ evaluated at $\Sigma_x\succ 0$ and~$\Sigma_w\succ 0$ consists of the submatrices
\begin{align*}
\begin{array}{l@{\;}l@{\;}l@{\;\;}l}
\mathcal H_{xx} & = -\nabla^2_{xx} f(\Sigma_x, \Sigma_w) &=& 2 D_x \otimes H^\top G^{-1} H \\[0.5ex]
\mathcal H_{xw} &= -\nabla^2_{xw} f(\Sigma_x, \Sigma_w) &=& H^\top G^{-1} \otimes (H^\top D_w - \Sigma_x H^\top G^{-1}) + (H^\top D_w - \Sigma_x H^\top G^{-1}) \otimes H^\top G^{-1} \\[0.5ex]
\mathcal H_{ww} & = -\nabla^2_{ww} f(\Sigma_x, \Sigma_w) &=& 2 D_w \otimes G^{-1},
\end{array}
\end{align*}
where $\nabla_x$ and $\nabla_w$ are used as shorthands for the nabla operators with respect to $\vect (\Sigma_x)$ and $\vect (\Sigma_w)$, respectively. To construct an upper bound on $\lambda_{\max}(\mathcal H)$ uniformly across $\mathcal S^+_x \times \mathcal S^+_w$, we note first that
\begin{equation}
\label{eq:hessian-bound}
\lambda_{\max}(\mathcal H) \leq \lambda_{\max}(\mathcal H_{xx}) + \lambda_{\max}(\mathcal H_{ww}) = 2 \left( \lambda_{\max} \left( D_x \right) \lambda_{\max} \left( H^\top G^{-1} H \right) + \lambda_{\max} \left( D_w \right) \lambda_{\max} \left( G^{-1} \right) \right),
\end{equation}
where the inequality follows from \cite[Fact~5.12.20]{ref:bernstein2009matrix} and the subadditivity of the maximum eigenvalue, whereas the equality exploits the fact that the eigenvalues of a Kronecker product of two symmetric matrices are the pairwise products of the eigenvalues of its factors \cite[Proposition~7.1.10]{ref:bernstein2009matrix}. In the remainder of the proof, we derive an upper bound for each term on the right hand side of the above expression.
By the definition of $G$ and because $\Sigma_w\in\mathcal S^+_w$, we have $G \succeq \lambda_{\min}(\Sigma_w) I_m \succeq \lambda_{\min}(\covsa_w) I_m$. As $\covsa_w\succ 0$ by assumption, we may thus conclude that $\lambda_{\max} ( G^{-1} ) \leq \lambda_{\min}^{-1}(\covsa_w)$, which in turn implies via~\eqref{eq:eig} that
\[
\lambda_{\max} ( H^\top G^{-1} H ) \leq \lambda_{\max}(G^{-1}) \, \lambda_{\max}(H H^\top) \leq \lambda_{\min}^{-1}(\covsa_w) \, \lambda_{\max}(H^\top H).
\]
Similarly, by the definition of~$D_w$ we find
\begin{align*}
\lambda_{\max} ( D_w )
&= \lambda_{\max} (G^{-1} H \Sigma_x^2 H^\top G^{-1}) \\
&\leq \lambda_{\max}^2 ( \Sigma_x ) \, \lambda_{\max}^2 ( G^{-1} ) \, \lambda_{\max} ( H^\top H ) \leq \big( \rho_x + \Tr{\covsa_x}^{\frac{1}{2}} \big)^4 \, \lambda_{\min}^{-2} ( \covsa_w ) \, \lambda_{\max} ( H^\top H ),
\end{align*}
where the first inequality follows from applying the estimate~\eqref{eq:eig} twice, while the last inequality reuses the bound on~$\lambda_{\max} ( G^{-1} )$ derived above and exploits Lemma~\ref{lemma:compact:FS}. Finally, by the definition of~$D_x$ we have
\begin{align*}
\lambda_{\max} ( D_x ) &\leq 1+\lambda_{\max} ( -H^\top G^{-1} H \Sigma_x )+\lambda_{\max} ( -\Sigma_x H^\top G^{-1} H) + \lambda_{\max} ( H^\top G^{-1} H \Sigma_x^2 H^\top G^{-1} H ) \\&\leq 1 + \lambda_{\max} ( H^\top G^{-1} H \Sigma_x^2 H^\top G^{-1} H ) \\
&\leq 1 + \lambda_{\max}^2 ( \Sigma_x ) \, \lambda_{\max}^2 ( G^{-1} ) \, \lambda_{\max}^2 ( H^\top H ) \\
&\leq 1 + \big( \rho_x + \Tr{\covsa_x}^{\frac{1}{2}} \big)^4 \, \lambda_{\min}^{-2}(\covsa_w) \, \lambda_{\max}^2 ( H^\top H ),
\end{align*}
where the first inequality holds due to the subadditivity of the maximum eigenvalue and~\cite[Proposition~4.4.10]{ref:bernstein2009matrix}, which implies that the nonzero spectra of $-\Sigma_x H^\top G^{-1} H$ and $-H^\top G^{-1} H \Sigma_x$ are both real and coincide with the nonzero spectrum of the negative semidefinite matrix $-\Sigma_x^\frac{1}{2} H^\top G^{-1} H \Sigma_x^\frac{1}{2}$. The third inequality follows from applying the estimate~\eqref{eq:eig} three times, and the fourth inequality reuses the bound on~$\lambda_{\max} ( G^{-1} )$ and exploits Lemma~\ref{lemma:compact:FS}. Substituting all the above bounds into~\eqref{eq:hessian-bound} completes the proof of assertion~\ref{prop:regularity-i}.
As for assertion~\ref{prop:regularity-ii}, we first show that the feasible set $\mathcal S^+_x$ is $\alpha_x$-strongly convex with respect to $-f$ in the sense of Assumption~\ref{a:FW}\,\ref{a:FW:set}. To see this, fix any $\Sigma_x, \Sigma_x' \in \mathcal S^+_x$ and $\theta \in [0,1]$, and set
\begin{align}\label{cov_theta}
\Sigma_{\theta} = \theta \Sigma_x + (1-\theta) \Sigma_x' + \theta(1-\theta)\frac{\alpha_x}{2} \| \Sigma_x - \Sigma_x' \|^2 \frac{D_x}{\| D_x \|},
\end{align}
where $\alpha_x>0$ is defined as in the proposition statement, and $D_x$ denotes again the partial gradient of $f$ with respect to $\Sigma_x$. To prove strong convexity of $\mathcal S^+_x$ with respect to $-f$, we will show that $\Sigma_{\theta}\in \mathcal S^+_x$. Note first that $\Sigma_\theta \succeq \lambda_{\min}(\covsa_x) I_n$ because $\Sigma_x, \Sigma_x' \in \mathcal S^+_x$ and because $D_x$ is positive semidefinite. Next, define
\[
\overline {\mathcal S}_x = \left\{ \Sigma_x \in \mathbb{S}_{+}^n: \lambda_{\min}(\covsa_x) I_n \preceq \Sigma_x \preceq (\rho_x + \Trace\big[\covsa_x\big]^{\frac{1}{2}})^2 I_n \right\},
\]
and note that $\mathcal S^+_x \subseteq \overline{\mathcal S}_x$ by Lemma~\ref{lemma:compact:FS}. Moreover, \cite[Theorem~1]{ref:bhatia2018strong} implies that the function $g_x:\mathbb{S}_{+}^n\to\mathbb R$ defined through $g_x(\Sigma_x)=\Tr{\Sigma_x + \covsa_x - 2(\covsa_x^{\frac{1}{2}} \Sigma_x \covsa_x^{\frac{1}{2}})^{\frac{1}{2}}}$ is $\kappa_1$-strongly convex and $\kappa_2$-smooth over~$\overline {\mathcal S}_x$, where
\begin{equation*}
\kappa_1 = \frac{\lambda_{\min}^{\frac{1}{2}}(\covsa_x)}{2 (\rho_x + \Trace\big[ \covsa_x \big]^\frac{1}{2})^3} \quad\text{and}\quad
\kappa_2 = \frac{\rho_x + \Trace\big[\covsa_x\big]^\frac{1}{2}}{2 \lambda_{\min}^{\frac{3}{2}}(\covsa_x)}.
\end{equation*}
By \cite[Theorem~12]{ref:journee2010generalized}, the sublevel set $\mathcal S_x^+=\{\Sigma_x\in\overline{\mathcal S}_x:g_x(\Sigma_x)\le \rho_x^2\}$ is thus strongly convex in the canonical sense (relative to $\overline{\mathcal S}_x$) with convexity parameter $\alpha_x=\kappa_1 / (\sqrt{2 \kappa_2} \rho_x)$. Applied to the matrix $\Sigma_\theta$ defined in~\eqref{cov_theta}, this insight implies that $g_x(\Sigma_\theta)\le \rho_x^2$, which in turn ensures that $\Sigma_\theta\in\mathcal S_x^+$. As $\theta\in[0,1]$ was chosen arbitrarily, we may conclude that~$\mathcal S_x^+$ is $\alpha_x$-strongly convex with respect to~$-f$. Using an analogous argument, one can show that~$\mathcal S^+_w$ is $\alpha_w$-strongly convex with respect to $-f$, where $\alpha_w>0$ is defined as in the proposition statement. In summary, $\mathcal S^+_x \times \mathcal S^+_w$ is therefore $\alpha$-strongly convex with respect to $-f$ in the sense of Assumption~\ref{a:FW}\,\ref{a:FW:set}, where $\alpha=\min \{ \alpha_x, \alpha_w \}$.
In order to prove assertion~\ref{prop:regularity-iii}, we will establish lower bounds on $\lambda_{\max}(D_x)$ and $\lambda_{\max}(D_w)$ uniformly across $\mathcal S^+_x \times \mathcal S^+_w$. The claim then follows from the observation that $\lambda_{\max}(D_x)\leq \|D_x\|$ and $\lambda_{\max}(D_w)\leq\|D_w\|$. We first derive a lower bound on~$\lambda_{\max}(D_x)$. To this end, set $T_1 = I_n - \Sigma_x H^\top G^{-1}H$ and $T_2 = H \Sigma_x H^\top G^{-1}$, and note that both $T_1$ and $T_2$ have real spectra thanks to~\cite[Proposition~4.4.10]{ref:bernstein2009matrix}. As $D_x = T_1^\top T_1$, we find
\begin{align} \label{eq:Dx-bound-1}
\lambda_{\max}(D_x)
= \lambda_{\max} (T_1 T_1^\top)
\geq \max_{\lambda \in \spec(T_1)} | \lambda |^2 = \max_{\lambda \in \spec(T_2)} | 1 - \lambda |^2 = \lambda_{\max}^2(I - T_2) = \lambda_{\max}^2(\Sigma_w G^{-1}),
\end{align}
where $\spec(T)$ denotes the eigenvalue spectrum of any square matrix $T$. The inequality in~\eqref{eq:Dx-bound-1} follows from Browne's theorem \cite[Fact~5.11.21]{ref:bernstein2009matrix}, the second equality holds because the non-zero spectrum of $\Sigma_x H^\top G^{-1}H$ matches that of $ H \Sigma_x H^\top G^{-1}$ thanks to~\cite[Proposition~4.4.10]{ref:bernstein2009matrix}, and the last equality follows from the identity
$T_2= I_m - \Sigma_w G^{-1}$.
Notice that all eigenvalues of $\Sigma_w G^{-1}$ are real because $T_2$ has a real spectrum.
The estimate~\eqref{eq:Dx-bound-1} implies that a uniform lower bound on the largest eigenvalue of $D_x$ can be obtained by maximizing the largest eigenvalue of $\Sigma_w G^{-1}$ over $\mathcal S^+_x \times \mathcal S^+_w$. By the definition of $G$, we have
\begin{equation}
\label{eq:Dx-bound-2}
\begin{aligned}
\inf_{\substack{\Sigma_x \in \mathcal S^+_x \\ \Sigma_w \in \mathcal S^+_w}} \lambda_{\max}(\Sigma_w G^{-1})
&\ge \inf_{\substack{\Sigma_x \in \mathcal S^+_x \\ \Sigma_w \in \mathcal S^+_w}} \lambda_{\min} \big(\Sigma_w (H \Sigma_x H^\top + \Sigma_w)^{-1} \big) \\
&\ge \inf_{\substack{\Sigma_x \in \mathcal S^+_x \\ \Sigma_w \in \mathcal S^+_w}} \lambda_{\min}(\Sigma_w) \,\lambda_{\min} \left( (H \Sigma_x H^\top + \Sigma_w)^{-1} \right) \\
&\ge \inf_{\substack{\Sigma_x \in \mathcal S^+_x \\ \Sigma_w \in \mathcal S^+_w}} \frac{\lambda_{\min}(\Sigma_w)}{\lambda_{\max}(\Sigma_x) \lambda_{\max}(H^\top H)+ \lambda_{\max}(\Sigma_w)} \\
&\ge \frac{\lambda_{\min}(\covsa_w)}{\big( \rho_x + \Trace\big[\covsa_x\big]^\frac{1}{2} \big)^2 \lambda_{\max}(H^\top H)+ \big(\rho_w + \Trace\big[\covsa_w\big]^{\frac{1}{2}} \big)^2}, \end{aligned}
\end{equation}
where the second inequality holds because $\lambda_{\min}(T)=\lambda_{\max}^{-1}(T^{-1})$ for any $T\succ 0$ and because the maximum eigenvalue of a positive definite matrix coincides with its operator norm. The third inequality exploits~\eqref{eq:eig} and the subadditivity of the maximum eigenvalue, and the last inequality follows from Lemma~\ref{lemma:compact:FS}.
Combining~\eqref{eq:Dx-bound-1} and~\eqref{eq:Dx-bound-2} shows that
$ \| D_x \| \geq \lambda_{\max}(D_x) \geq \varepsilon_x, $
where $\varepsilon_x$ is defined as in the proposition statement.
Using similar arguments, we can also derive a uniform lower bound on~$\lambda_{\max}(D_w)$. Specifically, we have
\begin{equation}
\label{eq:lb1}
\begin{aligned}
\lambda_{\max}(D_w) \lambda_{\max}(H^\top H)
&\geq \lambda_{\max}(H^\top D_w H)
= \lambda_{\max} \big(H^\top G^{-1} H \Sigma_x \, (H^\top G^{-1} H \Sigma_x)^\top \big)\\
&\geq \lambda_{\max}^2(H \Sigma_x H^\top G^{-1})
= \left( 1 - \lambda_{\min}(\Sigma_w G^{-1}) \right)^2\\ & = \left( 1 - \frac{1}{1 + \lambda_{\max}(H \Sigma_x H^\top \Sigma_w^{-1})} \right)^2\\ & = \left( 1 - \frac{1}{1 + \lambda_{\max}\left(\Sigma_w^{-\frac{1}{2}} H \Sigma_x H^\top \Sigma_w^{-\frac{1}{2}}\right)} \right)^2,
\end{aligned}
\end{equation}
where the first inequality follows from~\eqref{eq:eig}, and the first equality holds due to the definition of $D_w$. Moreover, the second inequality exploits Browne's theorem~\cite[Fact~5.11.21]{ref:bernstein2009matrix}, and the second equality uses the definition of~$G$. Finally, the third equality follows from the relation $\lambda_{\min}(\Sigma_w G^{-1}) = \lambda_{\max}^{-1}(G \Sigma_w^{-1})$, and the fourth equality holds due to~\cite[Proposition~4.4.10]{ref:bernstein2009matrix}.
A uniform lower bound on $\lambda_{\max}(D_w)$ can thus be obtained from the estimate
\[
\Sigma_w^{-\frac{1}{2}} H \Sigma_x H^\top \Sigma_w^{-\frac{1}{2}} \succeq \lambda_{\min}(\Sigma_x) \Sigma_w^{-\frac{1}{2}} H H^\top \Sigma_w^{-\frac{1}{2}} \succeq \frac{\lambda_{\min}(\Sigma_x)}{\lambda_{\max}(\Sigma_w)} H H^\top,
\]
which implies via Lemma~\ref{lemma:compact:FS} that
\begin{align}
\label{eq:lb2}
\inf_{\substack{\Sigma_x \in \mathcal S^+_x \\ \Sigma_w \in \mathcal S^+_w}} \lambda_{\max}(\Sigma_w^{-\frac{1}{2}} H \Sigma_x H^\top \Sigma_w^{-\frac{1}{2}})
&\geq \inf_{\substack{\Sigma_x \in \mathcal S^+_x \\ \Sigma_w \in \mathcal S^+_w}} \frac{\lambda_{\min}(\Sigma_x)}{\lambda_{\max}(\Sigma_w)} \lambda_{\max}(H^\top H) \geq \frac{\lambda_{\min}(\covsa_x)}{\big(\rho_w + \Trace\big[\covsa_w\big]^{\frac{1}{2}} \big)^2} \lambda_{\max}(H^\top H).
\end{align}
Combining \eqref{eq:lb1} and \eqref{eq:lb2} shows that $\| D_w \| \geq \lambda_{\max}(D_w) \geq \varepsilon_w$, where $\varepsilon_w$ is defined as in the proposition statement. We thus conclude that $f$ is $\varepsilon$-steep in the sense of Assumption~\ref{a:FW}\,\ref{a:FW:lower} with $\varepsilon = \min \{\varepsilon_x, \varepsilon_w \}$.
\end{proof}
By Theorem~\ref{theorem:algorithm}, which is applicable because of Proposition~\ref{prop:regularity}, the fully adaptive Frank-Wolfe algorithm (see Algorithm~\ref{algorithm:FAFW}) solves the (minimization problem equivalent to the) nonlinear SDP~\eqref{eq:program:dual:concise} at a linear convergence rate. Moreover, Theorem~\ref{thm:oracle} ensures that the oracle problem~\eqref{eq:oracle}, which needs to be solved in each iteration of Algorithm~\ref{algorithm:FAFW}, can be solved highly efficiently via bisection (see Algorithm~\ref{algorithm:bisection}).
We emphasize that if $\covsa_x$ is singular, then the strong convexity parameter~$\alpha$ of Assumption~\ref{a:FW}\,\ref{a:FW:set} vanishes, and therefore Theorem~\ref{theorem:algorithm} is no longer applicable. In that case, however, Algorithm~\ref{algorithm:FAFW} is still guaranteed to converge, albeit at a sublinear rate; see \cite{ref:pedregosa2018stepsize} for further details.
\section{Numerical Experiments}
\label{sect:numerical}
All experiments are run on an Intel XEON CPU with 3.40GHz clock speed and 16GB of RAM. All (linear) SDPs are solved with MOSEK~8 using the YALMIP interface~\cite{ref:lofberg2004YALMIP}. In order to ensure the reproducibility of our experiments, we make all source codes available at \url{https://github.com/sorooshafiee/WMMSE}.
\subsection{Scalability of the Frank-Wolfe Algorithm}
\label{sec:scalability}
We first compare the convergence behavior of the Frank-Wolfe algorithm developed in Section~\ref{sect:algorithm} against that of MOSEK. Each experiment consists of $10$ independent simulation runs, in all of which we fix the signal and noise dimensions to $n=m=d$ and the Wasserstein radii to $\rho_x = \rho_w = \sqrt{d}$ for some~$d\in \mathbb N$. In each simulation run we randomly generate two nominal covariance matrices $\covsa_x$ and $\covsa_w$ as follows. First, we sample $Q_x$ and $Q_w$ from the standard normal distribution on $\mbb R^{d\times d}$, and we denote by $R_x$ and $R_w$ the orthogonal matrices whose columns correspond to the orthonormal eigenvectors of $Q_x + Q_x^\top$ and $Q_w+Q_w^\top$, respectively. Then, we set $\covsa_x = R_x \Lambda_x (R_x)^\top$ and $\covsa_w= R_w \Lambda_w R_w^\top$, where $\Lambda_x$ and $\Lambda_w$ are diagonal matrices whose main diagonals are sampled uniformly from $[1,5]^d$ and $[1,2]^d$, respectively. Finally, we set $\wh{\m}_x=0$ and $\wh{\m}_w=0$. The Wasserstein MMSE estimator can then be computed either by solving the nonlinear SDP~\eqref{eq:program:dual} with a Frank-Wolfe algorithm or by solving the linear SDP~\eqref{eq:SDP:dual} with MOSEK. Figures~\ref{fig:algorithm:a} and~\ref{fig:algorithm:b} show the execution time and the number of iterations needed by the vanilla, adaptive and fully adaptive versions of the Frank-Wolfe (FW) algorithm as well as by MOSEK to drive the (surrogate) duality gap below~$10^{-3}$. MOSEK runs out of memory for all dimensions~$d>100$. Figure~\ref{fig:algorithm:c} visualizes the empirical convergence behavior of the three different Frank-Wolfe algorithms. We observe that the fully adaptive Frank-Wolfe algorithm finds highly accurate solutions already after $20$ iterations for problem instances of dimension $d=1{,}000$.
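The covariance generation procedure just described can be sketched in a few lines of Python; the helper name and the seed below are arbitrary.
\begin{verbatim}
import numpy as np

def random_nominal_covariance(d, lo, hi, rng):
    # Random eigenbasis from a symmetrized Gaussian matrix;
    # eigenvalues sampled uniformly from [lo, hi].
    Q = rng.standard_normal((d, d))
    _, R = np.linalg.eigh(Q + Q.T)
    Lam = np.diag(rng.uniform(lo, hi, size=d))
    return R @ Lam @ R.T

rng = np.random.default_rng(42)                  # arbitrary seed
d = 100
Sigma_x_hat = random_nominal_covariance(d, 1.0, 5.0, rng)
Sigma_w_hat = random_nominal_covariance(d, 1.0, 2.0, rng)
\end{verbatim}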
\begin{figure*}
\centering
\subfigure[Scaling of execution time]{\label{fig:algorithm:a} \includegraphics[width=0.31\columnwidth]{time.pdf}} \hspace{0pt}
\subfigure[Scaling of iteration count]{\label{fig:algorithm:b} \includegraphics[width=0.31\columnwidth]{iteration.pdf}} \hspace{0pt}
\subfigure[Convergence for $d=1000$]{\label{fig:algorithm:c} \includegraphics[width=0.31\columnwidth]{convergence.pdf}}
\caption{Scalability properties of different methods for computing the Wasserstein MMSE estimator (shown are the means (solid lines) and the ranges (shaded areas) of the respective performance measures across $10$ simulation runs)}
\label{fig:algorithm}
\end{figure*}
\subsection{The Value of Structural Information}
In the second experiment we study the predictive power of different MMSE estimators.
The experiment consists of $100$ independent simulation runs. In each run, we use the same procedure as in Section~\ref{sec:scalability} to generate two random covariance matrices $\Sigma_x$ and $\Sigma_w$ of dimensions $n=m = 50$, and we set the true signal and noise distributions to $\mathbb{P}_x = \mathcal N(0, \Sigma_x)$ and $\mathbb{P}_w = \mathcal N(0, \Sigma_w)$, respectively. Next, we define $\covsa_x$ and $\covsa_w$ as the sample covariance matrices corresponding to $100$ independent samples from $\mathbb{P}_x$ and $\mathbb{P}_w$, respectively. Moreover, we set $H=I_n$. To assess the value of structural information, we compare the Wasserstein MMSE estimator proposed in this paper against the Bayesian MMSE estimator associated with the nominal signal and noise distributions and the unstructured Wasserstein MMSE estimator proposed in~\cite{ref:shafieezadeh2018wasserstein}. The latter uses a single Wasserstein ball to model the ambiguity of the joint distribution of~$x$ and~$y$, thereby ignoring the structural information that~$w=Hy-x$ is independent of~$x$. Both the structured and unstructured Wasserstein MMSE estimators collapse to the nominal Bayesian MMSE estimator when the underlying Wasserstein radii are set to zero. Recall also that the nominal Bayesian MMSE estimator is optimal in distributionally robust estimation problems whose ambiguity sets are defined via information divergences \cite{ref:levy2004robust, ref:levy2012robust, ref:zorzi2016robust, ref:zorzi2017robustness}. This robustness property makes the nominal Bayesian MMSE estimator an interesting benchmark.
We quantify the performance of a given estimator by its {\em regret}, which is defined as the difference between the estimator's average risk and the least possible average risk of {\em any} estimator under the unknown true distributions $\mathbb{P}_x$ and $\mathbb{P}_w$. Note that the minimum average risk is attained by the (affine) Bayesian MMSE estimator corresponding to the (normal) distributions $\mathbb{P}_x$ and $\mathbb{P}_w$. Figure~\ref{fig:perf:a} shows the regret of the Wasserstein MMSE estimator with $\rho_x=\rho$ and $\rho_w=0$, the Wasserstein MMSE estimator with $\rho_x=0$ and $\rho_w=\rho$ and the unstructured Wasserstein MMSE estimator from~\cite{ref:shafieezadeh2018wasserstein} with Wasserstein radius $\rho$ for all $\rho\in[0.1,10]$. The solid lines represent the averages and the shaded areas the ranges of the regret across all $100$ simulation runs. The regret of the structured Wasserstein MMSE estimator with Wasserstein radii $\rho_x,\rho_w\in[0.1,10]$ averaged across all $100$ simulation runs is visualized by the surface plot in Figure~\ref{fig:perf:b}.
We observe that the average regret of the nominal Bayesian MMSE estimator amounts to~$16.7$ (the leftmost value of all curves in Figure~\ref{fig:perf:a}), while the best unstructured Wasserstein MMSE estimator attains a significantly lower average regret of~$13.1$ (the minimum of the blue curve in Figure~\ref{fig:perf:a}). The best structured Wasserstein MMSE estimator without noise ambiguity ($\rho_w=0$) displays a similar performance, attaining an average regret of~$13.2$ (the minimum of the red curve in Figure~\ref{fig:perf:a}), while the one without signal ambiguity ($\rho_x=0$) further reduces the average regret by more than $50\%$ to $6.0$ (the minimum of the yellow curve in Figure~\ref{fig:perf:a}). Finally, the best among {\em all} structured Wasserstein MMSE estimators, which is obtained by tuning both $\rho_x$ and $\rho_w$, attains an even lower average regret of~$2.2$ (the minimum of the surface plot in Figure~\ref{fig:perf:b}). This experiment confirms our hypothesis that structural (independence) information as well as information about distributional ambiguity can improve the predictive power of MMSE estimators.
Unlike in data-driven optimization, where the nominal distribution and the radii of the ambiguity sets can be tuned from data, we assume here that the nominal distribution and the radii of the ambiguity sets reflect the modeler's prior distributional information. Thus, they are reminiscent of prior distributions in Bayesian statistics.
The radii could be tuned using standard cross validation techniques, for example, if we had access to samples $(\hat x_i, \hat y_i)$, $i=1,\ldots, N$ drawn independently from the true joint distribution of $(x,y)$ under~$\mathbb{P}$. This conflicts, however, with our central assumption that only $y$ is observable. In this case, it is fundamentally impossible to assess the empirical performance of an estimator and to tune the radii via cross validation.
\begin{figure*}[t]
\centering
\subfigure[Regret of different Wasserstein MMSE estimators involving a single hyperparameter $\rho$.]{\label{fig:perf:a} \includegraphics[width=0.45\columnwidth]{fig2-all.pdf}} \hspace{0pt}
\subfigure[Regret of the Wasserstein MMSE estimator involving two hyperparameters $\rho_x$ and $\rho_w$.]{\label{fig:perf:b} \includegraphics[width=0.45\columnwidth]{fig2-3d.pdf}}
\caption{Regret of different estimators averaged across 100 simulation runs.}
\label{fig:perf}
\end{figure*}
\noindent \textbf{Acknowledgments.} We are grateful to Erick Delage for valuable comments on an earlier version of this paper. This research was supported by the Swiss National Science Foundation grant number BSCGI0\_157733.
Variable-density flows play an important role in a variety of physical phenomena spanning a wide range of length scales, including inertial confinement fusion \citep{Lindl1992}, oceanic flows \citep{Boyd}, and stellar evolution and the explosion of supernovae \citep{Zingale2009}. Variable-density effects arise whenever two fluids of differing molecular weights begin to mix or when thermal or compressibility effects significantly modify the fluid density. When the fluid densities are close to each other, i.e., the density ratio is close to 1, the density field is usually modeled as a passive scalar under the Boussinesq approximation. For a flow with sufficiently high fluid density variance, characterized by the Atwood number $At = (\rho_2 - \rho_1)/(\rho_2 + \rho_1)$, where $\rho_1$ and $\rho_2$ are the light and heavy fluid densities, respectively, density gradients play a very important role in the transport and evolution of the turbulent characteristics of the flow. Motivated by these applications, significant progress has been made in studying variable-density flows using direct numerical simulations (DNS) and large-eddy simulations (LES). However, these approaches remain computationally expensive, especially for complex applications. For such applications, computationally tractable and efficient turbulence models based on transport equations, which predict the ``averaged'' behavior of the turbulent mixing zone, are employed. Because ensemble averaging introduces second- and higher-order correlations of turbulent fluctuations, additional terms arise and must be modeled.
Among the models of variable-density flow, the Besnard-Harlow-Rauenzahn (BHR) model \citep{Besnard1992} is both widely used and computationally tractable. This model is based on the evolution equations arising from second-order correlations and gradient-diffusion approximations. Using a mass-weighted averaged decomposition, the original BHR model includes full transport equations for the Reynolds stresses, turbulent mass-flux velocity, density fluctuations, and turbulence kinetic energy dissipation rate. Many advanced variants \citep{Schwarzkopf2011, Schwarzkopf2016} of the BHR model have been developed to predict variable-density flows.
The Reynolds-averaged Navier-Stokes (RANS) models, such as $k-\epsilon$ and $k-\omega-SST$, are computationally efficient and widely employed in simulating turbulent flows in complex engineering applications. However, the simplifications and assumptions made in RANS models induce a significant degree of epistemic uncertainty in numerical predictions. The structural uncertainty in RANS models has been quantified by incorporating the perturbations to the eigenvalues \citep [] { Gorle2013, Emory2013} and the eigenvectors \citep [] { Mishra2016, Thompson2019} of the modeled Reynolds stress tensor. In this methodology, the structural uncertainties in turbulence models are quantified by injecting uncertainty into the shape and orientation of the Reynolds stress tensor to quantify the variability and biases. Specifically, the perturbations of the Reynolds stress anisotropy are governed by sampling from the extreme states of the turbulence componentiality, and the perturbations to the Reynolds stress eigenvectors are guided by the maximal states of the production mechanism. Consequently, the approximate bounds of the structural uncertainty can be estimated within five RANS simulations \citep [] {Iaccarino2017}. The eddy-viscosity hypothesis assumes that the Reynolds stress has the same eigenvectors as the mean rate of strain, which is known to be invalid in complex flows such as flow with curvy boundaries and separations. The physical complexity is further increased by the buoyancy effect. Currently, the structural uncertainty framework accounts for the discrepancy between experimental observations and high-fidelity simulations for many flows with separations \citep [] {Iaccarino2017} and curvy boundaries \citep [] { Thompson2019}. However, the extension to variable-density flow is very limited due to the significantly increased complexity, for example, the buoyancy effect in the turbulent closures and transport equations for turbulent mass-flux velocities.
In this brief, we develop a comprehensive framework to identify, quantify, isolate, and reduce the uncertainties in the original BHR model \citep{Besnard1992} for variable-density flows. Because the eigenspace perturbation of the Reynolds stress successfully accounts for the structural uncertainty in many flows, we first extend this methodology to the BHR model for variable-density flows with different $At$. The philosophy of the eigenspace perturbation of the Reynolds stress, i.e., quantifying the states that maximize and minimize the production mechanism, motivates the structural uncertainty quantification strategy proposed here for the BHR model. Several canonical flows, including the one-dimensional (1D) Rayleigh-Taylor instability (1DRT), Rayleigh-Taylor mixing in a tilted rig (TR), and turbulent jet mixing flow, are investigated.
\section{Mathematical formulations }
In this section, we review the original BHR model for variable-density flows \citep{Besnard1992} and the eigenspace perturbation framework \citep{Mishra2017}. The BHR model and the eigenspace perturbation method are implemented in OpenFOAM to simulate variable-density flows.
\subsection{BHR model for variable-density flow}
Consider the Reynolds decompositions $\rho = \overline{\rho} + \rho'$, $u_i = \overline{u_i} + u_i'$, and $p = \overline{p} + p'$ for density, velocity, and pressure, respectively, where the over-bar denotes a uniformly weighted ensemble average. For variable-density flows, however, a mass-weighted averaging called the Favre decomposition is employed
\begin{equation}
u_i=\widetilde{u_i} + u_i'', \label{2}
\end{equation}
where $\widetilde{u_i} = \overline{\rho u_i}/\overline{\rho}$ and $u_i''$ represents the mass-weighted fluctuation with $\overline{\rho u_i''} = 0$. Then, the mass-weighted average velocity $\widetilde{u_i}$ can be rewritten by applying the Reynolds decomposition
\begin{equation}
\tilde {u_i} = \frac{\overline{\rho u_i}} {\overline {\rho}} = \overline{u_i} + \frac{\overline{\rho' u_i'}} {\overline {\rho}}, \label{3}
\end{equation}
where $ \overline{\rho' u_i'}/ \overline {\rho}$ is the turbulent mass-flux velocity, which is usually denoted by $a_i$. Another important quantity in the BHR model equations is the density self-correlation, $b$, defined as
\begin{equation}
b = - \overline{ \rho ' \left(\frac{1} {\rho}\right)' }, \label{4}
\end{equation}
which can also be written as $b = \overline{\rho'^2/(\rho \overline{\rho})}$. The generalized Reynolds stress tensor, $R_{ij} = \overline{\rho u_i'' u_j''}$, is obtained by the Favre decomposition.
The resulting modeled governing equations are given by
\begin{equation}
\frac{\partial \overline{\rho}}{\partial t} + \frac{\partial \overline{\rho} \widetilde{u_j}}{\partial x_j} = 0, \label{5}
\end{equation}
\begin{equation}
\frac{\partial \overline{\rho} \widetilde{c}_{\alpha}}{\partial t} + \frac{\partial \overline{\rho} \widetilde{u_j} \widetilde{c}_{\alpha} }{\partial x_j} = \frac{\partial }{\partial x_j} \left( \frac{\mu_T}{\sigma_c} \frac{\partial \widetilde{c}_{\alpha}}{\partial x_j} \right), \label{6}
\end{equation}
\begin{equation}
\frac{\partial \overline{\rho} \widetilde{u_i} }{\partial t} + \frac{\partial \overline{\rho} \widetilde{u_j} \widetilde{u_i}}{\partial x_j} = -\frac{\partial (\overline{p} \delta_{ij} + R_{ij} )}{\partial x_j} + \overline{\rho} g_i, \label{7}
\end{equation}
\begin{equation}
\frac{\partial \overline{\rho} k }{\partial t} + \frac{\partial \overline{\rho} k \widetilde{u_j}}{\partial x_j} = a_j \frac{\partial \overline{p} }{\partial x_j} - R_{ij} \frac{\partial \widetilde{u_i}}{\partial x_j} + \frac{\partial }{\partial x_j} \left( \frac{\mu_T}{\sigma_k} \frac{\partial k}{\partial x_j} \right) - \overline{\rho} \frac{k^{3/2}}{S}, \label{9}
\end{equation}
\begin{equation}
\begin{split}
\frac{\partial \overline{\rho} S }{\partial t} + \frac{\partial \overline{\rho} S \widetilde{u_j}}{\partial x_j} = & \frac{S}{k} \left[ \left( \frac{3}{2} - C_4 \right) a_j \frac{\partial \overline{p} }{\partial x_j} - \left( \frac{3}{2} - C_1 \right) R_{ij} \frac{\partial \widetilde{u_i}}{\partial x_j} \right] \\
&+ \frac{\partial }{\partial x_j} \left( \frac{\mu_T}{\sigma_S} \frac{\partial S}{\partial x_j} \right) - \left( \frac{3}{2} - C_2 \right) \overline{\rho} k^{1/2}, \label{10}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\frac{\partial \overline{\rho} {a_i} }{\partial t} + \frac{\partial \overline{\rho} \widetilde{u_j} {a_i}}{\partial x_j} = & b \frac{\partial \overline{p} }{\partial x_i} - \overline{\rho} a_j \frac{\partial (\widetilde{u_i} - a_i)}{\partial x_j} - \frac{R_{ij}}{\overline{\rho}}\frac{\partial \overline{\rho}}{\partial x_j} \\
&+ \overline{\rho} \frac{\partial (a_i a_j)}{\partial x_j}+ \frac{\partial }{\partial x_j} \left( \frac{\mu_T}{\sigma_a} \frac{\partial a_i}{\partial x_j} \right) -
C_a \overline{\rho} a_i \frac{k^{1/2}}{S}, \label{11}
\end{split}
\end{equation}
\begin{equation}
b = \frac{\alpha_1 \alpha_2 \left(\rho_1 - \rho_2\right)^2}{\rho_1\rho_2}, \label{12}
\end{equation}
where $c_{\alpha}$ is the mass fraction, $\mu_T$ is the turbulent eddy viscosity modeled as $\mu_T = C_{\mu} \overline {\rho} S k^{1/2} $, $R_{ij} = 2/3\overline{\rho}k \delta_{ij} - 2 \mu_T S_{ij} $ is the generalized Reynolds stress with the rate of mean strain $ S_{ij} = 1/2 \left[ {\partial \widetilde{u_i}}/{\partial x_j}+{\partial \widetilde{u_j}}/{\partial x_i} - {2}/{3} {\partial \widetilde{u_k}}/{\partial x_k} \delta_{ij} \right]$, $k$ is the turbulent kinetic energy, $S$ is the turbulent mixing length, and $a_i$ and $b$ are the turbulent mass-flux velocity and the density self-correlation, respectively, as discussed above. The averaged density is $\overline{\rho} = \alpha_1 \rho_1 + \alpha_2 \rho_2$, with $ \alpha_1 + \alpha_2 = 1$. According to previous studies \citep{Schwarzkopf2011, Schwarzkopf2016}, the first two terms $G$ and $P$ on the right-hand side of the turbulent kinetic energy transport equation are the production terms, where $G = a_j {\partial \overline{p} }/{\partial x_j} = \bold{a} \cdot \bold{p}_g$, with $ \bold{a}$ the turbulent mass-flux velocity vector and $\bold{p}_g$ the mean pressure gradient, and $ P = -R_{ij} {\partial \widetilde{u_i}}/{\partial x_j} $.
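As an illustration, the eddy-viscosity closure just stated can be evaluated pointwise as in the following Python sketch; the model constant $C_\mu$ is left as an input because its calibrated value is not repeated here, and the function signature is purely illustrative.
\begin{verbatim}
import numpy as np

def boussinesq_reynolds_stress(grad_u, rho_bar, k, S_len, C_mu):
    # grad_u[i, j] = d(u_tilde_i)/d(x_j) at one grid point.
    mu_T = C_mu * rho_bar * S_len * np.sqrt(k)   # turbulent eddy viscosity
    S_ij = 0.5 * (grad_u + grad_u.T
                  - (2.0 / 3.0) * np.trace(grad_u) * np.eye(3))
    return (2.0 / 3.0) * rho_bar * k * np.eye(3) - 2.0 * mu_T * S_ij
\end{verbatim}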
\subsection{Eigenspace perturbations}
The methodology to perturb the eigenvalues and eigenvectors of the Reynolds stress tensor within the RANS modeling framework, without using any additional data or modeling assumptions, has been proposed \citep{Gorle2013} and developed for many engineering flows \citep{Emory2013, Gorle2015, Mishra2016, Mishra2017, Thompson2019}. The structural uncertainty framework works well for many flows with separations and enables us to account for the discrepancy between experimental observations and high-fidelity simulations. The basic methodology is described as follows. The turbulence anisotropy tensor is
\begin{equation}
a_{ij} = \frac{R_{ij}}{2 \overline{\rho}k} - \frac{1}{3} \delta_{ij}. \label{13}
\end{equation}
The condition of realizability and the Cauchy-Schwarz inequality require that $ -1/3 \le a_{ij} \le 2/3$ for $i = j$ and $ -1/2 \le a_{ij} \le 1/2$ for $i \ne j$. The anisotropy tensor can be represented by an eigenvalue decomposition
\begin{equation}
a_{ij} = v_{in} \Lambda_{nl} v_{lj}, \label{14}
\end{equation}
where $v$ denotes the eigenvector matrix and $\Lambda$ denotes the eigenvalues, and they are ordered by $\lambda_1 \ge \lambda_2 \ge \lambda_3 $ without any loss of generality. Therefore, the Reynolds stress can be expressed as
\begin{equation}
R_{ij} = 2 \overline{\rho} k \left( v_{in} \Lambda_{nl} v_{jl} + \frac{1}{3} \delta_{ij} \right). \label{15}
\end{equation}
Perturbing the eigenvalues and eigenvectors by injecting uncertainties, one obtains the perturbed Reynolds stress:
\begin{equation}
R_{ij}^* = 2 \overline{\rho} k \left( v_{in}^* \Lambda_{nl}^* v_{jl}^* + \frac{1}{3} \delta_{ij} \right). \label{16}
\end{equation}
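As a concrete illustration of the eigenspace perturbation, the sketch below (a minimal example; the limiting state and the relaxation factor are assumptions for illustration, not values used in this brief) moves the anisotropy eigenvalues toward a realizability corner while keeping the eigenvectors fixed, and then reassembles the perturbed Reynolds stress:
\begin{verbatim}
import numpy as np

def perturb_anisotropy(R, rho_bar, k, corner, delta):
    """Move the anisotropy eigenvalues a fraction delta toward a
    limiting state `corner', e.g. the one-component corner
    (2/3, -1/3, -1/3), keeping the eigenvectors fixed."""
    a = R / (2.0 * rho_bar * k) - np.eye(3) / 3.0   # anisotropy tensor
    lam, v = np.linalg.eigh(a)                      # ascending eigenvalues
    lam, v = lam[::-1], v[:, ::-1]                  # lambda_1 >= lambda_2 >= lambda_3
    lam_star = (1.0 - delta) * lam + delta * np.asarray(corner)
    a_star = v @ np.diag(lam_star) @ v.T            # perturbed anisotropy
    return 2.0 * rho_bar * k * (a_star + np.eye(3) / 3.0)

# Example: push halfway toward the one-component (1C) corner.
R = np.diag([0.40, 0.30, 0.30])     # hypothetical Reynolds stress
R_star = perturb_anisotropy(R, rho_bar=1.0, k=0.5,
                            corner=(2/3, -1/3, -1/3), delta=0.5)
\end{verbatim}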
\subsection{Structural uncertainty quantification in the BHR model}
The purpose of this investigation is to develop a framework to identify and quantify the uncertainties of the BHR model for variable-density flows. For the eigenspace perturbation of the Reynolds stress, the states that maximize and minimize the production mechanism are of particular importance. Similarly, we propose a perturbation strategy that estimates the maximal and minimal states of the production in order to quantify the structural uncertainty of the BHR model for variable-density flows. Note that the term $G$ in the production is the inner product of the turbulent mass-flux velocity vector and the mean pressure gradient; aligning or anti-aligning the two vectors therefore yields its maximum and minimum. The maximal and minimal states of the term $P$ are captured by the eigenspace perturbation, as recalled above. Therefore, we can predict model structural uncertainty intervals quantified by the production mechanism.
By perturbing the orientation of $\bold{a}$, i.e., the angle between $\bold{a}$ and $\bold{p}_g$, the extreme states of $G$ are easily obtained as $G_0$ and $G_{\pi}$, corresponding to angles between the two vectors of $0$ and $\pi$, respectively. Perturbing the eigenvalues and eigenvectors in $P$ gives five extreme states. By combining them, one can quantify the extreme states of the production in the BHR model from the joint $G$ and $P$ perturbations, i.e., with 10 states.
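A minimal sketch of the two $G$ extremes (the vectors are illustrative assumptions):
\begin{verbatim}
import numpy as np

a_vec  = np.array([0.02, 0.01, 0.0])   # hypothetical mass-flux velocity
grad_p = np.array([50.0, 0.0, 0.0])    # hypothetical pressure gradient
G_0  =  np.linalg.norm(a_vec) * np.linalg.norm(grad_p)  # aligned, angle 0
G_pi = -G_0                                             # anti-aligned, angle pi
\end{verbatim}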
The BHR model, the eigenspace perturbation method, and the perturbation of $G$ are implemented in OpenFOAM to simulate variable-density flows. In this brief, we investigate three variable-density flows: 1DRT, two-dimensional Rayleigh-Taylor-driven mixing in a TR (2DTR), and turbulent jet mixing.
\section{Results and discussions}
We use the structural uncertainty quantification method by maximizing and minimizing the production mechanism in the BHR model for variable-density flows. The eigenspace perturbations in $P$ are performed to study the partial structural uncertainty. The uncertainty intervals caused by $G$ are compared with the eigenspace perturbations in $P$. Finally, the structural uncertainties of the BHR model are quantified by the combination of the perturbations of $G$ and $P$. The numerical results show that the 10 extreme cases enable us to account for the discrepancy between experimental observations and high-fidelity simulations.
\subsection{1DRT}
We first consider the 1DRT case at a low Atwood number, $At = 0.04$, for which air-helium gas channel experimental measurements \citep{Banerjee2010a} and numerical simulations \citep{Banerjee2010b} are available. Furthermore, the turbulent mass-flux velocity and the pressure gradient have the same direction due to the 1D setup. Therefore, $G$ is certain, and no perturbation of it is needed. The production perturbation thus reduces to the eigenspace perturbation \citep{Emory2013, Iaccarino2017} of the term $P$, with five extreme cases.
In the experiment by \citet{Banerjee2010a}, two gas streams, one containing air and the other containing a helium-air mixture, flow in parallel with each other, separated by a thin splitter plate. The two streams meet at the end of the splitter plate, leading to the formation of an unstable interface and to buoyancy-driven mixing. The buoyancy-driven mixing experiment allows for long data collection times and is statistically steady. The experimental setup allows 1D mixing simulations at the streamwise section. The two fluids have constant densities, $\rho_1 =$ 1.0833 g/cm$^3$ and $\rho_2 =$ 1.0 g/cm$^3$, to match the experiment. The buoyancy force, which represents the effect of the density difference on the flow mixture, is relatively weak at this low Atwood number. The numerical results obtained with three different grid sizes are shown in Figure \ref{fig1} for normalized density, turbulent kinetic energy, and turbulent mass-flux velocity. The density distribution agrees very well with the experimental data \citep{Banerjee2010a}, and both the turbulent kinetic energy and mass-flux velocity show reasonable agreement. Furthermore, good grid convergence is obtained, since the curves lie on top of one another. Therefore, the following simulations with perturbations are performed using the 600-cell grid.
Eigenspace perturbation of the Reynolds stress is performed, and the five extreme cases \citep{Iaccarino2017} are simulated. The uncertainty bounds for the density, turbulent kinetic energy, and turbulent mass-flux velocity are presented in Figure \ref{fig2}. As shown in Figure \ref{fig2}(a), the density prediction agrees very well with the experimental data, and the uncertainty interval covers the measurements. The uncertainty bounds of the turbulent kinetic energy and turbulent mass-flux velocity account for the discrepancy between the model prediction and the experimental data only near the center of the mixing layer. The five extreme eigenspace perturbation cases, even for this very simplified 1D case, thus appear insufficient to capture all of the model uncertainty relative to the complex three-dimensional experimental measurements.
\begin{figure}
\begin{center}
\includegraphics[width=1\textwidth]{RTV.png}
\caption{Simulations of 1DRT by the BHR model compared with the experimental measurements by \citet{Banerjee2010a} for (a) normalized density, (b) normalized turbulent kinetic energy, and (c) normalized turbulent mass-flux velocity.}
\label{fig1}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1\textwidth]{RTuq.png}
\caption{Model uncertainty estimation quantified by the eigenspace perturbation in $P$ of 1DRT for (a) normalized density, (b) normalized turbulent kinetic energy, and (c) normalized turbulent mass-flux velocity.}
\label{fig2}
\end{center}
\end{figure}
\subsection{2DTR}
The TR problem is known from a series of experiments \citep{Smeeton1987, Youngs1989} performed at the Atomic Weapons Establishment (AWE) in the late 1980s. The light fluid is filled above the heavy one in a tank, and the apparatus is tilted to obtain an interface angle $\beta$. Details of the configuration are presented by \cite{Andrews2014}. In this brief, we simulate experimental case 110 described by \cite{Smeeton1987} and \cite{Andrews2014}, which has an Atwood number of 0.48. The heavy fluid is a NaI solution with density $\rho_1 =$ 1.89 g/cm$^3$, and the light fluid is hexane with density $\rho_2 =$ 0.66 g/cm$^3$. The interface angle is $\beta = 5.7667$ deg in the rectangular rig with dimensions $L_x = 15$ cm and $L_y = 24$ cm. The tank acceleration for case 110 is time varying, and there are two ways to represent it. We can incorporate the time-varying acceleration directly into the simulation, which is inconvenient. Alternatively, we can use a suitable constant acceleration to nondimensionalize the time in order to compare with the experimental results. According to \cite{Andrews2014}, a constant acceleration of $35 g$ agrees well with case 110.
2D simulations of the buoyancy-driven flow are performed with three grid sizes, $160\times 100$, $320 \times 200$, and $640 \times 400$, and the results are presented in Figure \ref{fig3} in comparison with the experimental data \citep{Smeeton1987} and numerical results \citep{Andrews2014}. The numerical results at all three resolutions agree very well with the experiments and simulations. Very good grid convergence is obtained, since the difference between the $320 \times 200$ and $640 \times 400$ resolutions is negligible. Therefore, the $320 \times 200$ grid is employed in the following simulations.
We first perform the eigenspace perturbation of the Reynolds stress. The uncertainty bounds for the spike and bubble heights and the mixing width are presented in Figure \ref{fig4}. The uncertainty intervals for all the quantities are negligible, which indicates that the term $P$ in the turbulent kinetic energy transport equation must be very small for this buoyancy-driven flow. Therefore, the alignment or misalignment between the eigenvectors of the Reynolds stress and the mean rate of strain is insufficient to affect the variable-density flow.
In the 2D setup, the angle between the vectors $\bold{a}$ and $\bold{p}_g$ can vary from 0 to $\pi$. Since the flow is driven by buoyancy, the turbulent mass-flux velocity is most likely to follow the density gradient, or pressure gradient, so an angle between $\bold{a}$ and $\bold{p}_g$ varying from 0 to $\pi/2$ is more reasonable. Therefore, the term $G = \bold{a} \cdot \bold{p}_g$ has the range $0 \le G \le |\bold{a}|\,|\bold{p}_g|$, where $|\cdot|$ denotes the magnitude of a vector. Consequently, the two extreme cases of $G$ for buoyancy-driven flow are simulated, and the uncertainty estimation is shown in Figure \ref{fig6} in comparison to experimental data and DNS results. The uncertainty interval is significantly larger for the $G$ perturbation than for the $P$ eigenspace perturbation, and it enables us to account for the discrepancy between experimental observations and DNS results.
The ratio between the volume integrals of $G$ and $P$, $ \langle G \rangle/\langle P\rangle$ where $\langle \psi \rangle = \int_{V} \psi \, dV$, is shown in Figure \ref{fig5}. The term $G$ is clearly much larger than $P$, which explains the negligible uncertainty estimates from the eigenspace perturbation of the Reynolds stress shown in Figure \ref{fig4}.
Finally, the BHR model uncertainty is quantified by perturbing the two terms $G$ and $P$ together in the production. The uncertainty interval, shown in Figure \ref{fig6p}, is captured by the 10 extreme states.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{TRV.png}
\caption{ Simulations of 2DTR using the BHR model compared with experimental data \citep{Smeeton1987} and numerical results \citep{Andrews2014} for (a) spike height, (b) bubble height, and (c) the mixing width.}
\label{fig3}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{TRPuq.png}
\caption{Uncertainty estimation quantified by the eigenspace perturbation in $P$ of 2DTR for (a) spike height, (b) bubble height, and (c) the mixing width.}
\label{fig4}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{TRGuq.png}
\caption{Estimation of the model uncertainty considering the $G$ term of 2DTR for (a) spike height, (b) bubble height, and (c) the mixing width.}
\label{fig6}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.6\textwidth]{GandP.png}
\caption{The variation of the ratio between the volume integral of $G$ and $P$ with time.}
\label{fig5}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{TRproductionuq.png}
\caption{The structural uncertainty quantified by the production perturbation of 2DTR for (a) spike height, (b) bubble height, and (c) the mixing width.}
\label{fig6p}
\end{center}
\end{figure}
\subsection{Turbulent jet mixing}
In the above simulations, the flows are driven by buoyancy due to the variable density. Here, we consider the effect of density gradients on the transport of fluid and the evolution of the turbulent characteristics of the turbulent jet mixing flow. The turbulent jet flow has been studied experimentally by \citet{Charonko2017} using particle image velocimetry (PIV) and planar laser-induced fluorescence (PLIF) measurements. The jet flow is supplied using a $d_0 = 11$ mm inner-diameter tube vertically oriented in an open-circuit wind tunnel with a $0.53 \times 0.53$ m cross section. The acetone and gas mixture in the jet has density $\rho_1 = 1.1 $ g/cm$^3$, while the density of ambient air is $\rho_2 = 0.92 $ g/cm$^3$, which results in Atwood number $At = 0.09$. The inlet velocity of the jet is $27.7$ m/s, while the coflowing ambient air has velocity $1.4$ m/s. To simulate the flow, a jet with diameter $d_0 = 11$ mm in a circular channel with radius $r = 25 $ cm is chosen for the computational domain, and the inlet turbulent kinetic energy of the jet is set to $k = 20.0$ m$^2$/s$^2$. The calculated velocity and mass fraction at different mesh sizes are compared with experimental data in Figure \ref{fig7}. The numerical results show reasonable agreement with the experimental data, and the mesh size $150 \times 51$ is chosen for the following simulations.
Unlike the TR case, the value of the volume-averaged $P$ is about 24 times that of $G$ for the turbulent jet mixing flow, which means that $P$ is dominant. The uncertainty interval of the eigenspace perturbation in $P$ is shown in Figure \ref{fig8}. The eigenspace perturbation bounds account for the discrepancy between model predictions and experimental data except near the jet inlet, as expected since $P$ is dominant. In contrast, the uncertainty estimate from the $G$ perturbation is negligible, as shown in Figure \ref{fig8p}.
The structural uncertainty quantified by maximizing and minimizing the production in the BHR model is shown in Figure \ref{fig9}, obtained by simulating the 10 extreme cases. By combining the eigenspace perturbation in $P$ with the perturbation in $G$, the production perturbation quantifies the structural uncertainty.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{jetV.png}
\caption{Simulations of the turbulent jet mixing flow using the BHR model by comparison with experimental data \citep {Charonko2017} for (a) velocity and (b) mass fraction at the centerline.}
\label{fig7}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{jetPuq.png}
\caption{Uncertainty estimation of the model by eigenspace perturbation in $P$ for turbulent jet mixing flow of (a) velocity and (b) mass fraction at the centerline.}
\label{fig8}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{jetGuq.png}
\caption{Estimation of the uncertainty of the model by perturbing the $G$ term for turbulent jet mixing flow of (a) velocity and (b) mass fraction at the centerline.}
\label{fig8p}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{Jetproduction.png}
\caption{Estimation of the model uncertainty by production perturbation for turbulent jet mixing flow of (a) velocity and (b) mass fraction at the centerline.}
\label{fig9}
\end{center}
\end{figure}
\section{Conclusions}
Variable-density flows play an important role in a variety of physical phenomena and engineering applications. Simulations such as DNS and LES are computationally expensive, especially for complex applications, so computationally tractable models for variable-density flows are desirable in engineering practice, and quantifying the structural uncertainty of such models is very important. In this brief, we have developed a comprehensive framework to identify, quantify, and reduce the uncertainties of the BHR model \citep{Besnard1992}, which has been extensively investigated for variable-density flows. In the BHR model, the production has two component terms: the product of the Reynolds stress and the mean rate of strain, $P$, and the inner product of the turbulent mass-flux velocity and the pressure gradient, $G$. By isolating the two terms, the effects of the eigenspace perturbation of $P$ and of the $G$ perturbation are examined. According to our numerical results for 1DRT, Rayleigh-Taylor mixing in a TR, and turbulent jet mixing flow, considering only one term in the production is not sufficient to capture the model uncertainty. The eigenspace perturbation in $P$ is not sufficient to account for the model uncertainty in $G$-dominant buoyancy-driven flows, while the $G$ perturbation is not sufficient in $P$-dominant variable-density flows. Nevertheless, determining the production perturbation by considering the $P$ and $G$ perturbations together enables us to quantify the structural uncertainty in the BHR model for variable-density flows. The proposed framework should be examined in more variable-density flows.
\section*{Acknowledgments}
This work is funded by a grant from Los Alamos National Laboratory.
\bibliographystyle{ctr}
\newcommand{\Paragraph}[1]{\paragraph*{#1.}}
\usepackage{amsmath}
\usepackage{mathpartir}
\usepackage{multirow}
\usepackage{amssymb}
\begin{document}
\title{\ourlang: A DSL for Verified \\Secure Multi-party Computations}
\iflong
\author{Aseem Rastogi\inst{1} \and
Nikhil Swamy\inst{1} \and
Michael Hicks\inst{2}}
\authorrunning{Rastogi et al.}
\institute{Microsoft Research\\
\email{\{aseemr,nswamy\}@microsoft.com} \and
University of Maryland\\
\email{[email protected]}}
\else
\author{}
\institute{}
\fi
\maketitle
\begin{abstract}
Secure multi-party computation (\mc) enables a set of mutually
distrusting parties to cooperatively compute, using a cryptographic
protocol, a function over their private data.
This paper presents \ourlang, a new domain-specific language (DSL)
for
writing \emph{mixed-mode} \mc{s}. \ourlang is an embedded DSL hosted in \fstar, a
verification-oriented, effectful programming language.
\ourlang source programs are
essentially \fstar{} programs written in a custom \mc effect, meaning
that the programmers can use \fstar{}'s logic to verify the correctness
and security properties of their programs. To reason about the
distributed runtime semantics of these programs, we formalize a deep
embedding of \ourlang, also in \fstar{}. We mechanize the necessary
metatheory to prove that the properties verified for the \ourlang
source programs carry over to the distributed, multi-party
semantics. Finally, we use \fstar{}'s extraction to
extract an interpreter that we have proved matches this semantics,
yielding a partially verified implementation. \ourlang is the first
DSL to enable formal verification of \mc programs.
We have implemented several \mc protocols in \ourlang, including
private set intersection, joint median, and an \mc-based card dealing
application, and have verified their correctness and security.
\end{abstract}
\section{Introduction}
\label{sec:intro}
Secure multi-party computation (\mc) enables two
or more parties to compute a function $f$ over their private inputs
$x_i$ so that parties don't see each others' inputs, but
rather only see the output $f(x_1,...,x_n)$. Using a trusted
third party to compute $f$ would achieve this goal, but in fact we can
achieve it using one of a variety of cryptographic protocols carried
out only among the
participants~\cite{shamir1980mental, Yao, GMW, BMR}.
One example use of \mc is private set intersection (PSI): the
$x_i$ could be individuals' personal interests, and the function $f$
computes their intersection, revealing which interests the group has
in common, but not any interests that they don't. \mc has also been used for
auctions~\cite{Bogetoft2009}, detecting tax fraud~\cite{taxfraudsmc},
managing supply chains~\cite{supplychainsmc}, privacy
preserving statistical analysis~\cite{UT:Kamm15}, and more recently
for machine learning tasks~\cite{ezpc,Buscher:2018:HCH:3243734.3243786,gazelle,secureml,Liu:2017:ONN:3133956.3134056}.
Typically, cryptographic protocols expect $f$ to be specified as a
boolean or arithmetic circuit. Programming directly with circuits and
cryptography is painful,
so starting with the Fairplay project~\cite{fairplay} many researchers
have designed higher-level domain-specific languages (DSLs) in which
to program
\mc{s}~\cite{Huang11,viff,Malka2011,fairplaymp,Holzer12,Nielsen07,Nielsen09,sharemind,Schropfer2011,wysteria,Liu2014,Laud:2015:DLL:2810103.2813664,Crockett:2018:ALC:3243734.3243828,Araki:2018:GSC:3243734.3243854,Buscher:2018:HCH:3243734.3243786,frigate}.
These DSLs compile source code to
circuits which are then given to the underlying protocol. While doing this
undoubtedly makes it easier to program \mc{s}, these languages still
have several drawbacks regarding both security and usability.
This paper presents \ourlang, a new \mc DSL that addresses several
problems in prior DSLs. Unlike most previous \mc DSLs, \ourlang is not a
standalone language, but is rather an embedded DSL hosted in
\fstar~\cite{fstar2016}, a full-featured, verification-oriented, effectful
programming language. \ourlang has the following two distinguishing
elements:
\Paragraph{1. A program logic for \mc} (\S\ref{sec:overview} and \S\ref{sec:formal}.)
In their most general form, \mc applications are \emph{mixed-mode}:
they consist of parties
performing (potentially different)
local, in-clear computations (e.g. I/O, preprocessing
inputs) interleaved with joint, secure computations. \ourlang is the
first MPC DSL to provide a program logic to formally reason about the
\emph{correctness and security} of such applications,
e.g., to prove that the outputs will not reveal too much information
about a party's inputs~\cite{mardziel13belief}.\footnote{Our attacker
model is the ``honest-but-curious'' model where the attackers are
the participants themselves,
who play their
roles in the protocol faithfully, but are motivated to deduce as much as they
can about the other participants' secrets by observing the
protocol. \S\ref{sec:wysts} makes the security model of \ourlang more precise.}
To avoid reasoning about separate programs for each
party, \ourlang builds on the basic programming model of the
Wysteria \mc DSL~\cite{wysteria} that allows applications to be
written as a single specification. \ourlang presents a
\emph{shallow embedding} of the Wysteria programming model in \fstar.
When writing \ourlang source programs, programmers essentially write
\fstar programs in a new \ls{Wys} effect,
against a library of \mc combinators. The pre- and
postcondition specifications on the combinators encode a program logic
for MPC. The logic provides
\emph{observable traces}---a novel addition to the Wysteria
semantics---which programmers can use to
specify security properties such as delimited release~\cite{sm03delimited}.
Since \ourlang programs are \fstar programs, \fstar computes
verification conditions (VCs) for
them which are discharged using Z3~\cite{z3} as usual.
We prove the soundness of the program logic---that the properties proven about
the \ourlang source programs carry over when these programs are run by
multiple parties in a distributed manner---also in \fstar. The proof connects
the pre- and postconditions of the \ourlang combinators to their distributed
semantics in two steps.
First, we implement the combinators in \fstar, proving the
validity of their pre- and postconditions against their implementation.
Next, we reason about this implementation and the distributed runtime
semantics through a deep embedding of \ourlang in \fstar{}.
Essentially, we deep-embed the \ourlang combinator abstract syntax trees (ASTs)
as an \fstar
datatype and formalize two operational semantics for them:
a conceptual single-threaded semantics that models their \fstar implementation,
and the actual distributed
semantics that models the multi-party runs of the
programs. We prove, in \fstar, that the single-threaded
semantics is sound with respect to the distributed semantics (\S\ref{sec:formal}).
While we use \fstar, the program logic is general and
it should be possible to embed it in other verification frameworks
(e.g., in Coq, in the style of Hoare Type Theory~\cite{ynot-icfp08}).
\paragraph*{2. A full-featured, partially verified implementation}
(\S\ref{sec:formal}.)
\ourlang's implementation is, in part, formally verified. The hope is
that formal verification will reduce the occurrence of security
threatening bugs, as it has in prior
work~\cite{csmith,Leroy2009Formal,BhargavanFKPS13,polarssl,export:122884}.
We define an interpreter in \fstar that operates over the \ourlang
ASTs produced by a custom \fstar
extraction for the \ls{Wys} effect.
While the local computations are executed locally by the interpreter,
the interpreter compiles secure-computation ASTs
to circuits, on the fly, and executes them using the Goldreich, Micali and
Wigderson (GMW) multi-party computation protocol~\citep{GMW}. The
\ourlang AST (and hence the interpreter) does not ``bake in'' standard
\fstar{} constructs like numbers and lists. Rather, inherited language
features appear abstractly in the AST, and their semantics is handled
by a foreign function interface (FFI). This permits \ourlang programs
to take advantage of existing code and libraries available in \fstar{}.
To prove the interpreter behaves correctly, we prove, in \fstar,
that it correctly implements the formalized distributed semantics.
The circuit library and the GMW implementation
are not verified---while it is possible to verify the circuit
library~\cite{Almeida:2017:FVS:3133956.3134017}, verifying a GMW
implementation is an open research question.
But the stage is set for verified versions to be plugged into
the \ourlang codebase. We characterize the Trusted Computing Base
(TCB) of the \ourlang toolchain in \S\ref{sec:impl}.
Using \ourlang we have
implemented several programs, including PSI,
joint median, and a card dealing application
(\S\ref{sec:apps}). For PSI and joint median we
implement two versions: a straightforward one and an optimized one that
improves performance but increases
the number of adversary-observable events. We formally
prove that the optimized and unoptimized
versions are equivalent,
both functionally and w.r.t. privacy of parties' inputs.
Our card dealing application relies on \ourlang's support
for secret shares~\citep{Shamir79}. We formally prove that the card
dealing algorithm always deals a fresh card.
In sum, \ourlang constitutes the first DSL that supports proving
security and correctness properties about \mc programs, which are
executed by a partially verified implementation of a full-featured
language. No prior DSL provides these benefits
(\S\ref{sec:related}).
The \ourlang implementation, example programs, and
proofs are publicly available on
\iflong
Github at \url{https://github.com/FStarLang/FStar/tree/stratified_last/examples/wysteria}.\footnote{This development was
done on an older \fstar version, but the core ideas of what we
present here apply to the present version as well.}
\else
Github.\footnote{This development was
done on an older \fstar version, but the core ideas of what we
present here should apply to the present version as well.}
\fi
\section{Verifying and deploying \ourlang programs}
\label{sec:overview}
We illustrate the main concepts of \ourlang by showing, in
several stages, how to program, optimize, and verify the two-party
joint median example~\cite{Kerschbaum11,rastogi13knowledge}.
In this example, two parties, Alice and Bob, each have a set of $n$ distinct,
locally sorted integers, and they want to compute the median of the union of
their sets without revealing
anything else; our running example fixes $n = 2$, for simplicity.
\subsection{Secure computations with \ls$as_sec$}
\label{sec:assec}
In \ourlang, as in its predecessor Wysteria~\cite{wysteria},
an \mc is written as a single specification that
executes in one of two \emph{computation modes}. The primary mode is called \ls$sec$
mode. In it, a computation is carried out using a \mc protocol
among multiple principals on separate hosts. Here is joint median in \ourlang:
\begin{lstlisting}[xleftmargin=1em, numbers=left, frame=single]
let median a b in_a in_b =
as_sec {a, b} (fun () -> let cmp = fst (reveal in_a) > fst (reveal in_b) in $\label{line:acomp}$
let x3 = if cmp then fst (reveal in_a) else snd (reveal in_a) in $\label{line:acull}$
let y3 = if cmp then snd (reveal in_b) else fst (reveal in_b) in $\label{line:bcull}$
if x3 > y3 then y3 else x3)
\end{lstlisting}
The four arguments to \ls$median$ are, respectively, principal
identifiers for Alice and Bob, and Alice and Bob's secret inputs expressed as
tuples.
In \ourlang, values specific to each
principal are
\emph{sealed} with the principal's name (which appears in the sealed
container's type). As such, the types of \ls$in_a$ and \ls$in_b$
are, respectively, \ls$sealed {a} (int * int)$ and
\ls$sealed {b} (int * int)$.
The \ls$as_sec ps f$
construct indicates that thunk \ls$f$
should be run in \ls$sec$ mode among principals in the set \ls{ps}.
In this mode, the code has access to the secrets of the principals
\ls$ps$,
which it can reveal using the \ls{reveal} coercion.
As we will see later, the type of \ls{reveal} ensures that
parties cannot \ls{reveal} each others' inputs outside \ls{sec}
mode.\footnote{The runtime
representation of \ls{sealed {a} v} at \ls{b}'s host is an opaque constant
$\bullet$ (\S\ref{sec:deployoverview}).} Also note that the
code freely uses standard \fstar library functions like \ls{fst} and \ls{snd}.
The example extends naturally
to $n > 2$~\cite{10.1007/978-3-540-24676-3_3}.
To run this program, both Alice and Bob would start a \ourlang
interpreter at their host and direct it to run the \ls$median$ function.
Upon reaching the
\ls$as_sec$ thunk, the interpreters coordinate with each other to compute the
result using the underlying \mc protocol.
\S\ref{sec:deployoverview} provides more details.
\subsection{Optimizing \ls{median} with \ls$as_par$}
Although \ls$median$ gets the job done, it can
be inefficient for large $n$.
However, it turns out if we reveal the result
of comparison on line~\ref{line:acomp} to both the parties,
then the computation on line~\ref{line:acull} (resp. line~\ref{line:bcull})
can be performed locally
by Alice (resp. Bob) without need of cryptography. Doing so can
massively improve performance:
previous work~\cite{Kerschbaum11} has observed a
$30\times$ speedup for $n = 64$.
This optimized variant is a \emph{mixed-mode} computation, where participants
perform some local computations interleaved with small, jointly
evaluated secure computations. \ourlang's second
computation mode, \ls$par$ mode, supports such mixed-mode computations. The
construct \ls$as_par ps f$ states that each principal in \ls$ps$
should locally execute the thunk \ls$f$, simultaneously; any
principal not in the set \ls$ps$ simply skips the computation. Within
\ls$f$, while running in \ls$par$ mode, principals may engage in
secure computations via \ls$as_sec$.
Here
is an optimized version of \ls{median} using \ls{as_par}:
\begin{lstlisting}[xleftmargin=1em, numbers=left, frame=single]
let median_opt a b in_a in_b =
let cmp = as_sec {a, b} (fun () -> fst (reveal in_a) > fst (reveal in_b)) in$\label{line:optacomp}$
let x3 = as_par {a} (fun () -> if cmp then fst (reveal in_a) else snd (reveal (in_a))) in $\label{line:optacull}$
let y3 = as_par {b} (fun () -> if cmp then snd (reveal in_b) else fst (reveal (in_b))) in $\label{line:optbcull}$
as_sec {a, b} (fun () -> if reveal x3 > reveal y3 then reveal y3 else reveal x3) $\label{line:finalm}$
\end{lstlisting}
The secure computation (line~\ref{line:optacomp}) \emph{only}
computes \ls{cmp} and returns the result to both the parties. Line~\ref{line:optacull}
is then a \ls{par} mode computation involving only Alice
in which she discards one of her inputs based on \ls{cmp}.
Similarly, on line~\ref{line:optbcull}, Bob discards
one of his inputs. Finally, line~\ref{line:finalm} compares the remaining
inputs using \ls{as_sec} and returns the result as the final median.
One might wonder whether \ls$par$ mode is necessary. Could we
program the local parts of a mixed-mode program in normal \fstar, and
use a special compiler to convert the \ls$sec$ mode parts to circuits
and pass them to a GMW \mc service? We could, but it would complicate
both writing \mc{s} and formally reasoning that the whole computation is
correct and secure.
In particular, programmers would need to write one program for each
party that performs a different local computation (as in
\ls{median_opt}). The potential interleaving among local computations
and their synchronization behavior when securely computing together
would be a source of possible error and thus must be considered in any
proof. For example, Alice's code might have a bug in it that prevents
it from reaching a synchronization point with Bob, to do a GMW-based
\mc. For \ourlang, the situation is much simpler. Programmers may
write and maintain a single program. This program can be formally
reasoned about directly using a SIMD-style, ``single-threaded''
semantics, per the soundness result from \S\ref{sec:metatheory}. This
semantics permits reasoning about the coordinated behavior of multiple
principals, without worry about the effects of interleavings or wrong
synchronizations. Thanks to \ls{par} mode, invariants about
coordinated local computations are directly evident since we can
soundly assume lockstep behavior (e.g., loop iterations in the
PSI example in \S\ref{sec:apps}).
\subsection{Embedding a type system for \ourlang in \fstar}
\label{sec:wysts}
Designing high-level, multi-party computations is relatively
easy using Wysteria's abstractions. Before trying to run such a
computation, we might wonder:
\begin{enumerate}
\item Is it \emph{realizable}? For example, does a computation
that is claimed to be executed only by some principals \ls$ps$ (e.g.,
using an \ls$as_par ps$ or an \ls$as_sec ps$) only ever access data
belonging to \ls$ps$?
\item Is it \emph{correct?} For example, does \ls$median_opt$
correctly compute the median of Alice and Bob's inputs?
\item Is it \emph{secure}? For example, do the optimizations in
\ls$median_opt$, which produce more visible outputs, potentially
leak more about the inputs?
\end{enumerate}
By embedding \ourlang in \fstar and leveraging its extensible,
monadic, dependent type-and-effect system, we address each of these
three questions.
We define a new indexed monad called \ls$Wys$ for computations that
use \mc combinators \ls$as_sec$ and \ls$as_par$. Using \ls$Wys$
along with the \ls$sealed$ type, we can ensure that protocols are
realizable. Using \fstar's capabilities for formal verification, we
can reason about a computation's correctness. By characterizing
observable events as part of \ls$Wys$, we can define trace
properties of \mc programs, to reason about security.
To elaborate on the last: we are interested in \emph{application-level} security
properties, assuming that the underlying
cryptographic \mc protocol (GMW~\cite{GMW} in our implementation) is secure.
In particular,
the \ls{Wys} monad models the \emph{ideal} behavior of \ls{sec} mode---a
secure computation reveals only the final output and nothing
else. Thus the programmer could reason, for example, that optimized
\mc programs reveal no more than their unoptimized versions.
To relate the proofs over ideal functionality to the actual
implementation, as is standard, we rely on the security of the
cryptographic protocol and the composition theorem~\cite{Canetti:2000:SCM:2724987.2725177}
to postulate that the implementation securely realizes the ideal
specification.
\Paragraph{The \ls$Wys$ monad} The \ls$Wys$ monad provides several features. First, all DSL code is
typed in this monad, encapsulating it from the rest of \fstar. Within
the monad, computations and their specifications can make use of two
kinds of \emph{ghost state}: \emph{modes} and \emph{traces}.
The mode of a computation indicates whether the computation is running
in an \ls$as_par$ or in an \ls$as_sec$ context.
The trace of a computation records the sequence and nesting structure
of outputs of the jointly executed \ls$as_sec$ expressions---the result of a
computation and its trace constitute its observable behavior.
The \ls$Wys$ monad is, in essence, the product of a reader monad on
modes and a writer monad on traces~\cite{Wadler:1995:MFP:647698.734146,Moggi:1991:NCM:116981.116984}.
Formally, we define the following \fstar types for modes and traces. A
mode \ls$Mode m ps$ is a pair of a mode tag (either \ls$Par$
or \ls$Sec$) and a set of principals \ls$ps$. A \ls$trace$ is a forest
of trace element (\ls$telt$) trees. The leaves of the trees record
messages \ls$TMsg x$ that are received as the result of executing
an \ls$as_sec$ thunk. The tree structure represented by the
\ls$TScope ps t$ nodes record the set of principals that are able to
observe the messages in the trace \ls$t$.
\begin{lstlisting}
type mtag = Par | Sec
type mode = Mode: m:mtag -> ps:prins -> mode
type telt = | TMsg : x:$\alpha$ -> telt | TScope: ps:prins -> t:list telt -> telt
type trace = list telt
\end{lstlisting}
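To make the shape of traces concrete, here is a hypothetical value (assuming some principal \ls{a} in scope; the payloads are arbitrary): the output of one joint \ls{as_sec} computation followed by an event observable only by \ls{a}.
\begin{lstlisting}
(* illustrative only: a joint output 7, then an event visible only to a *)
let example_trace : trace = [TMsg 7; TScope {a} [TMsg true]]
\end{lstlisting}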
Every \ourlang computation $e$ has a monadic computation
type \ls$Wys t pre post$.
The type indicates that $e$ is in the \ls$Wys$ monad (so it may
perform multi-party computations);
\ls$t$ is its result type;
\ls$pre$ is a pre-condition on the mode in which \ls$e$ may be executed;
and \ls$post$ is a post-condition relating the computation's mode, its
result value, and its trace of observable events.
When run in a context with mode \ls$m$ satisfying the pre-condition
predicate \ls$pre m$, $e$ may produce the trace \ls$tr$,
and if and when it returns, the result is
a \ls$t$-typed value \ls$v$ validating \ls@post m v tr@.
The style of indexing a monad with a computation's pre- and
post-condition is a standard
technique~\citep{atkey09parameterised,nmb08htt,fstar2016}---we defer
the definition of the monad's \ls$bind$ and \ls$return$ to the
actual implementation and focus instead on specifications of \ourlang
specific combinators.
We describe
\ls$as_sec$, \ls$reveal$, and \ls$as_par$, and how we give them types in
\fstar, leaving the rest to
Figure~\ref{fig:wysstar-api} in the Appendix.
By convention, any free variables in the type signatures are
universally prenex quantified.
\Paragraph{Defining \ls$as_sec$ in \ourlang}
~
\begin{lstlisting}[numbers=left]
val as_sec: ps:prins -> f:(unit -> Wys a pre post) -> Wys a
(requires (fun m -> m=Mode Par ps /\ pre (Mode Sec ps))) $\label{line:assec-requires}$
(ensures (fun m r tr -> tr=[TMsg r] /\ exists t. post (Mode Sec ps) r t)))$\label{line:assec-ensures}$
\end{lstlisting}
The type of \ls$as_sec$ is \emph{dependent} on the first
parameter, \ls$ps$. Its second argument \ls$f$ is the thunk to be
evaluated in \ls$sec$ mode. The result's computation type has the
form
\ls@Wys a (requires $\phi$) (ensures $\psi$)@, for some pre-condition
and post-condition predicates $\phi$ and $\psi$, respectively.
We use the
\ls$requires$ and \ls$ensures$ keywords for readability---they are not semantically significant.
The pre-condition of \ls$as_sec$ is a predicate on the mode \ls$m$ of
the computation in whose context
\ls$as_sec ps f$ is called.
For all the \ls$ps$ to jointly execute \ls$f$, we require all
of them to transition to perform the \ls$as_sec ps f$ call
simultaneously, i.e., the current mode must be
\ls$Mode Par ps$.
We also require the pre-condition \ls$pre$
of \ls$f$ to be valid once the mode has transitioned to
\ls$Mode Sec ps$---line~\ref{line:assec-requires} says just this.
The post-condition of \ls$as_sec$ is a predicate
relating the initial mode \ls$m$, the result \ls$r:a$, and the
trace \ls$tr$ of the computation.
Line~\ref{line:assec-ensures} states that the trace of a secure
computation \ls$as_sec ps f$ is just a singleton \ls$[TMsg r]$,
reflecting that its execution reveals only result
\ls$r$.
Additionally, it ensures that the result \ls$r$ is related to the mode
in which \ls$f$ is run (\ls$Mode Sec ps$) and some trace \ls$t$
according to \ls$post$, the post-condition of \ls$f$. The API models the
``ideal functionality'' of secure computation protocols (such as GMW)
where the participants only observe the final result.
\Paragraph{Defining \ls$reveal$ in \ourlang} As discussed earlier, a
value \ls$v$ of type \ls$sealed ps t$ encapsulates a \ls$t$ value that
can be accessed by calling \ls$reveal v$. This call should only
succeed under certain circumstances. For example, in \ls$par$ mode, Bob
should not be able to reveal a value of type \ls$sealed {Alice} int$.
The type of \ls$reveal$ makes the access control rules clear:
\begin{lstlisting}
val unseal: sealed ps $\alpha$ -> Ghost $\alpha$
val reveal: x:sealed ps $\alpha$ -> Wys $\alpha$
(requires (fun m -> (m.mode=Par ==> m.ps $\subseteq$ ps) /\ (m.mode=Sec ==> m.ps $\cap$ ps $\neq$ $\emptyset$)))
(ensures (fun m r tr -> r=unseal x /\ tr=[]))
\end{lstlisting}
The \ls{unseal} function is a \ls{Ghost} function, meaning that it can
only be used in specifications for reasoning purposes. On the other
hand, \ls{reveal} can be called in the concrete \ourlang programs.
Its precondition says that when executing in \ls$Mode Par ps'$,
\emph{all} current participants must be listed in the seal, i.e.,
\ls@ps' $\subseteq$ ps@.
However, when executing in \ls$Mode Sec ps'$, only a subset of
current participants is required:
\ls@ps' $\cap$ ps $\neq$ $\emptyset$@.
This is because the secure computation is executed jointly by all of
\ls$ps'$, so it can access any of their individual data.
The postcondition of \ls{reveal} relates the result \ls{r} to the
argument \ls{x} using the \ls{unseal} function.
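As a concrete illustration of these rules (a hypothetical snippet, not one of our examples), suppose \ls{in_a : sealed {a} int} and the current mode is \ls{Mode Par {a, b}}:
\begin{lstlisting}
let ok = as_sec {a, b} (fun () -> reveal in_a) (* Sec mode: {a,b} intersects {a} *)
let bad = as_par {b} (fun () -> reveal in_a) (* rejected: {b} is not a subset of {a} *)
\end{lstlisting}
The first computation typechecks because \ls{a} and \ls{b} jointly execute the secure block; the second is rejected because \ls{b} alone may not unseal \ls{a}'s data.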
\Paragraph{Defining \ls$as_par$ in \ourlang}
~
\begin{lstlisting}[numbers=left]
val as_par: ps:prins -> (unit -> Wys a pre post) -> Wys (sealed ps a)
(requires (fun m -> m.mode=Par /\ ps $\subseteq$ m.ps /\ can_seal ps a /\ pre (Mode Par ps)))
(ensures (fun m r tr -> exists t. tr=[TScope ps t] /\ post (Mode Par ps) (unseal r) t)))
\end{lstlisting}
The type of \ls{as_par} enforces the current mode to be \ls{Par},
and \ls{ps} to be a subset of current principals. Importantly,
the API scopes the trace \ls{t} of \ls{f} to model the fact that
any observables of \ls{f} are only visible to the
principals in \ls{ps}. Note that \ls{as_sec} did not require such scoping,
as there \ls{ps} and the set of current principals in \ls{m} are the same.
\subsection{Correctness and security verification}
\label{sec:verification}
Using the \ls$Wys$ monad and the \ls$sealed$ type, we can write down
precise types for our \ls$median$ and \ls$median_opt$ programs,
proving various useful properties. We discuss
the statements of the main lemmas and the overall proof structure.
By programming the protocols as a single specification using the high-level
abstractions provided by \ourlang, our proofs are relatively straightforward---in
all the proofs of this section, \fstar required no additional hints.
In particular, we rely heavily on the view that both parties execute
(different fragments of) the same code, thus avoiding the unwieldy
task of reasoning about low-level message passing.
\Paragraph{Correctness and security of \ls{median}} We first define a pure specification of median of
two \ls{int} tuples:
\begin{lstlisting}
let median_of (x1, x2) (y1, y2) = let (_, m, _, _) = sort x1 x2 y1 y2 in m
\end{lstlisting}
Further, we capture the preconditions using the following predicate:
\begin{lstlisting}
let median_pre (x1, x2) (y1, y2) = x1 < x2 $\wedge$ y1 < y2 $\wedge$ distinct x1 x2 y1 y2
\end{lstlisting}
Using these, we prove the following top-level specification for \ls{median}:
\begin{lstlisting}
val median: in_a:sealed {a} (int * int) -> in_b:sealed {b} (int * int) -> Wys int
(requires (fun m -> m = Mode Par {a, b})) (* should be called in the Par mode *)
(ensures (fun m r tr -> let in_a, in_b = unseal in_a, unseal in_b in
(median_pre in_a in_b ==> r = median_of in_a in_b) /\ (* functional correctness *)
tr = [TMsg r])) (* trace is just the final value *)
\end{lstlisting}
This signature establishes that when Alice and Bob
simultaneously execute \ls$median$ (in \ls$Par$
mode), with secrets \ls$in_a$ and \ls$in_b$, then if and when the
protocol terminates,
(a) if their inputs satisfy the
precondition \ls{median_pre}, then the result is the joint median of
their inputs and (b) the observable trace consists
only of the final result, as there is but a single \ls{as_sec} thunk
in \ls{median}, i.e., it is \emph{secure}.
\Paragraph{Correctness and security of \ls{median_opt}}
The security proof of \ls{median_opt} is particularly interesting, because
the program intentionally reveals more than just the final result,
i.e., the output of the first comparison.
We would like to verify that this additional information does not
compromise the privacy of the parties' inputs.
To do this, we take the following approach.
First, we characterize the observable trace of \ls{median_opt} as a pure,
specification-only function. Then, using relational reasoning, we prove
a \emph{noninteference with delimited release} property~\citep{sm03delimited}
on these traces. Essentially we prove that, for two runs of \ls{median_opt}
where Bob's inputs and the output median are the same, the observable traces
are also same irrespective of Alice's inputs. Thus, from Alice's perspective, the
observable trace does not reveal more to Bob than what the output already does.
We prove this property symmetrically for Bob.
We start by defining a trace function for \ls{median_opt}:
\begin{lstlisting}
let opt_trace a b (x1, _) (y1, _) m = [
TMsg (x1 > y1); (* observable from the first as_sec *)
TScope {a} []; TScope {b} []; (* observables from two local as_par *)
TMsg m ] (* observable from the final as_sec *)
\end{lstlisting}
\noindent A trace will have four elements: output of the first \ls{as_sec}
computation, two empty scoped traces for the two local \ls{as_par}
computations, and the final output.
Using this function, we prove correctness of \ls{median_opt}, thus:
\begin{lstlisting}
val median_opt: in_a:sealed {a} (int * int) -> in_b:sealed {b} (int * int) -> Wys int
(requires (fun m -> m = Mode Par {a, b})) (* should be called in the Par mode *)
(ensures (fun m r tr -> let in_a = unseal in_a in let in_b = unseal in_b in
(median_pre in_a in_b ==> r = median_of in_a in_b) /\ (* functional correctness *)
tr = opt_trace a b in_a in_b r)) (* opt_trace precisely describes the observable trace *)
\end{lstlisting}
The delimited release property is then captured by the following lemma:
\begin{lstlisting}
val median_opt_is_secure_for_alice: a:prin -> b:prin
-> in_a$_1$:(int * int) -> in_a$_2$:(int * int) -> in_b:(int * int) (* possibly diff a1, a2 *)
-> Lemma (requires (median_pre in_a$_1$ in_b /\ median_pre in_a$_2$ in_b /\
median_of in_a$_1$ in_b = median_of in_a$_2$ in_b)) (* but same median *)
(ensures (opt_trace a b in_a$_1$ in_b (median_of in_a$_1$ in_b) = (* ensures .. *)
opt_trace a b in_a$_2$ in_b (median_of in_a$_2$ in_b))) (* .. same trace *)
\end{lstlisting}
The lemma proves that for two runs of \ls$median_opt$ where Bob's
input and the final output remain same, but Alice's inputs vary
arbitrarily, the observable traces are the same. As such, no more
information about Alice's inputs leaks via the traces than
what is already revealed by the output. We
also prove a symmetrical lemma \ls{median_opt_is_secure_for_bob}.
In short, because the \ls{Wys} monad provides programmers
with the observable traces in the logic, they can then be used to prove
properties, relational or otherwise, in the pure fragment of
\fstar outside the \ls{Wys} monad. We present more examples and their
verification details in \S\ref{sec:apps}.
\subsection{Deploying \ourlang programs}
\label{sec:deployoverview}
\begin{figure}[t]
\centering
\includegraphics[scale=0.55]{sec2fig.png}
\caption{Architecture of an \ourlang deployment}
\label{fig:distarch}
\end{figure}
Having defined a proved-secure \mc program in \ourlang, how do we run it?
Doing so requires the following steps (Figure~\ref{fig:distarch}).
First, we run the
\fstar compiler in a special mode that \emph{extracts} the \ourlang
code (say \ls{psi.fst}),
into the \ourlang AST as a data structure (in \ls{psi.ml}).
Except for the \ourlang specific nodes (\ls{as_sec}, \ls{as_par}, etc.),
the rest of the program is extracted
into \emph{FFI nodes} that indicate the use of, or calls into,
functionality provided by \fstar itself.
The next step is for each party to run the extracted
AST using the \ourlang interpreter. This interpreter is written in
\fstar and we have proved (see \S\ref{sec:impl}) it implements a deep embedding of the \ourlang
semantics, also specified in \fstar (Figures~\ref{fig:dsl-proto-semantics} and \ref{fig:dsl-tgt-semantics},
\S\ref{sec:formal}). The
interpreter is extracted to OCaml by the usual \fstar extraction.
Each party's interpreter executes the AST locally
until it reaches an \ls$as_sec ps f$ node,
where the interpreter's back-end compiles
\ls$f$, on-the-fly, for particular values of the secrets in \ls$f$'s
environment, to a boolean circuit. First-order, loop-free code can be
compiled to a circuit; \ourlang provides specialized support for
several common combinators
(e.g., \ls{fst}, \ls{snd}, list combinators such as \ls$List.intersect$, \ls$List.mem$,
\ls$List.nth$ etc.).
The circuit is handed to a library by Choi et al.~\cite{cryptoeprint:2011:257}
that implements the GMW~\citep{GMW} \mc protocol.
Running the GMW protocol involves the parties in \ls{ps}
generating and communicating (XOR-based) secret shares \citep{Shamir79} for
their secret inputs, and then cooperatively evaluating the boolean
circuit for \ls$f$ over them.
One obvious question is how both parties are able to get this process
off the ground, given that they don't know some of the inputs (e.g., other
parties' secrets).
The \ls{sealed} abstraction helps here.
Recall that for \ls$median$,
the types of the inputs are of the form \ls$sealed {a} (int * int)$ and
\ls$sealed {b} (int * int)$. When the program is run on Alice's host,
the former will be a pair of Alice's values, whereas the latter
will be an opaque constant (which we denote as $\bullet$). The
reverse will be true on Bob's host. When the circuit is constructed,
each principal links their non-opaque inputs to the relevant input
wires of the circuit. Similarly, the output map component of each party
is derived from their output wires in the circuit, and thus,
each party only gets to see their own output.
\section{Formalizing and implementing \ourlang}
\label{sec:formal}
\newcommand{\ext}[1]{\ensuremath{\mathsf{#1}}}
\begin{figure}[t]
\[
\begin{array}{@{}@{}rlcl}
\text{Principal} \quad p & & & \text{ Principal set} \quad \ensuremath{s} \quad \text{ FFI const} \quad \ext{c}, \ext{f}\\
\text{Constant} & c & ::= & p \mid \ensuremath{s} \mid () \mid \kw{true} \mid \kw{false} \mid \ext{c}\\
\text{Expression} & e & ::=
& \kw{as_par}\;e_1\;e_2 \mid \kw{as_sec}\;e_1\;e_2 \mid \kw{seal}\;e_1\;e_2 \mid \kw{reveal}\;e \mid \kw{ffi}\;\ext{f}\;\bar{e}\\
& & \mid & \kw{mkmap}\;e_1\;e_2 \mid \kw{project}\;e_1\;e_2 \mid \kw{concat}\;e_1\;e_2\\
& & \mid & c \mid x \mid \kw{let}\;x = e_1\;\kw{in}\;e_2 \mid \lambda x.e \mid e_1\;e_2 \mid \kw{fix}\;f.\lambda x.e \mid \kw{if}\;e_1\;\kw{then}\;e_2\;\kw{else}\;e_3\\
\end{array}
\]
\caption{\ourlang syntax}
\label{fig:dsl-syntax}
\end{figure}
In the previous section, we presented examples of verifying properties
about \ourlang programs using \fstar{}'s logic. However,
these programs are not executed using the \fstar{} (single-threaded) semantics; they
have a distributed semantics involving multiple parties. So,
how do the properties that we verify using \fstar{} carry over?
In this section, we present the metatheory that answers this
question. First, we formalize the \ourlang single-threaded (ST) semantics,
that
faithfully models the \fstar{} semantics of the \ourlang API presented
in \S\ref{sec:overview}. Next, we formalize
the distributed (DS) semantics that multiple parties use
to run \ourlang programs. Then we prove the former is \emph{sound}
with respect to the latter, so that properties proved of programs
under ST apply when run under DS\@. We have mechanized the proof of
this theorem in \fstar{}.
\subsection{Syntax}
Figure~\ref{fig:dsl-syntax} shows the complete
syntax of \ourlang. Principals and principal sets are first-class
values, and are denoted by $p$ and $\ensuremath{s}$ respectively. Constants in
the language also include $()$ (unit), booleans, and FFI
constants \ext{c}.
Expressions $e$ include the regular forms for functions, applications,
let bindings, etc. and the \ourlang-specific constructs. Among the
ones that we have not seen in \S\ref{sec:overview}, expression
$\kw{mkmap}\;e_1\;e_2$ creates a map from principals in $e_1$
(which is a principal set) to the value computed by
$e_2$. $\kw{project}\;e_1\;e_2$ projects the value of principal $e_1$
from the map $e_2$, and $\kw{concat}\;e_1\;e_2$ concatenates the two
maps. The maps are used if an \ls{as_sec} computation returns
different outputs to the parties.
Host language (i.e., \fstar) constructs are also part of the syntax of
\ourlang: constants \ext{c} include strings, integers, lists,
tuples, etc. Likewise, host language functions/primitives can be called
from \ourlang---$\kw{ffi}\;\ext{f}\;\bar{e}$ is the invocation of a
host-language function \ext{f} with arguments
$\bar{e}$. The FFI confers two benefits. First, it simplifies the core
language while still allowing full consideration of security relevant
properties. Second, it helps the language scale by incorporating
many of the standard features, libraries, etc. from the host language.
\begin{figure}[t]
\[
\begin{array}{@{}rlcl}
\text{Map} & m & ::= & \cdot \mid m[p \mapsto v]\\
\text{Value} & v & ::= & p \mid \ensuremath{s} \mid () \mid \kw{true} \mid \kw{false} \mid \kw{sealed}\;\ensuremath{s}\;v \mid m \mid \ext{v} \mid (L, \lambda x.e) \mid (L, \kw{fix}\;f.\lambda x.e) \mid \bullet\\
\text{Mode} & M & ::= & \kw{Par}\;\ensuremath{s} \mid \kw{Sec}\;\ensuremath{s}\\
\text{Context} & E & ::= & \langle\rangle \mid \kw{as_par}\;\langle\rangle\;e \mid \kw{as_par}\;v\;\langle\rangle \mid \kw{as_sec}\;\langle\rangle\;e \mid \kw{as_sec}\;v\;\langle\rangle \mid \ldots\\
\text{Frame} & F & ::= & (M, L, E, T)\\
\text{Stack} & X & ::= & \cdot \mid F,X\\
\text{Environment} & L & ::= & \cdot \mid L[x \mapsto v]\\
\text{Trace element} & t & ::= & \kw{TMsg}\;v \mid \kw{TScope}\;\ensuremath{s}\;T\\
\text{Trace} & T & ::= & \cdot \mid t, T\\
\text{Configuration} & C & ::= & M; X; L; T; e\\\\
\text{Par component} & P & ::= & \cdot \mid P[p \mapsto C]\\
\text{Sec component} & S & ::= & \cdot \mid S[\ensuremath{s} \mapsto C]\\
\text{Protocol} & \pi & ::= & P; S\\
\end{array}
\]
\caption{Runtime configuration syntax}
\label{fig:dsl-runtime-syntax}
\end{figure}
\begin{figure*}[t]
\[
\begin{array}{l}
\inferrule*[lab=S-aspar]
{
e_1 = \kw{as_par}\;\ensuremath{s}\;(L_1, \lambda x.e) \quad M = \kw{Par}\;\ensuremath{s}_1 \quad \ensuremath{s} \subseteq \ensuremath{s}_1 \\\\
X_1 = (M; L; \kw{seal}\;\ensuremath{s}\;\langle\rangle; T), X
}
{
M; X; L; T; e_1 \rightarrow \kw{Par}\;\ensuremath{s};X_1; L_1[x \mapsto ()]; \cdot; e
}
\hspace{0.3cm}
\inferrule*[lab=S-parret]
{
X = (M_1; L_1; \kw{seal}\;\ensuremath{s}\;\langle\rangle; T_1), X_1\\\\
\kw{can_seal}\;\ensuremath{s}\;v \quad
T_2 = \kw{append}\;T_1\;[\kw{TScope}\;\ensuremath{s}\;T]
}
{
M; X; L; T; v \rightarrow M_1; X_1; L_1; T_2; \kw{sealed}\;\ensuremath{s}\;v
}
\\\\
\inferrule*[lab=S-assec]
{
e_1 = \kw{as_sec}\;\ensuremath{s}\;(L_1, \lambda x.e) \quad M = \kw{Par}\;\ensuremath{s} \\\\
X_1 = (M; L; \langle\rangle; T), X
}
{
M; X; L; T; e_1 \rightarrow \kw{Sec}\;\ensuremath{s}; X_1; L_1[x \mapsto ()]; \cdot; e
}
\hspace{0.9cm}
\inferrule*[lab=S-secret]
{
\kw{is_sec}\;M \quad X = (M_1; L_1; \langle\rangle; T), X_1 \\\\
T_1 = \kw{append}\;T\;[\kw{TMsg}\;v] \quad
}
{
M; X; L; \cdot; v \rightarrow M_1; X_1; L_1; T_1; v
}
\end{array}
\]
\caption{\ourlang ST semantics (selected rules)}
\label{fig:src-semantics}
\end{figure*}
\subsection{Single-threaded semantics}
\label{sec:st}
We formalize the semantics in the style of Felleisen and
Hieb~\cite{felleisen1992revised}, where the redex is chosen by
(standard, not shown) \emph{evaluation contexts} $E$, which prescribe
left-to-right, call-by-value evaluation order.
The ST semantics, a model of the \fstar{} semantics and the \ourlang API,
defines a judgment
$C \rightarrow C'$ that represents a single step of an abstract
machine (Figure~\ref{fig:src-semantics}). Here, $C$ is a \emph{configuration} $M; X; L; T; e$. This
five-tuple consists of a mode $M$, a stack $X$, a local environment
$L$, a trace $T$, and an expression $e$. The syntax for these elements
is given in Figure~\ref{fig:dsl-runtime-syntax}. The value form
$\ext{v}$ represents the host language (FFI) values. The stack and
environment are standard; trace $T$ and mode $M$ were discussed in
the previous section.
For space reasons, we focus on the two main \ourlang constructs \kw{as_par}
and \kw{as_sec}.
Appendix B shows rules for other \ourlang-specific constructs.
Rules {\sc{S-aspar}} and {\sc{S-parret}} (Figure~\ref{fig:src-semantics})
reduce an \kw{as_par}
expression once its arguments are fully evaluated---its first argument $\ensuremath{s}$
is a principal set, while the second argument $(L_1, \lambda x.e)$ is a closure
where $L_1$ captures the free variables of thunk $\lambda x.e$. {\sc{S-aspar}}
first checks that the current mode $M$ is \kw{Par} and contains all the
principals from the set $\ensuremath{s}$. It then pushes a
$\kw{seal}\;\ensuremath{s}\;\langle\rangle$ frame on the stack, and starts evaluating
$e$ under the environment $L_1[x \mapsto ()]$. The rule {\sc{S-parret}} pops the frame and
seals the result,
so that it is accessible only to the
principals in $\ensuremath{s}$. The rule also creates a trace element
$\kw{TScope}\;\ensuremath{s}\;T$, essentially making observations during the
reduction of $e$ (i.e., $T$) visible only to principals in $\ensuremath{s}$.
Turning to \kw{as_sec}, the rule {\sc{S-assec}} checks the precondition of the API,
and the rule {\sc{S-secret}} generates a trace observation $\kw{TMsg}\;v$,
as per the postcondition of the API. As mentioned before, \ls{as_sec} semantics
models the ideal, trusted third-party semantics of secure computations where the
participants only observe the final output. We
can confirm that the rules implement the types of \kw{as_par} and
\kw{as_sec} shown in \S\ref{sec:overview}.
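To make the stack discipline of these rules concrete, the following Python sketch models the {\sc{S-aspar}} and {\sc{S-parret}} transitions on a tuple-encoded configuration. The encoding and helper names are our own illustrative choices rather than part of the \ourlang development, and the \kw{can_seal} premise of {\sc{S-parret}} is elided.
\begin{lstlisting}
# Illustrative Python model of S-aspar / S-parret; a configuration is
# the tuple (mode, stack, env, trace, expr), with mode = ("Par", s).
def step_as_par(mode, stack, env, trace, expr):
    _, s, (clo_env, body) = expr                  # expr = ("as_par", s, closure)
    assert mode[0] == "Par" and s <= mode[1]      # premise: s within current mode
    stack = [(mode, env, ("seal", s), trace)] + stack
    return ("Par", s), stack, {**clo_env, "x": ()}, [], body

def ret_as_par(mode, stack, env, trace, value):   # the can_seal check is elided
    (mode1, env1, (_, s), trace1), *rest = stack  # pop the seal frame
    trace2 = trace1 + [("TScope", s, trace)]      # scope observations to s
    return mode1, rest, env1, trace2, ("sealed", s, value)
\end{lstlisting}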
\begin{figure*}[t]
\[
\begin{array}{l}
\inferrule*[lab=P-par]
{
C \leadsto C'
}
{
P[p \mapsto C]; S \longrightarrow P[p \mapsto C']; S
}
\hspace{0.6cm}
\inferrule*[right=P-enter]
{
\forall p \in \ensuremath{s}.\;P[p].e = \kw{as_sec}\;\ensuremath{s}\;(L_p, \lambda x.e) \\\\
\ensuremath{s} \not\in \kw{dom}(S) \quad
L = \kw{combine}\;\bar{L}_p
}
{
P; S \longrightarrow P; S[\ensuremath{s} \mapsto \kw{Sec}\;\ensuremath{s}; \cdot; L[x \mapsto ()]; \cdot; e]
}
\\\\
\inferrule*[lab=P-sec]
{
C \rightarrow C'
}
{
P; S[\ensuremath{s} \mapsto C] \longrightarrow P; S[\ensuremath{s} \mapsto C']
}
\hspace{0.6cm}
\inferrule*[lab=P-exit]
{
S[\ensuremath{s}] = \kw{Sec}\;\ensuremath{s}; \cdot; L; T; v \\\\
P' = \forall p \in \ensuremath{s}.\;P[p \mapsto P[p] \triangleleft (\kw{slice_v}\;p\;v)] \quad
S' = S \setminus \ensuremath{s}
}
{
P; S \longrightarrow P'; S'
}
\end{array}
\]
\caption{Distributed semantics, multi-party rules}
\label{fig:dsl-proto-semantics}
\end{figure*}
\begin{figure*}[t]
\[
\begin{array}{l}
\inferrule*[lab=L-aspar1]
{
e_1 = \kw{as_par}\;\ensuremath{s}\;(L_1, \lambda x.e) \quad p \in \ensuremath{s} \\\\
X_1 = (M; L; \kw{seal}\;\ensuremath{s}\;\langle\rangle; T), X
}
{
\kw{Par}\;p; X; L; T; e_1 \leadsto \kw{Par}\;p; X_1; L_1[x \mapsto ()]; \cdot; e
}
\hspace{0.3cm}
\inferrule*[lab=L-parret]
{
X = (M; L_1; \kw{seal}\;\ensuremath{s}\;\langle\rangle; T_1), X_1 \\\\
T_2 = \kw{append}\;T_1\;T \quad v_1 = \kw{sealed}\;\ensuremath{s}\;v
}
{
\kw{Par}\;p; X; L; T; v \leadsto \kw{Par}\;p; X_1; L_1; T_2; v_1
}
\\\\
\hspace{1cm}\inferrule*[left=L-aspar2]
{
p \not\in \ensuremath{s}
}
{
\kw{Par}\;p; X; L; T; \kw{as_par}\;\ensuremath{s}\;(L_1, \lambda x.e) \leadsto \kw{Par}\;p; X; L; T; \kw{sealed}\;\ensuremath{s}\;\bullet
}
\end{array}
\]
\caption{Distributed semantics, selected local rules {\small{(the mode $M$ is always \kw{Par}\;$p$)}}}
\label{fig:dsl-tgt-semantics}
\end{figure*}
\subsection{Distributed semantics}
\label{sec:ds}
In the DS semantics, principals
evaluate the same program locally and
asynchronously until they reach a secure computation, at which point
they synchronize to jointly perform the computation.
The semantics consists of two parts: (a) a judgment of the form
$\pi \longrightarrow \pi'$ (Figure~\ref{fig:dsl-proto-semantics}),
where a protocol $\pi$ is a tuple $(P; S)$
such that $P$ maps each principal to its local
configuration and $S$ maps a set of principals to the configuration of
an ongoing, secure computation; and (b) a local evaluation judgment
$C \leadsto C'$ (Figure~\ref{fig:dsl-tgt-semantics}) to model how a single
principal behaves while in \ls$par$ mode.
Rule {\sc{P-Par}} in Figure~\ref{fig:dsl-proto-semantics} models a
single party taking a step, per the local evaluation rules.
Figure~\ref{fig:dsl-tgt-semantics} shows these rules
for \ls{as_par}.
(See Appendix B for more local evaluation rules.)
A principal either participates in the
\kw{as_par} computation, or skips it. Rules {\sc{L-aspar1}} and {\sc{L-parret}}
handle the case when $p \in \ensuremath{s}$, and so, the principal $p$
participates in the computation. The rules closely mirror the
corresponding ST semantics rules in Figure~\ref{fig:src-semantics}. One difference in the rule
{\sc{L-parret}} is that the trace $T$ is not scoped. In the DS
semantics, traces only contain \kw{TMsg} elements; i.e., a trace is
the (flat) list of secure computation outputs observed by that principal.
If $p \not\in \ensuremath{s}$, then the principal skips the computation with the
result being a sealed value containing the opaque constant $\bullet$ (rule
{\sc{L-aspar2}}). The contents of the sealed value do not matter,
since the principal will not be allowed to unseal the value anyway.
As should be the case, there
are no local rules for \kw{as_sec}---to perform a
secure computation, parties need to combine their data and jointly do
the computation. Rule {\sc{P-enter}} in Figure~\ref{fig:dsl-proto-semantics}
handles the case when principals enter a secure
computation. It requires that all
the principals $p \in \ensuremath{s}$ must have the expression form
$\kw{as_sec}\;\ensuremath{s}\;(L_p, \lambda x.e)$, where $L_p$ is their local
environment associated with the closure. Each party's local
environment contains its secret values (in addition to some public
values). Conceptually, a secure computation \emph{combines} these
environments, thereby producing a joint view, and evaluates $e$ under
the combination. We define an auxiliary \ls$combine$ function for this
purpose:
\begin{lstlisting}
combine_v ($\bullet$, v) = v
combine_v (v, $\bullet$) = v
combine_v (sealed s v$_1$, sealed s v$_2$) = sealed s (combine_v v$_1$ v$_2$)
...
\end{lstlisting}
The rule {\sc{P-enter}} combines the principals' environments, and
creates a new entry in the $S$ map. The principals are now waiting for
the secure computation to finish. Rule {\sc{P-sec}} models a stepping rule
inside the \ls{sec} mode.
The rule {\sc{P-exit}} applies when a secure computation has
completed and returns results to the waiting principals. If the
secure computation terminates with value $v$, each principal $p$ gets
the value $\kw{slice_v}\;p\;v$. The \kw{slice_v} function is
analogous to \kw{combine}, but in the opposite direction---it
strips off the parts of $v$ that are not accessible to $p$:
\begin{lstlisting}
slice_v p (sealed s v) = sealed s $\bullet$, if p $\not\in$ s
slice_v p (sealed s v) = sealed s (slice_v p v), if p $\in$ s
...
\end{lstlisting}
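The two functions are designed so that the slices of a value recombine to the value itself. The following small Python model (our own illustrative encoding of sealed values, with the string \texttt{"opaque"} standing in for $\bullet$) makes the round trip explicit:
\begin{lstlisting}
# Illustrative Python model of sealed values, slice_v and combine_v.
OPAQUE = "opaque"                          # stands in for the opaque constant

def slice_v(p, v):
    if isinstance(v, tuple) and v[0] == "sealed":
        _, s, inner = v
        return ("sealed", s, slice_v(p, inner) if p in s else OPAQUE)
    return v                               # base values are public in this model

def combine_v(v1, v2):
    if v1 == OPAQUE: return v2
    if v2 == OPAQUE: return v1
    if isinstance(v1, tuple) and v1[0] == "sealed":
        return ("sealed", v1[1], combine_v(v1[2], v2[2]))
    return v1                              # public base values must agree

v = ("sealed", {"a"}, ("sealed", {"a", "b"}, 42))
assert combine_v(slice_v("a", v), slice_v("b", v)) == v
\end{lstlisting}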
In the rule {\sc{P-exit}}, the $\triangleleft$ notation is defined as:
\vspace{0.1cm}
$M; X; L; T; \_\;\triangleleft\;v = M; X; L; \kw{append}\;T\;[\kw{TMsg}\;v]; v$
\vspace{0.1cm}
That is, the returned value is also added to the principal's trace to
note their observation of the value.
\subsection{Metatheory}
\label{sec:metatheory}
Our goal is to show that the ST semantics faithfully represents the
semantics of \ourlang programs as they are executed by multiple parties, i.e.,
according to the DS semantics. We do this by proving
\emph{simulation} of the ST semantics by the DS semantics, and by
proving \emph{confluence} of the DS semantics. Our \fstar{}
development mechanizes all the metatheory presented in this section.
\Paragraph{Simulation} We define a $\kw{slice}\;\ensuremath{s}\;C$
function that returns the corresponding protocol $\pi_C$ for an ST
configuration $C$. In the $P$ component of $\pi_C$, each
principal $p \in \ensuremath{s}$ is mapped to their \emph{slice} of the
protocol. For slicing values, we use the same \kw{slice_v}
function as before. Traces are sliced as follows:
\begin{lstlisting}
slice_tr p (TMsg v) = [TMsg (slice_v p v)]
slice_tr p (TScope s T) = slice_tr p T, if p $\in$ s
slice_tr p (TScope s T) = [], if p $\not\in$ s
\end{lstlisting}
The slice of an expression (e.g., the source program) is itself. For
all other components of $C$, slice functions are defined analogously.
We say that $C$ is \emph{terminal} if it is in \kw{Par} mode and is fully
reduced to a value
(i.e., when $C = \_; X; \_; \_; e$, $e$ is a value and $X$ is empty). Similarly, a protocol
$\pi = (P, S)$ is terminal if $S$ is empty and all the local
configurations in $P$ are terminal. The simulation theorem
is then the following:
\begin{theorem}[Simulation of ST by DS]
Let $\ensuremath{s}$ be the set of all principals. If $C_1 \rightarrow^{*} C_2$,
and $C_2$ is terminal, then there exists some derivation
$(\kw{slice}\;\ensuremath{s}\;C_1) \longrightarrow^{*} (\kw{slice}\;\ensuremath{s}\;C_2)$
such that
$(\kw{slice}\;\ensuremath{s}\;C_2)$ is terminal.
\end{theorem}
To state \emph{confluence}, we first
define the notion of \emph{strong termination}.
\begin{definition}[Strong termination]
If all possible runs of protocol $\pi$ terminate at $\pi_t$, we say
$\pi$ \emph{strongly terminates in $\pi_t$}, written $\pi \Downarrow
\pi_t$.
\end{definition}
Our confluence result then says:
\begin{theorem}[Confluence of DS]
If $\pi \longrightarrow^{*} \pi_t$ and $\pi_t$ is terminal, then
$\pi \Downarrow \pi_t$.
\end{theorem}
Combining the two theorems, we get a corollary that establishes the
soundness of the ST semantics w.r.t. the DS semantics:
\begin{corollary}[Soundness of ST semantics]
Let $\ensuremath{s}$ be the set of all principals. If $C_1 \rightarrow^* C_2$,
and $C_2$ is terminal, then
$(\kw{slice}\;\ensuremath{s}\;C_1) \Downarrow (\kw{slice}\;\ensuremath{s}\;C_2)$.
\end{corollary}
Now suppose that for a \ourlang source program, we prove in \fstar{}
a post-condition that the result is \kw{sealed} \kw{alice} $n$, for
some $n > 0$. By the soundness of the ST semantics, we can conclude
that when the program is run in the DS semantics, it may diverge, but
if it terminates, \kw{alice}'s output will also
be \kw{sealed} \kw{alice} $n$, and for all other principals their
outputs will be \kw{sealed} \kw{alice} $\bullet$. Aside from the
correspondence on results, our semantics also covers correspondence
on traces. Thus the
correctness and security properties that we prove about a \ourlang
program using \fstar's logic hold for the program that actually runs.
\subsection{Implementation}
\label{sec:impl}
The formal semantics presented in the prior section is
mechanized as
an inductive type in \fstar. This style is useful for
proving properties, but does not directly translate to an
implementation. Therefore, we implement an interpretation function
\ls$step$ in \fstar and prove that it corresponds to the rules; i.e.,
that for all input configurations $C$, \ls{step}$(C) = C'$ implies
that $C \rightarrow C'$ according to the
semantics. Then, the core of each principal's implementation is
an \fstar{} stub function \kw{tstep} that repeatedly invokes \ls{step} on the
AST of the source program (produced by the \fstar extractor run
in a custom mode), unless the AST is an \kw{as_sec}
node. Functions \ls{step} and \kw{tstep} are extracted to OCaml by the
standard \fstar{} extraction process.
Local evaluation is not defined for \ls$as_sec$, so the stub
implements what amounts to {\sc{P-enter}} and {\sc{P-exit}} from
Figure~\ref{fig:dsl-proto-semantics}. When the stub notices the
program has reached an \kw{as_sec} expression, it calls into a circuit
library we have written that converts the AST of the second argument
of \kw{as_sec} to a boolean circuit. This circuit and the encoded inputs are
communicated to a co-hosted server that implements the GMW \mc
protocol~\cite{cryptoeprint:2011:257}.
The server evaluates the circuit, coordinating with the GMW
servers of the other principals, and sends back the result. The
circuit library decodes the result and returns it to the stub.
The stub then carries on with the local evaluation. Our FFI interface
currently provides a form of monomorphic, first-order
interoperability between the (dynamically typed) interpreter and the
host language.
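As a rough picture of this control flow, consider the following self-contained Python toy, in which evaluation proceeds locally until an \kw{as_sec} node is reached and a stand-in for the circuit library and GMW backend takes over. All names here are illustrative; the real stub is \fstar{} code extracted to OCaml.
\begin{lstlisting}
# Toy model of the per-principal driver: evaluate locally, except on as_sec.
def secure_eval(thunk, arg):
    return thunk(arg)                      # stand-in for circuits + GMW servers

def tstep(expr):
    tag = expr[0]
    if tag == "lit":                       # fully evaluated
        return expr[1]
    if tag == "add":                       # an ordinary local step
        return tstep(expr[1]) + tstep(expr[2])
    if tag == "as_sec":                    # no local rule: evaluate jointly
        return secure_eval(expr[1], tstep(expr[2]))
    raise ValueError(tag)

print(tstep(("add", ("lit", 1), ("as_sec", lambda x: x * x, ("lit", 3)))))  # 10
\end{lstlisting}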
Our \fstar{} formalization of the \ourlang semantics, including the AST
specification, is 1900 lines of code. This formalization is used both
by the metatheory as well as by the (executable) interpreter. The
metatheory that connects the ST and DS semantics
(\S\ref{sec:formal}) is 3000 lines. The interpreter
and its correctness proof are another 290 lines of \fstar{} code.
The interpreter \kw{step} function is essentially a
big switch-case on the current expression, that calls into the
functions from the semantics specification. The \kw{tstep} stub is
another 15 lines. The size of the circuit
library, not including the GMW implementation, is 836 lines.
The stub, the implementation of GMW, the circuit library, and \fstar
toolchain (including the custom \ourlang extraction mode) are part
of our Trusted Computing Base (TCB).
\section{Applications}
\label{sec:apps}
In addition to joint median, presented in \S\ref{sec:overview}, we have
implemented and proved properties of two other \mc applications,
\emph{dealing for on-line card games} and \emph{private set intersection} (PSI).
\Paragraph{Card dealing} We have implemented an \mc-based card
dealing application in \ourlang. Such an application can play the
role of the dealer in a game of online poker, thereby eliminating the
need to trust the game portal for card dealing. The application relies
on \ourlang's support for \emph{secret
shares}~\citep{Shamir79}. Using secret shares, the participating
parties can share a value in a way that none of the parties can
observe the actual value individually (each party's share consists of
some random-looking bytes), but they can recover the value by combining
their shares in \ls{sec} mode.
In the application, the parties maintain a list of secret shares of
already dealt cards (the number of already dealt cards is public
information). To deal a new card, each party first generates a random
number locally. The parties then perform a secure computation to
compute the sum of their random numbers modulo 52, let's call it
$n$. The output of the secure computation is secret shares of $n$. Before
declaring $n$ as the newly dealt card, the parties need to ensure
that the card $n$ has not already been dealt. To do so, they
iterate over the list of secret shares of already dealt cards, and for
each element of the list, check that it is different from $n$. The
check is performed in a secure computation that simply combines the shares
of $n$, combines the shares of the list element, and checks the
equality of the two values. If $n$ is different from all the
previously dealt cards, it is declared to be the new card, else the
parties repeat the protocol by again generating a fresh random number
each.
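Stripped of the cryptography, the arithmetic of this protocol is simple to model. The following Python sketch is purely illustrative: in the \ourlang application, $n$ exists only as secret shares and every equality check runs inside \ls{as_sec}.
\begin{lstlisting}
import random

def deal(dealt, parties=3):
    while True:
        r = [random.randrange(52) for _ in range(parties)]  # local random draws
        n = sum(r) % 52                    # computed jointly; output is shares of n
        if all(n != c for c in dealt):     # one secure equality check per dealt card
            return n                       # fresh: declare n the new card

dealt = []
for _ in range(5):
    dealt.append(deal(dealt))              # deals five distinct cards in 0..51
\end{lstlisting}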
\ourlang provides the following API for secret shares:
\begin{lstlisting}
type Sh: Type -> Type
type can_sh: Type -> Type
assume Cansh_int: can_sh int
val v_of_sh: sh:Sh $\alpha$ -> Ghost $\alpha$
val ps_of_sh: sh:Sh $\alpha$ -> Ghost prins
val mk_sh: x:$\alpha$ -> Wys (Sh $\alpha$)
(requires (fun m -> m.mode = Sec /\ can_sh $\alpha$))
(ensures (fun m r tr -> v_of_sh r = x /\ ps_of_sh r = m.ps /\ tr = []))
val comb_sh: x:Sh $\alpha$ -> Wys $\alpha$ (requires (fun m -> m.mode = Sec /\ ps_of_sh x = m.ps))
(ensures (fun m r tr -> v_of_sh x = r /\ tr = []))
\end{lstlisting}
Type \ls{Sh $\alpha$} types the shares of values of type \ls{$\alpha$}. Our
implementation currently supports shares of \ls{int} values only;
the \ls{can_sh} predicate enforces this restriction on the source
programs. Extending secret-share support to other
types (such as pairs) should be straightforward (as in~\cite{wysteria}).
Functions \ls{v_of_sh}
and \ls{ps_of_sh} are marked \ls{Ghost}, meaning that they can only
be used in specifications for reasoning purposes. In the concrete
code, shares are created and combined using the \ls{mk_sh}
and \ls{comb_sh} functions. Together, the specifications of these
functions enforce that the shares are created and combined by the same
set of parties (through \ls{ps_of_sh}), and that \ls{comb_sh} recovers
the original value (through \ls{v_of_sh}).
The \ourlang interpreter transparently handles the low-level details of
extracting shares from the GMW implementation of Choi et
al.~\cite{cryptoeprint:2011:257} (\ls{mk_sh}), and reconstituting the shares back (\ls{comb_sh}).
In addition to implementing the card dealing application
in \ourlang, we have formally verified that the
returned card is fresh. The signature of the function that checks for
freshness of the newly dealt card is as follows (\ls{abc} is the set
of three parties in the computation):
\begin{lstlisting}
val check_fresh: l:list (Sh int){forall s'. mem s' l ==> ps_of_sh s' = abc}
-> s:Sh int{ps_of_sh s = abc}
-> Wys bool (requires (fun m -> m = Mode Par abc))
(ensures (fun _ r _ -> r <==> (forall s'. mem s' l ==> not (v_of_sh s' = v_of_sh s))))
\end{lstlisting}
The specification says that the function takes two arguments: \ls{l}
is the list of secret shares of already dealt cards, and \ls{s} is the
secret shares of the newly dealt card. The function returns a boolean
\ls{r} that is \ls{true} iff the concrete value (\ls{v_of_sh})
of \ls{s} is different from the concrete values of all the elements
of the list \ls{l}. Using \fstar{}, we verify that the implementation
of \ls{check_fresh} meets this specification.
\Paragraph{PSI}
Consider a dating application that enables its users to compute their
common interests without revealing all of them.
This is an instance of the more general private set intersection
(PSI) problem~\cite{Huang12}.
We implement a straightforward version of PSI in \ourlang:
\begin{lstlisting}[frame=single]
let psi a b (input_a:sealed {a} (list int)) (input_b:sealed {b} (list int)) (l_a:int) (l_b:int) =
as_sec {a,b} (fun () -> List.intersect (reveal input_a) (reveal input_b) l_a l_b)
\end{lstlisting}
\noindent where the input sets are expressed as lists with public lengths.
Huang et al.~\cite{Huang12} provide an optimized PSI algorithm
that performs much better when the density of common elements in
the two sets is high. We implement their algorithm in \ourlang.
The optimized version consists of two nested loops -- an outer loop for Alice's
set and an inner loop for Bob's -- where an iteration of the inner loop
compares the current element of Alice's set with the current element of
Bob's. The nested loops are written using \ls{as_par} so that both Alice and
Bob execute the loops in lockstep (note that the set sizes are public), while
the comparison in the inner loop happens using \ls{as_sec}.
Instead of naive \ls{l_a * l_b} comparisons, Huang et al.~\cite{Huang12} observe
that once an element of Alice's set \ls{ax}
matches an element of Bob's set \ls{bx}, the inner loop can return immediately,
skipping the comparisons of \ls{ax} with the rest of Bob's set. Furthermore,
\ls{bx} can be removed from Bob's set, excluding it from any further comparisons
with other elements in Alice's set. Since there are no repeats in the input sets,
all the excluded comparisons are guaranteed to be false. We show the full code
and its performance comparison with \ls{psi} in
Appendix A.
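For intuition, the loop structure of the optimized algorithm can be rendered in plain Python as below. The rendering is illustrative only: in the \ourlang version the loops run under \ls{as_par} and each comparison runs inside \ls{as_sec}.
\begin{lstlisting}
def psi_opt(alice, bob):
    bob, out = list(bob), []               # input sets contain no repeats
    for ax in alice:                       # outer loop over Alice's set
        for j, bx in enumerate(bob):       # inner loop over Bob's remainder
            if ax == bx:                   # one secure comparison per iteration
                out.append(ax)
                del bob[j]                 # exclude bx from later comparisons
                break                      # skip the rest of Bob's set for ax
    return out
\end{lstlisting}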
As with the median example from \S\ref{sec:overview}, the optimized PSI
intentionally reveals more for performance gains. As such, we would like
to verify that the optimizations do not reveal more about parties' inputs.
We take the following stepwise refinement approach. First, we characterize the trace of
the optimized implementation as a pure function \ls{trace_psi_opt la lb}
(omitted for space reasons),
and show that the trace of \ls{psi_opt} is precisely \ls{trace_psi_opt la lb}.
Then, we define an intermediate PSI implementation that has the same
nested loop structure, but performs
\ls{l_a * l_b} comparisons without any optimizations. We characterize
the trace of this intermediate implementation as the pure function
\ls{trace_psi}, and show that it precisely captures the trace.
To show that \ls{trace_psi} does not reveal more than the intersection of
the input sets, we prove the following lemma.
\newcommand\defeq{\mathrel{\overset{\makebox[0pt]{\mbox{\normalfont\tiny\sffamily def}}}{=}}}
\begin{lstlisting}
$\Psi$ la$_0$ la$_1$ lb$_0$ lb$_1$ $\defeq$ (* possibly diff input sets, but with *)
la$_0$ $\cap$ lb$_0$ = la$_1$ $\cap$ lb$_1$ /\ (* intersections the same *)
length la$_0$ = length la$_1$ /\ length lb$_0$ = length lb$_1$ (* lengths the same *)
val psi_interim_is_secure: la$_0$:_ -> lb$_0$:_ -> la$_1$:_ -> lb$_1$:_ -> Lemma
(requires ($\Psi$ la$_0$ la$_1$ lb$_0$ lb$_1$)) (ensures (permutation (trace_psi la$_0$ lb$_0$) (trace_psi la$_1$ lb$_1$)))
\end{lstlisting}
The lemma essentially says that for two runs on same-length inputs,
if the output is the same, then the resulting traces are permutations of
each other.\footnote{Holding Bob's
(resp. Alice's) inputs fixed and varying Alice's (resp. Bob's) inputs,
as done for \ls$median$ in \S\ref{sec:verification},
is covered by this more general property.}
We can reason about the traces of \ls$psi_interim$ up to
permutation because Alice has no prior knowledge of the choice of
representation of Bob's set (Bob can shuffle his list), so cannot
learn anything from a permutation of the trace.\footnote{We could formalize
this observation using a probabilistic, relational variant
of \fstar~\citep{rfstar14}.} This establishes the security of
\ls{psi_interim}.
Finally, we can connect \ls$psi_interim$ to \ls$psi_opt$
by showing that there exists a function \ls$f$, such that for
any trace \ls$tr=trace_psi la lb$, the trace of \ls$psi_opt$,
\ls$trace_psi_opt la lb$, can be computed by
\ls$f (length la) (length lb) tr$.
In other words, the trace produced by the optimized implementation can be computed
using a function of information already available to Alice (or Bob)
when she (or he) observes a run of the secure, unoptimized version
\ls$psi_interim la lb$. As such, the optimizations do not reveal
further information.
\section{Related work}
\label{sec:related}
\Paragraph{Source \mc verification} While the verification of the
underlying crypto protocols has received some attention
\cite{Almeida:2017:FVS:3133956.3134017,cryptoeprint:2014:456},
the verification of correctness and security properties of \mc source
programs has remained largely unexplored, surprisingly so given that
the goal of \mc is to preserve the privacy of secret inputs.
The only previous work that
we know of is Backes et al.~\cite{backes_et_al:LIPIcs:2010:2877}, who
devise an applied pi-calculus based abstraction for \mc, and use it
for formal verification. For an auction protocol that computes the
\kw{min} function, their abstraction comprises about 1400 lines of
code. \ourlang, on the other hand, enables direct verification of the
higher-level \mc source programs, and not their models,
and in addition provides a partially verified toolchain.
\Paragraph{Wysteria}
\ourlang's computational model is based on programming
abstractions of a previous domain-specific language,
Wysteria~\citep{wysteria}. \ourlang's realization as an embedded DSL in
\fstar{} makes important advances. In particular, \ourlang (a) enhances the
Wysteria semantics to include a notion of observable traces, and provides
the novel capability to prove security and correctness properties
about mixed-mode \mc source programs, (b) expands the
programming constructs available by drawing on features and libraries
of \fstar, and (c) adds assurance via a (partially) proved-correct
implementation.
\Paragraph{Verified \mc toolchain} Almeida et al.~\cite{Almeida:2017:FVS:3133956.3134017}
build a verified toolchain consisting of (a) a verified
circuit compiler from (a subset of) C to boolean circuits, and
(b) a verified implementation of Yao's~\cite{Yao}
garbled circuits protocol for 2-party \mc.
They use CompCert~\cite{Leroy2009Formal}
for the former, and EasyCrypt~\cite{Barthe:2011:CSP:2033036.2033043}
for the latter. These are significant advances, but there are several distinctions
from our work.
The \mc programs in their toolchain are not \emph{mixed-mode},
and thus the toolchain cannot express examples like \ls{median_opt} and the optimized PSI.
Their framework does not enable formal verification of source programs
like \ourlang does. It may be possible to use other frameworks for
verifying C programs (e.g. Frama-C~\cite{framac}), but it is inconvenient
as one has to work in the subset of C that falls in the intersection
of these tools. \ourlang is also more general as it supports
general $n$-party \mc; e.g., the card dealing application in \S\ref{sec:apps}
has 3 parties. Nevertheless, \ourlang may use the
verified Yao implementation for the special case of 2 parties.
\Paragraph{\mc DSLs and DSL extensions} In addition to Wysteria
several other \mc DSLs have been proposed in the
literature~\cite{Huang11,viff,Malka2011,fairplaymp,Holzer12,Nielsen07,Nielsen09,sharemind,Schropfer2011,wysteria,Liu2014,Laud:2015:DLL:2810103.2813664}. Most
of these languages have standalone implementations, and the
(usability/scalability) drawbacks
that come with them. Like \ourlang,
a few are implemented as language extensions. Launchbury et
al.~\cite{Launchbury2012} describe a Haskell-embedded DSL for writing
low-level ``share protocols'' on a multi-server ``SMC machine''.
OblivC~\cite{oblivc} is an extension to C for two-party \mc that
annotates variables and conditionals with an \kw{obliv} qualifier to
identify private inputs; these programs are compiled by
source-to-source translation.
The former is essentially a shallow embedding, and the latter is
compiler-based; \ourlang is unique in that it combines a shallow embedding
to support source program
verification and a deep embedding to support a non-standard
target semantics. Recent work~\cite{ezpc,Buscher:2018:HCH:3243734.3243786} compiles
to cryptographic protocols that include both arithmetic and boolean
circuits; the compiler decides which fragments
of the program fall into which category. It would be interesting work
to integrate such a backend in \ourlang.
\Paragraph{Mechanized metatheory} Our
verification results are different from a typical verification result
that might either mechanize metatheory for
an idealized language~\cite{Aydemir:2005:MMM:2145056.2145062}, or
might prove an interpreter or compiler correct w.r.t. a formal
semantics~\cite{Leroy2009Formal}---we do both. We mechanize the
metatheory of \ourlang establishing the soundness of the conceptual
ST semantics w.r.t. the actual DS semantics, and
mechanize the proof that the interpreter implements
the correct DS semantics.
\Paragraph{General DSL implementation strategies} DSLs (for \mc or
other purposes) are implemented
in various ways, such as by developing a standalone
compiler/interpreter, or by shallow or deep embedding in
a host language. Our approach bears relation
to the approach taken in LINQ~\cite{Meijer:2006:LRO:1142473.1142552},
which embeds a query language in normal C\# programs, and implements
these programs by extracting the query syntax tree and passing it to a
\emph{provider} to implement for a particular backend. Other
researchers have embedded DSLs in verification-oriented host languages
(e.g., Bedrock~\cite{bedrock} in Coq~\cite{coq}) to permit formal
proofs of DSL programs. Low$^\star$~\cite{Protzenko:2017:VLP:3136534.3110261}
is a shallow-embedding of a small, sequential,
well-behaved subset of C in \fstar that extracts to C using a \fstar-to-C
compiler. Low$^\star$ has been used to verify and implement several cryptographic
constructions. Fromherz et al.~\cite{valepopl} present a deep embedding
of a subset of x64 assembly in \fstar that allows efficient verification of
assembly and its interoperation with C code generated from Low$^\star$.
They design (and verify) a custom VC generator for the deeply embedded DSL,
that allows for the proofs of assembly crypto routines to scale.
\section{Conclusions}
This paper has presented \ourlang, the first DSL to enable formal
verification of efficient source \mc programs as written in a
full-featured host programming language, \fstar. The paper presented
examples such as joint median, card dealing, and PSI, and showed
how the DSL enables their correctness and security proofs.
The \ourlang implementation, examples, and
proofs are publicly available on
\iflong
Github at \url{https://github.com/FStarLang/FStar/tree/stratified\_last/examples/wysteria}.
\else
Github.
\fi
\bibliographystyle{splncs04}
\section{Introduction}
Spectral algorithms are central to machine learning and scientific computing. In machine learning, eigendecomposition and singular value decomposition are foundational tools, used for PCA as well as a wide variety of other models. In scientific applications, solving for the eigenfunction of a given linear operator is central to the study of PDEs, and gives the time-independent behavior of classical and quantum systems. For systems where the linear operator of interest can be represented as a reasonably-sized matrix, full eigendecomposition can be achieved in $\mathcal{O}(n^3)$ time \citep{pan1998complexity}, and in cases where the matrix is too large to diagonalize completely (or even store in memory), iterative algorithms based on Krylov subspace methods can efficiently compute a fixed number of eigenvectors by repeated application of matrix-vector products \citep{golub2012matrix}.
At a larger scale, the eigenvectors themselves cannot be represented explicitly in memory. This is the case in many applications in quantum physics and machine learning, where the state space of interest may be combinatorially large or even continuous and high dimensional. Typically, the eigenfunctions of interest are approximated from a fixed number of points small enough to be stored in memory, and then the value of the eigenfunction at other points is approximated by use of the Nystr{\"o}m method \citep{bengio2004out}. As this depends on evaluating a kernel between a new point and every point in the training set, this is not practical for large datasets, and some form of function approximation is necessary. By choosing a function approximator known to work well in a certain domain, such as convolutional neural networks for vision, we may be able to bias the learned representation towards reasonable solutions in a way that is difficult to encode by choice of kernel.
In this paper, we propose a way to approximate eigenfunctions of linear operators on high-dimensional function spaces with neural networks, which we call {\em Spectral Inference Networks} (SpIN). We show how to train these networks via bilevel stochastic optimization. Our method finds correct eigenfunctions of problems in quantum physics and discovers interpretable representations from video. This significantly extends prior work on unsupervised learning without a generative model and we expect will be useful in scaling many applications of spectral methods.
The outline of the paper is as follows. Sec~\ref{sec:related_work} provides a review of related work on spectral learning and stochastic optimization of approximate eigenfunctions. Sec.~\ref{sec:eigenoptimization} defines the objective function for Spectral Inference Networks, framing eigenfunction problems as an optimization problem. Sec.~\ref{sec:method} describes the algorithm for training Spectral Inference Networks using bilevel optimization and a custom gradient to learn ordered eigenfunctions simultaneously. Experiments are presented in Sec.~\ref{sec:experiments} and future directions are discussed in Sec.~\ref{sec:discussion}. We also include supplementary materials with more in-depth derivation of the custom gradient updates (Sec.~\ref{sec:gradient_derivation}), a TensorFlow implementation of the core algorithm (Sec.~\ref{sec:tf_code}), and additional experimental results and training details (Sec.~\ref{sec:experimental_details}).
\section{Related Work}
\label{sec:related_work}
Spectral methods are mathematically ubiquitous, arising in a number of diverse settings. Spectral clustering \citep{ng2002spectral}, normalized cuts \citep{shi2000normalized} and Laplacian eigenmaps \citep{belkin2002laplacian} are all machine learning applications of spectral decompositions applied to graph Laplacians. Related manifold learning algorithms like LLE \citep{Roweis2000} and IsoMap \citep{Tenenbaum2000} also rely on eigendecomposition, with a different kernel. Spectral algorithms can also be used for asymptotically exact estimation of parametric models like hidden Markov models and latent Dirichlet allocation by computing the SVD of moment statistics \citep{hsu2008spectral, anandkumar2012spectral}.
In the context of reinforcement learning, spectral decomposition of predictive state representations has been proposed as a method for learning a coordinate system of environments for planning and control \citep{boots2011closing}, and when the transition function is symmetric its eigenfunctions are also known as {\em proto-value functions} (PVFs) \citep{mahadevan2007proto}. PVFs have also been proposed by neuroscientists as a model for the emergence of grid cells in the entorhinal cortex \citep{stachenfeld17}. The use of PVFs for discovering subgoals in reinforcement learning has been investigated in \citep{Machado2017a} and combined with function approximation in \citep{machado2018eigenoption}, though using a less rigorous approach to eigenfunction approximation than SpIN. A qualitative comparison of the two approaches is given in the supplementary material in Sec.~\ref{sec:atari}.
Spectral learning with stochastic approximation has a long history as well. Probably the earliest work on stochastic PCA is that of ``Oja's rule'' \citep{Oja1982}, which is a Hebbian learning rule that converges to the first principal component, and a wide variety of online SVD algorithms have appeared since. Most of these stochastic spectral algorithms are concerned with learning fixed-size eigenvectors from online data, while we are concerned with cases where the eigenfunctions are over a space too large to be represented efficiently with a fixed-size vector.
The closest related work in machine learning on finding eigenfunctions by optimization of parametric models is Slow Feature Analysis (SFA) \citep{wiskott2002slow}, which is a special case of SpIN. SFA is equivalent to function approximation for Laplacian eigenmaps \citep{sprekeler2011relation}, and it has been shown that optimizing for the slowness of features in navigation can also lead to the emergence of units whose response properties mimic grid cells in the entorhinal cortex of rodents \citep{wyss2006model, franzius2007slowness}. SFA has primarily been applied to train shallow or linear models, and when trained on deep models is typically trained in a layer-wise fashion, rather than end-to-end \citep{kompella2012incremental, sun2014dl}. The features in SFA are learned sequentially, from slowest to fastest, while SpIN allows for simultaneous learning of all eigenfunctions, which is more useful in an online setting.
Spectral methods and deep learning have been combined in other ways. The spectral networks of \cite{bruna2014spectral} are a generalization of convolutional neural networks to graph and manifold structured data based on the idea that the convolution operator is diagonal in a basis defined by eigenvectors of the Laplacian. In \citep{ionescu2015matrix} spectral decompositions were incorporated as differentiable layers in deep network architectures. Spectral decompositions have been used in combination with the kernelized Stein gradient estimator to better learn implicit generative models like GANs \citep{shi2018spectral}. While these use spectral methods to design or train neural networks, our work uses neural networks to solve large-scale spectral decompositions.
In computational physics, the field of approximating eigenfunctions of a Hamiltonian operator is known as {\em Variational Quantum Monte Carlo} (VMC) \citep{foulkes2001quantum}. VMC methods are usually applied to finding the ground state (lowest eigenvalue) of electronic systems, but extensions to excited states (higher eigenvalues) have been proposed \citep{blunt2015krylov}. Typically the class of function approximator is tailored to the system, but neural networks have been used for calculating ground states \citep{carleo2017solving} and excited states \citep{choo2018symmetries}. Stochastic optimization for VMC dates back at least to \cite{harju1997stochastic}. Most of these methods use importance sampling from a well-chosen distribution to eliminate the bias due to finite batch sizes. In machine learning we are not free to choose the distribution from which the data is sampled, and thus cannot take advantage of these techniques.
\section{Spectral Decomposition as Optimization}
\label{sec:eigenoptimization}
\subsection{Finite-dimensional eigenvectors}
Eigenvectors of a matrix $\mathbf{A}$ are defined as those vectors $\mathbf{u}$ such that $\mathbf{A}\mathbf{u} = \lambda \mathbf{u}$ for some scalar $\lambda$, the eigenvalue. It is also possible to define eigenvectors as the solution to an optimization problem. If $\mathbf{A}$ is a symmetric matrix, then the largest eigenvector of $\mathbf{A}$ is the solution of:
\begin{equation}
\max_{\substack{\mathbf{u} \\ \mathbf{u}^T\mathbf{u} = 1}} \mathbf{u}^T \mathbf{A} \mathbf{u}
\end{equation}
or equivalently (up to a scaling factor in $\mathbf{u}$)
\begin{equation}
\max_{\substack{\mathbf{u}}} \frac{\mathbf{u}^T \mathbf{A} \mathbf{u}}{\mathbf{u}^T\mathbf{u}}
\end{equation}
This is the {\em Rayleigh quotient}, and it can be seen by setting derivatives equal to zero that this is equivalent to finding $\mathbf{u}$ such that $\mathbf{A}\mathbf{u} = \lambda \mathbf{u}$, where $\lambda$ is equal to the value of the Rayleigh quotient. We can equivalently find the lowest eigenvector of $\mathbf{A}$ by minimizing the Rayleigh quotient instead. Amazingly, despite being a nonconvex problem, algorithms such as power iteration converge to the global solution of this problem \cite[Sec. 4]{daskalakis2017converse}.
To compute the top $N$ eigenvectors $\mathbf{U} = \left(\mathbf{u}_1, \ldots, \mathbf{u}_N \right)$, we can solve a sequence of maximization problems:
\begin{equation}
\mathbf{u}_i = \arg\max_{\substack{\mathbf{u} \\ \mathbf{u}_{j}^T\mathbf{u} = 0 \\ j < i }} \frac{\mathbf{u}^T \mathbf{A} \mathbf{u}}{\mathbf{u}^T\mathbf{u}}
\label{eqn:sequential_rayleigh}
\end{equation}
If we only care about finding a subspace that spans the top $N$ eigenvectors, we can divide out the requirement that the eigenvectors are orthogonal to one another, and reframe the problem as a single optimization problem \cite[Sec. 4.4]{edelman1998geometry}:
\begin{equation}
\max_{\mathbf{U}} \mathrm{Tr}\left((\mathbf{U}^T \mathbf{U})^{-1} \mathbf{U}^T\mathbf{A}\mathbf{U}\right)
\label{eqn:generalized_rayleigh}
\end{equation}
or, if $\mathbf{u}^i$ denotes row $i$ of $\mathbf{U}$:
\begin{equation}
\max_{\mathbf{U}} \mathrm{Tr}\left(\left(\sum_i \mathbf{u}^{iT} \mathbf{u}^{i}\right)^{-1}\sum_{ij} A_{ij} \mathbf{u}^{iT} \mathbf{u}^{j}\right)
\label{eqn:generalized_rayleigh_sum}
\end{equation}
Note that this objective is invariant to right-multiplication of $\mathbf{U}$ by an arbitrary matrix, and thus we do not expect the columns of $\mathbf{U}$ to be the separate eigenvectors. We will discuss how to break this symmetry in Sec.~\ref{sec:break_symmetry}.
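Both facts are easy to check numerically. The following NumPy snippet, included purely as a sanity check, verifies that the objective attains the sum of the top eigenvalues at the true eigenvectors and that it is unchanged by right-multiplication:
\begin{lstlisting}
import numpy as np
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)); A = (A + A.T) / 2  # random symmetric matrix
w, Q = np.linalg.eigh(A)                            # eigenvalues in ascending order

obj = lambda U: np.trace(np.linalg.inv(U.T @ U) @ (U.T @ A @ U))
U = Q[:, -3:]                                       # top-3 eigenvectors
assert np.isclose(obj(U), w[-3:].sum())             # attains top-3 eigenvalue sum
M = rng.standard_normal((3, 3))                     # almost surely invertible
assert np.isclose(obj(U @ M), obj(U))               # invariant to right-multiplication
\end{lstlisting}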
\subsection{From Eigenvectors to Eigenfunctions}
\label{sec:eigvec_to_eigfunc}
We are interested in the case where both $\mathbf{A}$ and $\mathbf{u}$ are too large to represent in memory. Suppose that instead of a matrix $\mathbf{A}$ we have a symmetric (not necessarily positive definite) kernel $k(\mathbf{x}, \mathbf{x}')$ where $\mathbf{x}$ and $\mathbf{x}'$ are in some measurable space $\Omega$, which could be either continuous or discrete. Let the inner product on $\Omega$ be defined with respect to a probability distribution with density $p(\mathbf{x})$, so that $\langle f, g \rangle=\int f(\mathbf{x})g(\mathbf{x}) p(\mathbf{x}) d\mathbf{x} = \mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}[f(\mathbf{x})g(\mathbf{x})]$. In theory this could be an improper density, such as the uniform distribution over $\mathbb{R}^n$, but to evaluate it numerically there must be some proper distribution over $\Omega$ from which the data are sampled. We can construct a symmetric linear operator $\mathcal{K}$ from $k$ as $\mathcal{K}[f](\mathbf{x}) = \mathbb{E}_{\mathbf{x}'}\left[k(\mathbf{x}, \mathbf{x}')f(\mathbf{x}')\right]$. To compute a function that spans the top $N$ eigenfunctions of this linear operator, we need to solve the equivalent of Eq.~\ref{eqn:generalized_rayleigh_sum} for function spaces. Replacing rows $i$ and $j$ with points $\mathbf{x}$ and $\mathbf{x}'$ and sums with expectations, this becomes:
\begin{equation}
\max_{\mathbf{u}} \mathrm{Tr}\left(\mathbb{E}_{\mathbf{x}}\left[\mathbf{u}(\mathbf{x})\mathbf{u}(\mathbf{x})^T\right]^{-1}
\mathbb{E}_{\mathbf{x},\mathbf{x}'}\left[k(\mathbf{x}, \mathbf{x}')\mathbf{u}(\mathbf{x})\mathbf{u}(\mathbf{x}')^T\right]\right)
\label{eqn:stochastic_rayleigh}
\end{equation}
where the optimization is over all functions $\mathbf{u}:\Omega \rightarrow \mathbb{R}^N$ such that each element of $\mathbf{u}$ is an integrable function under the metric above. Also note that as $\mathbf{u}^i$ is a row vector while $\mathbf{u}(\mathbf{x})$ is a column vector, the transposes are switched. This is equivalent to solving the constrained optimization problem
\begin{equation}
\max_{\substack{\mathbf{u} \\ \mathbb{E}_{\mathbf{x}}\left[\mathbf{u}(\mathbf{x})\mathbf{u}(\mathbf{x})^T\right]=\mathbf{I}}} \mathrm{Tr}\left(
\mathbb{E}_{\mathbf{x},\mathbf{x}'}\left[k(\mathbf{x}, \mathbf{x}')\mathbf{u}(\mathbf{x})\mathbf{u}(\mathbf{x}')^T\right]\right)
\label{eqn:stochastic_rayleigh_constrained}
\end{equation}
For clarity, we will use $\mathbf{\Sigma} = \mathbb{E}_{\mathbf{x}}\left[\mathbf{u}(\mathbf{x})\mathbf{u}(\mathbf{x})^T\right]$ to denote the covariance\footnote{Technically, this is the second moment, as $\mathbf{u}(\mathbf{x})$ is not necessarily zero-mean, but we will refer to it as the covariance for convenience.} of features and $\mathbf{\Pi} = \mathbb{E}_{\mathbf{x},\mathbf{x}'}\left[k(\mathbf{x}, \mathbf{x}')\mathbf{u}(\mathbf{x})\mathbf{u}(\mathbf{x}')^T\right]$ to denote the kernel-weighted covariance throughout the paper, so the objective in Eq.~\ref{eqn:stochastic_rayleigh} becomes $\mathrm{Tr}(\mathbf{\Sigma}^{-1}\mathbf{\Pi})$. The empirical estimate of these quantities will be denoted as $\mathbf{\hat{\Sigma}}$ and $\hat{\mathbf{\Pi}}$.
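In a minibatch setting these expectations are replaced by Monte Carlo estimates. A minimal NumPy sketch of the estimators, assuming paired samples $(\mathbf{x}, \mathbf{x}')$ and a batched feature function \texttt{u}, mirrors the quantities used later in Alg.~\ref{alg:spin}:
\begin{lstlisting}
import numpy as np

def covariances(u, x, xp, k):
    """Monte Carlo estimates of Sigma and Pi from a paired minibatch."""
    U, Up = u(x), u(xp)                             # (batch, N) feature matrices
    sigma = 0.5 * (U.T @ U + Up.T @ Up) / len(x)
    pi = (k(x, xp)[:, None] * U).T @ Up / len(x)    # k evaluated on each pair
    return sigma, pi
\end{lstlisting}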
\subsection{Kernels}
The form of the kernel $k$ often allows for simplification to Eq.~\ref{eqn:stochastic_rayleigh}. If $\Omega$ is a graph, and $k(\mathbf{x}, \mathbf{x}') = -1$ if $\mathbf{x} \ne \mathbf{x}'$ and are neighbors and 0 otherwise, and $k(\mathbf{x}, \mathbf{x})$ is equal to the total number of neighbors of $\mathbf{x}$, this is the {\em graph Laplacian}, and can equivalently be written as:
\begin{equation}
k(\mathbf{x},\mathbf{x}')\mathbf{u}(\mathbf{x})\mathbf{u}(\mathbf{x}')^T = \left(\mathbf{u}(\mathbf{x})-\mathbf{u}(\mathbf{x}')\right)\left(\mathbf{u}(\mathbf{x})-\mathbf{u}(\mathbf{x}')\right)^T
\label{eqn:sfa_objective}
\end{equation}
for neighboring points \cite[Sec. 4.1]{sprekeler2011relation}. It's clear that this kernel penalizes the difference between neighbors, and in the case where the neighbors are adjacent video frames this is Slow Feature Analysis (SFA) \citep{wiskott2002slow}. Thus SFA is a special case of SpIN, and the algorithm for learning in SpIN here allows for end-to-end online learning of SFA with arbitrary function approximators. The equivalent kernel to the graph Laplacian for $\Omega = \mathbb{R}^n$ is
\begin{equation}
k(\mathbf{x}, \mathbf{x}') = \lim_{\epsilon \rightarrow 0} \textstyle\sum_{i=1}^n \epsilon^{-2}(2\delta(\mathbf{x}-\mathbf{x}') - \delta(\mathbf{x} - \mathbf{x}' - \epsilon \mathbf{e}_i) - \delta(\mathbf{x} - \mathbf{x}' + \epsilon \mathbf{e}_i))
\end{equation}
where $\mathbf{e}_i$ is the unit vector along the axis $i$. This converges to the {\em differential} Laplacian, and the linear operator induced by this kernel is $\nabla^2\triangleq \sum_i \frac{\partial^2}{\partial x_i^2}$, which appears frequently in physics applications. The generalization to generic manifolds is the {\em Laplace-Beltrami} operator. Since these are purely local operators, we can replace the double expectation over $\mathbf{x}$ and $\mathbf{x}'$ with a single expectation.
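For the graph case the identity in Eq.~\ref{eqn:sfa_objective} can be verified directly. The following NumPy check, included only for illustration, confirms on a random unweighted graph that the kernel-weighted covariance under the graph Laplacian equals the sum of squared neighbor differences:
\begin{lstlisting}
import numpy as np
rng = np.random.default_rng(0)
n = 6
W = np.triu(rng.integers(0, 2, (n, n)), 1); W = W + W.T  # random adjacency
Lap = np.diag(W.sum(axis=1)) - W                         # graph Laplacian kernel
U = rng.standard_normal((n, 2))                          # two features per node

edges = [(i, j) for i in range(n) for j in range(i + 1, n) if W[i, j]]
rhs = sum(np.outer(U[i] - U[j], U[i] - U[j]) for i, j in edges)
assert np.allclose(U.T @ Lap @ U, rhs)
\end{lstlisting}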
\section{Method}
\label{sec:method}
There are many possible ways of solving the optimization problems in Equations~\ref{eqn:stochastic_rayleigh}~and~\ref{eqn:stochastic_rayleigh_constrained}. In principle, we could use a constrained optimization approach such as the augmented Lagrangian method \citep{bertsekas2014constrained}, which has been successfully combined with deep learning for approximating maximum entropy distributions \citep{loaiza2017maximum}. In our experience, such an approach was difficult to stabilize. We could also construct an orthonormal function basis and then learn some flow that preserves orthonormality. This approach has been suggested for quantum mechanics problems by \cite{cranmer2018quantum}. But, if the distribution $p(\mathbf{x})$ is unknown, then the inner product $\langle f, g \rangle$ is not known, and constructing an explicitly orthonormal function basis is not possible. Also, flows can only be defined on continuous spaces, and we are interested in methods that work for large discrete spaces as well. Instead, we take the approach of directly optimizing the quotient in Eq.~\ref{eqn:stochastic_rayleigh}.
\subsection{Learning Ordered Eigenfunctions}
\label{sec:break_symmetry}
Since Eq.~\ref{eqn:stochastic_rayleigh} is invariant to linear transformation of the features $\mathbf{u}(\mathbf{x})$, optimizing it will only give a function that {\em spans} the top $N$ eigenfunctions of $\mathcal{K}$. If we were to instead sequentially optimize the Rayleigh quotient for each function $u_i(\mathbf{x})$:
\begin{equation}
\max_{\substack{u_i \\ \mathbb{E}_{\mathbf{x}}\left[u_i(\mathbf{x})u_i(\mathbf{x})\right]=1 \\
\mathbb{E}_{\mathbf{x}}\left[u_i(\mathbf{x})u_{j}(\mathbf{x})\right]=0 \\
j=1,\ldots,i-1}}
\mathbb{E}_{\mathbf{x},\mathbf{x}'}\left[k(\mathbf{x}, \mathbf{x}')u_i(\mathbf{x})u_i(\mathbf{x}')\right]
\label{eqn:sequential_stochastic_rayleigh}
\end{equation}
we would recover the eigenfunctions in order. However, this would be cumbersome in an online setting. It turns out that by masking the flow of information from the gradient of Eq.~\ref{eqn:stochastic_rayleigh} correctly, we can simultaneously learn all eigenfunctions in order.
First, we can use the invariance of trace to cyclic permutation to rewrite the objective in Eq.~\ref{eqn:stochastic_rayleigh} as $\mathrm{Tr}\left(\mathbf{\Sigma}^{-1}\mathbf{\Pi}\right) = \mathrm{Tr}\left(\mathbf{L}^{-T}\mathbf{L}^{-1}\mathbf{\Pi}\right) = \mathrm{Tr}\left(\mathbf{L}^{-1}\mathbf{\Pi}\mathbf{L}^{-T}\right)$ where $\mathbf{L}$ is the Cholesky decomposition of $\mathbf{\Sigma}$, so that $\mathbf{\Sigma}=\mathbf{L}\mathbf{L}^T$. Let $\mathbf{\Lambda}=\mathbf{L}^{-1}\mathbf{\Pi}\mathbf{L}^{-T}$; this matrix has the convenient property that the upper left $n \times n$ block only depends on the first $n$ functions $\mathbf{u}_{1:n}(\mathbf{x}) = (u_1(\mathbf{x}),\ldots,u_n(\mathbf{x}))^T$. This means the maximum of $\sum_{i=1}^{n} \Lambda_{ii}$ with respect to $\mathbf{u}_{1:n}(\mathbf{x})$ spans the first $n < N$ eigenfunctions. If we additionally mask the gradients of $\Lambda_{ii}$ so they are also independent of any $u_j(\mathbf{x})$ where $j$ is {\em less than} $i$:
\begin{equation}
\frac{\tilde{\partial} \Lambda_{ii}}{\partial u_{j}} =
\begin{cases}
\frac{\partial \Lambda_{ii}}{\partial u_{j}} & \text{if}\ i=j \\
0 & \text{otherwise}
\end{cases}
\label{eqn:modify_gradient}
\end{equation}
and combine the gradients for each $i$ into a single masked gradient $\tilde{\nabla}_{\mathbf{u}}\mathrm{Tr}(\mathbf{\Lambda}) = \sum_i \tilde{\nabla}_{\mathbf{u}} \Lambda_{ii} = (\frac{\partial \Lambda_{11}}{\partial u_1}, \ldots,\frac{\partial \Lambda_{NN}}{\partial u_N})$ which we use for gradient ascent, then this is equivalent to independently optimizing each $u_i(\mathbf{x})$ towards the objective $ \Lambda_{ii}$. Note that there is still nothing forcing all $\mathbf{u}(\mathbf{x})$ to be orthogonal. If we explicitly orthogonalize $\mathbf{u}(\mathbf{x})$ by multiplication by $\mathbf{L}^{-1}$, then we claim that the resulting $\mathbf{v}(\mathbf{x}) = \mathbf{L}^{-1}\mathbf{u}(\mathbf{x})$ will be the true ordered eigenfunctions of $\mathcal{K}$. A longer discussion justifying this is given in the supplementary material in Sec.~\ref{sec:gradient_derivation}. The closed form expression for the masked gradient, also derived in the supplementary material, is given by:
\begin{equation}
\tilde{\nabla}_{\mathbf{u}}\mathrm{Tr}(\mathbf{\Lambda}) = \mathbb{E}[k(\mathbf{x}, \mathbf{x}')\mathbf{u}(\mathbf{x})^T]\mathbf{L}^{-T}\mathrm{diag}(\mathbf{L})^{-1} - \mathbb{E}[\mathbf{u}(\mathbf{x})^T]\mathbf{L}^{-T}\mathrm{triu}\left(\mathbf{\Lambda}\mathrm{diag}(\mathbf{L})^{-1}\right)
\label{eqn:symmetry_broken_gradient_u}
\end{equation}
where $\mathrm{triu}$ and $\mathrm{diag}$ give the upper triangular and diagonal of a matrix, respectively. This gradient can then be passed as the error from $\mathbf{u}$ back to parameters $\theta$, yielding:
\begin{equation}
\tilde{\nabla}_{\theta}\mathrm{Tr}(\mathbf{\Lambda}) = \mathbb{E}\left[k(\mathbf{x}, \mathbf{x}')\mathbf{u}(\mathbf{x})^T\mathbf{L}^{-T}\mathrm{diag}(\mathbf{L})^{-1}\frac{\partial \mathbf{u}}{\partial \theta}\right] - \mathbb{E}\left[\mathbf{u}(\mathbf{x})^T\mathbf{L}^{-T}\mathrm{triu}\left(\mathbf{\Lambda}\mathrm{diag}(\mathbf{L})^{-1}\right)\frac{\partial \mathbf{u}}{\partial \theta}\right]
\end{equation}
To simplify notation we can express the above as
\begin{equation}
\tilde{\nabla}_{\theta}\mathrm{Tr}(\mathbf{\Lambda}) = \mathbb{E}\left[\mathbf{J}_{\mathbf{\Pi}}\left(\mathbf{L}^{-T}\mathrm{diag}(\mathbf{L})^{-1}\right)\right] - \mathbb{E}\left[\mathbf{J}_{\mathbf{\Sigma}}\left(\mathbf{L}^{-T}\mathrm{triu}\left(\mathbf{\Lambda}\mathrm{diag}(\mathbf{L})^{-1}\right)\right)\right]
\label{eqn:symmetry_broken_gradient}
\end{equation}
where $\mathbf{J}_{\Pi}(\mathbf{A}) = k(\mathbf{x}, \mathbf{x}')\mathbf{u}(\mathbf{x})^T \mathbf{A} \frac{\partial \mathbf{u}}{\partial \theta}$ and $\mathbf{J}_{\Sigma}(\mathbf{A}) = \mathbf{u}(\mathbf{x})^T \mathbf{A} \frac{\partial \mathbf{u}}{\partial \theta}$ are linear operators that denote left-multiplication of the Jacobian of $\mathbf\Pi$ and $\mathbf\Sigma$ with respect to $\theta$ by $\mathbf{A}$. A TensorFlow implementation of this gradient is given in the supplementary material in Sec.~\ref{sec:tf_code}.
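To complement that implementation, here is a NumPy sketch of the update specialized to the finite-dimensional case, where the features are an explicit matrix $\mathbf{U}$ and the kernel a symmetric matrix $\mathbf{A}$. Constant factors are absorbed into the step size, and the step size and iteration count are ad hoc choices for this toy problem; gradient ascent should then approximately recover the top eigenvalues in descending order.
\begin{lstlisting}
import numpy as np
rng = np.random.default_rng(0)
d, N = 30, 4
A = rng.standard_normal((d, d)); A = (A + A.T) / 2   # symmetric kernel matrix
U = rng.standard_normal((d, N))                      # one row per state, N features

for _ in range(10000):
    L = np.linalg.cholesky(U.T @ U)                  # Sigma = U^T U
    Li = np.linalg.inv(L)
    Lam = Li @ (U.T @ A @ U) @ Li.T                  # Lambda = L^-1 Pi L^-T
    Dinv = np.diag(1.0 / np.diag(L))
    grad = A @ U @ Li.T @ Dinv - U @ Li.T @ np.triu(Lam @ Dinv)  # masked gradient
    U += 0.1 * grad                                  # ascent: largest eigenvalues

V = U @ np.linalg.inv(np.linalg.cholesky(U.T @ U)).T # v = L^-1 u, orthonormal columns
print(np.diag(V.T @ A @ V))                          # approx. ordered top-N eigenvalues
print(np.linalg.eigvalsh(A)[::-1][:N])               # reference values from eigh
\end{lstlisting}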
\subsection{Bilevel Optimization}
The expression in Eq.~\ref{eqn:symmetry_broken_gradient} is a nonlinear function of multiple expectations, so naively replacing $\mathbf{\Pi}$, $\mathbf{\Sigma}$, $\mathbf{L}$, $\mathbf{\Lambda}$ and their gradients with empirical estimates will be biased. This makes learning Spectral Inference Networks more difficult than standard problems in machine learning for which unbiased gradient estimates are available. We can however reframe this as a {\em bilevel} optimization problem, for which convergent algorithms exist. Bilevel stochastic optimization is the problem of simultaneously solving two coupled minimization problems $\min_x f(\mathbf{x}, \mathbf{y})$ and $\min_y g(\mathbf{x}, \mathbf{y})$ for which we only have noisy unbiased estimates of the gradient of each: $\mathbb{E}[\mathbf{F}(\mathbf{x}, \mathbf{y})] = \nabla_{\mathbf{x}} f(\mathbf{x}, \mathbf{y})$ and $\mathbb{E}[\mathbf{G}(\mathbf{x}, \mathbf{y})] = \nabla_{\mathbf{y}} g(\mathbf{x}, \mathbf{y})$. Bilevel stochastic problems are common in machine learning and include actor-critic methods, generative adversarial networks and imitation learning \citep{pfau2016connecting}. It has been shown that by optimizing the coupled functions on two timescales then the optimization will converge to simultaneous local minima of $f$ with respect to $\mathbf{x}$ and $g$ with respect to $\mathbf{y}$ \citep{borkar1997stochastic}:
\begin{eqnarray}
\mathbf{x}_t & \leftarrow & \mathbf{x}_{t-1} - \alpha_t \mathbf{F}(\mathbf{x}_{t-1}, \mathbf{y}_{t-1}) \\
\mathbf{y}_t & \leftarrow & \mathbf{y}_{t-1} - \beta_t \mathbf{G}(\mathbf{x}_t, \mathbf{y}_{t-1})
\label{eqn:two_timescale}
\end{eqnarray}
where $\lim_{t\rightarrow \infty} \frac{\alpha_t}{\beta_t} = 0$, $\sum_t \alpha_t = \sum_t \beta_t = \infty$, $\sum_t \alpha_t^2 < \infty$, $\sum_t \beta_t^2 < \infty$.
By replacing $\mathbf{\Sigma}$ and $\mathbf{J}_{\mathbf{\Sigma}}$ with a moving average in Eq.~\ref{eqn:symmetry_broken_gradient}, we can cast learning Spectral Inference Networks as exactly this kind of bilevel problem. Throughout the remainder of the paper, let $\hat{\mathbf{X}}_t$ denote the empirical estimate of a random variable $\mathbf{X}$ from the minibatch at time $t$, and let $\bar{\mathbf{X}}_t$ represent the estimate of $\mathbf{X}$ from a moving average, so $\bar{\mathbf{\Sigma}}_t$ and $\bar{\mathbf{J}}_{\mathbf{\Sigma}_t}$ are defined as:
\begin{eqnarray}
\bar{\mathbf{\Sigma}}_t &\leftarrow &\bar{\mathbf{\Sigma}}_{t-1} - \beta_t(\bar{\mathbf{\Sigma}}_{t-1} - \mathbf{\hat{\Sigma}}_t) \\
\bar{\mathbf{J}}_{\mathbf{\Sigma}_t} &\leftarrow &\bar{\mathbf{J}}_{\mathbf{\Sigma}_{t-1}} - \beta_t(\bar{\mathbf{J}}_{\mathbf{\Sigma}_{t-1}} - \hat{\mathbf{J}}_{\mathbf{\Sigma}_t})
\end{eqnarray}
This moving average is equivalent to solving
\begin{equation}
\min_{\mathbf{\Sigma}, \mathbf{J}_{\mathbf{\Sigma}}} \frac{1}{2}\left(||\mathbf{\Sigma} - \bar{\mathbf{\Sigma}}_t||^2 + ||\mathbf{J}_{\mathbf{\Sigma}} -\bar{\mathbf{J}}_{\mathbf{\Sigma}_t} ||^2\right)
\end{equation}
by stochastic gradient descent and clearly has the true $\mathbf{\Sigma}$ and $\mathbf{J}_{\mathbf{\Sigma}}$ as a minimum for a fixed $\theta$. Note that Eq.~\ref{eqn:symmetry_broken_gradient} is a {\em linear} function of $\mathbf{\Pi}$ and $\mathbf{J}_{\mathbf{\Pi}}$, so plugging in $\hat{\mathbf{\Pi}}_t$ and $\hat{\mathbf{J}}_{\mathbf{\Pi}_t}$ gives an unbiased noisy estimate. By also replacing terms that depend on $\mathbf{\Sigma}$ and $\mathbf{J}_{\mathbf{\Sigma}}$ with $\bar{\mathbf{\Sigma}}_t$ and $\bar{\mathbf{J}}_{\mathbf{\Sigma}_t}$, then alternately updating the moving averages and $\theta_t$, we convert the problem into a two-timescale update. Here $\theta_t$ corresponds to $\mathbf{x}_t$, $\bar{\mathbf{\Sigma}}_t$ and $\bar{\mathbf{J}}_{\mathbf{\Sigma}_t}$ correspond to $\mathbf{y}_t$, $\tilde{\nabla}_{\mathbf{\theta}}\mathrm{Tr}(\mathbf{\Lambda}(\hat{\mathbf{\Pi}}_t, \bar{\mathbf{\Sigma}}_t, \hat{\mathbf{J}}_{\mathbf{\Pi}_t}, \bar{\mathbf{J}}_{\mathbf{\Sigma}_t}))$ corresponds to $\mathbf{F}(\mathbf{x}_t, \mathbf{y}_t)$ and $(\bar{\mathbf{\Sigma}}_{t-1} - \mathbf{\hat{\Sigma}}_t, \bar{\mathbf{J}}_{\mathbf{\Sigma}_{t-1}} - \hat{\mathbf{J}}_{\mathbf{\Sigma}_t})$ corresponds to $\mathbf{G}(\mathbf{x}_t, \mathbf{y}_t)$.
\begin{algorithm}[t]
\caption{Learning in Spectral Inference Networks}\label{alg:spin}
\begin{algorithmic}[1]
\State \textbf{given} symmetric kernel $k$, decay rates $\beta_t$, first order optimizer \textsc{Optim}
\State \textbf{initialize} parameters $\theta_0$, average covariance $\bar{\mathbf{\Sigma}}_0 = \mathbf{I}$, average Jacobian of covariance $\bar{\mathbf{J}}_{\mathbf{\Sigma}_0} = 0$
\While{not converged}
\State Get minibatches $\mathbf{x}_{t1}, \ldots, \mathbf{x}_{tN}$ and $\mathbf{x}'_{t1}, \ldots, \mathbf{x}'_{tN}$
\State $\hat{\mathbf{\Sigma}}_t = \frac{1}{2}\left(\frac{1}{N}\sum_{i}\mathbf{u}_{\theta_t}(\mathbf{x}_{ti})\mathbf{u}_{\theta_t}(\mathbf{x}_{ti})^T +\frac{1}{N}\sum_{i} \mathbf{u}_{\theta_t}(\mathbf{x}'_{ti})\mathbf{u}_{\theta_t}(\mathbf{x}'_{ti})^T\right)$, covariance of minibatches
\State $\hat{\mathbf{\Pi}}_t = \frac{1}{N}\sum_{i}k(\mathbf{x}_{ti}, \mathbf{x}'_{ti})\mathbf{u}_{\theta_t}(\mathbf{x}_{ti})\mathbf{u}_{\theta_t}( \mathbf{x}'_{ti})^T$
\State $\bar{\mathbf{\Sigma}}_t \gets (1-\beta_t) \bar{\mathbf{\Sigma}}_{t-1} + \beta_t\hat{\mathbf{\Sigma}}_t$
\State $\bar{\mathbf{J}}_{\mathbf{\Sigma}_t} \gets (1-\beta_t) \bar{\mathbf{J}}_{\mathbf{\Sigma}_{t-1}} + \beta_t\hat{\mathbf{J}}_{\mathbf{\Sigma}_t}$
\State $\bar{\mathbf{L}}_t \gets$ Cholesky decomposition of $\bar{\mathbf{\Sigma}}_t$
\State Compute gradient $\tilde{\nabla}_{\mathbf{\theta}}\mathrm{Tr}(\mathbf{\Lambda}(\hat{\mathbf{\Pi}}_t, \bar{\mathbf{\Sigma}}_t, \hat{\mathbf{J}}_{\mathbf{\Pi}_t}, \bar{\mathbf{J}}_{\mathbf{\Sigma}_t}))$ according to Eq.~\ref{eqn:symmetry_broken_gradient}
\State $\theta_t \gets \mathrm{\textsc{Optim}}(\theta_{t-1}, \tilde{\nabla}_{\theta}\mathrm{Tr}(\mathbf{\Lambda}(\hat{\mathbf{\Pi}}_t, \bar{\mathbf{\Sigma}}_t, \hat{\mathbf{J}}_{\mathbf{\Pi}_t}, \bar{\mathbf{J}}_{\mathbf{\Sigma}_t}))$
\EndWhile
\State \textbf{result} Eigenfunctions $\mathbf{v}_{\theta^*}(\mathbf{x})=\mathbf{L}^{-1}\mathbf{u}_{\theta^*}(\mathbf{x})$ of $\mathcal{K}[f](\mathbf{x}) = \mathbb{E}_{\mathbf{x}'}[k(\mathbf{x}, \mathbf{x}')f(\mathbf{x}')]$
\end{algorithmic}
\end{algorithm}
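For readers who prefer working code to pseudocode, here is a minimal numpy sketch of the bookkeeping in Alg.~\ref{alg:spin} (the covariance estimate, the moving average, the Cholesky factor, and the final whitening). The feature map and all shapes are hypothetical stand-ins, and the gradient step is omitted since it requires Eq.~\ref{eqn:symmetry_broken_gradient}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, N, D, beta = 3, 32, 2, 0.01

def u(theta, x):
    # hypothetical feature network; any parametric map to (N, K) works
    return np.tanh(x @ theta)

theta = rng.standard_normal((D, K))
sigma_bar = np.eye(K)

x1 = rng.standard_normal((N, D))     # minibatch x_t
x2 = rng.standard_normal((N, D))     # minibatch x'_t
u1, u2 = u(theta, x1), u(theta, x2)

# symmetrized covariance estimate of the two minibatches
sigma_hat = 0.5 * (u1.T @ u1 + u2.T @ u2) / N
# moving average of the covariance
sigma_bar = (1 - beta) * sigma_bar + beta * sigma_hat
# Cholesky factor used to orthonormalize the features
L = np.linalg.cholesky(sigma_bar)
# whitened eigenfunction estimates v = L^{-1} u
v1 = np.linalg.solve(L, u1.T).T
\end{verbatim}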
\clearpage
\begin{figure}
\captionsetup{justification=centering}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\textwidth]{hydrogen_exact}
\caption{Eigenvectors found by exact eigensolver on a grid}
\label{fig:schroedinger_exact}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\textwidth]{hydrogen_eigenvector_does_not_work.png}
\caption{Eigenfunctions found by SpIN without bias correction ($\beta=1$)}
\label{fig:schroedinger_small_batch_does_not_work}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\textwidth]{hydrogen_eigenvector_works.png}
\caption{Eigenfunctions found by SpIN with $\beta=0.01$ to correct for biased gradients}
\label{fig:schroedinger_small_batch_works}
\end{subfigure}
\begin{subfigure}{0.52\textwidth}
\centering
\includegraphics[width=\textwidth]{hydrogen_eigenvalues_does_not_work.png}
\caption{Eigenvalues without bias correction ($\beta=1$)}
\label{fig:schroedinger_eigenvalue_bad}
\end{subfigure}
~
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{hydrogen_eigenvalue_works.png}
\caption{Eigenvalues with bias correction ($\beta=0.01$)}
\label{fig:schroedinger_eigenvalue_good}
\end{subfigure}
\caption{Results of SpIN for solving two-dimensional hydrogen atom. \\ Black lines in (d) and (e) denote closed-form solution.}
\label{fig:schroedinger}
\end{figure}
\subsection{Defining Spectral Inference Networks}
We can finally combine all these elements to define what a Spectral Inference Network is. We consider a Spectral Inference Network to be any machine learning algorithm that:
\begin{enumerate}
\item Minimizes the objective in Eq.~\ref{eqn:stochastic_rayleigh} end-to-end by stochastic optimization
\item Performs the optimization over a parametric function class such as deep neural networks
\item Uses the modified gradient in Eq.~\ref{eqn:symmetry_broken_gradient} to impose an ordering on the learned features
\item Uses bilevel optimization to overcome the bias introduced by finite batch sizes
\end{enumerate}
The full algorithm for training Spectral Inference Networks is given in Alg.~\ref{alg:spin}, with TensorFlow pseudocode in the supplementary material in Sec.~\ref{sec:tf_code}. There are two things to note about this algorithm. First, we have to compute an explicit estimate $\hat{\mathbf{J}}_{\mathbf{\Sigma}_t}$ of the Jacobian of the covariance with respect to the parameters at each iteration. That means that if we are computing $N$ eigenfunctions, each step of training will require $N^2$ backward gradient computations. This will be a bottleneck in scaling the algorithm, but we found this approach to be more stable and robust than others. Second, while the theory of stochastic optimization depends on proper learning rate schedules, such schedules are rarely used in deep learning practice. Asymptotic convergence is usually less important than simply getting into the neighborhood of a local minimum, and even for bilevel problems, a careful choice of constant learning rates often suffices for good performance. We follow this practice in our experiments and pick constant values of $\alpha$ and $\beta$.
\section{Experiments}
\label{sec:experiments}
In this section we present empirical results on a quantum mechanics problem with a known closed-form solution, and an example of unsupervised feature learning from video without a generative model. We also provide experiments comparing our approach against the successor feature approach of \cite{machado2018eigenoption} for eigenpurpose discovery on the Arcade Learning Environment in Sec.~\ref{sec:atari} in the supplementary material for the interested reader. Code for the experiments in Sec.~\ref{sec:schroedinger} and \ref{sec:atari} is available at \url{https://github.com/deepmind/spectral_inference_networks}.
\subsection{Solving the Schr{\"o}dinger Equation}
\label{sec:schroedinger}
As a first experiment to demonstrate the correctness of the method on a problem with a known solution, we investigated the use of SpIN for solving the Schr{\"o}dinger equation for a two-dimensional hydrogen atom. The time-independent Schr{\"o}dinger equation for a single particle with mass $m$ in a potential field $V(\mathbf{x})$ is a partial differential equation of the form:
\begin{equation}
E\psi(\mathbf{x}) = \frac{-\hbar^2}{2m}\nabla^2\psi(\mathbf{x}) + V(\mathbf{x})\psi(\mathbf{x}) = \mathcal{H}[\psi](\mathbf{x})
\end{equation}
whose solutions describe the wavefunctions $\psi(\mathbf{x})$ with unique energy $E$. The probability of a particle being at position $\mathbf{x}$ then has the density $|\psi(\mathbf{x})|^2$. The solutions are eigenfunctions of the linear operator $\mathcal{H} \triangleq \frac{-\hbar^2}{2m}\nabla^2 + V(\mathbf{x})$ --- known as the {\em Hamiltonian} operator. We set $\frac{\hbar^2}{2m}$ to 1 and choose $V(\mathbf{x}) = -\frac{1}{|\mathbf{x}|}$, which corresponds to the attractive Coulomb potential of a charged particle. In 2 or 3 dimensions this can be solved exactly, and in 2 dimensions it can be shown that there are $2n+1$ eigenfunctions with energy $\frac{-1}{(2n+1)^2}$ for all $n=0,1,2,\ldots$ \citep{yang1991analytic}.
We trained a standard neural network to approximate the wavefunction $\psi(\mathbf{x})$, where each unit of the output layer was a solution with a different energy $E$. Details of the training network and experimental setup are given in the supplementary material in Sec.~\ref{sec:schroedinger_extra}. We found it critical to set the decay rate for RMSProp to be slower than the decay $\beta$ used for the moving average of the covariance in SpIN, and expect the same would be true for other adaptive gradient methods. To investigate the effect of biased gradients and demonstrate how SpIN can correct it, we specifically chose a small batch size for our experiments. As an additional baseline over the known closed-form solution, we computed eigenvectors of a discrete approximation to $\mathcal{H}$ on a $128\times 128$ grid.
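Such a grid baseline can be reproduced in a few lines with scipy; the box size, implicit Dirichlet boundary conditions and regularization of the Coulomb singularity below are our own illustrative choices, not the exact settings behind Fig.~\ref{fig:schroedinger_exact}:
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, R = 128, 50.0                       # 128 x 128 grid on [-R, R]^2
h = 2 * R / (n - 1)
x = np.linspace(-R, R, n)
X, Y = np.meshgrid(x, x)
V = -1.0 / np.maximum(np.sqrt(X**2 + Y**2), h)   # regularized Coulomb term

d1 = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / h**2
lap = sp.kron(sp.eye(n), d1) + sp.kron(d1, sp.eye(n))
H = -lap + sp.diags(V.ravel())         # discrete Hamiltonian, hbar^2/2m = 1

vals, vecs = spla.eigsh(H.tocsc(), k=9, which='SA')
print(vals)   # roughly -1, then -1/9 (x3), then -1/25 (x5)
\end{verbatim}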
Training results are shown in Fig.~\ref{fig:schroedinger}. In Fig.~\ref{fig:schroedinger_exact}, we see the circular harmonics that make up the electron orbitals of hydrogen in two dimensions. With a small batch size and no bias correction, the eigenfunctions (Fig.~\ref{fig:schroedinger_small_batch_does_not_work}) are incorrect and the eigenvalues (Fig.~\ref{fig:schroedinger_eigenvalue_bad}, ground truth in black) are nowhere near the true minimum. With the bias correction term in SpIN, we are able to both accurately estimate the shape of the eigenfunctions (Fig.~\ref{fig:schroedinger_small_batch_works}) and converge to the true eigenvalues of the system (Fig.~\ref{fig:schroedinger_eigenvalue_good}). Note that, as eigenfunctions 2-4 and 5-9 are nearly degenerate, any linear combination of them is also an eigenfunction, and we do not expect Fig.~\ref{fig:schroedinger_exact} and Fig.~\ref{fig:schroedinger_small_batch_works} to be identical. The high accuracy of the learned eigenvalues gives strong empirical support for the correctness of our method.
\subsection{Deep Slow Feature Analysis}
\label{sec:deep_sfa}
\begin{figure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\textwidth]{bouncing_balls_heatmap.png}
\caption{Heatmap of activation of each eigenfunction as a function of position of objects}
\label{fig:bouncing_balls_heatmap}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\textwidth]{bouncing_balls_best_frames.png}
\caption{Frames which most (top) and least (bottom) activate eigenfunction with heatmap outlined in green in Fig.~\ref{fig:bouncing_balls_heatmap}. Successive frames are overlaid in red and blue.}
\label{fig:bouncing_balls_best_frames}
\end{subfigure}
\caption{Results of Deep SFA on video of bouncing balls}
\label{fig:bouncing_balls}
\end{figure}
Having demonstrated the effectiveness of SpIN on a problem with a known closed-form solution, we now turn our attention to problems relevant to representation learning in vision. We trained a convolutional neural network to extract features from videos, using the Slow Feature Analysis kernel of Eq.~\ref{eqn:sfa_objective}. The video is a simple example with three bouncing balls. The velocities of the balls are constant until they collide with each other or the walls, meaning the time dynamics are reversible, and hence the transition function is a symmetric operator. We trained a model with 12 output eigenfunctions using similar decay rates to the experiments in Sec.~\ref{sec:schroedinger}. Full details of the training setup are given in Sec.~\ref{sec:deep_sfa_extra}, including training curves in Fig.~\ref{fig:bouncing_balls_curves}. During the course of training, the order of the different eigenfunctions often switched, as lower eigenfunctions sometimes took longer to fit than higher eigenfunctions.
Analysis of the learned solution is shown in Fig.~\ref{fig:bouncing_balls}. Fig.~\ref{fig:bouncing_balls_heatmap} is a heatmap showing whether the feature is likely to be positively activated (red) or negatively activated (blue) when a ball is in a given position. Since each eigenfunction is invariant to change of sign, the choice of color is arbitrary. Most of the eigenfunctions are encoding for the position of balls independently, with the first two eigenfunctions discovering the separation between up/down and left/right, and higher eigenfunctions encoding higher frequency combinations of the same thing. However, some eigenfunctions are encoding more complex joint statistics of position. For instance, one eigenfunction (outlined in green in Fig.~\ref{fig:bouncing_balls_heatmap}) has no clear relationship with the marginal position of a ball. But when we plot the frames that most positively or negatively activate that feature (Fig.~\ref{fig:bouncing_balls_best_frames}) we see that the feature is encoding whether all the balls are crowded in the lower right corner, or one is there while the other two are far away. Note that this is a fundamentally {\em nonlinear} feature, which could not be discovered by a shallow model. Higher eigenfunctions would likely encode for even more complex joint relationships. None of the eigenfunctions we investigated seemed to encode anything meaningful about velocity, likely because collisions cause the velocity to change rapidly, and thus optimizing for slowness of features is unlikely to discover this. A different choice of kernel may lead to different results.
\section{Discussion}
\label{sec:discussion}
We have shown that a single unified framework is able to compute spectral decompositions by stochastic gradient descent on domains relevant to physics and machine learning. This makes it possible to learn eigenfunctions over very high-dimensional spaces from very large datasets and generalize to new data without the Nystr{\"om} approximation. This extends work using slowness as a criterion for unsupervised learning without a generative model, and addresses an unresolved issue with biased gradients due to finite batch size. A limitation of the proposed solution is the requirement of computing full Jacobians at every time step, and improving the scaling of training is a promising direction for future research. The physics application presented here is on a fairly simple system, and we hope that Spectral Inference Nets can be fruitfully applied to more complex physical systems for which computational solutions are not yet available. The representations learned on video data show nontrivial structure and sensitivity to meaningful properties of the scene. These representations could be used for many downstream tasks, such as object tracking, gesture recognition, or faster exploration and subgoal discovery in reinforcement learning. Finally, while the framework presented here is quite general, the examples shown investigated only a small number of linear operators. Now that the basic framework has been laid out, there is a rich space of possible kernels and architectures to combine and explore.
\section{Introduction}
Since the celebrated Sz.Nagy dilation theorem showed that a contraction has an isometric dilation, people have been trying to generalize this beautiful result to several variables. Ando showed that a pair of commuting contractions has a commuting isometric dilation. However, a well-known example due to Parrott shows that a triple of commuting contractions may fail to have a commuting isometric dilation. Many studies consider what kind of extra conditions we need to guarantee a Sz.Nagy type dilation. There are two seemingly opposite directions that this paper aims to unify.
In one direction, Brehmer \cite{Brehmer1961} showed that if we put some extra conditions on the family of commuting contractions, then not only do they have an isometric dilation, but the isometric dilation actually satisfies a stronger condition known as regularity. Brehmer's result has recently been generalized to representations of any lattice ordered semigroup by the author \cite{BLi2014}.
In another direction, we may consider non-commutative variables. Suppose $T_1,\cdots,T_n$ are contractions that are not necessarily commuting. It is observed by Frazho-Bunce-Popescu \cite{Frazho1982,Bunce1984,Popescu1989} that if $T_1,\cdots,T_n$ forms a row contraction in the sense that $\sum_{i=1}^n T_i T_i^*\leq I$, then there exists an isometric dilation for $T_i$ that is a row contraction.
These two directions are seemingly unrelated. Indeed, one requires the commuting contractions to satisfy a stronger condition, whereas the other deals with non-commutative contractions. However, in this paper, I will show both results are special cases of $\ast$-regular dilation on graph products of $\mathbb{N}$.
Graph products of $\mathbb{N}$ are considered as important examples of quasi-lattice ordered semigroups in \cite{CrispLaca2002}. Isometric Nica-covariant representations on a quasi-lattice ordered group have been intensively studied in the past decade. However, contractive Nica-covariant representations have only recently been defined in the lattice ordered group case (see \cite{Fuller2013, DFK2014}). Lattice ordered groups are quite restrictive compared to quasi-lattice ordered groups, and many interesting quasi-lattice ordered semigroups (e.g. the free semigroup $\mathbb{F}_n^+$) are not lattice ordered. This leads to the question of which representations on a quasi-lattice ordered group have a minimal isometric Nica-covariant dilation. We answer this question for the special class of quasi-lattice ordered semigroups given by graph products of $\mathbb{N}$, and establish that having a minimal isometric Nica-covariant dilation is equivalent to being $\ast$-regular.
Popescu \cite{Popescu1999} showed that having a minimal isometric Nica-covariant dilation for a special class of operators is equivalent to a property which he calls the property (P). We extend Popescu's property (P) to the larger class of operators that correspond to representations of graph products of $\mathbb{N}$, and show that the property (P) holds whenever the representation is $\ast$-regular.
\section{Background}
Throughout this paper, an operator $T$ is understood as a bounded linear operator on a complex Hilbert space $\mathcal{H}$. It is called \emph{a contraction} if its operator norm $\|T\|\leq 1$, and an \emph{isometry} if $T^* T=I$. If there is a larger Hilbert space $\mathcal{K}\supseteq\mathcal{H}$, we say that an operator $W\in\bh{K}$ is a \emph{dilation} of $T$ if $P_\mathcal{H} W^n \big|_\mathcal{H} = T^n$ for all $n\geq 1$. A familiar result due to Sarason \cite{Sarason1966} states that in such case, $\mathcal{K}$ decomposes as $\mathcal{H}_-\oplus\mathcal{H}\oplus\mathcal{H}_+$, and with respect to this decomposition, $$W=\begin{bmatrix} * & 0 & 0 \\ * & T & 0 \\ * & * & * \end{bmatrix}.$$
In particular, we say that $W$ is an \emph{extension} of $T$ if $\mathcal{H}$ is invariant. In other words, $\mathcal{H}_+=\{0\}$, and with respect to $\mathcal{K}=\mathcal{H}_-\oplus\mathcal{H}$, $$W=\begin{bmatrix} * & 0 \\ * & T \end{bmatrix}.$$
Dually, $W$ is called a \emph{co-extension} for $T$ if $\mathcal{H}^\perp$ is invariant for $W$. In other words, $\mathcal{H}_-=\{0\}$, and with respect to $\mathcal{K}=\mathcal{H}\oplus\mathcal{H}^+$, $$W=\begin{bmatrix} T & 0 \\ * & * \end{bmatrix}.$$
The celebrated Sz.Nagy dilation theorem states that a contraction $T\in\bh{H}$ has an isometric co-extension. Moreover, the isometric co-extension $W$ can be chosen to be minimal in the sense that $$\mathcal{K}=\overline{span}\{W^k h:k\geq 0, h\in\mathcal{H}\}.$$
We call $W$ a \emph{minimal isometric dilation} for $T$.
There are several attempts to generalize Sz.Nagy's result to the multivariate context. Ando \cite{Ando1963} proved that a pair of commuting contractions $T_1,T_2$ has a commuting isometric dilation. However, the generalization to three commuting contractions fails, as Parrott \cite{Parrott1970} gave a counter-example where three commuting contractions fail to have a commuting isometric dilation. Given $n$ contractions $T_1,\cdots,T_n$, when can we find a certain kind of isometric dilation for them?
There are two approaches to this question I would like to discuss in this paper. The first one requires some extra conditions on the $T_i$. Brehmer \cite{Brehmer1961} first considered the question of when the $T_i$ have a stronger version of isometric dilation, now known as regular dilation. It has since been studied by many authors \cite{Halperin1962, SFBook, Gaspar1997}. It has been generalized to product systems \cite{Solel2008, Shalit2010}, and more recently, to any lattice ordered semigroup \cite{BLi2014}. For $n=(n_1,\cdots,n_k)\in\mathbb{Z}^k$, denote $T^n=\prod_{i=1}^k T_i^{n_i}$. Also denote $n^+=(n_1^+,\cdots,n_k^+)$ and $n^-=(n_1^-,\cdots,n_k^-)$, where $x^+=\max\{0,x\}$ and $x^-=\max\{0,-x\}$. It is clear that $n=n^+-n^-$.
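For instance, if $k=2$ and $n=(2,-1)$, then $n^+=(2,0)$ and $n^-=(0,1)$, so that $T^{n^+}=T_1^2$ and $T^{*n^-}=T_2^*$; the regularity condition in Definition \ref{df.regular} below then reads $T_2^* T_1^2=P_\mathcal{H} W_2^{*} W_1^2 \big|_\mathcal{H}$.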
\begin{definition}\label{df.regular} An isometric dilation $(W_i)$ for $(T_i)$ is called \emph{regular} if it has an additional property that for any $n\in\mathbb{Z}^k$, $$T^{*n^-} T^{n^+}=P_\mathcal{H} W^{*n^-} W^{n^+} \big|_\mathcal{H}.$$
Dually, it is called \emph{$\ast$-regular} if for any $n\in\mathbb{Z}^k$, $$T^{n^+}T^{*n^-} =P_\mathcal{H} W^{*n^-} W^{n^+} \big|_\mathcal{H}.$$
\end{definition}
For every subset $J\subset\{1,\cdots,n\}$, we denote $T_J=\prod_{j\in J} T_j$. Brehmer shows a necessary and sufficient condition for $T_i$ to have a regular (or $\ast$-regular) dilation:
\begin{theorem}[Brehmer \cite{Brehmer1961}]\label{thm.Brehmer} A commuting $n$-tuple of contractions $T_1,\cdots,T_n$ has a regular dilation if and only if for every $V\subset\{1,2,\cdots,k\}$, we have,
\begin{equation}\label{eq.Brehmer.regular}
\sum_{U\subset V} (-1)^{|U|} T_U^* T_U \geq 0
\end{equation}
Here $|U|$ denotes the cardinality of $U$.
Dually, $T_i$ has a $\ast$-regular dilation if and only if for every $V\subset\{1,2,\cdots,k\}$, we have,
\begin{equation}\label{eq.Brehmer}
\sum_{U\subset V} (-1)^{|U|} T_U T_U^* \geq 0
\end{equation}
\end{theorem}
\begin{remark} Due to Theorem \ref{thm.Brehmer}, the family $(T_i)$ has a regular dilation if and only if $(T_i^*)$ has a $\ast$-regular dilation. To make the notation more consistent with the row contraction condition in the Frazho-Popescu-Bunce dilation, we shall mostly consider the $\ast$-regular dilation from now on.
\end{remark}
Condition \pref{eq.Brehmer.regular} is a much stronger condition than the usual contractive condition that we require for an isometric dilation. For example, given two commuting contractions $T_1, T_2$, Ando's theorem always gives an isometric dilation for this pair. However, to have a $\ast$-regular dilation, Brehmer's condition is equivalent to saying that $I-T_1T_1^*-T_2T_2^* +T_1T_2T_2^* T_1^*\geq 0$. This can be false for many pairs of commuting contractions.
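As a quick numerical illustration (with matrices of our own choosing), the following numpy snippet checks this positivity condition for two commuting pairs: a diagonal pair, for which the expression factors as $(I-T_1T_1^*)(I-T_2T_2^*)\geq 0$ and so the condition always holds, and a commuting non-normal pair for which it fails:
\begin{verbatim}
import numpy as np

def is_psd(A, tol=1e-10):
    return np.min(np.linalg.eigvalsh(0.5 * (A + A.conj().T))) >= -tol

def brehmer_2(T1, T2):
    # I - T1 T1* - T2 T2* + (T1 T2)(T1 T2)* for a commuting pair
    I = np.eye(T1.shape[0])
    return (I - T1 @ T1.conj().T - T2 @ T2.conj().T
              + (T1 @ T2) @ (T1 @ T2).conj().T)

print(is_psd(brehmer_2(np.diag([0.9, 0.3]), np.diag([0.5, 0.8]))))  # True
N = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent contraction, N @ N = 0
print(is_psd(brehmer_2(N, N)))           # False: no *-regular dilation
\end{verbatim}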
The other approach to generalizing Sz.Nagy's dilation goes in the opposite direction to Brehmer's. Instead of adding more conditions to commuting $T_i$, we can replace the commutativity condition by the row contractive condition, which states that $\sum_{i=1}^n T_i T_i^*\leq I$. Here, the $T_i$ are no longer required to commute. It is now known as the Frazho-Bunce-Popescu dilation that if $T_1,\cdots,T_n$ is row contractive, then it can be dilated to row contractive isometries $V_1,\cdots,V_n$.
Row contractive isometries (often called row isometries) are a well-studied subject in the field of operator algebras. For example, the $C^*$-algebra generated by a row isometry is the well-known Cuntz-Toeplitz algebra. If we require further that $\sum_{i=1}^n V_i V_i^*=I$, its $C^*$-algebra is the renowned Cuntz algebra. The $WOT$-closed non-self-adjoint algebra generated by a row isometry is a free semigroup algebra.
At first glance, Brehmer's result is seemingly unrelated to the Frazho-Bunce-Popescu dilation. Indeed, Brehmer's result is related to representations of the commutative semigroup $\mathbb{N}^k$ while the Frazho-Bunce-Popescu dilation is related to representations of the non-commutative free semigroup $\mathbb{F}_k^+$. Though regular dilation has been generalized to a larger class of semigroups called lattice ordered semigroups, the lattice order is a very restrictive requirement. For example, the free semigroup $\mathbb{F}_n^+$, and most quasi-lattice ordered semigroups, are not lattice ordered.
This paper makes a first attempt to define regular and $\ast$-regular dilations on a larger class of semigroups, known as graph products of $\mathbb{N}$. It is also called the graph semigroup, or the right-angled Artin monoid (see \cite{CrispLaca2002, Eilers2016}). It is an important class of quasi-lattice ordered semigroups, recently studied in \cite{CrispLaca2002} for its Nica-covariant representations. The main result of this paper generalizes both Brehmer's theorem and the Frazho-Bunce-Popescu dilation to this context.
Throughout this paper, we let $\Gamma$ denote a countable simple graph with vertex set $\Lambda$ and edge set $E(\Gamma)$. In other words, the vertex set $\Lambda$ is a countable set, and every edge $e\in E(\Gamma)$ corresponds to two distinct vertices $i,j\in\Lambda$. Every edge is undirected, and we say that $i$ \emph{is adjacent to} $j$ if there is an edge $e=(i,j)\in E(\Gamma)$.
The graph product of $\mathbb{N}$, $P_\Gamma=\Gamma_{i\in\Lambda} \mathbb{N}$, is defined to be the unital semigroup generated by $\{e_i\}_{i\in\Lambda}$, with the additional rule that $e_i,e_j$ commute whenever $i$ is adjacent to $j$.
A representation $T:P_\Gamma\to\bh{H}$ is uniquely determined by its value on the set of generators $T_i=T(e_i)$ that satisfies $T_iT_j=T_jT_i$ whenever $i$ is adjacent to $j$. We extend the definition of regularity (see Definition \ref{df.regular1}) to representations on $P_\Gamma$ and show that $T$ is $\ast$-regular if and only if for every finite subset $W\subseteq\Lambda$:
\begin{equation}\label{eq.main1}
\sum_{\substack{U\subseteq W \\ U\mbox{ is a clique}}} (-1)^{|U|} T_U T_U^*\geq 0.
\end{equation}
Here, a set $U\subseteq\Lambda$ is called a \emph{clique} if vertices in $U$ are pairwise adjacent to one another. This unifies Brehmer's regular dilation on $\mathbb{N}^k$ and Frazho-Bunce-Popescu's dilation of row contractions. Indeed, when $\Gamma$ is the complete graph, every subset $U\subseteq W$ is a clique. In such case, Condition \pref{eq.main1} is the same as the $\ast$-regular Condition \pref{eq.Brehmer} in Brehmer's result. When $\Gamma$ contains no edge, the only cliques in $\Gamma$ are the singletons. In such case, Condition \pref{eq.main1} is equivalent to saying that $\sum_{i\in W} T_i T_i^*\leq I$, and thus that we have a row contraction as in the Frazho-Bunce-Popescu dilation.
We further consider the question of when a representation of the graph product semigroup $P_\Gamma$ has a minimal isometric Nica-covariant dilation. According to \cite{Gaspar1997}, a pair of commuting contractions has an isometric Nica-covariant dilation if and only if it is $\ast$-regular. Frazho-Bunce-Popescu's result also shows a row contraction can be dilated to isometries with pairwise orthogonal ranges, which corresponds to an isometric Nica-covariant dilation on the free semigroup. We show that the $\ast$-regular condition is equivalent to having an isometric Nica-covariant dilation.
To summarize, we establish the following equivalence:
\begin{theorem}\label{thm.main.all} Let $T:P_\Gamma\to\bh{H}$ be a representation. Then the following are equivalent:
\begin{enumerate}
\item $T$ is $\ast$-regular,
\item $T$ has a minimal isometric Nica-covariant dilation,
\item $T$ satisfies Condition \pref{eq.main1} for every finite subset $W\subseteq\Lambda$.
\end{enumerate}
\end{theorem}
\section{Graph Products}
Fix a simple graph $\Gamma$ with a countable vertex set $\Lambda$. Recall that a graph product of $\mathbb{N}$ is a unital semigroup $P_\Gamma=\Gamma_{i\in\Lambda}\mathbb{N}$, generated by generators $\{e_i\}_{i\in\Lambda}$ where $e_i,e_j$ commute whenever $i,j$ are adjacent in $\Gamma$. We also call $P_\Gamma$ the graph semigroup or the right-angled Artin monoid. It is also closely related to the Cartier-Foata monoid \cite{Heaps} where $e_i, e_j$ commute whenever $i,j$ are not adjacent.
We can similarly define the graph product of $\mathbb{Z}$, $G_\Gamma=\Gamma_{i\in\Lambda} \mathbb{Z}$. It is defined to be the free product of $\mathbb{Z}$ modulo the rule that elements in the $i$-th and $j$-th copies of $\mathbb{Z}$ commute whenever $(i,j)$ is an edge of $\Gamma$. $G_\Gamma$ is a group, which is also called the graph group or the right-angled Artin group. $G_\Gamma$ together with $P_\Gamma$ is an important example of a quasi-lattice ordered group that is studied by Crisp and Laca \cite{CrispLaca2002}.
\begin{example}[Examples of Graph Products]\label{ex.graphprod}
\begin{enumerate}
\item Consider the complete graph $\Gamma$ that contains every possible edge $(i,j)$, $i\neq j$. The graph product $\Gamma_{i\in\Lambda} \mathbb{N}$ is equal to the abelian semigroup $\mathbb{N}^k$, since any two generators $e_i,e_j$ commute.
\item Consider the graph $\Gamma$ that contains no edges. The graph product $P_\Gamma=\Gamma_{i\in\Lambda} \mathbb{N}$ is equal to the free semigroup $\mathbb{F}_k^+$.
\item Consider the following graph product associated with the graph in Figure \ref{fg.1}.
\begin{figure}[h]
\begin{tikzpicture}[scale=0.75]
\draw [line width=1pt] (-1,-1) -- (-1,1);
\draw [line width=1pt] (-1,-1) -- (1,-1);
\draw [line width=1pt] (1,1) -- (1,-1);
\draw [line width=1pt] (1,1) -- (-1,1);
\draw [line width=1pt] (1,1) -- (-1,-1);
\node at (-1,1) {$\bullet$};
\node at (1,-1) {$\bullet$};
\node at (-1,-1) {$\bullet$};
\node at (1,1) {$\bullet$};
\node at (-1.35,1) {1};
\node at (1.35,1) {2};
\node at (-1.35,-1) {4};
\node at (1.35,-1) {3};
\end{tikzpicture}
\caption{A simple graph of 4 vertices}
\label{fg.1}
\end{figure}
The graph product semigroup is a unital semigroup generated by $4$ generators $e_1,\cdots,e_4$, where the commutation relation is dictated by the edges of the graph. In this example, $e_i,e_j$ pairwise commute except for the pair $e_1, e_3$.
\end{enumerate}
\end{example}
A typical element of $P_\Gamma$ is equal to $x=x_1x_2\cdots x_n$, where each $x_i$ belongs to a certain copy of $\mathbb{N}$. We often call $x$ an element of the semigroup, and $x_1x_2\cdots x_n$ an expression of $x$. Each $x_i$ is called \emph{a syllable} of this expression. We shall denote by $I(x_i)$ the index of $\mathbb{N}$ to which $x_i$ belongs. There might be many equivalent forms for $x$. First of all, $x$ may be rewritten using fewer syllables: if $I(x_i)=I(x_{i+1})$, we may simply set $x_i'=x_ix_{i+1}$ and write $x$ as $x_1\cdots x_{i-1} x_i' x_{i+2}\cdots x_n$. In particular, if $x_i=e$, we can always treat this $e$ as the identity for $I(x_{i+1})$ (or $I(x_{i-1})$), and thus $I(x_i)=I(x_{i+1})$. This process of merging two adjacent syllables from the same copy of $\mathbb{N}$ is called \emph{an amalgamation}.
If $I(x_i)$ is adjacent to $I(x_{i+1})$ in the graph $\Gamma$, the commutation rule implies $x_ix_{i+1}=x_{i+1}x_i$, and thus $x$ can also be written as $$x_1\cdots x_{i-1}x_{i+1}x_i x_{i+2}\cdots x_n.$$ This process of switching two adjacent syllables whose corresponding copies of $\mathbb{N}$ commute is called \emph{a shuffle}. We call two elements $x=x_1\cdots x_n$ and $y=y_1\cdots y_m$ \emph{shuffle equivalent} if we can obtain $y$ by repeatedly shuffling $x$.
An expression $x=x_1\cdots x_n$ is called \emph{reduced} if it cannot be shuffled to another expression $x'$ which admits an amalgamation. Similar definitions of amalgamation and shuffle can be made for the graph group $\Gamma_{i\in\Lambda} \mathbb{Z}$.
\begin{lemma}\label{lm.reduce} An expression $x=x_1\cdots x_n$ is reduced ($x$ can be in either $P_\Gamma$ or $G_\Gamma$) if and only if for all $i<j$ such that $I(x_i)=I(x_j)$, there exists an $i<t<j$ so that $I(x_t)$ is not adjacent to $I(x_i)$.
\end{lemma}
The idea is that when $I(x_i)=I(x_j)$, as long as everything between $x_i$ and $x_j$ commutes with $x_i$ and $x_j$, we can shuffle $x_j$ to be adjacent to $x_i$ and amalgamate the two. It is observed in \cite{Green1990} that reduced expressions are shuffle equivalent:
\begin{theorem}[Green \cite{Green1990}]\label{thm.shuffle} If $x=x_1\cdots x_n=x_1'\cdots x_m'$ are two reduced expressions for $x\in G_\Gamma$ (or $P_\Gamma$), then the two expressions are shuffle equivalent. In particular $m=n$.
\end{theorem}
This allows us to define \emph{the length of an element} $x$ to be $\ell(x)=n$, when $x$ has a reduced expression $x_1\cdots x_n$.
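Lemma \ref{lm.reduce} translates directly into a reduction procedure. The following sketch (a hypothetical, illustrative implementation of our own) represents a word as a list of (vertex, exponent) syllables and merges the nearest same-vertex pair whenever every syllable strictly between them is adjacent to that vertex:
\begin{verbatim}
def reduce_word(word, edges):
    # word: list of (vertex, positive exponent) pairs; edges: set of frozensets
    syl = [list(s) for s in word]
    changed = True
    while changed:
        changed = False
        for i in range(len(syl)):
            v = syl[i][0]
            for j in range(i + 1, len(syl)):
                if syl[j][0] != v:
                    continue
                # merge if everything strictly between is adjacent to v
                if all(frozenset((v, s[0])) in edges for s in syl[i+1:j]):
                    syl[i][1] += syl[j][1]
                    del syl[j]
                    changed = True
                break   # the nearest occurrence decides; restart the scan
            if changed:
                break
    return [tuple(s) for s in syl]

# the graph of Figure 1: all pairs adjacent except {1, 3}
E = {frozenset(p) for p in [(1, 2), (2, 3), (3, 4), (1, 4), (2, 4)]}
print(reduce_word([(1, 1), (2, 1), (4, 1), (1, 1)], E))  # [(1, 2), (2, 1), (4, 1)]
print(reduce_word([(1, 1), (3, 1), (1, 1)], E))          # already reduced
\end{verbatim}
Checking only the nearest same-vertex occurrence suffices: if a non-adjacent syllable blocks the pair $(i,j)$, the same syllable blocks $(i,j')$ for every $j'>j$.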
Given a reduced expression $x=x_1\cdots x_n$, a syllable $x_i$ is called \emph{an initial syllable} if $x$ can be shuffled as $x=x_ix_1\cdots x_{i-1}x_{i+1}\cdots x_n$. Equivalently, it means the vertex $I(x_i)$ is adjacent to every previous vertex $I(x_j)$, $j<i$. The vertex $I(x_i)$ of an initial syllable is called \emph{an initial vertex}. The following lemma is partially taken from \cite[Lemma 2.3]{CrispLaca2002}.
\begin{lemma}\label{lm.initial} Let $x=x_1\cdots x_n$ be a reduced expression. Then,
\begin{enumerate}
\item If $i\neq j$ and $x_i, x_j$ are two initial syllables, then $I(x_i)\neq I(x_j)$.
\item The initial vertices of $x$ are pairwise adjacent.
\item Let $J=\{i: x_i\mbox{ is an initial syllable}\}$. Then $x=\prod_{j\in J} x_j \prod_{j\notin J} x_j$, where the second product is taken in the same order as in the original expression.
\end{enumerate}
\end{lemma}
\begin{proof} If $I(x_i)=I(x_j)$ for some $i<j$ in a reduced expression, then by Lemma \ref{lm.reduce} there has to be an index $i<t<j$ so that $I(x_t)$ is not adjacent to $I(x_i)=I(x_j)$. Therefore, it is impossible to shuffle $x_j$ to the front, and hence any two initial syllables have different vertices.
Suppose $x_i,x_j$ are two initial syllables with $i<j$. To shuffle $x_j$ to the front, it must be the case that $x_j$ commutes with $x_i$, and thus $I(x_i)$ is adjacent to $I(x_j)$. This shows that the initial vertices are pairwise adjacent.
Now let $J=\{j_1<j_2<\cdots<j_m\}$, where $j_1=1$, be the set of all indices $i$ for which $x_i$ is an initial syllable. Then, we can recursively shift each $x_{j_s}$ to the front. The result is that we can shuffle all the initial syllables to the front as $\prod_{j\in J} x_j$, while all the other syllables are multiplied subsequently in the original order.
\end{proof}
Lemma \ref{lm.initial} shows that the initial vertices are pairwise adjacent and thus form a clique of the graph $\Gamma$.
Lemma \ref{lm.initial} allows us to further divide a reduced expression of $x$ into blocks. Given a reduced expression $x=x_1\cdots x_n$, we define the first block $b_1$ of $x$ to be the product of all initial syllables. Since any two initial syllables commute, there is no ambiguity in the order of this product. We simply write $b_1=\prod_{j\in J} x_j$, where $J=\{i: x_i\mbox{ is an initial syllable}\}$. Since $x_1$ is always an initial syllable, $J\neq\emptyset$ and $b_1\neq e$.
Now $x=b_1 x^{(1)}$, where $x^{(1)}$ has strictly shorter length compared to $x$. We can define the second block $b_2$ of $x$ to be the first block of $x^{(1)}$ when $x^{(1)}\neq e$. Of course, if $x^{(1)}=e$, we are finished since $x=b_1$. Repeat this process, and let each $x^{(t)}=b_{t+1} x^{(t+1)}$, where $b_{t+1}$ is the first block of $x^{(t)}$. Since the length of $x^{(t)}$ is always strictly decreasing, we eventually reach a state when $x^{(m-1)}=b_m x^{(m)}$ and $x^{(m)}=e$. In such case, $x$ is written as a product of $m$ blocks $x=b_1b_2\cdots b_{m}$. Here, each $b_j$ is the first block of $b_jb_{j+1}\cdots b_m$. We call this a block representation of $x$. We shall denote by $I_t(x)$ the set of vertices of the syllables in the $t$-th block $b_t$; in particular, $I_1(x)$ is the set of initial vertices of $x$.
Since any two reduced expressions are shuffle equivalent, it is easy to see this block representation is unique.
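Continuing the sketch above (same hypothetical encoding of words as (vertex, exponent) pairs), the block representation can be extracted by repeatedly peeling off the initial syllables of a reduced word, i.e., those whose vertex is adjacent to the vertex of every earlier syllable:
\begin{verbatim}
def block_decomposition(syl, edges):
    # syl: a *reduced* word as (vertex, exponent) pairs; edges: set of frozensets
    blocks = []
    while syl:
        block, rest, seen = [], [], []
        for v, m in syl:
            if all(frozenset((v, w)) in edges for w in seen):
                block.append((v, m))   # initial syllable: shuffles to the front
            else:
                rest.append((v, m))
            seen.append(v)
        blocks.append(block)
        syl = rest
    return blocks

E = {frozenset(p) for p in [(1, 2), (2, 3), (3, 4), (1, 4), (2, 4)]}
# in the graph of Figure 1, e_1 e_2 e_3 has block representation (e_1 e_2)(e_3)
print(block_decomposition([(1, 1), (2, 1), (3, 1)], E))
\end{verbatim}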
\begin{lemma}\label{lm.block} Let a reduced expression $x=x_1\cdots x_n$ have a block representation $b_1\cdots b_m$
\begin{enumerate}
\item\label{lm.block1} Two adjacent $I_t(x), I_{t+1}(x)$ are disjoint.
\item\label{lm.block2} For any vertex $\lambda_2\in I_{t+1}(x)$, there exists another vertex $\lambda_1\in I_t(x)$ so that $\lambda_1,\lambda_2$ are not adjacent.
\end{enumerate}
\end{lemma}
\begin{proof}
For \pref{lm.block1}, if $I_t(x),I_{t+1}(x)$ share some common vertex $\delta$, then the syllable corresponding to $\delta$ in the $(t+1)$-th block can be shuffled to the front of the $(t+1)$-th block, and since $\delta\in I_t(x)$, this syllable commutes with all syllables in the $t$-th block. Therefore, it can be amalgamated into the $t$-th block, contradicting the assumption that the expression is reduced.
For \pref{lm.block2}, suppose otherwise; then we can pick a vertex $\lambda_2\in I_{t+1}(x)$ that is adjacent to every vertex in $I_t(x)$. The syllable corresponding to $\lambda_2$ can be shuffled to the front of the $(t+1)$-th block, and commutes with everything in the $t$-th block. Therefore, it must be an initial syllable for $b_tb_{t+1}\cdots b_m$. But in that case, $\lambda_2\in I_t(x)$, and it cannot be in $I_{t+1}(x)$ by \pref{lm.block1}, a contradiction.
\end{proof}
Studying regular dilations often requires a deep understanding of elements of the form $x^{-1}y$ for $x,y$ from the semigroup.
\begin{lemma}\label{lm.remove.ini} Let $x,y\in P_\Gamma$. Then, there exists $u,v\in P_\Gamma$ with $x^{-1}y=u^{-1}v$, and $I_1(u)$ is disjoint from $I_1(v)$. Moreover, $u,v$ are unique.
\end{lemma}
\begin{proof} Suppose that there exists a vertex $\lambda\in I_1(x)\bigcap I_1(y)$. Then we can find initial syllables $e_\lambda^{m_1}$ and $e_\lambda^{m_2}$ from reduced expressions of $x,y$. We may without loss of generality assume that $x_1=e_\lambda^{m_1}$ and $y_1=e_\lambda^{m_2}$.
Set $u_1=e_\lambda^{-\min\{m_1,m_2\}} x$ and $v_1=e_\lambda^{-\min\{m_1,m_2\}} y$. We have the relation $u_1^{-1} v_1=x^{-1}y$. Notice that at least one of $x_1$ and $y_1$ is removed in this process, and thus the total length $\ell(u_1)+\ell(v_1)$ is strictly less than $\ell(x)+\ell(y)$. Repeat this process whenever $I_1(u_j)\bigcap I_1(v_j)\neq \emptyset$, and recursively define $u_{j+1},v_{j+1}$ in the same manner to keep $u_j^{-1} v_j=u_{j+1}^{-1} v_{j+1}$. Since the total length $\ell(u_j)+\ell(v_j)$ is strictly decreasing in the process, we eventually stop in a state where $I_1(u_j)$ is disjoint from $I_1(v_j)$. This gives the desired $u=u_j$, $v=v_j$.
Suppose that $u^{-1}v=s^{-1} t$ for some other $s,t\in P_\Gamma$ with $I_1(s)\bigcap I_1(t)=\emptyset$. Let reduced expressions for $u,v,s,t$ be,
\begin{align*}
u &= u_1\cdots u_m \\
v &= v_1\cdots v_n \\
s &= s_1\cdots s_l \\
t &= t_1\cdots t_r
\end{align*}
We first show that $u^{-1}v=u_m^{-1}\cdots u_1^{-1} v_1 \cdots v_n$ is a reduced expression in $G_\Gamma$, and so is $s^{-1}t=s_l^{-1} \cdots s_1^{-1} t_1 \cdots t_r$. Assume otherwise; then by Lemma \ref{lm.reduce}, there exist two syllables from the same vertex that commute with everything in between. One of these two syllables must come from $u$ and the other from $v$, since $u_1\cdots u_m$ and $v_1\cdots v_n$ are both reduced. Let $u_i, v_j$ be two such syllables from the same vertex commuting with everything in between. In that case, by Lemma \ref{lm.initial}, $u_i$ and $v_j$ are initial syllables for $u$ and $v$ respectively. But $u,v$ have no common initial vertex, which is a contradiction.
Therefore, $u_m^{-1}\cdots u_1^{-1} v_1 \cdots v_n=s_l^{-1} \cdots s_1^{-1} t_1 \cdots t_r$ are both reduced expressions for $u^{-1}v=s^{-1}t$, and thus by Theorem \ref{thm.shuffle} are shuffle equivalent. Notice each individual syllable $u_i, v_i, s_i, t_i$ is from the graph semigroup. To shuffle from $u_m^{-1}\cdots u_1^{-1} v_1 \cdots v_n$ to $s_l^{-1} \cdots s_1^{-1} t_1 \cdots t_r$, each $s_i^{-1}$ must be some $u_j^{-1}$, and $t_i$ must be some $v_j$. Therefore, $v_1\cdots v_n$ must be a shuffle of $t_1\cdots t_r$, and also $u_1\cdots u_m$ is a shuffle of $s_1\cdots s_l$. Hence, $s=u,t=v$. \end{proof}
\begin{lemma}\label{lm.comm} Suppose $u,v\in\Gamma_{i\in\Lambda} \mathbb{N}$. Then the following are equivalent:
\begin{enumerate}
\item\label{lm.comm1} $u,v$ commute.
\item\label{lm.comm2} Every syllable $v_j$ of $v$ commutes with $u$.
\end{enumerate}
\end{lemma}
\begin{proof}
\pref{lm.comm2}$\Longrightarrow$\pref{lm.comm1} is trivial. Assuming \pref{lm.comm1}, let $v=v_1\cdots v_m$. Consider the first syllable $v_1$ of $v$. Since $uv=vu$, $v_1$ is an initial syllable of $uv$. Therefore, $v_1$ commutes with $u$. By canceling $v_1$, one can observe that $v_2\cdots v_m$ also commutes with $u$, and recursively each $v_j$ commutes with $u$. \end{proof}
\begin{lemma}\label{lm.ini} Suppose $p\in P_\Gamma$, $\lambda\in\Lambda$ so that $\lambda\notin I_1(p)$ and $e_\lambda$ does not commute with $p$. Let $x,y\in P_\Gamma$ and apply the procedure in Lemma \ref{lm.remove.ini} to repeatedly remove common initial vertices of $e_\lambda x$ and $py$ until $(e_\lambda x)^{-1} py=u^{-1}v$ with $I_1(u)\bigcap I_1(v)=\emptyset$. Then $u,v$ do not commute.
\end{lemma}
\begin{proof} Let $p=p_1\cdots p_n$ be a reduced expression of $p$. By Lemma \ref{lm.comm}, there exists a smallest $i$ so that $e_\lambda$ does not commute with $p_i$. We first observe that none of $p_1,\cdots,p_{i-1}$ come from the vertex $\lambda$. Otherwise, if some $p_s$ came from the vertex $\lambda$, it would commute with every one of $p_1,\cdots,p_{i-1}$ as $e_\lambda$ does. Therefore, $p_s$ would be an initial syllable and $\lambda\in I_1(p)$, which contradicts our assumption.
Let $p_i$ be a syllable corresponding to vertex $\lambda'$, where $\lambda'$ is certainly not adjacent to $\lambda$.
Consider the procedure of removing a common initial vertex for $u_0=e_\lambda x$ and $v_0=py$. At each step, we removed a common initial vertex $\lambda_i$ for $u_i, v_i$ and obtained $u_{i+1}^{-1} v_{i+1}=u_i^{-1} v_i$, until we reach $u_m=u,v_m=v$ that shares no common initial vertex. It is clear that $\lambda\notin I_1(v_0)$ and $\lambda'\notin I_1(u_0)$.
Observe that $\lambda_0\neq \lambda'$, since $\lambda_0\in I_1(e_\lambda x)$ while $\lambda'$ cannot be an initial vertex of $e_\lambda x$. Therefore, the syllable $p_i$ remains in $v_1$ after the first elimination step, while no syllable before $p_i$ belongs to the vertex $\lambda$. Hence, $\lambda\notin I_1(v_1)$ and $\lambda'\notin I_1(u_1)$. Inductively, $\lambda\notin I_1(v_j)$ and $\lambda'\notin I_1(u_j)$, and thus $e_\lambda$ is still an initial syllable of $u$ and $p_i$ is still a syllable of $v$. Therefore, $u,v$ do not commute. \end{proof}
\section{Completely Positive Definite Kernels}
The problem of finding an isometric dilation turns out to be equivalent to showing that a certain kernel satisfies a so-called completely positive definite condition. Structures of completely positive definite kernels are studied in \cite{Popescu1996,Popescu1999b}, and we shall restate some of the results to our context.
Let $P$ be a unital semigroup sitting inside a group $G$ so that $P\bigcap P^{-1}=\{e\}$. For our purpose, the unital semigroup is taken to be a graph product $P_\Gamma=\Gamma_{i\in\Lambda} \mathbb{N}$, which lives naturally inside $G_\Gamma=\Gamma_{i\in\Lambda} \mathbb{Z}$. A \emph{unital Toeplitz kernel} on $P$ is a map $K:P\times P\to\bh{H}$ with the property that $K(e,e)=I$, $K(p,q)=K(q,p)^*$, and $K(ap,aq)=K(p,q)$ for all $a,p,q\in P$.
We call such a kernel \emph{completely positive definite} if for any $p_1,\cdots,p_n\in P$ and $h_1,\cdots,h_n\in\mathcal{H}$, we have $$\sum_{i,j=1}^n \left\langle K(p_i, p_j) h_j, h_i\right\rangle\geq 0.$$
Equivalently, this is saying that the $n\times n$ operator matrix $\left[K(p_i,p_j)\right]$, viewed as an operator on $\mathcal{H}^n$, is positive. Alternatively, each unital Toeplitz kernel $K$ corresponds to a map $\tilde{K}:P^{-1}P\to\bh{H}$, where $\tilde{K}(p^{-1} q)=K(p,q)$ and $\tilde{K}(x^{-1})=\tilde{K}(x)^*$. We shall abbreviate unital completely positive definite Toeplitz kernel as completely positive definite kernel.
Existence of a completely positive definite kernel is closely related to the existence of an isometric dilation. A classical result known as the Naimark dilation theorem \cite{Naimark1943} can be restated as the following theorem (\cite[Theorem 3.2]{Popescu1999b}):
\begin{theorem}\label{thm.Naimark} If $K$ is a completely positive definite kernel on a unital semigroup $P$, then there exists a Hilbert space $\mathcal{K}\supset\mathcal{H}$ and an isometric representation $V:P\to\bh{K}$ so that $$K(p,q)=P_\mathcal{H} V(p)^* V(q)\big|_\mathcal{H} \mbox{ for all }p,q\in P.$$
Moreover, $V$ can be taken as minimal in the sense that
$$\overline{\lspan}\{V(p)h: p\in P,h\in\mathcal{H}\}=\mathcal{K}.$$
The minimal isometric representation $V$ is unique up to unitary equivalence.
\end{theorem}
Notice that in Theorem \ref{thm.Naimark}, if we set $p=e$, we get $K(e,q)=P_\mathcal{H} V(q)\big|_\mathcal{H}$. Assume now that $T:P\to\bh{H}$ is a contractive representation. If we can find a completely positive definite kernel $K$ so that $K(e,q)=T(q)$ for all $q\in P$, then Theorem \ref{thm.Naimark} gives us an isometric representation $V$ so that $T(q)=P_\mathcal{H} V(q)\big|_\mathcal{H}$. In other words, $V$ is an isometric dilation for $T$. Therefore, we reach the following conclusion:
\begin{corollary} Let $T:P\to\bh{H}$ be a contractive representation, for which there exists a completely positive definite kernel $K$ so that $K(e,q)=T(q)$. Then $T$ has an isometric dilation $V:P\to\bh{K}$, which can be taken as minimal in the sense that $$\overline{\lspan}\{V(p)h: p\in P,h\in\mathcal{H}\}=\mathcal{K}.$$
In particular, each $V(p)$ is a co-extension of $T(p)$.
\end{corollary}
Such a kernel $K$ may not always exist. Indeed, if $P=\mathbb{N}^3$, let $T$ send the three generators to the three commuting contractions as in Parrott's example \cite{Parrott1970}. Such $T$ can never have an isometric dilation and thus there is no completely positive definite kernel $K$ so that $K(e,q)=T(q)$. Even when $T$ has an isometric dilation, $K$ may be extremely hard to define explicitly.
Let us now turn our attention to contractive representations on a graph product $P_\Gamma=\Gamma_{i\in\Lambda} \mathbb{N}$. This semigroup is the free semigroup generated by $e_1,\cdots,e_n$ with additional rules that $e_i e_j=e_j e_i$ whenever $(i,j)\in E(\Gamma)$. Therefore, a representation $T$ of $P_\Gamma$ is uniquely determined by its values on generators $T_i=T(e_i)$, where they have to satisfy $T_i T_j=T_j T_i$ whenever $(i,j)\in E(\Gamma)$.
Let us fix a contractive representation $T:P_\Gamma\to\bh{H}$. We start by finding an appropriate completely positive definite kernel for $T$. Suppose an isometric regular dilation $V$ for $T$ exists. Then Theorem \ref{thm.Naimark} implies that $K(p,q)=P_\mathcal{H} V(p)^* V(q)\big|_\mathcal{H}$. Therefore, if $I_1(p)\bigcap I_1(q)\neq\emptyset$, then $p=e_j p'$ and $q=e_j q'$ both start with a syllable in the same copy of $\mathbb{N}$. But since $V$ is isometric, $$V(p)^*V(q)=V(p')^*V(e_j)^*V(e_j)V(q')=V(p')^*V(q').$$
Therefore, it suffices to first consider the case that $I_1(p)\bigcap I_1(q)=\emptyset$. Otherwise, Lemma \ref{lm.remove.ini} gives $u,v$ so that $I_1(u)\bigcap I_1(v)=\emptyset$ and $u^{-1}v=p^{-1}q$. In such case, we shall define $K(p,q)=K(u,v)$.
\begin{definition}\label{df.kernel} Given a contractive representation $T$ of the graph product $\Gamma_{i\in\Lambda} \mathbb{N}$, we define \emph{the Toeplitz kernel $K$ associated with $T$} using the following rules:
\begin{enumerate}
\item\label{df.proc1} $K(p,q)=T(q) T(p)^*$ whenever $I_1(p)\bigcap I_1(q)=\emptyset$ and $p,q$ commute.
\item\label{df.proc2} $K(p,q)=0$ whenever $I_1(p)\bigcap I_1(q)=\emptyset$ and $p,q$ do not commute.
\item\label{df.proc3} If there exists a vertex $i\in I_1(p)\bigcap I_1(q)$, let $p=e_i p'$, $q=e_i q'$, and define $K(p,q)=K(p',q')$.
\end{enumerate}
\end{definition}
\begin{remark} We may observe that since $I_1(e)=\emptyset$ and $e$ commutes with any $q$, we have $K(e,q)=T(q)$ by \pref{df.proc1}. Therefore, if $K$ is completely positive definite, the isometric Naimark dilation $V$ will be a dilation for $T$.
\end{remark}
\begin{remark} It follows from Lemma \ref{lm.remove.ini} that one can recursively remove common initial vertices from $p,q$ using \pref{df.proc3}, until we end up with unique $u,v$ with $u^{-1}v=p^{-1}q$ and $I_1(u)\bigcap I_1(v)=\emptyset$. Therefore, Definition \ref{df.kernel} is well-defined for all pairs of $p,q$.
\end{remark}
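To fix the three rules of Definition \ref{df.kernel}, consider the graph of Figure \ref{fg.1}. There, $K(e_1,e_2)=T_2T_1^*$ by \pref{df.proc1}, since $I_1(e_1)\bigcap I_1(e_2)=\emptyset$ and $e_1,e_2$ commute; $K(e_1,e_3)=0$ by \pref{df.proc2}, since $e_1,e_3$ do not commute; and $K(e_2e_1,e_2e_3)=K(e_1,e_3)=0$, where \pref{df.proc3} first cancels the common initial syllable $e_2$.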
One can verify that the kernel $K$ is indeed a Toeplitz kernel. In fact, it satisfies a stronger property.
\begin{lemma}\label{lm.toeplitz} If $p,q,x,y\in P_\Gamma$ satisfies $p^{-1}q=x^{-1}y$, then $K(p,q)=K(x,y)$.
\end{lemma}
\begin{proof} Repeatedly removing common initial vertices for the pairs $p,q$ and $x,y$ using the procedure in Lemma \ref{lm.remove.ini}, we end up with $p^{-1}q=u^{-1}v$ and $x^{-1}y=s^{-1}t$, where $u,v$ have no common initial vertex and $s,t$ have no common initial vertex. Then, $K(p,q)=K(u,v)$ and $K(x,y)=K(s,t)$. By Lemma \ref{lm.remove.ini}, $u=s$, $t=v$. Therefore, $K(p,q)=K(x,y)$.
\end{proof}
\begin{definition}\label{df.regular1} We say that $T$ is \emph{$\ast$-regular} if the Toeplitz kernel $K$ associated with $T$ as defined in Definition \ref{df.kernel} is completely positive definite. A Naimark dilation $V$ for this kernel $K$ is called a $\ast$-regular dilation for $T$. Dually, we say that $T$ is \emph{regular} if $T^\ast$ is $\ast$-regular. Here, $T^*(e_i)=T(e_i)^*$.
\end{definition}
\begin{remark} Our definition of regular dilation is slightly different from Brehmer's. When the graph semigroup is the abelian semigroup $\mathbb{N}^k$, Brehmer defined $T$ to be regular if a kernel $K^*$ is completely positive definite, where $K^*$ is the Toeplitz kernel obtained by replacing Condition \pref{df.proc1} by $K^*(p,q)=T(p)^*T(q)$. In general, the kernel $K^*$ is different from the kernel we defined in Definition \ref{df.kernel}. However, it turns out that when the semigroup is the abelian semigroup $\mathbb{N}^k$, our definition of regular dilation (Definition \ref{df.regular1}) coincides with Brehmer's definition (Definition \ref{df.regular}).
However, on a general graph semigroup, it is hard to characterize when the kernel $K^*$ is completely positive definite. For example, when the graph $\Gamma$ contains no edge and the graph semigroup corresponds to the free semigroup, the only way that $p,q$ commute with $I_1(p)\bigcap I_1(q)=\emptyset$ is when at least one of $p,q$ is $e$. Therefore, in such a case, $K^*=K$ and $K^*$ is completely positive definite whenever $K$ is.
Our definition of regular dilation implies there are isometric dilations for $T_i^*$ and thus co-isometric extensions for $T_i$. This coincides with the literature on the dilation of row contractions: for example, dilations for column contractions considered by Bunce \cite{Bunce1984} can be thought of as regular dilations on the free semigroup $\mathbb{F}_k^+$.
\end{remark}
The $\ast$-regular representations are precisely those with a certain minimal Naimark dilation due to Theorem \ref{thm.Naimark}.
\begin{theorem} $T:P_\Gamma\to\bh{H}$ is $\ast$-regular if and only if it has a minimal isometric Naimark dilation $V:P_\Gamma\to\bh{K}$ so that for all $p,q\in P_\Gamma$, $K(p,q)=P_\mathcal{H} V(p)^* V(q)\big|_\mathcal{H}$.
\end{theorem}
\begin{remark} Given a representation $T:P_\Gamma\to\bh{H}$, there might be kernels different from the kernel we defined in Definition \ref{df.kernel} that are also completely positive definite. For example, it is pointed out in \cite{Opela2006} that when $\Gamma$ is acyclic, $T$ always has a unitary dilation. By restricting to $\mathcal{H}$, such a unitary dilation defines a completely positive definite kernel that is generally different from the kernel we defined. Popescu \cite{Popescu1999b} has also considered many ways to construct completely positive definite kernels on the free semigroup.
\end{remark}
The goal of the next two sections is to provide a necessary condition for $\ast$-regularity of a contractive representation of a graph semigroup, which turns out to be also a sufficient condition. We draw our inspiration from two special cases where the graph is the complete graph and where the graph is the empty graph.
\begin{example}\label{ex.kernel.brehmer} In the case when $\Gamma$ is a complete graph on $k$ vertices, the graph semigroup $P_\Gamma$ is simply the abelian semigroup $\mathbb{N}^k$. It forms a lattice ordered semigroup. Each element in this semigroup can be written as a $k$-tuple $(a_1,\cdots,a_k)$. Since this semigroup is abelian, the set of initial vertices is precisely $\{i:a_i\neq 0\}$.
Two elements $p=(p_i),q=(q_i)$ have disjoint initial vertex sets if and only if at least one of $p_i,q_i$ is zero for all $i$. In the terminology of the lattice order, this implies the greatest lower bound $p\wedge q=e$. As first defined in \cite{Brehmer1961}, a representation $T:\mathbb{N}^k\to\bh{H}$ is called $\ast$-regular if the kernel $K(p,q)$ is completely positive definite.
Brehmer's result (Theorem \ref{thm.Brehmer}) shows that $K$ is completely positive definite if and only if for every subset $V\subseteq \{1,2,\cdots,k\}$, $$\sum_{U\subseteq V} (-1)^{|U|} T_U T_U^*\geq 0.$$
Here $|U|$ is the cardinality of $U$, and $T_U=\prod_{i\in U} T(e_i)$ with the convention that $T_\emptyset=I$.
\end{example}
\begin{example}\label{ex.kernel.popescu} In the case when $\Gamma$ is a graph on $k$ vertices with no edges, the graph semigroup $\Gamma_{i\in\Lambda} \mathbb{N}$ is simply the free semigroup $\mathbb{F}_k^+$. Fix a contractive representation $T:\mathbb{F}_k^+\to\bh{H}$, which is uniquely determined by its value on generators $T_i=T(e_i)$. The Toeplitz kernel associated with $T$ defined in Definition \ref{df.kernel} is the same as the kernel considered in \cite{Popescu1996, Popescu1999b}, where it is shown that $K$ is completely positive definite if and only if $T$ is row contractive in the sense that $$I-\sum_{i=1}^k T_i T_i^*\geq 0.$$
It turns out the minimal Naimark dilation for $K$ in this case is also a row contraction, and thus proves the Frazho-Bunce-Popescu dilation.
\end{example}
Inspired by both Example \ref{ex.kernel.brehmer} and \ref{ex.kernel.popescu}, our first main result unifies the Brehmer's dilation and the Frazho-Bunce-Popescu dilation. Recall that a set of vertices $U\subseteq\Lambda$ is called a clique if the subgraph induced on $U$ is a complete subgraph.
\begin{theorem}\label{thm.main} Let $T$ be a contractive representation of a graph semigroup $P_\Gamma$. Then, $T$ is $\ast$-regular if for every finite subset $W\subseteq\Lambda$,
\begin{equation}\label{eq.main}
\sum_{\substack{U\subseteq W \\ U\mbox{ is a clique}}} (-1)^{|U|} T_U T_U^*\geq 0.
\end{equation}
\end{theorem}
\begin{remark} Condition \pref{eq.main} coincides with the conditions in both Examples \ref{ex.kernel.brehmer} and \ref{ex.kernel.popescu}. Indeed, when $\Gamma$ is a complete graph, any $U\subseteq W$ is a clique. When $\Gamma$ contains no edge, the only nonempty cliques in $\Gamma$ are the singletons $\{i\}$.
\end{remark}
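Condition \pref{eq.main} is also straightforward to test numerically. The sketch below (illustrative scalar contractions on the graph of Figure \ref{fg.1}; all helper names are our own) enumerates the cliques contained in $W$ and forms the alternating sum:
\begin{verbatim}
import numpy as np
from itertools import combinations

def cliques(vertices, edges):
    # all subsets of vertices (including the empty set) that are cliques
    for r in range(len(vertices) + 1):
        for U in combinations(vertices, r):
            if all(frozenset(p) in edges for p in combinations(U, 2)):
                yield U

def condition_main(T, edges, W):
    d = next(iter(T.values())).shape[0]
    S = np.zeros((d, d))
    for U in cliques(tuple(W), edges):
        TU = np.eye(d)
        for i in U:            # order within a clique is immaterial
            TU = TU @ T[i]
        S += (-1) ** len(U) * TU @ TU.T
    return S

E = {frozenset(p) for p in [(1, 2), (2, 3), (3, 4), (1, 4), (2, 4)]}
for t in (0.5, 0.9):
    T = {i: t * np.eye(2) for i in (1, 2, 3, 4)}
    S = condition_main(T, E, [1, 2, 3, 4])
    print(t, np.min(np.linalg.eigvalsh(S)) >= -1e-12)  # 0.5: True, 0.9: False
\end{verbatim}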
\section{Technical Lemmas}
Since we are dealing with positive definiteness of operator matrices, the following lemma, taken from \cite[Lemma 14.13]{NestAlgebra}, is extremely useful.
\begin{lemma}\label{lm.Davidson} If an operator matrix $\begin{bmatrix} A & B^* \\ B & C\end{bmatrix}\in\mathcal{B}(\mathcal{H}_1\oplus\mathcal{H}_2)$ is positive, then there exists an operator $X:\mathcal{H}_1\to\mathcal{H}_2$ so that $B=XA^{1/2}$. Moreover, if $B$ has this form, then the operator matrix is positive if and only if $C\geq X X^*$.
\end{lemma}
\begin{lemma}\label{lm.tech1} Let $X,L\in\bh{H}$ and $X\geq 0$. Define an $n\times n$ operator matrix $$A_n=
\begin{bmatrix}
X & XL^* & XL^{*2} & \cdots & XL^{*(n-1)} \\
LX & X & XL^* & \cdots & XL^{*(n-2)} \\
L^2X & LX & X & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & XL^* \\
L^{n-1}X & L^{n-2}X & \cdots & LX & X
\end{bmatrix}.$$
If $LXL^*\leq X$, then every $A_n$ is positive.
\end{lemma}
\begin{proof} Assuming $LXL^*\leq X$, we shall inductively show each $A_n$ is positive. The base case $n=1$ holds since $A_1=X\geq 0$ by assumption. Suppose $A_n\geq 0$, and rewrite $A_{n+1}$ as $$A_{n+1}=\left[
\begin{array}{ccccc|c}
& & & & & XL^{*n} \\
& & & & & XL^{*(n-1)} \\
& & A_n & & & \vdots \\
& & & & & \vdots \\
& & & & & XL^* \\ \hline
L^n X & L^{n-1}X & \cdots & \cdots & LX & X \\
\end{array}\right].$$
Now notice that the row operator $[L^nX,\cdots,LX]=[0,\cdots,0,L] A_n$. Therefore, by Lemma \ref{lm.Davidson}, $A_{n+1}\geq 0$ if $$[0,\cdots,0,L] A_n \begin{bmatrix} 0 \\ \vdots \\ 0 \\ L^*\end{bmatrix} \leq X.$$
Expanding the left-hand side gives $LXL^*$, so the condition is exactly $LXL^*\leq X$.
\end{proof}
\begin{corollary}\label{cor.tech1} The matrices $A_n$ defined in Lemma \ref{lm.tech1} are all positive if and only if $A_1=X\geq 0$ and $A_2\geq 0$.
\end{corollary}
\begin{proof} Indeed, $A_2=\begin{bmatrix} X & X^{1/2}\left(LX^{1/2}\right)^* \\ \left(LX^{1/2}\right) X^{1/2} & X\end{bmatrix}\geq 0$ if and only if $X\geq 0$ and $\left(LX^{1/2}\right)\left(LX^{1/2}\right)^*=LXL^*\leq X$ by Lemma \ref{lm.Davidson}. This is sufficient for every $A_n\geq 0$ by Lemma \ref{lm.tech1}.
\end{proof}
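A quick numerical sanity check of Lemma \ref{lm.tech1} and Corollary \ref{cor.tech1} (random illustrative matrices; the scaling loop is just a crude way to enforce the hypothesis $LXL^*\leq X$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, n = 3, 6
A = rng.standard_normal((d, d))
X = A @ A.T                                   # X >= 0
L = rng.standard_normal((d, d))
while np.min(np.linalg.eigvalsh(X - L @ X @ L.T)) < 0:
    L *= 0.9                                  # shrink until L X L^T <= X

P = [np.linalg.matrix_power(L, k) for k in range(n)]
blocks = [[P[i - j] @ X if i >= j else X @ P[j - i].T for j in range(n)]
          for i in range(n)]
An = np.block(blocks)
print(np.min(np.linalg.eigvalsh(0.5 * (An + An.T))))  # >= 0 up to roundoff
\end{verbatim}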
We now turn our attention to the contractive representation $T$ of a graph semigroup $P_\Gamma=\Gamma_{i\in\Lambda} \mathbb{N}$. Throughout this section, we fix such a representation $T$ and its associated Toeplitz kernel $K$ defined in Definition \ref{df.kernel}. For two finite subsets $F_1,F_2\subset P_\Gamma$, where $F_1=\{p_1,\cdots,p_m\}$ and $F_2=\{q_1,\cdots,q_n\}$, we denote $K[F_1,F_2]$ to be the $m\times n$ operator matrix, whose $(i,j)$-entry is equal to $K(p_i,q_j)$. When $F_1=F_2$, we simply write $K[F_1]=K[F_1,F_1]$. Recall $K$ is completely positive definite if and only if for all finite subsets $F\subseteq P_\Gamma$, $K[F]\geq 0$. If $F$ is a collection of elements that may contain duplicates, we may similarly define $K[F]$. It turns out duplicated elements will not affect the positivity of $K[F]$.
\begin{lemma}\label{lm.rep} Let $F=\{p_1,p_1,p_2,\cdots,p_m\}$ and $F_1=\{p_1, p_2,\cdots,p_m\}$. Then $K[F]\geq 0$ if and only if $K[F_1]\geq 0$.
\end{lemma}
\begin{proof} Denote $F_2=\{p_2,\cdots,p_m\}$. We have,
$$K[F]=\left[\begin{array}{c|cc}
I & I & K[p_1, F_2] \\ \hline
I & I & K[p_1,F_2] \\
K[F_2,p_1] & K[F_2,p_1] & K[F_2] \end{array} \right].$$
Here, the lower right corner is $K[F_1]$.
By Lemma \ref{lm.Davidson}, $K[F]\geq 0$ if and only if $K[F_2,p_1] K[p_1, F_2]\leq K[F_2]$. By Lemma \ref{lm.Davidson} again, this happens if and only if $K[F_1]\geq 0$.
\end{proof}
\begin{lemma}\label{lm.add.left} Let $F_1=\{p_1,\cdots,p_m\}$ and $F_2=\{q_1,\cdots,q_n\}$, and fix a vertex $\lambda\in\Lambda$ so that $\lambda$ is not an initial vertex for any of the $p_i$, together with an integer $s\geq 1$. Let $D_s(\lambda,F_1)$ be the diagonal $m\times m$ operator matrix whose $i$-th diagonal entry is equal to $T(e_\lambda)^s$ if $e_\lambda$ commutes with $p_i$ and $0$ otherwise. Then, $K[F_1,e_\lambda^s \cdot F_2]=D_s(\lambda,F_1)\cdot K[F_1,F_2]$.
\end{lemma}
\begin{proof} It suffices to prove that $K(p_i, e_\lambda^k q_j)=T(e_\lambda)^k K(p_i,q_j)$ when $e_\lambda$ commutes with $p_i$, and that $K(p_i, e_\lambda^k q_j)=0$ otherwise.
Assume first that $e_\lambda$ commutes with $p_i$. Then $p_i^{-1} e_\lambda^k q_j = e_\lambda^k p_i^{-1} q_j$. A key observation is that in this case $p_i$ contains no syllable from the vertex $\lambda$: since $e_\lambda$ commutes with every syllable of $p_i$, a syllable of $p_i$ from the vertex $\lambda$ would have to be an initial syllable, which contradicts our selection of $p_i$.
Repeatedly removing common initial vertices of $p_i,q_j$ using Lemma \ref{lm.remove.ini}, we end up with $p_i^{-1} q_j=u^{-1}v$, where $u,v$ have no common initial vertex. It follows from Definition \ref{df.kernel} that $K(p_i,q_j)=K(u,v)$. Notice that every vertex in $I_1(e_\lambda^k v)$ is either $\lambda$ or a vertex in $I_1(v)$ that is adjacent to $\lambda$. Moreover, we observed that $\lambda\notin I_1(u)$, and $I_1(u)\bigcap I_1(v)=\emptyset$. Therefore, $I_1(e_\lambda^k v)\bigcap I_1(u)=\emptyset$.
Suppose $u,v$ commute. Since $e_\lambda$ commutes with every syllable of $p_i$, it also commutes with $u$, and thus $p_i^{-1} e_\lambda^k q_j=e_\lambda^k u^{-1} v=u^{-1} e_\lambda^k v$. Therefore, by Lemma \ref{lm.toeplitz}, $K(p_i, e_\lambda^k q_j)=K(u, e_\lambda^k v)$. Hence, in this case,
$$K(u, e_\lambda^k v)=T(e_\lambda)^k T(v) T(u)^*=T(e_\lambda)^k K(u,v).$$
If $u,v$ do not commute, then $e_\lambda^k v$ does not commute with $u$ either, and therefore $K(u,v)=K(u,e_\lambda^k v)=0$, so that both sides vanish.
Assume now that $e_\lambda$ does not commute with $p_i$, and consider the procedure of removing common initial syllables of $p_i$ and $e_\lambda^k q_j$. Since $\lambda$ is not an initial vertex of $p_i$, no step cancels any part of $e_\lambda^k$; instead, each step removes a common initial syllable of $p_i$ and $q_j$, and any such syllable, being an initial syllable of $e_\lambda^k q_j$ at a vertex other than $\lambda$, commutes with $e_\lambda$. Eventually, we end up with $p_i^{-1} e_\lambda^k q_j= u^{-1} e_\lambda^k v$, where $u$ and $e_\lambda^k v$ share no common initial vertex.
By Lemma \ref{lm.comm}, some syllable in $p_i$ does not commute with $e_\lambda$. Since all the cancelled syllables commute with $e_\lambda$, there has to be some syllable in the leftover $u$ that does not commute with $e_\lambda$. Therefore, $u$ and $e_\lambda^k v$ do not commute, and hence $K(p_i, e_\lambda^k q_j)=K(u,e_\lambda^k v)=0$. \end{proof}
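For a concrete illustration of the lemma, suppose $\Lambda=\{1,2,3\}$ where only the vertices $1,2$ are adjacent, and take $F_1=\{e_1,e_3\}$, $F_2=\{e\}$ and $\lambda=2$. Then $e_2$ commutes with $e_1$ but not with $e_3$, so $D(\lambda,F_1)=\operatorname{diag}\left(T(e_2)^k,0\right)$ and indeed
$$K[F_1,e_2^k\cdot F_2]=\begin{bmatrix} K(e_1,e_2^k) \\ K(e_3,e_2^k)\end{bmatrix}
=\begin{bmatrix} T(e_2)^k T(e_1)^* \\ 0\end{bmatrix}
=D(\lambda,F_1)\begin{bmatrix} T(e_1)^* \\ T(e_3)^*\end{bmatrix}
=D(\lambda,F_1)\, K[F_1,F_2].$$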
As an immediate corollary,
\begin{corollary}\label{cor.tech.F} Let $F=\{p_1,\cdots,p_n\}$ be a finite subset of $P_\Gamma$, and let $\lambda\in\Lambda$ be a vertex that is not an initial vertex for any of the $p_i$. For every $m\geq 0$, denote $F_m=\bigcup_{j=0}^m e_\lambda^j \cdot F$. Then $K[F_m]\geq 0$ if and only if $K[F]\geq 0$ and $K[F_1]\geq 0$.
\end{corollary}
\begin{proof} For each $i\leq j$, $K[e_\lambda^i F, e_\lambda^j F]=K[F,e_\lambda^{j-i} F]$. Let $D=D(\lambda,F)$ be the $n\times n$ diagonal operator matrix whose $(i,i)$-entry is $T(e_\lambda)$ if $e_\lambda$ commutes with $p_i$ and $0$ otherwise. It follows from Lemma \ref{lm.add.left} that $K[F,e_\lambda^{j-i} F]=D^{j-i}K[F]$. Similarly, for each $i>j$, $$K[e_\lambda^i F, e_\lambda^j F]=K[e_\lambda^j F, e_\lambda^i F]^*=K[F]D^{*(i-j)}.$$
Therefore, ordering the blocks of $F_m$ as $e_\lambda^m F, e_\lambda^{m-1} F,\cdots,e_\lambda F, F$,
$$K[F_m]=
\begin{bmatrix}
K[F] & K[F]D^* & K[F]D^{*2} & \cdots & K[F]D^{*m} \\
DK[F] & K[F] & K[F] D^* & \cdots & K[F]D^{*(m-1)} \\
D^2K[F] & DK[F] & K[F] & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & K[F]D^* \\
D^m K[F] & D^{m-1}K[F] & \cdots & DK[F] & K[F]
\end{bmatrix}.$$
This matrix has exactly the form treated in Corollary \ref{cor.tech1} with $X=K[F]$ and $L=D$, so $K[F_m]\geq 0$ if and only if $K[F]\geq 0$ and $K[F_1]\geq 0$.
\end{proof}
\begin{lemma}\label{lm.reduction} Let $F_1=\{p_1,\cdots,p_n\}$, $F_2=\{q_1,\cdots,q_m\}$ be finite subsets of $P_\Gamma$, and let $\lambda\in\Lambda$ be a vertex that is not an initial vertex for any of the $p_i$ or $q_j$. Suppose that $e_\lambda$ commutes with every $q_j$, but not with any $p_i$. Denote,
\begin{align*}
F_0 &= F_1\bigcup F_2 \\
F &= e_\lambda\cdot\left(F_1\bigcup F_2\right)\bigcup\left(F_1\bigcup F_2\right) \\
&= e_\lambda F_0\bigcup F_0 \\
F' &= e_\lambda\cdot F_2\bigcup F_1\bigcup F_2
\end{align*}
Then, $K[F]\geq 0$ if and only if $K[F']\geq 0$.
\end{lemma}
\begin{proof} Let $D$ denote the $m\times m$ diagonal operator matrix whose diagonal entries are all $T(e_\lambda)$. Repeatedly applying Lemma \ref{lm.add.left}, and ordering the blocks of $F$ as $e_\lambda F_1, e_\lambda F_2, F_1, F_2$, we obtain
\begin{equation*}
K[F] = \begin{bmatrix}
K[F_1] & K[F_1,F_2] & 0 & K[F_1,F_2] D^* \\
K[F_2,F_1] & K[F_2] & 0 & K[F_2] D^* \\
0 & 0 & K[F_1] & K[F_1,F_2] \\
D K[F_2,F_1] & DK[F_2] & K[F_2,F_1] & K[F_2]
\end{bmatrix}.
\end{equation*}
Denote the upper left $2\times 2$ corner by $X= \begin{bmatrix} K[F_1] & K[F_1,F_2] \\ K[F_2,F_1] & K[F_2]\end{bmatrix}$. It is clear that $X=K[F_0]$. Let $L$ be the $(n+m)\times (n+m)$ diagonal operator matrix whose first $n$ diagonal entries are $0$ and whose remaining $m$ diagonal entries are $T(e_\lambda)$. Then the lower left $2\times 2$ corner can be written as $LX$, and $K[F]=\begin{bmatrix} X & XL^* \\ LX & X\end{bmatrix}$.
By Lemma \ref{lm.Davidson} (cf.\ Corollary \ref{cor.tech1}), $K[F]\geq 0$ if and only if $X=K[F_0]\geq 0$ and $LXL^*\leq X$. Explicitly writing out $X-LXL^*$, we get,
\begin{equation}\label{eq.reduction1}
X-LXL^* = \begin{bmatrix} K[F_1] & K[F_1,F_2] \\ K[F_2,F_1] & K[F_2] - DK[F_2]D^* \end{bmatrix}.
\end{equation}
Now consider $K[F']$:
\begin{equation}
K[F'] = \begin{bmatrix}
K[F_2] & 0 & K[F_2] D^* \\
0 & K[F_1] & K[F_1,F_2] \\
DK[F_2] & K[F_2,F_1] & K[F_2]
\end{bmatrix}.
\end{equation}
Notice here $\begin{bmatrix} 0 \\ DK[F_2] \end{bmatrix} = \begin{bmatrix} 0 \\ D\end{bmatrix} K[F_2]$. By Lemma \ref{lm.Davidson}, $K[F']\geq 0$ if and only if $K[F_2]\geq 0$ and $$\begin{bmatrix} 0 \\ D\end{bmatrix} K[F_2] \begin{bmatrix} 0 & D^*\end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & D K[F_2] D^*\end{bmatrix} \leq \begin{bmatrix} K[F_1] & K[F_1,F_2] \\ K[F_2,F_1] & K[F_2]\end{bmatrix}.$$
This is precisely the condition $X-LXL^*\geq 0$ from Equation \pref{eq.reduction1}. Therefore, combining the results above, $K[F]\geq 0$ if and only if $K[F']\geq 0$, $K[F_0]\geq 0$ and $K[F_2]\geq 0$. But notice that $F_0,F_2$ are subsets of $F'$, so the latter two conditions already follow from $K[F']\geq 0$. \end{proof}
\section{Proof of The Main Result}
We prove the first main result (Theorem \ref{thm.main}) in this section. The goal is to show that for every finite $F=\{p_1,\cdots,p_n\}\subset P_\Gamma$, $K[F]\geq 0$ where $K$ is the Toeplitz kernel associated with a contractive representation $T:P_\Gamma\to\bh{H}$ that satisfies Condition \pref{eq.main}.
The plan for proving the main result, Theorem \ref{thm.main}, is divided into two steps. In the first step, we define an order on finite subsets of $P_\Gamma$ and show that, for each finite $F\subset P_\Gamma$, $K[F]\geq 0$ follows from $K[F']\geq 0$ for some $F'<F$ under this order. This allows us to perform an induction over finite subsets of $P_\Gamma$.
The base case of the induction turns out to be the case when every element in $F$ has precisely one block. The second step is to show that $K[F]\geq 0$ for all such $F$. Inspired by \cite[Section 6]{BLi2014}, we shall explicitly decompose such $K[F]$ as $RR^*$ for some operator matrix $R$.
For the first step, we show that as long as $F$ contains some element that has more than 1 block, one can find another finite subset $F'\subset P_\Gamma$ so that $K[F]\geq 0$ if $K[F']\geq 0$. The key is then to show that this process of finding $F'$ will terminate after finitely many steps.
\begin{definition}\label{df.operation} For each $\lambda\in\Lambda$ and $p\in P_\Gamma$, define $d_\lambda(p)$ as follows:
\begin{enumerate}
\item\label{rm.op1} If $\lambda$ is an initial vertex of $p$, write $p=e_\lambda^{n} p'$ where $\lambda$ is not an initial vertex of $p'$; if $e_\lambda$ does not commute with $p'$, then $d_\lambda(p)=\{p'\}$.
\item\label{rm.op2} If $\lambda$ is an initial vertex of $p$ and, writing $p=e_\lambda^{n} p'$ as above, $e_\lambda$ commutes with $p'$, then $d_\lambda(p)=\{e_\lambda p', p'\}$.
\item\label{rm.op3} If $\lambda$ is not an initial vertex of $p$ and $e_\lambda$ does not commute with $p$, then $d_\lambda(p)=\{p\}$.
\item\label{rm.op4} If $\lambda$ is not an initial vertex of $p$ and $e_\lambda$ commutes with $p$, then $d_\lambda(p)=\{e_\lambda p, p\}$.
\end{enumerate}
For any finite set $F\subseteq P_\Gamma$, denote $d_\lambda(F)=\bigcup_{p\in F} d_\lambda(p)$.
\end{definition}
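For instance, if $\Lambda=\{1,2,3\}$ and only the vertices $1,2$ are adjacent, then for $\lambda=2$ we have $d_2(e_2e_3)=\{e_3\}$ by \pref{rm.op1}, $d_2(e_2e_1)=\{e_2e_1,e_1\}$ by \pref{rm.op2}, $d_2(e_3)=\{e_3\}$ by \pref{rm.op3}, and $d_2(e_1)=\{e_2e_1,e_1\}$ by \pref{rm.op4}.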
\begin{lemma}\label{lm.main.reduction} Let $F=\{p_1,\cdots,p_n\}\subset P_\Gamma$ with some $p_i$ containing at least $2$ blocks, and pick a vertex $\lambda$ that is an initial vertex of some $p_i$ for which $e_\lambda$ does not commute with $p_i$.
Then $K[F]\geq 0$ if $K[d_\lambda(F)]\geq 0$.
\end{lemma}
\begin{proof} Without loss of generality, assume $p_1$ has at least two blocks. By Lemma \ref{lm.block}, there exists an initial vertex $\lambda$ of $p_1$ that is not adjacent to some vertex $\lambda'$ in the second block of $p_1$; therefore, $e_\lambda$ does not commute with $p_1$. We fix this vertex $\lambda$ and reorder $p_1,\cdots,p_n$ so that $\lambda$ is an initial vertex for $p_1,\cdots,p_m$ but not for $p_{m+1},\cdots,p_n$.
Write $p_i=e_\lambda^{n_i}p_i'$ for all $1\leq i\leq m$, and denote $F_0=\{p_1',\cdots,p_m',p_{m+1},\cdots,p_n\}$. No element of $F_0$ has $\lambda$ as an initial vertex. Let $M=\max\{n_i\}$ and denote $F_M=\bigcup_{j=0}^M e_\lambda^j\cdot F_0$. It is clear that $F\subseteq F_M$, and thus $K[F]\geq 0$ if $K[F_M]\geq 0$. By Corollary \ref{cor.tech.F}, $K[F_M]\geq 0$ if and only if $K[F_1]\geq 0$, where $F_1=\left(e_\lambda\cdot F_0\right)\bigcup F_0$.
We may further split $F_0$ into two subsets $F_0=C\bigcup N$, where $C=\{f\in F_0: f\mbox{ commutes with }e_\lambda\}$ and $N=\{f\in F_0: f\mbox{ does not commute with }e_\lambda\}$. Now apply Lemma \ref{lm.reduction}: $K[F_1]\geq 0$ if and only if $K[\left(e_\lambda\cdot C\right)\bigcup F_0]\geq 0$. Denote $$F'=\left(e_\lambda\cdot C\right)\bigcup F_0=\left(e_\lambda\cdot C\right)\bigcup C \bigcup N.$$ This proves that $K[F']\geq 0$ implies $K[F]\geq 0$.
To see $F'=d_\lambda(F)$: fix an element $p_i\in F$ and consider $4$ possibilities:
\begin{enumerate}
\item If $p_i=e_\lambda^{n_i} p_i'$ where $e_\lambda$ does not commute with $p_i'$, then $d_\lambda(p_i)=\{p_i'\}$ is contained in $N\subseteq F_0\subseteq F'$;
\item If $p_i=e_\lambda^{n_i} p_i'$ where $e_\lambda$ commutes with $p_i'$, then $p_i'$ is an element of $C$ and thus $d_\lambda(p_i)=\{e_\lambda p_i', p_i'\}$ is contained in $\left(e_\lambda\cdot C\right)\bigcup C\subseteq F'$;
\item If $\lambda$ is not an initial vertex of $p_i$ and $e_\lambda$ does not commute with $p_i$, then $p_i$ is in the set $N$ and $d_\lambda(p_i)=\{p_i\}$ is contained in $N\subseteq F'$;
\item If $\lambda$ is not an initial vertex of $p_i$ and $e_\lambda$ commutes with $p_i$, then $p_i$ is in the set $C$ and $d_\lambda(p_i)=\{e_\lambda p_i, p_i\}$ is contained in $\left(e_\lambda\cdot C\right)\bigcup C\subseteq F'$.
\end{enumerate}
One can now observe that $F'=d_\lambda(F)$. This finishes the proof.\end{proof}
\begin{remark}\label{rm.operation} One may observe that, due to \pref{rm.op2} and \pref{rm.op4}, the set $F'$ might be larger than $F$. The idea is that we remove $e_\lambda$ wherever it does not commute with some later syllables, which makes the syllables of each element in $F'$ commute more with one another. Repeating this process therefore eventually produces a set in which every element has only one block. This motivates Definition \ref{df.bvs}.
\end{remark}
\begin{definition}\label{df.bvs} For each element $p\in P_\Gamma$ with $m$ blocks, we define \emph{the block-vertex sequence} of $p$ to be the $m$ sets of vertices $B_1(p),\cdots,B_m(p)$, where $B_1(p)=\{\lambda\in I_1(p):e_\lambda\mbox{ does not commute with }p\}$, and $B_j(p)=I_j(p)$ for all $2\leq j\leq m$. In other words, the $j$-th set is equal to the vertex set of the $j$-th block of $p$, except for the first block, where we only include those vertices $\lambda$ for which $e_\lambda$ does not commute with $p$. We also define $B_0(p)=\{\lambda\in I_1(p): e_\lambda\mbox{ commutes with }p\}$, the set of all initial vertices that are adjacent to every other vertex that appears in $p$.
Define the block-vertex length of $p$ to be $c(p)=\sum_{j=1}^m \left|B_j(p)\right|$.
\end{definition}
\begin{remark}\label{rm.bv} If $p$ has only one block, then every syllable is initial and commutes with all the others. In such a case, $B_1(p)=\emptyset$ and $c(p)=0$; this is the only case in which $c(p)=0$.
Also observe that for $p=e_{\lambda_1}^{m_1}\cdots e_{\lambda_n}^{m_n}$, the power $m_i\geq 1$ does not affect the block-vertex sequence of $p$. The only thing that matters is what kind of vertex appears in each block.
In a reduced expression of $p$, each syllable uniquely corresponds to some vertex in one of $B_0(p),\cdots,B_m(p)$. Therefore, the length $\ell(p)=\sum_{j=0}^m \left|B_j(p)\right|$. The quantity $c(p)=\ell(p)-|B_0(p)|$ counts the number of syllables that do not commute with the rest.
\end{remark}
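For example, let $\Lambda=\{1,2,3\}$ with only the vertices $1,2$ adjacent. The element $p=e_1e_2e_3$ has the two blocks $e_1e_2$ and $e_3$, and neither $e_1$ nor $e_2$ commutes with $p$ (the vertex $3$ is adjacent to neither $1$ nor $2$); hence $B_0(p)=\emptyset$, $B_1(p)=\{1,2\}$, $B_2(p)=\{3\}$, and $c(p)=3$. By contrast, $q=e_1e_2$ has a single block, so $B_0(q)=\{1,2\}$, $B_1(q)=\emptyset$ and $c(q)=0$.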
\begin{lemma}\label{lm.bv.basic} Let $p\in P_\Gamma$ and $\lambda\in\Lambda$.
\begin{enumerate}
\item\label{lm.bv.basic1} If $\lambda\in B_1(p)$ and $p=e_\lambda^n p'$, then $c(p')<c(p)$.
\item\label{lm.bv.basic2} If $e_\lambda$ commutes with $p$, then the block vertex sequence of any element in $d_\lambda(p)$ is the same as that of $p$. Here, $d_\lambda(p)$ is defined as in the Definition \ref{df.operation}.
\item\label{lm.bv.basic3} If $e_\lambda$ does not commute with $p$ and $\lambda$ is not an initial vertex of $p$, then the block vertex sequence of any element in $d_\lambda(p)$ is the same as that of $p$.
\end{enumerate}
\end{lemma}
\begin{proof} For \pref{lm.bv.basic1}, every vertex in $B_0(p)$ is still in $B_0(p')$. Since the syllable $e_\lambda^n$ is removed, $\ell(p')\leq \ell(p)-1$; it then follows from Remark \ref{rm.bv} that $c(p')<c(p)$.
For \pref{lm.bv.basic2}, there are two cases: either $\lambda\in B_0(p)$ or not. In the first case, write $p=e_\lambda^n p'$, so that $d_\lambda(p)=\{e_\lambda p',p'\}$. Since we only removed powers of an initial vertex that commutes with the rest of the word, and powers do not affect the block-vertex sequence (Remark \ref{rm.bv}), both elements have the same block-vertex sequence as $p$. In the latter case, when $\lambda\notin B_0(p)$, $d_\lambda(p)=\{e_\lambda p, p\}$. Since $e_\lambda$ commutes with $p$, the vertex $\lambda$ is simply added to $B_0(e_\lambda p)$, which does not change the block-vertex sequence. In either case, the block-vertex sequence of any element in $d_\lambda(p)$ is the same as that of $p$.
For \pref{lm.bv.basic3}, $d_\lambda(p)=\{p\}$, and it is clear.
\end{proof}
\begin{lemma}\label{lm.bv} If $p_1,p_2$ have the same block-vertex sequence, then so do all elements of $d_\lambda(p_1)\bigcup d_\lambda(p_2)$.
\end{lemma}
\begin{proof} If $\lambda\in B_1(p_1)=B_1(p_2)$, write $p_i=e_\lambda^{n_i} p_i'$ and $d_\lambda(p_i)=\{p_i'\}$. Then $p_i'$ is $p_i$ with the syllable $e_\lambda^{n_i}$ removed, and since $p_1,p_2$ have the same block-vertex sequence, $p_1',p_2'$ must also have the same block-vertex sequence. In any other case, by Lemma \ref{lm.bv.basic}, every element in $d_\lambda(p_i)$ has the same block-vertex sequence as $p_i$.
\end{proof}
\begin{definition} Let $F\subset P_\Gamma$ be a finite set. Define $c(F)=\sum c(f)$, where the summation is over all $f\in F$, but elements with the same block-vertex sequence are counted only once.
\end{definition}
\begin{lemma}\label{lm.bv.reduce} With $F$ and $\lambda$ chosen as in Lemma \ref{lm.main.reduction}, $c(d_\lambda(F))<c(F)$.
\end{lemma}
\begin{proof} Without loss of generality, let $f_1,\cdots,f_t$ have pairwise distinct block-vertex sequences, while each of $f_{t+1},\cdots,f_n$ has the same block-vertex sequence as some $f_i$, $1\leq i\leq t$; we may assume that $f_1=e_\lambda^{n_1} f_1'$ with $e_\lambda$ not commuting with $f_1'$. Then $c(F)=\sum_{i=1}^t c(f_i)$.
Now, by the choice of $\lambda$ in Lemma \ref{lm.main.reduction}, $\lambda\in B_1(f_1)$. Therefore, $d_\lambda(f_1)=\{f_1'\}$, and by Lemma \ref{lm.bv.basic}, $c(f_1')<c(f_1)$. Now apply Lemma \ref{lm.bv}: the block-vertex sequence of every element of $d_\lambda(f_{t+1}),\cdots,d_\lambda(f_n)$ coincides with that of some element of $d_\lambda(f_1),\cdots,d_\lambda(f_t)$. Moreover, by Lemma \ref{lm.bv.basic}, $c(d_\lambda(f_i))\leq c(f_i)$. Therefore, since $d_\lambda(F)=\bigcup_{i=1}^n d_\lambda(f_i)$, we have, $$c(d_\lambda(F))\leq\sum_{i=1}^t c(d_\lambda(f_i))<\sum_{i=1}^t c(f_i)=c(F). \qedhere$$ \end{proof}
To summarize the first step towards the proof of the main theorem,
\begin{proposition}\label{prop.main.step1} For every finite subset $F\subset P_\Gamma$, there exists a finite subset $\tilde{F}\subset P_\Gamma$ such that every element in $\tilde{F}$ contains exactly one block, and $K[F]\geq 0$ if $K[\tilde{F}]\geq 0$.
\end{proposition}
\begin{proof} We start with $F_0=F$ and repeatedly apply Lemma \ref{lm.main.reduction} to obtain $F_1=d_\lambda(F_0)$, $F_2=d_\lambda(F_1),\cdots$, where the vertex $\lambda$ may change from step to step. Lemma \ref{lm.main.reduction} proves that $K[F_n]\geq 0$ if $K[F_{n+1}]\geq 0$, and Lemma \ref{lm.bv.reduce} shows that $c(F_n)$ is a strictly decreasing sequence of non-negative integers, so the process must terminate at some $F_N=\tilde{F}$. If $c(\tilde{F})\neq 0$, then some element of $\tilde{F}$ has at least two blocks, and Lemma \ref{lm.main.reduction} can still be applied to obtain another set $\tilde{F}'=d_\lambda(\tilde{F})$ with $c(\tilde{F}')<c(\tilde{F})$. Therefore, the final $F_N=\tilde{F}$ must have $c(F_N)=0$, which is to say that every element in $\tilde{F}$ contains exactly one block. It is also clear that $K[F]\geq 0$ if $K[F_N]\geq 0$.
\end{proof}
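To illustrate the induction, let $\Lambda=\{1,2,3\}$ with only the vertices $1,2$ adjacent, and start from $F=\{e_1e_2e_3\}$, so that $c(F)=3$. Choosing $\lambda=1$ (an initial vertex of $e_1e_2e_3$ that is not adjacent to the vertex $3$ in the second block) gives $F_1=d_1(F)=\{e_2e_3\}$ with $c(F_1)=2$, and choosing $\lambda=2$ next gives $F_2=d_2(F_1)=\{e_3\}$ with $c(F_2)=0$: every element of $F_2$ has exactly one block, and $K[F]\geq 0$ follows from $K[F_2]\geq 0$.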
Our second step shall prove that $K[F]\geq 0$ for every finite subset $F$ in which every element has exactly one block. Since $F$ only contains finitely many syllables, we may consider only the case when $\Gamma$ is a finite graph. If an element has exactly one block, then every syllable commutes with all other syllables, and thus their vertices correspond to a clique in $\Gamma$. For a clique $U$, denote $e_U=\prod_{\lambda\in U} e_\lambda$. Since $U$ is a clique, there is no ambiguity in the order of this product. By convention, we shall consider the empty set as a clique as well, and denote $e_\emptyset = e$. When $\Gamma$ is a finite graph, there are only finitely many cliques. Denote $F_c=\{e_U:U\mbox{ is a clique}\}$. The first lemma shows that it suffices to prove $K[F_c]\geq 0$.
\begin{lemma}\label{lm.kfc} If $K[F_c]\geq 0$, then for any finite subset $F$ of $P_\Gamma$ whose elements all have one block, $K[F]\geq 0$.
\end{lemma}
\begin{proof} Suppose $F=\{p_1,\cdots,p_n\}$ contains an element $e_\lambda^{k} p'$ with $k\geq 2$. Reorder $p_1,\cdots,p_n$ so that $\lambda$ is an initial vertex for $p_1,\cdots,p_m$ but not for $p_{m+1},\cdots,p_n$, and let $p_i'$ be $p_i$ with the syllable corresponding to $\lambda$ removed. Let $F_0=\{p_1',\cdots,p_m',p_{m+1},\cdots,p_n\}$ and let $C\subseteq F_0$ consist of all elements that commute with $e_\lambda$. The proof of Lemma \ref{lm.main.reduction} shows that $K[F]\geq 0$ if $K[F']=K[\left(e_\lambda\cdot C\right)\bigcup F_0]\geq 0$. Since every element of $F_0$ contains exactly one block and every element of $C$ commutes with $e_\lambda$, every element in $F'$ contains exactly one block.
Moreover, in $F'$ every syllable at the vertex $\lambda$ is exactly $e_\lambda$. Repeat this process until we reach a set $\tilde{F}$ in which, for every vertex $\lambda$, all syllables at $\lambda$ equal $e_\lambda$. In such a case, every element has the form $e_U$ for some clique $U$. It is clear that $\tilde{F}\subset F_c$, and thus if $K[F_c]\geq 0$, then $K[\tilde{F}]\geq 0$ and hence $K[F]\geq 0$.
\end{proof}
To show $K[F_c]\geq 0$, it suffices to show that $K[F_c]$ can be decomposed as $R_c R_c^*$. Following the technique outlined in \cite[Section 6]{BLi2014}, we can find such an $R_c$ explicitly. Moreover, under a certain ordering, $R_c$ can be chosen to be a lower triangular matrix, and can thus be viewed as a Cholesky decomposition of $K[F_c]$. This will be done in Proposition \ref{prop.kfc}, where we shall also see how Condition \pref{eq.main} arises.
Motivated by Condition \pref{eq.main}, denote
\begin{equation}\label{eq.main.Z}
Z_V=\sum_{\substack{U\subseteq V \\ U\mbox{ is a clique}}} (-1)^{|U|} T_U T_U^*.
\end{equation}
Here, $V$ is any subset of the vertex set $\Lambda$, and $T_U=T(e_U)$. If Condition \pref{eq.main} holds true for the contractive representation $T$, then each $Z_V\geq 0$, and we can thus take its positive square root $Z_V^{1/2}$.
\begin{definition}\label{df.neighborhood} For a clique $V$, we define the neighborhood of $V$, denoted by $N_V$, to be $$N_V=\{\lambda\in\Lambda:\lambda\notin V,\mbox{ and }\lambda\mbox{ is adjacent to every vertex in }V\}.$$ In particular, we define $N_\emptyset=\Lambda$.
\end{definition}
\begin{lemma}\label{lm.clique.id} Fix a clique $F$. Then $$\sum_{\substack{F\subseteq W \\ W\mbox{ is a clique}}} T_{W\backslash F} Z_{N_W} T_{W\backslash F}^* = I.$$
\end{lemma}
\begin{proof} Replace $Z_{N_W}$ using Equation \pref{eq.main.Z},
\begin{align*}
& \sum_{\substack{W\supseteq F \\ W\mbox{ is a clique}}} T_{W\backslash F} Z_{N_W} T_{W\backslash F}^* \\
= & \sum_{\substack{W\supseteq F \\ W\mbox{ is a clique}}} T_{W\backslash F} \left(\sum_{\substack{U\subseteq N_W\\ U\mbox{ is a clique}}} (-1)^{|U|} T_U T_U^*\right) T_{W\backslash F}^* \\
= & \sum_{\substack{W\supseteq F \\ W\mbox{ is a clique}}} \left(\sum_{\substack{U\subseteq N_W\\ U\mbox{ is a clique}}} (-1)^{|U|} T_{(U\bigcup W)\backslash F} T_{(U\bigcup W)\backslash F} ^* \right)
\end{align*}
Suppose $U\subseteq N_W$ is a clique. Then every vertex of $U$ is adjacent to every vertex in $W$, and the vertices in $U$ are adjacent to one another; therefore, $U\bigcup W$ is also a clique. The converse is true as well: if $U\bigcup W$ is a clique where $U\bigcap W=\emptyset$, then $U\subseteq N_W$ is a clique. Hence, we can rearrange the double summation so that we first sum over all possible cliques $V=U\bigcup W$, and then over all possible $U$. For a fixed clique $V$, the set $W=V\backslash U$ is determined by $U$, and the only requirement is that $F\subseteq W$; therefore, we only sum over those $U$ with $U\subseteq V\backslash F$. Rewrite the double summation as:
$$\sum_{\substack{F\subseteq V \\ V\mbox{ is a clique}}}\left(\sum_{\substack{U\subseteq V\backslash F \\ U\mbox{ is a clique}}} (-1)^{|U|}\right) T_{V\backslash F} T_{V\backslash F}^*.$$
For a fixed clique $V\supseteq F$, consider the inner summation over all cliques $U\subseteq V\backslash F$. Since every subset of the clique $V$ is again a clique, $|U|$ can take any value between $0$ and $|V\backslash F|$, and for a fixed size $|U|=k$ there are precisely ${|V\backslash F| \choose k}$ possibilities for $U$.
Therefore, the coefficient of $T_{V\backslash F}T_{V\backslash F}^*$, where $V$ is a clique containing $F$, is equal to
$$\sum_{j=0}^{|V\backslash F|} {|V\backslash F| \choose j} (-1)^j.$$
This summation is equal to $1$ if $V=F$, that is, if $|V\backslash F|=0$; otherwise, it is equal to $(1-1)^{|V\backslash F|}=0$. This proves that the double summation is equal to $T_{F\backslash F}T_{F\backslash F}^*=I$.
\end{proof}
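As a quick consistency check of Lemma \ref{lm.clique.id}, take the three-vertex graph with only the vertices $1,2$ adjacent and the fixed clique $F=\{1\}$. The cliques containing $F$ are $\{1\}$ and $\{1,2\}$, with neighbourhoods $N_{\{1\}}=\{2\}$ and $N_{\{1,2\}}=\emptyset$, and since $Z_\emptyset=I$ and $Z_{\{2\}}=I-T_2T_2^*$, the sum becomes
$$Z_{\{2\}}+T_2 Z_{\emptyset} T_2^*=(I-T_2T_2^*)+T_2T_2^*=I,$$
as the lemma asserts.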
We are now ready to show $K[F_c]\geq 0$. Here $K[F_c]$ is an $|F_c|\times |F_c|$ operator matrix whose rows and columns are indexed by cliques $U,V$, and whose $(U,V)$-entry is equal to $K[e_U, e_V]$. Eliminating common initial vertices, $K[e_U,e_V]=K[e_{U\backslash V}, e_{V\backslash U}]$. Now $e_{U\backslash V}$ commutes with $e_{V\backslash U}$ if and only if all vertices in $U\backslash V$ are adjacent to all vertices in $V\backslash U$, in other words, if and only if $U\bigcup V$ is a clique. Therefore, we have,
\begin{equation}\label{eq.kernel.clique}
K[e_U,e_V]=\begin{cases}T_{V\backslash U} T_{U\backslash V}^*, \mbox{ if }U\bigcup V\mbox{ is a clique;} \\ 0, \mbox{ otherwise}.\end{cases}
\end{equation}
Let $R_c$ be a $|F_c|\times |F_c|$ operator matrix, where
\begin{equation}\label{eq.kernel.R}
R_c[U,W]=\begin{cases} T_{W\backslash U} Z_{N_W}^{1/2}, \mbox{ if }U\subseteq W \\ 0, \mbox{ otherwise}.\end{cases}
\end{equation}
\begin{proposition}\label{prop.kfc} $K[F_c]=R_c\cdot R_c^*$. In particular, $K[F_c]\geq 0$.
\end{proposition}
\begin{proof} The $(U,V)$-entry for $R_c\cdot R_c^*$ is equal to $\sum_W R_c[U,W]R_c[V,W]^*$.
If $U\bigcup V$ is not a clique, we cannot find a clique $W$ that contains both $U$ and $V$. Therefore, for every clique $W$, we cannot have both $U,V$ contained in $W$. By Equation \pref{eq.kernel.R}, this implies at least one of $R_c[U,W],R_c[V,W]$ is $0$. Hence, the $(U,V)$-entry for $R_c\cdot R_c^*$ is $0$, which agrees with the $(U,V)$-entry of $K[F_c]$ by Equation \pref{eq.kernel.clique}.
If $U\bigcup V$ is a clique, then $R_c[U,W]R_c[V,W]^*$ may be non-zero only when $W$ is a clique containing both $U,V$. Therefore, in such case,
\begin{align*}
& \sum_W R_c[U,W]R_c[V,W]^* \\
=& \sum_{U\bigcup V\subseteq W} R_c[U,W]R_c[V,W]^* \\
=& \sum_{U\bigcup V\subseteq W} T_{W\backslash U} Z_{N_W} T_{W\backslash V}^* \\
=& T_{V\backslash U} \left(\sum_{U\bigcup V\subseteq W} T_{W\backslash (U\bigcup V) } Z_{N_W} T_{W\backslash (U\bigcup V)}^*\right) T_{U\backslash V}^*
\end{align*}
The summation in the middle is equal to $I$ by Lemma \ref{lm.clique.id}, in which $F$ is the fixed clique $U\bigcup V$. This proves that the $(U,V)$-entry for $R_c\cdot R_c^*$ is equal to $T_{V\backslash U} T_{U\backslash V}^*=K[e_U,e_V]$ in this case.
Therefore, we conclude that $K[F_c]=R_c\cdot R_c^*$ and $K[F_c]\geq 0$.
\end{proof}
\begin{remark} We can regard $R_c$ as a Cholesky decomposition of $K[F_c]$ by rearranging $R_c$ as a lower triangular matrix. We first notice that whenever $U$ contains more elements than $W$, $R_c[U,W]=0$. Moreover, when $|U|=|W|$, $U\subseteq W$ is equivalent to $U=W$. Therefore, $R_c[U,W]=0$ whenever $|U|\geq |W|$ and $U\neq W$. Hence, if we arrange $F_c$ according to the size of the cliques (larger cliques come first), $R_c$ becomes a lower triangular matrix.
\end{remark}
\begin{example} Let us consider the graph product of $\mathbb{N}$ associated with the graph in Figure \ref{fg.2}:
\begin{figure}[h]
\begin{tikzpicture}[scale=0.75]
\draw [line width=1pt] (0,1) -- (-1,0);
\node at (-1,0) {$\bullet$};
\node at (1,0) {$\bullet$};
\node at (0,1) {$\bullet$};
\node at (-1.35,0) {1};
\node at (1.35,0) {3};
\node at (0,1.35) {2};
\end{tikzpicture}
\caption{A Simple Graph on 3 Vertices}
\label{fg.2}
\end{figure}
The graph semigroup is the unital semigroup generated by $e_1,e_2,e_3$ where $e_1,e_2$ commute. There are 5 cliques in this graph: $\{1,2\}$, $\{1\}$, $\{2\}$, $\{3\}$, and $\emptyset$. Under this ordering,
$$K[F_c] = \begin{bmatrix}
I & T_2^* & T_1^* & 0 & T_2^* T_1^* \\
T_2 & I & T_2 T_1^* & 0 & T_1^* \\
T_1 & T_1 T_2^* & I & 0 & T_2^* \\
0 & 0 & 0 & I & T_3^* \\
T_1 T_2 & T_1 & T_2 & T_3 & I
\end{bmatrix}.$$
We can write out the matrix $R_c$ using Equation \pref{eq.kernel.R}:
$$R_c=\begin{bmatrix}
I & 0 & 0 & 0 & 0 \\
T_2 & Z_{2}^{1/2} & 0 & 0 & 0 \\
T_1 & 0 & Z_1^{1/2} & 0 & 0 \\
0 & 0 & 0 & I & 0 \\
T_1 T_2 & T_1 Z_2^{1/2} & T_2 Z_1^{1/2} & T_3 & Z_{\{1,2,3\}}^{1/2}
\end{bmatrix}.$$
One can verify that $K[F_c]=R_c\cdot R_c^*$. \end{example}
We are now ready to prove the main Theorem \ref{thm.main}.
\begin{thm:main} Let $T$ be a contractive representation of a graph semigroup $P_\Gamma$. Then, $T$ is $\ast$-regular if for every finite $W\subseteq\Lambda$,
\begin{equation*}
\sum_{\substack{U\subseteq W \\ U\mbox{ is a clique}}} (-1)^{|U|} T_U T_U^*\geq 0.
\end{equation*}
\end{thm:main}
\begin{proof} To show that $T$ is $\ast$-regular given Condition \pref{eq.main}, it suffices to prove that the Toeplitz kernel $K$ in Definition \ref{df.kernel} is completely positive definite, that is, $K[F]\geq 0$ for every finite subset $F\subset P_\Gamma$. Proposition \ref{prop.main.step1} shows that it suffices to prove $K[\tilde{F}]\geq 0$ for some finite subset $\tilde{F}\subset P_\Gamma$ in which each element has precisely one block. Let $\Lambda_0$ be the set of all vertices that appear in a syllable of some element of $\tilde{F}$, which is a finite set, and denote $F_c=\{e_J: J\subseteq\Lambda_0\mbox{ is a clique}\}$. By Lemma \ref{lm.kfc}, $K[\tilde{F}]\geq 0$ if $K[F_c]\geq 0$. Finally, by Proposition \ref{prop.kfc}, $K[F_c]\geq 0$. \end{proof}
\begin{remark} The converse of Theorem \ref{thm.main} is also true (see Corollary \ref{cor.converse}).
\end{remark}
\section{Nica-Covariant Representation on Graph Products}
Isometric Nica-covariant representations of quasi-lattice ordered groups were first studied in \cite{Nica1992}, and were soon found to be an important concept in the study of operator algebras. Isometric Nica-covariant representations of graph semigroups, in particular of graph products of $\mathbb{N}$, are intensively studied in \cite{CrispLaca2002}. It is observed in \cite[Theorem 24]{CrispLaca2002} that an isometric representation $V$ of the graph semigroup is isometric Nica-covariant if
\begin{enumerate}
\item for any two adjacent vertices $i,j$, $V_i$ and $V_j$ $\ast$-commute.
\item for any two non-adjacent vertices $i,j$, $V_i$ and $V_j$ have orthogonal ranges. In other words, $V_i^* V_j=0$.
\end{enumerate}
Contractive Nica-covariant representations of lattice ordered semigroups were first defined and studied in \cite{Fuller2013, DFK2014}. However, lattice order is quite restrictive compared to quasi-lattice order. For example, the free semigroup $\mathbb{F}_m^+$ is quasi-lattice ordered, but not lattice ordered. In particular, the graph product $P_\Gamma$ is lattice ordered only when the graph $\Gamma$ is the complete graph, which corresponds to the abelian semigroup $\mathbb{N}^k$. This leads to the question of which representations of the graph product $P_\Gamma$ have isometric Nica-covariant dilations.
In \cite{Gaspar1997}, it is shown that a pair of commuting contractions has a $\ast$-regular dilation if and only if it has a $\ast$-commuting isometric dilation, which is an equivalent way of saying a Nica-covariant dilation. The contractive Nica-covariant representations defined in \cite{Fuller2013, DFK2014, BLi2014} are always $\ast$-regular. It turns out that being $\ast$-regular is equivalent to having an isometric Nica-covariant dilation.
\begin{theorem}\label{thm.nc} If $T:P_\Gamma\to\bh{H}$ is $\ast$-regular, then it has a minimal Naimark dilation that is an isometric Nica-covariant representation of the graph semigroup.
\end{theorem}
The minimal Naimark dilation in Theorem \ref{thm.Naimark} can be constructed explicitly. We loosely follow the construction in \cite[Theorem 3.2]{Popescu1999b}. Given a completely positive definite kernel $K:P\times P\to\bh{H}$, define $\mathcal{K}_0=P\otimes\mathcal{H}$ with a semi-inner product defined by $$\left\langle \sum \delta_p\otimes h_p, \sum \delta_q\otimes k_q\right\rangle = \sum_{p,q} \langle K(q,p)h_p,k_q\rangle.$$
The original Hilbert space $\mathcal{H}$ embeds into $\mathcal{K}_0$ as $\delta_e\otimes\mathcal{H}$. The minimal Naimark dilation $V$ of $T$ acts on $\mathcal{K}_0$ by $V(p)\delta_q\otimes h=\delta_{pq}\otimes h$, and each $V(p)$ is clearly isometric with respect to the semi-inner product. Moreover, for any $h_1,h_2\in\mathcal{H}$,
\begin{align*}
\langle V(q)^* V(p) h_1,h_2\rangle =& \langle \delta_p\otimes h_1, \delta_q\otimes h_2\rangle\\
=& \langle K(q,p) h_1, h_2\rangle
\end{align*}
Therefore, $P_\mathcal{H} V(q)^* V(p)\big|_\mathcal{H}=K(q,p)$. Let $\mathcal{N}=\{k\in\mathcal{K}_0: \langle k,k\rangle=0\}$. One can show that $\mathcal{N}$ is invariant for all $V(p)$, and thus we can let $\mathcal{K}=\overline{\mathcal{K}_0/\mathcal{N}}$, which is a Hilbert space. The operators $V(p)$ descend to isometries on $\mathcal{K}$, and it turns out that $V$ is a minimal Naimark dilation. For the technical details, one may refer to \cite[Theorem 3.2]{Popescu1999b}. It is worth noting that $\mathcal{H}$ is coinvariant for the minimal Naimark dilation $V$; that is, $\mathcal{H}$ is invariant for every $V(p)^*$.
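In the simplest case where $\Gamma$ consists of a single vertex, so that $T(e_1)=T$ is a single contraction, the kernel reduces to
$$K(e_1^m,e_1^n)=\begin{cases} T^{n-m}, & m\leq n,\\ T^{*(m-n)}, & m>n,\end{cases}$$
and the construction above recovers the classical minimal isometric dilation of the contraction $T$.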
To prove Theorem \ref{thm.nc}, it suffices to prove that the minimal Naimark dilation is isometric Nica-covariant. Throughout the rest of this section, we fix a contractive representation $T$ on $P_\Gamma$ that is $\ast$-regular, and let $V:P_\Gamma\to\bh{K}$ be the minimal Naimark dilation for $T$ described as above.
\begin{lemma}\label{lm.orthogonal} Suppose $p\in P_\Gamma$, $\lambda\in\Lambda$ so that $\lambda\notin I_1(p)$ and $e_\lambda$ does not commute with $p$. Then $V(e_\lambda)$ and $V(p)$ have orthogonal ranges. In other words, $V(e_\lambda)^* V(p)=0$.
\end{lemma}
\begin{proof} It suffices to prove that for any $h=\sum_i \delta_{x_i}\otimes h_i\in\mathcal{K}_0=P_\Gamma\otimes\mathcal{H}$ and $k=\sum_j \delta_{y_j}\otimes k_j\in\mathcal{K}_0$, we have $\langle V(p) h, V(e_\lambda) k\rangle=0$.
By the definition of the pre-inner product on $\mathcal{K}_0$,
\begin{align*}
\langle V(p) h, V(e_\lambda) k\rangle &= \langle \sum_i \delta_{p\cdot x_i}\otimes h_i , \sum_j \delta_{e_\lambda \cdot y_j}\otimes k_j\rangle \\
&= \sum_{i,j} \langle K(e_\lambda \cdot y_j, p\cdot x_i) h_i, k_j\rangle
\end{align*}
Suppose $(e_\lambda \cdot y_j)^{-1} p\cdot x_i=u^{-1} v$ for some $u,v\in P_\Gamma$, where $u,v$ share no common initial vertices. By Lemma \ref{lm.ini}, $u,v$ do not commute. Therefore, $K(e_\lambda \cdot y_j, p\cdot x_i)=0$ for all $i,j$. Hence, the inner product is equal to $0$. \end{proof}
\begin{lemma}\label{lm.nc} Let $p\in P_\Gamma$ and $\lambda\in\Lambda$ be a vertex such that $\lambda\notin I_1(p)$ and $e_{\lambda}$ commutes with $p$. Then $V(e_{\lambda})^* V(p)\big|_\mathcal{H}=V(p) V(e_{\lambda})^* \big|_\mathcal{H}$.
\end{lemma}
\begin{proof} By the minimality of $V$, $\lspan\{V(q)k:q\in P_\Gamma,k\in\mathcal{H}\}$ is dense in $\mathcal{K}$. Therefore, it suffices to prove for all $q\in P_\Gamma$, $h,k\in\mathcal{H}$,
\begin{equation}\label{eq.lm.nc}
\langle V(e_\lambda)^* V(p) h, V(q)k \rangle = \langle V(p) V(e_\lambda)^* h, V(q)k \rangle
\end{equation}
Starting from the left hand side of Equation \pref{eq.lm.nc},
\begin{align*}
\langle V(e_\lambda)^* V(p) h, V(q)k \rangle =& \langle V(e_\lambda q)^* V(p) h, k \rangle \\
=& \langle K(e_\lambda q,p) h, k \rangle \\
=& \langle K(q,p) T(e_\lambda)^* h, k \rangle
\end{align*}
Here we used Lemma \ref{lm.add.left} to show $K(e_\lambda q,p)=K(q,p) T(e_\lambda)^*$. Now since $V(e_\lambda)=\begin{bmatrix} T(e_\lambda) & 0 \\ * & * \end{bmatrix}$ with respect to the decomposition $\mathcal{K}=\mathcal{H}\oplus\mathcal{H}^\perp$, $V(e_\lambda)^* h = T(e_\lambda)^* h \in\mathcal{H}$. Therefore,
\begin{align*}
\langle K(q,p) T(e_\lambda)^* h, k \rangle =& \langle K(q,p) V(e_\lambda)^* h, k \rangle \\
=& \langle V(q)^* V(p) V(e_\lambda)^* h, k \rangle \\
=& \langle V(p) V(e_\lambda)^* h, V(q) k \rangle
\end{align*}
This proves Equation \pref{eq.lm.nc}. \end{proof}
We now prove the main result of this section:
\begin{proof}[Proof of Theorem \ref{thm.nc}] It suffices to pick any two vertices $\lambda_1,\lambda_2$ and consider the two cases where they are adjacent or not.
If $\lambda_1,\lambda_2$ are not adjacent, by Lemma \ref{lm.orthogonal}, $V(e_{\lambda_1})$ and $V(e_{\lambda_2})$ are isometries with orthogonal ranges.
If $\lambda_1,\lambda_2$ are adjacent, it suffices to prove for all $p\in P_\Gamma$,
\begin{equation}\label{eq.nc1}
V(e_{\lambda_1})^* V(e_{\lambda_2}) V(p) \big|_\mathcal{H} = V(e_{\lambda_2}) V(e_{\lambda_1})^* V(p) \big|_\mathcal{H}.
\end{equation}
Indeed, since $\lspan \{V(p)h:p\in P_\Gamma,h\in\mathcal{H}\}$ is dense in $\mathcal{K}$, Equation \pref{eq.nc1} implies that $V(e_{\lambda_1})^* V(e_{\lambda_2})= V(e_{\lambda_2}) V(e_{\lambda_1})^*$.
There are now several possibilities:
If $\lambda_1\in I_1(p)$, we can write $p=e_{\lambda_1} p'$, and thus $V(p)=V(e_{\lambda_1})V(p')$. Since $\lambda_1,\lambda_2$ are adjacent, $V(e_{\lambda_1})$ commutes with $V(e_{\lambda_2})$. Hence, both sides of Equation \pref{eq.nc1} are equal to $V(e_{\lambda_2}) V(p')\big|_\mathcal{H}$.
If $\lambda_1\notin I_1(p)$ and $e_{\lambda_1}$ does not commute with $p$, then $\lambda_1\notin I_1(e_{\lambda_2} p)$ and $e_{\lambda_1}$ does not commute with $e_{\lambda_2} p$ either. Therefore, by Lemma \ref{lm.orthogonal}, $V(e_{\lambda_1})$ and $V(p)$ are isometries with orthogonal ranges, so that $V(e_{\lambda_1})^* V(p)=0$; similarly, $V(e_{\lambda_1})^* V(e_{\lambda_2} p)=0$. Hence, both sides of Equation \pref{eq.nc1} are $0$.
Lastly, suppose $\lambda_1\notin I_1(p)$ and $e_{\lambda_1}$ commutes with $p$. Then $e_{\lambda_2} p$ and $p$ are both elements of $P_\Gamma$ that commute with $e_{\lambda_1}$ and do not have $\lambda_1$ as an initial vertex. By Lemma \ref{lm.nc}, for every $h\in\mathcal{H}$,
\begin{align*}
V(e_{\lambda_1})^* V(e_{\lambda_2}) V(p) h =& V(e_{\lambda_2}) V(p) V(e_{\lambda_1})^* h\\
=& V(e_{\lambda_2}) V(e_{\lambda_1})^* V(p) h
\end{align*}
This is precisely Equation \pref{eq.nc1}, and the proof is complete. \end{proof}
\begin{corollary}\label{cor.converse} If $T$ has a minimal isometric Nica-covariant dilation, then for every finite $W\subseteq\Lambda$,
\begin{equation*}
\sum_{\substack{U\subseteq W \\ U\mbox{ is a clique}}} (-1)^{|U|} T_U T_U^*\geq 0.
\end{equation*}
\end{corollary}
\begin{proof} Let $V:P_\Gamma\to\bh{K}$ be the minimal Naimark dilation for $T$. Since $\mathcal{H}$ is coinvariant for $V$, with respect to the decomposition $\mathcal{K}=\mathcal{H}\oplus\mathcal{H}^\perp$ we can write $V(p)=\begin{bmatrix} T(p) & 0 \\ * & * \end{bmatrix}$. Therefore, for every clique $U$ in $\Gamma$, $$T_U T_U^* = P_\mathcal{H} V(e_U) V(e_U)^* \big|_\mathcal{H}.$$
It thus suffices to show that for every finite $W\subseteq\Lambda$,
\begin{equation}\label{eq.converse}
\sum_{\substack{U\subseteq W \\ U\mbox{ is a clique}}} (-1)^{|U|} V(e_U) V(e_U)^*\geq 0.
\end{equation}
For each vertex $i\in\Lambda$, denote by $P_i=V(e_i)V(e_i)^*$ the range projection of the isometry $V(e_i)$. Since $V$ is Nica-covariant, $P_i,P_j$ commute and
$$P_i P_j=\begin{cases} V_i V_j V_j^* V_i^*, \mbox{ if }i\mbox{ is adjacent to }j; \\ 0, \mbox{ otherwise.}\end{cases}$$
For each $U\subseteq W$, denote $P_U=\prod_{i\in U} P_i$, and in particular let $P_\emptyset = I$. If $U\subseteq W$ is not a clique, then we can find two vertices $i,j\in U$ that are not adjacent; since $P_iP_j=0$, it follows that $P_U=0$. If $U\subseteq W$ is a clique, then it follows from the Nica-covariance condition that $P_U=V(e_U) V(e_U)^*$.
Consider the projection $R=\prod_{i\in W} (I-P_i)$:
\begin{align*}
R &= \prod_{i\in W} (I-P_i) \\
&= \sum_{U\subseteq W} (-1)^{|U|} P_U \\
&= \sum_{\substack{U\subseteq W \\ U\mbox{ is a clique}}} (-1)^{|U|} P_U \\
&= \sum_{\substack{U\subseteq W \\ U\mbox{ is a clique}}} (-1)^{|U|} V(e_U) V(e_U)^*.
\end{align*}
Since $R$ is a product of commuting projections, it is itself a projection; hence $R\geq 0$, which proves Condition \pref{eq.converse}. \end{proof}
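For instance, on the graph of Figure \ref{fg.2}, where only the vertices $1,2$ are adjacent, taking $W=\Lambda$ and using $P_1P_3=P_2P_3=0$ gives
$$R=(I-P_1)(I-P_2)(I-P_3)=I-P_1-P_2-P_3+P_1P_2,$$
which is the sum in Condition \pref{eq.converse} over the cliques $\emptyset,\{1\},\{2\},\{3\},\{1,2\}$.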
We have now established the equivalence among Condition \pref{eq.main1}, $\ast$-regularity, and the existence of a minimal isometric Nica-covariant dilation.
\begin{thm:main1} Let $T:P_\Gamma\to\bh{H}$ be a representation. Then the following are equivalent:
\begin{enumerate}
\item\label{thm.main1.1} $T$ is $\ast$-regular,
\item\label{thm.main1.2} $T$ has a minimal isometric Nica-covariant dilation,
\item\label{thm.main1.3} $T$ satisfies Condition \pref{eq.main1}.
\end{enumerate}
\end{thm:main1}
\begin{proof} \pref{thm.main1.1} $\Longrightarrow$ \pref{thm.main1.2} is established in Theorem \ref{thm.nc}. \pref{thm.main1.2} $\Longrightarrow$ \pref{thm.main1.3} is established in Corollary \ref{cor.converse}. Finally, \pref{thm.main1.3} $\Longrightarrow$ \pref{thm.main1.1} is established in Theorem \ref{thm.main}.
\end{proof}
\section{The Property (P)}
Popescu \cite{Popescu1999} first studied the noncommutative Poisson transform associated to a certain class of operators satisfying the property (P). The property (P) has recently been generalized to higher rank graphs \cite{Skalski2009, Skalski2010}. It turns out that the class of operators Popescu studied can be viewed as representations of graph products of $\mathbb{N}$, and we thereby extend the property (P) to representations of graph products of $\mathbb{N}$. This section proves that the $\ast$-regularity condition implies the property (P), and that the two are equivalent under certain conditions.
Throughout this section, we fix a finite simple graph $\Gamma$ whose vertex set is denoted by $\Lambda$.
\begin{definition}\label{df.propertyP} A contractive representation $T:P_\Gamma\to\bh{H}$ is said to have \emph{the Property (P)} if there exists $0\leq \rho<1$ so that for all $\rho\leq r\leq 1$,
\begin{equation}\label{eq.propertyP}
\sum_{\substack{U\subseteq \Lambda \\ U\mbox{ is a clique}}} (-1)^{|U|} r^{|U|} T(e_U) T(e_U)^*\geq 0.
\end{equation}
\end{definition}
\begin{example} Let $\Gamma$ be a complete $k$-partite graph $K_{n_1,n_2,\cdots,n_k}$. In other words, let $\Lambda=\{(i,j):1\leq i\leq k, 1\leq j\leq n_i\}$ be the vertex set, where $(i_1,j_1)$ is adjacent to $(i_2,j_2)$ in $\Gamma$ if and only if $i_1\neq i_2$. A contractive representation $T$ of this graph semigroup $P_\Gamma$ is uniquely determined by the operators $T_{i,j}=T(e_{i,j})$. Here, for each $i$, $T_{i,1},\cdots,T_{i,n_i}$ are not necessarily commuting contractions, while for each $i_1\neq i_2$, $T_{i_1,j_1}$ commutes with $T_{i_2,j_2}$.
In \cite{Popescu1999}, Popescu considered such a class of operators $\{T_{i,j}\}$ where, for each $i$, $\{T_{i,j}\}_{j=1}^{n_i}$ forms a row contraction in the sense that $$\sum_{j=1}^{n_i} T_{i,j} T_{i,j}^* \leq I.$$
This family of operators is also considered in many subsequent papers on non-commutative polyballs (see also \cite{Popescu2015, Popescu2016b}). For such a family of operators, Popescu says that it has the property (P) if Condition \pref{eq.propertyP} is satisfied. It is observed in \cite{Popescu1999} that the property (P) allows one to obtain a Poisson transform, and subsequently a dilation, of the family of operators $\{T_{i,j}\}$.
One may observe that Definition \ref{df.propertyP} of the property (P) does not require the row contractive condition. Instead, this paper mostly considers contractive representations $T$ of the graph product $P_\Gamma$ that satisfy Condition \pref{eq.main1} and thus have a $\ast$-regular dilation; the row contractive condition is embedded in Condition \pref{eq.main1}.
\end{example}
Our first result shows that if $T$ satisfies the Condition \pref{eq.main1}, then it has the property (P). Let $T:P_\Gamma\to\bh{H}$ be a representation that satisfies the Condition \pref{eq.main1}. By Theorem \ref{thm.main.all}, it has a minimal isometric Nica-covariant dilation $V:P_\Gamma\to\bh{K}$. Moreover, $\mathcal{H}$ is co-invariant for $V$, and thus $$P_\mathcal{H} V(e_U) V(e_U)^* \big|_\mathcal{H} = T(e_U)T(e_U)^*.$$
Therefore, to show $T$ has the property (P), it suffices to show $V$ has the property (P). For $r\in\mathbb{R}$, let us denote $$f(r)=\sum_{\substack{U\subseteq \Lambda \\ U\mbox{ is a clique}}} (-1)^{|U|} r^{|U|} V(e_U) V(e_U)^*.$$
It follows from the proof of Corollary \ref{cor.converse} that $f(1)\geq 0$. In fact, $f(1)$ is a projection onto the subspace that is orthogonal to all the ranges of $V(e_i)$. Following the notation we used in the proof of Corollary \ref{cor.converse}, for each vertex $i\in\Lambda$, denote $P_i=V_i V_i^*$. Since $V$ is Nica-covariant, $P_i, P_j$ commute, and $$P_i P_j=\begin{cases} V_i V_j V_j^* V_i^*, \mbox{ if }i\mbox{ is adjacent to }j; \\ 0, \mbox{ otherwise.}\end{cases}$$
For each $U\subseteq \Lambda$, denote $P_U=\prod_{i\in U} P_i$, the projection onto the intersection of the ranges of all $\{P_i\}_{i\in U}$. In particular, we let $P_\emptyset=I$. Notice that if there are two vertices $i,j\in U$ that are not adjacent, $P_i P_j=0$ and thus $P_U=0$. Therefore, $P_U\neq 0$ only if $U$ is a clique. The function $f(r)$ can be rewritten as
\begin{align*}
f(r) &=\sum_{\substack{U\subseteq \Lambda \\ U\mbox{ is a clique}}} (-1)^{|U|} r^{|U|} P_U \\
&= \sum_{U\subseteq \Lambda} (-1)^{|U|} r^{|U|} P_U \\
&= \sum_{k=0}^{|\Lambda|} \left(\sum_{\substack{U\subseteq \Lambda \\ |U|=k}} (-1)^k P_U \right) r^k
\end{align*}
For each $U\subseteq\Lambda$, denote $R_U=P_U\cdot \prod_{i\notin U} P_i^\perp$. The range of $R_U$ consists of those vectors that are contained in the range of $P_U$ and orthogonal to the range of $P_i$ for every $i\notin U$. In particular, $R_\emptyset=\prod_{i\in\Lambda} P_i^\perp$ is the projection onto those vectors that are orthogonal to the ranges of all the $P_i$. It was observed in Corollary \ref{cor.converse} that $$R_\emptyset=\sum_{\substack{U\subseteq \Lambda \\ U\mbox{ is a clique}}} (-1)^{|U|} V(e_U) V(e_U)^*=f(1).$$
Finally, denote
\begin{equation}\label{eq.Qm}
Q_m = \sum_{\substack{U\subseteq \Lambda \\ |U|=m}} R_U.
\end{equation}
In particular, $Q_0=R_\emptyset=f(1)$. Notice that if $U_1,U_2\subseteq\Lambda$ are two distinct subsets with $|U_1|=|U_2|=m$, then at least one vertex of $U_1$ is not in $U_2$ and vice versa; therefore $R_{U_1} R_{U_2}=0$, and $R_{U_1},R_{U_2}$ are projections onto orthogonal subspaces. Hence, $Q_m$ is a projection. Intuitively, the range of $Q_m$ consists of those vectors that are contained in the ranges of exactly $m$ of the $P_i$ and orthogonal to the ranges of all the other $P_i$. Therefore, $\{Q_m\}_{m=0}^{|\Lambda|}$ are pairwise orthogonal projections and $$\sum_{m=0}^{|\Lambda|} Q_m=I.$$
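Indeed, since the $P_i$ commute, expanding the product $\prod_{i\in\Lambda}(P_i+P_i^\perp)=I$ yields $\sum_{U\subseteq\Lambda}R_U=I$, and grouping the terms by $|U|$ gives $\sum_m Q_m=I$. For example, on the graph of Figure \ref{fg.2}, $R_{\{1\}}=P_1P_2^\perp P_3^\perp$ and $Q_1=R_{\{1\}}+R_{\{2\}}+R_{\{3\}}$.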
We first obtain a Taylor expansion of $f$ about $r=1$. For each $1\leq m\leq |\Lambda|$, the $m$-th derivative of $f$ is equal to:
\begin{align*}
f^{(m)}(r) &= \sum_{k=m}^{|\Lambda|} \sum_{\substack{U\subseteq \Lambda \\ |U|=k}} (-1)^k \frac{k!}{(k-m)!}r^{k-m} P_U \\
&= (-1)^m m! \sum_{k=m}^{|\Lambda|} \sum_{\substack{U\subseteq \Lambda \\ |U|=k}} (-1)^{k-m} {k\choose m} r^{k-m} P_U.
\end{align*}
\begin{lemma}\label{lm.taylorP} $f^{(m)}(1)=(-1)^m m!\cdot Q_m$. Moreover, $f$ has the Taylor series expansion $$f(r)= \sum_{m=0}^{|\Lambda|} (-1)^m (r-1)^m Q_m.$$
\end{lemma}
\begin{proof} It suffices to prove $$Q_m=\sum_{k=m}^{|\Lambda|} \sum_{\substack{U\subseteq \Lambda \\ |U|=k}} (-1)^{k-m} {k\choose m} P_U.$$
Denote the right-hand side by $S_m$. Since $\sum_{i=0}^{|\Lambda|} Q_i=I$, it suffices to prove
$$S_m Q_i=Q_i S_m=\begin{cases}
Q_i, & \mbox{if }i=m; \\
0, & \mbox{if }i\neq m,
\end{cases}$$
for then $S_m=\sum_i S_m Q_i=Q_m$.
From Equation \pref{eq.Qm}, $Q_m$ is the sum of all $R_W$ where $|W|=m$. Since $\{R_W\}_{|W|=m}$ are pairwise orthogonal projections, it suffices to prove
$$S_m R_W=R_W S_m=\begin{cases}
R_W, \mbox{ if }|W|=m; \\
0, \mbox{ if }|W|\neq m.
\end{cases}$$
First of all, since $\{P_i\}_{i\in\Lambda}$ are commuting orthogonal projections, $R_W, S_m$ commute for all $W\subseteq\Lambda$ and $0\leq m\leq |\Lambda|$. Fix $W$ and consider $S_m R_W$.
If $|W|<m$, then every $U$ with $|U|\geq m$ contains some vertex not in $W$. Therefore, $P_U R_W=0$, and hence $S_m R_W=0$.
If $|W|\geq m$, then for each $U$ with $|U|\geq m$, $$P_U R_W=\begin{cases} R_W, & \mbox{if } U\subseteq W; \\ 0, & \mbox{otherwise}.\end{cases}$$
Therefore,
\begin{align*}
S_m R_W &= \left(\sum_{k=m}^{|\Lambda|} \sum_{\substack{U\subseteq \Lambda \\ |U|=k}} (-1)^{k-m} {k\choose m} P_U\right) \cdot R_W \\
&= \sum_{k=m}^{|W|} \sum_{\substack{U\subseteq W \\ |U|=k}} (-1)^{k-m} {k\choose m} R_W \\
&= \sum_{k=m}^{|W|} (-1)^{k-m} {|W| \choose k} {k\choose m} R_W \\
&= \sum_{k=m}^{|W|} (-1)^{k-m} \frac{|W|!}{k!(|W|-k)!} \frac{k!}{m!(k-m)!} R_W \\
&= {|W| \choose m} \sum_{k=m}^{|W|} (-1)^{k-m} {{|W|-m} \choose {k-m}} R_W \\
&= {|W| \choose m} \sum_{j=0}^{|W|-m} (-1)^j {{|W|-m} \choose j} R_W.
\end{align*}
Here, $\sum_{j=0}^{|W|-m} (-1)^j {{|W|-m} \choose j}$ is equal to $(1-1)^{|W|-m}=0$ if $|W|>m$, and $1$ if $|W|=m$. Therefore, $$S_m R_W=\begin{cases} R_W, \mbox{ if } |W|=m; \\ 0, \mbox{ otherwise}.\end{cases}$$
This proves $S_m=Q_m$, and hence $f^{(m)}(1)=(-1)^m m!\cdot Q_m$. Since the graph $\Gamma$ is assumed to be finite, $f(r)$ is a polynomial with operator coefficients, and its Taylor series expansion about $1$ is equal to:
\begin{align*}
f(r) &= \sum_{m=0}^{|\Lambda|} \frac{f^{(m)}(1)}{m!} (r-1)^m \\
&= \sum_{m=0}^{|\Lambda|} (-1)^m (r-1)^m Q_m. \qedhere
\end{align*}
\end{proof}
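As a minimal check of the lemma, for a single vertex $\Lambda=\{1\}$ we have $f(r)=I-rP_1$, $Q_0=P_1^\perp$ and $Q_1=P_1$, and indeed
$$Q_0+(1-r)Q_1=P_1^\perp+(1-r)P_1=I-rP_1=f(r).$$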
\begin{theorem}\label{thm.propertyP} If a representation $T:P_\Gamma\to\bh{H}$ is $\ast$-regular, then $T$ satisfies the property (P). Moreover, the constant $\rho$ in the property (P) can be chosen to be $\rho=0$.
\end{theorem}
\begin{proof} Let $V:P_\Gamma\to\bh{K}$ be the minimal isometric Nica-covariant dilation for $T$ from Theorem \ref{thm.nc}. By Lemma \ref{lm.taylorP}, for each $0\leq r\leq 1$,
\begin{align*}
f(r) &= \sum_{\substack{U\subseteq \Lambda \\ U\mbox{ is a clique}}} (-1)^{|U|} r^{|U|} P_U \\
&= \sum_{m=0}^{|\Lambda|} (-1)^m (r-1)^m Q_m
\end{align*}
For $0\leq r\leq 1$, $(-1)^m (r-1)^m\geq 0$. Since each $Q_m$ is an orthogonal projection, $f(r)\geq 0$. Notice when $U$ is a clique, $P_U=V_UV_U^*$, where $V_U=\begin{bmatrix} T_U & 0 \\ * & * \end{bmatrix}$ with respect to $\mathcal{K}=\mathcal{H}\oplus\mathcal{H}^\perp$. Therefore, by projecting onto the corner corresponding to $\mathcal{H}$, we obtain that for all $0\leq r\leq 1$, $$\sum_{\substack{U\subseteq \Lambda \\ U\mbox{ is a clique}}} (-1)^{|U|} r^{|U|} T_U T_U^*\geq 0.$$
This implies $T$ satisfies the property (P) with $\rho=0$. \end{proof}
It is not clear when the converse of Theorem \ref{thm.propertyP} also holds. Popescu established in \cite[Corollary 5.2]{Popescu1999} the converse for a special class of operators.
\begin{proposition}[Corollary 5.2, \cite{Popescu1999}]\label{prop.Popescu} Let $\Gamma=K_{n_1,\cdots,n_k}$ be a complete $k$-partite graph. Let $\{T_{i,j}\in\bh{H}: 1\leq i\leq k, 1\leq j\leq n_i\}$ be a family of operators such that:
\begin{enumerate}
\item\label{cond.P1} For each $i$, $\sum_{j=1}^{n_i} T_{i,j} T_{i,j}^*\leq I$,
\item\label{cond.P2} The associated representation $T:P_\Gamma\to\bh{H}$ has the property (P).
\end{enumerate}
Then the associated representation $T$ has a minimal isometric Nica-covariant dilation.
\end{proposition}
However, for a representation of an arbitrary graph semigroup, it is not clear what should replace Condition \pref{cond.P1} of Proposition \ref{prop.Popescu}.
\begin{example} Let us consider the special case when $n_1=\cdots=n_k=1$, so that the graph $\Gamma$ is the complete graph on $k$ vertices. Let $\{T_i\}_{i=1}^k$ be a family of operators as in Proposition \ref{prop.Popescu}. Notice that Condition \pref{cond.P1} simply says that each $T_i$ is a contraction. Proposition \ref{prop.Popescu} states that such a family has a minimal isometric Nica-covariant dilation, and thus by Theorem \ref{thm.main.all}, it has to satisfy Condition \pref{eq.main1}. Note that on a complete graph, Condition \pref{eq.main1} is the same as Brehmer's Condition \pref{eq.Brehmer.regular}.
In fact, we can derive the Condition \pref{eq.main1} directly from the property (P), without invoking the minimal isometric Nica-covariant dilation.
For any subset $W\subseteq\{1,2,\cdots,k\}$, denote $$\Delta_W(r)=\sum_{U\subseteq W} (-1)^{|U|} r^{|U|} T_U T_U^*.$$
The property (P) implies that for some $0\leq \rho<1$ and all $\rho\leq r\leq 1$, $\Delta_{\{1,2,\cdots,k\}}(r)\geq 0$. For any $1\leq i\leq k$, let $W_i=\{1,\cdots,i-1,i+1,\cdots,k\}$. Notice that $$\Delta_{\{1,2,\cdots,k\}}(r) = \Delta_{W_i}(r) - rT_i\Delta_{W_i}(r) T_i^*.$$
We claim that $\Delta_{W_i}(r)\geq 0$ for all $\rho\leq r< 1$. Suppose otherwise; since $\Delta_{W_i}(r)$ is a self-adjoint operator, we may set $$-M=\inf\{\langle \Delta_{W_i}(r)h,h\rangle: \|h\|=1\}<0.$$
Pick a unit vector $h$ so that $-M\leq \langle \Delta_{W_i}(r)h,h\rangle< -M\cdot r$. Since $T_i$ is a contraction and $-M<0$, we have $\langle \Delta_{W_i}(r)v,v\rangle\geq -M\|v\|^2\geq -M$ for every $v$ with $\|v\|\leq 1$. Then,
\begin{align*}
\langle rT_i\Delta_{W_i}(r) T_i^*h,h\rangle &= r\cdot \langle \Delta_{W_i}(r) T_i^*h,T_i^*h\rangle \\
&\geq -M\cdot r.
\end{align*}
Therefore,
\begin{align*}
\langle \Delta_{\{1,2,\cdots,k\}}(r) h,h\rangle &= \langle \Delta_{W_i}(r)h,h\rangle - \langle rT_i\Delta_{W_i}(r) T_i^*h,h\rangle \\
&< -M\cdot r + M\cdot r = 0.
\end{align*}
This contradicts $\Delta_{\{1,2,\cdots,k\}}(r)\geq 0$. Hence, we conclude that $\Delta_{W_i}(r)\geq 0$; in other words, $\{T_1,\cdots,T_{i-1},T_{i+1},\cdots,T_k\}$ satisfies the property (P). Similarly, by removing one element at a time, we obtain that for any $W\subseteq\{1,2,\cdots,k\}$, $\Delta_W(r)\geq 0$ for all $\rho\leq r<1$. In particular, letting $r\to 1$, we obtain that for every $W\subseteq\{1,2,\cdots,k\}$, $$\sum_{U\subseteq W} (-1)^{|U|} T_U T_U^*\geq 0.$$
This is exactly Condition \pref{eq.main1} on the complete graph (equivalently, Brehmer's Condition \pref{eq.Brehmer.regular}).\end{example}
\begin{remark} For an arbitrary graph $\Gamma$, it is not clear what condition should replace Condition \pref{cond.P1} in Proposition \ref{prop.Popescu} in order to guarantee a minimal isometric Nica-covariant dilation for a representation $T:P_\Gamma\to\bh{H}$ satisfying the property (P).
\end{remark}
\bibliographystyle{abbrv}
\section{Introduction}
Young massive stellar clusters are environments of copious star formation, and typically host a large number of very massive stars \citep{PortegiesZwart2010}.
For this reason, they have long been considered as potential sites of cosmic-ray (CR) acceleration \citep{Parizot2004}.
The acceleration may take place at shock fronts of supernova remnants (SNRs) \citep[which may collide with the strong winds of massive stars in the cluster, see e.g.\xspace][and references therein]{Bykov2020}, or at the termination shock of the superbubble that is excavated by the combined stellar winds of the cluster \citep[e.g.][]{Bykov2014,Gupta2018,Morlino2021}.
Massive clusters form from correspondingly massive molecular clouds, which are not very common in the Milky Way but often found in starburst galaxies \citep{Fujii2016}.
Nevertheless, the notion that massive star clusters are responsible for the bulk of hadronic CRs\footnote{
Here and in the following, the term ``hadronic cosmic ray'' refers to cosmic-ray nuclei, as opposed to, e.g., cosmic-ray electrons and positrons.
}
accelerated within our Galaxy represents a viable alternative to the long-standing ``SNR paradigm'', in which (isolated) SNRs are the primary acceleration sites \citep{PortegiesZwart2010,Aharonian2019,Morlino2021}.
Through interaction with ambient gas and radiation fields, high-energy hadronic CRs produce high-energy $\gamma$ rays, which provides strong motivation for the search for $\gamma$-ray emission from massive stellar clusters \citep[this has been realised already long ago, see e.g.][]{Cesarsky1983}.
Indeed, a bubble, or ``cocoon'', in the Cygnus~X star-forming region has been detected in \emph{Fermi}-LAT data in the $\sim$\SI{1}{\GeV}--few~\SI{100}{\GeV} energy range \citep{FermiLAT2011}.
Subsequently, \emph{Fermi}-LAT has detected diffuse $\gamma$-ray emission in the same energy range around a number of other massive stellar clusters in the Milky Way \citep{Yang2017,Yang2018,Yang2020,Sun2020,Sun2020a}.
Searches at higher energies (i.e.\ at \SI{1}{\TeV} and above) with ground-based instruments have also led to several detections:
\begin{itemize}
\item the Cygnus region harbours several sources of TeV-energy $\gamma$ rays \citep{Abdo2007,VERITAS2018}, and has recently been detected up to energies of hundreds of TeV \citep{HAWC2021,LHAASO2021};
\item the young stellar cluster Westerlund~2 within the star formation region RCW~49, which, with WR~20a, hosts an extraordinarily massive binary star system \citep{HESS_Wd2_2007,HESS_Wd2_2011};
\item the young stellar cluster Westerlund~1 \citep{HESS_Westerlund1_2012}, which will be introduced in more detail below;
\item the superbubble 30 Dor C, whose detection is particularly noteworthy due to its distant location in the Large Magellanic Cloud \citep{HESS_30DorC_2015};
\item the stellar cluster Cl$^\ast$~1806$-$20, which contains both a luminous blue variable candidate, LBV~1806$-$20, and a magnetar, SGR~1806$-$20 \citep{HESS_MassiveStars_2018}.
\end{itemize}
However, the $\gamma$-ray emission can be linked directly to the stellar clusters in only some of the above cases, the precise CR acceleration sites remain unidentified, and none of the detections constitutes unequivocal evidence for the acceleration of hadronic CRs by the respective clusters.
The assertion of the latter point is complicated by the fact that high-energy $\gamma$ rays can be produced by CRs via two competing processes.
Besides their production in the decay of neutral pions (and other short-lived particles), produced in turn in interactions of hadronic CRs with ambient matter -- the ``hadronic scenario'' -- they may also be created through the inverse Compton (IC) process, in which high-energy electrons and positrons\footnote{
Hereafter, we use the term ``electrons'' to refer to both electrons and positrons.
}
can up-scatter low-energy photons from ambient radiation fields to TeV energies -- the ``leptonic scenario''.
These two scenarios can only be distinguished by carrying out detailed spectromorphological studies of the $\gamma$-ray emission, and combining the results with those obtained at other wavelengths.
In this article, we present such a study for the young massive stellar cluster Westerlund~1.
Westerlund~1, named after its discoverer Bengt Westerlund \citep{Westerlund1961} and located at R.A.(2000)=$16^\mathrm{h}47^\mathrm{m}04.0^\mathrm{s}$, Dec.(2000)=$-45^\circ 51'04.9''$ \citep{Brandner2008}, is the most massive known young stellar cluster in the Milky Way, with an estimated mass of around $10^5\,M_\odot$ \citep{Clark2005,Brandner2008,PortegiesZwart2010}.
It hosts a rich population of evolved massive stars, including significant fractions of all known Galactic Yellow Hypergiants \citep{Clark2005} and Wolf-Rayet stars \citep{Crowther2006}.
The half-mass radius of the cluster is approximately \SI{1}{\pc} \citep{Brandner2008}.
Many estimates for the age of the cluster and its distance from Earth have been put forward in the past.
Most age estimates agree with an age of \SIrange{3}{5}{\mega\year} \citep{Clark2005,Crowther2006,Brandner2008}, although the single-age paradigm has been questioned recently after finding that the observed luminosities of cool supergiants are more consistent with an age of $\sim$\SI{10}{\mega\year} \citep{Beasor2021}.
Early distance estimates, using various techniques, find distances of around \SI{4}{\kpc} \citep{Clark2005,Crowther2006,Kothes2007,Brandner2008}.
Recently, data from the \emph{Gaia} spacecraft were used to obtain new distance estimates.
While most of them are compatible with the old estimates \citep{Davies2019,Rate2020,Beasor2021,Negueruela2022}, closer distances of $\sim$\SI{2.7}{\kpc} have also been obtained \citep{Aghakhanloo2020,Aghakhanloo2021}.
\citet{Clark2019} have questioned the reliability of \emph{Gaia} (DR2) data in the Westerlund~1 field altogether, rendering the new estimates somewhat uncertain.
For this article, we adopted an age of \SI{4}{\mega\year} and a distance of \SI{3.9}{\kpc}, as these values are compatible with the majority of published results.
Additionally, we will need in the course of this paper estimates for the properties of the collective cluster wind of Westerlund~1, which is formed as a superposition of the strong winds of the massive stars in the cluster.
Only a few estimates can be found in the literature; as typical values, we adopted a kinetic luminosity of the wind of $L_\mathrm{w}\sim$\SI{e39}{\erg\per\second} \citep{Muno2006} and a wind velocity of $v_\mathrm{w}\sim$\SI{3000}{\km\per\second} \citep{Morlino2021}.
All parameter values for Westerlund~1 assumed in this work are summarised in Table~\ref{tab:wd1_pars}.
\begin{table}
\centering
\caption{Parameter values for Westerlund~1 assumed in this work.}
\label{tab:wd1_pars}
\begin{tabular}{cccc}
\hline\hline
Par. & Description & Value & Ref.\\\hline
$d$ & distance from Earth & \SI{3.9}{\kpc} & (1,2) \\
$\tau$ & cluster age & \SI{4}{\mega\year} & (3,4) \\
$L_\mathrm{w}$ & \makecell{kinetic luminosity\\of cluster wind} & \SI{e39}{\erg\per\second} & (5) \\
$v_\mathrm{w}$ & velocity of cluster wind & \SI{3000}{\km\per\second} & (6) \\
\hline
\end{tabular}
\tablebib{
(1) \citet{Kothes2007}; (2) \citet{Davies2019}; (3) \citet{Clark2005};
(4) \citet{Brandner2008}; (5) \citet{Muno2006}; (6) \citet{Morlino2021}.
}
\end{table}
Westerlund~1 has been studied extensively in the X-ray domain.
Observations with the \emph{Chandra} telescope have revealed diffuse hard X-ray emission from the core of the cluster \citep{Muno2006}, which was later identified as likely being of thermal origin with \emph{XMM-Newton} observations \citep{Kavanagh2011}.
Additionally, \citet{Muno2006a} have identified an X-ray magnetar, CXOU~J164710.2$-$455216, which was presumably created in the explosion of a very massive ($>40\,M_\odot$) progenitor star \citep{Clark2008,Belczynski2008}.
Interestingly, CXOU~J164710.2$-$455216 is the only known remnant of a stellar explosion within Westerlund~1.
Moving to larger spatial scales (i.e.\ beyond the bounds of the cluster itself), an analysis of \emph{Fermi}-LAT data between 3 and \SI{300}{\GeV} by \citet{Ohm2013} revealed extended $\gamma$-ray emission in the vicinity of Westerlund~1.
With more data accumulated since then, the latest \emph{Fermi}-LAT source catalogue \citep[4FGL-DR2,][]{FermiLAT2020,FermiLAT2020a} lists six sources within $1.1^\circ$ from the cluster centre.
Besides the stellar cluster and its members, several objects that are located at relatively small angular separations from Westerlund~1 could potentially be contributing to the observed $\gamma$-ray emission.
This includes two energetic ($\dot{E}>\SI{2e35}{\erg\per\second}$) pulsars, PSR~J1648$-$4611 and PSR~J1650$-$4601 \citep{Manchester2005}, as well as the low-mass X-ray binary (LMXB) 4U~1642$-$45 \citep{Forman1978}.
On the other hand, it is quite possible that some of the six sources listed in the 4FGL catalogue share a common physical origin, and were separated into distinct components only because the true source morphology is very complex.
Finally, at even higher energies, \citet{HESS_Westerlund1_2012} detected a large, extended ($\sim$$2^\circ$ diameter) emission region centred on Westerlund~1, named HESS~J1646$-$458, between 0.45 and $\sim$\SI{20}{\TeV} with the H.E.S.S.\ experiment.
Based on the properties of the emission and taking into account multi-wavelength data, the authors found Westerlund~1 to be the most likely explanation of the $\gamma$-ray emission in a single-source scenario, but were unable to draw definitive conclusions based on the data set available at the time.
Since then, the exposure collected with H.E.S.S.\ on HESS~J1646$-$458 has almost quintupled, in large part thanks to a dedicated observation campaign in 2017.
This, together with recent advances in analysis techniques, enabled a new, detailed study of the $\gamma$-ray emission surrounding Westerlund~1, which we present here.
In Sect.~\ref{sec:hess_data_analysis}, we introduce the H.E.S.S.\ data set and provide a description of the data analysis.
The results of the H.E.S.S.\ data analysis are given in Sect.~\ref{sec:hess_results}.
Besides H.E.S.S.\ data, we have also analysed data from H~I and CO observations in the vicinity of Westerlund~1 and present the results in Sect.~\ref{sec:radio}.
A detailed discussion of the results, considering multiple explanations for the observed $\gamma$-ray emission, is presented in Sect.~\ref{sec:discussion}, before we summarise our findings and provide an outlook in Sect.~\ref{sec:conclusion}.
\section{Observations and data analysis}
\subsection{H.E.S.S.\ data set and analysis}
\label{sec:hess_data_analysis}
H.E.S.S.\ is an array of five imaging atmospheric Cherenkov telescopes (IACTs), located in the Southern hemisphere in Namibia ($23^\circ 16'18''$~S, $16^\circ 30'00''$~E) at an altitude of \SI{1800}{\meter} above sea level \citep{HESS_Crab_2006,Holler2015}.
The array comprises four \SI{12}{\meter}-diameter telescopes (CT1-4), which are arranged in a square with \SI{120}{\meter} side length and began operation in late 2003.
A fifth telescope (CT5), with \SI{28}{\meter} diameter, was added in the centre of the array in 2012.
The telescopes detect the Cherenkov light produced in atmospheric air showers initiated by primary $\gamma$ rays, where the main background consists of showers caused by hadronic CRs.
With the central telescope included, the array is sensitive to $\gamma$ rays in the energy range between $\sim$\SI{0.1}{\TeV} and $\sim$\SI{100}{\TeV}; with CT5 alone, thresholds as low as \SI{20}{\GeV} have been achieved in studies of pulsed emission \citep{HESS_VelaPulsar_2018}.
The H.E.S.S.\ data set for HESS~J1646$-$458 comprises 362~observation runs after quality selection, taken over the course of more than 13~years between June~18, 2004 and October~11, 2017.
The runs amount to a total observation time of 164.2~hours, which represents a significant increase with respect to the previous publication \citep{HESS_Westerlund1_2012} (33.8~hours).
We note that not all of the observations have targeted HESS~J1646$-$458 directly; some have been taken as part of surveys, and some were primarily targeted at the nearby sources HESS~J1640$-$465 \citep{HESS_1640_2014} and HESS~J1641$-$463 \citep{HESS_1641_2014}, leading to a gradient in exposure across the HESS~J1646$-$458 region.
Only data from the four smaller telescopes (CT1-4) were considered in the analysis presented here.
For best performance, we restricted the maximum zenith angle of the analysed observation runs to $<60^\circ$ and the maximum offset angle between the reconstructed direction of events and the telescope pointing direction to $<2^\circ$.
With this selection, an energy threshold of \SI{0.37}{\TeV} was achieved in the final analysis.
We selected $\gamma$-like events using the method described in \citet{Ohm2009} and reconstructed their energy and arrival direction with the algorithm presented in \citet{Parsons2014}.
Subsequently, we converted our data to the FITS-based data format described in \citet{Deil2018} and performed the high-level data analysis using the \textsc{Gammapy} package \citep{Deil2017,Deil2020} (v0.17).
All findings were confirmed with two cross-check analyses: one based on a completely independent calibration and data analysis chain \citep{deNaurois2009}, and one based on the same calibration and event reconstruction algorithms as those used in the main analysis, but carried out with the \textsc{ctools} package \citep{Knoedlseder2016} (v.1.6.3); the latter analysis is documented in \citet{Specovius2021}.
Furthermore, results of another, intermediate analysis of the data set, which has inspired parts of the analysis presented here, can be found in \citet{Zorn2019}.
In the high-level analysis, we employed a concept that has only recently been established for the analysis of IACT data: a 3-dimensional likelihood analysis, in which the data can be modelled simultaneously in two spatial dimensions and as a function of energy \citep{Mohrmann2019}.
In this method, contrary to more established ones, the residual background of CR-induced air shower events (``hadronic'' background) for a given observation run is not directly estimated from source-free regions within the observed field itself, but rather provided by a background model.
This model was constructed from archival observations and subsequently adjusted to the analysed observations following the procedure outlined in \citet{Mohrmann2019}.
The 3-dimensional likelihood analysis method is especially suited for the analysis of complex source regions and largely extended sources, and is thus a suitable choice for the analysis of HESS~J1646$-$458.
As a first step in the analysis, separate energy thresholds were determined for each observation run, requiring that the energy reconstruction bias is below 10\% and that the background model is used only above its validity threshold \citep{Mohrmann2019}.
To ensure sufficient exposure across the entire region down to the lowest energies, we enforced a minimal energy threshold of \SI{0.37}{\TeV}.
We subsequently computed 3-dimensional maps of the observed number of events, predicted background, and exposure, composed of spatial pixels of $0.02^\circ \times 0.02^\circ$ and an energy axis of 16~bins per decade of energy, with the $6^\circ \times 6^\circ$ region of interest centred on Westerlund~1.
For each observation, we adjusted the background model to the observed data by fitting a global normalisation and a spectral tilt parameter, taking into account only regions in the field of view that are free of $\gamma$-ray emission.
In order to safely exclude regions with $\gamma$-ray emission from the background fit, an iterative procedure as described in \citet{HESS_HGPS_2018} was employed to generate an exclusion map.
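As an illustration of this adjustment, the following minimal sketch fits a normalisation and tilt to a one-dimensional toy counts spectrum by minimising a Poisson likelihood; the parametrisation and all numbers are purely illustrative, and the actual procedure \citep{Mohrmann2019} operates on the 3-dimensional maps within \textsc{Gammapy}.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def adjusted_bkg(template, energies, norm, tilt, e_ref=1.0):
    # Scale the template by a global norm and a power-law tilt in energy
    return norm * (energies / e_ref) ** (-tilt) * template

def neg_log_like(pars, observed, template, energies, mask):
    # Poisson likelihood over source-free bins (mask == True); the n!
    # term is dropped since it does not depend on the parameters
    mu = adjusted_bkg(template, energies, *pars)[mask]
    return np.sum(mu - observed[mask] * np.log(mu))

rng = np.random.default_rng(1)
energies = np.geomspace(0.37, 100.0, 40)            # TeV
template = 1e4 * energies ** (-2.7)                 # background template
observed = rng.poisson(1.1 * template * energies ** (-0.05))
mask = np.ones(energies.size, dtype=bool)           # all bins source-free

res = minimize(neg_log_like, x0=[1.0, 0.0],
               args=(observed, template, energies, mask),
               method="L-BFGS-B", bounds=[(0.1, 10.0), (-1.0, 1.0)])
print("fitted norm, tilt:", res.x)                  # ~[1.1, 0.05]
\end{verbatim}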
\subsection{H.E.S.S.\ analysis results}
\label{sec:hess_results}
\subsubsection{Maps and radial profiles}
\label{sec:maps_profiles}
\begin{figure*}[ht]
\centering
\subfigure[]{
\includegraphics{significance_map_diverging}
\label{fig:sign_map_div}
}
\subfigure[]{
\includegraphics{significance_map_with_boxes}
\label{fig:sign_map_boxes}
}
\caption{
Significance maps after background subtraction.
The position of Westerlund~1 is marked by the black star symbol; the grey, dashed line shows the Galactic plane.
(a) Map for the entire $6^\circ \times 6^\circ$ region of interest, smoothed with a $0.07^\circ$ top-hat kernel.
The final exclusion map is shown in black, earlier iterations in blue, green, purple, and orange.
Locations of previously detected sources that are not connected to HESS~J1646$-$458 are indicated by black, open symbols.
(b) Map with detail view of the emission surrounding Westerlund~1, smoothed with a $0.22^\circ$ top-hat kernel.
The colour scale is saturated at the maximum observed significance value associated with the HESS~J1646$-$458 region.
Contour lines corresponding to a significance of 4, 8, and 12 $\sigma$ are shown in blue.
Signal regions a--p used for spectrum extraction are overlaid (black), as are segments 1--5 for the computation of radial profiles (white dashed).
}
\label{fig:sign_maps}
\end{figure*}
We show in Fig.~\ref{fig:sign_maps} the resulting residual significance maps after background subtraction, where the significance was computed following \citet{Li1983}.
For the map in Fig.~\ref{fig:sign_map_div}, a top-hat smoothing with a kernel of radius $0.07^\circ$ has been employed.
The corresponding distribution of significance values -- for all pixels and those outside the exclusion map, displayed in black -- is presented in Fig.~\ref{fig:sign_dist}.
As expected for the case of purely statistical fluctuations, the distribution for pixels outside the exclusion map follows very closely that of a Gaussian distribution with unit width, indicating a good description of the hadronic background.
Figure~\ref{fig:sign_map_boxes} shows a significance map obtained with a correlation radius of $0.22^\circ$.
Overlaid are 16~square ``signal regions'' (of size $0.45^\circ\times 0.45^\circ$ each), labelled a--p, that cover the $\gamma$-ray emission of HESS~J1646$-$458, as well as 5~circular segments.
These signal regions and segments are used in the further characterisation of the $\gamma$-ray emission (see below).
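For reference, the significance formula of \citet{Li1983} (their Eq.~17) can be implemented in a few lines, as sketched below; note that where the background is treated as perfectly known (as assumed for Table~\ref{tab:box_stats}), the appropriate limit of this expression applies instead.
\begin{verbatim}
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    # Eq. (17) of Li & Ma (1983); alpha is the ON/OFF exposure ratio
    n_on, n_off = np.asarray(n_on, float), np.asarray(n_off, float)
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    sign = np.sign(n_on - alpha * n_off)
    return sign * np.sqrt(2.0 * (term_on + term_off))

# 120 ON counts vs. an expected background of alpha * n_off = 100
print(li_ma_significance(120, 1000, 0.1))   # ~1.8 sigma
\end{verbatim}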
\begin{figure}
\centering
\includegraphics{significance_dist}
\caption{
Significance entry distribution.
The black histogram corresponds to all pixels of the map shown in Fig.~\ref{fig:sign_map_div} with non-zero entries, the grey histogram to all pixels outside of the final exclusion map.
The green line represents the fit of a Gaussian distribution to the grey histogram, the best-fit mean and width are $\mu=-0.043\pm 0.005$ and $\sigma=1.008\pm 0.005$, respectively.
A Gaussian distribution with mean $\mu=0$ and width $\sigma=1$ is shown by the orange, dashed line for comparison.
}
\label{fig:sign_dist}
\end{figure}
Having obtained a satisfactory description of the residual hadronic background, we computed flux maps displaying the excess $\gamma$-ray emission (see Fig.~\ref{fig:flux_maps}).
To focus on the larger-scale structure of the emission of HESS~J1646$-$458, a top-hat smoothing with a kernel of radius~$0.22^\circ$ -- the same value as already used in \citet{HESS_Westerlund1_2012} -- has been applied for the maps in panels~\subref{fig:flux_map}, \subref{fig:flux_map_1TeV}, and \subref{fig:flux_map_5TeV}.
Additionally, we show in panel~\subref{fig:flux_map_hires} a flux map smoothed with a Gaussian kernel of $0.07^\circ$ width, which corresponds approximately to the size of the point-spread function for the analysis configuration employed here.
\begin{figure*}
\centering
\subfigure[Smoothing kernel: $0.22^\circ$ top hat]{
\includegraphics{flux_map}
\label{fig:flux_map}
}
\subfigure[Smoothing kernel: $0.07^\circ$ Gaussian]{
\includegraphics{flux_map_hires}
\label{fig:flux_map_hires}
}\\
\subfigure[Smoothing kernel: $0.22^\circ$ top hat]{
\includegraphics{flux_map_above_1TeV}
\label{fig:flux_map_1TeV}
}
\subfigure[Smoothing kernel: $0.22^\circ$ top hat]{
\includegraphics{flux_map_above_5TeV}
\label{fig:flux_map_5TeV}
}
\caption{
Flux maps of the HESS~J1646$-$458 region.
The position of Westerlund~1 is marked by the black star symbol; the grey, dashed line shows the Galactic plane.
Coloured symbols indicate objects listed in the legend in panel (a).
Dark grey square markers denote positions of sources from the 4FGL-DR2 catalogue \citep{FermiLAT2020,FermiLAT2020a}, where those sources that are still significant ($\sqrt{\mathrm{TS}}>3$) above \SI{30}{\GeV} are shown with a diamond marker ($\Diamond$).
Grey circles labelled `A' and `B' mark regions defined in \citet{HESS_Westerlund1_2012}; region `C' (at R.A.~$16^\mathrm{h}49^\mathrm{m}4.8^\mathrm{s}$, Dec.~$-46^\circ 06'00''$) is newly defined here.
The white circle marker indicates the coordinate with respect to which the radial profiles in Fig.~\ref{fig:radial_profiles} and \ref{fig:cr_dens_profile_shifted} have been computed.
The scale bar denotes a projected distance of \SI{40}{\pc}, for the nominal distance to Westerlund~1 of \SI{3.9}{\kpc}.
The maps are for different energy thresholds (indicated at the bottom of each panel) and were computed using different smoothing kernels (stated below each figure).
Colour scales are saturated at the maximum observed flux value associated with the HESS~J1646$-$458 region.
Contour lines shown in blue are at flux levels of $F=(12.5 / 20 / 27.5)\times 10^{-9}\si{\per\square\cm\per\second\per\steradian}$ for panels (a) and (b), at $F=(3 / 5.5 / 8)\times 10^{-9}\si{\per\square\cm\per\second\per\steradian}$ for panel (c), and at $F=(1 / 1.5)\times 10^{-9}\si{\per\square\cm\per\second\per\steradian}$ for panel (d).
}
\label{fig:flux_maps}
\end{figure*}
Very strong $\gamma$-ray emission is observed from the known, nearby sources HESS~J1640$-$465 and HESS~J1641$-$463.
Turning to HESS~J1646$-$458, we observe that its spatial morphology is very complex.
Notably, the emission is not peaked at the position of Westerlund~1, but rather exhibits a structure resembling that of a shell, surrounding the stellar cluster.
This global structure is present in all of the displayed maps, and thus does not seem to vary with $\gamma$-ray energy.
On top of the large-scale structure, we identify peaks of the emission in the circular regions labelled `A' and `B', which correspond to regions with enhanced emission already found in \citet{HESS_Westerlund1_2012}, confirming these findings.
Additionally, we find a peak -- visible in particular in the flux map above~\SI{4.9}{\TeV} -- in region `C', which encompasses the two energetic pulsars PSR~J1648$-$4611 and PSR~J1650$-$4601, as well as an emission region extending beyond the shell-like structure, to the east of region~C.
For future reference, we provide the following source names for these regions: HESS~J1645$-$455 (region~A), HESS~J1647$-$465 (region~B), HESS~J1649$-$460 (region~C), and HESS~J1652$-$462 (emission east of region~C).
We stress, however, that we have found -- as will be detailed in the course of this paper -- no indications that the $\gamma$-ray emission in these regions is of a different origin than that of the rest of the emission, and that the regions should therefore not be interpreted as distinct sources.
Rather, the regions have been labelled in order to ease the discussion of the source morphology.
Because HESS~J1646$-$458 is located along the Galactic plane, and towards the inner Galaxy, it is safe to assume that diffuse $\gamma$-ray emission contributes to the observed signal to some degree.
This diffuse emission is produced by CRs that propagate freely within the Galactic disc, and can be due to bremsstrahlung or IC emission of CR electrons, or interactions of hadronic CRs with gas.
Due to its diffuse nature, the diffuse $\gamma$-ray emission from the Galaxy is challenging to measure directly, and while it has been detected over large scales in the TeV energy range \citep[e.g.,][]{HESS_Diffuse_2014,Tibet2021}, these measurements do not provide a good constraint for the level of diffuse emission in the region of HESS~J1646$-$458.
Therefore, in order to assess the possible contamination with diffuse emission of the $\gamma$-ray signal of HESS~J1646$-$458, we have used a prediction of the diffuse $\gamma$-ray flux based on the \textsc{Picard} CR propagation code \citep{Kissmann2014,Kissmann2015,Kissmann2017}.
This analysis is described in more detail in Appendix~\ref{sec:appendix_picard}, where we show in Fig.~\ref{fig:flux_maps_picard_subtracted} the same flux maps as in Fig.~\ref{fig:flux_maps}, but with the predicted flux due to diffuse emission subtracted.
We conclude that, while the Galactic diffuse emission likely contributes at a considerable level -- $\sim$24\% ($\sim$17\%/$\sim$8\%) above a threshold energy of \SI{0.37}{\TeV} (\SI{1}{\TeV}/\SI{4.9}{\TeV}), according to the \textsc{Picard} template --, it cannot explain the bulk of the $\gamma$-ray emission, and does not alter the source morphology in a significant way.
For these reasons, and because of the rather large uncertainties associated with any estimate of the Galactic diffuse emission in a particular region of the sky, we have performed the subsequent analysis without explicitly taking it into account, noting that none of the conclusions drawn in this paper are affected by this.
In order to further characterise the morphology of the emission -- and its apparent invariance with respect to energy -- we derived radial profiles of the observed excess.
Noting that Westerlund~1 is not located precisely at the centre of the shell-like structure, the profiles were computed not with respect to the stellar cluster position, but to a slightly shifted coordinate (R.A.~$16^\mathrm{h}46^\mathrm{m}36^\mathrm{s}$, Dec.~$-46^\circ 01'12''$), which corresponds to the barycentre of the $\gamma$-ray excess.
The asymmetry of the observed emission with respect to the cluster position could for example be caused by inhomogeneities in the ISM surrounding the stellar cluster.
Fig.~\ref{fig:radial_profiles} shows the profiles for different energy bands (upper panel), and for the five segments defined in Fig.~\ref{fig:sign_map_boxes} (lower panel).
The excess profiles:
\begin{enumerate}[(i)]
\item confirm the shell-like structure, with a peak at around $0.5^\circ$ (corresponding to a projected distance of $\sim$\SI{34}{\pc} for a cluster distance of \SI{3.9}{\kpc}), followed by a relatively slow fall-off;
\item exhibit the same shape in all energy bands, that is, they show no indications for an energy-dependent morphology of the excess;
\item are also largely compatible between the five segments, with only minor small-scale deviations discernible.
\end{enumerate}
In order to reinforce the second point above, we carried out $\chi^2$ tests in which we compared the profile for each energy band with one computed using all events outside this band (thus ensuring statistically independent sets).
The results, listed in Table~\ref{tab:ebands}, show that each of the profiles is compatible with the total profile in terms of its shape within the statistical uncertainties.
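The shape comparison can be cast as a simple $\chi^2$ test on area-normalised profiles, as in the following sketch; the 14 degrees of freedom in Table~\ref{tab:ebands} are consistent with, e.g., 15 radial bins and one constraint from the normalisation, but the binning and numbers below are illustrative only.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def chi2_shape_test(prof_a, err_a, prof_b, err_b):
    # Normalise both (statistically independent) profiles to unit
    # integral, then compare bin by bin; one degree of freedom is
    # lost to the normalisation
    a, ea = prof_a / prof_a.sum(), err_a / prof_a.sum()
    b, eb = prof_b / prof_b.sum(), err_b / prof_b.sum()
    stat = np.sum((a - b) ** 2 / (ea ** 2 + eb ** 2))
    ndof = a.size - 1
    return stat, ndof, chi2.sf(stat, ndof)

rng = np.random.default_rng(3)
truth = np.exp(-0.5 * ((np.linspace(0, 1.4, 15) - 0.5) / 0.4) ** 2)
band = truth + rng.normal(0, 0.05, 15)   # profile in one energy band
rest = truth + rng.normal(0, 0.03, 15)   # profile from all other events
print(chi2_shape_test(band, np.full(15, 0.05), rest, np.full(15, 0.03)))
\end{verbatim}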
\begin{figure}
\centering
\includegraphics{radial_profiles}
\caption{
Radial excess profiles.
The upper panel shows exposure-corrected excess counts per unit sky area for different energy bands, the lower panel for different segments as defined in Fig.~\ref{fig:sign_map_boxes} -- the black curve, showing the total excess above threshold in all segments, is the same in both panels.
All profiles are normalised to equal area, to allow an easy comparison.
The profiles have been computed with respect to a centre point at R.A.~$16^\mathrm{h}46^\mathrm{m}36^\mathrm{s}$, Dec.~$-46^\circ 01'12''$, slightly shifted from the Westerlund~1 position.
In the calculation of the profiles, we discarded pixels within $0.6^\circ$ of the position of HESS~J1640$-$465.
}
\label{fig:radial_profiles}
\end{figure}
\begin{table}
\centering
\caption{Comparison of radial excess profiles in different energy bands.}
\label{tab:ebands}
\begin{tabular}{cccc}
\hline\hline
Energy range & Excess & $\chi^2/N_\mathrm{dof}$ & $p_{\chi^2}$\\
(TeV) & & & \\\hline
$>0.37$ & \num{15310\pm 440} & -- & --\\
0.37 -- 0.65 & \num{5080\pm 300} & 12.2 / 14 & 59.0\%\\
0.65 -- 1.2 & \num{3910\pm 230} & 16.2 / 14 & 30.3\%\\
1.2 -- 4.9 & \num{5190\pm 220} & 20.9 / 14 & 10.3\%\\
$>4.9$ & \num{1130\pm 80} & 19.5 / 14 & 14.6\%\\
\hline
\end{tabular}
\tablefoot{
`Excess' denotes the number of observed excess events within the white, dashed circle in Fig.~\ref{fig:sign_map_boxes}, excluding a circular region with $0.6^\circ$ radius around HESS~J1640$-$465.
The $\chi^2$ values and corresponding $p$-values are a measure of the compatibility of the shape of the radial excess profile of this energy band with the total radial excess profile (see text for details).
}
\end{table}
\subsubsection{Energy spectra}
\label{sec:spectra}
The complex morphology of HESS~J1646$-$458 prohibits a simple extraction of an energy spectrum by means of modelling the emission with a single spatial model.
We therefore considered the 16~signal regions indicated in Fig.~\ref{fig:sign_map_boxes} and extracted energy spectra for each of these regions.
To obtain the spectra, we fitted \citep[using a forward-folding likelihood fit;][]{Piron2001} a power-law (PL) model,
\begin{equation}\label{eq:pl}
\frac{\mathrm{d}N}{\mathrm{d}E} = \phi_0\cdot \left(\frac{E}{E_0}\right)^{-\Gamma}\quad,
\end{equation}
with $E_0=\SI{1}{\TeV}$ kept fixed in the fit, to the observed $\gamma$-ray excess in each signal region and subsequently derived flux points based on this fitted model.
Table~\ref{tab:box_stats} lists the observed $\gamma$-ray excess as well as the fitted power-law model parameters.
A comparison of the shapes of the energy spectra is provided in Fig.~\ref{fig:box_spectra_comparison}.
Finally, Fig.~\ref{fig:box_index_vs_distance} shows the fitted spectral index for each signal region as a function of its angular separation from Westerlund~1.
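While the quoted spectra result from a forward-folding fit to the counts \citep{Piron2001}, a quick approximation of the power-law parameters of Eq.~(\ref{eq:pl}) can be obtained by a weighted least-squares fit to the derived flux points, as in the following sketch with hypothetical numbers.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

E0 = 1.0  # TeV, fixed reference energy

def power_law(E, phi0, gamma):
    # Eq. (1): dN/dE = phi0 * (E / E0)**(-gamma)
    return phi0 * (E / E0) ** (-gamma)

# Hypothetical flux points for one signal region
rng = np.random.default_rng(0)
E = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])              # TeV
flux = 8e-13 * E ** (-2.4) * (1 + 0.05 * rng.standard_normal(E.size))
flux_err = 0.1 * flux

popt, pcov = curve_fit(power_law, E, flux, p0=[8e-13, 2.4],
                       sigma=flux_err, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
print(f"phi0 = {popt[0]:.2e} +/- {perr[0]:.1e} /TeV/cm2/s, "
      f"Gamma = {popt[1]:.2f} +/- {perr[1]:.2f}")
\end{verbatim}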
\begin{table*}
\centering
\caption{Spectral analysis results for signal regions.}
\label{tab:box_stats}
\begin{tabular}{cS[table-align-text-post=false]S[table-align-text-post=false]S[table-align-text-post=false]ccS[table-align-text-post=false]}
\hline\hline
Signal region & {Excess events} & {Significance} & {Significance} & $\phi_0$ & $\Gamma$ & $\sqrt{\Delta\mathrm{TS}}$\\
& & & {$(E>\SI{4.9}{\TeV})$} & $(10^{-13}\,\mathrm{TeV}^{-1}\mathrm{cm}^{-2}\mathrm{s}^{-1})$ & & \\\hline
a & 396.1 & 5.3$\sigma$ & 0.9$\sigma$ & $3.76\pm 0.66$ & $2.71\pm 0.18$ & 5.9\\
b & 454.9 & 5.6$\sigma$ & 1.7$\sigma$ & $4.34\pm 0.64$ & $2.53\pm 0.13$ & 7.5\\
c & 901.8 & 10.3$\sigma$ & 2.8$\sigma$ & $6.33\pm 0.58$ & $2.49\pm 0.08$ & 12.3\\
d & 1014.0 & 10.8$\sigma$ & 7.7$\sigma$ & $6.66\pm 0.58$ & $2.20\pm 0.06$ & 16.1\\
e & 430.7 & 4.7$\sigma$ & 2.9$\sigma$ & $2.84\pm 0.51$ & $2.35\pm 0.12$ & 6.7\\
f & 648.9 & 7.7$\sigma$ & 4.0$\sigma$ & $4.60\pm 0.64$ & $2.33\pm 0.11$ & 10.0\\
g & 1238.5 & 13.5$\sigma$ & 6.0$\sigma$ & $7.41\pm 0.54$ & $2.45\pm 0.07$ & 16.1\\
h & 1409.2 & 14.5$\sigma$ & 4.6$\sigma$ & $8.14\pm 0.54$ & $2.50\pm 0.06$ & 17.3\\
i & 653.4 & 9.0$\sigma$ & 4.0$\sigma$ & $6.65\pm 0.71$ & $2.41\pm 0.09$ & 11.4\\
j & 1229.0 & 14.0$\sigma$ & 6.8$\sigma$ & $9.07\pm 0.63$ & $2.39\pm 0.06$ & 17.7\\
k & 1246.4 & 13.2$\sigma$ & 3.6$\sigma$ & $7.73\pm 0.54$ & $2.48\pm 0.06$ & 16.5\\
l & 1405.7 & 14.1$\sigma$ & 6.3$\sigma$ & $7.95\pm 0.54$ & $2.51\pm 0.06$ & 16.9\\
m & 469.5 & 6.8$\sigma$ & 1.7$\sigma$ & $5.40\pm 0.73$ & $2.56\pm 0.13$ & 8.2\\
n & 415.4 & 5.1$\sigma$ & 3.5$\sigma$ & $3.49\pm 0.62$ & $2.33\pm 0.13$ & 7.4\\
o & 1259.2 & 14.1$\sigma$ & 5.9$\sigma$ & $8.23\pm 0.57$ & $2.39\pm 0.06$ & 17.7\\
p & 996.7 & 10.5$\sigma$ & 4.0$\sigma$ & $6.29\pm 0.55$ & $2.36\pm 0.07$ & 14.7\\
\hline
\end{tabular}
\tablefoot{
See Fig.~\ref{fig:sign_map_boxes} for the definition of the signal regions.
Significance values were computed following \citet{Li1983}, assuming a perfect knowledge of the background.
$\phi_0$ and $\Gamma$ are the best-fit parameter values of the power-law fit for each region (cf.~Eq.~(\ref{eq:pl})).
$\sqrt{\Delta\mathrm{TS}}$ denotes the square root of the difference in test statistic ($\mathrm{TS}=-2\ln(\mathcal{L})$) between the best-fit power-law model and the null hypothesis (corresponding to $\phi_0=0$).
}
\end{table*}
\begin{figure}
\centering
\includegraphics{box_spectra_comparison}
\caption{
Comparison of signal region spectra.
All spectra were divided (at a reference energy of \SI{1}{\TeV}) by a reference power-law spectrum with spectral index $\Gamma=2.41$, corresponding to the weighted average over all signal regions.
Upper limits are at 95\% confidence level, and only two upper limits after the last significant (i.e., $>2\sigma$) flux point are shown.
}
\label{fig:box_spectra_comparison}
\end{figure}
\begin{figure}
\centering
\includegraphics{box_index_vs_distance_centre}
\caption{
Spectral index $\Gamma$ for the signal regions a--p, as a function of angular separation between the centre point of each region and the centre point of the total emission (white circle in Fig.~\ref{fig:flux_maps}).
The red line and band denote the weighted mean and uncertainty across all regions, $\langle \Gamma\rangle = 2.41\pm 0.02$, respectively.
}
\label{fig:box_index_vs_distance}
\end{figure}
The spectra in the signal regions are remarkably similar to each other, both in terms of the fitted power-law indices (which show no dependence on the separation of the region from the centre, see Fig.~\ref{fig:box_index_vs_distance}) and the shape of the spectra as indicated by the extracted flux points (see Fig.~\ref{fig:box_spectra_comparison}).
The only significant deviation is observed in region `d', where the fitted power-law index deviates from the average of all other regions by $\sim 4\sigma$.
We have not been able to identify an issue in the analysis procedure that could explain this deviation, and conclude that it either indicates that the spectrum of the emission in this region is indeed harder, or that it is an unexpectedly large statistical fluctuation.
The similarity of the spectra supports our previous finding of a lack of energy-dependent morphology of HESS~J1646$-$458, and motivates the extraction of a combined energy spectrum.
We computed combined flux points for HESS~J1646$-$458 by adding up the flux points from all 16~square regions, where we have used the best-fit flux for each point and region also in cases where an upper limit was previously derived.
The result is shown in Fig.~\ref{fig:combined_spectrum}.
Besides statistical uncertainties, the displayed error bars contain a systematic uncertainty arising from the applied background model.
The systematic uncertainty is of the same magnitude as the statistical one at the lowest energies considered here, and quickly becomes negligible at higher energies.
The combined flux points clearly show that the $\gamma$-ray emission of HESS~J1646$-$458 extends to at least several tens of TeV.
A comparison with the spectra of the individual signal regions (blue lines in Fig.~\ref{fig:combined_spectrum}) shows that the total spectrum is not dominated by any single signal region across the entire energy range.
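Schematically, the combination proceeds as in the following sketch, with hypothetical numbers; treating the background-model systematic as fully correlated between regions (linear sum) is a conservative assumption of this sketch.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n_regions, n_bins = 16, 10
flux = rng.uniform(0.5, 1.5, (n_regions, n_bins)) * 1e-13  # hypothetical
stat = 0.15 * flux      # independent statistical uncertainties
syst = 0.05 * flux      # background-model systematic (dominant at low E)

combined = flux.sum(axis=0)     # best-fit fluxes, also where ULs were set
stat_tot = np.sqrt((stat ** 2).sum(axis=0))   # quadrature (independent)
syst_tot = syst.sum(axis=0)                   # linear (fully correlated)
err_tot = np.hypot(stat_tot, syst_tot)
\end{verbatim}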
\begin{figure}
\centering
\includegraphics{combined_spectrum_with_residuals}
\caption{
Combined energy spectrum.
The black data points correspond to the entire emission of HESS~J1646$-$458; the solid orange, dashed green, and dashed-dotted red lines show the result of fitting a power law with exponential cut-off (ECPL), a hadronic ($pp$) model, and a leptonic (IC) model, respectively, to these points.
Fitted power-law models for each region a--p are displayed by solid blue lines (with darker shades indicating closer proximity to Westerlund~1), while the dashed blue line denotes their sum.
All power-law spectra are plotted up to \SI{100}{\TeV} for better visibility; however, the observed $\gamma$-ray excess is not significant up to this energy in any of the sub-regions.
The bottom panel shows the ratio to the ECPL model; note that the last flux point (with a ratio to the ECPL model of $\sim$3.7$\pm$1.5) is beyond the vertical axis scale.
}
\label{fig:combined_spectrum}
\end{figure}
In order to characterise the combined spectrum, we fitted several spectral models to the derived flux points.
A simple PL model (cf.\ Eq.~\ref{eq:pl}) does not lead to a satisfactory fit ($p=0.06\%$).
The solid orange line in Fig.~\ref{fig:combined_spectrum} shows the result for a power law with exponential cut-off (ECPL),
\begin{equation}\label{eq:ecpl}
\frac{\mathrm{d}N}{\mathrm{d}E} = \phi_0\cdot \left(\frac{E}{E_0}\right)^{-\Gamma}\cdot \exp\left(-\frac{E}{E_c}\right)\quad,
\end{equation}
for which we obtained (with $E_0=\SI{1}{\TeV}$ kept fixed in the fit) $\phi_0=\SI{1.00(03)E-11}{\per\TeV\per\square\cm\per\second}$, $\Gamma=2.30\pm 0.04$, and $E_c=(44^{+17}_{-11})\,\mathrm{TeV}$.
This corresponds to a total $\gamma$-ray luminosity of HESS~J1646$-$458 between the threshold energy of~\SI{0.37}{\TeV} and \SI{100}{\TeV} of $L_\gamma\sim\num{9e34}\,(d/\SI{3.9}{\kpc})^2\,\si{\erg\per\second}$, where $d$ is the assumed distance to the source.
We note that, while the ECPL model yields an acceptable fit ($p=6.3\%$), the high-energy flux points do not provide a clear indication of an exponential cut-off to the spectrum.
Thus, while the energy spectrum of HESS~J1646$-$458 clearly extends to several tens of TeV, its maximum energy cannot be determined reliably with the analysis presented here, and may conceivably lie beyond \SI{100}{\TeV}.
However, the last flux point should not be regarded as a clear indication of an upward trend in the spectrum, as it may be afflicted by unknown systematic uncertainties in the high-energy response of the system, which is difficult to calibrate.
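As a consistency check, the quoted luminosity can be reproduced from the fitted ECPL parameters by numerically integrating the energy flux:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

TEV_TO_ERG = 1.602          # 1 TeV in erg
KPC_TO_CM = 3.086e21

phi0, gamma, e_c = 1.00e-11, 2.30, 44.0   # best-fit ECPL (E0 = 1 TeV)
d_cm = 3.9 * KPC_TO_CM

def ecpl(E):                # dN/dE in TeV^-1 cm^-2 s^-1, E in TeV
    return phi0 * E ** (-gamma) * np.exp(-E / e_c)

# Energy flux between 0.37 and 100 TeV, then L = 4 pi d^2 * flux
energy_flux, _ = quad(lambda E: E * ecpl(E), 0.37, 100.0)
L_gamma = 4.0 * np.pi * d_cm ** 2 * energy_flux * TEV_TO_ERG
print(f"L_gamma ~ {L_gamma:.1e} erg/s")   # ~9e34 erg/s
\end{verbatim}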
Assuming the $\gamma$-ray emission to be generated in collisions of CR protons with ambient matter, we also fitted a primary proton spectral model (of the same form as defined in Eq.~\ref{eq:ecpl}) to the $\gamma$-ray flux points, employing the \textsc{Naima} software package \citep{Zabalza2015} for this task.
For the parameters of the primary proton spectrum we obtained a normalisation (at $E_0=\SI{1}{\TeV}$) of $\phi_0^p=\num{1.28(17)E+38}\,(d/\SI{3.9}{\kpc})^2\,(n/\SI{1}{\per\cubic\cm})^{-1}\,\si{\per\eV}$ (with $n$ the assumed density of the ambient matter), a spectral index of $\Gamma_p=2.33\pm 0.06$, and a cut-off energy of $E_c^p=(400^{+250}_{-130})\,\mathrm{TeV}$; the dashed green line in Fig.~\ref{fig:combined_spectrum} displays the corresponding $\gamma$-ray spectrum.
The 95\% confidence level lower limit on the proton spectrum cut-off energy is $E_c^p > \SI{214}{\TeV}$.
Extrapolating the proton spectrum down to an energy of \SI{1}{\GeV}, the required energy in primary protons is $W_p\sim \num{6e51}\,(d/\SI{3.9}{\kpc})^2\,(n/\SI{1}{\per\cubic\cm})^{-1}\,\si{\erg}$.
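To illustrate, such a hadronic model can be set up with \textsc{Naima} along the following lines, here simply evaluated at the best-fit parameters quoted above; the actual fit optimised these parameters against the flux points, and the exact fitting configuration is not reproduced in this sketch.
\begin{verbatim}
import numpy as np
import astropy.units as u
from naima.models import ExponentialCutoffPowerLaw, PionDecay

# Best-fit primary proton spectrum from the text
protons = ExponentialCutoffPowerLaw(amplitude=1.28e38 / u.eV,
                                    e_0=1 * u.TeV, alpha=2.33,
                                    e_cutoff=400 * u.TeV)
pp = PionDecay(protons, nh=1.0 * u.cm ** -3)

# Predicted gamma-ray SED at the nominal distance of 3.9 kpc
energies = np.geomspace(0.3, 100, 50) * u.TeV
sed = pp.sed(energies, distance=3.9 * u.kpc)

# Total energy in protons above 1 GeV; ~6e51 erg, cf. W_p in the text
print(pp.compute_Wp(Epmin=1 * u.GeV).to("erg"))
\end{verbatim}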
In a similar manner, now adopting with \textsc{Naima} a leptonic framework in which the $\gamma$-ray emission is produced through IC scattering of CR electrons, we also fitted a primary electron spectrum to the HESS~J1646$-$458 flux points (again assuming an ECPL model).
Besides the cosmic microwave background, we used as target radiation fields an infrared field ($T_\mathrm{IR}=\SI{26}{K}$, $\rho_\mathrm{IR}=\SI{0.74}{\eV\per\cubic\cm}$) and an optical field ($T_\mathrm{opt}=\SI{2400}{K}$, $\rho_\mathrm{opt}=\SI{1.4}{\eV\per\cubic\cm}$) as predicted by the \citet{Popescu2017} model, as well as an additional field representing diffuse star light from the stellar cluster.
For the latter, we assumed $T_\mathrm{SC}=\SI{40000}{K}$ \citep{Crowther2006} and derived\footnote{
To derive the energy density, we used $\rho_\mathrm{SC}=L\,/\,(4\pi r^2 c)$, where $L$ is the total cluster luminosity and $r$ the distance from the cluster.
For a wind efficiency $\eta=\dot{M}v_\mathrm{w}\,/\,(L/c)\simeq 1$ \citep{Vink2012}, $L=2\,(v_\mathrm{w}/c)^{-1}L_\mathrm{w} \sim \SI{2e41}{\erg\per\second}$ for the parameter values assumed in this work.
Adding up the luminosities of OB supergiant and Yellow Hypergiant stars listed in \citet{Clark2005} yields $L\sim \SI{6e40}{\erg\per\second}$, which is consistent with our estimate.
A slight reduction of the IC emission due to the cluster radiation field is expected because of its anisotropy, which we have not taken into account.
However, since the contribution of this component is suppressed due to the Klein-Nishina effect, the modification of the total IC emission is negligible.
}
an energy density of $\rho_\mathrm{SC}\sim \SI{30}{\eV\per\cubic\cm}$ at a distance of \SI{34}{\pc} from the cluster, which approximately corresponds to the distance at which the radial $\gamma$-ray excess profile peaks (cf.\ Fig.~\ref{fig:radial_profiles}).
In this case, the best-fit parameters are $\phi_0^e=\num{4.7(5)E+35}\,(d/\SI{3.9}{\kpc})^2\,\si{\per\eV}$, $\Gamma_e=2.97\pm 0.07$, and $E_c^e=(180^{+200}_{-70})\,\mathrm{TeV}$, with a 95\% confidence level lower limit on $E_c^e$ of \SI{87}{\TeV} -- the resulting $\gamma$-ray spectrum is shown by the red, dashed-dotted line in Fig.~\ref{fig:combined_spectrum}.
Electrons down to energies of $\sim$\SI{0.4}{\TeV} contribute to the $\gamma$-ray emission detected with H.E.S.S.
Assuming that the spectrum of primary electrons extends down to \SI{0.1}{\TeV}, we obtained a total required energy in primary electrons of $W_e\sim \num{7.2e48}\,(d/\SI{3.9}{\kpc})^2\,\si{\erg}$.
Dividing the required energy by the (energy-dependent) cooling time due to IC scattering \citep{Hinton2009} off the different target fields then yields an estimate for the minimum total required power of $L_e > \num{4.1e35}\,(d/\SI{3.9}{\kpc})^2\,\si{\erg\per\second}$.
The required power is larger if cooling due to synchrotron emission plays a sizeable role, for example, in a magnetic field with $B=\SI{10}{\micro\gauss}$, $L_e > \num{1.7e36}\,(d/\SI{3.9}{\kpc})^2\,\si{\erg\per\second}$.
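The radiation energy density entering this calculation (see the footnote) follows from simple geometry; as a numerical check with the values adopted above:
\begin{verbatim}
import numpy as np

ERG_PER_EV = 1.602e-12
PC_TO_CM = 3.086e18
C_LIGHT = 2.998e10            # cm/s

L_cluster = 2e41              # erg/s, from the wind-efficiency argument
r = 34.0 * PC_TO_CM           # radius of the gamma-ray excess peak

rho_sc = L_cluster / (4.0 * np.pi * r ** 2 * C_LIGHT)
print(f"rho_SC ~ {rho_sc / ERG_PER_EV:.0f} eV/cm3")   # ~30 eV/cm3
\end{verbatim}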
The chosen analysis method and tools in principle also enable a three-dimensional modelling of the $\gamma$-ray emission detected with H.E.S.S., that is, a decomposition of the emission into several components with separate energy spectra and morphological models.
However, owing to the complex structure of the emission, a rather complicated model with multiple distinct components is required to obtain a satisfactory description, and its interpretation depends strongly on the chosen models for each of the components.
Considering furthermore the similarity of energy spectra extracted in the signal regions, we do not present such a modelling here.
\subsection{Analysis of radio line data}
\label{sec:radio}
Attributing the $\gamma$-ray emission to interactions of accelerated cosmic-ray nuclei requires sufficiently dense target material.
The presence of such target material can be estimated using radio observations, in particular of the \SI{21}{\cm} H~I emission line -- indicating neutral, atomic hydrogen -- and of the CO~($J$=1--0) transition, which is a tracer for dense clouds of molecular hydrogen, H$_2$ \citep{Heyer2015}.
We have therefore used the H~I Southern Galactic Plane Survey \citep[SGPS;][]{McClureGriffiths2005} and the Mopra Southern Galactic Plane CO Survey \citep{Braiding2018} to investigate the amount of hydrogen gas in the vicinity of Westerlund~1.
The analysis was repeated using the CO data from the (lower-resolution) survey by \citet{Dame2001} instead of the Mopra CO data, leading to consistent results.
The radio data analysis is hampered by the uncertainty on the distance to Westerlund~1.
We show in Fig.~\ref{fig:hi_co_map_dist_3d9} the H~I and CO maps for an interval in velocity with respect to the local standard of rest of $v=[-60,-50]\,\si{\km\per\second}$, which corresponds to the distance of \SI{3.9}{\kpc} that we adopted for this paper \citep{Kothes2007}.
As some measurements indicate smaller distances, maps for two correspondingly chosen alternative velocity intervals, $v=[-48.5, -38.5]\,\si{\km\per\second}$ ($d\approx \SI{3.3}{\kpc}$) and $v=[-37, -27]\,\si{\km\per\second}$ ($d\approx \SI{2.7}{\kpc}$), are shown in Appendix~\ref{sec:appendix_radio}.
\begin{figure}
\centering
\includegraphics{hi_co_map_mopra_-60_-50}
\caption{
Maps showing H I emission \citep{McClureGriffiths2005} (\emph{top panel}) and $^{12}$CO emission \citep{Braiding2018} (\emph{bottom panel}) in the Westerlund~1 region.
Both maps display the emission for an interval in velocity with respect to the local standard of rest of $v=[-60,-50]\,\si{\km\per\second}$, which approximately corresponds to a distance of \SI{3.9}{\kpc}.
The position of Westerlund~1 is marked by the white star symbol and the grey, dashed line shows the Galactic plane.
The transparent, white circle marker denotes the centre point with respect to which the radial CR density profiles in Fig.~\ref{fig:cr_dens_profile_shifted} have been computed; the dashed white line displays a circle with radius $1^\circ$ -- up to which the profiles have been computed -- around this point.
The red lines are contour lines of the flux map shown in Fig.~\ref{fig:flux_map}.
Regions A, B, and C are the same as in Fig.~\ref{fig:flux_maps}.
}
\label{fig:hi_co_map_dist_3d9}
\end{figure}
We find that the gas indicated by the radio observations at a distance of $\sim$\SI{3.9}{\kpc} shows no spatial correlation with the $\gamma$-ray emission that we observe with H.E.S.S.
In fact, both the H~I and CO maps indicate a particularly low atomic and molecular gas density in the circular regions~B and~C, which are bright in $\gamma$ rays.
Using a conversion factor from H~I intensity to column density of $X_\mathrm{HI}=\num{1.823e18}\,\mathrm{cm}^{-2}\,/\,(\si{\kelvin\km\per\second})$ \citep{Rohlfs2004}, we obtain for a circular region with radius $1.1^\circ$, centred on Westerlund~1, a total enclosed mass as indicated from H~I of $M_\mathrm{H~I,Wd1}=\num{1.3e5}\,M_\odot$.
This translates into an average density of $n_\mathrm{H~I,Wd1}=\SI{3.2}{\per\cubic\cm}$.\footnote{
\citet{HESS_Westerlund1_2012} derived, for a similar region, a much smaller value of $n_\mathrm{H~I}=\SI{0.24}{\per\cubic\cm}$.
We attribute this to the usage of an erroneous formula in that paper.
}
Similarly, from the CO data we get\footnote{
Due to the more indirect nature of the estimate, the CO-to-H$_2$ conversion factor, $X_\mathrm{CO}$, is less well constrained than $X_\mathrm{H~I}$.
Here we used $X_\mathrm{CO}=\num{1.5e20}\,\mathrm{cm}^{-2}\,/\,(\si{\kelvin\km\per\second})$, which \citet{FermiLAT2012} indicate as an appropriate value for the galactocentric radius of Westerlund~1, $R\approx \SI{4.7}{\kpc}$ for a distance of \SI{3.9}{\kpc} (see their Fig.~25).
This value is also within the range of $(1.4-2.6)\times 10^{20}\,\mathrm{cm}^{-2}\,/\,(\si{\kelvin\km\per\second})$ recommended by \citet{Bolatto2013}.
We have neglected the possible contribution of $^4$He, which could increase the mass estimate by $\sim$25\%.
}
$M_\mathrm{CO,Wd1}=\num{4.3e5}\,M_\odot$ and $n_\mathrm{CO,Wd1}=\SI{10.5}{\per\cubic\cm}$, where $n_\mathrm{CO}$ is the equivalent density for atomic hydrogen and can thus be directly compared to $n_\mathrm{H~I}$.
We stress, however, that in particular the molecular material indicated by the CO observations is not distributed homogeneously inside this region, but rather concentrated in smaller-scale clouds.
For instance, in the CO cloud located in the Northern part of the H.E.S.S.\ emission region (cf.\ Fig.~\ref{fig:hi_co_map_dist_3d9}), we find -- assuming a spherical distribution of the gas -- a density of $n_\mathrm{CO,cloud}\sim \SI{190}{\per\cubic\cm}$.
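The conversion from integrated line intensity to enclosed mass and mean density proceeds as in the following sketch, where the mean H~I intensity is a hypothetical value chosen to approximately reproduce the numbers quoted above, and the mean density assumes that the gas fills a sphere of the region's radius:
\begin{verbatim}
import numpy as np

PC_TO_CM = 3.086e18
M_H = 1.673e-24               # g
M_SUN = 1.989e33              # g

X_HI = 1.823e18               # cm^-2 / (K km/s)
d_pc, radius_deg = 3900.0, 1.1
R = np.deg2rad(radius_deg) * d_pc * PC_TO_CM    # region radius in cm

W_HI = 500.0                  # K km/s, hypothetical mean intensity
N_HI = X_HI * W_HI            # mean column density, cm^-2
M_HI = N_HI * M_H * np.pi * R ** 2 / M_SUN      # enclosed mass, M_sun

# Mean density, assuming the gas fills a sphere of radius R
n_HI = (M_HI * M_SUN / M_H) / (4.0 / 3.0 * np.pi * R ** 3)
print(f"M ~ {M_HI:.1e} Msun, n ~ {n_HI:.1f} cm-3")  # ~1.3e5, ~3 cm-3
\end{verbatim}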
Following \citet{Aharonian2019}, we also derived a radial profile of the CR density, assuming that the $\gamma$-ray emission is produced in interactions of hadronic CRs with the gas indicated by the H~I and CO data.
Because we used the measured $\gamma$-ray flux above our threshold energy of \SI{0.37}{\TeV} to compute the profiles, they indicate the density of CRs above an energy about ten times higher, that is, above $\sim$\SI{4}{\TeV}.
Density profiles for all three considered velocity intervals are shown in Fig.~\ref{fig:cr_dens_profiles}, where we have used for the profiles in panel \subref{fig:cr_dens_profile_shifted} the same shifted centre point as for the radial excess profiles (cf.\ Fig.~\ref{fig:radial_profiles}), and for the profiles in panel \subref{fig:cr_dens_profile_cluster} -- for comparison with \citet{Aharonian2019}~-- the position of Westerlund~1 as centre point.
Turning first to Fig.~\ref{fig:cr_dens_profile_shifted}, for the interval corresponding to the nominal distance of $d\approx \SI{3.9}{\kpc}$ (blue curve with circle markers), we observe a distinct peak at a radial distance from the centre of $\sim$$0.4^\circ$, corresponding to the peak observed in the excess profiles (cf.\ Fig.~\ref{fig:radial_profiles}) at about the same distance.
When choosing the cluster position as centre point, the peak is smeared out, leading to a plateau at small radii.
The profile is then compatible with that derived in \citet{Aharonian2019}, showing a gradual decline of the density with increasing distance from the cluster.
We stress, however, that the $\gamma$-ray emission is not radially symmetric around Westerlund~1, rendering the interpretation of radial profiles computed with respect to this position difficult.
Our profiles for the alternative velocity intervals (i.e., distances; orange and green curves with triangle markers in Fig.~\ref{fig:cr_dens_profiles}) exhibit approximately the same shape as the profile for the nominal distance, but with a less distinct peak, and lower overall density.
This corresponds to the higher density of hydrogen gas observed for these intervals (cf.\ Figs.~\ref{fig:hi_co_map_dist_3d3},~\ref{fig:hi_co_map_dist_2d7}).
Finally, we note that the shapes of the obtained radial density profiles should be interpreted with care, owing to systematic uncertainties associated with the determination of the gas distributions.
For instance, it is conceivable that a considerable fraction of the CO molecules has been photodissociated by the intense ultraviolet radiation from Westerlund~1, implying that the CO line emission would no longer be an accurate tracer of molecular hydrogen gas \citep[e.g.,][]{Wolfire2010}.
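The logic underlying these profiles can be illustrated with an order-of-magnitude estimate: since $L_\gamma \sim W_\mathrm{CR}/t_{pp\to\gamma}$ with $t_{pp\to\gamma}\propto 1/n$, and the gas mass satisfies $n V \propto M$, the mean CR energy density $w_\mathrm{CR}=W_\mathrm{CR}/V$ depends only on $L_\gamma$ and $M$. The cooling-time coefficient below is a commonly adopted approximation, and the global numbers merely illustrate the binned calculation performed for Fig.~\ref{fig:cr_dens_profiles}.
\begin{verbatim}
import numpy as np

ERG_PER_EV = 1.602e-12
M_SUN, M_H = 1.989e33, 1.673e-24     # g
YR = 3.156e7                         # s

t0 = 1.5e8 * YR        # pp -> pi0 -> gamma cooling time at n = 1 cm^-3
L_gamma = 9e34         # erg/s, total luminosity of HESS J1646-458
M_gas = 5.6e5 * M_SUN  # H I + H2 mass within 1.1 deg (this section)

# w_CR = W_CR / V with W_CR = L_gamma * t0 / n and n * V = M_gas / m_H,
# so both n and V drop out of the ratio:
w_cr = L_gamma * t0 * M_H / M_gas
print(f"w_CR ~ {w_cr / ERG_PER_EV:.1f} eV/cm3")     # ~0.4 eV/cm3
\end{verbatim}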
\begin{figure}
\centering
\subfigure[]{
\includegraphics{cr_dens_profile_shifted_mopra_final}
\label{fig:cr_dens_profile_shifted}
}
\subfigure[]{
\includegraphics{cr_dens_profile_mopra_final}
\label{fig:cr_dens_profile_cluster}
}
\caption{
Cosmic-ray density profiles above \SI{4}{\TeV} for different velocity intervals.
The profiles in the upper plot have been computed with respect to the centre point shown by the circle marker in Fig.~\ref{fig:hi_co_map_dist_3d9} (the same as for the excess profiles displayed in Fig.~\ref{fig:radial_profiles}), whereas the profiles in the lower plot have been computed with respect to the position of Westerlund~1.
The error bars denote the (statistical) uncertainty of the measured $\gamma$-ray flux only, and in particular do not reflect systematic uncertainties related to the gas distribution.
The CR density at Earth, indicated for comparison, was computed from the all-particle CR spectrum as modelled in \citet{Breuhaus2022} (model `A').
The data points taken from \citet{Aharonian2019}, shown in the lower plot, have been derived using a velocity interval of $v=[-60,-50]\,\si{\km\per\second}$, and should thus be compared to the blue curve with circle markers.
The red, solid line, from the same paper, shows a profile corresponding to a $1/r$-distribution of the CRs, where $r$ is the distance to the cluster.
Both show the density above a slightly higher threshold energy of \SI{10}{\TeV}.
}
\label{fig:cr_dens_profiles}
\end{figure}
\section{Discussion}
\label{sec:discussion}
In this section, we will discuss in turn the various possible sites for the acceleration of CRs that could be responsible for the observed $\gamma$-ray emission.
Evidently, we can include in this discussion only those objects that have been identified through observations at other wavelengths.
While we consider the association of the $\gamma$-ray emission with one of these objects most likely, so far undiscovered objects (e.g. pulsars or SNRs) could also contribute to it.
\subsection{Objects not connected to Westerlund~1}
\subsubsection{4U~1642--45}
The LMXB 4U~1642$-$45 lies close to the peak in $\gamma$-ray emission observed in region~A (cf.\ Fig.~\ref{fig:flux_maps}).
However, despite this positional coincidence, we consider an association of 4U~1642$-$45 with HESS~J1646$-$458 -- or even just the emission observed in region~A -- highly unlikely: LMXBs are not known emitters of $\gamma$ rays in the TeV energy range; in fact, \citet{Kantzas2022} have recently shown that even the upcoming Cherenkov Telescope Array (CTA) will be able to detect LMXB outbursts only under favourable circumstances.
Moreover, the observed emission is spatially extended, which would be unexpected if it were produced inside a transient jet.
Lastly, \citet{HESS_Westerlund1_2012} reported no temporal variations of the observed $\gamma$-ray flux, again disfavouring an association of HESS~J1646$-$458 with 4U~1642$-$45.
While the deviation in spectral index observed in signal region~`d' (cf.\ Sect.~\ref{sec:spectra}), which partly coincides with the circular region~A, is intriguing, the lack of a plausible association leads us to conclude that it is likely a statistical fluctuation, or due to an unidentified, hard-spectrum source contributing to the emission in region~d (but not to the entire region~A, as the signal regions~c, g, and h, which also overlap with region~A, do not show deviating spectral indices).
\subsubsection{PSR~J1648--4611 \& PSR~J1650--4601}
The two energetic pulsars PSR~J1648$-$4611 (with spin-down power $\dot{E}=\SI{2.1e35}{\erg\per\s}$) and PSR~J1650$-$4601 ($\dot{E}=\SI{2.9e35}{\erg\per\s}$) \citep{Manchester2005} are located within region~C (cf.\ Fig.~\ref{fig:flux_maps}), where we observe enhanced $\gamma$-ray emission.
Because pulsar wind nebulae (PWNe) represent a large fraction of known Galactic $\gamma$-ray sources \citep[e.g.,][]{HESS_HGPS_2018}, it seems likely that high-energy electrons provided by one (or both) of the two pulsars contribute to the $\gamma$-ray emission observed in region~C.
This remains true even though neither of the pulsar locations fully coincides with the peak of the emission, as it is not uncommon that $\gamma$-ray PWNe are observed offset from their respective pulsars.
The detection of diffuse, hard X-ray emission from the vicinity of PSR~J1648$-$4611 by \citet{Sakai2013} adds further support for this scenario.
However, while the 4FGL-DR2 catalogue \citep{FermiLAT2020,FermiLAT2020a} contains sources associated with PSR~J1648$-$4611 and PSR~J1650$-$4601, their $\gamma$-ray emission is detected as pulsed and exhibits a very steep spectrum above \SI{10}{\GeV}, implying that these sources are directly connected to the pulsars rather than their putative PWNe.
On the other hand, viewing the entire emission of HESS~J1646$-$458 as resulting from one of the two pulsars would imply an unusually large PWN.
For instance, at the distance of PSR~J1648$-$4611 of \SI{5.7}{\kpc} \citep{Kramer2003}, the emission region spans $\sim$\SI{200}{\pc}, which is twice the size of the largest known $\gamma$-ray PWN, HESS~J1825$-$137 \citep{HESS_1825_2019}.
For such an extended nebula, one would expect a considerable loss of energy due to synchrotron cooling of the electrons when they propagate towards the edges of the nebula, leading to softer $\gamma$-ray spectra in these regions (or, equivalently, energy-dependent morphology).
Because we observe very similar energy spectra across the entire source, and no energy-dependent morphology, we conclude that a PWN powered by PSR~J1648$-$4611 or PSR~J1650$-$4601 cannot explain the entire $\gamma$-ray emission.
\subsection{Distinct acceleration sites within Westerlund~1}
Having established that other known objects in the region cannot explain the bulk of $\gamma$-ray emission from HESS~J1646$-$458, we assume next that the CRs producing the emission are accelerated at one or multiple sites within the stellar cluster Westerlund~1.
Various scenarios can be considered, and will be discussed in this section.
\subsubsection{CXOU J164710.2--455216}
At first sight, the magnetar CXOU~J164710.2$-$455216 -- the only known stellar remnant inside the cluster -- may be suspected to power a $\gamma$-ray PWN that could potentially be associated to HESS~J1646$-$458.
However, its measured period and period derivative \citep[$P=\SI{10.6}{\second},\,\dot{P}=\SI{9.2e-13}{\second\per\second}$;][]{Israel2007} imply a rotational spin-down power of only $\dot{E}=\SI{3e31}{\erg\per\second}$, which is orders of magnitude lower than for any of the pulsars associated with PWN detected at TeV energies with H.E.S.S.\ \citep{HESS_PWNpop_2018}.
Even though the measured X-ray luminosity of $L_X\approx \SI{3e33}{\erg\per\second}$ \citep{Muno2006a} exceeds the rotational spin-down power, implying another source of energy (presumably connected to the high magnetic field of the magnetar), it still appears very unlikely that CXOU~J164710.2$-$455216 would be able to sustain the observed $\gamma$-ray emission.
Additionally, as is the case for PSR~J1648$-$4611 and PSR~J1650$-$4601, the spatial extent of HESS~J1646$-$458 and the lack of energy-dependent morphology disfavour an association of the $\gamma$-ray emission with CXOU~J164710.2$-$455216.
\subsubsection{Supernova remnants}
The existence of CXOU~J164710.2$-$455216 implies that at least one supernova (SN) explosion took place within Westerlund~1.
However, given the abundance of massive stars in Westerlund~1, and its age of several Myr, it seems certain that many more SNe have occurred already.
Indeed, \citet{Muno2006} have argued that the number of SNe during the last $\sim$\SI{1}{\mega\year} could be as high as 80--150, attributing the lack of identified SNRs to a cavity in the interstellar medium (ISM), excavated by the stellar cluster.
Without detailed knowledge about the progenitor mass of CXOU~J164710.2$-$455216, or the SN rate in Westerlund~1, it is not straightforward to estimate the energy output that SNRs in the stellar cluster may provide.
Nevertheless, assuming a canonical value for the kinetic energy released per SN of \SI{e51}{\erg} -- possibly more in the case of CXOU~J164710.2$-$455216, if its progenitor was indeed as heavy as $40\,M_\odot$ \citep{Muno2006a} -- it seems plausible that one or several SNRs in Westerlund~1 could be responsible for the $\gamma$-ray emission in terms of the required energetics.
For instance, in a hadronic scenario, assuming a density of the ambient matter of $5\,m_\mathrm{H}\,\si{\per\cubic\cm}$ gives a required energy in protons of \SI{1.2e51}{\erg} (cf.\ Sect.~\ref{sec:spectra}).
It would take about 10~SNRs to reach this energy if the conversion efficiency from kinetic energy into CRs is $\sim$10\%.
\subsubsection{SN-wind and wind-wind interactions}
\label{sec:wind_wind}
Another, related, possibility for the acceleration of CRs inside Westerlund~1 are interactions of SN shocks with winds of massive stars in the cluster, or interactions between the winds of several stars.
Indeed, for SN shocks interacting with fast stellar winds, the efficiency for CR acceleration can be as high as 30\% \citep{Bykov2020}, and CR acceleration up to $\geq\SI{40}{\PeV}$ has been conjectured \citep{Bykov2015}.
Colliding winds of massive stars are also known to produce non-thermal emission \citep{Eichler1993,Reimer2006}, and several searches for $\gamma$-ray emission from colliding wind binaries have been performed in the past \citep[e.g.][]{Werner2013,Pshirkov2016,MartiDevesa2021}.
A well-known example is provided by the colliding wind binary $\eta$ Car, which has indeed been detected up to $\sim$\SI{100}{\GeV} with the \emph{Fermi}-LAT \citep{AGILE2009,FermiLAT2010}, and even up to $\sim$\SI{400}{\GeV} with H.E.S.S.\ \citep{HESS_EtaCar_2020}.
These considerations strengthen the conclusion that $\gamma$-ray emission at the level observed with H.E.S.S.\ can in principle be produced by CRs accelerated at shock fronts within Westerlund~1.
However, for the scenario of a central CR source, it is important to also take into account the propagation of the CRs into the region where we observe $\gamma$-ray emission with H.E.S.S.
In this case, the non-observation of a peak in the $\gamma$-ray emission at the position of Westerlund~1 and the large extent of HESS~J1646$-$458 essentially rule out the leptonic scenario.
In a hadronic scenario, the complex morphology of HESS~J1646$-$458 may in principle be attributable to the distribution of target material -- although a clear correlation of the $\gamma$-ray emission with gas clouds as indicated by H~I and CO observations is lacking, and the inferred CR density does not peak at the centre for any of the considered distances to the source (cf.\ Fig.~\ref{fig:cr_dens_profile_shifted}), as would be expected for a steady CR injection there (see \citeauthor{Aharonian2019} \citeyear{Aharonian2019}, but also \citeauthor{Bhadra2022} \citeyear{Bhadra2022}).
Possible options to alleviate this problem could be the presence of ``dark'' gas that is not traced by H~I or CO \citep[e.g.,][]{Wolfire2010}, or the assumption that the CRs were provided by a recent impulsive event (e.g., the SN explosion of the CXOU~J164710.2$-$455216 progenitor star and/or other recent SNe), rather than being injected quasi-continuously over the lifetime of the cluster.
Another relevant constraint comes from the maximum energy of the observed $\gamma$ rays, which implies the presence of CRs with energies of several hundred~TeV throughout the emission region.
If the acceleration sites are located exclusively within the compact cluster, particles must pass through the wind zone.
For reasonable limits on the diffusion properties, adiabatic losses in the radial wind are unavoidable.
CRs that were injected at the cluster and have propagated to a distance $R$ within the wind region would therefore need to be produced with a maximum energy larger by a factor $(R\,/\,R_\mathrm{Wd1})^{2/3}$ \citep{Longair1992}, where $R_\mathrm{Wd1}\sim \SI{1}{\pc}$ is the radius of Westerlund~1.
For the nominal distance of \SI{3.9}{\kpc}, we observe a peak in the $\gamma$-ray emission at $R\sim \SI{34}{\pc}$, implying the need for multi-PeV CRs within Westerlund~1.
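The magnitude of this effect can be illustrated with a short sketch, taking \SI{200}{\TeV} -- the lower limit on the proton cut-off energy derived in Sect.~\ref{sec:spectra} -- as a representative CR energy in the emission region:
\begin{verbatim}
# Adiabatic-loss factor for CRs traversing the wind zone (sketch).
R_Wd1 = 1.0    # pc, cluster radius (from the text)
R = 34.0       # pc, radius of the gamma-ray peak at d = 3.9 kpc
factor = (R / R_Wd1)**(2.0 / 3.0)
print(f"loss factor: {factor:.1f}")                       # -> 10.5
print(f"injection energy: {factor * 200 / 1e3:.1f} PeV")  # -> ~2 PeV
\end{verbatim}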
The H.E.S.S.\ observations thus provide a valuable constraint for theoretical models of particle acceleration processes within a compact stellar cluster \citep{Bykov2020}.
\subsection{Acceleration by Westerlund~1 as a whole}
Finally, there is the possibility that effects due to the entire stellar cluster provide the means for efficient CR acceleration.
In particular, we consider in the following two scenarios related to the collective cluster wind, in which the CR acceleration predominantly takes place outside of the actual stellar cluster itself.
\subsubsection{Turbulence in a superbubble}
Due to the powerful collective cluster wind, as well as many SN explosions, massive young stellar clusters are thought to create ``superbubbles'', that is, large cavities in the ISM, extending much beyond the boundaries of the cluster itself.
In the shocked medium inside the bubble, strong magnetohydrodynamic (MHD) turbulence provides the conditions for particle acceleration via the second-order Fermi mechanism \citep[e.g.,][]{Bykov2014,Vieu2022}.
With a maximum proton energy of \SI{200}{\TeV} and a source extent of $\sim$\SI{100}{\pc}, the Hillas criterion \citep{Hillas1984} implies, for an average turbulent fluid velocity of $u=\SI{100}{\km\per\second}$, a minimum magnetic field strength of $\sim$\SI{13}{\micro\gauss} in this scenario.
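This estimate can be verified with a short sketch; note that we assume here that the quoted extent of $\sim$\SI{100}{\pc} refers to the diameter, so that the relevant accelerator size is $R\approx\SI{50}{\pc}$:
\begin{verbatim}
# Hillas-type estimate of the minimum magnetic field (sketch, cgs units).
e_cgs, c, pc, TeV = 4.803e-10, 2.998e10, 3.086e18, 1.602  # esu, cm/s, cm, erg
E_max = 200 * TeV      # erg, maximum proton energy
u = 1.0e7              # cm/s, turbulent fluid velocity (100 km/s)
R = 50 * pc            # cm, assumed accelerator size
B_min = E_max / (e_cgs * (u / c) * R)           # Gauss
print(f"B_min = {B_min * 1e6:.0f} microGauss")  # -> 13
\end{verbatim}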
While the acceleration time scales are much longer than in the case of acceleration at shocks inside the cluster, the process could -- under favourable circumstances -- generate CRs with up to PeV energies during several Myr, the typical lifetime of young clusters \citep{Bykov2020}.
For an adiabatically expanding wind, and a density of the ambient ISM $\rho_0$, the radius of the superbubble is given by $R_\mathrm{SB}=0.76\,(L_\mathrm{w}\,/\,\rho_0)^{1/5} \tau^{3/5}$ \citep{Weaver1977,Koo1992}, or $R_\mathrm{SB}\sim 256\,(\rho_0\,/\,1\,m_\mathrm{H}\,\si{\per\cubic\cm})^{-1/5}\si{\pc}$ for our assumptions (cf.\ Table~\ref{tab:wd1_pars}).
Adopting, for example, $\rho_0=5\,m_\mathrm{H}\,\si{\per\cubic\cm}$, we obtain $R_\mathrm{SB}\sim \SI{185}{\pc}$ -- a value more than two orders of magnitude larger than the half-mass radius of the cluster.
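These numbers follow directly from the formula above, as the following sketch shows (with $L_\mathrm{w}\sim\SI{e39}{\erg\per\second}$ and $\tau\sim\SI{4}{\mega\year}$, the values adopted in Sect.~\ref{sec:termination_shock}):
\begin{verbatim}
# Superbubble radius for an adiabatically expanding wind (sketch, cgs).
m_H, pc, Myr = 1.673e-24, 3.086e18, 3.156e13
L_w, tau = 1e39, 4 * Myr          # erg/s, s
for n0 in (1.0, 5.0):             # ambient density in m_H per cm^3
    R_SB = 0.76 * (L_w / (n0 * m_H))**0.2 * tau**0.6 / pc
    print(f"n0 = {n0}: R_SB = {R_SB:.0f} pc")   # -> 256 pc, 185 pc
\end{verbatim}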
A superbubble with a size of this order around Westerlund~1 has not yet been revealed at other wavelengths.
Because its dimensions also exceed the extent of the $\gamma$-ray emission detected with H.E.S.S., an association of this emission with the entire superbubble seems disfavoured.
However, the assumption of a homogeneous medium is an oversimplification, and bubbles in a structured and possibly clumpy medium may have significantly different cooling rates and dynamics \citep[e.g.,][]{Chu2008}.
Moreover, more detailed models of superbubble evolution indicate that the simple estimate for their radius given above often over-predicts their true size \citep{Yadav2017,Vieu2022}.
If this is also the case for Westerlund~1, smaller-scale structures as for example the bubble-like feature `B3' in H~I data reported by \citet{Kothes2007} could be associated with the cluster.
Nevertheless, even in this case a connection between the superbubble and the $\gamma$-ray emission is not obvious (though conceivable, considering also that the $\gamma$-ray emission need not arise uniformly from the superbubble volume).
\subsubsection{Termination shock of collective cluster wind}
\label{sec:termination_shock}
Another possible site for the acceleration of CRs due to Westerlund~1, but outside the stellar cluster itself, is the termination shock of the collective cluster wind.
The termination shock forms where the pressure of the outgoing wind equals that of the ISM.
Recently, \citet{Morlino2021} have proposed that termination shocks of collective stellar cluster winds may be efficient sites of particle acceleration, and demonstrated that CRs with PeV energies could be produced in powerful clusters like Westerlund~1.
Considering again as above an adiabatic expansion, the radius of the termination shock is given by $R_\mathrm{TS}=0.92\,(L_\mathrm{w}\,/\,\rho_0)^{3/10} v_\mathrm{w}^{-1/2} \tau^{2/5}$ \citep{Koo1992}, or $R_\mathrm{TS}\sim 51\,(\rho_0\,/\,1\,m_\mathrm{H}\,\si{\per\cubic\cm})^{-3/10}\,\si{\pc}$ with our adopted parameter values.
Inserting $\rho_0=5\,m_\mathrm{H}\,\si{\per\cubic\cm}$ yields a radius of $R_\mathrm{TS}\sim \SI{32}{\pc}$.
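Since the prefactor of \SI{51}{\pc} already absorbs $L_\mathrm{w}$, $v_\mathrm{w}$, and $\tau$ from Table~\ref{tab:wd1_pars}, the density scaling alone suffices for a quick check:
\begin{verbatim}
# Termination-shock radius vs. ambient density (sketch).
for n0 in (1.0, 5.0):                 # ambient density in m_H per cm^3
    R_TS = 51.0 * n0**(-0.3)          # pc, normalisation from the text
    print(f"n0 = {n0}: R_TS = {R_TS:.1f} pc")
    # -> 51.0 pc, 31.5 pc (~32 pc, given rounding of the prefactor)
\end{verbatim}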
Notably, this value coincides well with the radial distance of $\sim$$\SI{34}{\pc}$ at which we observe a peak in the $\gamma$-ray excess profiles (cf.\ Sect.~\ref{sec:maps_profiles} and Fig.~\ref{fig:radial_profiles}).
The scenario of particle acceleration at the cluster wind termination shock therefore provides a natural explanation for the shell-like structure exhibited by the $\gamma$-ray emission detected with H.E.S.S., and can furthermore reproduce its radial extent under reasonable assumptions.
The apparent asymmetry of the shell with respect to the position of Westerlund~1 could be caused, for example, by a gradient in the density of the surrounding ISM, or by a SN that occurred within the cluster towards the direction of the asymmetry.
Adopting the hadronic scenario, we find that the termination shock model is also viable in terms of the required energetics: with $L_\mathrm{w}\sim\SI{e39}{\erg\per\second}$ and $\tau\sim\SI{4}{\mega\year}$, the total available energy is $\sim$\SI{1.3e53}{\erg}, which in principle suffices to explain the required energy in CR protons, $W_p\sim \num{6e51}\,(n/\SI{1}{\per\cubic\cm})^{-1}\,\si{\erg}$ -- although, since the cooling time for protons exceeds the cluster lifetime, the acceleration process would need to be rather efficient, or the target density high.
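The total available energy quoted here is simply the wind luminosity integrated over the cluster age, as a one-line sketch confirms:
\begin{verbatim}
# Total wind energy budget over the cluster lifetime (sketch).
L_w, tau = 1e39, 4 * 3.156e13          # erg/s, s (4 Myr)
print(f"E_tot = {L_w * tau:.1e} erg")  # -> 1.3e+53
\end{verbatim}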
Furthermore, the energetics argument presupposes that the CRs can be confined within the $\gamma$-ray emission region over a significant fraction of the full cluster lifetime, which is not straightforward.
For instance, adopting Bohm diffusion, the diffusion length for protons is $L\sim \sqrt{6Dt}\sim 81\,(E/\SI{100}{\TeV})^{1/2}\,(B/\SI{10}{\micro\gauss})^{-1/2}\,(t/\SI{1}{\mega\year})^{1/2}\,\si{\pc}$ \citep{Chandrasekhar1943}, where we have neglected projection effects.
Hence, even for slow diffusion, to confine protons with energy \SI{200}{\TeV} -- our lower limit for the cut-off energy of the primary proton spectrum in a hadronic scenario -- within a region of radius $\sim$\SI{50}{\pc} for only \SI{1}{\mega\year} already requires a rather large magnetic field strength of $B\sim \SI{50}{\micro\gauss}$.
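The following sketch reproduces both the quoted diffusion length and the required field strength (Bohm diffusion, projection effects neglected):
\begin{verbatim}
# Bohm-diffusion confinement estimate (sketch, cgs units).
import math
e_cgs, c, pc, Myr, TeV = 4.803e-10, 2.998e10, 3.086e18, 3.156e13, 1.602
def diff_len_pc(E_TeV, B_G, t_s):
    r_L = E_TeV * TeV / (e_cgs * B_G)    # Larmor radius, cm
    D = r_L * c / 3.0                    # Bohm diffusion coefficient
    return math.sqrt(6.0 * D * t_s) / pc
print(f"{diff_len_pc(100, 1e-5, Myr):.0f} pc")   # -> 81 pc
# L scales as (E/B)^(1/2); to confine 200 TeV within 50 pc for 1 Myr:
B_req = 1e-5 * (diff_len_pc(200, 1e-5, Myr) / 50.0)**2
print(f"B_req = {B_req * 1e6:.0f} microGauss")   # -> ~50
\end{verbatim}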
Nevertheless, taking into account, for example, an additional smearing due to the transformation from protons to $\gamma$ rays, the observed $\gamma$-ray morphology can be reproduced with a not-too-extreme assumption on the magnetic field, provided that the protons do not diffuse too fast.
However, as already mentioned in Sect.~\ref{sec:wind_wind}, the hadronic scenario is challenged further by the absence of a correlation of the $\gamma$-ray emission with the gas observed in the region.
Interestingly, the finding that the expected location of the termination shock coincides with the shell-like structure of the measured $\gamma$-ray emission also renders possible an interpretation within the leptonic scenario.
This is because the geometry of the acceleration site naturally explains the relatively large extent of the emission region and its rather complex structure, which are otherwise not easy to accommodate in a leptonic scenario.
The scenario is also feasible energetically; even in the presence of a \SI{10}{\micro\gauss} magnetic field, the required power of $L_e\sim \SI{1.7e36}{\erg\per\second}$ (cf.\ Sect.~\ref{sec:spectra}) is easily provided by the cluster wind if the acceleration efficiency for electrons is of order 0.1\%.
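A quick sketch makes the efficiency requirement explicit:
\begin{verbatim}
# Required electron acceleration efficiency (sketch).
L_e, L_w = 1.7e36, 1e39                # erg/s
print(f"efficiency: {L_e / L_w:.2%}")  # -> 0.17%, i.e. of order 0.1%
\end{verbatim}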
However, as the accelerated electrons emit synchrotron radiation, the scenario is subject to constraints from observations at the corresponding wavelengths.
For example, from measurements by the \textit{Planck} satellite at a frequency of \SI{30}{\giga\hertz}\footnote{We have used the full-sky frequency map at \SI{30}{\giga\hertz} available through the Planck Legacy Archive, \url{http://pla.esac.esa.int/pla}.}, at which the radiation is dominated by synchrotron emission \citep{Planck_LFI_2018}, we infer an average intensity within $1^\circ$ around Westerlund~1 of \SI{0.55}{\mega\Jy\per\steradian}.
For a magnetic field of \SI{10}{\micro\gauss}, electrons with energies around \SI{0.01}{\TeV} emit synchrotron radiation at \SI{30}{\giga\hertz}.
Assuming that the electron spectrum extends down to these energies, the predicted intensity at \SI{30}{\giga\hertz} is $\sim \SI{0.3}{\mega\Jy\per\steradian}$.
Considering furthermore that only part of the emission detected with \textit{Planck} originates from the vicinity of Westerlund~1, we conclude that the \textit{Planck} measurements imply either a magnetic field strength smaller than \SI{10}{\micro\gauss} or a low-energy cut-off of the primary electron spectrum.
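The electron energy quoted above follows from the characteristic synchrotron frequency, $\nu_\mathrm{c}=(3/2)\,\gamma^2\nu_\mathrm{g}$, as the following order-of-magnitude sketch shows (pitch-angle factors neglected):
\begin{verbatim}
# Electron energy for synchrotron emission at 30 GHz in 10 microGauss.
import math
nu, B = 30e9, 1e-5                     # Hz, Gauss
nu_g = 2.8e6 * B                       # non-relativistic gyrofrequency, Hz
gamma = math.sqrt(nu / (1.5 * nu_g))   # from nu_c = (3/2) gamma^2 nu_g
print(f"E_e = {gamma * 0.511e-6:.3f} TeV")  # -> 0.014, i.e. ~0.01 TeV
\end{verbatim}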
Finally, it is worth noting in this regard that for another superbubble detected at TeV energies, 30~Dor~C in the Large Magellanic Cloud, a synchrotron shell has been detected using X-ray measurements, and a leptonic scenario was found to be favoured to explain the TeV $\gamma$-ray emission \citep[][and references therein]{Kavanagh2019}.
\subsection{Escape of particles from the emission region}
So far we have assumed that particles accelerated in or around Westerlund~1 are confined over the lifetime of the cluster.
If a significant fraction of accelerated particles can escape, there are several important consequences:
\begin{enumerate}[(i)]
\item $\gamma$-ray emission would be expected outside the system, in the case of hadronic CRs in particular in molecular clouds;
\item in the (most likely) case of energy-dependent escape, the inferred spectrum of particles within the system would be softened with respect to the injection spectrum by the energy-dependent escape probability;
\item the total energy requirements would be increased.
\end{enumerate}
There is little evidence for (i) in the maps shown in Fig.~\ref{fig:flux_maps} -- with the possible exception of the emission region east of region `C', which, however, does not coincide with a molecular cloud at the nominal distance of Westerlund~1 (cf.\ Fig.~\ref{fig:hi_co_map_dist_3d9}).
The inferred injection indices for electrons and protons of $\Gamma_e\sim$3.0\footnote{
The electron spectral index derived in the \textsc{naima} fit corresponds to the present-time population of electrons, whose energy spectrum is steepened with respect to the injection spectrum due to cooling.
}
and $\Gamma_p\sim$2.3 seem broadly consistent with acceleration theory \citep[e.g.,][]{Bell2013}, so there seems to be no indication for (ii), although a modest variation of the spectrum is tolerable within the precision of the H.E.S.S.\ measurement.
Finally, as the total energy requirement is already a challenge in most acceleration scenarios under the assumption of confinement, there also seems to be little room for (iii).
Thus, while not entirely inconceivable, we at least find no indications for particles escaping from the emission region.
In the absence of evidence for particle escape, we need to consider whether confinement over the cluster lifetime is reasonable or not.
As already discussed in Sect.~\ref{sec:termination_shock}, this is not straightforward in the case of CR protons, which in a disordered magnetic field would diffuse much too quickly.
A possible way to circumvent this problem would be a magnetic field topology in which field lines are preferentially in the plane orthogonal to the radial direction, which can substantially inhibit the radial diffusion.
\section{Conclusion}
\label{sec:conclusion}
We have presented a detailed analysis of HESS~J1646$-$458, a very-high-energy $\gamma$-ray source positionally coincident with the young massive stellar cluster Westerlund~1.
HESS~J1646$-$458 is very extended ($\sim$$2^\circ$ in diameter), and exhibits a complex, shell-like structure, with Westerlund~1 close to its centre.
We found no indications for energy-dependent morphology.
The energy spectrum of HESS~J1646$-$458 extends to at least several tens of TeV, with a spectral index of $\sim$$-2.3$, and a gradual steepening above $\sim$\SI{10}{\TeV}.
Energy spectra extracted within 16 signal regions across the source region are very similar to each other, reinforcing the observation that the morphology of HESS~J1646$-$458 does not vary with $\gamma$-ray energy.
In a hadronic scenario with CR protons producing the $\gamma$ rays, the observed $\gamma$-ray spectrum implies proton energies in excess of several hundred TeV.
However, our analysis of the H~I and CO emission around Westerlund~1 indicates no clear correlation between hydrogen gas clouds and the $\gamma$-ray emission, as would be expected to some degree within such a scenario.
Nevertheless, in particular due to uncertainties related to the distribution of target gas, a hadronic scenario remains viable in principle.
On the other hand, the lack of significant energy-dependent morphology of the $\gamma$-ray emission represents a challenge for an interpretation within an IC-dominated, leptonic scenario.
Investigating the possible physical counterparts to HESS~J1646$-$458, we found that -- while the energetic pulsars PSR~J1648$-$4611 and PSR~J1650$-$4601 may be contributing to the emission in their immediate surroundings -- no other known object besides Westerlund~1 can be made responsible for the bulk of the $\gamma$-ray emission.
Particle acceleration due to the cluster may occur at various possible sites: at wind-wind or SN-wind interaction shocks within the cluster, at turbulences in the superbubble excavated by the collective cluster wind, or at the termination shock of the cluster wind.
Models in which the acceleration takes place within the cluster generally need to overcome the problem of transporting the accelerated CRs into the larger region from which we observe the $\gamma$-ray emission without too severe energy losses, and must explain the fact that the $\gamma$-ray emission does not peak towards the cluster position; the latter argument in particular rules out a leptonic scenario with continuous injection for this case.
Attributing the CR acceleration to the superbubble as a whole seems disfavoured because HESS~J1646$-$458 -- although very extended -- is still significantly smaller than the expected size of the superbubble, which has, furthermore, so far eluded detection at other wavelengths.
Therefore, we deem most attractive the scenario of CR acceleration at the cluster wind termination shock, because it provides a natural explanation for the shell-like morphology of HESS~J1646$-$458, and the wind is powerful enough to sustain the $\gamma$-ray emission.
Based on the available data, however, we are not able to firmly identify the acceleration mechanism at work.
Our results further support massive stellar clusters as CR accelerators, and motivate deeper investigations of Westerlund~1 and other representatives of this class of objects in the future.
In particular, we encourage a deep and broad coverage in X-ray observations of the region around Westerlund~1, which may enable the identification of the cluster wind termination shock.
On the other hand, a more accurate measurement of the $\gamma$-ray emission in this region will be provided by the upcoming Cherenkov Telescope Array \citep{CTA2018}, which is designed to be an order of magnitude more sensitive than the H.E.S.S.\ experiment.
Exploiting the data from this and other observatories will be crucial in understanding the contribution of massive stellar clusters to the sea of CRs in the Milky Way.
\begin{acknowledgements}
The support of the Namibian authorities and of the University of Namibia in facilitating the construction and operation of H.E.S.S.\ is gratefully acknowledged, as is the support by
the German Ministry for Education and Research (BMBF),
the Max Planck Society,
the German Research Foundation (DFG),
the Helmholtz Association,
the Alexander von Humboldt Foundation,
the French Ministry of Higher Education, Research and Innovation,
the Centre National de la Recherche Scientifique (CNRS/IN2P3 and CNRS/INSU),
the Commissariat \`{a} l'\'{E}nergie atomique et aux \'{E}nergies alternatives (CEA),
the U.K.\ Science and Technology Facilities Council (STFC),
the Knut and Alice Wallenberg Foundation,
the Polish Ministry of Education and Science, agreement no.~2021/WK/06,
the South African Department of Science and Technology and National Research Foundation,
the University of Namibia,
the National Commission on Research, Science \& Technology of Namibia (NCRST),
the Austrian Federal Ministry of Education, Science and Research and the Austrian Science Fund (FWF),
the Australian Research Council (ARC),
the Japan Society for the Promotion of Science,
the University of Amsterdam, and
the Science Committee of Armenia grant 21AG-1C085.
We appreciate the excellent work of the technical support staff in Berlin, Zeuthen, Heidelberg, Palaiseau, Paris, Saclay, T\"{u}bingen and in Namibia in the construction and operation of the equipment.
This work benefited from services provided by the H.E.S.S.\ Virtual Organisation, supported by the national resource providers of the EGI Federation.
This research made use of the \textsc{Gammapy}\footnote{\url{https://gammapy.org}} \citep{Deil2017,Deil2020}, \textsc{Astropy}\footnote{\url{https://www.astropy.org}} \citep{Robitaille2013,PriceWhelan2018}, \textsc{Matplotlib}\footnote{\url{https://matplotlib.org}} \citep{Hunter2007}, and \textsc{Naima}\footnote{\url{https://naima.readthedocs.io}} \citep{Zabalza2015} software packages.
\end{acknowledgements}
\bibliographystyle{aa}